SUMMARIES

- This paper studies the problem of estimating a vector-valued regression function by neural networks. The authors provide a bound on the in-sample prediction error for a neural-network estimator under two types of regularization: one that induces connection sparsity and another that induces node sparsity. The in-sample error is bounded by the in-sample error of an estimator computed in the noiseless case.
- This paper studies mean-squared-error bounds for neural networks with small ℓ₁-norm. The use of an ℓ₁-norm constraint is analogous to the use of the lasso in sparse linear regression. The authors give a "mean-squared-error" bound because they analyze only a fixed-design setting (where the goal is only to analyze the effect of noise) instead of the random-design setting, which requires an additional analysis of generalization to fresh data.
ABSTRACT

Neural networks are becoming increasingly popular in applications, but a comprehensive mathematical understanding of their potential and limitations is still missing. In this paper, we study the prediction accuracy of neural networks from a statistical point of view. In particular, we establish statistical guarantees for deep learning with different types of sparsity-inducing regularization. Our bounds feature a mild dependence on network widths and depths and, therefore, support the current trend toward wide and deep networks. The tools that we use in our derivations are uncommon in deep learning and, hence, might be of additional interest.
REFERENCES

M. Anthony and P. Bartlett. Neural network learning: theoretical foundations. Cambridge University Press, 1999.
T. Ash. Dynamic node creation in backpropagation networks. Connect. Sci., 1989.
B. Carl. Inequalities of Bernstein-Jackson-type and the degree of compactness of operators in Banach spaces. Ann. Inst. Fourier, 1985.
S. Changpinyo, M. Sandler, and A. Zhmoginov. The power of sparsity in convolutional neural networks. 2017.
A. Dalalyan, M. Hebiri, and J. Lederer. On the prediction performance of the lasso. 2017.
J. Feng and N. Simon. Sparse-input neural networks for high-dimensional nonparametric regression and classification. 2017.
N. Golowich, A. Rakhlin, and O. Shamir. Size-independent sample complexity of neural networks. 2017.
S. Han, H. Mao, and W. Dally. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proc. ICLR, 2016.
T. Hastie, R. Tibshirani, and M. Wainwright. Statistical learning with sparsity: the lasso and generalizations. CRC Press, 2015.
J. Kim, V. Calhoun, E. Shim, and J.-H. Lee. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: evidence from whole-brain resting-state functional connectivity patterns of schizophrenia. 2016.
V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Trans. Inform. Theory, 2001.
V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Ann. Statist., 2002.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
A. Ledent, Y. Lei, and M. Kloft. Norm-based generalisation bounds for multi-class convolutional neural networks. 2019.
J. Lederer. Bounds for Rademacher processes via chaining. 2010.
J. Lederer. Risk bounds for robust deep learning. 2020.
J. Lederer. No spurious local minima: on the optimization landscapes of wide and deep neural networks. 2020.
J. Lederer and S. van de Geer. New concentration inequalities for suprema of empirical processes. 2014.
J. Lederer and M. Vogt. Estimating the lasso's effective noise. 2020.
H. Lee, C. Ekanadham, and A. Ng. Sparse deep belief net model for visual area V2. In Adv. Neural Inf. Process. Syst., 2008.
W. Li and J. Lederer. Tuning parameter calibration for ℓ₁-regularized logistic regression. J. Statist. Plann. Inference, 2019.
S. Liang and R. Srikant. Why deep neural networks for function approximation? 2016.
B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Pensky. Sparse convolutional neural networks. In IEEE Int. Conf. Comput. Vis. Pattern Recognit., 2015.
C. McDiarmid. On the method of bounded differences. Surv. Comb., 1989.
V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. In Int. Conf. Mach. Learn., 2010.
B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In Conf. Learn. Theory, 2015.
L. Nie, M. Wang, L. Zhang, S. Yan, B. Zhang, and T.-S. Chua. Disease inference from health-related questions via sparse deep learning. IEEE Trans. Knowl. Data Eng., 2015.
S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. 2017.
J. Schmidhuber. Deep learning in neural networks: an overview. Neural Networks, 2015.
J. Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Statist., 2020.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Int. Conf. Learn. Representations, 2015.
M. Taheri, F. Xie, and J. Lederer. Statistical guarantees for regularized neural networks. 2020.
M. Telgarsky. Benefits of depth in neural networks. In Proc. Mach. Learn. Res., 2016.
R. Tibshirani. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B. Stat. Methodol., 1996.
S. van de Geer. Empirical processes in M-estimation. Cambridge University Press, 2000.
A. van der Vaart and J. Wellner. Weak convergence and empirical processes. Springer, 1996.
W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Adv. Neural Inf. Process. Syst., 2016.
H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. 2017.
D. Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks, 2017.
R. Zhuang and J. Lederer. Maximum regularized likelihood estimators: a general prediction theory and applications. Stat, 7(1):e186, 2018.
1 INTRODUCTION

Sparsity reduces network complexities and, consequently, lowers the demands on memory and computation, reduces overfitting, and improves interpretability (Changpinyo et al., 2017; Han et al., 2016; Kim et al., 2016; Liu et al., 2015; Wen et al., 2016). Three common notions of sparsity are connection sparsity, which means that there is only a small number of nonzero connections between nodes; node sparsity, which means that there is only a small number of active nodes (Alvarez & Salzmann, 2016; Changpinyo et al., 2017; Feng & Simon, 2017; Kim et al., 2016; Lee et al., 2008; Liu et al., 2015; Nie et al., 2015; Scardapane et al., 2017; Wen et al., 2016); and layer sparsity, which means that there is only a small number of active layers (Hebiri & Lederer, 2020). Approaches to achieving sparsity include augmenting small networks (Ash, 1989; Bello, 1992), pruning large networks (Simonyan & Zisserman, 2015; Han et al., 2016), constrained estimation (Ledent et al., 2019; Neyshabur et al., 2015; Schmidt-Hieber, 2020), and statistical regularization (Taheri et al., 2020).

The many empirical observations of the benefits of sparsity have sparked interest in mathematical support in the form of statistical theories.
But such theories are still scarce and, in any case, have severe limitations. For example, statistical guarantees for deep learning with connection-sparse regularization have been established in Taheri et al. (2020), but they do not cover node sparsity, which, in view of the removal of entire nodes, has become especially popular. Moreover, their estimator involves an additional parameter, their theory is limited to a single output node, and their results have a suboptimal dependence on the input vectors. Statistical guarantees for constrained estimation over connection- and node-sparse networks follow from combining results in Neyshabur et al. (2015) and Bartlett & Mendelson (2002). But for computational and practical reasons, regularized estimation is typically preferred over constrained estimation in deep learning as well as in machine learning at large (Hastie et al., 2015). Moreover, their theory is limited to a single output node and ReLU activation, scales exponentially in the number of layers, and requires bounded loss functions. Statistical prediction guarantees for constrained estimation over connection-sparse networks have been derived in Schmidt-Hieber (2020), but their theory is limited to a single output node and ReLU activation and assumes bounded weights. In short, the existing statistical theory for deep learning with connection and node sparsity is still deficient.

The goal of this paper is to provide an improved theory for sparse deep learning. We focus on regression-type settings with layered, feedforward neural networks. The estimators under consideration consist of a standard least-squares estimator with additional regularizers that induce connection or node sparsity. We then derive our guarantees by using techniques from high-dimensional statistics (Dalalyan et al., 2017) and empirical process theory (van de Geer, 2000). In the case of
In the case of\nsubgaussian noise, we find the rates√ l ( log[mnp] )3 n and √ mlp(log[mnp] )3 n\nfor the connection-sparse and node-sparse estimators, respectively, where l is the number of hidden layers, m the number of output nodes, n the number of samples, p the total number of parameters, and p the maximal width of the network. The rates suggest that sparsity-inducing approaches can provide accurate prediction even in very wide (with connection sparsity) and very deep (with either type of sparsity) networks while, at the same time, ensuring low network complexities. These findings underpin the current trend toward sparse but wide and especially deep networks from a statistical perspective.\nOutline of the paper Section 2 recapitulates the notions of connection and node sparsity and introduces the corresponding deep learning framework and estimators. Section 3 confirms the empirically-observed accuracies of connection- and node-sparse estimation in theory. Section 4 summarizes the key features and limitations of our work. The Appendix contains all proofs." }, { "heading": "2 CONNECTION- AND NODE-SPARSE DEEP LEARNING", "text": "We consider data (y1,x1), . . . , (yn,xn) ∈ Rm × Rd that are related via\nyi = g∗[xi] + ui for i ∈ {1, . . . , n} (1)\nfor an unknown data-generating function g∗ : Rd → Rm and unknown, random noise u1, . . . ,un ∈ Rm. We allow all aspects, namely yi, g∗, xi, and ui, to be unbounded. Our goal is to model the data-generating function with a feedforward neural network of the form\ngΘ[x] ··= Θlf l [ Θl−1 · · ·f1[Θ0x] ] for x ∈ Rd (2)\nindexed by the parameter space M ··= {Θ = (Θl, . . . ,Θ0) : Θj ∈ Rp j+1×pj}. The functions f j : Rpj → Rpj are called the activation functions, and p0 ··= d and pl+1 ··= m are called the input and output dimensions, respectively. 
The depth of the network is l, the maximal width is p := max_{j∈{0,...,l−1}} p^{j+1}, and the total number of parameters is P := ∑_{j=0}^{l} p^{j+1} p^j.

In practice, the total number of parameters often rivals or exceeds the number of samples: P ≈ n or P ≫ n. We then speak of high dimensionality. A common technique for avoiding overfitting in high-dimensional settings is regularization that induces additional structures, such as sparsity. Sparsity has the interesting side effect of reducing the networks' complexities, which can facilitate interpretations and reduce demands on energy and memory. Our first sparse estimator is

  Θ̂_con ∈ argmin_{Θ∈M₁} { ∑_{i=1}^{n} ||y_i − g_Θ[x_i]||₂² + r_con |||Θ^l|||₁ }   (3)

for a tuning parameter r_con ∈ [0,∞), a nonempty set of parameters

  M₁ ⊂ {Θ ∈ M : max_{j∈{0,...,l−1}} |||Θ^j|||₁ ≤ 1},

and the ℓ₁-norm

  |||Θ^j|||₁ := ∑_{i=1}^{p^{j+1}} ∑_{k=1}^{p^j} |(Θ^j)_{ik}|   for j ∈ {0, ..., l}, Θ^j ∈ ℝ^{p^{j+1}×p^j}.

This estimator is an analog of the lasso estimator in linear regression (Tibshirani, 1996). It induces sparsity on the level of connections: the larger the tuning parameter r_con, the fewer connections among the nodes.

Deep learning with ℓ₁-regularization has become common in theory and practice (Kim et al., 2016; Taheri et al., 2020). Our estimator (3) specifies one way to formulate this type of regularization. The estimator is indeed a regularized estimator (rather than a constrained estimator), because the complexity is regulated entirely through the tuning parameter r_con in the objective function (rather than through a tuning parameter in the set over which the objective function is optimized). But ℓ₁-regularization could also be formulated slightly differently. For example, one could consider the estimators

  Θ̄_con ∈ argmin_{Θ∈M} { ∑_{i=1}^{n} ||y_i − g_Θ[x_i]||₂² + r_con ∏_{j=0}^{l} |||Θ^j|||₁ }   (4)

or

  Θ̃_con ∈ argmin_{Θ∈M} { ∑_{i=1}^{n} ||y_i − g_Θ[x_i]||₂² + r_con ∑_{j=0}^{l} |||Θ^j|||₁ }
  (5)

The differences among the estimators (3)–(5) are small: for example, our theory can be adjusted to (4) with almost no changes in the derivations. The differences among the estimators mainly concern the normalizations of the parameters; we illustrate this in the following proposition.

Proposition 1 (Scaling of Norms). Assume that the all-zeros parameter (0_{p^{l+1}×p^l}, ..., 0_{p^1×p^0}) ∈ M₁ is neither a solution of (3) nor of (5), that r_con > 0, and that the activation functions are nonnegative homogeneous: f^j[ab] = a f^j[b] for all j ∈ {1, ..., l}, a ∈ [0,∞), and b ∈ ℝ^{p^j}. Then, |||(Θ̂_con)^0|||₁ = ··· = |||(Θ̂_con)^{l−1}|||₁ = 1 (concerns the inner layers) for all solutions of (3), while |||(Θ̃_con)^0|||₁ = ··· = |||(Θ̃_con)^l|||₁ (concerns all layers) for at least one solution of (5).

Another way to formulate ℓ₁-regularization was proposed in Taheri et al. (2020): they reparametrize the networks through a scale parameter and a constrained version of M and then focus the regularization on the scale parameter only. Our above-stated estimator (3) is more elegant in that it avoids the reparametrization and the additional parameter.

The factor |||Θ^l|||₁ in the regularization term of (3) measures the complexity of the network over the set M₁, and the factor r_con regulates the complexity of the resulting estimator. This provides a convenient lever for data-adaptive complexity regularization through well-established calibration schemes for the tuning parameter, such as cross-validation. This practical aspect is an advantage of regularized formulations like ours as compared to constrained estimation over sets with a predefined complexity.

The constraints in the set M₁ of the estimator (3) can also retain the expressiveness of the full parameterization that corresponds to the set M: for example, assuming again nonnegative-homogeneous activation, one can check that for every Γ ∈ M, there is a Γ′ ∈ {Θ ∈ M : max_{j∈{0,...,l−1}} |||Θ^j|||₁ ≤ 1} such that g_Γ = g_{Γ′}; cf. Taheri et al.
(2020, Proposition 1). In contrast, existing theories on neural networks often require the parameter space to be bounded, which limits the expressiveness of the networks.

Our regularization approach is, therefore, closer to practical setups than constrained approaches. The price is that, to develop prediction theories, we have to use different tools than those typically used in theoretical deep learning. For example, we cannot use established risk bounds such as Bartlett & Mendelson (2002, Theorem 8) (because Rademacher complexities over classes of unbounded functions are unbounded) or Lederer (2020a, Theorem 1) (because our loss function is not Lipschitz continuous) or established concentration bounds such as McDiarmid's inequality in McDiarmid (1989, Lemma 3.3) (because that would require a bounded loss). We instead invoke ideas from high-dimensional statistics, prove Lipschitz properties for neural networks, and use empirical process theory that is based on chaining (see the Appendix).

Our second estimator is

  Θ̂_node ∈ argmin_{Θ∈M_{2,1}} { ∑_{i=1}^{n} ||y_i − g_Θ[x_i]||₂² + r_node |||Θ^l|||_{2,1} }   (6)

for a tuning parameter r_node ∈ [0,∞), a nonempty set of parameters

  M_{2,1} ⊂ {Θ ∈ M : max_{j∈{0,...,l−1}} |||Θ^j|||_{2,1} ≤ 1},

and the ℓ₂/ℓ₁-norm

  |||Θ^j|||_{2,1} := ∑_{k=1}^{p^j} √( ∑_{i=1}^{p^{j+1}} |(Θ^j)_{ik}|² )   for j ∈ {0, ..., l}, Θ^j ∈ ℝ^{p^{j+1}×p^j}.

This estimator is an analog of the group-lasso estimator in linear regression (Bakin, 1999).
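The two regularizers can be made concrete in a few lines of code; the example matrix below is hypothetical and serves only to show how the group structure of the ℓ₂/ℓ₁-norm reflects inactive nodes:

```python
import numpy as np

def l1_norm(theta):
    # |||Theta^j|||_1: sum of the absolute values of all entries
    return np.abs(theta).sum()

def l21_norm(theta):
    # |||Theta^j|||_{2,1}: sum over columns k of the Euclidean norm of the
    # k-th column, so all weights attached to one node form a group
    return np.linalg.norm(theta, axis=0).sum()

theta = np.array([[3.0, 0.0],
                  [4.0, 0.0]])
print(l1_norm(theta))   # 7.0
print(l21_norm(theta))  # 5.0: the second column (node) is entirely inactive
```

Shrinking the ℓ₂/ℓ₁-norm pushes whole columns to zero at once, which is why the group-lasso penalty deactivates entire nodes rather than individual connections.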
Again, to avoid ambiguities in the regularization, our formulation is slightly different from the standard formulations in the literature, but the fact that group-lasso regularizers lead to node-sparse networks has been discussed extensively before (Alvarez & Salzmann, 2016; Liu et al., 2015; Scardapane et al., 2017): the larger the tuning parameter r_node, the fewer active nodes in the network.

The above-stated comments about the specific form of the connection-sparse estimator also apply to the node-sparse estimator.

An illustration of connection and node sparsity is given in Figure 1. Connection-sparse networks have only a small number of active connections between nodes (left panel of Figure 1); node-sparse networks have inactive nodes, that is, completely unconnected nodes (right panel of Figure 1). The two notions of sparsity are connected: for example, connection sparsity can render entire nodes inactive "by accident" (see the layer that follows the input layer in the left panel of the figure). In general, node sparsity is the weaker assumption, because it allows for highly connected nodes; this observation is reflected in the theoretical guarantees in the following section.

The optimal network architecture for given data (such as the optimal width) is hardly known beforehand in a data analysis. A main feature of sparsity-inducing regularization is, therefore, that it adjusts parts of the network architecture to the data. In other words, sparsity-inducing regularization is a data-driven approach to adapting the complexity of the network.

While versions of the estimators (3) and (6) are popular in deep learning, statistical analyses, especially of node-sparse deep learning, are scarce. Such a statistical analysis is, therefore, the goal of the following section.

3 STATISTICAL PREDICTION GUARANTEES

We now develop statistical guarantees for the sparse estimators described above.
The guarantees are formulated in terms of the squared average (in-sample) prediction error

  err[Θ] := (1/n) ∑_{i=1}^{n} ||g*[x_i] − g_Θ[x_i]||₂²   for Θ ∈ M,

which is a measure of how well the network g_Θ fits the unknown function g* (which need not be a neural network) on the data at hand, and in terms of the prediction risk (or generalization error) for a new sample (y, x) that has the same distribution as the original data

  risk[Θ] := E||y − g_Θ[x]||₂²   for Θ ∈ M,

which measures how well the network g_Θ can predict a new sample. We first study the prediction error, because it is agnostic to the distribution of the input data; in the end, we then translate the bounds for the prediction error into bounds for the generalization error.

We first observe that the networks in (2) can be somewhat "linearized": for every parameter Θ ∈ M₁, there is a parameter

  Θ̄ ∈ M̄₁ := { Θ̄ = (Θ^{l−1}, ..., Θ^0) : Θ^j ∈ ℝ^{p^{j+1}×p^j}, max_{j∈{0,...,l−1}} |||Θ^j|||₁ ≤ 1 }

such that for every x ∈ ℝ^d

  g_Θ[x] = Θ^l ḡ_Θ̄[x]   with   ḡ_Θ̄[x] := f^l[ Θ^{l−1} ··· f^1[Θ^0 x] ] ∈ ℝ^{p^l}.   (7)

This additional notation allows us to disentangle the outermost layer (which is regularized directly) from the other layers (which are regularized indirectly). More generally speaking, the additional notation makes a connection to linear regression, where the above holds trivially with ḡ_Θ̄[x] = x.

We also define

  M̄_{2,1} := { Θ̄ = (Θ^{l−1}, ..., Θ^0) : Θ^j ∈ ℝ^{p^{j+1}×p^j}, max_{j∈{0,...,l−1}} |||Θ^j|||_{2,1} ≤ 1 }

accordingly.

In high-dimensional linear regression, the quantity central to prediction guarantees is the effective noise (Lederer & Vogt, 2020). In our notation (with l = 0 and m = 1 to describe linear regression), the effective noise is 2||∑_{i=1}^{n} u_i x_i||_∞.
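In this linear-regression special case, the effective noise is directly computable; a minimal simulation sketch (the design and noise distribution are assumptions for illustration only):

```python
import numpy as np

# Effective noise in the linear-regression case (l = 0, m = 1):
# r* = 2 * || sum_i u_i x_i ||_infinity, here computed from simulated data.
rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.standard_normal((n, d))   # fixed design x_1, ..., x_n (rows)
u = 0.5 * rng.standard_normal(n)  # centered gaussian noise, one entry per sample

r_star = 2.0 * np.max(np.abs(X.T @ u))  # 2 * sup-norm of sum_i u_i x_i
```

In practice u is unobserved, so r_star is an oracle quantity; the simulation only illustrates its scale for a given design.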
The above linearization allows us to generalize the effective noise to our general deep-learning framework:

  r*_con := 2 sup_{Ψ̄∈M̄₁} ||| ∑_{i=1}^{n} u_i (ḡ_Ψ̄[x_i])^⊤ |||_∞ ,
  r*_node := 2√m sup_{Ψ̄∈M̄_{2,1}} ||| ∑_{i=1}^{n} u_i (ḡ_Ψ̄[x_i])^⊤ |||_∞ ,   (8)

where |||A|||_∞ := max_{(i,j)∈{1,...,m}×{1,...,p^l}} |A_{ij}| for A ∈ ℝ^{m×p^l}. The effective noises, as we will see below, are the optimal tuning parameters in our theories; at the same time, the effective noises depend on the noise random variables u_1, ..., u_n, which are unknown in practice. Accordingly, we call the quantities r*_con and r*_node the oracle tuning parameters.

We take a moment to compare the effective noises in (8) to Rademacher complexities (Koltchinskii, 2001; Koltchinskii & Panchenko, 2002). Rademacher complexities are the basis of a line of other statistical theories for deep learning (Bartlett & Mendelson, 2002; Golowich et al., 2017; Lederer, 2020a; Neyshabur et al., 2015). In our framework, the Rademacher complexities in the case m = 1 are (Lederer, 2020a, Definition 1)

  E_{x_1,...,x_n,k_1,...,k_n}[ sup_{Θ∈M₁} |(1/n) ∑_{i=1}^{n} k_i g_Θ[x_i]| ]   and   E_{x_1,...,x_n,k_1,...,k_n}[ sup_{Θ∈M_{2,1}} |(1/n) ∑_{i=1}^{n} k_i g_Θ[x_i]| ]

for i.i.d. Rademacher random variables k_1, ..., k_n. The effective noises might look like (rescaled) empirical versions of these quantities at first sight, but this is not the case. Two immediate differences are that the quantities in (8) apply to general m and circumvent the outermost layers of the networks. But more importantly, Rademacher complexities involve external i.i.d. Rademacher random variables that are not connected with the statistical model at hand, while the effective noises involve the noise variables, which are completely specified by the model and, therefore, can have any distribution (see our subgaussian example further below).
Hence, there are no general techniques to relate Rademacher complexities and effective noises.

Not only are the two concepts distinct, but they are also used in very different ways. For example, existing theories use Rademacher complexities to measure the size of the function class at hand, while we use effective noises to measure the maximal impact of the stochastic noise on the estimators. (Our proofs also require a measure of the size of the function class, but this measure is entropy; cf. Lemma 1.) In general, our proof techniques are very different from those in the context of Rademacher complexities.

We can now state a general prediction guarantee.

Theorem 1 (General Prediction Guarantees). If r_con ≥ r*_con, it holds that

  err[Θ̂_con] ≤ inf_{Θ∈M₁} { err[Θ] + (2 r_con / n) |||Θ^l|||₁ }.

Similarly, if r_node ≥ r*_node, it holds that

  err[Θ̂_node] ≤ inf_{Θ∈M_{2,1}} { err[Θ] + (2 r_node / n) |||Θ^l|||_{2,1} }.

Each bound contains an approximation error err[Θ] that captures how well the class of networks can approximate the true data-generating function g* and a statistical error, proportional to r_con/n and r_node/n, respectively, that captures how well the estimator can select within the class of networks at hand. In other words, Theorem 1 ensures that the estimators (3) and (6) predict, up to the statistical error described by r_con/n and r_node/n, respectively, as well as the best connection- and node-sparse network. This observation can be illustrated further:

Corollary 1 (Parametric Setting).
If additionally g* = g_{Θ*} for a Θ* ∈ M₁, it holds that

  err[Θ̂_con] ≤ (2 r_con / n) |||(Θ*)^l|||₁.

If instead g* = g_{Θ*} for a Θ* ∈ M_{2,1}, it holds that

  err[Θ̂_node] ≤ (2 r_node / n) |||(Θ*)^l|||_{2,1}.

Hence, if the underlying data-generating function is a sparse network itself, the prediction errors of the estimators are essentially bounded by the statistical errors r_con/n and r_node/n.

The above-stated results also identify the oracle tuning parameters r*_con and r*_node as optimal tuning parameters: they give the best prediction guarantees in Theorem 1. But since the oracle tuning parameters are unknown in practice, the guarantees implicitly presume a calibration scheme that satisfies r_con ≈ r*_con in practice. A natural candidate is cross-validation, but there are no guarantees that cross-validation provides such tuning parameters. This is a limitation that our theories share with all other theories in the field.

Rather than dealing with the practical calibration of the tuning parameters, we exemplify the oracle tuning parameters in a specific setting. This analysis will illustrate the rates of convergence that we can expect from Theorem 1, and it will allow us to compare our theories with other theories in the literature. Assume that the activation functions satisfy f^j[0_{p^j}] = 0_{p^j} and are 1-Lipschitz continuous with respect to the Euclidean norms on the functions' input and output spaces ℝ^{p^j}. A popular example is ReLU activation (Nair & Hinton, 2010), but the conditions are met by many other functions as well. Also, assume that the noise vectors u_1, ..., u_n are independent and centered and have uniformly subgaussian entries (van de Geer, 2000, Display (8.2) on Page 126). Keep the input vectors fixed and capture their normalizations by

  v_∞ := √( (1/n) ∑_{i=1}^{n} ||x_i||_∞² )   and   v_2 := √( (1/n) ∑_{i=1}^{n} ||x_i||₂² ).

Then, we obtain the following bounds for the effective noises.

Proposition 2 (Subgaussian Noise).
There is a constant c ∈ (0,∞) that depends only on the subgaussian parameters of the noise such that

  P{ r*_con ≤ c v_∞ √( n l (log[2mnP])³ ) } ≥ 1 − 1/n

and

  P{ r*_node ≤ c v_2 √( m n l p (log[2mnP])³ ) } ≥ 1 − 1/n.

Broadly speaking, this result combined with Theorem 1 illustrates that accurate prediction with connection- and node-sparse estimators is possible even when using very wide and deep networks. Let us analyze the factors one by one and compare them to the factors in the bounds of Taheri et al. (2020) and Neyshabur et al. (2015), which are the two most related papers. The connection-sparse case compares to the results in Taheri et al. (2020), and it compares to the results in Neyshabur et al. (2015) when setting the parameters of that paper to p = q = 1 (which gives a setting that is slightly more restrictive than ours) or p = 1, q = ∞ (which gives a setting that is slightly less restrictive than ours). The node-sparse case compares to Neyshabur et al. (2015) with p = 2, q = ∞ (which gives a setting that is more restrictive than ours, though). Our setup is also more general than the one in Neyshabur et al. (2015) in the sense that it allows for activations other than ReLU.

The dependence on n is, as usual, 1/√n up to logarithmic factors.

In the connection-sparse case, our bounds involve v_∞ = √( ∑_{i=1}^{n} ||x_i||_∞² / n ) rather than the factor v̄_∞ := max_{i∈{1,...,n}} ||x_i||_∞ of Neyshabur et al. (2015) or the factor v_2 = √( ∑_{i=1}^{n} ||x_i||₂² / n ) of Taheri et al. (2020). In principle, the improvements of v_∞ over v̄_∞ and v_2 can be up to a factor √n and up to a factor √d, respectively; in practice, the improvements depend on the specifics of the data. For example, on the training data of MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017) (√n ≈ 250 and √d = 28 in both data sets), it holds that v_∞ ≈ v̄_∞ ≈ v_2/9 and v_∞ ≈ v̄_∞ ≈ v_2/12, respectively.
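The input normalizations v_∞ and v_2, as well as the max-based variant v̄_∞, can be compared directly on data; in this sketch, random inputs merely stand in for real image vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(1000, 784))  # placeholder for n input vectors of dimension d

v_inf = np.sqrt(np.mean(np.max(np.abs(X), axis=1) ** 2))  # sqrt(mean_i ||x_i||_inf^2)
v_2 = np.sqrt(np.mean(np.sum(X ** 2, axis=1)))            # sqrt(mean_i ||x_i||_2^2)
v_inf_bar = np.max(np.abs(X))                             # max_i ||x_i||_inf

# ||x||_inf <= ||x||_2 entry-wise, so v_inf <= v_2; averaging also gives v_inf <= v_inf_bar
```

For real MNIST-type images, v_inf sits close to v_inf_bar (many pixels attain values near the maximum), while v_2 is an order of magnitude larger, matching the ratios reported in the text.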
In the node-sparse case, our bounds involve v_2, which is again somewhat smaller than the factor v̄_2 := max_{i∈{1,...,n}} ||x_i||₂ in Neyshabur et al. (2015).

The main difference between the bounds for the connection-sparse and node-sparse estimators is their dependence on the networks' maximal width p. The bound for the connection-sparse estimator (3) depends on the width p only logarithmically (through P), while the bound for the node-sparse estimator (6) depends on p sublinearly. The dependence in the connection-sparse case is the same as in Taheri et al. (2020), while Neyshabur et al. (2015) can avoid even that logarithmic dependence (and, therefore, allow for networks with infinite widths). The node-sparse case in Neyshabur et al. (2015) does not involve our linear dependence on the width, but this difference stems from the fact that they use a more restrictive version of the grouping (we take the maximum over each layer, while they take the maximum over each node), and our results can be readily adjusted to their notion of group sparsity. These observations indicate that node sparsity as formulated above is suitable for slim networks (p ≪ n) but should be strengthened or complemented with other notions of sparsity otherwise. To give a numeric example, the training data in MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017) comprise n = 60 000 samples, which means that the width should be considerably smaller than 60 000 when using node sparsity alone. (Note that the input layer does not contribute to p, which means that d could be larger.)

For unconstrained estimation, one can expect a linear dependence of the error on the total number of parameters (Anthony & Bartlett, 1999). Our bounds for the sparse estimators, in contrast, have only a log[P] dependence on the total number of parameters. This difference illustrates the virtue of regularization in general, and the virtue of sparsity in particular.
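To get a sense of the scales involved, the two rate factors from Proposition 2 combined with Theorem 1 (constants, the v-factors, and the norm of the outermost layer all omitted) can be evaluated numerically; P denotes the total number of parameters, and all dimension values below are assumptions chosen purely for illustration:

```python
import math

def con_rate(l, m, n, P):
    # connection-sparse rate factor sqrt(l * (log[2mnP])^3 / n), constants omitted
    return math.sqrt(l * math.log(2 * m * n * P) ** 3 / n)

def node_rate(l, m, n, P, p):
    # node-sparse rate factor sqrt(m * l * p * (log[2mnP])^3 / n)
    return math.sqrt(m * l * p * math.log(2 * m * n * P) ** 3 / n)

# MNIST-like dimensions (assumed): 5 hidden layers, 10 outputs,
# 60 000 samples, width 100, one million parameters in total.
l, m, n, p = 5, 10, 60_000, 100
P = 10**6
print(con_rate(l, m, n, P), node_rate(l, m, n, P, p))
```

The node-sparse factor exceeds the connection-sparse one by √(mp), which illustrates the remark above that node sparsity alone suits slim networks, whereas the connection-sparse rate grows only logarithmically with the width.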
Both of our bounds have a mild √ l dependence on the depth. These dependencies considerably improve on the exponentially-increasing dependencies on the depth in Neyshabur et al. (2015) and, therefore, are particularly suited to describe deep network architectures. Replacing the conditions maxj |||Θj |||1 ≤ 1 and maxj |||Θj |||2,1 ≤ 1 in the definitions of the connection-sparse and node-sparse estimators by the stricter conditions ∑ j |||Θj |||1 ≤ 1 and ∑ j |||Θj |||2,1 ≤ 1, respectively (cf. Taheri et al. (2020) and our discussion in Section 2), the dependence on the depth can be improved further from √ l to (2/l)l √ l (this only requires a simple adjustment of the last display in the proof of Proposition 4), which is exponentially decreasing in the depth.\nOur connection-sparse bounds have a mild log[m] dependence on the number of output nodes; the node-sparse bound involve an additional factor √ m. The case of multiple outputs has not been considered in statistical prediction bounds before.\nProposition 2 also highlights another advantage of our regularization approach over theories such as Neyshabur et al. (2015) that apply to constraint estimators. The theories for constraint estimators require bounding the sparsity levels directly, but in practice, suitable values for these bounds are rarely known. In our framework, in contrast, the sparsity is controlled via tuning parameters indirectly, and Proposition 2—although not providing a complete practical calibration scheme—gives insights into how these tuning parameters should scale with n, d, l, and so forth.\nWe also note that the bounds in Theorem 1 can be generalized readily to every estimator of the form\nΘ̂gen ∈ arg min Θ∈Mgen\n{ n∑\ni=1\n∣∣∣∣yi − gΘ[xi]∣∣∣∣22 + rgen|||Θl||| } ,\nwhere rgen ∈ [0,∞) is a tuning parameter,Mgen any nonempty subset ofM, and ||| · ||| any norm. 
The bound for such an estimator is then\nerr[Θ̂gen] ≤ inf Θ∈Mgen\n{ err[Θ] +\n2rgen n |||Θl|||\n}\nfor rgen ≥ r∗gen, where r∗gen is as r∗con but based on the dual norm of ||| · ||| instead of the dual norm of ||| · |||1. For example, one could impose connection sparsity on some layers and node sparsity on others, or one could impose different regularizations altogether. We omit the details to avoid digression.\nWe finally illustrate that the bounds for the prediction errors also entail bounds for the generalization errors. For simplicity, we consider a parametric setting and subgaussian noise again.\nProposition 3 (Generalization Guarantees). Assume that the inputs x,x1, . . . ,xn are i.i.d. random vectors, that the noise vectors u1, . . . ,un are independent and centered and have uniformly subgaussian entries, and that r∗con, r ∗ node → 0 as n → ∞. Consider an arbitrary positive constant b ∈ (0,∞). If g∗ = gΘ∗ for a Θ∗ ∈ M1 that is independent of the sample size n, it holds with probability at least 1− 1/n that\nrisk[Θ̂con] ≤ (1 + b) risk[Θ∗] + cv∞\n√ l ( log[2mnp] )3 n |||(Θ∗)l|||1\nfor a constant c ∈ (0,∞) that depends only on b and the subgaussian parameters of the noise. Similarly, if g∗ = gΘ∗ for a Θ∗ ∈ M2,1 that is independent of the sample size n, it holds with probability at least 1− 1/n that\nrisk[Θ̂con] ≤ (1 + b) risk[Θ∗] + cv2\n√ mlp ( log[2mnp] )3 n |||(Θ∗)l|||2,1\nfor a constant c ∈ (0,∞) that depends only on b and the subgaussian parameters of the noise.\nHence, the generalization errors are bounded by the same terms as the prediction errors." }, { "heading": "4 DISCUSSION", "text": "Our statistical theory for sparse deep learning incorporates node sparsity as well as connection sparsity, scales favorably in the number of layers, provides insights into how the tuning parameters should scale with the dimensions of the problem, and applies to unbounded loss functions. It is the first statistical theory that has all of these features—cf. Table 1. 
Additionally we avoid the introduction of an additional scaling parameter and improve the dependence of the rates on the input data. Finally, our novel proof approach based on high-dimensional statistics and empirical-process theory is of independent interest.\nEvidence for the benefits of deep networks has been established in practice (LeCun et al., 2015; Schmidhuber, 2015), approximation theory (Liang & Srikant, 2016; Telgarsky, 2016; Yarotsky, 2017), and statistics (Golowich et al., 2017; Taheri et al., 2020). Since our guarantees scale at most sublinearly in the number of layers (or even improve with increasing depth—see our comment on Page 7), our paper complements these lines of research and shows that sparsity-inducing regularization is an effective approach to coping with the complexity of deep and very deep networks.\nConnection sparsity limits the number of nonzero entries in each parameter matrix, while layer sparsity only limits the total number of nonzero rows. Hence, the number of columns in a parameter matrix, that is, the width of the preceding layer, is regularized only in the case of connection sparsity. Our theoretical results reflect this insight in that the bounds for the connection- and node-sparse estimators depend on the networks’ width logarithmically and sublinearly, respectively. Practically speaking, our results indicate that connection sparsity is suitable to handle wide networks, but node sparsity is suitable only when complemented by connection sparsity or other strategies.\nThe mild logarithmic dependence of our connection-sparse bounds on the number of output nodes illustrates that networks with many outputs can be learned in practice. Our prediction theory is the first one that considers multiple output nodes; a classification theory with a logarithmic dependence on the output nodes has been established very recently in Ledent et al. 
(2019).\nThe mathematical underpinnings of our theory are very different from those of most other papers in theoretical deep learning. The proof of the main theorem shares similarities with proofs in highdimensional statistics; to formulate and control the relevant empirical processes, we use the concept of effective noise, chaining, and Lipschitz properties of neural networks. These tools are not standard in deep learning theory and, therefore, might be of more general interest (see Appendix A.7 for further details).\nOur theory shares some limitations with all other current theories in deep learning: the network architectures are simpler than the ones typically used in practice (cf. Lederer (2020b), though); the bounds concern global optima rather than the local optima or saddle points provided by many practical algorithms; and the theory does not entail a practical scheme for the calibration of the tuning parameters. Nevertheless, our theory, and mathematical theory in general, provides insights about what accuracies to expect in practice and about what network types and estimators might be suitable for a given problem.\nIn summary, our paper highlights the benefits of sparsity in deep learning and, more generally, showcases the usefulness of statistical analyses for understanding neural networks." }, { "heading": "A APPENDIX", "text": "The Appendix consists of two auxiliary results and the proofs of Theorem 1 and Propositions 1 and 2. Our approach combines techniques from high-dimensional statistics and empirical-process theory that are very different from the techniques used in most other approaches in the literature." }, { "heading": "A.1 LIPSCHITZ PROPERTY", "text": "In this section, we prove a Lipschitz property that we use in the proof of Proposition 2.\nProposition 4 (Lipschitz Property). 
In the framework of Sections 2 and 3, it holds for all Θ,Γ ∈M1 that ∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣∞ ≤ √l||x||∞|||Θ− Γ|||F and for all Θ,Γ ∈M2,1 that∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 ≤ √l||x||2|||Θ− Γ|||F . The Frobenius norm is defined as\n|||Θ|||F ··= √√√√ l−1∑ j=0 |||Θj |||2F ··= √√√√ l−1∑ j=0 pj+1∑ i=1 pj∑ k=1 |(Θj)ik|2 for Θ ∈M2,1 =M1 ∪M2,1 .\nProposition 4 generalizes (Taheri et al., 2020, Proposition 2) to vector-valued network outputs and to node sparsity, and it replaces their ||x||2 with the smaller ||x||∞ in the connection-sparse case.\nProof of Proposition 4. This proof generalizes and sharpens the proof of Taheri et al. (2020), and it simplifies some arguments of that proof. We define the “inner subnetworks” of a network gΘ with Θ ∈M2,1 as the vector-valued functions\nS0gΘ : R d → Rp\n1\nx 7→ S0gΘ[x] ··= Θ 0x\nand\nSjgΘ : R d → Rp\nj+1 x 7→ SjgΘ[x] ··= Θ jf j [ · · ·f1[Θ0x] ] for j ∈ {1, . . . , l − 1}. Similarly, we define the “outer subnetworks” of gΘ as the real-valued functions\nSjgΘ : R pj → Rp l z 7→ SjgΘ[z] ··= f l [ Θl−1 · · ·f j [z] ] for j ∈ {1, . . . , l − 1} and\nSlgΘ : R pl → Rp l\nz 7→ SlgΘ[z] ··= f l[z] .\nThe initial network can be split into an inner and an outer network along every layer j ∈ {1, . . . , l}:\ngΘ[x] = S jgΘ [ Sj−1gΘ[x] ] for x ∈ Rd .\nWe call this our splitting argument.\nTo exploit the splitting argument, we derive a contraction result for the inner subnetworks and a Lipschitz result for the outer subnetworks. We denote the `2-operator norm of a matrix A, that is, the largest singular value of A, by |||A|||op. Using then the assumptions that the activation functions are 1-Lipschitz and f j [0pj ] = 0pj , we get for every Θ = (Θl−1, . . . ,Θ0) ∈M2,1 and x ∈ Rd that∣∣∣∣Sj−2gΘ[x]∣∣∣∣2 = ∣∣∣∣Θj−2f j−2[Sj−3gΘ[x]]∣∣∣∣2\n≤ |||Θj−2|||op ∣∣∣∣f j−2[Sj−3gΘ[x]]∣∣∣∣2\n≤ |||Θj−2|||op ∣∣∣∣Sj−3gΘ[x]∣∣∣∣2 ≤ · · ·\n≤ (j−2∏\nk=1\n|||Θk|||op ) ||Θ0x||2\n≤ (j−2∏\nk=0\n|||Θk|||op ) ||x||2\nfor all j ∈ {2, . . . , l}. 
Now, since |||Θk|||op ≤ |||Θk|||F ≤ |||Θk|||2,1 and Θ ∈ M2,1, we can deduce from the display that ∣∣∣∣Sj−2gΘ[x]∣∣∣∣2 ≤ (j−2∏\nk=0\n|||Θk|||2,1 ) ||x||2 .\nThis inequality is our contraction property.\nBy similar arguments, we get for every z1, z2 ∈ Rp j that∣∣∣∣SjgΘ[z1]− SjgΘ[z2]∣∣∣∣2 = ∣∣∣∣f l[Θl−1 · · ·f j [z1]]− f l[Θl−1 · · ·f j [z2]]∣∣∣∣2\n≤ ∣∣∣∣Θl−1[f l−1 · · ·f j [z1]]−Θl−1[f l−1 · · ·f j [z2]]∣∣∣∣2\n≤ |||Θl−1|||op ∣∣∣∣f l−1[· · ·f j [z1]]− f l−1[· · ·f j [z2]]∣∣∣∣2 ≤ · · ·\n≤ (l−1∏\nk=j\n|||Θk|||op ) ||z1 − z2||2\nfor j ∈ {1, . . . , l}, where ∏l−1\nk=l |||Θk|||op ··= 1. Hence, similarly as above,∣∣∣∣SjgΘ[z1]− SjgΘ[z2]∣∣∣∣2 ≤ (l−1∏ k=j |||Θk|||2,1 ) ||z1 − z2||2 .\nThis inequality is our Lipschitz property.\nWe now use the contraction and Lipschitz properties of the subnetworks to derive a Lipschitz result for the entire network. We consider two networks gΘ and gΓ with parameters Θ = (Θ\nl−1, . . . ,Θ0) ∈ M2,1 and Γ = (Γl−1, . . . ,Γ0) ∈M2,1, respectively. Our above-derived splitting argument applied with j = 1 and j = l, respectively, yields∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 = ∣∣∣∣S1gΘ[S0gΘ[x]]− SlgΓ[Sl−1gΓ[x]]∣∣∣∣2 . Elementary algebra and the fact that Sj−1gΘ[Sj−2gΓ[x]] = S jgΘ[Θ j−1f j−1[Sj−2gΓ[x]] for j ∈ {2, . . . 
, l} then allow us to derive∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 = ∣∣∣∣∣∣S1gΘ[S0gΘ[x]]− l∑\nj=1\n( SjgΘ [ Sj−1gΓ[x] ] − SjgΘ [ Sj−1gΓ[x] ]) − SlgΓ [ Sl−1gΓ[x] ]∣∣∣∣∣∣ 2\n= ∣∣∣∣∣∣S1gΘ[S0gΘ[x]]− S1gΘ[S0gΓ[x]] −\nl∑ j=2 ( SjgΘ [ Sj−1gΓ[x] ] − Sj−1gΘ [ Sj−2gΓ[x] ]) + SlgΘ [ Sl−1gΓ[x] ] − SlgΓ [ Sl−1gΓ[x] ]∣∣∣∣∣∣ 2\n= ∣∣∣∣∣∣S1gΘ[S0gΘ[x]]− S1gΘ[S0gΓ[x]] −\nl∑ j=2 ( SjgΘ [ Sj−1gΓ[x] ] − SjgΘ [ Θj−1f j−1 [ Sj−2gΓ[x] ]]) + SlgΘ [ Sl−1gΓ[x] ] − SlgΓ [ Sl−1gΓ[x] ]∣∣∣∣∣∣ 2\n≤ ∣∣∣∣S1gΘ[S0gΘ[x]]− S1gΘ[S0gΓ[x]]∣∣∣∣2 +\nl∑ j=2 ∣∣∣∣SjgΘ[Sj−1gΓ[x]]− SjgΘ[Θj−1f j−1[Sj−2gΓ[x]]]∣∣∣∣2 + ∣∣∣∣SlgΘ[Sl−1gΓ[x]]− SlgΓ[Sl−1gΓ[x]]∣∣∣∣2 .\nWe bound this further by using the above-derived Lipschitz property of the outer networks and the observation that SlgΘ[Sl−1gΓ[x]] = S\nlgΓ[Sl−1gΓ[x]]:∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 ≤ ( l−1∏ k=1 |||Θk|||2,1 )∣∣∣∣S0gΘ[x]− S0gΓ[x]∣∣∣∣2\n+ l∑ j=2 (l−1∏ k=j |||Θk|||2,1 )∣∣∣∣Sj−1gΓ[x]−Θj−1f j−1[Sj−2gΓ[x]]∣∣∣∣2 ,\nwhich is by the definition of the inner networks equivalent to\n∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 ≤ ( l−1∏ k=1 |||Θk|||2,1 ) ||Θ0x− Γ0x||2\n+ l∑ j=2 (l−1∏ k=j |||Θk|||2,1 )∣∣∣∣Γj−1f j−1[Sj−2gΓ[x]]−Θj−1f j−1[Sj−2gΓ[x]]∣∣∣∣2 .\nUsing the properties of the operator norm, we can deduce from this inequality that\n∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 ≤ ( l−1∏ k=1 |||Θk|||2,1 ) |||Θ0 − Γ0|||op||x||2\n+ l∑ j=2 (l−1∏ k=j |||Θk|||2,1 ) |||Γj−1 −Θj−1|||op ∣∣∣∣f j−1[Sj−2gΓ[x]]∣∣∣∣2 . Invoking the mentioned conditions on the activation functions and the contraction property for the inner subnetworks then yields∣∣∣∣gΘ[x]− gΓ[x]∣∣∣∣2 ≤ ( max\nv∈{0,...,l−1} ∏ k∈{0,...,l−1}\nk 6=v\nmax { |||Θk|||2,1, |||Γk|||2,1 })( l−1∑ j=0 |||Γj −Θj |||op ) ||x||2\n≤ √ l||x||2|||Θ− Γ|||F .\nThe proof for the connection-sparse case is almost the same. The main difference is that one needs to use the || · ||∞- and ||| · |||1-norms (rather than the || · ||2- and ||| · |||op-norms) and the inequality ||Ab||∞ ≤ |||A|||1||b||∞ (rather than the inequality ||Ab||2 ≤ |||A|||op||b||2) to establish suitable contraction and Lipschitz properties." 
}, { "heading": "A.2 ENTROPY BOUND", "text": "In this section, we establish bounds for the entropies of M1 and M2,1. The distance between two networks gΘ and gΓ is defined as dist[gΘ, gΓ] ··= √∑n i=1 ||gΘ[xi]− gΓ[xi]||2∞/n. Given this distance function and a radius t ∈ (0,∞), the metric entropy of a nonempty set A ⊂ {Θ = (Θl−1, . . . ,Θ0) : Θj ∈ Rpj+1×pj} is denoted by H[t,A]. We then get the following entropy bounds.\nLemma 1 (Entropy Bounds). In the framework of Sections 2 and 3, it holds for a constant cH ∈ (0,∞) and every t ∈ (0,∞) that\nH[t,M1] ≤ cH ⌈ (v∞) 2l\nt2\n⌉ log [ pt2\n(v∞)2l + 2 ] and\nH[t,M2,1] ≤ cH ⌈ (v∞) 2lp\nt2\n⌉ log [ pt2\n(v∞)2l + 2\n] .\nProof of Lemma 1. The first bound can be derived by combining established deterministic and randomization arguments (Carl, 1985);(Lederer, 2010, Proof of Theorem 1.1);(Taheri et al., 2020, Proposition 3).\nFor the second bound, observe that\n|||Θj |||1 = pj+1∑ i=1 pj∑ k=1 |(Θj)ik| ≤ √ pj+1 pj∑ k=1 √√√√pj+1∑ i=1 |(Θj)ik|2 = √ pj+1|||Θj |||2,1 = √ p|||Θj |||2,1\nfor all j ∈ {0, . . . , l− 1} and Θj ∈ Rpj+1×pj . We used in turn 1. the definition of the ||| · |||1-norm on Page 2, 2. the linearity and interchangeability of finite sums and the inequality ||a||1 ≤ √ b||a||2 for all a ∈ Rb, 3. the definition of the ||| · |||2,1-norm on Page 4, and 4. the definition of the width p on Page 2. Hence,M2,1 ⊂ √pM1. A bound for the entropies ofM2,1 can, therefore, be derived from the first bound by replacing the radii t on the right-hand side by t/√p." }, { "heading": "A.3 PROOF OF THEOREM 1", "text": "In this section, we state a proof for Theorem 1. The proof is inspired by derivations in highdimensional statistics—see, for example, (Zhuang & Lederer, 2018) and references therein.\nProof of Theorem 1. The main idea of the proof is to contrast the estimators’ objective functions evaluated at their minima with the estimators’ objective functions at other points. Our first step is to derive what we call a basic inequality. 
By the definition of the estimator in (6), it holds for every Θ ∈M2,1 that\nn∑ i=1 ∣∣∣∣yi − gΘ̂[xi]∣∣∣∣22 + rnode|||Θ̂l|||2,1 ≤ n∑ i=1 ∣∣∣∣yi − gΘ[xi]∣∣∣∣22 + rnode|||Θl|||2,1 , where we use the shorthand Θ̂ ··= Θ̂node. We then invoke the model in (1) to rewrite this inequality as\nn∑ i=1 ∣∣∣∣g∗[xi] + ui − gΘ̂[xi]∣∣∣∣22 + rnode|||Θ̂l|||2,1 ≤ n∑ i=1 ∣∣∣∣g∗[xi] + ui − gΘ[xi]∣∣∣∣22 + rnode|||Θl|||2,1 . Expanding the squared terms and rearranging the inequality then yields\nn∑ i=1 ∣∣∣∣g∗[xi]− gΘ̂[xi]∣∣∣∣22 ≤ n∑ i=1 ∣∣∣∣g∗[xi]− gΘ[xi]∣∣∣∣22 + 2\nn∑ i=1 ( gΘ̂[xi] )> ui − 2 n∑ i=1 ( gΘ[xi] )> ui + rnode|||Θl|||2,1 − rnode|||Θ̂ l |||2,1 .\nThis is our basic inequality.\nIn the remainder of the proof, we need to bound the first two terms in the last line of the basic inequality. We call these terms the empirical process terms. Using the reformulation of the networks in (7), we can write the empirical process term of a general parameter Γ ∈M2,1 according to\n2 n∑ i=1 ( gΓ[xi] )> ui = 2 n∑ i=1 ( ΓlgΓ[xi] )> ui\nwith Γ ∈M2,1. Using the 1. the properties of transpositions, 2. the definition of the trace function, 3. the cyclic property of the trace function, and 4. the linearity of the trace function yields further\n2 n∑ i=1 ( gΓ[xi] )> ui = 2 n∑ i=1 ( gΓ[xi] )> (Γl)>ui\n= 2 n∑ i=1 trace [( gΓ[xi] )> (Γl)>ui ] = 2\nn∑ i=1 trace [ ui ( gΓ[xi] )> (Γl)> ] = 2 trace\n[( n∑ i=1 ui ( gΓ[xi] )>) (Γl)> ] .\nNow, 1. denoting the column-vector that corresponds to the kth column of a matrix A by A•k, 2. using Hölder’s inequality, 3. using Hölder’s inequality again, and 4. 
again Hölder’s inequality and our definitions of the elementwise `∞-and `1-norms, we find\n2 n∑ i=1 ( gΓ[xi] )> ui = 2 pl∑ k=1 〈( n∑ i=1 ui ( gΓ[xi] )>) •k , (Γl)•k 〉\n≤ 2 pl∑\nk=1 ∣∣∣∣∣∣∣∣( n∑ i=1 ui ( gΓ[xi] )>) •k ∣∣∣∣∣∣∣∣ 2 ∣∣∣∣(Γl)•k∣∣∣∣2 ≤ 2 max\nk∈{1,...,pl} ∣∣∣∣∣∣∣∣( n∑ i=1 ui ( gΓ[xi] )>) •k ∣∣∣∣∣∣∣∣ 2 pl∑ k=1 ∣∣∣∣(Γl)•k∣∣∣∣2 ≤ 2 √ m\n∣∣∣∣∣∣∣∣∣∣∣∣ n∑ i=1 ui ( gΓ[xi] )>∣∣∣∣∣∣∣∣∣∣∣∣ ∞ |||Γl|||2,1 ,\nwhich implies in view of the definition of the effective noise in (8)\n2 n∑ i=1 ( gΓ[xi] )> ui ≤ r∗node|||Γl|||2,1 .\nThis inequality is our bound on the empirical process terms.\nWe can combine the bound on the empiricial process term and the basic inequality to find\nn∑ i=1 ∣∣∣∣g∗[xi]− gΘ̂[xi]∣∣∣∣22 ≤ n∑ i=1 ∣∣∣∣g∗[xi]− gΘ[xi]∣∣∣∣22 + r∗node|||Θ̂ l |||2,1 + r∗node|||Θl|||2,1 + rnode|||Θl|||2,1 − rnode|||Θ̂ l |||2,1 .\nUsing then the assumption rnode ≥ r∗node yields n∑\ni=1 ∣∣∣∣g∗[xi]− gΘ̂[xi]∣∣∣∣22 ≤ n∑ i=1 ∣∣∣∣g∗[xi]− gΘ[xi]∣∣∣∣22 + 2rnode|||Θl|||2,1 . Multiplying both sides by 1/n and taking the infimum over Θ ∈M2,1 on the right-hand side then gives\n1\nn n∑ i=1 ∣∣∣∣g∗[xi]− gΘ̂[xi]∣∣∣∣22 ≤ infΘ∈M2,1 { 1 n n∑ i=1 ∣∣∣∣g∗[xi]− gΘ[xi]∣∣∣∣22 + 2rnoden |||Θl|||2,1 } .\nInvoking the definition of the prediction error on Page 4 gives the desired result.\nThe proof for the connection-sparse estimator is virtually the same." }, { "heading": "A.4 PROOF OF PROPOSITION 1", "text": "In this section, we give a short proof of Proposition 1.\nProof of Proposition 1. Verify the fact that if the all-zeros parameter is neither a solution of (3) nor of (5), all solutions Θ̂con and Θ̃con of (3) and (5), respectively, satisfy (Θ̂con)j , (Θ̃con)j 6= 0pj+1×pj for all j ∈ {0, . . . , l}. It then follows from the assumed nonnegative homogeneity, rcon > 0, and the definition of the estimator in (3) that |||(Θ̂con)0|||1, . . . 
, |||(Θ̂con)l−1|||1 = 1 for all solutions Θ̂con.\nGiven a solution Θ̃con of (5), define a ··= |||(Θ̃con)0|||1/(l+ 1) + · · ·+ |||(Θ̃con)l|||1/(l+ 1) and verify the fact that Γ ∈ M with Γ0 ··= a(Θ̃con)0/|||(Θ̃con)0|||1,Γ1 ··= a(Θ̃con)1/|||(Θ̃con)1|||1, . . . has the same value in the objective function as Θ̃con." }, { "heading": "A.5 PROOF OF PROPOSITION 2", "text": "In this section, we establish a proof of Proposition 2. The key tools are the Lipschitz property of Proposition 4 and the entropy bounds of Lemma 1.\nProof of Proposition 2. The main idea is to rewrite the event under consideration in a form that is amenable to known tail bounds for suprema of empirical processes with subgaussian random variables.\nThe connection-sparse bound follows from\nP { r∗con ≥ cv∞ √ nl ( log[2mnp] )3} = P\n{ 2 sup Ψ∈M1 ∣∣∣∣∣∣∣∣∣∣∣∣ n∑ i=1 ui ( gΨ[xi] )>∣∣∣∣∣∣∣∣∣∣∣∣ ∞ ≥ cv∞ √ nl ( log[2mnp] )3}\n≤ mpl max j∈{1,...,m} k∈{1,...,pl} P\n{ 2 sup Ψ∈M1 ∣∣∣∣( n∑ i=1 ui ( gΨ[xi] )>) jk ∣∣∣∣ ≥ cv∞√nl(log[2mnp])3}\n≤ mpl · 1 mnp\n≤ 1 n ,\nwhere we use in turn 1. the definition of r∗con in (8), 2. the union bound, 3. van de Geer (2000, Corollary 8.3) and our Proposition 4 and Lemma 1, and 4. the inequality pl ≤ p = ∑l j=0 p\nj+1pj and consolidating the factors. The key concept underlying van de Geer (2000, Corollary 8.3 on Page 128) is chaining (van der Vaart & Wellner, 1996, Page 90). The same considerations also apply to the node-sparse case, but we get an additional factor √ m from the definition of the effective noise in (8) and a factor √p from the entropy bound in Lemma 1. The differences between the bounds for the connection- and node-sparse cases in terms of v∞ vs. v2 stem from the different Lipschitz constants in Proposition 4." }, { "heading": "A.6 PROOF OF PROPOSITION 3", "text": "Proof of Proposition 3. 
The proof is based on standard empirical-process theory, including contraction and symmetrization arguments.\nUsing basic algebra and measure theory, it is easy to show that\nrisk[Θ̂con] ≤ (1 + b) risk[Θ∗] + cb err[Θ̂con]\n+ cb ∣∣∣∣ 1n n∑\ni=1\n(∣∣∣∣g∗[xi]− gΘ̂con [xi]∣∣∣∣22 − E∣∣∣∣g∗[xi]− gΘ̂con [xi]∣∣∣∣22) ∣∣∣∣\nfor a constant cb ∈ (0,∞) that depends only on b. The first term in this bound is the minimal risk as stated in the proposition, and the second term can be bounded by Corollary 1 and Proposition 2. Hence, it remains to bound the third term.\nIn view of the law of large numbers, it is reasonable to hope for the third term to be small. But to make this precise, we have to keep in mind that the estimator itself depends on the input vectors. We, therefore, need to prepare the third term for the application of a uniform version of the law of large numbers. Using standard contraction arguments—see (Boucheron et al., 2013, Chapter 11.3), for example—and Hölder’s inequality, we can bound the third term by bounding\nmax { |||(Θ∗)l|||1, |||(Θ̂con)l|||1 } sup\nΘ∈M1 ∣∣∣∣∣∣∣∣∣∣∣∣ n∑ i=1 ( gΘ∗ [xi]−gΘ[xi]−E [ gΘ∗ [xi]−gΘ[xi] ])∣∣∣∣∣∣∣∣∣∣∣∣2 ∞ ,\nwhich removes the dependence on the estimator Θ̂con up to the leading factor. To see that we can also neglect that factor, verify (see Proposition 2 and the proof of Theorem 1) that |||(Θ̂con)l|||1 ≤ 2|||(Θ∗)l|||1 with high probability as long as r∗con ≥ cv∞ √ nl(log[2mnp])3 with c large enough. Consequently, we just need to consider the quantity\nsup Θ∈M1 ∣∣∣∣∣∣∣∣∣∣∣∣ n∑ i=1 ( gΘ∗ [xi]− gΘ[xi]− E [ gΘ∗ [xi]− gΘ[xi] ])∣∣∣∣∣∣∣∣∣∣∣∣2 ∞\nin the following.\nThe last step is to bring this term in a form that is amenable to our earlier proofs. Using standard symmetrization arguments—see van der Vaart & Wellner (1996, Chapter 2.3), for example)—we can bound this quantity by bounding\nsup Θ∈M1 ∣∣∣∣∣∣∣∣∣∣∣∣ n∑ i=1 ki ( gΘ∗ [xi]− gΘ[xi] )∣∣∣∣∣∣∣∣∣∣∣∣2 ∞ ,\nwhere k1, . . . , kn are i.i.d. Rademacher random variables. But even though k1, . 
. . , kn are i.i.d. Rademacher random variables, we do not resort to Rademacher complexities; instead, we use that Rademacher random variables are subgaussian, so that we can then proceed similarly as in the proof of Proposition 2.\nThe node-sparse case can be treated along the same lines." }, { "heading": "A.7 EXTENSIONS", "text": "Our proof approach disentangles the specifics of the objective function (proof of Theorem 1), of the network structure (proof of Proposition 4), and of the stochastic terms (proofs of Lemma 1 and Proposition 2). This feature allows one to generalize and extend the results of this paper in straightforward ways. For example, extensions to different noise distributions only need a corresponding version of Proposition 2—with everything else unchanged. One could envision, for example, using concentration inequalities for heavy-tailed distributions such as in Lederer & van de Geer (2014). Extensions to different loss functions, to give another example, can be established by adjusting Theorem 1 accordingly. This can be done, for example, by invoking ideas from specialized literature on high-dimensional logistic regression such as Li & Lederer (2019). We avoid going into further details to avoid digression; the key message is that the flexibility of the proofs is yet another advantage of our approach." } ]
2020
null
SP:a020f6bca5d85f83d595e5b724e32394009dcd7e
[ "The paper proposes a neural topic model which log-likelihood is regularized by Sinkhorn distance, instead of following Variational AutoEncoder (VAE) approach. The proposed model is hence cannot be interpreted as a probabilistic generative model. Still, with respect to metrics such as Topic Coherence and Topic Diversity which don't require probabilistic interpretation of topic model, the proposed model performs very well across five standard benchmark datasets for topic modeling.", "The paper proposes a neural topic model derived from the perspective of optimal transport (OT). Topic embeddings are learned as part of the training process and is used to construct the cost matrix of the transport. The cost function based on the OT distance is further improved by combining with the cross-entropy loss and by using the Sinkhorn distance to replace the OT distance." ]
Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have obtained increasingly research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coherent/diverse topics at the same time. Moreover, they often degrade their performance severely on short documents. The requirement of reparameterisation could also comprise their training quality and model flexibility. To address these shortcomings, we present a new neural topic model via the theory of optimal transport (OT). Specifically, we propose to learn the topic distribution of a document by directly minimising its OT distance to the document’s word distributions. Importantly, the cost matrix of the OT distance models the weights between topics and words, which is constructed by the distances between topics and words in an embedding space. Our proposed model can be trained efficiently with a differentiable loss. Extensive experiments show that our framework significantly outperforms the state-of-the-art NTMs on discovering more coherent and diverse topics and deriving better document representations for both regular and short texts.
[ { "affiliations": [], "name": "OPTIMAL TRANSPORT" }, { "affiliations": [], "name": "He Zhao" }, { "affiliations": [], "name": "Dinh Phung" }, { "affiliations": [], "name": "Viet Huynh" }, { "affiliations": [], "name": "Trung Le" }, { "affiliations": [], "name": "Wray Buntine" } ]
[ { "authors": [ "Nikolaos Aletras", "Mark Stevenson" ], "title": "Evaluating topic coherence using distributional semantics", "venue": "In International Conference on Computational Semantics, pp", "year": 2013 }, { "authors": [ "David M Blei", "Thomas L Griffiths", "Michael I Jordan" ], "title": "The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies", "venue": "Journal of the ACM,", "year": 2010 }, { "authors": [ "Sophie Burkhardt", "Stefan Kramer" ], "title": "Decoupling sparsity and smoothness in the Dirichlet variational autoencoder topic model", "venue": null, "year": 2019 }, { "authors": [ "Dallas Card", "Chenhao Tan", "Noah A Smith" ], "title": "Neural models for documents with metadata", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "venue": "In NIPS, pp. 2292–2300,", "year": 2013 }, { "authors": [ "Charlie Frogner", "Chiyuan Zhang", "Hossein Mobahi", "Mauricio Araya", "Tomaso A Poggio" ], "title": "Learning with a Wasserstein loss", "venue": "In NIPS, pp. 2053–2061,", "year": 2015 }, { "authors": [ "Zhe Gan", "R. Henao", "D. 
Carlson", "Lawrence Carin" ], "title": "Learning deep sigmoid belief networks with data augmentation", "venue": "In AISTATS,", "year": 2015 }, { "authors": [ "Viet Huynh", "He Zhao", "Dinh Phung" ], "title": "OTLDA: A geometry-aware optimal transport approach for topic modeling", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "ICLR,", "year": 2013 }, { "authors": [ "Rahul Krishnan", "Dawen Liang", "Matthew Hoffman" ], "title": "On the challenges of learning with inference networks on sparse, high-dimensional data", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "John D Lafferty", "David M Blei" ], "title": "Correlated topic models", "venue": "In NIPS, pp", "year": 2006 }, { "authors": [ "Jey Han Lau", "David Newman", "Timothy Baldwin" ], "title": "Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality", "venue": "In EACL,", "year": 2014 }, { "authors": [ "David D Lewis", "Yiming Yang", "Tony G Rose", "Fan Li" ], "title": "RCV1: A new benchmark collection for text categorization research", "venue": "JMLR, 5(Apr):361–397,", "year": 2004 }, { "authors": [ "Chenliang Li", "Haoran Wang", "Zhiqian Zhang", "Aixin Sun", "Zongyang Ma" ], "title": "Topic modeling for short texts with auxiliary word embeddings", "venue": "In SIGIR,", "year": 2016 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "JMLR, 9(Nov): 2579–2605,", "year": 2008 }, { "authors": [ "Christopher D Manning", "Prabhakar Raghavan", "Hinrich Schütze" ], "title": "Introduction to Information Retrieval", "venue": null, "year": 2008 }, { "authors": [ "Yishu Miao", "Lei Yu", "Phil Blunsom" ], "title": "Neural variational inference for text processing", "venue": "In 
ICML,", "year": 2016 }, { "authors": [ "Yishu Miao", "Edward Grefenstette", "Phil Blunsom" ], "title": "Discovering discrete latent topics with neural variational inference", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": null, "year": 2013 }, { "authors": [ "Feng Nan", "Ran Ding", "Ramesh Nallapati", "Bing Xiang" ], "title": "Topic modeling with Wasserstein autoencoders", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Dat Quoc Nguyen", "Richard Billingsley", "Lan Du", "Mark Johnson" ], "title": "Improving topic models with latent feature word representations", "venue": "TACL, 3:299–313,", "year": 2015 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "GloVe: Global vectors for word representation", "venue": "In EMNLP, pp", "year": 2014 }, { "authors": [ "James Petterson", "Wray Buntine", "Shravan M Narayanamurthy", "Tibério S Caetano", "Alex J Smola" ], "title": "Word features for latent Dirichlet allocation", "venue": "In NIPS,", "year": 1921 }, { "authors": [ "Xuan-Hieu Phan", "Le-Minh Nguyen", "Susumu Horiguchi" ], "title": "Learning to classify short and sparse text & web with hidden topics from large-scale data collections", "venue": "In WWW, pp", "year": 2008 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Michael Röder", "Andreas Both", "Alexander Hinneburg" ], "title": "Exploring the space of topic coherence measures", "venue": "In WSDM, pp", "year": 2015 }, { "authors": [ "Antoine Rolet", "Marco Cuturi", "Gabriel Peyré" ], "title": "Fast dictionary learning with a smoothed Wasserstein loss", "venue": "In AISTATS,", "year": 2016 }, { "authors": [ "Morgan A Schmitz", "Matthieu Heitz", 
"Nicolas Bonneel", "Fred Ngole", "David Coeurjolly", "Marco Cuturi", "Gabriel Peyré", "Jean-Luc Starck" ], "title": "Wasserstein dictionary learning: Optimal transport-based unsupervised nonlinear dictionary learning", "venue": "SIAM Journal on Imaging Sciences,", "year": 2018 }, { "authors": [ "Vivien Seguy", "Bharath Bhushan Damodaran", "Remi Flamary", "Nicolas Courty", "Antoine Rolet", "Mathieu Blondel" ], "title": "Large scale optimal transport and mapping estimation", "venue": null, "year": 2018 }, { "authors": [ "Akash Srivastava", "Charles Sutton" ], "title": "Autoencoding variational inference for topic models", "venue": null, "year": 2017 }, { "authors": [ "Haodong Sun", "Haomin Zhou", "Hongyuan Zha", "Xiaojing Ye" ], "title": "Learning cost functions for optimal transport", "venue": "arXiv preprint arXiv:2002.09650,", "year": 2020 }, { "authors": [ "Daniele Vitale", "Paolo Ferragina", "Ugo Scaiella" ], "title": "Classification of short texts by deploying topical annotations", "venue": "In ECIR, pp", "year": 2012 }, { "authors": [ "Hongteng Xu", "Wenlin Wang", "Wei Liu", "Lawrence Carin" ], "title": "Distilled Wasserstein learning for word embedding and topic modeling", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Yi Yang", "Doug Downey", "Jordan Boyd-Graber" ], "title": "Efficient methods for incorporating knowledge into topic models", "venue": "In EMNLP, pp", "year": 2015 }, { "authors": [ "Mikhail Yurochkin", "Sebastian Claici", "Edward Chien", "Farzaneh Mirzazadeh", "Justin M Solomon" ], "title": "Hierarchical optimal transport for document representation", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Hao Zhang", "Bo Chen", "Dandan Guo", "Mingyuan Zhou" ], "title": "WHAI: Weibull hybrid autoencoding inference for deep topic modeling", "venue": null, "year": 2018 }, { "authors": [ "He Zhao", "Lan Du", "Wray Buntine" ], "title": "A word embeddings informed focused topic model", "venue": "In ACML,", "year": 2017 }, { "authors": [ "He 
Zhao", "Lan Du", "Wray Buntine", "Mingyuan Zhou" ], "title": "Dirichlet belief networks for topic structure learning", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "He Zhao", "Dinh Phung", "Viet Huynh", "Yuan Jin", "Lan Du", "Wray Buntine" ], "title": "Topic modelling meets deep neural networks: A survey", "venue": "arXiv preprint arXiv:2103.00498,", "year": 2021 }, { "authors": [ "Mingyuan Zhou", "Yulai Cong", "Bo Chen" ], "title": "Augmentable gamma belief networks", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "As an unsupervised approach, topic modelling has enjoyed great success in automatic text analysis. In general, a topic model aims to discover a set of latent topics from a collection of documents, each of which describes an interpretable semantic concept. Topic models like Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its hierarchical/Bayesian extensions, e.g., in Blei et al. (2010); Paisley et al. (2015); Gan et al. (2015); Zhou et al. (2016) have achieved impressive performance for document analysis. Recently, the developments of Variational AutoEncoders (VAEs) and Autoencoding Variational Inference (AVI) (Kingma & Welling, 2013; Rezende et al., 2014) have facilitated the proposal of Neural Topic Models (NTMs) such as in Miao et al. (2016); Srivastava & Sutton (2017); Krishnan et al. (2018); Burkhardt & Kramer (2019). Inspired by VAE, many NTMs use an encoder that takes the Bag-of-Words (BoW) representation of a document as input and approximates the posterior distribution of the latent topics. The posterior samples are further input into a decoder to reconstruct the BoW representation. Compared with conventional topic models, NTMs usually enjoy better flexibility and scalability, which are important for the applications on large-scale data.\nDespite the promising performance and recent popularity, there are several shortcomings for existing NTMs, which could hinder their usefulness and further extensions. i) The training and inference processes of NTMs are typically complex due to the prior and posterior constructions of latent topics. To encourage topic sparsity and smoothness, Dirichlet (Burkhardt & Kramer, 2019) or gamma (Zhang et al., 2018) distributions are usually used as the prior and posterior of topics, but reparameterisation is inapplicable to them, thus, complex sampling schemes or approximations have to be used, which could limit the model flexibility. 
ii) A desideratum of a topic model is to generate better topical representations of documents with more coherent and diverse topics; but for many existing NTMs, it is hard to achieve good document representation and coherent/diverse topics at the same time. This is because the objective of NTMs is to achieve lower reconstruction error, which usually means topics are less coherent and diverse, as observed and analysed in Srivastava & Sutton (2017); Burkhardt & Kramer (2019). iii) It is well-known that topic models degrade their performance severely on short documents such as tweets, news headlines and product reviews, as each individual document contains insufficient word co-occurrence information. This issue can be exacerbated for NTMs because of the use of the encoder and decoder networks, which are usually more vulnerable to data sparsity.\nTo address the above shortcomings for NTMs, we in this paper propose a neural topic model, which is built upon a novel Optimal Transport (OT) framework derived from a new view of topic modelling. For a document, we consider its content to be encoded by two representations: the observed representation, x, a distribution over all the words in the vocabulary and the latent representation, z, a distribution over all the topics. x can be obtained by normalising a document’s word count vector while z needs to be learned by a model. For a document collection, the vocabulary size (i.e., the number of unique words) can be very large but one individual document usually consists of a tiny subset of the words. Therefore, x is a sparse and low-level representation of the semantic information of a document. As the number of topics is much smaller than the vocabulary size, z is the relatively dense and high-level representation of the same content. Therefore, the learning of a topic model can be viewed as the process of learning the distribution z to be as close to the distribution x as possible. 
Accordingly, it is crucial to investigate how to measure the distance between two distributions with different supports (i.e., words to x and topics to z). As optimal transport is a powerful tool for measuring the distance travelled in transporting the mass in one distribution to match another given a specific cost function, and recent development on computational OT (e.g., in Cuturi (2013); Frogner et al. (2015); Seguy et al. (2018); Peyré et al. (2019)) has shown the promising feasibility to efficiently compute OT for large-scale problems, it is natural for us to develop a new NTM based on the minimisation of OT.\nSpecifically, our model leverages an encoder that outputs topic distribution z of a document by taking its word count vector as input like standard NTMs, but we minimise the OT distance between x and z, which are two discrete distributions on the support of words and topics, respectively. Notably, the cost function of the OT distance specifies the weights between topics and words, which we define as the distance in an embedding space. To represent their semantics, all the topics and words are embedded in this space. By leveraging the pretrained word embeddings, the cost function is then a function of topic embeddings, which will be learned jointly with the encoder. With the advanced properties of OT on modelling geometric structures on spaces of probability distributions, our model is able to achieve a better balance between obtaining good document representation and generating coherent/diverse topics. In addition, our model eases the burden of designing complex sampling schemes for the posterior of NTMs. More interestingly, our model is a natural way of incorporating pretrained word embeddings, which have been demonstrated to alleviate the issue of insufficient word co-occurrence information in short texts (Zhao et al., 2017; Dieng et al., 2020). 
With extensive experiments, our model is shown to enjoy state-of-the-art performance in terms of both topic quality and document representations for both regular and short texts." }, { "heading": "2 BACKGROUND", "text": "In this section, we recap the essential background of neural topic models and optimal transport." }, { "heading": "2.1 NEURAL TOPIC MODELS", "text": "Most existing NTMs can be viewed as extensions of the framework of VAEs, where the latent variables can be interpreted as topics. Suppose the document collection to be analysed has V unique words (i.e., vocabulary size). Each document consists of a word count vector denoted as x ∈ N^V and a latent distribution over K topics: z ∈ R^K. An NTM assumes that z for a document is generated from a prior distribution p(z) and x is generated by the conditional distribution pφ(x|z) that is modelled by a decoder φ. The model's goal is to infer the topic distribution given the word counts, i.e., to calculate the posterior p(z|x), which is approximated by the variational distribution qθ(z|x) modelled by an encoder θ. Similar to VAEs, the training objective of NTMs is the maximisation of the Evidence Lower BOund (ELBO):
max_{θ,φ} ( E_{qθ(z|x)}[log pφ(x|z)] − KL[qθ(z|x) ‖ p(z)] ) . (1)
The first term above is the expected log-likelihood or reconstruction error. As x is a count-valued vector, it is usually assumed to be generated from the multinomial distribution: pφ(x|z) := Multi(φ(z)), where φ(z) is a probability vector output by the decoder. Therefore, the expected log-likelihood is proportional to x^T log φ(z). The second term is the Kullback–Leibler (KL) divergence that regularises qθ(z|x) to be close to its prior p(z). To interpret topics with words, φ(z) is usually constructed by a single-layer network (Srivastava & Sutton, 2017): φ(z) := softmax(Wz), where W ∈ R^{V×K} indicates the weights between topics and words.
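The multinomial decoder and its expected log-likelihood can be sketched numerically. The following toy numpy example (hypothetical sizes, random weights) computes x^T log φ(z) for φ(z) = softmax(Wz):

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def expected_log_likelihood(x, W, z):
    """x^T log(phi(z)) with phi(z) = softmax(W z): the multinomial term of the ELBO."""
    phi = softmax(W @ z)                     # probability vector over the V words
    return float(x @ np.log(phi))

rng = np.random.default_rng(0)
V, K = 10, 3                                 # toy vocabulary/topic sizes (hypothetical)
W = rng.normal(size=(V, K))                  # topic-word decoder weights
z = softmax(rng.normal(size=K))              # a document's topic proportions
x = np.arange(1.0, V + 1.0)                  # toy word counts
ll = expected_log_likelihood(x, W, z)        # negative, since log(phi) < 0
```

Maximising this quantity over W and the encoder that produces z is the reconstruction part of the NTM objective.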
Different NTMs may vary in the prior and the posterior of z; for example, the model in Miao et al. (2017) applies Gaussian distributions for them, and Srivastava & Sutton (2017); Burkhardt & Kramer (2019) show that Dirichlet is a better choice. However, reparameterisation cannot be directly applied to a Dirichlet, so various approximations and sampling schemes have been proposed." }, { "heading": "2.2 OPTIMAL TRANSPORT", "text": "OT distances have been widely used for the comparison of probabilities. Here we limit our discussion to OT for discrete distributions, although it applies to continuous distributions as well. Specifically, let us consider two probability vectors r ∈ ∆^{D_r} and c ∈ ∆^{D_c}, where ∆^D denotes the (D − 1)-simplex. The OT distance1 between the two probability vectors can be defined as:
d_M(r, c) := min_{P ∈ U(r,c)} ⟨P, M⟩ , (2)
where ⟨·, ·⟩ denotes the Frobenius dot-product; M ∈ R^{D_r×D_c}_{≥0} is the cost matrix/function of the transport; P ∈ R^{D_r×D_c}_{>0} is the transport matrix/plan; U(r, c) denotes the transport polytope of r and c, which is the polyhedral set of D_r × D_c matrices: U(r, c) := {P ∈ R^{D_r×D_c}_{>0} | P 1_{D_c} = r, P^T 1_{D_r} = c}; and 1_D is the D-dimensional vector of ones. Intuitively, if we consider two discrete random variables X ∼ Categorical(r) and Y ∼ Categorical(c), the transport matrix P is a joint probability of (X, Y), i.e., p(X = i, Y = j) = p_{ij}, and U(r, c) is the set of all such joint probabilities. The above optimal transport distance can be computed by finding the optimal transport matrix P∗. It is also noteworthy that the Wasserstein distance can be viewed as a specific case of the OT distances.
As directly optimising Eq. (2) can be time-consuming for large-scale problems, a regularised optimal transport distance with an entropic constraint is introduced in Cuturi (2013), named the Sinkhorn distance:
d_{M,α}(r, c) := min_{P ∈ U_α(r,c)} ⟨P, M⟩ , (3)
where U_α(r, c) := {P ∈ U(r, c) | h(P) ≥ h(r) + h(c) − α}, h(·) is the entropy function, and α ∈ [0, ∞).
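As a tiny numerical illustration of the transport polytope in Eq. (2): the independent coupling rc^T always satisfies the marginal constraints, so its cost ⟨P, M⟩ is an upper bound on d_M(r, c). The values below are arbitrary toy numbers, not from the paper:

```python
import numpy as np

# Two toy marginals: r over D_r = 3 "words", c over D_c = 2 "topics".
r = np.array([0.5, 0.3, 0.2])
c = np.array([0.6, 0.4])
M = np.array([[0.1, 1.0],
              [0.9, 0.2],
              [0.5, 0.5]])               # hypothetical cost matrix

# The independent coupling r c^T always lies in the transport polytope U(r, c):
P = np.outer(r, c)
row_ok = np.allclose(P.sum(axis=1), r)   # P 1_{D_c} = r
col_ok = np.allclose(P.sum(axis=0), c)   # P^T 1_{D_r} = c
cost = float((P * M).sum())              # <P, M>; an upper bound on d_M(r, c)
```

The optimal plan P∗ redistributes mass within this polytope to drive the cost below that of the independent coupling.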
To compute the Sinkhorn distance, a Lagrange multiplier is introduced for the entropy constraint to minimise Eq. (3), resulting in the Sinkhorn algorithm, widely used for discrete OT problems." }, { "heading": "3 PROPOSED MODEL", "text": "Now we introduce the details of our proposed model. Specifically, we represent each document as a distribution over V words, x̃ ∈ ∆^V, obtained by normalising x: x̃ := x/S, where S := Σ_{v=1}^{V} x_v is the length of the document. Also, each document is associated with a distribution over K topics: z ∈ ∆^K, each entry of which indicates the proportion of one topic in this document. Like other NTMs, we leverage an encoder to generate z from x̃: z = softmax(θ(x̃)). Notably, θ is implemented with a neural network with dropout layers for adding randomness. As x̃ and z are two distributions with different supports for the same document, to learn the encoder, we propose to minimise the following OT distance to push z towards x̃:
min_θ d_M(x̃, z) . (4)
Here M ∈ R^{V×K}_{>0} is the cost matrix, where m_{vk} indicates the semantic distance between topic k and word v. Therefore, each column of M captures the importance of the words in the corresponding topic. In addition to the encoder, M is a variable that needs to be learned in our model. However, learning the cost function is reported to be a non-trivial task (Cuturi & Avis, 2014; Sun et al., 2020). To address this problem, we specify the following construction of M:
m_{vk} = 1 − cos(e_v, g_k) , (5)
where cos(·, ·) ∈ [−1, 1] is the cosine similarity; g_k ∈ R^L and e_v ∈ R^L are the embeddings of topic k and word v, respectively.
1To be precise, an OT distance becomes a "distance metric" in mathematics only if the cost function M is induced from a distance metric. We call it "OT distance" to assist the readability of our paper.
The embeddings are expected to capture the semantic information of the topics and words.
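Eq. (5) builds the cost matrix directly from embeddings. A minimal numpy sketch, with toy dimensions and random vectors standing in for the pretrained word embeddings:

```python
import numpy as np

def cost_matrix(E, G):
    """Eq. (5): m_vk = 1 - cos(e_v, g_k), with word embeddings E (L x V)
    and topic embeddings G (L x K) stored column-wise."""
    En = E / np.linalg.norm(E, axis=0, keepdims=True)   # unit-norm word vectors
    Gn = G / np.linalg.norm(G, axis=0, keepdims=True)   # unit-norm topic vectors
    return 1.0 - En.T @ Gn                              # V x K; entries lie in [0, 2]

rng = np.random.default_rng(1)
L, V, K = 8, 12, 4                      # toy dimensions (hypothetical)
E = rng.normal(size=(L, V))             # stand-in for pretrained word embeddings
G = rng.normal(size=(L, K))             # topic embeddings, learned in NSTM
M = cost_matrix(E, G)
```

Because cosine similarity is bounded in [−1, 1], the resulting costs are automatically bounded in [0, 2], which is the property the model relies on.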
Instead of learning the word embeddings, we propose to set them to pretrained word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). This not only reduces the parameter space, making the learning of M more stable, but also enables us to leverage the rich semantic information in pretrained word embeddings, which is beneficial for short documents. We use the cosine distance rather than other metrics for two reasons: it is the most commonly-used distance metric for word embeddings, and the cost matrix M must be positive, so the similarity metric needs to be upper-bounded. As cosine similarity falls in the range of [−1, 1], we have M ∈ [0, 2]^{V×K}. For easy presentation, we denote G ∈ R^{L×K} and E ∈ R^{L×V} as the collections of the embeddings of all topics and words, respectively. Now we can rewrite Eq. (4) as:
min_{θ,G} d_M(x̃, z) . (6)
Although the mechanisms are totally different, both M in our model and W in NTMs (see Section 2.1) capture the relations between topics and words (M is a distance while W is a similarity). Here M is the cost function of our OT loss while W contains the weights of the decoder of NTMs. Different from other NTMs based on VAEs, our model does not explicitly have a decoder that projects z back to the word space to reconstruct x, as the OT distance allows us to compute the distance between z and x̃ directly. To further understand our model, we can actually project z to the space of x by "virtually" defining a decoder: φ(z) := softmax((2 − M)z). With the notation of φ(z), we show the following theorem to reveal the relationships between other NTMs and ours, whose proof is given in Section A of the appendix.
Theorem 1. When V ≥ 8 and M ∈ [0, 2]^{V×K}, we have:
d_M(x̃, z) ≤ −x̃^T log φ(z). (7)
With Theorem 1, we have:
Lemma 1. Maximising the expected multinomial log-likelihood of NTMs is equivalent to minimising an upper bound of the OT distance in our model.
Frogner et al.
(2015) propose to minimise the OT distance between the predicted and true label distributions for classification tasks. They report that combining the OT loss with the conventional cross-entropy loss gives better performance than using either of them alone. As the expected multinomial log-likelihood is easier to learn and can help guide the optimisation of the OT distance, empirically inspired by Frogner et al. (2015) and theoretically motivated by Theorem 1, we propose the following joint loss for our model that combines the OT distance with the expected log-likelihood:
max_{θ,G} ( ε x̃^T log φ(z) − d_M(x̃, z) ) . (8)
If we compare the above loss with the ELBO of Eq. (1), it can be observed that, similar to the KL divergence of NTMs, our OT distance can be viewed as a regularisation term on the expected log-likelihood (x̃^T log φ(z) = (1/S) x^T log φ(z)). Compared with other NTMs, our model eases the burden of developing the prior/posterior distributions and the associated sampling schemes. Moreover, with OT's ability to better model geometric structures, our model is able to achieve better performance in terms of both document representation and topic quality. In addition, the cost function of the OT distance provides a natural way of incorporating pretrained word embeddings, which boosts our model's performance on short documents.
Finally, we replace the OT distance with the Sinkhorn distance (Cuturi, 2013), which leads to the final loss function:
max_{θ,G} ( ε x̃^T log φ(z) − d_{M,α}(x̃, z) ) , (9)
where z = softmax(θ(x̃)); M is parameterised by G; φ(z) := softmax((2 − M)z); x and x̃ are the word count vector and its normalisation, respectively; ε is the hyperparameter that controls the weight of the expected likelihood; α is the hyperparameter for the Sinkhorn distance.
To compute the Sinkhorn distance, we leverage the Sinkhorn algorithm (Cuturi, 2013).
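The pieces of the objective in Eq. (9) can be sketched in numpy under toy dimensions: the "virtual" decoder φ(z) = softmax((2 − M)z), the weighted likelihood term, and a Sinkhorn distance between x̃ and z. All weights here are random stand-ins, the likelihood weight is illustrative, and the kernel parameterisation exp(−M/α) follows Algorithm 1:

```python
import numpy as np

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

def sinkhorn_distance(r, c, M, alpha=20.0, n_iter=300):
    """Entropic OT between r (over words) and c (over topics) with kernel exp(-M/alpha)."""
    H = np.exp(-M / alpha)
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iter):
        v = c / (H.T @ u)                      # match the topic-side marginals
        u = r / (H @ v)                        # match the word-side marginals
    P = u[:, None] * H * v[None, :]            # recovered transport plan
    return float((P * M).sum())

rng = np.random.default_rng(4)
V, K = 12, 4
eps_w = 0.07                                   # illustrative weight for the likelihood term
M = rng.uniform(0.0, 2.0, size=(V, K))         # stand-in for the cosine-based cost matrix
x = rng.integers(1, 5, size=V).astype(float)   # toy word counts
x_tilde = x / x.sum()
z = softmax(rng.normal(size=K))                # stand-in for the encoder output
phi = softmax((2.0 - M) @ z)                   # the "virtual" decoder
loss = -(eps_w * float(x_tilde @ np.log(phi)) - sinkhorn_distance(x_tilde, z, M))
```

In the actual model, gradients of this loss flow into both the encoder parameters (through z) and the topic embeddings (through M).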
Accordingly, we name our model Neural Sinkhorn Topic Model (NSTM), whose training algorithm is shown in Algorithm 1.
input: input documents, pretrained word embeddings E, topic number K, ε, α
output: θ, G
Randomly initialise θ and G;
while not converged do
  Sample a batch of B input documents X;
  Column-wisely normalise X to get X̃;
  Compute M with G and E by Eq. (5);
  Compute Z = softmax(θ(X̃));
  Compute the first term of Eq. (9);
  # Sinkhorn iterations #
  Ψ1 = ones(K, B)/K; Ψ2 = ones(V, B)/V; H = e^(−M/α);
  while Ψ1 changes or any other relevant stopping criterion do
    Ψ2 = X̃ ⊙ 1/(HΨ1);
    Ψ1 = Z ⊙ 1/(H^T Ψ2);
  end
  Compute the second term of Eq. (9): d_{M,α} = sum(Ψ2^T (H ⊙ M) Ψ1);
  Compute the gradients of Eq. (9) in terms of θ and G;
  Update θ and G with the gradients;
end
Algorithm 1: Training algorithm for NSTM. X ∈ N^{V×B} and Z ∈ R^{K×B}_{>0} consist of the word count vectors and topic distributions of the documents in the batch, respectively; ⊙ is the element-wise multiplication.
It is noteworthy that the Sinkhorn iterations can be implemented with the tensors of TensorFlow/PyTorch (Patrini et al., 2020). Therefore, the loss of Eq. (9) is differentiable in terms of θ and G, which can be optimised jointly in one training iteration. After training the model, we can infer z by conducting a forward pass of the encoder θ with the input x̃. In practice, x can be normalised by other methods, e.g., softmax, or one can use TF-IDF as the input data of the encoder.
In addition, word embeddings have been recently widely-used as complementary metadata for topic models, especially for modelling short texts. For Bayesian probabilistic topic models, word embeddings are usually incorporated into the generative process of word counts, such as in Petterson et al. (2010); Nguyen et al. (2015); Li et al. (2016); Zhao et al. (2017). Due to the flexibility of NTMs, word embeddings can be incorporated as part of the encoder input, such as in Card et al. (2018) or they can be used in the generative process of words such as in Dieng et al. (2020). Our novelty with NSTM is that word embeddings are naturally incorporated in the cost function of the OT distance.\nTo our knowledge, the works that connect topic modelling with OT are still very limited. In Yurochkin et al. (2019) authors proposed to compare two documents’ similarity with the OT distance between their topic distributions extracted from a pretrained LDA, but the aim is not to learn a topic model. Another recent work related to ours is Wasserstein LDA (WLDA) (Nan et al., 2019), which adapts the framework of Wasserstein AutoEncoders (WAEs) (Tolstikhin et al., 2018). The key difference from ours is that WLDA minimises the Wasserstein distance between the fake data generated with topics and real data, which can be viewed as an OT variant to VAE-NTMs. However, our NSTM directly minimises the OT distance between z and x, where there are no explicit generative processes from topics to data. Other two related works are Distilled Wasserstein Learning (DWL) (Xu et al., 2018) and Optimal Transport LDA (OTLDA) (Huynh et al., 2020), which adapt the idea of Wasserstein barycentres and Wasserstein Dictionary Learning (Rolet et al., 2016; Schmitz et al., 2018). There are fundamental differences of ours from DWL and OTLDA in terms of the relations between\ndocuments, topics, and words. 
Specifically, in DWL and OTLDA, documents and topics locate in one space of words (i.e., both are distributions over words) and x can be approximated with the weighted Wasserstein barycentres of all the topic-word distributions, where the weights can be interpreted as the topic proportions of the document, i.e., z. However, in NSTM, a document locates in both the topic space and the word space and topics and words are embedded in the embedding space. These differences lead to different views of topic modelling and different frameworks as well. Moreover, DWL mainly focuses on learning word embeddings and representations for International Classification of Diseases (ICD) codes, while NSTM aims to be a general method of topic modelling. Finally, DWL and OTLDA are not neural network models while ours is." }, { "heading": "5 EXPERIMENTS", "text": "We conduct extensive experiments on several benchmark text datasets to evaluate the performance of NSTM against the state-of-the-art neural topic models." }, { "heading": "5.1 EXPERIMENTAL SETTINGS", "text": "Datasets: Our experiments are conducted on five widely-used benchmark text datasets, varying in different sizes, including 20 News Groups (20NG)2, Web Snippets (WS) (Phan et al., 2008), Tag My News (TMN) (Vitale et al., 2012)3, Reuters extracted from the Reuters-21578 dataset4, Reuters Corpus Volume 2 (RCV2) (Lewis et al., 2004)5. The statistics of the datasets in the experiments are shown in Table 1. In particular, WS and TMN are short documents; 20NG, WS, and TMN are associated with document labels6.\nEvaluation metrics: We report Topic Coherence (TC) and Topic Diversity (TD) as performance metrics for topic quality. TC measures the semantic coherence in the most significant words (top words) of a topic, given a reference corpus. 
We apply the widely-used Normalized Pointwise Mutual Information (NPMI) (Aletras & Stevenson, 2013; Lau et al., 2014) computed over the top 10 words of each topic, by the Palmetto package (Röder et al., 2015)7. As not all the discovered topics are interpretable (Yang et al., 2015; Zhao et al., 2018), to comprehensively evaluate the topic quality, we choose the topics with the highest NPMI and report the average score over those selected topics. We vary the proportion of the selected topics from 10% to 100%, where 10% indicates the top 10% topics with the highest NPMI are selected and 100% means all the topics are used. TD, as its name implies, measures how diverse the discovered topics are. We define topic diversity to be the percentage of unique words in the top 25 words (Dieng et al., 2020) of the selected topics, similar in TC. TD close to 0 indicates redundant topics; TD close to 1 indicates more varied topics. As doc-topic distributions can be viewed as unsupervised document representations, to evaluate the quality of such representations, we perform document clustering tasks and report the purity and Normalized Mutual Information (NMI) (Manning et al., 2008) on 20NG, WS, and TMN, where the document labels are considered. With the default training/testing splits of the datasets, we train a model on the training documents and infer the topic distributions z on the testing documents. Given z, we\n2http://qwone.com/~jason/20Newsgroups/ 3http://acube.di.unipi.it/tmn-dataset/ 4https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html 5https://trec.nist.gov/data/reuters/reuters.html 6We do not consider the labels of Reuters and RCV2 as there are multiple labels for one document. 
adopt two strategies to perform the document clustering task: i) Following Nguyen et al. (2015), we use the most significant topic of a testing document as its clustering assignment to compute purity and NMI (denoted by top-Purity and top-NMI); ii) We apply the KMeans algorithm on z (over all the topics) of the testing documents and report the purity and NMI of the KMeans clusters (denoted by km-Purity and km-NMI). For the first strategy, the number of clusters equals the number of topics, while for the second one, we vary the number of clusters of KMeans in the range of {20, 40, 60, 80, 100}. Note that our goal is not to achieve state-of-the-art document clustering results but to compare the document representations of topic models. For all the metrics, higher values indicate better performance.
[Figure 2: km-Purity (first row) and km-NMI (second row) scores on (a) 20NG, (b) WS, and (c) TMN for NSTM, ProdLDA, DVAE, ETM, WLDA, and LDA; in each subfigure, the horizontal axis indicates the number of KMeans clusters.]
7http://palmetto.aksw.org
Baseline methods and their settings: We compare with the state-of-the-art NTMs, including: LDA with Products of Experts (ProdLDA) (Srivastava & Sutton, 2017), which replaces the mixture model in LDA with a product of experts and uses AVI for training; Dirichlet VAE (DVAE) (Burkhardt & Kramer, 2019), which is a neural topic model imposing the Dirichlet prior/posterior on z.
We use the variant of DVAE with rejection sampling VI, which is reported to perform the best; Embedding Topic Model (ETM) (Dieng et al., 2020), which is a topic model that incorporates word embeddings and is learned by AVI; Wasserstein LDA (WLDA) (Nan et al., 2019), which is a WAE-based topic model. For all the above baselines, we use their official code with the best reported settings.
Settings for NSTM: NSTM is implemented in TensorFlow. For the encoder θ, to keep simplicity, we use a fully-connected neural network with one hidden layer of 200 units and ReLU as the activation function, followed by a dropout layer (rate=0.75) and a batch norm layer, the same as the settings of Burkhardt & Kramer (2019). For the Sinkhorn algorithm, following Cuturi (2013), the maximum number of iterations is 1,000 and the stop tolerance is 0.005. In all the experiments, we fix α = 20 and ε = 0.07. We further vary the two hyperparameters to study our model's sensitivity to them in Figure B.1 of the appendix. Finetuning the parameters specifically for a dataset may give better results. The optimisation of NSTM is done by Adam (Kingma & Ba, 2015) with learning rate 0.001 and batch size 200 for at most 50 iterations. For NSTM and ETM, the 50-dimensional (i.e., L = 50, see Eq. (5)) GloVe word embeddings (Pennington et al., 2014) pre-trained on Wikipedia are used. We use the number of topics K = 100 in most cases and set K = 500 on RCV2 to test our model's scalability." }, { "heading": "5.2 RESULTS", "text": "Quantitative results: We run all the models in comparison five times with different random seeds and report the mean and standard deviation (as error bars). We show the results of TC and TD in Figure 1, top-Purity/NMI in Table 2, and km-Purity/NMI in Figure 2, respectively. We have the following remarks about the results: i) Our proposed NSTM outperforms the others significantly in terms of topic coherence while obtaining high topic diversity on all the datasets.
Although others may have higher TD than ours in one dataset or two, they usually cannot achieve a high TC at the same time. ii) In terms of document clustering, our model performs the best in general with a significant gap over other NTMs, except the case where ours is the second for the KMeans clustering on 20NG. This demonstrates that NSTM is not only able to discover interpretable topics with better quality but also learn good document representations for clustering. It also shows that with the OT distance, our model can achieve a better balance among the comprehensive metrics of topic modelling. iii) For all the evaluation metrics, our model is consistently the best on the short documents including WS and TMN. This demonstrates the effectiveness of our way of incorporating pretrained word embeddings, which shows our model’s potential on short text topic modelling. Although ETM also uses pretrained word embeddings, its performance is incomparable to ours.\nScalability: NSTM has comparable scalability with other NTMs and is able to scale on large datasets with a large number of topics. To demonstrate the scalability, we run NSTM, DVAE, ProdLDA (as these three are implemented in TensorFlow, while ETM is in PyTorch, and WLDA is in MXNet) on RCV2 with K = 500. The three models run on a Titan RTX GPU with batch size 1,000. Figure 3 shows the training losses, which demonstrate that NSTM has similar learning speed to ProdLDA, better than DVAE. The TC and TD scores of this experiment are shown in Section C of the appendix, where it can be observed that with 500 topics, our model shows similar performance advantage over others.\nQualitative analysis: As topics in our model are embedded in the same space as pretrained word embeddings, they share similar geometric properties. Figure 4 shows a qualitative analysis. 
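The simpler metrics reported above can be made concrete. The sketch below implements topic diversity (unique-word ratio over the top words of the topics) and clustering purity (per-cluster majority-label accuracy) on toy inputs; NPMI and NMI are omitted since they are computed with external packages in the experiments:

```python
from collections import Counter

def topic_diversity(topics, topn=25):
    """TD: fraction of unique words among the top-`topn` words of the topics."""
    top_words = [w for topic in topics for w in topic[:topn]]
    return len(set(top_words)) / len(top_words)

def purity(clusters, labels):
    """Purity: each cluster votes with its majority label; report overall accuracy."""
    correct = 0
    for c in set(clusters):
        members = [labels[i] for i, ci in enumerate(clusters) if ci == c]
        correct += Counter(members).most_common(1)[0][1]
    return correct / len(labels)

redundant = [["game", "team", "play"], ["game", "team", "win"]]
varied = [["game", "team", "play"], ["stock", "market", "price"]]
td_low = topic_diversity(redundant, topn=3)    # 4 unique words out of 6
td_high = topic_diversity(varied, topn=3)      # all 6 words unique

clusters = [0, 0, 0, 1, 1, 2]                  # e.g., most-significant-topic assignments
labels = ["a", "a", "b", "b", "b", "a"]
p = purity(clusters, labels)                   # (2 + 2 + 1) / 6
```

TD near 1 means the selected topics rarely share top words, and purity near 1 means each cluster is dominated by a single document label.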
For the t-SNE (Maaten & Hinton, 2008) visualisation, we select the top 50 topics with the highest NPMI learned by a run of NSTM on RCV2 with K = 100 and feed their (50-dimensional) embeddings into the t-SNE method. We also show the top five words and the topic number (1 to 50) of each topic. We can observe that although the words of the topics are different, the semantic similarity between the topics captured by the embeddings is highly interpretable. In addition, we take the GloVe embeddings of the polysemantic word "apple" and find the closest 10 related words among the 0.4 million words of the GloVe vocabulary according to their cosine similarity. It can be seen that by default "apple" refers more to the Apple company in GloVe. Either adding the embeddings of topic 1, which describes the concept of "food", or subtracting the embeddings of topic 46, which describes the concept of "tech companies", reveals the fruit sense of the word "apple". More qualitative analysis on topics is provided in Section E of the appendix.
8The Sinkhorn algorithm usually reaches the stop tolerance in fewer than 50 iterations in NSTM
9https://nlp.stanford.edu/projects/glove/" }, { "heading": "6 CONCLUSION", "text": "In this paper, we presented a novel neural topic model based on optimal transport, where a document is endowed with two representations: the word distribution, x̃, and the topic distribution, z. An OT distance is leveraged to measure the semantic distance between the two distributions, whose cost function is defined according to the cosine similarities between topics and words in the embedding space. z is obtained from an encoder that takes x̃ as input and is trained by minimising the OT distance between z and x̃. With pretrained word embeddings, topic embeddings are learned by the same minimisation of the OT distance in terms of the cost function. Our model has shown appealing properties that overcome several shortcomings of existing neural topic models.
Extensive experiments have been conducted, showing that our model achieves state-of-the-art performance on both discovering quality topics and deriving useful document representations for both regular and short texts." }, { "heading": "ACKNOWLEDGMENTS", "text": "Trung Le was supported by AOARD grant FA2386-19-1-4040. Wray Buntine was supported by the Australian Research Council under award DP190100017." }, { "heading": "A PROOF OF THEOREM 1", "text": "Proof. Before showing the proof, we introduce the following notation: we denote k ∈ {1, · · · , K} and v ∈ {1, · · · , V } as the indexes; the sth (s ∈ {1, · · · , S}) token of the document picks a word in the vocabulary, denoted by w_s ∈ {1, · · · , V }; the normaliser in the softmax function of φ(z) is denoted as φ̂, so:
φ̂ = Σ_{v=1}^{V} exp(Σ_{k=1}^{K} z_k (2 − m_{vk})) = e^2 Σ_{v=1}^{V} exp(−Σ_{k=1}^{K} z_k m_{vk}) .
With these notations, we first have the following equation for the multinomial log-likelihood:
x̃^T log φ(z) = (1/S) Σ_{s=1}^{S} log φ(z)_{w_s}
= (1/S) Σ_{s=1}^{S} (Σ_{k=1}^{K} z_k (2 − m_{w_s k}) − log φ̂)
= 2 − log φ̂ − (1/S) Σ_{s=1}^{S} Σ_{k=1}^{K} z_k m_{w_s k} . (A.1)
Recall that in Eq. (2) of the main paper, the transport matrix P is one of the joint distributions of x̃ and z. We introduce the conditional distribution of z given x̃ as Q, where q(v, k) indicates the probability of assigning a token of word v to topic k.
Given that P satisfies P ∈ U(x̃, z) and p_{vk} = x̃_v q(v, k), Q must satisfy U′(x̃, z) := {Q ∈ R^{V×K}_{>0} | Σ_{v=1}^{V} x̃_v q(v, k) = z_k}. With Q, we can rewrite the OT distance as:
d_M(x̃, z) = min_{Q ∈ U′(x̃,z)} Σ_{v=1}^{V} Σ_{k=1}^{K} x̃_v q(v, k) m_{vk}
= (1/S) min_{Q ∈ U′(x̃,z)} Σ_{k=1}^{K} Σ_{s=1}^{S} q(w_s, k) m_{w_s k} .
If we let q(v, k) = z_k, meaning that we assign all the tokens of a document to the topics according to the document's doc-topic distribution, then Q satisfies U′(x̃, z), which leads to:
d_M(x̃, z) ≤ (1/S) Σ_{k=1}^{K} Σ_{s=1}^{S} z_k m_{w_s k} . (A.2)
Together with Eq.
(A.1), the definition of $\hat{\phi}$, and the fact that $m_{vk} \le 2$, we have:

$$\tilde{x}^{\top} \log \phi(z) = 2 - \log \hat{\phi} - \frac{1}{S} \sum_{s=1}^{S} \sum_{k=1}^{K} z_k m_{w_s k} \le -\log \left( \sum_{v=1}^{V} e^{-\sum_{k=1}^{K} z_k m_{vk}} \right) - d_M(\tilde{x}, z) \le -(\log V - 2) - d_M(\tilde{x}, z) \le -d_M(\tilde{x}, z), \quad (A.3)$$

where the last inequality holds if $\log V > 2$, i.e., $V \ge 8$." }, { "heading": "B PARAMETER SENSITIVITY", "text": "In the previous experiments, we fix the values of and α, which control the weight of the multinomial likelihood in Eq. (9) and the weight of the entropic regularisation in the Sinkhorn distance, respectively. Here we report the performance of NSTM on 20NG (blue lines) under different settings of the two hyperparameters in Figure B.1. Moreover, we propose two variants of NSTM. The first one removes the Sinkhorn distance from the training loss of Eq. (9) (i.e., only the expected log-likelihood term is left); its performance is shown as the red lines. The second variant removes the expected log-likelihood term from the training loss of Eq. (9) (i.e., only the Sinkhorn distance is left); its performance is shown as the yellow lines." }, { "heading": "C TC AND TD ON RCV2 WITH 500 TOPICS", "text": "The results are shown in Figure C.1." }, { "heading": "D AVERAGE SINKHORN DISTANCE WITH VARIED NUMBER OF TOPICS", "text": "In Figure D.1, we show the average Sinkhorn distance with a varied number of topics on 20NG, WS, TMN, and Reuters. It can be observed that when K increases, there is a clear trend that $d_M(\tilde{x}, z)$ decreases.
[Figure D.1: average Sinkhorn distance for K ∈ {5, 25, 50, 75, 100, 125, 150, 175, 200} on (a) 20NG (y-axis 0.66–0.78), (b) WS (0.35–0.6), (c) TMN (0.35–0.6), and (d) Reuters (0.5–0.62).]" }, { "heading": "E MORE TOPIC EMBEDDING VISUALISATIONS", "text": "In Figures E.1, E.2, E.3, and E.4, we show the visualisations of 20NG, WS, TMN, and Reuters, respectively. 
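As a sanity check on the proof above, the chain (A.1)–(A.3) can be verified numerically. The pure-Python sketch below (all names are ours, not from the paper's code) samples a random doc-topic distribution z on the simplex, a cost matrix with entries in [0, 2], and a bag of S tokens; it then checks that $\tilde{x}^{\top} \log \phi(z) \le -U$, where $U = \frac{1}{S}\sum_s \sum_k z_k m_{w_s k}$ is the upper bound on $d_M(\tilde{x}, z)$ from Eq. (A.2), so the Theorem 1 bound follows for V ≥ 8:

```python
import math
import random

def theorem1_check(V=10, K=4, S=6, seed=0):
    rng = random.Random(seed)
    # random doc-topic distribution z on the K-simplex
    z = [rng.random() for _ in range(K)]
    total = sum(z)
    z = [x / total for x in z]
    # cost matrix with m_vk in [0, 2] (e.g., 1 - cosine similarity, rescaled)
    m = [[2.0 * rng.random() for _ in range(K)] for _ in range(V)]
    # a bag of S tokens defines the empirical word distribution x~
    words = [rng.randrange(V) for _ in range(S)]
    # phi(z): softmax over words with logits sum_k z_k (2 - m_vk)
    logits = [sum(z[k] * (2.0 - m[v][k]) for k in range(K)) for v in range(V)]
    log_norm = math.log(sum(math.exp(l) for l in logits))   # log of phi-hat
    lhs = sum(logits[w] - log_norm for w in words) / S      # x~^T log phi(z)
    # U upper-bounds d_M(x~, z): take q(v, k) = z_k as in the proof of (A.2)
    U = sum(z[k] * m[w][k] for w in words for k in range(K)) / S
    return lhs, U

lhs, U = theorem1_check()
print(lhs <= -U)  # True whenever V >= 8
```

Since every logit is non-negative, the normaliser is at least V, so $2 - \log \hat{\phi} \le 2 - \log V < 0$ for V ≥ 8, which is exactly why the check passes for any random draw.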
We note that the topic embeddings in general present much better clustering structures of topics in the semantic space. Such topic correlations can only be detected by specialised topic models (e.g., in Lafferty & Blei (2006); Blei et al. (2010); Zhou et al. (2016)). Instead, the correlations of topics in our model are implicitly captured by the semantic embeddings.
[Figure E.1: t-SNE visualisation of topic embeddings on 20NG, showing each topic's number and top five words.]
[Figure E.2: t-SNE visualisation of topic embeddings on WS.]
[Figure E.3: t-SNE visualisation of topic embeddings on TMN.]
[Figure E.4: t-SNE visualisation of topic embeddings on Reuters.]" } ]
2021
null
SP:1ea373170ff80da65268e36e30370f2116fa4ed3
[ "This paper proposes a new style transformer with external memory, which is updated and used through an attention mechanism. They also propose a new algorithm to train the memory, Memory Replay Back-Propagation (MRBP). The memory consists of key-value pair data and is recurrently updated after the segment encoding. Through this memory, it can attend the past knowledge without the limitation of the maximum temporal range. The MRBP algorithm trains the memory through the local back-propagation of loss to reduce memory overhead.", "The paper presents a new model for the task of language modeling especially suited for longer sequences. This new model dubbed as Memformer consists of Transformer encoder-decoder and a memory module to store the past information from the encoder outputs. The encoder bidirectionally attends to the immediate previous sequence/segment information and to the memory module, which is designed to capture useful information from the past history of the full sequence. The idea is that by bidirectionally attending simultaneously to the previous input segment and to a memory module, the decoder should be able to improve its generation capabilities." ]
Transformer models have obtained remarkable accomplishments in various NLP tasks. However, these models have efficiency issues on long sequences, as the complexity of their self-attention module scales quadratically with the sequence length. To remedy the limitation, we present Memformer, a novel language model that utilizes a single unified memory to encode and retrieve past information. It includes a new optimization scheme, Memory Replay Back-Propagation, which promotes long-range back-propagation through time with a significantly reduced memory requirement. Memformer achieves O(n) time complexity and O(1) space complexity in processing long sequences, meaning that the model can handle an infinite length sequence during inference. Our model is also compatible with other self-supervised tasks to further improve the performance on language modeling. Experimental results show that Memformer outperforms the previous long-range sequence models on WikiText-103, including Transformer-XL and Compressive Transformer.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "CoRR, abs/2004.05150,", "year": 2020 }, { "authors": [ "Tianqi Chen", "Bing Xu", "Chiyuan Zhang", "Carlos Guestrin" ], "title": "Training deep nets with sublinear memory", "venue": "cost. CoRR,", "year": 2016 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "CoRR, abs/1904.10509,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime G. Carbonell", "Quoc Viet Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Veselin Stoyanov", "Luke Zettlemoyer" ], "title": "BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020,", "year": 2020 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer 
sentinel mixture models", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Jack W. Rae", "Anna Potapenko", "Siddhant M. Jayakumar", "Chloe Hillier", "Timothy P. Lillicrap" ], "title": "Compressive transformers for long-range sequence modelling", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "David E. Rumelhart", "Geoffrey E. Hinton", "Ronald J. Williams" ], "title": "Learning Representations by Back-Propagating Errors, pp. 696–699", "venue": null, "year": 1988 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers", "venue": "The Association for Computer Linguistics,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Sinong Wang", "Belinda Z. Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity", "venue": "CoRR, abs/2006.04768,", "year": 2020 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontañón", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang", "Amr Ahmed" ], "title": "Big bird: Transformers for longer sequences", "venue": "URL https://arxiv.org/abs/2007", "year": 2007 } ]
[ { "heading": "1 INTRODUCTION", "text": "Memory has a fundamental role in human cognition. Humans perceive and encode sensory information into a compressed representation in neurons, and later our brains can effectively retrieve past information to accomplish various tasks. The formation of memories involves complex cognitive processes. Modeling and studying the behavior of human memory is still a challenging research problem in many academic areas.
Many researchers have attempted to incorporate memory systems into artificial neural networks. Early works like recurrent neural networks (RNNs) (Rumelhart et al., 1988), including LSTM (Hochreiter & Schmidhuber, 1997), model temporal sequences with their internal compressed state vector as memory. Although RNNs are theoretically Turing-complete, they are limited in preserving long-term information due to this memory bottleneck. To alleviate the limitation, more powerful memory network architectures such as the Neural Turing Machine (NTM) (Graves et al., 2014) and the Differentiable Neural Computer (DNC) (Graves et al., 2016) have been proposed, leveraging a large external memory. However, due to their complex memory addressing mechanisms, they are not widely used in NLP.
More recently, Vaswani et al. (2017) propose the Transformer, discarding memory and recurrence altogether. Instead, it maintains all O(N^2) pairwise dependencies in the sequence with self-attention (Bahdanau et al., 2015). The Transformer and its followers have achieved great success in various NLP tasks. Nevertheless, the quadratic complexity can be extremely costly when the input sequence is long. Some works address the limitations of self-attention, including Reformer, Sparse Transformer, Longformer, and Linformer (Child et al., 2019; Kitaev et al., 2020; Wang et al., 2020). They successfully reduce the complexity of self-attention and can process longer sequences. 
However, the space cost still scales with the sequence length, and it cannot be fully eliminated without memory and recurrence.
Transformer-XL (Dai et al., 2019) re-introduces the concept of memory and recurrence. It caches each layer's self-attention hidden states in a fixed-size queue and re-uses them in later attention computations. However, raw hidden states as memory cannot effectively compress high-level information, so Transformer-XL in practice needs a huge memory size to perform well. Compressive Transformer (Rae et al., 2020) improves upon Transformer-XL by further compressing its memories into fewer vectors via a compression network. However, as mentioned in the papers, both Transformer-XL and Compressive Transformer still have a theoretical maximum temporal range due to the uni-directional self-attention constraint.
In this work, we propose Memformer, which includes a more efficient memory system with a Transformer encoder-decoder architecture. The resulting model has a theoretically unlimited temporal range of memorization. We also improve the relative positional encoding in Transformer-XL with a simplified version. As traditional back-propagation through time (BPTT) has an unaffordable memory cost for our model, we introduce a new optimization scheme, memory replay back-propagation (MRBP), to significantly reduce the memory cost of training recurrent neural networks with large memory. We show that Memformer is compatible with different self-supervised tasks and can further improve its performance on language modeling.
Our main contributions can be summarized as follows: (1) We introduce a new optimization scheme for training recurrent neural networks with large memory and long temporal range. (2) We propose Memformer, a Transformer-based model, which outperforms the previous Transformer-XL and Compressive Transformer on WikiText-103 language modeling. 
(3) We show that Memformer is compatible with a wide range of self-supervised tasks other than autoregressive language modeling." }, { "heading": "2 METHODS", "text": "" }, { "heading": "2.1 SIMPLIFIED RELATIVE POSITIONAL ENCODING", "text": "The standard attention mechanism involves the dot product between the query vector $q_i$ and the key vector $k_j$, where $W_q, W_k, W_v$ are the projection matrices that produce the query, key, and value. Transformer-XL proposes a new type of relative positional encoding. The attention computation is decomposed into four parts: (a) content-based addressing, (b) content-dependent positional bias, (c) global content bias, and (d) global positional bias. The relative positional embedding $R_{i-j}$ provides the positional information between every pair of $x_i$ and $x_j$. The equation is defined below; $u$ and $v$ are trainable parameters:

$$A_{i,j} = \underbrace{E_{x_i}^{\top} W_q^{\top} W_k E_{x_j}}_{(a)} + \underbrace{E_{x_i}^{\top} W_q^{\top} W_r R_{i-j}}_{(b)} + \underbrace{u^{\top} W_k E_{x_j}}_{(c)} + \underbrace{v^{\top} W_r R_{i-j}}_{(d)}. \quad (1)$$

However, we observe that (c) and (d) can be simplified by introducing a bias term into the original query and key projections. Thus, we re-formalize the self-attention as shown in Eq. 3. The product of $b_q$ and $K_x$ is equivalent to term (c), the global content bias. For term (d), since $v$, $W_r$, and $R_{i-j}$ are all trainable parameters, it can be simplified into the product between $b_q$ and $b_k$, which has a similar effect to a global attention bias. Unlike Transformer-XL, which only injects positional information into the attention computation, our attention mechanism shown in Eq. 4 attends over the positional information and accumulates the results to obtain more robust output representations.

$$Q_x = W_q E_x + b_q; \quad K_x = W_k E_x + b_k; \quad V_x = W_v E_x + b_v \quad (2)$$
$$A_{i,j} = Q_{x_i}^{\top} K_{x_j} + Q_{x_i}^{\top} R_{i-j} \quad (3)$$
$$H_{x_i} = \sum_{j} A_{i,j} \, (V_{x_j} + R_{i-j}) \quad (4)$$" }, { "heading": "2.2 MEMFORMER", "text": "This section explains the details of Memformer. 
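To make the simplified relative positional encoding of Eqs. (2)–(4) concrete, here is a minimal pure-Python sketch of the per-head attention computation. The softmax normalisation of the scores is our assumption (the equations above leave normalisation and scaling implicit), and all helper names are ours, not from the paper's code:

```python
import math
import random

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def simplified_rpe_attention(E, Wq, Wk, Wv, bq, bk, bv, R):
    """E: L token embeddings (each a length-d list).
    R: relative-position embeddings; R[i - j + L - 1] encodes offset i - j.
    Returns the L output vectors H of Eq. (4)."""
    L, d = len(E), len(E[0])
    # Eq. (2): projections with bias terms (bq absorbs the global biases)
    Q = [[a + b for a, b in zip(matvec(Wq, e), bq)] for e in E]
    K = [[a + b for a, b in zip(matvec(Wk, e), bk)] for e in E]
    V = [[a + b for a, b in zip(matvec(Wv, e), bv)] for e in E]
    H = []
    for i in range(L):
        # Eq. (3): content term plus positional term
        scores = [dot(Q[i], K[j]) + dot(Q[i], R[i - j + L - 1]) for j in range(L)]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]  # softmax normalisation (our assumption)
        # Eq. (4): values augmented with the relative-position embeddings
        H.append([sum(w[j] * (V[j][k] + R[i - j + L - 1][k]) for j in range(L))
                  for k in range(d)])
    return H
```

Note how the positional embedding $R_{i-j}$ appears twice: once in the scores (Eq. 3) and once added to the values (Eq. 4), which is the "attends over the positional information and accumulates the results" step described above.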
We first review the language model background and a new way of formulating language generation as text continuation. Then we describe an instance of such a formulation, our proposed Memformer model. After that, we introduce the multi-task training setting. Finally, we describe the newly proposed optimization scheme, memory replay back-propagation, which tackles the memory cost problem." }, { "heading": "2.2.1 BACKGROUND: STANDARD LANGUAGE MODEL", "text": "To understand Memformer better, we first study the standard language model. Given a document of N tokens $x = (x_1, x_2, \ldots, x_N)$, a standard language model learns the joint probability of the document by taking the product of each token's probability conditioned on the previous tokens, which is defined as $P(x) = \prod_{t} P(x_t \mid x_{<t})$.
Figures 1a and 1b show standard language models. They autoregressively predict the next token by feeding the previously generated tokens into the model. An extension of Figure 1a is to incorporate relative positional encoding and cache the past hidden states; this model would then be equivalent to Transformer-XL.
Figure 1b is a language model augmented with memory. The self-attention module now attends not only to its token inputs but also to the memory $M_t$ at time t. After all the tokens in the segment are processed, the model summarizes the computed hidden states in the segment and produces the next timestep's memory $M_{t+1}$. Each layer has its own individual memory representation. One limitation of this model is that the read and write operations on memory may not have enough capacity to retain important information, due to the uni-directional attention." }, { "heading": "2.2.2 ENCODER-DECODER LANGUAGE MODEL", "text": "To address this capacity issue of uni-directional attention, we introduce a more powerful architecture, shown in Figure 1c, where we have an encoder-decoder and a memory system. 
If a document is split into T segments of length L, for each segment $s_t$ we define $s_t = [x_{t,1}, x_{t,2}, \ldots, x_{t,L}]$. The encoder's role is to encode the segment $s_t$ and inject the information into the memory $M_t$, while it also retrieves past information from the previous timestep's memory $M_{t-1}$. The final output of the encoder is fed into the decoder's cross attention layers to predict the token probabilities of the next timestep's segment $s_{t+1}$, as in standard language modeling. The definitions are as follows:

$$M_t = \mathrm{Encoder}(s_t, M_{t-1}) \quad (5)$$
$$P(s_t) = \prod_{n=1}^{L} P_{\mathrm{Decoder}}(x_{t,n} \mid x_{t,<n}, M_{t-1}) \quad (6)$$
$$P(x) = \prod_{t=1}^{T} P_{\mathrm{Model}}(s_t \mid s_{<t}) \quad (7)$$

At each timestep, the process can be viewed as a text continuation task. Given a text segment as input, the model needs to continue that segment by generating the next text segment. Since the memory stores all the past information, we can autoregressively generate all the text segments in a document. In this fashion, the model behaves as a language model." }, { "heading": "2.2.3 MEMFORMER ENCODER-DECODER", "text": "To implement the encoder-decoder language model, we propose Memformer Encoder-Decoder. The model incorporates a Transformer encoder-decoder and a memory system. The encoder is equipped with two new modules, Memory Cross Attention (Figure 2b) and Memory Slot Attention (Figure 2c), to read from and write to the memory, respectively. The encoder is fully responsible for encoding and retrieving past information via the memory. The decoder then takes the last layer's outputs from the encoder and feeds them into its cross attention modules, similar to the standard Transformer. For the text continuation task, we let the encoder take the current timestep's text segment as input, and let the decoder generate the next timestep's segment tokens. Figure 2a shows the detailed structure.
Figure 2b demonstrates how the Memory Cross Attention module extracts information from the memory $M_t$ with the current segment's tokens X. 
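The segment-level recurrence of Eqs. (5)–(7) amounts to a simple loop over segments. The sketch below uses hypothetical `encoder` and `decoder_logprob` callables standing in for the Memformer modules (these names are ours, not the paper's API):

```python
def segment_level_log_likelihood(segments, encoder, decoder_logprob, M0):
    """Accumulate log P(x) over a document's segments (Eq. (7)).
    encoder(segment, M_prev) -> (enc_out, M_next)        # Eq. (5)
    decoder_logprob(next_segment, enc_out) -> float      # Eq. (6)
    """
    M = M0
    total_logprob = 0.0
    for t in range(len(segments) - 1):
        # write the current segment into memory and read past information
        enc_out, M = encoder(segments[t], M)
        # score the NEXT segment given the encoder outputs (text continuation)
        total_logprob += decoder_logprob(segments[t + 1], enc_out)
    return total_logprob
```

Because only the single memory state M is carried across iterations, this loop runs in O(1) space with respect to document length at inference time, which is the property claimed in the abstract.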
Each input token's hidden state is projected into queries, while the memory hidden states are projected into key-value pairs. Then the input hidden states attend over the projected memory key-value pairs to produce the final outputs. This module can effectively retrieve past information from the memory $M_t$ given the current text segment.
Memory Slot Attention, in Figure 2c, produces the next timestep's memory $M_{t+1}$. This module takes as input the previous timestep's memory $M_t$ and the encoder's final hidden states. It then projects the memory into queries, keys, and values, while the encoder outputs are projected into keys and values. Since each memory slot should not interfere with the other memory slots, we design a special type of sparse attention pattern (details shown in Figure 2c). Thus, each slot in the memory can only attend to itself and the encoder outputs. This preserves the information in each slot for longer over the time horizon. For example, if one slot only attends to itself, then the information in that slot will not change in the next timestep." }, { "heading": "2.3 MULTI-TASK SELF-SUPERVISED LEARNING", "text": "Unlike existing models built either for denoising objectives or for language modeling, Memformer can accomplish both types of tasks. This flexibility helps the model learn better representations of the document and strengthens the memory of past information. To avoid conflicts between different tasks, we use separate special tokens for each task. In this work, we only experiment with three self-supervised tasks. We believe that our model is flexible enough for many other self-supervised tasks to further improve performance. We randomly sample the following three tasks with probabilities [0.6, 0.3, 0.1] during training.
Text Continuation This is the primary task, as our goal is language modeling. 
Given the current timestep's text segment, the model needs to generate the tokens in the next timestep's segment.
Text Infilling This task is inspired by BART (Lewis et al., 2020). We mask some text spans in a document. The span length is drawn from a Poisson distribution (λ = 3.5). Each span is replaced with a "[mask]" token. The model needs to predict these masked tokens.
Text Recall The reverse of the text continuation task: Text Recall needs to predict the previous text segment given the current timestep's segment. This task aims to directly help the model better preserve past information." }, { "heading": "2.4 MEMORY REPLAY BACK-PROPAGATION", "text": "Memformer relies on the explicit memory to encode long-range documents. At inference time, there is no additional memory cost because of the single unified memory design. Nevertheless, during training, such a design would require back-propagation through time (BPTT) over a long range of timesteps so that the memory writer network can learn to retain long-term information. The problem with BPTT is that it unrolls the entire computational graph during the forward pass and stores all the intermediate activations. This would lead to impractically huge memory consumption for Memformer, making training almost impossible.
A favorable existing approach to alleviate this problem is gradient checkpointing (Chen et al., 2016), which can significantly reduce the memory cost of a large computational graph. However, standard gradient checkpointing still needs to compute all the nodes in the computational graph and stores unnecessary hidden states during the forward pass. We propose Memory Replay Back-Propagation (MRBP), a more efficient variant of gradient checkpointing, which replays the memory at each timestep to accomplish gradient back-propagation over long unrolls.
MRBP is designed specifically for recurrent neural networks. The algorithm takes as input a rollout [x0, x1, . . . 
, xT ] with length T and the previous memory M0. MRBP only traverses the critical path in the computational graph during the forward pass. It then obtains each timestep's memory and stores those memories in the replay buffer. During the backward pass, MRBP backtracks through the memories in the replay buffer from time T to 0 and recomputes the partial computational graph for the local timestep. It continues the computation of the remaining graph with the output Ot to get the loss for back-propagation. There are two directions of gradients for the model: one comes from the local back-propagation of the loss, while the other comes from back-propagating the next memory's Jacobian ∇Mt+1. The full algorithm is described in Algorithm 1.

Algorithm 1: Memory Replay Back-Propagation
Input: rollout = [x0, x1, . . . , xT]: a list containing each timestep t's input xt;
       prevMemory: memory from the previous rollout
// initialize a list to store all the memories computed
1  replayBuffer = []
2  replayBuffer.append(M0)             // previous memory
// forward pass
3  for t = 0, 1, 2, . . . , T − 1 do
4      Mt+1, _ = Model(xt, Mt)         // no gradient
5      replayBuffer.append(Mt+1)
6  end
// backward pass
7  ∇Mt+1 = 0
8  for t = T, T − 1, . . . , 1, 0 do
9      Mt+1, Ot = Model(xt, Mt)        // recompute
10     loss = floss(Ot)
11     loss.backward()
12     Mt+1.backward(∇Mt+1)            // computes ∇Mt
13 end
14 save MT for the next rollout's update" }, { "heading": "2.5 TEMPORAL RANGE ANALYSIS", "text": "We analyze the theoretical maximum temporal range here. Transformer-XL and Compressive Transformer store the past hidden states in a FIFO queue as their memories. However, this gives them a theoretical limit on the maximum temporal range when modeling a sequence. Transformer-XL has a maximum temporal range of $N_m \times L$, where $N_m$ is the memory size and L is the number of layers. 
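To illustrate the gradient flow in Algorithm 1, the toy sketch below applies the same replay scheme to a scalar recurrence $m_{t+1} = w \cdot m_t + x_t$ with per-step loss $\tfrac{1}{2} m_{t+1}^2$. The forward pass stores only the memories in a replay buffer; the backward pass recomputes each local step and propagates ∇Mt, and the resulting dL/dw matches full BPTT exactly. All names are ours; this is a pure-Python illustration, not the paper's implementation:

```python
def rollout_grad_bptt(w, m0, xs):
    """Full BPTT: store every intermediate memory (activation)."""
    ms = [m0]
    for x in xs:
        ms.append(w * ms[-1] + x)      # m_{t+1} = w*m_t + x_t
    dw, g_next = 0.0, 0.0              # g_next = dL/dm_{t+2} flowing backward
    for t in reversed(range(len(xs))):
        g = ms[t + 1] + w * g_next     # local loss grad (d 0.5*m^2 = m) + upstream
        dw += g * ms[t]                # dL/dw contribution of this step
        g_next = g
    return dw

def rollout_grad_mrbp(w, m0, xs):
    """MRBP: forward keeps only the memories; backward recomputes each step."""
    replay = [m0]
    m = m0
    for x in xs:                       # forward pass, no gradient bookkeeping
        m = w * m + x
        replay.append(m)
    dw, grad_mem = 0.0, 0.0            # grad_mem plays the role of ∇M_{t+1}
    for t in reversed(range(len(xs))):
        m_t = replay[t]                # fetch memory from the replay buffer
        m_next = w * m_t + xs[t]       # recompute the local timestep
        g = m_next + grad_mem          # local loss grad + incoming memory grad
        dw += g * m_t
        grad_mem = w * g               # ∇M_t, passed to the previous timestep
    return dw
```

Both routines perform the same arithmetic in the same order, so their gradients agree bit-for-bit; the difference is that the MRBP variant never stores anything except the memories, which is what makes the GPU footprint of Algorithm 1 sub-linear in the unroll length.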
Compressive Transformer extends the temporal range to $L \times (N_m + c \times N_{cm})$ by compressing the memories in Transformer-XL into new compressed memories with a size of $N_{cm}$ and a compression ratio c. If a sequence is longer than the maximum temporal range, the model will lose information when the stored memories are discarded. In contrast, Memformer has a single unified memory system, which theoretically has an infinite maximum temporal range." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 SETTINGS", "text": "We conduct all experiments on WikiText-103 (Merity et al., 2017), which is a popular long-range language modeling benchmark. It contains 28K articles with an average length of 3.6K tokens per article. We adopt byte pair encoding (BPE) (Sennrich et al., 2016) to avoid unknown tokens. The original Transformer-XL and Compressive Transformer set the attention length to 1,600, close to the average length per article. We argue that such a large attention length defeats the purpose of using memory to compress information. Therefore, to better demonstrate memory efficiency, we set the input size to 64 and restrict the memory size to at most 128.
Besides, we make the following changes to the baselines. We have two Transformer-XL models: base and large. Transformer-XL base has 12 layers; Transformer-XL large has 16 layers. Since Compressive Transformer does not have code released, we re-implement the model following the paper. Our Compressive Transformer has 12 layers, and the compression ratio is set to 4. For a fair comparison, Memformer Encoder-Decoder has a 4-layer encoder and an 8-layer decoder. For all baselines and our models, the hidden size is set to 512, the feed-forward hidden size to 2048, the number of heads to 8, and the head size to 64. We disable dropout, as it causes high variance in the final score due to randomness, making fair comparisons impossible. 
We use the simplified relative positional encoding for all models, as it generally performs better under our setting." }, { "heading": "3.2 MAIN RESULTS", "text": "Table 1 shows the results on WikiText-103. We report the number of parameters, the number of items per second as the training speed, and perplexity for a comprehensive comparison. Memformer Encoder-Decoder achieves the best perplexity score with an efficient computation and memory trade-off.
When increasing Transformer-XL's memory size, we observe that the perplexity drops as expected, because the attention length is also increased. Note that the speed decreases with a larger memory size. Even after enlarging the memory size of Transformer-XL to 128, its perplexity is still worse than Memformer Encoder-Decoder's, and its speed is much slower. Since Memformer Encoder-Decoder has slightly more parameters, we also compare our model with Transformer-XL large, which has 16 layers. In Transformer-XL, the number of layers is important for performance, as the maximum temporal range scales with the number of layers. Transformer-XL large indeed obtains better perplexity scores than the Transformer-XL base models, but our model still achieves better perplexity. Moreover, Memformer Encoder-Decoder is 55% faster than Transformer-XL large. This suggests that Memformer Encoder-Decoder is more efficient at modeling the document than Transformer-XL.
Compressive Transformer is another baseline we report in the table. It introduces an extra compression network to compress the memory hidden states in Transformer-XL. For a fair comparison, Compressive Transformer allocates half of the memory size to the compressed memory. With the same memory budget, Compressive Transformer performs better than Transformer-XL. However, the extra compression network requires more parameters and computation; we actually find that Transformer-XL is more efficient in terms of the number of parameters and speed under our setting." 
}, { "heading": "3.3 ABLATION STUDY", "text": "We conduct ablation studies to explore how each component contributes to Memformer’s performance, by analyzing the improvements from the simplified relative positional encoding and memory replay back-propagation." }, { "heading": "3.4 MODEL HYPER-PARAMETERS", "text": "Effects of Multi-Task Training When we combine the three tasks text continuation, text infilling, and text recall, the model yields the best performance. We find that applying only text continuation and text recall degrades performance. This drop might be because the model over-fits on the text recall task, which hurts the text continuation task. Overall, the performance improvement of multi-task learning is marginal. However, in Figure 3a, we observe that models trained with multi-task learning have a smoother validation curve and are less prone to over-fitting. This indicates that with multi-task learning, the model is more robust and potentially learns better feature representations.\nEffects of Time Horizon We test how the time horizon for back-propagation affects performance. The results are shown in Figure 3b. We vary the back-propagation time horizon from 1 to 32. When the time horizon is set to 1, back-propagation cannot pass gradients through memory to the previous timestep. Thus, we observe the worst performance when the time horizon is 1. As we increase the time horizon, the model achieves better perplexity scores. When the time horizon is increased to 32, the marginal improvement in perplexity is almost gone.\nEffects of Memory Size A larger memory size ideally helps to store more information. From Figure 3c, we can see a large improvement when increasing the memory size from 1 to 8. However, when we further increase the memory size, the perplexity stops decreasing. In future work, we will study how to gain further improvements with larger memory sizes." 
}, { "heading": "3.4.1 SIMPLIFIED RELATIVE POSITIONAL ENCODING", "text": "To test the performance of our simplified relative positional encoding (RPE), we only replace the self-attention layers in the original Transformer with the new module, without changing other parts. The results in Table 2 show that our proposed simplified relative positional encoding performs much better than the original Transformer-XL’s RPE on all metrics." }, { "heading": "3.4.2 MEMORY REPLAY BACK-PROPAGATION", "text": "To test MRBP’s effectiveness, we compare it against standard back-propagation through time (BPTT) and the standard gradient checkpointing (GC) algorithm. We use Memformer Decoder with 12 layers, 8 heads, hidden size 512, and memory size 32 for all the experiments here. The time horizon for each truncated back-propagation update is set to 4.\nThe back-propagation through time (BPTT) approach is the fastest because it does not need recomputation. However, it costs the most memory due to unrolling the entire computational graph. While gradient checkpointing can save a huge amount of memory, it is much slower than the other two methods (x0.48). In contrast, our MRBP saves more GPU memory with only a slight speed degradation (x0.90). When further increasing the time horizon to 16, the GPU memory increases by only 62 MB, suggesting sub-linear memory growth with the time horizon." }, { "heading": "4 RELATED WORK", "text": "Optimizing the attention pattern of the Transformer is one direction for processing long sequences. Child et al. (2019) first propose Sparse Transformer, which reduces the computation complexity of self-attention with a sparse attention pattern for sequence modeling. Longformer (Beltagy et al., 2020) and Big Bird (Zaheer et al., 2020) follow Sparse Transformer and explore the effectiveness of different sparsity patterns. Reformer (Kitaev et al., 2020) applies multi-round locality-sensitive hashing (LSH) to reduce the computation complexity to O(N logN). 
Linformer (Wang et al., 2020) further reduces the complexity to O(N) by observing that self-attention is low-rank and can be approximated with linear attention. However, the memory cost of these approaches still scales with the sequence length.\nMeanwhile, applying recurrence to Transformers is a direction orthogonal to the efficient attention approaches. Recurrence enables the model to have constant memory complexity O(1) during inference. There are mainly two works exploring this direction. Transformer-XL (Dai et al., 2019) uses relative positional encoding and a segment-level recurrence mechanism to encode context beyond a fixed length. Compressive Transformer (Rae et al., 2020) extends Transformer-XL by further compressing the previous segment information to achieve a longer context. However, both have a theoretical maximum temporal range determined by the memory size and the number of layers. In practice, Transformer-XL and Compressive Transformer need large memory sizes to achieve good performance and are inefficient in their memory representations." }, { "heading": "5 CONCLUSION", "text": "In this work, we present Memformer, which takes advantage of a memory system to efficiently process long sequences with linear time complexity and constant memory complexity. Along with Memformer, we introduce a new optimization scheme, Memory Replay Back-propagation, which enables training recurrent neural networks with large memory. Our model achieves strong perplexity results on WikiText-103. It is also flexible enough to support a wide range of self-supervised learning tasks. With its infinite temporal range capability, we believe Memformer can spark interesting work in domains such as lifelong learning and memory-augmented meta-learning." } ]
2020
null
SP:f010fddc7ee6523ff0afa0ea2b9e1a55027b09de
[ "The authors present a modification to spatial transformer networks that restricts the transformations to the group of diffeomorphisms. When combined with shape priors, this imposes topological constraints on the mappings produced by the network. These are important considerations in applications such as segmentation tasks where we expect there to be constraints on, for example, the number of connected components. The authors demonstrate the effectiveness of their approach in MNIST experiments and a breast tissue segmentation task. ", "This paper proposes a novel method to incorporate shape priors in neural networks based on diffeomorphic transformations. This is useful as, by design, it preserves certain desirable properties of the output, such as smooth boundaries and connected components, which are of interest in medical imaging applications. The method is validated on MNIST for data invariance and on a medical imaging segmentation task." ]
In this paper we propose a spatial transformer network where the spatial transformations are limited to the group of diffeomorphisms. Diffeomorphic transformations are a kind of homeomorphism, which by definition preserve topology, a compelling property in certain applications. We apply this diffeomorphic spatial transformer to model the output of a neural network as a topology-preserving mapping of a prior shape. By carefully choosing the prior shape we can enforce properties on the output of the network without requiring any changes to the loss function, such as smooth boundaries and a hard constraint on the number of connected components. The diffeomorphic transformer networks outperform their non-diffeomorphic precursors when applied to learn data invariances in classification tasks. On a breast tissue segmentation task, we show that the approach is robust and flexible enough to deform simple artificial priors, such as Gaussian-shaped prior energies, into high-quality predictive probability densities. In addition to desirable topological properties, the segmentation maps have competitive quantitative fidelity compared to those obtained by direct estimation (i.e. a plain U-Net).
[]
[ { "authors": [ "Matthew Chung Hai Lee", "Kersten Petersen", "Nick Pawlowski", "Ben Glocker", "Michiel Schaap" ], "title": "Tetris: Template transformer networks for image segmentation with shape priors", "venue": "IEEE transactions on medical imaging,", "year": 2019 }, { "authors": [ "Adrian V Dalca", "Guha Balakrishnan", "John Guttag", "Mert R Sabuncu" ], "title": "Unsupervised learning for fast probabilistic diffeomorphic registration", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman" ], "title": "Spatial transformer networks. In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "M Faisal Beg", "Michael I Miller", "Alain Trouvé", "Laurent Younes" ], "title": "Computing large deformation metric mappings via geodesic flows of diffeomorphisms", "venue": "International journal of computer vision,", "year": 2005 }, { "authors": [ "Tom Vercauteren", "Xavier Pennec", "Aymeric Perchant", "Nicholas Ayache" ], "title": "Diffeomorphic demons: Efficient non-parametric image registration", "venue": "NeuroImage, 45(1):S61–S72,", "year": 2009 }, { "authors": [ "Brian B Avants", "Charles L Epstein", "Murray Grossman", "James C Gee" ], "title": "Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain", "venue": "Medical image analysis,", "year": 2008 }, { "authors": [ "Grant Haskins", "Uwe Kruger", "Pingkun Yan" ], "title": "Deep learning in medical image registration: A survey", "venue": "Machine Vision and Applications,", "year": 2020 }, { "authors": [ "Morten Bro-Nielsen", "Claus Gramkow" ], "title": "Fast fluid registration of medical images", "venue": "In International Conference on Visualization in Biomedical Computing,", "year": 1996 }, { "authors": [ "John Ashburner" ], "title": "A fast diffeomorphic image 
registration", "venue": "algorithm. Neuroimage,", "year": 2007 }, { "authors": [ "Torsten Rohlfing", "Robert Brandt", "Randolf Menzel", "Daniel B Russakoff", "Calvin R Maurer" ], "title": "Quo vadis, atlas-based segmentation", "venue": "In Handbook of biomedical image analysis,", "year": 2005 }, { "authors": [ "Valerio Fortunati", "René F Verhaart", "Fedde van der Lijn", "Wiro J Niessen", "Jifke F Veenland", "Margarethus M Paulides", "Theo van Walsum" ], "title": "Tissue segmentation of head and neck ct images for treatment planning: a multiatlas approach combined with intensity modeling", "venue": "Medical physics,", "year": 1905 }, { "authors": [ "James C Gee", "Martin Reivich", "Ruzena Bajcsy" ], "title": "Elastically deforming a three-dimensional atlas to match anatomical brain images", "venue": "Journal of Computer Assisted Tomography,", "year": 1993 }, { "authors": [ "Miguel Monteiro", "Loïc Le Folgoc", "Daniel Coelho de Castro", "Nick Pawlowski", "Bernardo Marques", "Konstantinos Kamnitsas", "Mark van der Wilk", "Ben Glocker" ], "title": "Stochastic segmentation networks: Modelling spatially correlated aleatoric uncertainty", "venue": "arXiv preprint arXiv:2006.06015,", "year": 2020 }, { "authors": [ "Grzegorz Chlebus", "Andrea Schenk", "Jan Hendrik Moltz", "Bram van Ginneken", "Horst Karl Hahn", "Hans Meine" ], "title": "Automatic liver tumor segmentation in ct with fully convolutional neural networks and object-based postprocessing", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Mohammad H Jafari", "Nader Karimi", "Ebrahim Nasr-Esfahani", "Shadrokh Samavi", "S Mohamad R Soroushmehr", "K Ward", "Kayvan Najarian" ], "title": "Skin lesion segmentation in clinical images using deep learning", "venue": "In 2016 23rd International conference on pattern recognition (ICPR),", "year": 2016 }, { "authors": [ "Xiaoling Hu", "Fuxin Li", "Dimitris Samaras", "Chao Chen" ], "title": "Topology-preserving deep image segmentation", "venue": "In Advances in 
Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Anjany Sekuboyina", "Markus Rempfler", "Jan Kukačka", "Giles Tetteh", "Alexander Valentinitsch", "Jan S Kirschke", "Bjoern H Menze" ], "title": "Btrfly net: Vertebrae labelling with energy-based adversarial learning of local spine prior", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2018 }, { "authors": [ "Nicki Skafte Detlefsen", "Oren Freifeld", "Søren Hauberg" ], "title": "Deep diffeomorphic transformer networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Oren Freifeld", "Soren Hauberg", "Kayhan Batmanghelich", "John W Fisher" ], "title": "Highly-expressive spaces of well-behaved transformations: Keeping it simple", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Oren Freifeld", "Søren Hauberg", "Kayhan Batmanghelich", "Jonn W Fisher" ], "title": "Transformations based on continuous piecewise-affine velocity fields", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Hadi Salman", "Payman Yadollahpour", "Tom Fletcher", "Kayhan Batmanghelich" ], "title": "Deep diffeomorphic normalizing flows", "venue": "arXiv preprint arXiv:1810.03256,", "year": 2018 }, { "authors": [ "Cleve Moler", "Charles Van Loan" ], "title": "Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later", "venue": "SIAM review,", "year": 2003 }, { "authors": [ "Vincent Arsigny", "Pierre Fillard", "Xavier Pennec", "Nicholas Ayache" ], "title": "Log-euclidean metrics for fast and simple calculus on diffusion tensors", "venue": "Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine,", "year": 2006 }, { "authors": [ "Mariano Cabezas", "Arnau Oliver", "Xavier Lladó", 
"Jordi Freixenet", "Meritxell Bach Cuadra" ], "title": "A review of atlas-based segmentation for magnetic resonance brain", "venue": "images. Computer methods and programs in biomedicine,", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Delmas Patrice" ], "title": "Image Processing and Analysis: A Primer, volume 3", "venue": "World Scientific,", "year": 2018 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Augustus Odena", "Vincent Dumoulin", "Chris Olah" ], "title": "Deconvolution and checkerboard artifacts. Distill, 2016", "venue": "doi: 10.23915/distill.00003. URL http://distill.pub/2016/ deconv-checkerboard", "year": 2016 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Lee R Dice" ], "title": "Measures of the amount of ecologic association between species", "venue": null, "year": 1945 }, { "authors": [ "Th A Sorensen" ], "title": "A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on danish commons", "venue": "Biol. 
Skar.,", "year": 1948 }, { "authors": [ "Felix Hausdorff" ], "title": "Grundzuge der mengenlehre, volume 61", "venue": "American Mathematical Soc.,", "year": 1978 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Gary E Christensen" ], "title": "Consistent linear-elastic transformations for image matching", "venue": "In Biennial International Conference on Information Processing in Medical Imaging,", "year": 1999 }, { "authors": [ "Alexander Van-Brunt", "Matt Visser" ], "title": "Simplifying the reinsch algorithm for the baker–campbell– hausdorff series", "venue": "Journal of Mathematical Physics,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of Convolutional Neural Networks (CNNs) in many modeling tasks is often attributed to their depth and inductive bias. An important inductive bias in CNNs is spatial symmetry (e.g. translational equivariance), which is embedded in the architecture through weight-sharing constraints. Alternatively, spatial transformers constrain networks through predicted spatial affine or thin-plate-spline transformations. In this work, we investigate a special type of spatial transformer, where the transformations are limited to flexible diffeomorphisms. Diffeomorphisms belong to the group of homeomorphisms, which preserve topology by design and thereby guarantee that relations between structures are maintained, i.e. connected (sub-)regions stay connected.\nWe propose to use such a diffeomorphic spatial transformer in a template transformer setting (Lee et al., 2019), where a prior shape is deformed into the output of the model. Here a neural network is used to predict the deformation of the shape, rather than the output itself. By introducing a diffeomorphic mapping of a prior shape, and carefully choosing properties of the prior shape, we can enforce desirable properties on the output, such as a smooth decision boundary or a constraint on the number of connected components.\nTo obtain flexible diffeomorphic transformations, we use a technique known as scaling-and-squaring, which has been successfully applied in the context of image registration in prior work (Dalca et al., 2018) but has received relatively little attention in other areas of machine learning. In an attempt to increase the flexibility of the flow, we approximate a time-dependent parameterisation using the Baker-Campbell-Hausdorff (BCH) formula, rather than a stationary field. 
In this way, diffeomorphic constraints are built directly into the architecture itself, requiring no changes to the loss function.\nExperimentally, we first validate the diffeomorphic spatial transformer by learning data invariances on an MNIST handwritten digit classification task, as proposed by Jaderberg et al. (2015) to evaluate the original spatial transformer. The results show that better accuracy can be achieved by employing diffeomorphic transformations. Additionally, we explore the use of diffeomorphic mappings in a spatial template transformer set-up for 3D medical breast tissue segmentation. We find that the diffeomorphic spatial transformer is able to deform simple prior shapes, such as a normally distributed energy, into high-quality predictive probability densities. We are successful in limiting the number of connected components in the output and achieve competitive performance, measured by quantitative metrics, compared to direct estimation of class probabilities." }, { "heading": "2 RELATED WORK", "text": "Spatial Transformers were introduced by Jaderberg et al. (2015) as a learnable module that deforms an input image and can be incorporated into CNNs for various tasks. In Spatial Transformer Networks (STNs), the module is used to learn data invariances in order to do better in image classification tasks. The work focuses on simple linear transformations (e.g. translations, rotations, affine) but also allows for more flexible mappings such as thin-plate-spline (TPS) transformations. The use of spatial transformations in a template transformer setting was first proposed by Lee et al. (2019), but does not use diffeomorphisms and requires defining a discrete image as a shape prior.\nIn the field of image registration, diffeomorphisms have been actively studied and successfully applied in a variety of methods, including LDDMM by Beg et al. (2005), Diffeomorphic Demons by Vercauteren et al. (2009), and SyN by Avants et al. (2008). 
More recently, efforts have been made to fuse such diffeomorphic image registration approaches with neural networks (Dalca et al. (2018), Haskins et al. (2020)). It is well known that although these models mathematically describe diffeomorphisms, transformations are not always diffeomorphic in practice, and negative Jacobian determinants can still occur due to approximation errors. To reduce such errors, additional regularisation is often applied (Bro-Nielsen and Gramkow (1996), Ashburner (2007), Dalca et al. (2018)), but this typically requires careful tuning.\nImage registration has also been applied to perform segmentation by deforming a basis template, commonly referred to as an ’atlas’, onto a target image (Rohlfing et al. (2005), Fortunati et al. (2013)), for instance by combining (e.g. averaging) manually labelled training annotations (Gee et al., 1993).\nThere have been some studies that investigated how to obtain smoother segmentation boundaries in neural-based image registration. For instance, Monteiro et al. (2020) proposed to capture spatial correlation by modeling joint distributions over entire label maps, in contrast to pixel-wise estimates. In other work, post-processing steps have been applied in order to smooth predictions or to enforce topological constraints (Chlebus et al. (2018), Jafari et al. (2016)).\nSome studies try to enforce more consistent topology during training of a neural network, but often use a soft constraint that requires altering the loss function, such as Hu et al. (2019), or GAN-based approaches, which in addition require a separately trained discriminator model (Sekuboyina et al., 2018).\nLastly, there have been some studies in which diffeomorphisms in the context of spatial transformer networks were investigated. In Skafte Detlefsen et al. 
(2018), stacked spatial transformer layers with continuous piecewise-affine based (CPAB) transformations were used to construct a diffeomorphic neural network, which requires a tessellation strategy (Freifeld et al. (2015), Freifeld et al. (2017)). In Deep Diffeomorphic Normalizing Flows (Salman et al. (2018)), a neural network is used to predict diffeomorphic transformations as normalizing flows to obtain more expressive posteriors for variational inference." }, { "heading": "3 DIFFEOMORPHIC SPATIAL TRANSFORMERS", "text": "The Spatial Transformer is a learnable module which explicitly allows for spatial manipulation of data within a neural network. The module takes an input feature map U, which is passed through a learnable function that regresses the transformation parameters θ. A spatial grid G over the output is transformed to an output grid Tθ(G), which is applied to the input U to produce the output O. In the original spatial transformer, θ could represent arbitrary parameterised mappings such as a simple rotation, translation or affine transformation matrix. We propose flexible transformations in the group of diffeomorphisms Tθ ∈ D, which preserve topology by virtue of being continuous with a continuous inverse. In Section 4, we will describe how we can use a diffeomorphic spatial transformer to warp a shape prior, as illustrated in Figure 1, in a template transformer setting, illustrated in Figure 2.\nDiffeomorphic Transformation Let us define the diffeomorphic mapping φ = ψ_v^(1) ∈ D using an ordinary differential equation (ODE):\n∂ψ_v^(t)(x) / ∂t = v(ψ_v^(t)(x)) (1)\nwhere v is a stationary velocity field, ψ_v^(0) = Id is the identity transformation and t is time. 
By integrating over unit time we obtain ψ_v^(1), the time-1 flow of the stationary velocity field v.\nThe most basic way to solve an ordinary differential equation from some initial point x0 is Euler’s method, in which the trajectory is approximated by taking small discrete steps in time and adding each increment to the running approximation. The method is straightforward to implement, but may take many steps to converge to a good approximation. In this work, we will use a technique known as scaling-and-squaring (Moler and Van Loan, 2003), which allows for fast exponentiation of stationary velocity fields and thus a fast solution to the ODE defined in Equation 1.\nScaling-and-Squaring To solve the ODE from Equation 1 with a stationary velocity field v, whose solution is the exponential φ = exp(v), we use the scaling-and-squaring method (Moler and Van Loan (2003), Arsigny et al. (2006)). The method is very similar to Euler’s method, but is typically more efficient, exploiting the relation exp(v) = exp(v/2^T)^(2^T) with T ∈ N, together with the fact that exp(v) can be well approximated by a Padé or Taylor approximation near the origin (i.e. for small ||v||). 
The main idea is to pick a step size T such that ||v||/2^T < 0.5, scale v by 2^(−T) to obtain the first-order approximation exp(v/2^T) ≈ Id + v/2^T, and then square (self-compose) the result T times to obtain an approximate solution for exp(v).\nAlgorithm 1: Approximating φ = exp(v) using scaling-and-squaring Result: φ = exp(v) T ← ceil(log2(max(||v||)) + 1) φ_0 ← v / 2^T for t = 1 to T do\nφ_t ← φ_{t−1} ◦ φ_{t−1} end\nThe approach can be implemented efficiently in existing numerical differentiation frameworks such as PyTorch or Tensorflow by element-wise dividing the vector components of the velocity field v by 2^T and then self-composing the resulting field T times using the linear grid sampling operation defined in Section 3.1\nSpatial Sampling To perform a spatial transformation on the input feature map, a sampler takes a set of sampling points Tθ(G), along with an input feature map U = I with input image I, to produce the output O. In the case of the template transformer, explained in Section 4, the input feature map is a concatenation U = I ◦ S of an input image with some prior shape S. We follow the general sampling framework described in Jaderberg et al. (2015), defined for arbitrary sampling kernels of which the (sub)gradients can be computed, and use 3D trilinear interpolation in particular:\nO_i^c = Σ_{h=1}^H Σ_{w=1}^W Σ_{d=1}^D I_{hwd}^c max(0, 1 − |y_i^c − h|) max(0, 1 − |x_i^c − w|) max(0, 1 − |z_i^c − d|) (2)\nThis procedure is differentiable with respect to both the sampling grid coordinates and the input feature map by using partial (sub)gradients, allowing it to be used in conjunction with backpropagation.\n1For our experiments, we utilized the F.grid_sample function in PyTorch 1.6 to perform grid sampling.\nBaker–Campbell–Hausdorff formula Instead of parameterising our flow by a single stationary velocity field, we may also consider a piece-wise time-dependent sequence of vector fields. 
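As an illustrative aside, Algorithm 1 can be sketched in one dimension with numpy. This is a hypothetical simplification (the paper's implementation composes 3D displacement fields with PyTorch's F.grid_sample), where self-composition of the scaled field is done via linear interpolation:

```python
import numpy as np

def scaling_and_squaring_1d(x, v):
    """Approximate the time-1 flow phi = exp(v) of a stationary 1-D velocity
    field v sampled on the monotonic grid x, following Algorithm 1."""
    # Step count T, as in Algorithm 1 (small epsilon avoids log2(0)).
    T = max(int(np.ceil(np.log2(np.max(np.abs(v)) + 1e-12) + 1)), 0)
    phi = x + v / 2.0**T                 # first-order step: Id + v / 2^T
    for _ in range(T):
        phi = np.interp(phi, x, phi)     # squaring: phi <- phi o phi
    return phi

# Sanity check: a constant velocity field integrates to a pure translation.
x = np.linspace(0.0, 4.0, 401)
v = np.full_like(x, 1.0)
phi = scaling_and_squaring_1d(x, v)
print(phi[100])  # interior point x = 1.0 maps to ~2.0
```

Because linear interpolation is exact for affine maps, the constant-field case recovers the translation exactly away from the grid boundary, where `np.interp` clamps.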
By parameterising the deformation as a time-dependent sequence of velocities, we hope to improve predictive performance by modeling larger movements first and detailed refinements thereafter. Composing multiple diffeomorphic transformations again yields a diffeomorphic transformation, as the space of diffeomorphic transformations D is an algebraic group that is closed under composition. The scaling-and-squaring algorithm offers an efficient way to find diffeomorphic transformations from a stationary vector field, but cannot straightforwardly be applied to such time-dependent parameterisations. To address this, we can combine two timepoints, denoted A and B for simplicity of notation, to form the Lie exponential mapping:\nexp(Z) = exp(A) exp(B) (3)\nand apply the Baker-Campbell-Hausdorff (BCH) formula up to a certain order to approximate\nZ = bch(A, B) = Σ_{n=1}^∞ z_n(A, B) = A + B + (1/2)[A, B] + (1/12)[A, [A, B]] − (1/12)[B, [A, B]] + · · · (4)\nwhere [·, ·] is the Lie bracket. We apply the formula to approximate the logarithm of the product of matrix exponentials of two noncommutative velocity fields, Z = log(exp(A) exp(B)), and then use scaling-and-squaring once to find the exponential exp(Z).\nBinary Tree Composition Naive composition of the T diffeomorphic transformations would result in a long chain of composition operations Φ = ((((((φ1 ◦ φ2) ◦ φ3) ◦ φ4) ◦ φ5) · · · ) · · · ◦ φT−1) ◦ φT. To reduce the interpolation errors that accumulate from such repetitive resampling, we compose the field using a binary tree scheme Φ = (((φ1 ◦ φ2) ◦ (φ3 ◦ φ4)) ◦ (· · · ◦ (φT−1 ◦ φT))). Treating the composition scheme as a tree structure, the depth now scales as O(log(T)) compared to O(T) for naive composition, reducing the maximum number of times a BCH approximation is repetitively applied to a single timepoint." 
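The truncated BCH series of Equation 4 can be checked numerically on small matrices. The sketch below is a hypothetical illustration using a Taylor-series matrix exponential (not the paper's vector-field implementation); it confirms that the third-order truncation matches log(exp(A)exp(B)) far better than the naive sum A + B:

```python
import numpy as np

def mexp(A, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small ||A||)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def bracket(X, Y):
    """Lie bracket (matrix commutator) [X, Y]."""
    return X @ Y - Y @ X

def bch(A, B):
    """BCH series truncated to the terms shown in Equation 4."""
    return (A + B + bracket(A, B) / 2.0
            + bracket(A, bracket(A, B)) / 12.0
            - bracket(B, bracket(A, B)) / 12.0)

rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((3, 3))
B = 0.05 * rng.standard_normal((3, 3))

err_bch = np.max(np.abs(mexp(bch(A, B)) - mexp(A) @ mexp(B)))
err_naive = np.max(np.abs(mexp(A + B) - mexp(A) @ mexp(B)))
print(err_bch < err_naive)  # True: the commutator corrections matter
```

For commuting fields the brackets vanish and bch(A, B) reduces to A + B, which is why the naive sum is only adequate when the velocity fields (nearly) commute.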
}, { "heading": "4 DIFFEOMORPHIC TEMPLATE TRANSFORMER", "text": "Now that we have defined how to obtain a flexible diffeomorphic spatial transformer, we will investigate its use in a template transformer setting. We define the output of a segmentation model as a diffeomorphic transformation of a prior shape, based on the input image and the prior shape. By carefully choosing the prior shape and its properties, we obtain explicit control over properties of the output, such as the number of connected components.\nLet the input of the model f be a feature map U ∈ R^{H×W×D×2C}, the concatenation U = I ◦ S of an input image I ∈ R^{H×W×D×C} and a prior shape S ∈ R^{H×W×D×C} along the channel dimension, with width W, height H, depth D and C channels. The model outputs a set of T velocity fields V = f(U) = {v_t}_{t=1}^T, where the fields v_t ∈ R^{H×W×D×3} are concatenated along the channel dimension of the output. Then we compute the diffeomorphic transformation Φ = Π_{t=1}^T exp(v_t) by approximating the product of matrix exponentials as discussed in Section 3. Lastly, we sample the pixels of the original prior shape S using the diffeomorphic grid to obtain the output of the model O = TΦ(G)(S), as explained in Section 3. The resulting model is illustrated in Figure 2." }, { "heading": "4.1 PRIOR SHAPE", "text": "The template transformer can in principle use any prior shape, such as a discrete image obtained by averaging annotations (an ‘atlas’; Gee et al. (1993), Cabezas et al. (2011)). But by carefully choosing a prior shape, and by continuity of the diffeomorphic transformation, we can enforce properties such as a single connected component and smooth boundaries on the model output. In this paper we keep the prior shape in a simple, general form to emphasise the expressivity of the diffeomorphism in our experiments. 
We choose an analytical shape prior inspired by the generalised multivariate Gaussian, and define the probability of a voxel at location x belonging to the main class by\np(x; µ, Σ, β) = exp[ −((x − µ)^T Σ^(−1) (x − µ))^β · log(2) ] (5)\nwhere µ, Σ and β directly influence the mean, (co)variances and kurtosis of the prior shape in the spatial domain, and can be kept fixed or trained as part of the model parameters (see Section 5.4). The log(2) factor ensures that the decision boundary (p = 0.5) is independent of β." }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "The diffeomorphic spatial transformer is evaluated on two tasks: a classification task on the handwritten MNIST dataset and a medical 3D breast tissue segmentation problem in the template transformer setting. In both settings, its performance is compared with its non-diffeomorphic counterparts. For the segmentation, we additionally analyse the effect of training different shape prior parameters in Section 5.4." }, { "heading": "5.1 IMPLEMENTATION", "text": "For the MNIST classification experiments, we adapt an existing spatial transformer network implementation from Github2 and add transformations using diffeomorphic vector fields. We train for 10 epochs with a batch size of 64 and a learning rate of 0.001 (β1 = 0.9, β2 = 0.999) without any further learning rate decay.\nFor the 3D breast tissue experiments, we train for 40k iterations using Adam (Kingma and Ba, 2014) with a batch size of 1 and a learning rate of 0.0002 (β1 = 0.9, β2 = 0.999) decayed with cosine annealing (Loshchilov and Hutter, 2016). The input was normalised using 1-99 percentile normalisation (Patrice et al., 2018), and training samples consist of randomly sampled 128 × 128 × 64 patches. For the neural network, we use a 3D U-Net (Ronneberger et al., 2015) with 4× spatial down- and up-sampling using linear interpolation (Odena et al., 2016), instance normalisation (Ulyanov et al., 2016) and Leaky ReLU (slope 0.2) activation functions. 
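The β-independence of the decision boundary in Equation 5 can be verified with a small sketch. This is an illustrative 1-D reduction (scalar variance σ², so the Mahalanobis term is (x − µ)²/σ²), not the paper's 3-D implementation:

```python
import math

def prior_prob(x, mu, sigma2, beta):
    """Generalised-Gaussian shape prior of Equation 5, 1-D case:
    p = exp(-((x - mu)^2 / sigma2)^beta * log(2))."""
    d2 = (x - mu) ** 2 / sigma2        # squared Mahalanobis distance
    return math.exp(-(d2 ** beta) * math.log(2.0))

# At the centre the prior is 1; at Mahalanobis distance 1 it is exactly 0.5
# for every kurtosis parameter beta, so the decision boundary (p = 0.5) is
# independent of beta, while beta only controls how sharply p falls off.
for beta in (0.5, 1.0, 2.0, 8.0):
    print(round(prior_prob(1.0, mu=0.0, sigma2=1.0, beta=beta), 6))  # 0.5
print(prior_prob(0.0, mu=0.0, sigma2=1.0, beta=1.0))  # 1.0 at the centre
```

Larger β flattens the plateau inside the boundary and steepens the fall-off outside it, approaching a box-like prior, which is what makes β a useful trainable parameter.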
Lastly, a hyperbolic tangent scaled with α = 256 limits the vector components of the vector fields to the range [−α, α]. The network uses a single input channel and 3 × T output channels, with T = 4 and a 2nd-order BCH approximation. We train the model using a standard cross-entropy loss. The prior shape was initialized as a centred Gaussian with a diagonal covariance matrix, with diagonal components set at half the volume size of the corresponding dimension. Aside from trying to optimise these values as part of the model parameters (see Section 5.4), we did not perform hyper-parameter tuning on these values. All experiments were performed on a single NVIDIA GeForce RTX 2080 Ti." }, { "heading": "5.2 MNIST EXPERIMENTS", "text": "In this experiment, analogous to the one described in Jaderberg et al. (2015), we take a simple CNN classifier with and without an additional spatial transformer added to the beginning of the network and train it end-to-end. We train and evaluate the model on the well-known MNIST dataset (LeCun et al., 2010), comprising 60000 training and 10000 testing images of size 28 × 28 pixels that contain hand-drawn digits in the range 0 to 9, with images randomly rotated by an angle α uniformly sampled from the range [−90, 90] degrees. The idea is that the spatial transformer can learn invariances in the data (e.g. translation) and thereby aid the classifier. The spatial transformer networks were designed in such a way that they have approximately the same parameter count, and the same classifier model was used in all cases. The experiment was repeated 20 times and standard deviations are reported. For fair comparison, we did not tune hyper-parameters in favor of the diffeomorphic STN.\nIn Table 1, the results for the different types of spatial transformers are shown. 
We find that our diffeomorphic spatial transformer network results in the highest predictive accuracy, compared to non-diffeomorphic and coarser thin-plate-spline (TPS) spatial transformers. The TPS models generate inherently smoother fields, and are therefore less prone to folding, resulting in fewer negative Jacobian determinants. On the other hand, coarser TPS grids have less flexibility, which would make them unsuitable for application in complex anatomical segmentation tasks. We do observe that integration in our diffeomorphic model helps to lower the number of negative Jacobian determinants when compared to the same model without integration. In addition, we performed an experiment with an added regularisation term in the loss penalising the spatial gradient of the field, λ||∇φ||^2, where λ = 10 controls the amount of regularisation. We find that this helps to limit the number of negative Jacobian determinants, but also negatively impacts overall accuracy.\n2Used TPS-STN implementation: https://github.com/WarBean/tps_stn_pytorch" }, { "heading": "5.3 SEGMENTATION EXPERIMENTS", "text": "To assess the applicability of the diffeomorphic spatial transformer in a template transformer setting, we compare our diffeomorphic spatial transformer, with and without a trained shape prior, against direct estimation (i.e. a plain U-Net) on a breast tissue segmentation task. We also evaluate a non-diffeomorphic spatial transformer, as is done in Lee et al. (2019), but apply it in combination with our shape prior, as it was not obvious to us how to create a discrete 3D template from our 2D annotations.\nThe dataset comprises 20 training volumes and 20 evaluation volumes of dynamic contrast enhancement series of subjects with extremely dense breast tissue (Volpara Density Grade 4). 
Each series contains DCE-MRI images (384 × 384 × 60 voxels with spacing 0.97 × 0.97 × 3.00 mm, resampled to 2.5 mm^3) acquired on a 3.0T Achieva or Ingenia Philips system in the axial plane with bilateral anatomic coverage. A randomly selected axial 2D slice was annotated to be used for training and evaluation labels. All annotations consist of a single connected component.\nPerformance was measured using the well-known Sørensen–Dice coefficient (F1-score) (Dice, 1945; Sorensen, 1948) and Hausdorff distance (Hausdorff, 1978) metrics. In addition, we measure the percentage of negative Jacobian determinants of the approximated flow, a well-known metric for deformation quality in image registration that measures the amount of folding. Lastly, we evaluate whether the number of connected components in the thresholded output (p > 0.5) is close to 1, as should be the case without approximation errors. The HD and CC metrics are particularly important on this medical imaging task, as they indicate high-quality and robust results.\nIn Table 2, a spatial template transformer with a fixed shape prior, a diffeomorphic spatial transformer with a fixed shape prior and a diffeomorphic spatial transformer with a trained shape prior (trained mean µ, diagonal covariance diag(Σ) and β) are compared with direct estimation (i.e. a plain U-Net). We find that all template transformer models perform better in terms of Hausdorff distance. The diffeomorphic spatial template transformers perform worse in terms of Sørensen–Dice coefficient, but in combination with a trained shape prior are able to reduce the number of connected components and the Hausdorff distance. Lastly, we observe negative Jacobian determinants as a result of approximation errors in all template transformers, but to a lower degree in the diffeomorphic models." }, { "heading": "5.4 ANALYSIS OF PRIOR SHAPES", "text": "In this section we empirically assess the impact of different prior shapes, with fixed or varying µ, Σ and β parameters. 
In Table 3, Sørensen–Dice coefficient, Hausdorff distance, ratio of negative Jacobian determinants (% |Jφ| < 0) and average number of connected components are reported for different combinations of trained prior shape parameters.\nWe find that learning parameters of the shape prior positively contributes to performance and helps to reduce the number of negative Jacobian determinants, most notably for the learnt position µ. The result suggests that, in the case of more complex shape priors such as a segmentation atlas, the model could benefit from deforming the prior shape with some linear transformations (e.g. translation or affine) before being warped by the diffeomorphic transformation predicted by the network." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "We have presented a special type of spatial transformer in which the spatial transformations are restricted to the group of diffeomorphisms. Diffeomorphic deformations are topology-preserving, by continuity of the mapping and of its inverse, which can be a compelling property when designing deep learning architectures. We show how expressive diffeomorphic mappings can be obtained by a time-dependent parameterisation with multiple vector fields, utilizing the Baker-Campbell-Hausdorff formula in combination with an efficient integration method known as scaling-and-squaring. By building these constraints directly into the architecture itself, no changes to the loss function are required. In addition, we propose to use the diffeomorphic spatial transformer in a template transformer setting, constraining the output of a neural segmentation model to be a topology-preserving mapping of an analytical prior shape. In doing so, we show that the diffeomorphic transform enforces smooth boundaries and explicit control over the topology of the output, such as its number of connected components.\nThe diffeomorphic spatial transformer outperforms the original spatial transformer network when used to learn data invariances on MNIST. 
In a template transformer set-up, we found that a neural network predicting a diffeomorphic mapping of a prior shape offers a flexible way to insert knowledge about the structure of the output without having to alter the loss function or optimisation scheme. We were able to warp shape priors into high-quality segmentations in a medical 3D breast tissue segmentation task, resulting in a lower number of connected components and better performance in terms of Hausdorff distance, but lower in terms of Dice score, compared with direct estimation (i.e. a plain U-Net).\nTo show the effectiveness of the approach, we used a general and simple Gaussian-shaped prior as template. Interestingly, the method is flexible enough to find diffeomorphic mappings from such simple shapes into high-quality posteriors. We expect that designing shape priors specifically tailored to a task (e.g. an atlas or average segmentation) might achieve even better results. It would be interesting to explore applicability to more complex anatomical structures, such as in coronary artery tree segmentation (Lee et al., 2019).\nA piece-wise constant time-dependent parameterisation performed slightly better than modelling a stationary velocity field. This surprised us, because for every diffeomorphic mapping generated by a piece-wise constant time-dependent field there also exists a single (stationary) vector field that describes the same diffeomorphism. We hypothesise that directly optimising this stationary velocity field is harder, and that the time-dependent parameterisation aids the optimisation process by allowing the network to model larger and more detailed deformations separately.\nWe used the BCH formula to integrate a piece-wise time-dependent velocity field using the scaling-and-squaring method. It would be interesting to see how this method relates to other ODE solvers that are capable of integrating time-dependent velocity fields, such as those proposed in Chen et al. (2018). 
However, we were unable to use these solvers under the available hardware constraints.\nIn some cases, negative Jacobian determinants are still present in the obtained flows. This is likely caused by spatial discretisation and interpolation during resampling operations. Future research could assess whether the approach could benefit from regularisation of the vector fields (Ashburner, 2007), inverse consistency (Christensen, 1999) or better interpolation methods.\nTo conclude, we show that diffeomorphic spatial transformations can successfully be applied to preserve topology in neural networks, and for template transformer networks in particular. We have provided several insights on how to incorporate diffeomorphisms in neural network architectures for classification and segmentation. We expect that these insights can aid in tailoring neural network architectures to the specific structure and geometry in data." }, { "heading": "ACKNOWLEDGEMENTS", "text": "Anonymised" }, { "heading": "B TIMING MEASUREMENTS", "text": "A comparison of performance in terms of inference time can be found in Table 4. Average inference time was calculated over 20 full 3D volumes on the breast tissue segmentation task for the U-Net baseline, the non-diffeomorphic model (without field integration) and diffeomorphic models with stationary and time-dependent vector field parameterisations. Due to the integration procedures, the average inference time of the diffeomorphic template transformer models is slightly higher (≈ 10%) compared to the U-Net baseline, but well within practical bounds." }, { "heading": "C EXAMPLE VARIATIONS IN SHAPE PRIOR PARAMETERS", "text": "To illustrate how variations in the parameters µ, σ, Σ and β spatially change the prior shape, we plot the probability (white: p = 0, purple: p = 1) of each voxel, p(x; µ, Σ/σI, β), for different parameter values, both smooth (top) and thresholded (bottom)." } ]
2,020
null
SP:d442ae98d8f485119b8fdd7070d16a7cabc0f9ea
[ "This submission numerically shows that during exploring the neural network landscape, GD flow keeps increasing the sharpness. As a result, GD with a fixed learning rate will exhibit two phases during the dynamics. Denote by $\\eta$ the fixed learning rate. In the first phase, GD follows closely to the GD flow, and it finally converges to a region where the sharpness is roughly $2/\\eta$. Then, it transits into the second phase during which the sharpness hovers right at or above $2/\\eta$. In the second phase, GD cannot increase the sharpness anymore due to the dynamical stability constraint. Thus, the authors name it the Edge of Stability phase. What is interesting is that in the edge of stability phase, the loss is still decreasing steadily although not monotonically. ", "This paper presents an interesting observation for GD. That is, the sharpness of the learnt model in the final phase of the training (measured by the largest eigenvalue of the training loss Hessian) hovers right at the value 2/\\eta while the training loss. At the same time, the loss goes to unstable and non-monotonically decreasing. This pattern is consistent across architecture, activation functions, tasks, loss functions and BN. Comprehensive experiments are conducted to show this common observation. The paper is easy to follow." ]
We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability. In this regime, the maximum eigenvalue of the training loss Hessian hovers just above the value 2/(step size), and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales. Since this behavior is inconsistent with several widespread presumptions in the field of optimization, our findings raise questions as to whether these presumptions are relevant to neural network training. We hope that our findings will inspire future efforts aimed at rigorously understanding optimization at the Edge of Stability.
[ { "affiliations": [], "name": "Jeremy Cohen" }, { "affiliations": [], "name": "Simran Kaur" }, { "affiliations": [], "name": "Yuanzhi Li" }, { "affiliations": [], "name": "J. Zico Kolter" }, { "affiliations": [], "name": "Ameet Talwalkar" } ]
[ { "authors": [ "Naman Agarwal", "Zeyuan Allen-Zhu", "Brian Bullins", "Elad Hazan", "Tengyu Ma" ], "title": "Finding approximate local minima faster than gradient descent", "venue": "In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2017 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "David Barrett", "Benoit Dherin" ], "title": "Implicit gradient regularization", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Léon Bottou", "Frank E. Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Reviews,", "year": 2018 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Ruoyu Sun", "Mingyi Hong" ], "title": "On the convergence of a class of adam-type algorithms for non-convex optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Aaron Defazio" ], "title": "Understanding the role of momentum in non-convex optimization: Practical insights from a lyapunov analysis, 2020", "venue": null, "year": 2020 }, { "authors": [ "Alexandre Défossez", "Léon Bottou", "Francis Bach", "Nicolas Usunier" ], "title": "On the convergence of adam and adagrad", "venue": "arXiv preprint arXiv:2003.02395,", "year": 2020 }, { "authors": [ "Saber Elaydi" ], "title": "An introduction to difference equations", "venue": "Springer Science & Business Media,", "year": 2005 }, { "authors": [ "Behrooz Ghorbani", "Shankar Krishnan", "Ying Xiao" ], "title": "An investigation into neural net optimization via hessian eigenvalue density", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Niv Giladi", "Mor Shpigel Nacson", "Elad Hoffer", "Daniel Soudry" ], "title": "At stability’s edge: How to 
adjust hyperparameters to preserve minima selection in asynchronous training of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Gabriel Goh" ], "title": "Why momentum really works", "venue": "Distill, 2(4):e6,", "year": 2017 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Wenqing Hu", "Chris Junchi Li", "Lei Li", "Jian-Guo Liu" ], "title": "On the diffusion approximation of nonconvex stochastic gradient descent", "venue": "arXiv preprint arXiv:1705.07562,", "year": 2017 }, { "authors": [ "Like Hui", "Mikhail Belkin" ], "title": "Evaluation of neural architectures trained with square loss vs crossentropy in classification", "venue": null, "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "The asymptotic spectrum of the hessian of dnn throughout training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Stanisław Jastrzębski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos Storkey" ], "title": "Three factors influencing minima in sgd", "venue": "arXiv preprint arXiv:1711.04623,", "year": 2017 }, { "authors": [ "Stanisław Jastrzębski", 
"Zachary Kenton", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amost Storkey" ], "title": "On the relation between the sharpest directions of DNN loss and the SGD step length", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Stanisław Jastrzębski", "Maciej Szymczak", "Stanislav Fort", "Devansh Arpit", "Jacek Tabor", "Kyunghyun Cho", "Krzysztof Geras" ], "title": "The break-even point on optimization trajectories of deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima, 2016", "venue": null, "year": 2016 }, { "authors": [ "Y LeCun", "L Bottou", "GB Orr", "K-R Muller" ], "title": "Efficient backprop", "venue": "Lecture notes in computer science,", "year": 1998 }, { "authors": [ "Yann LeCun", "Patrice Y. 
Simard", "Barak Pearlmutter" ], "title": "Automatic learning rate maximization by on-line estimation of the hessian’s eigenvectors", "venue": "In Advances in Neural Information Processing Systems", "year": 1993 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha SohlDickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Aitor Lewkowycz", "Yasaman Bahri", "Ethan Dyer", "Jascha Sohl-Dickstein", "Guy Gur-Ari" ], "title": "The large learning rate phase of deep learning: the catapult mechanism", "venue": "arXiv preprint arXiv:2003.02218,", "year": 2020 }, { "authors": [ "Xiaoyu Li", "Francesco Orabona" ], "title": "On the convergence of stochastic gradient descent with adaptive stepsizes", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Xinyan Li", "Qilong Gu", "Yingxue Zhou", "Tiancong Chen", "Arindam Banerjee" ], "title": "Hessian based analysis of sgd for deep nets: Dynamics and generalization", "venue": "In Proceedings of the 2020 SIAM International Conference on Data Mining,", "year": 2020 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhiyuan Li", "Sanjeev Arora" ], "title": "An exponential learning rate schedule for deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhiyuan Li", "Kaifeng Lyu", "Sanjeev Arora" ], "title": "Reconciling modern deep learning with traditional optimization analyses: The intrinsic learning rate", "venue": "Advances in Neural 
Information Processing Systems,", "year": 2020 }, { "authors": [ "Chaoyue Liu", "Libin Zhu", "Mikhail Belkin" ], "title": "Toward a theory of optimization for overparameterized systems of non-linear equations: the lessons of deep learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "James Martens" ], "title": "SECOND-ORDER OPTIMIZATION FOR NEURAL NETWORKS", "venue": "PhD thesis,", "year": 2016 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Rotem Mulayoff", "Tomer Michaeli" ], "title": "Unique properties of flat minima in deep networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Kamil Nar", "Shankar Sastry" ], "title": "Step size matters in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex programming volume", "venue": "i: Basic course", "year": 1998 }, { "authors": [ "Yurii E Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o (1/kˆ 2)", "venue": "In Dokl. akad. 
nauk Sssr,", "year": 1983 }, { "authors": [ "Vardan Papyan" ], "title": "The full spectrum of deepnet hessians at scale: Dynamics with sgd training and sample size", "venue": "arXiv preprint arXiv:1811.07062,", "year": 2018 }, { "authors": [ "Vardan Papyan" ], "title": "Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet hessians", "venue": "arXiv preprint arXiv:1901.08244,", "year": 2019 }, { "authors": [ "Vardan Papyan" ], "title": "Traces of class/cross-class structure pervade deep learning spectra, 2020", "venue": null, "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Köpf", "Edward Yang", "Zach DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library, 2019", "venue": null, "year": 2019 }, { "authors": [ "B.T. Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "William H. Press", "Saul A. Teukolsky", "William T. Vetterling", "Brian P. Flannery" ], "title": "Numerical Recipes in C (2nd Ed.): The Art of Scientific Computing", "venue": null, "year": 1992 }, { "authors": [ "Sashank J Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabas Poczos", "Alex Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Sashank J. 
Reddi", "Zachary Charles", "Manzil Zaheer", "Zachary Garrett", "Keith Rush", "Jakub Konečný", "Sanjiv Kumar", "Hugh Brendan McMahan" ], "title": "Adaptive federated optimization", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Levent Sagun", "Utku Evci", "V Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks", "venue": "arXiv preprint arXiv:1706.04454,", "year": 2017 }, { "authors": [ "Karthik A Sankararaman", "Soham De", "Zheng Xu", "W Ronny Huang", "Tom Goldstein" ], "title": "The impact of neural network overparameterization on gradient confusion and stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Mądry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tom Schaul", "Sixin Zhang", "Yann LeCun" ], "title": "No more pesky learning rates", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Sharan Vaswani", "Aaron Mishkin", "Issam Laradji", "Mark Schmidt", "Gauthier Gidel", "Simon LacosteJulien" ], "title": "Painless stochastic gradient: Interpolation, line-search, and convergence rates", "venue": "In Advances in Neural Information Processing 
Systems,", "year": 2019 }, { "authors": [ "Rachel Ward", "Xiaoxia Wu", "Leon Bottou" ], "title": "Adagrad stepsizes: sharp convergence over nonconvex landscapes", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Lei Wu", "Chao Ma", "E Weinan" ], "title": "How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuege Xie", "Xiaoxia Wu", "Rachel Ward" ], "title": "Linear convergence of adaptive stochastic gradient descent", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Zeke Xie", "Issei Sato", "Masashi Sugiyama" ], "title": "A diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Chen Xing", "Devansh Arpit", "Christos Tsirigotis", "Yoshua Bengio" ], "title": "A walk with sgd", "venue": "arXiv preprint arXiv:1802.08770,", "year": 2018 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training bert in 76 minutes", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Jingzhao Zhang", "Tianxing He", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Why gradient clipping accelerates training: A theoretical justification for adaptivity", "venue": "In International Conference on Learning Representations,", 
"year": 2020 }, { "authors": [ "Dongruo Zhou", "Yiqi Tang", "Ziyan Yang", "Yuan Cao", "Quanquan Gu" ], "title": "On the convergence of adaptive gradient methods for nonconvex optimization", "venue": "arXiv preprint arXiv:1808.05671,", "year": 2018 }, { "authors": [ "Zhanxing Zhu", "Jingfeng Wu", "Bing Yu", "Lei Wu", "Jinwen Ma" ], "title": "The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects", "venue": "In Proceedings of the 36th International Conference on Machine Learning. PMLR,", "year": 2019 }, { "authors": [ "Finally", "Nesterov momentum Sutskever" ], "title": "2016) is an adaptation of Nesterov’s accelerated gradient (Nesterov, 1983) for deep learning defined by the iteration", "venue": null, "year": 1983 }, { "authors": [ "Jastrzębski" ], "title": "Notice that when training with cross-entropy loss at batch size 8 (the blue line), the sharpness decreases throughout most of training. The train accuracy (not pictured) is only 66% when the sharpness starts to decrease, which suggests that the cause of this decrease is unrelated to the effect described in Appendix C, whereby the sharpness decreases at the end of training", "venue": null, "year": 2020 }, { "authors": [ "Santurkar" ], "title": "Observe that this quantity does rise to 2/η and hover there. We do not know why measuring the sharpness between iterates is necessary for batch-normalized networks, whereas for non-BN networks it suffices to measure the sharpness only at the iterates themselves", "venue": "In §K.1,", "year": 2018 }, { "authors": [ "Santurkar" ], "title": "2018) argued that batch normalization improves the effective smoothness along the optimization trajectory, where effective smoothness is defined as the Lipschitz constant of the gradient in the update direction (i.e. the negative gradient direction, for full-batch GD). 
That is, given an objective function f", "venue": null, "year": 2018 }, { "authors": [ "Santurkar" ], "title": "gradient descent enters the Edge of Stability", "venue": "network. Since", "year": 2018 }, { "authors": [ "full-batch GD" ], "title": "Figure 4(c) does show that the effective smoothness behaves more regularly for the BN network than for the non-BN network. But we disagree with their interpretation of this figure as demonstrating that BN improves the effective smoothness during training", "venue": "The other piece of evidence in Santurkar et al", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks are almost never trained using (full-batch) gradient descent, even though gradient descent is the conceptual basis for popular optimization algorithms such as SGD. In this paper, we train neural networks using gradient descent, and find two surprises. First, while little is known about the dynamics of neural network training in general, we find that in the special case of gradient descent, there is a simple characterization that holds across a broad range of network architectures and tasks. Second, this characterization is strongly at odds with prevailing beliefs in optimization.\nIn more detail, as we train neural networks using gradient descent with step size η, we measure the evolution of the sharpness — the maximum eigenvalue of the training loss Hessian. Empirically, the behavior of the sharpness is consistent across architectures and tasks: so long as the sharpness is less than the value 2/η, it tends to continually rise (§3.1). We call this phenomenon progressive sharpening. The significance of the value 2/η is that gradient descent on quadratic objectives is unstable if the sharpness exceeds this threshold (§2). Indeed, in neural network training, if the sharpness ever crosses 2/η, gradient descent quickly becomes destabilized — that is, the iterates start to oscillate with ever-increasing magnitude along the direction of greatest curvature. Yet once\nthis happens, gradient descent does not diverge entirely or stall. Instead, it enters a new regime we call the Edge of Stability1 (§3.2), in which (1) the sharpness hovers right at, or just above, the value 2/η; and (2) the train loss behaves non-monotonically, yet consistently decreases over long timescales. In this regime, gradient descent is constantly “trying” to increase the sharpness, but is constantly restrained from doing so. 
The net effect is that gradient descent continues to successfully optimize the training objective, but in such a way as to avoid further increasing the sharpness.2\nIn principle, it is possible to run gradient descent at step sizes η so small that the sharpness never rises to 2/η. However, these step sizes are suboptimal from the point of view of training speed, sometimes dramatically so. In particular, for standard architectures on the standard dataset CIFAR-10, such step sizes are so small as to be completely unreasonable — at all reasonable step sizes, gradient descent eventually enters the Edge of Stability (see §4). Thus, at least for standard networks on CIFAR-10, the Edge of Stability regime should be viewed as the “rule,” not the “exception.”\nAs we describe in §5, the Edge of Stability regime is inconsistent with several pieces of conventional wisdom in optimization theory: convergence analyses based on L-smoothness or monotone descent, quadratic Taylor approximations as a model for local progress, and certain heuristics for step size selection. We hope that our empirical findings will both nudge the optimization community away from widespread presumptions that appear to be untrue in the case of neural network training, and also point the way forward by identifying precise empirical phenomena suitable for further study.\nCertain aspects of the Edge of Stability have been observed in previous empirical studies of fullbatch gradient descent (Xing et al., 2018; Wu et al., 2018); our paper provides a unified explanation for these observations. Furthermore, Jastrzębski et al. (2020) proposed a simplified model for the evolution of the sharpness during stochastic gradient descent which matches our empirical observations in the special case of full-batch SGD (i.e. gradient descent). 
However, outside the full-batch special case, there is no evidence that their model matches experiments with any degree of quantitative precision, although their model does successfully predict the directional trend that large step sizes and/or small batch sizes steer SGD into regions of low sharpness. We discuss SGD at greater length in §6. To summarize, while the sharpness does not obey simple dynamics during SGD (as it does during GD), there are indications that the “Edge of Stability” intuition might generalize somehow to SGD, just in a way that does not center around the sharpness." }, { "heading": "2 BACKGROUND: STABILITY OF GRADIENT DESCENT ON QUADRATICS", "text": "In this section, we review the stability properties of gradient descent on quadratic functions. Later, we will see that the stability of gradient descent on neural training objectives is partly well-modeled by the stability of gradient descent on the quadratic Taylor approximation.
On a quadratic objective function f(x) = (1/2)xᵀAx + bᵀx + c, gradient descent with step size η will diverge if³ any eigenvalue of A exceeds the threshold 2/η. To see why, consider first the one-dimensional quadratic f(x) = (1/2)ax² + bx + c, with a > 0. This function has optimum x∗ = −b/a. Consider running gradient descent with step size η starting from x0. The update rule is xt+1 = xt − η(axt + b), which means that the error xt − x∗ evolves as (xt+1 − x∗) = (1 − ηa)(xt − x∗). Therefore, the error at step t is (xt − x∗) = (1 − ηa)ᵗ(x0 − x∗), and so the iterate at step t is xt = (1 − ηa)ᵗ(x0 − x∗) + x∗. If a > 2/η, then (1 − ηa) < −1, so the sequence {xt} will oscillate around x∗ with ever-increasing magnitude, and diverge.
Now consider the general d-dimensional case. Let (ai,qi) be the i-th largest eigenvalue/eigenvector of A. As shown in Appendix B, when the gradient descent iterates {xt} are expressed in the special coordinate system whose axes are the eigenvectors of A, each coordinate evolves separately.
In particular, the coordinate for each eigenvector qi, namely 〈qi,xt〉, evolves according to the dynamics of gradient descent on a one-dimensional quadratic objective with second derivative ai. Therefore, if ai > 2/η, then the sequence {〈qi,xt〉} will oscillate with ever-increasing magnitude; in this case, we say that the iterates {xt} diverge along the direction qi. To illustrate, Figure 2 shows a quadratic function with eigenvalues a1 = 20 and a2 = 1. In Figure 2(a), we run gradient descent with step size η = 0.09; since 0 < a2 < a1 < 2/η, gradient descent converges along both q1 and q2. In Figure 2(b), we use step size η = 0.11; since 0 < a2 < 2/η < a1, gradient descent converges along q2 yet diverges along q1, so diverges overall.
Polyak momentum (Polyak, 1964) and Nesterov momentum (Nesterov, 1983; Sutskever et al., 2013) are notable variants of gradient descent which often improve the convergence speed. On quadratic functions, these two algorithms also diverge if the sharpness exceeds a certain threshold, which we call the “maximum stable sharpness,” or MSS. In particular, we prove in Appendix B that gradient descent with step size η and momentum parameter β diverges if the sharpness exceeds:

MSSPolyak(η, β) = (1/η)(2 + 2β),   MSSNesterov(η, β) = (1/η) · (2 + 2β)/(1 + 2β).   (1)

The Polyak result previously appeared in Goh (2017); the Nesterov one seems to be new.
¹This nomenclature was inspired by the title of Giladi et al. (2020).
²In the literature, the term “sharpness” has been used to refer to a variety of quantities, often connected to generalization (e.g. Keskar et al. (2016)). In this paper, “sharpness” strictly means the maximum eigenvalue of the training loss Hessian. We do not claim that this quantity has any connection to generalization.
³For convex quadratics, this is “if and only if.” However, if A has a negative eigenvalue, then gradient descent with any (positive) step size will diverge along the corresponding eigenvector.
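These stability thresholds are easy to verify numerically. The sketch below is illustrative code, not from the paper, and the constants are chosen only for the demonstration: it runs gradient descent on the one-dimensional quadratic f(x) = (1/2)ax², with and without Polyak momentum, and checks that the iterates decay when the sharpness a is below the relevant threshold and blow up, oscillating in sign, when it is above.

```python
def gd(a, eta, x0=1.0, steps=60):
    """Gradient descent on f(x) = 0.5 * a * x**2, whose gradient is a*x."""
    x = x0
    for _ in range(steps):
        x -= eta * a * x                 # x_{t+1} = (1 - eta*a) * x_t
    return x

def gd_polyak(a, eta, beta, x0=1.0, steps=80):
    """Gradient descent with Polyak (heavy-ball) momentum on the same f."""
    x_prev = x = x0
    for _ in range(steps):
        x, x_prev = x - eta * a * x + beta * (x - x_prev), x
    return x

eta, beta = 0.1, 0.5
# Vanilla GD: stable iff the sharpness a is below 2/eta = 20.
assert abs(gd(19.0, eta)) < 1e-2         # below the threshold: decays
assert abs(gd(21.0, eta)) > 1e2          # above: oscillates and blows up
# Polyak momentum: stable iff a is below MSS = (2 + 2*beta)/eta = 30.
assert abs(gd_polyak(29.0, eta, beta)) < 1e-2
assert abs(gd_polyak(31.0, eta, beta)) > 1e2
```

For η = 0.1 and β = 0.5, the thresholds exercised here are 2/η = 20 for vanilla gradient descent and (2 + 2β)/η = 30 for Polyak momentum, matching Equation 1.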
Note that this discussion only applies to full-batch gradient descent. As we discuss in §6, several recent papers have proposed stability analyses for SGD (Wu et al., 2018; Jastrzębski et al., 2020).
Neural network training objectives are not globally quadratic. However, the second-order Taylor approximation around any point x0 in parameter space is a quadratic function whose “A” matrix is the Hessian at x0. If any eigenvalue of this Hessian exceeds 2/η, gradient descent with step size η would diverge if run on this quadratic function — the iterates would oscillate with ever-increasing magnitude along the corresponding eigenvector. Therefore, at any point x0 in parameter space where the sharpness exceeds 2/η, gradient descent with step size η would diverge if run on the quadratic Taylor approximation to the training objective around x0." }, { "heading": "3 GRADIENT DESCENT ON NEURAL NETWORKS", "text": "In this section, we empirically characterize the behavior of gradient descent on neural network training objectives. Section 4 will show that this characterization holds broadly." }, { "heading": "3.1 PROGRESSIVE SHARPENING", "text": "When training neural networks, it seems to be a general rule that so long as the sharpness is small enough for gradient descent to be stable (< 2/η, for vanilla gradient descent), gradient descent has an overwhelming tendency to continually increase the sharpness. We call this phenomenon progressive sharpening. By “overwhelming tendency,” we mean that gradient descent can occasionally decrease the sharpness (especially at the beginning of training), but these brief decreases always seem to be followed by a return to continual increase. Jastrzębski et al. (2020) previously hypothesized (in their Assumption 4) that a similar phenomenon may hold for SGD, but the evidence for, and the precise scope of, this effect are both currently far clearer for gradient descent than for SGD.
Progressive sharpening is illustrated in Figure 3.
Here, we use (full-batch) gradient descent to train a network on a subset of 5,000 examples from CIFAR-10, and we monitor the evolution of the sharpness during training. The network is a fully-connected architecture with two hidden layers of width 200, and tanh activations. In Figure 3(a), we train using the mean squared error loss for classification (Hui & Belkin, 2020), encoding the correct class with 1 and the other classes with 0. We use the small step size of η = 2/600, and stop when the training accuracy reaches 99%. We plot both the train loss and the sharpness, with a horizontal dashed line marking the stability threshold 2/η. Observe that the sharpness continually rises during training (except for a brief dip at the beginning). This is progressive sharpening. For this experiment, we intentionally chose a step size η small enough that the sharpness remained beneath 2/η for the entire duration of training.\nCross-entropy. When training with cross-entropy loss, there is an exception to the rule that the sharpness tends to continually increase: with cross-entropy loss, the sharpness typically drops at the end of training. This behavior can be seen in Figure 3(b), where we train the same network using the cross-entropy loss rather than MSE. This drop occurs because once most data points are classified correctly, gradient descent tries to drive the cross-entropy loss to zero by scaling up the margins, as detailed in Soudry et al. (2018). As we explain in Appendix C, this causes the sharpness to drop.\nThe effect of width. It is known that when networks parameterized in a certain way (the “NTK parameterization”) are made infinitely wide, the Hessian moves a vanishingly small amount during training (Jacot et al., 2018; Lee et al., 2019; Li & Liang, 2018), which implies that no progressive sharpening occurs. In Appendix D, we experiment with networks of varying width, under both NTK and standard parameterizations. 
We find that progressive sharpening occurs to a lesser degree as networks become increasingly wide. Nevertheless, our experiments in §4 demonstrate that progressive sharpening occurs to a dramatic degree for standard architectures on the standard dataset CIFAR-10.\nWe do not know why progressive sharpening occurs, or whether “sharp” solutions differ in any important way from “not sharp” solutions. These are important questions for future work. Note that Mulayoff & Michaeli (2020) studied the latter question in the context of deep linear networks." }, { "heading": "3.2 THE EDGE OF STABILITY", "text": "In the preceding section, we ran gradient descent using step sizes η so small that the sharpness never reached the stability threshold 2/η. In Figure 4(a), we start to train the same network at the larger step size of η = 0.01, and pause training once the sharpness rises to 2/η = 200. Recall from §2 that in any region where the sharpness exceeds 2/η, gradient descent with step size η would be unstable if run on the quadratic Taylor approximation to the training objective — the gradient descent iterates would oscillate with ever-increasing magnitude along the leading Hessian eigenvector. Empirically, we find that gradient descent on the real neural training objective behaves similarly — at first. Namely, let q1 be the leading Hessian eigenvector at the iteration where the sharpness reaches 2/η. In Figure 4(b), we resume training the network, and we monitor both the train loss and the quantity 〈q1,xt〉 for the next 215 iterations. Observe that 〈q1,xt〉 oscillates with ever-increasing magnitude, similar to the divergent quadratic example in Figure 2(b). At first, these oscillations are too small to affect the objective appreciably, and so the train loss continues to monotonically decrease. 
But eventually, these oscillations grow big enough that the train loss spikes.
Once gradient descent becomes destabilized in this manner, classical optimization theory gives no clues as to what will happen next. One might imagine that perhaps gradient descent might diverge entirely, or that gradient descent might stall while failing to make progress, or that gradient descent might jump to a flatter region and remain there. In reality, none of these outcomes occurs. In Figure 4(c), we plot both the train loss and 〈q1,xt〉 for 1000 iterations after the sharpness first crossed 2/η. Observe that gradient descent somehow avoids diverging entirely. Instead, after initially spiking around iteration 215, the train loss continues to decrease, albeit non-monotonically.
This numerical example is representative. In general, after the sharpness initially crosses 2/η, gradient descent enters a regime we call the Edge of Stability, in which (1) the sharpness hovers right at, or just above, the value 2/η; and (2) the train loss behaves non-monotonically over short timescales, yet decreases consistently over long timescales. Indeed, in Figure 5, we run gradient descent at a range of step sizes using both MSE and cross-entropy loss. The left pane plots the train loss curves, with a vertical dotted line (of the appropriate color) marking the iteration where the sharpness first crosses 2/η. Observe that the train loss decreases monotonically before this dotted line, but behaves non-monotonically afterwards. The middle pane plots the evolution of the sharpness, with a horizontal dashed line (of the appropriate color) at the value 2/η. Observe that once the sharpness reaches 2/η, it ceases to increase further, and instead hovers right at, or just above, the value 2/η for the remainder of training.
(The precise meaning of “just above” varies: in Figure 5, for MSE loss, the sharpness hovers just a minuscule amount above 2/η, while for cross-entropy loss, the gap between the sharpness and 2/η is small yet non-minuscule.)
At the Edge of Stability, gradient descent is “trying” to increase the sharpness further, but is being restrained from doing so. To demonstrate this, in Figure 7, we train at step size 2/200 until reaching the Edge of Stability, and then at iteration 6,000 (marked by the vertical black line), we drop the step size to η = 2/300. Observe that after the learning rate drop, the sharpness immediately starts to increase, and only stops increasing once gradient descent is back at the Edge of Stability. Appendix O repeats this experiment on more architectures. Intuitively, gradient descent with fixed step sizes acts like a constrained optimization algorithm: the use of step size η imposes an implicit 2/η constraint on the sharpness (Nar & Sastry, 2018), and at the Edge of Stability this constraint is “active.”
Observe from Figure 5 that there do exist step sizes η (in purple) small enough that the sharpness never rises to 2/η. We call such a step size stable. However, observe that with cross-entropy loss, it takes 3700 iterations to train at the stable step size in purple, but only 1000 iterations to train at the larger step size in blue. In general, we always observe that stable step sizes are suboptimal in terms of convergence speed. In fact, in §4 we will see that for standard networks on CIFAR-10, stable step sizes are so suboptimally small that they are completely unreasonable.
The “Edge of Stability” effect generalizes to gradient descent with momentum. In Figure 6, we train using gradient descent with step size η = 0.01, and varying amounts of either Polyak or Nesterov momentum. Observe that in each case, the sharpness rises until reaching the MSS given by Equation 1, and then plateaus there.
Appendix N has more momentum experiments.\nIn Appendix P, we briefly examine the evolution of the next few Hessian eigenvalues during gradient descent. We find that each of these eigenvalues rises until plateauing near 2/η.\nPrior work. Aspects of the Edge of Stability have been observed previously in the literature. Wu et al. (2018) noted that the sharpness at the solution reached by full-batch gradient descent was not just less than 2/η, as was expected due to stability considerations, but was mysteriously approximately equal to 2/η. In retrospect, we can attribute this observation to progressive sharpening. Xing et al. (2018) observed that full-batch gradient descent eventually enters a regime (the Edge of Stability) in which the training loss behaves non-monotonically, and the iterates oscillate along the direction of largest curvature; however, they did not relate this regime to the sharpness. Lewkowycz et al. (2020) found that in neural network training, if the sharpness at initialization is larger than 2/η, then after becoming initially destabilized, gradient descent does not always diverge entirely (as the quadratic Taylor approximation would suggest), but rather sometimes “catapults” into a flatter region that is flat enough to stably accommodate the step size. It seems plausible that whichever properties of neural training objectives permit this so-called “catapult” behavior may also be the same properties that permit successful optimization at the Edge of Stability. Indeed, optimization at the Edge of Stability can conceivably be viewed as a never-ending series of micro-catapults. As we discuss at greater length in §6, several papers (Jastrzębski et al., 2017; 2019) have observed that large step sizes steer stochastic gradient descent into less sharp regions of the loss landscape, and Jastrzębski et al. (2020) attributed this effect to the stability properties of SGD. 
Finally, our precise characterization of the behavior of the sharpness during full-batch gradient descent adds to a growing body of work that empirically investigates the Hessian spectrum of neural networks (Sagun et al., 2017; Ghorbani et al., 2019; Li et al., 2020a; Papyan, 2018; 2019; 2020)." }, { "heading": "3.3 THE GRADIENT FLOW TRAJECTORY", "text": "In the right pane of Figure 5, we plot the evolution of the sharpness during gradient descent, with “time” = iteration × η, rather than iteration, on the x-axis. This allows us to directly compare the sharpness after, say, 100 iterations at η = 0.01 to the sharpness after 50 iterations at η = 0.02; both are time 1. Observe that when plotted by time, the sharpnesses for gradient descent at different step sizes coincide until the time where each reaches 2/η. This is because for this network, gradient descent at η = 0.01 and gradient descent at η = 0.02 initially travel the same path (moving at a speed proportional to η) until each reaches the point on that path where the sharpness hits 2/η. This path is the gradient flow trajectory. The gradient flow solution at time t is defined as the limit as η → 0 of the gradient descent iterate at iteration t/η (if this limit exists). The empirical finding of interest is that for this particular network, gradient descent does not only track the gradient flow trajectory in the limit of infinitesimally small step sizes, but for any step size that is less than 2/sharpness.\nWe can numerically approximate gradient flow trajectories by using the Runge-Kutta RK4 algorithm (Press et al., 1992) to numerically integrate the gradient flow ODE. Empirically, for many but not all networks studied in this paper, we find that gradient descent at any step size η closely tracks the Runge-Kutta trajectory until reaching the point on that trajectory where the sharpness hits 2/η. 
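The RK4 integration mentioned here amounts to repeatedly applying the classical Runge-Kutta update to the ODE dx/dt = −∇L(x). The following is a generic textbook sketch, not the paper's implementation; `grad` is a placeholder for a function returning the loss gradient.

```python
import math

def rk4_flow_step(grad, x, h):
    """One classical Runge-Kutta (RK4) step for the gradient flow ODE
    dx/dt = -grad(x), with step size h."""
    k1 = -grad(x)
    k2 = -grad(x + 0.5 * h * k1)
    k3 = -grad(x + 0.5 * h * k2)
    k4 = -grad(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Sanity check on the toy loss L(x) = 0.5 * x**2 (so grad(x) = x), where the
# exact gradient flow solution is x(t) = x(0) * exp(-t).
x = 1.0
for _ in range(10):                      # integrate to time t = 1 with h = 0.1
    x = rk4_flow_step(lambda z: z, x, h=0.1)
print(abs(x - math.exp(-1.0)))           # tiny: RK4's global error is O(h^4)
```

In practice the same update is applied coordinate-wise to the full parameter vector, with `grad` computed by backpropagation on the training loss.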
(This sometimes occurs even for networks with ReLU activations or max-pooling, which give rise to training objectives that are not continuously differentiable, which means that the gradient flow trajectory is not necessarily guaranteed to exist.) For such networks, the gradient flow trajectory provides a coherent framework for reasoning about which step sizes will eventually enter the Edge of Stability. Let λ0 be the sharpness at initialization, and let λmax be the maximum sharpness along the gradient flow trajectory. If η < 2/λmax, then gradient descent will stably track the gradient flow trajectory for the entire duration of training, and will never enter the Edge of Stability. On the other hand, if η ∈ [2/λmax, 2/λ0], then gradient descent will stably track the gradient flow trajectory only until reaching the point on that trajectory where the sharpness hits 2/η; shortly afterwards, gradient descent will become destabilized, depart the gradient flow trajectory, and enter the Edge of Stability." }, { "heading": "4 FURTHER EXPERIMENTS", "text": "Section 3 focused for exposition on a single architecture and task. In this section, we show that our characterization of gradient descent holds broadly across a wide range of architectures and tasks. We detail several known caveats and qualifications in Appendix A.
Architectures. In Appendix J, we fix the task of training a 5k subset of CIFAR-10, and we systematically vary the network architecture. We consider fully-connected networks, as well as convolutional networks with both max-pooling and average pooling. For all of these architectures, we consider tanh, ReLU, and ELU activations, and for fully-connected networks we moreover consider softplus and hardtanh — eleven networks in total. We train each network with both cross-entropy and MSE loss.
In each case, we successfully reproduce Figure 5.
Since batch normalization (Ioffe & Szegedy, 2015) is known to have unusual optimization properties (Li & Arora, 2019), it is natural to wonder whether our findings still hold with batch normalization. In Appendix K, we confirm that they do, and we reconcile this point with Santurkar et al. (2018).
Tasks. In Appendix L, we verify our findings on: (1) a Transformer trained on the WikiText2 language modeling task; (2) fully-connected tanh networks with one hidden layer, trained on a synthetic one-dimensional toy regression task; and (3) deep linear networks trained on Gaussian data. In each case, the sharpness rises until hovering right at, or just above, the value 2/η.
Standard networks on CIFAR-10. In Appendix M, we verify our findings on three standard architectures trained on the full CIFAR-10 dataset: a ResNet with BN, a VGG with BN, and a VGG without BN. For all three architectures, we find that progressive sharpening occurs to a dramatic degree, and, relatedly, that stable step sizes are dramatically suboptimal. For example, when we train the VGG-BN to 99% accuracy using gradient flow / Runge-Kutta, we find that the sharpness rises from 6.3 at initialization to a peak sharpness of 2227.6. Since this is an architecture for which gradient descent closely hews to the gradient flow trajectory, we can conclude that any stable step size for gradient descent would need to be less than 2/2227.6 = 0.000897. Training finishes at time 14.91, so gradient descent at any stable step size would require at least 14.91/0.000897 = 16,622 iterations. Yet empirically, this network can be trained to completion at the larger, “Edge of Stability” step size of η = 0.16 in just 329 iterations. Therefore, training at a stable step size is suboptimal by a factor of at least 16,622/329 = 50.5. The situation is similar for the other two architectures we consider.
In short, for standard architectures on the standard dataset CIFAR-10, stable step sizes are not just suboptimally small, they are so suboptimal as to be completely unreasonable. For these networks, gradient descent at any reasonable step size eventually enters the Edge of Stability regime.\nTracking gradient flow. Recall that for some (but not all) architectures, gradient descent closely hews to the gradient flow trajectory so long as the sharpness is less than 2/η. Among the architectures considered in Appendix J, we found this to be true for the architectures with continuously differentiable components, as well as some, but not all, with ReLU, hardtanh, and max-pooling. Among the architectures in Appendix L, we found this to be true for the tanh network, but not for the deep linear network or the Transformer. Finally, we did find this to be true for the three standard architectures in Appendix M, even though those architectures use ReLU." }, { "heading": "5 DISCUSSION", "text": "We now explain why the behavior of gradient descent at the Edge of Stability contradicts several pieces of conventional wisdom in optimization.\nAt reasonable step sizes, gradient descent cannot be analyzed using (even local) L-smoothness Many convergence analyses of gradient descent assume a bound on the sharpness — either globally or, at the very least, along the optimization trajectory. This condition, called L-smoothness (Nesterov, 1998), is intended to guarantee that each gradient step will decrease the training objective by a certain amount; the weakest guarantee is that if the local sharpness is less than 2/η, then a gradient step with size η is guaranteed to decrease (rather than increase) the training objective. At a bare minimum, any convergence analysis of gradient descent based on L-smoothness will require the sharpness along the optimization trajectory to be less than 2/η. Yet to the contrary, at the Edge of Stability, the sharpness hovers just above 2/η. 
Therefore, at any step size for which gradient descent enters the Edge of Stability (which, on realistic architectures, includes any reasonable step size), gradient descent cannot be analyzed using L-smoothness. Li et al. (2020b) previously argued that convergence analyses based on L-smoothness do not apply to networks with both batch normalization and weight decay; our paper empirically extends this to neural networks without either.\nL-smoothness may be inappropriate when analyzing other optimization algorithms too It is common for optimization papers seemingly motivated by deep learning to analyze algorithms under the “non-convex but L-smooth”’ setting (Reddi et al., 2016; Agarwal et al., 2017; Zaheer et al., 2018; Zhou et al., 2018; Chen et al., 2019; Li & Orabona, 2019; Ward et al., 2019; You et al., 2020; Vaswani et al., 2019; Sankararaman et al., 2019; Reddi et al., 2021; Défossez et al., 2020; Xie et al., 2020; Liu et al., 2020; Defazio, 2020). Since our experiments focus on gradient descent, it does not necessarily follow that L-smoothness assumptions are unjustified when analyzing other optimization algorithms. However, gradient descent is arguably the simplest optimization algorithm, so we believe that the fact that (even local) L-smoothness fails even there should raise serious questions about the suitability of the L-smoothness assumption in neural network optimization more generally. In particular, the burden of proof should be on authors to empirically justify this assumption." }, { "heading": "At reasonable step sizes, gradient descent does not monotonically decrease the training loss", "text": "In neural network training, SGD does not monotonically decrease the training objective, in part due to minibatch randomness. However, it is often assumed that full-batch gradient descent would monotonically decrease the training objective, were it used to train neural networks. For example, Zhang et al. 
(2020) proposed a “relaxed L-smoothness” condition that is less restrictive than standard L-smoothness, and proved a convergence guarantee for gradient descent under this condition which asserted that the training objective will decrease monotonically. Likewise, some neural network analyses such as Allen-Zhu et al. (2019) also assert that the training objective will monotonically decrease. Yet, at the Edge of Stability, the training loss behaves non-monotonically over short timescales even as it consistently decreases over long timescales. Therefore, convergence analyses which assert monotone descent cannot possibly apply to gradient descent at reasonable step sizes.
The Edge of Stability is inherently non-quadratic It is tempting to try to reason about the behavior of gradient descent on neural network training objectives by analyzing, as a proxy, the behavior of gradient descent on the local quadratic Taylor approximation (LeCun et al., 1998). However, at the Edge of Stability, the behavior of gradient descent on the real neural training objective is irreconcilably different from the behavior of gradient descent on the quadratic Taylor approximation: the former makes consistent (if choppy) progress, whereas the latter would diverge (and this divergence would happen quickly, as we demonstrate in Appendix E). Thus, the behavior of gradient descent at the Edge of Stability is inherently non-quadratic.
Dogma for step size selection may be unjustified An influential piece of conventional wisdom concerning step size selection has its roots in the quadratic Taylor approximation model of gradient descent. This conventional wisdom (LeCun et al., 1993; 1998; Schaul et al., 2013) holds that if the sharpness at step t is λt, then the current step size ηt must be set no greater than 2/λt (in order to prevent divergence); and furthermore, barring additional information about the objective function, that ηt should optimally be set to 1/λt.
Our findings complicate this conventional wisdom.\nTo start, it is nearly impossible to satisfy these prescriptions with a fixed step size: for any fixed (and reasonable) step size ηt = η, progressive sharpening eventually drives gradient descent into regions where the sharpness is just a bit greater than 2/η — which means that the step size η is purportedly impermissible. Furthermore, in Appendix F, we try running gradient descent with the purportedly optimal ηt = 1/λt rule, and find that this algorithm is soundly outperformed by the purportedly impermissible baseline of gradient descent with a fixed ηt = 1/λ0 step size, where λ0 is the sharpness at initialization. The ηt = 1/λt rule continually anneals the step size, and in so doing ensures that the training objective will decrease at each iteration, whereas the fixed ηt = 1/λ0 step size often increases the training objective. However, this non-monotonicity turns out to be a worthwhile price to pay in return for the ability to take larger steps." }, { "heading": "6 STOCHASTIC GRADIENT DESCENT", "text": "Our precise characterization of the behavior of the sharpness only applies to full-batch gradient descent. In contrast, during SGD, the sharpness does not always settle at any fixed value (Appendix G), let alone one that can be numerically predicted from the hyperparameters. Nevertheless, prior works (Jastrzębski et al., 2017; 2019; 2020) have demonstrated that large step sizes do steer SGD into regions of the landscape with lower sharpness; the 2/η rule we have identified for full-batch gradient descent is a special case of this observation. Furthermore, small batch sizes also steer SGD into regions with lower sharpness (Keskar et al., 2016; Jastrzębski et al., 2017), as we illustrate in Appendix G. Jastrzębski et al. (2020) attributed this phenomenon to the stability properties of SGD.\nEven though our findings only strictly hold for gradient descent, they may have relevance to SGD as well. 
First, since gradient descent is a special case of SGD, any general characterization of the dynamics of SGD must reduce to the Edge of Stability in the full-batch special case. Second, there are indications that the Edge of Stability may have some analogue for SGD. One way to interpret our main findings is that gradient descent “acclimates” to the step size in such a way that each update sometimes increases and sometimes decreases the train loss, yet an update with a smaller step size would consistently decrease the training loss. Along similar lines, in Appendix H we demonstrate that SGD “acclimates” to the step size and batch size in such a way that each SGD update sometimes increases and sometimes decreases the training loss in expectation, yet an SGD update with a smaller step size or larger batch size would consistently decrease the training loss in expectation.
In extending these findings to SGD, the question arises of how to model “stability” of SGD. This is a highly active area of research. Wu et al. (2018) proposed modeling stability in expectation, and gave a sufficient (but not necessary) criterion for the stability of SGD in expectation. Building on this framework, Jastrzębski et al. (2020) argued that if the Hessian is aligned with the second moment matrix of per-example gradients, then SGD is stable so long as a certain expression (involving the sharpness) is below a certain threshold. In the special full-batch case, their criterion reduces to the sharpness being beneath 2/η — a constraint which we have shown is “tight” throughout training. However, in the general SGD case, there is no evidence that their stability constraint is tight throughout training. Giladi et al. (2020) showed that the generalization gap in asynchronous SGD can be mostly ameliorated by setting the step size so as to ensure that stability properties in expectation remain identical to those of a well-tuned implementation of synchronous SGD.
Finally, a number of papers have attempted to mathematically model the propensity of SGD to “escape from sharp minima” (Hu et al., 2017; Zhu et al., 2019; Xie et al., 2021)." }, { "heading": "7 CONCLUSION", "text": "We have empirically demonstrated that the behavior of gradient descent on neural training objectives is both surprisingly consistent across architectures and tasks, and surprisingly different from that envisioned in the conventional wisdom. Our findings raise a number of questions. Why does progressive sharpening occur? At the Edge of Stability, by what mechanism does gradient descent avoid diverging entirely? Since the conventional wisdom for step size selection is wrong, how should the gradient descent step size be set during deep learning? Does the “Edge of Stability” effect generalize in some way to optimization algorithms beyond gradient descent, such as SGD? We hope to inspire future efforts aimed at addressing these questions." }, { "heading": "8 ACKNOWLEDGEMENTS", "text": "This work was supported in part by DARPA FA875017C0141, the National Science Foundation grants IIS1705121 and IIS1838017, an Amazon Web Services Award, a JP Morgan A.I. Research Faculty Award, a Carnegie Bosch Institute Research Award, a Facebook Faculty Research Award, and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, the National Science Foundation, or any other funding agency." }, { "heading": "A CAVEATS", "text": "In this appendix, we list several caveats to our generic characterization of the dynamics of full-batch gradient descent on neural network training objectives.\n1. 
With cross-entropy loss, the sharpness often drops at the end of training: As mentioned in the main text, when neural networks are trained on classification tasks using the cross-entropy loss, the sharpness frequently drops near the end of training, once the classification accuracy begins to approach 1. We explain this effect in Appendix C.\n2. For shallow or wide networks, or on simple datasets, sharpness doesn’t rise that much: When the network is shallow (Figures 16-17) or wide (Figures 12-14), or when the dataset is “easy” or small (Figure 18), the sharpness may rise only a small amount over the gradient flow trajectory. For these optimization problems, “stable step sizes” (those for which gradient descent never enters the Edge of Stability) may be quite reasonable, and the range of “Edge of Stability” step sizes may be quite small.\n3. Sharpness sometimes drops at the beginning of training: We sometimes observe that the sharpness drops at the very beginning of training, as the network leaves its initialization. This was more common when training with MSE loss than with cross-entropy loss. For most networks, this drop was very slight. However, the combination of both batch normalization and MSE loss sometimes caused situations where the sharpness was considerably large at initialization, and dropped precipitously as soon as training began. Figure 8 illustrates one such network.\n4. With batch normalization, need to look at sharpness between iterates: As detailed in Appendix K, when we trained batch-normalized networks using very small step sizes η, we sometimes observed that the sharpness at the gradient descent iterates themselves plateaued below 2/η, even though the sharpness in between the iterates plateaued just above 2/η, as expected.\n5. 
With non-differentiable components, instability sometimes begins when the sharpness is a bit less than 2/η: When training networks with ReLU or hardtanh activations, we sometimes observed that the sharpness started to plateau (and the training loss started to behave nonmonotonically) a bit before the instant when the sharpness crossed 2/η. For example, see Figures 41, 43 or Figures 45, 47. One potential explanation is that for such networks, the training objective is not continuously differentiable, so the second-order Taylor approximation around a given iterate may be a poor local model for the training objective at even a tiny distance away in weight space, if that weight change causes some activations to switch “ReLU case” from ≤ 0 to > 0, or vice versa." }, { "heading": "B STABILITY OF GRADIENT DESCENT ON QUADRATIC FUNCTIONS", "text": "This appendix describes the stability properties of gradient descent (and its momentum variants) when optimizing the quadratic objective function\nf(x) = (1/2) xTAx + bTx + c (2)\nstarting from the initialization x0.\nTo review, vanilla gradient descent is defined by the iteration:\nxt+1 = xt − η∇f(xt).\nMeanwhile, gradient descent with Polyak (also called “heavy ball”) momentum (Polyak, 1964; Sutskever et al., 2013; Goodfellow et al., 2016) is defined by the iteration:\nvt+1 = βvt − η∇f(xt), xt+1 = xt + vt+1,\nwhere vt is a “velocity” vector and 0 ≤ β < 1 is the momentum coefficient. For β = 0 the algorithm reduces to vanilla GD.\nFinally, Nesterov momentum (Sutskever et al., 2013; Goodfellow et al., 2016) is an adaptation of Nesterov’s accelerated gradient (Nesterov, 1983) for deep learning, defined by the iteration:\nvt+1 = βvt − η∇f(xt + βvt), xt+1 = xt + vt+1,\nwhere vt is a “velocity” vector and 0 ≤ β < 1 is the momentum coefficient. For β = 0 the algorithm reduces to vanilla GD.\nAll three of these algorithms share a special property: on quadratic functions, they act independently along each Hessian eigenvector. 
That is, if we express the iterates in the Hessian eigenvector basis, then in this basis the coordinates evolve independently of one another under gradient descent. Proposition 1. Consider running vanilla gradient descent on the quadratic objective (2) starting from x0. Let (q, a) be an eigenvector/eigenvalue pair of A. If a > 2/η, then the sequence {qTxt} will diverge.\nProof. The update rule for gradient descent on this quadratic function is:\nxt+1 = xt − η(Axt + b) = (I − ηA)xt − ηb.\nTherefore, the quantity qTxt evolves under gradient descent as:\nqTxt+1 = qT(I − ηA)xt − η qTb = (1 − ηa)qTxt − η qTb (using qTA = aqT).\nDefine x̃t = qTxt + (1/a) qTb, and note that {qTxt} diverges if and only if {x̃t} diverges.\nThe quantity x̃t evolves under gradient descent according to the simple rule:\nx̃t+1 = (1 − ηa)x̃t.\nSince η > 0, if a > 2/η then (1 − ηa) < −1, so the sequence {x̃t} will diverge.\nWe now prove analogous results for Nesterov and Polyak momentum.\nTheorem 1. Consider running Nesterov momentum on the quadratic objective (2) starting from any initialization. Let (q, a) be an eigenvector/eigenvalue pair of A. If a > (1/η) · (2 + 2β)/(1 + 2β), then the sequence {qTxt} will diverge.\nProof. The update rules for Nesterov momentum on this quadratic function are:\nvt+1 = β(I − ηA)vt − ηb − ηAxt, xt+1 = xt + vt+1.\nUsing the fact that vt = xt − xt−1, we can rewrite this as a recursion in xt alone:\nxt+1 = xt + β(I − ηA)(xt − xt−1) − ηb − ηAxt = (1 + β)(I − ηA)xt − β(I − ηA)xt−1 − ηb.\nDefine x̃t = qTxt + (1/a) qTb, and note that qTxt diverges iff x̃t diverges. It can be seen that x̃t evolves as:\nx̃t+1 = (1 + β)(1 − ηa)x̃t − β(1 − ηa)x̃t−1.\nThis is a linear homogeneous second-order difference equation. By Theorem 2.37 in Elaydi (2005), since η > 0 and β < 1, if a > (1/η) · (2 + 2β)/(1 + 2β) then this recurrence diverges.\nThe following result previously appeared in Goh (2017). Theorem 2. Consider running Polyak momentum on the quadratic objective (2) starting from any initialization. 
Let (q, a) be an eigenvector/eigenvalue pair of A. If a > (1/η)(2 + 2β), then the sequence {qTxt} will diverge.\nProof. Using the fact that vt = xt − xt−1, we can re-write the Polyak momentum recursion as a recursion in x alone:\nxt+1 = xt + βvt − η∇f(xt) = xt + (β(xt − xt−1) − η∇f(xt)) = (1 + β)xt − βxt−1 − η∇f(xt).\nFor the quadratic objective (2), this update rule amounts to:\nxt+1 = (1 + β)xt − βxt−1 − η(Axt + b) = (1 + β − ηA)xt − βxt−1 − ηb.\nMultiplying by qT, we obtain:\nqTxt+1 = (1 + β − ηa)qTxt − βqTxt−1 − ηqTb.\nNow, define x̃t = qTxt + (1/a) qTb. Note that qTxt diverges iff x̃t diverges. It can be seen that x̃t evolves as:\nx̃t+1 = (1 + β − ηa)x̃t − βx̃t−1.\nThis is a linear homogeneous second-order difference equation. By Theorem 2.37 in Elaydi (2005), since η > 0 and β < 1, if a > (1/η)(2 + 2β) then this recurrence diverges." }, { "heading": "C CROSS-ENTROPY LOSS", "text": "In this appendix, we explain why the sharpness decreases at the end of training when the cross-entropy loss is used. Before considering the full multiclass case, let us first consider the simpler case of binary classification with the logistic loss.\nBinary classification with logistic loss: We consider a dataset {(xi, yi)}ni=1 ⊂ Rd × {−1, 1}, where the examples are vectors in Rd and the labels are binary {−1, 1}. We consider a neural network h : Rd × Rp → R which maps an input x ∈ Rd and a parameter vector θ ∈ Rp to a prediction h(x; θ) ∈ R. 
Let ` : R × {−1, 1} → R be the logistic loss function:\n`(z; y) = log(1 + exp(−zy)) = − log p where p = 1/(1 + exp(−zy)).\nThe second derivative of this loss function w.r.t z is:\n`′′(z; y) = p(1 − p).\nThe full training objective is:\nf(θ) = (1/n) ∑_{i=1}^{n} fi(θ) where fi(θ) = `(h(xi; θ); yi).\nThe Hessian of this training objective is the average of per-example Hessians:\n∇2f(θ) = (1/n) ∑_{i=1}^{n} ∇2fi(θ).\nFor any loss function `, we have the so-called Gauss-Newton decomposition (Martens, 2016; Bottou et al., 2018) of the per-example Hessian ∇2fi(θ):\n∇2fi(θ) = `′′(zi; yi)∇θh(xi; θ)∇θh(xi; θ)T + `′(zi; yi)∇2θh(xi; θ) where zi = h(xi; θ),\nwhere ∇θh(xi; θ) ∈ Rp is the gradient of the network output with respect to the weights, and `′ refers to the derivative of ` with respect to its first argument, the score.\nEmpirically, the first term in the decomposition (usually called the “Gauss-Newton matrix”) tends to dominate the second, which implies the following “Gauss-Newton approximation” to the Hessian:\n∇2f(θ) ≈ (1/n) ∑_{i=1}^{n} `′′(zi; yi)∇θh(xi; θ)∇θh(xi; θ)T (3)\n\nIn our experience, progressive sharpening affects ∇θh(xi; θ)∇θh(xi; θ)T. That is, ∇θh(xi; θ)∇θh(xi; θ)T tends to grow in scale continually during training. For the square loss `(z; y) = (1/2)(z − y)^2, the second derivative `′′(z; y) = 1 is constant, so the ∇2fi(θ) grow continually as well. In contrast, for the logistic loss, many of the `′′(zi; yi) decrease at the very end of training. Why is this? In Figure 9, we plot both the logistic loss `, and its second derivative `′′, as a function of the quantity yz, which is often called the “margin.”\nCrucially, observe that both ` and `′′ are decreasing in yz. Because the loss ` is decreasing in yz, once an example i is classified correctly (i.e. yizi > 0), the training objective can be optimized further by increasing the margin yizi. Because `′′ is also decreasing in yz, if the margin yizi increases, the term `′′(zi; yi) drops. 
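To make the Gauss-Newton approximation (3) concrete, the sketch below forms it explicitly for a small problem. The linear model z = wᵀx is a hypothetical stand-in for the network h (so ∇θh(x; θ) = x), and the data are synthetic; this is only an illustration of the formula, not our measurement code.

```python
import numpy as np

def gauss_newton_logistic(X, y, w):
    """Gauss-Newton approximation (3) to the Hessian of the logistic training
    loss, for a linear model z = w^T x (so grad_w h(x; w) = x)."""
    n = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-(X @ w) * y))  # per-example probability p
    lpp = p * (1.0 - p)                     # l''(z; y) = p(1 - p)
    # (1/n) * sum_i l''(z_i; y_i) x_i x_i^T
    return (X.T * lpp) @ X / n

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))                # synthetic inputs
y = np.where(rng.random(100) < 0.5, -1.0, 1.0)   # labels in {-1, +1}
w = rng.standard_normal(5)

G = gauss_newton_logistic(X, y, w)
# Each term l'' x x^T is positive semi-definite (since l'' >= 0), so G is
# too; its largest eigenvalue approximates the sharpness.
sharpness = np.linalg.eigvalsh(G).max()
```

Because every coefficient `l''(z; y) = p(1 − p)` is nonnegative, shrinking these coefficients (as happens when margins grow) can only pull the Gauss-Newton matrix down, which is the mechanism described in this appendix.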
Near the end of training, once most examples are classified correctly, gradient descent can easily increase the margins of all these examples by simply scaling up the final layer weight matrix. This causes the `′′(zi; yi) to drop. Therefore, even though progressive sharpening still applies to ∇θh(xi; θ)∇θh(xi; θ)T, the decrease in the `′′(zi; yi)’s pulls down the leading eigenvalue of the Gauss-Newton matrix in equation 3.\nThis effect is illustrated in Figure 10. Here, we train a network on the binary classification task of CIFAR-10 airplane vs. automobile, using the logistic loss. In Figure 10(e), we plot the margin yizi for 10 examples in the dataset. Notice that at the end of training, these margins all continually rise; this is because gradient descent “games” the objective by increasing the margins of successfully classified examples. When the margin yizi rises, the second derivative `′′(zi; yi) drops. This can be seen from Figure 10(f), where we plot `′′(zi; yi) for these same 10 examples. Now, all the while, the leading eigenvalue of the matrix (1/n) ∑_{i=1}^{n} ∇θh(xi; θ)∇θh(xi; θ)T keeps rising, as can be seen from Figure 10(d). However, because the `′′s are dropping, the leading eigenvalue of the Gauss-Newton matrix (1/n) ∑_{i=1}^{n} `′′(zi; yi)∇θh(xi; θ)∇θh(xi; θ)T starts to decrease at the end of training, as can be seen from the green line in Figure 10(c). Finally, since the leading eigenvalue of the Gauss-Newton matrix is an excellent approximation to the leading eigenvalue of the Hessian (i.e. the sharpness), the sharpness also drops at the end of training, as can be seen from the orange line in Figure 10(c).\nMulticlass classification with cross-entropy loss: We consider a dataset {(xi, yi)}ni=1 ⊂ Rd × {1, . . . , k}, where the examples are vectors in Rd and the labels are in {1, . . . , k}. We consider a neural network h : Rd × Rp → Rk which maps an input x ∈ Rd and a parameter vector θ ∈ Rp to a prediction h(x; θ) ∈ Rk. Let ` : Rk × {1, . . . 
, k} → R be the cross-entropy loss function:\n`(z; y) = − log( exp(z_y) / ∑_{j=1}^{k} exp(z_j) ) = − log p_y where p = exp(z) / ∑_{j} exp(z_j).\nThe Hessian ∇2`(z; y) ∈ Rk×k of this loss function w.r.t the class scores z is:\n∇2`(z; y) = diag(p) − ppT.\nNow, for any loss function ` : Rk × {1, . . . , k} → R, we have the Gauss-Newton decomposition:\n∇2fi(θ) = JTi [∇2zi `(zi; yi)] Ji + ∑_{j=1}^{k} [∇zi `(zi; yi)]_j ∇2θ hj(xi; θ),\nwhere zi = h(xi; θ) ∈ Rk are the logits for example i, Ji ∈ Rk×p is the network output-to-weights Jacobian for example i, ∇2zi`(zi; yi) ∈ Rk×k is the Hessian of ` w.r.t its input zi, and ∇2θhj(xi; θ) ∈ Rp×p is the Hessian matrix of the j-th output of the network h on the i-th example. Dropping the second term yields the Gauss-Newton approximation:\n∇2fi(θ) ≈ JTi [∇2zi `(zi; yi)] Ji.\nAs in the binary classification case discussed above: at the end of training, for many examples i, the yi entry of pi will tend toward 1 and the other entries of pi will tend to 0. Once this occurs, the matrix diag(pi) − pipTi will broadly decrease in scale: the diagonal entries of this matrix are of the form p(1 − p), which goes to zero as p → 0 or p → 1; and the off-diagonal entries are of the form −pq, which also goes to zero if p → 0, q → 1 or p → 0, q → 0 or p → 1, q → 0.\nThis effect is illustrated in Figure 11. Here, we train a network on CIFAR-10 using the cross-entropy loss. In Figure 11(e), for ten examples i in the dataset (with output scores zi = h(xi) ∈ Rk), we plot the margin zi[yi] − max_{j≠yi} zi[j], which is the difference between the score of the correct class yi and the score of the next-highest class. Observe that for all of these examples, this margin rises at the end of training. In Figure 11(f), for those same ten examples, we plot the quantity pi[yi](1 − pi[yi]). Observe that for all of these examples, this quantity continually decreases at the end of training. 
Now, all the while, the leading eigenvalue of the matrix (1/n) ∑_{i=1}^{n} JTi Ji keeps rising, as can be seen from Figure 11(d). However, because ∇2`(zi; yi) is decreasing, the leading eigenvalue of the Gauss-Newton matrix (1/n) ∑_{i=1}^{n} JTi ∇2`(zi; yi) Ji starts to decrease at the end of training, as can be seen from the green line in Figure 11(c). Finally, since the leading eigenvalue of the Gauss-Newton matrix is an excellent approximation to the leading eigenvalue of the Hessian (i.e. the sharpness), the sharpness also drops at the end of training, as can be seen from the orange line in Figure 11(c)." }, { "heading": "D EMPIRICAL STUDY OF PROGRESSIVE SHARPENING", "text": "In this appendix, we empirically study how problem parameters such as network width, network depth, and dataset size affect the degree to which progressive sharpening occurs. To study progressive sharpening on its own, without the confounding factor of instability, we train neural networks using gradient flow (Runge-Kutta) rather than gradient descent. Informally speaking, gradient flow does “what gradient descent would do if gradient descent didn’t have to worry about instability.”\nWe observe that progressive sharpening occurs to a greater degree: (1) for narrower networks than for wider networks (which is consistent with infinite-width NTK theory), (2) for deeper networks than for shallower networks, and (3) for larger datasets than for smaller datasets.\nThe effect of width: When networks parameterized in a certain way (the “NTK parameterization”) are made infinitely wide and trained using gradient flow, the Hessian moves a vanishingly small amount during training, implying that no progressive sharpening occurs (Jacot et al., 2018; Lee et al., 2019; Jacot et al., 2020; Li & Liang, 2018). Therefore, a natural hypothesis is that progressive sharpening might attenuate as network width increases. 
We now run an experiment which supports this hypothesis.\nWe consider fully-connected architectures with two hidden layers and tanh activations, with widths {32, 64, 128, 256, 512, 1024}. We train on a size-5,000 subset of CIFAR-10 using the cross-entropy loss. We train using gradient flow (details in §I.5). We consider both NTK parameterization and standard parameterization (Lee et al., 2019).\nIn Figure 12, for each width, we train NTK-parameterized networks from five different random initializations, and plot the evolution of the sharpness during gradient flow. Observe that the maximum sharpness along the gradient flow trajectory is larger for narrow networks, and smaller for wide networks. (As elsewhere in this paper, note that the sharpness drops at the end of training due to the cross-entropy loss.) In Figure 13, we plot summary statistics from these training runs. Namely, define λmax as the maximum sharpness over the gradient flow trajectory, and define λ0 as the initial sharpness. In Figure 13(a), for each width we plot the mean and standard deviation of the maximum sharpness λmax over the five different random initializations. Observe that λmax becomes smaller, on average, as the width is made larger. In Figure 13(b), for each width we plot the mean and standard deviation of the maximum sharpness gain λmax/λ0 over the five different random initializations. Observe that the maximum sharpness gain λmax/λ0 also becomes smaller as the width is made larger. NTK theory suggests that λmax/λ0 should deterministically tend to 1 as the width→∞, and Figure 13(b) is consistent with this prediction. In Figures 14 and 15, we conduct similar experiments, but with standard parameterization rather than NTK parameterization. Similar to NTK parameterization, we observe in Figure 14 that the sharpness rises more for narrow networks than for wide networks.\nEffect of depth We now explore the effect of network depth on progressive sharpening. 
We use gradient flow to train fully-connected tanh architectures of width 200 and varying depths — ranging from 1 hidden layer to 4 hidden layers. We train on a 5k subset of CIFAR-10 using both crossentropy loss (in Figure 16) and square loss (in Figure 17). For each depth, we train from five different random initializations (different colors). Observe that progressive sharpening occurs to a greater degree as network depth increases.\nEffect of dataset size We now explore the effect of dataset size on progressive sharpening. We use gradient flow to train a network on different-sized subsets of CIFAR-10. The network is a 2- hidden-layer, width-200 fully-connected tanh architecture, and we train using cross-entropy loss. The results are shown in Figure 18. Observe that progressive sharpening occurs to a greater degree as dataset size increases." }, { "heading": "E SPEED OF DIVERGENCE ON QUADRATIC TAYLOR APPROXIMATION", "text": "If the sharpness at some iterate is strictly greater than 2/η, then gradient descent with step size η is guaranteed to diverge if run on the quadratic Taylor approximation around that iterate. However, the speed of this divergence could conceivably be slow — in particular, the train loss might continue to decrease for many iterations before it starts to increase. In this appendix we empirically demonstrate, to the contrary, that at the Edge of Stability, gradient descent diverges quickly if, at some iterate, we start running gradient descent on the quadratic Taylor approximation around that iterate.\nWe consider the fully-connected tanh network from section 3, trained on a 5,000-sized subsample of CIFAR-10 using both cross-entropy loss and MSE loss. At some timestep t0 during training, we suddenly switch from running gradient descent on the real neural training objective, to running gradient descent on the quadratic Taylor approximation around the iterate at step t0. 
We do this for three timesteps before gradient descent has entered the Edge of Stability, and three afterwards. Figure 20 shows the results for cross-entropy loss, and Figure 21 shows the (similar) results for MSE loss. Before entering the Edge of Stability (top row), gradient descent on the quadratic Taylor approximation behaves similar to gradient descent on the real neural training objective — that is, the orange line almost overlaps the blue line. Yet after entering the Edge of Stability (bottom row), gradient descent on the quadratic Taylor approximation quickly diverges, whereas gradient descent on the real neural training objective makes consistent (if choppy) progress.\nIn short, when gradient descent is not at the Edge of Stability, the quadratic Taylor approximation serves as a good model for the local progress of gradient descent. But when gradient descent is at the Edge of Stability, the quadratic Taylor approximation is an extremely poor model for the local progress of gradient descent. It is conceivable that there exists some simple modification to the quadratic Taylor model which would fix this issue (e.g. perhaps if one ignores a certain direction, the quadratic Taylor model is accurate). Nevertheless, unless/until such a fix is discovered, it is unclear why quadratic Taylor approximations should yield any insight into the local behavior of gradient descent." }, { "heading": "F “OPTIMAL” STEP SIZE SELECTION", "text": "One heuristic for setting the step size of gradient descent is to set the step size at iteration t to ηt = 1/λt, where λt is the sharpness at iteration t. While this heuristic is computationally impractical due to the time required to compute the sharpness at each iteration, it is often regarded as an ideal, for instance in LeCun et al. (1998) (Eq. 39), LeCun et al. (1993), and Schaul et al. (2013) (Eq 8). 
The motivation for this heuristic is: if all that is known about the training objective is that the local sharpness is λ, then a step size of 1/λ maximizes the guaranteed decrease in the training objective that would result from taking a step.\nFirst, we demonstrate (on a single numerical example) that the dynamic step size ηt = 1/λt is outperformed by the baseline approach of gradient descent with a fixed step size ηt = 1/λ0, where λ0 is the sharpness at initialization. In Figure 22, we train the network from §3 using both the dynamic η = 1/λt step size heuristic as well as the baseline fixed step size of η = 1/λ0. Observe that the η = 1/λ0 baseline outperforms the 1/λt heuristic. Intuitively, because of progressive sharpening, the ηt = 1/λt heuristic anneals the step size, and therefore ends up taking steps that are suboptimally small. In contrast, while the ηt = 1/λ0 baseline quickly becomes unstable, this instability is apparently a worthwhile “price to pay” in return for the benefit of taking larger steps." }, { "heading": "MSE loss", "text": "Another natural idea is to dynamically set the step size at iteration t to ηt = 1.9/λt. This step size rule takes larger steps than the η = 1/λt rule while still remaining stable. In Figure 23, we compare this ηt = 1.9/λt rule to a baseline approach of gradient descent with a fixed step size ηt = 1.9/λ0, where λ0 is the sharpness at initialization. Observe that the baseline of a fixed 1.9/λ0 step size outperforms the dynamic ηt = 1.9/λt rule." }, { "heading": "MSE loss", "text": "" }, { "heading": "G EVOLUTION OF SHARPNESS DURING SGD", "text": "In this appendix, we briefly illustrate how the sharpness evolves during stochastic gradient descent. In Figure 24, we train the tanh network from §3 using SGD with both cross-entropy loss (top) and mean squared error (bottom). We train using a range of batch sizes (different colors). We observe the following:\n1. 
During large-batch SGD, the sharpness behaves similarly to full-batch gradient descent: it rises to 2/η (marked by the black horizontal dashed line) and then hovers just above that value.\n2. Consistent with prior reports, we find that the smaller the batch size, the lower the sharpness (Keskar et al., 2016; Jastrzębski et al., 2017; 2019; 2020).\n3. Notice that when training with cross-entropy loss at batch size 8 (the blue line), the sharpness decreases throughout most of training. The train accuracy (not pictured) is only 66% when the sharpness starts to decrease, which suggests that the cause of this decrease is unrelated to the effect described in Appendix C, whereby the sharpness decreases at the end of training. Figure 5(a) of Jastrzębski et al. (2020) also depicts a network where the sharpness decreases during SGD training." }, { "heading": "H SGD ACCLIMATES TO THE HYPERPARAMETERS", "text": "In this appendix, we conduct an experiment which suggests that some version of “Edge of Stability” may hold for SGD.\nOne way to interpret our main findings is that gradient descent “acclimates” to the step size in such a way that each training update sometimes increases, and sometimes decreases, the training loss, yet an update with a smaller step size would always decrease the training loss. We now demonstrate that this interpretation may generalize to SGD. In particular, we will demonstrate that SGD seems to “acclimate” to the step size and batch size in such a way that an actual update sometimes increases and sometimes decreases the loss in expectation, yet an update with a larger step size or smaller batch size would almost always increase the loss in expectation, and a step with a smaller step size or a larger batch size would almost always decrease the loss in expectation.\nIn Figure 25, we train the tanh network from §3 with MSE loss, using SGD with step size 0.01 and batch size 32. 
We periodically compute the training loss (over the full dataset) and plot these losses on the left pane of Figure 25. Observe that the training loss does not decrease monotonically, but of course this is not surprising — SGD is a random algorithm. However, what may be more surprising is that SGD is not even decreasing the training loss in expectation. On the right pane of Figure 25, every 500 steps during training, we use the Monte Carlo method to approximately compute the expected change in training loss that would result from an SGD step (the expectation here is over the randomness involved in selecting the minibatch). Observe that at many points during training, an SGD step would decrease the loss (as desired) in expectation, but at other points, an SGD step would increase the loss in expectation.\nIn Figure 26(a), while training that network, we compute the expected change in training loss that would result from taking an SGD step with the same step size used during training (i.e. 0.01), but half the batch size used during training (i.e. 16). We observe that an SGD step with half the batch size would consistently cause an increase in the training loss in expectation. In Figure 26(b) we repeat this experiment, but with twice the batch size used during training (i.e. 64). Notice that an SGD step with twice the batch size would consistently cause a decrease in the training loss in expectation. In Figures 26(c) and (d), we repeat this experiment with the step size; we observe that an SGD step with a larger step size (0.02) would consistently increase the training loss in expectation, while an SGD step with a smaller step size (0.005) would consistently decrease the training loss in expectation. 
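The Monte Carlo estimator described above can be sketched in a few lines. In the sketch below, a hypothetical linear least-squares problem stands in for the tanh network, and all names (`expected_loss_change`, the synthetic data) are illustrative assumptions rather than our actual experiment code:

```python
import numpy as np

def expected_loss_change(w, X, y, eta, batch_size, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[f(w - eta * g_B) - f(w)], where g_B is the
    gradient on a random minibatch B and f is the full-dataset loss."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    full_loss = lambda v: 0.5 * np.mean((X @ v - y) ** 2)
    base = full_loss(w)
    deltas = []
    for _ in range(n_samples):
        idx = rng.choice(n, size=batch_size, replace=False)
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size  # minibatch gradient
        deltas.append(full_loss(w - eta * g) - base)
    return float(np.mean(deltas))

rng = np.random.default_rng(1)
X, y = rng.standard_normal((256, 10)), rng.standard_normal(256)
w = rng.standard_normal(10)

# Away from a stationary point, a small enough step size gives
# E[delta] ~= -eta * ||grad f(w)||^2 < 0, i.e. the SGD step decreases the
# loss in expectation; a larger step size need not.
delta_small = expected_loss_change(w, X, y, eta=1e-3, batch_size=32)
```

In our actual experiments, this expectation is estimated at a fixed iterate of the real network for several alternative step sizes and batch sizes, which is what produces the curves in Figures 25-27.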
In each case, we observe that, after a brief period at the beginning of training, each SGD update sometimes increases and sometimes decreases the training loss in expectation.\nTherefore, at least for this single network, we can conclude that no matter the hyperparameters, SGD quickly navigates to, and then lingers in, regions of the loss landscape in which an SGD update with those hyperparameters sometimes increases, and sometimes decreases the training loss in expectation, yet an SGD update with a smaller step size or larger batch size would consistently decrease the loss in expectation, and an SGD update with a larger step size or smaller batch size would consistently increase the loss in expectation." }, { "heading": "I EXPERIMENTAL DETAILS", "text": "I.1 VARYING ARCHITECTURES ON 5K SUBSET OF CIFAR-10\n\nDataset. The dataset consists of the first 5,000 examples from CIFAR-10. To preprocess the dataset, we subtracted the mean from each channel, and then divided each channel by the standard deviation (where both the mean and stddev were computed over the full CIFAR-10 dataset, not the 5k subset).\n\nArchitectures. We experimented with two architecture families: fully-connected and convolutional. For each of these two families, we experimented with several different activation functions, and for convolutional networks we experimented with both max pooling and average pooling.\n\nThe PyTorch code for e.g. the fully-connected ReLU network is as follows:\n\nnn.Sequential( nn.Flatten(), nn.Linear(3072, 200, bias=True), nn.ReLU(), nn.Linear(200, 200, bias=True), nn.ReLU(), nn.Linear(200, 10, bias=True)\n)\n\nNetworks with other activation functions would have nn.ReLU() replaced by nn.ELU(), nn.Tanh(), nn.Softplus(), or nn.Hardtanh().\nThe PyTorch code for e.g. 
the convolutional ReLU network with max-pooling is as follows:\n\nnn.Sequential( nn.Conv2d(3, 32, bias=True, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(32, 32, bias=True, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), nn.Flatten(), nn.Linear(2048, 10, bias=True)\n)\n\nNetworks with other activation functions would have nn.ReLU() replaced by nn.ELU() or nn.Tanh(), and networks with average pooling would have nn.MaxPool2d(2) replaced by nn.AvgPool2d(2).\nFor all of these networks, we use the default PyTorch initialization. That is, both fully-connected layers and convolutional layers have the entries of their weight matrix and bias vector sampled i.i.d. from Uniform(−1/√fan_in, 1/√fan_in).\nLoss functions. For a k-class classification problem, if the network outputs are z ∈ Rk and the correct class is i ∈ [k], then the mean squared error (MSE) loss is defined as (1/2)[(z[i] − 1)^2 + ∑_{j≠i} z[j]^2]. That is, we encode the correct class with a “1” and the other classes with “0.” The cross-entropy loss is defined as − log( exp(z[i]) / ∑_{j=1}^{k} exp(z[j]) )." }, { "heading": "I.2 STANDARD ARCHITECTURES ON CIFAR-10", "text": "To preprocess the CIFAR-10 dataset, we subtracted the mean from each channel, and then divided each channel by the standard deviation.\nSince training with full-batch gradient descent is slow, we opted to experiment on relatively shallow networks. The VGG networks (both with and without BN) are VGG-11’s, from the implementation here: https://github.com/chengyangfu/pytorch-vgg-cifar10/blob/master/vgg.py, with the dropout layers removed. The ResNet is the (non-fixup) ResNet32 implemented here: https://github.com/hongyi-zhang/Fixup.\nFor the two networks with batch normalization, running gradient descent with full-dataset batch normalization would not have been feasible under our GPU memory constraints. Therefore, we instead used ghost batch normalization (Hoffer et al., 2017) with 50 ghost batches of size 1,000. 
This means that we divided the 50,000 examples in CIFAR-10 into 50 fixed groups of size 1,000 each, and defined the overall objective function to be the average of 50 fixed batch-wise objectives. To correctly compute the overall gradient of this training objective, we can just run backprop 50 times (once on each group) and average the resulting gradients.\nComputing the sharpness over the full CIFAR-10 dataset would have been computationally expensive. Therefore, as an approximation, we instead computed the sharpness over just the first 5,000 examples in the dataset (or, for the BN networks, over the first 5 batches out of 50)." }, { "heading": "I.3 BATCH NORMALIZATION EXPERIMENTS", "text": "We used the CNN architecture from §J (described above in §I.1), but with a BatchNorm2d() layer inserted after each activation layer.\nSince our GPUs did not have enough memory to run batch normalization with the full dataset of size 5,000, we used ghost batch normalization with five ghost batches of size 1,000 (see §I.2 for details)." }, { "heading": "I.4 TRANSFORMER ON WIKITEXT-2", "text": "We used both the Transformer architecture and the preprocessing setup from the official PyTorch (Paszke et al., 2019) word-level language modeling tutorial: https://github.com/pytorch/examples/tree/master/word_language_model. We used the settings ninp=200, nhead=2, nhid=200, nlayers=2, dropout=0. We set bptt = 35, which means that we divided the corpus into chunks of 35 tokens, and trained the network, using the negative log likelihood loss, to predict each token from the preceding tokens in the same chunk. Since computing the sharpness over the full dataset would not have been computationally practical, we computed the sharpness over a subset comprising 2500 training examples." }, { "heading": "I.5 RUNGE-KUTTA", "text": "We used the “RK4” fourth-order Runge-Kutta algorithm (Press et al., 1992) to numerically integrate the gradient flow ODE. The Runge-Kutta algorithm requires a step size.
Rather than use a sophisticated algorithm for adaptive step size control, we decided to take advantage of the fact that we were already periodically computing the sharpness: at each step, we set the step size to α/λ, where α is a tunable parameter and λ is the most recent value for the sharpness. We set α = 1 or α = 0.5." }, { "heading": "I.6 RANDOM PROJECTIONS", "text": "In order to ascertain whether gradient descent at step size η initially followed the gradient flow trajectory (and, if so, for how long), we monitored the $\ell_2$ distance in weight space between the gradient flow solution at time t and the gradient descent iterate at step t/η. One way to do this would be as follows: (a) when running gradient flow, save the weights after every ∆t units of time, for some parameter ∆t; (b) when running gradient descent, save the weights at each (∆t/η)-th step; (c) plot the difference between these two sequences. (Note that this approach requires ∆t to be divisible by η.)\nWe essentially used this approach, but with one modification: regularly saving the entire network weight vector would have consumed a large amount of disk space, so we instead saved low-dimensional random projections of the network weights. To be clear, let d be the number of network weights, and let k be the number of random projections (a tunable parameter chosen such that k ≪ d). Then we first generated a matrix $M \in \mathbb{R}^{k \times d}$ by sampling each entry i.i.d. from the standard normal distribution. During training, rather than periodically save the whole weight vector (a d-dimensional vector), we premultiplied this vector by the matrix M to obtain a k-dimensional vector, and we periodically saved these vectors instead. Then we plotted the $\ell_2$ distance between the low-dimensional vectors from gradient flow, and the low-dimensional vectors from gradient descent."
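The random-projection bookkeeping described in §I.6 can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the function names (make_projection, project, projected_distance) are our own:

```python
import numpy as np

def make_projection(k, d, seed=0):
    # M in R^{k x d}, entries sampled i.i.d. from the standard normal
    # distribution; in practice k << d so the saved vectors are small.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((k, d))

def project(M, weights):
    # Compress a d-dimensional weight vector to a k-dimensional one.
    return M @ weights

def projected_distance(M, w_flow, w_gd):
    # l2 distance between the projected gradient-flow and gradient-descent iterates.
    return np.linalg.norm(project(M, w_flow) - project(M, w_gd))
```

The same fixed matrix M must be used for both trajectories, so that identical weight vectors always project to identical low-dimensional vectors (and hence to distance zero).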
}, { "heading": "J EXPERIMENTS: VARY ARCHITECTURES", "text": "In this appendix, we fix the task as that of fitting a 5,000-sized subset of CIFAR-10, and we verify our main findings across a broad range of architectures.\nProcedure We consider fully-connected networks and convolutional networks, the latter with both max-pooling and average pooling. For all of these, we consider tanh, ReLU, and ELU activations, and for fully-connected networks we moreover consider softplus and hardtanh activations. We train each network with both cross-entropy and MSE loss. See §I.1 for full experimental details.\nIn each case, we first use the Runge-Kutta method to numerically integrate the gradient flow ODE (see §I.5 for details). For architectures that give rise to continuously differentiable training objectives, the gradient flow ODE is guaranteed to have a unique solution (which we call the gradient flow trajectory), and Runge-Kutta will return a numerical approximation to this solution. On the other hand, for architectures with ReLU, hardtanh, or max-pooling, the training objective is not continuously differentiable, so the gradient flow ODE does not necessarily have a unique solution, and there are no guarantees a priori on what Runge-Kutta will return (more on this below under the “findings” heading). Still, in both cases, since our implementation of Runge-Kutta automatically adjusts the step size based on the local sharpness in order to remain stable, the Runge-Kutta trajectory can be roughly viewed as “what gradient descent would do if instability was not an issue.”\nWe then run gradient descent at a range of step sizes. 
These step sizes η were chosen by hand so that the quantity 2/η would be spaced uniformly between λ0 (the sharpness at initialization) and λmax (the maximum sharpness along the Runge-Kutta trajectory).\nResults During Runge-Kutta, we observe that the sharpness tends to continually increase during training (progressive sharpening), with the exception that when cross-entropy loss is used, the sharpness decreases at the very end of training, as explained in Appendix C.\nDuring gradient descent with step size η, we observe that once the sharpness reaches 2/η, it ceases to increase much further, and instead hovers right at, or just above, the value 2/η. For reasons unknown, it tends to be true that for MSE loss, the sharpness hovers just a tiny bit above the value 2/η, while for cross-entropy loss the gap between the sharpness and the value 2/η is a bit larger.\nFor each step size, we monitor the distance between the gradient descent trajectory and the Runge-Kutta trajectory — that is, we monitor the distance between the Runge-Kutta iterate at time t and the gradient descent iterate at step t/η (see §I.6 for details). Empirically, for architectures that give rise to continuously differentiable training objectives, we observe that this distance is nearly zero before the sharpness hits 2/η, and it starts to climb immediately afterwards. This means that gradient descent closely tracks the gradient flow trajectory so long as the sharpness remains less than 2/η. Note that this finding was not a foregone conclusion: gradient descent is guaranteed to track the gradient flow trajectory in the limit of infinitesimal step sizes (since gradient descent is the forward Euler discretization of the gradient flow ODE), but for non-infinitesimal step sizes, there is discretization error, which is studied in Barrett & Dherin (2021).
Our empirical finding is essentially that this discretization error is small compared to the difference between trajectories caused by instability.\nOn the other hand, for architectures with non-differentiable components such as ReLU or max-pooling, we sometimes observe that gradient descent tracks the Runge-Kutta trajectory so long as the sharpness remains less than 2/η, but we also sometimes observe that the gradient descent trajectories differ from one another (and from Runge-Kutta) from the beginning of training. In the former case, we can infer that the gradient flow trajectory apparently does exist, and is returned by Runge-Kutta; in the latter case, we can infer that either (a) the gradient flow trajectory does not exist, or (b) that it does exist (and is returned by Runge-Kutta), but the step sizes we used for gradient descent were too large to track it." }, { "heading": "J.1 FULLY-CONNECTED TANH NETWORK", "text": "" }, { "heading": "J.1.1 SQUARE LOSS", "text": "" }, { "heading": "J.1.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.2 FULLY-CONNECTED ELU NETWORK", "text": "" }, { "heading": "J.2.1 SQUARE LOSS", "text": "" }, { "heading": "J.2.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.3 FULLY-CONNECTED SOFTPLUS NETWORK", "text": "" }, { "heading": "J.3.1 SQUARE LOSS", "text": "" }, { "heading": "J.3.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.4 FULLY-CONNECTED RELU NETWORK", "text": "Note that since the ReLU activation function is not continuously differentiable, the training objective is not continuously differentiable, and so a unique gradient flow trajectory is not guaranteed to exist."
}, { "heading": "J.4.1 SQUARE LOSS", "text": "" }, { "heading": "J.4.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.5 FULLY-CONNECTED HARD TANH NETWORK", "text": "Note that since the hardtanh function is not continuously differentiable, the training objective is not continuously differentiable, and so a unique gradient flow trajectory is not guaranteed to exist." }, { "heading": "J.5.1 SQUARE LOSS", "text": "" }, { "heading": "J.5.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.6 CONVOLUTIONAL TANH NETWORK WITH MAX POOLING", "text": "Note that since max-pooling is not continuously differentiable, the training objective is not continuously differentiable, and so a unique gradient flow trajectory is not guaranteed to exist." }, { "heading": "J.6.1 SQUARE LOSS", "text": "" }, { "heading": "J.6.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.7 CONVOLUTIONAL ELU NETWORK WITH MAX POOLING", "text": "Note that since max-pooling is not continuously differentiable, the training objective is not continuously differentiable, and so a unique gradient flow trajectory is not guaranteed to exist." }, { "heading": "J.7.1 SQUARE LOSS", "text": "" }, { "heading": "J.7.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.8 CONVOLUTIONAL RELU NETWORK WITH MAX POOLING", "text": "Note that since ReLU and max-pooling are not continuously differentiable, the training objective is not continuously differentiable, and so a unique gradient flow trajectory is not guaranteed to exist." 
}, { "heading": "J.8.1 SQUARE LOSS", "text": "" }, { "heading": "J.8.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.9 CONVOLUTIONAL TANH NETWORK WITH AVERAGE POOLING", "text": "" }, { "heading": "J.9.1 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.10 CONVOLUTIONAL ELU NETWORK WITH AVERAGE POOLING", "text": "" }, { "heading": "J.10.1 SQUARE LOSS", "text": "" }, { "heading": "J.10.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "J.11 CONVOLUTIONAL RELU NETWORK WITH AVERAGE POOLING", "text": "Note that since ReLU is not continuously differentiable, the training objective is not continuously differentiable, and so a unique gradient flow trajectory is not guaranteed to exist." }, { "heading": "J.11.1 SQUARE LOSS", "text": "" }, { "heading": "J.11.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "K BATCH NORMALIZATION EXPERIMENTS", "text": "In this appendix, we demonstrate that our findings hold for networks that are trained with batch normalization (BN) (Ioffe & Szegedy, 2015). We experiment on a size-5,000 subset of CIFAR-10, and we consider convolutional networks with three different activation functions: ELU (Figures 70-71), tanh (Figures 72-73), and ReLU (Figures 74-75). See §I.3 for experimental details.\nEmpirically, our findings hold for batch-normalized networks. The one catch is that when training batch-normalized networks at very small step sizes, it is apparently inadequate to measure the sharpness directly at the iterates themselves, as we do elsewhere in the paper. Namely, observe that in Figure 71(e), when we run gradient descent at the red step size, the sharpness (measured directly at the iterates) plateaus a bit beneath the value 2/η. At first, this might sound puzzling: after all, if the sharpness is less than 2/η then gradient descent should be stable. The explanation is that the sharpness in between successive iterates does in fact cross 2/η. In Figure 71(b), we track the maximum sharpness on the path “in between” successive iterates.
(To estimate the maximum sharpness between a pair of successive iterates, we compute the sharpness at a grid of eight points spaced evenly between them, and then take the maximum of these values.) Observe that this quantity does rise to 2/η and hover there. We do not know why measuring the sharpness between iterates is necessary for batch-normalized networks, whereas for non-BN networks it suffices to measure the sharpness only at the iterates themselves.\nIn §K.1, we reconcile these findings with Santurkar et al. (2018)." }, { "heading": "K.1 RELATION TO SANTURKAR ET AL. (2018)", "text": "We have demonstrated that the sharpness hovers right at (or just above) the value 2/η when both BN and non-BN networks are trained using gradient descent at reasonable step sizes. Therefore, at least in the case of full-batch gradient descent, it cannot be said that batch normalization decreases the sharpness (i.e. improves the local L-smoothness) along the optimization trajectory.\nSanturkar et al. (2018) argued that batch normalization improves the effective smoothness along the optimization trajectory, where effective smoothness is defined as the Lipschitz constant of the gradient in the update direction (i.e. the negative gradient direction, for full-batch GD). That is, given an objective function f, an iterate θ, and a distance α, the effective smoothness of f at parameter θ and distance α is defined in Santurkar et al. (2018) as\n$$\sup_{\gamma \in [0,\alpha]} \frac{\|\nabla f(\theta) - \nabla f(\theta - \gamma \nabla f(\theta))\|_2}{\|\gamma \nabla f(\theta)\|_2}$$\nwhere the sup can be numerically approximated by evaluating the given ratio at several values γ spaced uniformly between 0 and α.\nIn Figure 76, we train two ReLU CNNs — one with BN, one without — at the same set of step sizes, and we monitor both the sharpness (i.e. the L-smoothness) and the effective smoothness. When computing the effective smoothness, we use a distance α = η.
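The numerical approximation of effective smoothness described above (evaluating the ratio at several values of γ between 0 and α) might look as follows in NumPy. This is a sketch under our own naming, assuming a generic grad callable, not the authors' implementation:

```python
import numpy as np

def effective_smoothness(grad, theta, alpha, n_grid=10):
    """Approximate sup over gamma in [0, alpha] of
    ||grad(theta) - grad(theta - gamma * grad(theta))|| / ||gamma * grad(theta)||
    by evaluating the ratio on a uniform grid of gamma values (gamma = 0 is
    skipped, since the ratio is 0/0 there)."""
    g = grad(theta)
    ratios = []
    for gamma in np.linspace(alpha / n_grid, alpha, n_grid):
        num = np.linalg.norm(g - grad(theta - gamma * g))
        den = np.linalg.norm(gamma * g)
        ratios.append(num / den)
    return max(ratios)
```

As a sanity check: for a quadratic f(θ) = (λ/2)‖θ‖², whose gradient is λθ, the ratio equals λ for every γ, so the estimate recovers the curvature exactly.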
Observe that for both the BN and the non-BN network, the effective smoothness initially hovers around zero, but once gradient descent enters the Edge of Stability, the effective smoothness jumps to the value 2/η and then remains there. Thus, at least for full-batch gradient descent on this particular architecture, batch normalization does not improve the effective smoothness along the optimization trajectory. (Despite this, note that for each step size, the BN network trains faster than the non-BN network, confirming that BN does accelerate training.)\nNote that this finding is actually consistent with Figure 4(c) in Santurkar et al. (2018), which is meant to show that BN improves effective smoothness when training a VGG network using SGD. Their Figure 4(c) shows that during SGD with step size η = 0.1, the effective smoothness hovers around the value 20 for both the BN and the non-BN network. Since 20 = 2/(0.1), this is fully consistent with our findings (though they use SGD rather than full-batch GD). Figure 4(c) does show that the effective smoothness behaves more regularly for the BN network than for the non-BN network. But we disagree with their interpretation of this figure as demonstrating that BN improves the effective smoothness during training.\nThe other piece of evidence in Santurkar et al. (2018) in support of the argument that batch normalization improves the effective smoothness during training is their Figure 9(c). This figure shows that a deep linear network (DLN) trained without BN has a much larger (i.e. worse) effective smoothness during training than a DLN trained with BN. However, for this figure, the distance α used to compute effective smoothness was larger than the training step size η by a factor of 30. The effective smoothness at distances larger than the step size does not affect training. We have verified that\nwhen effective smoothness is computed at a distance equal to the training step size (i.e. 
α = η), the effective smoothness for the DLN with BN and for the DLN without BN both hover right at 2/η.\nSpecifically, in Figure 77 and Figure 78, we train a DLN both with and without BN (respectively), and we measure the effective smoothness at a distance α = 30η, as done in Figure 9(c) of Santurkar et al. (2018). We use the same experimental setup and the same step size of η = 1e-6 as they do, and we repeat the experiment across four random seeds. Observe that when training the BN network, the effective smoothness hovers right at 2/η (marked by the horizontal black line), whereas when training the non-BN network, the effective smoothness is much larger. This is consistent with Figure 9(c) in Santurkar et al. (2018). However, in Figures 79 and 80, we measure the effective smoothness at the actual step size α = η. When effective smoothness is computed in this way, we observe that for both the network with BN and the network without BN, the effective smoothness hovers right at 2/η. Therefore, we conclude that there is no evidence that the use of batch normalization improves either the smoothness or the effective smoothness along the optimization trajectory. (That said, this experiment possibly explains why the batch-normalized network permits training with larger step sizes.)" }, { "heading": "L ADDITIONAL TASKS", "text": "So far, we have verified our findings on image classification and language modeling. In this appendix, we verify our findings on three additional tasks: training a Transformer on the WikiText-2 language modeling dataset (L.1), training a one-hidden-layer network on a one-dimensional toy regression task (L.2), and training a deep linear network (L.3)." }, { "heading": "L.1 TRANSFORMER ON WIKITEXT-2", "text": "We consider the problem of training a Transformer on the WikiText-2 word-level language modeling dataset (Merity et al., 2016). See §I.4 for full experimental details.
In Figure 81, we train using gradient flow (only partially, not to completion). Observe that the sharpness continually rises. In Figure 82, we train using gradient descent at a range of step sizes. Consistent with our general findings, for each step size η, we observe that the sharpness rises to 2/η and then hovers right at, or just above, that value. However, for this Transformer, we do not observe that gradient descent closely tracks the gradient flow trajectory at the beginning of training." }, { "heading": "L.2 ONE-DIMENSIONAL TOY REGRESSION TASK", "text": "Task Our toy regression problem is to approximate a Chebyshev polynomial using a neural network. To generate a toy dataset, we take 20 points spaced uniformly on the interval [−1, 1], and we label them noiselessly using the Chebyshev polynomial of some degree k. Note that the Chebyshev polynomial of degree k is a polynomial with k zeros that maps the domain [−1, 1] to the range [−1, 1]. Figure 83 shows the Chebyshev datasets for degree 3, 4, and 5.\nNetwork For the network, we use a tanh architecture with one hidden layer of h = 100 units, initialized using Xavier initialization. We train using the MSE loss until the loss reaches 0.05.\nResults In Figure 84, we fit the Chebyshev degree 3, 4, and 5 datasets using gradient flow. Empirically, the higher the degree, the more the sharpness rises during the course of training: on the degree 3 polynomial, the sharpness rises by a factor of 1.2; on the degree 4 polynomial, by a factor of 3.2; and on the degree 5 polynomial, by a factor of 63.5.\nIn Figure 85 and Figure 86, we fit the degree 4 and 5 datasets using gradient descent at a range of step sizes. We observe mostly the same Edge of Stability behavior as elsewhere in the paper. 
The only difference is that for the degree 5 dataset, after the sharpness hits 2/η, the training loss first undergoes a temporary period of non-monotonicity in which no progress is made, and then it decreases monotonically until training is finished (in contrast to our other experiments where we observe that training loss behaves non-monotonically at the same time as it is consistently decreasing)." }, { "heading": "L.3 DEEP LINEAR NETWORK", "text": "Task The task is to map n inputs $x_1, \dots, x_n \in \mathbb{R}^d$ to n targets $y_1, \dots, y_n \in \mathbb{R}^d$ using a function $f : \mathbb{R}^d \to \mathbb{R}^d$. Error is measured using the square loss, i.e. the objective is $\frac{1}{n} \sum_{i=1}^{n} \|f(x_i) - y_i\|_2^2$. Let $X \in \mathbb{R}^{n \times d}$ be the vertical stack of the inputs, and let $Y \in \mathbb{R}^{n \times d}$ be the vertical stack of the targets. We first generate X as a random whitened matrix (i.e. $\frac{1}{n} X^T X = I$). To generate X as a random whitened matrix, we sample an $n \times d$ matrix of standard Gaussians, and then set X to be $\sqrt{n}$ times the Q factor in the QR factorization of that matrix. We then generate Y via $Y = X A^T$, where $A \in \mathbb{R}^{d \times d}$ is a random matrix whose entries are sampled i.i.d. from the standard normal distribution. We use n = 50 datapoints with a dimension of d = 50.\nNetwork The function $f : \mathbb{R}^d \to \mathbb{R}^d$ is implemented as an L-layer deep linear network: $f(x) = W_L \cdots W_2 W_1 x$, with $W_\ell \in \mathbb{R}^{d \times d}$. We initialize all layers of the deep linear network using Xavier initialization: all entries of each $W_\ell$ are drawn i.i.d. from $\mathcal{N}(0, \frac{1}{d})$. We use a network with L = 20 layers.\nResults In Figure 87, we train the network using gradient flow. (Since it is unclear whether the network can be trained to zero loss, and how long this would take, we arbitrarily chose to stop training at time 100.) In Figure 88, we train the network using gradient descent at a range of step sizes. We observe mostly the same Edge of Stability behavior as elsewhere in the paper.
The only difference is that in Figure 88, the train loss does not really behave non-monotonically — for each step size η, there is a brief blip at some point, but otherwise, the train loss decreases monotonically." }, { "heading": "M EXPERIMENTS: STANDARD ARCHITECTURES ON CIFAR-10", "text": "In this appendix, we demonstrate that our findings hold for three standard architectures on the standard dataset CIFAR-10. The three architectures are: a VGG with batch normalization (Figures 89-90), a VGG without batch normalization (Figures 91-92), and a ResNet with batch normalization (Figures 93-94). See §I.2 for full experimental details.\nFor each of these three architectures, we confirm our main points: (1) so long as the sharpness is less than 2/η, the sharpness tends to increase; and (2) if the sharpness reaches 2/η, gradient descent enters a regime (the Edge of Stability) in which (a) the training loss behaves non-monotonically, yet consistently decreases over long timescales, and (b) the sharpness hovers right at, or just above, the value 2/η. Moreover, we observe that even though these architectures use the ReLU activation function (which is not continuously differentiable), gradient descent closely tracks the Runge-Kutta trajectory until reaching the point on that trajectory where the sharpness hits 2/η.\nFurthermore, we observe that for these three standard architectures, the following additional points hold: (1) progressive sharpening occurs to a dramatic degree, and (2) stable step sizes are so small as to be completely unreasonable.\nProgressive sharpening occurs to a dramatic degree To assess the degree of progressive sharpening, we train these networks using Runge-Kutta / gradient flow, which can be viewed as gradient descent with an infinitesimally small step size. (In practice, the Runge-Kutta algorithm does have a step size parameter, and throughout training, we periodically adjust this step size in order to ensure that the algorithm remains stable.)
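The Runge-Kutta procedure used here (classic RK4 on the gradient flow ODE dθ/dt = −∇f(θ), with the step size set to α/λ, where λ is the most recent sharpness value, as in §I.5) can be sketched as follows. This is a minimal illustration assuming generic grad and sharpness callables (in the paper, sharpness is the top Hessian eigenvalue, computed periodically), not the authors' code:

```python
import numpy as np

def rk4_step(grad, theta, h):
    # One classic RK4 step for the gradient flow ODE  d(theta)/dt = -grad(theta).
    k1 = -grad(theta)
    k2 = -grad(theta + 0.5 * h * k1)
    k3 = -grad(theta + 0.5 * h * k2)
    k4 = -grad(theta + h * k3)
    return theta + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_gradient_flow(grad, sharpness, theta0, t_end, alpha=0.5):
    # Step size is alpha / lambda, where lambda is the current sharpness
    # estimate; the final step is truncated so we land exactly at t_end.
    theta, t = theta0, 0.0
    while t < t_end:
        h = min(alpha / sharpness(theta), t_end - t)
        theta = rk4_step(grad, theta, h)
        t += h
    return theta
```

On the quadratic f(θ) = (λ/2)θ², whose gradient flow solution is θ(t) = θ₀ e^{−λt}, this integrator closely matches the closed-form decay, which is a convenient correctness check.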
Intuitively, training with gradient flow tells us how far the sharpness would rise “if it didn’t have to worry about” instability caused by nonzero step sizes. In Figure 89, we train the VGG with BN to completion (99% training accuracy) using Runge-Kutta / gradient flow, and find that the sharpness rises from its initial value of 6.38 to a peak value of 2227.6. For the other two architectures, progressive sharpening occurs to such a degree that it is not computationally feasible to train using Runge-Kutta / gradient flow all the way to completion. (The reason is that in regions where the sharpness is high, the Runge-Kutta step size must be made small, so Runge-Kutta requires very many iterations.) Therefore, we instead train these two networks only partially. In Figure 91, for the VGG without BN, we find that the sharpness rises from its initial value of 0.64 to the value 2461.78 at 37.1% accuracy, when we stop training. In Figure 93, for the ResNet, we find that the sharpness rises from its initial value of 1.07 to the value 760.6 at 43.2% accuracy, when we stop training. Thus, even though we observed in Appendix D that progressive sharpening attenuates as the width of fully-connected networks is made larger, it appears that either: (1) this does not happen for modern families of architectures such as ResNet and VGG, or (2) this does happen for modern families of architectures, but practical network widths lie on the narrow end of the scale.\nStable step sizes are so small as to be unreasonable Recall from §3.3 that if λmax is the maximum sharpness along the gradient flow trajectory, then any stable step size must be less than 2/λmax. Therefore, for these three architectures, because progressive sharpening occurs to a dramatic degree (i.e. λmax is extraordinarily large), any stable step size must be extraordinarily small, which means that training will require many iterations.
Yet, at the same time, we find that these three networks can be successfully trained in far fewer iterations by using a larger step size. This means that training at a stable step size is extremely suboptimal. We now elaborate on this point:\nVGG with BN. For this network, gradient flow terminates at time 15.66, and the maximum sharpness along the gradient flow trajectory is 2227.6. Therefore, the largest stable step size is 2/2227.59 = 0.000897, and training to completion at this step size would take 14.91/0.000897 = 16622 iterations. Meanwhile, we empirically observe that the network can also be trained to completion at the much larger step size of η = 0.16 in just 329 iterations. Therefore, using a stable step size is suboptimal by a factor of at least 16622/329 = 50.5.\nFor the other two architectures, since we are unable to train to completion using gradient flow, we are unable to obtain a tight lower bound for the number of iterations required to run gradient descent to completion at a stable step size. Therefore, by extension, we are unable to compute a tight lower bound for the suboptimality factor of stable step sizes. As a substitute, we will instead compute both: (1) a tight lower bound on the suboptimality of training partially at a stable step size, and (2) a very loose lower bound on the suboptimality of training to completion at a stable step size.\nVGG without BN. For this network, gradient flow reaches 37.1% accuracy at time 8, and the maximum sharpness up through this point on the gradient flow trajectory is 2461.8. Therefore, the largest stable step size is 2/2461.8 = 0.00081, and training to 37.1% accuracy at this step size would require 8/0.00081 = 9,876 iterations. Meanwhile, we empirically observe that the network can also be trained to 37.1% accuracy at the larger step size of η = 0.16 in just 355 iterations. Therefore, when training this network to 37% accuracy, stable step sizes are suboptimal by a factor of at least 9876/355 = 27.8.
This is a tight lower bound on the suboptimality of training to 37.1% accuracy at a stable step size.\nTo obtain a loose lower bound on the suboptimality of training the VGG without BN to completion at a stable step size, we note that (a) since the maximum sharpness up through time 8 is 2461.8, the maximum sharpness along the entire gradient flow trajectory must be at least 2461.8; and (b) since by time 8 gradient flow has only attained 37.1% training accuracy, the time to reach 99% accuracy (i.e. completion) must be at least 8. (Note that both of these lower bounds are extremely loose.) Therefore, training this network to completion at a stable step size would require at least 8/(2/2461.8) = 9,876 iterations (which is the same number of iterations as training to 37.1% accuracy at a stable step size). Meanwhile, we find that the network can be trained to completion at the larger step size of η = 0.16 in just 1782 iterations. Therefore, training to completion at a stable step size is suboptimal by a factor of at least 9,876/1782 = 5.54.\nResNet. For this network, gradient flow reaches 43.2% accuracy at time 70, and the maximum sharpness up through this point on the gradient flow trajectory is 760.6. Therefore, the largest stable step size is 2/760.6 = 0.0026, and training to 43.2% accuracy at this step size would require 70/0.0026 = 26,923 iterations. Meanwhile, we empirically observe that the network can also be trained to 43.2% accuracy at the larger step size of η = 2.0 in just 99 iterations. Therefore, when training this network to 43.2% accuracy, stable step sizes are suboptimal by a factor of at least 26,923/99 = 271.9. This is a tight lower bound on the suboptimality of training to 43.2% accuracy at a stable step size.
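The bookkeeping behind these suboptimality factors is simple arithmetic: the largest stable step size is 2/λmax, so reaching gradient flow time t at that step size takes t/(2/λmax) iterations, and the factor is that count divided by the iterations actually used. As an illustration with the ResNet numbers quoted above (small discrepancies versus the quoted 271.9 come from the text's intermediate rounding of the step size):

```python
def stable_iterations(t, lambda_max):
    # Gradient flow time t divided by the largest stable step size 2 / lambda_max.
    return t / (2.0 / lambda_max)

def suboptimality_factor(t, lambda_max, fast_iters):
    # Ratio of iterations needed at a stable step size to iterations actually used.
    return stable_iterations(t, lambda_max) / fast_iters

# ResNet numbers from above: time 70 to reach 43.2% accuracy, max sharpness
# 760.6, versus 99 iterations of gradient descent at step size 2.0.
resnet_factor = suboptimality_factor(70, 760.6, 99)
```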
For the loose lower bound on the suboptimality of training to completion at a stable step size, note that (by similar reasoning as the VGG-without-BN above), training to completion at a stable step size must require at least 26,923 iterations. Meanwhile, we find that the network can be trained to completion at the larger step size of η = 2.0 in just 807 iterations. Therefore, training to completion at a stable step size is suboptimal by a factor of at least 26,923/807 = 33.3." }, { "heading": "N EXPERIMENTS: MOMENTUM", "text": "This appendix contains systematic experiments for gradient descent with Polyak momentum and Nesterov momentum. Our aim is to demonstrate that the sharpness rises until reaching the maximum stable sharpness (MSS) given by equation 1, and then either plateaus just above that value, or oscillates around that value.\nWe experiment on a 5k-sized subset of CIFAR-10, using four architectures: a tanh fully-connected network (section N.1), a ReLU fully-connected network (section N.2), a tanh convolutional network (section N.3), and a ReLU convolutional network (section N.4). For each of these four architectures, we experiment with both the square loss (for classification) and cross-entropy loss. For each architecture and loss function, we experiment with both Polyak momentum at β = 0.9, and gradient descent with Nesterov momentum at β = 0.9. We run gradient descent at a range of several step sizes which were chosen by hand so that the MSS’s are approximately spaced evenly. Note that for Polyak momentum with step size η and momentum parameter β = 0.9, the MSS is $\frac{2+2\beta}{\eta} = \frac{3.8}{\eta}$. For Nesterov momentum with step size η and momentum parameter β = 0.9, the MSS is $\frac{2+2\beta}{\eta(1+2\beta)} \approx \frac{1.35714}{\eta}$. We run gradient descent until reaching 99% accuracy.\nWe find that the sharpness rises until reaching the maximum stable sharpness (MSS) given by equation 1, and then either plateaus just above that value, or oscillates around that value.
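The MSS expressions quoted above can be evaluated directly; a small sketch (treating the Polyak and Nesterov formulas as given, with function names of our own choosing):

```python
def mss_polyak(eta, beta):
    # Maximum stable sharpness for gradient descent with Polyak (heavy-ball)
    # momentum: (2 + 2*beta) / eta.
    return (2 + 2 * beta) / eta

def mss_nesterov(eta, beta):
    # Maximum stable sharpness for gradient descent with Nesterov momentum:
    # (2 + 2*beta) / (eta * (1 + 2*beta)).
    return (2 + 2 * beta) / (eta * (1 + 2 * beta))
```

With β = 0.9 these reduce to 3.8/η and ≈1.357/η, matching the values above, and setting β = 0 recovers the vanilla gradient descent threshold 2/η.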
Sometimes these oscillations are rapid (e.g. Figure 96), sometimes they are a bit slower (e.g. Figure 103), and sometimes they are slow (e.g. Figure 109)." }, { "heading": "N.1 FULLY-CONNECTED TANH NETWORK", "text": "In the leftmost plot, the vertical dotted line marks the iteration where the sharpness first crosses the MSS. (Note that unlike vanilla gradient descent, momentum gradient descent can sometimes cause the train loss to increase even when the algorithm is stable (Goh, 2017).) In the middle plot, the horizontal dashed line marks the MSS." }, { "heading": "N.1.1 SQUARE LOSS", "text": "" }, { "heading": "N.1.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "N.2 FULLY-CONNECTED RELU NETWORK", "text": "In the leftmost plot, the vertical dotted line marks the iteration where the sharpness first crosses the MSS. (Note that unlike vanilla gradient descent, momentum gradient descent can sometimes cause the train loss to increase even when the algorithm is stable (Goh, 2017).)\nIn the middle plot, the horizontal dashed line marks the MSS." }, { "heading": "N.2.1 SQUARE LOSS", "text": "" }, { "heading": "N.2.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "N.3 CONVOLUTIONAL TANH NETWORK", "text": "In the leftmost plot, the vertical dotted line marks the iteration where the sharpness first crosses the MSS. (Note that unlike vanilla gradient descent, momentum gradient descent can sometimes cause the train loss to increase even when the algorithm is stable (Goh, 2017).)\nIn the middle plot, the horizontal dashed line marks the MSS." }, { "heading": "N.3.1 SQUARE LOSS", "text": "" }, { "heading": "N.3.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "N.4 CONVOLUTIONAL RELU NETWORK", "text": "In the leftmost plot, the vertical dotted line marks the iteration where the sharpness first crosses the MSS. (Note that unlike vanilla gradient descent, momentum gradient descent can sometimes cause the train loss to increase even when the algorithm is stable (Goh, 2017).)
In the middle plot, the horizontal dashed line marks the MSS." }, { "heading": "N.4.2 CROSS-ENTROPY LOSS", "text": "" }, { "heading": "N.4.1 SQUARE LOSS", "text": "" }, { "heading": "O EXPERIMENTS: LEARNING RATE DROP", "text": "In this appendix, we run gradient descent until reaching the Edge of Stability, and then we cut the step size. We will see that the sharpness starts increasing as soon as the step size is cut, and only stops increasing once gradient descent is back at the Edge of Stability (or training is finished). As a consequence of this experiment, one can interpret the Edge of Stability as a regime in which gradient descent is constantly “trying” to increase the sharpness beyond 2/η, but is constantly being blocked from doing so. Our experiments focus on image classification on a 5k-sized subset of CIFAR-10. We study two architectures (a fully-connected tanh network and a convolutional ReLU network) and two loss functions (squared loss and cross-entropy loss)." }, { "heading": "O.1 FULLY-CONNECTED TANH NETWORK: SQUARE LOSS", "text": "" }, { "heading": "O.2 FULLY-CONNECTED TANH NETWORK: CROSS-ENTROPY LOSS", "text": "" }, { "heading": "O.3 CONVOLUTIONAL RELU NETWORK: SQUARE LOSS", "text": "" }, { "heading": "O.4 CONVOLUTIONAL RELU NETWORK: CROSS-ENTROPY LOSS", "text": "" }, { "heading": "P OTHER EIGENVALUES", "text": "Throughout most of this paper, we have studied the evolution of the maximum Hessian eigenvalue during gradient descent. In this appendix, we examine the evolution of the top six eigenvalues. While training the network from §3, we monitor the evolution of the top six Hessian eigenvalues. For each of cross-entropy loss and MSE loss, we train at four different step sizes. The results are shown in Figure 115. Observe that each of the top six eigenvalues rises and then plateaus. The precise details differ between MSE loss and cross-entropy loss. For MSE loss, each eigenvalue rises past 2/η, and then plateaus just above that value. 
In contrast, for cross-entropy loss, some of the lesser eigenvalues plateau below 2/η." } ]
2021
null
SP:707b1ba524c785d8942517ba7dff17115012181f
[ "This paper provides an empirical study on the robustness of image classification models to distribution shifts. The authors construct three benchmark datasets that control for effects like artistic renditions of common classes, view-point changes, and geographic shifts (among others). The datasets are then used to test various hypotheses regarding robustness-enhancing measures empirically. The authors additionally propose a novel augmentation scheme that uses deep image processing networks together with random perturbations of their weights to synthesize distorted image samples.", "This paper investigates the robustness problem of computer vision models. To study model robustness in a controlled setting, the author introduces three new robustness benchmarks: ImageNet-R, StreetView StoreFronts and DeepFashion Remixed. Each of them addresses different aspects of distribution drift in the real world. The author evaluates seven popular hypotheses about model robustness from the community on the three new datasets and finds counter-examples for most of them. Based on those new results, the author concludes that the model robustness problem is multivariate in nature: no single solution could handle all aspects yet, and future work should be evaluated on multiple datasets to demonstrate robustness. Moreover, the author also proposes a new data augmentation method using perturbed image-to-image deep learning models to generate visually diverse augmentations." ]
We introduce three new robustness benchmarks consisting of naturally occurring distribution changes in image style, geographic location, camera operation, and more. Using our benchmarks, we take stock of previously proposed hypotheses for out-of-distribution robustness and put them to the test. We find that using larger models and synthetic data augmentation can improve robustness on real-world distribution shifts, contrary to claims in prior work. Motivated by this, we introduce a new data augmentation method which advances the state-of-the-art and outperforms models pretrained with 1000× more labeled data. We find that synthetic augmentations can sometimes improve real-world robustness. We also find that some methods consistently help with distribution shifts in texture and local image statistics, but these methods do not help with some other distribution shifts like geographic changes. Hence no evaluated method consistently improves robustness. We conclude that future research must study multiple distribution shifts simultaneously.
[]
[ { "authors": [ "Dragomir Anguelov", "Carole Dulong", "Daniel Filip", "Christian Frueh", "Stéphane Lafon", "Richard Lyon", "Abhijit Ogale", "Luc Vincent", "Josh Weaver" ], "title": "Google street view: Capturing the world at street level", "venue": null, "year": 2010 }, { "authors": [ "Emma Beede", "Elizabeth Baylor", "Fred Hersch", "Anna Iurchenko", "Lauren Wilcox", "Paisan Ruamviboonsuk", "Laura M Vardoulakis" ], "title": "A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy", "venue": "In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems,", "year": 2020 }, { "authors": [ "Irving Biederman", "Ginny Ju" ], "title": "Surface versus edge-based determinants of visual recognition", "venue": "Cognitive psychology,", "year": 1988 }, { "authors": [ "Ekin Dogus Cubuk", "Barret Zoph", "Dandelion Mané", "Vijay Vasudevan", "Quoc V. Le" ], "title": "AutoAugment: Learning augmentation policies from data", "venue": null, "year": 2018 }, { "authors": [ "Jia Deng" ], "title": "Large scale visual recognition", "venue": "Technical report, PRINCETON UNIV NJ DEPT OF COMPUTER SCIENCE,", "year": 2012 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": null, "year": 2009 }, { "authors": [ "Samuel Dodge", "Lina Karam" ], "title": "A study and comparison of human and deep learning recognition performance under visual distortions", "venue": "26th international conference on computer communication and networks (ICCCN),", "year": 2017 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Jacob Steinhardt", "Aleksander Madry" ], "title": "Identifying statistical bias in dataset replication", "venue": null, "year": 2020 }, { "authors": [ "Shanghua Gao", "Ming-Ming Cheng", "Kai Zhao", "Xin-Yu Zhang", "Ming-Hsuan Yang", "Philip H.S. 
Torr" ], "title": "Res2net: A new multi-scale backbone architecture", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Shanghua Gao", "Ming-Ming Cheng", "Kai Zhao", "Xinyu Zhang", "Ming-Hsuan Yang", "Philip H.S. Torr" ], "title": "Res2net: A new multi-scale backbone architecture. IEEE transactions on pattern analysis and machine intelligence, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Yuying Ge", "Ruimao Zhang", "Xiaogang Wang", "Xiaoou Tang", "Ping Luo" ], "title": "Deepfashion2: A versatile benchmark for detection, pose estimation, segmentation and re-identification of clothing images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Robert Geirhos", "Carlos R.M. Temme", "Jonas Rauber", "Heiko H. Schütt", "Matthias Bethge", "Felix A. Wichmann" ], "title": "Generalisation in humans and deep neural networks. NeurIPS, 2018", "venue": null, "year": 2018 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": null, "year": 2019 }, { "authors": [ "Robert Geirhos", "Jörn-Henrik Jacobsen", "Claudio Michaelis", "Richard Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A Wichmann" ], "title": "Shortcut learning in deep neural networks", "venue": "arXiv preprint arXiv:2004.07780,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition. 
corr abs/1512.03385", "venue": null, "year": 2015 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Xiaoyuan Liu", "Eric Wallace", "Adam Dziedzic", "Rishabh Krishnan", "Dawn Song" ], "title": "Pretrained transformers improve out-of-distribution", "venue": null, "year": 2020 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple data processing method to improve robustness and uncertainty. ICLR, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Shoji Itakura" ], "title": "Recognition of line-drawing representations by a chimpanzee (pan troglodytes)", "venue": "The Journal of General Psychology,", "year": 1994 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Large scale learning of general visual representations for transfer", "venue": null, "year": 1912 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V. 
Le" ], "title": "Do better ImageNet models transfer better", "venue": null, "year": 2018 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Jinwoo Shin", "Honglak Lee" ], "title": "Network randomization: A simple technique for generalization in deep reinforcement learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Bee Lim", "Sanghyun Son", "Heewon Kim", "Seungjun Nah", "Kyoung Mu Lee" ], "title": "Enhanced deep residual networks for single image super-resolution", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition workshops,", "year": 2017 }, { "authors": [ "Raphael Gontijo Lopes", "Dong Yin", "Ben Poole", "Justin Gilmer", "Ekin Dogus Cubuk" ], "title": "Improving robustness without sacrificing accuracy with patch Gaussian augmentation", "venue": null, "year": 1906 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri abd Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": null, "year": 2018 }, { "authors": [ "A. Emin Orhan" ], "title": "Robustness properties of facebook’s", "venue": "ResNeXt WSL models. ArXiv,", "year": 2019 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do ImageNet classifiers generalize to ImageNet? 
ArXiv", "venue": null, "year": 1902 }, { "authors": [ "Evgenia Rusak", "Lukas Schott", "Roland Zimmermann", "Julian Bitterwolf", "Oliver Bringmann", "Matthias Bethge", "Wieland Brendel" ], "title": "Increasing the robustness of dnns against image corruptions by playing the game of noise", "venue": null, "year": 2001 }, { "authors": [ "Masayuki Tanaka" ], "title": "Recognition of pictorial representations by chimpanzees (pan troglodytes)", "venue": "Animal Cognition,", "year": 2006 }, { "authors": [ "Rohan Taori", "Achal Dave", "Vaishaal Shankar", "Nicholas Carlini", "Benjamin Recht", "Ludwig Schmidt" ], "title": "When robustness doesn’t promote robustness: Synthetic vs. natural distribution shifts on imagenet, 2020", "venue": "URL https://openreview.net/forum?id=HyxPIyrFvH", "year": 2020 }, { "authors": [ "Lucas Theis", "Wenzhe Shi", "Andrew Cunningham", "Ferenc Huszár" ], "title": "Lossy image compression with compressive autoencoders", "venue": "arXiv preprint arXiv:1703.00395,", "year": 2017 }, { "authors": [ "Haohan Wang", "Songwei Ge", "Eric P. Xing", "Zachary C. 
Lipton" ], "title": "Learning robust global representations by penalizing local predictive power, 2019", "venue": null, "year": 2019 }, { "authors": [ "Haotao Wang", "Tianlong Chen", "Zhangyang Wang", "Kede Ma" ], "title": "I am going mad: Maximum discrepancy competition for comparing classifiers adaptively", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "arXiv preprint arXiv:2001.03994,", "year": 2020 }, { "authors": [ "Sanghyun Woo", "Jongchan Park", "Joon-Young Lee", "In So Kweon" ], "title": "Cbam: Convolutional block attention module", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Cihang Xie", "Alan Yuille" ], "title": "Intriguing properties of adversarial training at scale", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks. 
2016", "venue": "arXiv preprint arXiv:1611.05431,", "year": 2016 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jonathon Shlens", "Ekin D Cubuk", "Justin Gilmer" ], "title": "A Fourier perspective on model robustness in computer vision", "venue": null, "year": 1906 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": null, "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": null, "year": 2018 }, { "authors": [ "Zhun Zhong", "Liang Zheng", "Guoliang Kang", "Shaozi Li", "Yi Yang" ], "title": "Random erasing data augmentation", "venue": "arXiv preprint arXiv:1708.04896,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "While the research community must create robust models that generalize to new scenarios, the robustness literature (Dodge and Karam, 2017; Geirhos et al., 2020) lacks consensus on evaluation benchmarks and contains many dissonant hypotheses. Hendrycks et al. (2020a) find that many recent language models are already robust to many forms of distribution shift, while Yin et al. (2019) and Geirhos et al. (2019) find that vision models are largely fragile and argue that data augmentation offers one solution. In contrast, Taori et al. (2020) provide results suggesting that using pretraining and improving in-distribution test set accuracy improve natural robustness, whereas other methods do not.\nIn this paper we articulate and systematically study seven robustness hypotheses. The first four hypotheses concern methods for improving robustness, while the last three hypotheses concern abstract properties about robustness. These hypotheses are as follows.\n• Larger Models: increasing model size improves robustness (Hendrycks and Dietterich, 2019; Xie and Yuille, 2020). • Self-Attention: adding self-attention layers to models improves robustness (Hendrycks et al., 2019b). • Diverse Data Augmentation: robustness can increase through data augmentation (Yin et al., 2019). • Pretraining: pretraining on larger and more diverse datasets improves robustness (Orhan, 2019;\nHendrycks et al., 2019a). • Texture Bias: convolutional networks are biased towards texture, which harms robustness (Geirhos\net al., 2019). • Only IID Accuracy Matters: accuracy on independent and identically distributed test data entirely\ndetermines natural robustness. 
• Synthetic ⇏ Real: synthetic robustness interventions including diverse data augmentations do not help with robustness on real-world distribution shifts (Taori et al., 2020).

It has been difficult to arbitrate these hypotheses because existing robustness datasets preclude the possibility of controlled experiments by varying multiple aspects simultaneously. For instance, Texture Bias was initially investigated with synthetic distortions (Geirhos et al., 2018), which conflicts with the Synthetic ⇏ Real hypothesis. On the other hand, natural distribution shifts often affect many factors (e.g., time, camera, location, etc.) simultaneously in unknown ways (Recht et al., 2019; Hendrycks et al., 2019b). Existing datasets also lack diversity such that it is hard to extrapolate which methods will improve robustness more broadly. To address these issues and test the seven hypotheses outlined above, we introduce three new robustness benchmarks and a new data augmentation method.
First, we introduce ImageNet-Renditions (ImageNet-R), a 30,000-image test set containing various renditions (e.g., paintings, embroidery, etc.) of ImageNet object classes. These renditions are naturally occurring, with textures and local image statistics unlike those of ImageNet images, allowing us to more cleanly separate the Texture Bias and Synthetic ⇏ Real hypotheses. Next, we investigate natural shifts in the image capture process with StreetView StoreFronts (SVSF) and DeepFashion Remixed (DFR). SVSF contains business storefront images taken from Google Streetview, along with metadata allowing us to vary location, year, and even the camera type. DFR leverages the metadata from DeepFashion2 (Ge et al., 2019) to systematically shift object occlusion, orientation, zoom, and scale at test time.
Both SVSF and DFR provide distribution shift controls and do not alter texture, which removes possible confounding variables affecting prior benchmarks.

Finally, we contribute DeepAugment to increase robustness to some new types of distribution shift. This augmentation technique uses image-to-image neural networks for data augmentation, not data-independent Euclidean augmentations like image shearing or rotating as in previous work. DeepAugment achieves state-of-the-art robustness on our newly introduced ImageNet-R benchmark and a corruption robustness benchmark. DeepAugment can also be combined with other augmentation methods to outperform a model pretrained on 1000× more labeled data. After examining our results on these three datasets and others, we can rule out several of the above hypotheses while strengthening support for others. As one example, we find that synthetic data augmentation robustness interventions improve accuracy on ImageNet-R and real-world image blur distribution shifts, providing clear counterexamples to Synthetic ⇏ Real while lending support to the Diverse Data Augmentation and Texture Bias hypotheses. In the conclusion, we summarize the various strands of evidence for and against each hypothesis. Across our many experiments, we do not find a general method that consistently improves robustness, and some hypotheses require additional qualifications. While robustness is often spoken of and measured as a single scalar property like accuracy, our investigations suggest that robustness is not so simple. In light of our results, we hypothesize in the conclusion that robustness is multivariate." }, { "heading": "2 RELATED WORK", "text": "Robustness Benchmarks. Recent works (Hendrycks and Dietterich, 2019; Recht et al., 2019; Hendrycks et al., 2020a) have begun to characterize model performance on out-of-distribution (OOD) data with various new test sets, with dissonant findings. For instance, Hendrycks et al.
(2020a) demonstrate that modern language processing models are moderately robust to numerous naturally occurring distribution shifts, and that Only IID Accuracy Matters is inaccurate for natural language\ntasks. For image recognition, Hendrycks and Dietterich (2019) analyze image models and show that they are sensitive to various simulated image corruptions (e.g., noise, blur, weather, JPEG compression, etc.) from their “ImageNet-C” benchmark.\nRecht et al. (2019) reproduce the ImageNet (Russakovsky et al., 2015) validation set for use as a benchmark of naturally occurring distribution shift in computer vision. Their evaluations show a 11-14% drop in accuracy from ImageNet to the new validation set, named ImageNetV2, across a wide range of architectures. Taori et al. (2020) use ImageNetV2 to measure natural robustness and dismiss Diverse Data Augmentation. Recently, Engstrom et al. (2020) identify statistical biases in ImageNetV2’s construction, and they estimate that reweighting ImageNetV2 to correct for these biases results in a less substantial 3.6% drop.\nData Augmentation. Geirhos et al. (2019); Yin et al. (2019); Hendrycks et al. (2020b) demonstrate that data augmentation can improve robustness on ImageNet-C. The space of augmentations that help robustness includes various types of noise (Madry et al., 2017; Rusak et al., 2020; Lopes et al., 2019), highly unnatural image transformations (Geirhos et al., 2019; Yun et al., 2019; Zhang et al., 2017), or compositions of simple image transformations such as Python Imaging Library operations (Cubuk et al., 2018; Hendrycks et al., 2020b). Some of these augmentations can improve accuracy on in-distribution examples as well as on out-of-distribution (OOD) examples." }, { "heading": "3 NEW BENCHMARKS", "text": "In order to evaluate the seven robustness hypotheses, we introduce three new benchmarks that capture new types of naturally occurring distribution shifts. 
ImageNet-Renditions (ImageNet-R) is a newly collected test set intended for ImageNet classifiers, whereas StreetView StoreFronts (SVSF) and DeepFashion Remixed (DFR) each contain their own training sets and multiple test sets. SVSF and DFR split data into training and test sets based on various image attributes stored in the metadata. For example, we can select a test set with images produced by a camera different from the training set camera. We now describe the structure and collection of each dataset." }, { "heading": "3.1 IMAGENET-RENDITIONS (IMAGENET-R)", "text": "While current classifiers can learn some aspects of an object’s shape (Mordvintsev et al., 2015), they nonetheless rely heavily on natural textural cues (Geirhos et al., 2019). In contrast, human vision can process abstract visual renditions. For example, humans can recognize visual scenes from line drawings as quickly and accurately as they can from photographs (Biederman and Ju, 1988). Even some primate species have demonstrated the ability to recognize shape through line drawings (Itakura, 1994; Tanaka, 2006).
To measure generalization to various abstract visual renditions, we create the ImageNet-Renditions (ImageNet-R) dataset. ImageNet-R contains various artistic renditions of object classes from the original ImageNet dataset. Note that the original ImageNet dataset discouraged such images, since annotators were instructed to collect “photos only, no painting, no drawings, etc.” (Deng, 2012). We do the opposite.
Data Collection. ImageNet-R contains 30,000 image renditions for 200 ImageNet classes. We choose a subset of the ImageNet-1K classes, following Hendrycks et al. (2019b), for several reasons. A handful of ImageNet classes already have many renditions, such as “triceratops.” We also choose a subset so that model misclassifications are egregious and to reduce label noise.
The 200-class subset was also chosen based on rendition prevalence, as “strawberry” renditions were easier to obtain than “radiator” renditions. Were we to use all 1,000 ImageNet classes, annotators would be hard pressed to distinguish Norwich terrier renditions from Norfolk terrier renditions. We collect images primarily from Flickr and use queries such as “art,” “cartoon,” “graffiti,” “embroidery,” “graphics,” “origami,” “painting,” “pattern,” “plastic object,” “plush object,” “sculpture,” “line drawing,” “tattoo,” “toy,” “video game,” and so on. Images are filtered by Amazon MTurk annotators using a modified collection interface from ImageNetV2 (Recht et al., 2019). For instance, after scraping Flickr images with the query “lighthouse cartoon,” we have MTurk annotators select true positive lighthouse renditions. Finally, as a second round of quality control, graduate students manually filter the resulting images and ensure that individual images have correct labels and do not contain multiple labels. Examples are depicted in Figure 2. ImageNet-R also includes the line drawings from Wang et al. (2019), excluding horizontally mirrored duplicate images, pitch-black images, and images from the incorrectly collected “pirate ship” class." }, { "heading": "3.2 STREETVIEW STOREFRONTS (SVSF)", "text": "Computer vision applications often rely on data from complex pipelines that span different hardware, times, and geographies. Ambient variations in this pipeline may result in unexpected performance degradation, such as the degradation experienced by health care providers in Thailand deploying laboratory-tuned diabetic retinopathy classifiers in the field (Beede et al., 2020).
In order to study the effects of shifts in the image capture process, we collect the StreetView StoreFronts (SVSF) dataset, a new image classification dataset sampled from Google StreetView imagery (Anguelov et al., 2010) focusing on three distribution shift sources: country, year, and camera.
Data Collection. SVSF consists of cropped images of business store fronts extracted from StreetView images by an object detection model. Each store front image is assigned the class label of the associated Google Maps business listing through a combination of machine learning models and human annotators. We combine several visually similar business types (e.g. drugstores and pharmacies) for a total of 20 classes, listed in Appendix B.
Splitting the data along the three metadata attributes of country, year, and camera, we create one training set and five test sets. We sample a training set and an in-distribution test set (200K and 10K images, respectively) from images taken in US/Mexico/Canada during 2019 using a “new” camera system. We then sample four OOD test sets (10K images each) which alter one attribute at a time while keeping the other two attributes consistent with the training distribution. Our test sets are year: 2017, 2018; country: France; and camera: “old.”" }, { "heading": "3.3 DEEPFASHION REMIXED", "text": "Changes in day-to-day camera operation can cause shifts in attributes such as object size, object occlusion, camera viewpoint, and camera zoom. To measure this, we repurpose DeepFashion2 (Ge et al., 2019) to create the DeepFashion Remixed (DFR) dataset. We designate a training set with 48K images and create eight out-of-distribution test sets to measure performance under shifts in object size, object occlusion, camera viewpoint, and camera zoom-in. DeepFashion Remixed is a multi-label classification task since images may contain more than one clothing item per image.
Data Collection.
Similar to SVSF, we fix one value for each of the four metadata attributes in the training distribution. Specifically, the DFR training set contains images with medium scale, medium occlusion, side/back viewpoint, and no zoom-in. After sampling an IID test set, we construct eight OOD test distributions by altering one attribute at a time, obtaining test sets with minimal and heavy occlusion; small and large scale; frontal and not-worn viewpoints; and medium and large zoom-in. See Appendix B for details on test set sizes." }, { "heading": "4 DEEPAUGMENT", "text": "In order to further explore the Diverse Data Augmentation hypothesis, we introduce a new data augmentation technique. Whereas most previous data augmentation techniques use simple augmentation primitives applied to the raw image itself, we introduce DeepAugment, which distorts images by perturbing internal representations of deep networks.
DeepAugment works by passing a clean image through an image-to-image network and introducing several perturbations during the forward pass. These perturbations are randomly sampled from a set of manually designed functions and applied to the network weights and to the feed-forward signal at random layers. For example, our set of perturbations includes zeroing, negating, convolving, transposing, applying activation functions, and more. This setup generates semantically consistent images with unique and diverse distortions (Figure 3). Although our set of perturbations is designed with random operations, we show that DeepAugment still outperforms other methods on benchmarks such as ImageNet-C and ImageNet-R. We provide the pseudocode in Appendix C.
For our experiments, we specifically use the CAE (Theis et al., 2017) and EDSR (Lim et al., 2017) architectures as the basis for DeepAugment. CAE is an autoencoder architecture, and EDSR is a super-resolution architecture. These two architectures demonstrate that the DeepAugment approach works across different network types.
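The paper's actual DeepAugment implementation operates on trained CAE/EDSR networks (its pseudocode is in Appendix C); the following is only a toy numpy sketch of the perturb-weights-then-forward idea, with a made-up two-layer network and a hypothetical perturbation set:

```python
import numpy as np

def perturb(w, rng):
    """Randomly distort a weight matrix. The four operations here are a
    hypothetical subset of the kinds listed in the text (zeroing,
    negating, transposing, rescaling)."""
    op = rng.integers(4)
    if op == 0:  # negate a random 10% of the weights
        return np.where(rng.random(w.shape) < 0.1, -w, w)
    if op == 1:  # zero a random 10% of the weights
        return np.where(rng.random(w.shape) < 0.1, 0.0, w)
    if op == 2:  # transpose (valid here because the toy weights are square)
        return w.T
    return w * rng.uniform(0.8, 1.2)  # random global rescaling

def deepaugment_pass(image, weights, rng):
    """Forward an image through a toy image-to-image network whose
    weights are freshly perturbed at every layer."""
    h = image.reshape(-1)  # flatten the H x W image
    for w in weights:
        h = np.tanh(perturb(w, rng) @ h)
    return h.reshape(image.shape)

rng = np.random.default_rng(0)
img = rng.random((8, 8))  # toy 8x8 "image"
# A near-identity two-layer network standing in for CAE/EDSR.
weights = [np.eye(64) + 0.01 * rng.standard_normal((64, 64)) for _ in range(2)]
distorted = deepaugment_pass(img, weights, rng)
```

Because a fresh perturbation is drawn per layer and per call, repeated passes over the same clean image yield different distortions, which is what makes the scheme usable as data augmentation.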
Each clean image in the original dataset is passed through each network and is thereby stochastically distorted, resulting in two distorted versions of the clean dataset (one for CAE and one for EDSR). We then train on the augmented and clean data simultaneously and call this approach DeepAugment." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 SETUP", "text": "In this section we briefly describe the evaluated models, pretraining techniques, self-attention mechanisms, and data augmentation methods, and note various implementation details.
Model Architectures and Sizes. Most experiments are evaluated on a standard ResNet-50 model (He et al., 2015). Model size evaluations use ResNets or ResNeXts (Xie et al., 2016) of varying sizes.
Pretraining. For pretraining we use ImageNet-21K, which contains approximately 21,000 classes and approximately 14 million labeled training images, or around 10× more labeled training data than ImageNet-1K. We tune Kolesnikov et al. (2019)’s ImageNet-21K model. We also use a large pretrained ResNeXt-101 model from Mahajan et al. (2018). This was pre-trained on approximately 1 billion Instagram images with hashtag labels and fine-tuned on ImageNet-1K. This Weakly Supervised Learning (WSL) pretraining strategy uses approximately 1000× more labeled data.
Self-Attention. When studying self-attention, we employ CBAM (Woo et al., 2018) and SE (Hu et al., 2018) modules, two forms of self-attention that help models learn spatially distant dependencies.
Data Augmentation. We use Style Transfer, AugMix, and DeepAugment to analyze the Diverse Data Augmentation hypothesis, and we contrast their performance with simpler noise augmentations such as Speckle Noise and adversarial noise. Style Transfer (Geirhos et al., 2019) uses a style transfer network to apply artwork styles to training images. We use AugMix (Hendrycks et al., 2020b), which randomly composes simple augmentation operations (e.g., translate, posterize, solarize).
DeepAugment, introduced above, distorts the weights and feedforward passes of image-to-image models to generate image augmentations. Speckle Noise data augmentation multiplies each pixel by (1 + x) with x sampled from a normal distribution (Rusak et al., 2020; Hendrycks and Dietterich, 2019). We also consider adversarial training as a form of adaptive data augmentation and use the model from Wong et al. (2020) trained against ℓ∞ perturbations of size ε = 4/255." }, { "heading": "5.2 RESULTS", "text": "We now perform experiments on ImageNet-R, StreetView StoreFronts, and DeepFashion Remixed. We also evaluate on ImageNet-C and compare and contrast it with real distribution shifts.
ImageNet-R. Table 1 shows performance on ImageNet-R as well as on ImageNet-200 (the original ImageNet data restricted to ImageNet-R’s 200 classes). This has several implications regarding the four method-specific hypotheses. Pretraining with ImageNet-21K (approximately 10× more labeled data) hardly helps. Appendix A shows WSL pretraining can help, but Instagram contains renditions while ImageNet excludes them; hence we conclude that comparable pretraining was ineffective. Notice that Self-Attention increases the IID/OOD gap. Compared to simpler data augmentation techniques such as Speckle Noise, the Diverse Data Augmentation techniques of Style Transfer, AugMix, and DeepAugment improve generalization. Note that AugMix and DeepAugment improve in-distribution performance whereas Style Transfer hurts it. Also, our new DeepAugment technique is the best standalone method, with an error rate of 57.8%. Last, Larger Models reduce the IID/OOD gap.

Regarding the three more abstract hypotheses, biasing networks away from natural textures through diverse data augmentation improved performance, so we find support for the Texture Bias hypothesis. The IID/OOD generalization gap varies greatly, which contradicts Only IID Accuracy Matters.
Finally, since ImageNet-R contains real-world examples, and since synthetic data augmentation helps on ImageNet-R, we now have clear evidence against the Synthetic ⇏ Real hypothesis.\nStreetView StoreFronts. In Table 2, we evaluate data augmentation methods on SVSF and find that all of the tested methods have mostly similar performance and that no method helps much on country shift, where error rates roughly double across the board. Here evaluation is limited to augmentations due to a 30-day retention window for each instantiation of the dataset. Images captured in France contain noticeably different architectural styles and storefront designs than those captured in US/Mexico/Canada; meanwhile, we are unable to find conspicuous and consistent indicators of the camera and year. This may explain the relative insensitivity of evaluated methods to the camera and year shifts. Overall Diverse Data Augmentation shows limited benefit, suggesting either that data augmentation primarily helps combat texture bias as with ImageNet-R, or that existing augmentations are not diverse enough to capture high-level semantic shifts such as building architecture.\nDeepFashion Remixed. Table 3 shows our experimental findings on DFR, in which all evaluated methods have an average OOD mAP that is close to the baseline. In fact, most OOD mAP increases track IID mAP increases. In general, DFR’s size and occlusion shifts hurt performance the most. We also evaluate with Random Erasure augmentation, which deletes rectangles within the image, to simulate occlusion (Zhong et al., 2017). Random Erasure improved occlusion performance, but Style Transfer helped even more. Nothing substantially improved OOD performance beyond what is explained by IID performance, so here it would appear that Only IID Accuracy Matters. Our results do not provide clear evidence for the Larger Models, Self-Attention, Diverse Data Augmentation, and Pretraining hypotheses.\nImageNet-C.
We now consider a previous robustness benchmark to reassess all seven hypotheses. We use the ImageNet-C dataset (Hendrycks and Dietterich, 2019) which applies 15 common image\ncorruptions (e.g., Gaussian noise, defocus blur, simulated fog, JPEG compression, etc.) across 5 severities to ImageNet-1K validation images. We find that DeepAugment improves robustness on ImageNet-C. Figure 4 shows that when models are trained with AugMix and DeepAugment, they attain the state-of-the-art, break the trendline, and exceed the corruption robustness provided by training on 1000× more labeled training data. Note the augmentations from AugMix and DeepAugment are disjoint from ImageNet-C’s corruptions. Full results are shown in Appendix A’s Table 8. This is evidence against the Only IID Accuracy Matters hypothesis and is evidence for the Larger Models, Self-Attention, Diverse Data Augmentation, Pretraining, and Texture Bias hypotheses.\nTaori et al. (2020) remind us that ImageNet-C uses various synthetic corruptions and suggest that they are divorced from real-world robustness. Real-world robustness requires generalizing to naturally occurring corruptions such as snow, fog, blur, low-lighting noise, and so on, but it is an open question whether ImageNet-C’s simulated corruptions meaningfully approximate real-world corruptions.\nFor our results analysis, we collect a small dataset of 1,000 real-world blurry images and find that ImageNet-C can track robustness to real-world corruptions. We collect the “Real Blurry Images” dataset with Flickr and query ImageNet object class names concatenated with the word “blurry.” Examples are in Figure 5. We then evaluate various models on real-world blurry images and find that all the robustness interventions that help with ImageNet-C also help with real-world blurry images. Hence ImageNet-C can track performance on real-world corruptions. 
Moreover, DeepAugment+AugMix has the lowest error rate on Real Blurry Images, which again contradicts the Synthetic ⇏ Real hypothesis. The upshot is that ImageNet-C is a controlled and systematic proxy for real-world robustness.\nWe collect 1,000 blurry images to see whether improvements on ImageNet-C’s simulated blurs correspond to improvements on real-world blurry images. Each image belongs to an ImageNet class. Results from Table 5 show that Larger Models, Self-Attention, Diverse Data Augmentation, and Pretraining all help, just like ImageNet-C. Here DeepAugment+AugMix attains state-of-the-art. These results suggest ImageNet-C’s simulated corruptions track real-world corruptions. In hindsight, this is expected since various computer vision problems have used synthetic corruptions as proxies for real-world corruptions for decades. In short, ImageNet-C is a diverse and systematic benchmark that is correlated with improvements on real-world corruptions." }, { "heading": "6 CONCLUSION", "text": "In this paper we introduced three new benchmarks, ImageNet-Renditions, DeepFashion Remixed, and StreetView StoreFronts. With these benchmarks, we thoroughly tested seven robustness hypotheses: four about methods for robustness, and three about the nature of robustness.\nLet us consider the first four hypotheses, using the new information from ImageNet-C and our three new benchmarks. The Larger Models hypothesis was supported with ImageNet-C and ImageNet-R, but not with DFR. While Self-Attention noticeably helped ImageNet-C, it did not help with ImageNet-R and DFR. Diverse Data Augmentation was ineffective for SVSF and DFR, but it greatly improved ImageNet-C and ImageNet-R accuracy. Pretraining greatly helped with ImageNet-C but hardly helped with DFR and ImageNet-R. This is summarized in Table 4.
It was not obvious a priori that synthetic Diverse Data Augmentation could improve ImageNet-R accuracy, nor did previous research suggest that Pretraining would sometimes be ineffective. While no single method consistently helped across all distribution shifts, some helped more than others.\nOur analysis of these four hypotheses has implications for the remaining three hypotheses. Regarding Texture Bias, ImageNet-R shows that networks do not generalize well to renditions (which have different textures), but that diverse data augmentation (which often distorts textures) can recover accuracy. More generally, larger models and diverse data augmentation consistently helped on ImageNet-R, ImageNet-C, and Blurry Images, suggesting that these two interventions reduce texture bias. However, these methods helped little for geographic shifts, showing that there is more to robustness than texture bias alone. Regarding Only IID Accuracy Matters, while IID accuracy is a strong predictor of OOD accuracy, it is not decisive—Table 4 shows that many methods improve robustness across multiple distribution shifts, and recent experiments in NLP provide further counterexamples (Hendrycks et al., 2020a). Finally, Synthetic ⇏ Real has clear counterexamples given that DeepAugment greatly increases accuracy on ImageNet-R and Real Blurry Images. In summary, some previous hypotheses are implausible, and the Texture Bias hypothesis has the most support.\nOur seven hypotheses presented several conflicting accounts of robustness. What led to this conflict? We suspect it is because robustness is not one scalar like accuracy. The research community is reasonable in judging IID accuracy with a univariate metric like ImageNet classification accuracy, as models with higher ImageNet accuracy reliably have better fine-tuned classification accuracy on other tasks (Kornblith et al., 2018).
In contrast, we argue it is too simplistic to judge OOD accuracy with a univariate metric like, say, ImageNetV2 or ImageNet-C accuracy. Instead we hypothesize that robustness is multivariate. This Multivariate hypothesis means that there is not a single scalar model property that wholly governs natural model robustness.\nIf robustness has many faces, future work should evaluate robustness using many distribution shifts; for example, ImageNet models should at least be tested against ImageNet-C and ImageNet-R. Future work could further characterize the space of distribution shifts. However, due to this paper, there are now more out-of-distribution robustness datasets than there are published robustness methods. Hence the research community should prioritize creating new robustness methods. If our Multivariate hypothesis is true, research should shift toward using multiple tests to develop models that are both robust and safe." }, { "heading": "A ADDITIONAL RESULTS", "text": "ImageNet-R. Expanded ImageNet-R results are in Table 7.\nWSL pretraining on Instagram images appears to yield dramatic improvements on ImageNet-R, but the authors note the prevalence of artistic renditions of object classes on the Instagram platform. While ImageNet’s data collection process actively excluded renditions, we do not have reason to believe the Instagram dataset excluded renditions. On a ResNeXt-101 32×8d model, WSL pretraining improves ImageNet-R performance by a massive 37.5% from 57.5% top-1 error to 24.2%. Ultimately, without examining the training images we are unable to determine whether ImageNet-R represents an actual distribution shift to the Instagram WSL models. 
However, we also observe that with greater controls, that is with ImageNet-21K pretraining, pretraining hardly helped ImageNet-R performance, so it is not clear that more pretraining data improves ImageNet-R performance.\nIncreasing model size appears to automatically improve ImageNet-R performance, as shown in Figure 6. A ResNet-50 (25.5M parameters) has 63.9% error, while a ResNet-152 (60M) has 58.7% error. ResNeXt-50 32×4d (25.0M) attains 62.3% error and ResNeXt-101 32×8d (88M) attains 57.5% error.\nImageNet-C. Expanded ImageNet-C results are in Table 8. We also tested whether model size improves performance on ImageNet-C for even larger models. With a different codebase, we trained ResNet-50, ResNet-152, and ResNet-500 models which achieved 80.6, 74.0, and 68.5 mCE respectively.\nExpanded comparisons between ImageNet-C and Real Blurry Images are in Table 5.\nNetwork | Defocus Blur | Glass Blur | Motion Blur | Zoom Blur | ImageNet-C Blur Mean | Real Blurry Images\nResNet-50 | 61 | 73 | 61 | 64 | 65 | 58.7\n+ ImageNet-21K Pretraining | 56 | 69 | 53 | 59 | 59 | 54.8\n+ CBAM (Self-Attention) | 60 | 69 | 56 | 61 | 62 | 56.5\n+ ℓ∞ Adversarial Training | 80 | 71 | 72 | 71 | 74 | 71.6\n+ Speckle Noise | 57 | 68 | 60 | 64 | 62 | 56.9\n+ Style Transfer | 57 | 68 | 55 | 64 | 61 | 56.7\n+ AugMix | 52 | 65 | 46 | 51 | 54 | 54.4\n+ DeepAugment | 48 | 60 | 51 | 61 | 55 | 54.2\n+ DeepAugment+AugMix | 41 | 53 | 39 | 48 | 45 | 51.7\nResNet-152 (Larger Models) | 67 | 81 | 66 | 74 | 58 | 54.3\nTable 5: ImageNet-C Blurs (Defocus, Glass, Motion, Zoom) vs Real Blurry Images. All values are error rates and percentages. The rank orderings of the models on Real Blurry Images are similar to the rank orderings for “ImageNet-C Blur Mean,” so ImageNet-C’s simulated blurs track real-world blur performance.\nImageNet-A. ImageNet-A (Hendrycks et al., 2019b) is an adversarially filtered test set and is constructed based on existing model weaknesses (see (Wang et al., 2020) for another robustness dataset algorithmically determined by model weaknesses).
This dataset contains examples that are difficult for a ResNet-50 to classify, so examples solvable by simple spurious cues are especially infrequent in this dataset. Results are in Table 9. Notice Res2Net architectures (Gao et al., 2019b) can greatly improve accuracy. Results also show that Larger Models, Self-Attention, and Pretraining help, while Diverse Data Augmentation usually does not help substantially.\nImplications for the Four Method Hypotheses. The Larger Models hypothesis has support with ImageNet-C (+), ImageNet-A (+), ImageNet-R (+), yet does not markedly improve DFR (−) performance. The Self-Attention hypothesis has support with ImageNet-C (+), ImageNet-A (+), yet does not help ImageNet-R (−) and DFR (−) performance. The Diverse Data Augmentation hypothesis has support with ImageNet-C (+), ImageNet-R (+), yet does not markedly improve ImageNet-A (−), DFR (−), nor SVSF (−) performance. The Pretraining hypothesis has support with ImageNet-C (+), ImageNet-A (+), yet does not markedly improve DFR (−) nor ImageNet-R (−) performance." }, { "heading": "B FURTHER DATASET DESCRIPTIONS", "text": "ImageNet-R Classes.
The 200 ImageNet classes and their WordNet IDs in ImageNet-R are as follows.\nGoldfish, great white shark, hammerhead, stingray, hen, ostrich, goldfinch, junco, bald eagle, vulture, newt, axolotl, tree frog, iguana, African chameleon, cobra, scorpion, tarantula, centipede, peacock, lorikeet, hummingbird, toucan, duck, goose, black swan, koala, jellyfish, snail, lobster, hermit crab, flamingo, american egret, pelican, king penguin, grey whale, killer whale, sea lion, chihuahua, shih tzu, afghan hound, basset hound, beagle, bloodhound, italian greyhound, whippet, weimaraner, yorkshire terrier, boston terrier, scottish terrier, west highland white terrier, golden retriever, labrador retriever, cocker spaniels, collie, border collie, rottweiler, german shepherd dog, boxer, french bulldog, saint bernard, husky, dalmatian, pug, pomeranian, chow chow, pembroke welsh corgi, toy poodle, standard poodle, timber wolf, hyena, red fox, tabby cat, leopard, snow leopard, lion, tiger, cheetah, polar bear, meerkat, ladybug, fly, bee, ant, grasshopper, cockroach, mantis, dragonfly, monarch butterfly, starfish, wood rabbit, porcupine, fox squirrel, beaver, guinea pig, zebra, pig, hippopotamus, bison, gazelle, llama, skunk, badger, orangutan, gorilla, chimpanzee, gibbon, baboon, panda, eel, clown fish, puffer fish, accordion, ambulance, assault rifle, backpack, barn, wheelbarrow, basketball, bathtub, lighthouse, beer glass, binoculars, birdhouse, bow tie, broom, bucket, cauldron, candle, cannon, canoe, carousel, castle, mobile phone, cowboy hat, electric guitar, fire engine, flute, gasmask, grand piano, guillotine, hammer, harmonica, harp, hatchet, jeep, joystick, lab coat, lawn mower, lipstick, mailbox, missile, mitten, parachute, pickup truck, pirate ship, revolver, rugby ball, sandal, saxophone, school bus, schooner, shield, soccer ball, space shuttle, spider web, steam locomotive, scarf, submarine, tank, tennis ball, tractor, trombone, vase, violin, military aircraft, wine bottle, 
ice cream, bagel, pretzel, cheeseburger, hotdog, cabbage, broccoli, cucumber, bell pepper, mushroom, Granny Smith, strawberry, lemon, pineapple, banana, pomegranate, pizza, burrito, espresso, volcano, baseball player, scuba diver, acorn.\nn01443537, n01484850, n01494475, n01498041, n01514859, n01518878, n01531178, n01534433, n01614925, n01616318, n01630670, n01632777, n01644373, n01677366, n01694178, n01748264, n01770393, n01774750, n01784675, n01806143, n01820546, n01833805, n01843383, n01847000, n01855672, n01860187, n01882714, n01910747, n01944390, n01983481, n01986214, n02007558, n02009912, n02051845, n02056570,\nn02066245, n02071294, n02077923, n02085620, n02086240, n02088094, n02088238, n02088364, n02088466, n02091032, n02091134, n02092339, n02094433, n02096585, n02097298, n02098286, n02099601, n02099712, n02102318, n02106030, n02106166, n02106550, n02106662, n02108089, n02108915, n02109525, n02110185, n02110341, n02110958, n02112018, n02112137, n02113023, n02113624, n02113799, n02114367, n02117135, n02119022, n02123045, n02128385, n02128757, n02129165, n02129604, n02130308, n02134084, n02138441, n02165456, n02190166, n02206856, n02219486, n02226429, n02233338, n02236044, n02268443, n02279972, n02317335, n02325366, n02346627, n02356798, n02363005, n02364673, n02391049, n02395406, n02398521, n02410509, n02423022, n02437616, n02445715, n02447366, n02480495, n02480855, n02481823, n02483362, n02486410, n02510455, n02526121, n02607072, n02655020, n02672831, n02701002, n02749479, n02769748, n02793495, n02797295, n02802426, n02808440, n02814860, n02823750, n02841315, n02843684, n02883205, n02906734, n02909870, n02939185, n02948072, n02950826, n02951358, n02966193, n02980441, n02992529, n03124170, n03272010, n03345487, n03372029, n03424325, n03452741, n03467068, n03481172, n03494278, n03495258, n03498962, n03594945, n03602883, n03630383, n03649909, n03676483, n03710193, n03773504, n03775071, n03888257, n03930630, n03947888, n04086273, n04118538, n04133789, n04141076, 
n04146614, n04147183, n04192698, n04254680, n04266014, n04275548, n04310018, n04325704, n04347754, n04389033, n04409515, n04465501, n04487394, n04522168, n04536866, n04552348, n04591713, n07614500, n07693725, n07695742, n07697313, n07697537, n07714571, n07714990, n07718472, n07720875, n07734744, n07742313, n07745940, n07749582, n07753275, n07753592, n07768694, n07873807, n07880968, n07920052, n09472597, n09835506, n10565667, n12267677.\nSVSF. The classes are\n• auto shop • bakery • bank • beauty salon • car dealer • car wash • cell phone store\n• dentist • discount store • dry cleaner • furniture store • gas station • gym • hardware store\n• hotel\n• liquor store\n• pharmacy\n• religious institution\n• storage facility\n• veterinary care.\nDeepFashion Remixed. The classes are\n• short sleeve top • long sleeve top • short sleeve outerwear • long sleeve outerwear • vest\n• sling • shorts • trousers • skirt • short sleeve dress\n• long sleep dress\n• vest dress\n• sling dress.\nSize (small, moderate, or large) defines how much of the image the article of clothing takes up. Occlusion (slight, medium, or heavy) defines the degree to which the object is occluded from the camera. Viewpoint (front, side/back, or not worn) defines the camera position relative to the article of clothing. Zoom (no zoom, medium, or large) defines how much camera zoom was used to take the picture." }, { "heading": "C DEEPAUGMENT DETAILS", "text": "Pseudocode. Below is Pythonic pseudocode for DeepAugment. The basic structure of DeepAugment is agnostic to the backbone network used, but specifics such as which layers are chosen for various transforms may vary as the backbone architecture varies. We do not need to train many different image-to-image models to get diverse distortions (Zhang et al., 2018; Lee et al., 2020). We only use two existing models, the EDSR super-resolution model (Lim et al., 2017) and the CAE image compression model (Theis et al., 2017). 
See full code for such details.\nAt a high level, DeepAugment processes each image with an image-to-image network. The image-to-image network’s weights and feedforward activations are distorted with each pass. The distortion is made possible by, for example, negating the network’s weights and applying dropout to the feedforward activations. These modifications were not carefully chosen and demonstrate the utility of mixing together diverse operations without tuning. The resulting image is distorted and saved. This process generates an augmented dataset.\ndef main():\n    net.apply_weights(deepAugment_getNetwork())  # EDSR, CAE, ...\n    for image in dataset:  # May be the ImageNet training set\n        if np.random.uniform() < 0.05:  # Arbitrary refresh prob\n            net.apply_weights(deepAugment_getNetwork())\n        new_image = net.deepAugment_forwardPass(image)\n\ndef deepAugment_getNetwork():\n    weights = load_clean_weights()\n    weight_distortions = sample_weight_distortions()\n    for d in weight_distortions:\n        weights = apply_distortion(d, weights)\n    return weights\n\ndef sample_weight_distortions():\n    distortions = [\n        negate_weights,\n        zero_weights,\n        flip_transpose_weights,\n        ...\n    ]\n    return random_subset(distortions)\n\ndef sample_signal_distortions():\n    distortions = [\n        gelu,\n        negate_signal_random_mask,\n        flip_signal,\n        ...\n    ]\n    return random_subset(distortions)\n\nclass Network():\n    def apply_weights(weights):\n        ...  # Apply given weight tensors to network\n\n    # Clean forward pass. Compare to deepAugment_forwardPass()\n    def clean_forwardPass(X):\n        X = network.block1(X)\n        X = network.block2(X)\n        ...\n        X = network.blockN(X)\n        return X\n\n    # Our forward pass. Compare to clean_forwardPass()\n    def deepAugment_forwardPass(X):\n        # Returns a list of distortions, each of which\n        # will be applied at a different layer.\n        signal_distortions = sample_signal_distortions()\n\n        X = network.block1(X)\n        apply_layer_1_distortions(X, signal_distortions)\n        X = network.block2(X)\n        apply_layer_2_distortions(X, signal_distortions)\n        ...\n        apply_layer_N-1_distortions(X, signal_distortions)\n        X = network.blockN(X)\n        apply_layer_N_distortions(X, signal_distortions)\n\n        return X\nAblations. We run ablations on DeepAugment to understand the contributions from the EDSR and CAE models independently. Table 13 contains results of these experiments on ImageNet-R and Table 12 contains results of these experiments on ImageNet-C. In both tables, “DeepAugment (EDSR)” and “DeepAugment (CAE)” refer to experiments where we only use a single extra augmented training set (+ the standard training set), and train on those images.\nNoise2Net. We show that untrained, randomly sampled neural networks can provide useful deep augmentations, highlighting the efficacy of the DeepAugment approach. While in the main paper we use EDSR and CAE to create DeepAugment augmentations, in this section we explore the use of randomly initialized image-to-image networks to generate diverse image augmentations. We propose a DeepAugment method, Noise2Net.\nIn Noise2Net, the architecture and weights are randomly sampled. Noise2Net is the composition of several residual blocks: Block(x) = x + ε · fΘ(x), where Θ is randomly initialized and ε is a parameter that controls the strength of the augmentation. For all our experiments, we use 4 Res2Net blocks (Gao et al., 2019a) and ε ∼ U(0.375, 0.75). The weights of Noise2Net are resampled at every minibatch, and the dilation and kernel sizes of all the convolutions used in Noise2Net are randomly sampled every epoch.
Hence Noise2Net augments an image by processing it through a randomly sampled network with random weights.\nRecall that in the case of EDSR and CAE, we used networks to generate a static dataset, and then we trained normally on that static dataset. This setup could not be done on-the-fly, because we fed in one example at a time with EDSR and CAE. If we pass the entire minibatch through EDSR or CAE, we will end up applying the same augmentation to all images in the minibatch, reducing stochasticity and augmentation diversity. In contrast, Noise2Net enables us to process batches of images on-the-fly and obviates the need for creating a static augmented dataset.\nIn Noise2Net, each example is processed differently in parallel, so we generate more diverse augmentations in real time. To make this possible, we use grouped convolutions. A grouped convolution with number of groups = N will take a set of kN channels as input, and apply N independent convolutions on channels {1, . . . , k}, {k + 1, . . . , 2k}, . . . , {(N − 1)k + 1, . . . , Nk}. Given a minibatch of size B, we can apply a randomly initialized grouped convolution with N = B groups in order to apply a different random convolutional filter to each element in the batch in a single forward pass. By replacing all the convolutions in each Res2Net block with a grouped convolution and randomly initializing network weights, we arrive at Noise2Net, a variant of DeepAugment. See Figure 7 for a high-level overview of Noise2Net and Figure 8 for sample outputs.\nWe evaluate the Noise2Net variant of DeepAugment on ImageNet-R. Table 13 shows that it outperforms the EDSR and CAE variants of DeepAugment, even though the network architecture is randomly sampled, its weights are random, and the network is not trained. This demonstrates the flexibility of the DeepAugment approach.
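The per-sample trick described above (groups = B, i.e., one independent random filter per batch element) can be illustrated in one dimension with the standard library alone. This is a toy sketch, not the paper's implementation: the real Noise2Net uses 2-D grouped convolutions inside Res2Net blocks, and the kernel size (3) and weight scale used here are arbitrary illustrative choices; only the residual form Block(x) = x + ε · fΘ(x) and the range ε ∼ U(0.375, 0.75) come from the text:

```python
import random

rng = random.Random(0)

def conv1d(signal, kernel):
    """'Same'-padded 1-D convolution of a single sample."""
    k, pad = len(kernel), len(kernel) // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

def noise2net_block(batch, eps):
    """Block(x) = x + eps * f_theta(x), where f_theta is a freshly sampled
    random filter for *each* batch element (the role of groups = B)."""
    out = []
    for x in batch:                          # each sample gets its own random kernel
        kernel = [rng.gauss(0.0, 0.5) for _ in range(3)]
        fx = conv1d(x, kernel)
        out.append([xi + eps * fi for xi, fi in zip(x, fx)])
    return out

batch = [[rng.uniform(-1.0, 1.0) for _ in range(5)] for _ in range(3)]
eps = rng.uniform(0.375, 0.75)               # augmentation strength, as in the paper
augmented = noise2net_block(batch, eps)

assert len(augmented) == len(batch)
assert noise2net_block(batch, 0.0) == batch  # with eps = 0 the block is the identity
```

Stacking several such blocks and resampling their weights every minibatch yields a differently distorted version of every image in the batch in a single forward pass.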
Below is Pythonic pseudocode for training a classifier using the Noise2Net variant of DeepAugment.\nModel | ImageNet-200 (%) | ImageNet-R (%) | Gap\nResNet-50 | 7.9 | 63.9 | 56.0\n+ DeepAugment (EDSR) | 7.9 | 60.3 | 55.1\n+ DeepAugment (CAE) | 7.6 | 58.5 | 50.9\n+ DeepAugment (EDSR + CAE) | 7.5 | 57.8 | 50.3\n+ DeepAugment (Noise2Net) | 7.2 | 57.6 | 50.4\n+ DeepAugment (All 3) | 7.4 | 56.0 | 48.6" } ]
2020
null
SP:1c4488d4b73efbed04b1045b425d7804b405ce1f
[ "This work proposes a new auto-encoder variant based on an Optimal Transport (OT) penalty. While there are many such previous works on OT and auto-encoders, this work proposes a joint OT penalty on the data and latent spaces. As the scalability of computing OT penalties in high dimensions is a concern, the authors address this by restricting to deterministic encoders and decoders in Theorem 1, an extension to joint distributions of Theorem 1 of Tolstikhin 2018. The resulting algorithm amounts to a loss involving L2 penalties for (1) the reconstruction loss, (2) decoded latents (conditional on \"pseudo-inputs\") and real samples, and (3) encoded samples and the conditional latents. Next, experimental results are shown on small-scale datasets (MNIST, Fashion-MNIST, Coil20, subset of CIFAR-10) and compared against the VAE, WAE-{GAN,MMD}, VampPrior, and MIM.", "This paper proposes to treat the encoding and the decoding pairs symmetrically as a solution to OT problems. SWAE matches $p(x_d, z_d)$ and $p(x_e, z_e)$ in a joint manner and shows better latent representation learning and generation. Moreover, the symmetric treatment of encoding and decoding shows an advantage in data denoising." ]
Leveraging the framework of Optimal Transport, we introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs). We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decoder. The resulting algorithm jointly optimizes the modelling losses in both the data and the latent spaces with the loss in the data space leading to the denoising effect. With the symmetric treatment of the data and the latent representation, the algorithm implicitly preserves the local structure of the data in the latent space. To further improve the latent representation, we incorporate a reconstruction loss into the objective, which significantly benefits both the generation and reconstruction. We empirically show the superior performance of SWAEs over the state-of-the-art generative autoencoders in terms of classification, reconstruction, and generation.
[ { "affiliations": [], "name": "WASSERSTEIN AUTOENCODERS" } ]
[ { "authors": [ "Alexander A Alemi", "Ben Poole", "Ian Fischer", "Joshua V Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a broken ELBO", "venue": "arXiv preprint arXiv:1711.00464,", "year": 2017 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Andrei Atanov", "Arsenii Ashukha", "Kirill Struminsky", "Dmitry Vetrov", "Max Welling" ], "title": "The deep weight prior", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yogesh Balaji", "Hamed Hassani", "Rama Chellappa", "Soheil Feizi" ], "title": "Entropic GANs meet VAEs: A statistical approach to compute sample likelihoods in GANs", "venue": null, "year": 2019 }, { "authors": [ "Liqun Chen", "Shuyang Dai", "Yunchen Pu", "Erjin Zhou", "Chunyuan Li", "Qinliang Su", "Changyou Chen", "Lawrence Carin" ], "title": "Symmetric variational autoencoder and connections to adversarial learning", "venue": null, "year": 2018 }, { "authors": [ "Ishan Deshpande", "Yuan-Ting Hu", "Ruoyu Sun", "Ayis Pyrros", "Nasir Siddiqui", "Sanmi Koyejo", "Zhizhen Zhao", "David Forsyth", "Alexander G Schwing" ], "title": "Max-sliced wasserstein distance and its use for GANs", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real NVP", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Olivier Mastropietro", "Alex Lamb", 
"Martin Arjovsky", "Aaron Courville" ], "title": "Adversarially learned inference", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Babak Esmaeili", "Hao Wu", "Sarthak Jain", "Alican Bozkurt", "Narayanaswamy Siddharth", "Brooks Paige", "Dana H Brooks", "Jennifer Dy", "Jan-Willem van de Meent" ], "title": "Structured disentangled representations", "venue": null, "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a local Nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Matthew D Hoffman", "Matthew J Johnson" ], "title": "ELBO surgery: yet another way to carve up the variational evidence lower bound", "venue": "In Workshop in Advances in Approximate Bayesian Inference,", "year": 2016 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P. 
REFERENCES

… Xing. Toward controlled generation of text. In ICML, 2017.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, and Patrick van der Smagt. Learning hierarchical priors in VAEs. In NeurIPS, 2019.
Soheil Kolouri, Se Rim Park, Matthew Thorpe, Dejan Slepcev, and Gustavo K. Rohde. Optimal mass transport: Signal processing and machine-learning applications. IEEE Signal Processing Magazine, 2017.
Soheil Kolouri, Phillip E. Pope, Charles E. Martin, and Gustavo K. Rohde. Sliced-Wasserstein autoencoders. In ICLR, 2019.
Chunyuan Li, Hao Liu, Changyou Chen, Yuchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. ALICE: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017.
Jerry Li, Aleksander Madry, John Peebles, and Ludwig Schmidt. Towards understanding the dynamics of generative adversarial networks. In ICLR, 2017.
Micha Livne, Kevin Swersky, and David J. Fleet. MIM: Mutual information machine. arXiv preprint arXiv:1910.03175, 2019.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008.
Khai Nguyen, Nhat Ho, Tung Pham, and Hung Bui. Distributional sliced-Wasserstein and applications to generative modeling. arXiv preprint arXiv:2002.07367, 2020.
Gabriel Peyré and Marco Cuturi. Computational optimal transport. Foundations and Trends in Machine Learning, 2019.
Yuchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, and Lawrence Carin. VAE learning via Stein variational gradient descent. In NIPS, 2017.
Yuchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, and Lawrence Carin. Adversarial symmetric variational autoencoder. In NIPS, 2017.
Ali Razavi, Aäron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In NeurIPS, 2019.
Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In ICML, 2015.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, and Jason D. Lee. On the convergence and robustness of training GANs with regularized optimal transport. In NeurIPS, 2018.
Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Jakub Tomczak and Max Welling. VAE with a VampPrior. In AISTATS, 2018.
Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In NIPS, 2013.
Arash Vahdat and Jan Kautz. NVAE: A deep hierarchical variational autoencoder. arXiv preprint arXiv:2007.03898, 2020.
Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.
Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 2010.
1 INTRODUCTION

Deep generative models have emerged as powerful frameworks for modelling complex data. Widely used families of such models include Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Variational Autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014), and autoregressive models (Uria et al., 2013; Van Oord et al., 2016). The VAE-based framework has been popular because it yields a bidirectional mapping: it consists of both an inference model (from data to latent space) and a generative model (from latent to data space). With an inference mechanism, VAEs can provide a useful latent representation that captures salient information about the observed data. Such a latent representation can in turn benefit downstream tasks such as clustering, classification, and data generation. In particular, VAE-based approaches have achieved impressive results on challenging real-world applications, including image synthesis (Razavi et al., 2019), natural text generation (Hu et al., 2017), and neural machine translation (Sutskever et al., 2014).

VAEs maximize a tractable variational lower bound on the log-likelihood of the observed data, commonly called the ELBO. Since VAEs focus on modelling the marginal likelihood of the data instead of the joint likelihood of the data and the latent representation, the quality of the latent representation is not well assessed (Alemi et al., 2017; Zhao et al., 2019), which is undesirable for learning useful representations. Beyond the perspective of maximum-likelihood learning of the data, the objective of VAEs is equivalent to minimizing the KL divergence between the encoding and the decoding distributions, where the former models the joint distribution of the observed data and the latent representation induced by the encoder and the latter models the corresponding joint distribution induced by the decoder.
This connection has been revealed in several recent works (Livne et al., 2019; Esmaeili et al., 2019; Pu et al., 2017b; Chen et al., 2018). Due to the asymmetry of the KL divergence, generated samples may well have low probability under the data distribution, which often leads to unrealistic generated samples (Li et al., 2017b; Alemi et al., 2017).

Much work has been proposed to improve VAEs from different perspectives. For example, to enhance the latent expressive power, VampPrior (Tomczak & Welling, 2018), normalizing flows (Rezende & Mohamed, 2015), and Stein VAEs (Pu et al., 2017a) replace the Gaussian distribution imposed on the latent variables with a more sophisticated and flexible distribution. However, these methods all retain the objective of VAEs and therefore cannot alleviate the limitations induced by that objective. To improve the latent representation, Zhao et al. (2019) explicitly include the mutual information between the data and the latent representation in the objective. Moreover, to address the asymmetry of the KL divergence in VAEs, several works (Livne et al., 2019; Chen et al., 2018; Pu et al., 2017b) leverage a symmetric divergence measure between the encoding and the decoding distributions.

Nevertheless, these methods typically involve a sophisticated objective function that either depends on unstable adversarial training or requires a challenging approximation of the mutual information.

In this paper, we leverage Optimal Transport (OT) (Villani, 2008; Peyré et al., 2019) to symmetrically match the encoding and the decoding distributions. OT optimization is generally challenging, particularly in high dimension; we address this difficulty by transforming the OT cost into a simpler form amenable to efficient numerical implementation. Owing to the symmetric treatment of the observed data and the latent representation, the local structure of the data can be implicitly preserved in the latent space.
However, we found that with the symmetric treatment alone, the performance of the generative model may be unsatisfactory. To improve the generative model, we additionally include a reconstruction loss in the objective, which is shown to significantly benefit the quality of both generation and reconstruction.

Our contributions can be summarized as follows. Firstly, we propose a new family of generative autoencoders, called Symmetric Wasserstein Autoencoders (SWAEs). Secondly, we adopt a learnable latent prior, parameterized as a mixture of the conditional priors given learnable pseudo-inputs, which prevents SWAEs from over-regularizing the latent variables. Thirdly, we empirically perform an ablation study of SWAEs in terms of KNN classification, denoising, reconstruction, and sample generation. Finally, we empirically verify, on benchmark tasks, the superior performance of SWAEs over several state-of-the-art generative autoencoders.

2 SYMMETRIC WASSERSTEIN AUTOENCODERS

In this section, we introduce a new family of generative autoencoders, called Symmetric Wasserstein Autoencoders (SWAEs).

2.1 OT FORMULATION

Denote the random vector at the encoder as e := (xe, ze) ∈ X × Z, which contains both the observed data xe ∈ X and the latent representation ze ∈ Z. We call the distribution p(e) = p(xe)p(ze|xe) the encoding distribution, where p(xe) represents the data distribution and p(ze|xe) characterizes an inference model. Similarly, denote the random vector at the decoder as d := (xd, zd) ∈ X × Z, which consists of both the latent prior zd ∈ Z and the generated data xd ∈ X. We call the distribution p(d) = p(zd)p(xd|zd) the decoding distribution, where p(zd) represents the prior distribution and p(xd|zd) characterizes a generative model.
The objective of VAEs is equivalent to minimizing the (asymmetric) KL divergence between the encoding distribution p(e) and the decoding distribution p(d) (see Appendix A.1). To address this limitation of VAEs, we first propose to treat the data and the latent representation symmetrically rather than asymmetrically, by minimizing the p-th Wasserstein distance between p(e) and p(d), leveraging Optimal Transport (OT) (Villani, 2008; Peyré et al., 2019).

OT provides a framework for comparing two distributions in a Lagrangian manner, seeking the minimum cost for transporting one distribution to the other. We focus on the primal problem of OT, whose Kantorovich formulation (Peyré et al., 2019) is given by:

Wc(p(e), p(d)) := inf_{Γ ∈ P(e∼p(e), d∼p(d))} E_{(e,d)∼Γ} c(e, d),   (1)

where P(e ∼ p(e), d ∼ p(d)), called the set of couplings between e and d, denotes the set of joint distributions of e and d with marginals p(e) and p(d), respectively, and c(e, d) : (X, Z) × (X, Z) → [0, +∞] denotes the cost function. When ((X, Z) × (X, Z), d) is a metric space and the cost function is c(e, d) = d^p(e, d) for p ≥ 1, Wp, the p-th root of Wc, is defined as the p-th Wasserstein distance. In particular, it can be proved that the p-th Wasserstein distance is a metric, hence symmetric, and metrizes weak convergence (see, e.g., (Santambrogio, 2015)).

Optimizing equation 1 is computationally prohibitive, especially in high dimension (Peyré et al., 2019). To provide an efficient solution, we restrict attention to deterministic encoders and decoders. Specifically, at the encoder we have the latent representation ze = E(xe) with the function E : X → Z, and at the decoder we have the generated data xd = D(zd) with the function D : Z → X.
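For intuition, between two small empirical distributions with uniform weights, the Kantorovich problem in equation 1 reduces to a one-to-one assignment of sample pairs. The brute-force sketch below (illustrative only, not part of the proposed method; all names are ours) computes the exact OT cost under the squared Euclidean cost:

```python
import itertools

import numpy as np


def empirical_ot_cost(src, tgt):
    """Exact Kantorovich OT cost between two uniform empirical measures,
    each given as an (n, dim) array of samples, under the squared
    Euclidean cost. With uniform weights the optimal coupling is a
    permutation, so we brute-force over permutations (tiny n only)."""
    n = len(src)
    # Pairwise cost matrix C[i, j] = ||src[i] - tgt[j]||^2.
    C = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    best = min(sum(C[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))
    return best / n


# Toy check: identical samples have zero transport cost.
rng = np.random.default_rng(0)
samples = rng.normal(size=(5, 3))
assert np.isclose(empirical_ot_cost(samples, samples), 0.0)
```

For a pure translation of the samples, this matches the known identity W2^2(μ, μ + v) = ‖v‖^2, and the computed cost is symmetric in its arguments, illustrating that the Wasserstein distance is a metric.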
It turns out that, under this deterministic condition, instead of searching for an optimal coupling in the high-dimensional joint space, we can search over conditional distributions p(zd|xe) whose marginal is p(zd).

Theorem 1 Given the deterministic encoder E : X → Z and the deterministic decoder D : Z → X, the OT problem in equation 1 can be transformed to the following:

Wc(p(e), p(d)) = inf_{p(zd|xe)} E_{p(xe)} E_{p(zd|xe)} c(e, d),   (2)

where the observed data follows the distribution p(xe) and the prior follows the distribution p(zd).

The proof of Theorem 1 extends that of Theorem 1 in (Tolstikhin et al., 2018) and is provided in Appendix A.2. If X × Z is the Euclidean space endowed with the Lp norm, then the expression in equation 2 equals the following:

Wc(p(e), p(d)) = inf_{p(zd|xe)} E_{p(xe)} E_{p(zd|xe)} ‖xe − D(zd)‖_p^p + ‖E(xe) − zd‖_p^p,   (3)

where we call the first term of the objective the x-loss and the second term the z-loss. With the above transformation, we decompose the loss in the joint space into losses in both the data and the latent spaces. This decomposition is crucial and allows us to treat the data and the latent representation symmetrically.

The x-loss, i.e., ‖xe − D(zd)‖_p^p, represents the discrepancy in the data space and can be interpreted from two perspectives. Firstly, since D(zd) represents the generated data, the x-loss essentially minimizes the dissimilarity between the observed data and the generated data. Secondly, the x-loss is closely related to the objective of Denoising Autoencoders (DAs) (Vincent et al., 2008; 2010). In particular, DAs aim to minimize the discrepancy between the observed data and a partially destroyed version of it. The corrupted data can be obtained by a stochastic mapping from the original data (e.g., by adding noise). By contrast, the x-loss can be read the same way, with the generated data playing the role of the corrupted data.
This is because the prior sample zd in D(zd) is drawn from the conditional distribution p(zd|xe), which depends on the observed data xe. Consequently, the generated data D(zd), obtained by feeding zd to the decoder, is stochastically related to the observed data xe. With this insight, just like the objective of DAs, the x-loss can induce a denoising effect.

The z-loss, i.e., ‖E(xe) − zd‖_p^p, represents the discrepancy in the latent space. The whole objective in equation 3 hence simultaneously minimizes the discrepancy in the data and the latent spaces. Observe that in equation 3, E(xe) is the latent representation of xe at the encoder, while zd can be thought of as the latent representation of D(zd) at the decoder. With this connection, optimizing equation 3 can preserve the local data structure in the latent space. More specifically, since xe and D(zd) are stochastically dependent, roughly speaking, if two data samples are close to each other in the data space, their corresponding latent representations are also expected to be close. This is due to the symmetric treatment of the data and the latent representation. In Figure 1 we illustrate this effect and compare SWAE with VAE.

Comparison with WAEs (Tolstikhin et al., 2018). The objective in equation 3 minimizes the OT cost between the joint distributions of the data and the latent representation, i.e., Wc(p(e), p(d)), while the objective of WAEs (Tolstikhin et al., 2018) minimizes the OT cost between the marginal distributions of the data, i.e., Wc(p(xe), p(xd)), where p(xd) is the marginal data distribution induced by the decoding distribution p(d). The problem of WAEs is first formulated as an optimization with the constraint p(ze) = p(zd), where p(ze) is the marginal distribution induced by the encoding distribution p(e), and then relaxed by adding a regularizer.
With the deterministic decoder, the final optimization problem of WAEs is as follows:

inf_{p(ze|xe)} E_{p(xe)} E_{p(ze|xe)} c(xe, D(ze)) + λ D(p(ze), p(zd)),   (4)

where D(·, ·) denotes some divergence measure. Comparing equation 4 to equation 3, we see that both methods decompose the loss into losses in the data and the latent spaces. Differently, in equation 4 the first term reflects the reconstruction loss in the data space and the second term represents a distribution-based dissimilarity in the latent space, while in equation 3 the x-loss is closely related to denoising and generation quality, and the z-loss measures a sample-based dissimilarity. Moreover, equation 4 is optimized over the posterior p(ze|xe) with a fixed prior p(zd), while equation 3 is optimized over the conditional prior p(zd|xe) with a potentially learnable prior.

2.2 IMPROVEMENT OF LATENT REPRESENTATION

The objective in equation 3 only seeks to match the encoding and the decoding distributions. Beyond the encoder and decoder structures, there is no explicit constraint on the correlation between the data and the latent representation within each joint distribution. The lack of such a constraint typically results in low reconstruction quality (Dumoulin et al., 2017; Li et al., 2017a). Therefore, we incorporate a reconstruction-based loss into the objective with a controllable coefficient. Additionally, since the dimension of the latent space is usually much smaller than that of the data space, we introduce a weighting parameter to balance the two types of losses. Overall, the objective function can be represented as follows:

inf_{p(zd|xe)} E_{p(xe)} E_{p(zd|xe)} β‖xe − D(zd)‖_p^p + (1 − β)‖xe − D(ze)‖_p^p + α‖E(xe) − zd‖_p^p,   (5)

where ‖xe − D(ze)‖_p^p denotes the reconstruction loss, and β (0 < β < 1) and α (α > 0) are the weighting parameters.
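For concreteness, the mini-batch form of the objective in equation 5 (with p = 2) can be sketched in numpy as follows. This is a sketch only: `encode` and `decode` are placeholders standing in for the encoder and decoder networks, and all names are illustrative:

```python
import numpy as np


def swae_objective(x_e, z_d, encode, decode, beta=0.5, alpha=1.0):
    """Mini-batch estimate of equation 5 with p = 2: the x-loss between
    the observed data and D(z_d), the reconstruction loss through the
    autoencoder, and the z-loss between E(x_e) and the prior samples."""
    z_e = encode(x_e)
    x_loss = ((x_e - decode(z_d)) ** 2).sum(axis=1)
    recon_loss = ((x_e - decode(z_e)) ** 2).sum(axis=1)
    z_loss = ((z_e - z_d) ** 2).sum(axis=1)
    return (beta * x_loss + (1 - beta) * recon_loss + alpha * z_loss).mean()


# Toy check with linear maps standing in for the networks.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))
encode = lambda x: x @ W        # E : X -> Z
decode = lambda z: z @ W.T      # D : Z -> X
x = rng.normal(size=(16, 4))    # a mini-batch of "observed data"
z = rng.normal(size=(16, 2))    # matched prior samples z_d
loss = swae_objective(x, z, encode, decode)
```

As a sanity check, with identity maps and z_d = x_e all three terms vanish, which is exactly the regime the objective rewards.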
The weighting parameter β controls the trade-off between the x-loss and the reconstruction loss, and a smaller value of β generally leads to better reconstruction. To achieve a good balance between generation and reconstruction, β needs to be carefully chosen. In Section 3, we perform an ablation study of SWAEs and show the importance of including the reconstruction loss in the objective for the generative model.

2.3 ALGORITHM

Similar to many VAE-based generative models, we assume that the encoder, the decoder, and the conditional prior are parameterized by deep neural networks. Unlike canonical VAEs, where the prior distribution is simple and fixed in advance, the proposed method adopts a learnable prior. The benefits of a learnable prior, e.g., avoiding over-regularization and hence improving the quality of the latent representation, have been revealed in several recent works (Hoffman & Johnson, 2016; Tomczak & Welling, 2018; Atanov et al., 2019; Klushyn et al., 2019). The conditional prior is related to the marginal prior via E_{xe} p(zd|xe) = p(zd). This suggests designing the prior as a mixture of the conditional distributions, i.e., p*(zd) = (1/N) Σ_{n=1}^{N} p(zd|xe,n), where xe,1, ..., xe,N are the training samples. To avoid over-fitting, similar to (Tomczak & Welling, 2018), we replace the training samples with learnable pseudo-inputs and parameterize the prior distribution p(zd) as pγ(zd) = (1/K) Σ_{k=1}^{K} pγ(zd|uk), where γ denotes the parameters of the conditional prior network, uk ∈ X are the learnable pseudo-inputs, and K is the number of pseudo-inputs. We emphasize that the conditional prior p(zd|xe) (or its approximation p(zd|uk)) is used to obtain the marginal prior p(zd), while the posterior p(ze|xe) is used for inference.
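Sampling from such a mixture prior is straightforward: pick one pseudo-input uniformly at random, then draw from its conditional prior. The sketch below assumes diagonal-Gaussian conditional priors; `prior_mean` and `prior_logvar` are hypothetical stand-ins for the two heads of the conditional prior network:

```python
import numpy as np


def sample_mixture_prior(pseudo_inputs, prior_mean, prior_logvar, n, rng):
    """Draw n samples from p(z_d) = (1/K) * sum_k p(z_d | u_k), where each
    conditional prior is a diagonal Gaussian produced by the prior network."""
    K = len(pseudo_inputs)
    ks = rng.integers(0, K, size=n)               # uniform mixture component
    mu = prior_mean(pseudo_inputs[ks])            # (n, dim_z) means
    std = np.exp(0.5 * prior_logvar(pseudo_inputs[ks]))
    return mu + std * rng.normal(size=mu.shape)   # reparameterized draw


# Toy check: two pseudo-inputs, with linear "networks" standing in
# for the real conditional prior network.
rng = np.random.default_rng(2)
U = np.array([[0.0, 0.0], [4.0, 4.0]])            # learnable pseudo-inputs
mean_net = lambda u: u                            # mean = pseudo-input itself
logvar_net = lambda u: np.full_like(u, -4.0)      # small fixed variance
z = sample_mixture_prior(U, mean_net, logvar_net, 1000, rng)
```

The draws cluster around the two pseudo-input means, so the empirical mean of the samples approaches the mixture mean (2, 2).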
In the experiments, we parameterize the conditional prior as a Gaussian distribution.

We call the proposed generative model Symmetric Wasserstein Autoencoders (SWAEs), as we treat the observed data and the latent representation symmetrically. We summarize the training procedure in Algorithm 1 and show the network architecture in Figure 2. As an example, we define the cost function c(·, ·) as the squared L2 norm.

Algorithm 1: Symmetric Wasserstein Autoencoders (SWAEs)
Require: The number of pseudo-inputs K. The weighting parameters β and α. Initialize the parameters φ, θ, and γ of the encoder network, the decoder network, and the conditional prior network, respectively.
while (φ, θ, γ, {uk}) not converged do
  1. Sample {xe,1, ..., xe,N} from the training dataset.
  2. Find the closest pseudo-input u(n) of each training sample from the set {u1, ..., uK}.
  3. Sample zd,n from the conditional prior pγ(zd|u(n)) for n = 1, ..., N.
  4. Update (φ, θ, γ, {uk}) by descending the cost function
     (1/N) Σ_{n=1}^{N} β‖xe,n − D(zd,n)‖_2^2 + (1 − β)‖xe,n − D(E(xe,n))‖_2^2 + α‖E(xe,n) − zd,n‖_2^2.

Since we use the pseudo-inputs instead of the training samples in the conditional prior, given each training sample we need to find the closest pseudo-input in Step 2. To measure the similarity, we can use, e.g., the L2 norm or the cosine similarity. Since the dimension of the latent space is usually much smaller than that of the data space, to reduce the search time we can alternatively perform Step 2 in the latent space as an approximation. Specifically, we can find the closest latent representation of E(xe,n) from the set {E(u1), ..., E(uK)} and take the corresponding pseudo-input. In our experiments, we found that this approximation results in little performance degradation, which we attribute to the preservation of the local structure explained above.
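Step 2 of Algorithm 1 and its latent-space approximation both amount to a nearest-neighbour search under the squared L2 norm; a minimal numpy sketch (function and variable names are illustrative, not part of the original algorithmic notation):

```python
import numpy as np


def closest_pseudo_inputs(X, U, encode=None):
    """For each training sample in X, return the index of the nearest
    pseudo-input in U under the squared L2 norm. If an encoder is given,
    the search runs in the (lower-dimensional) latent space instead, as
    the cheaper approximation to Step 2 described above."""
    A, B = (encode(X), encode(U)) if encode is not None else (X, U)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    return d2.argmin(axis=1)


# Toy check: samples sitting near pseudo-inputs pick the right ones.
U = np.array([[0.0, 0.0], [10.0, 10.0], [-5.0, 5.0]])  # pseudo-inputs
X = np.array([[9.5, 10.2], [0.1, -0.1]])               # training samples
idx = closest_pseudo_inputs(X, U)
```

Passing an encoder (here a trivial coordinate projection could stand in for E) yields the latent-space variant with the same interface.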
3 EXPERIMENTAL RESULTS

In this section, we compare the performance of the proposed SWAE with several contemporary generative autoencoders, namely VAE (Kingma & Welling, 2014), WAE-GAN (Tolstikhin et al., 2018), WAE-MMD (Tolstikhin et al., 2018), VampPrior (Tomczak & Welling, 2018), and MIM (Livne et al., 2019), using four benchmark datasets: MNIST, Fashion-MNIST, Coil20, and CIFAR10 with a subset of classes (denoted CIFAR10-sub).

3.1 EXPERIMENTAL SETUP

The design of the neural network architecture is orthogonal to that of the algorithm objective and can greatly affect performance (Vahdat & Kautz, 2020). Since MIM has the same network architecture as VampPrior, for a fair comparison we also build SWAE and VAE on the VampPrior network architecture. In particular, VampPrior adopts a hierarchical latent structure with convolutional layers (i.e., convHVAE (L = 2)), where a gating mechanism is utilized as an element-wise non-linearity. The building block of the network structure of VAE and SWAE is the same as that of VampPrior, except that the latent structure is non-hierarchical. Different from SWAE, the prior of VampPrior and MIM is designed as a mixture of the posteriors (instead of a mixture of the conditional priors as in SWAE) conditioned on the learnable pseudo-inputs. The pseudo-inputs in SWAE, VampPrior, and MIM are initialized with training samples. For VampPrior and MIM, the number of pseudo-inputs K is carefully chosen via the validation set. Unlike these two algorithms, for SWAE we found that increasing K improves performance. The setup of K for SWAE, VampPrior, and MIM on all datasets can be found in Appendix A.3. For SWAE, we set the weighting parameter α to 1 in all cases; in Step 2 we use the L2 norm as the similarity measure in the data space.
WAE-GAN and WAE-MMD are WAE-based models, where the divergence measure in the latent space is based on a GAN and the maximum mean discrepancy (MMD), respectively. The network structure of WAE-GAN and WAE-MMD is the same as that used in (Tolstikhin et al., 2018). The prior of VAE, WAE-GAN, and WAE-MMD is set to an isotropic Gaussian. A detailed description of the datasets, the applied network architectures, and the training parameters can be found in Appendix A.3.

3.2 LATENT REPRESENTATION

The latent representation is expected to capture salient features of the observed data and be useful for downstream applications. The considered datasets all come with labels. In the experiment, we use the latent representation for K-Nearest Neighbor (KNN) classification and compare the classification accuracy of 5-NN in Table 1, where dim-z denotes the dimension of the latent space. The results of 3-NN and 10-NN are similar to those of 5-NN and are thus omitted. We found that the classification results of all algorithms on CIFAR10 are unsatisfactory with the current networks (accuracy was around 0.3–0.4; this may be due to the limited expressive power of the shallow network architectures used), so instead we create a subset of CIFAR10 (CIFAR10-sub) that contains 3 classes: bird, cat, and ship.

Since the prior of VAE, WAE-GAN, and WAE-MMD is an isotropic Gaussian, setting dim-z greater than the intrinsic dimensionality of the observed data would force p(ze) to lie on a manifold in the latent space (Tolstikhin et al., 2018). This makes it impossible to match the marginal p(ze) with the prior p(zd) and thus leads to an unsatisfactory latent representation. This concern is borne out particularly on Fashion-MNIST, where the classification accuracy of VAE and WAE-GAN drops dramatically when dim-z is increased. For SWAE, we consider two cases: β = 1 (i.e., without the reconstruction loss) and β = 0.5.
The classification accuracy of SWAE (β = 1) is comparable to that of SWAE (β = 0.5), and both are generally superior to the benchmarks across different values of dim-z.

To further show the structure of the latent representation, we project the latent representation to 2D using t-SNE (Maaten & Hinton, 2008) as the visualization tool. As an example, we show the projection of the latent representation on MNIST in Figure 3. We can see that SWAEs keep the local structure of the observed data in the latent space and lead to tight clusters, which is consistent with our expectation as explained in Section 2.1.

3.3 GENERATION AND RECONSTRUCTION

To generate new data, latent samples are first drawn from the marginal prior distribution p(zd), based on the conditional priors p(zd|uk), and then fed to the decoder. We put the generated images of all methods in Appendix A.4 and report the Fréchet Inception Distance (FID) (Heusel et al., 2017), commonly used for evaluating the quality of generated images, in Table 2. For SWAEs, we observe that the reconstruction loss term is crucial for improving the generation quality, as SWAE (β = 1) generally does not achieve the lowest FID. On MNIST and Fashion-MNIST, the FID of the best SWAE (indicated as β*) is slightly higher than that of WAE-GAN but lower than those of all the other benchmarks. The visual difference between SWAE (β*) and WAE-GAN on MNIST and Fashion-MNIST is, however, negligible. In Section 2.1, we compared the formulation of SWAEs (β = 1) with that of WAEs. In particular, the objective of WAEs includes a distribution-based dissimilarity in the latent space, while the z-loss in SWAEs measures a sample-based dissimilarity. On Coil20 and CIFAR10-sub, SWAE (β*) achieves the lowest FID and generates new images that are visually much better than those generated by the benchmarks.

In Table 3, we compare the reconstruction loss, defined as ‖xe − D(ze)‖_2^2, on the four datasets.
As expected, increasing the value of dim-z reduces the reconstruction loss, but the reduction becomes marginal once dim-z is large enough. Additionally, since a smaller value of β puts more emphasis on the reconstruction-based loss, the reconstruction quality is generally better. We observe that SWAE (β = 0.5) results in the lowest reconstruction loss in all cases. The reconstructed images of all methods are provided in Appendix A.4 for reference. Without the reconstruction loss in the objective, the reconstruction quality of SWAE (β = 1) can be unsatisfactory (e.g., on CIFAR10-sub).

3.4 DENOISING EFFECT WITH SWAE (β = 1)

As discussed in Section 2.1, the x-loss is closely related to the objective of Denoising Autoencoders (DAs). After training, we feed noisy images, obtained by adding zero-mean Gaussian noise with standard deviation 0.3 to the clean test samples, to the encoder. In Figure 4, as an example, we show the reconstructed images on Fashion-MNIST. Since the reconstruction loss is highly related to the dimension of the latent space, for a fair comparison we set dim-z to 80 for all methods. We observe that only SWAE (β = 1) can recover clean images. This observation confirms the denoising effect induced by the x-loss; the resultant latent representation is thus robust to partial destruction of the observed data.

4 RELATED WORK

The objective of VAEs uses the asymmetric KL divergence between the encoding and the decoding distributions (see Appendix A.1). To improve VAEs, several works (Livne et al., 2019; Chen et al., 2018; Pu et al., 2017b) propose symmetric divergence measures in place of the asymmetric KL divergence in VAE-based generative models. For example, MIM (Livne et al., 2019) adopts the Jensen-Shannon (JS) divergence between the encoding and the decoding distributions, together with a regularizer maximizing the mutual information between the data and the latent representation.
Due to the difficulty of estimating the mutual information and the unavailability of the data distribution, an upper bound on the desired loss is proposed. AS-VAE (Pu et al., 2017b) and the follow-up work (Chen et al., 2018) propose a symmetric form of the KL divergence optimized with adversarial training. These methods typically involve a difficult objective, either depending on (unstable) adversarial training or containing a mutual-information term that requires further approximation. In contrast, the proposed SWAEs have a simple objective and do not involve adversarial training.

Compared to VAEs, GANs lack an efficient inference model and are thus incapable of providing the corresponding latent representation given the observed data. To bridge the gap between VAEs and GANs, recent works attempt to integrate an inference mechanism into GANs by symmetrically treating the observed data and the latent representation, i.e., the discriminator is trained to discriminate joint samples in both the data and the latent spaces. In particular, the JS divergence between the encoding and the decoding distributions is deployed in ALI (Dumoulin et al., 2017) and BiGANs (Donahue et al., 2017). To address the non-identifiability issue in ALI (e.g., unfaithful reconstruction), ALICE (Li et al., 2017a) later proposes to regularize ALI using conditional entropy.

Generative modelling is closely related to minimizing a dissimilarity measure between two distributions. As opposed to many other commonly adopted dissimilarity measures, e.g., the JS and the KL divergences, the Wasserstein distances that arise from the OT problem provide a weaker notion of distance between probability distributions (see (Santambrogio, 2015; Peyré et al., 2019; Kolouri et al., 2017) for more background on OT). This is crucial, as in many applications the observed data are essentially supported on a low-dimensional manifold.
In such cases, common dissimilarity measures may fail to provide a useful gradient for training. Consequently, the Wasserstein distances have received a surge of attention for learning generative models (Arjovsky et al., 2017; Balaji et al., 2019; Sanjabi et al., 2018; Kolouri et al., 2019; Patrini et al., 2019; Tolstikhin et al., 2018; Deshpande et al., 2019; Nguyen et al., 2020). In particular, the VAE-based models (Tolstikhin et al., 2018; Kolouri et al., 2019; Patrini et al., 2019) are all based on minimizing the OT cost of the marginal distributions in the data space, differing in how they measure the divergence in the latent space: (Tolstikhin et al., 2018) proposes GAN-based and MMD-based divergences, (Kolouri et al., 2019) adopts the sliced-Wasserstein distance, and (Patrini et al., 2019) exploits the Sinkhorn divergence. Unlike these works, our proposed SWAEs directly minimize the OT cost of the joint distributions of the observed data and the latent representation, with the inclusion of a reconstruction loss for further improving the generative model.

5 CONCLUSION AND FUTURE WORK

We contributed a novel family of generative autoencoders, termed Symmetric Wasserstein Autoencoders (SWAEs), under the framework of OT. We proposed to symmetrically match the encoding and the decoding distributions, with the inclusion of a reconstruction loss for further improving the generative model. We conducted empirical studies on benchmark tasks to confirm the superior performance of SWAEs over state-of-the-art generative autoencoders.

We believe that symmetrically aligning the encoding and the decoding distributions with a proper regularizer is crucial to improving the performance of generative models.
To further enhance the performance of SWAEs, it is worthwhile to exploit other methods for the prior design, e.g., flow-based approaches (Rezende & Mohamed, 2015; Dinh et al., 2014; 2016), and other forms of the reconstruction loss, e.g., the cross entropy.

A APPENDIX

A.1 OBJECTIVE OF VAES

The objective of VAEs is to maximize a tractable variational lower bound on the data log-likelihood, called the Evidence Lower Bound (ELBO):

E_{p(xe)} [ E_{p(ze|xe)}[log p(xe|ze)] − D_KL(p(ze|xe) || p(zd)) ].   (6)

It can also be shown that the objective of VAEs is equivalent to minimizing the KL divergence (or maximizing the negative KL divergence) between the encoding and the decoding distributions (Livne et al., 2019; Esmaeili et al., 2019; Pu et al., 2017b; Chen et al., 2018):

−D_KL(p(xe, ze) || p(xd, zd)) = E_{p(xe, ze)} [ log ( p(xd, zd) / p(ze|xe) ) ] − E_{p(xe)}[log p(xe)].   (7)

The right-hand side of equation 7 differs from equation 6 only by a constant, namely the entropy of the observed data.

A.2 PROOF OF THEOREM 1

The proof extends that of Theorem 1 in (Tolstikhin et al., 2018). In particular, (Tolstikhin et al., 2018) aims to minimize the OT cost of the marginal distributions p(xe) and p(xd), and the proof there is based on the joint probability of three random variables: the observed data, the generated data, and the latent representation. In contrast, we propose to minimize the OT cost of the joint distributions of the observed data and the latent representation induced by the encoder and the decoder. As a result, our proof is based on the joint distribution of four random variables (xe, ze, xd, zd) ∈ X × Z × X × Z. We assume that the joint distribution p(xe, ze, xd, zd) satisfies the following three conditions:

1. e := (xe, ze) ∼ p(xe)p(ze|xe); 2. d := (xd, zd) ∼ p(zd)p(xd|zd); and 3.
xd ⊥⊥ xe | zd (conditional independence).

The first two conditions specify the encoder and the decoder, respectively, and the last condition indicates that, given the latent prior, the generated data and the observed data are independent.

Denote the set of the above joint distributions as P(xe, ze, xd, zd). We have P(xe, ze, xd, zd) ⊆ P(e ∼ p(e), d ∼ p(d)) due to the third condition. If the decoder is deterministic, p(xd|zd) is a Dirac distribution, and thus P(xe, ze, xd, zd) = P(e ∼ p(e), d ∼ p(d)). With this result, we can rewrite the objective of the underlying OT problem as follows:

Wc(p(e), p(d)) = inf_{Γ ∈ P(xe, ze, xd, zd)} E_{(e,d)∼Γ} c(e, d)
= inf_{Γ ∈ P(xe, ze, zd)} E_{(xe, ze, zd)∼Γ} c(e, d)   (8)
= inf_{p(ze|xe), p(zd|xe, ze)} E_{p(xe)} E_{p(ze|xe)} E_{p(zd|xe, ze)} c(e, d)   (9)
= inf_{p(zd|xe)} E_{p(xe)} E_{p(zd|xe)} c(e, d),   (10)

where in equation 8, P(xe, ze, zd) denotes the set of joint distributions of (xe, ze, zd) induced by P(xe, ze, xd, zd), and the equality holds due to the deterministic decoder; equation 10 holds due to the deterministic encoder.

A.3 DATASETS AND NETWORK ARCHITECTURES

In this section, we briefly describe the datasets, the network architectures, and the hyperparameters used in our training algorithm.

• MNIST: The dataset includes 70,000 binarized images of the digits 0 to 9, each of size 28 × 28, with 7,000 images per class. The training set contains 50,000 images, the validation set contains 10,000 images for choosing the best model based on the loss function, and the test set contains 10,000 images.

• Fashion-MNIST: The dataset includes 70,000 binarized images of fashion products in 10 classes. This dataset has the same image size and the same split of training, validation, and test sets as MNIST.

• Coil20: The dataset includes gray-scale images of 20 objects, each image of size 32 × 32.
The training set contains 1040 images, the validation set contains 200 images for choosing the best model based on the loss function, and the test set contains 200 images.\n• CIFAR10-sub: The CIFAR-10 dataset consists of 60, 000 32 × 32 colour images in 10 classes with 6, 000 images per class. There are 40, 000 training, 10, 000 validation, and 10, 000 test images. We randomly select three classes to form the CIFAR10-sub dataset, namely bird, cat, and ship.\n• Celeba: The Celeba dataset is resized to 64 × 64 resolution. The training set contains 162, 770 images, the validation set contains 19, 867 images, and the test set contains 19, 962 images.\nNetwork architecture of SWAE: The building block of the network structure of SWAE is based on VampPrior, called GatedConv2d. GatedConv2d contains two convolutional layers with the gating mechanism utilized as an element-wise non-linearity. The parameters in the function GatedConv2d() represent the number of the input channels, the number of the output channels, kernel size, stride, and padding, respectively. The conditional prior network outputs the mean and the log-variance of a Gaussian distribution, based on which the latent prior is sampled.\n• The structure of the encoder network: GatedConv2d(1,32,7,1,3)GatedConv2d(32,32,3,2,1)-GatedConv2d(32,64,5,1,2)-GatedConv2d(64,64,3,2,1)GatedConv2d(64,6,3,1,1), followed by one fully-connected layer with no activation function.\n• The structure of the conditional prior network: The layers of GatedConv2d are the same as those in the encoder network, which are followed by two fully-connected layers. 
One produces the mean, and the other produces the log-variance with the activation function Hardtanh.\n• The structure of the decoder network: Two fully-connected layers with the gating mechanism, followed by GatedConv2d(1,64,3,1,1)-GatedConv2d(64,64,3,1,1)GatedConv2d(64,64,3,1,1)-GatedConv2d(64,64,3,1,1), followed by a convolutional layer with the activation function Sigmoid.\nThe algorithm is trained by Adam with the learning rate = 0.001, β1 = 0.9, and β2 = 0.999.\nSetup of the number of the pseudo-inputs K: As suggested in (Tomczak & Welling, 2018; Livne et al., 2019) we set the value of K in VampPrior and MIM on MNIST and Fashion-MNIST to 500. We found K = 500 is also suitable for VampPrior and MIM on Coil20, CIFAR10-sub, and Celeba. Unlike VampPrior and MIM, for SWAE we found that increasing K improves the performance and we set K to 4000 on MNIST, Fashion-MNIST, CIFAR10-sub, and Celeba. Coil20 is a relatively small dataset and we set K to 500 for SWAE, VampPrior, and MIM.\nA.4 MORE EXPERIMENTAL RESULTS\nIn this section, we show more experimental results based on the comparison with the benchmarks." } ]
2020
null
SP:89dc84f203effa2b434cdf323ff251043336754e
[ "In this paper, the authors extend the self-supervised 2D jigsaw puzzle solving idea to 3D for self-supervised video representation learning. To make the 3D jigsaw puzzle problem tractable, they propose a two-fold idea. First, they constrain the 3D jigsaw puzzle solution space by factorizing the permutations into time, x, and y dimensions and by grouping pieces. Second, since the constrained 3D jigsaw is still intractable, they propose four surrogate tasks of the 3D jigsaw: 1) LLCD (detecting largest continuous cuboid), 2) CSPC (3D permutation pattern classification), 3) CLSC (contrastive learning over permuted clips), 4) CCMR (measuring the global continuity of the permuted clips)", "The paper presents a novel pretext task for self-supervised video representation learning (SSVRL). The authors design several surrogate tasks for tackling intentionally constructed constrained spatiotemporal jigsaw puzzles. The learned representations during training to solve the surrogate tasks can be transferred to other video tasks. The proposed method shows superior performances than state-of-the-art SSVRL approaches on action recognition and video retrieval benchmarks. " ]
This paper proposes a novel pretext task for self-supervised video representation learning by exploiting spatiotemporal continuity in videos. It is motivated by the fact that videos are spatiotemporal by nature and a representation learned to detect spatiotemporal continuity/discontinuity is thus beneficial for downstream video content analysis tasks. A natural choice of such a pretext task is to construct spatiotemporal (3D) jigsaw puzzles and learn to solve them. However, this task turns out to be intractable. We thus propose Constrained Spatiotemporal Jigsaw (CSJ) whereby the 3D jigsaws are formed in a constrained manner to ensure that large continuous spatiotemporal cuboids exist in a shuffled clip to provide sufficient cues for the model to reason about the continuity. With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Extensive experiments show that our CSJ achieves state-of-the-art on two downstream tasks across various benchmarks.
[]
[ { "authors": [ "Yazan Abu Farha", "Juergen Gall" ], "title": "Ms-tcn: Multi-stage temporal convolutional network for action segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Unaiza Ahsan", "Rishi Madhok", "Irfan Essa" ], "title": "Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition", "venue": "In WACV,", "year": 2019 }, { "authors": [ "Humam Alwassel", "Dhruv Mahajan", "Lorenzo Torresani", "Bernard Ghanem", "Du Tran" ], "title": "Selfsupervised learning by cross-modal audio-video clustering", "venue": "arXiv preprint arXiv:1911.12667,", "year": 2019 }, { "authors": [ "Sagie Benaim", "Ariel Ephrat", "Oran Lang", "Inbar Mosseri", "William T. Freeman", "Michael Rubinstein", "Michal Irani", "Tali Dekel" ], "title": "Speednet: Learning the speediness in videos", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In ICMLE, pp", "year": 2009 }, { "authors": [ "João Carreira", "Eric Noland", "Andras Banki-Horvath", "Chloe Hillier", "Andrew Zisserman" ], "title": "A short note about kinetics-600", "venue": "arXiv preprint arXiv:1808.01340,", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yue Chen", "Yalong Bai", "Wei Zhang", "Tao Mei" ], "title": "Destruction and construction learning for finegrained image recognition", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Ruoyi Du", "Dongliang Chang", "Ayan Kumar Bhunia", "Jiyang Xie", "Yi-Zhe Song", "Zhanyu Ma", "Jun Guo" ], "title": "Fine-grained visual classification via progressive multi-granularity training of jigsaw patches", "venue": null, "year": 2003 }, { "authors": [ "Alaaeldin 
El-Nouby", "Shuangfei Zhai", "Graham W. Taylor", "Joshua M. Susskind" ], "title": "Skip-clip: Selfsupervised spatiotemporal representation learning by future clip order ranking", "venue": null, "year": 1910 }, { "authors": [ "Basura Fernando", "Hakan Bilen", "Efstratios Gavves", "Stephen Gould" ], "title": "Self-supervised video representation learning with odd-one-out networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": null, "year": 2018 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In ICCV Workshop,", "year": 2019 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Memory-augmented dense predictive coding for video representation learning", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Simon Jenni", "Givi Meishvili", "Paolo Favaro" ], "title": "Learning video representations by transforming time", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Longlong Jing", "Xiaodong Yang", "Jingen Liu", "Yingli Tian" ], "title": "Self-supervised spatiotemporal feature learning via video rotation prediction", "venue": "arXiv preprint arXiv:1811.11387,", "year": 2018 }, { "authors": [ "Will Kay", "João Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev", "Mustafa Suleyman", "Andrew Zisserman" ], "title": "The 
kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Self-supervised video representation learning with space-time cubic puzzles", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Joshua Knights", "Anthony Vanderkop", "Daniel Ward", "Olivia Mackenzie-Ross", "Peyman Moghadam" ], "title": "Temporally coherent embeddings for self-supervised video representation learning", "venue": "arXiv preprint arXiv:2004.02753,", "year": 2020 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Hilde Kuehne", "Ali Arslan", "Thomas Serre" ], "title": "The language of actions: Recovering the syntax and semantics of goal-directed human activities", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estibaliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "Hmdb: A large video database for human motion recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2011 }, { "authors": [ "Hsin-Ying Lee", "Jia-Bin Huang", "Maneesh Singh", "Ming-Hsuan Yang" ], "title": "Unsupervised representation learning by sorting sequences", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Dezhao Luo", "Bo Fang", "Yu Zhou", "Yucan Zhou", "Dayan 
Wu", "Weiping Wang" ], "title": "Exploring relations in untrimmed videos for self-supervised learning", "venue": "arXiv preprint arXiv:2008.02711,", "year": 2020 }, { "authors": [ "Dezhao Luo", "Chang Liu", "Yu Zhou", "Dongbao Yang", "Can Ma", "Qixiang Ye", "Weiping Wang" ], "title": "Video cloze procedure for self-supervised spatio-temporal learning", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Antoine Miech", "Dimitri Zhukov", "Jean-Baptiste Alayrac", "Makarand Tapaswi", "Ivan Laptev", "Josef Sivic" ], "title": "Howto100m: Learning a text-video embedding by watching hundred million narrated video clips", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Ishan Misra", "C. Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: Unsupervised learning using temporal order verification", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Marie-Morgane Paumard", "David Picard", "Hedi Tabia" ], "title": "Deepzzle: Solving visual jigsaw puzzles with deep learning and shortest path optimization", "venue": null, "year": 2020 }, { "authors": [ "Rui Qian", "Tianjian Meng", "Boqing Gong", "Ming-Hsuan Yang", "Huisheng Wang", "Serge Belongie", "Yin Cui" ], "title": "Spatiotemporal contrastive video representation learning", "venue": "arXiv preprint arXiv:2008.03800,", "year": 2020 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "venue": "arXiv preprint arXiv:1212.0402,", "year": 2012 }, { "authors": [ "Simon M. Stringer", "Gavin Perry", "Edmund T. Rolls", "J.H. 
Proske" ], "title": "Learning invariant object recognition in the visual system with continuous transformations", "venue": "Biological Cybernetics,", "year": 2006 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Learning video representations using contrastive bidirectional transformer", "venue": "arXiv preprint arXiv:1906.05743,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Linchao Bao", "Shengfeng He", "Yunhui Liu", "Wei Liu" ], "title": "Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Yun-Hui Liu" ], "title": "Self-supervised video representation learning by pace prediction", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Donglai Wei", "Joseph Lim", "Andrew Zisserman", "William T. 
Freeman" ], "title": "Learning and using the arrow of time", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Dejing Xu", "Jun Xiao", "Zhou Zhao", "Jian Shao", "Di Xie", "Yueting Zhuang" ], "title": "Self-supervised spatiotemporal learning via video clip order prediction", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Ceyuan Yang", "Yinghao Xu", "Bo Dai", "Bolei Zhou" ], "title": "Video representation learning with visual tempo consistency", "venue": "arXiv preprint arXiv:2006.15489,", "year": 2020 }, { "authors": [ "Yuan Yao", "Chang Liu", "Dezhao Luo", "Yu Zhou", "Qixiang Ye" ], "title": "Video playback rate perception for self-supervised spatio-temporal representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Chengxu Zhuang", "Tianwei She", "Alex Andonian", "Max Sobol Mark", "Daniel Yamins" ], "title": "Unsupervised learning from video with deep neural embeddings", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Han" ], "title": "2020), we use the first training split as the pre-training dataset and the first testing split for evaluation. HMDB51 (Kuehne et al., 2011) is a relatively small action recognition dataset, consisting of 6,766 videos with 51 categories. It is also divided into three training/testing splits", "venue": "Following Wang et al", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-supervised learning (SSL) has achieved tremendous successes recently for static images (He et al., 2020; Chen et al., 2020) and shown to be able to outperform supervised learning on a wide range of downstream image understanding tasks. However, such successes have not yet been reproduced for videos. Since different SSL models differ mostly on the pretext tasks employed on the unlabeled training data, designing pretext tasks more suitable for videos is the current focus for self-supervised video representation learning (Han et al., 2020; Wang et al., 2020).\nVideos are spatiotemporal data and spatiotemporal analysis is the key to many video content understanding tasks. A good video representation learned from the self-supervised pretext task should therefore capture discriminative information jointly along both spatial and temporal dimensions. It is thus somewhat counter-intuitive to note that most existing SSL pretext tasks for videos do not explicitly require joint spatiotemporal video understanding. For example, some spatial pretext tasks have been borrowed from images without any modification (Jing et al., 2018), ignoring the temporal dimension. On the other hand, many recent video-specific pretext tasks typically involve speed or temporal order prediction (Lee et al., 2017; Wei et al., 2018; Benaim et al., 2020; Wang et al., 2020), i.e., operating predominately along the temporal axis.\nA natural choice for a spatiotemporal pretext task is to solve 3D jigsaw puzzles, whose 2D counterpart has been successfully used for images (Noroozi & Favaro, 2016). Indeed, solving 3D puzzles requires the learned model to understand spatiotemporal continuity, a key step towards video content understanding. However, directly solving a 3D puzzle turns out to be intractable: a puzzle of 3×3×3 pieces (the same size as a Rubik’s cube) can have 27! possible permutations. Video volume even in a short clip is much larger than that. 
Nevertheless, the latest neural sorting models (Paumard et al., 2020; Du et al., 2020) can only handle permutations a few orders of magnitude smaller, so they offer no solution. This is hardly surprising because such a task is daunting even for humans: most people would struggle with a standard Rubik's cube, let alone a much larger one.
In this paper, we propose a novel Constrained Spatiotemporal Jigsaw (CSJ) pretext task for self-supervised video representation learning. The key idea is to form 3D jigsaw puzzles in a constrained manner so that they become solvable. This is achieved by factorizing the permutations (shuffling) into the three spatiotemporal dimensions and then applying them sequentially. This ensures that for a given video clip, large continuous spatiotemporal cuboids exist after the constrained shuffling to provide sufficient cues for the model to reason about spatiotemporal continuity (see Fig. 1(b)(c)). Such large continuous cuboids are also vital for human understanding of video, as revealed in neuroscience and visual studies (Stringer et al., 2006; Chen et al., 2019). Even with the constrained puzzles, solving them directly could still be extremely hard. Consequently, instead of directly solving the puzzles (i.e., recovering the permutation matrix so that each piece can be put back), four surrogate tasks are carefully designed. They are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Concretely, given a video clip shuffled with our constrained permutations, we make sure that the top-2 largest continuous cuboids (LCCs) dominate the clip volume. The level of continuity in the shuffled clip as a whole is thus determined mainly by the volumes of these LCCs, and whether they are in the right order (see Fig. 1(d)(e)) both spatially and temporally.
Our surrogate tasks are thus designed to locate these LCCs and predict their order so that the model learned with these tasks can be sensitive to spatiotemporal continuity both locally and globally.
Our main contributions are three-fold: (1) We introduce a new pretext task for self-supervised video representation learning called Constrained Spatiotemporal Jigsaw (CSJ). To the best of our knowledge, this is the first work on self-supervised video representation learning that leverages spatiotemporal jigsaw understanding. (2) We propose a novel constrained shuffling method to construct easy 3D jigsaws containing large LCCs. Four surrogate tasks are then formulated in place of the original jigsaw solving task. They are much more solvable yet remain effective in learning spatiotemporally discriminative representations. (3) Extensive experiments show that our approach achieves state-of-the-art performance on two downstream tasks across various benchmarks." }, { "heading": "2 RELATED WORK", "text": "Self-supervised Learning with Pretext Tasks Self-supervised learning (SSL) typically employs a pretext task to generate pseudo-labels for unlabeled data via some form of data transformation. According to the transformations used by the pretext task, existing SSL methods for video representation learning can be divided into three categories: (1) Spatial-Only Transformations: Derived from the original image domain (Gidaris et al., 2018), Jing et al. (2018) leveraged spatial-only transformations for self-supervised video representation learning. (2) Temporal-Only Transformations: Misra et al. (2016); Fernando et al. (2017); Lee et al. (2017); Wei et al. (2018) obtained shuffled video frames with temporal-only transformations and then distinguished whether the shuffled frames are in chronological order. Xu et al. (2019) chose to shuffle video clips instead of frames. Benaim et al. (2020); Yao et al. (2020); Jenni et al.
(2020) exploited the speed transformation by determining whether one video clip is accelerated. (3) Spatiotemporal Transformations: There are only a few recent approaches (Ahsan et al., 2019; Kim et al., 2019) that leveraged both spatial and temporal transformations by permuting 3D spatiotemporal cuboids. However, due to the aforementioned intractability of solving spatiotemporal jigsaw puzzles, they only leveraged either temporal or spatial permutations as training signals, i.e., they exploited the two domains independently. Therefore, no true spatiotemporal permutations have been considered in Ahsan et al. (2019); Kim et al. (2019). In contrast, given that both spatial appearances and temporal relations are important cues for video representation learning, the focus of this work is on investigating how to exploit spatial and temporal continuity jointly for self-supervised video representation learning. To that end, our Constrained Spatiotemporal Jigsaw (CSJ) presents the first spatiotemporal-continuity-based pretext task for video SSL, thanks to a novel constrained 3D jigsaw and four surrogate tasks that reason about the continuity in the 3D jigsaw puzzles without solving them directly.
Self-supervised Learning with Contrastive Learning Contrastive learning is another self-supervised learning approach that has become increasingly popular in the image domain (Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020). Recently, it has been incorporated into video SSL as well. Contrastive learning and transformation-based pretext tasks are orthogonal to each other and are often combined, in that different transformed versions of a data sample form the positive set used in contrastive learning. In El-Nouby et al. (2019); Knights et al. (2020); Qian et al. (2020); Wang et al. (2020); Yang et al. (2020), the positive/negative samples were generated based on temporal transformations only.
In contrast, some recent works (Han et al., 2019; 2020; Zhuang et al., 2020) leveraged features from the future frame embeddings or with the memory bank (Wu et al., 2018). They modeled spatiotemporal representations using only contrastive learning without transformations. Contrastive learning is also exploited in one of our surrogate pretext tasks. Different from existing works, we explore the spatiotemporal transformations in the form of CSJ and employ contrastive learning to distinguish different levels of spatiotemporal continuity in shuffled jigsaws. This enables us to learn more discriminative spatiotemporal representations." }, { "heading": "3 CONSTRAINED SPATIOTEMPORAL JIGSAW", "text": "" }, { "heading": "3.1 PROBLEM DEFINITION", "text": "The main goal of self-supervised video representation learning is to learn a video feature representation function f(·) without using any human annotations. A general approach to achieving this goal is to generate a supervisory signal y from an unlabeled video clip x and construct a pretext task P to predict y from f(x). The process of solving the pretext task P encourages f(·) to learn discriminative spatiotemporal representations.\nThe pretext task P is constructed typically by applying to a video clip a transformation function t(·;θ) parameterized by θ and then automatically deriving y from θ, e.g., y can be the type of the transformation. Based on this premise, P is defined as the prediction of y using the feature map of the transformed video clip f(x̃), i.e., P : f(x̃) → y, where x̃ = t(x;θ). For example, in Lee et al. (2017), t(·;θ) denotes a temporal transformation that permutes the four frames of video clip x in a temporal order θ, x̃ = t(x;θ) is the shuffled clip, and the pseudo-label y is defined as the permutation order θ (e.g., 1324, 4312, etc.). The pretext task P is then a classification problem of 24 categories because there are 4! = 24 possible orders." 
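The 24-way classification example from Lee et al. (2017) described above can be sketched as follows; `make_pretext_sample` is a hypothetical helper for illustration, not code from either paper:

```python
from itertools import permutations
import random

# All 4! = 24 possible frame orders; the pseudo-label y is the index of the
# permutation theta used to shuffle the clip.
ORDERS = list(permutations(range(4)))  # [(0,1,2,3), (0,1,3,2), ...]

def make_pretext_sample(clip_frames):
    """clip_frames: list of 4 frames (any objects). Returns (shuffled, y)."""
    y = random.randrange(len(ORDERS))
    shuffled = [clip_frames[i] for i in ORDERS[y]]
    return shuffled, y

frames = ["f0", "f1", "f2", "f3"]
shuffled, y = make_pretext_sample(frames)
assert sorted(shuffled) == sorted(frames) and 0 <= y < 24
```

A classifier trained on such pairs then predicts y from the shuffled clip, which is exactly the 24-category problem described in the text.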
}, { "heading": "3.2 CONSTRAINED PERMUTATIONS", "text": "Solving spatiotemporal video jigsaw puzzles seems to be an ideal pretext task for learning discriminative representation as it requires an understanding of spatiotemporal continuity. After shuffling the pixels in a video clip using a 3D permutation matrix, the pretext task is to recover the permutation matrix. However, as explained earlier, this task is intractable given even moderate video clip sizes. Our solution is to introduce constraints on the permutations. As a result, a new pretext task PCSJ based on Constrained Spatiotemporal Jigsaw (see Fig. 2(a)) is formulated, which is much easier to solve than a random/unconstrained jigsaw.\nSpecifically, our goal is to introduce constraints to the permutations so that the resultant shuffled video clip is guaranteed to have large continuous cuboids (see Fig. 2(a)). Similar to humans (Stringer et al., 2006), having large continuous cuboids is key for a model to understand a 3D jigsaw and therefore to have any chance to solve it. Formally, the volume of a shuffled video clip x̃ are denoted as {T,H,W}, measuring its sizes along the temporal, height, and width dimensions, respectively. A cuboid is defined as a crop of x̃: c = x̃t1:t2,h1:h2,w1:w2 , where t1, t2 ∈ {1, 2, . . . , T}, h1, h2 ∈\n{1, 2, . . . ,H}, w1, w2 ∈ {1, 2, . . . ,W}. If all the jigsaw pieces (smallest video clip unit, e.g. a pixel or a 3D pixel block) in c keep the same relative order as they were in x (before being shuffled), we call the cuboid c as a continuous cuboid ccont. The cuboid’s volume equals (t2 − t1)× (h2 − h1)× (w2 − w1), and the largest continuous cuboid (LCC) ccontmax is the ccont with the largest volume. We introduce two permutation strategies to ensure that the volumes of LCCs are large in relation to the whole video clip volume after our shuffling transformation t(·;θCSJ). 
First, instead of shuffling x in three spatiotemporal dimensions simultaneously, t(·;θCSJ) factorizes the permutations into the three spatiotemporal dimensions and then utilizes them sequentially to generate shuffled clips, e.g., in the order of T,W,H and only once. Note that the volume of the generated x̃ stays the same with different permutation orders (e.g., TWH and HTW ). Second, we shuffle a group of jigsaw pieces together instead of each piece individually along each dimension. Taking spatial shuffling as an example, if there are 8 pieces per frame (along each of the two spatial dimensions), θCSJ could be represented as the permutation from {12345678} to {84567123}. The longest and the secondlongest index ranges are: [2, 5] for coordinates {4567}, and [6, 8] for coordinates {123}. With these two permutation strategies, not only do we have large LCCs, but also they are guaranteed to have clearly separable boundaries (see Fig. 2(b)) with surrounding pieces due to the factorized and grouped permutation design. This means that they are easily detectable." }, { "heading": "3.3 SURROGATE TASKS", "text": "Having permutation constraints preserves more spatiotemporal continuity in the shuffled clip and reduces the amount of possible permutations. But exploiting these constraints to make a neural sorting model tractable is still far from trivial. Instead of solving the jigsaw directly, our PCSJ is thus formulated as four surrogate tasks: Largest Continuous Cuboid Detection (LCCD), Clip Shuffling Pattern Classification (CSPC), Contrastive Learning over Shuffled Clips (CLSC), and Clip Continuity Measure Regression (CCMR). As illustrated in Fig. 2(b), given an unlabeled clip x, we first construct a mini-batch of 8 clips {x̃1, x̃2, ..., x̃8} by shuffling x with different but related constrained permutations (to be detailed later). 
These shuffled clips and the raw clip x are then fed into a 3D CNN model f(·) for spatiotemporal representation learning with a non-local operation (Wang et al., 2018):
f_NL(x̃_i) = NL(f(x̃_i), f(x)), (1)
where NL(·, ·) denotes the non-local operator, and f(x̃_i) and f(x) denote the feature maps of x̃_i and x from the last convolutional layer of f(·), respectively. The resultant feature map f_NL(x̃_i) is further passed through a spatial pooling layer followed by a separate fully-connected layer for each surrogate task. Note that the raw video feature map f(x) is used as guidance through the non-local attention mechanism to help fulfill the tasks. This is similar to humans needing to see the completed jigsaw picture to help solve the puzzle.
Before we detail the four tasks, we first explain how the eight permutations from the same raw clip are generated. First, the factorized and grouped permutations are applied to x to create one shuffled clip. By examining the largest and the second-largest continuous puzzle piece numbers in each dimension ({T, H, W}), we can easily identify the top-2 largest continuous cuboids (LCCs). Next, by varying the relative order of the top-2 LCCs, either in the correct (original) order or the reverse order in each dimension, 2×2×2 = 8 permutations are obtained. By controlling the group size in the permutation, we can make sure that the top-2 LCCs account for a large proportion, say 80%, of the total clip volume. Our four tasks are thus centered around these two LCCs, as they largely determine the overall spatiotemporal continuity of the shuffled clip.
The first task, LCCD, is to locate the top-2 LCCs {ccont_max(j) : j = 1, 2} and is formulated as a regression problem. Given a ground-truth LCC ccont_max(j), a Gaussian kernel is applied to its center to depict the probability of each pixel in x̃ belonging to the LCC.
This leads to a soft mask M^j_LCCD with the same size as x̃: M^j_LCCD is 0 everywhere outside the region of ccont_max(j), and exp(−‖a − a_c‖² / (2σ_g²)) inside the region, where a and a_c denote any pixel and the center point, respectively. σ_g is a hyper-parameter, which is set to 1 empirically. In the training stage, FPN (Lin et al., 2017) is used for multi-level feature fusion. LCCD is optimized using the MSE loss at each point:
L_LCCD = Σ_{j∈{1,2}} Σ_{a∈x̃} MSE(M^j_LCCD(a), M^j_LCCD(a)′), (2)
where MSE(·, ·) denotes the MSE loss function, and M^j_LCCD(a)′ is the prediction for each pixel a.
CSPC is designed to recognize the shuffling pattern of a shuffled clip. As mentioned earlier, the eight shuffled clips in each mini-batch are created from the same raw clip and differ only in the relative order of the top-2 LCCs along each of the three dimensions. There are thus eight permutations depending on the order (correct or reverse) in each dimension. Based on this understanding, CSPC is formulated as a multi-class classification task to classify each shuffled clip into one of these eight classes, which is optimized using the Cross-Entropy (CE) loss:
L_CSPC = Σ_{i∈{0,1,...,7}} CE(l_CSPC[i], l′_CSPC[i]), (3)
where CE(·, ·) denotes the CE loss function and l′_CSPC[i] is the predicted class label of the i-th sample (shuffled clip) in each mini-batch.
The two tasks above emphasize local spatiotemporal continuity understanding. In contrast, CLSC leverages the contrastive loss to encourage global continuity understanding. In particular, since the top-2 LCCs dominate the volume of a clip, it is safe to assume that if their relative order is correct in all three dimensions, the shuffled clip largely preserves continuity compared to the original clip, while all other 7 permutations feature large discontinuity in at least one dimension.
We thus form a contrastive learning task with the original video x and the most continuous shuffled video x̃_i as a positive pair, and x and the rest x̃_j (j ≠ i) as negative pairs. CLSC is optimized using the Noise Contrastive Estimation (NCE) (Tian et al., 2020) loss:

L_CLSC = −log [ exp(sim(f(x), f(x̃_i))/τ) / ( exp(sim(f(x), f(x̃_i))/τ) + Σ_j exp(sim(f(x), f(x̃_j))/τ) ) ], (4)

where sim(·, ·) is defined by the dot product f(x)^⊤ f(x̃_i), and τ is the temperature hyper-parameter. Note that the non-local operator is not used in CLSC.

CCMR is similar to CLSC in that it also enforces global continuity understanding, but differs in that it is a regression task aimed at predicting a global continuity measure. We consider two such measures. Since the total size of the top-2 LCCs {c^cont_max(j) : j = 1, 2} is a good indicator of how continuous a shuffled video clip is, the first measure l_ld directly measures the relative total size of the top-2 LCCs: l_ld = (v(c^cont_max(1)) + v(c^cont_max(2))) / v(x̃), where v(·) represents the volume of a clip/cuboid.

The second measure l^{t/h/w}_hd examines the shuffling degree of x̃ in each dimension, computed as the normalized hamming distance hamming(x̃) / (N_c(N_c − 1)/2), where hamming(·) denotes the hamming distance in each dimension between the original piece sequence and the permuted one, and N_c represents the number of pieces in each dimension, so that N_c(N_c − 1)/2 indicates the maximum possible hamming distance in that dimension. CCMR is optimized using the Mean Squared Error (MSE) loss:

L_CCMR = MSE([l_ld, l^t_hd, l^h_hd, l^w_hd], [l′_ld, l^t′_hd, l^h′_hd, l^w′_hd]), (5)

where l′_ld, l^t′_hd, l^h′_hd, l^w′_hd are the predictions of the model." }, { "heading": "3.4 OVERALL LEARNING OBJECTIVE", "text": "Our entire CSJ framework is optimized end-to-end with the learning objective defined as:

L = σ_1 L_LCCD + σ_2 L_CSPC + σ_3 L_CLSC + σ_4 L_CCMR, (6)

where σ_1, σ_2, σ_3, σ_4 denote the weights for the four losses.
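Returning briefly to the CCMR measures above: the N_c(N_c − 1)/2 normalizer suggests the per-dimension "hamming" distance is counted over pairs of pieces (i.e., order inversions), which is the reading sketched below; treat this as our interpretation rather than a confirmed detail of the paper:

```python
def normalized_shuffle_degree(perm):
    """Per-dimension shuffling measure in the spirit of Eq. (5).

    `perm` lists the original piece indices in their shuffled order. We
    count piece pairs whose relative order is inverted and normalize by
    the maximum N_c * (N_c - 1) / 2, giving a value in [0, 1].
    """
    n = len(perm)
    inversions = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if perm[i] > perm[j]
    )
    return inversions / (n * (n - 1) / 2)
```

Under this reading, an unshuffled dimension scores 0 and a fully reversed one scores 1, matching a regression target in [0, 1].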
We deploy the adaptive weighting mechanism (Kendall et al., 2018) to weight these tasks, and thus there are no free hyper-parameters to tune. We also adopt curriculum learning (Bengio et al., 2009; Korbar et al., 2018) to train our network by shuffling clips from easy to hard. More details are presented in Appendices A.1 and A.2." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND SETTINGS", "text": "We select three benchmark datasets for performance evaluation: UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and Kinetics-400 (K400) (Kay et al., 2017), containing 13K/7K/306K video clips from 101/51/400 action classes, respectively. In the self-supervised pre-training stage, we utilize the first training split of UCF101/HMDB51 and the training split of K400 without using their labels. As in Han et al. (2020), we adopt R2D3D as the backbone network, which is modified from R3D (Hara et al., 2018) with fewer parameters. By fine-tuning the pre-trained model, we can evaluate the SSL performance on a downstream task (i.e., action classification). Following Han et al. (2019); He et al. (2020), two evaluation protocols are used: comparisons against the state of the art follow the more popular fully fine-tuning protocol, while the ablation analysis uses both the linear evaluation and fully fine-tuning protocols. For the experiments on supervised learning, we report top-1 accuracy on the first test split of UCF101/HMDB51 as is standard (Han et al., 2020). More details of the datasets are provided in Appendix B." }, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "Raw videos in these datasets are decoded at a frame rate of 24-30 fps. From each raw video, we start from a randomly selected frame index and sample a consecutive 16-frame video clip with a temporal stride of 4. For data augmentation, we first resize the video frames to 128×171 pixels, from which we extract random crops of size 112×112 pixels.
We also apply random horizontal flipping and random color jittering to the video frames during training. We exploit only the raw RGB video frames as input, and do not leverage optical flow or other auxiliary signals for self-supervised pretraining. We adopt the Adam optimizer with a weight decay of 10−3 and a batch size of 8 per GPU (with a total of 32 GPUs). We deploy cosine annealing learning rate with an initial value of 10−4 and 100 epochs. The jigsaw puzzle piece sizes of {T,H,W} dimensions are set as 1, 4, 4, respectively. A 16×112×112 video clip thus contains 16×28×28 pieces. We set the temperature hyper-parameter τ to 0.07. A dropout of 0.5 is applied to the final layer of each task. More implementation details of the fine-tuning and test evaluation stages can be found in Appendix B." }, { "heading": "4.3 MAIN RESULTS", "text": "Comparison in Action Recognition A standard way to evaluate a self-supervised video representation learning model is to use it to initialize an action recognition model on a small dataset. Specifically, after self-supervised pre-training on UCF101/HMDB51/K400, we exploit the learned backbone for fully fine-tuning on UCF101 and HMDB51, following Han et al. (2020); Wang et al. (2020).\nWe consider one baseline: fully-supervised learning with pre-training on K400. Note that this baseline is commonly regarded as the upper bound of self-supervised representation learning (Alwassel et al., 2019). From Table 1, we have the following observations: (1) Our CSJ achieves state-of-theart performance on both UCF101 and HMDB51. Particularly, with the backbone R2D3D-18 that is weaker than R(2+1)D-18, our CSJ performs comparably w.r.t. Pace on UCF101 but achieves a 10% improvement over Pace on HMDB51. 
(2) By exploiting spatiotemporal transformations for self-supervised representation learning, our CSJ beats both methods with only temporal transformations (†) and methods with both spatial and temporal transformations (‡), as well as those learning spatiotemporal representations (∗) via only contrastive learning (w/o spatiotemporal transformations). (3) Our CSJ also outperforms CBT (Sun et al., 2019), which used datasets ten times larger (K600 (Carreira et al., 2018) + HowTo100M (Miech et al., 2019)) and multiple modalities (RGB+Audio). (4) Our CSJ is the closest to the fully-supervised one (upper bound), validating its effectiveness in self-supervised video representation learning.

Comparison in Video Retrieval We evaluate our CSJ method in the video retrieval task. Following Xu et al. (2019), we extract each video clip's embeddings with the pre-trained model and use each clip in the test set to query the k nearest clips in the training set. The comparative results in Table 2 show that our method outperforms all other self-supervised methods and achieves a new state of the art in video retrieval on UCF101. In particular, our method beats the latest competitor PRP (Yao et al., 2020) on four out of five metrics. This indicates that our proposed CSJ is also effective for video representation learning in video retrieval." }, { "heading": "Tasks Linear Probe Fully Fine-tuning", "text": "" }, { "heading": "4.4 FURTHER EVALUATIONS", "text": "Ablation Study We conduct ablative experiments to validate the effectiveness of the four CSJ surrogate tasks and two additional learning strategies. From Table 3, we can observe that: (1) Self-supervised learning with each of the four tasks shows better generalization than fine-tuning the network from scratch (random initialization). (2) By training over all the four tasks jointly, we can achieve large performance gains (see ‘+LCCD’ vs. ‘CCMR’).
(3) Each additional learning strategy (i.e., adaptive weighting or curriculum learning) leads to a small boost to the performance by 0.3- 0.5%. (4) Our full model achieves a remarkable classification accuracy of 70.4%, demonstrating the effectiveness of our proposed CSJ with only the RGB video stream (without additional optical flow, audio, or text modalities). More ablative analysis can be found in Appendix D.\nVisualization of Attention Maps Fig. 3 visualizes the attention map of the last feature maps from two models fine-tuned on UCF101 with or without adopting our self-supervised pre-training. Since each frame’s attention map involves four adjacent frames, it actually contains spatiotemporal semantic features. We can see that our self-supervised pre-training with CSJ indeed helps to better capture meaningful spatiotemporal information and thus recognize the action categories more correctly.\nVisualization of LCCD Predictions We also demonstrate the visualization of the LCCD predictions from the pre-trained models in Fig. 4. We can observe that solving the LCCD task indeed enables the model to learn the locations of LCCs and understand spatiotemporal continuity, which is a key step towards video content understanding." }, { "heading": "5 CONCLUSION", "text": "We have introduced a novel self-supervised video representation learning method named Constrained Spatiotemporal Jigsaw (CSJ). By introducing constrained permutations, our proposed CSJ is the first to leverage spatiotemporal jigsaw in self-supervised video representation learning. We also propose four surrogate tasks based on our constrained spatiotemporal jigsaws. They are designed to encourage a video representation model to understand the spatiotemporal continuity, a key building block towards video content analysis. 
Extensive experiments were carried out to validate the effectiveness of each of the four CSJ tasks and also show that our approach achieves the state-of-the-art on two downstream tasks across various benchmarks." }, { "heading": "A ADDITIONAL LEARNING STRATEGIES", "text": "" }, { "heading": "A.1 ADAPTIVE WEIGHT", "text": "Formally, our CSJ has two continuous outputs y_1, y_4 from LCCD and CCMR, and two discrete outputs y_2, y_3 from CSPC and CLSC, modeled with Gaussian likelihoods and softmax likelihoods, respectively. The joint loss for these four tasks L(W, σ_1, σ_2, σ_3, σ_4) is:

L(W, σ_1, σ_2, σ_3, σ_4)
= −log [ N(y_1; f^W(x), σ_1^2) · N(y_4; f^W(x), σ_4^2) · softmax(y_2 = c; f^W(x), σ_2) · softmax(y_3 = c; f^W(x), σ_3) ]
= (1/(2σ_1^2)) ||y_1 − f^W(x)||^2 + log σ_1 + (1/(2σ_4^2)) ||y_4 − f^W(x)||^2 + log σ_4 − log p(y_2 | f^W(x), σ_2) − log p(y_3 | f^W(x), σ_3)
≈ (1/(2σ_1^2)) L_1(W) + (1/σ_2^2) L_2(W) + (1/σ_3^2) L_3(W) + (1/(2σ_4^2)) L_4(W) + log σ_1 + log σ_2 + log σ_3 + log σ_4, (7)

where each σ_i is a weight factor that can be learned automatically by the network, and the log likelihood for the output y is defined as:

log p(y = c | f^W(x), σ) = (1/σ^2) f^W_c(x) − log Σ_{c′} exp((1/σ^2) f^W_{c′}(x)). (8)" }, { "heading": "A.2 CURRICULUM LEARNING", "text": "We adopt curriculum learning (Korbar et al., 2018) to train our network by shuffling clips from easy to hard. Let d be the shuffle degree of a shuffled clip x̃, representing the number of continuous cuboids in each dimension. We gradually increase d from 3 to 5 during the training phase to produce more permuted clips. Note that when the video content is ambiguous in one dimension, e.g., a static video clip inflated from an image, there is no temporal variance to learn the transformation. Kim et al. (2019); Noroozi & Favaro (2016) also mentioned this problem as similar-looking ambiguity. To solve this problem, we calculate the variance on each dimension and set a threshold.
If the variance is lower than the threshold, we decrease d from 3 to 1 so that the pieces are not shuffled in the corresponding dimension." }, { "heading": "B DATASETS AND IMPLEMENTATION", "text": "" }, { "heading": "B.1 DETAILS OF DATASETS", "text": "UCF101 (Soomro et al., 2012) is a widely-used dataset in the action recognition task, which contains 13,320 videos with 101 action classes. The dataset is divided into three training/testing splits. In this paper, following prior works (Wang et al., 2020; Han et al., 2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.\nHMDB51 (Kuehne et al., 2011) is a relatively small action recognition dataset, consisting of 6,766 videos with 51 categories. It is also divided into three training/testing splits. Following Wang et al. (2020); Han et al. (2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.\nKinetics-400 (K400) (Kay et al., 2017) is a very large action recognition dataset consisting of 400 human action classes and around 306k videos. In this work, we use the training split of K400 as the pre-training dataset.\nB.2 IMPLEMENTATION DETAILS\nIn the fine-tuning stage, weights of convolutional layers are initialized with self-supervised pretraining, but weights of fully-connected layers are randomly initialized. The whole network is then trained with the cross-entropy loss. The pre-processing and training strategies are the same as in the\nself-supervised pre-training stage, except that the total epochs are 300 and the initial learning rate is 10−3. We use a batch size of 64 per GPU and a total of 8 GPUs for fine-tuning.\nWe follow the standard evaluation protocol (Han et al., 2020) during inference and use ten-crop to take the same sequence length as training from the video. The predicted label of each video is calculated by averaging the softmax probabilities of all clips in the video." 
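Returning to the adaptive weighting of Appendix A.1, the final line of Eq. (7) can be sketched numerically; in a real implementation each log σ_i would be a learnable parameter trained jointly with the network (the function and variable names here are ours):

```python
import math

def adaptive_multitask_loss(losses, log_sigmas, is_regression):
    """Uncertainty-weighted sum of task losses in the spirit of Eq. (7).

    Regression tasks (LCCD, CCMR) are scaled by 1 / (2 * sigma_i^2) and
    classification tasks (CSPC, CLSC) by 1 / sigma_i^2; each task also
    pays a log sigma_i regularizer that keeps sigma_i from growing
    without bound.
    """
    total = 0.0
    for loss, log_s, reg in zip(losses, log_sigmas, is_regression):
        sigma_sq = math.exp(2.0 * log_s)
        scale = 0.5 / sigma_sq if reg else 1.0 / sigma_sq
        total += scale * loss + log_s
    return total

# With all sigma_i = 1 (log sigma_i = 0) the objective reduces to
# 0.5*L1 + L2 + L3 + 0.5*L4.
```

Larger σ_i (higher task uncertainty) down-weights that task's raw loss while the log σ_i term penalizes ignoring it entirely.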
}, { "heading": "C NETWORK ARCHITECTURE", "text": "We deploy the same network backbone R2D3D as Han et al. (2019; 2020), which is a 3D-ResNet (R3D) similar to Hara et al. (2018). The only difference between R2D3D and R3D lies in that R2D3D keeps the first two residual blocks as 2D convolutional blocks while R3D uses 3D blocks. Therefore, the modified R2D3D has fewer parameters (only the last two blocks are 3D convolutions). We present the CNN structure of R2D3D in Table 4." }, { "heading": "D ADDITIONAL ABLATION STUDIES", "text": "" }, { "heading": "D.1 LCCD", "text": "Instead of predicting center points using the detection method, we also design a segmentation method – largest continuous cuboid segmentation (LCCS) – to predict the location of the top-2 LCCs {c^cont_max(j) : j = 1, 2}. The difference between LCCD and LCCS lies in that LCCS is formulated as a segmentation task that discriminates whether a pixel is in the region of c^cont_max(j). Concretely, LCCS predicts a binary mask M^j_LCCS where only points in the region of c^cont_max(j) are set to 1, and all others to 0. As a result, LCCS is optimized using the Cross-Entropy (CE) loss at each point:

L_LCCS = Σ_{j∈{1,2}} Σ_{a∈x̃} CE(M^j_LCCS(a), M^j_LCCS(a)′), (9)

where CE(·, ·) denotes the CE loss function, and M^j_LCCS(a)′ is the predicted class of pixel a.

We report the performance of four different designs of LCCD in Table 5: (1) LCCS: LCCS is used instead of LCCD. (2) LCCD+M_LCCS: The Gaussian mask M_LCCD is substituted by the binary mask M_LCCS, but the task is still optimized using the MSE loss. (3) LCCD + L1: The LCCD task is optimized by the L1 loss. (4) LCCD + MSE: The LCCD task is optimized by the MSE loss. From Table 5, it can be seen that the segmentation task also helps self-supervised representation learning but does not perform as well as LCCD. Also, among the three different settings of LCCD, the MSE loss with the Gaussian map performs the best.
}, { "heading": "D.2 CLSC", "text": "Table 6 above shows the accuracies obtained with different temperatures τ used in contrastive learning. We can observe that: (1) When τ is in the range 1 ∼ 0.07, the accuracy increases with smaller τ. (2) When τ is large (e.g., 1), the accuracy drops considerably. In this work, τ is set to 0.07." }, { "heading": "D.3 CSPC", "text": "In addition to our CSPC with 8 pattern categories (see Sec. 3.3), we consider another two designs: (1) 2 Categories: the shuffled clip is discriminated by whether it has the same relative order of the top-2 LCCs as the raw clip. This is almost the same as CLSC but is optimized by the CE loss. (2) 4 Categories: the shuffled clip is discriminated by how it differs from the raw clip: no difference, spatial-only difference, temporal-only difference, or spatiotemporal difference. From Table 7, we can see that CSPC with 8 categories outperforms the other two designs. These results support our motivation for leveraging spatiotemporal transformations." }, { "heading": "D.4 CCMR", "text": "We report the performance of three different designs of CCMR: (1) ld: the learning degree l_ld is used as supervision, which only contains volume information. (2) hd: the hamming distances l^t_hd, l^h_hd, l^w_hd are used, which contain only the relative order information. (3) ld + hd: both ld and hd are used as supervision. From Table 8, we can see that: First, both ld and hd help the model to learn continuity characteristics during pre-training, and hd outperforms ld by a small margin. Second, our CCMR learns the best representation by combining ld and hd." }, { "heading": "D.5 RESULTS OF DIRECTLY SOLVING CSJ", "text": "We also demonstrate the results of solving the CSJ task directly in Table 9. We randomly shuffle video clips into 4 × 4 × 4 jigsaw puzzles. To recognize the correct permutation, the model solves a (4! × 4! × 4!)-way classification task in the pre-training stage.
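For scale, this label space is vastly larger than the 8 classes of CSPC; assuming one independent permutation of 4 piece groups per dimension, the count is:

```python
from math import factorial

# Label-space size of directly classifying an unconstrained 4x4x4 jigsaw:
# an independent permutation of 4 piece groups in each of T, H, W.
num_direct_classes = factorial(4) ** 3   # 24 per dimension, cubed
```

This gap in label-space size is one intuitive reason the constrained surrogate tasks are easier to optimize than the direct jigsaw.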
We compare the CSJ task with the joint LCCD+CCMR task under the same setting for a fair comparison. Linear evaluation is adopted to show the effectiveness of the different tasks. We can observe from the table that solving LCCD+CCMR jointly is more effective than solving CSJ directly." }, { "heading": "E TEMPORAL ACTION SEGMENTATION", "text": "To show the effectiveness of our CSJ on new downstream tasks, we apply the pre-trained model obtained by our CSJ to temporal action segmentation, which is more challenging than the conventional action recognition and retrieval tasks. Specifically, we choose to compare our CSJ model with the latest competitor MemDPC (Han et al., 2020) on the Breakfast dataset (Kuehne et al., 2014). For a fair comparison, our CSJ model and the MemDPC model adopt the same R2D3D-34 backbone. Due to time constraints, from the original Breakfast dataset, we only use a small subset of 200 long videos as the training set for fine-tuning, and select a few long videos for testing. For temporal action segmentation, we follow the overall framework of MS-TCN (Abu Farha & Gall, 2019), but change its backbone to R2D3D-34 pre-trained by our CSJ or MemDPC.

We present the qualitative results on two test videos in Fig. 5. We can clearly observe that our CSJ outperforms MemDPC on both test videos. In particular, the predictions of our CSJ are much closer to the ground truth, while MemDPC tends to produce unwanted segments for temporal action segmentation: it wrongly recognizes the segment (colored in yellow) in the middle part of the first video as ‘Pour Milk’, and the segment (colored in black) in the last part of the second video as ‘Stir Coffee’. In conclusion, compared to the latest SSVRL method MemDPC, our CSJ can learn more robust features for temporal action segmentation due to its ‘true’ spatiotemporal jigsaw understanding." } ]
2,020
null
SP:02c82e31ddcff1990d5cb3f8ecbb44392cb02892
[ "The paper proposes a framework for efficient architecture search for graphs. This is done by combining a differentiable DARTS-like architecture encoding with a transfer learning method that searches on smaller graphs with similar properties and then transfers to the target graphs. The experiments show that EGAN matches or exceeds both hand-designed and NAS-designed GNNs. Moreover, the method is very fast to run.", "This work proposes an efficient graph neural architecture search method to address the problem of automatically designing GNN architectures for any graph-based task. Compared with existing NAS approaches for GNNs, the authors improve the search efficiency via the following three components: (1) a slim search space consisting only of the node aggregator, layer aggregator, and skip connections; (2) a one-shot search algorithm, which was proposed in previous NAS work; and (3) a transfer learning strategy, which searches architectures for large graphs via sampling proxy graphs. However, the current performance improvement over the human-designed models is marginal, which diminishes their research contribution." ]
Recently, graph neural networks (GNN) have been demonstrated effective in various graph-based tasks. To obtain state-of-the-art (SOTA) data-specific GNN architectures, researchers turn to neural architecture search (NAS) methods. However, it remains a challenging problem to conduct efficient architecture search for GNNs. In this work, we present a novel framework for Efficient GrAph Neural architecture search (EGAN). By designing a novel and expressive search space, an efficient one-shot NAS method based on stochastic relaxation and natural gradient is proposed. Further, to enable architecture search on large graphs, a transfer learning paradigm is designed. Extensive experiments, including node-level and graph-level tasks, are conducted. The results show that the proposed EGAN can obtain SOTA data-specific architectures and reduce the search cost by two orders of magnitude compared to existing NAS baselines.
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Nazanin Alipourfard", "Kristina Lerman", "Hrayr Harutyunyan", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Youhei Akimoto", "Shinichi Shirakawa", "Nozomu Yoshinari", "Kento Uchida", "Shota Saito", "Kouhei Nishida" ], "title": "Adaptive stochastic natural gradient method for one-shot neural architecture search", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Bowen Baker", "Otkrist Gupta", "Nikhil Naik", "Ramesh Raskar" ], "title": "Designing neural network architectures using reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc V. 
Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In ICML,", "year": 2018 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In NeurIPS, pp", "year": 2011 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Gabriele Corso", "Luca Cavalleri", "Dominique Beaini", "Pietro Liò", "Petar Veličković" ], "title": "Principal neighbourhood aggregation for graph nets", "venue": null, "year": 2020 }, { "authors": [ "Paul D Dobson", "Andrew J Doig" ], "title": "Distinguishing enzyme structures from non-enzymes without alignments", "venue": "Journal of molecular biology (JMB),", "year": 2003 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": null, "year": 2018 }, { "authors": [ "Matthias Fey", "Jan E. 
Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLRW,", "year": 2019 }, { "authors": [ "Luca Franceschi", "Mathias Niepert", "Massimiliano Pontil", "Xiao He" ], "title": "Learning discrete structures for graph neural networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In KDD,", "year": 2018 }, { "authors": [ "Yang Gao", "Hong Yang", "Peng Zhang", "Chuan Zhou", "Yue Hu" ], "title": "Graph neural architecture search", "venue": "In IJCAI,", "year": 2020 }, { "authors": [ "Vikas K Garg", "Stefanie Jegelka", "Tommi Jaakkola" ], "title": "Generalization and representational limits of graph neural networks", "venue": null, "year": 2020 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In IJCNN,", "year": 2005 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": null, "year": 1904 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR, pp", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV, pp", "year": 2016 }, { "authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", 
"Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhihao Jia", "Sina Lin", "Rex Ying", "Jiaxuan You", "Jure Leskovec", "Alex Aiken" ], "title": "Redundancy-free computation for graph neural networks", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Shengli Jiang", "Prasanna Balaprakash" ], "title": "Graph neural network architecture search for molecular property prediction, 2020", "venue": null, "year": 2020 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": null, "year": 2016 }, { "authors": [ "Kwei-Herng Lai", "Daochen Zha", "Kaixiong Zhou", "Xia Hu" ], "title": "Policy-gnn: Aggregation optimization for graph neural networks", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Jure Leskovec", "Christos Faloutsos" ], "title": "Sampling from large graphs", "venue": "In KDD, pp", "year": 2006 }, { "authors": [ "Guohao Li", "Guocheng Qian", "Itzel C Delgadillo", "Matthias Muller", "Ali Thabet", "Bernard Ghanem" ], "title": "SGAS: Sequential greedy architecture search", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture", "venue": "search. 
ICLR,", "year": 2019 }, { "authors": [ "Xin Liu", "Haojie Pan", "Mutian He", "Yangqiu Song", "Xin Jiang", "Lifeng Shang" ], "title": "Neural subgraph isomorphism counting", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Ziqi Liu", "Chaochao Chen", "Longfei Li", "Jun Zhou", "Xiaolong Li", "Le Song", "Yuan Qi" ], "title": "Geniepath: Graph neural networks with adaptive receptive paths", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": null, "year": 2017 }, { "authors": [ "Asaf Noy", "Niv Nayman", "Tal Ridnik", "Nadav Zamir", "Sivan Doveh", "Itamar Friedman", "Raja Giryes", "Lihi Zelnik" ], "title": "Asap: Architecture search, anneal and prune", "venue": "In AISTAT,", "year": 2020 }, { "authors": [ "Matheus Nunes", "Gisele L Pappa" ], "title": "Neural architecture search in graph neural networks", "venue": "arXiv preprint arXiv:2008.00077,", "year": 2020 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on Knowledge and Data Engineering (TKDE),", "year": 2009 }, { "authors": [ "Hongbin Pei", "Bingzhe Wei", "Kevin Chen-Chuan Chang", "Yu Lei", "Bo Yang" ], "title": "Geom-gcn: Geometric graph convolutional networks", "venue": null, "year": 2020 }, { "authors": [ "Wei Peng", "Xiaopeng Hong", "Haoyu Chen", "Guoying Zhao" ], "title": "Learning graph convolutional network for skeleton-based human action recognition by neural searching", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Ekagra Ranjan", "Soumya Sanyal", "Partha P Talukdar" ], "title": "Asap: Adaptive structure aware pooling for learning hierarchical graph representations", "venue": "In 
AAAI,", "year": 2020 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "SNAS: stochastic neural architecture search", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural 
networks for web-scale recommender systems", "venue": "In KDD,", "year": 2018 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "Will Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Haoze Wu", "Clark Barrett", "Raghuram Ramanujan", "Jure Leskovec" ], "title": "G2SAT: Learning to generate sat formulas", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Jiaxuan You", "Jure Leskovec", "Kaiming He", "Saining Xie" ], "title": "Graph structure of neural networks", "venue": "In ICML, pp", "year": 2020 }, { "authors": [ "Jiaxuan You", "Zhitao Ying", "Jure Leskovec" ], "title": "Design space for graph neural networks. volume", "venue": null, "year": 2020 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "GraphSaint: Graph sampling based inductive learning method", "venue": null, "year": 2020 }, { "authors": [ "Guo Zhang", "Hao He", "Dina Katabi" ], "title": "Circuit-GNN: Graph neural networks for distributed circuit design", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Lingxiao Zhao", "Leman Akoglu" ], "title": "Pairnorm: Tackling oversmoothing in gnns", "venue": "ICLR,", "year": 2020 }, { "authors": [ "Kaixiong Zhou", "Qingquan Song", "Xiao Huang", "Xia Hu" ], "title": "Auto-GNN: Neural architecture search of graph neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "GAT Veličković" ], "title": "Here we give key explanations to these node aggregators in Table 8. 
For more details, we refer readers to the original papers. Table 7: The operations we use as node and layer aggregators for the search space of EGAN. Details of node aggregators are given in Table 8", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "A.3 DISCUSSION ABOUT RECENT GNN METHODS Very recently there were some new GNN models proposed in the literature, e.g., MixHop (Abu-ElHaija et al., 2019)", "venue": "Geom-GCN (Pei et al.,", "year": 2019 }, { "authors": [ "El-Haija" ], "title": "2019) integrating neighbors of different hops in a GNN layer, or PairNorm (Zhao & Akoglu, 2020) working on the depth of a GNN models, and PNA (Corso et al., 2020) is a more recent GNN model, which proposes a composition of multiple aggregation functions in each GNN", "venue": null, "year": 2020 }, { "authors": [ "Hu" ], "title": "A.4.2 COMPARED METHODS We compare EGAN with two groups of state-of-the-art methods: human-designed GNN architectures and NAS methods for GNN. Human-designed GNNs", "venue": "For more details,", "year": 2020 }, { "authors": [ "• GCN (Kipf", "Welling" ], "title": "2016) proposes a sum aggregator normalized by the degrees of nodes", "venue": "GraphSAGE (Hamilton et al.,", "year": 2017 } ]
[ { "heading": null, "text": "Recently, graph neural networks (GNN) have been demonstrated to be effective in various graph-based tasks. To obtain state-of-the-art (SOTA) data-specific GNN architectures, researchers turn to neural architecture search (NAS) methods. However, it remains a challenging problem to conduct efficient architecture search for GNN. In this work, we present a novel framework for Efficient GrAph Neural architecture search (EGAN). By designing a novel and expressive search space, an efficient one-shot NAS method based on stochastic relaxation and natural gradient is proposed. Further, to enable architecture search in large graphs, a transfer learning paradigm is designed. Extensive experiments, including node-level and graph-level tasks, are conducted. The results show that the proposed EGAN can obtain SOTA data-specific architectures, and reduce the search cost by two orders of magnitude compared to existing NAS baselines." }, { "heading": "1 INTRODUCTION", "text": "Recent years have witnessed the success of graph neural networks (GNN) (Gori et al., 2005; Battaglia et al., 2018) in various graph-based tasks, e.g., recommendation (Ying et al., 2018a), chemistry (Gilmer et al., 2017), circuit design (Zhang et al., 2019), subgraph counting (Liu et al., 2020), and SAT generation (You et al., 2019). To adapt to different graph-based tasks, various GNN models, e.g., GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2018), or GIN (Xu et al., 2019), have been designed in the past five years. Most existing GNN models follow a neighborhood aggregation (or message passing) schema (Gilmer et al., 2017), as shown in the left part of Figure 1, in which the representation of a node in a graph is learned by iteratively aggregating the features of its neighbors.
Despite the broad applications of GNN models, researchers have to put considerable effort into designing proper GNN architectures for different tasks by imposing different relational inductive biases (Battaglia et al., 2018). As pointed out by Battaglia et al. (2018), GNN architectures can support one form of combinatorial generalization given different tasks, i.e., graphs. Then a natural and interesting question can be asked: Can we automatically design state-of-the-art (SOTA) GNN architectures for graph-based tasks? A straightforward solution is to adopt neural architecture search (NAS) approaches, which have shown promising results in automatically designing architectures for convolutional neural networks (CNN) (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2019a; Tan & Le, 2019; You et al., 2020a).
However, it is nontrivial to apply NAS to GNN. The first challenge is to define the search space. One can design a dummy search space that includes as many related parameters as possible, e.g., aggregation functions, number of layers, activation functions, etc., on top of the message passing framework (Eq. (1)). However, this leads to quite a large discrete space; for example, 315,000 possible GNN architectures are generated by including just 12 types of model parameters in You et al. (2020b), which is challenging for any search algorithm. The second challenge is to design an effective and efficient search algorithm. In the literature, reinforcement learning (RL) based and evolutionary-based algorithms have been explored for GNN architecture search (Gao et al., 2020; Zhou et al., 2019; Lai et al., 2020; Nunes & Pappa, 2020). However, they are inherently computationally expensive due to the stand-alone training manner. In the NAS literature, by adopting the weight sharing strategy, one-shot NAS methods are orders of magnitude more efficient than RL based ones (Pham et al., 2018; Liu et al., 2019a; Xie et al., 2019; Guo et al., 2019).
However, the one-shot methods cannot be directly applied to the aforementioned dummy search space, since it remains unknown how to search for some model parameters, like the number of layers and activation functions, by the weight sharing strategy. Therefore, it is a challenging problem to conduct effective and efficient architecture search for GNN.
In this work, we propose a novel framework, called EGAN (Efficient GrAph Neural architecture search), to automatically design SOTA GNN architectures. Motivated by two well-established works (Xu et al., 2019; Garg et al., 2020) showing that the expressive capabilities of GNN models highly rely on the properties of the aggregation functions, a novel search space consisting of node and layer aggregators is designed, which can emulate many popular GNN models. Then, by representing the search space as a directed acyclic graph (DAG) (Figure 1(c)), we design a one-shot framework using the stochastic relaxation and natural gradient method, which can optimize the architecture selection and model parameters in a differentiable manner. To enable architecture search in large graphs, we further design a transfer learning paradigm, which first constructs a proxy graph out of the large graph while preserving its properties, then searches for GNN architectures in the proxy graph, and finally transfers the searched architecture to the large graph. To demonstrate the effectiveness and efficiency of the proposed framework, we apply EGAN to various tasks, from node-level to graph-level ones. The experimental results on ten different datasets show that EGAN can obtain SOTA data-specific architectures for different tasks, and at the same time, reduce the search cost by two orders of magnitude. Moreover, the transfer learning paradigm, to the best of our knowledge, is the first framework to enable architecture search in large graphs.
Notations.
Let G = (V, E) be a simple graph with node features X ∈ R^{N×d}, where V and E represent the node and edge sets, respectively. N represents the number of nodes and d is the dimension of node features. We use N(v) to represent the first-order neighbors of a node v in G, i.e., N(v) = {u ∈ V|(v, u) ∈ E}. Following the literature, we also define Ñ(v) as the neighbor set including the node itself, i.e., Ñ(v) = {v} ∪ {u ∈ V|(v, u) ∈ E}." }, { "heading": "2 RELATED WORKS", "text": "GNN was first proposed in (Gori et al., 2005), and in the past five years different GNN models (Kipf & Welling, 2016; Hamilton et al., 2017; Veličković et al., 2018; Gao et al., 2018; Battaglia et al.,
It will be an interesting question to explore GNN architectures with better combinatorial generalization, thus neural architecture search (NAS) can be a worthwhile approach for this consideration.\nNAS (Baker et al., 2017; Zoph & Le, 2017; Elsken et al., 2018) aims to automatically find SOTA architectures beyond human-designed ones, which have shown promising results in architecture design for CNN and Recurrent Neural Network (RNN) (Liu et al., 2019a; Zoph et al., 2018; Tan & Le, 2019). Existing NAS approaches can be roughly categorized into two groups according to search methods (Bender et al., 2018): i.e. the stand-alone and one-shot ones. The former ones tend to obtain the SOTA architecture from training thousands of architectures from scratch, including reinforcement learning (RL) based (Baker et al., 2017; Zoph & Le, 2017) and evolutionary-based ones (Real et al., 2019), while the latter ones tend to train a supernet containing thousands of architectures based on the weight sharing strategy, and then extract a submodel as the SOTA architecture at the end of the search phase (Pham et al., 2018; Bender et al., 2018; Liu et al., 2019a; Xie et al., 2019). The difference of training paradigms leads to that one-shot NAS methods tend to be orders of magnitude more efficient than the RL based ones.\nRecently, there are several works on architecture search for GNN, e.g., RL based ones (Gao et al., 2020; Zhou et al., 2019; Lai et al., 2020), and evolutionary-based ones (Nunes & Pappa, 2020; Jiang & Balaprakash, 2020), thus all existing works are computationally expensive. Franceschi et al. (2019) proposes to jointly learn the edge probabilities and the parameters of GCN given a graph, thus orthogonal to our work. Besides, Peng et al. (2020) proposes a one-shot NAS method for GCN architectures in the human action recognition task. In Li et al. 
(2020), a sequential greedy search method based on DARTS (Liu et al., 2019a) is proposed to search for GCN architectures; however, it focuses mainly on vision-related tasks, with only one dataset for the conventional node classification task. In this work, to the best of our knowledge, for conventional node-level and graph-level classification tasks, we are the first to design a one-shot NAS method for GNN architecture search, which is thus inherently more efficient than existing NAS methods for GNN. In Appendix A.3, we give more detailed discussions about the comparisons between EGAN and more recent GNN models." }, { "heading": "3 THE PROPOSED FRAMEWORK", "text": "" }, { "heading": "3.1 THE DESIGN OF SEARCH SPACE", "text": "As introduced in Section 2, most existing GNN architectures rely on a message passing framework (Gilmer et al., 2017), which constitutes the backbone of the designed search space in this work. To be specific, a K-layer GNN can be written as follows: the l-th layer (l = 1, · · · ,K) updates h_v for each node v by aggregating its neighborhood as
h^(l)_v = σ(W^(l) · Φ_n({h^(l−1)_u, ∀u ∈ Ñ(v)})), (1)
where h^(l)_v ∈ R^{d_l} represents the hidden features of a node v learned by the l-th layer, and d_l is the corresponding dimension. W^(l) is a trainable weight matrix shared by all nodes in the graph, and σ is a non-linear activation function, e.g., a sigmoid or ReLU. Φ_n is the key component, i.e., a pre-defined aggregation function, which varies across different GNN models.
Thus a dummy search space would include as many related parameters from Eq. (1) as possible. However, this leads to a very large search space, making the search process very expensive. In this work, motivated by two well-established works (Xu et al., 2019; Garg et al., 2020), which show that the expressive capabilities of GNN models highly rely on the properties of aggregation functions, we propose to search for different aggregation functions by simplifying the dummy search space.
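To make Eq. (1) concrete, the following minimal NumPy sketch implements one message-passing layer with mean and sum node aggregators (the function and variable names here are ours, for illustration only, and are not from the paper's implementation):

```python
import numpy as np

def gnn_layer(H, adj, W, agg="mean"):
    """One message-passing layer: h_v = ReLU(W · Phi_n({h_u : u in Ñ(v)})).

    H   : (N, d) node features from the previous layer
    adj : (N, N) binary adjacency matrix (without self-loops)
    W   : (d, d_out) trainable weight matrix shared by all nodes
    """
    N = H.shape[0]
    adj_tilde = adj + np.eye(N)           # Ñ(v) = {v} ∪ N(v): add self-loops
    if agg == "mean":
        deg = adj_tilde.sum(axis=1, keepdims=True)
        M = (adj_tilde @ H) / deg         # mean over Ñ(v), GCN-like
    elif agg == "sum":
        M = adj_tilde @ H                 # sum aggregator, GIN-like
    return np.maximum(M @ W, 0.0)         # sigma = ReLU

# toy graph: 3 nodes on a path 0-1-2, one-hot node features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3)
W = np.ones((3, 2))
out = gnn_layer(H, adj, W)
print(out.shape)  # (3, 2)
```

Stacking K such layers (each with its own W and aggregator choice) gives the K-layer GNN described above; the search space then amounts to choosing the aggregator of each layer.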
For other parameters, we do simple tuning in the re-training stage, which is also a standard practice in existing NAS methods (Liu et al., 2019a; Xie et al., 2019). Then the first component of the proposed search space is the node aggregators, which consist of existing GNN models. To improve the expressive capability, we add the other component, layer aggregators, to combine the outputs of the node aggregators in all layers, which has been demonstrated effective in JK-Network (Xu et al., 2018). Then we introduce the proposed search space, as shown in Figure 1(c), in the following:
• Node aggregators: We choose 12 node aggregators based on popular GNN models, and they are presented in Table 7 in Appendix A.1. The node aggregator set is denoted by O_n.
• Layer aggregators: We choose 3 layer aggregators as shown in Table 7 in Appendix A.1. Besides, we have two more operations, IDENTITY and ZERO, related to skip-connections. Instead of requiring skip-connections between all intermediate layers and the final layer as in JK-Network, in this work, we generalize this option by proposing to search for the existence of a skip-connection between each intermediate layer and the last layer. To connect, we choose IDENTITY, and ZERO otherwise. The layer aggregator set is denoted by O_l and the skip operation set by O_s.
To further guarantee that the K-hop neighborhood can always be accessed, we add one more constraint: the output of the node aggregator in the last layer should always be used as the input of the layer aggregator. Thus, for a K-layer GNN architecture, we need to search for K − 1 IDENTITY or ZERO choices for the skip-connection options." }, { "heading": "3.2 DIFFERENTIABLE ARCHITECTURE SEARCH", "text": "Following existing NAS works (Liu et al., 2019a; Xie et al., 2019), we represent the search space by a directed acyclic graph (DAG), as shown in Figure 1(c), where nodes represent embeddings, and edges represent operations between the two end nodes.
Then the intermediate nodes are
x_j = Σ_{i<j} Õ_{i,j}(x_i), (2)
where Õ_{i,j} is the selected operation at edge (i, j). In our work, each edge corresponds to an operation in the search space, and we represent it with a distribution p_α(Z), which generates the one-hot random variable Z_{i,j} multiplied by the operation edge O_{i,j} in the DAG. Then the intermediate nodes in each child graph are represented by
x_j = Σ_{i<j} Õ_{i,j}(x_i) = Σ_{i<j} (Z_{i,j})^T O_{i,j}(x_i). (3)
Note that in our framework, as shown in Figure 1(c), for each node aggregator, the input is from the previous one, and for the layer aggregators, the inputs are the outputs of all node aggregators.
Following the setting in Zoph & Le (2017) and Gao et al. (2020), the objective of the framework is
E_{Z∼p_α(Z)}[R(Z)] = E_{Z∼p_α(Z)}[L_W(Z)], (4)
where R(Z) represents the reward, which is defined by the training loss L_W(Z) in our framework. W represents the model parameters. In the GNN literature, node-level or graph-level classification tasks are commonly used, thus the cross-entropy loss is chosen, leading to a differentiable function L_W(Z). To make use of the differentiable nature of L_W(Z), we design a differentiable search method to optimize Eq. (4). To be specific, we use the Gumbel-Softmax (Maddison et al., 2017; Xie et al., 2019; Noy et al., 2020) to relax the discrete architecture distribution to be continuous and differentiable with the reparameterization trick:
Z^k_{i,j} = f_{α_{i,j}}(G^k_{i,j}) = exp((log α^k_{i,j} + G^k_{i,j})/λ) / Σ^n_{l=0} exp((log α^l_{i,j} + G^l_{i,j})/λ), (5)
where Z_{i,j} is the softened one-hot random variable for operation selection at edge (i, j), G^k_{i,j} = − log(− log(U^k_{i,j})) is the k-th Gumbel random variable, and U^k_{i,j} is a uniform random variable. α_{i,j} is the architecture parameter. λ is the temperature of the softmax, which is steadily annealed to be close to zero (Xie et al., 2019; Noy et al., 2020).
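The relaxation in Eq. (5), combined with the mixed operation of Eq. (3), can be sketched as follows (a hypothetical NumPy illustration under our own naming; the actual framework applies this to tensors of hidden node features with learned architecture parameters α):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(log_alpha, lam):
    """Softened one-hot sample Z for one edge, via the trick in Eq. (5)."""
    U = rng.uniform(1e-9, 1.0, size=log_alpha.shape)
    G = -np.log(-np.log(U))                 # Gumbel(0, 1) noise
    logits = (log_alpha + G) / lam
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

# three toy candidate operations on one edge of the DAG
ops = [lambda x: x + 1.0, lambda x: 2.0 * x, lambda x: x ** 2]
log_alpha = np.log(np.array([0.2, 0.5, 0.3]))  # architecture parameters

x_i = 3.0
Z = gumbel_softmax(log_alpha, lam=0.5)
x_j = sum(z * op(x_i) for z, op in zip(Z, ops))  # Eq. (3): x_j = Z^T O(x_i)

# Z lies on the probability simplex (sums to 1 up to float error);
# as lam -> 0, Z approaches a one-hot vector selecting a single operation.
```

Because Z is a smooth function of α, the sampled architecture is differentiable w.r.t. the architecture parameters, which is what allows α and W to be trained jointly by gradient descent.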
Then we can use gradient descent methods to optimize the operation parameters and model parameters together in an end-to-end manner. The gradients are given in Appendix A.2.
To improve the search efficiency, we further design an adaptive stochastic natural gradient method to update the architecture parameters in an end-to-end manner following Akimoto et al. (2019). To be specific, the update of α at the m-th iteration is given as:
α_{m+1} = α_m − ρ H^{−1} ∇_α L, (6)
where ρ is the step size, and H is the Fisher matrix, which can be computed as H = E_{p_{α_m}}[p̄_α(Z) p̄_α(Z)^T] with p̄_α(Z) := ∇ log p_α(Z). After the search process terminates, we derive the final architecture by retaining the edge with the largest weight, the same as in DARTS (Liu et al., 2019a) and SNAS (Xie et al., 2019). To make the final results more robust, the search process is executed 5 times with different random seeds, thus 5 architectures are obtained at the end of the search phase. Then the 5 architectures are re-trained from scratch with some hyperparameter tuning on the validation set, and the one with the best validation accuracy is returned as the final architecture." }, { "heading": "3.3 TRANSFER LEARNING PARADIGM", "text": "As introduced in Hamilton et al. (2017); Jia et al. (2020), when training GNN in large graphs, in each batch, the time and memory cost increases exponentially w.r.t. K, i.e., the number of GNN layers, with a worst case of O(|V|). Obviously, this is extremely expensive in large graphs for any GNN model. The situation becomes more severe when conducting architecture search in large graphs, since we are training a supernet emulating various GNN models. Therefore, it tends to be infeasible to directly search for architectures in large graphs.
Motivated by transferring searched blocks and cells in CNN architectures from small to large datasets in the NAS literature (Zoph et al., 2018; Tan & Le, 2019), we propose to address the above problem by transfer learning (Pan & Yang, 2009).
The core idea of the transferable architecture search is to find a small proxy graph G_proxy (the source), then search in the proxy graph, and finally tune the searched architecture {α_n, α_s, α_l} in the large graph G (the target). However, in order to make the architecture transfer feasible, we need the proxy graph to share the same property distribution as the original graph (Pan & Yang, 2009). Since the properties vary across different graphs, it is not suitable to transfer across different datasets, like from CIFAR-10 to ImageNet for image classification (Zoph et al., 2018). Thus, we propose to sample a smaller graph from the original one, and then apply the transfer paradigm. Many distribution-preserving sampling schemas have been proposed in an established work (Leskovec & Faloutsos, 2006), e.g., random sampling by node or edge, or sampling by PageRank. In this work, we adopt the Random PageRank Node (RPN) sampling method in Leskovec & Faloutsos (2006), which is empirically demonstrated to preserve the properties when sampling no less than 15% of the nodes from the original graph. In Section 4.2.2, the experimental results show that this transfer paradigm empirically works well." }, { "heading": "3.4 COMPARISONS WITH EXISTING NAS METHODS FOR GNN", "text": "In this section, as shown in Table 1, we emphasize the advantages of EGAN in the following:
• In terms of the search space, EGAN can emulate more GNN models than existing methods.
Moreover, by only focusing on the “aggregation function”, the total size of the search space is smaller than those of the previous methods, which also contributes to the efficiency improvements of EGAN.
• In terms of the search algorithm, the one-shot nature of EGAN makes it much more efficient than stand-alone methods, e.g., GraphNAS.
• The transfer paradigm of EGAN makes it feasible to conduct architecture search in large graphs. Therefore, the advantages of EGAN over existing NAS methods are evident, especially in efficiency." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we conduct extensive experiments to demonstrate the effectiveness and efficiency of the proposed EGAN, including node-level and graph-level tasks." }, { "heading": "4.1 SETUP", "text": "Datasets. For node-level tasks, we have three settings: transductive, inductive, and transfer. The task is node classification on 8 datasets, which are given in Appendix A.4.1. For graph-level tasks, the task is whole-graph classification on 2 datasets, which are given in Appendix A.5.
Baselines. In general, we have two types of baselines: human-designed GNN models and NAS methods. Details of the baselines are given in Appendix A.4.2. Note that for the NAS baselines in Table 1, we only use GraphNAS (Gao et al., 2020) and its variant using weight sharing (GraphNAS-WS). The search spaces of Auto-GNN (Zhou et al., 2019) and Nunes & Pappa (2020) are the same as GraphNAS, but their code is not available. Jiang & Balaprakash (2020) is a concurrent work that appeared while we were preparing this submission, and we will compare with it when its code is available. Policy-GNN (Lai et al., 2020) searches for a different number of layers per node in a selected GNN base model, i.e., GCN or GAT, thus it can be seen as an orthogonal work to EGAN."
}, { "heading": "4.2 PERFORMANCE COMPARISON", "text": "" }, { "heading": "4.2.1 PERFORMANCE COMPARISONS IN TRANSDUCTIVE AND INDUCTIVE TASKS", "text": "From Table 2, we can see that EGAN consistently obtains better or comparable performance compared to all baselines, which demonstrates the effectiveness of the proposed framework. In other words, with EGAN, we can obtain SOTA data-specific GNN architectures. When comparing EGAN with the GraphNAS methods, the performance gain is evident. We attribute this to the expressive search space and the differentiable search algorithm. We further visualize the searched architectures in Figure 2, from which we can see that the searched architectures vary per dataset. More figures can be found in Figures 6 and 7 in Appendix A.6.3." }, { "heading": "4.2.2 PERFORMANCE COMPARISONS IN TRANSFER TASKS", "text": "For the transfer learning experiments, we use two large graphs, i.e., Reddit and Arxiv. As introduced in Section 3.3, we first sample two smaller graphs (15% of the nodes), run EGAN in these two smaller graphs to obtain the optimal architectures, then transfer the searched architectures to the original graphs, and finally report the test results on them. In terms of baselines, we only report the results of those baselines that are able to run, or are reported to run, on the two large graphs, i.e., Reddit and Arxiv. More details are given in Appendix A.4.4.
The results are shown in Table 3, from which we can see that the search cost is reduced by two orders of magnitude, from 3 GPU days1 to less than a minute, which demonstrates the superior efficiency of the proposed transfer learning paradigm compared to direct architecture search in large graphs. Moreover, the performance of the transferred architectures is better than or close to that of the baselines, which demonstrates the effectiveness of the transfer learning paradigm."
}, { "heading": "4.2.3 GRAPH-LEVEL TASKS", "text": "The results of the graph-level task are shown in Table 4, and we can see that the performance trend is similar to that in the node-level tasks. The searched architectures are shown in Figure 7 in Appendix A.6.3, which also shows that they are data-specific. Note that the global pooling method, or the readout function, in whole-graph learning can also be incorporated into the search space of EGAN, thus it can also be learned. We leave this for future work. Taking into consideration the results of all tasks, the effectiveness of the proposed EGAN is demonstrated." }, { "heading": "4.3 SEARCH EFFICIENCY", "text": "In this section, we conduct some experiments to show the superior efficiency of EGAN over the NAS baselines. For simplicity, we only use the four commonly used datasets in the node-level task, i.e., Cora, CiteSeer, PubMed, and PPI.
Firstly, we record the running time of each method during the search phase, which represents the search cost of the NAS methods. The results are given in Table 5, from which we can see that the search cost of EGAN is two orders of magnitude smaller than those of the NAS baselines.
Secondly, we show the trend of the test accuracy w.r.t. the running time of different methods during the search phase. In each epoch, we obtain the current best model, and report the test accuracy after retraining it from scratch. The result on Cora is shown in Figure 3, from which we can observe that EGAN can obtain architectures with better performance more quickly than the NAS baselines. More figures are shown in Figure 5 in Appendix A.6.2.
1Note that we stop the search process after 3 days.
Taking these results into consideration, the efficiency advantage of EGAN is significant, which is mainly attributed to the one-shot training paradigm introduced in Section 3.
Figure 3: Test accuracy w.r.t. elapsed time on Cora."
}, { "heading": "4.4 ABLATION STUDY", "text": "In this section, we present two ablation studies on EGAN.
Firstly, we show the importance of the layer aggregators in the designed search space by running EGAN in the search space without the layer aggregators. The results are shown in Table 6, from which we can see that the performance consistently drops on all datasets except Computer when removing the layer aggregators. This observation aligns with the results in JK-Network (Xu et al., 2018) that the performance of GNN models can be improved by adding an extra layer.
Secondly, we show the influence of K, i.e., the number of GNN layers, in the search space, for which we conduct experiments with EGAN by varying K ∈ {1, 2, 3, 4, 5, 6} and show the test accuracy in Figure 4. The results suggest that as the number of layers increases, the test accuracy may decrease. Considering the computational resources, a 3-layer architecture is a good choice for the backbone of EGAN in our experiments.
Figure 4: Test accuracy w.r.t. different Ks." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we propose EGAN, an effective and efficient framework for graph neural architecture search. By designing a novel and expressive search space, we propose a one-shot NAS framework based on stochastic relaxation and natural gradient. Further, to enable architecture search in large graphs, we design a transfer learning paradigm. We conduct extensive experiments, including node-level and graph-level tasks, which demonstrate the effectiveness and efficiency of EGAN compared to various baselines.
Based on this work, we show that NAS approaches can obtain data-specific GNN architectures, which supports one form of combinatorial generalization for GNN models.
For future work, we will explore more aspects regarding the combinatorial generalization of GNN models beyond the aggregation functions, like the construction of the graph, or the number of layers as done in Policy-GNN (Lai et al., 2020), as introduced in Battaglia et al. (2018)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS OF NODE AGGREGATORS", "text": "As introduced in Section 3.1, we have 12 types of node aggregators, which are based on well-known existing GNN models: GCN Kipf & Welling (2016), GraphSAGE Hamilton et al. (2017), GAT Veličković et al. (2018), GIN Xu et al. (2019), and GeniePath Liu et al. (2019b). Here we give key explanations of these node aggregators in Table 8. For more details, we refer readers to the original papers." }, { "heading": "A.2 GRADIENTS OF L", "text": "Here we give the gradients of L_W(Z) w.r.t. the hidden embeddings, model parameters, and architecture parameters, as follows:
∂L/∂x_j = Σ_{m>j} ∂L/∂x_m · Z^T_m · ∂O_m(x_j)/∂x_j,
∂L/∂W^k_{i,j} = ∂L/∂x_j · Z^k_{i,j} · ∂O_{i,j}(x_i)/∂W^k_{i,j}, (7)
∂L/∂α^k_{i,j} = ∂L/∂x_j · O^T_{i,j}(x_i) · (δ(k′ − k) − Z_{i,j}) · Z^k_{i,j} · 1/(λ α^k_{i,j}).
For the full derivation, we refer readers to the appendix of Xie et al. (2019)." }, { "heading": "A.3 DISCUSSION ABOUT RECENT GNN METHODS", "text": "Very recently, some new GNN models were proposed in the literature, e.g., MixHop (Abu-El-Haija et al., 2019), Geom-GCN (Pei et al., 2020), GraphSaint (Zeng et al., 2020), DropEdge (Rong et al., 2020), PairNorm (Zhao & Akoglu, 2020), and PNA (Corso et al., 2020). We did not include all these works in the search space of EGAN, since they can be regarded as orthogonal to EGAN in the GNN literature, which means they are very likely to further increase the performance when integrated with EGAN.
As shown in Eq.
(1), the embedding of a node v in the l-th layer of a K-layer GNN is computed as:
h^l_v = σ(W^l · AGG_node({h^{l−1}_u, ∀u ∈ Ñ(v)})).
From this computation process, we can summarize four key components of a GNN model: the aggregation function (AGG_node), the number of layers (l), the neighbors (Ñ(v)), and the hyperparameters (σ, dimension size, etc.), which decide the properties of a GNN model, e.g., the model capacity, expressive capability, and prediction performance.
To be specific, EGAN mainly focuses on the aggregation functions, which affect the expressive capability of GNN models. GraphSaint mainly focuses on neighbor selection in each layer, thus the “neighbor explosion” problem can be addressed. Geom-GCN also focuses on neighbor selection, constructing a novel neighborhood set in a continuous space, thus the structural information can be utilized. DropEdge mainly focuses on the depth of a GNN model, i.e., the number of layers, which can alleviate the over-smoothing problem as the number of GNN layers increases. Besides these three works, there are other works on the four key components, like MixHop (Abu-El-Haija et al., 2019) integrating neighbors of different hops in a GNN layer, or PairNorm (Zhao & Akoglu, 2020) working on the depth of a GNN model; PNA (Corso et al., 2020) is a more recent GNN model, which proposes a composition of multiple aggregation functions in each GNN layer. Therefore, all these works can be integrated as a whole to improve each other. For example, the DropEdge or Geom-GCN methods can further help EGAN in constructing more powerful GNN models. With PNA, we can use our framework to help search for the combinations of multiple aggregation functions in a single GNN layer, or include the PNA aggregator in our search space to see whether it can further enhance the final performance. This is what we mean by “orthogonal” works to EGAN.
Since we mainly focus on the aggregation functions in this work, we only compare the GNN variants with different aggregation functions.
Moreover, one purpose of this work is not to design the most powerful search space including all aspects, but to demonstrate that the proposed EGAN, including the search space and search method, provides an alternative option to enhance GNN architecture search. We believe the application of NAS to GNN has unique value, and the proposed EGAN can benefit the GNN community." }, { "heading": "A.4 EXPERIMENT SETUP OF NODE-LEVEL TASKS", "text": "" }, { "heading": "A.4.1 DATASETS", "text": "Transductive Setting. Only a subset of nodes in one graph are used as training data, and other nodes are used as validation and test data. For this setting, we use three benchmark datasets: Cora, CiteSeer, and PubMed. They are all citation networks, provided by (Sen et al., 2008). Each node represents a paper, and each edge represents the citation relation between two papers. The datasets contain bag-of-words features for each paper (node), and the task is to classify papers into different subjects based on the citation networks.
Besides the three benchmark datasets, we use two more datasets: Coauthor CS and Amazon Computers, provided by (Shchur et al., 2018). Coauthor CS is a co-authorship graph where nodes are authors, who are connected by an edge if they co-author a paper. Given paper keywords for each author’s papers as node features, the task is to map each author to their most active field of study. Amazon Computers is a segment of the Amazon co-purchase graph, where nodes represent goods, which are linked by an edge if these goods are frequently bought together. Node features encode product reviews as bag-of-words feature vectors, and class labels are given by the product category.
For all 5 datasets, we split the nodes in all graphs into 60%, 20%, 20% for training, validation, and test.
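The 60%/20%/20% split described above can be sketched as a random partition of node indices (our own illustrative code; the paper does not specify its exact splitting procedure):

```python
import numpy as np

def split_nodes(num_nodes, seed=0, frac=(0.6, 0.2, 0.2)):
    """Randomly partition node indices into train/val/test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_nodes)          # shuffle all node indices
    n_train = int(frac[0] * num_nodes)
    n_val = int(frac[1] * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# e.g., Cora has 2708 nodes
train, val, test = split_nodes(2708)
print(len(train), len(val), len(test))  # 1624 541 543
```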
For the transductive task, we use the classification accuracy as the evaluation metric.\nInductive Setting In this task, we use a number of graphs as training data, and other completely unseen graphs as validation/test data. For this setting, we use the PPI dataset, provided by (Hamilton et al., 2017), on which the task is to classify protein functions. PPI consists of 24 graphs, with each corresponding to a human tissue. Each node has positional gene sets, motif gene sets and immunological signatures as features and gene ontology sets as labels. 20 graphs are used for training, 2 graphs are used for validation, and the rest are used for testing. For the inductive task, we use Micro-F1 as the evaluation metric.\nTransfer Setting In this task, we use two datasets, Reddit and Arxiv, which are two orders of magnitude larger than Cora and CiteSeer in the number of nodes, as shown in Table 9. The Reddit dataset is provided by Hamilton et al. (2017), and the task is to predict the community to which different Reddit posts belong. Reddit is an online discussion forum where users comment on different topics. Each node represents a post, and each edge represents a link between two posts that are commented on by the same user. The dataset contains word vectors as node features. The graph is constructed from Reddit posts made in the month of September 2014, and we follow the same settings as the original paper (Hamilton et al., 2017), which uses the first 20 days for training and the remaining days for test (with 30% used for validation).\nThe Arxiv dataset is constructed based on the citation network between all papers, and we use the specific version of ogbn-arxiv, provided by the recent open graph benchmark (OGB) project (Hu et al., 2020), where the task is to predict the 40 subject areas of Arxiv CS papers, e.g., cs.AI, cs.LG. Each node (paper) has a 128-dimensional feature vector obtained by averaging the embeddings of words in its title and abstract.
All papers are also associated with the year in which they were published. The dataset is split by time. To be specific, papers published before 2017 are used as training data, while those in 2018 and 2019 are used, respectively, as the validation and test sets. For more details, we refer readers to Hu et al. (2020)." }, { "heading": "A.4.2 COMPARED METHODS", "text": "We compare EGAN with two groups of state-of-the-art methods: human-designed GNN architectures and NAS methods for GNN.\nHuman-designed GNNs. We use the following popular GNN architectures:\n• GCN (Kipf & Welling, 2016) proposes a sum aggregator normalized by the degrees of nodes.\n• GraphSAGE (Hamilton et al., 2017) proposes a scalable graph neural network with different aggregators: Mean, Sum, Max-Pool, LSTM.\n• GAT (Veličković et al., 2018) proposes the attention aggregators, and it has different variants according to the attention functions: GAT, GAT-SYS, GAT-LINEAR, GAT-COS, GAT-GENERALIZED-LINEAR. The details of these attention functions are given in (Gao et al., 2020).\n• GIN (Xu et al., 2019) proposes to use a Multi-Layer Perceptron (MLP) as the aggregator.\n• LGCN (Gao et al., 2018) proposes to automatically select the top-K neighbors for each node, and uses the 1-D regular convolutional operation as the aggregator.\n• GeniePath (Liu et al., 2019b) proposes a composition of attention and LSTM-style aggregators, which can learn adaptive neighborhoods for each node.\n• Geom-GCN (Pei et al., 2020) proposes a geometric bi-level aggregation schema over structure-aware neighbors in a continuous space and neighbors given by the adjacency matrix.\nFor models with variants, like different aggregators in GraphSAGE or different attention functions in GAT, we report the best performance across the variants. Besides, we extend the idea of JK-Network (Xu et al., 2018) to all models except for LGCN, and obtain 5 more variants: GCN-JK, GraphSAGE-JK, GAT-JK, GIN-JK, GeniePath-JK, each of which adds an extra layer.
In the experiments, we only report the better performance of each GNN and its JK variant, which is denoted by the original name, as shown in Tables 2 and 4, respectively.\nFor LGCN, we use the code released by the authors 2. For other baselines, we use the popular open-source library PyTorch Geometric (PyG) (Fey & Lenssen, 2019) 3 (Version: 1.6.0), which implements various GNN models. For all baselines, we train each model from scratch with the best hyperparameters obtained on the validation datasets, and get the test performance. We repeat this process 5 times, and report the final mean accuracy with standard deviation.\nNAS methods for GNN. We consider the following methods:\n• Random search (denoted as “Random”) is a simple baseline in NAS, which uniformly randomly samples architectures from the search space, and keeps track of the optimal architecture during the search process.\n• Bayesian optimization4 (denoted as “Bayesian”) (Bergstra et al., 2011) is a popular sequential model-based global optimization method for hyper-parameter optimization, which uses a tree-structured Parzen estimator as the metric for expected improvement.\n• GraphNAS5 (Gao et al., 2020) is a NAS method for searching GNN architectures, which is based on reinforcement learning (Zoph & Le, 2017).\nRandom and Bayesian search over the designed search space of EGAN, where a GNN architecture is sampled from the search space and trained until convergence to obtain the validation performance. 5000 models are sampled in total, and the architecture with the best validation performance is trained from scratch with hyperparameter tuning on the validation dataset to obtain the test performance. For GraphNAS, we set the number of epochs for training the RL-based controller to 5000; in each epoch, a GNN architecture is sampled, trained for enough epochs (600 ∼ 1000 depending on the dataset), and used to update the parameters of the RL-based controller.
In the end, we sample 10 architectures and collect the top 5 architectures that achieve the best validation accuracy. Then the best architecture is trained from scratch. Again, we perform hyperparameter tuning based on the validation dataset, and report the best test performance. Note that we repeat the re-training of the architecture five times, and report the final mean accuracy with standard deviation.\nNote that for both human-designed GNN models and NAS methods, for a fair comparison and a good balance between efficiency and performance, we set the number of GNN layers to 3, which is an empirically good choice in the literature (Veličković et al., 2018; Liu et al., 2019b).\nA.4.3 IMPLEMENTATION DETAILS OF EGAN\nOur experiments are run with PyTorch (version 1.6.0) on a 2080Ti GPU (memory: 12GB, CUDA version: 10.2). We implement EGAN on top of the code provided by PyG (version 1.6.0) and SNAS 6. For all tasks, we run the search process 5 times with different random seeds, and retrieve the top-1 architecture each time. After collecting the best architecture out of the 5 top-1 architectures on the validation datasets, we repeat 5 times the process of re-training it, fine-tuning hyperparameters on the validation data, and reporting the test performance. Again, the final mean accuracy with standard deviation is reported.\nIn the training stage, we set the search epoch to 600 for all datasets except PPI (150), the learning rate to 0.005, the L2 norm to 0.0005, and the dropout rate to 0.5. In the fine-tuning stage, each architecture is trained from scratch for 600 epochs." }, { "heading": "A.4.4 BASELINES IN TRANSFER LEARNING", "text": "On Reddit, we follow the same settings as GraphSAGE in the original paper (Hamilton et al., 2017), except that we use a 3-layer GNN as the backbone to search, while GraphSAGE uses 2 layers.
Since the performance on Reddit is only reported in GraphSAGE (Hamilton et al., 2017) and JK-Network (Xu et al., 2018), we use them as the human-designed architectures. Note that the authors of JK-Network did not release their implementation, thus we use the implementation in the PyG framework. To keep it consistent, we also use the implementation of GraphSAGE in the PyG framework. For Arxiv, we follow the same setting as the OGB project (Hu et al., 2020), where only two human-designed architectures, GCN and GraphSAGE, are tested, thus we use these two as the human-designed architectures.\nFor NAS baselines, we use the same NAS approaches as in the transductive and inductive tasks: Random, Bayesian, GraphNAS, and GraphNAS-WS, and run them directly on the original graphs, i.e., Reddit and Arxiv. However, since the code of GraphNAS and GraphNAS-WS crashed due to out-of-memory errors, we only report the performance of Random and Bayesian. Besides, we report the search cost in terms of GPU hours to compare the efficiency of different methods.\nFootnotes: 2 https://github.com/HongyangGao/LGCN; 3 https://github.com/rusty1s/pytorch_geometric; 4 https://github.com/hyperopt/hyperopt; 5 https://github.com/GraphNAS/GraphNAS; 6 https://github.com/SNAS-Series/SNAS-Series." }, { "heading": "A.4.5 PERFORMANCE COMPARISONS OF GNN MODELS AND JK VARIANTS", "text": "The detailed performance of the GNN baselines and their JK variants in Section 4.2.1 is shown in Table 10." }, { "heading": "A.5 EXPERIMENT SETUP OF GRAPH-LEVEL TASKS", "text": "" }, { "heading": "A.5.1 DATASETS", "text": "In this section, we evaluate EGAN on the graph classification task with two datasets, D&D and PROTEINS, provided in Dobson & Doig (2003). Both datasets consist of protein graphs. In the D&D dataset, nodes represent the amino acids and two nodes are connected iff the distance between them is less than 6 Å. In the PROTEINS dataset, nodes are secondary structure elements, and edges connect nodes that are neighbors in the amino acid sequence or close in 3D space.
More information is shown in Table 11." }, { "heading": "A.5.2 BASELINES", "text": "Besides the GNN baselines in the node-level tasks, we use three more methods, which use hierarchical pooling to learn whole-graph representations given the embeddings of all nodes. DiffPool (Ying et al., 2018b), SAGPool (Lee et al., 2019) and ASAP (Ranjan et al., 2020) are recent methods based on a hierarchical pooling schema, which learn node embeddings with node aggregators and coarsen graphs with pooling aggregators. The final graph embeddings are generated by a readout operation based on the final coarsened graph. DiffPool learns a soft assignment matrix for each node with any GNN method, combined with an entropy regularization and a link prediction objective, so that the coarsened graph can preserve as much information as possible. SAGPool learns node weights with an attention mechanism and keeps the top-k nodes in the pooling layer. ASAP learns a soft cluster assignment matrix for each node with a self-attention mechanism, calculates a fitness score for each cluster, and selects the top-k clusters.\nFor other methods, including the GNN models in the node-level tasks and the NAS methods used in Table 4, to obtain the representation of a whole graph, we use the global sum pooling method at the end of re-training the derived architecture, i.e., the whole-graph representation is obtained by the summation of the embeddings of all nodes: $z = \sum_{i \in V} h_i^{(K)}$, where $K$ is the number of GNN layers.\nIn this section, we use the 10-fold cross-validation accuracy as the evaluation metric, and the implementation details are presented in A.4.3. After finding the best architecture and tuning the hyperparameters, we report the mean accuracy and standard deviation over the 10 folds.\nIn the search stage, we set the search epoch to 150, the learning rate to 0.01, the L2 norm to 0.0005, and the dropout rate to 0.5. In the re-training stage, each architecture is trained from scratch for 100 epochs."
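The global sum pooling readout described above is a one-liner; here is a sketch in NumPy. The helper `batched_sum_pool` mimics PyG-style mini-batching with a graph-indicator vector and is our own illustrative addition, not part of the evaluated pipeline.

```python
import numpy as np

def global_sum_pool(node_embeds):
    # whole-graph representation z = sum_{i in V} h_i^(K)
    return node_embeds.sum(axis=0)

def batched_sum_pool(node_embeds, graph_ids, num_graphs):
    # graph_ids[i] gives the graph that node i belongs to, so several
    # graphs in a mini-batch can be pooled in one shot
    z = np.zeros((num_graphs, node_embeds.shape[1]))
    np.add.at(z, graph_ids, node_embeds)
    return z
```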
}, { "heading": "A.5.3 PERFORMANCE COMPARISONS OF GNN MODELS AND JK VARIANTS", "text": "The detailed performance of the GNN baselines and their JK variants in Section 4.2.3 is shown in Table 12." }, { "heading": "A.6 MORE EXPERIMENTAL RESULTS", "text": "" }, { "heading": "A.6.1 THE ADVANTAGE OF THE PROPOSED SEARCH SPACE", "text": "In Section 3.1, we discuss the advantages of the EGAN search space over that of GraphNAS/AutoGNN. In this section, we conduct experiments to further demonstrate these advantages. To be specific, we run GraphNAS over its own and EGAN’s search space, given the same time budget (20 hours), and compare the final test accuracy of the searched architectures in Table 13. From Table 13, we can see that despite the simplicity of the search space, EGAN can obtain better or at least comparable accuracy to GraphNAS, which means better architectures can be obtained given the same time budget, thus demonstrating the efficacy of the designed search space." }, { "heading": "A.6.2 TEST ACCURACY DURING THE SEARCH PHASE", "text": "In this section, we compare the efficiency of EGAN and the NAS baselines by showing the test accuracy w.r.t. the running time, as shown in Figure 5, from which we can observe that the efficiency improvements are of orders of magnitude, which aligns with the experiments in previous one-shot NAS methods, like DARTS (Liu et al., 2019a)." }, { "heading": "A.6.3 MORE SEARCHED ARCHITECTURES", "text": "" }, { "heading": "A.7 MORE HYPERPARAMETERS", "text": "For all GNN baselines in the node-level tasks, we use the Adam optimizer, and set the learning rate lr = 0.005, the dropout rate p = 0.5, and the L2 norm to 0.0005.
For other parameters, we perform tuning and present the best values in Table 14.\nOn Reddit, the parameters for GraphSAGE and GraphSAGE-JK are as follows: lr = 0.005, dropout p = 0.5, L2 norm 0.0002, K = 3, d = 64, ReLU, epochs = 30. For EGAN, lr = 0.006, dropout p = 0.3, and L2 norm 0.0005.\nOn Arxiv, the parameters of EGAN are as follows: lr = 0.025, dropout p = 0.5, and L2 norm 0.0005.\nFor all searched GNN architectures, the tuned hyperparameters are shown in Table 15." }, { "heading": "A.8 COMPARISONS WITH SNAS AND DARTS", "text": "In this section, to further demonstrate the efficiency of EGAN compared to DARTS (Liu et al., 2019a) and SNAS (Xie et al., 2019), we record the trend of the validation accuracy of the supernet by running them on the same search space during the search phase. These results are shown in Figure 8, from which we can see that EGAN reaches higher validation accuracy more quickly than SNAS and DARTS, which is attributed to the usage of the natural gradient in Eq. (6)." } ]
2020
null
SP:7bcf05b89cb5776ae03592d5619d859e5c8571bc
[ "The manuscript studies the problem of ensemble selection (pruning) where the ensemble consists of deep neural network models. The authors compare different diversity metrics, which they name collectively as Q-metrics, and visualize the accuracies of different ensembles on the CIFAR-10 dataset where the ensembles are stratified by their sizes. Based on their observations, the authors further propose the HQ-metric, HQ(\\alpha) and HQ(\\alpha +K) to improve the diversity score from the Q-metrics. The authors evaluate their strategies on CIFAR-10 and on all of the Q-metrics and show that the Q-metrics, when incorporating their proposed strategies, are in general capable of selecting ensembles of higher accuracy. ", "The paper succeeds in developing diversity metrics that correlate better with ensemble accuracy than the original diversity metrics. However, this makes one wonder why one cannot just use ensemble accuracy directly. One can also use a combining scheme along the lines of (Freund, 1995), which adds models that focus on the examples that will increase accuracy and allows errors on examples where most of the models so far have either classified the examples correctly already or incorrectly (where there is no hope of recovery and so effort is not worthwhile). Additionally, the appendix has the algorithms and other substantive content that is central to the paper, which is not supposed to be the case." ]
Diverse deep ensembles hold the potential for improving accuracy and robustness of deep learning models. Both pairwise and non-pairwise ensemble diversity metrics have been proposed over the past two decades. However, it is challenging to find the right metrics that can effectively prune those deep ensembles that have insufficient ensemble diversity and thus fail to deliver effective ensemble accuracy. In this paper, we first compare six popular diversity metrics in the literature, coined as Q-metrics, including both pairwise and non-pairwise representatives. We analyze their inherent limitations in capturing the negative correlation of ensemble member models, which makes them inefficient in identifying and pruning low-quality ensembles. We next present six HQ ensemble diversity metrics by extending the existing Q-metrics with three novel optimizations: (1) We introduce the concept of a focal model and separately measure the ensemble diversity among the deep ensembles of the same team size with respect to the focal model, aiming to better capture the negative correlations of the member models of an ensemble. (2) We introduce six HQ-diversity metrics to optimize the corresponding Q-metrics respectively in terms of measuring the negative correlation among the member models of an ensemble using its ensemble diversity score. (3) We introduce a two-phase hierarchical pruning method to effectively identify and prune those deep ensembles with high HQ diversity scores, aiming to increase the lower and upper bounds on the ensemble accuracy of the selected ensembles. By combining these three optimizations, deep ensembles selected based on our hierarchical diversity pruning approach significantly outperform those selected by the corresponding Q-metrics.
Comprehensive experimental evaluation over several benchmark datasets shows that our HQ-metrics can effectively select high diversity deep ensembles by pruning out those ensembles with insufficient diversity, and successfully increase the lower bound (worst case) accuracy of the selected deep ensembles, compared to those selected using the state-of-the-art Q-metrics.
[]
[ { "authors": [ "Gavin Brown", "Jeremy Wyatt", "Rachel Harris", "Xin Yao" ], "title": "Diversity creation methods: A survey and categorisation", "venue": "Information Fusion, 6:5–20,", "year": 2005 }, { "authors": [ "Chris Burges", "Tal Shaked", "Erin Renshaw", "Ari Lazier", "Matt Deeds", "Nicole Hamilton", "Greg Hullender" ], "title": "Learning to rank using gradient descent", "venue": "In Proceedings of the 22nd International Conference on Machine Learning,", "year": 2005 }, { "authors": [ "K. Chow", "W. Wei", "Y. Wu", "L. Liu" ], "title": "Denoising and verification cross-layer ensemble against black-box adversarial attacks", "venue": "IEEE International Conference on Big Data (Big Data),", "year": 2019 }, { "authors": [ "K. Chow", "W. Wei", "Y. Wu", "L. Liu" ], "title": "Denoising and verification cross-layer ensemble against black-box adversarial attacks", "venue": "IEEE International Conference on Big Data (Big Data),", "year": 2019 }, { "authors": [ "Joseph L Fleiss", "Bruce Levin", "Myunghee Cho Paik" ], "title": "Statistical methods for rates and proportions", "venue": "john wiley & sons,", "year": 2013 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep Ensembles: A Loss Landscape Perspective", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": null, "year": 2015 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. 
Weinberger" ], "title": "Snapshot ensembles: Train 1, get M for free", "venue": "CoRR, abs/1704.00109,", "year": 2017 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In Proceedings of the 22Nd ACM International Conference on Multimedia, MM", "year": 2014 }, { "authors": [ "Cheng Ju", "Aurélien Bibaut", "Mark J. van der Laan" ], "title": "The relative performance of ensemble methods with deep convolutional neural networks for image classification, 2017", "venue": null, "year": 2017 }, { "authors": [ "Ron Kohavi", "David Wolpert" ], "title": "Bias plus variance decomposition for zero-one loss functions", "venue": "In Proceedings of the Thirteenth International Conference on International Conference on Machine Learning,", "year": 1996 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Ludmila I. Kuncheva", "Christopher J. Whitaker" ], "title": "Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy", "venue": "Mach. Learn.,", "year": 2003 }, { "authors": [ "L. Liu", "W. Wei", "K. Chow", "M. Loper", "E. Gursoy", "S. Truex", "Y. 
Wu" ], "title": "Deep neural network ensembles against deception: Ensemble diversity, accuracy and robustness", "venue": "IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems (MASS),", "year": 2019 }, { "authors": [ "Qing Lu", "Lise Getoor" ], "title": "Link-based classification", "venue": "In Proceedings of the Twentieth International Conference on International Conference on Machine Learning,", "year": 2003 }, { "authors": [ "Mary L McHugh" ], "title": "Interrater reliability: the kappa statistic", "venue": "Biochemia medica,", "year": 2012 }, { "authors": [ "D. Partridge", "W. Krzanowski" ], "title": "Software diversity: practical statistics for its measurement and exploitation", "venue": "Information and Software Technology,", "year": 1997 }, { "authors": [ "Remigijus Paulavičius", "Julius Žilinskas" ], "title": "Analysis of different norms and corresponding lipschitz constants for global optimization", "venue": "Ukio Technologinis ir Ekonominis Vystymas,", "year": 2006 }, { "authors": [ "Robert E Schapire" ], "title": "A brief introduction to boosting", "venue": "In Ijcai,", "year": 1999 }, { "authors": [ "David B. Skalak" ], "title": "The sources of increased accuracy for two proposed boosting algorithms", "venue": "Proc. American Association for Arti Intelligence,", "year": 1996 }, { "authors": [ "Leslie N. Smith" ], "title": "Cyclical Learning Rates for Training", "venue": "Neural Networks. arXiv e-prints, art", "year": 2015 }, { "authors": [ "Tin Kam Ho" ], "title": "Random decision forests", "venue": "In Proceedings of 3rd International Conference on Document Analysis and Recognition,", "year": 1995 }, { "authors": [ "W. Wei", "L. Liu", "M. Loper", "K. Chow", "E. Gursoy", "S. Truex", "Y. 
Wu" ], "title": "Cross-layer strategic ensemble defense against adversarial examples", "venue": "In 2020 International Conference on Computing, Networking and Communications (ICNC),", "year": 2020 }, { "authors": [ "Wenqi Wei", "Ling Liu" ], "title": "Robust Deep Learning Ensemble against Deception", "venue": "arXiv e-prints, art", "year": 2020 }, { "authors": [ "Y. Wu", "L. Liu", "C. Pu", "W. Cao", "S. Sahin", "W. Wei", "Q. Zhang" ], "title": "A comparative measurement study of deep learning as a service framework", "venue": "IEEE Transactions on Services Computing,", "year": 1939 }, { "authors": [ "Yanzhao Wu", "Wenqi Cao", "Semih Sahin", "Ling Liu" ], "title": "Experimental Characterizations and Analysis of Deep Learning Frameworks", "venue": "IEEE 38th International Conference on Big Data,", "year": 2018 }, { "authors": [ "Yanzhao Wu", "Ling Liu", "Juhyun Bae", "Ka-Ho Chow", "Arun Iyengar", "Calton Pu", "Wenqi Wei", "Lei Yu", "Qi Zhang" ], "title": "Demystifying learning rate policies for high accuracy training of deep neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yanzhao Wu", "Ka-Ho Chow", "Wenqi Wei", "Zhongwei Xie", "Ling Liu" ], "title": "Boosting ensemble accuracy", "venue": null, "year": 2021 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "Series A,", "year": 1900 }, { "authors": [ "Q ii" ], "title": "Statistics (QS): The Q statistics (Yule, 1900) is defined as QSij in Formula 2 between two models Fi", "venue": null, "year": 1900 }, { "authors": [ "ωik. iv. Fleiss" ], "title": "Kappa (FK): Similar to Cohen’s Kappa, the Fleiss’ Kappa (Fleiss et al., 2013) also measures the diversity from the perspective of agreement. But it is directly calculated from a team of more than 2 models as Formula", "venue": null, "year": 2013 }, { "authors": [ "v. 
Kohavi-Wolpert" ], "title": "Variance (KW): Kohavi-Wolpert Variance is derived by (Kuncheva & Whitaker, 2003) to measure the variability of the predicted class label for the sample x with the team of models F1, F2, ..., FS as Formula 6 shows. Higher value of KW variance indicates higher model diversity of the team", "venue": null, "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep ensembles with sufficient ensemble diversity hold the potential of improving both the accuracy and the robustness of deep learning models with their combined wisdom. The improvement can be measured by three criteria: (i) the average ensemble accuracy of the selected ensemble teams, (ii) the percentage of selected ensembles that exceed the highest accuracy of the individual member models, and (iii) the lower bound (worst case) and the upper bound (best case) accuracy of the selected ensembles. The higher these three measures, the higher the quality of the ensemble teams. Ensemble learning can be broadly classified into two categories: (1) learning the ensemble of diverse models via diversity-optimized joint training, coined as the ensemble training approach, such as boosting (Schapire, 1999); and (2) learning to compose an ensemble of base models from a pool of existing pre-trained models through ensemble teaming based on ensemble diversity metrics (Partridge & Krzanowski, 1997; Liu et al., 2019; McHugh, 2012; Skalak, 1996), coined as the ensemble consensus approach. This paper is focused on improving the state-of-the-art results in the second category.\nRelated Work and Problem Statement. Ensemble diversity metrics are designed to capture the degree of negative correlation among the member models of an ensemble team (Brown et al., 2005; Liu et al., 2019; Kuncheva & Whitaker, 2003), such that high diversity indicates high negative correlation among the member models of an ensemble. Three orthogonal and yet complementary\nthreads of efforts have been engaged in ensemble learning: (1) developing mechanisms to produce diverse base neural network models, (2) developing diversity metrics to select ensembles with high ensemble diversity from the candidate ensembles over the base model pool, and (3) developing consensus voting methods.
The most popular consensus voting methods include the simple averaging, the weighted averaging, the majority voting, the plurality voting (Ju et al., 2017), and learning to rank (Burges et al., 2005). For the base model selection, early efforts have been devoted to training diverse weak models to form a strong ensemble on a learning task, such as bagging (Breiman, 1996), boosting (Schapire, 1999), or different ways of selecting features, e.g., random forests (Tin Kam Ho, 1995). Several recent studies also produce diverse base models by varying the training hyper-parameters, such as snapshot ensemble (Huang et al., 2017), which utilizes cyclic learning rates (Smith, 2015; Wu et al., 2019) to converge a single DNN model at different epochs and take the resulting snapshots as the ensemble member models. An alternative method is to construct the pool of base models by using pre-trained models with different neural network backbones (Wu et al., 2020; Liu et al., 2019; Wei et al., 2020; Chow et al., 2019a). The research efforts on diversity metrics have proposed both pairwise and non-pairwise ensemble diversity measures (Fort et al., 2019; Wu et al., 2020; Liu et al., 2019), among which the three representative pairwise metrics are Cohen’s Kappa (CK) (McHugh, 2012), Q Statistics (QS) (Yule, 1900), Binary Disagreement (BD) (Skalak, 1996), and the three representative non-pairwise diversity metrics are Fleiss’ Kappa (FK) (Fleiss et al., 2013), Kohavi-Wolpert Variance (KW) (Kohavi & Wolpert, 1996; Kuncheva & Whitaker, 2003) and Generalized Diversity (GD) (Partridge & Krzanowski, 1997). These diversity metrics are widely used in several recent studies (Fort et al., 2019; Liu et al., 2019; Wu et al., 2020). An early study has shown that these diversity metrics are correlated with respect to ensemble accuracy and diversity in the context of traditional machine learning models (Kuncheva & Whitaker, 2003).
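As a small, self-contained illustration of the consensus step, plurality voting over member-model predictions (one of the voting methods listed above) can be sketched as follows; the helper names are ours:

```python
from collections import Counter

def plurality_vote(labels):
    # labels: the member models' predicted class labels for one sample,
    # e.g. [2, 2, 7] for a 3-model ensemble; ties go to the label seen first
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(per_model_labels):
    # per_model_labels: one list of predicted labels per member model,
    # all of the same length (num_samples)
    num_samples = len(per_model_labels[0])
    return [plurality_vote([m[i] for m in per_model_labels])
            for i in range(num_samples)]
```

Simple or weighted averaging of the models' probability vectors before the argmax would give the averaging variants instead.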
However, few studies to date have provided an in-depth comparative critique of the effectiveness of these diversity metrics in pruning from the candidate ensembles those low-quality deep ensembles that lack high negative correlation.\nScope and Contributions. In this paper, we focus on the problem of defining ensemble diversity metrics that can select diverse ensemble teams with high ensemble accuracy. We first investigate the six representative ensemble diversity metrics, coined as Q metrics. We identify and analyze their inherent limitations in capturing the negative correlation among the member models of an ensemble, and why pruning out those deep ensembles with low Q-diversity may not always guarantee improved ensemble accuracy. To address the inherent problems of Q metrics, we extend the existing six Q metrics with three optimizations: (1) We introduce the concept of the focal model and argue that one way to better capture the negative correlations among member models of an ensemble is to compute diversity scores for ensembles of fixed size based on the focal model. (2) We introduce the six HQ diversity metrics to optimize the six Q-diversity metrics respectively. (3) We develop an HQ-based hierarchical pruning method, consisting of two-stage pruning: the α filter and the K-Means filter. By combining these optimizations, the deep ensembles selected by our HQ-metrics can significantly outperform those deep ensembles selected by the corresponding Q metrics, showing that the HQ-metrics-based hierarchical pruning approach is effective in identifying and removing low-diversity deep ensembles. Comprehensive experiments are conducted on three benchmark datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), ImageNet (Russakovsky et al., 2015), and Cora (Lu & Getoor, 2003).
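To illustrate the shape of the two-stage hierarchical pruning, here is a deliberately simplified sketch. We model the α filter as a plain score threshold and the K-Means filter as a one-dimensional 2-means over the surviving scores that keeps the lower-score (higher-diversity) cluster; the actual HQ-score definitions and filter settings are given in the paper, so everything below, including the function names, is our assumption for illustration only:

```python
def alpha_filter(scored_ensembles, alpha):
    # Stage 1: drop every candidate ensemble whose diversity score
    # exceeds alpha (low score = high diversity in this convention).
    return [(team, s) for team, s in scored_ensembles if s <= alpha]

def kmeans_filter(scored_ensembles, iters=20):
    # Stage 2: 1-D 2-means over the surviving scores; keep the cluster
    # around the lower centroid, i.e., the more diverse ensembles.
    scores = [s for _, s in scored_ensembles]
    c_lo, c_hi = min(scores), max(scores)
    for _ in range(iters):
        lo = [s for s in scores if abs(s - c_lo) <= abs(s - c_hi)]
        hi = [s for s in scores if abs(s - c_lo) > abs(s - c_hi)]
        if lo:
            c_lo = sum(lo) / len(lo)
        if hi:
            c_hi = sum(hi) / len(hi)
    return [(team, s) for team, s in scored_ensembles
            if abs(s - c_lo) <= abs(s - c_hi)]
```

The point of the two stages is that the coarse α cut removes clear outliers cheaply, while the clustering step separates the remaining candidates without requiring a second hand-picked threshold.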
The results show that our hierarchical diversity pruning approach outperforms their corresponding Q-metrics in terms of the lower bound and the upper bound of ensemble accuracy over the deep ensembles selected, exhibiting the effectiveness of our HQ approach in pruning low diversity deep ensembles." }, { "heading": "2 HIERARCHICAL PRUNING WITH DIVERSITY METRICS", "text": "Existing studies on consensus based ensemble learning (Huang et al., 2017; Krizhevsky et al., 2012; Zoph & Le, 2016) generate the base model pool through two channels: (i) deep neural network training using different network structures or different configurations of hyperparameters (Breiman, 1996; Schapire, 1999; Zoph & Le, 2016; Hinton et al., 2015; Wu et al., 2018; 2019) and (ii) selecting the top performing pre-trained models from open-source projects (e.g., GitHub) and public model zoos (Jia et al., 2014; ONNX Developers, 2020; GTModelZoo Developers, 2020). Hence, an important technical challenge for deep ensemble learning is to define diversity metrics for producing high quality ensemble teaming strategies, aiming to boost the ensemble accuracy. Given that the number of possible ensemble teams increases exponentially with a small pool of base models, de-\nveloping proper ensemble diversity metrics is critical for effective pruning of deep ensembles with insufficient diversity. Consider a pool of M base models for a learning task on a given dataset D, denoted by BMSet(D)= {F1, ..., FM}. Let EnsSet denote the set of all possible ensemble teams that are composed from BMSet(D), with the ensemble team size S varying from 2 to M . We have a total of ∑M S=2 ( M S ) ensembles, i.e., |EnsSet| = ( M 2 ) + ( M 3 ) + ... + ( M M ) = 2M − (1 + M). The cardinality of the set of possible ensembles EnsSet grows exponentially with M , the number of base models. For example, M = 3, we have |EnsSet| = 4. When M becomes larger, such as M = 5, 10, 20, |EnsSet| = 26, 1013, 1048555. 
Hence, as M increases, it is non-trivial to construct a set of high-accuracy ensemble teams (GEnsSet) from the candidate set (EnsSet) of all possible ensembles that are composed from BMSet(D). Consider a pool of M = 10 base models for ImageNet, in which the highest performing base model is 78.25%, the lowest performing base model is 56.63%, and the average accuracy of these 10 base models is 71.60% (see Table 5 in Appendix Section F). For a pool of 10 base models, there will be a total of 1013 ($2^{10} - (10 + 1)$) different ensembles with team size ranging from 2 to 10. The performance of these ensembles varies sharply, from 61.39% (lower bound) to 80.77% (upper bound). Randomly selecting an ensemble team from these 1013 teams in EnsSet(ImageNet) may lead to a non-trivial probability of selecting a team with the ensemble accuracy lower than the average member model accuracy of 71.60% over the 10 base models. Clearly, an efficient ensemble diversity metric should be able to prune out those ensemble teams with insufficient ensemble diversity and thus low ensemble accuracy, increasing (i) the average ensemble accuracy of the selected ensemble teams, (ii) the percentage of selected ensembles that exceed the highest accuracy of individual member models (i.e., 78.25% for the 10 base DNN models on ImageNet), and (iii) the lower bound (worst case) and the upper bound (best case) accuracy of the selected ensembles. A number of ensemble diversity metrics have been proposed to address this challenging problem. In this section, we first provide a comparative study of the six state-of-the-art Q-diversity metrics and analyze their inherent limitations in identifying and pruning out low diversity ensembles. Then we introduce our proposed HQ-diversity metrics and analyze the effectiveness of our HQ based hierarchical diversity approach in pruning low quality ensembles."
}, { "heading": "2.1 Q-DIVERSITY METRICS AND THEIR LIMITATIONS", "text": "We outline the key notations for the six Q-diversity metrics in Table 1: three pairwise diversity metrics: Cohen’s Kappa (CK) (McHugh, 2012), Q Statistics (QS) (Yule, 1900) and Binary Disagreement (BD) (Skalak, 1996), and three non-pairwise diversity metrics: Fleiss’ Kappa (FK) (Fleiss et al., 2013), Kohavi-Wolpert Variance (KW) (Kohavi & Wolpert, 1996; Kuncheva & Whitaker, 2003) and Generalized Diversity (GD) (Partridge & Krzanowski, 1997). The arrow column ↑ | ↓ specifies the relationship between the Q-value and the ensemble diversity. The ↑ represents a positive relationship between the Q-value and the ensemble diversity, that is, a high Q-value refers to high ensemble diversity. The ↓ indicates the negative relationship, that is, a low Q-value corresponds to high ensemble diversity. To facilitate the comparison of the six Q-diversity metrics such that a low Q-value refers to high ensemble diversity for all six Q-metrics, we apply (1 − Q-value) when calculating the Q-diversity score with BD, KW and GD. We refer readers to Appendix (Section C) for the formal definitions of the six Q-diversity metrics.\nGiven a Q-diversity metric, we calculate the diversity score for each ensemble team in the ensemble set (EnsSet) using a set of negative samples (NegSampSet) on which one or more models in the ensemble make prediction errors. A low Q-score indicates sufficient diversity among member models of an ensemble. Upon the completion of Q-diversity score computation for all ensembles in EnsSet, diversity threshold based pruning is employed to remove those ensembles with insufficient diversity among ensemble member models. Either a pre-defined Q-diversity threshold or a mean threshold, obtained by averaging the Q-diversity scores of all candidate deep ensembles in EnsSet, can be used. The mean threshold tends to work better in general than a manually defined threshold. 
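The mean-threshold pruning step described above can be sketched as follows (names are ours; `score_fn` stands for any of the six Q metrics computed on the negative samples):

```python
def mean_threshold_prune(teams, score_fn):
    """Keep only the teams whose Q-diversity score falls below the
    mean score over all candidate teams (low score = high diversity)."""
    scores = {team: score_fn(team) for team in teams}
    threshold = sum(scores.values()) / len(scores)
    return [team for team in teams if scores[team] < threshold]

# Toy example: four teams with hypothetical Q scores; mean = 0.55.
q = {("F1", "F2"): 0.2, ("F1", "F3"): 0.8,
     ("F2", "F3"): 0.3, ("F1", "F2", "F3"): 0.9}
selected = mean_threshold_prune(list(q), q.get)
assert selected == [("F1", "F2"), ("F2", "F3")]
```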
Once a mean threshold is obtained, those ensembles in EnsSet with their Q diversity scores below the threshold will be selected and placed into the diverse ensemble set GEnsSet, and the remaining ensembles, those with their Q scores higher than the threshold, will be pruned out. The pseudo-code of the algorithm is included in Appendix (Algorithm 1). The last three columns of Table 1 show the mean threshold for all six Q-diversity metrics calculated on the set of 1013 candidate deep ensembles for the three benchmark datasets used in this study. We make two observations. First, different Q-diversity metrics capture the ensemble diversity from different perspectives with different diversity measurement principles, resulting in different Q-scores. Second, each Q-metric, say CK, is used to compare ensembles based on their Q-CK scores. Hence, even though the Q-KW metric has relatively high KW-specific Q scores for all ensemble teams, it can select the diverse ensembles based on the mean KW-threshold, in a similar manner as any of the other five Q metrics.\nLimitations of Q Metrics. Figures 1a and 1b show the Q-KW and Q-GD metrics and their relationship with ensemble accuracy for all 1013 deep ensembles on CIFAR-10, respectively. Each dot represents a deep ensemble team with team sizes color-coded by the color diagram on the right. The vertical red dashed line represents the Q-KW and Q-GD mean thresholds of 0.868 and 0.476 respectively. The horizontal red and black dashed lines represent the maximum single model accuracy 96.68% and the average accuracy 94.25% of the 10 base models respectively. We use these two accuracy bounds as important references to quantify the quality of the deep ensembles selected using a Q metric with its mean threshold. Those deep ensembles on the left of the red vertical dashed line are selected and added into GEnsSet given that their Q-scores are below the mean threshold (e.g., Q-KW or Q-GD). 
The ensembles on the right of this red vertical dashed line are pruned out because their Q diversity scores exceed the mean threshold. Comparing Figures 1a and 1b, it is visually clear that both Q metrics can select a sufficient number of good quality ensemble teams, while at the same time both Q metrics with mean threshold pruning will miss a large portion of teams with high ensemble accuracy, indicating the inherent limitations of both Q metrics and mean threshold pruning with respect to capturing the true ensemble diversity in terms of negative correlation among member models of an ensemble.\nTo better understand the inherent problems with the Q-diversity metrics, we performed another set of experiments by measuring the Q-GD metric over ensemble teams of fixed size S on CIFAR-10. Figure 1c shows a visualization of the results using the Q-GD scores computed over ensembles of size S = 4 with the mean threshold indicated by the vertical red dashed line, showing a visually sharper trend in terms of the relationship between ensemble diversity and ensemble accuracy when comparing the selected ensemble teams (red dots) with those ensembles (black dots) on the right of the red vertical threshold line. However, relying on separating the diversity computation and comparison over ensemble teams of the same size alone may not be sufficient, because Figure 1c shows that (i) some selected ensemble teams have low accuracy, affecting all three ensemble quality measures (recall Section 2, page 3), and (ii) a fair number of ensemble teams with high ensemble accuracy (black dots on the top right side) are still missed. Similar observations are also found for the other five Q-diversity metrics. We conclude our analysis with three arguments: (1) The Q-diversity metrics may not accurately capture the degree of negative correlation among the member models of an ensemble even when its ensemble Q-diversity score is below the mean threshold. 
(2) Comparing ensembles of different team size S using their Q scores may not be a fair measure of their true ensemble diversity in terms of the degree of negative correlation among member models\nof an ensemble. However, relying on ensembles of the same team size S alone is still insufficient. (3) Mean threshold is not a good Q-diversity pruning method in terms of capturing the intrinsic relationship between ensemble diversity and ensemble accuracy. This motivates us to propose the HQ diversity metrics with two phase pruning using learning algorithms." }, { "heading": "2.2 HQ-DIVERSITY METRICS AND THEIR TWO-PHASE PRUNING", "text": "The design of the six HQ metrics is to enhance the six existing popular Q-metrics with three optimizations. First, we argue that comparing ensembles of the same team size in terms of their diversity scores can better capture the intrinsic relationship between ensemble diversity and ensemble accuracy. Second, to further improve the comparison of ensembles of the same size S in terms of their ensemble diversity in the context of negative correlation, we introduce the concept of focal model to obtain the set of negative samples for computing the diversity scores of ensembles by taking each member model in turn as the focal model. This is motivated by adversarial robustness with ensemble defense (Chow et al., 2019b; Wei & Liu, 2020), which composes robust ensemble teams for each attack target model. The concept of focal model allows us to capture the ensemble diversity of a team by utilizing the focal model and its negative samples, and then obtain a unified HQ score by taking an average of the S focal model based diversity measurements for each ensemble team of size S. These two optimizations enable HQ scores to more accurately capture the ensemble diversity and its relationship with the ensemble accuracy. 
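A minimal sketch of this focal-model aggregation (function and variable names are ours; the per-focal-model [0, 1] rescaling described in Appendix D is omitted here):

```python
def hq_score(team, q_score, neg_samples):
    """Unified HQ score for one ensemble team: average the Q-diversity
    scores obtained by taking each member model in turn as the focal
    model, using that focal model's negative samples."""
    per_focal = [q_score(team, neg_samples[focal]) for focal in team]
    return sum(per_focal) / len(per_focal)

# Toy example with a hypothetical Q metric that just averages
# pre-computed per-sample disagreement values.
neg = {"F1": [0.1, 0.3], "F2": [0.2, 0.4]}
toy_q = lambda team, samples: sum(samples) / len(samples)
assert abs(hq_score(("F1", "F2"), toy_q, neg) - 0.25) < 1e-9
```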
Finally, we employ the third optimization, which utilizes a two-phase HQ score based filtering process using the α filter and then the K-means filter to select the set of high quality ensembles.\nHQ (α): HQ metrics with α filter. We observe that if an ensemble team of size S has a large HQ score (say [F5, F6]), indicating insufficient ensemble diversity, then all the ensemble teams that have larger size than S and contain all the member models of this ensemble team (e.g., [F5, F6, F7], [F0, F5, F6], [F0, F5, F6, F7], [F5, F6, F7, F8]) tend to have insufficient ensemble diversity (i.e., larger HQ scores) as well. This motivates us to design a hierarchical pruning algorithm, coined as the α filter. Concretely, we start with the set of ensembles of the smallest team size, say S = 2, with $\binom{M}{2} = M(M-1)/2$ candidate ensembles. For M = 10 we will have 45 teams of size 2. Given an HQ metric, we first sort the ensembles of small size S, say S = 2, by their HQ scores in decreasing order, and then choose the top β (percentage) of ensembles of size S with large HQ value as our pruning targets at team size S. We recommend a conservative approach by using a small β (e.g., β = 5%, 10%). We first preemptively prune out the β(%) of the ensembles with large HQ scores and then prune all those ensembles that are supersets of these β(%) of ensembles. Imagine a hierarchical structure with all teams of size 2 on the top; at each layer we add one additional model to the teams so that all teams of size S + 1 are placed in the next tier. The bottom tier will be one ensemble team of size M. For each of the β(%) of ensembles of size 2 that are pruned out, this α filter algorithm will cut off the whole branch of ensemble teams that are supersets of this removed ensemble team. 
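A simplified sketch of the α filter under these conventions (names are ours; in practice the HQ scores of the size-2 teams would be precomputed with an HQ metric):

```python
from itertools import combinations

def alpha_filter(M, hq_scores, beta=0.1):
    """Prune the top-beta fraction of size-2 teams with the largest HQ
    scores, then cut off every larger team that is a superset of a
    pruned pair."""
    pairs = sorted(combinations(range(M), 2),
                   key=lambda t: hq_scores[t], reverse=True)
    n_prune = max(1, int(beta * len(pairs)))
    pruned_pairs = [set(p) for p in pairs[:n_prune]]
    survivors = []
    for S in range(2, M + 1):
        for team in combinations(range(M), S):
            if not any(p <= set(team) for p in pruned_pairs):
                survivors.append(team)
    return survivors

# Toy example: M = 4 models; pair (0, 1) has the worst (largest) HQ score.
scores = {t: 0.1 for t in combinations(range(4), 2)}
scores[(0, 1)] = 0.9
kept = alpha_filter(4, scores, beta=1 / 6)
# 2^4 - 5 = 11 candidates; (0, 1) and its 3 supersets are cut off.
assert len(kept) == 7
assert all(not {0, 1} <= set(t) for t in kept)
```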
Due to space constraints, we include Algorithm 2 for computing the HQ metrics and the α filter algorithm in Appendix Sections D and E, respectively.\nFigure 2 shows the visualization of applying the α filter to two HQ metrics: HQ-KW and HQ-GD. The black dots denote the ensemble teams pruned out by using the α filter and the red dots are the ensembles selected after HQ metric with α pruning. We highlight two interesting observations. First, the α filter can effectively prune those ensembles with large HQ values (representing insufficient ensemble diversity). Comparing Q-GD in Figure 1c with HQ-GD (α) in Figure 2e (both with S = 4), HQ-GD (α) can significantly improve the quality of selected ensemble teams while effectively pruning out most of the low accuracy ensembles. Second, both HQ-GD (α) and HQ-KW (α) diversity metrics display high correlation of measured ensemble diversity with the ensemble accuracy: low HQ scores correspond to high ensemble accuracy. Similar observations are found consistently for all HQ diversity metrics.\nHQ (α + K) metrics: HQ metrics with α filter followed by K-means filter. In our two-phase HQ diversity pruning approach, we introduce the K-means filter to correct as much as possible the remaining errors in high quality ensemble team selection. Recalling Figures 2a and 2d for ensemble teams of size S = 3, it is visually clear that the α filter is less effective in pruning out some ensemble teams of low accuracy, compared to teams of larger sizes, S = 4, 5 in Figure 2(b)(c)(e)(f). We introduce the second phase filtering by using a customized K-means clustering algorithm with K = 2 and two strategically chosen initial centroids: top left and bottom right (marked by the red and black unfilled circles respectively), aiming to learn two clusters of ensembles: (1) the cluster of ensembles with low HQ score and high ensemble accuracy, and (2) the cluster of ensembles with low accuracy and relatively larger HQ score. 
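This two-cluster filtering step can be sketched with a tiny custom 2-means (our own implementation; real HQ scores and accuracies may need rescaling so the two coordinates are comparable):

```python
def kmeans_filter(points, iters=20):
    """Two-means over (HQ score, accuracy) points with two strategically
    chosen initial centroids: 'top-left' (low HQ, high accuracy) and
    'bottom-right' (high HQ, low accuracy). Returns the indices assigned
    to the top-left (good) cluster."""
    hqs = [p[0] for p in points]
    accs = [p[1] for p in points]
    centers = [(min(hqs), max(accs)), (max(hqs), min(accs))]
    for _ in range(iters):
        groups = [[], []]
        for i, (h, a) in enumerate(points):
            d = [(h - c[0]) ** 2 + (a - c[1]) ** 2 for c in centers]
            groups[d.index(min(d))].append(i)
        for g, grp in enumerate(groups):
            if grp:  # recompute centroid of a non-empty cluster
                centers[g] = (sum(points[i][0] for i in grp) / len(grp),
                              sum(points[i][1] for i in grp) / len(grp))
    return groups[0]

# Toy example: two well-separated groups of (HQ, accuracy) points.
pts = [(0.1, 0.96), (0.15, 0.95), (0.7, 0.80), (0.8, 0.78)]
assert sorted(kmeans_filter(pts)) == [0, 1]
```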
The clustering results are indicated by the two solid circles: the pink one for cluster (1) and the light grey one for cluster (2). Powered by the two-phase filtering, the HQ (α+K) metrics can effectively remove those ensembles with low accuracy and insufficient diversity (i.e., higher HQ values), further improving the three ensemble accuracy measures (recall Section 2, page 3) compared to the HQ (α) metrics, increasing the lower bound accuracy and improving the worst-case ensemble selection quality. Figure 3 provides a visualization for ensemble teams of size S = 3 using three HQ (α+K) metrics: HQ-CK, HQ-KW and HQ-GD. The red dots and black dots show the two clusters produced by K-means, and the red vertical dashed line indicates the filtering threshold produced by the K-Means filter, which chooses the smallest HQ value from the cluster of low accuracy ensembles as the HQ-specific pruning threshold. By using HQ with the two-phase α+K filters, we can further fine-tune the quality of ensemble selection by removing those ensembles with relatively low ensemble accuracy, effectively boosting the lower bound of ensemble accuracy for all the ensemble teams selected by HQ (α+K) metrics, compared to either HQ (α) or Q metrics." }, { "heading": "3 EXPERIMENTAL EVALUATION", "text": "Extensive experiments on three benchmark datasets (CIFAR-10, ImageNet, and Cora), with a total of 10 base models for each dataset, are conducted to evaluate our hierarchical diversity pruning methods. All the experiments were conducted on an Intel Xeon E5-1620 server with Nvidia GeForce GTX 1080Ti GPU on Ubuntu 16.04. Readers may refer to Appendix (section F) for further details on the base models used in this study and their accuracy results.\nCIFAR-10 Table 2 shows the experimental comparison of the ensemble teams selected by Q metrics with mean threshold, HQ (α) metrics and HQ (α + K) metrics for CIFAR-10. For the selected ensembles, we show their ensemble accuracy range (%) in the 4th column. 
The 5th column #(%)(Acc>96.68% (max)) shows the number and percentage of the ensembles selected which have ensemble accuracy higher than the highest (max) single model accuracy of 96.68% over the M = 10 CIFAR-10 base models. The last column shows the number of selected ensembles with ensemble accuracy over 96.70%, exceeding the best 96.68% single base model accuracy. We highlight three interesting observations. First, compared to Q metrics, our HQ (α) metrics significantly reduce the number of candidate ensembles in #EnsSet (from 1013 to 230∼281) and improve the quality of selected ensembles. For example, with the α filter, HQ-BD, HQ-KW and HQ-GD can improve the ensemble accuracy lower bound from 93.56% to 93.88%, while HQ-CK, HQ-QS, HQ-BD, HQ-FK and HQ-KW all improve the accuracy upper bound from 96.72% or 96.74% to 97.01% or 97.15%. Second, the two-phase filtering HQ (α+K) metrics further improve the quality of selected ensembles compared to both Q-metrics and HQ (α) metrics, e.g., increasing the lower bound of ensemble accuracy from 93.56%∼94.27% to 94.46%∼95.45%. Furthermore, 42.22% (38 out of 90) of the ensembles selected by HQ-GD (α + K) have ensemble accuracy above 96.70%, showing that with a random pick of an ensemble from the selected set (GEnsSet), HQ-GD has a higher than 42% probability of choosing an ensemble team with accuracy better than the max accuracy of the 10 single base models for CIFAR-10, compared to 17.93% by HQ-GD (α) and 7.26% by Q-GD. This further demonstrates the effectiveness of our HQ (α+K) metrics.\nImageNet Table 3 shows the same set of experiments on ImageNet. We make three observations. (1) For ImageNet, many ensembles selected by HQ metrics can achieve ensemble accuracy better than the max single base model accuracy of 78.25% by the member model F5 (Table 5 in Appendix Section F), even without having F5 as a member model of the ensemble teams. 
For example, with α+K, HQ-BD and HQ-GD both have 19 ensemble teams that offer ensemble accuracy higher than the max single model accuracy of 78.25% by the member model F5, and yet do not have F5 as a member model of their ensemble teams. (2) Similar to CIFAR-10, many ensembles with low accuracy and insufficient HQ diversity are effectively pruned out by using our HQ (α) metrics. Compared to Q-metrics, our HQ (α) metrics effectively increase the accuracy lower bound of all selected ensembles from 61.39% to 68.99%, a significant improvement over Q metrics. (3) The HQ (α + K) metrics further boost the lower bound ensemble accuracy over the corresponding HQ (α) metrics, with the lower bound (worst case) accuracy of 76.16%∼78.35%, significantly higher than Q metrics (61.39%∼70.79%). Three HQ (α+K) metrics (HQ-CK, HQ-QS, HQ-FK) achieve 100% of the selected ensembles with over 78.25% accuracy (the max single base model accuracy on ImageNet), while HQ-BD has over 90.91%, and HQ-KW and HQ-GD have over 87.10%, of the selected ensembles with their ensemble accuracy over the best single base model accuracy (78.25%). Clearly, the average accuracy of the selected ensembles by HQ (α+K) metrics is much higher than that by using Q-diversity metrics.\nEnsemble Accuracy Distribution. We further investigate the ensemble accuracy distribution for the ensemble teams selected by Q, HQ (α) and HQ (α + K) metrics. Figure 4 shows the visualization of the results. For CIFAR-10, we compare the ensemble teams selected by Q-GD (yellow triangles), HQ-GD (α) (blue dots), and HQ-GD (α+K) (red circles). It is visually clear that the HQ-GD (α+K) diversity metric can effectively prune out more low accuracy ensembles with insufficient diversity (large HQ scores) compared to Q-GD and HQ-GD (α), although it still suffers from a few low accuracy ensembles, which dragged down the ensemble accuracy lower bound to 94.72% on CIFAR-10. 
For ImageNet, Figure 4b and 4c show that both HQ-BD (α + K) and HQ-GD (α + K) have the best performance with most of the selected ensembles on the top left (red circles), indicating high ensemble accuracy and high lower bound on the ensemble accuracy of all selected teams." }, { "heading": "4 CONCLUSION", "text": "We have presented a two-phase hierarchical ensemble diversity pruning approach for high quality ensemble selection. This paper makes three original contributions. First, we identify and analyze the inherent limitations of existing six ensemble diversity metrics, coined as Q-metrics. Second, we address the limitations of Q-metrics by introducing the six HQ diversity metrics respectively. Third, we develop a two phase HQ-based hierarchical pruning method with α filter followed by K-means filter. By combining these optimizations, the deep ensembles selected by our HQ (α + K) metrics can significantly outperform the deep ensembles selected by the corresponding Q metrics, showing that the HQ metrics based hierarchical pruning approach is efficient in identification and removal of low quality deep ensembles. Comprehensive experiments conducted on benchmark datasets of CIFAR-10 and ImageNet show that our hierarchical diversity pruning approach outperforms the corresponding Q-metrics in terms of the lower bound (worst case) and the upper bound (best case) of ensemble accuracy over the deep ensembles selected, in addition to the average ensemble accuracy of the selected ensemble teams, and the percentage of selected ensembles that exceed the highest accuracy of the member models in the base model pool." }, { "heading": "A DIVERSITY BY UNCORRELATED ERROR", "text": "Deep neural network ensembles use multiple (sayM > 1) deep neural networks to form a committee (team) to collaborate and combine the predictions of individual member models to make the final prediction. 
A consensus method will be used to combine the individual predictions, such as majority voting, plurality voting, or model averaging (the average of prediction vectors).\nA deep neural network classifier is typically trained to minimize a cross-entropy loss and outputs a probability vector that approximates the a posteriori probability densities of the corresponding classes. For a given input $x$, the $i$th element in the output probability vector of model $F_k$ can be modeled as $f^k_i(x) = p(c_i|x) + \epsilon^k_i(x)$, where $p(c_i|x)$ is the posterior probability of the $i$th class ($c_i$) given the input $x$, and $\epsilon^k_i(x)$ is the error associated with this output. For the Bayes optimum decision, $x$ will be predicted as class $c_i$ if $p(c_i|x) > p(c_j|x), \forall j \neq i$. Therefore, the Bayes optimum boundary is located at all points $x^*$ such that $p(c_i|x^*) = p(c_j|x^*)$, where $p(c_j|x^*) = \max_{l \neq i} p(c_l|x^*)$. Given that the neural network model outputs $f^k_i(x)$ instead of $p(c_i|x)$, the decision boundary of the model, $\bar{x}$, may deviate from the optimum boundary by an offset $o = \bar{x} - x^*$. (TUMER & GHOSH, 1996) shows that the added error beyond the Bayes error is $E_{add} = \frac{d\sigma_o^2}{2}$, where $d$ is the difference between the derivatives of the two posteriors and $\sigma_o^2$ is the variance of the boundary offset $o$, $\sigma_o^2 = 2\sigma^2_{\epsilon^k_i}/d^2$. Combining the predictions of $S$ models with model averaging (avg), the $i$th element in the combined probability vector gives an approximation to $p(c_i|x)$ as $f^{avg}_i(x) = \frac{1}{S}\sum_{k=1}^{S} f^k_i(x) = p(c_i|x) + \bar{\epsilon}_i(x)$, where $\bar{\epsilon}_i(x) = \frac{1}{S}\sum_{k=1}^{S}\epsilon^k_i(x)$. We can calculate the variance of $\bar{\epsilon}_i$ with\n\n$$\sigma^2_{\bar{\epsilon}_i} = \frac{1}{S^2}\sum_{k=1}^{S}\sum_{l=1}^{S} cov(\epsilon^k_i(x), \epsilon^l_i(x)) = \frac{1}{S^2}\sum_{k=1}^{S}\sigma^2_{\epsilon^k_i} + \frac{1}{S^2}\sum_{k=1}^{S}\sum_{l \neq k} cov(\epsilon^k_i(x), \epsilon^l_i(x))$$\n\nwhere $cov(\cdot)$ represents the covariance. With $cov(a, b) = corr(a, b)\,\sigma_a\sigma_b$, we can replace the covariance with the correlation $corr(\cdot)$ and derive\n\n$$\sigma^2_{\bar{\epsilon}_i} = \frac{1}{S^2}\sum_{k=1}^{S}\sigma^2_{\epsilon^k_i} + \frac{1}{S^2}\sum_{k=1}^{S}\sum_{l \neq k} corr(\epsilon^k_i(x), \epsilon^l_i(x))\,\sigma_{\epsilon^k_i}\sigma_{\epsilon^l_i}$$\n\nLet $\delta_i$ denote the average correlation factor among these models:\n\n$$\delta_i = \frac{1}{S(S-1)}\sum_{k=1}^{S}\sum_{l \neq k} corr(\epsilon^k_i(x), \epsilon^l_i(x))$$\n\nAssuming the common variance $\sigma^2_{\epsilon_i} = \sigma^2_{\epsilon^k_i}$ holds for every model $F_k$, with $\delta_i$ we have\n\n$$\sigma^2_{\bar{\epsilon}_i} = \frac{1}{S}\sigma^2_{\epsilon_i} + \frac{S-1}{S}\delta_i\sigma^2_{\epsilon_i}$$\n\nWith the variance of the ensemble decision boundary offset $\sigma^2_{o_{avg}} = \frac{\sigma^2_{\bar{\epsilon}_i} + \sigma^2_{\bar{\epsilon}_j}}{d^2}$ given in (TUMER & GHOSH, 1996), we have\n\n$$\sigma^2_{o_{avg}} = \frac{1}{d^2 S}\left(\sigma^2_{\epsilon_i} + (S-1)\delta_i\sigma^2_{\epsilon_i} + \sigma^2_{\epsilon_j} + (S-1)\delta_j\sigma^2_{\epsilon_j}\right)$$\n\nAssume that the errors between classes are i.i.d., that is, $\sigma^2_{\epsilon_i} = \sigma^2_{\epsilon_j}$. With $\sigma^2_{\epsilon_i} = \sigma^2_{\epsilon^k_i}$ (the previous assumption) and $\sigma^2_o = \frac{2\sigma^2_{\epsilon^k_i}}{d^2}$ given in (TUMER & GHOSH, 1996), we have\n\n$$\sigma^2_{o_{avg}} = \frac{1}{d^2 S}\left(2\sigma^2_{\epsilon_i} + (S-1)\sigma^2_{\epsilon_i}(\delta_i + \delta_j)\right) = \frac{2\sigma^2_{\epsilon^k_i}}{d^2 S}\left(1 + (S-1)\frac{\delta_i + \delta_j}{2}\right) = \frac{\sigma^2_o}{S}\left(1 + (S-1)\frac{\delta_i + \delta_j}{2}\right)$$\n\nTo extend the above formula to all classes, let $\delta = \sum_{i=1}^{C} P_i\delta_i$, where $P_i$ is the prior probability of class $c_i$ and $C$ is the total number of classes. Assuming the prior probability $P_i$ of class $c_i$ is uniformly distributed, we have\n\n$$\sigma^2_{o_{avg}} = \frac{\sigma^2_o}{S}\left(1 + (S-1)\delta\right) = \sigma^2_o\,\frac{1 + (S-1)\delta}{S}$$\n\nSo we can derive the added error of the ensemble prediction, $E^{avg}_{add}$, as\n\n$$E^{avg}_{add} = \frac{d\sigma^2_{o_{avg}}}{2} = \frac{d\sigma^2_o}{2}\cdot\frac{1 + (S-1)\delta}{S} = E_{add}\,\frac{1 + (S-1)\delta}{S}$$\n\nTherefore, the ideal scenario is when all members in an ensemble team of size $S$ are diverse: they learn and predict with uncorrelated errors, i.e., $\delta = 0$. Then a simple model averaging method can reduce the overall added error by a factor of $S$. Meanwhile, the worst scenario occurs when the errors of the individual models are perfectly correlated with $\delta = 1$, such as when all $S$ models are exact duplicates; the error of the ensemble is then identical to the individual errors, without any improvement. 
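The error-reduction factor (1 + (S − 1)δ)/S derived above can be checked numerically (a small sketch with our own function name):

```python
def added_error_factor(S, delta):
    """Ratio E_add_avg / E_add for an ensemble of S models whose
    errors have average pairwise correlation delta."""
    return (1 + (S - 1) * delta) / S

# Uncorrelated errors (delta = 0): added error shrinks by a factor of S.
assert added_error_factor(5, 0.0) == 0.2
# Perfectly correlated errors (delta = 1): no improvement.
assert added_error_factor(5, 1.0) == 1.0
# Partial correlation lies in between.
assert abs(added_error_factor(5, 0.5) - 0.6) < 1e-9
```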
In general, the correlation $\delta$ lies between 0 and 1, and therefore, it is always beneficial to use an ensemble to reduce the prediction errors." }, { "heading": "B ENSEMBLE ROBUSTNESS", "text": "Let $g(x) = f_c(x) - f_j(x)$, where $c = \arg\max_{1 \leq i \leq C} f_i(x)$ is the predicted class label and $j \neq c$. Assuming $g(x)$ is Lipschitz continuous with Lipschitz constant $L^j_q$, according to (Paulavičius & Žilinskas, 2006) we have\n\n$$|g(x) - g(y)| \leq L^j_q \|x - y\|_p$$\n\nwhere $L^j_q = \max_x \|\nabla g(x)\|_q$, $\frac{1}{p} + \frac{1}{q} = 1$ and $1 \leq p, q \leq \infty$.\n\nLetting $x = x_0 + \delta$ and $y = x_0$, we have\n\n$$|g(x_0 + \delta) - g(x_0)| \leq L^j_q \|\delta\|_p$$\n\nwhich can be rearranged as\n\n$$g(x_0) - L^j_q \|\delta\|_p \leq g(x_0 + \delta) \leq g(x_0) + L^j_q \|\delta\|_p$$\n\nWhen $g(x_0 + \delta) = 0$, the predicted class label will change. However, $g(x_0 + \delta)$ is lower bounded by $g(x_0) - L^j_q \|\delta\|_p \leq g(x_0 + \delta)$. If $0 \leq g(x_0) - L^j_q \|\delta\|_p$, we have $g(x_0 + \delta) \geq 0$, ensuring that the prediction result will not change under the small change $\delta$ to the input $x_0$. This leads to\n\n$$g(x_0) - L^j_q \|\delta\|_p \geq 0 \Rightarrow \|\delta\|_p \leq \frac{g(x_0)}{L^j_q}$$\n\nThat is\n\n$$\|\delta\|_p \leq \frac{f_c(x_0) - f_j(x_0)}{L^j_q}$$\n\nTo ensure the classification result will not change, that is, $\arg\max_{1 \leq i \leq C} f_i(x_0 + \delta) = c$, we use the minimum of the bound on $\delta$ over $j \neq c$, that is\n\n$$\|\delta\|_p \leq \min_{j \neq c} \frac{f_c(x_0) - f_j(x_0)}{L^j_q}$$\n\nwhich indicates that as long as $\|\delta\|_p$ is small enough to fulfill the above bound, the classifier decision will never change, which marks the robustness of this classifier. 
The robustness bound $R$ can be denoted as\n\n$$R = \min_{j \neq c} \frac{f_c(x_0) - f_j(x_0)}{L^j_q} = \min_{j \neq c} \frac{f_c(x_0) - f_j(x_0)}{\max_x \|\nabla(f_c(x) - f_j(x))\|_q}$$\n\nFor a model $F_k$, we have its bound\n\n$$R_k = \min_{j \neq c} \frac{f^k_c(x_0) - f^k_j(x_0)}{\max_x \|\nabla(f^k_c(x) - f^k_j(x))\|_q}$$\n\nLet $g^k_j(x) = f^k_c(x) - f^k_j(x)$; we have\n\n$$R_k = \min_{j \neq c} \frac{g^k_j(x_0)}{\max_x \|\nabla g^k_j(x)\|_q}$$\n\nGiven $S$ models, combining their predictions with model averaging (avg), we have the $i$th element in the combined probability vector as $f^{avg}_i(x) = \frac{1}{S}\sum_{k=1}^{S} f^k_i(x)$, corresponding to the robustness bound\n\n$$R_{avg} = \min_{j \neq c} \frac{f^{avg}_c(x_0) - f^{avg}_j(x_0)}{\max_x \|\nabla(f^{avg}_c(x) - f^{avg}_j(x))\|_q} = \min_{j \neq c} \frac{g^{avg}_j(x_0)}{\max_x \|\nabla g^{avg}_j(x)\|_q}$$\n\nAssume the minimum of the robustness bound is achieved at the prediction results $c$ and $j$ for each model, including the ensemble $F^{avg}$, that is,\n\n$$R_k = \frac{g^k_j(x_0)}{\max_x \|\nabla g^k_j(x)\|_q} \quad \text{and} \quad R_{avg} = \frac{g^{avg}_j(x_0)}{\max_x \|\nabla g^{avg}_j(x)\|_q}$$\n\nwhere $g^{avg}_j(x) = \frac{1}{S}\sum_{k=1}^{S} g^k_j(x)$. The following property always holds: $\exists 1 \leq k \leq S, R_k \leq R_{avg}$, indicating that the ensemble can improve the robustness bound.\n\nWe prove the property by contradiction. First, we assume $\forall 1 \leq k \leq S, R_k > R_{avg}$, that is,\n\n$$\frac{g^k_j(x_0)}{\max_x \|\nabla g^k_j(x)\|_q} > \frac{g^{avg}_j(x_0)}{\max_x \|\nabla g^{avg}_j(x)\|_q}$$\n\nSo we have\n\n$$g^k_j(x_0)\left(\max_x \|\nabla g^{avg}_j(x)\|_q\right) > g^{avg}_j(x_0)\left(\max_x \|\nabla g^k_j(x)\|_q\right)$$\n\nFor each $k \in \{1, ..., S\}$, we have the above inequality. Summing them all, we have\n\n$$\sum_{k=1}^{S} g^k_j(x_0)\left(\max_x \|\nabla g^{avg}_j(x)\|_q\right) > \sum_{k=1}^{S} g^{avg}_j(x_0)\left(\max_x \|\nabla g^k_j(x)\|_q\right)$$\n\nThat is\n\n$$\left(\max_x \|\nabla g^{avg}_j(x)\|_q\right)\sum_{k=1}^{S} g^k_j(x_0) > g^{avg}_j(x_0)\sum_{k=1}^{S}\left(\max_x \|\nabla g^k_j(x)\|_q\right)$$\n\nGiven $g^{avg}_j(x) = \frac{1}{S}\sum_{k=1}^{S} g^k_j(x)$, we have\n\n$$\left(\max_x \Big\|\nabla\Big(\sum_{k=1}^{S} g^k_j(x)\Big)\Big\|_q\right)\frac{1}{S}\sum_{k=1}^{S} g^k_j(x_0) > \frac{1}{S}\sum_{k=1}^{S} g^k_j(x_0)\sum_{k=1}^{S}\left(\max_x \|\nabla g^k_j(x)\|_q\right)$$\n\nTherefore, we have\n\n$$\max_x \Big\|\nabla\Big(\sum_{k=1}^{S} g^k_j(x)\Big)\Big\|_q > \sum_{k=1}^{S}\left(\max_x \|\nabla g^k_j(x)\|_q\right)$$\n\nAccording to the triangle inequality, we have\n\n$$\max_x \Big\|\nabla\Big(\sum_{k=1}^{S} g^k_j(x)\Big)\Big\|_q \leq \max_x \sum_{k=1}^{S}\|\nabla g^k_j(x)\|_q \leq \sum_{k=1}^{S}\left(\max_x \|\nabla g^k_j(x)\|_q\right)$$\n\nwhich contradicts the derived inequality. 
Therefore, the previous assumption does not hold. We have shown that $\exists 1 \leq k \leq S, R_k \leq R_{avg}$, demonstrating that the robustness of a member model can be further improved with an ensemble.\nFurthermore, for a model $F_k$, if its robustness bound $R_k$ was not attained at $j$, we have $\exists i \neq j$, $i, j \neq c$, such that $R_k = \frac{g^k_i(x_0)}{\max_x \|\nabla g^k_i(x)\|_q} \leq \frac{g^k_j(x_0)}{\max_x \|\nabla g^k_j(x)\|_q}$. The above claim still holds as long as each model makes the same prediction $c$." }, { "heading": "C ALGORITHMS FOR COMPUTING Q-DIVERSITY METRICS", "text": "We have covered six state-of-the-art diversity metrics (coined in this paper as Q-diversity metrics). In the literature, different studies use one of these diversity metrics to select models and analyze the prediction results. However, few studies provide guidelines for choosing among them, or compare and evaluate these diversity metrics in terms of pruning out low diversity ensembles.\nIn general, diversity metrics can be classified into two major categories based on how the fault independence and uncorrelated errors are computed using a set of negative samples: pairwise metrics and non-pairwise metrics. Below we describe the six representative diversity metrics considered in our study: Cohen’s Kappa, Q Statistics and Binary Disagreement for pairwise, and Fleiss’ Kappa, Kohavi-Wolpert Variance and Generalized Diversity for non-pairwise.\nGiven a pool of M base models, all trained on the same dataset, one approach to create negative samples is to collect the negative samples from the validation set of each model and then randomly select a subset of negative samples from the union of all M subsets of negative examples. Let X = {x1, x2, ..., xN} be the randomly selected N labeled negative examples on the training dataset. 
Given a base model $F_i$ and the negative sample set X, the output of $F_i$ on X is a vector of binary values, denoted as $\omega_i = [\omega_{i,1}, \omega_{i,2}, ..., \omega_{i,N}]^T$, where $\omega_{i,k} = 1$ if $F_i$ predicts $x_k$ correctly and $\omega_{i,k} = 0$ otherwise.\nPairwise Diversity Metrics Pairwise diversity metrics are calculated over a pair of classifiers. Table 4 shows the relationship between a pair of classifiers $F_i, F_j$. For a labeled sample $x_k$, four different types of prediction results emerge: both $F_i$ and $F_j$ make correct predictions, both make wrong predictions, or exactly one of $F_i$ and $F_j$ makes a correct prediction. Correspondingly, we can count the number of samples of each type as $N^{ab}$, the number of elements $x_k \in X$ such that $\omega_{i,k} = a$ and $\omega_{j,k} = b$.\ni. Cohen’s Kappa (CK): The Cohen’s Kappa measures the diversity between the two classifiers $F_i, F_j$ from the perspective of agreement (McHugh, 2012; Kuncheva & Whitaker, 2003). A lower Cohen’s kappa value implies lower agreement and higher diversity. Formula 1 defines the Cohen’s kappa $\kappa_{ij}$ between the two classifiers $F_i, F_j$. The value of the Cohen’s Kappa ranges from -1 to 1, with 0 representing the amount of agreement expected by random chance (McHugh, 2012).\n\n$$\kappa_{ij} = \frac{2(N^{11}N^{00} - N^{01}N^{10})}{(N^{11}+N^{10})(N^{01}+N^{00}) + (N^{11}+N^{01})(N^{10}+N^{00})} \quad (1)$$\n\nii. Q Statistics (QS): The Q statistics (Yule, 1900) between two models $F_i, F_j$ is defined as $QS_{ij}$ in Formula 2. $QS_{ij}$ varies between -1 and 1. When the models $F_i, F_j$ are statistically independent, the expected $QS_{ij}$ is 0. If the two models tend to recognize the same objects similarly, $QS_{ij}$ will be positive, while two diverse models, recognizing the same objects differently, will render a small or negative $QS_{ij}$ value.\n\n$$QS_{ij} = \frac{N^{11}N^{00} - N^{01}N^{10}}{N^{11}N^{00} + N^{01}N^{10}} \quad (2)$$\n\niii. 
Binary Disagreement (BD): The binary disagreement (Skalak, 1996; Kuncheva & Whitaker, 2003) is the ratio between (i) the number of samples on which exactly one of the two models $F_i, F_j$ is correct and (ii) the total number of samples predicted by the two models, as Formula 3 shows.\n\n$$\theta_{ij} = \frac{N^{01} + N^{10}}{N^{11} + N^{10} + N^{01} + N^{00}} \quad (3)$$\n\nFor an ensemble team of S models, as recommended by (Kuncheva & Whitaker, 2003), we calculate the averaged metric value over all pairs of classifiers as Formula 4 shows, where $Q$ represents a pairwise diversity metric.\n\n$$Q = \frac{2}{S(S-1)}\sum_{i=1}^{S-1}\sum_{j=i+1}^{S} Q_{ij} \quad (4)$$\n\nNon-pairwise Diversity Metrics Numerous non-pairwise diversity metrics are widely used for teams of more than 2 models. To compare with the pairwise diversity metrics, we focus on three representative non-pairwise diversity metrics.\nIn an ensemble team of S classifiers, we use $l(x_k)$ to denote the number of classifiers that correctly recognize $x_k$, i.e., $l(x_k) = \sum_{i=1}^{S}\omega_{i,k}$.\niv. Fleiss’ Kappa (FK): Similar to Cohen’s Kappa, the Fleiss’ Kappa (Fleiss et al., 2013) also measures diversity from the perspective of agreement, but it is directly calculated from a team of more than 2 models as Formula 5 shows, where $\bar{p}$ is the average classification accuracy of the ensemble team; note that $\kappa$ is not obtained by simply averaging the pairwise Cohen’s kappa $\kappa_{ij}$.\n\n$$\bar{p} = \frac{1}{NS}\sum_{k=1}^{N}\sum_{i=1}^{S}\omega_{i,k}, \qquad \kappa = 1 - \frac{\frac{1}{S}\sum_{k=1}^{N} l(x_k)(S - l(x_k))}{N(S-1)\,\bar{p}(1-\bar{p})} \quad (5)$$\n\nv. Kohavi-Wolpert Variance (KW): The Kohavi-Wolpert variance is derived by (Kuncheva & Whitaker, 2003) to measure the variability of the predicted class label for a sample $x$ over the team of models $F_1, F_2, ..., F_S$, as Formula 6 shows. A higher KW variance indicates higher model diversity within the team.\n\n$$KW = \frac{1}{NS^2}\sum_{k=1}^{N} l(x_k)(S - l(x_k)) \quad (6)$$\n\nvi. Generalized Diversity (GD): The generalized diversity has been proposed by (Partridge & Krzanowski, 1997) as Formula 7 shows. 
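The pairwise metrics (Formulas 1–3), their team average (Formula 4), and the KW variance (Formula 6) can be sketched over binary correctness vectors ω as follows (a minimal sketch; function names are ours):

```python
from itertools import combinations

def pair_counts(w_i, w_j):
    """N11, N10, N01, N00 from two binary correctness vectors."""
    n11 = sum(1 for a, b in zip(w_i, w_j) if (a, b) == (1, 1))
    n10 = sum(1 for a, b in zip(w_i, w_j) if (a, b) == (1, 0))
    n01 = sum(1 for a, b in zip(w_i, w_j) if (a, b) == (0, 1))
    n00 = sum(1 for a, b in zip(w_i, w_j) if (a, b) == (0, 0))
    return n11, n10, n01, n00

def cohens_kappa(w_i, w_j):                     # Formula 1
    n11, n10, n01, n00 = pair_counts(w_i, w_j)
    return (2 * (n11 * n00 - n01 * n10) /
            ((n11 + n10) * (n01 + n00) + (n11 + n01) * (n10 + n00)))

def q_statistics(w_i, w_j):                     # Formula 2
    n11, n10, n01, n00 = pair_counts(w_i, w_j)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

def binary_disagreement(w_i, w_j):              # Formula 3
    n11, n10, n01, n00 = pair_counts(w_i, w_j)
    return (n01 + n10) / (n11 + n10 + n01 + n00)

def team_average(metric, ws):                   # Formula 4
    pairs = list(combinations(ws, 2))
    return sum(metric(a, b) for a, b in pairs) / len(pairs)

def kw_variance(ws):                            # Formula 6
    S, N = len(ws), len(ws[0])
    l = [sum(w[k] for w in ws) for k in range(N)]
    return sum(lk * (S - lk) for lk in l) / (N * S * S)
```

As a sanity check, for two correctness vectors with N11 = 4, N10 = 1, N01 = 2, N00 = 3, these give κ = 20/49, QS = 5/7 and θ = 0.3.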
Y is a random variable representing the proportion of classifiers (out of S) that fail to recognize a random sample x. The probability of Y = i/S is denoted as pi, i.e., the probability that i (out of S) classifiers recognize a randomly chosen sample x incorrectly. p(1) represents the expected probability of one randomly picked model failing, while p(2) denotes the expected probability of two randomly picked models both failing. GD varies between 0 and 1. The maximum diversity (1) occurs when the failure of one model is always accompanied by the correct recognition by the other model for two randomly picked models, that is, p(2) = 0. When two randomly picked models always fail together, we have p(1) = p(2), corresponding to the minimum diversity, 0.\np(1) = ∑_{i=1}^{S} (i/S) pi\np(2) = ∑_{i=1}^{S} (i(i − 1)/(S(S − 1))) pi\nGD = 1 − p(2)/p(1)\n(7)\nAlgorithm 1 shows the sketch of the process of using a threshold-based filter. The diversity threshold calculation function is denoted as Θ, such as the mean function. First, we calculate the diversity measurements for all ensemble teams. Then, based on the diversity threshold θ(Q) (Line 10), we can prune out the teams with low diversity (qi ≥ θ(Q)) and place the remaining high-diversity ensembles into GEnsSet (Line 11∼15). With a proper threshold θ, the threshold-based pruning can efficiently prune out low-diversity deep ensembles." }, { "heading": "D THE ALGORITHM FOR COMPUTING HQ-DIVERSITY METRICS", "text": "Unlike the Q-diversity metrics, HQ-diversity metrics calculate the diversity among the ensembles of the same size with a focal model. Algorithm 2 shows the skeleton of calculating the HQ diversity metrics for all the candidate ensembles in EnsSet. For each team size S (Line 6∼29), we follow two general steps to calculate the HQ diversity scores for each ensemble. First, each model in the base model pool will serve as the focal model.
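As a concrete companion to Formulas (1)–(7) above, the pairwise and non-pairwise diversity metrics can be computed directly from the binary correctness vectors ω. The sketch below is ours (function names and the NumPy representation of the S × N correctness matrix are not from the paper); it follows the formulas literally.

```python
import numpy as np

def pair_counts(wi, wj):
    """Contingency counts N11, N10, N01, N00 for two correctness vectors."""
    wi, wj = np.asarray(wi, bool), np.asarray(wj, bool)
    n11 = np.sum(wi & wj)          # both correct
    n00 = np.sum(~wi & ~wj)        # both wrong
    n10 = np.sum(wi & ~wj)         # only Fi correct
    n01 = np.sum(~wi & wj)         # only Fj correct
    return n11, n10, n01, n00

def cohens_kappa(wi, wj):          # Formula (1)
    n11, n10, n01, n00 = pair_counts(wi, wj)
    num = 2 * (n11 * n00 - n01 * n10)
    den = (n11 + n10) * (n01 + n00) + (n11 + n01) * (n10 + n00)
    return num / den

def q_statistic(wi, wj):           # Formula (2)
    n11, n10, n01, n00 = pair_counts(wi, wj)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

def binary_disagreement(wi, wj):   # Formula (3)
    n11, n10, n01, n00 = pair_counts(wi, wj)
    return (n01 + n10) / (n11 + n10 + n01 + n00)

def averaged_pairwise(metric, W):  # Formula (4): mean over all pairs
    S = len(W)
    vals = [metric(W[i], W[j]) for i in range(S - 1) for j in range(i + 1, S)]
    return 2.0 / (S * (S - 1)) * sum(vals)

def kw_variance(W):                # Formula (6)
    W = np.asarray(W, dtype=float)           # S x N correctness matrix
    S, N = W.shape
    l = W.sum(axis=0)                        # l(x_k): correct votes per sample
    return float(np.sum(l * (S - l)) / (N * S ** 2))

def generalized_diversity(W):      # Formula (7)
    W = np.asarray(W, dtype=float)
    S, N = W.shape
    fails = S - W.sum(axis=0)                # failing models per sample
    p = np.array([(fails == i).mean() for i in range(S + 1)])  # p_i
    i = np.arange(S + 1)
    p1 = np.sum(i / S * p)
    p2 = np.sum(i * (i - 1) / (S * (S - 1)) * p)
    return 1.0 - p2 / p1
```

Fleiss’ Kappa (Formula 5) follows the same pattern from `l` and the average accuracy p̄, and is omitted here for brevity.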
For the specific focal model Ffocal, letEnsSet(Ffocal, S) denote all candidate ensembles of size S, each containing the member model Ffocal. We first compute the Q-diversity score for each ensemble in EnsSet(Ffocal, S) with the negative samples drawn from the focal model Ffocal and store them in D(Q,S, Ffocal) (Line 10∼10). Then, in order to make them comparable across different focal models, we scale D(Q,S, Ffocal) into [0, 1] and store them into D(Q,S, Ffocal, Ti) for each ensemble Ti (Line 15∼18). Second, for each candidate ensemble (Ti) of size S, we perform a weighted average of the scaled diversity scores D(Q,S, Ffocal = Ti[j], Ti) associated with each of its member model Ti[j] to obtain the unified\nAlgorithm 1 Threshold-based Q-diversity Pruning 1: procedure THRESHOLD-BASED-PRUNING(NegSampSet,Q,Θ, EnsSet) 2: Input: NegSampSet: negative samples;Q the diversity metric; Θ: the diversity threshold calculation\nfunction; EnsSet: the set of ensemble teams to be considered; 3: Output: GEnsSet: the set of good ensemble teams. 4: Initialize GEnsSet = {}, D = {} 5: for i = 1 to |EnsSet| do 6: . calculate the diversity metric Q for Ti ∈ EnsSet 7: qi = DiversityMetric(Q,Ti, NegSampSet) 8: D.append(qi) . Store qi in the diversity measures D 9: end for 10: θ(Q) = Θ(D) . Calculate the diversity threshold 11: for i = 1 to |EnsSet| do 12: if qi < θ(Q) then 13: GEnsSet.add(Ti) . add qualified Ti 14: end if 15: end for 16: return GEnsSet 17: end procedure\nAlgorithm 2 HQ Diversity Metric Calculation 1: procedure GETHQ(NegSampSet,Q,EnsSet) 2: Input: NegSampSet: negative samples for each model; Q the diversity metric; EnsSet: the set of\nensemble teams to be considered; 3: Output: HQ: the set of HQ diversity measurements 4: Initialize D(Q) = {}, D(Q) = {} 5: Initialize HQ = {} . A map of HQ diversity metrics and teams 6: for S = 2 to M − 1 do 7: for focal = 0 to M − 1 do 8: Obtain EnsSet(Ffocal, S) with candidate teams of size S and containing Ffocal. 
9: Initialize D(Q,S, Ffocal) = [ ] 10: for i = 1 to |EnsSet(Ffocal, S)| do 11: . calculate the diversity metric Q for Ti ∈ EnsSet(Ffocal, S) 12: qi = DiversityMetric(Q,Ti, NegSampSet(Ffocal)) 13: D(Q,S, Ffocal).append(qi) . add qi into D(Q,S, Ffocal) 14: end for 15: for i = 1 to |EnsSet(Ffocal, S)| do 16: . scale the diversity metrics for ensemble teams of the same size 17: D(Q,S, Ffocal, Ti) = (qi − min(D(Q,S, Ffocal))) / (max(D(Q,S, Ffocal)) − min(D(Q,S, Ffocal))) . Scale to [0, 1] 18: end for 19: end for 20: Obtain EnsSet(S) with candidate teams of size S 21: for i = 1 to |EnsSet(S)| do 22: Initialize tmpD = {} 23: for j = 0 to |Ti| − 1 do 24: tmpD.append(D(Q,S, Ffocal = Ti[j], Ti)) 25: end for 26: w = MemberModelAccuracyRank(Ti) . Obtain the weights for combining tmpD 27: HQ(Ti) = WeightedAverage(w, tmpD) 28: end for 29: end for 30: return HQ 31: end procedure\nHQ score. The weight is calculated with the corresponding rank of accuracy of the member model (Ti[j]) in the ensemble (Ti), i.e., the member model with higher accuracy will have a higher weight (Line 21∼28).\nE THE ALGORITHM FOR THE α FILTER\nTo construct deep ensemble teams of diverse models, we start by building the ensembles of a smaller size, such as S = 2 with (M choose 2) = M(M − 1)/2 candidates. For a larger size, such as S∗ = S + 1, we then extend these candidate ensembles of size S by adding another member model from the base model pool.
This way of constructing deep ensembles enables us to efficiently form high-quality deep ensembles step by step and strategically prune out low-diversity ensembles.\nIntuitively, if an ensemble team of a larger size S = 3, such as [F5, F6, F7], contains a subset of models with lower ensemble diversity (i.e., higher correlation), such as [F5, F6], then other teams of size S = 3 that avoid this subset, such as [F5, F7, F9], are likely to have higher diversity than [F5, F6, F7]. We can therefore preemptively prune out [F5, F6] at S = 2 to avoid calculating the diversity scores for any ensemble with S > 2 that contains [F5, F6], as Figure 5 shows.\nAlgorithm 3 α Filter 1: procedure α-FILTER(NegSampSet,Q, β,EnsSet) 2: Input: NegSampSet: negative samples; Q: the diversity metric; β: the percentage of ensemble teams to be pruned out at each step; EnsSet: the set of ensemble teams to be considered; 3: Output: GEnsSet(Q): the set of good ensemble teams pruned by α-pruning with diversity metric Q. 4: Initialize GEnsSet(Q) = {}, D = {} 5: Initialize pruneSet = {} . To prune out. 6: for S = 2 to M do 7: Initialize tmpGEnsSet(Q,S) = {}. 8: Construct ensembles from EnsSet of size S into EnsSet(S) 9: for i = 1 to |EnsSet(S)| do 10: if Ti contains any group of models in pruneSet then 11: continue . Prune out this branch 12: else 13: qi = DiversityMetric(Q,Ti, NegSampSet) 14: D.append(qi) 15: tmpGEnsSet(Q,S).append(Ti) 16: end if 17: end for 18: n = β ∗ |tmpGEnsSet(Q,S)| 19: sort Ti ∈ tmpGEnsSet(Q,S) by qi 20: remove the n teams of lowest diversity from tmpGEnsSet(Q,S) and add them into pruneSet 21: GEnsSet(Q) ∪= tmpGEnsSet(Q,S) 22: end for 23: return GEnsSet(Q) 24: end procedure\nTherefore, with this property we can effectively prune out low-diversity deep ensembles. Algorithm 3 presents a skeleton of the pseudocode describing this pruning process. NegSampSet contains the set of negative samples for calculating the diversity metric Q.
β marks the percentage of the teams to be further pruned out for a fixed team size. By default, we set β = 10%. EnsSet contains the set of ensemble teams to be considered. For each team size, we omit all the teams that contain any group of models in pruneSet. For the remaining teams, we measure their diversity scores and ordered them based on the diversity score pi. Then we remove β of the remaining teams with the lowest diversity and add them into pruneSet for further pruning. This algorithm can significantly avoid exploring unpromising branches in searching for high-quality ensembles." }, { "heading": "F THE BASE MODEL POOLS FOR THREE BENCHMARK DATASETS", "text": "We evaluate the proposed hierarchical diversity pruning methods using three benchmark datasets, CIFAR-10, ImageNet, and Cora. The specification of these datasets and the base model pools for each of the datasets are included in this section as Table 5 shows. We use 10 base models in this study for each dataset, primarily collected from GTModelZoo (GTModelZoo Developers, 2020).\nG THE α FILTER ON Q DIVERSITY METRICS\nWe also applied the α filter with six Q-diversity metrics for pruning out low-diversity ensembles. Figure 6 shows the experimental results on the Q-GD metric on CIFAR-10, where Figure 6a, 6b and 6c present all the candidate ensembles of size 3, 4 and 5 respectively, and the relationship of the Q-diversity metric GD and ensemble accuracy. The black dots mark the ensembles that pruned out by the α filter while the red ones represent the remaining ensembles. Even though, the α filter can significantly filter many low-diversity ensembles, we still miss a fair number of ensembles with high ensemble accuracy. 
There are two primary reasons behind this observation: (1) The Q-diversity metrics fail to precisely capture the diversity of ensembles; therefore, when pruning out a low Q-diversity branch, such as in Figure 6a with S = 3, some ensembles of a larger size with high diversity (with low Q-GD values) may also be pruned out in Figure 6b with S = 4. (2) A few ensembles with high ensemble accuracy have low diversity, demonstrating that Q-diversity metrics may not be effectively correlated with ensemble accuracy. We further perform a comprehensive evaluation on three datasets, as Table 6 shows. Due to the above inherent problems with Q metrics, the α filter on our HQ metrics achieves much better performance than on Q metrics, when comparing Table 6 with Tables 2, 3, and 7." }, { "heading": "H EXPERIMENTAL EVALUATION ON CORA DATASET", "text": "We also evaluate our methods on a popular graph dataset, Cora. The same set of experimental results is shown in Table 7. We find similar observations as on CIFAR-10 and ImageNet. First, the α filter with HQ metrics works much better than with Q metrics. HQ metrics can capture more high-accuracy (≥ 89%) ensembles (14∼18) than the 6∼17 captured by Q metrics with the mean threshold. Second, the combined hierarchical pruning method of the α filter and K-Means filter on HQ metrics (α+K) can significantly improve the ensemble accuracy lower bound from 82.10% to 86.70%∼87.80%, as well as the probability of high-accuracy ensembles among the selected ones." } ]
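As a companion to Algorithm 3 above, the α filter's core loop (grow team size, skip any candidate containing an already-pruned subset, then drop the bottom β fraction at each size) can be sketched as below. This is our own minimal sketch: `diversity` is a hypothetical callable supplied by the caller, and we assume a kappa-like convention where a lower score means higher diversity, matching the paper's threshold rule qi < θ(Q).

```python
from itertools import combinations

def alpha_filter(models, diversity, beta=0.10, max_size=None):
    """Sketch of Algorithm 3 (alpha filter).

    models    : list of base models (or any per-model data diversity() accepts)
    diversity : callable mapping a list of member models to a score,
                where LOWER score = HIGHER diversity (kappa-like metrics)
    beta      : fraction of lowest-diversity teams pruned at each size
    """
    M = len(models)
    max_size = max_size or M
    pruned = []    # low-diversity index groups to avoid in larger teams
    good = []      # surviving ensembles (tuples of model indices)
    for S in range(2, max_size + 1):
        kept = []
        for team in combinations(range(M), S):
            # skip any branch containing a previously pruned subset
            if any(set(p) <= set(team) for p in pruned):
                continue
            score = diversity([models[i] for i in team])
            kept.append((score, team))
        kept.sort(key=lambda t: t[0])          # ascending: most diverse first
        n = int(beta * len(kept))
        if n:
            # highest scores = lowest diversity -> prune and remember
            pruned.extend(team for _, team in kept[-n:])
            kept = kept[:-n]
        good.extend(team for _, team in kept)
    return good
```

With M base models this avoids scoring every one of the 2^M − M − 1 candidate teams, since any superset of a pruned group is skipped without a diversity computation.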
2,020
null
SP:276a1974451e9740ff761c45ff63de47aabe0534
[ "This paper proposes a new stochastic normalized gradient descent method with momentum (SNGM) for large batch training. They prove that, unlike momentum SGD (MSGD), SNGM can adopt a larger batch size to converge to the epsilon-stationary point with the same computation complexity (total number of gradient computations). The paper shows that SNGM with large batches is comparable to MSGD with small batches for training ResNet on CIFAR10 and ImageNet. The paper also shows that SNGM outperforms LARS on CIFAR10.", "Large batch training has been observed to significantly improve training speed but also to lead to worse generalization performance. This paper considers how to improve the performance of MSGD in large batch training. They propose the so-called normalized MSGD, where instead of the gradient, the algorithm uses the normalized gradient to update the momentum. They also provide theoretical justification for this change by considering smooth and relaxed smooth functions. An O(1/\\epsilon^4) convergence rate is established." ]
Stochastic gradient descent (SGD) and its variants have been the dominating optimization methods in machine learning. Compared with small batch training, SGD with large batch training can better utilize the computational power of current multi-core systems like GPUs and can reduce the number of communication rounds in distributed training. Hence, SGD with large batch training has attracted more and more attention. However, existing empirical results show that large batch training typically leads to a drop of generalization accuracy. As a result, large batch training has also become a challenging topic. In this paper, we propose a novel method, called stochastic normalized gradient descent with momentum (SNGM), for large batch training. We theoretically prove that compared to momentum SGD (MSGD), which is one of the most widely used variants of SGD, SNGM can adopt a larger batch size to converge to the ε-stationary point with the same computation complexity (total number of gradient computations). Empirical results on deep learning also show that SNGM can achieve the state-of-the-art accuracy with a large batch size.
[]
[ { "authors": [ "Boris Ginsburg", "Patrice Castonguay", "Oleksii Hrinchuk", "Oleksii Kuchaiev", "Vitaly Lavrukhin", "Ryan Leary", "Jason Li", "Huyen Nguyen", "Jonathan M. Cohen" ], "title": "Stochastic gradient methods with layerwise adaptive moments for training of deep networks", "venue": null, "year": 1905 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": null, "year": 2017 }, { "authors": [ "Elad Hazan", "Kfir Y. Levy", "Shai Shalev-Shwartz" ], "title": "Beyond convexity: Stochastic quasi-convex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yann A. LeCun", "Léon Bottou", "Genevieve B. Orr", "Klaus-Robert Müller" ], "title": "Efficient BackProp, pp. 9–48", "venue": null, "year": 2012 }, { "authors": [ "Mu Li", "David G. Andersen", "Alexander J. 
Smola", "Kai Yu" ], "title": "Communication efficient distributed machine learning with the parameter server", "venue": null, "year": 2014 }, { "authors": [ "Mu Li", "Tong Zhang", "Yuqiang Chen", "Alexander J. Smola" ], "title": "Efficient mini-batch training for stochastic optimization", "venue": "In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining,", "year": 2014 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tao Lin", "Sebastian U. Stich", "Kumar Kshitij Patel", "Martin Jaggi" ], "title": "Don’t use large mini-batches, use local SGD", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yurii E. Nesterov" ], "title": "Introductory Lectures on Convex Optimization - A Basic Course, volume 87 of Applied Optimization", "venue": null, "year": 2004 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "In Proceedings of the Conference on Machine Translation,", "year": 2018 }, { "authors": [ "Boris Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "Ussr Computational Mathematics and Mathematical Physics, 4:1–17,", "year": 1964 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The Annals of Mathematical Statistics,", "year": 1951 }, { "authors": [ "Ashia C. 
Wilson", "Lester Mackey", "Andre Wibisono" ], "title": "Accelerating rescaled gradient descent: Fast optimization of smooth functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling SGD batch size to 32k for imagenet training", "venue": null, "year": 2017 }, { "authors": [ "Yang You", "Jing Li", "Sashank J. Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training BERT in 76 minutes", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hao Yu", "Rong Jin", "Sen Yang" ], "title": "On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hao Yu", "Sen Yang", "Shenghuo Zhu" ], "title": "Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Jingzhao Zhang", "Tianxing He", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Why gradient clipping accelerates training: A theoretical justification for adaptivity", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "In machine learning, we often need to solve the following empirical risk minimization problem:\nmin_{w∈R^d} F (w) = (1/n) ∑_{i=1}^{n} f_i(w), (1)\nwhere w ∈ R^d denotes the model parameter, n denotes the number of training samples, and f_i(w) denotes the loss on the ith training sample. The problem in (1) can be used to formulate a broad family of machine learning models, such as logistic regression and deep learning models.\nStochastic gradient descent (SGD) Robbins & Monro (1951) and its variants have been the dominating optimization methods for solving (1). SGD and its variants are iterative methods. In the tth iteration, these methods randomly choose a subset (also called a mini-batch) I_t ⊂ {1, 2, . . . , n} and compute the stochastic mini-batch gradient (1/B) ∑_{i∈I_t} ∇f_i(w_t) for updating the model parameter, where B = |I_t| is the batch size. Existing works Li et al. (2014b); Yu et al. (2019a) have proved that with a batch size of B, SGD and its momentum variant, called momentum SGD (MSGD), achieve an O(1/√(TB)) convergence rate for smooth non-convex problems, where T is the total number of model parameter updates.\nWith the popularity of multi-core systems and the easy implementation of data parallelism, many distributed variants of SGD have been proposed, including parallel SGD Li et al. (2014a), decentralized SGD Lian et al. (2017), local SGD Yu et al. (2019b); Lin et al. (2020), local momentum SGD Yu et al. (2019a) and so on. Theoretical results show that all these methods can achieve an O(1/√(TKb)) convergence rate for smooth non-convex problems. Here, b is the batch size on each worker and K is the number of workers. By setting Kb = B, we can observe that the convergence rate of these distributed methods is consistent with that of sequential methods. In distributed settings, a small number of model parameter updates T implies a small synchronization cost and communication cost. Hence, a small T can further speed up the training process.
Based on the O(1/√(TKb)) convergence rate, we can find that if we adopt a larger b, then T will be smaller. Hence, large batch training can reduce the number of communication rounds in distributed training. Another benefit of adopting large batch training is to better utilize the computational power of current multi-core systems like GPUs You et al. (2017). Hence, large batch training has recently attracted more and more attention in machine learning.\nUnfortunately, empirical results LeCun et al. (2012); Keskar et al. (2017) show that existing SGD methods with a large batch size will lead to a drop of generalization accuracy on deep learning models. Figure 1 shows the comparison of training loss and test accuracy between MSGD with a small batch size and MSGD with a large batch size. We can find that large batch training does degrade both training loss and test accuracy. Many works try to explain this phenomenon Keskar et al. (2017); Hoffer et al. (2017). They observe that SGD with a small batch size typically makes the model parameter converge to a flat minimum, while SGD with a large batch size typically makes the model parameter fall into the region of a sharp minimum. And usually, a flat minimum can achieve better generalization ability than a sharp minimum. Hence, large batch training has also become a challenging topic.\nRecently, many methods have been proposed for improving the performance of SGD with a large batch size. The work in Goyal et al. (2017); You et al. (2020) proposes many tricks, like warm-up, momentum correction and linearly scaling the learning rate, for large batch training. The work in You et al. (2017) observes that the gradient norms at different layers of deep neural networks differ widely, and the authors propose the layer-wise adaptive rate scaling method (LARS). The work in Ginsburg et al. (2019) also proposes a similar method that updates the model parameter in a layer-wise way.
Most of these methods lack theoretical evidence to explain why they can adopt a large batch size. Although the work in You et al. (2020) proposes some theoretical explanations for LARS, the implementation is still not consistent with its theorems, in which both the momentum coefficient and the weight decay are set to zero.\nIn this paper, we propose a novel method, called stochastic normalized gradient descent with momentum (SNGM), for large batch training. SNGM combines the normalized gradient Nesterov (2004); Hazan et al. (2015); Wilson et al. (2019) and Polyak’s momentum technique Polyak (1964) together. The main contributions of this paper are outlined as follows:\n• We theoretically prove that compared to MSGD, which is one of the most widely used variants of SGD, SNGM can adopt a larger batch size to converge to the ε-stationary point with the same computation complexity (total number of gradient computations). That is to say, SNGM needs a smaller number of parameter updates, and hence has faster training speed than MSGD.\n• For a relaxed smooth objective function (see Definition 2), we theoretically show that SNGM can achieve an ε-stationary point with a computation complexity of O(1/ε^4). To the best of our knowledge, this is the first work that analyzes the computation complexity of stochastic optimization methods for a relaxed smooth objective function.\n• Empirical results on deep learning also show that SNGM can achieve the state-of-the-art accuracy with a large batch size." }, { "heading": "2 PRELIMINARIES", "text": "In this paper, we use ‖·‖ to denote the Euclidean norm and use w∗ to denote one of the optimal solutions of (1), i.e., w∗ ∈ argmin_w F (w). We call w an ε-stationary point of F (w) if ‖∇F (w)‖ ≤ ε. The computation complexity of an algorithm is the total number of its gradient computations.
We also give the following assumption and definitions:\nAssumption 1 (σ-bounded variance) For any w, E‖∇f_i(w) − ∇F (w)‖^2 ≤ σ^2 (σ > 0).\nDefinition 1 (Smoothness) A function φ(·) is L-smooth (L > 0) if for any u, w,\nφ(u) ≤ φ(w) + ∇φ(w)^T (u − w) + (L/2)‖u − w‖^2.\nL is called the smoothness constant in this paper.\nDefinition 2 (Relaxed smoothness Zhang et al. (2020)) A function φ(·) is (L, λ)-smooth (L ≥ 0, λ ≥ 0) if φ(·) is twice differentiable and for any w,\n‖∇^2 φ(w)‖ ≤ L + λ‖∇φ(w)‖,\nwhere ∇^2 φ(w) denotes the Hessian matrix of φ(w).\nFrom the above definition, we can observe that if a function φ(w) is (L, 0)-smooth, then it is a classical L-smooth function Nesterov (2004). For an (L, λ)-smooth function, we have the following property Zhang et al. (2020):\nLemma 1 If φ(·) is (L, λ)-smooth, then for any u, w, α such that ‖u − w‖ ≤ α, we have\n‖∇φ(u)‖ ≤ (Lα + ‖∇φ(w)‖)e^{λα}.\nAll the proofs of lemmas and corollaries of this paper are put in the supplementary." }, { "heading": "3 RELATIONSHIP BETWEEN SMOOTHNESS CONSTANT AND BATCH SIZE", "text": "In this section, we deeply analyze the convergence property of MSGD to find the relationship between the smoothness constant and the batch size, which provides an insightful hint for designing our new method SNGM.\nMSGD can be written as follows:\nv_{t+1} = βv_t + g_t, (2)\nw_{t+1} = w_t − ηv_{t+1}, (3)\nwhere g_t = (1/B) ∑_{i∈I_t} ∇f_i(w_t) is a stochastic mini-batch gradient with a batch size of B, and v_{t+1} is the Polyak’s momentum Polyak (1964).\nWe aim to find how large the batch size can be without loss of performance. The convergence rate of MSGD with batch size B for L-smooth functions can be derived from the work in Yu et al. (2019a). That is to say, when η ≤ (1 − β)^2/((1 + β)L), we obtain\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖^2 ≤ 2(1 − β)[F (w_0) − F (w∗)]/(ηT) + Lησ^2/((1 − β)^2 B) + 4L^2 η^2 σ^2/(1 − β)^2 = O(B/(ηC)) + O(η/B) + O(η^2), (4)\nwhere C = TB denotes the computation complexity (total number of gradient computations). According to Corollary 1 in Yu et al.
(2019a), we set η = √B/√T = B/√C and obtain\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ ≤ √( O(1/√C) + O(B^2/C) ). (5)\nAlgorithm 1 SNGM\nInitialization: η > 0, β ∈ [0, 1), B > 0, T > 0, u_0 = 0, w_0;\nfor t = 0, 1, . . . , T − 1 do\nRandomly choose B function indices, denoted as I_t;\nCompute a mini-batch gradient g_t = (1/B) ∑_{i∈I_t} ∇f_i(w_t);\nu_{t+1} = βu_t + g_t/‖g_t‖;\nw_{t+1} = w_t − ηu_{t+1};\nend for\nSince η ≤ (1 − β)^2/((1 + β)L) is necessary for (4), we first obtain that B ≤ O(√C/L). Furthermore, according to the right term of (5), we have to set B such that B^2/C ≤ 1/√C, i.e., B ≤ C^{1/4}, for the O(1/ε^4) computation complexity guarantee. Hence, in MSGD, we have to set the batch size satisfying\nB ≤ O(min{√C/L, C^{1/4}}). (6)\nWe can observe that a larger L leads to a smaller batch size in MSGD. If B does not satisfy (6), MSGD will get a higher computation complexity.\nIn fact, to the best of our knowledge, among all the existing convergence analyses of SGD and its variants on both convex and non-convex problems, we can observe three necessary conditions for the O(1/ε^4) computation complexity guarantee Li et al. (2014b;a); Lian et al. (2017); Yu et al. (2019b;a): (a) the objective function is L-smooth; (b) the learning rate η is less than O(1/L); (c) the batch size B is proportional to the learning rate η. One direct corollary is that the batch size is limited by the smoothness constant L, i.e., B ≤ O(1/L). Hence, we cannot increase the batch size casually in these SGD based methods. Otherwise, it may slow down the convergence rate and we need to compute more gradients, which is consistent with the observations in Hoffer et al. (2017)." }, { "heading": "4 STOCHASTIC NORMALIZED GRADIENT DESCENT WITH MOMENTUM", "text": "In this section, we propose our novel method, called stochastic normalized gradient descent with momentum (SNGM), which is presented in Algorithm 1.
In the t-th iteration, SNGM runs the following update:\nu_{t+1} = βu_t + g_t/‖g_t‖, (7)\nw_{t+1} = w_t − ηu_{t+1}, (8)\nwhere g_t = (1/B) ∑_{i∈I_t} ∇f_i(w_t) is a stochastic mini-batch gradient with a batch size of B. When β = 0, SNGM degenerates to stochastic normalized gradient descent (SNGD) Hazan et al. (2015). The u_t is a variant of Polyak’s momentum. But different from Polyak’s MSGD, which adopts g_t directly for updating u_{t+1}, SNGM adopts the normalized gradient g_t/‖g_t‖ for updating u_{t+1}. In MSGD, we can observe that if g_t is large, then u_t may be large as well, and this may lead to a bad model parameter. Hence, we have to control the learning rate in MSGD, i.e., η ≤ O(1/L), for an L-smooth objective function. The following lemma shows that ‖u_t‖ in SNGM can be well controlled whether g_t is large or small.\nLemma 2 Let {u_t} be the sequence produced by (7); then we have, ∀t ≥ 0,\n‖u_t‖ ≤ 1/(1 − β)." }, { "heading": "4.1 SMOOTH OBJECTIVE FUNCTION", "text": "For a smooth objective function, we have the following convergence result of SNGM:\nTable 1: Comparison between MSGD and SNGM for an L-smooth objective function. C denotes the computation complexity (total number of gradient computations).\n| | (1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ | learning rate | batch size |\n| MSGD | √(O(1/√C) + O(B^2/C)) | B/√C | min{√C/L, C^{1/4}} |\n| SNGM | O(1/C^{1/4}) | √B/√C | √C |\nTheorem 1 Let F (w) be an L-smooth function (L > 0). The sequence {w_t} is produced by Algorithm 1. Then for any η > 0, B > 0, we have\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ ≤ 2(1 − β)[F (w_0) − F (w∗)]/(ηT) + Lκη + 2σ/√B, (9)\nwhere κ = (1 + β)/(1 − β)^2.\nProof 1 See the supplementary.\nWe can observe that, different from (4), which needs η ≤ O(1/L), (9) is true for any positive learning rate. According to Theorem 1, we obtain the following computation complexity of SNGM:\nCorollary 1 Let F (w) be an L-smooth function (L > 0). The sequence {w_t} is produced by Algorithm 1.
Given any total number of gradient computations C > 0, let T = ⌈C/B⌉,\nB = √( C(1 − β)σ^2 / (2L(1 + β)(F (w_0) − F (w∗))) ),\nand\nη = √( 2(1 − β)^3 (F (w_0) − F (w∗))B / ((1 + β)LC) ).\nThen we have\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ ≤ 2√2 · ( 8L(1 + β)[F (w_0) − F (w∗)]σ^2 / ((1 − β)C) )^{1/4} = O(1/C^{1/4}).\nHence, the computation complexity for achieving an ε-stationary point is O(1/ε^4).\nIt is easy to verify that the η and B in Corollary 1 make the right-hand side of (9) minimal. However, this η and B rely on L and F (w∗), which are usually unknown in practice. The following corollary shows the computation complexity of SNGM with simple settings of the learning rate and batch size.\nCorollary 2 Let F (w) be an L-smooth function (L > 0). The sequence {w_t} is produced by Algorithm 1. Given any total number of gradient computations C > 0, let T = ⌈C/B⌉, B = √C and η = √(B/C). Then we have\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ ≤ 2(1 − β)[F (w_0) − F (w∗)]/C^{1/4} + L(1 + β)/((1 − β)^2 C^{1/4}) + 2σ/C^{1/4} = O(1/C^{1/4}).\nHence, the computation complexity for achieving an ε-stationary point is O(1/ε^4).\nAccording to Corollary 2, the batch size of SNGM can be set as O(√C), which does not rely on the smoothness constant L, and the O(1/ε^4) computation complexity is still guaranteed (see Table 1). Hence, SNGM can adopt a larger batch size than MSGD, especially when L is large." }, { "heading": "4.2 RELAXED SMOOTH OBJECTIVE FUNCTION", "text": "Recently, the authors in Zhang et al. (2020) observed the relaxed smooth property in deep neural networks. According to Definition 2, the relaxed smooth property is more general than the L-smooth property. For a relaxed smooth objective function, we have the following convergence result of SNGM:\nTheorem 2 Let F (w) be an (L, λ)-smooth function (L ≥ 0, λ > 0). The sequence {w_t} is produced by Algorithm 1 with the learning rate η and batch size B.
Then we have\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ ≤ 2(1 − β)[F (w_0) − F (w∗)]/(ηT) + 8Lκη + 4σ/√B, (10)\nwhere κ = (1 + β)/(1 − β)^2 and η ≤ 1/(8κλ).\nProof 2 The proof is similar to that of Theorem 1. See the supplementary.\nAccording to Theorem 2, we obtain the computation complexity of SNGM:\nCorollary 3 Let F (w) be an (L, λ)-smooth function (L ≥ 0, λ ≥ 0). The sequence {w_t} is produced by Algorithm 1. Given any total number of gradient computations C > 0, let T = ⌈C/B⌉, B = √C and η = (1/C)^{1/4} ≤ 1/(8κλ). Then we have\n(1/T) ∑_{t=0}^{T−1} E‖∇F (w_t)‖ ≤ 2(1 − β)[F (w_0) − F (w∗)]/C^{1/4} + 8L(1 + β)/((1 − β)^2 C^{1/4}) + 4σ/C^{1/4} = O(1/C^{1/4}).\nHence, the computation complexity for achieving an ε-stationary point is O(1/ε^4).\nAccording to Corollary 3, SNGM with a batch size of B = √C can still guarantee an O(1/ε^4) computation complexity for a relaxed smooth objective function." }, { "heading": "5 EXPERIMENTS", "text": "All experiments are conducted with the platform of PyTorch, on a server with eight NVIDIA Tesla V100 (32G) GPU cards. The datasets for evaluation include CIFAR10 and ImageNet." }, { "heading": "5.1 ON CIFAR10", "text": "First, we evaluate SNGM by training ResNet20 and ResNet56 on CIFAR10. CIFAR10 contains 50k training samples and 10k test samples. We compare SNGM with MSGD and an existing large batch training method, LARS You et al. (2017). We implement LARS by using the open source code 1. The standard strategy He et al. (2016) for training the two models on CIFAR10 is using MSGD with a weight decay of 0.0001, a batch size of 128, an initial learning rate of 0.1, and dividing the learning rate at the 80th and 120th epochs. We also adopt this strategy for MSGD in this experiment. For SNGM and LARS, we set a large batch size of 4096 and also a weight decay of 0.0001. Following You et al. (2017), we adopt the poly power learning rate strategy and adopt gradient accumulation Ott et al. (2018) with a batch size of 128 for the two large batch training methods.
The momentum coefficient is 0.9 for all methods. Unlike existing heuristic methods for large batch training, we do not adopt the warm-up strategy for SNGM.

The results are presented in Figure 2. As can be seen, SNGM achieves a better convergence rate in terms of training loss than LARS. Detailed information about the final convergence results is presented in Table 2. We can observe that MSGD with a batch size of 4096 leads to a significant drop in test accuracy. SNGM with a batch size of 4096 achieves almost the same test accuracy as MSGD with a batch size of 128, while the other large batch training method, LARS, achieves worse test accuracy than MSGD with a batch size of 128. These results verify the effectiveness of SNGM.

1https://github.com/noahgolmant/pytorch-lars" }, { "heading": "5.2 ON IMAGENET", "text": "We also compare SNGM with MSGD by training ResNet18 and ResNet50 on ImageNet. The standard strategy (He et al., 2016) for training the two models on ImageNet is using MSGD with a weight decay of 0.0001, a batch size of 256, an initial learning rate of 0.1, and dividing the learning rate at the 30th and 60th epochs. We also adopt this strategy for MSGD in this experiment. For SNGM, we set a larger batch size of 8192 and a weight decay of 0.0001. We still adopt the poly power learning rate and gradient accumulation with a batch size of 128 for SNGM. We do not adopt the warm-up strategy for SNGM either. The momentum coefficient is 0.9 in the two methods. The results are presented in Figure 3 and Table 3. As can be seen, SNGM with a larger batch size achieves almost the same test accuracy as MSGD with a small batch size." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a novel method called stochastic normalized gradient descent with momentum (SNGM) for large batch training. 
We theoretically prove that, compared with MSGD, which is one of the most widely used variants of SGD, SNGM can adopt a larger batch size to converge to an $\epsilon$-stationary point with the same computation complexity. Empirical results on deep learning also show that SNGM can achieve state-of-the-art accuracy with a large batch size." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 1", "text": "The proof follows Zhang et al. (2020). We include it here for completeness. For any $u, w$, let $r(x) = x(u-w)+w$ and $p(x) = \|\nabla\phi(r(x))\|$ for $x \in [0,1]$. Then we have

$$p(x) = \|\nabla\phi(r(x))\| = \left\|\int_0^x H_\phi(r(y))r'(y)\,dy + \nabla\phi(r(0))\right\| = \left\|\int_0^x H_\phi(r(y))(u-w)\,dy + \nabla\phi(w)\right\|$$
$$\leq \|u-w\|\int_0^x \|H_\phi(r(y))\|\,dy + \|\nabla\phi(w)\| \leq \alpha\int_0^x \left(L + \lambda\|\nabla\phi(r(y))\|\right)dy + \|\nabla\phi(w)\| \leq L\alpha + \|\nabla\phi(w)\| + \lambda\alpha\int_0^x p(y)\,dy.$$

According to Gronwall's inequality, we obtain

$$p(x) \leq (L\alpha + \|\nabla\phi(w)\|)e^{\lambda\alpha}.$$" }, { "heading": "A.2 PROOF OF LEMMA 2", "text": "According to (7), we have

$$\|u_{t+1}\| \leq \beta\|u_t\| + 1 \leq \beta^2\|u_{t-1}\| + \beta + 1 \leq \beta^{t+1}\|u_0\| + \beta^t + \beta^{t-1} + \cdots + 1 \leq \frac{1}{1-\beta}.$$" }, { "heading": "A.3 PROOF OF THEOREM 1", "text": "Let $z_t = w_t + \frac{\beta}{1-\beta}(w_t - w_{t-1})$. Then we have $w_{t+1} = w_t - \eta\frac{g_t}{\|g_t\|} + \beta(w_t - w_{t-1})$ and

$$z_{t+1} = \frac{1}{1-\beta}w_{t+1} - \frac{\beta}{1-\beta}w_t = z_t - \frac{\eta}{1-\beta}\frac{g_t}{\|g_t\|}.$$

Using the smoothness property, we obtain

$$F(z_{t+1}) \leq F(z_t) - \frac{\eta}{1-\beta}\nabla F(z_t)^T\frac{g_t}{\|g_t\|} + \frac{L\eta^2}{2(1-\beta)^2}$$
$$= F(z_t) - \frac{\eta}{1-\beta}\|g_t\| + \frac{L\eta^2}{2(1-\beta)^2} - \frac{\eta}{1-\beta}\left[(\nabla F(z_t)-\nabla F(w_t))^T\frac{g_t}{\|g_t\|} + (\nabla F(w_t)-g_t)^T\frac{g_t}{\|g_t\|}\right]$$
$$\leq F(z_t) - \frac{\eta}{1-\beta}\|g_t\| + \frac{L\eta^2}{2(1-\beta)^2} + \frac{\eta}{1-\beta}\left[L\|z_t-w_t\| + \|\nabla F(w_t)-g_t\|\right] \quad (11)$$

Since $w_{t+1}-w_t = \beta(w_t-w_{t-1}) - \eta g_t/\|g_t\|$, we obtain

$$\|w_{t+1}-w_t\| \leq \beta\|w_t-w_{t-1}\| + \eta \leq \frac{\eta}{1-\beta}.$$

Hence, $\|w_t-w_{t-1}\| \leq \eta/(1-\beta)$ and

$$\|z_t-w_t\| = \frac{\beta}{1-\beta}\|w_t-w_{t-1}\| \leq \frac{\beta\eta}{(1-\beta)^2}. 
(12)$$

Combining the above equations, we obtain

$$\|g_t\| \leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + \frac{L\eta}{2(1-\beta)} + \frac{L\beta\eta}{(1-\beta)^2} + \|\nabla F(w_t)-g_t\|.$$

Since $\|\nabla F(w_t)\| \leq \|\nabla F(w_t)-g_t\| + \|g_t\|$, we obtain

$$\|\nabla F(w_t)\| \leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + \frac{L\eta}{2(1-\beta)} + \frac{L\beta\eta}{(1-\beta)^2} + 2\|\nabla F(w_t)-g_t\|.$$

Using the fact that $\mathbb{E}\|\nabla F(w_t)-g_t\| \leq \sigma/\sqrt{B}$ and summing up the above inequality from $t=0$ to $T-1$, we obtain

$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla F(w_t)\| \leq \frac{2(1-\beta)[F(w_0)-F(w^*)]}{\eta T} + L\kappa\eta + \frac{2\sigma}{\sqrt{B}}.$$" }, { "heading": "A.4 PROOF OF THEOREM 2", "text": "Let $z_t = w_t + \frac{\beta}{1-\beta}(w_t - w_{t-1})$. Then we have $w_{t+1} = w_t - \eta\frac{g_t}{\|g_t\|} + \beta(w_t - w_{t-1})$ and

$$z_{t+1} = \frac{1}{1-\beta}w_{t+1} - \frac{\beta}{1-\beta}w_t = \frac{1}{1-\beta}\left[w_t - \eta\frac{g_t}{\|g_t\|} + \beta(w_t-w_{t-1})\right] - \frac{\beta}{1-\beta}w_t = \frac{1}{1-\beta}w_t - \frac{\beta}{1-\beta}w_{t-1} - \frac{\eta}{1-\beta}\frac{g_t}{\|g_t\|} = z_t - \frac{\eta}{1-\beta}\frac{g_t}{\|g_t\|}.$$

Using the Taylor theorem, there exists $\xi_t$ such that

$$F(z_{t+1}) \leq F(z_t) - \frac{\eta}{1-\beta}\nabla F(z_t)^T\frac{g_t}{\|g_t\|} + \frac{\|H_F(\xi_t)\|\eta^2}{2(1-\beta)^2}$$
$$= F(z_t) - \frac{\eta}{1-\beta}\|g_t\| + \frac{\|H_F(\xi_t)\|\eta^2}{2(1-\beta)^2} - \frac{\eta}{1-\beta}\left[(\nabla F(z_t)-\nabla F(w_t))^T\frac{g_t}{\|g_t\|} + (\nabla F(w_t)-g_t)^T\frac{g_t}{\|g_t\|}\right]. \quad (13)$$

Let $\psi_t(w) = (\nabla F(w)-\nabla F(w_t))^T\frac{g_t}{\|g_t\|}$. Using the Taylor theorem, there exists $\zeta_t$ such that

$$|\psi_t(z_t)| = |\psi_t(w_t) + \nabla\psi_t(\zeta_t)(z_t-w_t)| = |\nabla\psi_t(\zeta_t)(z_t-w_t)| \leq \|H_F(\zeta_t)\|\|z_t-w_t\|. \quad (14)$$

Combining (13) and (14), we obtain

$$\|g_t\| \leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + \frac{\|H_F(\xi_t)\|\eta}{2(1-\beta)} + \left(\|H_F(\zeta_t)\|\|z_t-w_t\| + \|\nabla F(w_t)-g_t\|\right). \quad (15)$$

Since $w_{t+1}-w_t = \beta(w_t-w_{t-1}) - \eta g_t/\|g_t\|$, we obtain

$$\|w_{t+1}-w_t\| \leq \beta\|w_t-w_{t-1}\| + \eta \leq \frac{\eta}{1-\beta}.$$

Hence, $\|w_t-w_{t-1}\| \leq \eta/(1-\beta)$ and

$$\|z_t-w_t\| = \frac{\beta}{1-\beta}\|w_t-w_{t-1}\| \leq \frac{\beta\eta}{(1-\beta)^2}. \quad (16)$$

Combining (15) and (16), we obtain

$$\|g_t\| \leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + \frac{\|H_F(\xi_t)\|\eta}{2(1-\beta)} + \frac{\|H_F(\zeta_t)\|\beta\eta}{(1-\beta)^2} + \|\nabla F(w_t)-g_t\|.$$

Since $\|\nabla F(w_t)\| \leq \|\nabla F(w_t)-g_t\| + \|g_t\|$, we obtain

$$\|\nabla F(w_t)\| \leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + \frac{\eta}{2(1-\beta)}\|H_F(\xi_t)\| + \frac{\beta\eta}{(1-\beta)^2}\|H_F(\zeta_t)\| + 2\|\nabla F(w_t)-g_t\|.$$

Next, we bound the two Hessian matrices. For convenience, we denote $\kappa = \frac{1+\beta}{(1-\beta)^2}$. 
Since $\|z_t - w_t\| \leq \beta\eta/(1-\beta)^2$ and

$$\|z_{t+1}-w_t\| \leq \|z_{t+1}-z_t\| + \|z_t-w_t\| \leq \eta\left(\frac{1}{1-\beta} + \frac{\beta}{(1-\beta)^2}\right) \leq \kappa\eta \leq \frac{1}{\lambda},$$

we obtain

$$\|H_F(\zeta_t)\| \leq L + (L+\lambda\|\nabla F(w_t)\|)e, \quad \|H_F(\xi_t)\| \leq L + (L+\lambda\|\nabla F(w_t)\|)e.$$

Then we obtain

$$\|\nabla F(w_t)\| \leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + \left[\frac{\eta}{2(1-\beta)} + \frac{\beta\eta}{(1-\beta)^2}\right]\left[L + (L+\lambda\|\nabla F(w_t)\|)e\right] + 2\|\nabla F(w_t)-g_t\|$$
$$\leq \frac{(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + 4\kappa\eta\left[L + \lambda\|\nabla F(w_t)\|\right] + 2\|\nabla F(w_t)-g_t\|.$$

Since $4\lambda\kappa\eta \leq 1/2$, we obtain

$$\|\nabla F(w_t)\| \leq \frac{2(1-\beta)[F(z_t)-F(z_{t+1})]}{\eta} + 8L\kappa\eta + 4\|\nabla F(w_t)-g_t\|.$$

Summing up the above inequality from $t=0$ to $T-1$, we obtain

$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla F(w_t)\| \leq \frac{2(1-\beta)[F(w_0)-F(w^*)]}{\eta T} + 8L\kappa\eta + \frac{4\sigma}{\sqrt{B}},$$

where $\eta \leq \frac{1}{8\lambda\kappa}$ and we use the fact that $\mathbb{E}\|\nabla F(w_t)-g_t\| \leq \sigma/\sqrt{B}$." }, { "heading": "A.5 PROOF OF COROLLARY 1", "text": "Let $x = 2(1-\beta)[F(w_0)-F(w^*)]$, $y = L\kappa$, and $z = 2\sigma$. Then we have

$$\frac{xB}{C\eta} + y\eta + \frac{z}{\sqrt{B}} \geq 2\sqrt{\frac{xyB}{C}} + \frac{z}{\sqrt{B}} \geq 2\sqrt{2z\sqrt{\frac{xy}{C}}} = 2\sqrt{2}\,\sqrt[4]{\frac{xyz^2}{C}}.$$

Equality holds if and only if $\eta = \sqrt{Bx/(Cy)}$ and $B = \sqrt{Cz^2/(4xy)}$. Then we obtain

$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla F(w_t)\| \leq 2\sqrt{2}\,\sqrt[4]{\frac{8L(1+\beta)[F(w_0)-F(w^*)]\sigma^2}{(1-\beta)C}}.$$" }, { "heading": "A.6 PROOF OF COROLLARY 2", "text": "By plugging $T = \lceil C/B \rceil$, $B = \sqrt{C}$, and $\eta = \sqrt[4]{1/C}$ into (9), we obtain the result." }, { "heading": "A.7 PROOF OF COROLLARY 3", "text": "By plugging $T = \lceil C/B \rceil$, $B = \sqrt{C}$, and $\eta = \sqrt[4]{1/C}$ into (10), we obtain the result." } ]
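As a reading aid for the analysis above, the SNGM update $u_{t+1} = \beta u_t + g_t/\|g_t\|$, $w_{t+1} = w_t - \eta u_{t+1}$ (equivalently, $w_{t+1} = w_t - \eta g_t/\|g_t\| + \beta(w_t - w_{t-1})$, as used in the proofs) can be sketched in a few lines. The quadratic toy objective and the hyperparameter values below are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def sngm(grad_fn, w0, eta=0.05, beta=0.9, steps=500):
    """Sketch of stochastic normalized gradient descent with momentum (SNGM).

    Update: u_{t+1} = beta * u_t + g_t / ||g_t||, w_{t+1} = w_t - eta * u_{t+1}.
    Lemma 2 guarantees ||u_t|| <= 1 / (1 - beta) for all t.
    """
    w = np.asarray(w0, dtype=float)
    u = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        norm = np.linalg.norm(g)
        if norm == 0.0:  # already at a stationary point
            break
        u = beta * u + g / norm
        assert np.linalg.norm(u) <= 1.0 / (1.0 - beta) + 1e-9  # Lemma 2
        w = w - eta * u
    return w

# Toy smooth objective F(w) = 0.5 * ||w||^2 with gradient w (illustrative only).
w_final = sngm(lambda w: w, w0=[5.0, -3.0])
print(np.linalg.norm(w_final))
```

With a fixed learning rate, the normalized update cannot converge exactly; the iterates settle into a neighborhood of the stationary point whose radius scales with $\eta/(1-\beta)$, matching the $O(\eta)$ terms in the bounds.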
2020
null
SP:c3236039988295311cdf505107bffa85b883e680
[ "This paper introduces a weighted line graph formulation (WLGCL) which corrects the over-counting (\"bias\") of high-degree node features in a line-graph based convolutional network. Further, the paper uses Incidence Matrix to implement WLGCL updates which reduces the space complexity ($O(N^4) \\to O(N^3)$) and time complexity ($O(N^4 C) \\to O(N^4)$) compared to the naive implementation. The paper shows empirical evaluation on downstream task of graph classification and shows gain in accuracy. ", "The paper proposed a GNN model based on a weighted line graph, which adds weights to the line graph for the original input graph in a node/graph property prediction task. The line graph is a graph built on the original graph but with edges as nodes. A new convolution called weighted line graph convolution layer (WLGCL) is proposed to overcome the issue of \"biased topological information\" of the line graph. The weights for the line graph in WLGCL are computed based on the node degree of the original graph, which implies the node degree in the line graph is always 2. The WLGCL can be implemented for different kinds of graph convolution, which rule incorporates graph connectivity, node features and edge features. " ]
Line graphs have shown to be effective in improving feature learning in graph neural networks. Line graphs can encode topology information of their original graphs and provide a complementary representational perspective. In this work, we show that the encoded information in line graphs is biased. To overcome this issue, we propose a weighted line graph that corrects biases in line graphs by assigning normalized weights to edges. Based on our weighted line graphs, we develop a weighted line graph convolution layer that takes advantage of line graph structures for better feature learning. In particular, it performs message passing operations on both the original graph and its corresponding weighted line graph. To address efficiency issues in line graph neural networks, we propose to use an incidence matrix to accurately compute the adjacency matrix of the weighted line graph, leading to dramatic reductions in computational resource usage. Experimental results on both real and simulated datasets demonstrate the effectiveness and efficiency of our proposed methods.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI}", "year": 2016 }, { "authors": [ "Sambaran Bandyopadhyay", "Anirban Biswas", "MN Murty", "Ramasuri Narayanam" ], "title": "Beyond node embedding: A direct unsupervised edge representation framework for homogeneous networks", "venue": null, "year": 1912 }, { "authors": [ "Karsten M Borgwardt", "Cheng Soon Ong", "Stefan Schönauer", "SVN Vishwanathan", "Alex J Smola", "Hans-Peter Kriegel" ], "title": "Protein function prediction via graph kernels", "venue": "i47–i56,", "year": 2005 }, { "authors": [ "Zhengdao Chen", "Joan Bruna Estrach", "Lisha Li" ], "title": "Supervised community detection with line graph neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Paul D Dobson", "Andrew J Doig" ], "title": "Distinguishing enzyme structures from non-enzymes without alignments", "venue": "Journal of molecular biology,", "year": 2003 }, { "authors": [ "TS Evans", "Renaud Lambiotte" ], "title": "Line graphs, link partitions, and overlapping communities", "venue": "Physical Review E,", "year": 2009 }, { "authors": [ "Hongyang Gao", "Shuiwang Ji" ], "title": "Graph U-Nets", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Martin Charles Golumbic" ], "title": "Algorithmic graph theory and perfect graphs", "venue": null, "year": 2004 }, { "authors": [ "Liyu Gong", "Qiang Cheng" 
], "title": "Exploiting edge features for graph neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "IEEE International Joint Conference on Neural Networks,", "year": 2005 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xiaodong Jiang", "Pengsheng Ji", "Sheng Li" ], "title": "Censnet: convolution with edge-node switching in graph neural networks", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "The International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In Proceedings of The 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Federico Monti", "Oleksandr Shchur", "Aleksandar Bojchevski", "Or Litany", "Stephan Günnemann", "Michael M Bronstein" ], "title": "Dual-primal graph convolutional networks", "venue": "arXiv preprint arXiv:1806.00770,", "year": 2018 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus 
Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Nino Shervashidze", "Pascal Schweitzer", "Erik Jan van Leeuwen", "Kurt Mehlhorn", "Karsten M Borgwardt" ], "title": "Weisfeiler-lehman graph kernels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Bhalchandra D Thatte" ], "title": "Kocay’s lemma, whitney’s theorem, and some polynomial invariant reconstruction problems. the electronic journal of combinatorics", "venue": "pp. R63–R63,", "year": 2005 }, { "authors": [ "Hannu Toivonen", "Ashwin Srinivasan", "Ross D King", "Stefan Kramer", "Christoph Helma" ], "title": "Statistical evaluation of the predictive toxicology challenge 2000–2001", "venue": null, "year": 2003 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Nikil Wale", "Ian A Watson", "George Karypis" ], "title": "Comparison of descriptor spaces for chemical compound retrieval and classification", "venue": "Knowledge and Information Systems,", "year": 2008 }, { "authors": [ "Xi Xiong", "Kaan Ozbay", "Li Jin", "Chen Feng" ], "title": "Dynamic prediction of origin-destination flows using fusion line graph convolutional networks", "venue": null, "year": 1905 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Pinar Yanardag", "SVN 
Vishwanathan" ], "title": "A structural smoothing framework for robust graph comparison", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Weichi Yao", "Afonso S Bandeira", "Soledad Villar" ], "title": "Experimental performance of graph neural networks on random instances of max-cut", "venue": "In Wavelets and Sparsity XVIII,", "year": 2019 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "Will Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Yixin Chen" ], "title": "Link prediction based on graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Kai Zhou", "Tomasz P Michalak", "Marcin Waniek", "Talal Rahwan", "Yevgeniy Vorobeychik" ], "title": "Attacking similarity-based link prediction in social networks", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 305–313. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2019 }, { "authors": [ "Shichao Zhu", "Chuan Zhou", "Shirui Pan", "Xingquan Zhu", "Bin Wang" ], "title": "Relation structure-aware heterogeneous graph neural network", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph neural networks (Gori et al., 2005; Scarselli et al., 2009; Hamilton et al., 2017) have proven competent at solving challenging tasks in the field of network embedding. Many tasks have been significantly advanced by graph deep learning methods, such as node classification (Kipf & Welling, 2017; Veličković et al., 2017; Gao et al., 2018), graph classification (Ying et al., 2018; Zhang et al., 2018), link prediction (Zhang & Chen, 2018; Zhou et al., 2019), and community detection (Chen et al., 2019). Currently, most graph neural networks capture the relationships among nodes through message passing operations. Recently, some works (Chen et al., 2019) have used extra graph structures, such as line graphs, to enhance the message passing operations in graph neural networks. A line graph is a graph derived from an original graph to represent the connectivity between edges in the original graph. Since line graphs can encode topology information, message passing operations on line graphs can enhance the network embeddings learned by graph neural networks. However, graph neural networks that leverage line graph structures need to deal with two challenging issues: bias and inefficiency. Topology information of an original graph is encoded in its line graph, but in a biased way. In particular, node features are either overstated or understated depending on their degrees. Besides, a line graph can be a much bigger graph than its original graph, depending on the graph density. Message passing operations of graph neural networks on line graphs thus lead to significant use of computational resources.

In this work, we propose to construct a weighted line graph that can correct the biases in the encoded topology information of line graphs. To this end, we assign each edge in a line graph a normalized weight such that each node in the line graph has a weighted degree of 2. 
In this weighted line graph, the dynamics of node features are the same as those in its original graph. Based on our weighted line graph, we propose a weighted line graph convolution layer (WLGCL) that performs message passing operations on both the original graph structure and the weighted line graph structure. To address the inefficiency issues existing in graph neural networks that use line graph structures, we further propose to implement our WLGCL via an incidence matrix, which can dramatically reduce the usage of computational resources. Based on our WLGCL, we build a family of weighted line graph convolutional networks (WLGCNets). We evaluate our methods on graph classification tasks and show that WLGCNets consistently outperform previous state-of-the-art models. Experiments on simulated data demonstrate the efficiency advantage of our implementation." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "In graph theory, a line graph is a graph derived from an undirected graph. It represents the connectivity among edges in the original graph. Given a graph G, the corresponding line graph L(G) is constructed by using the edges in G as the vertices of L(G). Two nodes in L(G) are adjacent if they share a common end node in the graph G (Golumbic, 2004). Note that the edges (a, b) and (b, a) in an undirected graph G correspond to the same vertex in the line graph L(G). The Whitney graph isomorphism theorem (Thatte, 2005) states that a line graph has a one-to-one correspondence to its original graph. This theorem guarantees that the line graph can encode the topology information of the original graph. Recently, some works (Monti et al., 2018; Chen et al., 2019; Bandyopadhyay et al., 2019; Jiang et al., 2019) have proposed to use the line graph structure to enhance the message passing operations in graph neural networks. Since the line graph can encode the topology information, message passing on the line graph can enhance the network embeddings in graph neural networks. 
In graph neural networks that use line graph structures, features are passed and transformed in both the original graph structure and the line graph structure, thereby leading to better feature learning and performance." }, { "heading": "3 WEIGHTED LINE GRAPH CONVOLUTIONAL NETWORKS", "text": "In this work, we propose the weighted line graph to address the bias of the line graph in encoding graph topology information. Based on our weighted line graph, we propose the weighted line graph convolution layer (WLGCL) for better feature learning by leveraging line graph structures. Besides, graph neural networks using line graphs consume excessive computational resources. To solve this inefficiency issue, we propose to use the incidence matrix to implement the WLGCL, which can dramatically reduce the usage of computational resources." }, { "heading": "3.1 BENEFIT AND BIAS OF LINE GRAPH REPRESENTATIONS", "text": "In this section, we describe the benefit and bias of using line graph representations.

Benefit In message-passing operations, edges are usually given equal importance, and edge features are not well explored. This can constrain the capacity of GNNs, especially on graphs with edge features. In the chemistry domain, a compound can be converted into a graph, where atoms are nodes and chemical bonds are edges. On such graphs, edges have different properties and thus different importance. However, message-passing operations underestimate the importance of edges. To address this issue, the line graph structure can be used to leverage edge features and different edge importance (Jiang et al., 2019; Chen et al., 2019; Zhu et al., 2019). The line graph, by its nature, enables graph neural networks to encode and propagate edge features in the graph. Line graph neural networks that take advantage of line graph structures have shown to be promising on graph-related tasks (Chen et al., 2019; Xiong et al., 2019; Yao et al., 2019). 
By encoding node and edge features simultaneously, line graph neural networks enhance feature learning on graphs.

Bias According to the Whitney graph isomorphism theorem, the line graph L(G) encodes the topology information of the original graph G, but the dynamics and topology of G are not correctly represented in L(G) (Evans & Lambiotte, 2009). As described in the previous section, each edge in the graph G corresponds to a vertex in the line graph L(G). The features of each edge contain the features of its two end nodes. A vertex with degree d in the original graph G generates d(d − 1)/2 edges in the line graph L(G). The message passing frequency of this node's features thus changes from O(d) in the original graph to O(d^2) in the line graph. In this sense, the line graph encodes the topology information of the original graph, but in a biased way. In the original graph, a node's features are passed to d neighbors; in the corresponding line graph, this information is passed along d(d − 1)/2 edges. The topology structure of the line graph L(G) thus overstates the importance of features of nodes with high degrees in the graph. On the contrary, nodes with smaller degrees are relatively understated, leading to biased topology information encoded in the line graph. Note that popular adjacency matrix normalization methods (Kipf & Welling, 2017; Veličković et al., 2017; Gao & Ji, 2019; Gong & Cheng, 2019) cannot address this issue." }, { "heading": "3.2 WEIGHTED LINE GRAPH", "text": "In the previous section, we show that the line graph L(G) constructed from the original graph G encodes biased graph topology information. To address this issue, we propose to construct a weighted line graph that assigns normalized weights to edges. In a regular line graph L(G), each edge is assigned an equal weight of 1, thereby leading to a biased encoding of the graph topology information. 
To correct this bias, we need to normalize the edge weights in the line graph. Considering that each edge in G has two end nodes, it is intuitive to normalize the weighted degree of the corresponding node in L(G) to 2. To this end, the weight for an edge in the adjacency matrix F of L(G) is computed as:

$$F_{(a,b),(b,c)} = \begin{cases} \frac{1}{D_b} & \text{if } a \neq c, \\ \frac{1}{D_b} + \frac{1}{D_a} & \text{if } a = c, \end{cases} \quad (1)$$

where a, b, and c are nodes in the graph G, and (a, b) and (b, c) are edges in the graph G that are connected by the node b. $D_b$ is the degree of the node b in the graph G. To facilitate the message passing operation, we add self-loops on the weighted line graph WL(G). The weights for self-loop edges, computed by the second case, account for the fact that a self-loop is connected through both of its end nodes. Figure 2 illustrates an example of a graph and its corresponding weighted line graph.

Theorem 1. Given the edge weights in the weighted line graph WL(G) defined by Eq. (1), the weighted degree of every node (a, b) in WL(G) is 2.

The proof of Theorem 1 is provided in the supplementary material. By constructing the weighted line graph with the normalized edge weights defined in Eq. (1), each node (a, b) has a weighted degree of 2. Given a node a with degree d, it has d incident edges in G and d corresponding nodes in L(G). The message passing frequency of node a's features in the weighted line graph WL(G) is $\sum_{i=1}^{d} 2 = O(d)$, which is consistent with that in the original graph G. Thus, the weighted line graph encodes the topology information of the original graph in an unbiased way." }, { "heading": "3.3 WEIGHTED LINE GRAPH CONVOLUTION LAYER", "text": "In this section, we propose the weighted line graph convolution layer (WLGCL), which leverages our proposed weighted line graph for learning feature representations. In this layer, node features are passed and aggregated on both the original graph G and the corresponding weighted line graph WL(G).

Suppose an undirected attributed graph G has N nodes and E edges. 
Each node and each edge in the graph contain $C_n$ and $C_e$ features, respectively. In layer $\ell$, an adjacency matrix $A^{(\ell)} \in \mathbb{R}^{N\times N}$, a node feature matrix $X^{(\ell)} \in \mathbb{R}^{N\times C_n}$, and an edge feature matrix $Y^{(\ell)} \in \mathbb{R}^{E\times C_e}$ represent the graph connectivity, node features, and edge features, respectively. Here, we construct the adjacency matrix $F^{(\ell)} \in \mathbb{R}^{E\times E}$ of the corresponding weighted line graph. The layer-wise propagation rule of the weighted line graph convolution layer $\ell$ is defined as:

$$\hat{Y}^{(\ell)} = F^{(\ell)}Y^{(\ell)} \in \mathbb{R}^{E\times C_e}, \quad (2)$$
$$K_L^{(\ell)} = B^{(\ell)}\hat{Y}^{(\ell)} \in \mathbb{R}^{N\times C_e}, \quad (3)$$
$$K^{(\ell)} = A^{(\ell)}X^{(\ell)} \in \mathbb{R}^{N\times C_n}, \quad (4)$$
$$X^{(\ell+1)} = K^{(\ell)}W^{(\ell)} + K_L^{(\ell)}W_L^{(\ell)} \in \mathbb{R}^{N\times C'}, \quad (5)$$

where $W^{(\ell)} \in \mathbb{R}^{C_n\times C'}$ and $W_L^{(\ell)} \in \mathbb{R}^{C_e\times C'}$ are matrices of trainable parameters. $B^{(\ell)} \in \mathbb{R}^{N\times E}$ is the incidence matrix of the graph G, which encodes the connectivity between nodes and edges.

To enable message passing on the line graph L(G), each edge in the graph G needs to have features. However, edge features are not available on some graphs. To address this issue, we can compute features for an edge (a, b) by summing up the features of its two end nodes: $Y^{(\ell)}_{(a,b)} = X^{(\ell)}_a + X^{(\ell)}_b$. Here, we use the summation operation to ensure the permutation-invariance of this layer. Eq. (2) then performs message passing and aggregation on the line graph. With the updated edge features $\hat{Y}^{(\ell)}$, Eq. (3) generates new node features from edge features. Eq. (4) performs feature passing and aggregation on the graph G, which produces the aggregated node features $K^{(\ell)}$. In Eq. (5), the aggregated features from the graph G and the line graph L(G) are transformed and combined, which produces the output feature matrix $X^{(\ell+1)}$. Note that we can apply popular adjacency matrix normalization methods (Kipf & Welling, 2017) to the adjacency matrix $A^{(\ell)}$, the line graph adjacency matrix $F^{(\ell)}$, and the incidence matrix $B^{(\ell)}$.

In the WLGCL, we use the line graph structure as a complement to the original graph structure, thereby leading to enhanced feature learning. 
Here, we use a simple feature aggregation method as used in GCN (Kipf & Welling, 2017). More advanced feature aggregation methods, such as GAT (Veličković et al., 2017), can easily be applied by changing Eq. (2) and Eq. (4) accordingly. Figure 3 provides an illustration of our WLGCL." }, { "heading": "3.4 WEIGHTED LINE GRAPH CONVOLUTION LAYER VIA INCIDENCE MATRIX", "text": "In this section, we propose to implement the WLGCL using the incidence matrix. When edge features are generated from node features as described in Section 3.3, this implementation significantly reduces the usage of computational resources while taking advantage of the line graph structure.

One practical challenge of using a line graph structure is that it consumes excessive computational resources in terms of memory usage and execution time. To use a line graph in a graph neural network, we need to store its adjacency matrix, compute edge features, and perform the message passing operation. Our proposed WLGCL also faces this challenge. The space and time complexities of Eq. (2), which plays the dominating role, are $O(E^2) = O(N^4)$ and $O(E^2C) = O(N^4C)$, respectively. Here, we set $C_n = C_e = C$ for simplicity. To address this issue, we propose to use the incidence matrix B to compute the weighted line graph adjacency matrix F. The adjacency matrix F can be computed exactly with the following theorem.

Theorem 2. Given an undirected graph, its incidence matrix $B \in \mathbb{R}^{N\times E}$, and its degree vector $D \in \mathbb{R}^N$, the adjacency matrix $F \in \mathbb{R}^{E\times E}$ of the weighted line graph with weights defined by Eq. (1) can be exactly computed by

$$F = B^T \mathrm{diag}(D)^{-1} B, \quad (6)$$

where $\mathrm{diag}(\cdot)$ takes a vector as input and constructs a square diagonal matrix with the vector elements on the main diagonal.

The proof of Theorem 2 is provided in the supplementary material. Based on the results from Theorem 2, we can update the equations (Eqs. (2)-(3)) to generate $K_L^{(\ell)}$ in the WLGCL by replacing the adjacency matrix F with Eq. 
(6):

$$K_L^{(\ell)} = B^{(\ell)}F^{(\ell)}B^{(\ell)T}X^{(\ell)} = B^{(\ell)}B^{(\ell)T}\mathrm{diag}(D)^{-1}B^{(\ell)}B^{(\ell)T}X^{(\ell)} = H^{(\ell)}\mathrm{diag}(D)^{-1}H^{(\ell)}X^{(\ell)}, \quad (7)$$

where $B^{(\ell)T}X^{(\ell)}$ computes edge features from node features. Notably, $H^{(\ell)} = B^{(\ell)}B^{(\ell)T}$ only needs to be computed once. With the computed $K_L^{(\ell)}$, we output the new feature matrix $X^{(\ell+1)}$ using Eq. (4) and Eq. (5).

By using the implementation in Eq. (7), the space and time complexities of the WLGCL are reduced to $O(N\times E) = O(N^3)$ and $O(N^2\times E) + O(N^2\times C) = O(N^4)$, respectively. Compared to the naive WLGCL implementation, they are reduced by factors of N and C, respectively. In the experimental study, we show that the WLGCL implemented as in Eq. (7) dramatically saves computational resources compared to the naive implementation. Notably, the result in Eq. (6) can be applied to other graph neural networks that leverage the benefits of line graph structures." }, { "heading": "3.5 WEIGHTED LINE GRAPH CONVOLUTIONAL NETWORKS", "text": "In this section, we build a family of weighted line graph convolutional networks (WLGCNets) that utilize our proposed WLGCLs. In WLGCNets, an embedding layer such as a fully-connected layer or a GCN layer is first used to learn low-dimensional representations of the nodes in the graph. Then we stack multiple blocks, each of which consists of a WLGCL and a pooling layer (Gao & Ji, 2019). Here, the WLGCL encodes high-level features while the pooling layer outputs a coarsened graph. We use the gPool layer to produce a coarsened graph that helps to retain original graph structure information. To deal with the variety of graph sizes in terms of the number of nodes, we apply global readout operations, including maximization, averaging, and summation, to the outputs (Xu et al., 2018). The outputs of the first GCN layer and of all blocks are stacked together along the feature dimension and fed into a multi-layer perceptron network for classification. Figure 4 provides an example of our WLGCNets."
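To make Eq. (6) and Theorem 1 concrete, the following NumPy sketch builds the incidence matrix of a small graph, forms $F = B^T \mathrm{diag}(D)^{-1} B$, and checks that every row of F sums to 2, the weighted degree guaranteed by Theorem 1. The 4-node example graph is our own illustration, not one from the paper.

```python
import numpy as np

# Example graph (illustrative): a triangle {0, 1, 2} plus a pendant edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N, E = 4, len(edges)

# Incidence matrix B in {0,1}^{N x E}: B[n, e] = 1 iff node n is an endpoint of edge e.
B = np.zeros((N, E))
for e, (a, b) in enumerate(edges):
    B[a, e] = B[b, e] = 1.0

D = B.sum(axis=1)                  # node degrees in the original graph
F = B.T @ np.diag(1.0 / D) @ B     # Eq. (6): weighted line graph adjacency (with self-loops)

# Theorem 1: every node (a, b) of the weighted line graph has weighted degree 2.
print(F.sum(axis=1))               # prints [2. 2. 2. 2.]
```

The diagonal entries of F equal $1/D_a + 1/D_b$ and the off-diagonal entries equal $1/D_b$ for edges sharing node b, matching the two cases of Eq. (1).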
}, { "heading": "4 EXPERIMENTAL STUDY", "text": "In this section, we evaluate our proposed WLGCL and WLGCNet on graph classification tasks. We demonstrate the effectiveness of our methods by comparing our networks with previous state-ofthe-art models in terms of the graph classification accuracy. Besides, we evaluate the efficiency of our implementation of the WLGCL in terms of the usage of computational resources. We conduct ablation experiments to demonstrate the contributions of our methods. The code and experimental setups are provided in the supplementary material." }, { "heading": "4.1 PERFORMANCE STUDY", "text": "To evaluate our methods and WLGCNets, we conduct experiments on graph classification tasks using seven datasets; those are PROTEINS, D&D (Dobson & Doig, 2003), IMDB-MULTI (IMDBM), REDDIT-BINARY (RDTB), REDDIT-MULTI5K (RDT5K), COLLAB, and REDDIT-MULTI12K (RDT12K) (Yanardag & Vishwanathan, 2015). REDDIT datasets are benchmarking large graph datasets used for evaluating graph neural networks in the community. On the datasets without node features such as RDT12K, we use one-hot encodings of node degrees as node features (Xu et al., 2018). To produce less biased evaluation results, we follow the practices in (Xu et al., 2018; Zhang et al., 2018) and perform 10-fold cross-validation on training datasets. We use the average accuracy across 10 fold testing results with variances.\nWe report the graph classification accuracy along with performances of previous state-of-the-art models. The results are summarized in Table 1. We can observe from the results that our proposed WLGCNets significantly outperform previous models by margins of 1.3%, 1.8%, 3.8%, 1.7%, 0.7%, 2.5%, 3.2% on PROTEINS, D&D, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI5K, COLLAB, and REDDIT-MULTI12K datasets, respectively. 
The promising results, especially on large benchmark datasets such as REDDIT-MULTI12K, demonstrate the effectiveness of our proposed methods and models for network embeddings. Note that our WLGCNet uses the gPool layer from the g-U-Net. The superior performances of WLGCNets over the g-U-Net demonstrate that the performance gains come from our proposed WLGCLs." }, { "heading": "4.2 COMPUTATIONAL EFFICIENCY STUDY", "text": "In Section 3.4, we propose an efficient implementation of the WLGCL using the incidence matrix, which dramatically reduces the computational resources required compared to the naive one. Here, we conduct experiments on simulated data to evaluate the efficiency of our methods. We build networks that contain a single layer to remove the influence of other factors. We conduct experiments on graphs of different sizes in terms of the number of nodes. Since the WLGCL takes advantage of line graph structures, the graph density has a significant impact on the layer efficiency. Here, the graph density is defined as 2E/(N(N − 1)). To investigate the impact of the graph density, we conduct experiments on graphs with the same size but different numbers of edges.

By using the TensorFlow profile tool (Abadi et al., 2016), we report the computational resources used by the networks, including the number of multiply-adds (MAdd), the amount of memory usage, and the CPU execution time. The comparison results are summarized in Table 2. We can observe from the results that WLGCLs with our proposed implementation use significantly fewer computational resources than WLGCLs with the naive implementation in terms of memory usage and CPU execution time. Comparing the results on the first three inputs, which have the same graph size, the efficiency advantage of our method over the naive implementation grows substantially as the graph density increases.
When comparing the results of the last two inputs, which have the same number of edges but different graph sizes, we observe that the efficiency advantage of our proposed method remains the same. This shows that the graph density is a key factor influencing the usage of computational resources, especially on dense graphs." }, { "heading": "4.3 RESULTS ON SMALL DATASETS", "text": "In the previous sections, we evaluated our methods on benchmark datasets that are relatively large in terms of the number of graphs and the number of nodes per graph. To provide a comprehensive evaluation, we conduct experiments on relatively small datasets to evaluate the risk of over-fitting of our methods. Here, we use three datasets; those are MUTAG (Wale et al., 2008), PTC (Toivonen et al., 2003), and IMDB-BINARY (Borgwardt et al., 2005). The MUTAG and PTC datasets are bioinformatics datasets with categorical features on nodes. We follow the same experimental settings as in Section 4.1. The results in terms of the graph classification accuracy are summarized in Table 3, together with the performances of previous state-of-the-art models. We can observe from the results that our WLGCNet outperforms previous models by margins of 0.4%, 6.0%, and 3.4% on MUTAG, PTC, and IMDB-BINARY, respectively. This demonstrates that our proposed models do not increase the risk of over-fitting, even on small datasets." }, { "heading": "4.4 ABLATION STUDY OF WEIGHTED LINE GRAPH CONVOLUTION LAYERS", "text": "In this section, we conduct ablation studies based on WLGCNets to demonstrate the contribution of our WLGCLs to the entire network. To explore the advantage of line graph structures, we construct a network that removes all layers using line graphs. Based on the WLGCNet, we replace WLGCLs by GCN layers with the same number of trainable parameters, which we denote as WLGCNet_g. To compare our weighted line graph with the regular line graph, we modify our WLGCLs to use regular line graph structures.
We denote the resulting network as WLGCNet_l. We evaluate these networks on three datasets; those are the REDDIT-BINARY, REDDIT-MULTI5K, and REDDIT-MULTI12K datasets. Table 4 summarizes the graph classification results. We can observe from the results that both WLGCNet and WLGCNet_l achieve better performances than WLGCNet_g, which demonstrates the benefits of utilizing line graph structures in graph neural networks. When comparing WLGCNet with WLGCNet_l, WLGCNet outperforms WLGCNet_l by margins of 0.5%, 0.5%, and 0.7% on the REDDIT-BINARY, REDDIT-MULTI5K, and REDDIT-MULTI12K datasets, respectively. This indicates that our proposed WLGCL encodes unbiased topology information by utilizing weighted line graph structures, thereby leading to better performances.

4.5 NETWORK DEPTH STUDY

The network depth, in terms of the number of blocks, is an important hyper-parameter in the WLGCNet. In previous experiments, we used three blocks in WLGCNets based on our empirical experience. In this section, we investigate the impact of the network depth in WLGCNets on network embeddings. Based on our WLGCNet, we vary the network depth from 1 to 5, which covers a reasonable range. We evaluate these networks on the PTC, PROTEINS, and REDDIT-BINARY datasets and report the graph classification accuracies. Figure 5 plots the results of WLGCNets with different numbers of blocks. We can observe from the figure that the best performances are achieved by WLGCNets with three blocks on all three datasets. When the network depth increases further, the performances decrease, which indicates an over-fitting issue." }, { "heading": "5 CONCLUSION", "text": "In this work, we consider the biased topology information encoding in graph neural networks that utilize line graph structures to enhance network embeddings. A line graph constructed from a graph can encode the topology information. However, the dynamics in the line graph are inconsistent with those in the original graph.
In a line graph, the features of high-degree nodes are passed more frequently in the graph, which causes node features to be under- or over-stated. To address this issue, we propose the weighted line graph, which assigns normalized weights to edges such that the weighted degree of each node is 2. Based on the weighted line graph, we propose the weighted line graph convolution layer (WLGCL), which leverages the advantages of the weighted line graph structure. A practical challenge faced by graph neural networks on line graphs is that they consume excessive computational resources, especially on dense graphs. To address this limitation, we propose to use the incidence matrix to implement the WLGCL, which dramatically saves computational resources. Based on the WLGCL, we build a family of weighted line graph convolutional networks." }, { "heading": "A EXPERIMENTAL SETUP", "text": "We describe the experimental setup for graph classification tasks. In this work, we mainly evaluate our methods on graph classification datasets such as social network datasets and bioinformatics datasets. The node features are created using one-hot encodings and fed into the networks. In WLGCNets, we use GCN layers as the graph embedding layers. After the first GCN layer, we stack three blocks as described in Section 3.5. The outputs of the GCN layer and the WLGCLs in the three blocks are processed by a readout function and concatenated as the network output. The readout function performs three global pooling operations; those are maximization, averaging, and summation. The network outputs are fed into a classifier to produce predictions. Here, we use a two-layer feedforward network with 512 units in the hidden layer as the classifier. We apply dropout (Srivastava et al., 2014) on the network and the classifier.

We use an Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.001 to train WLGCNets.
To prevent over-fitting, we apply L2 regularization on trainable parameters with a weight decay rate of 0.0008. All models are trained for 200 epochs using one NVIDIA GeForce RTX 2080 Ti GPU on an Ubuntu 18.04 system." }, { "heading": "B PROOF FOR THEOREM 1", "text": "Proof. Given nodes a and b with degrees D_a and D_b in a graph G, a node (a, b) in the corresponding weighted line graph WL(G) connects to D_a − 1 and D_b − 1 nodes through a and b in G, respectively. The weighted degree of the node (a, b) is computed by summing up the weights of the edges that connect (a, b) to other nodes through a and b, and the weight of its self-loop:

WLD_(a,b) = Σ_{i=1}^{D_a−1} 1/D_a + Σ_{j=1}^{D_b−1} 1/D_b + (1/D_a + 1/D_b) = Σ_{i=1}^{D_a} 1/D_a + Σ_{j=1}^{D_b} 1/D_b = 2. (8)

This completes the proof." }, { "heading": "C PROOF FOR THEOREM 2", "text": "Proof. We construct a weighted incidence matrix by normalizing the weights as B̂_{i,(i,j)} = 1/D_i. Thus, the weighted incidence matrix can be computed as B̂ = diag(D)^{-1} B. In the incidence graph, each edge is connected to its two end nodes. Thus, each column B_{:,(a,b)} of the incidence matrix has exactly two non-zero entries; those are B_{a,(a,b)} and B_{b,(a,b)}. The same rule applies to the weighted incidence matrix B̂. Based on this observation, we have

(B^T B̂)_{(a,b),(b,c)} = Σ_{i=1}^{N} B^T_{(a,b),i} × B̂_{i,(b,c)} = B^T_{(a,b),a} B̂_{a,(b,c)} + B^T_{(a,b),b} B̂_{b,(b,c)} = { 1/D_b if a ≠ c; 1/D_b + 1/D_a if a = c } = F_{(a,b),(b,c)}. (9)

This completes the proof." } ]
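Both proofs can be sanity-checked numerically on a toy graph (our own illustration, assuming a simple graph so that two distinct edges share at most one node): build F directly from the weights of Eq. (1) — weight 1/D_v for two edges sharing node v, and self-loop weight 1/D_a + 1/D_b for edge (a, b) — compare it with B^T diag(D)^{-1} B from Theorem 2, and check that every row sum of F (the weighted degree of Theorem 1) equals 2.

```python
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # triangle plus a pendant edge
N, E = 4, len(edges)

# Incidence matrix B (N x E) and node degrees D.
B = [[1 if v in e else 0 for e in edges] for v in range(N)]
D = [sum(row) for row in B]

# F built directly from the weight definition (Eq. (1)).
F_direct = [[0.0] * E for _ in range(E)]
for i, (a, b) in enumerate(edges):
    for j, (c, d) in enumerate(edges):
        if i == j:
            F_direct[i][j] = 1.0 / D[a] + 1.0 / D[b]  # self-loop weight
        else:
            shared = set((a, b)) & set((c, d))
            if shared:
                v = shared.pop()
                F_direct[i][j] = 1.0 / D[v]           # edges sharing node v

# F via Theorem 2: F = B^T diag(D)^{-1} B, written entrywise.
F_thm = [[sum(B[v][i] * (1.0 / D[v]) * B[v][j] for v in range(N))
          for j in range(E)] for i in range(E)]

# Theorem 2: the two constructions coincide.
assert all(abs(F_direct[i][j] - F_thm[i][j]) < 1e-9
           for i in range(E) for j in range(E))
# Theorem 1: every weighted degree (row sum of F) is 2.
assert all(abs(sum(row) - 2.0) < 1e-9 for row in F_direct)
```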
2020
null
SP:4647fc008073e5ee4e432f84e645aedb7faf736d
[ "The paper proposed a learned variant of the well-known iterative Hessian sketch (IHS) method of Pilanci and Wainwright, for efficiently solving least-squares regression. The proposed method is essentially a learned variant of the count-sketch, where the positions of the non-zero entries are random while the value is learned. While getting a learned variant for IHS is an interesting direction, the current theoretical contribution of this paper is only incremental, and most importantly, the reviewer is unconvinced for the practicality of the current approach.", "Sketching is a popular technique in numerical linear algebra for achieving various desirable properties (e.g., lower complexity, one pass methods). The present paper considers a particular kind of sketch for which the sketch matrix is learned from data. It shows how such learned sketches can be used in two types of problems: Hessian sketching (Sec. 3) and Hessian regression (Sec. 4). The authors give both algorithms and provide theoretical guarantees. They also apply these techniques to a number of both synthetic and real datasets in the experiments. For the most part, the experiments indicate that the proposed methods give a consistent, but not necessarily very large, improvement." ]
Sketching is a dimensionality reduction technique where one compresses a matrix by often random linear combinations. A line of work has shown how to sketch the Hessian to speed up each iteration in a second order method, but such sketches usually depend only on the matrix at hand, and in a number of cases are even oblivious to the input matrix. One could instead hope to learn a distribution on sketching matrices that is optimized for the specific distribution of input matrices. We show how to design learned sketches for the Hessian in the context of second order methods, where we learn potentially different sketches for the different iterations of an optimization procedure. We show empirically that learned sketches, compared with their “non-learned” counterparts, improve the approximation accuracy for important problems, including LASSO, SVM, and matrix estimation with nuclear norm constraints. Several of our schemes can be proven to perform no worse than their unlearned counterparts.
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Kenneth L. Clarkson", "David P. Woodruff" ], "title": "Low-rank approximation and regression in input sparsity time", "venue": "J. ACM,", "year": 2017 }, { "authors": [ "Michael B. Cohen" ], "title": "Nearly tight oblivious subspace embeddings by trace inequalities", "venue": "In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2016 }, { "authors": [ "Graham Cormode", "Charlie Dickens" ], "title": "Iterative hessian sketch in input sparsity time", "venue": "CoRR, abs/1910.14166,", "year": 2019 }, { "authors": [ "Nikita Doikov", "Peter Richtárik" ], "title": "Randomized block cubic Newton method", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yihe Dong", "Piotr Indyk", "Ilya P. Razenshteyn", "Tal Wagner" ], "title": "Learning space partitions for nearest neighbor search", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Robert M. Gower", "Donald Goldfarb", "Peter Richtárik" ], "title": "Stochastic block BFGS: squeezing more curvature out of data", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Robert M. Gower", "Filip Hanzely", "Peter Richtárik", "Sebastian U. 
Stich" ], "title": "Accelerated stochastic matrix inversion: General theory and speeding up BFGS rules for faster second-order optimization", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Robert M. Gower", "Dmitry Kovalev", "Felix Lieder", "Peter Richtárik" ], "title": "RSN: randomized subspace Newton", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Chen-Yu Hsu", "Piotr Indyk", "Dina Katabi", "Ali Vakilian" ], "title": "Learning-based frequency estimation algorithms", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Piotr Indyk", "Ali Vakilian", "Yang Yuan" ], "title": "Learning-based low-rank approximations", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Sudhir B. Kylasa", "Fred (Farbod) Roosta", "Michael W. Mahoney", "Ananth Grama" ], "title": "GPU accelerated sub-sampled Newton’s method for convex classification problems", "venue": "In Proceedings of the 2019 SIAM International Conference on Data Mining, SDM 2019,", "year": 2019 }, { "authors": [ "Xiang Li", "Shusen Wang", "Zhihua Zhang" ], "title": "Do subsampled newton methods work for highdimensional data", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Yu-Feng Li", "Ivor W. Tsang", "James Tin-Yau Kwok", "Zhi-Hua Zhou" ], "title": "Tighter and convex maximum margin clustering", "venue": "Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Simin Liu", "Tianrui Liu", "Ali Vakilian", "Yulin Wan", "David P. 
Woodruff" ], "title": "On learned sketches for randomized numerical linear algebra", "venue": "[cs.LG],", "year": 2020 }, { "authors": [ "J. Nelson", "H.L. Nguyên" ], "title": "Osnap: Faster numerical linear algebra algorithms via sparser subspace embeddings", "venue": "IEEE 54th Annual Symposium on Foundations of Computer Science,", "year": 2013 }, { "authors": [ "Mert Pilanci", "Martin J. Wainwright" ], "title": "Randomized sketches of convex programs with sharp guarantees", "venue": "IEEE Trans. Inf. Theory,", "year": 2015 }, { "authors": [ "Mert Pilanci", "Martin J. Wainwright" ], "title": "Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Mert Pilanci", "Martin J. Wainwright" ], "title": "Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence", "venue": "SIAM J. Optim.,", "year": 2017 }, { "authors": [ "Farbod Roosta-Khorasani", "Michael W. Mahoney" ], "title": "Sub-sampled Newton methods", "venue": "Math. Program.,", "year": 2019 }, { "authors": [ "T. Sarlos" ], "title": "Improved approximation algorithms for large matrices via random projections", "venue": "47th Annual IEEE Symposium on Foundations of Computer Science", "year": 2006 }, { "authors": [ "Jan van den Brand", "Binghui Peng", "Zhao Song", "Omri Weinstein" ], "title": "Training (overparametrized) neural networksin near-linear time", "venue": "[cs.LG],", "year": 2020 }, { "authors": [ "Roman Vershynin" ], "title": "Introduction to the non-asymptotic analysis of random matrices, pp. 210–268", "venue": null, "year": 2012 }, { "authors": [ "David P. Woodruff" ], "title": "Sketching as a tool for numerical linear algebra", "venue": "ISSN 1551-305X. doi: 10.1561/0400000060", "year": 2014 }, { "authors": [ "Peng Xu", "Jiyan Yang", "Farbod Roosta-Khorasani", "Christopher Ré", "Michael W. 
Mahoney" ], "title": "Subsampled Newton methods with non-uniform sampling", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Peng Xu", "Fred Roosta", "Michael W. Mahoney" ], "title": "Second-order optimization for non-convex machine learning: an empirical study", "venue": "In Proceedings of the 2020 SIAM International Conference on Data Mining,", "year": 2020 }, { "authors": [ "den Brand" ], "title": "σmin(AP ) ≤ σmax(AP ) ≤ 5/4 and thus one can set η = 1 in Algorithm 3 and achieve a linear convergence. The only difference is that here we estimate σmin(AP ) and σmax(AP ) and set the step size", "venue": null, "year": 2020 }, { "authors": [ "den Brand" ], "title": "2020), and the rest of the proof follows as in there. D IHS EXPERIMENT: MATRIX ESTIMATION WITH NUCLEAR NORM CONSTRAINT As stated in Section 5.3, the mean errors of the two datasets when m = 40 for the synthetic dataset", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "Sketching is a dimensionality reduction technique where one compresses a matrix by often random linear combinations. A line of work has shown how to sketch the Hessian to speed up each iteration in a second order method, but such sketches usually depend only on the matrix at hand, and in a number of cases are even oblivious to the input matrix. One could instead hope to learn a distribution on sketching matrices that is optimized for the specific distribution of input matrices. We show how to design learned sketches for the Hessian in the context of second order methods, where we learn potentially different sketches for the different iterations of an optimization procedure. We show empirically that learned sketches, compared with their “non-learned” counterparts, improve the approximation accuracy for important problems, including LASSO, SVM, and matrix estimation with nuclear norm constraints. Several of our schemes can be proven to perform no worse than their unlearned counterparts." }, { "heading": "1 INTRODUCTION", "text": "Large-scale optimization problems are abundant and solving them efficiently requires powerful tools to make the computation practical. This is especially true of second order methods which often are less practical than first order ones. Although second order methods may have many fewer iterations, each iteration could involve inverting a large Hessian, which is cubic time; in contrast, first order methods such as stochastic gradient descent are linear time per iteration.\nIn order to make second order methods faster in each iteration, a large body of work has looked at dimensionality reduction techniques, such as sampling, sketching, or approximating the Hessian by a low rank matrix. 
See, for example, (Gower et al., 2016; Xu et al., 2016; Pilanci & Wainwright, 2016; 2017; Doikov & Richtárik, 2018; Gower et al., 2018; Roosta-Khorasani & Mahoney, 2019; Gower et al., 2019; Kylasa et al., 2019; Xu et al., 2020; Li et al., 2020). Our focus is on sketching techniques, which often consist of multiplying the Hessian by a random matrix chosen independently of the Hessian. Sketching has a long history in theoretical computer science (see, e.g., (Woodruff, 2014) for a survey), and we describe such methods more below. A special case of sketching is sampling, which in practice is often uniform sampling, and hence oblivious to properties of the actual matrix. Other times the sampling is non-uniform, and based on squared norms of submatrices of the Hessian or on the so-called leverage scores of the Hessian.

Our focus is on sketching techniques, and in particular, we follow the framework of (Pilanci & Wainwright, 2016; 2017), which introduced the iterative Hessian sketch and the Newton sketch, as well as the high-accuracy refinement given in (van den Brand et al., 2020). If one were to run Newton's method to find a point where the gradient is zero, in each iteration one needs to solve an equation involving the current Hessian and gradient to find the update direction. When the Hessian can be decomposed as A^⊤A for an n × d matrix A with n ≫ d, sketching is particularly suitable. The iterative Hessian sketch was proposed in (Pilanci & Wainwright, 2016), where A is replaced with S · A, for a random matrix S which could be i.i.d. Gaussian or drawn from a more structured family of random matrices such as Subsampled Randomized Hadamard Transforms or COUNT-SKETCH matrices; the latter was done in (Cormode & Dickens, 2019). The Newton sketch was proposed in (Pilanci & Wainwright, 2017), which extended sketching methods beyond constrained least-squares problems to any twice differentiable function subject to a closed convex constraint set.
Using this sketch inside of interior point updates has led to much faster algorithms for an extensive body of convex optimization problems (Pilanci & Wainwright, 2017). By instead using sketching as a preconditioner, an application of the work of (van den Brand et al., 2020) (see Appendix E) was able to improve the dependence on the accuracy parameter to logarithmic.

In general, the idea behind sketching is the following. One chooses a random matrix S, drawn from a certain family of random matrices, and computes SA. If A is tall-and-skinny, then S is short-and-fat, and thus SA is a small, roughly square matrix. Moreover, SA preserves important properties of A. One typically desired property is that S is a subspace embedding, meaning that simultaneously for all x, one has ‖SAx‖_2 = (1 ± ε)‖Ax‖_2. An observation exploited in (Cormode & Dickens, 2019), building off of the COUNT-SKETCH random matrices S introduced in randomized linear algebra in (Clarkson & Woodruff, 2017), is that if S contains O(1) non-zero entries per column, then SA can be computed in O(nnz(A)) time, where nnz(A) denotes the number of nonzeros in A. This is sometimes referred to as input-sparsity running time.

Each iteration of a second order method often involves solving an equation of the form A^⊤Ax = A^⊤b, where A^⊤A is the Hessian and b is the gradient. For a number of problems, one has access to a matrix A ∈ R^{n×d} with n ≫ d, which is also an assumption made in Pilanci & Wainwright (2017). Therefore, the solution x is the minimizer of a constrained least-squares regression problem:

min_{x∈C} (1/2)‖Ax − b‖_2^2, (1)

where C is a convex constraint set in R^d.
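As a toy illustration of sketch-and-solve for the unconstrained case of (1) (all choices below — n, d, m, the seed, and the 2 × 2 normal-equation solver — are our own, not from the paper), the snippet applies a COUNT-SKETCH to A and b in a single pass over the rows and solves the small sketched least-squares problem; the cost of the sketched solution is close to the optimal cost.

```python
import random

random.seed(0)
n, d, m = 1000, 2, 200  # tall-and-skinny A, short-and-fat sketch

A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
x_true = [1.0, -2.0]
b = [sum(A[i][j] * x_true[j] for j in range(d)) + random.gauss(0, 1)
     for i in range(n)]

# Apply COUNT-SKETCH in one pass over the rows of [A | b]:
# row i is added into bucket h(i) with a random sign s(i).
SA = [[0.0] * d for _ in range(m)]
Sb = [0.0] * m
for i in range(n):
    h, s = random.randrange(m), random.choice((-1.0, 1.0))
    for j in range(d):
        SA[h][j] += s * A[i][j]
    Sb[h] += s * b[i]

def lstsq2(M, y):
    """Solve min_x ||Mx - y||_2 for d = 2 via the normal equations."""
    g00 = sum(r[0] * r[0] for r in M); g01 = sum(r[0] * r[1] for r in M)
    g11 = sum(r[1] * r[1] for r in M)
    c0 = sum(r[0] * yi for r, yi in zip(M, y))
    c1 = sum(r[1] * yi for r, yi in zip(M, y))
    det = g00 * g11 - g01 * g01
    return [(g11 * c0 - g01 * c1) / det, (g00 * c1 - g01 * c0) / det]

def cost(x):  # the original (unsketched) objective ||Ax - b||_2^2
    return sum((sum(A[i][j] * x[j] for j in range(d)) - b[i]) ** 2
               for i in range(n))

x_opt = lstsq2(A, b)    # exact solution
x_hat = lstsq2(SA, Sb)  # sketch-and-solve solution
assert cost(x_opt) - 1e-6 <= cost(x_hat) <= 2.0 * cost(x_opt)
```

The sketched problem has only m rows, yet because S approximately preserves norms over the relevant subspace, the sketched minimizer is near-optimal for the original problem.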
For the unconstrained case (C = R^d), various classical sketches that attain the subspace embedding property can provably yield high-accuracy approximate solutions (see, e.g., (Sarlos, 2006; Nelson & Nguyên, 2013; Cohen, 2016; Clarkson & Woodruff, 2017)); for the general constrained case, the Iterative Hessian Sketch (IHS) was proposed by Pilanci & Wainwright (2016) as an effective approach, and Cormode & Dickens (2019) employed sparse sketches to achieve input-sparsity running time for the IHS. All sketches used in these results are data-oblivious random sketches.

Learned Sketching. In the last few years, an exciting new notion of learned sketching has emerged. Here the idea is that one often sees independent samples of matrices A from a distribution D, and can train a model to learn the entries in a sketching matrix S on these samples. When given a future sample B, also drawn from D, the learned sketching matrix S will be such that S · B is a much more accurate compression of B than if S had the same number of rows and were instead drawn without knowledge of D. Moreover, the learned sketch S is often sparse, therefore allowing S · B to be applied very quickly. For large datasets B this is particularly important, and distinguishes this approach from other transfer learning approaches, e.g., (Andrychowicz et al., 2016), which can be considerably slower in this context.

Learned sketches were first used in the data stream context for finding frequent items (Hsu et al., 2019) and have subsequently been applied to a number of other problems on large data. For example, Indyk et al. (2019) showed that learned sketches yield significantly smaller errors for low-rank approximation. In (Dong et al., 2020), significant improvements to nearest neighbor search were obtained via learned sketches. More recently, Liu et al.
(2020) extended learned sketches to several problems in numerical linear algebra, including least-squares and robust regression, as well as k-means clustering.\nDespite the number of problems that learned sketches have been applied to, they have not been applied to convex optimization in general. Given that such methods often require solving a large overdetermined least squares problem in each iteration, it is hopeful that one can improve each iteration using learned sketches. However, a number of natural questions arise: (1) how should we learn the sketch? (2) should we apply the same learned sketch in each iteration, or learn it in the next iteration by training on a data set involving previously learned sketches from prior iterations?\nOur Contributions. In this work we answer the above questions and derive the first learned sketches for a wide number of problems in convex optimization.\nNamely, we apply learned sketches to constrained least-squares problems, including LASSO, support vector machines (SVM), and matrix regression with nuclear norm constraints. We show empirically that learned sketches demonstrate superior accuracy over random oblivious sketches for each of these problems. 
Specifically, compared with three classical sketches (Gaussian, COUNT-SKETCH, and Sparse Johnson-Lindenstrauss Transforms; see definitions in Section 2), the learned sketches in each of the first few iterations

• improve the LASSO error f(x) − f(x*) by 80% to 87% on two real-world datasets, where f(x) = (1/2)‖Ax − b‖_2^2 + ‖x‖_1; • improve the dual SVM error f(x) − f(x*) by 10–30% for a synthetic and a real-world dataset, as well as by 30%–40% for another real-world dataset, where f(x) = ‖Bx‖_2^2; • improve the matrix estimation error f(X) − f(X*) by at least 30% for a synthetic dataset and at least 95% for a real-world dataset, where f(X) = ‖AX − B‖_F^2.

Therefore, the learned sketches attain a smaller error within the same number of iterations, and in fact, within the same limit on the maximum runtime, since our sketches are extremely sparse (see below).

We also study the general framework of convex optimization in (van den Brand et al., 2020), and show that also for sketching-based preconditioning, learned sketches demonstrate considerable advantages. More precisely, by using a learned sketch with the same number of rows as an oblivious sketch, we are able to obtain a much better preconditioner with the same overall running time.

All of our learned sketches S are extremely sparse, meaning that they contain a single non-zero entry per column. Following the previous work of (Indyk et al., 2019), we choose the position of the nonzero entry in each column to be uniformly random, while the value of the nonzero entry is learned. This already demonstrates a significant advantage over non-learned sketches, and has a fast training time. Importantly, because of such sparsity, our sketches can be applied in input-sparsity time given a new optimization problem.

We also provide several theoretical results, showing how to algorithmically use learned sketches in conjunction with random sketches so as to do no worse than random sketches."
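The sparse sketches just described admit a compact representation: an array p of (fixed, random) row positions and an array v of per-column values, the part that would be learned. The snippet below (our own illustration; here v is merely randomly initialized rather than trained) applies such a sketch to A in one pass over A's nonzero entries — input-sparsity time — and checks the result against an explicit dense S · A product.

```python
import random

random.seed(1)
m, n, d = 4, 12, 3

# Sparse sketch: column i of S has a single nonzero, value v[i] at row p[i].
p = [random.randrange(m) for _ in range(n)]  # positions: fixed, random
v = [random.gauss(0, 1) for _ in range(n)]   # values: the learned part

A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

def apply_sketch(p, v, A, m):
    """Compute S·A in one pass over the nonzeros of A (input-sparsity time)."""
    SA = [[0.0] * len(A[0]) for _ in range(m)]
    for i, row in enumerate(A):
        for j, a in enumerate(row):
            if a != 0.0:
                SA[p[i]][j] += v[i] * a
    return SA

SA = apply_sketch(p, v, A, m)

# Check against the explicit m x n sketching matrix.
S = [[v[i] if p[i] == r else 0.0 for i in range(n)] for r in range(m)]
SA_dense = [[sum(S[r][i] * A[i][j] for i in range(n)) for j in range(d)]
            for r in range(m)]
assert all(abs(SA[r][j] - SA_dense[r][j]) < 1e-12
           for r in range(m) for j in range(d))
```

In training, only v would receive gradients, so the sparsity pattern — and hence the fast application — is preserved across gradient steps.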
}, { "heading": "2 PRELIMINARIES", "text": "Classical Sketches. Below we review several classical sketches that have been used for solving optimization problems.\n• Gaussian sketch: S = 1√ m G, where G is an m× n Gaussian random matrix. • COUNT-SKETCH: Each column of S has only a single non-zero entry. The position of the nonzero entry is chosen uniformly over the m entries in the column and the value of the entry is either +1 or −1, each with probability 1/2. Further, the columns are chosen independently. • Sparse Johnson-Lindenstrauss Transform (SJLT): S is the vertical concatenation of s independent COUNT-SKETCH matrices, each of dimension m/s× n.\nCOUNT-SKETCH-type Sketch. A COUNT-SKETCH-type sketch is characterized by a tuple (m,n, p, v), where m,n are positive integers and p, v are n-dimensional real vectors, defined as follows. The sketching matrix S has dimensions m × n and Spi,i = vi for all 1 ≤ i ≤ n while all the other entries of S are 0. When m and n are clear from context, we may characterize such a sketching matrix by (p, v) only.\nSubspace Embeddings. For a matrix A ∈ Rn×d, we say a matrix S ∈ Rm×n is a (1± )-subspace embedding for the column span of A if (1− ) ‖Ax‖2 ≤ ‖SAx‖2 ≤ (1 + ) ‖Ax‖2 for all x ∈ Rd. The classical sketches above, with appropriate parameters, are all subspace embedding matrices with at least a constant probability; our focus is on COUNT-SKETCH which can be applied in input sparsity running time. We summarize the parameters needed for a subspace embedding below:\n• Gaussian sketch: m = O(d/ 2). It is a dense matrix and computing SA costs O(m · nnz(A)) = O(nnz(A)d/ 2) time. • COUNT-SKETCH: m = O(d2/ 2) (Clarkson & Woodruff, 2017). Although the number of rows is quadratic in d/ , the sketch matrix S is sparse and computing SA takes only O(nnz(A)) time. • SJLT: m = O(d/ 2) and has s = O(1/ ) non-zeros per column (Nelson & Nguyên, 2013; Cohen, 2016). Computing SA takes O(snnz(A)) = O(nnz(A)/ ) time.\nIterative Hessian Sketch. 
The Iterative Hessian Sketching (IHS) method (Pilanci & Wainwright, 2016) solves the constrained least-squares problem (1) by iteratively performing the update\nx_{t+1} = arg min_{x∈C} (1/2)‖S_{t+1} A (x − x_t)‖₂² − ⟨A^T(b − Ax_t), x − x_t⟩, (2)\nwhere S_{t+1} is a sketching matrix. It is not difficult to see that for the unsketched version (S_{t+1} is the identity matrix) of the minimization above, the optimal solution x_{t+1} coincides with the optimal solution to the constrained least-squares problem (1). The IHS approximates the Hessian A^T A by a sketched version (S_{t+1}A)^T(S_{t+1}A) to improve runtime, as S_{t+1}A typically has very few rows.\nUnconstrained Convex Optimization. Consider an unconstrained convex optimization problem min_x f(x), where f is smooth and strongly convex, and its Hessian ∇²f is Lipschitz continuous. This problem can be solved by Newton's method, which iteratively performs the update\nx_{t+1} = x_t − arg min_z ‖(∇²f(x_t)^{1/2})^T (∇²f(x_t)^{1/2}) z − ∇f(x_t)‖₂,\nprovided it is given a good initial point x_0. In each step, it requires solving a regression problem of the form min_z ‖A^T A z − y‖₂, which, with access to A, can be solved with a fast regression solver in (van den Brand et al., 2020). The regression solver first computes a preconditioner R via a QR decomposition such that SAR has orthonormal columns, where S is a sketching matrix, then solves ẑ = arg min_{z′} ‖(AR)^T(AR) z′ − y‖₂ by gradient descent and returns Rẑ in the end. Here, the point of sketching is that the QR decomposition of SA can be computed much more efficiently than the QR decomposition of A since S has only a small number of rows.\nAlgorithm 1 LEARN-SKETCH: Gradient descent algorithm for learning the sketch values\nRequire: Atrain = {A1, ..., AN} (Ai ∈ R^{n×d}), learning rate α\n1: Randomly initialize p, v for a COUNT-SKETCH-type sketch\n2: for t = 0 to steps do\n3: Form S using p, v\n4: Sample batch Abatch from Atrain\n5: v ← v − α · ∂L(S, Abatch)/∂v\n6: end for\nLearning a Sketch.
We use the same learning algorithm as in (Liu et al., 2020), given in Algorithm 1. The algorithm aims to minimize the mean loss function L(S, A) = (1/N) ∑_{i=1}^N L(S, Ai), where S is the learned sketch, L(S, A) is the loss function of S applied to a data matrix A, and A = {A1, . . . , AN} is a (random) subset of training data.\n3 HESSIAN SKETCH\nAlgorithm 2 Solver for (3)\n1: S1 ← learned sketch, S2 ← random sketch\n2: (Ẑ_{i,1}, Ẑ_{i,2}) ← ESTIMATE(Si, A), i = 1, 2\n3: if Ẑ_{1,2}/Ẑ_{1,1} < Ẑ_{2,2}/Ẑ_{2,1} then\n4: x̂ ← solution of (3) with S = S1\n5: else\n6: x̂ ← solution of (3) with S = S2\n7: end if\n8: return x̂\n9: function ESTIMATE(S, A)\n10: T ← sparse (1 ± η)-subspace embedding matrix for d-dimensional subspaces\n11: (Q, R) ← QR(TA)\n12: Ẑ1 ← σ_min(SAR⁻¹)\n13: Ẑ2 ← (1 ± η)-approximation to ‖(SAR⁻¹)^T(SAR⁻¹) − I‖_op\n14: return (Ẑ1, Ẑ2)\n15: end function\nIn this section, we consider the minimization problem\nmin_{x∈C} { (1/2)‖SAx‖₂² − ⟨A^T y, x⟩ }, (3)\nwhich is used as a subroutine for the IHS (cf. (2)). We present an algorithm with the learned sketch in Algorithm 2. To analyze its performance, we let R be the column space of A ∈ R^{n×d} and define the following quantities (corresponding exactly to the unconstrained case in Pilanci & Wainwright (2016))\nZ1(S) = inf_{v∈R∩S^{n−1}} ‖Sv‖₂²,\nZ2(S) = sup_{u,v∈R∩S^{n−1}} ⟨u, (S^T S − I_n)v⟩,\nwhere S^{n−1} denotes the Euclidean unit sphere in R^n.\nThe following is the estimation guarantee of Ẑ1 and Ẑ2. The proof is postponed to Appendix A.\nLemma 3.1. Suppose that η ∈ (0, 1/3) is a small constant, A is of full rank and S has O(d²) rows. The function ESTIMATE(S, A) returns, in O(nnz(A) log(1/η) + poly(d/η)) time, values Ẑ1, Ẑ2 which with probability at least 0.99 satisfy that Z1(S)/(1+η) ≤ Ẑ1 ≤ Z1(S)/(1−η) and Z2(S)/(1+η)² − 3η ≤ Ẑ2 ≤ Z2(S)/(1−η)² + 3η.\nNote that for a matrix A, ‖A‖_op = sup_{x≠0} ‖Ax‖₂/‖x‖₂ is its operator norm. Similar to (Pilanci & Wainwright, 2016, Proposition 1), we have the following guarantee. The proof is postponed to Appendix B.\nTheorem 3.2.
Let η ∈ (0, 1/3) be a small constant. Suppose that A is of full rank and S1 and S2 are both COUNT-SKETCH-type sketches with O(d²) rows. Algorithm 2 returns a solution x̂ which, with probability at least 0.98, satisfies that\n‖A(x̂ − x∗)‖₂ ≤ (1 + η)⁴ ( min{ Ẑ_{1,2}/Ẑ_{1,1}, Ẑ_{2,2}/Ẑ_{2,1} } + 4η ) ‖Ax∗‖₂\nin O(nnz(A) log(1/η) + poly(d/η)) time, where x∗ = arg min_{x∈C} ‖Ax − b‖₂ is the least-squares solution.\n\n4 HESSIAN REGRESSION\nAlgorithm 3 Fast Regression Solver for (4)\n1: S1 ← learned sketch, S2 ← random sketch\n2: (Qi, Ri) ← QR(SiA), i = 1, 2\n3: (σi, σ′i) ← EIG(A Ri⁻¹), i = 1, 2\n. EIG(B) returns the max and min singular values of B\n4: if σ1/σ′1 < σ2/σ′2 then\n5: P ← R1⁻¹, η ← 1/(σ1² + (σ′1)²)\n6: else\n7: P ← R2⁻¹, η ← 1/(σ2² + (σ′2)²)\n8: end if\n9: z0 ← 0\n10: while ‖A^T A P z_t − y‖₂ ≥ ε‖y‖₂ do\n11: z_{t+1} ← z_t − η(P^T A^T A P)(P^T A^T A P z_t − P^T y)\n12: end while\n13: return P z_t\nIn this section, we consider the minimization problem\nmin_z ‖A^T A z − y‖₂, (4)\nwhich is used as a subroutine for the unconstrained convex optimization problem min_x f(x) with A^T A being the Hessian matrix ∇²f(x) (see Section 2). Here A ∈ R^{n×d}, y ∈ R^d, and we have access to A. We incorporate a learned sketch into the fast regression solver in (van den Brand et al., 2020) and present the algorithm in Algorithm 3.\nHere the subroutine EIG(B) applies a (1 + η)-subspace embedding sketch T to B for some small constant η and returns the maximum and the minimum singular values of TB. Since B admits the form of AR, the sketched matrix TB can be calculated as (TA)R and thus can be computed in O(nnz(A) + poly(d)) time if T is a COUNT-SKETCH matrix of O(d²) rows. The extreme singular values of TB can be found by SVD or Lanczos's algorithm.\nSimilar to Lemma 4.2 in (van den Brand et al., 2020), we have the following guarantee of Algorithm 3. The proof parallels the proof in (van den Brand et al., 2020) and is postponed to Appendix C.\nTheorem 4.1.
Suppose that S1 and S2 are both COUNT-SKETCH-type sketches with O(d²) rows. Algorithm 3 returns a solution x′ such that ‖A^T A x′ − y‖₂ ≤ ε‖y‖₂ in O(nnz(A)) + Õ(nd · (min{σ1/σ′1, σ2/σ′2})² · log(κ(A)/ε) + poly(d)) time.\nRemark 4.2. In Algorithm 3, S2 can be chosen to be a subspace embedding matrix for d-dimensional subspaces, in which case A R2⁻¹ has condition number close to 1 (see, e.g., (Woodruff, 2014, p. 38)) and the full algorithm would run faster than the trivial O(nd²)-time solver to (4).\nRemark 4.3. For the original unconstrained convex optimization problem min_x f(x), one can run the entire optimization procedure with learned sketches versus the entire optimization procedure with random sketches, compare the objective values at the end, and choose the better of the two. For least squares f(x) = (1/2)‖Ax − b‖₂², the value of f(x) can be approximated efficiently by a sparse subspace embedding matrix in O(nnz(A) + nnz(b) + poly(d)) time." }, { "heading": "5 IHS EXPERIMENTS", "text": "Training. We learn the sketching matrix in each iteration (also called a round) separately. Recall that in the (t+1)-st round of IHS, the optimization problem (2) that we need to solve depends on x_t. An issue is that we do not know x_t, which is needed to generate the training data for the (t+1)-st round. Our solution is to use the sketch matrix in the previous round to obtain x_t by solving (2), and then use it to generate the training data in the next round. That is, in the first round, we train the sketch matrix S1 to solve the problem x1 = arg min_{x∈C} (1/2)‖S1 A x‖₂² − ⟨A^T b, x⟩ and use x1 to generate the training data for the optimization problem for x2, and so on. The loss function we use is the unsketched objective function in the (t+1)-st iteration, i.e., L(S_{t+1}, A) = (1/2)‖A(x_{t+1} − x_t)‖₂² − ⟨A^T(b − Ax_t), x_{t+1} − x_t⟩, where x_{t+1} is the solution to (2) and thus depends on S_{t+1}.\nComparison.
We compare the learned sketch against three classical sketches: Gaussian, COUNT-SKETCH, and SJLT (see Section 2) in all experiments. The quantity we compare is a certain error, defined individually for each problem, in each round of the iteration of the IHS or as a function of the runtime of the algorithm. All of our experiments are conducted on a laptop with a 1.90GHz CPU and 16GB RAM." }, { "heading": "5.1 LASSO", "text": "Figure 2: Test error of LASSO on CO emissions dataset, m = 3d\nFigure 4: Test error of LASSO on greenhouse gas dataset, m = 3.5d\n• CO emission1: sensor measurements aggregated over one hour (by means of average or sum), which can help to predict the CO emission. We divide the raw data into 120 (Ai, bi) such that Ai ∈ R^{300×9}, bi ∈ R^{300×1}. The data in each matrix is sorted in chronological order. |(A, b)train| = 96, |(A, b)test| = 24.\n• Greenhouse gas2: time series of measured greenhouse gas concentrations in the California atmosphere. Each (A, b) corresponds to a different measurement location. Ai ∈ R^{327×14}, bi ∈ R^{327×1}, and |(A, b)train| = 400, |(A, b)test| = 100. (This dataset was also used in (Liu et al., 2020).)\n5.2 SUPPORT VECTOR MACHINE\nIn the context of binary classification, a labeled sample is a pair (ai, zi), where ai ∈ R^n is a vector representing a collection of features and zi ∈ {−1, +1} is the associated class label. Given a set of labeled patterns {(ai, zi)}_{i=1}^d, the support vector machine (SVM) estimates the weight vector w∗ by minimizing the function\nw∗ = arg min_{w∈R^n} { (C/2) ∑_{i=1}^d g(zi, ⟨w, ai⟩) + (1/2)‖w‖₂² },\nwhere C is a parameter.
Here we use the squared hinge loss g(zi, ⟨w, ai⟩) := (1 − zi⟨w, ai⟩)₊².\n1https://archive.ics.uci.edu/ml/datasets/Gas+Turbine+CO+and+NOx+Emission+Data+Set\n2https://archive.ics.uci.edu/ml/datasets/Greenhouse+Gas+Observing+Network\nThe dual of this problem can be written as a constrained minimization problem (see, e.g., (Li et al., 2009; Pilanci & Wainwright, 2015)),\nx∗ := arg min_{x∈∆d} ‖Bx‖₂²,\nover the domain ∆d = {x ∈ R^d : x ≥ 0 and ‖x‖₁ = 1}, the positive simplex in R^d. Here B = [(AD)^T (1/√C) I_d]^T ∈ R^{(n+d)×d}, where A is an n × d matrix with ai ∈ R^n as its i-th column and D = diag(z) is a d × d diagonal matrix. We conduct experiments on the following three datasets:\n• Random Gaussian (synthetic): We follow the same construction as in Pilanci & Wainwright (2016). We generate a two-component Gaussian mixture model, based on the component distributions N(µ0, I) and N(µ1, I), where µ0 and µ1 are uniformly distributed in [−3, 3]. Placing equal weights on each component, we draw d samples from this mixture distribution.\n• Swarm behavior3: Each instance in the dataset has n = 2400 features and the task is to predict whether the instance is flocking or not flocking. We use only the first 6000 instances of the raw data, and divide them into 200 smaller groups of instances. Each group contains d = 30 instances, corresponding to a Bi of size 2430 × 30. The training data consists of 160 groups and the test data consists of 40 groups.\n• Gisette4: Gisette is a handwritten digit recognition problem which asks to separate the highly confusable digits '4' and '9'. Each instance has n = 5000 features. The raw data is divided into 200 smaller groups, where each contains d = 30 instances and corresponds to a Bi of size 5030 × 30. The training data consists of 160 groups and the test data consists of 40 groups.\nWe choose m = 10d in all experiments and define the error as ‖Bx‖₂² − ‖Bx∗‖₂².
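To make the dual construction concrete, the following minimal numpy sketch (the sizes n, d, the parameter C, and the random data are illustrative placeholders, not the paper's settings) assembles B = [(AD)^T (1/√C) I_d]^T and evaluates the dual objective ‖Bx‖₂² at a feasible point of the simplex ∆d:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, C = 40, 6, 1.0                       # placeholder dimensions and SVM parameter

A = rng.standard_normal((n, d))            # i-th column a_i holds the features of sample i
z = rng.choice([-1.0, 1.0], size=d)        # class labels z_i
D = np.diag(z)

# B = [(AD)^T  (1/sqrt(C)) I_d]^T, i.e. AD stacked on top of I_d / sqrt(C)
B = np.vstack([A @ D, np.eye(d) / np.sqrt(C)])   # shape (n + d, d)

# dual objective f(x) = ||Bx||_2^2, evaluated at a point of the simplex Delta_d
x = np.full(d, 1.0 / d)                    # uniform weights: x >= 0 and sum(x) = 1
f = np.linalg.norm(B @ x) ** 2
```

Note that because of the (1/√C) I_d block, B always has full column rank, so the dual objective is strictly convex over the simplex.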
For random sketches, we take the average error over five independent trials, and for learned sketches, over three independent trials. For the Gisette dataset, we use the learned sketch in all rounds. We plot in a (natural) logarithmic scale the mean errors of the three datasets in Figures 5 to 7. For the Gisette and random Gaussian datasets, using the learned sketches reduces the error by 10%–30%, and for the Swarm Behavior dataset, the learned sketches reduce the error by about 30%–40%." }, { "heading": "5.3 MATRIX ESTIMATION WITH NUCLEAR NORM CONSTRAINT", "text": "In many applications, for the problem\nX∗ := arg min_{X∈R^{d1×d2}} ‖AX − B‖_F²,\nit is reasonable to model the matrix X∗ as having low rank. Similar to ℓ1-minimization for compressive sensing, a standard relaxation of the rank constraint is to minimize the nuclear norm of X, defined as ‖X‖∗ := ∑_{j=1}^{min{d1,d2}} σj(X), where σj(X) is the j-th largest singular value of X.\nHence, the matrix estimation problem we consider here is\nX∗ := arg min_{X∈R^{d1×d2}} ‖AX − B‖_F² such that ‖X‖∗ ≤ ρ,\nwhere ρ > 0 is a user-defined radius as a regularization parameter.\n3https://archive.ics.uci.edu/ml/datasets/Swarm+Behaviour\n4https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#gisette\nFigure 8: Test error of matrix estimation on synthetic data, m = 50\nFigure 9: Test error of matrix estimation on Tunnel dataset, m = 50\nWe conduct experiments on the following two datasets:\n• Synthetic Dataset: We generate the pair (Ai, Bi) as Bi = Ai X∗i + Wi, where Ai ∈ R^{n×d1} with i.i.d. N(0, 1) entries, X∗i ∈ R^{d1×d2} is a matrix with rank at most r, and Wi is noise with i.i.d. N(0, σ²) entries. Here we set n = 500, d1 = d2 = 7, r = 3, ρ = 30. |(A, B)|train = 270, |(A, B)|test = 30.\n• Tunnel5: The dataset is a time series of gas concentrations measured by eight sensors in a wind tunnel. Each (A, B) corresponds to a different data collection trial. Ai ∈ R^{13530×5}, Bi ∈ R^{13530×6}, |(A, B)|train = 144, |(A, B)|test = 36.
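As a concrete illustration of the quantities above (the sizes, rank, and noise level are placeholders chosen in the spirit of the synthetic dataset, not the paper's exact settings), the nuclear norm and the gap f(X) − f(X∗) with f(X) = ‖AX − B‖_F² can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d1, d2, r = 100, 7, 7, 3                 # placeholder sizes; rank-r ground truth

# B = A X* + W with low-rank X* and small Gaussian noise W
X_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
A = rng.standard_normal((n, d1))
B = A @ X_star + 0.1 * rng.standard_normal((n, d2))

def nuclear_norm(X):
    # ||X||_* = sum of singular values of X
    return np.linalg.svd(X, compute_uv=False).sum()

def error(X, X_ref):
    # f(X) - f(X_ref) with f(X) = ||AX - B||_F^2
    return (np.linalg.norm(A @ X - B, 'fro') ** 2
            - np.linalg.norm(A @ X_ref - B, 'fro') ** 2)
```

In a real solver one would additionally project onto the constraint set {‖X‖∗ ≤ ρ}; the two helpers above only expose the objective and the regularizer used in the experiments.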
The same dataset and parameters were also used in (Liu et al., 2020) for regression tasks. In our nuclear norm constraint, we set ρ = 10.\nWe choose m = 40, 50 for the synthetic dataset and m = 10, 50 for the Tunnel dataset and define the error to be (1/2)(‖AX − B‖_F² − ‖AX∗ − B‖_F²). For each data point, we take the average error of five independent trials. The mean errors of the two datasets when m = 50 are plotted in a (natural) logarithmic scale in Figures 8 and 9. We observe that the classical sketches yield approximately the same order of error, while the learned sketches improve the error by at least 30% for the synthetic dataset and surprisingly by at least 95% for the Tunnel dataset. The huge improvement on the Tunnel dataset may be due to the fact that the matrices Ai have many duplicate rows. We defer the results for m = 10 and m = 40 to Appendix D; they show that the learned sketches yield much smaller errors than the random sketches, and that the random sketches can converge significantly more slowly, with considerably larger errors in the first several rounds.\n5.4 RUNTIME OF LEARNED SKETCHES\nAs stated in Section 2, our learned sketch matrices S are all COUNT-SKETCH-type matrices (each column contains a single nonzero entry), so the matrix product SA can be computed in O(nnz(A)) time and the overall algorithm is expected to be fast. To verify this, we show error-versus-runtime plots for the SVM and matrix estimation with nuclear norm constraint tasks in Figures 10 and 11 (corresponding to the datasets in Figures 7 and 9). The runtime consists only of the time for sketching and solving the optimization problem and does not include the time for loading the data. We run the same experiment three times, each time taking an average over all test data. From the plots we can observe that the learned sketch and COUNT-SKETCH have the fastest runtimes, which are slightly faster than that of the SJLT and significantly faster than that of the Gaussian sketch."
}, { "heading": "6 FAST REGRESSION EXPERIMENT", "text": "We consider the unconstrained least-squares problem, i.e., (5) with λ = 0, using the CO emission and greenhouse gas datasets, as well as the following Census dataset:\n• Census data6: this dataset consists of annual salary and related features on people who reported that they worked 40 or more weeks in the previous year and worked 35 or more hours per week. We randomly sample 5000 instances to create (Ai, bi), where A ∈ R^{5000×11} and b ∈ R^{5000×1}, |(A, b)|train = 160, |(A, b)|test = 40.\n5https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+exposed+to+turbulent+gas+mixtures\n6https://github.com/chocjy/randomized-quantile-regression-solvers/tree/master/matlab/data\nTraining. We optimize the learned sketch S1 by gradient descent (Algorithm 1) with the loss L(S, A) = κ(A R1⁻¹), where R1 is computed as in Algorithm 3 and κ(M) denotes the condition number of a matrix M.\nNext we discuss how to generate the training data. Since we use Newton's method to solve an unconstrained convex optimization problem (see Section 2), in the t-th round we need to solve a regression problem min_z ‖(∇²f(x_t)^{1/2})^T (∇²f(x_t)^{1/2}) z − ∇f(x_t)‖₂. Reformulating it as min_z ‖A^T A z − y‖₂, we see that A and y depend on the previous solution x_t. Hence, we take x_t to be the solution obtained from Algorithm 3 using the learned sketch S_t, and this generates A and y for the (t+1)-st round.\nExperiment. For the CO emission dataset, we set m = 70, and for the Census dataset, we set m = 500. For the η in Algorithm 3, we set η = 1 for the first round and η = 0.2 for the subsequent rounds for the CO emission dataset, and η = 1 for all rounds for the Census dataset. We leave the settings and results for the greenhouse gas dataset to Appendix E.\nWe examine the accuracy of the subproblem (4) and define the error to be ‖A^T A R z_t − y‖₂ / ‖y‖₂. We run the first three calls of the subroutine solving the subproblem for the CO emission dataset and the Census dataset.
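To illustrate why κ(AR⁻¹) is a sensible training loss, the following numpy sketch (the dimensions are placeholders, and the nonzero values of S are random ±1 signs here rather than learned values) builds a COUNT-SKETCH-type sketch from (p, v), computes R from a QR factorization of SA, and compares the condition number of the preconditioned matrix AR⁻¹ with that of A itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 2000, 10, 400          # m on the order of d^2 rows for the sketch

# an ill-conditioned A: orthonormal columns rescaled to singular values 1 ... 1000
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
A = U * np.logspace(0, 3, d)

# COUNT-SKETCH-type sketch (p, v): one nonzero per column, at row p_i with value v_i
p = rng.integers(0, m, size=n)
v = rng.choice([-1.0, 1.0], size=n)   # placeholder values; in the paper these are learned
S = np.zeros((m, n))
S[p, np.arange(n)] = v

# preconditioner from the QR factorization of SA; SA has only m rows, so QR is cheap
_, R = np.linalg.qr(S @ A)
kappa_pre = np.linalg.cond(A @ np.linalg.inv(R))
kappa_raw = np.linalg.cond(A)
```

When S is a good subspace embedding of the column span of A, the columns of AR⁻¹ are nearly orthonormal, so kappa_pre is close to 1 while kappa_raw can be arbitrarily large; training v to shrink κ(AR⁻¹) further is exactly the loss used above.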
The average error of three independent trials is plotted in Figures 12, 13 and 16. We observe that for the CO emission dataset, the classical sketches have a similar performance and the learned sketches lead to a fast convergence in the subroutine with the first-round error at least 80% smaller; for the Census dataset, the learned sketch achieves the smallest error for all three rounds, where we reduce about 60% error in the first round and about 50% error in the third round. Note that the learned sketch always considerably outperforms COUNT-SKETCH in all cases." }, { "heading": "7 CONCLUSION", "text": "We demonstrated the superiority of using learned sketches, over classical random sketches, in the Iterative Hessian Sketching method and fast regression solvers for unconstrained least-squares. Compared with random sketches, our learned sketches of the same size can considerably reduce the error in the loss function (i.e., f(x) − f(x∗), where x is the output of the sketched algorithm and x∗ the optimal solution to the unsketched problem) for a given threshold of maximum number of iterations or maximum runtime. Learned sketches also admit a smaller sketch size. When the sketch size is small, the algorithm with random sketches may fail to converge, or converge slowly, while the algorithm with learned sketches converges quickly." }, { "heading": "A PROOF OF LEMMA 3.1", "text": "Suppose that AR−1 = UW , where U ∈ Rn×d has orthonormal columns, which form an orthonormal basis of the column space of A. Since T is a subspace embedding of the column space of A with probability 0.99, it holds for all x ∈ Rd that\n1\n1 + η\n∥∥TAR−1x∥∥ 2 ≤ ∥∥AR−1x∥∥ 2 ≤ 1 1− η ∥∥TAR−1x∥∥ 2 .\nSince ∥∥TAR−1x∥∥ 2\n= ‖Qx‖2 = ‖x‖2 and\n‖Wx‖2 = ‖UWx‖2 = ∥∥AR−1x∥∥ 2 (6)\nwe have that 1\n1 + η ‖x‖2 ≤ ‖Wx‖2 ≤\n1\n1− η ‖x‖2 , x ∈ R d. 
(7)\nIt is easy to see that\nZ1(S) = min x∈Sd−1 ‖SUx‖2 = min y 6=0 ‖SUWy‖2 ‖Wy‖2 ,\nand thus,\nmin y 6=0 (1− η) ‖SUWy‖2 ‖y‖2 ≤ Z1(S) ≤ min y 6=0 (1 + η) ‖SUWy‖2 ‖y‖2 .\nRecall that SUW = SAR−1. We see that\n(1− η)σmin(SAR−1) ≤ Z1(S) ≤ (1 + η)σmin(SAR−1).\nBy definition, Z2(S) = ∥∥UT (S>S − In)U∥∥op . It follows from (7) that\n(1− η)2 ∥∥WTUT (STS − In)UW∥∥op ≤ Z2(S) ≤ (1 + η)2 ∥∥WTUT (STS − In)UW∥∥op .\nand from (7), (6) and (Vershynin, 2012, Lemma 5.36) that∥∥(AR−1)>(AR−1)− I∥∥ op ≤ 3η. Since ∥∥WTUT (STS − In)UW∥∥op = ∥∥(AR−1)>(STS − In)AR−1∥∥op and ∥∥(AR−1)>STSAR−1 − I∥∥ op − ∥∥(AR−1)>(AR−1)− I∥∥ op\n≤ ∥∥(AR−1)>(STS − In)AR−1∥∥op\n≤ ∥∥(AR−1)>STSAR−1 − I∥∥ op + ∥∥(AR−1)>(AR−1)− I∥∥ op ,\nit follows that\n(1− η)2 ∥∥(SAR−1)>SAR−1 − I∥∥\nop − 3(1− η)2η\n≤ Z2(S) ≤ (1 + η)2 ∥∥(SAR−1)>SAR−1 − I∥∥ op + 3(1 + η)2η.\nWe have so far proved the correctness of the approximation and we shall analyze the runtime below.\nSince S and T are sparse, computing SA and TA takes O(nnz(A)) time. The QR decomposition of TA, which is a matrix of size poly(d/η) × d, can be computed in poly(d/η) time. The matrix SAR−1 can be computed in poly(d) time. Since it has size poly(d)× d, its smallest singular value can be computed in poly(d) time. To approximate Z2(S), we can use the power method to estimate∥∥(SAR−1)TSAR−1 − I∥∥\nop up to a (1± η)-factor in O((nnz(A) + poly(d)) log(1/η)) time." }, { "heading": "B PROOF OF THEOREM 3.2", "text": "In Lemma 3.1, we have with probability at least 0.99 that\nẐ2 Ẑ1 ≥\n1 (1+η)2Z2(S)− 3η\n1 1−ηZ1(S)\n≥ 1− η (1 + η)2 Z2(S) Z1(S) − 3η Z1(S) .\nWhen S is random subspace embedding, it holds with probability at least 0.99 that Z1(S) ≥ 3/4 and so, by a union bound, it holds with probability at least 0.98 that\nẐ2 Ẑ1 ≥ 1 (1 + η)4 Z2(S) Z1(S) − 4η,\nor, Z2(S)\nZ1(S) ≤ (1 + η)4\n( Ẑ2\nẐ1 + 4η\n) .\nThe correctness of our claim then follows from (Pilanci & Wainwright, 2016, Proposition 1), together with the fact that S2 is a random subspace embedding. 
The runtime follows from Lemma 3.1 and (Cormode & Dickens, 2019, Theorem 2.2)." }, { "heading": "C PROOF OF THEOREM 4.1", "text": "The proof follows an almost identical argument to that of (van den Brand et al., 2020, Lemma B.1). In (van den Brand et al., 2020), it is assumed (in our notation) that 3/4 ≤ σ_min(AP) ≤ σ_max(AP) ≤ 5/4, and thus one can set η = 1 in Algorithm 3 and achieve linear convergence. The only difference is that here we estimate σ_min(AP) and σ_max(AP) and set the step size η in the gradient descent algorithm accordingly. By standard bounds for gradient descent (see, e.g., (Boyd & Vandenberghe, 2004, p. 468)), with a choice of step size η = 2/(σ_max²(AP) + σ_min²(AP)), after O((σ_max(AP)/σ_min(AP))² log(1/ε)) iterations, we can find z_t such that\n‖P^T A^T A P (z_t − z∗)‖₂ ≤ ε ‖P^T A^T A P (z_0 − z∗)‖₂,\nwhere z∗ = arg min_z ‖P^T A^T A P z − P^T y‖₂ is the optimal least-squares solution. This establishes Eq. (11) in the proof in (van den Brand et al., 2020), and the rest of the proof follows as there.\nD IHS EXPERIMENT: MATRIX ESTIMATION WITH NUCLEAR NORM CONSTRAINT\nAs stated in Section 5.3, the mean errors of the two datasets when m = 40 for the synthetic dataset and m = 10 for the Tunnel dataset are plotted in a (natural) logarithmic scale in Figures 14 and 15." }, { "heading": "E FAST REGRESSION EXPERIMENT: GREENHOUSE GAS", "text": "We set m = 100, η = 0.2 in the first round and η = 1 in the second round. The mean errors of the first two calls of the fast regression subroutine are plotted in Figure 16. We can observe that the Gaussian and sparse JL sketches have a significantly better performance than COUNT-SKETCH, and the learned sketch again shows a significant reduction of more than 50% in the first-round error." } ]
2020
null
SP:45d0d17b384044473db2e2e164c56558044d2542
[ "The paper is about ANN being best-known models of developed primate visual systems. However this fact does not yet mean that the way those systems are trained is also similar. This distinction and a step towards answering this question is the main motivation of this work. The authors demonstrate a set of ideas that while drastically reducing the number of updates maintain high Brain Predictability according to the BrainScore. The significance of this result in my opinion largely depends on how well we can map those observations and methods to biological meaning and knowledge on how primate brains are trained (see the discussion point below).", "This paper presents an empirical study that elucidates potential mechanisms through which models of adult-like visual streams can \"develop\" from less specific/coarser model instantiations. In particular, the authors consider existing ventral stream models whose internal representations and behavior are most brain-like (amongst several other models) and probe how these fair in impoverished regimes of available labeled data and model plasticity (number of \"trainable\" synapses). They introduce a novel weight initialization mechanism, Weight Compression (WC), that allows their models to retain good performance even at the beginning of training, before any synaptic update. They also explore a particular methodology for fine-tuning, Critical Training (CT), that selectively updates parameters that seem to yield the most benefit. Finally, they explore these methods/algorithms' transfer performance from one ventral stream model (CORnet-S) to two additional models (ResNet-50 and MobileNet).", "The paper addresses the question of how many weight updates are needed to train a deep network before it takes on biologically realistic representations. 
The paper uses CORnet-S (a network that has been proposed to resemble primate ventral stream), and BrainScore (a benchmark of how closely related deep network responses are to visual responses in primate ventral stream). Three ways of reducing the numbers of weight updates are explored, each of which is found to vastly reduce updates while moderately reducing BrainScore. First, the network is simply trained for fewer epochs. Second, weights are initialized with clusters of weights found after training. Third, only a subset of layers is updated. A combination of methods leads to 80% of full brain predictivity with 0.5% of the standard number of weight updates. ", "## Updated the score This paper proposes to address an important research question for connecting biological (BNN) and artificial neural networks (ANN). Although after training, ANN replicates various salient features of BNN, the way they are often trained is biologically implausible and thus, it is hard to argue that ANNs are suitable for modeling BNNs convincingly. In particular, this work focuses on the already existing CORnet who has shown a high Brain-score. The idea of the authors is to show that they can largely reduce the number of updates when using their methods while still retaining a high Brain-score, thus proposing a potential training mechanism for BNNs.", "The paper is concerned with closing the gap between the amount of training in deep networks and in developing brains, as the current deep learning models use an unrealistically large number of synaptic updates. The authors address that with three strategies: less training, weight clustering and training in a subset of layers. All methods are tested individually and in combination with each other on primate ventral stream data. " ]
After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are considered poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to standard model training on labeled images in ImageNet, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve∼80% of a fully trained model’s match to adult ventral stream. Specifically, training benefits predictions of higher visual cortex the most whereas predictions of earlier areas improve only marginally over the course of training. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e. no training at all). Third, we find that, by training only ∼5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. This approach further improves on ImageNet performance over previous attempts in computer vision of minimizing trained components without substantially increasing the number of trained parameters. 
These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
[]
[ { "authors": [ "Martin Schrimpf", "Idan A Blank", "Greta Tuckute", "Carina Kauf", "Eghbal A Hosseini", "NANCY G KANWISHER", "Joshua B. Tenenbaum", "Evelina Fedorenko" ], "title": "Artificial Neural Networks Accurately Predict Language Processing in the Brain", "venue": "bioRxiv preprint,", "year": 2020 }, { "authors": [ "Jonas Kubilius", "Martin Schrimpf", "Ha Hong", "Najib J. Majaj", "Rishi Rajalingham", "Elias B. Issa", "Kohitij Kar", "Pouya Bashivan", "Jonathan Prescott-Roy", "Kailyn Schmidt", "Aran Nayebi", "Daniel Bear", "Daniel L.K. Yamins", "James J. DiCarlo" ], "title": "Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Joel Dapello", "Tiago Marques", "Martin Schrimpf", "Franziska Geiger", "David D. Cox", "James J. DiCarlo" ], "title": "Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations", "venue": "In Neural Information Processing Systems (NeurIPS)", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Daniel LK Yamins", "Ha Hong", "Charles F Cadieu", "Ethan A Solomon", "Darren Seibert", "James J DiCarlo" ], "title": "Performance-optimized hierarchical models predict neural responses in higher visual cortex", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "Seyed-Mahdi Khaligh-Razavi", "Nikolaus Kriegeskorte" ], "title": "Deep supervised, but not unsupervised, models may explain it cortical representation", "venue": "PLoS computational biology,", "year": 2014 }, { "authors": [ "Santiago A Cadena", "George H Denfield", "Edgar Y Walker", "Leon A Gatys", "Andreas S Tolias", "Matthias Bethge", "Alexander S Ecker" ], 
"title": "Deep convolutional models improve predictions of macaque v1 responses", "venue": null, "year": 2017 }, { "authors": [ "Hanlin Tang", "Martin Schrimpf", "William Lotter", "Charlotte Moerman", "Ana Paredes", "J.O. Josue Ortega Caro", "Walter Hardesty", "David Cox", "Gabriel Kreiman" ], "title": "Recurrent computations for visual pattern completion", "venue": "Proceedings of the National Academy of Sciences (PNAS),", "year": 2018 }, { "authors": [ "Martin Schrimpf", "Jonas Kubilius", "Ha Hong", "Najib J. Majaj", "Rishi Rajalingham", "Elias B. Issa", "Kohitij Kar", "Pouya Bashivan", "Jonathan Prescott-Roy", "Kailyn Schmidt", "Daniel L.K. Yamins", "James J. DiCarlo" ], "title": "Brain-Score: Which artificial neural network for object recognition is most brain-like? bioRxiv, 2018", "venue": null, "year": 2018 }, { "authors": [ "Rishi Rajalingham", "Elias B Issa", "Pouya Bashivan", "Kohitij Kar", "Kailyn Schmidt", "James J DiCarlo" ], "title": "Largescale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks", "venue": "Journal of Neuroscience,", "year": 2018 }, { "authors": [ "Darren Seibert" ], "title": "High-level visual object representation in juvenile and adult primates", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 2018 }, { "authors": [ "Anthony Zador" ], "title": "A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains", "venue": "bioRxiv preprint,", "year": 2019 }, { "authors": [ "J. Anthony Movshon", "Lynne Kiorpes" ], "title": "Analysis of the development of spatial contrast sensitivity in monkey and human infants", "venue": "Journal of the Optical Society of America A (JOSA A),", "year": 1988 }, { "authors": [ "Lynne Kiorpes", "J. Anthony Movshon" ], "title": "Development of sensitivity to visual motion in macaque monkeys", "venue": "Visual Neuroscience,", "year": 2004 }, { "authors": [ "Mathew E. 
Diamond", "Wei Huang", "Ford F. Ebner" ], "title": "Laminar comparison of somatosensory cortical plasticity", "venue": null, "year": 1994 }, { "authors": [ "Aniek Schoups", "Rufin Vogels", "Ning Qian", "Guy Orban" ], "title": "Practising orientation identification improves orientation coding in V1", "venue": "neurons. Nature,", "year": 2001 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the Knowledge in a Neural Network", "venue": "arXiv preprint,", "year": 2015 }, { "authors": [ "Jang Hyun Cho", "Bharath Hariharan" ], "title": "On the efficacy of knowledge distillation", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive Representation Distillation", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Nicholas Cheney", "Martin Schrimpf", "Gabriel Kreiman" ], "title": "On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations", "venue": null, "year": 2017 }, { "authors": [ "Ari S. Morcos", "David G.T. Barrett", "Neil C. Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Junru Wu", "Yue Wang", "Zhenyu Wu", "Zhangyang Wang", "Ashok Veeraraghavan", "Yingyan Lin" ], "title": "Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Guy Gur-Ari", "Daniel A. Roberts", "Ethan Dyer" ], "title": "Gradient Descent Happens in a Tiny Subspace", "venue": "arXiv preprint,", "year": 2018 }, { "authors": [ "Yonglong Tian", "Yue Wang", "Dilip Krishnan", "Joshua B. 
Tenenbaum", "Phillip Isola" ], "title": "Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need", "venue": "arXiv preprint,", "year": 2020 }, { "authors": [ "Jonathan Frankle", "David J Schwab", "Ari S Morcos" ], "title": "Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2021 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M. Roy", "Michael Carbin" ], "title": "The Lottery Ticket Hypothesis at Scale", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Vivek Ramanujan", "Mitchell Wortsman", "Aniruddha Kembhavi", "Ali Farhadi", "Mohammad Rastegari" ], "title": "What’s Hidden in a Randomly Weighted Neural Network", "venue": null, "year": 2019 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep Clustering for Unsupervised Learning of Visual Features", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised Feature Learning via Non-parametric Instance Discrimination", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Chengxu Zhuang", "Alex Zhai", "Daniel Yamins" ], "title": "Local aggregation for unsupervised learning of visual embeddings", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Olivier J. Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "S.M. Ali Eslami", "Aaron van den Oord" ], "title": "Data-Efficient Image Recognition with Contrastive Predictive Coding", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Talia Konkle", "George A. 
Alvarez" ], "title": "Instance-level contrastive learning yields human brain-like representation without category-supervision", "venue": "bioRxiv preprint,", "year": 2020 }, { "authors": [ "Chengxu Zhuang", "Siming Yan", "Aran Nayebi", "Martin Schrimpf", "Michael C. Frank", "James J. DiCarlo", "Daniel L.K. Yamins" ], "title": "Unsupervised Neural Network Models of the Ventral Visual Stream", "venue": "bioRxiv preprint,", "year": 2020 }, { "authors": [ "Timothy P. Lillicrap", "Daniel Cownden", "Douglas B. Tweed", "Colin J. Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "Nature Communications,", "year": 2016 }, { "authors": [ "Benjamin Scellier", "Yoshua Bengio" ], "title": "Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation", "venue": "Frontiers in Computational Neuroscience,", "year": 2017 }, { "authors": [ "Isabella Pozzi", "Sander M Bohté", "Pieter R Roelfsema" ], "title": "Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Jeremy Freeman", "Corey M Ziemba", "David J Heeger", "Eero P Simoncelli", "J Anthony Movshon" ], "title": "A functional and perceptual signature of the second visual area in primates", "venue": "Nature Neuroscience,", "year": 2013 }, { "authors": [ "Najib J Majaj", "Ha Hong", "Ethan A Solomon", "James J DiCarlo" ], "title": "Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance", "venue": "Journal of Neuroscience,", "year": 2015 }, { "authors": [ "L. A" ], "title": "Yarbus. Eye movements and vision", "venue": null, "year": 1967 }, { "authors": [ "Agostino Gibaldi", "Silvio P. 
Sabatini" ], "title": "The saccade main sequence revised: A fast and repeatable tool for oculomotor analysis", "venue": "Behavior Research Methods,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving Deep into Rectifiers: Surpassing HumanLevel Performance on ImageNet Classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "G Leuba", "R Kraftsik" ], "title": "Changes in volume, surface estimate, three-dimensional shape and total number of neurons of the human primary visual cortex from midgestation until old age", "venue": "Anatomy and Embryology,", "year": 1994 }, { "authors": [ "David H Hubel", "Torsten N Wiesel" ], "title": "Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex", "venue": "The Journal of physiology,", "year": 1962 }, { "authors": [ "J.P. Jones", "L.A. Palmer" ], "title": "The two-dimensional spatial structure of simple receptive fields in cat striate cortex", "venue": "Journal of Neurophysiology,", "year": 1987 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and Understanding Convolutional Networks", "venue": "arXiv preprint,", "year": 2013 }, { "authors": [ "Andrew G. 
Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", "venue": null, "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Maximilian Riesenhuber", "Tomaso Poggio" ], "title": "Hierarchical models of object recognition in cortex", "venue": "Nature neuroscience,", "year": 1999 }, { "authors": [ "Dave Ellemberg", "Terri L. Lewis", "Chang Hong Liu", "Daphne Maurer" ], "title": "Development of spatial and temporal vision during childhood", "venue": "Vision Research,", "year": 1999 }, { "authors": [ "Kalanit Grill-Spector", "Golijeh Golarai", "John Gabrieli" ], "title": "Developmental neuroimaging of the human ventral visual cortex", "venue": "Trends in Cognitive Sciences,", "year": 2008 }, { "authors": [ "Stephen Grossberg" ], "title": "Competitive learning: From interactive activation to adaptive resonance", "venue": "Cognitive Science,", "year": 1987 }, { "authors": [ "James C.R. Whittington", "Rafal Bogacz" ], "title": "Theories of Error Back-Propagation in the Brain", "venue": "Trends in Cognitive Sciences,", "year": 2019 }, { "authors": [ "Eric Hunsberger" ], "title": "Spiking Deep Neural Networks: Engineered and Biological Approaches to Object Recognition", "venue": null, "year": 2017 }, { "authors": [ "Sindy Löwe", "Peter O’Connor", "Bastiaan S. 
Veeling" ], "title": "Putting An End to End-to-End: Gradient-Isolated Learning of Representations", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Yuwen Xiong", "Mengye Ren", "Raquel Urtasun" ], "title": "LoCo: Local Contrastive Representation Learning", "venue": null, "year": 2020 }, { "authors": [ "Evelyn Fix", "J.L. Hodges" ], "title": "Discriminatory analysis, nonparametric discrimination", "venue": "Technical report, United States Air Force,", "year": 1951 }, { "authors": [ "Gideon Schwarz" ], "title": "Estimating the Dimension of a Model", "venue": "Annals of Statistics,", "year": 1978 }, { "authors": [ "Andrew D. Huberman", "Marla B. Feller", "Barbara Chapman" ], "title": "Mechanisms Underlying Development of Visual Maps and Receptive Fields", "venue": "Annual Review of Neuroscience,", "year": 2008 }, { "authors": [ "Howard" ], "title": "The training time of a full CORnet-S with standard Imagenet dataset for 43 epochs is ∼2.5 days. All variations with less weights/images/epochs trained in shorter time. Reference models trained for 4 days", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Particular artificial neural networks (ANNs) are the leading mechanistic models of visual processing in the primate visual ventral stream (Schrimpf et al., 2020; Kubilius et al., 2019; Dapello et al., 2020). After training on large-scale datasets such as ImageNet (Deng et al., 2009) by updating weights based on labeled images, internal representations of these ANNs partly match neural representations in the primate visual system from early visual cortex V1 through V2 and V4 to high-level IT (Yamins et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Cadena et al., 2017; Tang et al., 2018; Schrimpf et al., 2018; Kubilius et al., 2019), and model object recognition behavior can partly account for primate object recognition behavior (Rajalingham et al., 2018; Schrimpf et al., 2018).\nRecently, such models have been criticized due to how their learning departs from brain development because they require many more labeled examples than is reasonable for biological systems’ limited waking (visual) experience (Seibert, 2018; Zador, 2019). For example, all the current top models of the primate ventral stream rely on trillions of supervised synaptic updates, i.e. the training of millions of parameters with millions of labeled examples over dozens of epochs. In biological\nsystems, on the other hand, the at-birth synaptic wiring as encoded by the genome already provides structure that is sufficient for macaques to exhibit adult-like visual representations after a few months (Movshon & Kiorpes, 1988; Kiorpes & Movshon, 2004; Seibert, 2018), which restricts the amount of experience dependent learning. 
Furthermore, different neuronal populations in cortical circuits undergo different plasticity mechanisms: neurons in supragranular and infragranular layers adapt more rapidly than those in layer 4 which receives inputs from lower areas (Diamond et al., 1994; Schoups et al., 2001), while current artificial synapses, on the other hand, all change under the same plasticity mechanism. While current models provide a basic understanding of the neural mechanisms of adult ventral stream inference, can we start to build models that provide an understanding of how the ventral stream “wires itself up” – models of the initial state at birth and how it develops during postnatal life?\nRelated Work. Several papers have addressed related questions in machine learning: Distilled student networks can be trained on the outputs of a teacher network (Hinton et al., 2015; Cho & Hariharan, 2019; Tian et al., 2019), and, in pruning studies, networks with knocked out synapses perform reasonably well (Cheney et al., 2017; Morcos et al., 2018), demonstrating that models with many trained parameters can be compressed (Wu et al., 2018) which is further supported by the convergence of training gradients onto a small subspace (Gur-Ari et al., 2018). Tian et al. (2020) show that a pre-trained encoder’s fixed features can be used to train a thin decoder with performance close to full fine-tuning and recent theoretically-driven work has found that training only BatchNorm layers (Frankle et al., 2021) or determining the right parameters from a large pool of weights (Frankle et al., 2019; Ramanujan et al., 2019) can already achieve high classification accuracy. Unsupervised approaches are also starting to develop useful representations without requiring many labels by inferring internal labels such as clusters or representational similarity (Caron et al., 2018; Wu et al., 2018; Zhuang et al., 2019; Hénaff et al., 2019; Konkle & Alvarez, 2020; Zhuang et al., 2020). 
Many attempts are also being made to make the learning algorithms themselves more biologically plausible (e.g. Lillicrap et al., 2016; Scellier & Bengio, 2017; Pozzi et al., 2020). Nevertheless, all of these approaches require many synaptic updates in the form of labeled samples or precise machinery to determine the right set of weights. In this work, we take first steps of relating findings in machine learning to neuroscience and using such models to explore hypotheses about the product of evolution (a model’s “birth state”) while simultaneously reducing the number of supervised synaptic updates (a model’s visual experience dependent development) without sacrificing high brain predictivity.\nOur contributions follow from a framework in which evolution endows the visual system with a well-chosen, yet still largely random “birth” pattern of synaptic connectivity (architecture + initialization), and developmental learning corresponds to training a fraction of the synaptic weights using very few supervised labels. We do not view the proposed changes as fully biological models of post-natal development, only that they more concretely correspond to biology than current models. Solving the entire problem of development all at once is too much for one study, but even partial improvements in this direction will likely be informative to further work. Specifically,\n1. we build models with a fraction of supervised updates (training epochs and labeled images) that retain high similarity to the primate ventral visual stream (quantified by a brain predictivity score from benchmarks on Brain-Score (Schrimpf et al., 2018)) and find that layers corresponding to higher visual regions such as IT are most dependent on training, 2. we improve the “at-birth” synaptic connectivity to show that even low-capacity evolutionarily encoded information might lead to reasonable initial representations with no training at all, 3. 
we propose a thin, “critical training” technique which reduces the number of trained synapses while maintaining high brain predictivity and improves over previous computer vision attempts to minimize trained components, 4. we combine these three techniques to build models with two orders of magnitude fewer supervised synaptic updates but high brain predictivity relative to a fully trained model.
Code and pre-trained models are available through GitHub: https://anonymous.4open.science/r/anonymous-3A61/." }, { "heading": "2 MODELING PRIMATE VISION", "text": "We evaluate all models on a suite of ventral stream benchmarks in Brain-Score (Schrimpf et al., 2018; 2020), and we base the new models presented here on the CORnet-S architecture, one of the most accurate models of adult primate visual processing (Kubilius et al., 2019).
Brain-Score benchmarks. To obtain quantified scores for brain-likeness, we use a thorough set of benchmarks from Brain-Score (Schrimpf et al., 2018). To keep scores comparable, we only included those neural benchmarks from Brain-Score (Schrimpf et al., 2018) that use the same predictivity metric. All benchmarks feed a candidate model the same images that were used in the primate experiments while “recording” activations or measuring behavioral outputs. Specifically, the V1 and V2 benchmarks present 315 images of naturalistic textures and compare model representations to primate single-unit recordings from Freeman et al. (2013) (102 V1 and 103 V2 neurons); the V4 and IT benchmarks present 2,560 naturalistic images and compare models to primate Utah array recordings from Majaj et al. (2015) (88 V4 and 168 IT electrodes). A linear regression is fit from model to primate representations in response to 90% of the images, and its prediction score on the held-out 10% of images is evaluated with a Pearson correlation, cross-validated 10 times.
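To make this regression-based predictivity metric concrete, here is a minimal sketch (our own simplified NumPy re-implementation with hypothetical function and argument names, not Brain-Score's actual code, which additionally handles noise ceilings and uses its own regression machinery):

```python
import numpy as np

def neural_predictivity(model_acts, neural_resps, n_splits=10,
                        train_frac=0.9, seed=0):
    """Fit a linear regression from model activations (images x units) to
    neural responses (images x neurons) on 90% of the images, then score
    held-out predictions with a Pearson correlation, averaged over
    neurons and cross-validation splits."""
    rng = np.random.default_rng(seed)
    n_images = model_acts.shape[0]
    n_train = int(train_frac * n_images)
    scores = []
    for _ in range(n_splits):
        order = rng.permutation(n_images)
        train, test = order[:n_train], order[n_train:]
        # least-squares linear mapping, with a bias column appended
        X_train = np.column_stack([model_acts[train], np.ones(len(train))])
        X_test = np.column_stack([model_acts[test], np.ones(len(test))])
        beta, *_ = np.linalg.lstsq(X_train, neural_resps[train], rcond=None)
        pred = X_test @ beta
        # Pearson r per neuron on the held-out images, then averaged
        r = [np.corrcoef(pred[:, i], neural_resps[test][:, i])[0, 1]
             for i in range(neural_resps.shape[1])]
        scores.append(np.mean(r))
    return float(np.mean(scores))
```

On the real benchmarks, `model_acts` would hold a layer's activations to the benchmark images and `neural_resps` the recorded primate responses; the returned value corresponds to the raw predictivity before ceiling normalization.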
The behavioral benchmark presents 240 images and compares model to primate behavioral responses from Rajalingham et al. (2018). A logistic classifier is fit on models’ penultimate representations on 2,160 separate labeled images. The classifier is then used to estimate probabilities for the 240 held-out images. Per-image confusion patterns between model and primate are compared with a Pearson correlation. All benchmark scores are normalized by the respective ceiling. We primarily report the average brain predictivity score as the mean of the V1, V2, V4, IT, and behavioral scores.
We note that the Brain-Score benchmarks in this study are based on limited data and thus present a possible limitation. Nonetheless, they are the most extensive set of primate ventral stream neuronal and behavioral benchmarks currently available, and the scores generalize to new experiments (Kubilius et al., 2019).
Brain-Score provides separate sets of data as public benchmarks, which we use to determine the type of distribution in Section 4 and the layer-to-region commitments of reference models.
CORnet-S. One of the current best model architectures on the Brain-Score benchmarks is CORnet-S (Kubilius et al., 2019), a shallow recurrent model which anatomically commits to ventral stream regions. CORnet-S has four computational areas, analogous to the ventral visual areas V1, V2, V4, and IT, and a linear decoder that maps from neurons in the model’s last visual area to its behavioral choices. The recurrent circuitry (Figure 3B) uses up- and down-sampling convolutions to process features and is identical in each of the model’s visual areas (except for V1COR), but varies by the total number of neurons in each area. We base all models developed here on the CORnet-S architecture and use the same hyper-parameters as proposed by Kubilius et al. (2019). Representations are read out at the end of anatomically corresponding areas."
}, { "heading": "3 HIGH SCORES IN BRAIN PREDICTIVITY CAN BE ACHIEVED WITH FEW SUPERVISED UPDATES", "text": "We evaluated the brain predictivity scores of CORnet-S variants that were trained with a combination of fewer epochs and images. Models were trained with an initial learning rate of 0.1, divided by 10 when the loss did not improve over 3 epochs, with training stopped after three such decrements.
Figure 1 shows model scores on neural and behavioral Brain-Score measures, relative to a model trained for 43 epochs on all 1.28M labeled ImageNet images. In Panel A, we compare the average score over the five brain measures of various models to the number of supervised updates that each model was trained with, defined as the number of labeled images times the number of epochs. While a fully trained model reaches an average score of .42 after 55,040,000 supervised updates (43 epochs × 1.28M images), a model with only 100,000 updates already achieves 50% of that score, and 1,000,000 updates increase brain predictivity scores to 76%. Models are close to their converged score after 10,000,000 supervised updates, with performance nearly equal to full training (97%). Scores grow logarithmically, with an approximate 5% score increase for every order of magnitude more supervised updates.
Figures 1B and C show individual neural and behavioral scores of models trained with fewer training epochs or labeled images independently. Early- to mid-level visual representations (V1, V2, and V4 scores) are matched especially closely with only a few supervised updates, reaching 50% of the final trained model in fractions of the first epoch (Figure 1B). After only one full iteration over the training set, V1, V2, and V4 scores are close to their final score (all >80%), while IT requires two epochs to reach a comparable level.
Behavioral scores take slightly longer to converge (>80% after 7 epochs).
Similarly, when training until convergence with fractions of the 1.28M total images, 50,000 images are sufficient to obtain high neural scores (80% of full training in V1, V2, V4, IT). Behavioral scores again require more training: half the standard number of labeled images is needed to surpass 80%.
Concretely relating supervised updates to primate ventral stream development, Seibert (2018) establishes that no more than ∼4 months – or 10 million seconds – of waking visual experience is needed to reach adult-level primate IT cortex (as assessed by its capability to support adult-level object recognition). From this estimate, we can compute how many supervised updates per second the different models in Figure 1A would require (assuming those updates are evenly distributed over the 10 million seconds). For instance, the fully trained model’s 55 million supervised updates translate to 5.5 updates every second, whereas the model with 1 million updates and 76% relative brain predictivity translates to one labeled image update every 10 seconds, which appears more plausible given the upper limit of 2-3 saccades per second in humans (Yarbus, 1967; Gibaldi & Sabatini, 2020)." }, { "heading": "4 “AT-BIRTH” SYNAPTIC CONNECTIVITY YIELDS REASONABLE BRAIN PREDICTIVITY WITH NO TRAINING AT ALL", "text": "If few supervised updates can get model representations fairly close to a fully trained model (Figure 1), how close are the initial representations without any training? In relation to biology and following the introduced framework of treating all subsequent training as developmental learning, these “at-birth” synaptic connections would result from information encoded in the genome as a product of evolution.
Due to the genome’s capacity bottleneck, it is infeasible to precisely encode every synapse.
Primary visual cortex alone contains ∼1.4E8 neurons per hemisphere (Leuba & Kraftsik, 1994), each with ∼1E3 synapses, and each synapse requires ∼37 bits to specify (Zador, 2019). Thus, without any clever rules, specifying the connections in only one hemisphere of V1 could require up to ∼5.2E12 bits – orders of magnitude more than the entire genome’s 1 GB = 8E9 bits (Zador, 2019). Sampling synaptic weights from reasonably compressed distributions, on the other hand, places only small memory requirements on the genetic encoding while potentially yielding useful initial weights. Current machine learning techniques for initializing weights, such as Kaiming Normal (He et al., 2015), sample from a Gaussian distribution centered around zero.
To test the hypothesis that the genome might already encode more powerful initial representations with synaptic wiring sampled from distributions specified by only a few bits, we explored multidimensional distributions as a more expressive alternative. These distributions only require a small number of parameters, but unlike current generic initializers, we explicitly specify them for each layer. To determine the right parameterization, we compress a trained model’s weights into clusters, which we then sample from (“Weight Compression”, WC).
More specifically, for all convolutional layers except the first, we cluster the kernel weights and later sample from the clusters. We determine the number of clusters with the elbow method (Thorndike, 1953): 11 for V1, 13 for V2, 16 for V4, and 15 for IT. To capture the relative importance of clusters, we fit a normal distribution to the cluster frequency over kernels. In batch normalization layers, we fit one normal distribution each to the weights and biases. For the first convolutional layer only, we employ a Gabor prior on the weights, following studies in V1 (Hubel & Wiesel, 1962; Jones & Palmer, 1987) (Appendix B).
This results in 33 KB (4,166 parameters) to specify network initialization, compared to 423 MB for a trained model’s weights (assuming 8 bytes per parameter).
Model interpretability studies (Zeiler & Fergus, 2013; Olah et al., 2020; Cammarata et al., 2020) classify model weights in a way comparable to WC’s cluster representation. Visualizing the weight compressions from trained CORnet-S weights (Figure 2B), we find that the first layer’s Gabor filters qualitatively align with an analysis by Cammarata et al. (2020). Cluster centers seem to represent an intuitive division of channel types, with opposite types in every layer.
Applying WC to CORnet-S, we first obtain a compressed and clustered set of parameters, from which we sample entirely new weights to yield a new model, CORnet-SWC. This model is not trained at all, and we only evaluate the goodness of its initial wiring on the suite of Brain-Score benchmarks. Strikingly, we find that even without any training, CORnet-SWC achieves 54 ± 1.5% of the brain predictivity score relative to a fully-trained model (Figure 2), representing a 12 percent point improvement (n = 10 seeds; permutation test p < 1E−5) over the Kaiming Normal initialized model with a score of 43 ± 1.7%. Early ventral stream regions V1 and V2 are predicted especially well with no loss in score, but we note that these two benchmarks are less well predicted by the trained model to begin with. V4 scores also approximate those of a trained model relatively well (75%). The major drop occurs in the IT and especially behavioral scores, where CORnet-SWC only reaches 39% and 6% of the trained model’s score, respectively. Similarly, a trained linear decoder on CORnet-SWC’s IT representations only reaches 5% of a trained model’s ImageNet top-1 accuracy.
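The sampling step of Weight Compression can be sketched as follows (an illustrative simplification with hypothetical argument names; the full procedure, including the per-layer cluster counts, the batch-norm distributions, and the Gabor prior for the first layer, is described above and in Appendix B):

```python
import numpy as np

def sample_layer_weights(centers, freq_mean, freq_std, n_kernels,
                         noise_std=0.0, seed=0):
    """Illustrative Weight Compression sampler for one convolutional
    layer. `centers` holds the cluster centers obtained from a trained
    layer's kernels; cluster choice probabilities are drawn from the
    normal distribution fitted to cluster frequencies over kernels."""
    rng = np.random.default_rng(seed)
    centers = np.asarray(centers)
    n_clusters = len(centers)
    # draw (clipped to positive) cluster frequencies from the fitted
    # normal, then normalize them into sampling probabilities
    freqs = np.clip(rng.normal(freq_mean, freq_std, n_clusters), 1e-6, None)
    probs = freqs / freqs.sum()
    choices = rng.choice(n_clusters, size=n_kernels, p=probs)
    kernels = centers[choices]
    if noise_std > 0:  # optional jitter around each cluster center
        kernels = kernels + rng.normal(0.0, noise_std, kernels.shape)
    return kernels
```

With, say, 13 cluster centers of shape 3×3 for a V2 layer, the call returns a freshly sampled `(n_kernels, 3, 3)` weight tensor that requires storing only the cluster centers and two frequency parameters rather than every trained kernel.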
While sampling weights from a compression of trained weights might intuitively be expected to recover reasonable brain predictivity scores, we note that not every implementation satisfies both high brain predictivity and sufficient compression for the genome bottleneck (Appendix B.3).
Weight Compression explores the hypothesis that evolution may have discovered an initialization strategy with improved at-birth representations (relative to current initializations). WC is most likely not how evolution found the at-birth synaptic connections, but it shows that with nearly identical capacity, an alternative initialization distribution leads to networks that are more brain-like in their adult state – revealing a new space of possibilities (hypotheses) that should be considered (see Appendix B.4 for more details on biological plausibility). These findings further suggest that matching representations in higher visual regions might be especially dependent on visual experience, whereas early visual regions might already be reasonably well specified without experience." }, { "heading": "5 TRAINING THIN DOWN-SAMPLING LAYERS REDUCES THE NUMBER OF UPDATED SYNAPSES WHILE MAINTAINING HIGH BRAIN PREDICTIVITY", "text": "While improved “at-birth” connectivity can reach 54% of a fully-trained model’s score, additional experience-dependent updates appear necessary to reach higher predictivities. With standard training, each iteration simultaneously updates all of the millions of synaptic weights in the neural network, which may be difficult to implement biologically. Alternatively, learning could take place preferentially in specific components. Cortical circuits are heterogeneous and different neuronal populations undergo distinct plasticity mechanisms.
For example, neurons in supra- and infragranular layers adapt more rapidly than those in layer 4, where inputs from lower areas arrive, as observed in rat somatosensory cortex (Diamond et al., 1994) and primate V1 (Schoups et al., 2001).\nAs a proof-of-principle that training a reduced set of layers can retain high performance, we propose a novel thin training technique, which we term Critical Training (CT; Figure 3A). CT updates only the weights in critical layers, instead of updating every single model synapse. In CORnet-S, each of the blocks has one down-sampling layer to produce an area’s final representation (Figure 3B). We explore successive variants of applying CT up to a block in the architecture and then training the following blocks, e.g. freezing V1, V2, V4 with critical training of the respective down-sampling layers and additional IT training. The final CT ventral stream model is almost completely frozen and only the synapses generating each cortical area’s output are trained.\nWe compared Critical Training against two alternative approaches: 1) reducing the trained parameters by freezing entire model blocks, for instance keeping V1 and V2 blocks fixed while training V4 and IT blocks. We term this block-wise freezing and training approach Downstream Training (DT). And 2) an approach proposed by Frankle et al. (2021) where only the BatchNorm parameters in a network are trained while all other parameters are kept at their initial values (“BatchNorm, BN”).\nCompared to standard back-propagation training all the weights, all three approaches (CT, DT, BN) reduce the number of trained parameters (Figure 3C). However, while the average score with DT (gray) already drops below 65% with over a quarter of trained parameters remaining and BN drops to 62% with very few parameters, CT (blue) maintains over 75% with only 1.4 out of 52.8 million parameters trained. 
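The parameter selection behind Critical Training can be sketched in a few lines (a minimal illustration; `critical_suffix` is a hypothetical naming convention, and a real CORnet-S checkpoint uses its own names for the down-sampling convolutions, which one would match instead before setting the corresponding requires-grad flags in the training framework):

```python
from math import prod

def critical_training_plan(param_shapes, critical_suffix="conv_out.weight"):
    """Mark only each block's final down-sampling layer as trainable and
    freeze everything else. `param_shapes` maps parameter names to their
    shapes; returns the per-parameter trainable flags plus the trained
    and total parameter counts."""
    trainable = {name: name.endswith(critical_suffix)
                 for name in param_shapes}
    n_trained = sum(prod(shape) for name, shape in param_shapes.items()
                    if trainable[name])
    n_total = sum(prod(shape) for shape in param_shapes.values())
    return trainable, n_trained, n_total
```

Applied to the full CORnet-S parameter list, a selection rule of this kind is what reduces the trained set from 52.8 million to 1.4 million parameters.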
The choice of which critical layers to train also matters: training the connecting layers between regions – i.e. the last (down-sampling, default) or first (up-sampling) layer – retains most of the performance whereas training layers such as BatchNorm performs worse.
By reducing the number of trained parameters, Critical Training also yields engineering benefits with more than 40% of the ImageNet score maintained at < 3% of parameters trained – a significant improvement over 22% accuracy with BatchNorm (Frankle et al., 2021) while adding only a small number of additional weights. CT further reduces training time by 30% per epoch." }, { "heading": "6 HIGH BRAIN PREDICTIVITY CAN BE ACHIEVED WITH A RELATIVELY SMALL NUMBER OF SUPERVISED SYNAPTIC UPDATES", "text": "All three training reduction methods independently minimize the number of supervised synaptic updates required to reach a reasonably high brain predictivity score. Reducing the number of supervised updates minimizes the number of epochs and images (Section 3); Weight Compression (WC) improves the at-birth synaptic connectivity for high initial scores with no training at all (Section 4); and Critical Training (CT) reduces the number of synapses that are updated during training (Section 5). Testing synergies between these strategies, we combined all three methods to build novel models that only require a small number of supervised synaptic updates to reasonably capture the mechanisms of adult ventral visual stream processing and object recognition behavior.
Figure 4A shows the average brain predictivity of a range of models with varying numbers of supervised synaptic updates relative to a standard trained CORnet-S (black dot, 3,000 trillion supervised synaptic updates).
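This 3,000-trillion figure follows from a back-of-the-envelope product of epochs, labeled images, and trained synapses (our interpretation of the counting, which reproduces the reported number for the fully trained model):

```python
def supervised_synaptic_updates(epochs, n_images, n_trained_params):
    """Back-of-the-envelope count: each labeled image in each epoch
    contributes one supervised update to every trained synapse."""
    return epochs * n_images * n_trained_params

# Fully trained CORnet-S: 43 epochs x 1.28M images x 52.8M parameters
full = supervised_synaptic_updates(43, 1_280_000, 52_800_000)
assert round(full / 1e15, 1) == 2.9  # ~3,000 trillion updates
```

Under the same accounting, shrinking any of the three factors (epochs, images, or trained parameters) multiplicatively shrinks the update budget, which is why the three reduction methods compose.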
With a reduced number of supervised updates (training epochs and labeled images) but standard initialization and training all weights (light blue dots), models require 5.2 trillion updates to achieve >50% of the score of a fully trained model and about 100 trillion updates to reach 80% of the score. Adding WC+CT (dark blue dots), the corresponding model already reaches 53% at birth with 0 supervised synaptic updates. At 0.5% of the updates of a fully trained model (14 trillion vs. 3,000 trillion), models then reach 79% of the score (model with modeling choices marked in Figures 1 to 3). Continuing standard training from this 79% model, we can achieve 100% of the score with 15 additional epochs of 1,028 trillion supervised synaptic updates (one third of the fully trained model’s 3,000 trillion updates). Reference models (gray dots) MobileNet (Howard et al., 2017) and ResNet (He et al., 2016) obtain high scores, but also require many supervised synaptic updates. HMAX (Riesenhuber & Poggio, 1999) is fully specified with no updates but lacks in score.
We next examined interactions between methods by comparing models initialized with WC and trained with CT to models with standard initialization and training all weights, when both are trained with fewer epochs and images. Figure 4B shows the percent point difference between the two model families. WC+CT yield strong benefits (green numbers) in a regime with few supervised updates, improving by up to 27 percent points when training for only 1 epoch on 1,000 images. With many updates, on the other hand, WC+CT is less advantageous than standard training (red numbers): with all 43 epochs and 1.28M images, the score reduces by 17 percent points. WC+CT therefore most positively interact with a small budget of supervised updates (which is the focus of this work)."
}, { "heading": "7 DISSECTING TRAINING REDUCTIONS", "text": "We asked whether the developed techniques would generalize to architectures other than the CORnet-S architecture they were based on. Beyond establishing the methods as more general, this can be seen as a novel way to construct model taxonomies. We therefore applied Weight Compression (WC) and Critical Training (CT) to ResNet-50 (He et al., 2016) and MobileNet-V1 (Howard et al., 2017) architectures, both high-performing models on Brain-Score. We used WC distributions determined on CORnet-S, i.e. we tested transfer without re-fitting. WC+CT maintain 91% of the score in ResNet despite an almost 80% reduction in parameters. When applied to MobileNet, the average score drops by 22% and parameters are reduced less strongly (43%). This difference in performance could be due to MobileNet already being very compressed, or having a less similar architecture.\nWith most analyses so far comparing an average score, we dissected the relative contributions of WC and CT to individual benchmarks (Figure 5B). We compared KN to WC initialization, as well as resulting models after critical training (KN+CT and WC+CT). WC initialization improves most over KN in early visual regions V1 and V2, while additional training with CT is most beneficial in mid- to high-level visual cortex V4 and IT, as well as the behavioral benchmark.\nFigure 5: Transfer to other networks and individual scores comparison. A Transfer to other networks. We sample from WC initializations determined on CORnet-S, followed by Critical Training of only down-sampling layers. B Absolute scores on individual benchmarks of combinations of initialization (KN/WC, Figure 2), and with critical training (CT, Figure 3) techniques."
}, { "heading": "8 DISCUSSION", "text": "We developed a range of models with neural and behavioral scores approaching those of the current leading model of the adult ventral visual stream as quantified in Brain-Score, while requiring only a fraction of supervised synaptic updates. These models were built by complementarily 1) reducing the number of supervised updates, i.e. training epochs and labeled images; 2) improving the “at birth” distribution of synaptic connectivity; and 3) training only critical synapses at the end of each model area. The techniques and resulting models proposed here are first steps to more closely modeling not just adult primate visual processing, but also exploring the underlying mechanisms of evolution and developmental learning.\nThese proof-of-principle demonstrations are far from accounting for the rich information encoded in the genome or the developmental learning that together result in adult mechanisms of visual processing, and require further experimental validation. We here started from CORnet-S, one of the leading models on Brain-Score, which does not fully predict all brain measurements (0.42 absolute score). We verified favorable transfer to models with similar architectures such as ResNet, but generalization to an already compressed MobileNet was limited (Figure 5A).\nRelating to genomic mechanisms, the proposed techniques should generalize to other domains such as auditory processing. With the capacity bottleneck in the genome, mechanisms for wiring up would likely be shared between similar systems. The fact that early visual areas converge earlier during training (Figure 1) and are better predicted than higher areas by WC initialization is consistent with developmental studies of the primate ventral stream.
In humans, behaviors that rely on low-level spatial and temporal processing of visual inputs reach adult-like performance considerably earlier than complex visual behaviors that rely on higher cortical regions, such as face perception (Ellemberg et al., 1999; Grill-Spector et al., 2008).\nA critical component in more closely modeling primate development is to reduce the dependence on labels altogether. Recent unsupervised approaches are starting to rival the classification performance of supervised models (Caron et al., 2018; Hénaff et al., 2019; Zhuang et al., 2020) and combining them with the advances presented here could further reduce the number of synaptic updates. More precise biological measurements are required to quantify the number of (parallel) experience-dependent updates. Current unsupervised techniques however still require back-propagation which is routinely criticized as non-biological, among others due to the propagation of gradients (Grossberg, 1987; Whittington & Bogacz, 2019; Hunsberger, 2017). Local learning rules (Löwe et al., 2019; Xiong et al., 2020) might alleviate these concerns and with critical training (Figure 3), it could be sufficient to learn in only a subset of layers.\nThe changes to model initialization and training presented here serve as a proof-of-principle that models can be changed to more closely align with primate development by reducing training steps with labeled images and improving initialization. It is also possible to achieve high brain predictivity when training only a fraction of weights, but all these models are still far from the actual biological mechanisms. We expect future work in this direction to further close the gap with improved evolutionarily encoded wiring mechanisms and developmental learning rules."
}, { "heading": "A BENCHMARK DETAILS", "text": "We use the benchmarks as implemented in www.github.com/brain-score/brain-score at commit 96b0711, and convert base models to brain models with www.github.com/brain-score/model-tools at commit 2f778c6. Images were presented at 4 degrees without aperture for the V1 and V2 benchmarks and at 8 degrees for the V4, IT, and behavior benchmarks. Models committed to an input size of 8 degrees visual angle." }, { "heading": "B WEIGHT COMPRESSION DETAILS", "text": "For all convolutional layers except the first, we cluster kernel weights in a layer using the k-means algorithm (Fix & Hodges, 1951). The number of clusters is determined using the elbow method (Thorndike, 1953) (see Table 1). To capture the relative importance of clusters we fit a normal distribution Nf for each cluster with µf as the cluster frequency over kernels and σf as the frequency standard deviation. To sample weights for a kernel, we first sample a cluster distribution i ∼ Nf per kernel and then obtain channel weights by sampling from a Gaussian with ~µi as the cluster center and the standard deviation ~σi of clustered weights. In batch normalization layers, we fit one normal distribution each to the weights and biases." }, { "heading": "B.1 COMPRESSING THE FIRST LAYER WITH A GABOR PRIOR", "text": "The weight compression approach we use in Section 4 is based on different initialization techniques, applied to different layers. For the very first layer of size 7×7 we found a Gabor filter most effective following studies in V1 (Hubel & Wiesel, 1962; Jones & Palmer, 1987).
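The cluster-based sampling procedure of Appendix B can be sketched in a few lines of Python. This is a minimal illustration only: the cluster statistics below are made-up placeholders (in the paper they come from k-means on trained CORnet-S layers), and drawing a noisy per-cluster frequency before picking a cluster is our reading of “sample a cluster distribution i ∼ Nf per kernel”, not the authors’ released code.

```python
import random

# Hypothetical fitted statistics for one convolutional layer with 3-channel
# kernels (all numbers are invented for illustration).
clusters = [
    {"mu": [0.30, -0.10, 0.05], "sigma": [0.02, 0.03, 0.01],  # cluster center / spread
     "freq_mu": 0.6, "freq_sigma": 0.05},                     # N_f: frequency over kernels
    {"mu": [-0.20, 0.15, 0.00], "sigma": [0.04, 0.02, 0.02],
     "freq_mu": 0.4, "freq_sigma": 0.05},
]

def sample_kernel(clusters, rng=random):
    """Sample one kernel: draw a noisy frequency per cluster (i ~ N_f),
    pick the cluster with the largest draw, then sample channel weights
    from a Gaussian around that cluster's center."""
    draws = [rng.gauss(c["freq_mu"], c["freq_sigma"]) for c in clusters]
    chosen = clusters[max(range(len(clusters)), key=draws.__getitem__)]
    return [rng.gauss(m, s) for m, s in zip(chosen["mu"], chosen["sigma"])]

# Generate 100 new kernels for this layer from the compressed statistics.
kernels = [sample_kernel(clusters) for _ in range(100)]
```

Only the per-cluster statistics need to be stored, which is where the compression comes from: the layer is summarized by a handful of numbers instead of its full weight tensor.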
To generate the Gabor kernels we fit trained channel weights to a Gabor function\nG_{θ,f,φ,n_x,n_y,C}(x, y) = 1/(2π σ_x σ_y) exp[−0.5 (x_rot²/σ_x² + y_rot²/σ_y²)] cos(2π f x_rot + φ) C (1)\nwhere\nx_rot = x cos(θ) + y sin(θ), y_rot = −x sin(θ) + y cos(θ) (2)\nσ_x = n_x/f, σ_y = n_y/f (3)\nx_rot and y_rot are the orthogonal and parallel orientations relative to the grating, θ is the angle of the grating orientation, f is the spatial frequency of the grating, φ is the phase of the grating relative to the Gaussian envelope, σ_x and σ_y are the standard deviations of the Gaussian envelope orthogonal and parallel to the grating, which can be defined as multiples (n_x and n_y) of the inverse of the grating frequency, and C is a scaling factor. The function is fit per channel, which leads to a set of Gabor parameters for each of the 3 RGB channels. We then fit a multidimensional mixture of Gaussians to the combination of all filter parameters per kernel, resulting in a kernel parameter set. For the three RGB input channels in the first layer and the 8 Gabor parameters we therefore fit 3 × 8 = 24 parameters per kernel. We evaluate the best number of components (number of distinct Gaussian distributions) based on the Bayesian Information Criterion (Schwarz, 1978) and use 4 components for the first layer of CORnet-S. To generate new kernels we sample a kernel parameter set from this mixture distribution and apply them to the described Gabor function that spans the weight values." }, { "heading": "B.2 COMPRESSING BATCHNORM LAYERS", "text": "In addition to convolutional layers, models consist of several Batchnorm layers, which contain a learnable bias and weight term. To initialize these terms, we fit a normal distribution per weight and bias vector of the trained values and sample from this distribution. Note that BatchNorm layers contain running average means and standard deviations for normalization purposes, which are applied at validation time.
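The Gabor generator of Eqs. (1)–(3) can be transcribed directly into code. This is a sketch, not the authors’ implementation: the carrier is evaluated at the rotated coordinate, cos(2πf·x_rot + φ), as in a standard Gabor filter (the printed equation drops the spatial argument of the cosine), and the grid size and parameter values below are arbitrary.

```python
import math

def gabor(theta, f, phi, nx, ny, C, size=7):
    """Evaluate the Gabor function on a size x size grid centered at the origin."""
    sx, sy = nx / f, ny / f                                   # Eq. (3)
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)    # Eq. (2)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-0.5 * (xr ** 2 / sx ** 2 + yr ** 2 / sy ** 2))
            # Eq. (1): scaled Gaussian envelope times an oriented grating.
            row.append(C / (2 * math.pi * sx * sy) * env
                       * math.cos(2 * math.pi * f * xr + phi))
        kernel.append(row)
    return kernel

# Example 7x7 kernel with illustrative parameters.
k = gabor(theta=0.5, f=0.2, phi=0.0, nx=1.0, ny=1.5, C=1.0)
```

With φ = 0 the filter is even-symmetric, so the kernel is invariant under (x, y) → (−x, −y).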
Those terms are set to zero when no training has happened, but cause score changes once the model has processed the dataset. During training the mean and standard deviation of the current batch are used instead." }, { "heading": "B.3 ALTERNATIVE APPROACHES", "text": "We have explored a variety of weight compression methods applied to different layers and evaluated their performance “at birth” without training and when trained with critical training.\nFigure 7 shows brain predictivities of several alternative compression methods implemented as follows:\n\n• WC Weight compression approach with clustering as described in Section 4, using a Gabor prior approach for the first layer, noisy cluster sampling for convolutional layers and fitted normal distributions for Batchnorm layers (4,166 parameters for CORnet-S).\n\nWe speculate that model performance would grow roughly logarithmically with the number of clusters: i.e. a small number of clusters can already yield useful wiring (as explored here), while adding more clusters will have a positive effect but with decreasing effect size." }, { "heading": "B.4 BIOLOGICAL PLAUSIBILITY", "text": "Weight Compression (WC)’s cluster-based initialization generates an “at-birth” network that already captures some useful aspects of the visual inputs in a compressed manner and allows the system to learn faster. The way that these clusters are determined (from a previously trained network) is not biologically plausible and does not correspond to any evolutionary mechanism we are aware of. However, independent of how these clusters are determined, WC shows that it is possible to encode certain priors about a system’s wiring diagram in a very compressed manner.
Evolution also acquired aspects of the visual inputs (with a different strategy) and encoded them in the genome in a lower information regime.\nFor example, we know that there are multiple neuro-developmental mechanisms, such as spontaneous retinal waves and axon guidance cues, that significantly shape the architecture and function of the visual system requiring no (or very little) visual experience (Huberman et al., 2008). These mechanisms depend on a relatively small number of proteins encoded in a genome and give rise to a highly complex pattern of at-birth synaptic connectivity representing a very large compression of information." }, { "heading": "C WC INITIALIZED AND CT TRAINED MODEL ANALYSIS", "text": "Our best model WC+CT benefits from a combination of improved initialization through weight compression, and critical training. Figure 8A shows models with standard initialization and training all weights, but with fewer supervised updates (cf. Figure 1), models that only train down-sampling layers (CT), and models that combine critical training with weight compression (WC+CT). A model initialized with weight compression (only WC) achieves a 54% brain predictivity score with 0 supervised synaptic updates. Figure 8B and C show detailed brain predictivity scores, relative to a fully trained model, for models initialized and trained with WC+CT (B) and models initialized with standard Kaiming Normal and training all weights (C) when trained with a range of epochs and labeled images. The specific benchmark scores when either training with all labeled images for a varying number of epochs (Figure 9A) or when training with fewer labeled images until convergence (Figure 9B) show that the early visual benchmarks achieve the best results, relative to a fully trained model.
The V1 score is identical over all training states, since we do not train the V1 area.\nNotably, ImageNet performance of these networks seems not to be predictive of their brain predictivity, since even untrained networks with at-chance ImageNet performance correspond reasonably well to e.g. V1 (Figures 1 and 2). New normative tasks might be required to explain these results, such as model robustness to image corruptions (Dapello et al., 2020)." }, { "heading": "D DISSECTING TRAINING REDUCTIONS – DETAILS", "text": "" }, { "heading": "D.1 TRANSFER TO RESNET AND MOBILENET", "text": "To show the generalization of our approach we applied the weight compression methods to a ResNet50 (He et al., 2016) and a MobileNet (Howard et al., 2017) (version 1, multiplier 1.0, image size 224) architecture. We do not regenerate sampling distributions or clusters based on the new architectures’ trained weights, but used the CORnet-S based distributions to sample new weights for the different architectures. Since CORnet-S is inspired by ResNet modules, we applied our critical training approach by training all conv3 layers (equivalent down sampling layers) of ResNet50. For MobileNet we explored various layer mappings. When training only the very few layers that result in reduced feature size, which are implemented as depthwise separable convolutional layers and appear three times overall, performance dropped close to random. Those layers however are mapped to CORnet-S’ conv2 layers due to their 3 × 3 kernels whereas critical training in CORnet-S trains conv3 down-sampling layers with a kernel size of 1 × 1. To transfer our critical training approach, we therefore additionally train the 1 × 1 MobileNet layers corresponding to conv3. This training version allows for more training but still reduces the number of trained parameters by 43% while maintaining 78% of the original score.
For both transfer methods we initialize the first layer using the Gabor method based on CORnet-S’s mixture-of-Gaussian distribution. Since the Gabor function is scalable we can produce Gabor kernels of varying size. Furthermore we disable BatchNorm biases and weights in all transfer models by freezing them to default values. We found that transferring those distributions to new architectures harms brain predictivity scores. Nevertheless, the BatchNorm layers still normalize activations by applying the running average and standard deviation." }, { "heading": "D.2 COMPARISON OF TECHNIQUES TO REDUCE SUPERVISED SYNAPTIC UPDATES (FIG. 5B)", "text": "To analyse the relative contributions of Weight Compression and Critical Training we compare brain predictivity scores of different models in Figure 5B:\n\n• KN A model initialized by standard Kaiming Normal initialization without training.\n\n• WC A model initialized by our Weight Compression initialization, described in Section 4, without training.\n\n• KN+CT The KN-initialized model trained with Critical Training until convergence, i.e. three downstream layers and the decoder are trained and all other layers remain unchanged.\n\n• WC+CT The WC-initialized model with Critical Training. V1 scores do not change because weights in the V1 model area are all frozen." }, { "heading": "E TRAINING DETAILS", "text": "We used PyTorch 0.4.1 and trained the model using the ImageNet 2012 training set Deng et al. (2009). We used a batch size of 256 images and trained on a QuadroRTX6000 GPU until convergence. We start with a learning rate of 0.1 and decrease it four times by a factor of ten when training loss does not decrease over a period of three epochs. For optimization, we use Stochastic Gradient Descent with a weight decay of 0.0001, momentum 0.9, and a cross-entropy loss between image labels and model logits. We trained all models with these settings except the standard MobileNet, where we used the pretrained TensorFlow model.
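The learning-rate schedule described in Appendix E (start at 0.1, divide by ten up to four times whenever the training loss has not decreased for three epochs) can be replayed with a small helper. This is an illustrative sketch; `plateau_schedule` and its arguments are our own names, not part of the released code, and the loss trace below is invented.

```python
def plateau_schedule(epoch_losses, lr0=0.1, factor=0.1, patience=3, max_drops=4):
    """Return the learning rate used at each epoch: start at lr0 and multiply by
    `factor` whenever the training loss has not improved for `patience`
    consecutive epochs, at most `max_drops` times."""
    lr, best, bad, drops, lrs = lr0, float("inf"), 0, 0, []
    for loss in epoch_losses:
        lrs.append(lr)                 # rate in effect for this epoch
        if loss < best:
            best, bad = loss, 0        # improvement resets the patience counter
        else:
            bad += 1
            if bad >= patience and drops < max_drops:
                lr *= factor
                drops += 1
                bad = 0
    return lrs

# Invented loss trace: three improving epochs, a three-epoch plateau, then a drop.
lrs = plateau_schedule([5.0, 4.0, 3.0, 3.1, 3.2, 3.3, 2.0])
```

On this trace the rate stays at 0.1 for six epochs and is cut to 0.01 once the three-epoch plateau is detected.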
Since the number of epochs for this model is not clearly stated, we use the published value of 100 training epochs Howard et al. (2017). The training time of a full CORnet-S with the standard ImageNet dataset for 43 epochs is ∼2.5 days. All variations with fewer weights/images/epochs trained in less time. Reference models trained for 4 days at most under the described settings. If not further specified, we show results of one training run. When showing error bars we used seeds 0, 42 and 94.\nCode to reproduce our analyses from scratch, including the framework for weight compression and critical training, as well as pre-trained models, will be made available through GitHub." } ]
2,021
WIRING UP VISION: MINIMIZING SUPERVISED SYNAPTIC UPDATES NEEDED
SP:9070183afc9422af7dcef84aea785cb59bbba3ae
[ "This paper develops new stability bounds for SGD. The main difference from the existing studies is that they consider stability bounds for normalized loss functions where the parameters are normalized to have a norm of $1$. This paper considers both convex and nonconvex cases. For the convex case, the authors develop uniform stability bounds and high-probability bounds. For the nonconvex case, the authors develop on-average stability bounds for neural networks. Experimental results are also given.", "This paper considers the generalization bound for stochastic gradient descent. The authors leverage normalized loss function to analyze the stability of SGD algorithms which further yields the generalization bound. They provide the on-average stability result for non-convex optimization under the ReLU neural network setting. The theoretical results deepen our understanding of the performance of the SGD algorithm and an experiment is provided to illustrate theoretical findings. ", "This paper conducted a stability analysis of Stochastic Gradient Descent (SGD) for empirical risk minimization induced by the so-called normalized loss function. Here, the normalization is taken with respect to parameters involved in an individual loss; see (4) for the definition of the normalized loss function. The paper should be regarded as a theoretical paper. The main results are stability bounds of SGD for convex and nonconvex ERM schemes. ", "This paper considers the problem of understanding the generalization of SGD using the stability framework. The well-known result in this line of work is the paper by Hardt'16. In Hardt'16, the stability is measured using the difference between the \"actual\" weights of two copy of SGD which differ in a single data point. The main observation by the authors in this paper is that in many cases, the loss function is invariant to the scaling of weights. 
Then, they reformulate the stability analysis using the \"normalized loss function\" which is defined by l^alpha(w,z) = loss(alpha*w/||w||,z) where alpha is a constant. Their main results are the new stability analysis for this new notion for convex and non-convex settings. Specifically, for the convex case the analysis is very similar to the Hardt paper. For the non-convex the authors define a new measure for generalization \"zeta\" in Theorem 4 which governs the stability." ]
We prove new generalization bounds for stochastic gradient descent for both the convex and non-convex cases. Our analysis is based on the stability framework. We analyze stability with respect to the normalized version of the loss function used for training. This leads to investigating a form of angle-wise stability instead of euclidean stability in weights. For neural networks, the measure of distance we consider is invariant to rescaling the weights of each layer. Furthermore, we exploit the notion of on-average stability in order to obtain a data-dependent quantity in the bound. This data-dependent quantity is seen to be more favorable when training with larger learning rates in our numerical experiments. This might help to shed some light on why larger learning rates can lead to better generalization in some practical scenarios.
[]
[ { "authors": [ "Raef Bassily", "Vitaly Feldman", "Cristóbal Guzmán", "Kunal Talwar" ], "title": "Stability of stochastic gradient descent on nonsmooth convex losses", "venue": "Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Olivier Bousquet", "André Elisseeff" ], "title": "Stability and generalization", "venue": "J. Mach. Learn. Res.,", "year": 2002 }, { "authors": [ "André Elisseeff", "Theodoros Evgeniou", "Massimiliano Pontil" ], "title": "Stability of randomized learning algorithms", "venue": "J. Mach. Learn. Res.,", "year": 2005 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "CoRR, abs/1706.02677,", "year": 2017 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Fengxiang He", "Tongliang Liu", "Dacheng Tao" ], "title": "Control batch size and learning rate to generalize well: Theoretical and empirical evidence", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Stanislaw Jastrzebski", "Maciej Szymczak", "Stanislav Fort", "Devansh Arpit", "Jacek Tabor", "Kyunghyun Cho", "Krzysztof Geras" ], "title": "The break-even point on optimization trajectories of deep neural networks", 
"venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Ilja Kuzborskij", "Christoph H. Lampert" ], "title": "Data-dependent stability of stochastic gradient descent", "venue": "In Proceedings of the 35th International Conference on Machine Learning, ICML 2018,", "year": 2018 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." ], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "Qianli Liao", "Brando Miranda", "Andrzej Banburski", "Jack Hidary", "Tomaso A. 
Poggio" ], "title": "A surprising linear relationship predicts test performance in deep networks", "venue": "CoRR, abs/1807.09659,", "year": 2018 }, { "authors": [ "Tongliang Liu", "Gábor Lugosi", "Gergely Neu", "Dacheng Tao" ], "title": "Algorithmic stability and hypothesis complexity", "venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Ben London" ], "title": "A pac-bayesian analysis of randomized learning with application to stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Kaifeng Lyu", "Jian Li" ], "title": "Gradient descent maximizes the margin of homogeneous neural networks", "venue": "CoRR, abs/1906.05890,", "year": 2019 }, { "authors": [ "Mor Shpigel Nacson", "Nathan Srebro", "Daniel Soudry" ], "title": "Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Proceedings of The 28th Conference on Learning Theory, COLT 2015,", "year": 2015 }, { "authors": [ "Tomaso A. Poggio", "Andrzej Banburski", "Qianli Liao" ], "title": "Theoretical issues in deep networks: Approximation, optimization and generalization", "venue": "CoRR, abs/1908.09375,", "year": 2019 }, { "authors": [ "Samuel L. Smith", "Quoc V. 
Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zhuoning Yuan", "Yan Yan", "Rong Jin", "Tianbao Yang" ], "title": "Stagewise training accelerates convergence of testing error over SGD", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Y. Zhou", "Y. Liang", "H Zhang" ], "title": "Understanding generalization error of sgd in nonconvex optimization", "venue": "Machine Learning,", "year": 2021 }, { "authors": [ "Hardt" ], "title": "2016), uniform stability with respect to the same loss the algorithm is executed on is considered. This is a natural choice, however if we are interested in the 0− 1 loss, different set of parameters w, w′ can represent equivalent classifiers (that is, predict the same label for any input). This is the case for logistic regression since any rescaling of the parameters yields the same classifier", "venue": null, "year": 2016 }, { "authors": [ "Hardt" ], "title": "2016)) Assume that the loss function f(·; z) is β-smooth, convex and L−Lipschitz for every z. Suppose that we run SGD with step sizes αt ≤ 2/β for T", "venue": "λi. 
For ease of comparison,", "year": 2016 }, { "authors": [ "Hardt" ], "title": "t be the output of A after t steps on training set S (i) for some i ∈", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the last few years, deep learning has succeeded in establishing state-of-the-art performances in a wide variety of tasks in fields like computer vision, natural language processing and bioinformatics (LeCun et al., 2015). Understanding when and how these networks generalize better is important to keep improving their performance. Many works starting mainly from Neyshabur et al. (2015), Zhang et al. (2017) and Keskar et al. (2017) hint at a rich interplay between regularization and the optimization process of learning the weights of the network. The idea is that a form of inductive bias can be realized implicitly by the optimization algorithm. The most popular algorithm to train neural networks is stochastic gradient descent (SGD). It is therefore of great interest to study the generalization properties of this algorithm. An approach that is particularly well suited to investigate learning algorithms directly is the framework of stability (Bousquet & Elisseeff, 2002), (Elisseeff et al., 2005). It is argued in Nagarajan & Kolter (2019) that generalization bounds based on uniform convergence might be condemned to be essentially vacuous for deep networks. Stability bounds offer a possible alternative by trying to bound directly the generalization error of the output of the algorithm. The seminal work of Hardt et al. (2016) exploits this framework to study SGD for both the convex and non-convex cases. The main intuitive idea is to look at how much changing one example in the training set can generate a different trajectory when running SGD. If the two trajectories must remain close to each other then the algorithm has better stability.\nThis raises the question of how to best measure the distance between two classifiers. Our work investigates a measure of distance respecting invariances in homogeneous neural networks (and linear classifiers) instead of the usual euclidean distance.
The measure of distance we consider is directly related to analyzing stability with respect to the normalized loss function instead of the standard loss function used for training. In the convex case, we prove an upper bound on uniform stability with respect to the normalized loss function, which can then be used to prove a high probability bound on the test error of the output of SGD. In the non-convex case, we propose an analysis directly targeted toward homogeneous neural networks. We prove an upper bound on the on-average stability with respect to the normalized loss function, which can then be used to give a generalization bound on the test error. One advantage of our approach is that we do not need to assume that the loss function is bounded. Indeed, even if the loss function used for training is unbounded, the normalized loss is necessarily bounded.\nOur main results for neural networks involve a data-dependent quantity that we estimate during training in our numerical experiments. The quantity is the sum over each layer of the ratio between the norm of the gradient for this layer and the norm of the parameters for the layer. We observe that larger learning rates lead to trajectories in parameter space keeping this quantity smaller during training. There are two ways to keep our data-dependent quantity small during training. The first is by facilitating convergence (having smaller norms for the gradients). The second is by increasing the weights of the network. If the weights are larger, the same magnitude for an update in weight space results in a smaller change in angle (see Figure 1). In our experiments, larger learning rates are seen to be more favorable in both regards.\nOur main contributions are summarized as follows:" }, { "heading": "2 RELATED WORK", "text": "Normalized loss functions have been considered before (Poggio et al., 2019), (Liao et al., 2018). In Liao et al.
(2018), test error is seen to be well correlated with the normalized loss. This observation is one motivation for our study. We might expect generalization bounds on the test error to be better by using the normalized surrogate loss in the analysis. Poggio et al. (2019) write down a generalization bound based on Rademacher complexity, but motivated by the possible limitations of uniform convergence for deep learning (Nagarajan & Kolter, 2019), we take the stability approach instead.\nGeneralization of SGD has been investigated before in a large body of literature. Soudry et al. (2018) showed that gradient descent converges to the max-margin solution for logistic regression and Lyu & Li (2019) provides an extension to deep non-linear homogeneous networks. Nacson et al. (2019) gives similar results for stochastic gradient descent. From the point of view of stability, starting from Hardt et al. (2016) without being exhaustive, a few representative examples are Bassily et al. (2020), Yuan et al. (2019), Kuzborskij & Lampert (2018), Liu et al. (2017), London (2017).\nSince the work of Zhang et al. (2017) showing that currently used deep neural networks are so overparameterized that they can easily fit random labels, taking properties of the data distribution into account seems necessary to understand generalization of deep networks. In the context of stability, this means moving from uniform stability to on-average stability. This is the main concern of the work of Kuzborskij & Lampert (2018). They develop data-dependent stability bounds for SGD by extending over the work of Hardt et al. (2016). Their results have a dependence on the risk of the initialization point and the curvature of the initialization. They have to assume a bound on the noise of the stochastic gradient. We do not make this assumption in our work.
Furthermore, we maintain in our bounds for neural networks the properties after the “burn-in” period and therefore closer to\nthe final output since we are interested in the effect of the learning rate on the trajectory. This is motivated by the empirical work of Jastrzebski et al. (2020) arguing that in the early phase of training, the learning rate and batch size determine the properties of the trajectory after a “break-even point”. Another work interested in on-average stability is Zhou et al. (2021). Differently from our work, their approach makes the extra assumptions that the variance of the stochastic gradients is bounded and also that the loss is bounded. Furthermore, our analysis directly exploits the structure of neural networks and the properties following from using homogeneous non-linearities.\nIt has been observed in the early work of Keskar et al. (2017) that training with larger batch sizes can lead to a deterioration in test accuracy. The simplest strategy to reduce (at least partially) the gap with small batch training is to increase the learning rate (He et al., 2019), (Smith & Le, 2018), (Hoffer et al., 2017), (Goyal et al., 2017). We choose this scenario to investigate empirically the relevance of our stability bound for SGD on neural networks. Note that the results in Hardt et al. (2016) are more favorable to smaller learning rates. It seems therefore important in order to get theory closer to practice to understand better in what sense larger learning rates can improve stability." }, { "heading": "3 PRELIMINARIES", "text": "Let l(w, z) be a non-negative loss function. Furthermore, let A be a randomized algorithm and denote by A(S) the output of A when trained on training set S = {z1, · · · , zn} ∼ Dn. The true risk for a classifier w is given as\nLD(w) := Ez∼Dl(w, z)\nand the empirical risk is given by\nLS(w) := 1 n ∑n i=1 l(w, zi).\nWhen considering the 0− 1 loss of a classifier w, we will write L0−1D (w). 
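As a concrete illustration, the empirical risk $L_S(w)$ and the empirical 0-1 loss can be computed as below. The logistic loss, the random data and the classifier here are placeholders chosen for illustration only, not part of the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data and linear classifier (hypothetical, for illustration).
X = rng.normal(size=(100, 4))
y = rng.choice([-1.0, 1.0], size=100)
w = rng.normal(size=4)

margins = y * (X @ w)
per_example_loss = np.log1p(np.exp(-margins))  # surrogate loss l(w, z_i)
L_S = per_example_loss.mean()                  # empirical risk L_S(w)
L_01 = (margins <= 0).mean()                   # 0-1 loss on the sample
print(round(L_S, 3), round(L_01, 3))
```

The test error of interest, $L^{0-1}_\mathcal{D}(w)$, would replace the sample average in the last line by an expectation over fresh draws from $\mathcal{D}$.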
Furthermore, we will add a superscript $\alpha$ when the normalized losses $l_\alpha$ are under consideration (these will be defined more clearly in the subsequent sections, respectively for the convex case and the non-convex case). Our main interest is to ensure a small test error, and so we want to bound $L^{0-1}_\mathcal{D}(w)$. The usual approach is to minimize a surrogate loss upper bounding the 0-1 loss. In this paper, we consider stochastic gradient descent with different batch sizes to minimize the empirical surrogate loss. The update rule of this algorithm for learning rates $\lambda_t$ and a subset $B_t \subset S$ of size $B$ is given by
$$w_{t+1} = w_t - \lambda_t \frac{1}{B} \sum_{z_j \in B_t} \nabla l(w_t, z_j). \tag{1}$$
We assume sampling uniformly with replacement in order to form each batch of training examples. In order to investigate the generalization of this algorithm, we consider the framework of stability (Bousquet & Elisseeff, 2002).
We now give the definitions of uniform stability and on-average stability (random pointwise hypothesis stability in Elisseeff et al. (2005)) for randomized algorithms (see also Hardt et al. (2016) and Kuzborskij & Lampert (2018)). The definitions can be formulated with respect to any loss function, but since we will study stability with respect to the $l_\alpha$ losses, we write the definitions in the context of this special case.
Definition 1 The algorithm $A$ is said to be $\epsilon^{\alpha}_{\mathrm{uni}}$-uniformly stable if for all $i \in \{1, \dots, n\}$
$$\sup_{S, z'_i, z} \mathbb{E}\left[\, |l_\alpha(A(S), z) - l_\alpha(A(S^{(i)}), z)| \,\right] \le \epsilon^{\alpha}_{\mathrm{uni}}. \tag{2}$$
Here, the expectation is taken over the randomness of $A$. The notation $S^{(i)}$ means that we replace the $i$th example of $S$ with $z'_i$.
Definition 2 The algorithm $A$ is said to be $\epsilon^{\alpha}_{\mathrm{av}}$-on-average stable if for all $i \in \{1, \dots, n\}$
$$\mathbb{E}\left[\, |l_\alpha(A(S), z) - l_\alpha(A(S^{(i)}), z)| \,\right] \le \epsilon^{\alpha}_{\mathrm{av}}. \tag{3}$$
Here, the expectation is taken over $S \sim \mathcal{D}^n$, $z \sim \mathcal{D}$ and the randomness of $A$. The notation $S^{(i)}$ means that we replace the $i$th example of $S$ with $z$.
Throughout the paper, $\|\cdot\|$ will denote the Euclidean norm for vectors and the Frobenius norm for matrices.
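The mini-batch update rule (1), with batches sampled uniformly with replacement, can be sketched as follows. The quadratic loss and the synthetic data are placeholder choices for illustration, not the losses studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (hypothetical sizes, for illustration).
n, d, B = 256, 5, 32
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad(w, idx):
    """Gradient of the average squared loss over the mini-batch idx."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(d)
lam = 0.05                                       # constant learning rate
loss0 = np.mean((X @ w - y) ** 2)
for t in range(200):
    batch = rng.choice(n, size=B, replace=True)  # sampling with replacement
    w = w - lam * grad(w, batch)                 # update rule (1)
lossT = np.mean((X @ w - y) ** 2)
print(round(lossT, 4))
```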
The proofs are given in Appendix A for the convex case and in Appendix B for the non-convex case." }, { "heading": "4 CONVEX CASE: A FIRST STEP TOWARD THE NON-CONVEX CASE", "text": "Since the convex case is easier to handle, it can be seen as a good preparation for the non-convex case. Consider a linear classifier parameterized by either a vector of weights (binary case) or a matrix of weights (multi-class case) that we denote by $w$ in both cases. The normalized losses are defined by
$$l_\alpha(w, z) := l\left(\alpha \frac{w}{\|w\|}, z\right), \tag{4}$$
for $\alpha > 0$.
In order to state the main result of this section, we need two common assumptions: $L$-Lipschitzness of $l$ as a function of $w$ and $\beta$-smoothness.
Definition 3 The function $l(w, z)$ is $L$-Lipschitz for all $z$ in the domain (with respect to $w$) if for all $w, w', z$,
$$|l(w, z) - l(w', z)| \le L\|w - w'\|. \tag{5}$$
Definition 4 The function $l(w, z)$ is $\beta$-smooth if for all $w, w', z$,
$$\|\nabla l(w, z) - \nabla l(w', z)\| \le \beta\|w - w'\|. \tag{6}$$
We are now ready to state the main result of this section.
Theorem 1 Assume that $l(w, z)$ is convex, $\beta$-smooth and $L$-Lipschitz for all $z$. Furthermore, assume that the initial point $w_0$ satisfies $\|w_0\| \ge K$ for some $K$ such that $\hat{K} = K - L\sum_{i=0}^{T-1} \lambda_i > 0$ for a sequence of learning rates $\lambda_i \le 2/\beta$. SGD is then run with batch size $B$ on the loss function $l(w, z)$ for $T$ steps with the learning rates $\lambda_t$, starting from $w_0$. Denote by $\epsilon^{\alpha}_{\mathrm{uni}}$ the uniform stability of this algorithm with respect to $l_\alpha$. Then,
$$\epsilon^{\alpha}_{\mathrm{uni}} \le \frac{2\alpha L^2 B}{n\hat{K}} \sum_{i=0}^{T-1} \lambda_i. \tag{7}$$
What is the main difference between our bound and the bound in Hardt et al. (2016) (see Theorem 7 in Appendix A)? Our bound takes into account the norm of the initialization. The meaning of the bound is that it is not enough to use small learning rates and a small number of epochs to guarantee good stability (with respect to the normalized loss). We also need to take into account the norm of the parameters (here, the norm of the initialization) to make sure that the "effective" learning rates are small.
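To make the role of the initialization norm in Theorem 1 concrete, the right-hand side of (7) can be evaluated numerically. All constants below ($L$, $B$, $n$, $\alpha$ and the learning-rate schedule) are hypothetical, not taken from the paper:

```python
# Hypothetical constants (for illustration only).
L, B, n, alpha = 1.0, 32, 50_000, 1.0
T, lam = 1000, 0.01
sum_lr = T * lam                      # sum of the learning rates, sum of lambda_i

def stability_bound(K):
    """Right-hand side of (7) as a function of the initialization norm K."""
    K_hat = K - L * sum_lr            # K_hat = K - L * sum of lambda_i, must be positive
    assert K_hat > 0, "Theorem 1 requires K_hat > 0"
    return alpha * 2 * L**2 * B * sum_lr / (n * K_hat)

for K in (15.0, 50.0, 200.0):
    print(K, stability_bound(K))
```

A larger initialization norm $K$ makes $\hat{K}$ larger and hence the guarantee tighter, reflecting the discussion above.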
Note that all classifiers are contained in any ball around the origin, even if the radius of the ball is arbitrarily small. Therefore, all control over stability is lost very close to the origin, where even a small step (in Euclidean distance) can lead to a drastic change in the classifier. The norm of the initialization must therefore be large enough to ensure that the trajectory cannot get too close to the origin (in the worst case, since uniform stability is considered). An alternative, if the conditions of the theorem are too strong in some practical scenarios, is to use on-average stability (the $l = 1$ layer case in the results of Section 5). As a side note, we also incorporated the batch size into the bound, which is not present in Hardt et al. (2016) (only $B = 1$ is considered).
From this result, it is now possible to obtain a high-probability bound for the test error. The bound is over draws of training sets $S$ but not over the randomness of $A$.1 So, we actually have the expected test error over the randomness of $A$ in the bound. This is reminiscent of PAC-Bayes bounds, where here the posterior distribution would be induced by the randomness of the algorithm $A$.
1It is possible to obtain a bound holding over the randomness of $A$ by exploiting the framework of Elisseeff et al. (2005). However, the term involving $\rho$ in their Theorem 15 does not converge to 0 when the size of the training set grows to infinity.
Theorem 2 Fix $\alpha > 0$. Let $M_\alpha := \sup\{l(w, z) \text{ s.t. } \|w\| \le \alpha, \|x\| \le R\}$. Then, for any $n > 1$ and $\delta \in (0, 1)$, the following holds with probability greater than or equal to $1 - \delta$ over draws of training sets $S$:
$$\mathbb{E}_A L^{0-1}_\mathcal{D}(A(S)) \le \mathbb{E}_A L^{\alpha}_S(A(S)) + \epsilon^{\alpha}_{\mathrm{uni}} + \left(2n\,\epsilon^{\alpha}_{\mathrm{uni}} + M_\alpha\right)\sqrt{\frac{\ln(1/\delta)}{2n}}. \tag{8}$$
Proof: The proof is an application of McDiarmid's concentration bound. Note that we do not need the training loss to be bounded, since we consider the normalized loss, which is bounded. The proof follows the same lines as Theorem 12 in Bousquet & Elisseeff (2002), and we do not replicate it here.
Note that we need to use the fact that uniform stability implies generalization in expectation, which is proven, for example, in Theorem 2.2 of Hardt et al. (2016).
Furthermore, a bound holding uniformly over all $\alpha$'s can be obtained using standard techniques.
Theorem 3 Let $C > 0$. Assume that $l_\alpha(w, z)$ is a convex function of $\alpha$ for all $w, z$ and that $\epsilon^{\alpha}_{\mathrm{uni}}$ is a non-decreasing function of $\alpha$. Then, for any $n > 1$ and $\delta \in (0, 1)$, the following holds with probability greater than or equal to $1 - \delta$ over draws of training sets $S$:
$$\mathbb{E}_A L^{0-1}_\mathcal{D}(A(S)) \le \inf_{\alpha \in (0, C]} \left\{ \mathbb{E}_A \max\left(L^{\alpha/2}_S(A(S)),\, L^{\alpha}_S(A(S))\right) + \epsilon^{\alpha}_{\mathrm{uni}} + \left(2n\,\epsilon^{\alpha}_{\mathrm{uni}} + M_\alpha\right) \sqrt{\frac{2\ln\!\left(\sqrt{2}\,(2 + \log_2 C - \log_2 \alpha)\right) + \ln(1/\delta)}{2n}} \right\}.$$
In the next section, we investigate the non-convex case. We exploit on-average stability to obtain a data-dependent quantity in the bound. Note that it is also argued in Kuzborskij & Lampert (2018) that the worst-case analysis of uniform stability might not be appropriate for deep learning." }, { "heading": "5 NON-CONVEX CASE", "text": "We consider homogeneous neural networks in the setup of multiclass classification. Write $f(x) = W_l(\sigma(\cdots W_2(\sigma(W_1 x))))$, where $x$ is an input to the neural network, $W_i$ denotes the weight matrix at layer $i$ and $\sigma$ denotes a homogeneous non-linearity ($\sigma(cx) = c^k \sigma(x)$ for any constant $c > 0$). Examples of such non-linearities are the ReLU function ($k = 1$), the quadratic function ($k = 2$) and the identity function ($k = 1$, leading to deep linear networks). Consider a non-negative loss function $l(s, y)$ that receives a score vector $s = f(x)$ and a label $y$ as inputs. We require the loss function to be $L$-Lipschitz for all $y$ as a function of $s$. That is, for all $s, s', y$,
$$|l(s, y) - l(s', y)| \le L\|s - s'\|. \tag{9}$$
For example, we can use the cross-entropy loss (softmax function with negative log-likelihood). In this case, it is simple to show, by bounding the norm of the gradient of $l(s, y)$ with respect to $s$, that we can use $L = \sqrt{2}$.
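The claim that L = √2 works for the cross-entropy loss can be checked numerically: the gradient of the loss with respect to the scores is softmax(s) − e_y, whose norm satisfies ‖p − e_y‖² = Σ_{i≠y} p_i² + (1 − p_y)² ≤ 2(1 − p_y)² ≤ 2. A sketch with random score vectors (the sizes and scales below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 10
worst = 0.0
for _ in range(1000):
    s = rng.normal(scale=5.0, size=K)  # random score vector
    yi = rng.integers(K)               # random label
    p = np.exp(s - s.max())
    p /= p.sum()                       # softmax probabilities
    g = p.copy()
    g[yi] -= 1.0                       # gradient of CE w.r.t. s: softmax(s) - e_y
    worst = max(worst, float(np.linalg.norm(g)))
print(round(worst, 3))
```

The maximum observed norm approaches √2 when the true class receives vanishing probability and the mass concentrates on a single other class.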
Note that this is slightly different from the Lipschitz assumption of the previous section (given with respect to the weights w).\nIn order to control the behaviour of the non-linearity, we assume that for any c > 0, there exist constants Bc and Lc such that for any x, y ∈ Rd with ||x||≤ c and ||y||≤ c we have\n||σ(x)|| ≤ Bc||x||, (10) ||σ(x)− σ(y)|| ≤ Lc||x− y||. (11)\nNote that the non-linearity σ is being applied component-wise when the input is a vector as above. It is easy to verify that, for the ReLU function, we have Bc = 1 and Lc = 1 for all c. Furthermore, for the quadratic function x2, we have Bc = c and Lc = 2c. The following lemma will be the starting point for our analysis.\nLemma 1 Assume that ||x||≤ R. Let α1, · · ·αl be positive real numbers and, for 1 ≤ j ≤ l, denote W̃j :=\nWj ||Wj || and W̃ ′ j := W ′j ||W ′j || . Write sj = αjW̃j(σ(· · ·α2W̃2(σ(α1W̃1x)))) and\ns′j = αjW̃ ′ j(σ(· · ·α2W̃ ′2(σ(α1W̃ ′1x)))). Also, let cj be an upper bound on the norm of layer j (this will be a constant depending on α1, · · ·αj and R). Then, we have\n(12)||sl − s′l||≤ R ( l∏ j=1 αj ) l∑ i=1 τi||W̃i − W̃ ′i ||,\nwhere τi = l∏\nj=1,j 6=i\n( Bcj if j < i Lcj−1 if j > i ) .\nThe previous lemma motivates a measure of “distance” between neural networks.\nDefinition 5 For neural networks f and g, where the weight matrices of f are given by W1 · · ·Wl and the weight matrices of g are given by W ′1 · · ·W ′l , define\nd(f, g) := l∑ i=1 τi|| Wi ||Wi|| − W ′ i ||W ′i || ||. (13)\nNote that this distance function is invariant to rescaling the weights of any layer. This is a desirable property since in a homogeneous network such a reparametrization leaves the class predicted by the classifier unchanged for any input to the network.\nLet α1, · · ·αl be positive real numbers. We define the lα1,···αl(f, z) losses to be equal to\n(14)l(αl Wl ||Wl|| (σ(· · ·α2 W2 ||W2|| (σ(α1 W1 ||W1|| x)))), y),\nwhere z = (x, y) and f is the neural network with weight matrix at layer i given by Wi. 
That is, we project the weight matrices to give the norm αi to layer i and then we evaluate the loss l on this “normalized” network. For simplicity, we will only consider the case where all the αi’s are equal to say α and we will write lα(f, z). From our definitions and lemma 1, we have that for all z and neural networks f and g, |lα(f, z)− lα(g, z)|≤ LRαld(f, g). (15) In order to bound stability with respect to lα, we will have to ensure that the two trajectories cannot diverge too much in terms of d(f, g).\nWe will consider two separate cases: the smooth case and the non-smooth case. When the activation function is smooth (for example xk for k ≥ 1), we will exploit the concept of layer-wise smoothness defined below.\nDefinition 6 Consider the gradient of the loss function with respect to the parameters W for some training example z. The vector containing only the partial derivatives for the weights of layer j will be denoted by ∇(j)l(W, z). We define {βj}lj=1-layerwise smoothness as the following property: For all j, z, W = (W1, · · · ,Wl) and W ′ = (W ′1, · · · ,W ′l ),\n||∇(j)l(W, z)−∇(j)l(W ′, z)||≤ βj ||Wj −W ′j ||. (16)\nWe also let β := max{βj}. Note that β is upper bounding the spectral norm of the bloc diagonal approximation of the Hessian.\nWe are now ready to state the main theorem of this section for the smooth case.\nTheorem 4 Suppose that the loss function l(s, y) is L-Lipschitz for all y, non-negative and that lα(f, z) is bounded above by Mα. Furthermore, assume {βj}lj=1-layerwise smoothness and that ||x||≤ R. Finally, let B denote the batch size, λt ≤ ct the learning rates and T the number of iterations SGD is being run. 
Then,
$$\epsilon^{\alpha}_{\mathrm{av}} \le \inf_{t_0 \in \{1,2,\dots,\frac{n}{B}\}} \left[ \frac{2BLR\alpha^l}{(n-B)\beta}\left(\frac{T-1}{t_0-1}\right)^{c\beta} \sum_{t=t_0}^{T-1} \zeta_t + M_\alpha\frac{Bt_0}{n} \right], \tag{17}$$
where $\beta = \max\{\beta_j\}$ and $\zeta_t := \sum_{j=1}^{l} \tau_j\, \mathbb{E}_{A,S,z}\left[ C_j(S,z)^{T-t}\, \frac{\|\nabla^{(j)} L_{B_t}(W_t)\|}{K^{(j)}_t(S,z)} \right]$, with $K^{(j)}_t(S,z) := \min\{\|W_{j,t}\|, \|W'_{j,t}\|\}$ and $C_j(S,z) := \max_{t_0 \le t \le T-1} \frac{K^{(j)}_t(S,z)}{K^{(j)}_{t+1}(S,z)}$.
To evaluate the bound, we need to find the best $t_0$. There is a tradeoff between two quantities: a small $t_0$ is better for the term $M_\alpha\frac{Bt_0}{n}$ but worse for the remaining term. This determines the best "burn-in" period. The amount of "exploration" before $t_0$ does not affect the generalization bound; the exploration measured by $\zeta_t$ (and, through the learning rate, the value of $c$) becomes important only after iteration $t_0$. The bound will be better if we can reach a region in parameter space where the classifier is effectively no longer changing much. This is measured through the norm of the gradient, but it also takes into account the norm of the parameters (via $K^{(j)}_t(S,z)$). When we reach larger parameter norms, stability (with respect to the normalized loss) is less negatively affected. The intuitive reason is the following: a step of the same magnitude results in a smaller change in the classifier if the parameters are larger (see Figure 1). In Hoffer et al. (2017), it is observed that small-batch training and larger learning rates (which find solutions that generalize better) reach larger parameter norms (see also our Figure 3). Using the standard Euclidean distance in the analysis of stability would lead us to believe that this behaviour is highly undesirable; our analysis shows that it can actually be favorable to on-average stability with respect to the normalized loss. The quantity $\zeta_t$ also involves the terms $C_j(S,z)$, which measure how fast the parameter norms grow from one iteration to the next.
The value of $C_j(S, z)$ is better (smaller) if the norm of the parameters grows faster.
The non-smooth case is also of great interest, since the ReLU activation function is very common in practice.
Theorem 5 Suppose that the loss function $l(s, y)$ is $L$-Lipschitz for all $y$, non-negative, and that $l_\alpha(f, z)$ is bounded above by $M_\alpha$. Furthermore, assume $\|x\| \le R$. Finally, let $B$ denote the batch size, $\lambda_t$ the learning rates and $T$ the number of iterations SGD is run. Then,
$$\epsilon^{\alpha}_{\mathrm{av}} \le \inf_{t_0 \in \{1,2,\dots,\frac{n}{B}\}} \left[ 2LR\alpha^l \sum_{t=t_0}^{T-1} \lambda_t \zeta_t + M_\alpha\frac{Bt_0}{n} \right], \tag{18}$$
where $\zeta_t$ is defined as in Theorem 4.
Exploiting Theorem 12 in Elisseeff et al. (2005), it is then possible to get a probabilistic bound on the test error (holding over the randomness in the training sets and the randomness in the algorithm).
Theorem 6 Fix $\alpha > 0$. Then, for any $n > 1$ and $\delta \in (0, 1)$, the following holds with probability greater than or equal to $1 - \delta$ over draws of training sets $S$ and the randomness of the algorithm $A$:
$$L^{0-1}_\mathcal{D}(A(S)) \le L^{\alpha}_S(A(S)) + \sqrt{\frac{1}{\delta}\cdot\frac{2M_\alpha^2 + 12nM_\alpha\,\epsilon^{\alpha}_{\mathrm{av}}}{n}}. \tag{19}$$" }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 LEARNING RATES AND ζt", "text": "In this section we conduct some experiments on the datasets CIFAR10 (Krizhevsky, 2009) and MNIST (LeCun & Cortes, 2010). We consider the scenario where we try to reduce the performance gap between small-batch and large-batch training by increasing the learning rate. We will give some evidence suggesting that the quantity $\zeta_t$ can be of interest to assess generalization in this case.
We use a global learning rate, decayed once by a factor of 10, in our experiments. No weight decay or momentum is used, to stay closer to our theoretical analysis of SGD. Note that in principle, the learning rate could be as large as we want during the initial burn-in period (before $t_0$) without hurting stability. However, this burn-in period must be inside the first epoch in the theoretical results we presented.
Since in practice we train for many epochs, it is not clear whether such a small burn-in period is long enough to be significant in current practice. We still think that the quantity $\zeta_t$ is relevant to investigate empirically. We approximate its value on a training set $S$ with the quantity $\hat{\zeta}_t(S) := \sum_{j=1}^{l} \frac{\|\nabla^{(j)} L_{B_t}(W_t)\|}{\|W_{j,t}\|}$. The quantities $C_j(S, z)$ can be evaluated empirically to be very close to 1, and so we neglect them in the expression for $\hat{\zeta}_t(S)$. Also note that $\tau_j = 1$ for all $j$ in the case of ReLU networks. Instead of plotting the value for each iteration, we average $\hat{\zeta}_t(S)$ over each epoch, which leads to smoother curves.
We use a 5-layer convolutional ReLU network on CIFAR10, consisting of 2 convolutional layers with max-pooling followed by 3 fully connected layers, trained with the cross-entropy loss. We also use the cross-entropy loss on MNIST, but there the neural network is a 6-layer fully connected network. In both cases, we use batch normalization to facilitate training. All the results in the figures are obtained with a batch size of 2048. We started by training with a smaller batch size of 256 and then tried to reduce the performance gap between large-batch and small-batch training by increasing the learning rate. For example, on CIFAR10, we obtain a test accuracy of 86.23% with a batch size of 256 and a learning rate of 0.5. When increasing the batch size to 2048 (keeping the learning rate at 0.5), the test accuracy dropped to 85.14%. This happened even though the training loss reached approximately the same value in both cases (0.0123 for batch size 256 and 0.0167 for batch size 2048). We then increased the learning rate to 1.0 and then to 1.5, reaching 85.63% in both cases (not completely closing the gap but reducing it). A similar phenomenon happens on MNIST: with batch size 256 we get a test accuracy of 98.57% (lr = 0.05), and with batch size 2048 we get 97.52% (lr = 0.05), 98.00% (lr = 0.1) and 98.39% (lr = 0.5).
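The quantity ζ̂t(S) is simply a per-layer ratio of gradient norm to parameter norm, summed over the layers. A minimal sketch for a toy two-layer ReLU network with squared loss and manual backpropagation (the data, the sizes and the loss are placeholders, not our CIFAR10/MNIST setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder mini-batch and weights (hypothetical sizes).
X = rng.normal(size=(32, 10))
y = rng.normal(size=(32, 1))
W1 = rng.normal(size=(10, 16))
W2 = rng.normal(size=(16, 1))

# Forward pass and manual gradients for loss = (1/2n) * sum of squared errors.
H = np.maximum(X @ W1, 0.0)                   # ReLU hidden layer
err = H @ W2 - y
G2 = H.T @ err / len(X)                       # gradient for layer 2
G1 = X.T @ ((err @ W2.T) * (H > 0)) / len(X)  # gradient for layer 1

# zeta_hat: sum over layers of ||gradient|| / ||weights|| (Frobenius norms).
zeta_hat = sum(np.linalg.norm(G) / np.linalg.norm(W)
               for G, W in [(G1, W1), (G2, W2)])
print(round(float(zeta_hat), 4))
```

In our actual experiments the gradients come from the training mini-batches themselves; here random placeholders stand in for them.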
We plotted the values of ζ̂t(S) during training in Figure 2. We can see that it is better during all training when increasing the learning rate.\nTo compare with the analysis from Hardt et al. (2016), the quantity ζt would be replaced with a global Lipschitz constant which would not be affected by the actual trajectory of the algorithm. Therefore, in comparison to our bound, the bound in Hardt et al. (2016) would be much more favorable to smaller learning rates. In other words, the worst case analysis of uniform convergence would require much smaller learning rates to be used than our result to guarantee good stability. The quantity ζt can be improved by accelerating convergence because of the numerator (norm of the gradients) but also by increasing the denominator (norm of the parameters). A larger learning rate can help in both these regards (see Figure 3). Note also that considering only the norm of the gradients without the norm of the parameters would lead to a less favorable quantity compared to considering both the norm of the gradients and the norm of the parameters. A standard analysis of stability (without the normalized loss) similar to Kuzborskij & Lampert (2018) would not benefit from the norm of the parameters." }, { "heading": "6.2 GENERALIZATION BOUND AND TEST ERROR", "text": "We show in this section the usefulness of considering the normalized loss for bounding the test error. We evaluate the bound in theorem 6 and compare it to an analogous version for the unnormalized loss. For this analogous version, we replace the upper bound M on the loss function by the largest loss achieved during training. Furthermore, the quantity av is upper bounded by the Lipschitz constant times the Euclidean distance between the weights of the networks. The Lipschitz constant is replaced by the largest norm of gradients obtained during training. For the normalized loss, we upper bound αav by LRα\nlEd(f, g) (see equation 15). 
We plot the test error, the upper bound for the normalized case with α = 1.0 and the upper bound for the unnormalized case in Figure 4. Further experiments (with label noise and a comparison of Adam and SGD) are given in Appendix C." }, { "heading": "7 CONCLUSION", "text": "We investigated the stability (uniform and on-average) of SGD with respect to the normalized loss functions. This leads naturally to consider a more meaningful measure of distance between classifiers. Our experimental results show that stability might not be as bad as expected when using larger learning rates in training deep neural networks. We hope that our analysis will be a helpful step in understanding generalization in deep learning. Future work could investigate the on-average stability with respect to lα losses for different optimization algorithms." }, { "heading": "A APPENDIX: PROOFS FOR THE CONVEX CASE", "text": "In Hardt et al. (2016), uniform stability with respect to the same loss the algorithm is executed on is considered. This is a natural choice, however if we are interested in the 0− 1 loss, different set of parameters w, w′ can represent equivalent classifiers (that is, predict the same label for any input). This is the case for logistic regression since any rescaling of the parameters yields the same classifier (but they can have different training losses). This is also the case for homogeneous neural networks where we can rescale each layer without affecting the classifier. This is why we consider stability with respect to normalized losses instead. Note that we are still considering SGD executed on the original loss l (we do not change the algorithm A). The intuitive idea is to measure stability in terms of angles (more precisely, we consider distances between normalized vectors) instead of standard\neuclidean distances (see Figure 1). The proofs in Hardt et al. 
(2016) consist in bounding E||wt−w′t||, where wt represents the weights at iteration t when training on S and w′t represents the weights at iteration t when training on the modified training set S(i). We will instead bound E|| wt||wt|| − w′t ||w′t|| || (or E[d(f, g)] for an appropriate measure of “distance” d between neural networks f and g).\nLemma 2 Let v, w ∈ Rn and 0 < c ≤ min{||v||, ||w||}. Then,\n|| v||v|| − w ||w|| ||≤ ||v−w|| c .\nProof: The proof follows from basic linear algebra manipulations. We give it here for completeness since it is important in what follows. We need to show that\n〈 v||v|| − w ||w|| , v ||v|| − w ||w|| 〉 ≤ 〈v−w,v−w〉 c2 .\nAfter some manipulations, one can see that this is equivalent to show that\n||v||2+||w||2−2c2 + 2(c2 − ||v||||w||)〈 v||v|| , w ||w|| 〉 ≥ 0.\nFrom Cauchy-Schwarz inequality, 〈 v||v|| , w ||w|| 〉 ≤ 1. Since c 2 − ||v||||w||≤ 0, the proof will be completed by showing that\n||v||2+||w||2−2c2 + 2(c2 − ||v||||w||) ≥ 0.\nBut this is true since\n||v||2+||w||2−2||v||||w||= (||v||−||w||)2.\nLemma 3 Assume that the initial point w0 satisfies ||w0||≥ K and that SGD is run with batch size B and a sequence of learning rates λt on an L−Lipschitz loss function l(w, z) for all z. Then, for all t ≥ 1,\n||wt||≥ K − L ∑t−1 i=0 λi.\nProof:\n||wt|| = ||wt−1 − λt−1 1\nB B∑ j=1 ∇l(wt−1, zj)||\n≥ ||wt−1||−λt−1 1 B || B∑ j=1 ∇l(wt−1, zj)||\n≥ ||wt−1||−λt−1L ≥ ||wt−2||−λt−2L− λt−1L ≥ · · ·\n≥ ||w0||−L t−1∑ i=0 λi\n≥ K − L t−1∑ i=0 λi.\nFor ease of comparison, we give the statement of Theorem 3.8 in Hardt et al. (2016).\nTheorem 7 (Theorem 3.8 in Hardt et al. (2016)) Assume that the loss function f(·; z) is β-smooth, convex and L−Lipschitz for every z. Suppose that we run SGD with step sizes αt ≤ 2/β for T steps. Then, SGD satisfies uniform stability with\nuni ≤ 2L2\nn T∑ t=1 αt.\nWe are now ready to prove Theorem 1.\nProof of Theorem 1: The proof is similar to Hardt et al. (2016). 
Let wt denotes the output of A after t steps on training set S and w′t be the output of A after t steps on training set S\n(i) for some i ∈ {1, · · ·n}. From convexity, the update rule is 1−expansive (see lemma 3.7 from Hardt et al. (2016)). This property can be used when the example i is not being picked at some iteration. Otherwise, the triangular inequality is used. Since the probability of picking the example i in a mini-batch of size B is smaller than Bn (sampling with replacement) and exploiting Lemma 2 and Lemma 3, we get\nE [ || wt+1 ||wt+1|| − w′t+1 ||w′t+1|| || ] ≤ 1 K̂ E [ ||wt+1 − w′t+1|| ] ≤ 1 K̂ [ B n ( E||wt − w′t||+2Lλt ) + ( 1− B n ) E||wt − w′t||\n] = 1\nK̂\n( E||wt − w′t||+\n2BLλt n\n) .\nNote that this is true since E||wt−w′t||≤ E||wt−w′t||+2Lλt. Solving the recursion for E||wt−w′t||, we have\nE||wt − w′t||≤ 2BLn ∑t−1 i=0 λi.\nTherefore,\nE [ || wt+1||wt+1|| − w′t+1 ||w′t+1|| || ] ≤ 2BL nK̂ ∑t i=0 λi\nThe result then follows from the inequality\n|l(α w||w|| , z)− l(α w′ ||w′|| , z)|≤ Lα|| w ||w|| − w′ ||w′|| ||.\nWe finally prove Theorem 3.\nProof of Theorem 3: To simplify the text, write (α, δ) := EALαS(A(S)) + αstab + (2n αstab + Mα) √ ln(1/δ) 2n . For i ≥ 1, let αi = 2 (1−i)C and δi = δ2i2 . For any fixed i, we have\nPS{EAL0−1D (A(S)) > (αi, δi)} < δi.\nTherefore,\nPS{∀i, EAL0−1D (A(S)) ≤ (αi, δi)} = 1− PS{∃i, EAL0−1D (A(S)) > (αi, δi)}\n≥ 1− ∞∑ i=1 PS{EAL0−1D (A(S)) > (αi, δi)}\n≥ 1− ∞∑ i=1 δi ≥ 1− δ.\nThe last inequality follows from\n∞∑ i=1 δi = δ 2 ∞∑ i=1 1 i2 = δ 2 π2 6 ≤ δ.\nWe want to show that the set\n{S : ∀i, EAL0−1D (A(S)) ≤ (αi, δi)}\nis contained in the set\n{S : ∀α ∈ (0, C], EAL0−1D (A(S)) ≤ EA max (Lα/2S (A(S)), LαS(A(S))) + αstab + (2n αstab +Mα) √ 2 ln( √ 2(2+log2 C−log2 α))+ln(1/δ) 2n }.\nLet S be such that ∀i, EAL0−1D (A(S)) ≤ (αi, δi). Let α ∈ (0, C]. Then, there exists i such that αi ≤ α ≤ 2αi. 
We have\nEAL0−1D (A(S)) ≤ EAL αi S (A(S)) + αi stab + (2n αi stab +Mαi)\n√ ln(1/δi)\n2n\n≤ EALαiS (A(S)) + α stab + (2n α stab +Mα)\n√ ln(1/δi)\n2n ≤ EALαiS (A(S)) + α stab + (2n\nα stab +Mα)√\n2 ln( √\n2(2 + log2 C − log2 α)) + ln(1/δ) 2n\nThe second inequality is true since both αstab and Mα are non-decreasing functions of α and αi ≤ α. The last inequality is true since 1δi = 2i2 δ ≤ 2(2+log2 C−log2 α) 2\nδ . Finally, the proof is concluded by using the convexity of LαS(A(S)) with respect to α. Indeed, since α 2 ≤ αi ≤ α, we must have\nLαiS (A(S)) ≤ max (L α/2 S (A(S)), L α S(A(S)))." }, { "heading": "B APPENDIX: PROOFS FOR THE NON-CONVEX CASE", "text": "Proof of Lemma 1: The proof is done by induction on the number of layers l. Suppose the result is true for l − 1 layers. Then we have,\n||sl − s′l||= αl||W̃lσ(sl−1)− W̃ ′lσ(s′l−1)|| = αl||W̃lσ(sl−1)− W̃ ′lσ(sl−1)− W̃ ′l (σ(s′l−1)− σ(sl−1))|| ≤ αl||W̃lσ(sl−1)− W̃ ′lσ(sl−1)||+αl||W̃ ′l (σ(s′l−1)− σ(sl−1))|| ≤ αl||W̃l − W̃ ′l || ||σ(sl−1)||+αl||W̃ ′l || ||σ(s′l−1)− σ(sl−1)|| ≤ αl||W̃l − W̃ ′l || ||sl−1||Bcl−1 + αlLcl−1 ||s′l−1 − sl−1||\n≤ Rαl||W̃l − W̃ ′l || l−1∏ j=1 Bcjαj + αl Lcl−1 ||s′l−1 − sl−1||\n≤R l∏\nj=1 αj ||W̃l−W̃ ′l || l−1∏ j=1 Bcj +αlLcl−1 R l−1∏ j=1 αj l−1∑ i=1 [ ||W̃i−W̃ ′i || l−1∏ j=1,j 6=i ( Bcj if j < i Lcj−1 if j > i )]\n= R l∏ j=1 αj l∑ i=1 [ ||W̃i − W̃ ′i || l∏ j=1,j 6=i ( Bcj if j < i Lcj−1 if j > i )] .\nThe proof is finally concluded by observing that for one layer we have, ||s′1− s1||≤ Rα1||W̃1−W̃ ′1||.\nDefinition 7 Let us introduce some notations. Let δ(j)t (S, z) := ||Wj,t −W ′j,t|| and ∆ (j) t (S, z) := EA[δ(j)t (S, z) | ∀k, δ (k) t0 (S, z) = 0]. Here, Wj,t is obtained when training with S for t iterations and W ′j,t is obtained when training with S (i) for t iterations. The condition inside the expectation is that after t0 iterations, the two networks are still exactly the same. 
Since we are interested in\ndistances after normalization, we consider δ̃(j)t (S, z) := || Wj,t ||Wj,t|| − W ′j,t ||W ′j,t|| || and ∆̃(j)t (S, z) :=\nEA[δ̃(j)t (S, z) | ∀k, δ (k) t0 (S, z) = 0]. We will further need δ̂ (j) t (S, z) :=\nδ (j) t (S,z) K (j) t (S,z) and ∆̂(j)t (S, z) :=\nEA[Cj(S, z)T−tδ̂(j)t (S, z) | ∀k, δ (k) t0 (S, z) = 0], where K (j) t (S, z) := min{||Wj,t||, ||W ′j,t||} and\nCj(S, z) := max t0≤t≤T−1\nK (j) t (S, z) K (j) t+1(S, z) .\nBefore proving Theorem 4, we establish a lemma. Note that the structure of the proof of the following Lemma and of Theorem 4 is similar to the corresponding results in Hardt et al. (2016) and in Kuzborskij & Lampert (2018).\nLemma 4 Suppose that the loss function l(s, y) is L-Lipschitz for all y, non-negative and that lα(f, z) is bounded above by Mα. Also, assume that ||x||≤ R. Furthermore, let B denote the batch size and T the number of iterations SGD is being run. Then, for any t0 ∈ {0, 1, 2, . . . , nB }, the on-average stability satisfies\nαav ≤ LRαl l∑\nj=1\nτjES,z [ EA[δ̃(j)T (S, z) | ∀k, δ (k) t0 (S, z) = 0] ] +Mα( Bt0 n ).\nProof: Write the quantity |lα(f, z)− lα(g, z)| as the sum of |lα(f, z)− lα(g, z)|I{∀k, δ(k)t0 (S, z) = 0} and |lα(f, z)− lα(g, z)|I{∃k : δ(k)t0 (S, z) 6= 0}. We bound the first term by using the fact that\n|lα(f, z)− lα(g, z)|≤ LRαld(f, g) = LRαl l∑\nj=1\nτj δ̃ (j) T (S, z).\nFor the second term, we use that lα(f, z) is bounded above by Mα and non-negative to write\n|lα(f, z)− lα(g, z)|≤Mα.\nThe result then follows from the fact that the probability of picking example i in t0 iterations is smaller than Bt0n .\nProof of theorem 4: From Lemma 2, we always have δ̃(j)t (S, z) ≤ δ̂ (j) t (S, z). 
Therefore, from the previous Lemma,\nαav ≤ LRαl l∑\nj=1\nτjES,z [ ∆̂ (j) T (S, z) ] +Mα(\nBt0 n ).\nFirst note that under our definitions,\nδ̂ (j) t+1(S, z) =\nδ (j) t+1(S, z)\nK (j) t+1(S, z)\n≤ Cj(S, z) δ (j) t+1(S, z)\nK (j) t (S, z)\n≤ Cj(S, z) K\n(j) t (S, z)\n[ δ (j) t (S, z) + λt||∇(j)LBt(Wt)−∇(j)LB′t(W ′ t )|| ]\n= Cj(S, z) δ̂ (j) t (S, z) + λtCj(S, z)\n||∇(j)LBt(Wt)−∇(j)LB′t(W ′ t )||\nK (j) t (S, z)\n.\nTherefore,\nCj(S, z) T−(t+1) δ̂ (j) t+1(S, z) ≤ Cj(S, z)T−t δ̂ (j) t (S, z)\n+ λtCj(S, z) T−t ||∇ (j)LBt(Wt)−∇(j)LB′t(W ′ t )||\nK (j) t (S, z)\n.\nHere, Bt denotes the batch of samples at iteration t when training on S and B′t denotes the batch of samples at iteration t when training on S(i). When Bt = B′t, we will use {βj}lj=1-layerwise smoothness to bound ||∇(j)LBt(Wt) − ∇(j)LB′t(W ′ t )||. Otherwise, we use simply the triangular inequality. Let p(B,n) be the probability of picking the example i in a mini-batch of size B (this is smaller than Bn ). For t ≥ t0, we have\n∆̂ (j) t+1(S, z) ≤ (1− p(B,n))(1 + βjλt)∆̂ (j) t (S, z) + p(B,n)\n( ∆̂\n(j) t (S, z) +\nλtEA[Cj(S, z)T−t ||∇(j)LBt(Wt)|| K\n(j) t (S, z)\n+ Cj(S, z) T−t ||∇ (j)LB′t(W ′ t )||\nK (j) t (S, z)\n] ) .\nDefine ∆̂(j)t := ES,z∆̂ (j) t (S, z) and ζ (j) t := EA,S,zCj(S, z)T−t ||∇(j)LBt (Wt)|| K\n(j) t (S,z)\nfor any t. Taking the expectation over S and z on both\nsides of the previous inequality, we get\n∆̂ (j) t+1 ≤ (1− p(B,n))(1 + βjλt)∆̂ (j) t + p(B,n)(∆̂ (j) t + 2λtζ (j) t ).\nThis is true since EA,S,zCj(S, z)T−t ||∇(j)LBt (Wt)||\nK (j) t (S,z)\n= EA,S,zCj(S, z)T−t ||∇(j)LB′t (W ′ t )||\nK (j) t (S,z)\n. 
Rearranging terms and using 1 + x ≤ exp(x), we get\n\n∆̂_{t+1}^(j) ≤ [1 + (1 − p(B, n)) β_j λ_t] ∆̂_t^(j) + 2 p(B, n) λ_t ζ_t^(j) ≤ exp((1 − p(B, n)) β_j λ_t) ∆̂_t^(j) + 2 p(B, n) λ_t ζ_t^(j).\n\nDeveloping the recursion, with step sizes λ_t = c/t and β := max_j β_j, yields\n\n∆̂_T^(j) ≤ ∑_{t=t0}^{T−1} 2 p(B, n) λ_t ζ_t^(j) ∏_{k=t+1}^{T−1} exp( (1 − p(B, n)) c β_j / k )\n≤ (2B/n) ∑_{t=t0}^{T−1} λ_t ζ_t^(j) exp( (1 − p(B, n)) c β_j ∑_{k=t+1}^{T−1} 1/k )\n≤ (2B/n) ∑_{t=t0}^{T−1} λ_t ζ_t^(j) exp( (1 − p(B, n)) c β_j log((T − 1)/t) )\n≤ (2Bc/n) ∑_{t=t0}^{T−1} (ζ_t^(j)/t) ((T − 1)/t)^{(1 − p(B, n)) c β_j}\n≤ (2Bc/n) ∑_{t=t0}^{T−1} (ζ_t^(j)/t) ((T − 1)/t)^{(1 − p(B, n)) c β}\n≤ (2Bc/n) max_{t0≤t≤T−1}{ζ_t^(j)} (T − 1)^{(1 − p(B, n)) c β} ∑_{t=t0}^{T−1} (1/t)^{(1 − p(B, n)) c β + 1}\n≤ (2Bc/(n c (1 − p(B, n)) β)) max_{t0≤t≤T−1}{ζ_t^(j)} ((T − 1)/(t0 − 1))^{(1 − p(B, n)) c β}\n≤ (2B/((n − B) β)) ((T − 1)/(t0 − 1))^{c β} max_{t0≤t≤T−1}{ζ_t^(j)}.\n\nTherefore, α_av is upper bounded by\n\ninf_{t0 ∈ {1, 2, . . . , n/B}} [ (2BLRα_l/((n − B) β)) ((T − 1)/(t0 − 1))^{c β} ∑_{j=1}^l τ_j max_{t0≤t≤T−1}{ζ_t^(j)} + M_α(Bt0/n) ].\n\nTo complete the proof, we use that max_{t0≤t≤T−1}{ζ_t^(j)} ≤ ∑_{t=t0}^{T−1} ζ_t^(j) and reverse the order of summation. With the definition ζ_t := ∑_{j=1}^l τ_j ζ_t^(j), we then have\n\nα_av ≤ inf_{t0 ∈ {1, 2, . . . , n/B}} [ (2BLRα_l/((n − B) β)) ((T − 1)/(t0 − 1))^{c β} ∑_{t=t0}^{T−1} ζ_t + M_α(Bt0/n) ].\n\nProof of Theorem 5: The beginning of the proof is the same as that of Theorem 4. However, when smoothness is not assumed (for example, for ReLU neural networks), it is not possible to exploit the layer-wise smoothness property. Instead, only the triangle inequality is used to bound ‖∇^(j)L_{B_t}(W_t) − ∇^(j)L_{B′_t}(W′_t)‖. This leads to the inequality\n\n∆̂_{t+1}^(j) ≤ ∆̂_t^(j) + 2 λ_t ζ_t^(j).\n\nSolving the recursion then yields ∆̂_T^(j) ≤ 2 ∑_{t=t0}^{T−1} λ_t ζ_t^(j). As a consequence,\n\nα_av ≤ inf_{t0 ∈ {1, 2, . . . , n/B}} [ 2LRα_l ∑_{t=t0}^{T−1} λ_t ζ_t + M_α(Bt0/n) ],\n\nwhere ζ_t = ∑_{j=1}^l τ_j ζ_t^(j), concluding the proof." }, { "heading": "C APPENDIX: MORE EXPERIMENTS", "text": "" } ]
2021
null
SP:11a4f15893b32b9391d04a507bed8528a130f533
[ "The authors of this manuscript proposed a generative dynamics system for the modelling and generation of 3D conformations of molecules. Specifically, there are three components: (1) a conditional graph continuous flow (CGCF) to transform random noise to distances, (2) a closed-form distribution p(R|d, G), and (3) an energy-based tilting model (ETM) to capture long-range interactions and correct the position matrix distribution. The proposed framework was compared with two deep learning methods for conformation generation -- CVGAE & GraphDG, as well as the computational chemistry tool RDKit on GEOM-QM9, GEOM-Drugs, and ISO17 data sets. Comparisons in terms of COV and MAT scores show that the proposed method (particularly the one enhanced with ETM) can outperform baselines. Further comparisons of distance densities show that CGCF (but without ETM) worked best over baselines. ", "This paper presents an approach to generate diverse small molecule conformations given their graph by combining a conditional flow-based model with an energy-based model. Sampling is performed in two separate stages: 1) a normalizing flow produces a distribution over interatomic distances (which is then postprocessed into cartesian coordinates), 2) sampled coordinates are refined by Langevin dynamics with gradient signal produced from an energy-based model. The models are trained separately." ]
We study how to generate molecule conformations (i.e., 3D structures) from a molecular graph. Traditional methods, such as molecular dynamics, sample conformations via computationally expensive simulations. Recently, machine learning methods have shown great potential by training on a large collection of conformation data. Challenges arise from the limited model capacity for capturing complex distributions of conformations and the difficulty in modeling long-range dependencies between atoms. Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph. We propose a method combining the advantages of both flow-based and energy-based models, enjoying: (1) a high model capacity to estimate the multimodal conformation distribution; (2) explicitly capturing the complex long-range dependencies between atoms in the observation space. Extensive experiments demonstrate the superior performance of the proposed method on several benchmarks, including conformation generation and distance modeling tasks, with a significant improvement over existing generative models for molecular conformation sampling1.
[ { "affiliations": [], "name": "Minkai Xu" }, { "affiliations": [], "name": "Shitong Luo" }, { "affiliations": [], "name": "Yoshua Bengio" }, { "affiliations": [], "name": "Jian Peng" }, { "affiliations": [], "name": "Jian Tang" } ]
[ { "authors": [ "Mohammed AlQuraishi" ], "title": "End-to-end differentiable learning of protein structure", "venue": "Cell systems,", "year": 2019 }, { "authors": [ "Simon Axelrod", "Rafael Gomez-Bombarelli" ], "title": "Geom: Energy-annotated molecular conformations for property prediction and molecular generation", "venue": "arXiv preprint arXiv:2006.05531,", "year": 2020 }, { "authors": [ "Andrew J Ballard", "Stefano Martiniani", "Jacob D Stevenson", "Sandeep Somani", "David J Wales" ], "title": "Exploiting the potential energy landscape to sample free energy", "venue": "Wiley Interdisciplinary Reviews: Computational Molecular Science,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Grégoire Mesnil", "Yann Dauphin", "Salah Rifai" ], "title": "Better mixing via deep representations", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Yoshua Bengio", "Li Yao", "Guillaume Alain", "Pascal Vincent" ], "title": "Generalized denoising auto-encoders as generative models", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Gordon M Crippen", "Timothy F Havel" ], "title": "Distance geometry and molecular conformation, volume 74", "venue": "Research Studies Press Taunton,", "year": 1988 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton", "Radford M Neal", "Richard S Zemel" ], "title": "The helmholtz machine", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Marco De Vivo", "Matteo Masetti", "Giovanni Bottegoni", "Andrea Cavalli" ], "title": "Role of molecular dynamics and related methods in drug discovery", "venue": "Journal of medicinal chemistry,", "year": 2016 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": 
"Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv preprint arXiv:1903.08689,", "year": 2019 }, { "authors": [ "Niklas Gebauer", "Michael Gastegger", "Kristof Schütt" ], "title": "Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Will Grathwohl", "Ricky TQ Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "arXiv preprint arXiv:1810.01367,", "year": 2018 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Thomas A Halgren" ], "title": "Merck molecular force field. i. basis, form, scope, parameterization, and performance of mmff94", "venue": "Journal of computational chemistry,", "year": 1996 }, { "authors": [ "Thomas A Halgren" ], "title": "Merck molecular force field. v. 
extension of mmff94 using experimental data, additional computational data, and empirical rules", "venue": "Journal of Computational Chemistry,", "year": 1996 }, { "authors": [ "Paul CD Hawkins" ], "title": "Conformation generation: the state of the art", "venue": "Journal of Chemical Information and Modeling,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Moritz Hoffmann", "Frank Noé" ], "title": "Generating valid euclidean distance matrices", "venue": "arXiv preprint arXiv:1910.03131,", "year": 2019 }, { "authors": [ "John Ingraham", "Adam J Riesselman", "Chris Sander", "Debora S Marks" ], "title": "Learning protein structure with a differentiable simulator", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "M Ranzato", "F Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting structured data,", "year": 2006 }, { "authors": [ "Tobias Lemke", "Christine Peter" ], "title": "Encodermap: Dimensionality reduction and generation of molecule conformations", "venue": "Journal of chemical theory and computation,", "year": 2019 }, { "authors": [ "Leo Liberti", "Carlile Lavor", "Nelson Maculan", "Antonio Mucherino" ], "title": "Euclidean distance geometry and applications", "venue": "SIAM review,", "year": 2014 }, { "authors": [ "Elman Mansimov", "Omar Mahmood", "Seokho Kang", "Kyunghyun Cho" ], "title": "Molecular geometry prediction using a deep generative graph neural network", "venue": null, "year": 1904 }, { "authors": [ "Jiquan Ngiam", "Zhenghao Chen", "Pang W Koh", "Andrew Y Ng" ], "title": "Learning deep energy models", "venue": "In 
Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Frank Noé", "Simon Olsson", "Jonas Köhler", "Hao Wu" ], "title": "Boltzmann generators: Sampling equilibrium states of many-body systems with deep", "venue": "learning. Science,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Anthony K Rappé", "Carla J Casewit", "KS Colwell", "William A Goddard III", "W Mason Skiff" ], "title": "Uff, a full periodic table force field for molecular mechanics and molecular dynamics simulations", "venue": "Journal of the American chemical society,", "year": 1992 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Sereina Riniker", "Gregory A Landrum" ], "title": "Better informed distance geometry: using what we know to improve conformation generation", "venue": "Journal of chemical information and modeling,", "year": 2015 }, { "authors": [ "Kristof Schütt", "Pieter-Jan Kindermans", "Huziel Enoc Sauceda Felix", "Stefan Chmiela", "Alexandre Tkatchenko", "Klaus-Robert Müller" ], "title": "Schnet: A continuous-filter convolutional neural network for modeling quantum interactions", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Andrew W Senior", "Richard Evans", "John Jumper", "James Kirkpatrick", "Laurent Sifre", "Tim Green", "Chongli Qin", "Augustin Žı́dek", "Alexander WR 
Nelson", "Alex Bridgland" ], "title": "Improved protein structure prediction using potentials from deep learning", "venue": null, "year": 2020 }, { "authors": [ "Chence Shi", "Minkai Xu", "Zhaocheng Zhu", "Weinan Zhang", "Ming Zhang", "Jian Tang" ], "title": "Graphaf: a flow-based autoregressive model for molecular graph generation", "venue": "arXiv preprint arXiv:2001.09382,", "year": 2020 }, { "authors": [ "Gregor NC Simm", "José Miguel Hernández-Lobato" ], "title": "A generative model for molecular distance geometry", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Justin S Smith", "Olexandr Isayev", "Adrian E Roitberg" ], "title": "Ani-1: an extensible neural network potential with dft accuracy at force field computational cost", "venue": "Chemical science,", "year": 2017 }, { "authors": [ "Yuxuan Song", "Qiwei Ye", "Minkai Xu", "Tie-Yan Liu" ], "title": "Discriminator contrastive divergence: Semi-amortized generative modeling by exploring energy of the discriminator", "venue": "arXiv preprint arXiv:2004.01704,", "year": 2020 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Yingnian Wu" ], "title": "A theory of generative convnet", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Recently", "Gebauer" ], "title": "2019) and Hoffmann & Noé (2019) propose to directly generate 3D structures with deep generative models. However, these models can hardly capture graph- or bond-based structure, which is typically complex and highly branched", "venue": "Some other works (Lemke & Peter,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, we have witnessed the success of graph-based representations for molecular modeling in a variety of tasks such as property prediction (Gilmer et al., 2017) and molecule generation (You et al., 2018; Shi et al., 2020). However, a more natural and intrinsic representation of a molecule is its 3D structure, commonly known as the molecular geometry or conformation, which represents each atom by its 3D coordinate. The conformation of a molecule determines its biological and physical properties such as charge distribution, steric constraints, as well as interactions with other molecules. Furthermore, large molecules tend to comprise a number of rotatable bonds, which may induce flexible conformation changes and a large number of feasible conformations in nature. Generating valid and stable conformations of a given molecule remains very challenging. Experimentally, such structures are determined by expensive and time-consuming crystallography. Computational approaches based on Markov chain Monte Carlo (MCMC) or molecular dynamics (MD) (De Vivo et al., 2016) are computationally expensive, especially for large molecules (Ballard et al., 2015).\nMachine learning methods have recently shown great potential for molecular conformation generation by training on a large collection of data to model the probability distribution of potential conformations R based on the molecular graph G, i.e., p(R|G). For example, Mansimov et al. (2019) proposed a Conditional Variational Graph Autoencoder (CVGAE) for molecular conformation generation. A graph neural network (Gilmer et al., 2017) is first applied to the molecular graph to get the atom representations, based on which 3D coordinates are further generated.\n∗Equal contribution. Work was done during Shitong’s internship at Mila.\n1Code is available at https://github.com/DeepGraphLearning/CGCF-ConfGen.
One limitation of such an approach is that by directly generating the 3D coordinates of atoms it fails to model the rotational and translational invariance of molecular conformations. To address this issue, instead of generating the 3D coordinates directly, Simm & Hernández-Lobato (2020) recently proposed to first model the molecule’s distance geometry (i.e., the distances between atoms)—which are rotationally and translationally invariant—and then generate the molecular conformation based on the distance geometry through a post-processing algorithm (Liberti et al., 2014). Similar to Mansimov et al. (2019), a few layers of graph neural networks are applied to the molecular graph to learn the representations of different edges, which are further used to generate the distances of different edges independently. This approach is capable of more often generating valid molecular conformations.\nAlthough these new approaches have made tremendous progress, the problem remains very challenging and far from solved. First, each molecule may have multiple stable conformations around a number of states which are thermodynamically stable. In other words, the distribution p(R|G) is very complex and multi-modal. Models with high capacity are required to model such complex distributions. Second, existing approaches usually apply a few layers of graph neural networks to learn the representations of nodes (or edges) and then generate the 3D coordinates (or distances) based on their representations independently. 
Such approaches are necessarily limited to capturing a single mode of p(R|G) (since the coordinates or distances are sampled independently) and are incapable of modeling multimodal joint distributions. Moreover, the form of the graph neural network computation makes it difficult to capture long-range dependencies between atoms, especially in large molecules.\nInspired by the recent progress with deep generative models, this paper proposes a novel and principled probabilistic framework for molecular geometry generation, which addresses the above two limitations. Our framework combines the advantages of normalizing flows (Dinh et al., 2014) and energy-based approaches (LeCun et al., 2006): it has a strong model capacity for modeling complex distributions, is flexible enough to model long-range dependencies between atoms, and enjoys efficient sampling and training procedures. Similar to the work of Simm & Hernández-Lobato (2020), we also first learn the distribution of distances d given the graph G, i.e., p(d|G), and define another distribution of conformations R given the distances d, i.e., p(R|d,G). Specifically, we propose a novel Conditional Graph Continuous Flow (CGCF) for distance geometry (d) generation conditioned on the molecular graph G. Given a molecular graph G, CGCF defines an invertible mapping between a base distribution (e.g., a multivariate normal distribution) and the molecular distance geometry, using a virtually infinite number of graph transformation layers on atoms, represented by a Neural Ordinary Differential Equations architecture (Chen et al., 2018). Such an approach enjoys very high flexibility to model complex distributions of distance geometry. Once the molecular distance geometry d is generated, we further generate the 3D coordinates R by searching from the probability p(R|d,G). 
Though the CGCF has a high capacity for modeling complex distributions, the distances of different edges are still independently updated in the transformations, which limits its capacity for modeling long-range dependencies between atoms in the sampling process. Therefore, we further propose another unnormalized probability function, i.e., an energy-based model (EBM) (Hinton & Salakhutdinov, 2006; LeCun et al., 2006; Ngiam et al., 2011), which acts as a tilting term of the flow-based distribution and directly models the joint distribution of R. Specifically, the EBM trains an energy function E(R,G), which is approximated by a neural network. The flow- and energy-based models are combined in a novel way for joint training and mutual enhancement. First, energy-based methods are usually difficult to train due to the slow sampling process. In addition, the distribution of conformations is usually highly multi-modal, and sampling procedures based on Gibbs sampling or Langevin dynamics (Bengio et al., 2013a;b) tend to get trapped around modes, making it difficult to mix between different modes (Bengio et al., 2013a). Here we use the flow-based model as a proposal distribution for the energy model, which is able to generate diverse samples for training the energy model. Second, the flow-based model lacks the capacity to explicitly model the long-range dependencies between atoms, which we find can, however, be effectively modeled by an energy function E(R,G). Our sampling process can therefore be viewed as a two-stage dynamic system, where we first take the flow-based model to quickly synthesize realistic conformations and then use the learned energy E(R,G) to refine the generated conformations through Langevin dynamics.\nWe conduct comprehensive experiments on several recently proposed benchmarks, including GEOM-QM9, GEOM-Drugs (Axelrod & Gomez-Bombarelli, 2020) and ISO17 (Simm & Hernández-Lobato, 2020). 
Numerical evaluations show that our proposed framework consistently outperforms the previous state-of-the-art (GraphDG) on both conformation generation and distance modeling tasks, with a clear margin." }, { "heading": "2 PROBLEM DEFINITION AND PRELIMINARIES", "text": "" }, { "heading": "2.1 PROBLEM DEFINITION", "text": "Notations. Following existing work (Simm & Hernández-Lobato, 2020), each molecule is represented as an undirected graph G = 〈V, E〉, where V is the set of nodes representing atoms and E is the set of edges representing inter-atomic bonds. Each node v in V is labeled with atomic properties such as element type. The edge in E connecting u and v is denoted as euv, and is labeled with its bond type. We also follow the previous work (Simm & Hernández-Lobato, 2020) to expand the molecular graph with auxiliary bonds, as elaborated in Appendix B. For the molecular 3D representation, each atom in V is assigned a 3D position vector r ∈ R3. We denote duv = ‖ru − rv‖2 as the Euclidean distance between the uth and vth atoms. Therefore, we can represent all the positions {rv}v∈V as a matrix R ∈ R|V|×3 and all the distances between connected nodes {duv}euv∈E as a vector d ∈ R|E|.\nProblem Definition. The problem of molecular conformation generation is defined as a conditional generation process. More specifically, our goal is to model the conditional distribution of atomic positions R given the molecular graph G, i.e., p(R|G)." }, { "heading": "2.2 PRELIMINARIES", "text": "Continuous Normalizing Flow. A normalizing flow (Dinh et al., 2014; Rezende & Mohamed, 2015) defines a series of invertible deterministic transformations from an initial known distribution p(z) to a more complicated one p(x). Recently, normalizing flows have been generalized from a discrete number of layers to a continuous-time formulation (Chen et al., 2018; Grathwohl et al., 2018) by defining the transformation fθ as a continuous-time dynamic ∂z(t)/∂t = fθ(z(t), t). 
Formally, with the latent variable z(t0) ∼ p(z) at the start time, the continuous normalizing flow (CNF) defines the transformation x = z(t0) + ∫_{t0}^{t1} fθ(z(t), t) dt. Then the exact density for pθ(x) can be computed by:\n\nlog pθ(x) = log p(z(t0)) − ∫_{t0}^{t1} Tr( ∂fθ/∂z(t) ) dt, (1)\n\nwhere z(t0) can be obtained by inverting the continuous dynamic: z(t0) = x + ∫_{t1}^{t0} fθ(z(t), t) dt. A black-box ordinary differential equation (ODE) solver can be applied to estimate the outputs and the input gradients and to optimize the CNF model (Chen et al., 2018; Grathwohl et al., 2018).\n\nEnergy-based Models. Energy-based models (EBMs) (Dayan et al., 1995; Hinton & Salakhutdinov, 2006; LeCun et al., 2006) use a scalar parametric energy function Eφ(x) to fit the data distribution. Formally, the energy function induces a density function with the Boltzmann distribution pφ(x) = exp(−Eφ(x))/Z(φ), where Z(φ) = ∫ exp(−Eφ(x)) dx denotes the partition function. An EBM can be learned with noise contrastive estimation (NCE) (Gutmann & Hyvärinen, 2010) by treating the normalizing constant as a free parameter. Given training examples from both the dataset and a noise distribution q(x), φ can be estimated by maximizing the following objective function:\n\nJ(φ) = E_{p_data}[ log( pφ(x)/(pφ(x) + q(x)) ) ] + E_q[ log( q(x)/(pφ(x) + q(x)) ) ], (2)\n\nwhich turns the estimation of the EBM into a discriminative learning problem. Sampling from Eφ can be done with a variety of methods such as Markov chain Monte Carlo (MCMC) or Gibbs sampling (Hinton & Salakhutdinov, 2006), possibly accelerated using Langevin dynamics (Du & Mordatch, 2019; Song et al., 2020), which leverages the gradient of the EBM to conduct sampling:\n\nx_k = x_{k−1} − (ε/2) ∇_x Eφ(x_{k−1}) + √ε ω, ω ∼ N(0, I), (3)\n\nwhere ε refers to the step size. 
x0 are samples drawn from a random initial distribution, and we take x_K after K Langevin dynamics steps as the generated samples from the stationary distribution.\n\nPublished as a conference paper at ICLR 2021
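To make the Langevin update in Eq. (3) concrete, here is a minimal NumPy sketch. The quadratic toy energy E(x) = ‖x‖²/2 (whose Boltzmann distribution is the standard normal) and all step-size/step-count choices are illustrative assumptions, not the paper's learned Eφ:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_sample(grad_energy, x0, eps=0.01, n_steps=2000):
    """Langevin dynamics as in Eq. (3):
    x_k = x_{k-1} - (eps/2) * grad E(x_{k-1}) + sqrt(eps) * w,  w ~ N(0, I)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        w = rng.standard_normal(x.shape)
        x = x - 0.5 * eps * grad_energy(x) + np.sqrt(eps) * w
    return x

# Toy energy E(x) = ||x||^2 / 2, so grad E(x) = x; chains started far from
# the mode drift toward samples from the stationary distribution N(0, I).
grad_E = lambda x: x
samples = np.stack([langevin_sample(grad_E, np.full(2, 5.0)) for _ in range(100)])
```

With a learned energy, `grad_energy` would instead be the gradient of the neural energy function with respect to atomic coordinates; this is the mechanism by which the tilting term later refines flow-generated conformations.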
sha1_base64=\"ORiRlP3d8Ck3qImxG8IgZjn4WBQ=\">AAAB+HicbVA9SwNBEJ2LXzF+RS1tFoNgIeFOAmoXsLFMwHxAcoS9zSRZsnt37O4J8cgvsNXeTmz9N7b+EjfJFSbxwcDjvRlm5gWx4Nq47reT29jc2t7J7xb29g8Oj4rHJ00dJYphg0UiUu2AahQ8xIbhRmA7VkhlILAVjO9nfusJleZR+GgmMfqSDkM+4IwaK9XjXrHklt05yDrxMlKCDLVe8afbj1giMTRMUK07nhsbP6XKcCZwWugmGmPKxnSIHUtDKlH76fzQKbmwSp8MImUrNGSu/p1IqdR6IgPbKakZ6VVvJv7ndRIzuPVTHsaJwZAtFg0SQUxEZl+TPlfIjJhYQpni9lbCRlRRZmw2S1sCOS3YULzVCNZJ87rsVcp39UqpepXFk4czOIdL8OAGqvAANWgAA4QXeIU359l5dz6cz0VrzslmTmEJztcvBTaTkA==</latexit>" }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 OVERVIEW", "text": "We first present a high-level description of our model. Directly learning a generative model on Cartesian coordinates heavily depends on the (arbitrary) rotation and translation (Mansimov et al., 2019). Therefore, in this paper we take the atomic pairwise distances as intermediate variables to generate conformations, which are invariant to rotation and translation. More precisely, the cornerstone of our method is to factorize the conditional distribution pθ(R|G) into the following formulation:\npθ(R|G) = ∫ p(R|d,G) · pθ(d|G) dd, (4)\nwhere pθ(d|G) models the distribution of inter-atomic distances given the graph G and p(R|d,G) models the distribution of conformations given the distances d. In particular, the conditional generative model pθ(d|G) is parameterized as a conditional graph continuous flow, which can be seen as a continuous dynamics system to transform the random initial noise to meaningful distances. This flow model enables us to capture the long-range dependencies between atoms in the hidden space during the dynamic steps.\nThough CGCF can capture the dependency between atoms in the hidden space, the distances of different edges are still independently updated in the transformations, which limits the capacity of modeling the dependency between atoms in the sampling process. 
Therefore, we further propose to correct pθ(R|G) with an energy-based tilting term Eφ(R,G):

pθ,φ(R|G) ∝ pθ(R|G) · exp(−Eφ(R,G)). (5) The tilting term is defined directly on the joint distribution of R and G, which explicitly captures the long-range interactions in observation space. The tilted distribution pθ,φ(R|G) can be used to provide refinement or optimization for the conformations generated from pθ(R|G). This energy function is also designed to be invariant to rotation and translation.

In the following parts, we first describe our flow-based generative model pθ(R|G) in Section 3.2 and then elaborate on the energy-based tilting model Eφ(R,G) in Section 3.3. Then we introduce the two-stage sampling process with both deterministic and stochastic dynamics in Section 3.4. An illustration of the whole framework is given in Fig. 1." }, { "heading": "3.2 FLOW-BASED GENERATIVE MODEL", "text": "Conditional Graph Continuous Flows pθ(d|G). We parameterize the conditional distribution of distances pθ(d|G) with a continuous normalizing flow, named Conditional Graph Continuous Flow (CGCF). CGCF defines the distribution through the following dynamics system:

d = Fθ(d(t0),G) = d(t0) + ∫ t1 t0 fθ(d(t), t;G)dt, d(t0) ∼ N (0, I) (6)

where the dynamic fθ is implemented by a Message Passing Neural Network (MPNN) (Gilmer et al., 2017), which is a widely used architecture for representation learning on molecular graphs. The MPNN takes node attributes, edge attributes and the bond lengths d(t) as input to compute the node and edge embeddings. Each message passing layer updates the node embeddings by aggregating information from neighboring nodes according to the hidden vectors of the respective nodes and edges. The final features are fed into a neural network to compute the value of the dynamic fθ for all distances independently. As t1 → ∞, our dynamic can have an infinite number of steps and is capable of modeling long-range dependencies.
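As a rough, self-contained illustration, the integral in Eq. 6 can be approximated by fixed-step Euler integration. In this sketch the MPNN-parameterized dynamic fθ is replaced by a toy linear drift (the `dynamics` function below is a placeholder, not the paper's architecture):

```python
import numpy as np

def dynamics(d, t):
    # Placeholder for the MPNN-parameterized dynamic f_theta(d(t), t; G);
    # a simple linear drift toward 2.0 keeps the example self-contained.
    return 0.5 * (2.0 - d)

def integrate_flow(d0, t0=0.0, t1=1.0, n_steps=100):
    """Euler approximation of d = d(t0) + integral_{t0}^{t1} f(d(t), t) dt."""
    d, t = np.array(d0, dtype=float), t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        d = d + dt * dynamics(d, t)
        t += dt
    return d

rng = np.random.default_rng(0)
z = rng.standard_normal(5)   # d(t0) ~ N(0, I), one entry per edge
d = integrate_flow(z)        # transformed distances
print(d)
```

In the paper the dynamic is graph-conditioned and trained; an adaptive ODE solver would typically replace the fixed-step loop above.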
The invertibility of Fθ allows us to not only conduct fast sampling, but also easily optimize the parameter set θ by minimizing the exact negative log-likelihood:

Lmle(d,G; θ) = −Epdata log pθ(d|G) = −Epdata [ log p(d(t0)) + ∫ t1 t0 Tr ( ∂fθ,G/∂d(t) ) dt ] . (7)

Closed-form p(R|d,G). The generated pair-wise distances can be converted into 3D structures through postprocessing methods such as the classic Euclidean Distance Geometry (EDG) algorithm. In this paper, we adopt an alternative way by defining the conformations as a conditional distribution:

p(R|d,G) = 1/Z exp { − ∑ euv∈E αuv ( ‖ru − rv‖2 − duv )² } , (8)

where Z is the partition function that normalizes the probability and {αuv} are parameters that control the variance of the desired Cartesian coordinates, which can be either learned or manually designed according to the graph structure G. With this probabilistic formulation, we can either conduct sampling via MCMC or search for the local optimum with optimization methods. This simple function is fast to calculate, making the generation procedure very efficient with a negligible computational cost.

Compared with the conventional EDG algorithm adopted in GraphDG (Simm & Hernández-Lobato, 2020), our probabilistic solution enjoys the following advantages: 1) p(R|d,G) enables the calculation of the likelihood pθ(R|G) of Eq. 4 by approximation methods, and thus can be further combined with the energy-based tilting term Eφ(R,G) to induce a superior distribution; 2) GraphDG suffers the drawback that when invalid sets of distances are generated, EDG will fail to construct a 3D structure. By contrast, our method can always successfully generate conformations by sampling from the distribution p(R|d,G)." }, { "heading": "3.3 ENERGY-BASED TILTING MODEL", "text": "The last part of our framework is the Energy-based Tilting Model (ETM) Eφ(R,G), which helps model the long-range interactions between atoms explicitly in the observation space.
Eφ(R,G) takes the form of SchNet (Schütt et al., 2017), which is widely used to model the potential-energy surfaces and energy-conserving force fields for molecules. The continuous-filter convolutional layers in SchNet allow each atom to aggregate the representations of all single, pairwise, and higher-order interactions between the atoms through non-linear functions. The final atomic representations are pooled to a single vector and then passed into a network to produce the scalar output.

Typically, EBMs can be learned by maximum likelihood, which usually requires a lengthy MCMC procedure and is time-consuming for training. In this work, we learn the ETM by Noise Contrastive Estimation (Gutmann & Hyvärinen, 2010), which is much more efficient. In practice, the noise distribution is required to be close to the data distribution, otherwise the classification problem would be too easy and would not guide Eφ to learn much about the modality of the data. We propose to take the pre-trained CGCF to serve as a strong noise distribution, leading to the following discriminative learning objective for the ETM2:

Lnce(R,G;φ) = − Epdata [ log 1/(1 + exp(Eφ(R,G))) ] − Epθ [ log 1/(1 + exp(−Eφ(R,G))) ] . (9)" }, { "heading": "3.4 SAMPLING", "text": "We employ a two-stage dynamic system to synthesize a possible conformation given the molecular graph representation G. In the first stage, we draw a latent variable ẑ from the Gaussian prior N (0, I), and then pass it through the continuous deterministic dynamics model Fθ defined in Eq. 6 to get d̂ = Fθ(ẑ,G). Then an optimization procedure such as stochastic gradient descent is employed to search for realistic conformations R with locally maximal probability p(R|d,G) (defined in Eq. 8). By doing this, an initial conformation R(0) can be generated.

2Detailed derivations of the training loss can be found in Appendix F.
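The first-stage coordinate search can be sketched as plain gradient descent on the stress term of Eq. 8 (i.e., ascending log p(R|d,G)). The triangle graph, the unit weights αuv = 1, and the step size below are illustrative choices, not values from the paper:

```python
import numpy as np

def recover_coords(edges, d_target, n_atoms, lr=0.05, n_steps=1000, seed=0):
    """Gradient descent on sum_{(u,v)} (||r_u - r_v|| - d_uv)^2, i.e. a local
    search for a mode of p(R|d,G) in Eq. 8 with all alpha_uv set to 1."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n_atoms, 3))  # random initial coordinates
    for _ in range(n_steps):
        grad = np.zeros_like(R)
        for (u, v), d in zip(edges, d_target):
            diff = R[u] - R[v]
            dist = np.linalg.norm(diff) + 1e-12
            g = 2.0 * (dist - d) * diff / dist  # gradient w.r.t. R[u]
            grad[u] += g
            grad[v] -= g
        R -= lr * grad
    return R

# Toy graph: three atoms with all pairwise target distances equal to 1.5.
edges = [(0, 1), (1, 2), (0, 2)]
d_target = [1.5, 1.5, 1.5]
R = recover_coords(edges, d_target, n_atoms=3)
for (u, v), d in zip(edges, d_target):
    print(round(float(np.linalg.norm(R[u] - R[v])), 3), "target", d)
```

Since the stress only depends on pairwise distances, any rotation or translation of the recovered coordinates is an equally valid solution.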
In the second stage, we further refine the initial conformation R(0) with the energy-based model defined in Eq. 5 through K steps of Langevin dynamics:

Rk = Rk−1 − (ε/2) ∇REθ,φ(Rk−1|G) + √ε ω, ω ∼ N (0, I),

where Eθ,φ(R|G) = − log pθ,φ(R|G) = Eφ(R,G) − log ∫ p(R|d,G)pθ(d|G)dd,

(10)

and ε denotes the step size. The second integration term in Eθ,φ can be estimated through approximate methods. In practice, we use Monte Carlo integration to conduct the approximation, which is simple yet effective with just a few distance samples from the CGCF model pθ(d|G)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENT SETUP", "text": "Evaluation Tasks. To evaluate the performance of the proposed model, we conduct experiments by comparing with the counterparts on: (1) Conformation Generation, which evaluates the model’s capacity to learn the distribution of conformations by measuring the diversity and accuracy of generated samples (section 4.2); (2) Distribution over distances, which was first proposed in Simm & Hernández-Lobato (2020) and concentrates on the distance geometry of generated conformations (section 4.3).

Benchmarks. We use the recently proposed GEOM-QM9 and GEOM-Drugs (Axelrod & Gomez-Bombarelli, 2020) datasets for the conformation generation task and the ISO17 dataset (Simm & Hernández-Lobato, 2020) for the distance modeling task. The choice of different datasets is due to their distinct properties. Specifically, the GEOM datasets consist of stable conformations, which makes them suitable for evaluating the conformation generation task. By contrast, ISO17 contains snapshots of molecular dynamics simulations, where the structures are not equilibrium conformations but can reflect the density around the equilibrium state.
Therefore, it is more suitable for assessing the similarity between the model distribution and the data distribution around equilibrium states.

More specifically, GEOM-QM9 is an extension of the QM9 (Ramakrishnan et al., 2014) dataset: it contains multiple conformations for most molecules, while the original QM9 only contains one. This dataset is limited to 9 heavy atoms (29 total atoms), with small molecular mass and few rotatable bonds. We randomly draw 50000 conformation-molecule pairs from GEOM-QM9 as the training set, and take another 17813 conformations covering 150 molecular graphs as the test set. The GEOM-Drugs dataset consists of much larger drug molecules, up to a maximum of 181 atoms (91 heavy atoms). It also contains multiple conformations for each molecule, with a larger variance in structures, e.g., there are 6.5 rotatable bonds on average. We randomly take 50000 conformation-molecule pairs from GEOM-Drugs as the training set, and another 9161 conformations (covering 100 molecular graphs) as the test split. The ISO17 dataset is also built upon QM9 and consists of 197 molecules, each with 5000 conformations. Following Simm & Hernández-Lobato (2020), we split ISO17 into a training set with 167 molecules and a test set with another 30 molecules.

Baselines. We compare our proposed method with the following state-of-the-art conformation generation methods. CVGAE (Mansimov et al., 2019) uses a conditional version of the VAE to directly generate the 3D coordinates of atoms given the molecular graph. GraphDG (Simm & Hernández-Lobato, 2020) also employs the conditional VAE framework. Instead of directly modeling the 3D structure, it learns the distribution over distances. The distances are then converted into conformations with an EDG algorithm.
Furthermore, we also take RDKit (Riniker & Landrum, 2015) as a baseline model, which is a classical EDG approach built upon extensive calculation collections in computational chemistry." }, { "heading": "4.2 CONFORMATION GENERATION", "text": "In this section, we evaluate the ability of the proposed method to model the equilibrium conformations. We focus on both the diversity and accuracy of the generated samples. More specifically, diversity measures the model’s capacity to generate multi-modal conformations, which is essential for discovering new conformations, while accuracy concentrates on the similarity between generated conformations and the equilibrium conformations.

Evaluation. For numerical evaluations, we follow previous work (Hawkins, 2017; Mansimov et al., 2019) and calculate the Root-Mean-Square Deviation (RMSD) of the heavy atoms between generated samples and reference ones. Precisely, given the generated conformation R and the reference R∗, we obtain R̂ by translating and rotating R∗ to minimize the following RMSD metric:

RMSD(R, R̂) = ( (1/n) ∑_{i=1}^{n} ‖Ri − R̂i‖² )^{1/2}, (11)

where n is the number of heavy atoms. The smallest distance is then taken as the evaluation metric. Built upon the RMSD metric, we define the Coverage (COV) and Matching (MAT) scores to measure diversity and quality, respectively. Intuitively, COV measures the fraction of conformations in the reference set that are matched by at least one conformation in the generated set. For each conformation in the generated set, its neighbors in the reference set within a given RMSD threshold

Table 2: Comparison of distance density modeling with different methods. We compare the marginal distribution of single (p(duv|G)), pair (p(duv, dij |G)) and all (p(d|G)) edges between C and O atoms. Molecular graphs G are taken from the test set of ISO17.
We take two metrics into consideration: 1) the median MMD between the ground truth and the generated samples, and 2) the mean ranking (1 to 3) based on the MMD metric.

Method | Single (Mean / Median) | Pair (Mean / Median) | All (Mean / Median)
RDKit | 3.4513 / 3.1602 | 3.8452 / 3.6287 | 4.0866 / 3.7519
CVGAE | 4.1789 / 4.1762 | 4.9184 / 5.1856 | 5.9747 / 5.9928
GraphDG | 0.7645 / 0.2346 | 0.8920 / 0.3287 | 1.1949 / 0.5485
CGCF | 0.4490 / 0.1786 | 0.5509 / 0.2734 | 0.8703 / 0.4447
CGCF + ETM | 0.5703 / 0.2411 | 0.6901 / 0.3482 | 1.0706 / 0.5411

δ are marked as matched:

COV(Sg(G), Sr(G)) = (1/|Sr|) |{R ∈ Sr | RMSD(R,R′) < δ, ∃R′ ∈ Sg}|, (12)

where Sg(G) denotes the set of generated conformations for molecular graph G, and Sr(G) denotes the reference set. In practice, the number of samples in the generated set is twice that of the reference set. Typically, a higher COV score means a better diversity performance. The COV score evaluates whether the generated conformations are diverse enough to cover the ground truth.

While COV is effective for measuring diversity and detecting mode collapse, it is still possible for the model to achieve a high COV with a high threshold tolerance. Hence, we define the MAT score as a complement to measure the quality of generated samples. For each conformation in the reference set, the RMSD distance to its nearest neighbor in the generated set is computed and averaged:

MAT(Sg(G), Sr(G)) = (1/|Sr|) ∑_{R′∈Sr} min_{R∈Sg} RMSD(R,R′). (13)

This metric concentrates on the accuracy of generated conformations. More realistic generated samples lead to a lower matching score.

Results. Tab. 1 shows that, compared with the existing state-of-the-art baselines, our CGCF model can already achieve superior performance on all four metrics (top 4 rows). As a CNF-based model, CGCF holds a much higher generative capacity for both diversity and quality compared with the VAE approaches. The results are further improved when combined with ETM to explicitly incorporate the long-range correlations.
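The COV (Eq. 12) and MAT (Eq. 13) scores can be computed as sketched below. For brevity, the `rmsd` helper here omits the optimal rotation/translation alignment of Eq. 11, so the numbers are only illustrative:

```python
import numpy as np

def rmsd(R1, R2):
    # Plain coordinate RMSD; Eq. 11 additionally aligns the two conformations
    # by an optimal rotation/translation, which is omitted in this sketch.
    return np.sqrt(np.mean(np.sum((R1 - R2) ** 2, axis=1)))

def cov_and_mat(generated, reference, delta=0.5):
    # dist[i, j] = RMSD between reference conformation i and generated j
    dist = np.array([[rmsd(r, g) for g in generated] for r in reference])
    cov = np.mean(dist.min(axis=1) < delta)  # Eq. 12: fraction of matched refs
    mat = np.mean(dist.min(axis=1))          # Eq. 13: mean nearest-neighbor RMSD
    return cov, mat

rng = np.random.default_rng(0)
reference = [rng.standard_normal((5, 3)) for _ in range(4)]
generated = [r + 0.01 * rng.standard_normal((5, 3)) for r in reference]
cov, mat = cov_and_mat(generated, reference)
print(cov, mat)
```

Here the generated set consists of slightly perturbed copies of the references, so COV is high and MAT is small; the toy conformations are random, not from any real molecule.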
We visualize several representative examples in Fig. 2, and leave more examples to Appendix G. A meaningful observation is that, though competitive with the other neural models, our model is occasionally outperformed by the rule-based RDKit method, which indicates that RDKit can generate more realistic structures. We argue that this is because, after generating the initial coordinates, RDKit involves additional hand-designed molecular force field (FF) energy functions (Rappé et al., 1992; Halgren, 1996a) to find the stable conformations with locally minimal energy. By contrast, instead of finding the local minima, our deep generative models aim to model and sample from the underlying distribution of structures. To yield a better comparison, we further test our model by taking the generated structures as initial states and utilizing the Merck Molecular Force Field (MMFF) (Halgren, 1996a) to find the locally stable points. A more precise description of the MMFF force field algorithms in RDKit is given in Appendix I. This postprocessing procedure is also employed in previous work (Mansimov et al., 2019). Additional results in Tab. 1 verify our conjecture that the FF plays a vital role in generating more realistic structures, and demonstrate the capacity of our method to generate high-quality initial coordinates." }, { "heading": "4.3 DISTRIBUTIONS OVER DISTANCES", "text": "Though primarily designed for 3D coordinates, we also follow Simm & Hernández-Lobato (2020) and evaluate the generated distributions of pairwise distances, which can be viewed as a representative element of the model's capacity to capture inter-atomic interactions.

Evaluation. Let p(duv|G) denote the conditional distribution of distances on each edge euv given a molecular graph G. The set of distances is computed from the generated conformations R. We calculate the maximum mean discrepancy (MMD) (Gretton et al., 2012) to compare the generated distributions with the ground-truth distributions.
Specifically, we evaluate the distribution of individual distances p(duv|G), pair distances p(duv, dij |G) and all distances p(d|G). For this benchmark, the number of samples in the generated set is the same as that of the reference set.

Results. The results of MMD are summarized in Tab. 2. The statistics show that RDKit shows the worst performance, because it only aims to generate the most stable structures, as illustrated in Section 4.2. For CGCF, the generated samples are significantly closer to the ground-truth distribution than those of the baseline methods, and we consistently achieve the best numerical results. Besides, we notice that ETM slightly hurts the performance in this task. However, one should note that this phenomenon is natural, because ETM typically sharpens the generated distribution towards the stable conformations with locally minimal energy. By contrast, the ISO17 dataset consists of snapshots of molecular dynamics where the structures are not equilibrium conformations but samples from the density around the equilibrium state. Therefore, ETM will slightly hurt the results. This phenomenon is also consistent with the observations for RDKit. Instead of generating unbiased samples from the underlying distribution, RDKit will only generate the stable structures with locally minimal energy by involving the hand-designed molecular force field (Simm & Hernández-Lobato, 2020). As shown in the results, though highly competitive in Tab. 1, RDKit yields much weaker results in Tab. 2. The marginal distributions P (duv|G) for pairwise distances are visualized in Appendix K, which further demonstrates the superior capacity of our proposed method.

We also follow Mansimov et al. (2019) to calculate the diversity of conformations generated by all compared methods, which is measured by calculating the mean and standard deviation of the pairwise RMSD between each pair of generated conformations per molecule. The results shown in Tab.
3 demonstrate that while our method can achieve the lowest MMD, it does not collapse to generating extremely similar conformations. Besides, we observe that ETM slightly hurts the diversity of CGCF, which verifies our statement that ETM sharpens the generated distribution towards the stable conformations with locally minimal energy." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we propose a novel probabilistic framework for molecular conformation generation. Our generative model combines the advantages of both flow-based and energy-based models, which is capable of modeling the complex multi-modal geometric distribution and highly branched atomic correlations. Experimental results show that our method outperforms all previous state-of-the-art baselines on the standard benchmarks. Future work includes applying our framework to much larger datasets and extending it to more challenging structures (e.g., proteins)." }, { "heading": "ACKNOWLEDGMENTS", "text": "This project is supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and a NRC Collaborative R&D Project (AI4D-CORE-06). This project was also partially funded by IVADO Fundamental Research Project grant PRF-2019-3583139727." }, { "heading": "A RELATED WORKS", "text": "Conformation Generation. There have been results showing that deep learning can speed up molecular dynamics simulation by learning efficient alternatives to quantum mechanics-based energy calculations (Schütt et al., 2017; Smith et al., 2017). However, though accelerated by neural networks, these approaches are still time-consuming due to the lengthy MCMC process. Recently, Gebauer et al. (2019) and Hoffmann & Noé (2019) propose to directly generate 3D structures with deep generative models.
However, these models can hardly capture graph- or bond-based structure, which is typically complex and highly branched. Some other works (Lemke & Peter, 2019; AlQuraishi, 2019; Ingraham et al., 2019; Noé et al., 2019; Senior et al., 2020) also focus on learning models to directly generate 3D structure, but concentrate on the protein folding problem. Unfortunately, proteins are linear structures while general molecules are highly branched, making these methods not naturally transferable to general molecular conformation generation tasks.

Energy-based Generative Model. There has been a long history of energy-based generative models. Xie et al. (2016) propose to train an energy-based model parameterized by a modern deep neural network and to learn it by Langevin-based MLE. The model is called generative ConvNet since it can be derived from the discriminative ConvNet. In particular, this paper is the first to formulate a modern ConvNet-parametrized EBM as exponential tilting of a reference distribution, and to connect it to the discriminative ConvNet classifier. More recently, Du & Mordatch (2019) implemented deep EBMs with a ConvNet as the energy function and achieved impressive results on image generation.

Different from the previous works, we concentrate on molecular geometry generation, and propose a novel and principled probabilistic framework to address the domain-specific problems. More specifically, we first predict the atomic distances through the continuous normalizing flow, and then convert them to the desired 3D conformation and optimize it with the energy-based model. This procedure enables us to keep the rotational and translational invariance property. Besides, to the best of our knowledge, we are the first to combine neural ODEs with EBMs. We use the ODE model to improve the training of the EBM, and combine both to conduct the two-stage sampling dynamics."
}, { "heading": "B DATA PREPROCESS", "text": "Inspired by classic molecular distance geometry (Crippen et al., 1988), we also generate the conformations by first predicting all the pairwise distances, which is invariant to rotation and translation. Since the bonds existing in the molecular graph are not sufficient to characterize a conformation, we pre-process the graphs by adding auxiliary edges. Specifically, the atoms that are 2 or 3 hops away are connected with virtual bonds, labeled differently from the real bonds of the original graph. These extra edges contribute to reducing the degrees of freedom in the 3D coordinates, with the edges between second neighbors helping to fix the angles between atoms, and those between third neighbors fixing dihedral angles." }, { "heading": "C NETWORK ARCHITECTURE", "text": "In this section, we elaborate on the network architecture details of CGCF and ETM.

C.1 CONTINUOUS GRAPH FLOW

In CGCF, the dynamic function fθ defined in Eq. 6 is instantiated with a message passing neural network. Given the node attributes, edge attributes and intermediate edge lengths as input, we first embed them into the feature space through feedforward networks:

h(0)v = NodeEmbedding(v), v ∈ V, heuv = EdgeEmbedding(euv, duv(t0)), euv ∈ E .

(14)

Then, the node and edge features are passed sequentially through L message passing layers along the graph structure G:

h(ℓ)v = MLP ( h(ℓ−1)v + ∑ u∈NG(v) σ(h(ℓ−1)u + heuv ) ) , ℓ = 1 . . . L, (15)

where NG(v) denotes the first neighbors in the graph G and σ is the activation function. After L message passing layers, we use the final hidden representations h(L) as the node representations. Then, for each bond, the corresponding node features are aggregated along with the edge feature and fed into a neural network to compute the value of the dynamic fθ:

∂duv/∂t = NN(hu, hv, heuv, t). (16)

C.2 ENERGY-BASED TILTING MODEL

The ETM is implemented with SchNet.
It takes both the graph and conformation information as input and outputs a scalar to indicate the energy level. Let the atoms be described by a tuple of features X^l = (x^l_1, . . . , x^l_n), where n denotes the number of atoms and l denotes the layer. Then, given the positions R, the node embeddings are updated by the convolution with all surrounding atoms:

x^{l+1}_i = ( X^l ∗ W^l )_i = ∑_{j=0}^{n_atoms} x^l_j ◦ W^l (rj − ri) , (17)

where ◦ represents element-wise multiplication. The above function can include translational and rotational invariance by computing pairwise distances instead of using relative positions. After L convolutional layers, we apply sum-pooling over the node embeddings to calculate the global embedding for the whole molecular structure. The global embedding is then fed into a feedforward network to compute the scalar energy level." }, { "heading": "D TWO-STAGE DYNAMIC SYSTEM FOR SAMPLING", "text": "Algorithm 1 Sampling Procedure of the Proposed Method Input: molecular graph G, CGCF model with parameter θ, ETM with parameter φ, the number of optimization steps M for p(R|d,G) and its step size r, the number of MCMC steps N for Eθ,φ and its step size ε Output: molecular conformation R

1: Sample d(t0) ∼ N (0, I)
2: d = Fθ(d(t0),G)
3: for m = 1, ...,M do
4: Rm = Rm−1 + r∇R log p(R|d,G)
5: end for
6: for n = 1, ..., N do
7: Rn = Rn−1 − (ε/2)∇REθ,φ(Rn−1|G) + √ε ω, ω ∼ N (0, I)
8: end for

E IMPLEMENTATION DETAILS

Our model is implemented in PyTorch (Paszke et al., 2017). The MPNN in CGCF is implemented with 3 layers, and the embedding dimension is set to 128. The SchNet in ETM is implemented with 6 layers, also with an embedding dimension of 128. We train our CGCF with a batch size of 128 and a learning rate of 0.001 until convergence. After obtaining the CGCF, we train the ETM with a batch size of 384 and a learning rate of 0.001 until convergence.
For all experimental settings, we use Adam (Kingma & Ba, 2014) to optimize our model." }, { "heading": "F DETAILED DERIVATIONS OF ENERGY-BASED MODEL", "text": "Here we present the detailed derivations of the training objective function of the Energy-based Tilting Model (ETM) in Eq. 9:

Lnce(R,G;φ) = − Epdata [ log pθ,φ(R|G) / (pθ,φ(R|G) + pθ(R|G)) ] − Epθ [ log pθ(R|G) / (pθ,φ(R|G) + pθ(R|G)) ]

= − Epdata [ log pθ(R|G) exp(−Eφ(R,G)) / (pθ(R|G) exp(−Eφ(R,G)) + pθ(R|G)) ] − Epθ [ log pθ(R|G) / (pθ(R|G) exp(−Eφ(R,G)) + pθ(R|G)) ]

= − Epdata [ log 1/(1 + exp(Eφ(R,G))) ] − Epθ [ log 1/(1 + exp(−Eφ(R,G))) ] .

(18)" }, { "heading": "G MORE GENERATED SAMPLES", "text": "We present more visualizations of generated 3D structures in Fig. 3, which are generated by our model (CGCF + ETM) learned on both the GEOM-QM9 and GEOM-Drugs datasets. The visualizations demonstrate that our proposed framework holds a high capacity to model chemical structures in 3D coordinates." }, { "heading": "H MORE RESULTS OF COVERAGE SCORE", "text": "We give more results of the coverage (COV) score with different thresholds δ in Fig. 4. As shown in the figure, our proposed method consistently outperforms the previous state-of-the-art baselines CVGAE and GraphDG, which demonstrates the effectiveness of our model.

I IMPLEMENTATION FOR MMFF

In this section, we give a more precise description of the MMFF force field implementation in the RDKit toolkit (Riniker & Landrum, 2015).

In MMFF, the energy expression consists of seven terms: bond stretching, angle bending, stretch-bend, out-of-plane bending, torsional, van der Waals and electrostatic. The detailed functional form of the individual terms can be found in the original literature (Halgren, 1996a). To build the force field for a given molecular system, the first step is to assign correct types to each atom. In the second step, atom-centered partial charges are computed according to the MMFF charge model (Halgren, 1996b).
Then, all bonded and non-bonded interactions in the molecular system under study, depending on its structure and connectivity, are loaded into the energy expression. Optionally, external restraining terms can be added to the MMFF energy expression, with the purpose of constraining selected internal coordinates during geometry optimizations. Once all bonded and non-bonded interactions, plus optional restraints, have been loaded into the MMFF energy expression, potential gradients of the system under study can be computed to minimize the energy." }, { "heading": "J MORE EVALUATIONS FOR CONFORMATION GENERATION", "text": "Junk Rate. The COV and MAT scores in Section 4.2 do not explicitly measure falsely generated samples. Here we additionally define the Junk rate measure. Intuitively, JUNK measures the fraction of generated conformations that are far away from all the conformations in the reference set. For each conformation in the generated set, it is marked as a false sample if its RMSD to all the conformations of the reference set is above a given threshold δ:

JUNK(Sg(G), Sr(G)) = (1/|Sg|) |{R ∈ Sg | RMSD(R,R′) > δ, ∀R′ ∈ Sr}|. (19)

Typically, a lower JUNK rate means better generation quality. The results are shown in Tab. 4. As shown in the table, our CGCF model can already outperform the existing state-of-the-art baselines by a clear margin. The results are further improved when combined with ETM to explicitly incorporate the long-range correlations." }, { "heading": "K DISTANCE DISTRIBUTION VISUALIZATION", "text": "In Fig. 5, we plot the marginal distributions p(duv|G) for all pairwise distances between C and O atoms of a molecular graph in the ISO17 test set. As shown in the figure, though primarily designed for 3D structure generation, our method makes much better estimations of the distances than GraphDG, which is the state-of-the-art model for molecular geometry prediction.
As a representative element of the pairwise properties between atoms, the inter-atomic distances further demonstrate the capacity of our model to capture inter-atomic interactions." } ]
2021
LEARNING NEURAL GENERATIVE DYNAMICS FOR MOLECULAR CONFORMATION GENERATION
SP:fdd497d17b5a12017b4ceb377de57bfc18ebd815
[ "In this paper the authors propose a novel architecture, called Mass-Conserving LSTM (MC-LSTM) based on LSTM. The authors base their work over the hypothesis that the real world is based over conservation laws related to mass, energy, etc. Thus, they propose that also the quantities involved in deep learning models should be conserved. To do so, they aim at exploiting the memory cells of the LSTM as mass accumulators and then force the conservation laws via the model equations. The authors finally show successfully the potential of this novel network into three experimental settings where several types of “conservation” are required (e.g. mass conservation, energy conservation, etc).", "The paper provides an interesting and novel LSTM structure named MC-LSTM, which extends the inductive bias of LSTM to deal with some real-world problems limited by conservation laws. The authors do some experiments related to traffic forecasting and hydrology to illustrate the effectiveness of MC-LSTM. The new architecture is well-suited for predicting some physical systems, which is valuable." ]
The success of Convolutional Neural Networks (CNNs) in computer vision is mainly driven by their strong inductive bias, which is powerful enough to allow CNNs to solve vision-related tasks with random weights, that is, without learning. Similarly, Long Short-Term Memory (LSTM) has a strong inductive bias towards storing information over time. However, many real-world systems are governed by conservation laws, which lead to the redistribution of particular quantities, e.g. in physical and economic systems. Our novel Mass-Conserving LSTM (MC-LSTM) adheres to these conservation laws by extending the inductive bias of LSTM to model the redistribution of those stored quantities. MC-LSTMs set a new state-of-the-art for neural arithmetic units at learning arithmetic operations, such as addition tasks, which have a strong conservation law, as the sum is constant over time. Further, MC-LSTM is applied to traffic forecasting, modeling a pendulum, and a large benchmark dataset in hydrology, where it sets a new state-of-the-art for predicting peak flows. In the hydrology example, we show that MC-LSTM states correlate with real-world processes and are therefore interpretable.
[]
[ { "authors": [ "Nans Addor", "Andrew J Newman", "Naoki Mizukami", "Martyn P Clark" ], "title": "The camels data set: catchment attributes and meteorology for large-sample studies", "venue": "Hydrology and Earth System Sciences (HESS),", "year": 2017 }, { "authors": [ "Nans Addor", "Andrew J. Newman", "Naoki Mizukami", "Martyn P. Clark" ], "title": "Catchment attributes for large-sample studies", "venue": "Boulder, CO: UCAR/NCAR,", "year": 2017 }, { "authors": [ "Eric A Anderson" ], "title": "National weather service river forecast system: Snow accumulation and ablation model", "venue": "NOAA Tech. Memo. NWS HYDRO-17,", "year": 1973 }, { "authors": [ "Maren Awiszus", "Bodo Rosenhahn" ], "title": "Markov chain neural networks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2018 }, { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "PloS one,", "year": 2015 }, { "authors": [ "Samy Bengio", "Oriol Vinyals", "Navdeep Jaitly", "Noam Shazeer" ], "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Tom Beucler", "Michael Pritchard", "Stephan Rasp", "Pierre Gentine", "Jordan Ott", "Pierre Baldi" ], "title": "Enforcing analytic constraints in neural-networks emulating physical systems, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Tom Beucler", "Stephan Rasp", "Michael Pritchard", "Pierre Gentine" ], "title": "Achieving conservation of energy in neural network emulators for climate modeling", "venue": "arXiv preprint arXiv:1906.06622,", "year": 2019 }, { "authors": [ "Tom Beucler", "Stephan Rasp", "Michael Pritchard", "Pierre Gentine" ], "title": "Achieving conservation of 
energy in neural network emulators for climate modeling", "venue": "ICML Workshop “Climate Change: How Can AI Help?”,", "year": 2019 }, { "authors": [ "Keith Beven" ], "title": "Deep learning, hydrological processes and the uniqueness of place", "venue": "Hydrological Processes,", "year": 2020 }, { "authors": [ "Keith J Beven" ], "title": "Rainfall-runoff modelling: the primer", "venue": null, "year": 2011 }, { "authors": [ "Bernd Bohnet", "Ryan McDonald", "Goncalo Simoes", "Daniel Andor", "Emily Pitler", "Joshua Maynez" ], "title": "Morphosyntactic tagging with a meta-bilstm model over context sensitive token encodings", "venue": "arXiv preprint arXiv:1805.08237,", "year": 2018 }, { "authors": [ "Charles Bordenave", "Pietro Caputo", "Djalil Chafai" ], "title": "Circular law theorem for random markov matrices", "venue": "Probability Theory and Related Fields,", "year": 2012 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks", "venue": "arXiv preprint arXiv:1909.13334,", "year": 2019 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "N. Cohen", "A. 
Shashua" ], "title": "Inductive bias of deep convolutional networks through pooling geometry", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Miles Cranmer", "Alvaro Sanchez Gonzalez", "Peter Battaglia", "Rui Xu", "Kyle Cranmer", "David Spergel", "Shirley Ho" ], "title": "Discovering symbolic models from deep learning with inductive biases", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Zhiyong Cui", "Kristian Henrickson", "Ruimin Ke", "Yinhai Wang" ], "title": "Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting", "venue": "IEEE Transactions on Intelligent Transportation Systems,", "year": 2019 }, { "authors": [ "Gustavo Deco", "Wilfried Brauer" ], "title": "Nonlinear higher-order statistical decorrelation by volumeconserving neural architectures", "venue": "Neural Networks,", "year": 1995 }, { "authors": [ "Martin R Evans", "Tom Hanney" ], "title": "Nonequilibrium statistical mechanics of the zero-range process and related models", "venue": "Journal of Physics A: Mathematical and General,", "year": 2005 }, { "authors": [ "R Allan Freeze", "RL Harlan" ], "title": "Blueprint for a physically-based, digitally-simulated hydrologic response model", "venue": "Journal of Hydrology,", "year": 1969 }, { "authors": [ "Kunihiko Fukushima" ], "title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position", "venue": "Biological Cybernetics,", "year": 1980 }, { "authors": [ "A. Gaier", "D. Ha" ], "title": "Weight agnostic neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "C.R. Gallistel" ], "title": "Finding numbers in the brain", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2018 }, { "authors": [ "Felix A. 
Gers", "Jürgen Schmidhuber", "Fred Cummins" ], "title": "Learning to forget: Continual prediction with lstm", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Samuel Greydanus", "Misko Dzamba", "Jason Yosinski" ], "title": "Hamiltonian neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "M. Hairer" ], "title": "Ergodic properties of markov processes", "venue": "Lecture notes,", "year": 2018 }, { "authors": [ "K. He", "Y. Wang", "J. Hopcroft" ], "title": "A powerful generative model using random weights for the deep image representation", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Sepp Hochreiter" ], "title": "Untersuchungen zu dynamischen neuronalen Netzen", "venue": "PhD thesis, Technische Universität München,", "year": 1991 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Raban Iten", "Tony Metger", "Henrik Wilming", "Lídia Del Rio", "Renato Renner" ], "title": "Discovering physical concepts with neural networks", "venue": "Physical Review Letters,", "year": 2020 }, { "authors": [ "Xiaowei Jia", "Jared Willard", "Anuj Karpatne", "Jordan Read", "Jacob Zwart", "Michael Steinbach", "Vipin Kumar" ], "title": "Physics guided rnns for modeling dynamical systems: A case study in simulating lake temperature profiles", "venue": "In Proceedings of the 2019 SIAM International Conference on Data Mining,", "year": 2019 }, { "authors": [ "Anuj Karpatne", "Gowtham Atluri", "James H Faghmous", "Michael Steinbach", "Arindam Banerjee", "Auroop Ganguly", "Shashi Shekhar", "Nagiza 
Samatova", "Vipin Kumar" ], "title": "Theory-guided data science: A new paradigm for scientific discovery from data", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Ba Jimmy" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks. In Advances in neural information processing systems", "venue": null, "year": 2017 }, { "authors": [ "Elena Kochkina", "Maria Liakata", "Isabelle Augenstein" ], "title": "Turing at semeval-2017 task 8: Sequential approach to rumour stance classification with branch-lstm", "venue": "arXiv preprint arXiv:1704.07221,", "year": 2017 }, { "authors": [ "Frederik Kratzert", "Daniel Klotz", "Claire Brenner", "Karsten Schulz", "Mathew Herrnegger" ], "title": "Rainfall– runoff modelling using long short-term memory (lstm) networks", "venue": "Hydrology and Earth System Sciences,", "year": 2018 }, { "authors": [ "Frederik Kratzert", "Mathew Herrnegger", "Daniel Klotz", "Sepp Hochreiter", "Günter Klambauer" ], "title": "NeuralHydrology–Interpreting LSTMs in Hydrology", "venue": null, "year": 2019 }, { "authors": [ "Frederik Kratzert", "Daniel Klotz", "Mathew Herrnegger", "Alden K Sampson", "Sepp Hochreiter", "Grey S Nearing" ], "title": "Toward improved predictions in ungauged basins: Exploiting the power of machine learning", "venue": "Water Resources Research,", "year": 2019 }, { "authors": [ "Frederik Kratzert", "Daniel Klotz", "Guy Shalev", "Günter Klambauer", "Sepp Hochreiter", "Grey Nearing" ], "title": "Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets", "venue": "Hydrology and Earth System Sciences,", "year": 2019 }, { "authors": [ "Frederik Kratzert", "Daniel Klotz", "Sepp 
Hochreiter", "Grey Nearing" ], "title": "A note on leveraging synergy in multiple meteorological datasets with deep learning for rainfall-runoff modeling", "venue": "Hydrology and Earth System Sciences Discussions,", "year": 2020 }, { "authors": [ "David P Kreil", "Michael K Kopp", "David Jonietz", "Moritz Neun", "Aleksandra Gruca", "Pedro Herruzo", "Henry Martin", "Ali Soleymani", "Sepp Hochreiter" ], "title": "The surprising efficiency of framing geo-spatial time series forecasting as a video prediction task–insights from the iarai traffic4cast competition at neurips 2019", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Sebastian Lapuschkin", "Stephan Wäldchen", "Alexander Binder", "Grégoire Montavon", "Wojciech Samek", "Klaus-Robert Müller" ], "title": "Unmasking clever hans predictors and assessing what machines really learn", "venue": "Nature communications,", "year": 2019 }, { "authors": [ "Y. LeCun", "Y. Bengio" ], "title": "Convolutional Networks for Images, Speech, and Time Series, pp. 255–258", "venue": null, "year": 1998 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Gang Liu", "Jiabao Guo" ], "title": "Bidirectional lstm with attention mechanism and convolutional layer for text", "venue": "classification. Neurocomputing,", "year": 2019 }, { "authors": [ "Yang Liu", "Zhiyuan Liu", "Ruo Jia" ], "title": "Deeppf: A deep learning based architecture for metro passenger flow prediction", "venue": "Transportation Research Part C: Emerging Technologies,", "year": 2019 }, { "authors": [ "Andreas Madsen", "Alexander Rosenberg Johansen" ], "title": "Neural arithmetic units", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "T.M. 
Mitchell" ], "title": "The need for biases in learning generalizations", "venue": "Technical Report CBM-TR-117,", "year": 1980 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Naoki Mizukami", "Martyn P. Clark", "Andrew J. Newman", "Andrew W. Wood", "Ethan D. Gutmann", "Bart Nijssen", "Oldrich Rakovec", "Luis Samaniego" ], "title": "Towards seamless large-domain parameter estimation for hydrologic models", "venue": "Water Resources Research,", "year": 2017 }, { "authors": [ "Naoki Mizukami", "Oldrich Rakovec", "Andrew J Newman", "Martyn P Clark", "Andrew W Wood", "Hoshin V Gupta", "Rohini Kumar" ], "title": "On the choice of calibration metrics for “high-flow” estimation using hydrologic models", "venue": "Hydrology and Earth System Sciences,", "year": 2019 }, { "authors": [ "Do H Nam", "Donald R Drew" ], "title": "Traffic dynamics: Method for estimating freeway travel times in real time from flow measurements", "venue": "Journal of Transportation Engineering,", "year": 1996 }, { "authors": [ "Grey S. Nearing", "Yudong Tian", "Hoshin V. Gupta", "Martyn P. Clark", "Kenneth W. Harrison", "Steven V. 
Weijs" ], "title": "A philosophical basis for hydrological uncertainty", "venue": "Hydrological Sciences Journal,", "year": 2016 }, { "authors": [ "AJ Newman", "K Sampson", "MP Clark", "A Bock", "RJ Viger", "D Blodgett" ], "title": "A large-sample watershedscale hydrometeorological dataset for the contiguous USA", "venue": "Boulder, CO: UCAR/NCAR,", "year": 2014 }, { "authors": [ "AJ Newman", "MP Clark", "Kevin Sampson", "Andrew Wood", "LE Hay", "A Bock", "RJ Viger", "D Blodgett", "L Brekke", "JR Arnold" ], "title": "Development of a large-sample watershed-scale hydrometeorological data set for the contiguous USA: data set characteristics and assessment of regional variability in hydrologic model performance", "venue": "Hydrology and Earth System Sciences,", "year": 2015 }, { "authors": [ "Andrew J Newman", "Naoki Mizukami", "Martyn P Clark", "Andrew W Wood", "Bart Nijssen", "Grey Nearing" ], "title": "Benchmarking of a physically based hydrologic model", "venue": "Journal of Hydrometeorology,", "year": 2017 }, { "authors": [ "Andreas Nieder" ], "title": "The neuronal code for number", "venue": "Nature Reviews Neuroscience,", "year": 2016 }, { "authors": [ "Christopher Olah" ], "title": "Understanding LSTM networks, 2015. 
URL https://colah.github.io/ posts/2015-08-Understanding-LSTMs", "venue": null, "year": 2015 }, { "authors": [ "George Papamakarios", "Eric Nalisnick", "Danilo Jimenez Rezende", "Shakir Mohamed", "Balaji Lakshminarayanan" ], "title": "Normalizing flows for probabilistic modeling and inference", "venue": "Technical report,", "year": 2019 }, { "authors": [ "Herschel Rabitz", "Ömer F Aliş", "Jeffrey Shorter", "Kyurhee Shim" ], "title": "Efficient input—output model representations", "venue": "Computer physics communications,", "year": 1999 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George E Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Oldrich Rakovec", "Naoki Mizukami", "Rohini Kumar", "Andrew J Newman", "Stephan Thober", "Andrew W Wood", "Martyn P Clark", "Luis Samaniego" ], "title": "Diagnostic evaluation of large-domain hydrologic models calibrated across the contiguous united states", "venue": "Journal of Geophysical Research: Atmospheres,", "year": 2019 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "J. Schmidhuber", "D. Wierstra", "M. Gagliolo", "F. 
Gomez" ], "title": "Training recurrent networks by Evolino", "venue": "Neural Computation,", "year": 2007 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Deep learning in neural networks: An overview", "venue": "Neural networks,", "year": 2015 }, { "authors": [ "Michael Schmidt", "Hod Lipson" ], "title": "Distilling free-form natural laws from experimental data", "venue": null, "year": 2009 }, { "authors": [ "Jan Seibert", "Marc J.P. Vis", "Elizabeth Lewis", "H.J. van Meerveld" ], "title": "Upper and lower benchmarks in hydrological modelling", "venue": "Hydrological Processes,", "year": 2018 }, { "authors": [ "SL Sellars" ], "title": "grand challenges” in big data and the earth sciences", "venue": "Bulletin of the American Meteorological Society,", "year": 2018 }, { "authors": [ "Xin-Hua Song", "Philip K Hopke" ], "title": "Solving the chemical mass balance problem using an artificial neural network", "venue": "Environmental science & technology,", "year": 1996 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "David Alexander Tedjopurnomo", "Zhifeng Bao", "Baihua Zheng", "Farhana Choudhury", "AK Qin" ], "title": "A survey on modern deep neural network for traffic prediction: Trends, methods and challenges", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2020 }, { "authors": [ "E. 
Todini" ], "title": "Rainfall-runoff modeling — past, present and future", "venue": "Journal of Hydrology,", "year": 1988 }, { "authors": [ "Andrew Trask", "Felix Hill", "Scott E Reed", "Jack Rae", "Chris Dyer", "Phil Blunsom" ], "title": "Neural arithmetic logic units", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "D. Ulyanov", "A. Vedaldi", "V. Lempitsky" ], "title": "Deep image prior", "venue": "International Journal of Computer Vision,", "year": 2020 }, { "authors": [ "A.J. van der Schaft", "M. Dalsmo", "B.M. Maschke" ], "title": "Mathematical structures in the network representation of energy-conserving physical systems", "venue": "In Proceedings of 35th IEEE Conference on Decision and Control,", "year": 1996 }, { "authors": [ "Lelitha Vanajakshi", "LR Rilett" ], "title": "Loop detector data diagnostics based on conservation-of-vehicles principle", "venue": "Transportation research record,", "year": 2004 }, { "authors": [ "Xinping Xiao", "Huiming Duan" ], "title": "A new grey model for traffic flow mechanics", "venue": "Engineering Applications of Artificial Intelligence,", "year": 2020 }, { "authors": [ "LI Yitian", "Roy R Gu" ], "title": "Modeling flow and sediment transport in a river system using an artificial neural network", "venue": "Environmental management,", "year": 2003 }, { "authors": [ "Zheng Zhao", "Weihai Chen", "Xingming Wu", "Peter CY Chen", "Jingmeng Liu" ], "title": "Lstm network: a deep learning approach for short-term traffic forecast", "venue": "IET Intelligent Transport Systems,", "year": 2017 }, { "authors": [ "Madsen", "Johansen" ], "title": "This means that all networks had again a single hidden layer. The NAU, Neural Multiplication Unit (NMU) and NALU networks all had two hidden units and, respectively, NAU, NMU and NALU output layers. The first, recurrent layer for the first two networks was a NAU and the NALU network used a recurrent NALU layer. 
For the exact initialization of NAU and NALU, we refer to (Madsen", "venue": null, "year": 2020 }, { "authors": [ "Trask" ], "title": "2020) fixed the number of hidden units to two with the idea that each unit can learn one term of the addition operation", "venue": "For the addition task (i.e.,", "year": 2020 }, { "authors": [ "VIC (Newman" ], "title": "2017), three different model structures of FUSE1, mHM (Mizukami", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Inductive biases enabled the success of CNNs and LSTMs. One of the greatest success stories of deep learning is Convolutional Neural Networks (CNNs) (Fukushima, 1980; LeCun & Bengio, 1998; Schmidhuber, 2015; LeCun et al., 2015), whose proficiency can be attributed to their strong inductive bias towards visual tasks (Cohen & Shashua, 2017; Gaier & Ha, 2019). The effect of this inductive bias has been demonstrated by CNNs that solve vision-related tasks with random weights, meaning without learning (He et al., 2016; Gaier & Ha, 2019; Ulyanov et al., 2020). Another success story is Long Short-Term Memory (LSTM) (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997), which has a strong inductive bias toward storing information through its memory cells. This inductive bias allows LSTM to excel at speech, text, and language tasks (Sutskever et al., 2014; Bohnet et al., 2018; Kochkina et al., 2017; Liu & Guo, 2019), as well as timeseries prediction. Even with random weights and only a learned linear output layer, LSTM is better at predicting timeseries than reservoir methods (Schmidhuber et al., 2007). In a seminal paper on biases in machine learning, Mitchell (1980) stated that “biases and initial knowledge are at the heart of the ability to generalize beyond observed data”. Therefore, choosing an appropriate architecture and inductive bias for deep neural networks is key to generalization.

Mechanisms beyond storing are required for real-world applications. While LSTM can store information over time, real-world applications require mechanisms that go beyond storing. Many real-world systems are governed by conservation laws related to mass, energy, momentum, charge, or particle counts, which are often expressed through continuity equations.
In physical systems, different types of energies, mass or particles have to be conserved (Evans & Hanney, 2005; Rabitz et al., 1999; van der Schaft et al., 1996), in hydrology it is the amount of water (Freeze & Harlan, 1969; Beven, 2011), in traffic and transportation the number of vehicles (Vanajakshi & Rilett, 2004; Xiao & Duan, 2020; Zhao et al., 2017), and in logistics the amount of goods, money or products. A real-world task could be to predict the outgoing goods from a warehouse based on a general state of the warehouse, i.e., how many goods are in storage, and the incoming supplies. If the predictions are not precise, they do not allow optimal control of the production process. For modeling such systems, certain inputs must be conserved but also redistributed across storage locations within the system. We will refer to conserved inputs as mass, but note that this can be any type of conserved quantity. We argue that for modeling such systems, specialized mechanisms should be used to represent locations & whereabouts, objects, or storage & placing locations and thus enable conservation.

All code to reproduce the results will be made available on GitHub.

Conservation laws should pervade machine learning models in the physical world. Since a large part of machine learning models are developed to be deployed in the real world, in which conservation laws are omnipresent rather than the exception, these models should adhere to them automatically and benefit from them. However, standard deep learning approaches struggle to conserve quantities across layers or timesteps (Beucler et al., 2019b; Greydanus et al., 2019; Song & Hopke, 1996; Yitian & Gu, 2003), and often solve a task by exploiting spurious correlations (Szegedy et al., 2014; Lapuschkin et al., 2019).
Thus, an inductive bias of deep learning approaches via mass conservation over time in an open system, where mass can be added and removed, could lead to a higher generalization performance than standard deep learning for the above-mentioned tasks.

A mass-conserving LSTM. In this work, we introduce Mass-Conserving LSTM (MC-LSTM), a variant of LSTM that enforces mass conservation by design. MC-LSTM is a recurrent neural network with an architecture inspired by the gating mechanism in LSTMs. MC-LSTM has a strong inductive bias to guarantee the conservation of mass. This conservation is implemented by means of left-stochastic matrices, which ensure that the sum of the memory cells in the network represents the current mass in the system. These left-stochastic matrices also enforce the mass to be conserved through time. The MC-LSTM gates operate as control units on mass flux. Inputs are divided into a subset of mass inputs, which are propagated through time and are conserved, and a subset of auxiliary inputs, which serve as inputs to the gates for controlling mass fluxes. We demonstrate that MC-LSTMs excel at tasks where conservation of mass is required and that they are highly apt at solving real-world problems in the physical domain.

Contributions. We propose a novel neural network architecture based on LSTM that conserves quantities, such as mass, energy, or count, of a specified set of inputs. We show properties of this novel architecture, called MC-LSTM, and demonstrate that these properties render it a powerful neural arithmetic unit. Further, we show its applicability in the real-world areas of traffic forecasting and modeling the pendulum. In hydrology, large-scale benchmark experiments reveal that MC-LSTM has powerful predictive quality and can supply interpretable representations."
}, { "heading": "2 MASS-CONSERVING LSTM", "text": "The original LSTM introduced memory cells to Recurrent Neural Networks (RNNs), which alleviate the vanishing gradient problem (Hochreiter, 1991). This is achieved by means of a fixed recurrent self-connection of the memory cells. If we denote the values in the memory cells at time t by c^t, this recurrence can be formulated as

c^t = c^{t−1} + f(x^t, h^{t−1}), (1)

where x and h are, respectively, the forward inputs and recurrent inputs, and f is some function that computes the increment for the memory cells. Here, we used the original formulation of LSTM without forget gate (Hochreiter & Schmidhuber, 1997), but in all experiments we also consider LSTM with forget gate (Gers et al., 2000).

MC-LSTMs modify this recurrence to guarantee the conservation of the mass input. The key idea is to use the memory cells from LSTMs as mass accumulators, or mass storage. The conservation law is implemented by three architectural changes. First, the increment, computed by f in Eq. (1), has to distribute mass from the inputs into the accumulators. Second, the mass that leaves MC-LSTM must also disappear from the accumulators. Third, mass has to be redistributed between mass accumulators. These changes mean that all gates explicitly represent mass fluxes.

Since, in general, not all inputs must be conserved, we distinguish between mass inputs, x, and auxiliary inputs, a. The former represent the quantity to be conserved and will fill the mass accumulators in MC-LSTM. The auxiliary inputs are used to control the gates. To keep the notation uncluttered, and without loss of generality, we use a single mass input at each timestep, x^t, to introduce the architecture.

The forward pass of MC-LSTM at timestep t can be specified as follows:

m_tot^t = R^t · c^{t−1} + i^t · x^t, (2)
c^t = (1 − o^t) ⊙ m_tot^t, (3)
h^t = o^t ⊙ m_tot^t, (4)

where i^t and o^t are the input and output gates, respectively, and R^t is a positive left-stochastic matrix, i.e., 1^T · R^t = 1^T, for redistributing mass in the accumulators. The total mass m_tot^t is the redistributed mass, R^t · c^{t−1}, plus the mass influx, or new mass, i^t · x^t. The current mass in the system is stored in c^t.

Note the differences between Eq. (1) and Eq. (3). First, the increment of the memory cells no longer depends on h^t. Instead, mass inputs are distributed by means of the normalized input gate i^t (see Eq. 5). Furthermore, R^t replaces the implicit identity matrix of LSTM to redistribute mass among the memory cells. Finally, Eq. (3) introduces 1 − o^t as a forget gate on the total mass, m_tot^t. Together with Eq. (4), this ensures that no outgoing mass is stored in the accumulators. This formulation has some similarity to Gated Recurrent Units (GRU) (Cho et al., 2014); however, the gates are not used for mixing the old and new cell state, but for splitting off the output.

Basic gating and redistribution. The MC-LSTM gates at timestep t are computed as follows:

i^t = softmax(W^i · a^t + U^i · c^{t−1} / ‖c^{t−1}‖_1 + b^i), (5)
o^t = σ(W^o · a^t + U^o · c^{t−1} / ‖c^{t−1}‖_1 + b^o), (6)
R^t = softmax(B^r), (7)

where the softmax operator is applied column-wise, σ is the logistic sigmoid function, and W^i, b^i, W^o, b^o, and B^r are learnable model parameters. Note that the input gate and the redistribution matrix are required to be column-normalized. This can also be achieved by means other than the softmax function. For example, an alternative way to ensure a column-normalized matrix R^t is to use a normalized logistic, σ̃(r_{kj}) = σ(r_{kj}) / Σ_n σ(r_{kn}). Also note that MC-LSTMs compute the gates from the memory cells directly. This is in contrast with the original LSTM, which uses the activations from the previous time step.
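To make the mechanics concrete, the following is a minimal NumPy sketch of the forward pass (Eqs. 2–7) for a scalar mass input, together with a rollout that numerically checks the conservation property of Theorem 1 (Eq. 9, Sec. 3). The parameter packing, shapes, and the small epsilon in the state normalization are our own illustrative choices, not taken from the official implementation:

```python
import numpy as np

def col_softmax(z):
    # Column-wise softmax: every column sums to 1, i.e. the result is left-stochastic.
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_lstm_step(x, a, c, p):
    """One MC-LSTM step for scalar mass input x, auxiliary input a, cell state c."""
    cn = c / (np.abs(c).sum() + 1e-12)                        # c^{t-1} / ||c^{t-1}||_1
    i = col_softmax(p["Wi"] @ a + p["Ui"] @ cn + p["bi"])     # input gate, Eq. (5)
    o = sigmoid(p["Wo"] @ a + p["Uo"] @ cn + p["bo"])         # output gate, Eq. (6)
    R = col_softmax(p["Br"])                                  # redistribution, Eq. (7)
    m_tot = R @ c + i * x                                     # Eq. (2): old mass + influx
    c_new = (1.0 - o) * m_tot                                 # Eq. (3): mass kept in cells
    h = o * m_tot                                             # Eq. (4): mass efflux
    return c_new, h

# Rollout: verify Eq. (9), m_c^tau = m_c^0 + sum_t x^t - sum_t m_h^t.
rng = np.random.default_rng(0)
K, L, T = 3, 2, 50
p = {k: rng.normal(size=s) for k, s in
     [("Wi", (K, L)), ("Ui", (K, K)), ("bi", K),
      ("Wo", (K, L)), ("Uo", (K, K)), ("bo", K), ("Br", (K, K))]}
c = rng.random(K)
m0, xs, efflux = c.sum(), rng.random(T), 0.0
for x in xs:
    c, h = mc_lstm_step(x, rng.normal(size=L), c, p)
    efflux += h.sum()
assert np.isclose(c.sum(), m0 + xs.sum() - efflux)
```

Because i^t and every column of R^t sum to one, each step satisfies sum(c^t) + sum(h^t) = sum(c^{t−1}) + x^t exactly, which telescopes into the conservation identity checked by the final assertion.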
The accumulated values from the memory cells, ct, are normalized to counter saturation of the sigmoids and to supply probability vectors that represent the current distribution of the mass across cell states We use this variation e.g. in our experiments with neural arithmetics (see Sec. 5.1).\nTime-dependent redistribution. It can also be useful to predict a redistribution matrix for each sample and timestep, similar to how the gates are computed:\nRt = softmax ( Wr · at + Ur · ct−1\n‖ct−1‖1 +Br\n) , (8)\nwhere the parameters Wr and Ur are weight tensors and their multiplications result inK×K matrices. Again, the softmax function is applied column-wise. This version collapses to a time-independent redistribution matrix if Wr and Ur are equal to 0. Thus, there exists the option to initialize Wr and Ur with weights that are small in absolute value compared to the weights ofBr, to favour learning time-independent redistribution matrices. We use this variant in the hydrology experiments (see Sec. 5.4).\nRedistribution via a hypernetwork. Even more general, a hypernetwork (Schmidhuber, 1992; Ha et al., 2017) that we denote with g can be used to procure R. The hypernetwork has to produce\na column-normalized, square matrix Rt = g(a0, . . . ,at, c0, . . . , ct−1). Notably, a hypernetwork can be used to design an autoregressive version of MC-LSTMs, if the network additionally predicts auxiliary inputs for the next time step. We use this variant in the pendulum experiments (see Sec. 5.3)." }, { "heading": "3 PROPERTIES", "text": "Conservation. MC-LSTM guarantees that mass is conserved over time. This is a direct consequence of connecting memory cells with stochastic matrices. The mass conservation ensures that no mass can be removed or added implicitly, which makes it easier to learn functions that generalize well. The exact meaning of this mass conservation is formalized in Theorem 1.\nTheorem 1 (Conservation property). 
Let $m_c^\tau = \sum_{k=1}^{K} c_k^\tau$ be the mass contained in the system and $m_h^\tau = \sum_{k=1}^{K} h_k^\tau$ be the mass efflux, or, respectively, the accumulated mass in the MC-LSTM storage and the outputs at time $\tau$. At any timestep $\tau$, we have:\n$m_c^\tau = m_c^0 + \sum_{t=1}^{\tau} x^t - \sum_{t=1}^{\tau} m_h^t$. (9)\nThat is, the change of mass in the memory cells is the difference between the input and output mass, accumulated over time.\nThe proof is by induction over $\tau$ (see Appendix C). Note that it is still possible for input mass to be stored indefinitely in a memory cell so that it does not appear at the output. This can be a useful feature if not all of the input mass is needed at the output. In this case, the network can learn that one cell should operate as a collector for excess mass in the system.\nBoundedness of cell states. In each timestep $\tau$, the memory cells, $c_k^\tau$, are bounded by the sum of mass inputs plus the initial mass, that is, $|c_k^\tau| \le \sum_{t=1}^{\tau} x^t + m_c^0$. Furthermore, if the series of mass inputs converges, $\lim_{\tau \to \infty} \sum_{t=1}^{\tau} x^t = m_x^\infty$, then the sum of cell states converges as well (see Appendix, Corollary 1).\nInitialization and gradient flow. MC-LSTM with $R^t = I$ has a similar gradient flow to LSTM with forget gate (Gers et al., 2000). Thus, the main difference in the gradient flow is determined by the redistribution matrix $R$. The forward pass of MC-LSTM without gates, $c^t = R^t c^{t-1}$, leads to the backward expression $\frac{\partial c^t}{\partial c^{t-1}} = R^t$. Hence, MC-LSTM should be initialized with a redistribution matrix close to the identity matrix to ensure a stable gradient flow, as in LSTMs. For random redistribution matrices, the circular law theorem for random Markov matrices (Bordenave et al., 2012) can be used to analyze the gradient flow in more detail; see Appendix, Section D.\nComputational complexity. Whereas the gates in a traditional LSTM are vectors, the input gate and redistribution matrix of an MC-LSTM are matrices in the most general case.
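Eq. (9) can also be verified numerically. The sketch below runs the gated recurrence of Eqs. (3)-(4) with fixed, randomly chosen gates (a simplification for illustration, not the paper's code) and checks that the final stored mass equals the initial mass plus accumulated inflow minus accumulated efflux:

```python
import numpy as np

def column_softmax(z):
    """Column-wise softmax; every column of the result sums to 1."""
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def run_mc_lstm(xs, c0, R_logits, i_logits, o_logits):
    """Run the mass-conserving recurrence of Eqs. (3)-(4) with fixed
    gates and return the final cell state plus the mass efflux
    m_h^t (summed over cells) for every step."""
    R = column_softmax(R_logits)                 # column-stochastic R
    i = column_softmax(i_logits[:, None])[:, 0]  # input gate sums to 1
    o = 1.0 / (1.0 + np.exp(-o_logits))          # output gate in (0, 1)
    c, m_h = c0.copy(), []
    for x in xs:
        m_tot = R @ c + i * x                    # Eq. (3)
        c = (1.0 - o) * m_tot                    # mass kept in storage
        m_h.append((o * m_tot).sum())            # Eq. (4), total efflux
    return c, np.array(m_h)
```

Running this for any sequence of mass inputs reproduces the balance of Eq. (9) up to floating-point precision.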
This means that MC-LSTM is, in general, computationally more demanding than LSTM. Concretely, the forward pass for a single timestep in MC-LSTM requires $O(K^3 + K^2(M + L) + KML)$ Multiply-Accumulate operations (MACs), whereas LSTM takes $O(K^2 + K(M + L))$ MACs per timestep. Here, $M$, $L$ and $K$ are the number of mass inputs, auxiliary inputs and outputs, respectively. When using a time-independent redistribution matrix (cf. Eq. 7), the complexity reduces to $O(K^2 M + KML)$ MACs.\nPotential interpretability through inductive bias and accessible mass in cell states. The representations within the model can be interpreted directly as accumulated mass. If one mass or energy quantity is known, the MC-LSTM architecture would allow a particular cell state to be forced to represent this quantity, which could facilitate learning and interpretability. An illustrative example is the case of rainfall-runoff modelling, where observations, say of the soil moisture or groundwater state, could be used to guide the learning of an explicit memory cell of MC-LSTM." }, { "heading": "4 SPECIAL CASES AND RELATED WORK", "text": "Relation to Markov chains. In a special case, MC-LSTM collapses to a finite Markov chain: when $c^0$ is a probability vector, the mass input is zero ($x^t = 0$ for all $t$), there is no input and output gate, and the redistribution matrix is constant over time ($R^t = R$). For finite Markov chains, the dynamics are known to converge if $R$ is irreducible (see e.g. Hairer (2018, Theorem 3.13.)). Awiszus & Rosenhahn (2018) aim to model a Markov chain by having a feed-forward network predict the state distribution given the current state distribution. In order to insert randomness into the network, a random seed is appended to the input, which makes it possible to simulate Markov processes. Although MC-LSTMs are closely related to Markov chains, they do not explicitly learn the transition matrix, as is the case for Markov chain neural networks.
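The Markov-chain special case described above is easy to check numerically: with a constant, strictly positive (hence irreducible) column-stochastic $R$, no gates, and zero mass input, the recurrence $c^t = R c^{t-1}$ converges to the stationary distribution of the chain. A small illustrative sketch:

```python
import numpy as np

def column_softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

rng = np.random.default_rng(42)
K = 5
R = column_softmax(rng.normal(size=(K, K)))  # strictly positive, hence irreducible
c = np.full(K, 1.0 / K)                      # c^0 is a probability vector
for _ in range(2000):                        # gate-free MC-LSTM: c^t = R c^{t-1}
    c = R @ c                                # mass input x^t = 0 for all t
# c has converged to the stationary distribution of the chain: R c = c
```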
MC-LSTMs would have to learn the transition matrix implicitly.\nRelation to normalizing flows and volume-conserving neural networks. In contrast to normalizing flows (Rezende & Mohamed, 2015; Papamakarios et al., 2019), which transform inputs in each layer and trace their density through layers or timesteps, MC-LSTMs transform distributions and do not aim to trace individual inputs through timesteps. Normalizing flows thereby conserve information about the input in the first layer and can use the inverted mapping to trace an input back to the initial space. MC-LSTMs are concerned with modeling the changes of the initial distribution over time and can guarantee that a multinomial distribution is mapped to a multinomial distribution. For MC-LSTMs without gates, the sequence of cell states $c^0, \dots, c^T$ constitutes a normalizing flow if an initial distribution $p_0(c^0)$ is available. In more detail, MC-LSTM can be considered a linear flow with the mapping $c^{t+1} = R^t c^t$ and $p(c^{t+1}) = p(c^t)\,|\det R^t|^{-1}$ in this case. The gate providing the redistribution matrix (see Eq. 8) is the conditioner in a normalizing flow model. From the perspective of normalizing flows, MC-LSTM can be considered a flow trained in a supervised fashion. Deco & Brauer (1995) proposed volume-conserving neural networks, which conserve the volume spanned by input vectors; thus the information about the starting point of an input is kept. In other words, they are constructed so that the Jacobians of the mapping from one layer to the next have a determinant of 1. In contrast, the determinant of the Jacobians of the MC-LSTM mapping is smaller than 1 (except for degenerate cases), which means that the volume of the inputs is not conserved.\nRelation to Layer-wise Relevance Propagation. Layer-wise Relevance Propagation (LRP) (Bach et al., 2015) is similar to our approach with respect to the idea that the sum of a quantity, the relevance $Q^l$, is conserved over layers $l$.
LRP aims to maintain the sum of the relevance values, $\sum_{i=1}^{I} Q_i^{l-1} = \sum_{i=1}^{I} Q_i^{l}$, backward through a classifier in order to obtain relevance values for each input feature.\nRelation to other networks that conserve particular properties. While a standard feed-forward neural network does not give guarantees aside from the conservation of the proximity of datapoints through the continuity property, the conservation of the first moments of the data distribution in the form of normalization techniques (Ioffe & Szegedy, 2015) has had tremendous success. Here, batch normalization (Ioffe & Szegedy, 2015) could exactly conserve mean and variance across layers, whereas self-normalization (Klambauer et al., 2017) conserves those approximately. The conservation of the spectral norm of each layer in the forward pass has enabled the stable training of generative adversarial networks (Miyato et al., 2018). The conservation of the spectral norm of the errors through the backward pass of an RNN has enabled the avoidance of the vanishing gradient problem (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997). In this work, we explore an architecture that exactly conserves the mass of a subset of the input, where mass is defined as a physical quantity such as mass or energy.\nRelation to neural networks for physical systems. Neural networks have been shown to discover physical concepts such as the conservation of energies (Iten et al., 2020), and neural networks could allow natural laws to be learned from observations (Schmidt & Lipson, 2009; Cranmer et al., 2020b). MC-LSTM can be seen as a neural network architecture with physical constraints (Karpatne et al., 2017; Beucler et al., 2019c). It is, however, also possible to impose conservation laws by other means, e.g. initialization, constrained optimization or soft constraints (as, for example, proposed by Karpatne et al., 2017; Beucler et al., 2019c;a; Jia et al., 2019).
Hamiltonian neural networks (Greydanus et al., 2019) and Symplectic Recurrent Neural Networks (Chen et al., 2019) make energy-conserving predictions by using the Hamiltonian, a function that maps the inputs to the quantity that needs to be conserved. By using the symplectic gradients, it is possible to move around in the input space without changing the output of the Hamiltonian. Lagrangian Neural Networks (Cranmer et al., 2020a) extend the Hamiltonian concept by making it possible to use arbitrary coordinates as inputs.\nAll of these approaches, while very promising, assume closed physical systems and are thus too restrictive for the application we have in mind. Raissi et al. (2019) propose to enforce physical constraints on simple feed-forward networks by computing the partial derivatives with respect to the inputs and computing the partial differential equations explicitly with the resulting terms. This approach, while promising, requires exact knowledge of the governing equations. By contrast, our approach is able to learn its own representation of the underlying process, while obeying the pre-specified conservation properties." }, { "heading": "5 EXPERIMENTS", "text": "In the following, we discuss the experiments we conducted to demonstrate the broad applicability and high predictive performance of MC-LSTM in settings where mass conservation is required. For more details on the datasets and hyperparameter selection for each experiment, we refer to Appendix B." }, { "heading": "5.1 ARITHMETIC TASKS", "text": "Addition problem. We first considered a problem for which exact mass conservation is required. One example of such a problem has been described in the original LSTM paper (Hochreiter & Schmidhuber, 1997), showing that LSTM is capable of summing two arbitrarily marked elements in a sequence of random numbers.
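The data for this task can be sketched as follows; the value range [0, 0.5] matches Appendix B.1.1, while the exact marker encoding (1 at the two summands, −1 at the query timestep) is a simplifying assumption for illustration:

```python
import numpy as np

def make_addition_sample(seq_len, rng):
    """One sample of the addition problem: mass inputs in [0, 0.5]
    plus a marker channel that is 1 at the two positions to be summed
    and -1 at the final (query) timestep."""
    x = rng.uniform(0.0, 0.5, size=seq_len)            # mass inputs
    marks = np.zeros(seq_len)
    i, j = rng.choice(seq_len - 1, size=2, replace=False)
    marks[i] = marks[j] = 1.0                          # two marked elements
    marks[-1] = -1.0                                   # query the sum here
    target = x[i] + x[j]                               # sum of marked values
    return x, marks, target
```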
We show that MC-LSTM not only solves this task, but also generalizes better to longer sequences, to input values in a different range, and to more summands. Table 1 summarizes the results of this method comparison and shows that MC-LSTM significantly outperformed the other models on all tests (p-value ≤ 0.03, Wilcoxon test). In Appendix B.1.5, we provide a qualitative analysis of the learned model behavior for this task.\nRecurrent arithmetic. Following Madsen & Johansen (2020), the inputs for this task are sequences of vectors, uniformly drawn from $[1, 2]^{10}$. For each vector in the sequence, the sum over two random subsets is calculated. Those values are then summed over time, leading to two values. The target output is obtained by applying the arithmetic operation to these two values. The auxiliary input for MC-LSTM is a sequence of ones, where the last element is −1 to signal the end of the sequence. We evaluated MC-LSTM against NAUs and Neural Accumulators (NACs) directly in the framework of Madsen & Johansen (2020). NACs and NAUs use the architecture as presented in Madsen & Johansen (2020), that is, a single hidden layer with two neurons, where the first layer is recurrent. The MC-LSTM model has two layers, of which the second one is a fully connected linear layer. For subtraction, an extra cell was necessary to properly discard redundant input mass.\nTable 2: Recurrent arithmetic task results. MC-LSTMs for addition and subtraction/multiplication have two and three neurons, respectively.
Error bars represent 95%-confidence intervals.\n
                 addition                     subtraction                  multiplication
                 success rate^a   updates^b   success rate^a   updates^b   success rate^a   updates^b
MC-LSTM          96% (+2%/−6%)    4.6·10^5    81% (+6%/−9%)    1.2·10^5    67% (+8%/−10%)   1.8·10^5
NAU / NMU        88% (+5%/−8%)    8.1·10^4    60% (+9%/−10%)   6.1·10^4    34% (+10%/−9%)   8.5·10^4
NAC              56% (+9%/−10%)   3.2·10^5    86% (+5%/−8%)    4.5·10^4    0% (+4%/−0%)     –
NALU             10% (+7%/−4%)    1.0·10^6    0% (+4%/−0%)     –           1% (+4%/−1%)     4.3·10^5
\n^a Percentage of runs that generalized to longer sequences. ^b Median number of updates necessary to solve the task.\nFor testing, we took the model with the lowest validation error (cf. early stopping). The performance was measured by the percentage of runs that successfully generalized to longer sequences. Generalization is considered successful if the error is lower than the numerical imprecision of the exact operation (Madsen & Johansen, 2020). The summary in Tab. 2 shows that MC-LSTM was able to significantly outperform the competing models (p-value 0.03 for addition and 3e−6 for multiplication, proportion test). In Appendix B.1.5, we provide a qualitative analysis of the learned model behavior for this task.\nStatic arithmetic. To enable a direct comparison with the results reported in Madsen & Johansen (2020), we also compared MC-LSTM on the static arithmetic task; see Appendix B.1.3.\nMNIST arithmetic. We tested whether feature extractors can be learned from MNIST images (LeCun et al., 1998) to perform arithmetic on the images (Madsen & Johansen, 2020). The input is a sequence of MNIST images and the target output is the corresponding sum of the labels. Auxiliary inputs are all 1, except the last entry, which is −1, to indicate the end of the sequence. The models are the same as in the recurrent arithmetic task, with a CNN to convert the images into (mass) inputs for these networks. The network is trained end-to-end. L2-regularization is added to the output of the CNN to prevent its outputs from growing arbitrarily large.
The results for this experiment are depicted in Fig. 2. MC-LSTM significantly outperforms the state-of-the-art, NAU (p-value 0.002, Binomial test)." }, { "heading": "5.2 INBOUND-OUTBOUND TRAFFIC FORECASTING", "text": "We examined the usage of MC-LSTMs for traffic forecasting in situations in which inbound and outbound traffic counts of a city are available (see Fig. 3). For this type of data, a conservation-of-vehicles principle (Nam & Drew, 1996) must hold, since vehicles can only leave the city if they have entered it before or had been there in the first place. Based on data from the traffic4cast 2020 challenge (Kreil et al., 2020), we constructed a dataset to model inbound and outbound traffic in three different cities: Berlin, Istanbul and Moscow. We compared MC-LSTM against LSTM, which is the state-of-the-art method for several types of traffic forecasting situations (Zhao et al., 2017; Tedjopurnomo et al., 2020), and found that MC-LSTM significantly outperforms LSTM in this traffic forecasting setting (all p-values ≤ 0.01, Wilcoxon test). For details, see Appendix B.2." }, { "heading": "5.3 PENDULUM WITH FRICTION", "text": "In the area of physics, we examined the usability of MC-LSTM for the problem of modeling a swinging pendulum with friction. Here, the total energy is the conserved property. During the movement of the pendulum, kinetic energy is converted into potential energy and vice versa. This conversion between the two energies has to be learned by the off-diagonal values of the redistribution matrix. A qualitative analysis of a trained MC-LSTM for this problem can be found in Appendix B.3.1.\nAccounting for friction, energy dissipates and the swinging slows over time until the pendulum approaches a fixed point. This type of behavior presents a difficulty for machine learning and is impossible to capture for methods that assume the pendulum to be a closed system, such as Hamiltonian networks (Greydanus et al., 2019).
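For intuition about the dynamics in this experiment, the following sketches a damped-pendulum simulation; the semi-implicit Euler integrator, the linear friction term, and the parameter values are illustrative assumptions, not the paper's data-generation code. Kinetic and potential energy are exchanged while their sum decays:

```python
import numpy as np

def pendulum_energies(theta0, length, friction, g=9.81, m=1.0,
                      dt=1e-3, steps=20000):
    """Simulate a damped pendulum with semi-implicit Euler and return
    the kinetic and potential energy at every step."""
    theta, omega = theta0, 0.0
    ke, pe = [], []
    for _ in range(steps):
        omega += dt * (-(g / length) * np.sin(theta) - friction * omega)
        theta += dt * omega
        ke.append(0.5 * m * (length * omega) ** 2)       # kinetic energy
        pe.append(m * g * length * (1.0 - np.cos(theta)))  # potential energy
    return np.array(ke), np.array(pe)
```

With nonzero friction the total energy `ke + pe` decreases over time, which is exactly the open-system behavior that closed-system models cannot represent.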
We generated 120 datasets of timeseries data of a pendulum, using multiple different settings for the initial angle, the length of the pendulum, and the amount of friction. We then selected LSTM and MC-LSTM models and compared them with respect to predictive MSE. For an example, see Fig. 4. Overall, MC-LSTM significantly outperformed LSTM, with a mean MSE of 0.01 (standard deviation 0.04) compared to 0.05 (standard deviation 0.15; p-value 6.9e−8, Wilcoxon test)." }, { "heading": "5.4 HYDROLOGY: RAINFALL RUNOFF MODELING", "text": "We tested MC-LSTM for large-sample hydrological modeling following Kratzert et al. (2019c). An ensemble of 10 MC-LSTMs was trained on 10 years of data from 447 basins using the publicly available CAMELS dataset (Newman et al., 2015; Addor et al., 2017a). The mass input is precipitation and the auxiliary inputs are: daily minimum and maximum temperature, solar radiation, and vapor pressure, plus 27 basin characteristics related to geology, vegetation, and climate (described by Kratzert et al., 2019c). All models besides MC-LSTM and LSTM were trained by different research groups with experience using each model. More details are given in Appendix B.4.2.\nAs shown in Tab. 3, MC-LSTM performed better with respect to the Nash–Sutcliffe Efficiency (NSE; the R² between simulated and observed runoff) than any other mass-conserving hydrology model, although slightly worse than LSTM.\nNSE is often not the most important metric in hydrology, since water managers are typically concerned primarily with extremes (e.g. floods). MC-LSTM performed significantly better (p = 0.025, Wilcoxon test) than all models, including LSTM, with respect to high volume flows (FHV), at or above the 98th percentile flow in each basin. This makes MC-LSTM the current state-of-the-art model for flood prediction.
MC-LSTM also performed significantly better than LSTM on low volume flows (FLV) and overall bias; however, there are other hydrology models that are better for predicting low flows (which is important, e.g., for managing droughts).\nModel states and environmental processes. It is an open challenge to bridge the gap between the fact that LSTM approaches give generally better predictions than other models (especially for flood prediction) and the fact that water managers need predictions that help them understand not only how much water will be in a river at a given time, but also how water moves through a basin.\nSnow processes are difficult to observe and model. Kratzert et al. (2019a) showed that LSTM learns to track snow in memory cells without requiring snow data for training. We found similar behavior in MC-LSTMs, which have the advantage of doing this with memory cells that are true mass storages. Figure 5 shows the snow as the sum over a subset of MC-LSTM memory states and the snow water equivalent (SWE) modeled by the well-established Snow-17 snow model (Anderson, 1973) (Pearson correlation coefficient r ≥ 0.91). It is important to remember that MC-LSTMs did not have access to any snow data during training. In the best case, it is possible to take advantage of the inductive bias to predict how much water will be stored as snow under different conditions by using simple combinations or mixtures of the internal states. Future work will determine whether this is possible with other difficult-to-observe states and fluxes." }, { "heading": "6 CONCLUSION.", "text": "We have demonstrated how the concept of inductive biases can be used to design an RNN that conserves the mass of particular inputs. This architecture is highly proficient as a neural arithmetic unit and is well-suited for predicting physical systems, such as hydrological processes, in which water mass has to be conserved."
}, { "heading": "A NOTATION OVERVIEW", "text": "Most of the notation used throughout the paper is summarized in Tab. A.1." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "In the following, we provide further details on the experimental setups." }, { "heading": "B.1 NEURAL ARITHMETIC", "text": "Neural networks that learn arithmetic operations have recently come into focus (Trask et al., 2018; Madsen & Johansen, 2020). Specialized neural modules for arithmetic operations could play a role in complex AI systems, since cognitive studies indicate that there is a part of the brain that enables animals and humans to perform basic arithmetic operations (Nieder, 2016; Gallistel, 2018). Although this primitive number processor can only perform approximate arithmetic, it is a fundamental part of our ability to understand and interpret numbers (Dehaene, 2011)." }, { "heading": "B.1.1 DETAILS ON DATASETS", "text": "We consider the addition problem that was proposed in the original LSTM paper (Hochreiter & Schmidhuber, 1997). We chose input values in the range [0, 0.5] in order to be able to use the fast standard implementations of LSTM. For this task, 20 000 samples were generated using a fixed random seed to create a dataset, which was split into 50% training and 50% validation samples. For the test data, a different random seed was used.\nA definition of the static arithmetic task is provided by Madsen & Johansen (2020). The following presents this definition and its extension to the recurrent arithmetic task (c.f. Trask et al., 2018).\nThe input for the static version is a vector, $x \in U(1, 2)^{100}$, consisting of numbers that are drawn randomly from a uniform distribution. The target, $y$, is computed as\n$y = \left(\sum_{k=a}^{a+c} x_k\right) \square \left(\sum_{k=b}^{b+c} x_k\right)$,\nwhere $c \in \mathbb{N}$, $a \le b \le a + c \in \mathbb{N}$, and $\square \in \{+, -, \cdot\}$ denotes the arithmetic operation. For the recurrent variant, the input consists of a sequence of $T$ vectors, denoted by $x^t \in U(1, 2)^{10}$, t ∈ {1, . . .
, T}, and the labels are computed as\n$y = \left(\sum_{t=1}^{T} \sum_{k=a}^{a+c} x_k^t\right) \square \left(\sum_{t=1}^{T} \sum_{k=b}^{b+c} x_k^t\right)$.\nFor these experiments, no fixed datasets were used. Instead, samples were generated on the fly. Note that since the subsets overlap, i.e., inputs are re-used, this data does not have mass-conservation properties.\nFor a more detailed description of the MNIST addition data, we refer to Trask et al. (2018) and the appendix of Madsen & Johansen (2020)." }, { "heading": "B.1.2 DETAILS ON HYPERPARAMETERS.", "text": "For the addition problem, every network had a single hidden layer with 10 units. The output layer was a linear, fully connected layer for all MC-LSTM and LSTM variants. The NAU (Madsen & Johansen, 2020) and NALU/NAC (Trask et al., 2018) networks used their corresponding output layers. Also, we used a more common L2-regularization scheme with a low regularization constant ($10^{-4}$) to keep the weights ternary for the NAU, rather than the strategy used in the reference implementation from Madsen & Johansen (2020). Optimization was done using Adam (Kingma & Ba, 2015) for all models. The initial learning rate was selected from {0.1, 0.05, 0.01, 0.005, 0.001} on the validation data for each method individually. All methods were trained for 100 epochs.\nThe weight matrices of LSTM were initialized in a standard way, using orthogonal and identity matrices for the forward and recurrent weights, respectively. Biases were initialized to be zero, except for the bias in the forget gate, which was initialized to 3. This should benefit the gradient flow for the first updates. Similarly, MC-LSTM is initialized so that the redistribution matrix (cf. Eq. 7) is (close to) the identity matrix. Otherwise, we used orthogonal initialization (Saxe et al., 2014). The bias for the output gate was initialized to −3. This stimulates the output gates to stay closed (keep mass in the system), which has a similar effect as setting the forget gate bias in LSTM.
This practice applies to all subsequently described experiments.\nFor the recurrent arithmetic tasks, we tried to stay as close as possible to the setup that was used by Madsen & Johansen (2020). This means that all networks again had a single hidden layer. The NAU, Neural Multiplication Unit (NMU) and NALU networks all had two hidden units and, respectively, NAU, NMU and NALU output layers. The first, recurrent layer for the first two networks was a NAU, and the NALU network used a recurrent NALU layer. For the exact initialization of NAU and NALU, we refer to Madsen & Johansen (2020).\nThe MC-LSTM models used a fully connected linear layer with L2-regularization for projecting the hidden state to the output prediction for the addition and subtraction tasks. It is important to use a free linear layer in order to compensate for the fact that the data does not have mass-conserving properties. However, it is important to note that the mass conservation in MC-LSTM is still necessary to solve this task. For the multiplication problem, we used a multiplicative, non-recurrent variant of MC-LSTM with an extra scalar parameter that allows the conserved mass to be re-scaled if necessary. This multiplicative layer is described in more detail in Appendix B.1.3.\nWhereas the addition could be solved with two hidden units, MC-LSTM needed three hidden units to solve both subtraction and multiplication. This extra unit, which we refer to as the trash cell, allows MC-LSTMs to get rid of excess mass that should not influence the prediction. Note that, since the mass inputs are vectors, the input gate has to be computed in a similar fashion as the redistribution matrix. Adam was again used for the optimization. We used the same learning rate as Madsen & Johansen (2020) (0.001) to train the NAU, NMU and NALU networks. For MC-LSTM, the learning rate was increased to 0.01 for addition and subtraction and 0.05 for multiplication after a manual search on the validation set.
All models were trained for two million update steps.\nIn a similar fashion, we used the same models from Madsen & Johansen (2020) for the MNIST addition task. For MC-LSTM, we replaced the recurrent NAU layer with an MC-LSTM layer and the output layer was replaced with a fully connected linear layer. In this scenario, increasing the learning rate was not necessary. This can probably be explained by the fact that training the CNN to regress the MNIST images is the main challenge during learning. We also used a standard L2-regularization on the outputs of the CNN instead of the implementation proposed in Madsen & Johansen (2020) for this task." }, { "heading": "B.1.3 STATIC ARITHMETIC", "text": "This experiment should enable a more direct comparison to the results from Madsen & Johansen (2020) than the recurrent variant. The data for the static task is equivalent to that of the recurrent task with sequence length one. For more details on the data, we refer to Appendix B.1.1 or Madsen & Johansen (2020).\nSince the static task does not require a recurrent model, we discarded the redistribution matrix in MC-LSTM. The result is a layer with only input and output gates, which we refer to as a Mass-Conserving Fully Connected (MC-FC) layer. We compared this model to the results reported in Madsen & Johansen (2020), using the code base that accompanied the paper. All NALU and NAU networks had a single hidden layer. Similar to the recurrent task, MC-LSTM required two hidden units for addition and three for subtraction. Mathematically, an MC-FC with $K$ hidden neurons and $M$ inputs can be defined as $\mathrm{MC\text{-}FC}: \mathbb{R}^M \to \mathbb{R}^K : x \mapsto y$, where\n$y = \mathrm{diag}(o) \cdot I \cdot x$, $I = \mathrm{softmax}(B_I)$, $o = \sigma(b_o)$,\nwhere the softmax operates on the row dimension to get a column-normalized matrix, $I$, for the input gate.\nUsing the log-exp transform (c.f. Trask et al., 2018), a multiplicative MC-FC with scaling parameter $\alpha$ can be constructed as follows: $\exp(\mathrm{MC\text{-}FC}(\log(x)) + \alpha)$.
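The MC-FC layer and its multiplicative log-exp variant can be sketched in a few lines of numpy; this is an illustration of the definitions above, not the authors' code:

```python
import numpy as np

def column_softmax(z):
    """Softmax over the row dimension: each column sums to 1."""
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def mc_fc(x, B_I, b_o):
    """Mass-Conserving Fully Connected layer: y = diag(o) . I . x
    with a column-normalized input gate I and sigmoid output gate o."""
    I = column_softmax(B_I)               # column-stochastic input gate
    o = 1.0 / (1.0 + np.exp(-b_o))        # output gate
    return o * (I @ x)

def mc_fc_multiplicative(x, B_I, b_o, alpha):
    """Multiplicative variant via the log-exp transform:
    exp(MC-FC(log(x)) + alpha)."""
    return np.exp(mc_fc(np.log(x), B_I, b_o) + alpha)
```

With a fully open output gate (large `b_o`), `mc_fc` conserves the total input mass exactly, because the columns of `I` each sum to 1.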
The scaling parameter is necessary to break the mass conservation when it is not needed. By replacing the output layer with this multiplicative MC-FC, it can also be used to solve the multiplication problem. This network also required three hidden neurons. This model was compared to an NMU network with two hidden neurons and a NALU network.\nAll models were trained for two million updates with the Adam optimizer (Kingma & Ba, 2015). The learning rate was set to 0.001 for all networks, except for the MC-FC network, which needed a lower learning rate of 0.0001, and the multiplicative MC-FC variant, which was trained with learning rate 0.01. These hyperparameters were found using a manual search.\nSince the input consists of a vector, the input gate predicts a left-stochastic matrix, similar to the redistribution matrix. This allows us to verify the generalization abilities of the inductive bias in MC-LSTMs. The performance was measured in a similar way as in the recurrent task, except that generalization was tested over the range of the input values (Madsen & Johansen, 2020). Concretely, the models were trained on input values in [1, 2] and tested on input values in the range [2, 6]. Table B.2 shows that MC-FC is able to match or outperform both NALU and NAU on this task." }, { "heading": "B.1.4 COMPARISON WITH TIME-DEPENDENT MC-LSTM", "text": "We used MC-LSTM with a time-independent redistribution matrix, as in Eq. (7), to solve the addition problem. This resembles another form of inductive bias, since we know that no redistribution across cells is necessary to solve this problem; it also results in a more efficient model, because fewer parameters have to be learned. However, for the sake of flexibility, we also verified that it is possible to use the more general time-dependent redistribution matrix (cf. Eq. 8).
The results of this experiment can be found in Table B.3.\nAlthough the performance of MC-LSTM with a time-dependent redistribution matrix is slightly worse than that of the more efficient MC-LSTM variant, it still outperforms all other models on the generalisation tasks. This can partly be explained by the fact that it is harder to train a time-dependent redistribution matrix, and the training budget is limited to 100 epochs." }, { "heading": "B.1.5 QUALITATIVE ANALYSIS OF THE MC-LSTM MODELS TRAINED ON ARITHMETIC TASKS", "text": "Addition Problem. To reiterate, we used MC-LSTM with 10 hidden units and a linear output layer. The model has to learn to sum all mass inputs of the timesteps where the auxiliary input (the marker) equals $a^t = 1$, and to ignore all other values. At the final timestep — where the auxiliary input equals $a^t = -1$ — the network should output the sum of all previously marked mass inputs. In our experiment, the model has learned to store the marked input values in a single cell, while all other mass inputs mainly end up in a single, different cell. That is, a single cell learns to accumulate the inputs to compute the solution and the other cells are used as trash cells. In Fig. B.1, we visualize the cell states for a single input sample over time, where the orange and the blue line denote the mass accumulator and the main trash cell, respectively.\nWe can see that at the last time step — where the network is queried to return the accumulated sum — the value of that particular cell does not drop to zero (i.e., not the entire accumulated value is removed from the system). For this particular model, the corresponding output gate value for this cell at the last time step is 0.18134. That is, only 18.134% of the actual accumulated value is returned. However, the weight of the linear layer that corresponds to this cell for this model is 5.5263.
If we multiply these two values, the result is 1.0021, which means the model recovers the value stored in the cell state. For all other cells (grey lines), either the output gate at the last time step, the weight of the linear layer, or the cell value itself is zero. That means that the model output is determined only by the value of the single cell that acted as accumulator of the marked values (orange line).\nWe also analyzed MC-LSTM without a linear output layer for the same addition task. In this case, the model output is determined as the sum over the outgoing mass. As before, the model also uses a single cell to store the accumulated values of the marked timesteps. However, because no scaling can be learned from a linear output layer, the model learned to fully open the output gate at the query timestep.\nRecurrent Arithmetic. In the following, we take a closer look at the solution that is learned with MC-LSTM. Concretely, we look at the weights of an MC-LSTM model that successfully solves the following recurrent arithmetic task:\n$y = \left(\sum_{t=1}^{T} (x_6^t + x_7^t)\right) \square \left(\sum_{t=1}^{T} (x_7^t + x_8^t)\right)$,\nwhere $\square \in \{-, +\}$, given a sequence of input vectors $x^t \in \mathbb{R}^{10}$. We highlight the following observations:\n1. For the addition task (i.e., $\square \equiv +$), MC-LSTM has two units (see Appendix B.1.2 for details on the experiments). Trask et al. (2018); Madsen & Johansen (2020) fixed the number of hidden units to two with the idea that each unit can learn one term of the addition operation ($\square$). However, if we take a look at the input gate of our model, we find that the first cell is used to accumulate $(x_1^t + \ldots + x_5^t + 0.5 x_6^t + 0.5 x_8^t + x_9^t + x_{10}^t)$ and the second cell collects $(0.5 x_6^t + x_7^t + 0.5 x_8^t)$. Since the learned redistribution matrix is the identity matrix, these accumulators operate individually.
This means that, instead of computing the individual terms, MC-LSTM directly computes the solution, scaled by a factor 1/2, in its second cell. The first cell accumulates the rest of the mass, which it does not need for the prediction. In other words, it operates as some sort of trash cell. Note that due to the mass-conservation property, it would be impossible to compute each side of the operation individually. After all, x_7^t appears on both sides of the central operation (□), and therefore the data is not mass conserving. The output gate is always open for the trash cell and closed for the other cell, indicating that redundant mass is discarded through the output of the MC-LSTM in every timestep and the scaled solution is properly accumulated. However, in the final timestep — when the prediction is to be made — the output gate for the trash cell is closed and opened for the other cell. That is, the accumulated solution is passed to the final linear layer, which scales the output of MC-LSTM by a factor of two to get the correct solution.

2. For the subtraction task (i.e., □ ≡ −), a similar behavior can be observed. In this case, the final model requires three units to properly generalize. The first two cells accumulate x_6^t and x_8^t, respectively. The last cell operates as trash cell and collects (x_1^t + … + x_5^t + x_7^t + x_9^t + x_{10}^t). The redistribution matrix is the identity matrix for the first two cells. For the trash cell, equal parts (0.4938) are redistributed to the two other cells. The output gate operates in a similar fashion as for addition. Finally, the linear layer computes the difference between the first two cells with weights 1 and −1; the trash cell is ignored with weight 0. Although MC-LSTM with two units was not able to generalize well enough for the Madsen & Johansen (2020) benchmarks, it did turn out to be able to provide a reasonable solution (albeit with numerical flaws). With two cells, the network learned to store (0.5 x_1^t + … + 0.5 x_5^t + x_6^t + 0.5 x_7^t + 0.5 x_9^t + 0.5 x_{10}^t) in one cell, and (0.5 x_1^t + … + 0.5 x_5^t + 0.5 x_7^t + x_8^t + 0.5 x_9^t + 0.5 x_{10}^t) in the other cell. With a similar linear layer as for the three-unit variant, this solution should also compute a correct solution for the subtraction task.

B.2 INBOUND-OUTBOUND TRAFFIC FORECAST

Traffic forecasting considers a large number of different settings and tasks (Tedjopurnomo et al., 2020). One example is whether the physical network topology of streets can be exploited by using graph neural networks combined with LSTMs (Cui et al., 2019). Within traffic forecasting, mass conservation translates to a conservation-of-vehicles principle. Generally, models that adhere to this principle are desired (Vanajakshi & Rilett, 2004; Zhao et al., 2017), since they could be useful for long-term forecasts. Many recent benchmarking datasets for traffic forecasts are uni-directional and are measured at only a few streets; thus, conservation laws cannot be directly applied (Tedjopurnomo et al., 2020).

We demonstrate how MC-LSTM can be used in traffic forecasting settings. A typical setting for vehicle conservation is when traffic counts for inbound and outbound roads of a city are available. In this case, all vehicles that come from an inbound road must either be within the city or leave it on an outbound road. The setting is similar to passenger flows in inbound and outbound metro (Liu et al., 2019), where LSTMs have also prevailed. We were able to extract such data from a recent dataset based on GPS locations (Kreil et al., 2020) of vehicles at a fine geographic grid around cities, which represents a good approximation of a vehicle-conserving scenario.

An approximately mass-conserving traffic dataset. Based on the data for the traffic4cast 2020 challenge (Kreil et al., 2020), we constructed a dataset to model inbound and outbound traffic of three different cities: Berlin, Istanbul and Moscow.
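The construction described in the following paragraphs can be sketched in code. This is our own illustration, not the authors' preprocessing pipeline: the function name `border_flows` and the channel layout (axis 3 holding traffic volume for four heading bins, here assumed to be north, east, south, west) are hypothetical and would have to be checked against the traffic4cast data specification.

```python
import numpy as np

def border_flows(vol, top, bottom, left, right):
    """Aggregate traffic counts on a single-pixel frame around a city.

    vol: array of shape (T, H, W, 4) with per-direction traffic volume
    (hypothetical channel order 0=north, 1=east, 2=south, 3=west).
    Returns (inbound, outbound), each of shape (T, 4): one series per edge.
    """
    # (edge pixels, direction counted as inbound, direction counted as outbound)
    edges = [
        (vol[:, top, left:right + 1, :], 2, 0),     # top edge: south-bound enters
        (vol[:, bottom, left:right + 1, :], 0, 2),  # bottom edge: north-bound enters
        (vol[:, top:bottom + 1, left, :], 1, 3),    # left edge: east-bound enters
        (vol[:, top:bottom + 1, right, :], 3, 1),   # right edge: west-bound enters
    ]
    inbound = np.stack([e[..., i].sum(axis=1) for e, i, _ in edges], axis=1)
    outbound = np.stack([e[..., o].sum(axis=1) for e, _, o in edges], axis=1)
    return inbound, outbound
```

The four inbound series would serve as mass inputs of MC-LSTM and the four outbound series as targets; with 5-minute bins this yields the 288 timesteps per day used during inference.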
The original data consists of 181 sequences of multi-channel images encoding traffic volume and speed for every five minutes in four (binned) directions. Every sequence corresponds to a single day in the first half of the year. In order to get the traffic flow from the multi-channel images at every timestep, we defined a frame around the city and collected the traffic-volume data for every pixel on the border of this frame. This is illustrated in Fig. 3. For simplicity, we ignored the fact that a single-pixel frame might have issues with fast-moving vehicles.

By taking into account the direction of the vehicles, the inbound and outbound traffic can be combined for every pixel on the border of our frame. To get a more tractable dataset, we additionally combined the pixels of the four edges of the frame to end up with eight values: four values for the incoming traffic, i.e., one for each border of the frame, and four values for the outgoing traffic. The inbound traffic is the mass input for MC-LSTM and the target outputs are the outbound traffic along the different borders. The auxiliary input is the current daytime, encoded as a value between zero and one.

During inference, all 288 timesteps of the inbound and outbound measurements are used to find out which model learned the traffic dynamics from the sparse training data best. For this purpose, we used the 18 days of validation data from the original dataset as test set, which are distributed across the second half of the year. In order to enable a fair comparison between LSTM and MC-LSTM, the data for LSTM was normalized to zero mean and unit variance for training and inference (using statistics from the training data). MC-LSTM does not need this pre-processing step and is fed the raw data.

Model and Hyperparameters. For the traffic prediction, we used LSTM followed by a fully connected layer as baseline (cf. Zhao et al., 2017; Liu et al., 2019).
For MC-LSTM, we chose to enforce end-to-end mass conservation by using an MC-FC output layer, which is described in detail in Appendix B.1.3. For the initialization of the models, we refer to the details of the arithmetic experiments in Appendix B.1.

For each model and for each city, the best hyperparameters were found by performing a grid search on the validation data. This means that the hyperparameters were chosen to minimize the error on the nine 5-minute intervals. For all models, the number of hidden neurons was chosen from {10, 50, 100} and for the learning rate, the options were {0.100, 0.050, 0.010, 0.005, 0.001}. All models were trained for 2 000 epochs using the Adam optimizer (Kingma & Ba, 2015). Additionally, we considered values in {0, 5} for the initial value of the forget gate bias in LSTM. For MC-LSTM, the extra hyperparameters were the initial cell state value (∈ {0, 100}) — i.e., how many cars are in each memory cell at timestep zero — and whether or not the initial cell state should be trained via backpropagation. The results of the hyperparameter search can be found in Tab. B.4.

The idea behind tuning the initial cell state is that, unlike with LSTM, the cell state in MC-LSTM directly reflects the number of cars that can drive out of a city during the first timesteps. If the initial cell state is too high or too low, this might negatively affect the prediction capabilities of the model. If it were possible to estimate the number of cars in a city at the start of the sequence, this could also be used to get better estimates for the initial cell state. However, from the results of the hyperparameter search (see Tab. B.4), we might have overestimated the importance of these hyperparameters.

Results. All models were evaluated on the test data, using the checkpoint after 2 000 epochs, for fifty runs. An example of what the predictions of both models look like for an arbitrary day in an arbitrarily chosen city is displayed in Fig. B.2.
The average root mean squared error (RMSE) and mean absolute error (MAE) are summarized in Tab. B.5. The results show that MC-LSTM is able to generalize significantly better than LSTM for this task. The RMSE of MC-LSTM is significantly better than that of LSTM (p-values 4e−10, 8e−3, and 4e−10 for Istanbul, Berlin, and Moscow, respectively; Wilcoxon test).

B.3 PENDULUM WITH FRICTION

In the area of physics, we consider the problem of modeling a swinging pendulum with friction. The conserved quantity of interest is the total energy. During the movement of the pendulum, kinetic energy is converted into potential energy and vice versa. Neglecting friction, the total energy is conserved and the movement would continue indefinitely. Accounting for friction, energy dissipates and the swinging slows over time until a fixed point is reached. This type of behavior presents a difficulty for machine learning and is impossible for methods that assume the pendulum to be a closed system, such as Hamiltonian networks (Greydanus et al., 2019). We postulated that both energy conversion and dissipation can be fitted by machine learning models, but that an appropriate inductive bias will allow models to generalize from the learned data more easily.

To train the models, we generated a set of timeseries using the differential equations for a pendulum with friction. We used multiple different settings for the initial angle, the length of the pendulum, the amount of friction, the length of the training period, and with and without Gaussian noise. Each model received the initial kinetic and potential energy of the pendulum and had to predict the consecutive timesteps. The time series always starts with the pendulum at the maximum displacement — i.e., the entire energy in the system is potential energy.
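Such energy timeseries can be generated by integrating the damped-pendulum dynamics. The following is a minimal sketch of our own (the authors' integrator and physical constants are not specified, so the semi-implicit Euler scheme, the gravitational constant, and the function name `pendulum_energies` are assumptions):

```python
import math

def pendulum_energies(theta0, length, damping, dt=0.01, steps=400,
                      mass=1.0, g=9.81):
    """Semi-implicit Euler sketch of a damped pendulum.

    Integrates theta'' = -(g / length) * sin(theta) - damping * theta'
    and returns the potential and kinetic energy at every timestep.
    """
    theta, omega = theta0, 0.0  # start at maximum displacement, at rest
    potential, kinetic = [], []
    for _ in range(steps):
        potential.append(mass * g * length * (1.0 - math.cos(theta)))
        kinetic.append(0.5 * mass * (length * omega) ** 2)
        omega += (-(g / length) * math.sin(theta) - damping * omega) * dt
        theta += omega * dt
    return potential, kinetic
```

Iterating such a generator over a grid of amplitudes, pendulum lengths, damping constants, sequence lengths and noise levels (as listed below) yields one dataset per combination; with damping = 0 the total energy stays approximately constant, with damping > 0 it decays.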
We generated timeseries of potential and kinetic energies by iterating over the following settings: initial amplitude ({0.2, 0.4}), pendulum length ({0.75, 1}), length of the training sequence in timesteps ({100, 200, 400}), noise level ({0, 0.01}), and damping constant ({0.0, 0.1, 0.2, 0.4, 0.8}). All combinations of these settings were used to generate a total of 120 datasets, for which we trained both models (the auto-regressive LSTM and MC-LSTM).

We trained an auto-regressive LSTM that receives its current state and a low-dimensional temporal embedding (using nine sinusoidal curves with different frequencies) to predict the potential and kinetic energy of the pendulum. Similarly, MC-LSTM is trained in an autoregressive mode, where a hypernetwork obtains the current state and the same temporal embedding as LSTM. The model setup is thus similar to an autoregressive model with exogenous variables from the classical timeseries-modelling literature. To obtain suitable hyperparameters, we manually adjusted, on a separately generated validation dataset, the learning rate (0.01), the hidden size of LSTM (256), the hypernetwork for estimating the redistribution (a fully connected network with 3 layers, ReLU activations and hidden sizes of 50, 100, and 2, respectively), the optimizer (Adam; Kingma & Ba, 2015) and the training procedure (crucially, the number of timesteps that are additionally considered in the loss once a threshold is reached; see the explanation of the loss below).

For MC-LSTM, a hidden size of two was used so that each state directly maps to one of the two energies. The hypernetwork consists of three fully connected layers of size 50, 100 and 4, respectively. To account for the critical values at the extreme points of the pendulum (i.e.
the amplitudes — where the energy is present only in the form of potential energy — and the midpoint — where only kinetic energy exists), we slightly offset the cell state from the actual predicted value by using a linear regression with a slope of 1.02 and an intercept of −0.01. For both models, we used a combination of Pearson's correlation of the energy signals and the MSE as a loss function (by subtracting the former from the latter). Further, we used a simple curriculum to deal with the long autoregressive nature of the timeseries (Bengio et al., 2015): starting with a time window of eleven timesteps, we added five additional timesteps whenever the combined loss fell below −0.9. Overall, MC-LSTM significantly outperformed LSTM, with a mean MSE of 0.01 (standard deviation 0.04) compared to 0.05 (standard deviation 0.15; p-value 6.9e−8, Wilcoxon test).

B.3.1 QUALITATIVE ANALYSIS OF THE MC-LSTM MODELS TRAINED FOR A PENDULUM

In the following, we analyse the behavior of the simplest pendulum setup, i.e., the one without friction. Special to the pendulum without friction is that there are no mass inputs or outputs, so the whole dynamic of the system has to be modeled by the redistribution matrix. The initial state of the system is given by the displacement of the pendulum at the start, where all energy is stored as potential energy. Afterwards, the pendulum oscillates, converting potential to kinetic energy and vice versa.

In MC-LSTM, the conversion between the two forms of energy has to be learned by the redistribution matrix. More specifically, the off-diagonal elements denote the fraction of energy that is converted from one form to the other. In contrast, the diagonal elements of the redistribution matrix denote the fraction of energy that is not converted.

In Fig.
B.3, we visualize the off-diagonal elements of the redistribution matrix (i.e., the conversion of energy) for the pendulum task without friction, as well as the modeled potential and kinetic energy. We can see that an increasing fraction of energy is converted into the other form until the total energy of the system is stored as either kinetic or potential energy. As soon as the total energy is, e.g., converted into kinetic energy, the corresponding off-diagonal element (the orange line of the upper plot in Fig. B.3) drops to zero. At this point, the other off-diagonal element (the blue line of the upper plot in Fig. B.3) starts to increase, meaning that energy is converted back from kinetic into potential energy. Note that the differences in the maximum values of the off-diagonal elements are not important, since at this point the corresponding energy is already approximately zero.

B.4 HYDROLOGY

Modeling river discharge from meteorological data (e.g., precipitation, temperature) is one of the most important tasks in hydrology and is necessary for water resource management and risk mitigation related to flooding. Recently, Kratzert et al. (2019c; 2020) established LSTM-based models as state of the art in rainfall-runoff modeling, outperforming traditional hydrological models by a large margin against most metrics (including peak flows, which is critical for flood prediction). However, the hydrology community is still reluctant to adopt these methods (e.g., Beven, 2020). A recent workshop on 'Big Data and the Earth Sciences' (Sellars, 2018) reported that "[m]any participants who have worked in modeling physical-based systems continue to raise caution about the lack of physical understanding of ML methods that rely on data-driven approaches."

One of the most basic principles in watershed modeling is mass conservation. Whether water is treated as a resource (e.g., droughts) or a hazard (e.g.
floods), a modeller must be sure that they are accounting for all of the water in a catchment. Thus, most models conserve mass (Todini, 1988) and attempt to explicitly implement the most important physical processes. The downside of this 'model everything' strategy is that errors are introduced for every real-world process that is not implemented in a model, or implemented incorrectly. In contrast, MC-LSTM is able to learn any necessary behavior that can be induced from the signal (like LSTM) while still conserving the overall water budget.

B.4.1 DETAILS ON THE DATASET

The data used in all hydrology-related experiments is the publicly available Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset (Newman et al., 2014; Addor et al., 2017b). CAMELS contains data for 671 basins and is curated by the US National Center for Atmospheric Research (NCAR). It contains only basins with relatively low anthropogenic influence (e.g., dams and reservoirs), and basin sizes range from 4 to 25 000 km². The basins cover a range of different geo- and eco-climatologies, as described by Newman et al. (2015) and Addor et al. (2017a). Out of all 671 basins, we used 447 — these are the basins for which simulations from all benchmark models are available (see Sec. B.4.4). To reiterate, we used benchmark hydrology models that were trained and tested by other groups with experience using these models, and were therefore limited to the 447 basins with results for all benchmark models. The spatial distribution of the 447 basins across the contiguous USA (CONUS) is shown in Fig. B.4.

For each catchment, roughly 30 years of daily meteorological data from three different products exist (DayMet, Maurer, NLDAS). Each meteorological dataset consists of five variables: daily cumulative precipitation, daily minimum and maximum temperature, average short-wave radiation, and vapor pressure.
We used the Maurer forcing data because this is the data product that was used by all benchmark models (see Sec. B.4.4). In addition to meteorological data, CAMELS also includes a set of static catchment attributes derived from remote sensing or CONUS-wide available data products. The static catchment attributes can broadly be grouped into climatic, vegetation or hydrological indices, as well as soil and topological properties. In this study, we used the same 27 catchment attributes as Kratzert et al. (2019c). Target data were daily averaged streamflow observations, originally from the USGS streamflow gauge network, which are also included in the CAMELS dataset.

Training, validation and test set. Following the calibration and test procedure of the benchmark hydrology models, we trained on streamflow observations from 1 October 1999 through 30 September 2008 and tested on observations from 1 October 1989 to 30 September 1999. The remaining period (1 October 1980 to 30 September 1989) was used as validation period for hyperparameter tuning.

B.4.2 DETAILS ON THE TRAINING SETUP AND MC-LSTM HYPERPARAMETERS

The general model setup follows insights from previous studies (Kratzert et al., 2018; 2019c;b; 2020), where LSTMs were used for the same task. We use sequences of 365 timesteps (days) of meteorological inputs to predict discharge at the last timestep of the sequence (sequence-to-one prediction). The mass input x in this experiment was catchment-averaged precipitation (mm/day), and the auxiliary inputs a were the 4 remaining meteorological variables (min. and max. temperature, short-wave radiation and vapor pressure) as well as the 27 static catchment attributes, which are constant over time.

We tested a variety of MC-LSTM model configurations and adaptions for this specific task, which are briefly described below:

1. Processing auxiliary inputs with LSTM: Instead of directly using the auxiliary inputs in the input gate (Eq.
5), output gate (Eq. 6) and time-dependent mass redistribution (Eq. 8), we first processed the auxiliary inputs a with an LSTM and then used the output of this LSTM as the auxiliary inputs. The idea was to add additional memory for the auxiliary inputs, since in its base form only mass can be stored in the cell states of MC-LSTM. This could be seen as a specific adaption for the rainfall-runoff modeling application, since information about the weather today and in the past ought to be useful for controlling the gates and the mass redistribution. Empirically, however, we could not see any significant performance gain and therefore decided not to use the more complex version with an additional LSTM.

2. Auxiliary output + regularization to account for evapotranspiration: Of all precipitation falling in a catchment, only a part ends up as discharge in the river. Large portions of precipitation are lost to the atmosphere in the form of evaporation (from e.g. open water surfaces) and transpiration (from e.g. plants and trees), and to groundwater. One approach to account for this "mass loss" is the following: instead of summing over the outgoing mass (Eq. 4), we used a linear layer to connect the outgoing mass to two output neurons. One neuron was fitted against the observed discharge data, while the second was used to estimate water loss due to unobserved sinks. A regularization term was added to the loss function to account for this. This regularization term was computed as the difference between the sum of the outgoing mass from MC-LSTM and the sum over the two output neurons. This did work, and the timeseries of the second auxiliary output neuron gave interesting results (i.e., matching the expected behavior of the annual evapotranspiration cycle); however, the results were not significantly better compared to our final model setup, which is why we rejected this architectural change.

3.
Explicit trash cell: Another way to account for evapotranspiration that we tested is to allow the model to use one memory cell as an explicit "trash cell". That is, instead of deriving the final model prediction as the sum over the entire outgoing mass vector, we only calculate the sum over all but e.g. one element (see Eq. 13). This simple modification allows the model to use e.g. the first memory cell to discard mass from the system, which is then ignored for the model prediction. We found that this modification improved performance, and thus integrated it into our final model setup.

4. Input/output scaling to account for input/output uncertainty: Both input and output data in our applications inherit large uncertainties (Nearing et al., 2016), which is not ideal for mass-conserving models (and likely one of the reasons why LSTM performs so well compared to all other mass-conserving models). To account for that, we tried three different adaptions. First, we used a small fully connected network to derive time-dependent scaling weights for the mass input, which we regularized to be close to one. Second, we used a linear layer with positive weights to map the outgoing mass to the final model prediction, where all weights were initialized to one and the bias to zero. Third, we combined both. Out of the three, the input scaling resulted in the best-performing model; however, the results were worse than without scaling.

5. Time-dependent redistribution matrix variants: For this experiment, a time-dependent redistribution matrix is necessary, since the underlying real-world processes (such as snow melt, and thus the conversion from snow into e.g. soil moisture or surface runoff) are time-dependent. Since using the redistribution matrix as proposed in Eq. 8 is memory-demanding, especially for models with larger numbers of memory cells, we also tried a different method for this experiment. Here, we learned a fixed matrix (as in Eq.
7) and only calculated two vectors for each timestep. The final redistribution matrix was then derived as the outer product of the two time-dependent vectors and the static matrix. This resulted in lower memory consumption; however, the model performance deteriorated significantly, which could be a hint towards the complexity required to learn the redistribution processes in this problem.

6. Activation function of the redistribution matrix: We tested several different activation functions for the redistribution matrix in this experiment. Among those were the normalized sigmoid function (that is used e.g. for the input gate), the softmax function (as in Eq. 8) and the normalized ReLU activation function (see Eq. 18). We achieved the best results using the normalized ReLU variant and can only hypothesize about the reason: in this application (rainfall-runoff modelling) there are several state processes that are strictly disconnected. One example is snow and groundwater: groundwater will never turn into snow, and snow will never transform into groundwater (not directly at least; it will first need to percolate through upper soil layers). Using normalized sigmoids or softmax makes it numerically harder (or impossible) to not distribute at least some mass between every pair of cells — because the activations can never be exactly zero. The normalized ReLU activation can do so, however, which might be the reason that it worked better in this case.

As an extension to the standard MC-LSTM model introduced in Eq. (5) to Eq. (8), we also used the mass input (precipitation) in all gates. The reason is the following: different amounts of precipitation can lead to different processes. For example, low amounts of precipitation could be absorbed by the soil and stored as soil moisture, leading to effectively no immediate discharge contribution.
Large amounts of precipitation, on the other hand, could lead to direct surface runoff if the water cannot infiltrate the soil at the rate at which the precipitation falls. Therefore, it is crucial that the gates have access to the information contained in the precipitation input. The final model design used in all hydrology experiments is described by the following equations:

m_tot^t = R^t · c^{t−1} + i^t · x^t    (10)
c^t = (1 − o^t) ⊙ m_tot^t    (11)
h^t = o^t ⊙ m_tot^t    (12)
ŷ = Σ_{i=2}^{n} h_i^t,    (13)

with the gates being defined by

i^t = σ̃(W_i · a^t + U_i · c^{t−1}/‖c^{t−1}‖₁ + V_i · x^t + b_i)    (14)
o^t = σ(W_o · a^t + U_o · c^{t−1}/‖c^{t−1}‖₁ + V_o · x^t + b_o)    (15)
R^t = ReLŨ(W_r · a^t + U_r · c^{t−1}/‖c^{t−1}‖₁ + V_r · x^t + B_r),    (16)

where σ̃ is the normalized logistic function and ReLŨ is the normalized rectified linear unit (ReLU), both defined in the following. The normalized logistic function used in the input gate is defined by

σ̃(i_k) = σ(i_k) / Σ_k σ(i_k).    (17)

In this experiment, the activation function for the redistribution gate is the normalized ReLU function defined by

ReLŨ(s_k) = max(s_k, 0) / Σ_k max(s_k, 0),    (18)

where s is the input vector to the normalized ReLU function.

We manually tried different sets of hyperparameters, because a large-scale automatic hyperparameter search was not feasible. Besides trying out all the variants described above, the main hyperparameter that we tuned for the final model was the number of memory cells. For other parameters, such as the learning rate, mini-batch size and number of training epochs, we relied on previous work using LSTMs on the same dataset.

The final hyperparameters are a hidden size of 64 memory cells and a mini-batch size of 256. We used the Adam optimizer (Kingma & Ba, 2015) with a scheduled learning rate starting at 0.01, lowered to 0.005 after 20 epochs and to 0.001 after another 5 epochs. We trained the model for a total of 30 epochs and used the weights of the last epoch for the final model evaluation.
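The normalized activations of Eqs. (17) and (18) above can be written compactly. The following sketch is our own illustration in NumPy (the column-wise normalization for the redistribution matrix is an assumption consistent with the left-stochasticity used in Theorem 1); it also checks that redistributing with such a matrix conserves total mass:

```python
import numpy as np

def normalized_sigmoid(z):
    """Eq. (17): logistic activations rescaled to sum to one."""
    s = 1.0 / (1.0 + np.exp(-z))
    return s / s.sum(axis=0)

def normalized_relu(z):
    """Eq. (18): ReLU activations rescaled to sum to one.

    Assumes at least one positive entry per column; unlike the
    normalized sigmoid, it can place exactly zero mass on an entry.
    """
    r = np.maximum(z, 0.0)
    return r / r.sum(axis=0)

# A column-stochastic redistribution matrix preserves the total mass:
rng = np.random.default_rng(0)
R = normalized_relu(rng.uniform(0.1, 1.0, size=(5, 5)))  # columns sum to 1
c = rng.uniform(size=5)
assert np.isclose((R @ c).sum(), c.sum())
```

The ability of the normalized ReLU to output exact zeros is what allows strictly disconnected states (e.g. snow and groundwater) to exchange no mass at all, as discussed in point 6 above.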
All weight matrices were initialized as (semi-)orthogonal matrices (Saxe et al., 2014) and all bias terms with a constant value of zero. The only exception was the bias of the output gate, which we initialized to −3 to keep the output gate closed at the beginning of training.

B.4.3 DETAILS ON THE LSTM MODEL

For LSTM, we largely relied on expertise from previous studies (Kratzert et al., 2018; 2019c;b; 2020). The only hyperparameter we adapted was the number of memory cells, since we used fewer basins (447) than the previous studies (531). We found that an LSTM with 128 memory cells, compared to the 256 used in previous studies, resulted in slightly better results. Apart from that, we trained LSTMs with the same inputs and settings (sequence-to-one with a sequence length of 365) as described in the previous section for MC-LSTM. We used the standard LSTM implementation from the PyTorch package (Paszke et al., 2019), i.e., with forget gate (Gers et al., 2000). We manually initialized the bias of the forget gate to 3 in order to keep the forget gate open at the beginning of training.

B.4.4 DETAILS ON THE BENCHMARK MODELS

The benchmark models were first collected by Kratzert et al. (2019c). All models were configured, trained and run by several different research groups, most often the respective model developers themselves. This was done to avoid any potential to favor our own models. All models used the same forcing data (Maurer) and the same time periods to train and test. The models can be classified into two groups:

1. Models trained for individual watersheds. These are SAC-SMA (Newman et al., 2017), VIC (Newman et al., 2017), three different model structures of FUSE¹, mHM (Mizukami et al., 2019) and HBV (Seibert et al., 2018).
For the HBV model, two different simulations exist: first, the ensemble average of 1000 untrained HBV models (lower benchmark), and second, the ensemble average of 100 trained HBV models (upper benchmark). For details see Seibert et al. (2018).

2. Models trained regionally. For hydrological models, regional training means that one parameter-transfer model was trained, which estimates watershed-specific model parameters through globally trained model functions of e.g. soil maps or other catchment attributes. For this setting, the benchmark dataset includes simulations of the VIC model (Mizukami et al., 2017) and mHM (Rakovec et al., 2019).

B.4.5 DETAILED RESULTS

Table B.6 provides results for MC-LSTM and LSTM averaged over the n = 10 model repetitions.

Table B.7 provides the complete benchmarking results.

¹Provided by Nans Addor on personal communication.

C THEOREMS & PROOFS

Theorem 1 (Conservation property). Let m_c^τ = Σ_k c_k^τ and m_h^τ = Σ_k h_k^τ be, respectively, the mass in the MC-LSTM storage and the outputs at time τ. At any timestep τ, we have:

m_c^τ = m_c^0 + Σ_{t=1}^{τ} x^t − Σ_{t=1}^{τ} m_h^t.

That is, the change of mass in the cell states is the difference between input and output mass, accumulated over time.

Proof. The proof is by induction, and we use m_tot^t = R^t · c^{t−1} + i^t · x^t from Eq. (2).

For τ = 0, we have m_c^0 = m_c^0 + Σ_{t=1}^{0} x^t − Σ_{t=1}^{0} m_h^t, which is trivially true when using the convention that an empty sum, Σ_{t=1}^{0}, equals 0.

Assuming that the statement holds for τ = T, we show that it must also hold for τ = T + 1.

Starting from Eq. (3), the mass of the cell states at time T + 1 is given by:

m_c^{T+1} = Σ_{k=1}^{K} (1 − o_k) m_{tot,k}^{T+1} = Σ_{k=1}^{K} m_{tot,k}^{T+1} − Σ_{k=1}^{K} o_k m_{tot,k}^{T+1},

where m_{tot,k}^t is the k-th entry of the result of Eq. (2) (at timestep t).
The sum over entries in the first term can be simplified as follows:

Σ_{k=1}^{K} m_{tot,k}^{T+1} = Σ_{k=1}^{K} ( Σ_{j=1}^{K} r_{kj} c_j^T + i_k x^{T+1} )
= Σ_{j=1}^{K} c_j^T ( Σ_{k=1}^{K} r_{kj} ) + x^{T+1} Σ_{k=1}^{K} i_k
= m_c^T + x^{T+1}.

The final simplification is possible because R and i are (left-)stochastic. The mass of the outputs can then be computed from Eq. (4):

m_h^{T+1} = Σ_{k=1}^{K} o_k m_{tot,k}^{T+1}.

Putting everything together, we find

m_c^{T+1} = Σ_{k=1}^{K} m_{tot,k}^{T+1} − Σ_{k=1}^{K} o_k m_{tot,k}^{T+1}
= m_c^T + x^{T+1} − m_h^{T+1}
= m_c^0 + Σ_{t=1}^{T} x^t − Σ_{t=1}^{T} m_h^t + x^{T+1} − m_h^{T+1}
= m_c^0 + Σ_{t=1}^{T+1} x^t − Σ_{t=1}^{T+1} m_h^t.

By the principle of induction, we conclude that mass is conserved, as specified in Eq. (9).

Corollary 1. In each timestep τ, the cell states c_k^τ are bounded by the sum of mass inputs plus the initial mass, that is |c_k^τ| ≤ Σ_{t=1}^{τ} x^t + m_c^0. Furthermore, if the series of mass inputs converges, lim_{τ→∞} Σ_{t=1}^{τ} x^t = m_x^∞, then the sum of cell states converges as well.

Proof. Since c_k^t ≥ 0, x^t ≥ 0 and m_h^t ≥ 0 for all k and t,

|c_k^τ| = c_k^τ ≤ Σ_{k=1}^{K} c_k^τ = m_c^τ ≤ Σ_{t=1}^{τ} x^t + m_c^0,    (19)

where we used Theorem 1. Convergence follows immediately through the comparison test.

D ON RANDOM MARKOV MATRICES

When initializing an MC-LSTM model, the entries of the K × K redistribution matrix R are created from non-negative, iid random variables (s_ij)_{1≤i,j≤K} with finite mean m, finite variance σ² and bounded fourth moments. We collect them in a matrix S. Next, we assume that these entries are column-normalized to obtain the random Markov matrix R.

Properties of Markov matrices and random Markov matrices. Let λ_1, …, λ_K be the eigenvalues and s_1, …, s_K be the singular values of R, ordered such that |λ_1| ≥ … ≥ |λ_K| and s_1 ≥ … ≥ s_K. We then have the following properties for any Markov matrix (not necessarily random):

• λ_1 = 1.
• 1ᵀ R = 1ᵀ.
• $s_1 = \|R\|_2 \le \sqrt{K}$.\nFurthermore, for random Markov matrices, we have\n• $\lim_{K\to\infty} s_1 = 1$ (Bordenave et al., 2012, Theorem 1.2).\nFor the reader’s convenience, we briefly discuss further selected interesting properties of random Markov matrices in the next paragraph, especially concerning the global behavior of their eigenvalues and singular values.\n\nCircular and quartercircular law for random Markov matrices. In random matrix theory, one major field of interest concerns the behavior of eigenvalues and singular values as $K \to \infty$. One would like to find out what the limiting distribution of the eigenvalues or singular values looks like. To discuss the most important results in this direction for large Markov matrices $R$, let us introduce some notation.\n\n• $\delta_a$ denotes the Dirac delta measure centered at $a$.\n• By $\mu_R = \frac{1}{K} \sum_{k=1}^{K} \delta_{\lambda_k}$ we denote the empirical spectral density of the eigenvalues of $R$.\n• Similarly, we define the empirical spectral density of the singular values of $R$ as $\nu_R = \frac{1}{K} \sum_{k=1}^{K} \delta_{s_k}$.\n• $Q_\sigma$ denotes the quartercircular distribution on the interval $[0, \sigma]$, and\n• $U_\sigma$ the uniform distribution on the disk $\{z \in \mathbb{C} : |z| \le \sigma\}$.\n\nThen we have, as $K \to \infty$:\n\n• Quartercircular law theorem (Bordenave et al., 2012, Theorem 1.1): $\nu_{\sqrt{K} R} \to Q_\sigma$ almost surely.\n• Circular law theorem (Bordenave et al., 2012, Theorem 1.3): $\mu_{\sqrt{K} R} \to U_\sigma$ almost surely.\n\nThe convergence here is understood in the sense of weak convergence of probability measures with respect to bounded continuous functions. Note that those two famous theorems originally appeared for $\frac{1}{\sqrt{K}} S$ instead of $\sqrt{K} R$. Many more details on those results can be found in Bordenave et al. (2012).\n\nGradient flow of MC-LSTM for random redistributions. Here we provide a short note on the gradient dynamics of the cell state in a random MC-LSTM, hence at initialization of the model. Specifically, we want to provide some heuristics based on arguments about the behavior of large stochastic matrices.
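The Markov-matrix properties above can be checked numerically. Below is a minimal sketch (the uniform distribution of the unnormalized entries is an assumption for illustration), verifying that columns sum to one, that the spectral radius is 1, that $s_1 \le \sqrt{K}$, and that $s_1$ approaches 1 for large $K$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_markov(K, rng):
    S = rng.random((K, K))       # iid non-negative entries (uniform, assumed)
    return S / S.sum(axis=0)     # column-normalize -> random Markov matrix R

R = random_markov(6, rng)
col_sums = R.sum(axis=0)                         # 1^T R = 1^T
spec_rad = np.max(np.abs(np.linalg.eigvals(R)))  # |lambda_1| = 1
s1 = np.linalg.norm(R, 2)                        # s_1 <= sqrt(K)
assert np.allclose(col_sums, 1.0)
assert np.isclose(spec_rad, 1.0)
assert s1 <= np.sqrt(6) + 1e-12

# s_1 -> 1 as K -> infinity (Bordenave et al., 2012, Theorem 1.2); note that
# s_1 >= 1 always, since R^T maps the all-ones vector to itself.
s1_large = np.linalg.norm(random_markov(1000, rng), 2)
assert 1.0 - 1e-9 <= s1_large < 1.05
```

Since $s_1 \ge 1$ holds for every column-stochastic matrix, Theorem 1.2 describes convergence of $s_1$ down to 1 from above.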
Let us start by recalling the formula for $c^t$:\n$$c^t = (1 - o^t) \odot (R^t \cdot c^{t-1} + i^t \cdot x^t). \qquad (20)$$\nNow we investigate the gradient norm $\|\partial c^t / \partial c^{t-1}\|_2$ in the limit $K \to \infty$. We assume that for $K \to \infty$, $o^t \approx 0$ and $i^t \approx 0$ for all $t$. Thus we approximately have:\n$$\Big\| \frac{\partial c^t}{\partial c^{t-1}} \Big\|_2 \approx \|R^t\|_2. \qquad (21)$$\n$R^t$ is a stochastic matrix, and $s_1 = \|R^t\|_2$ is its largest singular value. Theorem 1.2 from Bordenave et al. (2012) ensures that $\|R^t\|_2 \to 1$ as $K \to \infty$ under reasonable moment assumptions on the distribution of the unnormalized entries (see above). Thus we are able to conclude that $\|\partial c^t / \partial c^{t-1}\|_2 \approx 1$ for large $K$ and all $t$, which can prevent the gradients from exploding." } ]
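The conservation property of Theorem 1 can also be verified numerically. The following is a minimal sketch; the randomly drawn column-stochastic redistribution matrices, normalized input gates, and scalar mass input per step are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 8, 50

c = np.zeros(K)                      # cell state, so m_c^0 = 0
mass_in = mass_out = 0.0
for _ in range(T):
    S = rng.random((K, K))
    R = S / S.sum(axis=0)            # column-stochastic redistribution matrix
    i = rng.random(K); i /= i.sum()  # input gate distributes x over the cells
    o = rng.random(K)                # output gate, entries in (0, 1)
    x = rng.random()                 # non-negative scalar mass input
    m_tot = R @ c + i * x            # Eq. (2)
    h = o * m_tot                    # Eq. (4): outgoing mass
    c = (1 - o) * m_tot              # Eq. (3): retained mass
    mass_in += x
    mass_out += h.sum()

# Theorem 1: m_c^T = m_c^0 + sum_t x^t - sum_t m_h^t
assert np.isclose(c.sum(), mass_in - mass_out)
```

The assertion holds to machine precision for any horizon $T$, because each step merely redistributes mass between the storage and the outputs.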
2020
MC-LSTM: MASS-CONSERVING LSTM
SP:69cc1499e1ffdff113346180dd31c60fb1059872
[ "The paper proposes a quasi-Newton-inspired optimization algorithm for stochastic optimization, named APOLLO. It adjusts a previously known update formula to better suit deep learning by using 1) a layer-wise diagonal approximation to the Hessian, and 2) an exponential average of gradients to address the noise. Overall, the algorithm shows promising results on the presented experiments.", "This paper presents the optimization method Apollo, a quasi-Newton method that relies on a parameter-wise version of the weak secant condition to allow for a diagonal approximation of the Hessian. Additionally, the issue of a potentially non-PSD approximation is addressed by replacing the approximation with a rectified absolute value. While the combination of techniques is interesting, my main hesitation comes from the limited discussion concerning other quasi-Newton methods for the same problem setting.", "The paper develops a new quasi-Newton algorithm for stochastic non-convex optimization. In contrast to existing works, it uses a (rectified/capped) diagonal matrix to approximate the Hessian, and incorporates techniques to reduce the stochastic variance. It shows a first-order convergence guarantee for the algorithm and provides an empirical evaluation on 2 CV and 2 NLP datasets.", "The paper presents a diagonal quasi-Newton method that approximates the Hessian with a diagonal matrix. The paper proves a regret bound for the method in the convex case, and shows that in the non-convex setting the expected norm of the gradients goes to 0. The paper then provides experimental results on two image datasets, a small translation task, and a small language modeling task.", "This work proposes a quasi-Newton method, APOLLO, for nonconvex stochastic optimization. Based on a parameter-wise weak secant condition, a diagonal approximation of the Hessian is constructed, and a rectified absolute value is applied to the approximation.
A step-size bias correction technique is used to mitigate stochastic gradient variance. Theoretical convergence analysis is provided under a convex online setting and a nonconvex stochastic setting. Experiments on CV and NLP tasks demonstrate the effectiveness and stability of the proposed method.", "This paper presents a first-order quasi-Newton method, named APOLLO, for solving stochastic nonconvex finite-sum problems with a large number of data points. Each iteration of the method consists of computing a sparse and positive-definite (diagonal) approximation of the objective function's Hessian, followed by a quasi-Newton step. A regret bound is established for the convex setting and a complexity bound for the nonconvex setting, based on prior results on Adam-type optimizers. Finally, a comprehensive set of numerical experiments on three common tasks in vision and language is presented to show the superiority of the proposed method." ]
In this paper, we introduce APOLLO, a quasi-Newton method for nonconvex stochastic optimization, which dynamically incorporates the curvature of the loss function by approximating the Hessian via a diagonal matrix. Importantly, the update and storage of the diagonal approximation of Hessian is as efficient as adaptive first-order optimization methods with linear complexity for both time and memory. To handle nonconvexity, we replace the Hessian with its rectified absolute value, which is guaranteed to be positive-definite. Experiments on three tasks of vision and language show that APOLLO achieves significant improvements over other stochastic optimization methods, including SGD and variants of Adam, in terms of both convergence speed and generalization performance. The implementation of the algorithm is available at anonymous link.
[]
[ { "authors": [ "Naman Agarwal", "Brian Bullins", "Xinyi Chen", "Elad Hazan", "Karan Singh", "Cyril Zhang", "Yi Zhang" ], "title": "Efficient full-matrix adaptive regularization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Rohan Anil", "Vineet Gupta", "Tomer Koren", "Yoram Singer" ], "title": "Memory efficient adaptive optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sue Becker", "Yann Le Cun" ], "title": "Improving the convergence of back-propagation learning with second order methods", "venue": "In Proceedings of the 1988 connectionist models summer school,", "year": 1988 }, { "authors": [ "Antoine Bordes", "Léon Bottou", "Patrick Gallinari" ], "title": "SGD-QN: Careful quasi-newton stochastic gradient descent", "venue": "The Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Léon Bottou", "Olivier Bousquet" ], "title": "The tradeoffs of large scale learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Charles G Broyden" ], "title": "Quasi-newton methods and their application to function minimisation", "venue": "Mathematics of Computation,", "year": 1967 }, { "authors": [ "Charles George Broyden" ], "title": "The convergence of a class of double-rank minimization algorithms", "venue": "IMA Journal of Applied Mathematics,", "year": 1970 }, { "authors": [ "Richard H Byrd", "Peihuang Lu", "Jorge Nocedal", "Ciyou Zhu" ], "title": "A limited memory algorithm for bound constrained optimization", "venue": "SIAM Journal on scientific computing,", "year": 1995 }, { "authors": [ "Richard H Byrd", "Gillian M Chin", "Will Neveitt", "Jorge Nocedal" ], "title": "On the use of stochastic hessian information in optimization methods for machine learning", "venue": "SIAM Journal on Optimization,", "year": 2011 }, { "authors": [ "Richard H Byrd", "Samantha L Hansen", "Jorge Nocedal", "Yoram 
Singer" ], "title": "A stochastic quasi-newton method for large-scale optimization", "venue": "SIAM Journal on Optimization,", "year": 2016 }, { "authors": [ "Ciprian Chelba", "Tomas Mikolov", "Mike Schuster", "Qi Ge", "Thorsten Brants", "Phillipp Koehn", "Tony Robinson" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "arXiv preprint arXiv:1312.3005,", "year": 2013 }, { "authors": [ "X Chen", "M Hong", "S Liu", "R Sun" ], "title": "On the convergence of a class of adam-type algorithms for non-convex optimization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xinyi Chen", "Naman Agarwal", "Elad Hazan", "Cyril Zhang", "Yi Zhang" ], "title": "Extreme tensoring for low-memory preconditioning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yann N Dauphin", "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Surya Ganguli", "Yoshua Bengio" ], "title": "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "William C Davidon" ], "title": "Variable metric method for minimization", "venue": "SIAM Journal on Optimization,", "year": 1991 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "John E Dennis", "Jr.", "Jorge J Moré" ], "title": "Quasi-newton methods, motivation and theory", "venue": "SIAM review,", "year": 1977 }, { "authors": [ "John E Dennis", "Jr.", "Henry Wolkowicz" ], "title": "Sizing and least-change secant methods", "venue": "SIAM Journal on Numerical Analysis,", "year": 1993 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" 
], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Roger Fletcher" ], "title": "A new approach to variable metric algorithms", "venue": "The computer journal,", "year": 1970 }, { "authors": [ "Roger Fletcher" ], "title": "Practical methods of optimization", "venue": null, "year": 1987 }, { "authors": [ "D Goldfarb" ], "title": "A family of variable metric updates derived by variational means", "venue": "Mathematics of Computation,", "year": 1970 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint:1308.0850,", "year": 2013 }, { "authors": [ "Vineet Gupta", "Tomer Koren", "Yoram Singer" ], "title": "Shampoo: Preconditioned stochastic tensor optimization", "venue": "arXiv preprint arXiv:1802.09568,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George E Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N Sainath" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Nitish Shirish Keskar", "Albert S Berahas" ], "title": "AdaQN: An adaptive quasi-newton algorithm for training rnns", "venue": "In Joint European 
Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne Hubbard", "Lawrence D Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yingkai Li", "Huidong Liu" ], "title": "Implementation of stochastic quasi-newton’s method in pytorch", "venue": "arXiv preprint arXiv:1805.02338,", "year": 2018 }, { "authors": [ "Dong C Liu", "Jorge Nocedal" ], "title": "On the limited memory bfgs method for large scale optimization", "venue": "Mathematical programming,", "year": 1989 }, { "authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: Stochastic gradient descent with warm restarts", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", 
"Yan Liu", "Xu Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xuezhe Ma", "Eduard Hovy" ], "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Xuezhe Ma", "Chunting Zhou", "Xian Li", "Graham Neubig", "Eduard Hovy" ], "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing,", "year": 2019 }, { "authors": [ "James Martens" ], "title": "Deep learning via hessian-free optimization", "venue": "In Proceedings of the 27th International Conference on International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "James Martens", "Ilya Sutskever" ], "title": "Learning recurrent neural networks with hessian-free optimization", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Aryan Mokhtari", "Alejandro Ribeiro" ], "title": "Res: Regularized stochastic bfgs algorithm", "venue": "IEEE Transactions on Signal Processing,", "year": 2014 }, { "authors": [ "Aryan Mokhtari", "Alejandro Ribeiro" ], "title": "Global convergence of online limited memory bfgs", "venue": "The Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In ICML,", "year": 2010 }, { "authors": [ "JL Nazareth" ], "title": "If quasi-newton 
then why not quasi-cauchy", "venue": "SIAG/Opt Views-and-news,", "year": 1995 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "FairSeq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations),", "year": 2019 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Ning Qian" ], "title": "On the momentum term in gradient descent learning algorithms", "venue": "Neural networks,", "year": 1999 }, { "authors": [ "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning internal representations by error propagation", "venue": "Technical report, California Univ San Diego La Jolla Inst for Cognitive Science,", "year": 1985 }, { "authors": [ "Nicol N Schraudolph", "Jin Yu", "Simon Günter" ], "title": "A stochastic quasi-newton method for online convex optimization", "venue": "In Artificial intelligence and statistics,", "year": 2007 }, { "authors": [ "David F Shanno" ], "title": "Conditioning of quasi-newton methods for function minimization", "venue": "Mathematics of computation,", "year": 1970 }, { "authors": [ 
"Jascha Sohl-Dickstein", "Ben Poole", "Surya Ganguli" ], "title": "Fast large-scale optimization by unifying stochastic gradient and quasi-newton methods", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural networks for machine learning,", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xiao Wang", "Shiqian Ma", "Donald Goldfarb", "Wei Liu" ], "title": "Stochastic quasi-newton methods for nonconvex stochastic optimization", "venue": "SIAM Journal on Optimization,", "year": 2017 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Philip Wolfe" ], "title": "The secant method for simultaneous nonlinear equations", "venue": "Communications of the ACM,", "year": 1959 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", 
"Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Zhewei Yao", "Amir Gholami", "Sheng Shen", "Kurt Keutzer", "Michael W Mahoney" ], "title": "ADAHESSIAN: An adaptive second order optimizer for machine learning", "venue": "arXiv preprint arXiv:2006.00719,", "year": 2020 }, { "authors": [ "Wei Yuan", "Kai-Xin Gao" ], "title": "EAdam optimizer: How epsilon impact adam", "venue": "arXiv preprint arXiv:2011.02150,", "year": 2020 }, { "authors": [ "Matthew D Zeiler" ], "title": "Adadelta: an adaptive learning rate method", "venue": "arXiv preprint:1212.5701,", "year": 2012 }, { "authors": [ "Jingzhao Zhang", "Sai Praneeth Karimireddy", "Andreas Veit", "Seungyeon Kim", "Sashank Reddi", "Sanjiv Kumar", "Suvrit Sra" ], "title": "Why are adaptive methods good for attention models", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Mingfa Zhu", "John Lawrence Nazareth", "Henry Wolkowicz" ], "title": "The quasi-cauchy relation and diagonal updating", "venue": "SIAM Journal on Optimization,", "year": 1999 }, { "authors": [ "Juntang Zhuang", "Tommy Tang", "Yifan Ding", "Sekhar C Tatikonda", "Nicha Dvornek", "Xenophon Papademetris", "James Duncan" ], "title": "Adabelief optimizer: Adapting stepsizes by the belief in observed gradients", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Martin Zinkevich" ], "title": "Online convex programming and generalized infinitesimal gradient ascent", "venue": "In Proceedings of the 20th international conference on machine learning (icml-03),", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Nonconvex stochastic optimization is of core practical importance in many fields of machine learning, in particular for training deep neural networks (DNNs). First-order gradient-based optimization algorithms, conceptually attractive due to their linear time and memory complexity, have led to tremendous progress and impressive successes. A number of advanced first-order algorithms have emerged over the years to pursue fast and stable convergence, among which stochastic gradient descent (SGD) (Robbins & Monro, 1951; LeCun et al., 1998), equipped with momentum (Rumelhart et al., 1985; Qian, 1999; Bottou & Bousquet, 2008), has stood out for its simplicity and effectiveness across a wide range of applications (Hinton & Salakhutdinov, 2006; Hinton et al., 2012; Graves, 2013). However, one disadvantage of SGD is that the gradients in different directions are scaled uniformly, resulting in limited convergence speed and a sensitive choice of learning rate; this has spawned a lot of recent interest in accelerating SGD from the algorithmic and practical perspectives.\n\nRecently, many adaptive first-order optimization methods have been proposed to achieve rapid training progress with element-wise scaled learning rates, and we can only mention a few here due to space limits. In their pioneering work, Duchi et al. (2011) proposed AdaGrad, which scales the gradient by the square root of the accumulated squared gradients from the first iteration. While AdaGrad works well for sparse settings, its performance significantly degrades for dense settings, primarily due to the monotonic increase of the accumulation. Subsequently, several methods have been proposed with the intuition of limiting the accumulation to a small window of past iterations and, in particular, exponentially reducing the weight of earlier iterations.
Notable works incorporating this idea are RMSProp (Tieleman & Hinton, 2012), AdaDelta (Zeiler, 2012), and Adam (Kingma & Ba, 2015), among which Adam has become the default optimization algorithm across many deep learning applications because of its fast convergence speed and relatively consistent selection of hyper-parameters (Ruder, 2016; Zhang et al., 2020). However, it has been observed that these adaptive optimization methods may converge to bad/suspicious local optima, resulting in worse generalization ability than their non-adaptive counterparts (Wilson et al., 2017), or may fail to converge due to unstable and extreme learning rates (Luo et al., 2019).\n\nQuasi-Newton methods have been widely used in solving convex optimization problems, due to their efficient computation and fast convergence rate (Broyden, 1967; Dennis & Moré, 1977). However, the stochastic, high-dimensional and nonconvex nature of many machine learning tasks, such as training deep neural networks, has rendered many classical quasi-Newton methods ineffective and/or inefficient (Keskar & Berahas, 2016; Wang et al., 2017; Yao et al., 2020). Indeed, in many natural language processing (NLP) and computer vision (CV) tasks (He et al., 2016; Ma & Hovy, 2016; Luo et al., 2019), SGD (with momentum) is chosen as the optimizer, benefiting from its stable and efficient training and outstanding generalization.
To handle nonconvexity, we replace the Hessian with its rectified absolute value, the computation of which is also efficient under our diagonal approximation, yielding an efficient optimization algorithm with linear complexity for both time and memory (§3). Experimentally, through three tasks on CV and NLP with popular deep neural networks, including ResNets (He et al., 2016), LSTMs (Hochreiter & Schmidhuber, 1997) and Transformers (Vaswani et al., 2017), we demonstrate that APOLLO significantly outperforms SGD and variants of Adam, in terms of both convergence speed and generalization performance (§4)." }, { "heading": "2 BACKGROUNDS", "text": "In this section, we set up the notation for nonconvex stochastic optimization, briefly review (quasi-)Newton methods, and discuss the problems of applying quasi-Newton methods to nonconvex stochastic optimization that we attempt to study in the rest of the paper." }, { "heading": "2.1 NONCONVEX STOCHASTIC OPTIMIZATION", "text": "In this paper, we consider the following nonconvex stochastic optimization problem:\n$$\min_{\theta \in \mathbb{R}^d} f(\theta) = \mathbb{E}[l(\theta; \Gamma)] \qquad (1)$$\nwhere $l : \mathbb{R}^d \times \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable (and possibly nonconvex) function, $\theta \in \mathbb{R}^d$ denotes the parameter to be optimized, $\Gamma \in \mathbb{R}^n$ denotes a random variable with distribution function $P$, and $\mathbb{E}[\cdot]$ denotes the expectation w.r.t. $\Gamma$. Intuitively, $\Gamma$ incorporates noise in $f$, leading to a stochastic objective function. A special case of (1) that arises frequently in machine learning is the empirical risk minimization problem:\n$$\min_{\theta \in \mathbb{R}^d} f(\theta) = \frac{1}{N} \sum_{i=1}^{N} l_i(\theta) \qquad (2)$$\nwhere $l_i : \mathbb{R}^d \to \mathbb{R}$ is the loss function corresponding to the $i$-th data point, and $N$ is the number of data samples, which is assumed to be extremely large. Objective functions may also have sources of noise other than data subsampling, such as dropout (Srivastava et al., 2014) in deep neural networks.\n\nDecoupled Parameters. In this work, we consider a setting of decoupled parameters: $\theta = \{\theta^{(l)}, l = 1, \ldots, L\}$.
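As a small illustration of the empirical-risk setting (2), a minibatch gradient is an unbiased estimate of the full gradient. The sketch below uses an assumed toy least-squares instance $l_i(\theta) = \frac{1}{2}(a_i^T \theta - b_i)^2$, not the paper's actual training objective:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 5
A = rng.standard_normal((N, d))    # features a_i as rows (assumed toy data)
b = rng.standard_normal(N)         # targets
theta = rng.standard_normal(d)

full_grad = A.T @ (A @ theta - b) / N        # gradient of Eq. (2)

# Averaging many minibatch gradients recovers the full gradient, confirming
# that data subsampling gives an unbiased (but noisy) gradient estimate.
est = np.zeros(d)
M, batch = 500, 32
for _ in range(M):
    idx = rng.integers(0, N, size=batch)     # sample a minibatch (with replacement)
    est += A[idx].T @ (A[idx] @ theta - b[idx]) / batch
est /= M
assert np.allclose(est, full_grad, atol=0.1)
```

The residual gap shrinks like $1/\sqrt{M \cdot \text{batch}}$, which is exactly the stochastic variance that §3.2 seeks to mitigate.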
Intuitively, under this setting the parameter $\theta$ is decoupled into a sequence of parameters serving different functionalities. For example, in neural network training, the parameters of a neural network can be naturally decoupled into the parameters of different layers or modules." }, { "heading": "2.2 NEWTON AND QUASI-NEWTON METHODS", "text": "Newton’s method usually employs the following updates to solve (1):\n$$\theta_{t+1} = \theta_t - H_t^{-1} g_t \qquad (3)$$\nwhere $g_t = \nabla f(\theta_t)$ is the gradient at $\theta_t$ and $H_t = \nabla^2 f(\theta_t)$ is the Hessian matrix. The convergence rate of Newton’s method is quadratic under standard assumptions (Nocedal & Wright, 2006). However, major challenges with this method are i) the expensive computation of the inverse Hessian at every iteration and the corresponding quadratic memory complexity; and ii) the limitation to convex functions (nonconvexity results in negative curvature of $H_t$ and misleads the update directions).\n\nA standard alternative to Newton’s method is the class of quasi-Newton methods, which have been widely used in solving convex deterministic optimization problems:\n$$\theta_{t+1} = \theta_t - \eta_t B_t^{-1} g_t \qquad (4)$$\nwhere $\eta_t$ is the stepsize (a.k.a. learning rate) and $B_t$ is an approximation to the Hessian matrix $\nabla^2 f(\theta_t)$ at $\theta_t$, which is updated based on the well-known secant equation:\n$$B_{t+1} = \arg\min_{B} \|B - B_t\| \quad \text{s.t.} \quad B_{t+1} s_t = y_t \ \text{(secant equation)} \qquad (5)$$\nwhere $s_t = \theta_{t+1} - \theta_t$ and $y_t = g_{t+1} - g_t$. $B_{t+1}$ is, in the sense of some matrix norm, the closest to $B_t$ among all symmetric matrices that satisfy the secant equation. Each choice of matrix norm results in a different update formula, such as DFP (Davidon, 1991; Fletcher, 1987) and BFGS (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970). The popularity of this method is due to the fact that only the gradient of the objective function is required at each iteration. Since no second derivatives (Hessian) are required, quasi-Newton methods are sometimes more efficient than Newton’s method, especially when the computation of the Hessian is expensive.
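In one dimension, the secant equation (5) determines $B_{t+1}$ uniquely ($B_{t+1} = y_t / s_t$), and the quasi-Newton update (4) reduces to the classic secant method applied to the gradient. A minimal sketch on an assumed toy objective $f(x) = x^4/4 - x$, whose gradient $x^3 - 1$ vanishes at the minimizer $x = 1$:

```python
def f_grad(x):
    # gradient of the assumed toy objective f(x) = x**4 / 4 - x
    return x ** 3 - 1

x_prev, x = 2.0, 1.5
for _ in range(20):
    g = f_grad(x)
    if abs(g) < 1e-12:        # converged
        break
    s = x - x_prev            # s_t = theta_{t+1} - theta_t
    y = g - f_grad(x_prev)    # y_t = g_{t+1} - g_t
    B = y / s                 # scalar Hessian approximation from Eq. (5)
    x_prev, x = x, x - g / B  # quasi-Newton step, Eq. (4) with eta_t = 1

assert abs(x - 1.0) < 1e-6
```

Only gradient differences are used, no second derivatives, which is precisely the appeal of the quasi-Newton family described above.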
To further reduce memory cost, one seminal work is the limited-memory BFGS (L-BFGS) (Liu & Nocedal, 1989; Byrd et al., 1995), which achieves desirable linear computational and memory complexity by approximating the Hessian through sums of first-order information from previous iterations." }, { "heading": "2.3 PROBLEMS OF QUASI-NEWTON METHODS", "text": "Despite their impressive successes on convex deterministic optimization, quasi-Newton methods suffer from their own problems in more challenging scenarios. In this section, we mainly discuss three problems preventing quasi-Newton methods from being applied to large-scale nonconvex stochastic optimization. Due to these problems, no quasi-Newton methods (to the best of our knowledge) designed for nonconvex optimization consistently outperform adaptive first-order algorithms w.r.t. convergence speed and generalization performance. The main goal of this work is to algorithmically design and experimentally demonstrate a novel quasi-Newton method, in the hope of eventually improving the convergence speed and generalization performance of nonconvex stochastic optimization.\n\nStochastic Variance. One challenge for quasi-Newton methods on nonconvex stochastic optimization (1) is the variance introduced by the stochastic nature of the problem. At each iteration, only the stochastic gradient $g_t$ is available, which is an unbiased estimate of the gradient $\nabla f(\theta_t)$ and may lead to an erroneous approximation of the Hessian (Byrd et al., 2011).\n\nNonconvexity. Another key challenge in designing such quasi-Newton methods lies in the difficulty of preserving the positive-definiteness of $B_t$ in (5), due to the nonconvexity of the objective function. What is worse, performing line search is infeasible in the stochastic setting, due to the presence of noise in the stochastic gradients (Wang et al., 2017).\n\nComputational and Memory Efficiency.
Even though quasi-Newton methods are more efficient than Newton’s method, their time and memory complexities are still relatively large compared with adaptive first-order methods. For instance, L-BFGS requires storing first-order information from the m previous iterations, commonly with m ≥ 5, which is still too expensive for deep neural networks containing millions of parameters. Moreover, adapting quasi-Newton methods to nonconvex stochastic optimization typically introduces additional computation, further slowing down these methods." }, { "heading": "3 ADAPTIVE PARAMETER-WISE DIAGONAL QUASI-NEWTON", "text": "With the end goal of designing an efficient quasi-Newton method to solve the problem in (1) in mind, we first propose to approximate the Hessian with a diagonal matrix whose elements are determined by the variational approach subject to the parameter-wise weak secant equation (§3.1). Then, we explain our stepsize bias correction technique to reduce the stochastic variance in §3.2. To handle nonconvexity, we directly use the rectified absolute value of the diagonally approximated Hessian as the preconditioner of the gradient (§3.3). The initialization technique of APOLLO allows us to eliminate one hyper-parameter (§3.4). Finally, we provide a theoretical analysis of APOLLO’s convergence in both convex optimization and nonconvex stochastic optimization (§3.5). The pseudo-code is shown in Algorithm 1." }, { "heading": "3.1 QUASI-NEWTON METHODS WITH DIAGONAL HESSIAN APPROXIMATION", "text": "As discussed in Bordes et al. (2009), designing an efficient stochastic quasi-Newton algorithm involves a careful trade-off between the sparsity of the approximation matrix $B_t$ and the quality of
If B is chosen to be a diagonal matrix satisfying (5), one can obtain a formula similar to the SGD-QN algorithm (Bordes et al., 2009).
An alternative to the secant equation in the updating formula (5), first introduced by Nazareth (1995), is the weak secant equation (Dennis & Wolkowicz, 1993):

B_{t+1} = argmin_B ‖B − B_t‖  s.t.  s_t^T B_{t+1} s_t = s_t^T y_t  (weak secant equation)  (6)

The motivation for using the weak secant condition in a diagonal quasi-Newton method is straightforward: the standard mean-value theorem does not necessarily hold for the vector-valued functions expressed in the secant equation, B_{t+1} s_t = y_t ≈ ∇²f(θ_t) s_t. Thus, we do not know whether there exists a vector θ̃ ∈ R^d such that y_t = ∇²f(θ̃) s_t (Dennis & Moré, 1977). On the other hand, Taylor’s theorem ensures that there exists such a θ̃ that s_t^T y_t = s_t^T ∇²f(θ̃) s_t, leading to the reasonable assumption of the weak secant condition (6).
Based on the variational technique (Zhu et al., 1999), the solution of (6) with the Frobenius norm is:

Λ ≜ B_{t+1} − B_t = [(s_t^T y_t − s_t^T B_t s_t) / ‖s_t‖_4^4] Diag(s_t²)  (7)

where s_t² is the element-wise square of s_t, Diag(s_t²) is the diagonal matrix with diagonal elements from the vector s_t², and ‖·‖_4 is the 4-norm of a vector.
Parameter-Wise Weak Secant Condition. However, in optimization problems with high-dimensional parameter spaces, such as training deep neural networks with millions of parameters, the weak secant condition might be too flexible to produce a good Hessian approximation. In the setting of decoupled parameters (§2.1), we propose a parameter-wise version of the weak secant equation to achieve a trade-off between the secant and weak secant conditions: for each parameter θ^(l) ∈ θ, we update the B corresponding to θ^(l) by solving (6) individually. Remarkably, the secant condition restricts B with an equation over a d-dimensional vector, while the weak secant condition relaxes it to a 1-dimensional scalar. 
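As a sanity check, the closed-form update (7) does satisfy the weak secant condition in (6); this can be verified directly with a minimal numpy sketch (ours, for illustration only; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
B = np.diag(rng.normal(size=dim))   # current diagonal approximation B_t
s = rng.normal(size=dim)            # parameter difference s_t
y = rng.normal(size=dim)            # gradient difference y_t

# Variational solution of (6) under the Frobenius norm, eq. (7):
coeff = (s @ y - s @ B @ s) / np.sum(s ** 4)   # ||s_t||_4^4 = sum_i s_i^4
B_next = B + coeff * np.diag(s ** 2)

# Weak secant condition: s_t^T B_{t+1} s_t = s_t^T y_t
assert np.isclose(s @ B_next @ s, s @ y)
```

The check works for any s and y because the coefficient multiplies Diag(s_t²), whose quadratic form with s_t is exactly ‖s_t‖_4^4, so the correction contributes precisely s_t^T y_t − s_t^T B_t s_t.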
The parameter-wise weak secant condition expresses the restriction as an l-dimensional vector (1 < l < d), resulting in a reasonable trade-off. The updating formula is the same as (7) for each parameter-wise B." }, { "heading": "3.2 STEPSIZE BIAS CORRECTION", "text": "To mitigate the stochastic variance problem in stochastic quasi-Newton methods, APOLLO applies a stepsize bias correction to the stochastic gradients at each step t. We know that the optimal stepsize η_t equals 1 w.r.t. the quadratic approximation underlying Newton’s method, if the Hessian approximation B_t and the stochastic gradient g_t are close to the exact Hessian H_t and gradient ∇f(θ_t), respectively. Inspired by this, we correct the stepsize bias in the stochastic gradient g_t by replacing it with a corrected gradient g_t′ = η_t g_t. Together with the corresponding corrected y_t′ = g_{t+1}′ − g_t′ = η_t y_t, we correct the updating term Λ of B_t in (7) by replacing y_t with y_t′:

Λ′ = [(s_t^T y_t′ − s_t^T B_t s_t) / ‖s_t‖_4^4] Diag(s_t²) = −[(d_t^T y_t + d_t^T B_t d_t) / ‖d_t‖_4^4] Diag(d_t²)  (8)

where d_t = −s_t/η_t = B_t^{−1} g_t is the corrected update direction. Note that after applying the stepsize bias correction, the updating formula of B_t in (8) is independent of the stepsize η_t, eliminating the stepsize bias. Technically, the stepsize bias correction is designed to reduce the stochastic variance, rather than entirely discarding the stepsize; the APOLLO algorithm (Algorithm 1) still incorporates the stepsize at every iteration to enforce convergence.
Based on previous studies, incorporating exponential moving averages (EMVs) of the stochastic gradients significantly reduces the variance (Kingma & Ba, 2015). We follow these works and apply an EMV to g_t, together with the initialization bias correction:

m_{t+1} = [β(1 − β^t) / (1 − β^{t+1})] m_t + [(1 − β) / (1 − β^{t+1})] g_{t+1}  (9)

where 0 < β < 1 is the decay rate of the EMV, and y_t in (8) is written as m_{t+1} − m_t. 
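The recursion (9) folds Adam-style bias correction directly into the moving average: it is algebraically identical to maintaining a plain exponential moving average v_t = β v_{t−1} + (1 − β) g_t and reading out v_t / (1 − β^t). A quick numeric check of this equivalence (our illustrative sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.9
grads = rng.normal(size=(20, 5))  # a sequence of stochastic gradients g_1, ..., g_20

m = np.zeros(5)  # recursion (9): already bias-corrected at every step
v = np.zeros(5)  # plain exponential moving average, corrected only on read-out
for t, g in enumerate(grads, start=1):
    m = (beta * (1 - beta ** (t - 1)) / (1 - beta ** t)) * m \
        + ((1 - beta) / (1 - beta ** t)) * g
    v = beta * v + (1 - beta) * g
    assert np.allclose(m, v / (1 - beta ** t))  # same bias-corrected estimate
```

Keeping m_t bias-corrected at every step matters here because (8) consumes the difference m_{t+1} − m_t directly, rather than a separately corrected read-out as in Adam.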
Note that we do not apply a moving average to the approximated Hessian, even though a diagonal matrix is easier to form and average explicitly than a full matrix. Investigating a moving average of the diagonal B_t might be an interesting direction for future work.
Algorithm 1: APOLLO, our proposed algorithm for nonconvex stochastic optimization. All operations on vectors are element-wise. Good default settings are β = 0.9 and ε = 1e−4.
Initial: m_0, d_0, B_0 ← 0, 0, 0 // Initialize m_0, d_0, B_0 to zero
while t ∈ {0, . . . , T} do
for θ ∈ {θ_1, . . . , θ_L} do
g_{t+1} ← ∇f_t(θ_t) // Calculate gradient at step t
m_{t+1} ← [β(1 − β^t)/(1 − β^{t+1})] m_t + [(1 − β)/(1 − β^{t+1})] g_{t+1} // Update bias-corrected moving average
α ← [d_t^T(m_{t+1} − m_t) + d_t^T B_t d_t] / (‖d_t‖_4 + ε)^4 // Calculate coefficient of B update
B_{t+1} ← B_t − α · Diag(d_t²) // Update diagonal Hessian
D_{t+1} ← rectify(B_{t+1}, 0.01) // Handle nonconvexity
d_{t+1} ← D_{t+1}^{−1} m_{t+1} // Calculate update direction
θ_{t+1} ← θ_t − η_{t+1} d_{t+1} // Update parameters
end
end" }, { "heading": "3.3 RECTIFIED ABSOLUTE VALUE OF HESSIAN FOR NONCONVEXITY", "text": "To guarantee convergence, quasi-Newton methods require the approximated Hessian matrix B_t to be positive definite at each step. The common strategy in previous studies is to solve the updating formula in (5) while restricting the candidate matrix B to be symmetric positive definite. It is known that the BFGS update preserves the positive-definiteness of B_{t+1} as long as the curvature condition s_t^T y_t > 0 holds, which can be guaranteed for strongly convex problems. For nonconvex problems, the curvature condition can be satisfied by performing a line search, which is, however, expensive or even infeasible in the stochastic setting, because exact function values and gradient information are unavailable. Wang et al. (2017) proposed the stochastic damped L-BFGS (SdLBFGS) method, which implicitly generates a positive definite matrix without line search. 
However, it usually requires a large history size (m ≥ 100) to guarantee convergence, which is infeasible for large-scale optimization. To handle nonconvexity, we adopt a different strategy that does not require the solution B_t of (5) to be positive definite. Intuitively, we search for a B_t that is a good approximation of the real Hessian, which is not necessarily positive definite in nonconvex problems. When we use B_t as a preconditioner to compute the update direction, we use its absolute value: |B_t| = (B_t^T B_t)^{1/2}, where the square root denotes the positive definite square root of a matrix. The motivation for the absolute value is straightforward: for dimensions with large absolute curvature, the objective function could be very sharp, and we would prefer to take relatively smaller steps than along flatter dimensions. Since APOLLO formulates B_t as a diagonal matrix, the cost of computing |B_t| is marginal.
Rectified Absolute Value of B_t. For nonconvex objective functions, there exist inflection points whose curvatures are zero. To prevent the steps from becoming arbitrarily large, we rectify the absolute value of B_t with a convexity hyper-parameter σ:

D_t = rectify(B_t, σ) = max(|B_t|, σ)  (10)

where the rectify(·, σ) function is similar to the rectified linear unit (ReLU) (Nair & Hinton, 2010) with the threshold set to σ. The update direction in (8) is then d_t = D_t^{−1} m_t.
AdaHessian (Yao et al., 2020) used an idea similar to the absolute value of B_t to handle nonconvexity, where root-mean-square averaging is applied to compute the Hessian diagonal. Different from APOLLO, AdaHessian requires second-order information to compute the Hessian matvec oracle and approximates the Hessian diagonal using Hutchinson’s method, which is significantly more costly." }, { "heading": "3.4 INITIALIZATION", "text": "The rectified D_t in (10) introduces one more hyper-parameter σ, limiting the application of APOLLO in practice. 
In this section, we show that the zero-initialization approach in APOLLO, which initializes the moving average of gradients m_0, the parameter update direction d_0 and the diagonal approximation of the Hessian B_0 as (vectors of) zeros, leads to a coupled stepsize η and convexity σ, allowing us to eliminate one of the two hyper-parameters.
Coupled Stepsize η and Convexity σ. With the zero initialization of m_0, d_0 and B_0, the following theorem illustrates the relation between η and σ (details in Appendix A):
Theorem 1. Given zero initialization of m_0, d_0, and B_0 and a fixed parameter initialization θ_0, suppose that we have two sets of hyper-parameters (η, σ) and (η′, σ′) with the same ratio: η/σ = η′/σ′. Then the convergence trajectories of these two sets of hyper-parameters are exactly the same:

θ_t = θ_t′, ∀t ∈ {1, . . . , T}, (11)

where θ_t and θ_t′ are the parameters of (η, σ) and (η′, σ′) at iteration t, respectively.
From Theorem 1, we observe that η and σ are coupled with each other, and in practice we only need to tune one of them, leaving the other fixed. Therefore, in our experiments (§4), we fix σ = 0.01 and tune η on different problems1.
Learning Rate Warmup for APOLLO. As discussed in Kingma & Ba (2015), zero initialization leads to estimates biased towards zero in the initial iterations. For the moving average m_t, this bias is corrected by dividing by the bias-correction term as in (9). For d_t and B_t, however, we cannot derive such bias-correction terms. Fortunately, a simple linear warmup heuristic for η during the initial iterations achieves remarkably stable training." }, { "heading": "3.5 CONVERGENCE ANALYSIS", "text": "Similar to previous work (Reddi et al., 2018; Chen et al., 2019; Zhuang et al., 2020), we omit the initialization bias correction step, i.e. we use m_t = β_t m_{t−1} + (1 − β_t) g_t, 0 < β_t < 1, ∀t ∈ [T]. 
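Theorem 1 can also be checked numerically by running the update rule of Algorithm 1 on a toy quadratic with two hyper-parameter settings that share the ratio η/σ. The sketch below is ours (not the authors' implementation); it sets ε = 0 with an explicit zero-denominator guard so that the scaling argument behind Theorem 1 holds exactly:

```python
import numpy as np

def apollo_trajectory(grad, theta0, eta, sigma, beta=0.9, steps=25):
    """Single-parameter-group APOLLO step (Algorithm 1); diagonal B kept as a vector."""
    theta = theta0.astype(float)
    m = np.zeros_like(theta); d = np.zeros_like(theta); B = np.zeros_like(theta)
    traj = []
    for t in range(1, steps + 1):
        g = grad(theta)
        m_prev = m
        m = (beta * (1 - beta ** (t - 1)) / (1 - beta ** t)) * m \
            + ((1 - beta) / (1 - beta ** t)) * g        # bias-corrected average, eq. (9)
        denom = np.sum(d ** 4)                          # ||d_t||_4^4 (eps = 0, guarded)
        alpha = 0.0 if denom == 0 else (d @ (m - m_prev) + d @ (B * d)) / denom
        B = B - alpha * d ** 2                          # diagonal Hessian update
        D = np.maximum(np.abs(B), sigma)                # rectify(B, sigma), eq. (10)
        d = m / D                                       # update direction
        theta = theta - eta * d
        traj.append(theta.copy())
    return np.array(traj)

# Toy quadratic f(theta) = 0.5 * theta^T A theta, so grad(theta) = A theta.
A = np.diag([1.0, 0.5, 0.2])
grad = lambda th: A @ th
theta0 = np.array([1.0, -2.0, 3.0])

t1 = apollo_trajectory(grad, theta0, eta=0.01, sigma=0.01)
t2 = apollo_trajectory(grad, theta0, eta=0.1, sigma=0.1)  # same ratio eta / sigma
assert np.allclose(t1, t2)  # identical trajectories, as Theorem 1 predicts
```

Only the ratio η/σ matters under zero initialization: scaling both by a factor c scales B_t by c, the direction d_t by 1/c, and the product η d_t not at all, so the parameter trajectory is unchanged.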
We first analyze the convergence of APOLLO in convex optimization using the online learning framework (Zinkevich, 2003) for a sequence of convex cost functions f_1(θ), f_2(θ), . . . , f_T(θ).
Theorem 2. (Convergence in convex optimization) Let {θ_t} be the sequence generated by APOLLO. Suppose η_t = η/√t, 0 < β_t ≤ β < 1, ‖g_t‖_2 ≤ G, ‖D_{t−1}‖_1/η_{t−1} ≤ ‖D_t‖_1/η_t, and ‖θ_t − θ_{t′}‖_2 ≤ D, ∀t, t′ ∈ [T]. For θ_t generated with the APOLLO algorithm, we have the following bound on the regret:

R_T ≤ [√T D² ‖D_T‖_1] / [2η(1 − β)] + [ηG² / (1 − β)] (2√T − 1) + [D² / (2(1 − β))] Σ_{t=1}^T β_t²/η_t  (12)

The following result is an immediate corollary of the above.
Corollary 2.1. Suppose β_t = βλ^{t−1}, 0 < λ < 1, in Theorem 2. Then we have

R_T ≤ [√T D² ‖D_T‖_1] / [2η(1 − β)] + [ηG² / (1 − β)] (2√T − 1) + D²β² / [2η(1 − β)(1 − λ²)²]  (13)

Theorem 2 implies that the regret of APOLLO is upper bounded by O(√T). The conditions for Corollary 2.1, as in Reddi et al. (2018), can be relaxed to β_t = β/t while still ensuring a regret of O(√T).
For the nonconvex case, we analyze the convergence rate of APOLLO with derivations similar to those in Chen et al. (2019), since APOLLO belongs to the family of generalized Adam-type methods:
Theorem 3. (Convergence in nonconvex stochastic optimization) Under the assumptions:
• f is lower bounded and differentiable; ‖∇f(θ) − ∇f(θ′)‖_2 ≤ L‖θ − θ′‖_2 and ‖D_t‖_∞ < L, ∀t, θ, θ′.
• Both the true and stochastic gradients are bounded, i.e. ‖∇f(θ_t)‖_2 ≤ H and ‖g_t‖_2 ≤ H, ∀t.
• Unbiased and independent noise in g_t, i.e. g_t = ∇f(θ_t) + ζ_t, E[ζ_t] = 0, and ζ_i ⊥ ζ_j, ∀i ≠ j.
Assume η_t = η/√t, β_t ≤ β < 1 is non-increasing, and D_{t−1,j}/η_{t−1} ≤ D_{t,j}/η_t, ∀t ∈ [T], j ∈ [d]. Then:

min_{t∈[T]} E[‖∇f(θ_t)‖_2²] ≤ (L/√T) (C_1 η² H² (1 + log T) + C_2 d η + C_3 d η² + C_4)  (14)

where C_1, C_2, C_3 are constants independent of d and T, C_4 is a constant independent of T, and the expectation is taken w.r.t. all the randomness corresponding to {g_t}. 
Theorem 3 implies that the convergence rate of APOLLO in the nonconvex case is O(log T/√T), similar to Adam-type optimizers (Reddi et al., 2018; Chen et al., 2019). In addition, unlike Theorem 3.1 in Chen et al. (2019), Theorem 3 does not need to assume a bound on each update ‖η_t m_t/D_t‖_2. This is because, with the conditions η_t ≤ η, ‖g_t‖_2 ≤ H and D_t ≥ 1, it is straightforward to derive the bound ‖η_t m_t/D_t‖_2 ≤ ηH = G.
1We changed σ from 1 to 0.01 to bring η into a suitable range. See Appendix E.4 for details." }, { "heading": "4 EXPERIMENTS", "text": "To evaluate APOLLO, we conduct experiments on four benchmark datasets across three tasks of CV and NLP that are commonly used to evaluate optimization algorithms: CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Deng et al., 2009) for image classification; One Billion Words (Chelba et al., 2013) for language modeling; and WMT 2014 English-German for neural machine translation. The five baseline methods we compare with are SGD with momentum (Bottou & Bousquet, 2008), Adam (Kingma & Ba, 2015), Rectified Adam (RAdam) (Liu et al., 2020), AdaBelief (Zhuang et al., 2020), and AdaHessian (Yao et al., 2020). Following Loshchilov & Hutter (2019), we decouple weight decay in Adam, RAdam, AdaBelief and AdaHessian in all the experiments2. For each experiment, we report the average over 5 runs. More detailed descriptions, results and analysis of the conducted experiments are provided in Appendix E.
4.1 IMAGE CLASSIFICATION
We begin our experiments with an evaluation of convergence and generalization performance on image classification. We use ResNet-1103 for CIFAR-10 and a standard ResNeXt-50 (Xie et al., 2017) for ImageNet, respectively. The results on CIFAR-10 and ImageNet are presented in Figure 1 and Table 1, together with the five baselines. 
For each optimizer, we use two scheduling strategies for learning rate decay: i) milestone, which decays the learning rate at the end of some predefined epochs; and ii) the cosine annealing schedule proposed in Loshchilov & Hutter (2017). All the optimization methods are comprehensively tuned, especially the learning rate and the rate of weight decay, because the strength of weight decay regularization is correlated with the learning rate, even when the decoupled weight decay technique (Loshchilov & Hutter, 2019) is applied. The tuning information and the model details are provided in Appendix E.1.
From Figure 1 and Table 1, we see that APOLLO outperforms the four first-order methods (SGD, Adam, RAdam and AdaBelief) in both convergence speed and classification accuracy, demonstrating its effectiveness in training ResNet architectures based on convolutional neural networks (CNNs) (LeCun et al., 1989). Compared with AdaHessian, APOLLO obtains better test accuracy with similar convergence speed. Note that AdaHessian requires second-order information and is significantly more costly (detailed comparison of time and memory costs in Appendix F.3). Thus, we omit AdaHessian from the remaining experiments in this paper.
2For AdaBelief, we also tried standard L2 regularization, but the accuracies were consistently worse than with decoupled weight decay.
3ResNet-110 is a modified (small) version of ResNet-18, adapted to the 32 × 32 image size of CIFAR-10.
Robustness to Learning Rate Change. Besides performance improvements, we also investigate the robustness of the different optimization methods to changes of the learning rate. For each optimizer, we use the learning rate from the previous experiment (Table 1) as the base, i.e. 0.1 for SGD, 0.001 for Adam and RAdam, and 0.01 for APOLLO. Then, we explore learning rates that are α times the base learning rate, with α ∈ {0.2, 1.0, 2.0, 10.0}. 
As mentioned above, we observed that the strength of weight decay regularization is correlated with the learning rate, even for Adam and RAdam with decoupled weight decay (Loshchilov & Hutter, 2019). To eliminate the impact of weight decay, we adjust the weight decay rates according to the factor α. Experimental results with ResNet-110 on CIFAR-10 are summarized in Figure 2. After correcting for the impact of weight decay, all the optimization methods, except SGD with α = 10.0, achieve consistent model performance, while APOLLO slightly improves the robustness of model training over the three baseline methods." }, { "heading": "4.2 LANGUAGE MODELING", "text": "To evaluate APOLLO on Recurrent Neural Networks (RNNs), which are applied in a wide range of NLP tasks (Graves, 2013), we conduct experiments on the One Billion Words dataset (Chelba et al., 2013), using a two-layer LSTM network for language modeling (details in Appendix E.2).
Figure 3 and Table 2 report the perplexity (PPL) on training and test data for APOLLO and four baseline methods: SGD, Adam, RAdam and AdaBelief. As shown in Figure 3, although APOLLO is slower than Adam-type methods in the first few updates, its convergence is much faster after that. On generalization performance, APOLLO achieves significant improvements (more than 4.0 PPL points on test data) over Adam and RAdam. In addition, APOLLO also outperforms AdaBelief, which obtains the lowest PPL among the three Adam-type optimization methods4. This demonstrates the effectiveness of APOLLO in training LSTM-based neural architectures.
Training Stability. From the middle plot of Figure 3 we see that the training losses of Adam and RAdam may suddenly increase. This occurred in all runs of Adam and RAdam; we kept those that successfully converged (the loss went back to normal after some updates) and discarded those that failed to converge (the model crashed due to numerical overflow of the loss). 
The models optimized with APOLLO never suffered from this issue, demonstrating its stability.
4We found that AdaBelief is very sensitive to the value of ε. The result in Table 2 is obtained using ε = 1e−12. With other values, e.g. 1e−8 or 1e−16, the PPL of AdaBelief is even higher than that of Adam and RAdam. See Appendix E.2 for the details of hyper-parameter tuning.
4.3 NEURAL MACHINE TRANSLATION
Table 3: Test BLEU.
Method BLEU
SGD 26.59±0.07
Adam 27.84±0.12
RAdam 28.15±0.15
AdaBelief 28.14±0.11
APOLLO 28.34±0.10
To evaluate APOLLO on the attention-based Transformer architecture (Vaswani et al., 2017), we train the Transformer-base model on the WMT 2014 English-German (EN-DE) dataset (around 4.5M sentence pairs). We use the same data preprocessing steps as in Ma et al. (2019) (details in Appendix E.3). We compare APOLLO with the same four baseline methods as in the language modeling experiments. For each experiment, we report the mean and standard deviation over 5 runs. From Table 3, the first interesting observation is that SGD performs much worse than Adam-type methods, opposite to its behaviour for ResNet- and LSTM-based neural architectures. Similar observations about SGD were reported in Yao et al. (2020) and Zhang et al. (2020). Despite this, APOLLO obtains improvements over all the baseline methods for NMT with Transformers." }, { "heading": "5 RELATED WORK", "text": "Stochastic Quasi-Newton Methods. In the literature on (nonconvex) stochastic quasi-Newton methods, several algorithms have been developed recently for large-scale machine learning problems: oLBFGS (Schraudolph et al., 2007; Mokhtari & Ribeiro, 2015), RES (Mokhtari & Ribeiro, 2014), SFO (Sohl-Dickstein et al., 2014), SQN (Byrd et al., 2016), SdLBFGS (Wang et al., 2017), and AdaQN (Keskar & Berahas, 2016), among which only SdLBFGS and AdaQN are designed to solve nonconvex optimization problems. 
The SdLBFGS algorithm carefully controls the quality of modified BFGS updates to preserve the positive-definiteness of B_t in (5) without line search. AdaQN shares a similar idea but is specifically designed for RNNs, refining the initial L-BFGS scaling, step acceptance control and the choice of curvature information matrix, and adopting the SQN framework (Byrd et al., 2016). Different from these two methods, APOLLO does not require B_t in (5) to be positive definite, instead replacing it with its rectified absolute value to handle nonconvexity. Moreover, both SdLBFGS and AdaQN use an updating formula similar to L-BFGS and require an even larger history size (commonly ≥ 100) to guarantee convergence, preventing them from being applied to large-scale optimization. For a comprehensive comparison of SdLBFGS with APOLLO, we conducted experiments with small toy CNN models (details in Appendix G).
Adaptive First-Order Methods. Through its diagonal approximation of the Hessian, APOLLO is also related to diagonally-scaled first-order algorithms such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), AdaDelta (Zeiler, 2012), and Adam (Kingma & Ba, 2015). Subsequently, a number of techniques have emerged to theoretically justify and algorithmically improve Adam, including AMSGrad (Reddi et al., 2018), AdaBound (Luo et al., 2019), RAdam (Liu et al., 2020) and AdaBelief (Zhuang et al., 2020). The main difference is that the diagonal preconditioning in APOLLO is directly derived from the quasi-Newton updating formula (6). In terms of memory efficiency, Anil et al. (2019) and Chen et al. (2020) further reduce the memory cost of adaptive methods, and Agarwal et al. (2019) proposed efficient full-matrix adaptive regularization.
Stochastic Second-Order Hessian-Free Methods. Stochastic second-order Hessian-free methods (Martens, 2010; Martens & Sutskever, 2011) implicitly solve quadratic models using matrix-vector products. Dauphin et al. 
(2014) highlighted the prevalence of saddle points and proposed a method to rapidly escape them. K-FAC (Martens & Grosse, 2015) computes a second-order step by constructing an invertible approximation of the Fisher information matrix in an online fashion. Shampoo (Gupta et al., 2018) approximates the Fisher information matrix using a low-rank decomposition. Recently, Yao et al. (2020) proposed AdaHessian, which approximates the Hessian diagonal using Hutchinson’s method. These second-order methods differ from APOLLO mainly in requiring second-order information of the objective function at each iteration." }, { "heading": "6 CONCLUSION AND EXTENSIONS", "text": "We have introduced APOLLO, a simple and computationally efficient quasi-Newton algorithm for nonconvex stochastic optimization. The method is aimed at large-scale optimization problems in the sense of large datasets and/or high-dimensional parameter spaces, such as machine learning with deep neural networks. Experimental results on three CV and NLP tasks demonstrate the effectiveness of APOLLO, in terms of both convergence speed and generalization performance. In Appendix C, we briefly outline a few extensions to APOLLO that we want to explore in future work." } ]
null
APOLLO: AN ADAPTIVE PARAMETER-WISE DIAGONAL QUASI-NEWTON METHOD FOR NONCONVEX STOCHASTIC OPTIMIZATION
SP:9e9ae7233f8037f5ae0ef4b641027dd46b997324
[ "The paper proposes a benchmark for the evaluation of unsupervised learning of object-centric representations. The benchmark consists of three datasets, multi-object tracking metrics, and the evaluation of four methods. The proposed dataset consists of three sets of video sequences, procedurally generated, which are either generated from slight variations of existing works (Sprites-MOT) or on the basis of existing datasets (dSprites, Video Object Room). For evaluation, the authors propose a slight variation of the protocol of the MOT challenge (with the addition of a Mostly Detected measure which does not penalize ID switches). As part of the paper, they also evaluate and discuss the performances of four object-centric representation models, one of them (Video MONet) being an extension of an existing approach, proposed as part of this paper, and the remaining being state of the art approaches for the task.", "The paper presents an empirical evaluation of a number of recent models for unsupervised object-based video modelling. Five different models are evaluated on three (partially novel) benchmarks, providing a unifying perspective on the relative performance of these models. Several common issues are identified and highlighted using challenge datasets: the reliance on color as a cue for object segmentation, occlusion, object size, and change in object appearance. The paper concludes with several ideas for alleviating these issues." ]
Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding. Recently, several methods have been proposed for unsupervised learning of object-centric representations. However, since these models have been evaluated with respect to different downstream tasks, it remains unclear how they compare in terms of basic perceptual abilities such as detection, figure-ground segmentation and tracking of individual objects. To close this gap, we design a benchmark with three datasets of varying complexity and seven additional test sets which feature challenging tracking scenarios relevant for natural videos. Using this benchmark, we compare the perceptual abilities of four unsupervised object-centric learning approaches: VIMON, a video-extension of MONET, based on a recurrent spatial attention mechanism, OP3, which exploits clustering via spatial mixture models, as well as TBA and SCALOR, which use an explicit factorization via spatial transformers. Our results suggest that architectures with unconstrained latent representations and full-image object masks such as VIMON and OP3 are able to learn more powerful representations in terms of object detection, segmentation and tracking than the explicitly parameterized spatial transformer based architecture of TBA and SCALOR. We also observe that none of the methods are able to gracefully handle the most challenging tracking scenarios despite their synthetic nature, suggesting that our benchmark may provide fruitful guidance towards learning more robust object-centric video representations.
[]
[ { "authors": [ "Keni Bernardin", "Rainer Stiefelhagen" ], "title": "Evaluating multiple object tracking performance: The clear mot metrics", "venue": "EURASIP Journal on Image and Video Processing,", "year": 2008 }, { "authors": [ "Domenico Daniele Bloisi", "Luca Iocchi" ], "title": "Independent multimodal background subtraction", "venue": "In CompIMAGE,", "year": 2012 }, { "authors": [ "Christopher P. Burgess", "Loïc Matthey", "Nicholas Watters", "Rishabh Kabra", "Irina Higgins", "Matthew Botvinick", "Alexander Lerchner" ], "title": "MONet: Unsupervised scene decomposition and representation", "venue": null, "year": 1901 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine", "venue": "translation. arXiv.org,", "year": 2014 }, { "authors": [ "Eric Crawford", "Joelle Pineau" ], "title": "Spatially invariant unsupervised object detection with convolutional neural networks", "venue": "Proc. of the Conf. on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Antonia Creswell", "Kyriacos Nikiforou", "Oriol Vinyals", "Andre Saraiva", "Rishabh Kabra", "Loic Matthey", "Chris Burgess", "Malcolm Reynolds", "Richard Tanburn", "Marta Garnelo", "Murray Shanahan" ], "title": "Alignnet: Unsupervised entity", "venue": null, "year": 2007 }, { "authors": [ "Martin Engelcke", "Adam R. Kosiorek", "Oiwi Parker Jones", "Ingmar Posner" ], "title": "GENESIS: Generative scene inference and sampling with object-centric latent representations", "venue": null, "year": 1907 }, { "authors": [ "S.M. 
Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Koray Kavukcuoglu", "Geoffrey E Hinton" ], "title": "Attend, infer, repeat: Fast scene understanding with generative models", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Leon A. Gatys", "Alexander S. Ecker", "Matthias Bethge" ], "title": "A neural algorithm of artistic", "venue": "style. arXiv.org,", "year": 2015 }, { "authors": [ "Klaus Greff", "Antti Rasmus", "Mathias Berglund", "Tele Hao", "Harri Valpola", "Jürgen Schmidhuber" ], "title": "Tagger: Deep unsupervised perceptual grouping", "venue": "In Advances in Neural Information Processing Systems (NeurIPS)", "year": 2016 }, { "authors": [ "Klaus Greff", "Sjoerd van Steenkiste", "Jürgen Schmidhuber" ], "title": "Neural expectation maximization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS)", "year": 2017 }, { "authors": [ "Klaus Greff", "Raphaël Lopez Kaufman", "Rishabh Kabra", "Nick Watters", "Christopher Burgess", "Daniel Zoran", "Loic Matthey", "Matthew Botvinick", "Alexander Lerchner" ], "title": "Multi-object representation learning with iterative variational inference", "venue": "In Proc. of the International Conf. on Machine learning (ICML),", "year": 2019 }, { "authors": [ "Zhen He", "Jian Li", "Daxue Liu", "Hangen He", "David Barber" ], "title": "Tracking by animation: Unsupervised learning of multi-object attentive trackers", "venue": "In Proc. IEEE Conf. on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "X. Hou", "L. Shen", "K. Sun", "G. 
Qiu" ], "title": "Deep feature consistent variational autoencoder", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman", "Koray Kavukcuoglu" ], "title": "Spatial transformer networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS)", "year": 2015 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In Proc. of the International Conf. on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Jindong Jiang", "Sepehr Janghorbani", "Gerard De Melo", "Sungjin Ahn" ], "title": "Scalor: Generative world models with scalable object representations", "venue": "In Proc. of the International Conf. on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proc. of the International Conf. on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Proc. of the International Conf. on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Thomas Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive learning of structured world models", "venue": "In Proc. of the International Conf. 
on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Adam Kosiorek", "Hyunjik Kim", "Yee Whye Teh", "Ingmar Posner" ], "title": "Sequential attend, infer, repeat: Generative modelling of moving objects", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Zhixuan Lin", "Yi-Fu Wu", "Skand Vishwanath Peri", "Weihao Sun", "Gautam Singh", "Fei Deng", "Jindong Jiang", "Sungjin Ahn" ], "title": "Space: Unsupervised object-oriented scene representation via spatial attention and decomposition", "venue": null, "year": 2001 }, { "authors": [ "Francesco Locatello", "Dirk Weissenborn", "Thomas Unterthiner", "Aravindh Mahendran", "Georg Heigold", "Jakob Uszkoreit", "Alexey Dosovitskiy", "Thomas Kipf" ], "title": "Object-centric learning with slot attention", "venue": null, "year": 2006 }, { "authors": [ "Joe Marino", "Yisong Yue", "Stephan Mandt" ], "title": "Iterative amortized inference", "venue": "In Proc. of the International Conf. on Machine learning (ICML),", "year": 2018 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset", "venue": null, "year": 2017 }, { "authors": [ "A. Milan", "L. Leal-Taixé", "I. Reid", "S. Roth", "K. Schindler" ], "title": "MOT16: A benchmark for multi-object", "venue": "tracking. 
arXiv.org,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Nicolas Heess", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Recurrent models of visual attention", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In Medical Image Computing and Computer-Assisted Intervention (MICCAI),", "year": 2015 }, { "authors": [ "Sjoerd van Steenkiste", "Michael Chang", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Relational neural expectation maximization: Unsupervised discovery of objects and their interactions", "venue": "In Proc. of the International Conf. on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Rishi Veerapaneni", "John D. Co-Reyes", "Michael Chang", "Michael Janner", "Chelsea Finn", "Jiajun Wu", "Joshua B. Tenenbaum", "Sergey Levine" ], "title": "Entity abstraction in visual model-based reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Paul Voigtlaender", "Michael Krause", "Aljosa Osep", "Jonathon Luiten", "Berin Balachandar Gnana Sekar", "Andreas Geiger", "Bastian Leibe" ], "title": "Mots: Multi-object tracking and segmentation", "venue": "In Proc. IEEE Conf. 
on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Julius von Kügelgen", "Ivan Ustyuzhaninov", "Peter Gehler", "Matthias Bethge", "Bernhard Schölkopf" ], "title": "Towards causal generative scene models via competition of experts", "venue": null, "year": 2004 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Matko Bosnjak", "Christopher P. Burgess", "Alexander Lerchner" ], "title": "COBRA: Data-efficient model-based RL through unsupervised object discovery and curiositydriven exploration", "venue": null, "year": 2019 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Christopher P. Burgess", "Alexander Lerchner" ], "title": "Spatial broadcast decoder: A simple architecture for learning disentangled representations in VAEs", "venue": null, "year": 2019 }, { "authors": [ "Jinyang Yuan", "Bin Li", "Xiangyang Xue" ], "title": "Generative modeling of infinite occluded objects for compositional scene representation", "venue": "In Proc. of the International Conf. on Machine learning (ICML),", "year": 2019 }, { "authors": [], "title": "C.6 SCALABLE OBJECT-ORIENTED REPRESENTATION (SCALOR) SCALable Object-oriented Representation (SCALOR) (Jiang et al., 2020) is a spatial transformerbased model that extends SQAIR (Kosiorek et al., 2018) to scale to cluttered scenes. Similar to TBA is factors the latent representations in pose, depth and appearance per object and uses spatial", "venue": null, "year": 2018 }, { "authors": [ "transformers (Jaderberg" ], "title": "TBA, it can handle dynamic backgrounds by integrating a background RNN that models background transitions. Proposal-Rejection Module:: SCALOR uses a proposal-rejection module g to discover new objects. All frames up to the current time step x1:t are first encoded using a convolutional LSTM f", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans understand the world in terms of objects. Being able to decompose our environment into independent objects that can interact with each other is an important prerequisite for reasoning and scene understanding. Similarly, an artificial intelligence system would benefit from the ability to both extract objects and their interactions from video streams, and keep track of them over time.\nRecently, there has been an increased interest in unsupervised learning of object-centric representations. The key insight of these methods is that the compositionality of visual scenes can be used to both discover (Eslami et al., 2016; Greff et al., 2019; Burgess et al., 2019) and track objects in videos (Greff et al., 2017; van Steenkiste et al., 2018; Veerapaneni et al., 2019) without supervision. However, it is currently not well understood how the learned visual representations of different models compare to each other quantitatively, since the models have been developed with different downstream tasks in mind and have not been evaluated using a common protocol. Hence, in this work, we propose a benchmark based on procedurally generated video sequences to test basic perceptual abilities of object-centric video models under various challenging tracking scenarios.\nAn unsupervised object-based video representation should (1) effectively identify objects as they enter a scene, (2) accurately segment objects, as well as (3) maintain a consistent representation for each individual object in a scene over time. These perceptual abilities can be evaluated quantitatively in the established multi-object tracking framework (Bernardin & Stiefelhagen, 2008; Milan et al., 2016). 
We propose to utilize this protocol for analyzing the strengths and weaknesses of different object-centric representation learning methods, independent of any specific downstream task, in order to uncover the different inductive biases hidden in their choice of architecture and loss formulation. We therefore compiled a benchmark consisting of three procedurally generated video datasets of varying levels of\nvisual complexity and two generalization tests. Using this benchmark, we quantitatively compared three classes of object-centric models, leading to the following insights:\n• All of the models have shortcomings handling occlusion, albeit to different extents. • OP3 (Veerapaneni et al., 2019) performs strongest in terms of quantitative metrics, but\nexhibits a surprisingly strong dependency on color to separate objects and accumulates false positives when fewer objects than slots are present. • Spatial transformer models, TBA (He et al., 2019) and SCALOR (Jiang et al., 2020), train\nmost efficiently and feature explicit depth reasoning in combination with amodal masks, but are nevertheless outperformed by the simpler model, VIMON, lacking a depth or interaction model, suggesting that the proposed mechanisms may not yet work as intended.\nWe will make our code, data, as well as a public leaderboard of results available." }, { "heading": "2 RELATED WORK", "text": "Several recent lines of work propose to learn object-centric representations from visual inputs for static and dynamic scenes without explicit supervision. Though their results are promising, methods are currently restricted to handling synthetic datasets and as of yet are unable to scale to complex natural scenes. 
Furthermore, a systematic quantitative comparison of methods is lacking.\nSelecting and processing parts of an image via spatial attention has been one prominent approach for this task (Mnih et al., 2014; Eslami et al., 2016; Kosiorek et al., 2018; Burgess et al., 2019; Yuan et al., 2019; Crawford & Pineau, 2019; Locatello et al., 2020). As an alternative, spatial mixture models decompose scenes by performing image-space clustering of pixels that belong to individual objects (Greff et al., 2016; 2017; 2019; van Steenkiste et al., 2018). While some approaches aim at learning a suitable representation for downstream tasks (Watters et al., 2019a; Veerapaneni et al., 2019), others target scene generation (Engelcke et al., 2019; von Kügelgen et al., 2020). We analyze three classes of models for processing videos, covering three models based on spatial attention and one based on spatial mixture modeling.\nSpatial attention models with unconstrained latent representations use per-object variational autoencoders, as introduced by Burgess et al. (2019). von Kügelgen et al. (2020) adapts this approach for scene generation. So far, such methods have been designed for static images, but not for videos. We therefore extend MONET (Burgess et al., 2019) to be able to accumulate evidence over time for tracking, enabling us to include this class of approaches in our evaluation. Recent concurrent work on AlignNet (Creswell et al., 2020) applies MONET frame-by-frame and tracks objects by subsequently ordering the extracted objects consistently.\nSpatial attention models with factored latents use an explicit factorization of the latent representation into properties such as position, scale and appearance (Eslami et al., 2016; Crawford & Pineau, 2019). These methods use spatial transformer networks (Jaderberg et al., 2015) to render per-object reconstructions from the factored latents (Kosiorek et al., 2018; He et al., 2019; Jiang et al., 2020). 
SQAIR (Kosiorek et al., 2018) does not perform segmentation, identifying objects only at the bounding-box level. We select Tracking-by-Animation (TBA) (He et al., 2019) and SCALOR (Jiang et al., 2020) for analyzing spatial transformer methods in our experiments, which explicitly disentangle object shape and appearance, providing access to object masks.\nSpatial mixture models cluster pixels using a deep neural network trained with expectation maximization (Greff et al., 2017; van Steenkiste et al., 2018). IODINE (Greff et al., 2019) extends these methods with an iterative amortised variational inference procedure (Marino et al., 2018), improving segmentation quality. SPACE (Lin et al., 2020) combines mixture models with spatial attention to improve scalability. To work with video sequences, OP3 (Veerapaneni et al., 2019) extends IODINE by modeling individual objects’ dynamics as well as pairwise interactions. We therefore include OP3 in our analysis as a representative spatial mixture model." }, { "heading": "3 OBJECT-CENTRIC REPRESENTATION BENCHMARK", "text": "To compare the different object-centric representation learning models on their basic perceptual abilities, we use the well-established multi-object tracking (MOT) protocol (Bernardin & Stiefelhagen,\n2008). In this section, we describe the datasets and metrics considered in our benchmark, followed by a brief description of the models evaluated." }, { "heading": "3.1 DATASETS", "text": "Current object-centric models are not capable of modeling complex natural scenes (Burgess et al., 2019; Greff et al., 2019; Lin et al., 2020). Hence, we focus on synthetic datasets that resemble those which state-of-the-art models were designed for. Specifically, we evaluate on three synthetic datasets1 (see Table 1), which cover multiple levels of visual and motion complexity. 
Synthetic stimuli enable us to precisely generate challenging scenarios in a controllable manner in order to disentangle sources of difficulty and glean insights on what models specifically struggle with. We design different scenarios that test complexities that would occur in natural videos such as partial or complete occlusion as well as similar object appearances.\nSprites-MOT (SpMOT, Table 1 left), as proposed by He et al. (2019), features simple 2D sprites moving linearly on a black background with objects moving in and out of frame during the sequence. Video-Multi-dSprites (VMDS, Table 1 right) is a video dataset we generated based on a colored, multi-object version of the dSprites dataset (Matthey et al., 2017). Each video contains one to four sprites that move non-linearly and independently of each other with the possibility of partial or full occlusion. Besides the i.i.d. sampled training, validation and test sets of VMDS, we generate seven additional challenge sets that we use to study specific test situations we observed to be challenging, such as guaranteed occlusion, specific object properties, or out-of-distribution appearance variations. Video Objects Room (VOR, Table 1 middle) is a video dataset we generated based on the static Objects Room dataset (Greff et al., 2019), which features static objects in a 3D room with a moving camera. For full details on the datasets and their generation, see Appendix B." }, { "heading": "3.2 METRICS", "text": "Our evaluation protocol follows the multi-object tracking (MOT) challenge, a standard and widelyused benchmark for supervised object tracking (Milan et al., 2016). The MOT challenge uses the CLEAR MOT metrics (Bernardin & Stiefelhagen, 2008), which quantitatively evaluate different performance aspects of object detection, tracking and segmentation. To compute these metrics, predictions have to be matched to ground truth. Unlike Bernardin & Stiefelhagen (2008) and Milan et al. 
(2016), we use binary segmentation masks for this matching instead of bounding boxes, which helps us better understand the models’ segmentation capabilities. We consider an intersection over union (IoU) greater than 0.5 as a match (Voigtlaender et al., 2019). The error metrics used are the fraction of Misses (Miss), ID switches (ID S.) and False Positives (FPs) relative to the number of ground truth masks. In addition, we report the Mean Squared Error (MSE) of the reconstructed image outputs summed over image channels and pixels.\nTo quantify the overall number of failures, we use the MOT Accuracy (MOTA), which measures the fraction of all failure cases compared to the total number of objects present in all frames. A model with 100% MOTA perfectly tracks all objects without any misses, ID switches or false positives. To quantify the segmentation quality, we define MOT Precision (MOTP) as the average IoU of segmentation masks of all matches. A model with 100% MOTP perfectly segments all tracked objects, but does not necessarily track all objects. Further, to quantify detection and tracking performance\n1Datasets are available at this https URL.\nindependent of false positives, we measure the Mostly Detected (MD) and Mostly Tracked (MT) metrics, the fraction of ground truth objects that have been detected and tracked for at least 80% of their lifespan, respectively. If an ID switch occurs, an object is considered detected but not tracked. For full details regarding the matching process and the evaluation metrics, refer to Appendix A." }, { "heading": "3.3 MODELS", "text": "We consider three classes of unsupervised object-centric representation learning models: (1) a spatial attention model with unconstrained latents, VIMON, which is our video extension of MONET (Burgess et al., 2019); (2) spatial transformer-based attention models, TBA (He et al., 2019) and SCALOR (Jiang et al., 2020); (3) a scene mixture model, OP3 (Veerapaneni et al., 2019). 
At a high level, these methods share a common structure, which is illustrated in Fig. 1a. They decompose an image into a fixed number of slots (Burgess et al., 2019), each of which contains an embedding z_{t,k} and a mask m_{t,k} of (ideally) a single object. These slots are then combined in a decoding step to reconstruct the image. Below, we briefly describe each method. Appendix C provides a detailed explanation in a unified mathematical framework.

Video MONet (VIMON) is our video extension of MONET (Burgess et al., 2019). MONET recurrently decomposes a static scene into slots, using an attention network to sequentially extract attention masks m_k ∈ [0, 1]^{H×W} of individual objects k. A Variational Autoencoder (VAE) (Kingma & Welling, 2014) encodes each slot into a latent representation z_k ∈ R^L of the corresponding object. We use MONET as a simple frame-by-frame baseline for detection and segmentation that does not employ temporal information. VIMON accumulates evidence about the objects over time to maintain a consistent object-slot assignment throughout the video. This is achieved by (1) seeding the attention network with the predicted mask m̂_{t,k} ∈ [0, 1]^{H×W} from the previous time step and (2) introducing a gated recurrent unit (GRU) (Cho et al., 2014), which aggregates information over time for each slot separately, enabling it to encode motion information. For full details on MONET and VIMON, as well as ablations to provide context for the design decisions, refer to Appendix C.1, C.2 and E.3.

Tracking-by-Animation (TBA) (He et al., 2019) is a spatial transformer-based attention model. Frames are encoded by a convolutional feature extractor f before being passed to a recurrent block g called Reprioritized Attentive Tracking (RAT). RAT re-weights slot input features based on their cosine similarity with the slots from the previous time step and outputs latent representations for all K slots in parallel. 
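As an illustration of this re-weighting idea (a simplified sketch with hypothetical shapes and function names, not TBA's exact RAT implementation), down-weighting slot input features by their cosine similarity to the previous slot states might look like:

```python
import numpy as np

def reweight_slots(inputs, prev_slots, eps=1e-8):
    """Re-weight per-slot input features (K x D) by cosine similarity to the
    previous time step's slot states (illustrative sketch only)."""
    def unit(x):
        # normalize each row to unit length; eps guards against division by zero
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

    # cosine similarity between each slot's input features and its previous state
    sim = np.sum(unit(inputs) * unit(prev_slots), axis=-1)  # shape (K,)
    weights = np.clip(sim, 0.0, 1.0)                        # keep weights in [0, 1]
    return inputs * weights[:, None]                        # suppress dissimilar slots
```

Slots whose inputs drift far from their previous state are thus attenuated before the recurrent update.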
Each slot latent is further decoded into a mid-level representation y_{t,k} consisting of pose and depth parameters, as well as object appearance and shape templates (see Fig. 1c). For rendering, a Spatial Transformer Network (STN) (Jaderberg et al., 2015) is used with an additional occlusion check based on the depth estimate. TBA is trained on frame reconstruction with an additional penalty for large object sizes to encourage compact bounding boxes. TBA can only process scenes with static backgrounds, as it preprocesses sequences using background subtraction (Bloisi & Iocchi, 2012). For full details on TBA, refer to Appendix C.3.

Object-centric Perception, Prediction, and Planning (OP3) (Veerapaneni et al., 2019) extends IODINE (Greff et al., 2019) to operate on videos. IODINE decomposes an image into objects and represents them independently, starting from an initial guess of the segmentation of the entire frame, which it subsequently refines iteratively using the refinement network f (Marino et al., 2018). In each refinement step m, the image is represented by K slots with latent representations z_{m,k}. OP3 applies IODINE to each frame x_t to extract latent representations z_{t,m,k}, which are subsequently processed by a dynamics network d (see Fig. 1e), which models both the individual dynamics of each slot k as well as the pairwise interactions between all combinations of slots, aggregating them into a prediction of the posterior parameters for the next time step t + 1 for each slot k. For full details on IODINE and OP3, refer to Appendix C.4 and C.5, respectively.

SCALable Object-oriented Representation (SCALOR) (Jiang et al., 2020) is a spatial transformer-based model that factors scenes into background and multiple foreground objects, which are tracked throughout the sequence. Frames are encoded using a convolutional LSTM f. In the proposal-rejection phase, the current frame t is divided into H × W grid cells. 
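To make the grid structure concrete, a minimal sketch of carving an encoded frame into one proposal vector per grid cell might look as follows (shapes and the helper name are our illustrative assumptions, not SCALOR's actual architecture):

```python
import numpy as np

def grid_proposals(feature_map, cell=4):
    """Divide an encoded frame (C x H x W) into a grid and flatten each cell
    into one per-cell proposal vector (illustrative sketch only)."""
    C, H, W = feature_map.shape
    gh, gw = H // cell, W // cell
    proposals = np.zeros((gh, gw, C * cell * cell))
    for h in range(gh):
        for w in range(gw):
            # each spatial cell yields one candidate object latent z_{t,h,w}
            patch = feature_map[:, h * cell:(h + 1) * cell, w * cell:(w + 1) * cell]
            proposals[h, w] = patch.reshape(-1)
    return proposals
```

In the actual model, each such per-cell vector is further mapped to existence, pose, depth and appearance parameters, so at most one object can be proposed per cell.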
For each grid cell, an object latent variable z_{t,h,w} is proposed, which is factored into existence, pose, depth and appearance parameters. Subsequently, proposed objects that significantly overlap with a propagated object are rejected. In the propagation phase, per-object GRUs are updated for all objects present in the scene. Additionally, SCALOR has a background module to encode the background and its dynamics. Frame reconstructions are rendered using a background decoder and foreground STNs for object masks and appearance. For full details on SCALOR, refer to Appendix C.6." }, { "heading": "4 RESULTS", "text": "We start with a summary of our overall results across the three datasets and four models (Table 2) before analyzing more specific challenging scenarios using variants of the VMDS dataset.

We first ask whether tracking could emerge automatically in an image-based model like MONET, which may produce consistent slot assignments through its learned object-slot assignment. This is not the case: MONET exhibits poor tracking performance (Table 2). While MONET correctly finds and segments objects, it does not assign them to consistent slots over time (Fig. E.2). In the following, we will thus focus on the video models: VIMON, TBA, OP3 and SCALOR.

SpMOT. All models perform tracking well on SpMOT, with the exception of one training run of TBA with poor results leading to a high standard deviation (cp. best TBA model: 89.8% MT; Table E.1). SCALOR outperforms the other models on the detection and tracking metrics MD and MT, while VIMON exhibits the highest MOTP, highlighting its better segmentation performance on SpMOT.

VOR. TBA is not applicable to VOR due to the dynamic background, which cannot be resolved using background subtraction. VIMON and OP3 show similarly good performance on detection (MD) and
OP3 accumulates a high number of false positives leading to a low MOTA due to the splitting of objects into multiple masks as well as randomly segmenting small parts of the background (Fig. E.4). In contrast, SCALOR has almost no false positives or ID switches, but accumulates a high number of misses leading to a poor MOTA. It often segments two objects as one that are occluding each other in the first frame, which is common in VOR due to the geometry of its scenes (Fig. F.11, last row).\nVMDS. OP3 outperforms the other models on VMDS, on which TBA performs poorly, followed by SCALOR, which again accumulates a high number of misses. We will analyze the models on VMDS qualitatively and quantitatively in more detail in the following.\nAccumulation of evidence over time. Recognition and tracking of objects should improve if models can exploit prior knowledge about the objects in the scene from previous video frames. To test whether the models exploit such knowledge, we evaluate their MOTA performance on VMDS after warm starting with up to 10 frames which are not included in evaluation (Fig. 2). Note that the models were trained on sequences of length 10, but are run for 20 frames in the case of a warm start of 10 frames. The performance of VIMON improves with longer warm starts, showing that the GRU accumulates evidence over time. TBA, in contrast, does not use temporal\ninformation beyond 2–3 frames, while SCALOR’s performance slightly drops after 3 frames. OP3 appears to most strongly rely on past information and is able to integrate information over longer time scales: its performance does not even saturate with a warm start of 10 frames. However, the effect for all models is rather small.\nChallenging scenarios for different models. The number of objects in the sequence matters for VIMON, TBA and SCALOR: more objects increase the number of failure cases (Fig. 3). 
In contrast, OP3 does not exhibit this pattern: it accumulates a higher number of false positives (FPs) in videos with fewer (only one or two) objects (Fig. E.1), as it tends to split objects into multiple slots if fewer objects than slots are present.

Occlusion leads to failure cases for all models (Fig. 4a–b). Partial occlusion can lead to splitting of objects into multiple slots (Fig. 4a). Objects that reappear after full occlusion are often missed when only a small part of them is visible (Fig. 4a). In particular, SCALOR tends to segment two objects as one when they overlap while entering the scene, leading to a high number of misses.

Color of the object is important. TBA often misses dark objects (Fig. 4b). In contrast, VIMON, OP3 and SCALOR struggle with scenes that feature objects of similar colors as well as objects that have similar colors to the background (Fig. 4c,e).

False positives are more prevalent for OP3 and TBA than for VIMON and SCALOR (Table 2). FPs of OP3 are due to objects being split into multiple masks (Fig. 4a) and random small parts of the background being individually segmented (Fig. 4e), while TBA tends to compose larger objects using multiple smaller, simpler components (Fig. 4d).

Challenge sets. Based on the challenging scenarios identified above, we design multiple ‘challenge sets’: videos featuring (1) heavy occlusion, (2) objects with the same colors, (3) only small objects and (4) only large objects (Fig. 5, top). For details, see Appendix B.1.1.

Occlusion reduces the performance of all models compared with the i.i.d. sampled VMDS test set, albeit to different degrees (Fig. 5; for absolute performance, see Table E.2). OP3 is more robust to occlusion than the other models.

Tracking objects with the same color is challenging for all models (Fig. 5). In particular, OP3 appears to rely on object color as a way to separate objects.

OP3, VIMON and SCALOR are not sensitive to object size (Fig. 5). 
They exhibit only slightly decreased performance on the large objects test set, presumably because large objects cause more occlusion (Fig. 5). TBA shows increased performance on small objects but performs poorly on the large objects set.\nOut-of-distribution test sets. Next, we assess generalization to out-of-distribution (o.o.d.) changes in object appearance that are not encountered during training. In the training set of VMDS, object color, size and orientation are constant throughout a video. To test o.o.d. generalization, we evaluate models trained on VMDS on three datasets that feature unseen object transformations (Fig. 6 and Table E.3): continuous changes in object color or size as well as continuous rotation around the object’s centroid while moving. For details, see Appendix B.1.2.\nContinuous changes in object size do not pose a serious problem to TBA, OP3 and SCALOR, while VIMON’s performance drops (Fig. 6). Surprisingly, continuous color changes of objects do not\nimpact the performance of any model. Tracking performance of VIMON drops significantly for rotated objects, while OP3 and SCALOR are affected less. TBA’s tracking performance is not as strongly influenced by object rotation (for absolute values, see Table E.3).\nStability of training and runtime. TBA and SCALOR train faster and require less memory than OP3 and VIMON (see Table E.4 for details). However, some training runs converge to suboptimal minima for TBA. Training OP3 is sensitive to the learning rate and unstable, eventually diverging in almost all experiments. Interestingly, it often reached its best performance prior to divergence. VIMON and TBA are less sensitive to hyper-parameter settings in our experiments. 
For a more detailed analysis of the runtime, see Appendix E.2.

5 DISCUSSION

Figure 6: Performance on out-of-distribution sets relative to VMDS test set (100%). (Panels: example frames at t = 1, 4, 7 and 10 for the Size, Color and Rotation test sets; bar plots of MOTA and MT per test set for ViMON, TBA, OP3 and SCALOR.)

Our experimental results provide insights into the inductive biases and failure cases of object-centric models that were not apparent from their original publications. Despite the positive results shown in each of the papers for the evaluated methods, a controlled, systematic analysis demonstrates that they do not convincingly succeed at tracking, which is fundamentally what object-centric video methods should enable.

TBA has a significantly lower MOTP than the other models on all datasets, suggesting that the simple rendering-based decoder using a fixed template might be less suitable to generate accurate segmentation masks (see also Fig. F.5 and Fig. F.4) compared to the VAE-based approaches of VIMON, OP3 and SCALOR.

Handling occlusion of objects during the video is a key capability that object-centric representations should have. Qualitatively and quantitatively, OP3 is more robust to occlusion than the other models, suggesting that its dynamics network, which models interactions between objects, is currently most successful at modeling occluded objects. Surprisingly, TBA and SCALOR, which explicitly encode depth, do not handle occlusion more gracefully than VIMON, whose much simpler architecture has no explicit way of dealing with depth. 
Moving forward, occlusion handling is a key component that object-centric video models need to master. It can be addressed either by equipping the model with a potent interaction module that takes pairwise interactions between objects (including occlusion) into account, similar to OP3’s dynamics model, or by ensuring that the depth reasoning of the models works as intended, which may be preferable, as explained below.

All models struggle with detecting objects that have a similar color to the background (for TBA: dark objects, since the background is removed and set to black in a pre-processing step). Color is a reliable cue to identify objects in these datasets. However, the auto-encoding objective incurs little extra loss for missing objects with a similar color to the background and, thus, the models appear not to learn to properly reconstruct them. In order to scale to data with more visual complexity, one might want to replace the pixel-wise reconstruction with, for instance, a loss defined in feature space in order to focus more on reconstructing semantic content rather than high-frequency texture, as is done when using perceptual loss functions (Gatys et al., 2015; Hou et al., 2017) or by using contrastive learning (Kipf et al., 2020). Furthermore, the models – particularly so OP3 – struggle with separating objects of similar colors from each other. This result hints at a mismatch between the intuitions motivating these models and what the models actually learn: it should be more efficient in terms of the complexity of the latent representation to decompose two objects – even of similar colors – into two masks with simple shapes, rather than encoding the more complicated shape of two objects simultaneously in one slot. However, since none of the models handle occlusion with amodal segmentation masks (i.e. including the occluded portion of the object) successfully, they learn to encode overly complex (modal) mask shapes. 
As a consequence, they tend to merge similarly colored objects into one slot. This result suggests that resolving the issues surrounding the depth reasoning in combination with amodal segmentation masks would enable much more compact latents and could also resolve the merging of similarly colored objects.

A major difference between models is the spatial-transformer-based formulation of TBA and SCALOR, compared to VIMON and OP3, which operate on image-sized masks. The parallel processing of objects and the processing of smaller bounding boxes make training TBA and SCALOR significantly faster and more memory efficient, enabling them to scale to a larger number of objects. On the downside, the spatial transformer introduces its own complications. TBA depends strongly on its prior on object size and performs well only when this prior fits the data well and the data contains little variation in object sizes, as in SpMOT (Table 2). However, it is not able to handle VMDS and its larger variation in object sizes and shapes. SCALOR performs tracking well in scenes where objects are clearly separated, but struggles to separate objects that partially occlude each other when entering the scene. This difficulty is caused by its discovery mechanism, which can propose at most one bounding box per grid cell, leading to a high number of misses on datasets that feature significant occlusion (VOR and VMDS). Unfortunately, simply increasing the number of proposals does not provide a simple solution, as SCALOR’s performance is sensitive to properly tweaking the number of proposals.

Choosing a class of models therefore depends on the dataset one wants to apply it to as well as on the computational resources at one’s disposal. Datasets that feature a high number of objects (>10) that are well separated from each other make a method like SCALOR, which can process objects in parallel, advisable. 
On datasets with a lower number of objects per scene that feature heavy occlusion, methods like OP3 and ViMON will likely achieve better results, but require a high computational budget for training.

In conclusion, our analysis shows that none of the models solve the basic challenges of tracking even for relatively simple synthetic datasets. Future work should focus on developing robust mechanisms for reliably handling depth and occlusion, additionally combining the transformer-based efficiency of TBA and SCALOR with the stable training of VIMON and the interaction model of OP3. The key open challenges for scaling these models to natural videos include their computational inefficiency, complex training dynamics, as well as over-dependence on simple appearance cues like color." }, { "heading": "SUPPLEMENTARY MATERIAL FOR: BENCHMARKING UNSUPERVISED OBJECT REPRESENTATIONS FOR VIDEO SEQUENCES", "text": "In this supplementary document, we first discuss the metrics used (Section A) and describe the data generation process (Section B). We then describe the methods MONET, VIMON, TBA, IODINE, OP3 and SCALOR (Section C). Section D contains information regarding the implementation details and training protocols. Finally, we provide additional qualitative and quantitative experimental results in Section E." }, { "heading": "A EVALUATION PROTOCOL DETAILS", "text": "We quantitatively evaluate all models on three datasets using the standard CLEAR MOT metrics (Bernardin & Stiefelhagen, 2008). Our evaluation protocol is adapted from the multi-object tracking (MOT) challenge (Milan et al., 2016), a standard computer vision benchmark for supervised object tracking. In particular, we focus on the metrics provided by the py-motmetrics package2." }, { "heading": "A.1 MAPPING", "text": "In each frame, object predictions of each model in the form of binary segmentation masks are mapped to the ground truth object segmentation masks. 
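For illustration, the mask-based mapping described in this appendix might be sketched as follows (a simplified greedy variant with helper names of our choosing; the actual evaluation relies on py-motmetrics, which additionally tracks ID switches across frames):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union of two binary masks (H x W boolean arrays)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def match_frame(gt_masks, pred_masks, thresh=0.5):
    """Greedily match predictions to ground truth masks at IoU > thresh.

    Returns (matches, misses, false_positives), where matches is a list of
    (gt_idx, pred_idx, iou) tuples.  A simplified stand-in for the full
    assignment procedure used by py-motmetrics.
    """
    pairs = [(mask_iou(g, p), gi, pi)
             for gi, g in enumerate(gt_masks)
             for pi, p in enumerate(pred_masks)]
    matches, used_gt, used_pred = [], set(), set()
    for iou, gi, pi in sorted(pairs, reverse=True):
        # a valid correspondence must exceed the IoU threshold of 0.5,
        # and each mask may be assigned at most once
        if iou <= thresh or gi in used_gt or pi in used_pred:
            continue
        matches.append((gi, pi, iou))
        used_gt.add(gi)
        used_pred.add(pi)
    misses = len(gt_masks) - len(matches)
    false_positives = len(pred_masks) - len(matches)
    return matches, misses, false_positives
```

Unmatched predictions count as false positives and unmatched ground truth objects as misses, from which the CLEAR MOT summary metrics are then accumulated over all frames.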
We require that each pixel is uniquely assigned to at most one object in the ground truth and the predictions, respectively. Matching is based on the intersection over union (IoU) between the predictions and the ground truth masks (Voigtlaender et al., 2019). A valid correspondence between prediction and object has to exceed a threshold in IoU of 0.5. Predictions that are not mapped to any ground truth mask are classified as false positives (FPs). Ground truth objects that are not matched to any prediction are classified as misses. Following Bernardin & Stiefelhagen (2008), ground truth objects that are mapped to two different hypothesis IDs in subsequent frames are classified as ID switches for that frame." }, { "heading": "A.2 MOT METRICS", "text": "MOT Accuracy (MOTA) measures the fraction of all failure cases, i.e. false positives (FPs), misses and ID switches, compared to the total number of objects present in all frames. MOT Precision (MOTP) measures the total accuracy in position for matched object-hypothesis pairs, relative to the total number of matches made. We use percentage Intersection over Union (IoU) of segmentation masks as the accuracy in position for each match. Mostly Tracked (MT) is the ratio of ground truth objects that have been tracked for at least 80% of their lifespan (i.e. 80% of the frames in which they are visible). MT as implemented by py-motmetrics counts trajectories of objects as correctly tracked even if ID switches occur. We use a strictly more difficult definition of MT that counts trajectories with ID switches as correctly detected but not correctly tracked. Consequently, we add the Mostly Detected (MD) measure, which does not penalize ID switches. Match, Miss, ID Switches (ID S.)
and FPs are reported as the fraction of the number of occurrences divided by the total number of object occurrences.\nMOTA = 1 − (∑_{t=1}^{T} (M_t + FP_t + IDS_t)) / (∑_{t=1}^{T} O_t) (1)\nwhere M_t, FP_t, and IDS_t are the number of misses, false positives and ID switches, respectively, for time t, and O_t is the number of objects present in frame t. Note that MOTA can become negative, since the number of FPs is unbounded.\nMOTP = (∑_{t=1}^{T} ∑_{i=1}^{I} d_t^i) / (∑_{t=1}^{T} c_t) (2)\nwhere d_t^i is the total accuracy in position for the i-th matched object-hypothesis pair, measured in IoU between the respective segmentation masks, and c_t is the number of matches made in frame t.\n2https://pypi.org/project/motmetrics/\nNote that we exclude the background masks for VIMON and OP3 before evaluating tracking based on IoU. The Video Object Room (VOR) dataset can contain up to three background segments, namely the floor and up to two wall segments. In order to exclude all background slots regardless of whether the model segments the background as one or as multiple masks, we remove all masks before the tracking evaluation that have an IoU of more than 0.2 with one of the ground truth background masks; we empirically tested that this heuristic is successful in removing background masks regardless of whether the model segments it as one or as three separate ones." }, { "heading": "B DATASET GENERATION DETAILS", "text": "B.1 VIDEO MULTI-DSPRITES (VMDS)\nThe Multi-DSprites Video dataset consists of 10-frame video sequences of 64×64 RGB images with multiple moving sprites per video. In order to test temporal aggregation properties of the models, the test set contains 20-frame-long sequences. Each video contains one to four sprites following the dataset proposed in (Burgess et al., 2019) that move independently of each other and might partially or fully occlude one another. The sprites are sampled uniformly from the dSprites dataset (Matthey et al., 2017) and colored with a random RGB color. 
The background is uniformly colored with a random RGB color. Random trajectories are sampled per object by drawing x and y coordinates from a Gaussian process with squared exponential covariance kernel cov[x_s, x_t] = exp[−(s − t)² / (2τ²)] and time constant τ = 10 frames, and then shifted by an initial (x, y)-position of the sprite centroid, which is uniformly sampled from [10, 54] to ensure that the object is within the image boundaries. Trajectories that leave these boundaries are rejected. In occlusion scenarios, larger objects are always in front of smaller objects to disambiguate prediction of occlusion. The training set consists of 10,000 examples, whereas the validation set as well as the test set contain 1,000 examples each. Additionally, we generated four challenge sets and three out-of-distribution test sets for VMDS that contain specifically challenging scenarios. Each test set consists of 1,000 videos of length 10 frames, which we describe in the following.\nB.1.1 VMDS CHALLENGE SETS\nOcclusion test set. In each video, one or more objects are heavily occluded and thus often are not visible at all for a few frames. This is ensured by sampling object trajectories that cross paths, i.e., in at least one video frame, two objects are centered on the same pixel. The time step and spatial position of occlusion are sampled randomly. Object trajectories are sampled independently as described above and then shifted such that they are at the sampled position of occlusion at time t. Videos contain two to four sprites (Fig. 5), since at least two objects are necessary for occlusion.\nSmall Objects. Videos contain one to four sprites with all sprites being of the smallest size present in the original dSprites (Matthey et al., 2017) dataset (Fig. 5). Other than that, it follows the generation process of the regular training and test set.\nLarge Objects. 
Videos contain one to four sprites with all sprites being of the largest size present in the original dSprites (Matthey et al., 2017) dataset (Fig. 5). Other than that, it follows the generation process of the regular training and test set.\nSame Color. Videos contain two to four sprites which are identically colored with a randomly chosen color. Other than that, it follows the generation process of the regular training and test set (Fig. 5).\nB.1.2 VMDS OUT-OF-DISTRIBUTION TEST SETS\nRotation test set. Sprites rotate around their centroid while moving. The amount of rotation between two video frames is uniformly sampled between 5 and 40 degrees, and is constant for each object over the course of the video. The direction of rotation is chosen randomly. Rotation is not included as a transformation in the training set (Fig. 6).\nColor change test set. Sprites change their color gradually during the course of the video. The initial hue of the color is chosen randomly, as are the direction and amount of change between two frames, which stay the same for each object over the course of the video. Saturation and value of the color are kept constant. Color changes are not part of the training set (Fig. 6).\nSize change test set. Sprites change their size gradually during the course of the video. The original dSprites dataset (Matthey et al., 2017) contains six different sizes per object. For each object, its size in the first frame is sampled as either the smallest or the largest, together with a random point in time at which it starts changing its size. From this point in time on, it becomes larger or smaller, respectively, increasing or decreasing each frame to the next larger or smaller size present in the original dSprites dataset, until the largest or smallest size is reached. Size changes are not part of the training set (Fig. 6)."
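The Gaussian-process trajectory sampling used for VMDS (Section B.1) can be sketched as follows. The GP amplitude, the anchoring of the shift at the first frame, and the use of the full 64×64 image bounds for rejection are assumptions not fixed by the text.

```python
import numpy as np

def sample_trajectory(n_frames=10, tau=10.0, rng=None):
    """Sample one sprite trajectory from a Gaussian process with a
    squared-exponential kernel cov[x_s, x_t] = exp(-(s - t)^2 / (2 tau^2)),
    shifted so that the first frame sits at a start position drawn
    uniformly from [10, 54] (assumption: unit GP amplitude)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_frames)
    cov = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * tau ** 2))
    cov += 1e-8 * np.eye(n_frames)  # jitter for numerical stability
    while True:  # rejection sampling of out-of-bounds trajectories
        xy = rng.multivariate_normal(np.zeros(n_frames), cov, size=2)
        start = rng.uniform(10, 54, size=(2, 1))
        xy = xy - xy[:, :1] + start  # shift by the initial centroid position
        if (xy >= 0).all() and (xy <= 63).all():
            return xy  # shape (2, n_frames): x and y centroid coordinates
```

With τ = 10 frames and 10-frame clips, consecutive positions are highly correlated, giving the smooth motion visible in Fig. 5.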
}, { "heading": "B.2 SPRITES-MOT (SPMOT)", "text": "Sprites-MOT, originally introduced by (He et al., 2019), consists of video sequences of length 20 frames. Each frame is a 128×128 RGB image. It features multiple sprites moving linearly on a black background. Each sprite can have one of four shapes and one of six colors. For more information, refer to the original paper (He et al., 2019). We generate a training set consisting of 9,600 examples, a validation set of 384 examples and a test set of 1,000 examples using the author-provided public codebase3. However, instead of using the default setting of 20 frames per sequence, we generate sequences of length 10 in order to facilitate comparison to the other datasets in our study, which have only 10 frames per sequence.\nFrames are downsampled to a resolution of 64×64 for training VIMON, OP3 and SCALOR.\nB.3 VIDEO OBJECTS ROOM (VOR)\nWe generate a video dataset based on the static Objects Room dataset (Greff et al., 2019), with sequences of length 10 frames each at a resolution of 128×128. This dataset is rendered with OpenGL using the gym-miniworld4 reinforcement learning environment. It features a 3D room with up to four static objects placed in one quadrant of the room, and a camera initialized at the diagonally opposite quadrant. The objects are either static cubes or spheres, assigned one of six colors and a random orientation on the ground plane of the room. The camera then follows one of five trajectories moving towards the objects, consisting of a small fixed-distance translation and an optional small fixed-angle rotation each time step. The wall colors and room lighting are randomized, but held constant throughout a sequence. The training set consists of 10,000 sequences, whereas the validation set and the test set contain 1,000 sequences each.\nFrames are downsampled to a resolution of 64×64 for training VIMON, OP3 and SCALOR."
}, { "heading": "C METHODS", "text": "In this section we describe the various methods in a common mathematical framework. For details about implementation and training, please refer to Section D." }, { "heading": "C.1 MONET", "text": "Multi-Object-Network (MONET) (Burgess et al., 2019) is an object-centric representation model designed for static images. It consists of a recurrent attention network that sequentially extracts attention masks of individual objects and a variational autoencoder (VAE) (Kingma & Welling, 2014) that reconstructs the image region given by the attention mask in each processing step.\nAttention Network: The attention network is a U-Net (Ronneberger et al., 2015) parameterized by ψ. At each processing step k, the attention network receives the full image x ∈ [0, 1]H×W×3 as input together with the scope variable sk ∈ [0, 1]H×W . The scope sk keeps track of the regions of the image that haven’t been attended to in the previous processing steps and thus remain to be explained. The attention network outputs a soft attention mask mk ∈ [0, 1]H×W and the updated scope with the current mask subtracted:\n3https://github.com/zhen-he/tracking-by-animation 4https://github.com/maximecb/gym-miniworld\nmk = sk−1αψ(x, sk−1) (3) sk+1 = sk(1− αψ(x, sk)) (4)\nwhere αψ(x, sk) ∈ [0, 1]H×W is the output of the U-net and s0 = 1. The attention mask for the last slot is given by mK = sK−1 to ensure that the image is fully explained, i.e. ∑K k=1 mk = 1.\nVAE: The VAE consists of an encoder g : [0, 1]H×W×3 × [0, 1]H×W → RL×2 and a decoder h : RL → [0, 1]H×W×3 × [0, 1]H×W which are two neural networks parameterized by φ and θ, respectively. The VAE encoder receives as input the full image x and the attention mask mk and computes (µk, logσk), which parameterize the Gaussian latent posterior distribution qφ(zk|x,mk) = N (µk,σkI). Using the reparametrization trick (Kingma & Welling, 2014), zk ∈ RL is sampled from the latent posterior distribution. 
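The reparameterized sampling step can be sketched as follows (numpy for illustration; the models themselves are implemented in PyTorch):

```python
import numpy as np

def sample_latent(mu, log_sigma, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    making the sample z a differentiable function of the posterior
    parameters (mu, log_sigma)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps
```

Because the noise eps is drawn independently of the parameters, gradients of the loss can flow through mu and log_sigma during backpropagation.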
zk is decoded by the VAE decoder into a reconstruction of the image component x̂k ∈ [0, 1]H×W×3 and mask logits, which are used to compute the reconstruction of the mask m̂k ∈ [0, 1]H×W via a pixelwise softmax across slots. The reconstruction of the whole image is composed by summing over the K masked reconstructions of the VAE: x̂ = ∑K k=1 m̂k x̂k.\nLoss: MONET is trained end-to-end with the following loss function:\nL(φ; θ;ψ;x) = − log K∑ k=1 mkpθ(x|zk) + βDKL( K∏ k=1 qφ(zk|x,mk)‖p(z))\n+γ K∑ k=1 DKL(qψ(mk|x)‖pθ(mk|zk))\n(5)\nwhere pθ(x|zk) is the Gaussian likelihood of the VAE decoder and zk ∈ RL is the latent representation of slot k.\nThe first two loss terms are derived from the standard VAE objective, the Evidence Lower BOund (ELBO) (Kingma & Welling, 2014), i.e. the negative log-likelihood of the decoder and the Kullback–Leibler divergence between the unit Gaussian prior p(z) = N (0, I) and the latent posterior distribution qφ(zk|x,mk) factorized across slots. Notably, the decoder log-likelihood term pθ(x|zk) constrains only the reconstruction within the mask, since it is weighted by the mask mk. Additionally, as a third term, the Kullback–Leibler divergence of the attention mask distribution qψ(mk|x) with the VAE mask distribution pθ(m̂k|zk) is minimized, to encourage the VAE to learn a good reconstruction of the masks.\nC.2 VIDEO MONET\nWe propose an extension of MONET (Burgess et al., 2019), called Video MONet (VIMON), which accumulates evidence over time about the objects in the scene (Fig. C.1).\nVIMON processes a video recurrently by reconstructing one frame at a time and predicting the next frame of the video. The processing of each frame follows a logic similar to MONET with some notable differences. 
In the following, we use t to indicate the time step in the video and k to indicate the processing step within one video frame.\nAttention Network: The attention network of VIMON outputs an attention mask mt,k ∈ [0, 1]H×W in each step k conditioned on the full frame xt ∈ [0, 1]H×W×3, the scope st,k ∈ [0, 1]H×W and additionally the mask m̂t,k ∈ [0, 1]H×W that was predicted by the\nVAE in the previous time step, in order to provide it with information about which object it should attend to in this specific slot k.\nmt,k = st,k−1αψ(xt, st,k−1, m̂t,k) (6)\nVAE: The VAE of VIMON consists of an encoder g(xt,mt,k;φ) and a decoder h(zt,k; θ). In contrast to MONET, the encoder in VIMON is followed by a gated recurrent unit (GRU) (Cho et al., 2014) with a separate hidden state ht,k per slot k. Thus, the GRU aggregates information over time for each object separately. The GRU outputs (µt,k, logσt,k) which parameterize the Gaussian latent posterior distribution qφ(zt,k|xt,mt,k) where zt,k ∈ RL is the latent representation for slot k at time t:\nz′t,k = g(xt,mt,k;φ) (7)\n(µt,k, logσt,k),ht,k = f(GRU(z ′ t,k,ht−1,k))) (8)\nqφ(zt,k|xt,mt,k) = N (µt,k,σt,kI) ∀t, k (9)\nwhere g is the VAE encoder and f is a linear layer. The latent representation zt,k is sampled from the latent posterior distribution using the reparametrization trick (Kingma & Welling, 2014). Subsequently, zt,k is linearly transformed into ẑt+1,k via a learned transformation A ∈ RL×L: ẑt+1,k = Azt,k with ẑt+1,k being the predicted latent code for the next time step t + 1. Both zt,k and ẑt+1,k are decoded by the shared VAE decoder hθ into a reconstruction of the image x̂t,k ∈ [0, 1]H×W×3 and a reconstruction of the mask m̂t,k ∈ [0, 1]H×W as well as x̂t+1,k and m̂t+1,k, respectively.\nLoss: VIMON is trained in an unsupervised fashion with the following objective adapted from the MONET loss (Eq. (5)) for videos. 
To encourage the model to learn about object motion, we include a prediction objective in the form of a second decoder likelihood on the next-step prediction pθ(xt+1|ẑt+1,k) and an additional mask loss term, which encourages the predicted VAE mask distribution pθ(m̂t+1,k|ẑt+1,k) to be close to the attention mask distribution qψ(mt+1,k|xt+1) of the next time step for each slot k:\nL(φ; θ;ψ;x) = T∑ t=1 LnegLL + βLprior + γLmask\nLnegLL = −(log K∑ k=1 mt,kpθ(xt|zt,k) + log K∑ k=1 mt+1,kpθ(xt+1|ẑt+1,k))\nLprior = DKL( K∏ k=1 qφ(zt,k|xt,mt,k)‖p(z))\nLmask = K∑ k=1 DKL(qψ(mt,k|xt)‖pθ(mt,k|zt,k)) +DKL(qψ(mt+1,k|xt+1)‖pθ(mt+1,k|ẑt+1,k))" }, { "heading": "C.3 TRACKING BY ANIMATION", "text": "Tracking by Animation (TBA) (He et al., 2019) is a spatial transformer-based attention model which uses a simple 2D rendering pipeline as the decoder. Objects are assigned tracking templates and pose parameters by a tracker array, such that they can be reconstructed in parallel using a renderer based on affine spatial transformation (Fig. C.2). In contrast to VIMON, TBA uses explicit parameters to encode the position, size, aspect ratio and occlusion properties for each slot. Importantly, TBA is designed for scenes with static backgrounds, and preprocesses sequences using background subtraction (Bloisi & Iocchi, 2012) before they are input to the tracker array.\nTracker Array: TBA uses a tracker array to output a latent representation zt ∈ RL×K at time t using a feature extractor f(xt;ψ) and a recurrent ’state update’, where ct ∈ RM×N×C is a convolutional feature representation. The convolutional feature and latent representation have far fewer elements than xt, acting as a bottleneck:\nct = f(xt;ψ), (10) ht,k = RAT (ht−1,k, ct;π), (11) zt = g(ht;φ). 
(12)\nThough the state update could be implemented as any generic recurrent neural network block, such as an LSTM (Hochreiter & Schmidhuber, 1997) or GRU (Cho et al., 2014), TBA introduces a Reprioritized Attentive Tracking (RAT) block that uses attention to achieve explicit association of slots with similar features over time. Firstly, the previous tracker state ht−1,k is used to generate key variables kt,k and βt,k:\n{kt,k, β̂t,k} = Tht−1,k, (13)\nβt,k = 1 + ln(1 + exp(β̂t,k)), (14)\nwhere T is a learned linear transformation, kt,k ∈ RS is the addressing key, and β̂t,k ∈ R is an un-normalized version of a key strength variable βt,k∈(1,+∞). This key strength acts like a temperature parameter to modulate the feature re-weighting, which is described in the following. Each feature vector in ct, denoted by ct,m,n ∈RS , where m ∈ {1, 2, . . . ,M} and n ∈ {1, 2, . . . , N} are the convolutional feature dimensions, is first used to get attention weights:\nWt,k,m,n = exp(βt,kSim(kt,k, ct,m,n))∑\nm′,n′ exp(βt,kSim(kt,k, ct,m′,n′)) . (15)\nHere, Sim is the cosine similarity defined as Sim(p,q) = pqT/(‖p‖‖q‖), and Wt,k,m,n is an element of the attention weight Wt,k ∈ [0, 1]M×N , satisfying ∑ m,nWt,k,m,n = 1. Next, a read operation is defined as a weighted combination of all feature vectors of ct:\nrt,k = ∑ m,n Wt,k,m,n ct,m,n (16)\nwhere rt,k∈RS is the read vector, representing the associated input feature for slot k. Intuitively, for slots in which objects are present in the previous frame, the model can suppress the features in rt,k that are not similar to the features of that object, helping achieve better object-slot consistency. 
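The attentive read of Eqs. (15)–(16) can be sketched as follows (a numpy illustration with assumed shapes; the key strength beta acts as an inverse temperature on the cosine similarities):

```python
import numpy as np

def rat_read(key, beta, features):
    """Attentive read: cosine similarity between the addressing key and
    each convolutional feature vector, softmax sharpened by the key
    strength beta, then a weighted sum of the feature vectors.

    key:      (S,) addressing key k_{t,k}
    beta:     scalar key strength > 1
    features: (M, N, S) convolutional feature map c_t
    """
    f = features.reshape(-1, features.shape[-1])           # (M*N, S)
    sim = f @ key / (np.linalg.norm(f, axis=1)
                     * np.linalg.norm(key) + 1e-8)         # cosine similarity
    w = np.exp(beta * sim)
    w /= w.sum()                                           # attention weights W_{t,k}
    return w @ f                                           # read vector r_{t,k}
```

A large beta concentrates the weights on the best-matching feature vectors, while beta close to 1 keeps the read vector close to the mean feature.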
On the other hand, if there are slots which so far do not contain any object, the key strength parameter allows rt,k to remain similar to ct facilitating the discovery of new objects.\nThe tracker state ht,k of the RAT block is updated with an RNN parameterized by π, taking rt,k instead of ct as its input feature:\nht,k = RNN(ht−1,k, rt,k;π) (17)\nThe RAT block additionally allows for sequential prioritization of trackers, which in turn allows only a subset of trackers to update their state at a given time step, improving efficiency. For full details on the reprioritization and adaptive computation time elements of the RAT block, please refer to the original paper (He et al., 2019).\nMid-Level Representation: The key feature of TBA is that each latent vector zt,k is further decoded into a mid-level representation yt,k = {yct,k,ylt,k,y p t,k,Y s t,k,Y a t,k} corresponding to interpretable, explicit object properties, via a fully-connected neural network h(zt,k; θ) as follows:\nyt,k = h(zt,k; θ). (18)\nhθ is shared by all slots, improving parameter efficiency. The different components of the mid-level representation are:\n• Confidence yct,k∈ [0, 1]: Probability of existence of an object in that slot.\n• Layer ylt,k ∈ {0, 1}O: One-hot encoding of the discretized pseudo-depth of the object relative to other objects in the frame. Each image is considered to be composed of O object layers, where higher layer objects occlude lower layer objects and the background is the zeroth (lowest) layer. E.g., when O = 4, ylt,k = [0, 0, 1, 0] denotes the third layer. For simplicity and without loss of generality, we can also denote the same layer with its integer representation ylt,k = 3.\n• Pose ypt,k=[ŝxt,k, ŝ y t,k, t̂ x t,k, t̂ y t,k]∈ [−1, 1]4: Normalized object pose for calculating the scale\n[sxt,k, s y t,k] = [1+ η xŝxt,k, 1+ η y ŝyt,k] and the translation [t x t,k, t y t,k] = [ W 2 t̂ x t,k, H 2 t̂ y t,k], where\nηx, ηy > 0 are constants. 
• Shape Y^s_{t,k} ∈ {0, 1}^{U×V} and Appearance Y^a_{t,k} ∈ [0, 1]^{U×V×3}: Object template, with hyperparameters U and V typically set much smaller than the image dimensions H and W. Note that the shape is discrete (for details, see below) whereas the appearance is continuous.\nIn the output layer of h_θ, y^c_{t,k} and Y^a_{t,k} are generated by the sigmoid function, y^p_{t,k} is generated by the tanh function, and y^l_{t,k} as well as Y^s_{t,k} are sampled from the Categorical and Bernoulli distributions, respectively. As sampling is non-differentiable, the Straight-Through Gumbel-Softmax estimator (Jang et al., 2017) is used to reparameterize both distributions so that backpropagation can still be applied.\nRenderer: To obtain a frame reconstruction, the renderer scales and shifts Y^s_{t,k} and Y^a_{t,k} according to y^p_{t,k} via a Spatial Transformer Network (STN) (Jaderberg et al., 2015):\nm_{t,k} = STN(Y^s_{t,k}, y^p_{t,k}), (19)\nx̂_{t,k} = STN(Y^a_{t,k}, y^p_{t,k}), (20)\nwhere m_{t,k} ∈ {0, 1}^D and x̂_{t,k} ∈ [0, 1]^{D×3} are the spatially transformed shape and appearance, respectively. To obtain the final object masks m̂_{t,k}, an occlusion check is performed by initializing m̂_{t,k} = y^c_{t,k} m_{t,k}, then removing the elements of m̂_{t,k} for which there exists an object in a higher layer. That is, for k = 1, 2, . . . , K and ∀j ≠ k where y^l_{t,j} > y^l_{t,k}:\nm̂_{t,k} = (1 − m_{t,j}) m̂_{t,k}. (21)\nIn practice, the occlusion check is sped up by creating intermediate ‘layer masks’, partially parallelizing the operation. Please see the original paper for more details (He et al., 2019). 
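The occlusion check of Eq. (21) can be sketched as a direct loop (without the layer-mask parallelization used in practice):

```python
import numpy as np

def occlusion_check(masks, layers, confidences):
    """Occlusion check: each object's mask is suppressed wherever an
    object on a strictly higher layer has support.

    masks:       (K, H, W) binary spatially transformed shapes m_{t,k}
    layers:      (K,) integer pseudo-depths y^l_{t,k} (higher = in front)
    confidences: (K,) confidences y^c_{t,k}
    """
    out = confidences[:, None, None] * masks  # initialize m_hat = y^c * m
    K = masks.shape[0]
    for k in range(K):
        for j in range(K):
            if j != k and layers[j] > layers[k]:
                # Eq. (21): remove pixels covered by the occluder's raw mask
                out[k] = (1 - masks[j]) * out[k]
    return out
```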
The final reconstruction is obtained by summing over the K slots, x̂t = ∑K k=1 m̂t,k x̂t,k.\nLoss: Learning is driven by a pixel-level reconstruction objective, defined as:\nL(φ;ψ;π; θ;x) = T∑ t=1\n( MSE(x̂t,xt) + λ · 1\nK K∑ k=1 sxt,k s y t,k\n) , (22)\nwhere MSE refers to the mean squared error and the second term penalizes large scales [sxt,k, s y t,k] in order to make object bounding boxes more compact.\nC.4 IODINE\nThe Iterative Object Decomposition Inference NEtwork (IODINE) (Greff et al., 2019), similar to MONET (Burgess et al., 2019), learns to decompose a static scene into a multi-slot representation, in which each slot represents an object in the scene and the slots share the underlying format of the independent representations. In contrast to MONET, it does not recurrently segment the image using spatial attention, rather it starts from an initial guess of the segmentation of the whole image and iteratively refines it. Thus, the inference component of both models differ, while the generative component is the same.\nIterative Inference. As with MONET, IODINE models the latent posterior q(zk|x) per slot k as a Gaussian parameterized by (µm,k,σm,k) ∈ RL×2. To obtain latent representations for independent regions of the input image, IODINE starts from initial learned posterior parameters (µ1,k,σ1,k) and iteratively refines them using the refinement network fφ for a fixed number of refinement steps M . fφ consists of a convolutional neural network (CNN) in combination with an LSTM cell (Hochreiter & Schmidhuber, 1997) parameterized by φ. In\neach processing step, fφ receives as input the image x ∈ [0, 1]H×W×3, a sample from the current posterior estimate zm,k ∈ RL and various auxiliary inputs ak, which are listed in the original paper (Greff et al., 2019). The posterior parameters are concatenated with the output of the convolutional part of the refinement network and together form the input to the refinement LSTM. 
The posterior parameters are additively updated in each step m in parallel for all K slots:\n(µ_{m+1,k}, σ_{m+1,k}) = (µ_{m,k}, σ_{m,k}) + f_φ(z_{m,k}, x, a_k) (23)\nDecoder. In each refinement step m, the image is represented by K latent representations z_{m,k}. Similar to MONET, each z_{m,k} is independently decoded into a reconstruction of the image x̂_{m,k} ∈ [0, 1]^{H×W×3} and mask logits m̃_{m,k}, which are subsequently normalized by applying the softmax across slots to obtain the masks m_{m,k} ∈ [0, 1]^{H×W}. The reconstruction of the whole image at each refinement step m is composed by summing over the K masked reconstructions of the decoder: x̂ = ∑_{k=1}^{K} m_{m,k} x̂_{m,k}.\nTraining. IODINE is trained by minimizing the following loss function, which consists of the Evidence Lower BOund (ELBO) (Kingma & Welling, 2014) unrolled through the M refinement iterations:\nL(θ, φ, (µ_{1,k}, σ_{1,k}); x) = ∑_{m=1}^{M} (m/M) [− log ∑_{k=1}^{K} m_{m,k} p_θ(x|z_{m,k}) + D_{KL}(∏_{k=1}^{K} q_φ(z_{m,k}|x) ‖ p(z))] (24)\nwhere p_θ(x|z_{m,k}) is the decoder log-likelihood weighted by the mask m_{m,k} and D_{KL} is the Kullback–Leibler divergence between the unit Gaussian prior p(z) = N(0, I) and the latent posterior distribution q(z_{m,k}|x) factorized across slots." }, { "heading": "C.5 OBJECT-CENTRIC PERCEPTION, PREDICTION, AND PLANNING (OP3)", "text": "Object-centric Perception, Prediction, and Planning (OP3) (Veerapaneni et al., 2019) extends IODINE to work on videos and in a reinforcement learning (RL) setting. It uses the above-described IODINE as an observation model to decompose visual observations into objects and represent them independently. These representations are subsequently processed by a dynamics model that models the individual dynamics of the objects, the pairwise interaction between the objects, as well as the action's effect on the object's dynamics, predicting the next frame in latent space (Fig. C.3). 
By modeling the action's influence on individual objects, OP3 can be applied to RL tasks.\nOP3 performs M refinement steps after each dynamics step.\nRefinement network. The refinement steps proceed as in the description for IODINE in Section C.4. The input image x_t ∈ [0, 1]^{H×W×3}, which is the frame from a video at time t, is processed by the refinement network f_φ conditioned on a sample from the current posterior estimate z_{t,m,k} ∈ R^L. The refinement network outputs an update of the posterior parameters (µ_{t,m,k}, σ_{t,m,k}) (see Eq. (23)). The posterior parameters (µ_{1,1,k}, σ_{1,1,k}) are randomly initialized.\nDynamics model. After refinement, samples from the current posterior estimate z_{t,M,k} for each slot k are used as input to the dynamics network. The dynamics model d_ψ consists of a series of linear layers and nonlinearities parameterized by ψ. It models the individual dynamics of the objects per slot k and the pairwise interactions between all combinations of objects, and aggregates them into a prediction of the posterior parameters for the next time step t + 1 for each object k. The full dynamics model additionally contains an action component that models the influence of a given action on each object, which we do not use in our tracking setting. The predicted posterior parameters are then used in the next time step as initial parameters for the refinement network.\n(µ_{t,1,k}, σ_{t,1,k}) = d_ψ(z_{t−1,M,k}, z_{t−1,M,[≠k]}) (25)\nTraining. OP3 is trained end-to-end with the ELBO used at every refinement and dynamics step, with the loss L(θ, φ; x) given by:\n∑_{t=1}^{T} (1/T) ∑_{m=1}^{M+1} (min(m, M)/M) (− log ∑_{k=1}^{K} m_{t,m,k} p_θ(x_t|z_{t,m,k}) + D_{KL}(∏_{k=1}^{K} q_φ(z_{t,m,k}|x_t) ‖ q(z_{t,1,k}|x_t))) (26)\nwhere for time step 1, q(z_{1,1,k}|x_1) = N(0, I)." }, { "heading": "C.6 SCALABLE OBJECT-ORIENTED REPRESENTATION (SCALOR)", "text": "SCALable Object-oriented Representation (SCALOR) (Jiang et al., 2020) is a spatial transformer-based model that extends SQAIR (Kosiorek et al., 2018) to scale to cluttered scenes. 
Similar to TBA, it factors the latent representations into pose, depth and appearance per object and uses spatial transformers (Jaderberg et al., 2015) to render objects in parallel. In contrast to TBA, it can handle dynamic backgrounds by integrating a background RNN that models background transitions.\nProposal-Rejection Module: SCALOR uses a proposal-rejection module g to discover new objects. All frames up to the current time step x_{1:t} are first encoded using a convolutional LSTM f. The resulting features are then aggregated with an encoding of propagated object masks and divided into H × W grid cells.\nc_t^{img} = f(x_{1:t}; ψ) (27)\nc_t^{mask} = MaskEncoder(M_t^P) (28)\nc_t^{agg} = Concat([c_t^{img}, c_t^{mask}]) (29)\nPer grid cell, a latent variable z_{t,h,w} is proposed. Proposal generation is done in parallel. Each z_{t,h,w} consists of existence, pose, depth and appearance parameters (z_{t,h,w}^{pres}, z_{t,h,w}^{pose}, z_{t,h,w}^{depth}, z_{t,h,w}^{what}).\nz_{t,h,w}^{pres} ∼ Bern(·|g_1(c_t^{agg})) (30)\nz_{t,h,w}^{depth} ∼ N(·|g_2(c_t^{agg})) (31)\nz_{t,h,w}^{pose} ∼ N(·|g_3(c_t^{agg})) (32)\nwhere g_1, g_2 and g_3 are convolutional layers.\nThe appearance parameters z_{t,h,w}^{what} are obtained by first taking a glimpse from frame x_t of the area specified by z_{t,h,w}^{pose} via a Spatial Transformer Network (STN) (Jaderberg et al., 2015) and subsequently extracting features from it via a convolutional neural network:\nc_{t,h,w}^{att} = STN(x_t, z_{t,h,w}^{pose}) (33)\nz_{t,h,w}^{what} ∼ N(·|GlimpseEnc(c_{t,h,w}^{att})) (34)\no_{t,h,w}, m_{t,h,w} = STN^{−1}(GlimpseDec(z_{t,h,w}^{what}), z_{t,h,w}^{pose}) (35)\nwhere o_{t,h,w} is the object RGB glimpse and m_{t,h,w} is the object mask glimpse.\nIn the rejection phase, objects that overlap more than a threshold τ in pixel space with a propagated object from the previous time step are rejected.\nPropagation Module: During propagation, for each object k from the previous time step t − 1, a feature attention map a_{t,k} is extracted from the encoded frame features c_t^{img}, centered on the position of the object in the previous time step 
and used to update the hidden state h_{t,k} of the tracker RNN for object k.\na_{t,k} = att(STN(c_t^{img}, z_{t−1,k}^{pose})) (36)\nh_{t,k} = GRU([a_{t,k}, z_{t−1,k}], h_{t−1,k}) (37)\nz_{t,k} = update(a_{t,k}, h_{t,k}, z_{t−1,k}) (38)\nwhere STN is a spatial transformer module (Jaderberg et al., 2015). If z_{t,k}^{pres} = 1, the latent representation z_{t,k} of the respective object k will be propagated to the next time step.\nBackground: The background of each frame x_t is encoded using a convolutional neural network conditioned on the masks M_t of the objects present at time step t and decoded using a convolutional neural network.\n(µ^{bg}, σ^{bg}) = BgEncoder(x_t, (1 − M_t)) (39)\nz_t^{bg} ∼ N(µ^{bg}, σ^{bg}) (40)\nx̂_t^{bg} = BgDecoder(z_t^{bg}) (41)\nRendering: To obtain frame reconstructions x̂_t, foreground object appearances and masks are scaled and shifted via a Spatial Transformer Network (STN):\nx̂_{t,k}^{fg} = STN^{−1}(o_{t,k}, z_{t,k}^{pose}) (42)\nγ_{t,k} = STN^{−1}(m_{t,k} · z_{t,k}^{pres} σ(−z_{t,k}^{depth}), z_{t,k}^{pose}) (43)\nx̂_t^{fg} = ∑_K x̂_{t,k}^{fg} γ_{t,k} (44)\nSubsequently, foreground objects and background reconstruction are combined as follows to obtain the final reconstruction:\nx̂_t = x̂_t^{fg} + (1 − M_t) x̂_t^{bg} (45)\nTraining: SCALOR is trained on frame reconstruction using the evidence lower bound (ELBO):\n∑_{t=1}^{T} [− log p_θ(x_t|z_t) + D_{KL}(q_φ(z_t|z_{<t}, x_{≤t}) ‖ q(z_t|z_{<t}))] (46)" }, { "heading": "D MODEL IMPLEMENTATION DETAILS", "text": "D.1 VIDEO MONET\nVAE: Following (Burgess et al., 2019), the VAE encoder is a CNN with 3x3 kernels, stride 2, and ReLU activations (Table D.1). It receives the input image and mask from the attention network as input and outputs (µ, log σ) of a 16-dimensional Gaussian latent posterior. The GRU has 128 latent dimensions and one hidden state per slot, followed by a linear layer with 32 output dimensions. The VAE decoder is a Broadcast decoder as published by (Watters et al., 2019b) with no padding, 3x3 kernels, stride 1 and ReLU activations (Table D.2). 
The output distribution is an independent pixel-wise Gaussian with a fixed scale of σ = 0.09 for the background slot and σ = 0.11 for the foreground slots.\nAttention Network: The attention network is a U-Net (Ronneberger et al., 2015) and follows the architecture proposed by (Burgess et al., 2019). The down- and up-sampling components each consist of five blocks with 3x3 kernels, 32 channels, instance normalisation, ReLU activations and down- or up-sampling by a factor of two. The convolutional layers are bias-free and use stride 1 and padding 1. A three-layer MLP with hidden layers of size 128 connects the down- and the up-sampling parts of the U-Net.\nTraining: MONET and VIMON are implemented in PyTorch (Paszke et al., 2019) and trained with the Adam optimizer (Kingma & Ba, 2015) with a batch size of 64 for MONET and 32 for VIMON, using an initial learning rate of 0.0001. Reconstruction performance is evaluated after each epoch on the validation set and the learning rate is decreased by a factor of 3 after the validation loss has not improved for 25 consecutive epochs for MONET and 100 epochs for VIMON, respectively. MONET and VIMON are trained for 600 and 1000 epochs, respectively. The checkpoint with the lowest reconstruction error is selected for the final MOT evaluation. MONET is trained with β = 0.5 and γ = 1 and VIMON is trained with β = 1 and γ = 2. K = 5 for SpMOT, K = 6 for VMDS and K = 8 for VOR. Due to the increased slot number for VOR, the batch size for VIMON had to be decreased to 24 to fit into the GPU memory. Accordingly, the initial learning rate is set to 0.000075 for VIMON on VOR. We initialize the attention network and the VAE in VIMON with the pre-trained weights from MONET to facilitate learning and speed up the training. 
Note that for all evaluations, the reconstructed masks m̂ from the VAE were used.\nSprites-MOT Initialization: When training MONET and Video MONET on Sprites-MOT from scratch, MONET struggles to learn the extreme color values of the objects that Sprites-MOT features. Instead it completely focuses on learning the shapes. To circumvent that, we initialized the weights of the models with MONET weights that were trained for 100 epochs on Multi-dSprites." }, { "heading": "D.2 TRACKING BY ANIMATION", "text": "Preprocessing: TBA expects its input frames to contain only foreground objects. In (He et al., 2019), the authors use Independent Multimodal Background Subtraction (IMBS) (Bloisi & Iocchi, 2012) to remove the background from datasets consisting of natural videos with static backgrounds. Background subtraction algorithms maintain a spatio-temporal window around each pixel in the sequence, and remove the dominant mode based on a histogram of color values. Since the default implementation of IMBS has several hand-tuned thresholds corresponding to natural videos (e.g., for shadow suppression), it cannot be directly applied to synthetic datasets like VMDS without significant hyper-parameter tuning. We instead re-generate all of the VMDS datasets with identical objects and motion but a black background for our experiments with TBA, to mimic a well-tuned background subtraction algorithm.\nArchitecture: For SpMOT, we follow the same architecture as in (He et al., 2019), while we increase the number of slots from K = 4 to K = 5 and number of layers from O = 3 to O = 4 for VMDS. Since TBA does not model the background, this makes the number of foreground slots equal to the other models in our study.\nFurther, we increase the size prior parameters U × V used for the shape and appearance templates from 21× 21 which is used for SpMOT, to 64× 64 for VMDS, which we empirically found gave the best validation loss among 48× 48, 56× 56, 64× 64 and 72× 72. 
All other architectural choices are kept fixed for both datasets, and follow (He et al., 2019). Note that due to this, we trained the TBA models at its default resolution of 128×128 unlike the 64×64 resolution used by MONET and OP3. Training and Evaluation: We train for 1000 epochs using the same training schedule as in (He et al., 2019). The checkpoint with the lowest validation loss is selected for the final MOT evaluation. Further, we observed that the discrete nature of the shape code used in TBA’s mid-level representation leads to salt-and-pepper noise in the reconstructed masks. We therefore use a 2× 2 minimum pooling operation on the final output masks to remove isolated, single pixel foreground predictions and generate 64× 64 resolution outputs, similar to MONET and OP3 before evaluation. Deviation of SpMOT results compared to original publication: Our results were generated with 100k training frames, while the original TBA paper (He et al., 2019) uses 2M training frames for the simple SpMOT task. Further, we report the mean of three training runs, while the original paper reports one run (presumably the best). Our best run achieves MOTA of 90.5 (Table E.1). Third, we evaluate using intersection over union (IoU) of segmentation masks instead of bounding boxes." }, { "heading": "D.3 OP3", "text": "Training: The OP3 loss is a weighted sum over all refinement and dynamics steps (Eq. (26)). For our evaluation on multi-object tracking, we weight all time steps equally. In contrast to the original training loss, in which the weight value is linearly increased indiscriminately, thus weighting later predictions more highly, we perform the linear increase only for the refinement steps between dynamics steps, thus weighting all predictions equally.\nOP3, as published by (Veerapaneni et al., 2019), uses curriculum learning. For the first 100 epochs, M refinement steps are taken, followed by a single dynamics step, with a final refinement step afterwards. 
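The 2×2 minimum-pooling step described above for cleaning TBA's output masks can be sketched in pure Python; the function name and the list-of-lists binary-mask representation are illustrative assumptions.

```python
def min_pool_2x2(mask):
    """2x2 min-pooling with stride 2 on a binary mask (list of lists).
    An isolated foreground pixel (value 1) inside a 2x2 window that also
    contains background (value 0) is suppressed, which removes
    salt-and-pepper noise while halving the resolution."""
    h, w = len(mask), len(mask[0])
    return [
        [min(mask[i][j], mask[i][j + 1], mask[i + 1][j], mask[i + 1][j + 1])
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]


# A 4x4 mask with one isolated foreground pixel (top-left) and one solid
# 2x2 foreground block (bottom-right): the isolated pixel is removed,
# the solid block survives.
mask = [
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
pooled = min_pool_2x2(mask)
```

In a tensor framework the same operation is typically written as a negated max-pool, e.g. `-max_pool2d(-mask, 2)`.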
Starting after 100 epochs, the number of dynamics steps is incremented by 1 every 10 epochs, until five dynamics steps are reached. Thus, only 5 frames of the sequence are used during training at maximum.\nWe chose to use an alternating schedule for training, where after each dynamics step, M = 2 refinement steps are taken, and this is continued for the entire sequence. Thus, the entire available sequence is used, and error is not propagated needlessly, since the model is enabled to refine previous predictions on the reconstruction before predicting again. Note that this is the schedule OP3 uses by default at test time, when it is used for model predictive control. Note that we still use 4 refinement steps on the initial observation to update the randomly initialized posterior parameters, as in the released implementation. We split all 10-step sequences into 5-step sequences to avoid premature divergence.\nWe train OP3 with a batch size of 16 for 300 epochs using a learning rate of 0.0003 for VMDS and VOR and 0.0001 for SpMOT. K = 5 for SpMOT, K = 6 for VMDS and K = 8 for VOR are used. Larger learning rates for SpMOT led to premature divergence. Note that OP3 by default uses a batch size of 80 with the default learning rate of 0.0003; this led to suboptimal performance in our experiments. Finally, training OP3 is very unstable, leading to eventual divergence in almost all experiments performed for this study.\nThe checkpoint prior to divergence with the lowest KL loss is selected for the final MOT evaluation, as the KL loss enforces consistency in the latents over the sequence. Interestingly, this checkpoint almost always corresponded to the epochs right before divergence." }, { "heading": "D.4 SCALOR", "text": "Architecture: We follow the same architecture as in (Jiang et al., 2020). We use a grid of 4× 4 for object discovery with a maximum number of objects of 10. The standard deviation of the image distribution is set to 0.1.
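The alternating OP3 training schedule described above (4 refinement steps on the initial observation, then one dynamics step followed by M = 2 refinement steps for every subsequent frame) can be written as a small schedule generator; the function and step names are illustrative, not OP3's actual API.

```python
def op3_schedule(num_frames, initial_refine=4, refine_per_step=2):
    """Return the sequence of operations for the alternating schedule
    sketched in the text: refine the randomly initialised posterior on
    the first frame, then alternate one dynamics (prediction) step with
    `refine_per_step` refinement steps for each remaining frame."""
    ops = ["refine"] * initial_refine
    for _ in range(num_frames - 1):  # one dynamics step per frame transition
        ops.append("dynamics")
        ops += ["refine"] * refine_per_step
    return ops


# For a 3-frame clip: 4 initial refinements, then (dynamics + 2 refinements)
# for each of the two remaining frames.
schedule = op3_schedule(3)
```

Because every prediction is followed by refinement steps on the actual reconstruction, prediction errors are corrected before the next dynamics step instead of compounding over the sequence.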
Size anchor and variance are set to 0.2 and 0.1, respectively.\nFor SpMOT, background modeling is disabled and the dimensionality of the latent object appearance is set to 8.\nFor VMDS, the dimensionality of the background is set to 3 and the dimensionality of the latent object appearance is set to 16. For object discovery, a grid of 3× 3 cells with a maximum number of objects of 8 is used.\nFor VOR, the dimensionality of the background is set to 8 and the dimensionality of the latent object appearance is set to 16.\nHyperparameter tuning: For VMDS, we run a hyperparameter search over the number of grid cells {3× 3, 4× 4}, background dimension {1, 3, 5}, maximum number of objects {5, 8, 10} (dependent on the number of grid cells), size anchor {0.2, 0.25, 0.3, 0.4}, zwhat dimensionality {8, 16, 24} and end value of tau {0.3, 0.5}.\nFor SpMOT, we run a hyperparameter search over the maximum number of objects {4, 10}, size anchor {0.1, 0.2, 0.3}, zwhat dimensionality {8, 16} and whether to model the background (with background dimensionality 1) or not.\nFor VOR, we run a hyperparameter search over size anchor {0.2, 0.3} and background dimensionality {8, 12}.\nWe picked the best hyperparameters according to the validation loss.\nFigure E.1: Distribution of failure cases (ID switches, FPs, misses) dependent on the number of objects in VMDS videos, split by failure class. Mean of three training runs. Error bars: SD.\nTraining: We train SCALOR with a batch size of 16 using a learning rate of 0.0001, for 300 epochs for SpMOT and VOR and for 400 epochs for VMDS. For the final MOT evaluation, the checkpoint with the lowest loss on the validation set is chosen." }, { "heading": "E ADDITIONAL RESULTS", "text": "Table E.1 lists the individual results for the three training runs with different random seeds per model and dataset.
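The MOTA scores reported in the tables below aggregate the three failure classes of Fig. E.1 — misses, false positives and ID switches — into a single accuracy score; a minimal sketch of the standard CLEAR MOT computation, with hypothetical per-sequence counts, is:

```python
def mota(misses, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy: one minus the total error rate,
    where errors are misses, false positives and identity switches,
    normalised by the number of ground-truth object instances over all
    frames. Can be negative when errors exceed the ground-truth count."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt


# Hypothetical counts for a sequence with 200 ground-truth instances.
score = mota(misses=10, false_positives=4, id_switches=1, num_gt=200)
```

Note that MOTA is unbounded below, which is why heavily failing runs (e.g. some TBA* results in Table E.3) can report negative values.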
The results of VIMON and SCALOR are consistent across the three runs with different random seeds, while TBA has one run on SpMOT with significantly lower performance than the other two and shows variation across the three training runs on VMDS. OP3 exhibits one training run on SpMOT with lower performance than the other two.\nFig. E.1 shows the fraction of failure cases as a function of the number of objects present in the video, separately for the three failure classes: ID switches, FPs and misses. For VIMON, TBA and SCALOR, the number of failures increases with the number of objects present, regardless of the type of failure. In contrast, OP3 shows this pattern for ID switches and misses, while it accumulates a higher number of false positives (FPs) in videos with fewer (only one or two) objects.\nFig. E.2 shows a comparison between MONET and VIMON on VMDS. MONET correctly finds and segments objects, but it does not assign them to consistent slots over time, while VIMON maintains a consistent slot assignment throughout the video.\nFigure E.2: Comparison of MONET and VIMON on VMDS. Example sequence of the dataset shown with corresponding outputs of the models. Reconstruction shows sum of components from all slots, weighted by the attention masks. Color-coded segmentation maps in third row signify slot-assignment. Note how the object-slot assignment changes for consecutive frames (3rd row) for MONET, while VIMON maintains a consistent slot assignment throughout the video.\nFig. E.4 shows failure cases of OP3 on VOR.\nTable E.2 and Table E.3 list the results for the four models, VIMON, TBA, OP3 and SCALOR, on the VMDS challenge sets and out-of-distribution (o.o.d.) sets, respectively. Results are shown as the mean and standard deviation of three training runs with different random seeds per model.\nTable E.1: Analysis of SOTA object-centric representation learning models for MOT.
Results for three runs with different random training seeds.\nModel Run MOTA ↑ MOTP ↑ MD ↑ MT ↑ Match ↑ Miss ↓ ID S. ↓ FPs ↓ MSE ↓ SpMOT\n1 70.0 90.6 92.8 49.4 74.7 4.1 21.2 4.7 10.4 MONET 2 69.4 90.0 92.7 48.1 74.2 4.1 21.6 4.8 13.4\n3 71.3 88.1 91.6 53.8 77.1 4.9 18.0 5.8 15.2\n1 92.7 92.0 87.5 87.0 94.9 4.9 0.2 2.2 10.5 VIMON 2 92.8 92.0 86.9 86.3 94.8 5.0 0.2 2.0 11.8\n3 93.2 91.6 88.8 88.3 95.2 4.6 0.2 2.0 10.9\n1 90.5 71.4 90.2 89.8 94.4 5.3 0.3 3.9 10.3 TBA 2 58.4 70.7 69.6 60.8 75.0 18.1 6.9 16.6 14.6\n3 90.1 71.5 90.3 89.4 94.0 5.5 0.5 3.9 10.9\n1 92.4 80.0 94.5 93.7 97.3 2.4 0.4 4.8 4.3 OP3 2 81.9 74.9 86.9 86.5 92.8 6.8 0.3 10.9 30.1\n3 92.9 80.1 95.9 95.2 97.6 2.0 0.4 4.7 5.6\n1 94.4 80.1 96.5 92.3 95.4 2.4 2.2 1.0 3.3 SCALOR 2 94.7 80.2 96.4 93.1 95.8 2.4 1.8 1.1 3.4\n3 95.5 80.2 96.3 94.0 96.4 2.4 1.2 0.9 3.6\nVOR 1 28.0 81.3 73.8 26.7 57.4 18.0 24.6 29.4 14.1\nMONET 2 44.5 82.4 78.2 45.4 68.7 15.0 16.3 24.2 11.8 3 38.5 81.6 78.7 39.8 67.0 14.4 18.5 28.5 10.8\n1 89.0 88.9 90.2 89.8 92.9 6.8 0.3 3.9 7.1 VIMON 2 89.0 89.8 89.9 89.6 93.0 6.8 0.2 4.0 6.2\n3 89.0 89.9 91.0 90.6 93.8 6.0 0.2 4.8 5.9\n1 64.8 89.5 87.2 85.1 90.3 8.8 0.9 25.5 3.1 OP3 2 66.2 88.1 88.6 85.1 90.7 7.9 1.4 24.5 2.9\n3 65.3 89.3 88.2 86.1 91.1 8.0 0.9 25.8 3.0\n1 74.1 85.8 75.6 75.5 77.4 22.6 0.0 3.3 6.4 SCALOR 2 74.6 86.0 75.9 75.9 78.1 21.9 0.1 3.5 6.4\n3 75.1 86.1 76.5 76.4 78.2 21.7 0.0 3.1 6.3\nVMDS 1 51.7 79.6 75.1 36.7 67.6 12.9 19.5 15.9 20.8\nMONET 2 44.3 76.1 71.8 34.8 65.9 15.0 19.1 21.5 25.3 3 52.2 80.2 75.6 35.5 66.5 13.0 20.5 14.2 20.4\n1 87.0 86.8 86.7 85.4 92.4 6.8 0.7 5.5 10.6 VIMON 2 87.1 86.8 86.1 85.1 92.3 7.1 0.6 5.3 10.8\n3 86.5 86.7 86.0 84.6 92.1 7.2 0.7 5.6 10.6\n1 68.5 76.1 69.3 65.3 80.7 16.5 2.8 12.2 26.0 TBA 2 38.9 73.8 55.1 50.5 70.2 26.6 3.2 31.3 30.8\n3 56.0 75.0 64.3 59.2 76.7 19.8 3.5 20.8 27.5\n1 93.1 94.2 97.2 96.7 98.0 1.9 0.2 4.9 4.0 OP3 2 92.7 93.4 96.9 96.3 97.8 2.0 0.2 5.1 4.3\n3 89.4 93.3 96.2 95.8 97.6 2.2 0.2 8.3 4.6\n1 75.7 88.1 69.4 68.3 
79.8 19.4 0.8 4.0 13.9 SCALOR 2 72.7 87.2 66.7 65.6 77.6 21.6 0.8 4.9 14.2\n3 73.7 87.6 67.5 66.2 77.9 21.2 0.9 4.2 14.0\nTable E.2: Performance on VMDS challenge sets. Results shown as mean ± standard deviation for three runs with different random training seeds. Example sequences for each challenge set shown below.\nOcclusion Same Color Small Objects Large Objects\nModel MOTA MOTP MT MOTA MOTP MT MOTA MOTP MT MOTA MOTP MT\nVIMON 67.1 ± 0.4 82.5 ± 0.0 63.0 ± 0.1 72.2 ± 0.1 83.6 ± 0.1 70.4 ± 0.3 86.3 ± 0.2 83.3 ± 0.2 83.4 ± 0.4 70.7 ± 0.5 85.1 ± 0.1 76.1 ± 0.7\nTBA 37.5 ± 10.4 72.8 ± 0.8 38.3 ± 4.6 47.2 ± 9.4 73.0 ± 0.7 45.2 ± 3.9 74.3 ± 0.7 71.9 ± 0.4 65.3 ± 1.6 25.6 ± 15.0 73.4 ± 0.9 44.7 ± 6.7\nOP3 85.3 ± 1.0 91.6 ± 0.4 89.6 ± 0.9 51.5 ± 1.3 86.5 ± 0.3 66.3 ± 1.3 93.3 ± 1.6 93.0 ± 0.4 97.0 ± 0.2 83.8 ± 2.0 92.2 ± 0.4 93.5 ± 0.4\nSCALOR 58.8 ± 1.0 86.6 ± 0.4 46.8 ± 1.2 53.7 ± 1.1 83.4 ± 0.3 46.2 ± 1.1 74.4 ± 0.7 86.1 ± 0.4 67.6 ± 1.3 66.1 ± 1.9 86.6 ± 0.5 62.4 ± 1.4\nTable E.3: Performance on VMDS OOD test sets. Results shown as mean ± standard deviation for three runs with different random training seeds. Example sequences for each o.o.d. set shown below.\nSize Color Rotation\nModel MOTA MOTP MD MT MOTA MOTP MD MT MOTA MOTP MD MT\nVIMON 61.4 ± 2.5 78.0 ± 0.3 71.3 ± 2.1 66.8 ± 1.9 87.4 ± 0.4 86.2 ± 0.2 86.4 ± 0.1 85.0 ± 0.2 -10.4 ± 4.0 70.5 ± 0.4 39.5 ± 2.6 29.8 ± 1.0\nVIMON* 80.3 ± 0.9 82.1 ± 0.5 82.5 ± 0.4 79.8 ± 0.5 84.5 ± 0.6 84.6 ± 0.5 83.4 ± 0.5 81.8 ± 0.3 78.7 ± 1.6 82.0 ± 0.6 79.2 ± 0.4 76.4 ± 0.6\nTBA 52.3 ± 8.7 73.3 ± 0.7 59.8 ± 4.9 51.8 ± 4.9 56.1 ± 11.4 75.1 ± 0.9 63.7 ± 5.4 59.0 ± 5.2 52.4 ± 9.9 73.6 ± 0.8 59.3 ± 6.2 49.8 ± 5.5\nTBA* 1.3 ± 7.8 68.4 ± 1.9 30.6 ± 4.5 24.8 ± 3.4 -16.5 ± 8.1 69.6 ± 1.5 29.1 ± 3.8 25.4 ± 3.3 -7.5 ± 7.9 69.4 ± 1.4 26.6 ± 4.0 20.6 ± 3.4\nOP3 87.0 ± 1.9 90.8 ± 0.4 96.4 ± 0.1 95.3 ± 0.1 90.8 ± 1.2 93.5 ± 0.5 97.3 ± 0.1 95.8 ± 0.1 54.7 ± 5.7 84.2 ± 0.7 87.1 ± 1.7 80.5 ± 2.5\nOP3* 84.0 ± 2.8 91.2 ± 1.0 95.9 ± 0.8 94.5 ± 1.2 83.6 ± 3.7 91.6 ± 1.3 95.5 ± 0.5 92.9 ± 1.6 74.5 ± 2.2 89.8 ± 0.7 94.8 ± 0.6 93.3 ± 0.8\nSCALOR 68.1 ± 1.7 84.9 ± 0.4 63.3 ± 1.7 60.0 ± 2.0 75.5 ± 1.1 89.9 ± 0.5 67.0 ± 1.4 65.7 ± 1.6 46.5 ± 1.8 82.1 ± 0.5 41.9 ± 1.7 37.1 ± 1.3\nSCALOR* 67.5 ± 1.2 85.2 ± 0.6 61.2 ± 1.2 57.1 ± 0.7 73.3 ± 0.7 89.8 ± 0.5 64.8 ± 1.1 63.0 ± 0.9 61.6 ± 1.4 83.5 ± 0.4 53.4 ± 1.5 50.2 ± 1.1\n* Models trained on a dataset that featured color, size and orientation changes of objects during the sequence." }, { "heading": "E.1 OUT-OF-DISTRIBUTION TEST SETS", "text": "To test whether the models can in principle learn additional object transformations as featured in the VMDS o.o.d. sets, we additionally train the models on a new training set that includes size and color changes as well as rotation of objects. VIMON, OP3 and SCALOR are able to learn additional property changes of the objects when they are part of the training data, while TBA fails to learn tracking on this more challenging dataset (Fig. E.3; for absolute values see Table E.3)."
}, { "heading": "E.2 STABILITY OF TRAINING AND RUNTIME", "text": "Figure E.3: Performance on out-of-distribution sets relative to VMDS test set (100%). * indicates that models were trained on a dataset that included color, size and orientation changes of objects.\nTo assess runtime in a fair way despite the models being trained on different hardware, we report the training progress of all models after one hour of training on a single GPU (Table E.4). In addition, we quantify inference time on the full VMDS test set using a batch size of one.\nE.3 VIMON ABLATIONS\nRemoving the GRU or the mask conditioning of the attention network reduces tracking performance (MOTA on VMDS drops from 86.8% to 70.6% and 81.4%, respectively; Table E.5)." }, { "heading": "F SUPPLEMENTARY FIGURES", "text": "See figures F.1 – F.11 for additional, randomly picked examples of reconstruction and segmentation for VIMON, TBA, OP3 and SCALOR on the three datasets (VMDS, SpMOT and VOR).\nTable E.4: Runtime analysis (using a single RTX 2080 Ti GPU). Training: models trained on VMDS for one hour. Inference: models evaluated on VMDS test set with batch size=1 (10 frames).\nTraining Inference\nModel Resolution No. Param. Batch Size Memory [MiB] No. Iters Epochs Memory [MiB] Avg. runtime / batch Total runtime\nVIMON 64×64 714,900 18 10,860 3687 6.63 910 0.28 s/it 4min 39s\nTBA 128×128 3,884,644* 64 10,564 4421 28.29 972 0.24 s/it 4min 05s\nOP3 64×64 876,305 10 10,874 2204 2.20 4092 0.54 s/it 9min 04s\nSCALOR 64×64 2,763,526 48 10,942 2547 12.23 930 0.29 s/it 4min 48s\n* The TBA parameter count scales with the feature resolution, which is kept fixed using adaptive pooling. This makes the parameter count independent of input resolution.\nTable E.5: Ablation experiments for VIMON on VMDS.\nModel MOTA ↑ MOTP ↑ MD ↑ MT ↑ Match ↑ Miss ↓ ID S.
↓ FPs ↓ MSE ↓\nVIMON W/O MASK CONDITIONING 70.6 87.8 75.7 66.0 81.4 13.4 5.2 10.8 16.9\nVIMON W/O GRU 81.4 86.9 79.8 77.3 88.2 10.3 1.4 6.8 18.9\nFigure E.4: Failure cases of OP3 on VOR. Example sequences of VOR test set shown with corresponding outputs of the model after final refinement step. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.1: Results of VIMON on VMDS. Random example sequences of VMDS test set shown with corresponding outputs of the model. Reconstruction shows sum of components from all slots, weighted by the reconstructed masks from the VAE. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.2: Results of VIMON on SpMOT. Random example sequences of SpMOT test set shown with corresponding outputs of the model. Reconstruction shows sum of components from all slots, weighted by the reconstructed masks from the VAE. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.3: Results of VIMON on VOR. Random example sequences of VOR test set shown with corresponding outputs of the model. Reconstruction shows sum of components from all slots, weighted by the reconstructed masks from the VAE. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.4: Results of TBA on VMDS. Random example sequences of VMDS test set shown with corresponding outputs of the model. Binarized colour-coded segmentation maps in third row signify slot-assignment. Note that background subtraction is performed in the preprocessing of TBA, hence the black background in the reconstructions.\nFigure F.5: Results of TBA on SpMOT. Random example sequences of SpMOT test set shown with corresponding outputs of the model. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.6: Results of OP3 on VMDS. Random example sequences of VMDS test set shown with corresponding outputs of the model after final refinement step. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.7: Results of OP3 on SpMOT. Random example sequences of SpMOT test set shown with corresponding outputs of the model after final refinement step. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.8: Results of OP3 on VOR. Random example sequences of VOR test set shown with corresponding outputs of the model after final refinement step. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.9: Results of SCALOR on VMDS. Random example sequences of VMDS test set shown with corresponding outputs of the model. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.10: Results of SCALOR on SpMOT. Random example sequences of SpMOT test set shown with corresponding outputs of the model. Binarized colour-coded segmentation maps in third row signify slot-assignment.\nFigure F.11: Results of SCALOR on VOR. Random example sequences of VOR test set shown with corresponding outputs of the model. Binarized colour-coded segmentation maps in third row signify slot-assignment." } ]
2020
null
SP:8cbce41127c32edb148b2d6713f4ecec0efc6ff9
[ "This paper studies the generalization performance of stochastic algorithms in nonconvex optimization with gradient dominance condition. In detail, the authors suggest that for any algorithm, its generalization error can be bounded by $O(1/(n\\beta))$ plus the optimization error of the algorithm, where $\\beta$ is the gradient dominance parameter. The main idea for the authors to obtain such an improved bound is an advanced analysis based on a weaker on-average stability measure. ", "This paper mainly studies the generalization performance of stochastic algorithms. Compared with previous results which rely on Lipschitz condition, this paper assumes smoothness condition and Polyak-Lojasiewicz Condition, and then prove the excess generalization bound that is a summation of $\\frac{1}{n\\beta}$ and empirical optimization error. This result looks impressive, not only because the first term looks shaper than previous $\\frac{1}{\\sqrt{n}}$ of generalization bound , but also it implies optimization benefits generalization, which may help understand some empirical observations in modern machine learning. What's more, authors analyze some common stochastic algorithms as concrete examples to show the corresponding theoretical guarantee. Besides, the whole paper is well-written and easy to follow. " ]
Stochastic optimization has become the workhorse behind many successful machine learning applications, which motivates a lot of theoretical analysis to understand its empirical behavior. As a comparison, there is far less work to study the generalization behavior especially in a non-convex learning setting. In this paper, we study the generalization behavior of stochastic optimization by leveraging the algorithmic stability for learning with β-gradient-dominated objective functions. We develop generalization bounds of the order O(1/(nβ)) plus the convergence rate of the optimization algorithm, where n is the sample size. Our stability analysis significantly improves the existing non-convex analysis by removing the bounded gradient assumption and implying better generalization bounds. We achieve this improvement by exploiting the smoothness of loss functions instead of the Lipschitz condition in Charles & Papailiopoulos (2018). We apply our general results to various stochastic optimization algorithms, which show clearly how the variance-reduction techniques improve not only training but also generalization. Furthermore, our discussion explains how interpolation helps generalization for highly expressive models.
[ { "affiliations": [], "name": "Yunwen Lei" }, { "affiliations": [], "name": "Yiming Ying" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Elad Hazan" ], "title": "Variance reduction for faster non-convex optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Raef Bassily", "Mikhail Belkin", "Siyuan Ma" ], "title": "On exponential convergence of sgd in non-convex over-parametrized learning", "venue": "arXiv preprint arXiv:1811.02564,", "year": 2018 }, { "authors": [ "Raef Bassily", "Vitaly Feldman", "Kunal Talwar", "Abhradeep Guha Thakurta" ], "title": "Private stochastic convex optimization with optimal rates", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Raef Bassily", "Vitaly Feldman", "Cristóbal Guzmán", "Kunal Talwar" ], "title": "Stability of stochastic gradient descent on nonsmooth convex losses", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "SIAM Review,", "year": 2018 }, { "authors": [ "Olivier Bousquet", "Léon Bottou" ], "title": "The tradeoffs of large scale learning", "venue": "In Advances in Neural 
Information Processing Systems, pp", "year": 2008 }, { "authors": [ "Olivier Bousquet", "André Elisseeff" ], "title": "Stability and generalization", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Olivier Bousquet", "Yegor Klochkov", "Nikita Zhivotovskiy" ], "title": "Sharper bounds for uniformly stable algorithms", "venue": "In Conference on Learning Theory,", "year": 2020 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Sgd learns overparameterized networks that provably generalize on linearly separable data", "venue": "arXiv preprint arXiv:1710.10174,", "year": 2017 }, { "authors": [ "Yuheng Bu", "Shaofeng Zou", "Venugopal V Veeravalli" ], "title": "Tightening mutual information based bounds on generalization error", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "Libsvm: a library for support vector machines", "venue": "ACM Transactions on Intelligent Systems and Technology,", "year": 2011 }, { "authors": [ "Zachary Charles", "Dimitris Papailiopoulos" ], "title": "Stability and generalization of learning algorithms that converge to global optima", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yuansi Chen", "Chi Jin", "Bin Yu" ], "title": "Stability and convergence trade-off of iterative optimization algorithms", "venue": "arXiv preprint arXiv:1804.01619,", "year": 2018 }, { "authors": [ "Aaron Defazio", "Francis Bach", "Simon Lacoste-Julien" ], "title": "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Aymeric Dieuleveut", "Francis Bach" ], "title": "Nonparametric stochastic approximation with large stepsizes", "venue": "Annals of Statistics,", "year": 2016 }, { "authors": [ "Aymeric 
Dieuleveut", "Nicolas Flammarion", "Francis Bach. Harder" ], "title": "better, faster, stronger convergence rates for least-squares regression", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Andre Elisseeff", "Theodoros Evgeniou", "Massimiliano Pontil" ], "title": "Stability of randomized learning algorithms", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vitaly Feldman", "Jan Vondrak" ], "title": "High probability generalization bounds for uniformly stable algorithms with nearly optimal rate", "venue": "In Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Dylan J Foster", "Ayush Sekhari", "Karthik Sridharan" ], "title": "Uniform convergence of gradients for nonconvex learning and optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dylan J Foster", "Spencer Greenberg", "Satyen Kale", "Haipeng Luo", "Mehryar Mohri", "Karthik Sridharan" ], "title": "Hypothesis set stability and generalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Alon Gonen", "Shai Shalev-Shwartz" ], "title": "Average stability is invariant to data preconditioning: Implications to exp-concave empirical risk minimization", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Moritz Hardt", "Tengyu Ma" ], "title": "Identity matters in deep learning", "venue": "arXiv preprint arXiv:1611.04231,", 
"year": 2016 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Hamed Karimi", "Julie Nutini", "Mark Schmidt" ], "title": "Linear convergence of gradient and proximalgradient methods under the polyak-łojasiewicz condition", "venue": "In European Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Tomer Koren", "Kfir Levy" ], "title": "Fast rates for exp-concave empirical risk minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Ilja Kuzborskij", "Christoph Lampert" ], "title": "Data-dependent stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Lihua Lei", "Cheng Ju", "Jianbo Chen", "Michael I Jordan" ], "title": "Non-convex finite-sum optimization via scsg methods", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yunwen Lei", "Yiming Ying" ], "title": "Fine-grained analysis of stability and generalization for stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yunwen Lei", "Ting Hu", "Guiying Li", "Ke Tang" ], "title": "Stochastic gradient descent for nonconvex learning without bounded gradient assumptions", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Yunwen Lei", "Antoine Ledent", "Marius Kloft" ], "title": "Sharper generalization bounds for pairwise learning", "venue": "Advances in Neural Information Processing Systems,", "year": 
2020 }, { "authors": [ "Jian Li", "Xuanyuan Luo", "Mingda Qiao" ], "title": "On generalization error bounds of noisy gradient methods for non-convex learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yuanzhi Li", "Yang Yuan" ], "title": "Convergence analysis of two-layer neural networks with relu activation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Junhong Lin", "Lorenzo Rosasco" ], "title": "Optimal rates for multi-pass stochastic gradient methods", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Junhong Lin", "Raffaello Camoriano", "Lorenzo Rosasco" ], "title": "Generalization properties and implicit regularization for multiple passes SGM", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Mingrui Liu", "Xiaoxuan Zhang", "Lijun Zhang", "Jing Rong", "Tianbao Yang" ], "title": "Fast rates of ERM and stochastic approximation: Adaptive to error bound conditions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tongliang Liu", "Gábor Lugosi", "Gergely Neu", "Dacheng Tao" ], "title": "Algorithmic stability and hypothesis complexity", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ben London" ], "title": "A PAC-bayesian analysis of randomized learning with application to stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Siyuan Ma", "Raef Bassily", "Mikhail Belkin" ], "title": "The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andreas Maurer" ], "title": "A second-order look at stability and generalization", "venue": "In Conference on Learning Theory,", "year": 
2017 }, { "authors": [ "Wenlong Mou", "Liwei Wang", "Xiyu Zhai", "Kai Zheng" ], "title": "Generalization bounds of sgld for nonconvex learning: Two theoretical viewpoints", "venue": "In Conference on Learning Theory,", "year": 2018 }, { "authors": [ "Nicole Mücke", "Gergely Neu", "Lorenzo Rosasco" ], "title": "Beating sgd saturation with tail-averaging and minibatching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ion Necoara", "Yu Nesterov", "Francois Glineur" ], "title": "Linear convergence of first order methods for non-strongly convex optimization", "venue": "Mathematical Programming,", "year": 2018 }, { "authors": [ "Jeffrey Negrea", "Mahdi Haghifam", "Gintare Karolina Dziugaite", "Ashish Khisti", "Daniel M Roy" ], "title": "Information-theoretic generalization bounds for sgld via data-dependent estimates", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yu Nesterov" ], "title": "Efficiency of coordinate descent methods on huge-scale optimization problems", "venue": "SIAM Journal on Optimization,", "year": 2012 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lam M Nguyen", "Jie Liu", "Katya Scheinberg", "Martin Takáč" ], "title": "SARAH: A novel method for machine learning problems using stochastic recursive gradient", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Atsushi Nitanda", "Taiji Suzuki" ], "title": "Stochastic gradient descent with exponential convergence rates of expected classification errors", "venue": "In Kamalika Chaudhuri and Masashi Sugiyama (eds.), International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Francesco Orabona" ], "title": "Simultaneous 
model selection and optimization through parameter-free stochastic learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Towards moderate overparameterization: global convergence guarantees for training shallow neural networks", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Loucas Pillaud-Vivien", "Alessandro Rudi", "Francis Bach" ], "title": "Exponential convergence of testing error for stochastic gradient methods", "venue": "In Conference On Learning Theory,", "year": 2018 }, { "authors": [ "Alexander Rakhlin", "Sayan Mukherjee", "Tomaso Poggio" ], "title": "Stability results in learning theory", "venue": "Analysis and Applications,", "year": 2005 }, { "authors": [ "Sashank Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabas Poczos", "Alex Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux", "Francis Bach" ], "title": "Minimizing finite sums with the stochastic average gradient", "venue": "Mathematical Programming,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding machine learning: From theory to algorithms", "venue": "Cambridge university press,", "year": 2014 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Nathan Srebro", "Karthik Sridharan" ], "title": "Learnability, stability and uniform convergence", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Mahdi Soltanolkotabi", "Adel Javanmard", "Jason D Lee" ], "title": "Theoretical insights into the optimization landscape of over-parameterized shallow neural networks", "venue": "IEEE Transactions on Information Theory,", "year": 2019 }, { "authors": [ "Nathan Srebro", "Karthik Sridharan", "Ambuj Tewari" ], "title": 
"Smoothness, low noise and fast rates", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Sharan Vaswani", "Francis Bach", "Mark Schmidt" ], "title": "Fast and faster convergence of sgd for overparameterized models and an accelerated perceptron", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Zhe Wang", "Kaiyi Ji", "Yi Zhou", "Yingbin Liang", "Vahid Tarokh" ], "title": "Spiderboost and momentum: Faster variance reduction algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Lin Xiao", "Tong Zhang" ], "title": "A proximal stochastic gradient method with progressive variance reduction", "venue": "SIAM Journal on Optimization,", "year": 2014 }, { "authors": [ "Aolin Xu", "Maxim Raginsky" ], "title": "Information-theoretic analysis of generalization capability of learning algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yiming Ying", "Massimiliano Pontil" ], "title": "Online gradient descent learning algorithms", "venue": "Foundations of Computational Mathematics,", "year": 2008 }, { "authors": [ "Yiming Ying", "Ding-Xuan Zhou" ], "title": "Unregularized online learning algorithms with general loss functions", "venue": "Applied and Computational Harmonic Analysis,", "year": 2017 }, { "authors": [ "Zhuoning Yuan", "Yan Yan", "Rong Jin", "Tianbao Yang" ], "title": "Stagewise training accelerates convergence of testing error over sgd", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Lijun Zhang", "Mehrdad Mahdavi", "Rong Jin" ], "title": 
"Linear convergence with condition number independent access of full gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Dongruo Zhou", "Pan Xu", "Quanquan Gu" ], "title": "Stochastic nested variance reduced gradient descent for nonconvex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yi Zhou", "Yingbin Liang", "Huishuai Zhang" ], "title": "Generalization error bounds with probabilistic guarantee for SGD in nonconvex optimization", "venue": "arXiv preprint arXiv:1802.06903,", "year": 2018 }, { "authors": [ "Difan Zou", "Yuan Cao", "Dongruo Zhou", "Quanquan Gu" ], "title": "Stochastic gradient descent optimizes over-parameterized deep relu networks", "venue": "arXiv preprint arXiv:1811.08888,", "year": 2018 }, { "authors": [ "Hardt" ], "title": "2016, Theorem 2.2). Part (b) was first proved for deterministic algorithms (Bousquet & Elisseeff, 2002, Theorem 11), and then extended to randomized algorithms (Elisseeff", "venue": null, "year": 2005 }, { "authors": [], "title": "We further require a lemma relating the convergence in terms of function values to the convergence in terms of models. This shows that the PL condition is stronger than a quadratic growth condition (Karimi et al., 2016)", "venue": "Lemma B.3 (Karimi et al", "year": 2016 }, { "authors": [ "Reddi et al", "Lei" ], "title": "The proof is complete. We now present the proof of Theorem 9 on the behavior of the stochastic recursive gradient algorithm (SARAH) (Nguyen et al., 2017) and SpiderBoost (Wang et al., 2019)", "venue": "Proof of Theorem", "year": 2017 }, { "authors": [ "Nguyen et al", "Wang" ], "title": "The proof is complete. Finally, we consider SNVRG-PL (Zhou et al., 2018a). Theorem D.4. Let Assumptions 1 and 2 hold with L ≤ nβ/4", "venue": "Let A be the SNVRG-PL in Zhou et al", "year": 2019 }, { "authors": [], "title": "Rn×m is the matrix formed from the data. 
It is known that if g is σg-strongly convex, then FS satisfies the PL condition (Karimi et al., 2016", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Stochastic optimization has found tremendous applications in training highly expressive machine learning models including deep neural networks (DNNs) (Bottou et al., 2018), which are ubiquitous in modern learning architectures (LeCun et al., 2015). Oftentimes, the models trained in this way have not only very small training errors or even interpolate the training examples, but also surprisingly generalize well to testing examples (Zhang et al., 2017). While the low training error can be well explained by the over-parametrization of models and the efficiency of the optimization algorithm in identifying a local minimizer (Bassily et al., 2018; Vaswani et al., 2019; Ma et al., 2018), it is still unclear how the highly expressive models also achieve a low testing error (Ma et al., 2018). With the recent theoretical and empirical study, it is believed that a joint consideration of the interaction among the optimization algorithm, learning models and training examples is necessary to understand the generalization behavior (Neyshabur et al., 2017; Hardt et al., 2016; Lin et al., 2016).\nThe generalization error for stochastic optimization typically consists of an optimization error and an estimation error (see e.g. Bousquet & Bottou (2008)). Optimization errors arise from the suboptimality of the output of the chosen optimization algorithms, while estimation errors refer to the discrepancy between the testing error and training error at the output model. There is a large amount of literature on studying the optimization error (convergence) of stochastic optimization algorithms (Bottou et al., 2018; Orabona, 2014; Karimi et al., 2016; Ying & Zhou, 2017; Liu et al., 2018). In particular, the power of interpolation is clearly justified in boosting the convergence rate of stochastic gradient descent (SGD) (Bassily et al., 2018; Vaswani et al., 2019; Ma et al., 2018). 
In contrast, there is far less work on studying estimation errors of optimization algorithms. In a seminal paper (Hardt et al., 2016), the fundamental concept of algorithmic stability was used to study the generalization behavior of SGD, which was further improved and extended in Charles & Papailiopoulos (2018); Zhou et al. (2018b); Yuan et al. (2019); Kuzborskij & Lampert (2018).\n∗Corresponding author: Yiming Ying\nHowever, these results are still not quite satisfactory in the following three aspects. Firstly, the existing stability bounds in non-convex learning require very small step sizes (Hardt et al., 2016) and yield suboptimal generalization bounds (Yuan et al., 2019; Charles & Papailiopoulos, 2018; Zhou et al., 2018b). Secondly, the majority of existing work has focused on functions with a uniform Lipschitz constant, which can be very large, if not infinite, in practical models such as DNNs (Bousquet & Elisseeff, 2002; Hardt et al., 2016; Charles & Papailiopoulos, 2018; Kuzborskij & Lampert, 2018). Thirdly, the existing stability analysis fails to explain how highly expressive models still generalize in an interpolation setting, which is observed for overparameterized DNNs (Arora et al., 2019; Brutzkus et al., 2017; Bassily et al., 2018; Belkin et al., 2019).\nIn this paper, we attempt to address the above three issues using novel stability analysis approaches. Our main contributions are summarized as follows.\n1. We develop general stability and generalization bounds for any learning algorithm to optimize (non-convex) β-gradient-dominated objectives. Specifically, we show that the excess generalization error is bounded by O(1/(nβ)) plus the convergence rate of the algorithm, where n is the sample size. This general theorem implies that overfitting will never happen in this case, and generalization would always improve as we increase the training accuracy, which is due to an implicit regularization effect of the gradient dominance condition.
In particular, we show that interpolation actually improves generalization for highly expressive models. In contrast to the existing discussions based on either hypothesis stability or uniform stability, which imply at best a bound of O(1/√(nβ)), the main idea is to consider a weaker on-average stability measure which allows us to replace the uniform Lipschitz constant in Hardt et al. (2016); Kuzborskij & Lampert (2018); Charles & Papailiopoulos (2018) with the training error of the best model.\n2. We apply our general results to various stochastic optimization algorithms, and highlight the advantage over existing generalization analysis. For example, we derive an exponential convergence of testing errors for SGD in an interpolation setting, which complements the exponential convergence of optimization errors (Bassily et al., 2018; Vaswani et al., 2019; Ma et al., 2018) and extends the existing results (Pillaud-Vivien et al., 2018; Nitanda & Suzuki, 2019) from a strongly-convex setting to a non-convex setting. In particular, we show that stochastic variance-reduced optimization outperforms SGD by achieving a significantly faster convergence of testing errors, while this advantage is only shown in terms of optimization errors in the literature (Reddi et al., 2016; Lei et al., 2017; Nguyen et al., 2017; Zhou et al., 2018a; Wang et al., 2019)." }, { "heading": "2 RELATED WORK", "text": "Algorithmic Stability. We first review the related work on stability and generalization. Algorithmic stability is a fundamental concept in statistical learning theory (Bousquet & Elisseeff, 2002; Elisseeff et al., 2005), which has a deep connection with learnability (Shalev-Shwartz et al., 2010; Rakhlin et al., 2005). The important notion of uniform stability was introduced in Bousquet & Elisseeff (2002), where the authors showed that empirical risk minimization (ERM) enjoys uniform stability if the objective function is strongly convex.
This concept was extended to study randomized algorithms such as bagging and bootstrap (Elisseeff et al., 2005). An interesting trade-off between uniform stability and convergence was developed for iterative optimization algorithms, which was then used to study convergence lower bounds of different algorithms (Chen et al., 2018). While generalization bounds based on stability are often stated in expectation, uniform stability was recently shown to guarantee almost optimal high-probability bounds based on elegant concentration inequalities for weakly-dependent random variables (Maurer, 2017; Feldman & Vondrak, 2019; Bousquet et al., 2020). Other than the standard classification and regression setting, uniform stability was used very successfully to study transfer learning (Kuzborskij & Lampert, 2018), PAC-Bayesian bounds (London, 2017), privacy learning (Bassily et al., 2019) and pairwise learning (Lei et al., 2020b). Some other stability measures include the uniform argument stability (Liu et al., 2017), hypothesis stability (Bousquet & Elisseeff, 2002), hypothesis set stability (Foster et al., 2019) and on-average stability (Shalev-Shwartz et al., 2010). An advantage of on-average stability is that it is weaker than uniform stability and can imply better generalization by exploiting either the strong convexity of the objective function (Shalev-Shwartz & Ben-David, 2014, Corollary 13.7) or the more relaxed exp-concavity of loss functions (Koren & Levy, 2015; Gonen & Shalev-Shwartz, 2017). Since the gradient-dominance condition is another relaxed extension of strong convexity, we use on-average stability to study generalization bounds.\nGeneralization analysis. We now review related work on generalization analysis for stochastic optimization. In a seminal paper (Hardt et al., 2016), the authors used the nonexpansiveness of the gradient mapping to develop uniform stability bounds for SGD to optimize convex, strongly convex and even non-convex objective functions.
This inspired some interesting work on stochastic optimization. An interesting data-dependent stability bound was developed for SGD, a nice property of which is that it shows how the initialization would affect generalization (Kuzborskij & Lampert, 2018). These stability bounds were integrated into a PAC-Bayesian analysis of SGD, yielding generalization bounds for arbitrary posterior distributions (London, 2017). Almost optimal generalization bounds were developed for differentially private stochastic convex optimization (Bassily et al., 2019). The on-average variance of stochastic gradients was used to refine the generalization analysis of SGD (Hardt et al., 2016) in non-convex optimization (Zhou et al., 2018b). The uniform stability was also studied for SGD implemented in a stagewise manner (Yuan et al., 2019) and stochastic gradient Langevin dynamics in a non-convex setting (Li et al., 2020; Mou et al., 2018). Very recently, the discussions in Hardt et al. (2016) were extended to tackle non-smooth (Lei & Ying, 2020; Bassily et al., 2020) and non-Lipschitz functions (Lei & Ying, 2020). The most related work is Charles & Papailiopoulos (2018), where some general hypothesis stability bounds were developed for learning algorithms that converge to optima. A very interesting point is that their bounds depend only on the convergence of the algorithm to a global minimum and the geometry of loss functions around the global minimum. However, their discussion implies at best the slow generalization bound O(1/√(nβ)) for β-gradient-dominated objective functions, and cannot explain the benefit of low optimization errors in helping generalization. The underlying reason is that they used the pointwise hypothesis stability and did not consider the smoothness of loss functions.
We aim to improve these results by leveraging the weaker on-average stability and smoothness of loss functions.\nOther than the stability approach, there is interesting generalization analysis of SGD based on either a uniform convergence approach (Lin et al., 2016), an integral operator approach (Lin & Rosasco, 2017; Ying & Pontil, 2008; Dieuleveut & Bach, 2016; Dieuleveut et al., 2017; Mücke et al., 2019) or an information-theoretic approach (Xu & Raginsky, 2017; Negrea et al., 2019; Bu et al., 2020)." }, { "heading": "3 MAIN RESULTS", "text": "Let ρ be a probability measure defined on a sample space Z = X × Y with X ⊆ Rd and Y ⊆ R, from which a training dataset S = { z1, . . . , zn } is drawn independently and identically. The aim is to find a good model w from a model parameter space W based on the training dataset S. The performance of a prescribed model w on a single example z can be measured by a nonnegative loss function f(w; z), where f : W × Z 7→ R+. In machine learning we often apply a (randomized) algorithm A : ∪nZn 7→ W to S to produce an output model A(S) ∈ W . Oftentimes, the constructed model w would have a small empirical risk FS(w) = (1/n) ∑n i=1 f(w; zi). However, we are mostly interested in the generalization performance of a model w on testing examples measured by the population (true) risk F (w) = Ez [ f(w; z) ] , where Ez denotes the expectation with respect to (w.r.t.) z. The gap ES,A [ F (A(S))−FS(A(S)) ] between the population risk and empirical risk is called the estimation error, which is due to the approximation of ρ by sampling. Here EA denotes the expectation w.r.t. the randomness of the algorithm A. For example, if A is SGD, then EA denotes the expectation w.r.t. the random indices of training examples selected for the gradient computation.
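The estimation error just defined combines with the optimization error in a standard decomposition of the excess risk. Writing F̂S = infw∈W FS(w) (as introduced in Assumption 2 below) and w∗ for a population risk minimizer, a short derivation (ours, not from the paper) that is useful to keep in mind is:

```latex
\mathbb{E}\big[F(A(S))\big]-F(\mathbf{w}^*)
  =\underbrace{\mathbb{E}\big[F(A(S))-F_S(A(S))\big]}_{\text{estimation error}}
  +\underbrace{\mathbb{E}\big[F_S(A(S))-\hat F_S\big]}_{\text{optimization error}}
  +\underbrace{\mathbb{E}\big[\hat F_S-F_S(\mathbf{w}^*)\big]}_{\le 0},
```

where the last term is nonpositive because F̂S ≤ FS(w∗) pointwise and E[FS(w∗)] = F(w∗). Thus bounding the estimation error (via stability) and the optimization error (via convergence analysis) together controls the excess generalization error.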
A powerful tool to study the estimation error is the algorithmic stability (Bousquet & Elisseeff, 2002; Elisseeff et al., 2005; Shalev-Shwartz et al., 2010; Hardt et al., 2016), which measures the sensitivity of the algorithm’s output w.r.t. the perturbation of a training dataset. Below we give formal definitions of stability measures, whose connection to generalization is established in Theorem A.1.\nDefinition 1 (Uniform Stability). A randomized algorithm A has uniform stability ε if for all datasets S, S̃ ∈ Zn that differ by at most one example, we have supz EA [ f(A(S); z)− f(A(S̃); z) ] ≤ ε.\nThe following on-average stability is similar to the average-RO stability in Shalev-Shwartz et al. (2010). The difference is we do not use an absolute value. For m ∈ N, we denote [m] = {1, . . . ,m}.\nDefinition 2 (On-average Stability). Let S = {z1, . . . , zn} and S̃ = {z̃1, . . . , z̃n} be drawn independently from ρ. For each i ∈ [n], denote S(i) = {z1, . . . , zi−1, z̃i, zi+1, . . . , zn}. We say an algorithm A has on-average stability ε if (1/n) ∑n i=1 ES,S̃,A [ f(A(S(i)); zi)− f(A(S); zi) ] ≤ ε.\nIn this paper, we are interested in the excess generalization error F (A(S)) − F (w∗), where w∗ ∈ arg minw∈W F (w) is the best model with the least testing error (population risk). For this purpose, we introduce some basic assumptions. A basic assumption in non-convex learning is the smoothness of loss functions (Ghadimi & Lan, 2013; Karimi et al., 2016), meaning the gradients are Lipschitz continuous. Let ‖ · ‖2 denote the Euclidean norm and ∇ denote the gradient operator. Assumption 1 (Smoothness Assumption).
We assume for all z ∈ Z , the differentiable function w 7→ f(w; z) is L-smooth, i.e., ‖∇f(w; z)−∇f(w′; z)‖2 ≤ L‖w −w′‖2 for all w,w′ ∈ W .\nAnother assumption is the Polyak-Lojasiewicz (PL) condition on the objective function, which is common in non-convex optimization (Zhou et al., 2018b; Reddi et al., 2016; Karimi et al., 2016; Wang et al., 2019; Lei et al., 2017), and was shown to hold true for deep (linear) and shallow neural networks (Hardt & Ma, 2016; Charles & Papailiopoulos, 2018; Li & Yuan, 2017).\nAssumption 2 (Polyak-Lojasiewicz Condition). Denote F̂S = infw′∈W FS(w′). We assume FS satisfies the PL or gradient-dominated condition (in expectation) with parameter β > 0, i.e.,\nES [ FS(w)− F̂S ] ≤ (1/(2β)) ES [ ‖∇FS(w)‖22 ] , ∀w ∈ W. (3.1)\nIt is worth mentioning that our results in this section continue to hold if the global PL condition is relaxed to a local PL condition, i.e., (3.1) holds for w in a neighborhood of the minimizer of FS .\nThe existing stability analysis often imposes a bounded gradient assumption below (Bousquet & Elisseeff, 2002; Hardt et al., 2016; Charles & Papailiopoulos, 2018; Yuan et al., 2019; Kuzborskij & Lampert, 2018). Indeed, the resulting stability bounds depend on the uniform Lipschitz constant G (see eq. (3.4)), which can be prohibitively large in practical models, e.g., DNNs, or even infinite, e.g. least squares regression in an unbounded domain. Assumption 3 (Bounded Gradient Assumption). We assume ‖∇f(w; z)‖2 ≤ G for all w ∈ W , z ∈ Z and a constant G > 0.\nOur main result to be proved in Appendix B removes Assumption 3 and replaces the uniform Lipschitz constant G by the minimal empirical risk F̂S , which is significantly smaller than the Lipschitz constant. Note the assumption L ≤ nβ/4 is mild, and the previous generalization bounds become vacuous as O(1) (Yuan et al., 2019; Charles & Papailiopoulos, 2018) if this assumption is violated. Theorem 1 (Main Theorem). Let Assumptions 1, 2 hold and wS = A(S).
If L ≤ nβ/4, then\nE [ F (wS)− F̂S ] ≤ 16L E[F̂S ]/(nβ) + L E [ FS(wS)− F̂S ]/(2β). (3.2)\nAn important implication is as follows. Since E [ F̂S ] ≤ E [ FS(w∗) ] = F (w∗) and F̂S ≤ FS(wS), Eq. (3.2) implies an upper bound on the excess generalization error E[F (wS)]− F (w∗) and\nE [ F (wS)− FS(wS) ] = O ( 1/(nβ) + E [ FS(wS)− F̂S ]/β ) . (3.3)\nThe above two terms can be explained as follows. The term O(1/(nβ)) reflects the intrinsic complexity of the problem, while E [ FS(wS) − F̂S ] is called the optimization error. An interesting observation is that the overfitting phenomenon would never happen for learning under the PL condition (analogous to learning with strongly convex objectives where the global minimizer generalizes well (Bousquet & Elisseeff, 2002)). Indeed, if the optimization algorithm finds more and more accurate solutions, it achieves the limiting generalization bound O(1/(nβ)). This shows an important message that optimization can be beneficial to generalization. This seemingly counterintuitive phenomenon is due to the implicit regularization enforced by the PL condition (analogous to the strong convexity condition). Another notable property is that Theorem 1 applies to any algorithm. We can plug any known optimization error bounds into it to immediately get generalization bounds. Remark 1. We show that our result significantly improves the existing stability analysis. The work (Charles & Papailiopoulos, 2018) showed the pointwise hypothesis stability is controlled by 2G2/(nβ) + 2√2 G √(E[FS(wS)− F̂S ]/β), which together with the connection between stability and generalization (cf. (A.1)), implies with probability 1− δ that\nF (wS) ≤ FS(wS) + ( M2/(nδ) + 24MG2/(nβδ) + 24MG √(2 E[FS(wS)− F̂S ])/(√β δ) )1/2. (3.4)\nThe above bound requires the bounded gradient assumption ‖∇f(w; z)‖2 ≤ G and the bounded loss assumption 0 ≤ f(w; z) ≤ M for all w ∈ W and z ∈ Z , which are successfully removed in our generalization analysis.
Furthermore, our generalization bound significantly improves (3.4). Indeed, assume E[FS(wS)− F̂S ] ≤ ε2β for some ε > 0; then (3.3) implies\nE [ F (wS) ] = E [ FS(wS) ] + O ( 1/(nβ) + ε2 ) , (3.5)\nwhile (3.4) becomes F (wS) = FS(wS) + O ( 1/√(nβ) + √ε ) . To achieve the generalization guarantee O(1/√(nβ)), the above bound requires the optimization accuracy ε = O(1/(nβ)), while our bound (3.5) only requires the accuracy ε = O(1/√(nβ)) but gets the significantly better generalization bound 1/(nβ). We actually develop a better stability bound. Specifically, the pointwise hypothesis stability is bounded by O ( 1/(nβ) + ε ) in Charles & Papailiopoulos (2018) while we show that the on-average stability is bounded by O ( 1/(nβ) + ε2 ) , which is significantly tighter if 1/(nβ) ≤ ε ≤ 1 (ignoring constant factors). It should be mentioned that Charles & Papailiopoulos (2018) did not impose a smoothness assumption. However, the smoothness assumption is widely used in non-convex optimization to derive meaningful rates (Ghadimi & Lan, 2013). As compared to probabilistic bounds in Charles & Papailiopoulos (2018), our bounds are stated in expectation. The extension to high-probability bounds will lead to an additional O(1/√n) term (Feldman & Vondrak, 2019).\nRemark 2 (Bounded gradient assumption). Very recently, the bounded gradient assumption was also removed for the stability analysis (Lei & Ying, 2020). However, their analysis considered SGD applied to convex loss functions. As a comparison, we study stability and generalization in a non-convex learning setting, and our analysis applies to any stochastic optimization algorithm.\nRemark 3. If A is ERM, Theorem 1 immediately implies E [ F (wS) − F̂S ] ≤ 16L E[F̂S ]/(nβ). If FS is β-strongly convex and L < nβ/2, it was shown for ERM that E [ F (wS) − F̂S ] ≤ 48L E [ F̂S ]/(nβ) (Shalev-Shwartz & Ben-David, 2014, Corollary 13.7).
Their result is extended here from a strongly convex setting to a gradient-dominated setting, and from the particular ERM to any algorithm.\nAs a direct corollary, we can derive the following optimistic bound in the interpolation setting, which is the most intriguing case for over-parameterized or highly expressive DNN models. Corollary 2. Let Assumptions 1, 2 hold and wS = A(S). If E[F̂S ] = 0 and L < nβ/2, then E [ F (wS) ] ≤ (L/(2β)) E [ FS(wS) ] .\nRemark 4. Corollary 2 shows a benefit of interpolation in boosting the generalization by achieving a generalization bound O(ε) for any ε > 0 if we minimize FS sufficiently well. This benefit cannot be explained by the existing discussions (Hardt et al., 2016; Charles & Papailiopoulos, 2018) as they imply the same generalization bound O(1/√(nβ)) in the interpolation setting. Although it was observed that interpolation helps in training (Bassily et al., 2018; Vaswani et al., 2019; Ma et al., 2018; Oymak & Soltanolkotabi, 2020; Allen-Zhu et al., 2019; Zou et al., 2018), it is still largely unclear, as indicated in Ma et al. (2018), how interpolation helps in generalization. Corollary 2 provides new insights on how interpolation from highly expressive models helps generalization.\nWe now move on to the discussion on the critical assumption in Corollary 2, i.e. L < nβ/2. According to the proof, the two parameters L and β can be replaced by their local counterparts, i.e., the smoothness and PL condition related to a particular minimizer w′ of FS(i) (Eqs. (B.6), (B.7)). For example, β can be replaced by ‖∇FS(w′)‖22/(2(FS(w′) − F̂S)), which can be larger than β. Below are some examples explaining the condition L/β < n/2. As we will see, the quantity L/β reflects the complexity of the problem (related to the condition number as shown in Examples 1, 2). Therefore, the condition L/β < n/2 implicitly imposes a constraint on the complexity of the problems.
This explains why the optimization algorithm would never overfit when applied to gradient-dominated objective functions if L/β < n/2, as shown in Theorem 1. Example 1. Let φ : Rd 7→ Rm be a feature map, and ` : R×R 7→ R+ be a loss function which is L`-smooth and σ`-strongly convex w.r.t. the first argument. Consider f(w; z) = `(〈w, φ(xi)〉, yi) with 〈·, ·〉 being an inner product. Then, FS satisfies the PL condition with the parameter σ′min(ΣS)σ`, where ΣS = (1/n) ∑n i=1 φ(xi)φ(xi)> is the empirical covariance matrix, A> denotes the transpose of a matrix A and σ′min(A) means the minimal non-zero singular value of A. The empirical counterpart (we have an expectation w.r.t. S in the PL condition) of L/β is of the order of σmax(ΣS)/σ′min(ΣS), where σmax(A) means the maximal singular value (we give details in Appendix E.1).\nExample 2. Consider neural networks with a single hidden layer with d inputs, m hidden neurons and a single output neuron, for which the prediction function takes the form hv,w = ∑m k=1 vkφ ( 〈wk, x〉 ) . Here wk ∈ Rd and vk ∈ R denote the weight of the edges connecting the k-th hidden node to the input and output node, respectively, while φ : R 7→ R is the activation function. Analogous to Arora et al. (2019); Oymak & Soltanolkotabi (2020), we fix v = (v1, . . . , vm)> with |vk| = a for some a > 0 and train w = (w1,w2, . . . ,wm)> ∈ Rm×d from S. The loss function then takes the form f(w; z) = ( v>φ(wx)−y )2 . If we consider the identity activation function, i.e., φ(t) = t, then FS satisfies the PL condition with the parameter σmin(ΣS), where σmin(A) denotes the minimal singular value of A and ΣS = (1/n) ∑n i=1 xix>i . The empirical counterpart of L/β is of the order of σmax(ΣS)/σmin(ΣS) (we give details in Appendix E.2 for a general activation function).\nIt is possible to get generalization bounds under some other conditions.
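As an illustrative sanity check of Example 1 (our own script, not part of the paper), take the squared loss `(a, y) = (a − y)2, which is 2-strongly convex and 2-smooth in its first argument, so σ` = 2 and the claimed PL parameter is β = 2σ′min(ΣS). The snippet below, with hypothetical variable names, verifies inequality (3.1) for a random full-rank design at randomly drawn models:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.standard_normal((n, d))            # rows play the role of phi(x_i)
y = rng.standard_normal(n)

def F_S(w):                                # empirical risk, squared loss
    return np.mean((X @ w - y) ** 2)

def grad_F_S(w):                           # gradient of the empirical risk
    return 2.0 * X.T @ (X @ w - y) / n

Sigma_S = X.T @ X / n                      # empirical covariance matrix
# PL parameter from Example 1: sigma'_min(Sigma_S) * sigma_ell, sigma_ell = 2
beta = 2.0 * np.linalg.eigvalsh(Sigma_S).min()

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # empirical risk minimizer
F_hat = F_S(w_hat)

for _ in range(100):                       # check (3.1) at random points
    w = rng.standard_normal(d)
    lhs = F_S(w) - F_hat
    rhs = np.linalg.norm(grad_F_S(w)) ** 2 / (2.0 * beta)
    assert lhs <= rhs + 1e-8
```

For this loss the inequality in fact holds deterministically (not just in expectation over S), since FS(w) − F̂S = (1/n)‖X(w − ŵ)‖22 and the gradient norm can be lower bounded through the smallest non-zero eigenvalue of ΣS.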
Since the one-point strong convexity condition together with the smoothness assumption implies the PL condition (Yuan et al., 2019), all our results apply to one-point strongly convex functions. We can also get generalization bounds for objective functions satisfying the quadratic growth condition (Necoara et al., 2018), which is weaker than the PL condition. However, we need to impose a realizability condition which was also imposed in Charles & Papailiopoulos (2018). The proof of Theorem 3 is given in Section C. Let w(S) denote the Euclidean projection of w onto the set of global minimizers of FS in W . Definition 3 (Quadratic Growth Condition). We say FS : W 7→ R satisfies the quadratic growth condition (in expectation) with parameter β if E [ FS(w)−F̂S ] ≥ (β/2) E [ ‖w−w(S)‖22 ] for all w ∈ W . Theorem 3. Let Assumption 1 hold and FS satisfy the quadratic growth condition with parameter β. If the problem is realizable, i.e., E[F̂S ] = 0 and L ≤ nβ/4, then E [ F (wS) ] ≤ 2Lβ−1 E [ FS(wS) ] .\nFinally, we consider any optimization algorithms applied to gradient-dominated and Lipschitz continuous functions. We do not require loss functions to be smooth here. It shows that the excess generalization bound can decay as fast as O(1/(nβ)) if we solve the optimization problem to a sufficient accuracy, which is much better than the generalization bound O(1/√(nβ)) in Charles & Papailiopoulos (2018). Recall the analysis in Charles & Papailiopoulos (2018) requires Assumptions 2, 3 and a further assumption on the boundedness of loss functions. The proof is given in Section C. Theorem 4. Let Assumptions 2, 3 hold and wS = A(S). Then the following inequality holds\nE [ F (wS)− F̂S ] ≤ 2G2/(nβ) + G ( E [ FS(wS)− F̂S ] )1/2/√(2β)." }, { "heading": "4 APPLICATIONS", "text": "In this section, we apply Theorem 1 to different stochastic optimization algorithms such as stochastic gradient descent, randomized coordinate descent, and stochastic variance-reduced optimization.
In particular, we study the number of stochastic gradient evaluations required to achieve a prescribed generalization bound, which is summarized in Table 1. We always assume L ≤ nβ/4 in this section." }, { "heading": "4.1 STOCHASTIC GRADIENT DESCENT", "text": "We need some notations to state results on SGD. Specifically, denote by w1 ∈ W an initial point of SGD. At the t-th iteration, we first randomly select an index it ∼ unif[n], and then update {wt}t by wt+1 = wt − ηt∇f(wt; zit), (4.1) where {ηt}t is a sequence of positive step sizes and unif[n] denotes the uniform distribution over [n]. The proof of Theorem 5 is given in Appendix D.1.

Theorem 5. Let Assumptions 1, 2 hold with L ≤ nβ/4. Let A be SGD with the step size sequence ηt = (2t+1)/(2β(t+1)^2). Then E[F(wT+1)] − F(w∗) = O(1/(nβ) + 1/(Tβ^3)). We can take O(n/β^2) stochastic gradient evaluations to get excess generalization bounds O(1/(nβ)).

Remark 5. We compare Theorem 5 with the recent generalization analysis of SGD under the PL condition. Based on pointwise hypothesis stability analysis and the optimization error bound in Karimi et al. (2016), it was shown with probability at least 1 − δ (Charles & Papailiopoulos, 2018) that

F(wT+1) − F(w∗) = O(1/√(nβδ) + 1/(T^{1/4}β^{3/4}δ^{1/2})). (4.2)

The above bound indicates that O(n^2/β) stochastic gradient evaluations are needed to get the excess generalization bound O(1/√(nβ)). Based on the uniform stability bound in Hardt et al. (2016) and the optimization error bound in Karimi et al. (2016), it was shown in Yuan et al. (2019) that

E[F(wT+1)] − F(w∗) = O(n^{-1}(βT)^{(L/β)/(1+L/β)}) + O(1/(Tβ^2)). (4.3)

By taking an optimal T = n^{(1+L/β)/(1+2L/β)} β^{-(2+3L/β)/(1+2L/β)} (ignoring a constant factor) to balance the above two terms, we derive E[F(wT+1)] − F(w∗) = O(n^{-(1+L/β)/(1+2L/β)} β^{-(L/β)/(1+2L/β)}). If L/β is moderately large, then this bound quickly becomes E[F(wT+1)] − F(w∗) = O(1/√(nβ)). 
With high probability at least 1 − δ, it was shown that SGD with the step size ηt = c/((t+2) log(t+2)) gets the bound F(wT+1) − FS(wT+1) = O(√(c log T/(nδ))) (Zhou et al., 2018b). However, it is not clear how the optimization errors decay with such step sizes. Typically, c should be of the order O(1/β), as shown in Karimi et al. (2016), and therefore the stability analysis in Zhou et al. (2018b) can at best achieve the generalization bound O(√(log T/(nβ))). To summarize, the existing stability analyses generally imply the generalization bound O(1/√(nβ)) for SGD in learning with gradient-dominated objectives (Charles & Papailiopoulos, 2018; Zhou et al., 2018b; Yuan et al., 2019), which is significantly improved to O(1/(nβ)) in our paper by the refined stability analysis. It is worth mentioning that, in this comparison, we have used the same optimization error bounds in Karimi et al. (2016), and the analysis in Charles & Papailiopoulos (2018); Zhou et al. (2018b); Yuan et al. (2019) requires a bounded gradient assumption and a bounded loss assumption, which are removed in our analysis.

The above iteration complexity in Theorem 5 can be further improved if we impose a restricted secant inequality (Karimi et al., 2016) on FS, which has been considered for non-convex optimization, e.g., optimizing neural networks (Li & Yuan, 2017). This is a slightly stronger assumption than the PL condition, as shown in Karimi et al. (2016).

Definition 4 (Restricted Secant Inequality). We say FS : W ↦ R satisfies the restricted secant inequality with parameter β if E[⟨w − w(S), ∇FS(w)⟩] ≥ βE[‖w − w(S)‖_2^2] for all w ∈ W.

Theorem 6. Assume FS satisfies the restricted secant inequality with parameter β. Let Assumption 1 hold with L ≤ nβ/4. Let A be SGD with ηt = 1/(β(t+1)). Then one can take O(n/β) stochastic gradient evaluations to achieve the excess generalization bound O(1/(nβ)).

Below we apply Theorem 1 to establish fast generalization bounds in an interpolation setting. 
Our analysis shows that interpolation actually boosts SGD by achieving an exponential convergence of testing errors, which cannot be derived from the bound (3.4) in Charles & Papailiopoulos (2018).

Theorem 7. Let Assumptions 1, 2 hold with L ≤ nβ/4, and E[F̂S] = 0. Let A be SGD with ηt = β/L^2. Then E[F(wT+1)] ≤ (L/(2β))(1 − β^2/L^2)^T E[FS(w1)]. We can take O(β^{-2} log(1/(βε))) stochastic gradient evaluations to achieve the generalization bound O(ε) for any ε > 0.

The above linear convergence does not contradict existing minimax lower bounds, where the benefit of interpolation is not considered. The proofs for Theorems 6, 7 are given in Appendix D.1.

Remark 6. We discuss some recent work on error bounds in low-noise conditions. Optimization errors of SGD were studied for general non-convex objectives (Vaswani et al., 2019; Ma et al., 2018) and gradient-dominated objectives (Bassily et al., 2018). For binary classification problems with the specific squared loss, it was shown that SGD achieves an exponential convergence of testing classification errors under a margin condition, i.e., positive and negative classes are separated by a margin that is strictly positive (Pillaud-Vivien et al., 2018). This was extended to general convex loss functions under the same margin condition (Nitanda & Suzuki, 2019). These discussions consider regularized objective functions (Pillaud-Vivien et al., 2018; Nitanda & Suzuki, 2019), which are strongly convex. The exponential convergence in Pillaud-Vivien et al. (2018); Nitanda & Suzuki (2019) was established for the testing classification errors, i.e., the 0-1 loss. As a comparison, we establish an exponential convergence for the testing errors measured by the loss functions used in training. In addition, the exponential convergence in Pillaud-Vivien et al. (2018); Nitanda & Suzuki (2019) comes into effect only after a sufficiently large number of iterations, which is not required in Theorem 7."
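To make the interpolation result concrete, the following sketch is our own illustration (not the paper's experiment): it runs the SGD update (4.1) on a synthetic interpolating least-squares problem, where FS satisfies the PL condition with β = σmin(ΣS) (Example 2 with the identity activation). Features are normalized to unit norm so that each per-sample loss is 2-smooth, and the constant step size ηt = β/L^2 from Theorem 7 is used; the empirical risk should then decay geometrically, matching the factor (1 − β^2/L^2)^T in Theorem 7.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm features: each f(.; z_i) is 2-smooth
w_star = rng.standard_normal(d)
y = X @ w_star                                   # interpolation setting: hat(F)_S = 0 is attainable

Sigma = X.T @ X / n                              # empirical covariance Sigma_S
beta = np.linalg.svd(Sigma, compute_uv=False)[-1]  # PL parameter sigma_min(Sigma_S)
L = 2.0                                          # smoothness of each per-sample loss
eta = beta / L**2                                # constant step size from Theorem 7

def risk(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(d)
for t in range(3000):
    i = rng.integers(n)                          # i_t ~ unif[n]
    w -= eta * 2.0 * (X[i] @ w - y[i]) * X[i]    # update (4.1) for f(w; z) = (<w, x> - y)^2

print(risk(np.zeros(d)), risk(w))
```

With these parameters the training risk (which here coincides with the noiseless population risk on the sampled design) drops by many orders of magnitude, illustrating the linear convergence claimed in Theorem 7.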
}, { "heading": "4.2 RANDOMIZED COORDINATE DESCENT", "text": "Randomized coordinate descent (RCD) is an efficient optimization algorithm particularly useful for high-dimensional learning problems (Nesterov, 2012). At each iteration, it first randomly selects a single coordinate it ∈ {1, . . . , d}, and then performs the update along the it-th coordinate as wt+1 = wt − ηt∇itFS(wt)eit, where ∇iFS denotes the derivative of FS w.r.t. the i-th coordinate and ei is a vector in Rd with the i-th coordinate being 1 and all other coordinates being 0.

Theorem 8. Let Assumptions 1 and 2 hold with L ≤ nβ/4. Let A be RCD with ηt = 1/L. Then E[F(wT+1)] − F(w∗) = O(1/(nβ) + (1/β)(1 − β/(dL))^T). We take O((d log n)/β) stochastic gradient evaluations to get excess generalization bounds O(1/(nβ)). If E[F̂S] = 0, we take O(β^{-1}d log(1/(βε))) stochastic gradient evaluations to get generalization bounds O(ε) for any ε > 0.

The detailed proof of the above theorem is given in Appendix D.2. As indicated in Remark 1, the discussion in Charles & Papailiopoulos (2018) can only imply the generalization bound O(1/√(nβ))." }, { "heading": "4.3 STOCHASTIC VARIANCE-REDUCED OPTIMIZATION", "text": "SGD needs a diminishing step size due to the inherent variance of stochastic gradients, which generally yields a sublinear convergence rate (Bottou et al., 2018). Recently, there has been a large amount of work to accelerate SGD by using different gradient estimates with a reduced variance (Johnson & Zhang, 2013; Xiao & Zhang, 2014; Zhang et al., 2013; Allen-Zhu & Hazan, 2016; Fang et al., 2018; Wang et al., 2019; Nguyen et al., 2017; Zhou et al., 2018a; Schmidt et al., 2017; Defazio et al., 2014; Reddi et al., 2016). This class of algorithms proceeds in epochs. Let w̃0 be an initialization point. At the beginning of the s-th epoch, we set a reference point w0 = w̃s−1, draw a batch Ĩs ⊆ [n] and compute v0 = ∇fĨs(w0), where we denote fI(w) = (1/|I|)∑i∈I f(w; zi) for I ⊆ [n] and |I| is the cardinality of I. 
The batch Ĩs can be equal to [n] (Johnson & Zhang, 2013; Xiao & Zhang, 2014; Wang et al., 2019; Reddi et al., 2016) or drawn with replacement according to the uniform distribution over [n] (Lei et al., 2017; Fang et al., 2018). Then we proceed with ms inner iterations by using gradient estimators with reduced variances. At the t-th inner iteration, we first draw a batch It ⊆ [n] from the uniform distribution over [n]. The original SVRG (Johnson & Zhang, 2013; Reddi et al., 2016; Xiao & Zhang, 2014) uses the gradient estimator (we omit the dependency on s)

vt = ∇fIt(wt) − ∇fIt(w0) + v0. (4.4)

Recently, a different update of the gradient estimator was proposed (Nguyen et al., 2017; Fang et al., 2018):

vt = ∇fIt(wt) − ∇fIt(wt−1) + vt−1. (4.5)

An important observation is that the variance of vt diminishes to zero as we approach the minimum, which allows us to update the iterate with a constant step size wt+1 = wt − ηvt (Johnson & Zhang, 2013). The framework of stochastic variance-reduced optimization is described in Algorithm 1 in Appendix D.3. The following theorem gives generalization bounds O(1/(nβ)) for stochastic variance-reduced optimization, which significantly improves the bound O(1/√(nβ)) based on (3.4). The proof is given in Appendix D.3.

Theorem 9. Let Assumptions 1 and 2 hold with L ≤ nβ/4. Let A be either the SARAH in Nguyen et al. (2017) or the SpiderBoost in Wang et al. (2019). We can take O((n + 1/β^2) log n) stochastic gradient evaluations to get excess generalization bounds O(1/(nβ)). If E[F̂S] = 0, we take O((n + 1/β^2) log(1/(βε))) stochastic gradient evaluations to get generalization bounds O(ε) for any ε > 0.

As compared to SGD (Section 4.1), Theorem 9 shows that SARAH/SpiderBoost requires significantly fewer iterations to achieve the same testing errors. This shows a clear advantage of stochastic variance-reduced optimization over SGD in generalization, not just in training. 
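The variance-reduction effect behind these rates can be checked directly: near a reference point, the SVRG-type estimator (4.4) deviates from the full gradient far less than a plain stochastic gradient does. The following is a minimal sketch on a synthetic least-squares problem (our own illustration; the helper names grad_i and full_grad are ours, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the per-sample loss f(w; z_i) = (<w, x_i> - y_i)^2."""
    return 2.0 * (X[i] @ w - y[i]) * X[i]

def full_grad(w):
    return 2.0 * X.T @ (X @ w - y) / n

w0 = np.linalg.lstsq(X, y, rcond=None)[0]   # reference point (near a minimizer of F_S)
v0 = full_grad(w0)                           # v_0: full gradient at the reference point
wt = w0 + 0.01 * rng.standard_normal(d)      # current iterate, close to w0

g = full_grad(wt)
# Mean squared deviation from the full gradient, with |I_t| = 1:
mse_plain = np.mean([np.sum((grad_i(wt, i) - g) ** 2) for i in range(n)])
mse_svrg = np.mean([np.sum((grad_i(wt, i) - grad_i(w0, i) + v0 - g) ** 2)
                    for i in range(n)])      # estimator (4.4)
print(mse_svrg < mse_plain)
```

Here the estimator (4.4) is nearly unbiased with variance controlled by ‖wt − w0‖, so its deviation is orders of magnitude smaller than that of the plain stochastic gradient, which is exactly what permits the constant step size used in Algorithm 1.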
Other than SARAH and SpiderBoost, we also develop generalization bounds for the SVRG in Reddi et al. (2016), the SCSG in Lei et al. (2017) (Theorem D.3) and the SNVRG-PL in Zhou et al. (2018a) (Theorem D.4)." }, { "heading": "5 SIMULATIONS AND CONCLUSIONS", "text": "Simulations. We report some preliminary experiments to support our theory. We consider the dataset IJCNN available from the LIBSVM website (Chang & Lin, 2011) and report the average of experimental results over 25 repetitions. In our first experiment, we aim to check how the condition σmax(ΣS)/σmin(ΣS) ≤ n/4 would be satisfied in practice. To this aim, we randomly pick a subset I ⊂ {1, 2, . . . , n} and build an empirical covariance matrix ΣI = (1/|I|)∑i∈I xixi⊤, where |I| denotes the cardinality of I. Then we compute the term κI := σmax(ΣI)/(σmin(ΣI)|I|). Figure 1 plots κI as a function of |I|. It is clear that the condition κI ≤ 1/4 is violated if |I| is small. As |I| increases, κI decreases and can be as small as 10^{-3}. Then the condition κI ≤ 1/4 holds trivially for sufficiently large n. Theorem 1 implies that overfitting would never happen for learning with gradient-dominated functions. Our second experiment aims to verify this phenomenon. We consider a generalized linear model for binary classification with the loss function f(w; z) = (ℓ(w⊤x) − y)^2, where ℓ is the logistic link function ℓ(a) = (1 + exp(−a))^{-1}. It was shown that the corresponding objective function is gradient-dominated (Foster et al., 2018). We use 80 percent of the dataset for training and reserve the remaining 20 percent for testing. We apply SGD with the step size ηt = 1/(1 + 0.001t) and compute the testing error of {wt} on the testing dataset. In Figure 2, we plot the testing errors versus the number of passes (the iteration number divided by the sample size). It is clear that the testing error continues to decrease along the learning process, and there is no overfitting even after 100 passes over the dataset. 
This is well consistent with Theorem 1.\nConclusions. We study stochastic optimization under the PL condition. We show that the generalization errors can be bounded byO(1/(nβ)) plus the convergence rate of algorithms. An observation is that the optimization always helps in generalization under the PL condition. Our analysis based on a weak on-average stability measure removes the bounded gradient assumption in the literature, and can imply significantly better bounds. In particular, we show how the interpolation accelerates the generalization. Our study relies on an essential PL condition on the objective function. While this assumption is widely used in the non-convex learning setting, it would be very interesting to extend the discussions here to general non-convex objective functions." }, { "heading": "ACKNOWLEDGMENTS", "text": "The work of Yunwen Lei is supported by the National Natural Science Foundation of China (Grant No. 61806091) and the Alexander von Humboldt Foundation. The work of Yiming Ying is supported by NSF grants IIS-1816227 and IIS-2008532." }, { "heading": "A STABILITY AND GENERALIZATION", "text": "We first give the definition of pointwise hypothesis stability. For any i ∈ [n], denote S\\zi = {z1, . . . , zi−1, zi+1, . . . , zn}. Definition 5 (Pointwise Hypothesis Stability). We say a randomized algorithm A has pointwise hypothesis stability if for all i ∈ [n] there holds ES,A\n[∣∣f(A(S); zi)− f(A(S\\zi); zi)∣∣] ≤ . Theorem A.1 establishes the key connection between the generalization and various stability measures. Part (a) and part (b) show that the algorithm with either uniform stability or pointwise hypothesis stability generalizes well to testing examples (Bousquet & Elisseeff, 2002). Initially, they were developed for deterministic algorithms (Bousquet & Elisseeff, 2002), which were then extended to the setting of randomized algorithms (Elisseeff et al., 2005). 
Part (c) shows the connection between the generalization and the on-average stability (Shalev-Shwartz et al., 2010). Note part (b) involves a square root of 1/δ instead of a log(1/δ).\nTheorem A.1 (Generalization by Stability). Let A be a randomized algorithm. (a) If A has uniform stability , then ∣∣ES,A[FS(A(S))− F (A(S))]∣∣ ≤ .\n(b) Let M > 0. If A has pointwise hypothesis stability and 0 ≤ f(w; z) ≤M for all w ∈ W and z ∈ Z . Then for all δ ∈ (0, 1) with probability at least 1− δ\nF (A(S)) ≤ FS(A(S)) + (M2 + 12Mn\nnδ\n) 1 2\n. (A.1)\n(c) If A has on-average stability , then ES,A [ F (A(S))− FS(A(S)) ] ≤ .\nProof. The proof of Part (a) can be found in Hardt et al. (2016, Theorem 2.2). Part (b) was first proved for deterministic algorithms (Bousquet & Elisseeff, 2002, Theorem 11), and then extended to randomized algorithms (Elisseeff et al., 2005, Theorem 12). We prove Part (c) here due to its simplicity. Since zi and z̃i are drawn from the same distribution, we know\nES,A [ F (A(S))− FS(A(S)) ] = 1\nn n∑ i=1 ES,S̃,A [ F (A(S(i)))− FS(A(S)) ] = 1\nn n∑ i=1 ES,S̃,A [ f(A(S(i)); zi)− f(A(S); zi) ] ,\nwhere the last identity holds since zi is independent of A(S(i)). The proof is complete by noting the definition of on-average stability." }, { "heading": "B PROOF OF THEOREM 1", "text": "In this section, we prove Theorem 1. We begin our analysis with some useful properties of smooth functions. If g is L-smooth, we have the following self-bounding property (Srebro et al., 2010)\n‖∇g(w)‖22 ≤ 2L ( g(w)− inf\nw′ g(w′)\n) , ∀w ∈ W (B.1)\nand the following elementary inequality for all w, w̃ ∈ W (Nesterov, 2012)\ng(w) ≤ g(w̃) + 〈∇g(w̃),w − w̃〉+ L‖w − w̃‖ 2 2\n2 . (B.2)\nIn particular, if g is further nonnegative, then ‖∇g(w)‖22 ≤ 2Lg(w), ∀w ∈ W. (B.3)\nThe following lemma follows directly from the self-bounding property of smooth loss functions.\nLemma B.1. Assume F is L-smooth. Then (w can depend on S) E[‖∇F (w)‖22] ≤ 2LE [ F (w)− F̂S ] .\nProof. 
Recall w∗ = arg minw∈W F (w). According to the self-bounding property (B.1) and the definition of w∗ we know\nE[‖∇F (w)‖22] ≤ 2LE [ F (w)− F (w∗) ] = 2LE [ F (w)− FS(w∗) ] ≤ 2LE [ F (w)− F̂S ] ,\nwhere we have used E[FS(w∗)] = F (w∗) since w∗ is independent of S, and F̂S ≤ FS(w∗) due to the definition of F̂S . The proof is complete.\nIn the following lemma, we derive the on-average stability bounds under the PL condition. Recall for any w, we denote by w(S) the Euclidean projection of w onto the set of global minimizers of FS inW . Lemma B.2. If Assumptions 1, 2 hold, then A has on-average stability satisfying\n≤ 2L nβ\n( E[F̂S ] + E[F (w(S)S )] ) + E [ F (wS)− F (w(S)S ) ] + E [ F̂S − FS(wS) ] .\nProof. Let S̃ = {z̃1, . . . , z̃n} be drawn independently from ρ. For each i ∈ [n], let S(i) be defined in Definition 2. For each i ∈ [n], we denote wS(i) = A(S(i)) and w (S(i)) S(i) the projection of wS(i) onto the set of global minimizer of FS(i) . We decompose f(wS(i) ; zi)− f(wS ; zi) as follows\nf(wS(i) ; zi)− f(wS ; zi) = ( f(wS(i) ; zi)− f(w (S(i)) S(i) ; zi) ) + ( f(w (S(i))\nS(i) ; zi)− f(w(S)S ; zi)\n) + ( f(w\n(S) S ; zi)− f(wS ; zi)\n) . (B.4)\nWe now address the above three terms separately. We first address f(w(S (i)) S(i) ; zi) − f(w(S)S ; zi). 
According to the definition of FS , S, S(i), we know\nf(w (S(i))\nS(i) ; zi) = nFS(w\n(S(i)) S(i) )− nFS(i)(w (S(i)) S(i) ) + f(w (S(i)) S(i) ; z̃i).\nSince zi and z̃i follow from the same distribution, we know E[f(w(S (i)) S(i) ; z̃i)] = E[f(w(S)S ; zi)] and further get\nE [ f(w (S(i))\nS(i) ; zi)\n] = nE [ FS(w (S(i)) S(i) ) ] − nE [ FS(i)(w (S(i)) S(i) ) ] + E [ f(w (S) S ; zi) ] .\nIt then follows that E [ f(w (S(i))\nS(i) ; zi)− f(w(S)S ; zi)\n] = nE [ FS(w (S(i))\nS(i) )− FS(i)(w\n(S(i)) S(i) ) ]\n= nE [ FS(w (S(i))\nS(i) )− inf w∈W FS(w)\n] , (B.5)\nwhere we have used the following identity due to the symmetry between zi and z̃i\nE[FS(i)(w (S(i)) S(i) )] = E[F̂S ] = E\n[ inf\nw∈W FS(w)\n] .\nBy the PL condition of FS , it then follows from (B.5) that (in our assumption of PL condition, w may depend on S. This was also imposed in the literature (Yuan et al., 2019; Charles & Papailiopoulos, 2018; Zhou et al., 2018b). Indeed, the PL condition was often shown for empirical functions FS)\nE [ f(w (S(i))\nS(i) ; zi)− f(w(S)S ; zi)\n] ≤ n 2β E [ ‖∇FS(w(S (i)) S(i) )‖22 ] . (B.6)\nAccording to the definition of w(S (i))\nS(i) we know ∇FS(i)(w\n(S(i)) S(i) ) = 0 and therefore ((a + b)2 ≤\n2a2 + 2b2)\n‖∇FS(w(S (i))\nS(i) )‖22 = ∥∥∥∇FS(i)(w(S(i))S(i) )− 1n∇f(w(S(i))S(i) ; z̃i) + 1n∇f(w(S(i))S(i) ; zi)∥∥∥22 ≤ 2 n2 ‖∇f(w(S (i)) S(i) ; z̃i)‖22 + 2 n2 ‖∇f(w(S (i)) S(i) ; zi)‖22\n≤ 4L n2 f(w (S(i)) S(i) ; z̃i) + 4L n2 f(w (S(i)) S(i) ; zi), (B.7)\nwhere we have used the self-bounding property of smooth loss functions (B.3). Since zi and z̃i follow from the same distribution, we know\nE[f(w(S (i)) S(i) ; z̃i)] = E[f(w(S)S ; zi)], E[f(w (S(i)) S(i) ; zi)] = E[f(w(S)S ; z̃i)].\nIt then follows that E [ ‖∇FS(w(S (i)) S(i) )‖22 ] ≤ 4L n2 E[f(w(S)S ; zi)] + 4L n2 E[f(w(S)S ; z̃i)], which, combined with (B.6), gives\nE [ f(w (S(i))\nS(i) ; zi)− f(w(S)S ; zi) ] ≤ 2L nβ ( E[f(w(S)S ; zi)] + E[f(w (S) S ; z̃i)] ) .\nTaking a summation of the above inequality for i = 1, . . . 
, n, we get n∑ i=1 E [ f(w (S(i)) S(i) ; zi)− f(w(S)S ; zi) ] ≤ 2L β ( E[F̂S ] + E[FS̃(w (S) S )] ) . (B.8)\nWe then address f(wS(i) ; zi) − f(w (S(i)) S(i) ; zi). Since wS(i) and w (S(i)) S(i) are independent of zi, we know\nE [ f(wS(i) ; zi)− f(w (S(i)) S(i) ; zi) ] = E [ F (wS(i))− F (w (S(i)) S(i) ) ] = E [ F (wS)− F (w(S)S ) ] , (B.9)\nwhere we have used the symmetry between zi and z̃i.\nFinally, we address f(w(S)S ; zi)− f(wS ; zi). By the definition of w (S) S we know\nn∑ i=1 ( f(w (S) S ; zi)− f(wS ; zi) ) = n ( F̂S − FS(wS) ) . (B.10)\nPlugging (B.8), (B.9) and the above inequality back into (B.4), we derive n∑ i=1 E [ f(wS(i) ; zi)− f(wS ; zi) ] ≤ 2L β ( E[F̂S ] + E[FS̃(w (S) S )] ) +\nnE [ F (wS)− F (w(S)S ) ] + nE [ F̂S − FS(wS) ] .\nThe proof is complete by recalling the definition of on-average stability and E[FS̃(w (S) S )] = E[F (w(S)S )].\nWe further require a lemma relating the convergence in terms of function values to the convergence in terms of models. This shows that the PL condition is stronger than a quadratic growth condition (Karimi et al., 2016). Lemma B.3 (Karimi et al. 2016). If FS satisfies the PL condition with parameter β > 0. Then for all w ∈ W we have\nE [ FS(w)− FS(w(S)) ] ≥ 2βE[‖w −w(S)‖22]. (B.11)\nWe are now in a position to prove Theorem 1.\nProof of Theorem 1. Plugging the on-average stability established in Lemma B.2 back into Part (c) of Theorem A.1, we derive\nE [ F (wS)− FS(wS) ] ≤ 2L nβ ( E[F̂S ] + E[F (w(S)S )] ) +\nE [ F (wS)− F (w(S)S ) ] + E [ F̂S − FS(wS) ] , (B.12)\nfrom which we derive E [ F (w\n(S) S )− F̂S ] ≤ 2L nβ ( E[F̂S ] + E[F (w(S)S )] ) . (B.13)\nBy (B.2), we know the following inequality for all γ > 0\nF (wS)− F (w(S)S ) ≤ 〈∇F (w (S) S ),wS −w (S) S 〉+\nL 2 ‖wS −w(S)S ‖ 2 2\n≤ ‖∇F (w(S)S )‖2‖wS −w (S) S ‖2 +\nL 2 ‖wS −w(S)S ‖ 2 2\n≤ 1 4γ ‖∇F (w(S)S )‖ 2 2 +\n( γ + L\n2\n) ‖wS −w(S)S ‖ 2 2,\nwhere we have used the Cauchy-Schwartz inequality. 
This together with Lemma B.1 with w = w(S)S implies that\nE[F (wS)− F (w(S)S )] ≤ L 2γ E [ F (w (S) S )− F̂S ] + ( γ + L 2 ) E[‖wS −w(S)S ‖ 2 2].\nPlugging (B.13) into the above inequality, we get E [ F (wS)− F (w(S)S ) ] ≤ L\n2γ\n2L\nnβ\n( E[F̂S ] + E[F (w(S)S )] ) + ( γ + L\n2\n) E[‖wS −w(S)S ‖ 2 2].\nTaking γ = L/2, we then get E [ F (wS)− F (w(S)S ) ] ≤ 2L nβ ( E[F̂S ] + E[F (w(S)S )] ) + LE[‖wS −w(S)S ‖ 2 2].\nPlugging the above inequality back into (B.12), we derive the following inequality E [ F (wS)− FS(wS) ] ≤ 4L nβ ( E[F̂S ] + E[F (w(S)S )] ) + LE[‖wS −w(S)S ‖ 2 2] + E [ F̂S − FS(wS) ] .\nIt then follows that E [ F (wS)− F̂S ] ≤ 4L nβ ( E[F̂S ] + E[F (w(S)S )] ) + LE[‖wS −w(S)S ‖ 2 2]. (B.14)\nSince L ≤ nβ/4, it follows from (B.13) that E [ F (w\n(S) S )− F̂S\n] ≤ 1\n2\n( E[F̂S ] + E[F (w(S)S )] ) and therefore\nE [ F (w\n(S) S ) ] ≤ 3E[F̂S ].\nWe can plug the above inequality back into (B.14) and derive\nE [ F (wS)− F̂S ] ≤ 16LE[F̂S ]\nnβ + LE[‖wS −w(S)S ‖ 2 2]. (B.15)\nThe stated bound then follows from (B.11). The proof is complete.\nOur analysis in the proof of Theorem 1 actually gives\nE [ F (wS)− F̂S ] ≤ 8LE[F̂S ] nβ − 2L\n+ LE [ FS(wS)− F̂S ] 2β .\nSince we assume E[F̂S ] = 0 in Corollary 2, we only need the condition L < nβ/2 to get Corollary 2." }, { "heading": "C PROOF OF THEOREM 3 AND THEOREM 4", "text": "In this section, we present the proof of Theorem 3 and Theorem 4.\nProof of Theorem 3. Let w̃ be the projection of w(S (i))\nS(i) onto the set of global minimizer of FS . Then\nby the quadratic growth condition, we know E [ FS(w (S(i))\nS(i) )− F̂S\n] ≥ β 2 E [∥∥w(S(i)) S(i) − w̃ ∥∥2 2 ] .\nThis together with (B.5) and non-negativity of f implies nβ 2 E [∥∥w(S(i)) S(i) − w̃ ∥∥2 2 ] ≤ E [ f(w (S(i)) S(i) ; zi) ] = E [ F (w (S(i)) S(i) ) ] = E [ F (w (S) S ) ] , (C.1)\nwhere we have used the symmetry between S and S(i). By the realizability condition, we know almost surely that\nf(w (S) S ; zi) = f(w̃; zi) = 0\nand ∇f(w̃; zi) = 0. 
It then follows from the smoothness assumption that E [ f(w (S(i))\nS(i) ; zi)− f(w(S)S ; zi)\n] = E [ f(w (S(i))\nS(i) ; zi)− f(w̃; zi) ] ≤ E [ 〈w(S (i))\nS(i) − w̃,∇f(w̃; zi)〉+\nL 2 ‖w(S (i)) S(i) − w̃‖22 ] = E\n[L 2 ‖w(S (i)) S(i) − w̃‖22 ] ≤ LE[F (w(S)S )] nβ .\nWe can plug (B.9), (B.10) and the above inequality back into (B.4), and derive the following bound on the on-average stability\n≤ LE[F (w(S)S )]\nnβ + E\n[ F (wS)− F (w(S)S ) ] + E [ F̂S − FS(wS) ] .\nWe then can analyze analogously to the proof of Theorem 1 but using the above stability bound and get the stated generalization bound. The proof is complete.\nProof of Theorem 4. Similar to (B.7) but using the boundedness of gradients, we know\n‖∇FS(w(S (i))\nS(i) )‖22 ≤\n4G2\nn2 .\nWe can plug this inequality into (B.6) and derive E [ f(w (S(i))\nS(i) ; zi)− f(w(S)S ; zi)\n] ≤ n\n2β\n4G2\nn2 =\n2G2\nnβ .\nTaking a summation of the above inequality gives n∑ i=1 E [ f(w (S(i)) S(i) ; zi)− f(w(S)S ; zi) ] ≤ 2G2/β. (C.2) Plugging (C.2), (B.9) and (B.10) back into (B.4), we derive the following inequality n∑ i=1 E [ f(wS(i) ; zi)− f(wS ; zi) ] ≤ 2G 2 β + nE [ F (wS)− F (w(S)S ) ] + nE [ F̂S − FS(wS)\n] ≤ 2G 2\nβ + nGE\n[ ‖wS −w(S)S ‖2 ] + nE [ F̂S − FS(wS) ] ,\nwhere in the last step we have used the inequality F (wS) − F (w(S)S ) ≤ G‖wS − w (S) S ‖2 due to the boundedness of gradients. According to the definition of on-average stability, we know that the on-average stability of A satisfies\n≤ 2G 2\nnβ +GE\n[ ‖wS −w(S)S ‖2 ] + E [ F̂S − FS(wS) ] ≤ 2G 2 nβ + G ( E [ FS(wS)− F̂S ]) 1 2 √ 2β + E [ F̂S − FS(wS) ] ,\nwhere we have used Lemma B.3. According to Part (c) of Theorem A.1, it follows that\nE [ F (wS)− FS(wS) ] ≤ 2G 2 nβ + G ( E [ FS(wS)− F̂S ]) 1 2 √ 2β + E [ F̂S − FS(wS) ] .\nThe stated bound then follows directly. The proof is complete." 
}, { "heading": "D PROOFS ON APPLICATIONS", "text": "In this section, we prove generalization bounds for various stochastic optimization algorithms.\nD.1 STOCHASTIC GRADIENT DESCENT\nWe consider here SGD. In the following proposition, we establish the variance of stochastic gradients for SGD under the PL condition. The variance was also studied in a general nonconvex setting (Lei et al., 2020a). Proposition D.1. Let Assumptions 1, 2 hold. Let {wt}t be the sequence produced by SGD with step size sequence {ηt}t∈N. If there exists t0 ∈ N such that ηt ≤ β/L2 for all t ≥ t0, then\nE[‖∇f(wt; zit)‖22] ≤ 2Lmax{E[FS(wt0)], 2E[F̂S ]} ∀t ≥ t0.\nProof. By (B.2) and the update (4.1), we know\nFS(wt+1) ≤ FS(wt) + 〈wt+1 −wt,∇FS(wt)〉+ L‖wt+1 −wt‖22\n2\n= FS(wt)− ηt〈∇f(wt; zit),∇FS(wt)〉+ Lη2t ‖∇f(wt; zit)‖22\n2\n≤ FS(wt)− ηt〈∇f(wt; zit),∇FS(wt)〉+ L2η2t f(wt; zit), where we have used (B.3). Taking expectations on both sides we get the following inequality for all t ≥ t0\nE[FS(wt+1)] ≤ E[FS(wt)]− ηtE[‖∇FS(wt)‖22] + L2η2tE[f(wt; zit)] ≤ E[FS(wt)]− 2ηtβE[FS(wt)− F̂S ] + ηtβE[FS(wt)], (D.1)\nwhere we have used the PL condition and ηt ≤ β/L2 in the last step. It then follows the following inequality for all t ≥ t0\nE[FS(wt+1)] ≤ (1− ηtβ)E[FS(wt)] + ηtβ · 2E[F̂S ] ≤ max { E[FS(wt)], 2E[F̂S ] } .\nApplying this inequality recursively, we derive E[FS(wt+1)] ≤ max{E[FS(wt0)], 2E[F̂S ]} ∀t ≥ t0. This together with (B.3) implies the following inequality for all t ≥ t0 E[‖∇f(wt; zit)‖22] ≤ 2LE[f(wt; zit)] ≤ 2Lmax{E[FS(wt0)], 2E[F̂S ]}. The proof is complete.\nWe now prove generalization bounds in Theorem 5. We denote B B̃ if there exist some constants c1 and c2 > 0 such that c1B ≤ B̃ ≤ c2B.\nProof of Theorem 5. Let t0 = bL2/β2c. It is clear that ηt ≤ β/L2 for all t ≥ t0. Let σ = 2Lmax{E[FS(w1)], . . . ,E[FS(wt0)], 2E[F̂S ]}. According to the self-bounding property (B.3) and Proposition D.1, we know that E[‖∇f(wt; zit)‖22] ≤ σ2 for all t ∈ N. 
The following optimization error bound was established in Karimi et al. (2016)\nE [ FS(wt+1)− F̂S ] ≤ Lσ 2\n2tβ2 . (D.2)\nWe can plug the above inequality into (3.2) with A(S) = wT+1, and get E [ F (wT+1)− F̂S ] ≤\n16LE [ F̂S ]\nnβ + L2σ2 4Tβ3 .\nSince E [ F̂S ] ≤ E [ FS(w ∗) ]\n= F (w∗), (D.3) we further get\nE [ F (wT+1) ] − F (w∗) ≤ 16LF (w ∗)\nnβ + L2σ2 4Tβ3 .\nBy taking T n/β2, we get E [ F (wT+1) − F̂S ] = O(1/(nβ)). This corresponds to O(n/β2) stochastic gradient evaluations. The proof is complete.\nLemma D.2. Assume FS satisfies the restricted secant inequality with parameter β. Let A be SGD with the step size sequence ηt = 1/(β(t+ 1)). Then there exists some σ ∈ R such that\nE [ ‖wT −w(S)T ‖ 2 2 ] ≤ σ2/(β2T ).\nProof of Lemma D.2. Analogous to the proof of Theorem 5, we can find σ ∈ R+ such that E[‖∇f(wt; zit)‖22] ≤ σ2 for all t ∈ N. Since w (S) t+1 is a projection of wt+1 onto the set of global minimizer of FS , we know ‖wt+1 −w(S)t+1‖22 ≤ ‖wt+1 −w (S) t ‖22 = ‖wt − ηt∇f(wt; zit)−w (S) t ‖22\n= ‖wt −w(S)t ‖22 + η2t ‖∇f(wt; zit)‖22 + 2ηt〈w (S) t −wt,∇f(wt; zit)〉.\nTaking an expectation and using E[‖∇f(wt; zit)‖22] ≤ σ2, we derive E [ ‖wt+1 −w(S)t+1‖22 ] ≤ E [ ‖wt −w(S)t ‖22 ] + η2t σ 2 + 2ηtE [ 〈w(S)t −wt,∇FS(wt)〉 ] ≤ E [ ‖wt −w(S)t ‖22 ] + η2t σ 2 − 2ηtβE [ ‖wt −w(S)t ‖22\n] = (1− 2ηtβ)E [ ‖wt −w(S)t ‖22 ] + η2t σ 2,\nwhere we have used the restricted secant inequality. For the step size ηt = 1/(β(t+ 1)), we have E [ ‖wt+1 −w(S)t+1‖22 ] ≤ t− 1 t+ 1 E [ ‖wt −w(S)t ‖22 ] + σ2 β2(t+ 1)2 .\nMultiplying both sides by t(t+ 1), we derive t(t+ 1)E [ ‖wt+1 −w(S)t+1‖22 ] ≤ (t− 1)tE [ ‖wt −w(S)t ‖22 ] + σ2\nβ2 .\nTaking a summation of the above inequality from t = 1 to T − 1 gives (T − 1)TE [ ‖wT −w(S)T ‖ 2 2 ] ≤ σ2(T − 1)/β2. The proof is complete.\nProof of Theorem 6. It was shown that functions satisfying restricted secant inequality with parameter β also satisfies the PL condition with parameter β/L (Karimi et al., 2016). 
Therefore (B.15) holds with β there replaced by β/L. According to Lemma D.2, we know E [ ‖wT −w(S)T ‖22 ] ≤ σ2/(β2T ). We can plug this inequality back into (B.15) with A(S) = wT+1, and get\nE [ F (wT+1) ] − F (w∗) = O (F (w∗) nβ + 1 β2T ) ,\nwhere we have used (D.3). By taking T n/β, we get E [ F (wT+1) ] −F (w∗) = O(1/(nβ)). This corresponds to O(n/β) stochastic gradient evaluations. The proof is complete.\nProof of Theorem 7. Let η = β/L2. According to the assumption E[F̂S ] = 0 and (D.1), we know E[FS(wt+1)] ≤ E[FS(wt)]− 2ηβE[FS(wt)] + ηβE[FS(wt)] = ( 1− ηβ ) E[FS(wt)]. (D.4) Applying this inequality recursively, we get E[FS(wT+1)] ≤ (1 − ηβ)TE[FS(w1)]. We can plug the above inequality back into (3.2) with A(S) = wT+1 and get E[F (wT+1)] ≤ LE[FS(wT+1)]\n2β ≤ L(1− β 2/L2)T 2β E [ FS(w1) ] ≤ L exp(−β 2T/L2) 2β E [ FS(w1) ] ,\nwhere we have used the elementary inequality 1− a ≤ exp(−a). (D.5) To achieve E[F (wT+1)] ≤ , we can take T such that exp ( − β2T/L2 ) β ⇐⇒ T β−2 log(1/(β )). The proof is complete.\nD.2 RANDOMIZED COORDINATE DESCENT\nWe prove here the generalization bounds for randomized coordinate descent. We further assume that the gradient is coordinate-wise Lipschitz continuous in the sense that\nFS(w + αei) ≤ FS(w) + α∇iFS(w) + Lα2/2, ∀α ∈ R,w ∈ Rd, i ∈ [d].\nProof of Theorem 8. According to Theorem 3 in Karimi et al. (2016), we know E [ FS(wT+1)− F̂S ] ≤ (\n1− β dL\n)T E [ FS(w1)− F̂S ] . (D.6)\nPlugging the above inequality back into (3.2) and using (D.3), we get\nE[F (wT+1)]− F (w∗) ≤ 16LE\n[ F̂S ]\nnβ +\nL\n2β\n( 1− β\ndL\n)T E[FS(w1)]\n= O (F (w∗)\nnβ\n) +O ( 1 β exp ( − βT dL )) ,\nwhere we have used (D.5). To achieve the excess generalization bounds O(1/(nβ)), we require T satisfying\nexp ( − βT dL ) n−1 ⇐⇒ T d log n β .\nIf E[F̂S ] = 0, then it follows from (3.2), (D.6) and (D.5) that\nE[F (wT+1)] ≤ L\n2β exp ( − Tβ dL ) E[FS(w1)].\nTo achieve the generalization bound , we require T satisfying exp ( − Tβ dL ) β ⇐⇒ T β−1d log 1/(β ). 
The proof is complete.\nD.3 STOCHASTIC VARIANCE-REDUCED OPTIMIZATION\nWe prove here generalization bounds for various stochastic variance-reduced optimization algorithms. We formulate the framework in Algorithm 1.\nAlgorithm 1: Stochastic Variance Reduced Optimization Input: step size η, initialization w̃0, {ms}\n1 for s = 1, 2, . . . do 2 set w0 = w̃s−1 3 draw a batch Ĩs ⊆ [n] 4 compute v0 = ∇fĨs(w0) 5 update w1 = w0 − ηv0 6 for t = 1, . . . ,ms − 1 do 7 draw a batch It ⊆ [n] 8 compute vt by either (4.4) or (4.5) 9 update wt+1 = wt − ηvt\n10 set w̃s as wis , where is is drawn according to a distribution on [ms] 11 choose the output from {w̃s} according to some strategy\nWe now consider the stochastic variance-reduced gradient descent (SVRG) (Reddi et al., 2016) and stochastically controlled stochastic gradient (SCSG) (Lei et al., 2017). Theorem D.3. Let Assumptions 1 and 2 hold with L ≤ nβ/4. Let A be either the SVRG in Reddi et al. (2016) or the SCSG in Lei et al. (2017). Then we can take O (( n + n 2 3 /β ) log n ) stochastic\ngradient evaluations to get excess generalization bounds O(1/(nβ)). Furthermore, if E[F̂S ] = 0, then we can take O (( n + n 2 3 /β ) log 1/(β ) ) stochastic gradient evaluations to achieve the gener-\nalization bound O( ) for any > 0. Proof. To achieve E[FS(A(S))− F̂S ] ≤ 2/n, it was shown that SVRG and SCSG requires O (( n+ n 2 3 /β ) log n ) stochastic gradient evaluations (Reddi et al., 2016; Lei et al., 2017). We plug this optimization error bound into Theorem 1 and get E[F (A(S))]− F (w∗) = O(1/(nβ)).\nWe now consider the case E[F̂S ] = 0. According to (3.2), to achieve generalization bound O( ), it suffices that E[F (A(S))−F̂S ] = O(β ). This can be achieved by takingO (( n+n 2 3 /β ) log 1/(β ) ) stochastic gradient evaluations (Reddi et al., 2016; Lei et al., 2017). 
The proof is complete.

We now present the proof of Theorem 9 on the behavior of the stochastic recursive gradient algorithm (SARAH) (Nguyen et al., 2017) and SpiderBoost (Wang et al., 2019).

Proof of Theorem 9. To achieve E[F_S(A(S)) − F̂_S] ≤ 2/n, it was shown that SARAH and SpiderBoost require O((n + 1/β^2) log n) stochastic gradient evaluations (Nguyen et al., 2017; Wang et al., 2019). We plug this optimization error bound into Theorem 1 and get E[F(A(S))] − F(w^*) = O(1/(nβ)).

We now consider the case E[F̂_S] = 0. According to (3.2), to achieve the generalization bound O(ε), it suffices that E[F(A(S)) − F̂_S] = O(βε). This can be achieved by taking O((n + 1/β^2) log(1/(βε))) stochastic gradient evaluations (Nguyen et al., 2017; Wang et al., 2019). The proof is complete.

Finally, we consider SNVRG-PL (Zhou et al., 2018a).

Theorem D.4. Let Assumptions 1 and 2 hold with L ≤ nβ/4. Let A be the SNVRG-PL in Zhou et al. (2018a). Then we can take O((n + √n/β) log^4 n) stochastic gradient evaluations to get the excess generalization bound O(1/(nβ)). Furthermore, if E[F̂_S] = 0, then we can take O((n + √n/β) log^4(1/(βε))) stochastic gradient evaluations to achieve the generalization bound O(ε) for any ε > 0.

Proof. To achieve E[F_S(A(S)) − F̂_S] ≤ 2/n, it was shown that SNVRG-PL requires O((n + √n/β) log^4 n) stochastic gradient evaluations (Zhou et al., 2018a). We plug this optimization error bound into Theorem 1 and get E[F(A(S))] − F(w^*) = O(1/(nβ)).

We now consider the case E[F̂_S] = 0. According to (3.2), to achieve the generalization bound O(ε), it suffices that E[F(A(S)) − F̂_S] = O(βε). This can be achieved by taking O((n + √n/β) log^4(1/(βε))) stochastic gradient evaluations (Zhou et al., 2018a). The proof is complete." }, { "heading": "E DISCUSSIONS OF EXAMPLES", "text": "In this section, we present some discussions on understanding the assumption L/β < n/2 in Theorem 2.

E.1 DISCUSSION OF EXAMPLE 1

We first give the definition of strong convexity.
For any differentiable function g : W ↦ R, we say g is σ-strongly convex if for any w, w′ ∈ W there holds

g(w′) ≥ g(w) + ⟨w′ − w, ∇g(w)⟩ + (σ/2) ‖w − w′‖_2^2.

Introduce g : R^n ↦ R_+ by g(v) = (1/n) ∑_{i=1}^n ℓ(v_i, y_i). Then the function F_S can be written as

F_S(w) = (1/n) ∑_{i=1}^n ℓ(⟨w, φ(x_i)⟩, y_i) = g(Aw),

where A = (φ(x_1), . . . , φ(x_n))^⊤ ∈ R^{n×m} is the matrix formed from the data. It is known that if g is σ_g-strongly convex, then F_S satisfies the PL condition (Karimi et al., 2016; Necoara et al., 2018)

F_S(w) − F̂_S ≤ (1/(2σ_g (σ′_min(A))^2)) ‖∇F_S(w)‖_2^2. (E.1)

Since ℓ is σ_ℓ-strongly convex, we know for any v, v′ ∈ R^n

g(v′) = (1/n) ∑_{i=1}^n ℓ(v′_i, y_i) ≥ (1/n) ∑_{i=1}^n ℓ(v_i, y_i) + (1/n) ∑_{i=1}^n ℓ′(v_i, y_i)(v′_i − v_i) + (σ_ℓ/(2n)) ∑_{i=1}^n (v_i − v′_i)^2 = g(v) + ⟨∇g(v), v′ − v⟩ + (σ_ℓ/(2n)) ‖v′ − v‖_2^2. (E.2)

That is, g is (σ_ℓ/n)-strongly convex. This together with (E.1) shows that

F_S(w) − F̂_S ≤ (n/(2σ_ℓ (σ′_min(A))^2)) ‖∇F_S(w)‖_2^2 = (1/(2σ_ℓ σ′_min(Σ_S))) ‖∇F_S(w)‖_2^2, (E.3)

where we have used

(1/n) (σ′_min(A))^2 = (1/n) σ′_min(A^⊤A) = σ′_min(Σ_S). (E.4)

For any v, v′ ∈ R^n, it follows from the L_ℓ-strong smoothness of ℓ that

‖∇g(v) − ∇g(v′)‖_2^2 = (1/n^2) ∑_{i=1}^n |ℓ′(v_i, y_i) − ℓ′(v′_i, y_i)|^2 ≤ (L_ℓ^2/n^2) ∑_{i=1}^n |v_i − v′_i|^2 = (L_ℓ^2/n^2) ‖v − v′‖_2^2.

That is, g is (L_ℓ/n)-smooth. It then follows that

‖∇F_S(w) − ∇F_S(w′)‖_2 = ‖A^⊤∇g(Aw) − A^⊤∇g(Aw′)‖_2 ≤ σ_max(A) ‖∇g(Aw) − ∇g(Aw′)‖_2 ≤ (L_ℓ σ_max(A)/n) ‖A(w − w′)‖_2 ≤ (L_ℓ σ_max^2(A)/n) ‖w − w′‖_2.

This together with (E.4) shows (with σ′_min replaced by σ_max)

‖∇F_S(w) − ∇F_S(w′)‖_2 ≤ L_ℓ σ_max(Σ_S) ‖w − w′‖_2. (E.5)

It is reasonable to assume that L is of the order of the smoothness of F_S. In this case, it follows from (E.3) and (E.5) that the empirical counterpart of L/β is of the order of σ_max(Σ_S)/σ′_min(Σ_S).

E.2 DISCUSSION OF EXAMPLE 2

We recall some notations in Example 2. Consider single-hidden-layer neural networks with d inputs, m hidden neurons and a single output, for which the prediction function takes the form h_{v,w}(x) = ∑_{k=1}^m v_k φ(⟨w_k, x⟩).
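The PL bound (E.3) can be checked numerically for a concrete loss. The sketch below is our own illustration with the squared loss ℓ(v, y) = (v − y)^2/2, which is 1-strongly convex and 1-smooth, so σ_ℓ = L_ℓ = 1 and (E.3) reduces to F_S(w) − F̂_S ≤ ‖∇F_S(w)‖_2^2 / (2 σ′_min(Σ_S)).

```python
import numpy as np

# Our own numeric sanity check of the PL inequality (E.3) for the squared
# loss l(v, y) = (v - y)^2 / 2 (sigma_l = L_l = 1).  Then (E.3) reads
#   F_S(w) - F_hat <= ||grad F_S(w)||^2 / (2 * sigma_min(Sigma_S)),
# with Sigma_S = A^T A / n built from the feature vectors phi(x_i).

rng = np.random.default_rng(2)
n, m = 80, 6
A = rng.standard_normal((n, m))          # rows play the role of phi(x_i)
y = rng.standard_normal(n)

Sigma_S = A.T @ A / n
sigma_min = np.linalg.eigvalsh(Sigma_S).min()

def F(w):
    return np.sum((A @ w - y) ** 2) / (2 * n)

def grad_F(w):
    return A.T @ (A @ w - y) / n

w_star, *_ = np.linalg.lstsq(A, y, rcond=None)
F_hat = F(w_star)                        # minimal empirical risk

# the PL inequality should hold at every point, not just near the optimum
pl_holds = all(
    F(w) - F_hat <= np.sum(grad_F(w) ** 2) / (2 * sigma_min) + 1e-9
    for w in rng.standard_normal((20, m))
)
```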
Here w_k ∈ R^d and v_k ∈ R denote the weights of the edges connecting the k-th hidden node to the input and output node, respectively, while φ : R ↦ R is the activation function. We fix v with |v_k| = a for some a > 0 and train w = (w_1, w_2, . . . , w_m)^⊤ ∈ R^{m×d} from S. Note that we only use the PL condition F_S(w) − F̂_S ≤ (1/(2β)) ‖∇F_S(w)‖_2^2 for w = w^{(S(i))}_{S(i)} in the proof of Theorem 1 (only in (B.6)). We fix w = w^{(S(i))}_{S(i)} here. Analogous to Soltanolkotabi et al. (2019), we define the Jacobian matrix J = (J_1, J_2, . . . , J_n) ∈ R^{md×n} at w = w^{(S(i))}_{S(i)}, where the column J_j stacks the m blocks v_k φ′(w_k^⊤ x_j) x_j (k = 1, . . . , m), and r_j = v^⊤ φ(w x_j) − y_j for j ∈ [n]. It was shown that (Soltanolkotabi et al., 2019)

∇F_S(w) = (1/n) J r, for r = (r_1, . . . , r_n)^⊤, (E.6)

and therefore

‖∇F_S(w)‖_2^2 = (1/n^2) r^⊤ J^⊤ J r = (1/n^2) r^⊤ (J_j^⊤ J_{j′})_{j,j′∈[n]} r.

According to the definition of J_j, we know

J_j^⊤ J_{j′} = ∑_{k=1}^m v_k^2 φ′(w_k^⊤ x_j) φ′(w_k^⊤ x_{j′}) x_j^⊤ x_{j′} = a^2 ∑_{k=1}^m φ′(w_k^⊤ x_j) φ′(w_k^⊤ x_{j′}) x_j^⊤ x_{j′}.

It then follows that

J^⊤ J = a^2 (∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤), (E.7)

where X = (x_1, . . . , x_n)^⊤ ∈ R^{n×d} is the data matrix and ⊙ denotes the Hadamard (entry-wise) product of matrices. According to the definition of r, we know F_S(w) = (1/n) ‖r‖_2^2. Then, it follows from (E.6) and (E.7) that

‖∇F_S(w)‖_2^2 = (a^2/n^2) r^⊤ ((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)) r ≥ (a^2/n^2) σ_min((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)) ‖r‖_2^2 = (a^2/n) σ_min((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)) F_S(w).

That is, we can take the parameter of the PL condition as

β = (a^2/(2n)) σ_min((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)).

It is reasonable to assume that L is of the order of (a^2/n) σ_max((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)) (Soltanolkotabi et al., 2019). In this case, the empirical counterpart of L/β is of the order of

σ_max((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)) / σ_min((∑_{k=1}^m φ′(Xw_k)(φ′(Xw_k))^⊤) ⊙ (XX^⊤)).
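The Hadamard-product identity (E.7) is easy to confirm numerically for a smooth activation. The sketch below is our own check with φ = tanh; all sizes and the random seed are arbitrary.

```python
import numpy as np

# Our own numeric check of the Hadamard-product identity (E.7):
#   J^T J = a^2 * ( sum_k phi'(X w_k) phi'(X w_k)^T ) o (X X^T),
# where "o" is the entry-wise product, for a one-hidden-layer network
# h(x) = sum_k v_k * phi(<w_k, x>) with |v_k| = a and phi = tanh.

rng = np.random.default_rng(3)
n, d, m, a = 7, 4, 5, 1.0
X = rng.standard_normal((n, d))
W = rng.standard_normal((m, d))          # rows are w_k
v = a * rng.choice([-1.0, 1.0], size=m)  # |v_k| = a

phi_prime = lambda t: 1.0 - np.tanh(t) ** 2

# column J_j stacks the blocks v_k * phi'(w_k^T x_j) * x_j over k = 1..m
J = np.zeros((m * d, n))
for j in range(n):
    for k in range(m):
        J[k * d:(k + 1) * d, j] = v[k] * phi_prime(W[k] @ X[j]) * X[j]

S = X @ W.T                              # S[j, k] = w_k^T x_j
hadamard = a ** 2 * (phi_prime(S) @ phi_prime(S).T) * (X @ X.T)

match = np.allclose(J.T @ J, hadamard)
```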
If we consider the identity activation function, i.e., φ(t) = t, then it follows from the definition of Σ_S that

L/β ≍ σ_max(XX^⊤)/σ_min(XX^⊤) = σ_max(Σ_S)/σ_min(Σ_S)." } ]
2021
SHARPER GENERALIZATION BOUNDS FOR LEARNING WITH GRADIENT-DOMINATED OBJECTIVE FUNCTIONS
SP:28b164b471496b8f4c07128fa107df88a9dac3e9
[ "The paper proposes a practical asynchronous stochastic gradient descent for Byzantine distributed learning where some of transmitted gradients are likely to be replaced by arbitrary vectors. Specifically, the server temporarily stores gradients on multiple (namely $B$) buffers and performs a proper robust aggregation to compute a more robust from them. When $B = 1$, BASGD is reduced to ASGD. They also conduct experiments to show the performance of BASGD. ", "Review: This paper proposes BASGD which uses buffers to perform asynchronous Byzantine learning. In each SGD step, all workers compute gradients and send them to the server where their ad buffer is updated. When all of the buffers are updated, the server performs an model update. When a worker send a gradient to the server, it also pulls the latest model and compute the gradient on it no matter the server update the model or not. The main contribution in this paper is to introduce a new approach to do asynchronous Byzantine learning without storing training samples on the server like zeno++." ]
Distributed learning has become a hot research topic due to its wide application in cluster-based large-scale learning, federated learning, edge computing and so on. Most traditional distributed learning methods typically assume no failure or attack on workers. However, many unexpected cases, such as communication failure and even malicious attack, may happen in real applications. Hence, Byzantine learning (BL), which refers to distributed learning with failure or attack, has recently attracted much attention. Most existing BL methods are synchronous, which are impractical in some applications due to heterogeneous or offline workers. In these cases, asynchronous BL (ABL) is usually preferred. In this paper, we propose a novel method, called buffered asynchronous stochastic gradient descent (BASGD), for ABL. To the best of our knowledge, BASGD is the first ABL method that can resist malicious attack without storing any instances on server. Compared with those methods which need to store instances on server, BASGD takes less risk of privacy leakage. BASGD is proved to be convergent, and be able to resist failure or attack. Empirical results show that BASGD significantly outperforms vanilla ASGD and other ABL baselines when there exists failure or attack on workers.
[]
[ { "authors": [ "Dan Alistarh", "Zeyuan Allen-Zhu", "Jerry Li" ], "title": "Byzantine stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "B.M. Assran", "A. Aytekin", "H.R. Feyzmahdavian", "M. Johansson", "M.G. Rabbat" ], "title": "Advances in asynchronous parallel and distributed optimization", "venue": "Proceedings of the IEEE,", "year": 2020 }, { "authors": [ "Gilad Baruch", "Moran Baruch", "Yoav Goldberg" ], "title": "A little is enough: Circumventing defenses for distributed learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jeremy Bernstein", "Jiawei Zhao", "Kamyar Azizzadenesheli", "Anima Anandkumar" ], "title": "signSGD with majority vote is communication efficient and fault tolerant", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Peva Blanchard", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine learning with adversaries: Byzantine tolerant gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of the International Conference on Computational Statistics,", "year": 2010 }, { "authors": [ "Yudong Chen", "Lili Su", "Jiaming Xu" ], "title": "Distributed statistical machine learning in adversarial settings: Byzantine gradient descent", "venue": "Proceedings of the ACM on Measurement and Analysis of Computing Systems,", "year": 2017 }, { "authors": [ "Georgios Damaskinos", "Rachid Guerraoui", "Rhicheek Patra", "Mahsa Taziki" ], "title": "Asynchronous Byzantine machine learning (the case of SGD)", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", 
"Marc’aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang" ], "title": "Large scale distributed deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Ilias Diakonikolas", "Daniel M Kane" ], "title": "Recent advances in algorithmic high-dimensional robust statistics", "venue": "arXiv preprint arXiv:1911.05911,", "year": 2019 }, { "authors": [ "Ilias Diakonikolas", "Gautam Kamath", "Daniel M Kane", "Jerry Li", "Ankur Moitra", "Alistair Stewart" ], "title": "Being robust (in high dimensions) can be practical", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Rachid Guerraoui", "Sébastien Rouault" ], "title": "The hidden vulnerability of distributed learning in byzantium", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Farzin Haddadpour", "Mohammad Mahdi Kamani", "Mehrdad Mahdavi", "Viveck Cadambe" ], "title": "Trading redundancy for communication: Speeding up distributed SGD for non-convex optimization", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Martin Jaggi", "Virginia Smith", "Martin Takác", "Jonathan Terhorst", "Sanjay Krishnan", "Thomas Hofmann", "Michael I Jordan" ], "title": "Communication-efficient distributed dual coordinate ascent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": 
"Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": null, "year": 1912 }, { "authors": [ "Jakub Konevcnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": null, "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Jason D Lee", "Qihang Lin", "Tengyu Ma", "Tianbao Yang" ], "title": "Distributed stochastic variance reduced gradient methods by sampling extra data with replacement", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Mu Li", "David G Andersen", "Alexander J Smola", "Kai Yu" ], "title": "Communication efficient distributed machine learning with the parameter server", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? 
a case study for decentralized parallel stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Qihang Lin", "Zhaosong Lu", "Lin Xiao" ], "title": "An accelerated proximal coordinate gradient method", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Chenxin Ma", "Virginia Smith", "Martin Jaggi", "Michael Jordan", "Peter Richtárik", "Martin Takác" ], "title": "Adding vs. averaging in distributed primal-dual optimization", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 1973 }, { "authors": [ "Matthew Nokleby", "Haroon Raja", "Waheed U Bajwa" ], "title": "Scaling-up distributed processing of data streams for machine learning", "venue": "arXiv preprint arXiv:2005.08854,", "year": 2020 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux", "Francis Bach" ], "title": "Minimizing finite sums with the stochastic average gradient", "venue": "Mathematical Programming,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz", "Tong Zhang" ], "title": "Stochastic dual coordinate ascent methods for regularized loss minimization", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Ohad Shamir", "Nati Srebro", "Tong Zhang" ], "title": "Communication-efficient distributed optimization using an approximate newton-type method", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Weisong Shi", "Jie Cao", "Quan Zhang", "Youhuizi Li", "Lanyu Xu" ], "title": "Edge computing: Vision and challenges", "venue": "IEEE Internet of Things Journal,", "year": 2016 }, { "authors": [ "Shizhao Sun", "Wei Chen", "Jiang Bian", "Xiaoguang Liu", "Tie-Yan Liu" ], "title": "Slim-dp: a multi-agent system for communication-efficient distributed deep learning", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent 
Systems,", "year": 2018 }, { "authors": [ "Jianqiao Wangni", "Jialei Wang", "Ji Liu", "Tong Zhang" ], "title": "Gradient sparsification for communicationefficient distributed optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Lin Xiao" ], "title": "Dual averaging methods for regularized stochastic learning and online optimization", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Zeno++: Robust fully asynchronous SGD", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Tianbao Yang" ], "title": "Trading computation for communication: Distributed stochastic dual coordinate ascent", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2013 }, { "authors": [ "Zhixiong Yang", "Arpita Gang", "Waheed U Bajwa" ], "title": "Adversary-resilient distributed and decentralized statistical inference and machine learning: An overview of recent advances under the byzantine threat model", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Dong Yin", "Yudong Chen", "Ramchandran Kannan", "Peter Bartlett" ], "title": "Byzantine-robust distributed learning: Towards optimal statistical rates", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dong Yin", "Yudong Chen", "Ramchandran Kannan", "Peter Bartlett" ], "title": "Defending against saddle point attack in byzantine-robust distributed learning", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hao Yu", 
"Rong Jin", "Sen Yang" ], "title": "On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hao Yu", "Sen Yang", "Shenghuo Zhu" ], "title": "Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lijun Zhang", "Mehrdad Mahdavi", "Rong Jin" ], "title": "Linear convergence with condition number independent access of full gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Ruiliang Zhang", "James Kwok" ], "title": "Asynchronous distributed admm for consensus optimization", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Shen-Yi Zhao", "Ru Xiang", "Ying-Hao Shi", "Peng Gao", "Wu-Jun Li" ], "title": "SCOPE: scalable composite optimization for learning on spark", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Shen-Yi Zhao", "Gong-Duo Zhang", "Ming-Wei Li", "Wu-Jun Li" ], "title": "Proximal SCOPE for distributed sparse learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shuxin Zheng", "Qi Meng", "Taifeng Wang", "Wei Chen", "Nenghai Yu", "Zhi-Ming Ma", "Tie-Yan Liu" ], "title": "Asynchronous stochastic gradient descent with delay compensation", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yi Zhou", "Yingbin Liang", "Yaoliang Yu", "Wei Dai", "Eric P Xing" ], "title": "Distributed proximal gradient algorithm for partially asynchronous computer clusters", "venue": "The Journal of Machine Learning Research,", 
"year": 2018 }, { "authors": [ "Martin Zinkevich", "Markus Weimer", "Lihong Li", "Alex J Smola" ], "title": "Parallelized stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Due to the wide application in cluster-based large-scale learning, federated learning (Konevcnỳ et al., 2016; Kairouz et al., 2019), edge computing (Shi et al., 2016) and so on, distributed learning has recently become a hot research topic (Zinkevich et al., 2010; Yang, 2013; Jaggi et al., 2014; Shamir et al., 2014; Zhang & Kwok, 2014; Ma et al., 2015; Lee et al., 2017; Lian et al., 2017; Zhao et al., 2017; Sun et al., 2018; Wangni et al., 2018; Zhao et al., 2018; Zhou et al., 2018; Yu et al., 2019a;b; Haddadpour et al., 2019). Most traditional distributed learning methods are based on stochastic gradient descent (SGD) and its variants (Bottou, 2010; Xiao, 2010; Duchi et al., 2011; Johnson & Zhang, 2013; Shalev-Shwartz & Zhang, 2013; Zhang et al., 2013; Lin et al., 2014; Schmidt et al., 2017; Zheng et al., 2017; Zhao et al., 2018), and typically assume no failure or attack on workers.\nHowever, in real distributed learning applications with multiple networked machines (nodes), different kinds of hardware or software failure may happen. Representative failure include bit-flipping in the communication media and the memory of some workers (Xie et al., 2019). In this case, a small failure on some machines (workers) might cause a distributed learning method to fail. In addition, malicious attack should not be neglected in an open network where the manager (or server) generally has not much control on the workers, such as the cases of edge computing and federated learning. Some malicious workers may behave arbitrarily or even adversarially. 
Hence, Byzantine learning (BL), which refers to distributed learning with failure or attack, has recently attracted much attention (Diakonikolas et al., 2017; Chen et al., 2017; Blanchard et al., 2017; Alistarh et al., 2018; Damaskinos et al., 2018; Xie et al., 2019; Baruch et al., 2019; Diakonikolas & Kane, 2019).\nExisting BL methods can be divided into two main categories: synchronous BL (SBL) methods and asynchronous BL (ABL) methods. In SBL methods, the learning information, such as the gradient in SGD, of all workers will be aggregated in a synchronous way. On the contrary, in ABL methods the learning information of workers will be aggregated in an asynchronous way. Existing SBL methods mainly take two different ways to achieve resilience against Byzantine workers which refer to those workers with failure or attack. One way is to replace the simple averaging aggregation operation with some more robust aggregation operations, such as median and trimmed-mean (Yin et al., 2018).\nKrum (Blanchard et al., 2017) and ByzantinePGD (Yin et al., 2019) take this way. The other way is to filter the suspicious learning information (gradients) before averaging. Representative examples include ByzantineSGD (Alistarh et al., 2018) and Zeno (Xie et al., 2019). The advantage of SBL methods is that they are relatively simple and easy to be implemented. But SBL methods will result in slow convergence when there exist heterogeneous workers. Furthermore, in some applications like federated learning and edge computing, synchronization cannot even be performed most of the time due to the offline workers (clients or edge servers). Hence, ABL is preferred in these cases.\nTo the best of our knowledge, there exist only two ABL methods: Kardam (Damaskinos et al., 2018) and Zeno++ (Xie et al., 2020). Kardam introduces two filters to drop out suspicious learning information (gradients), which can still achieve good performance when the communication delay is heavy. 
However, when facing malicious attack, some work finds that Kardam also drops out most correct gradients in order to filter all faulty (failure) gradients. Hence, Kardam cannot resist malicious attack (Xie et al., 2020). Zeno++ scores each received gradient, and determines whether to accept it according to the score. But Zeno++ needs to store some training instances on the server for scoring. In practical applications, storing data on the server will increase the risk of privacy leakage or even face legal risk. Therefore, under the general setting where the server has no access to any training instances, there has been no ABL method that can resist malicious attack.

In this paper, we propose a novel method, called buffered asynchronous stochastic gradient descent (BASGD), for ABL. The main contributions of BASGD are listed as follows:

• To the best of our knowledge, BASGD is the first ABL method that can resist malicious attack without storing any instances on the server. Compared with those methods which need to store instances on the server, BASGD takes less risk of privacy leakage.
• BASGD is theoretically proved to be convergent, and to be able to resist failure or attack.
• Empirical results show that BASGD significantly outperforms vanilla ASGD and other ABL baselines when there exist failure or malicious attack on workers. In particular, BASGD can still converge under malicious attack, when ASGD and other ABL methods fail." }, { "heading": "2 PRELIMINARY", "text": "This section presents the preliminary of this paper, including the distributed learning framework used in this paper and the definition of Byzantine worker."
}, { "heading": "2.1 DISTRIBUTED LEARNING FRAMEWORK", "text": "Many machine learning models, such as logistic regression and deep neural networks, can be formulated as the following finite sum optimization problem:\nmin w∈Rd\nF (w) = 1\nn n∑ i=1 f(w; zi), (1)\nwhere w is the parameter to learn, d is the dimension of parameter, n is the number of training instances, f(w; zi) is the empirical loss on the training instance zi. The goal of distributed learning is to solve the problem in (1) by designing learning algorithms based on multiple networked machines.\nAlthough there have appeared many distributed learning frameworks, in this paper we focus on the widely used Parameter Server (PS) framework (Li et al., 2014). In a PS framework, there are several workers and one or more servers. Each worker can only communicate with server(s). There may exist more than one server in a PS framework, but for the problem of this paper servers can be logically conceived as a unity. Without loss of generality, we will assume there is only one server in this paper. Training instances are disjointedly distributed across m workers. Let Dk denote the index set of training instances on worker k, we have ∪mk=1Dk = {1, 2, . . . , n} and Dk ∩ Dk′ = ∅ if k 6= k′. In this paper, we assume that server has no access to any training instances. If two instances have the same value, they are still deemed as two distinct instances. Namely, zi may equal zi′ (i 6= i′). One popular asynchronous method to solve the problem in (1) under the PS framework is ASGD (Dean et al., 2012) (see Algorithm 1 in Appendix A). In this paper, we assume each worker samples one instance for gradient computation each time, and do not separately discuss the mini-batch case.\nIn PS based ASGD, server is responsible for updating and maintaining the latest parameter. The number of iterations that server has already executed is used as the global logical clock of server. At the beginning, iteration number t = 0. 
Each time a SGD step is executed, t will increase by 1 immediately. The parameter after t iterations is denoted as w_t. If the server sends parameters to worker k at iteration t′, some SGD steps may have been executed before the server receives a gradient from worker k next time at iteration t. Thus, we define the delay of worker k at iteration t as τ_k^t = t − t′. Worker k is heavily delayed at iteration t if τ_k^t > τ_max, where τ_max is a pre-defined non-negative constant." }, { "heading": "2.2 BYZANTINE WORKER", "text": "For workers that have sent gradients (one or more) to the server at iteration t, we call worker k a loyal worker if it has finished all the tasks without any fault and each sent gradient is correctly received by the server. Otherwise, worker k is called a Byzantine worker. If worker k is a Byzantine worker, it means the received gradient from worker k is not credible, and can be an arbitrary value. In ASGD, there is one received gradient at a time. Formally, we denote the gradient received from worker k at iteration t as g_k^t. Then, we have:

g_k^t = ∇f(w_{t′}; z_i), if worker k is loyal at iteration t;
g_k^t = an arbitrary value, if worker k is Byzantine at iteration t,

where 0 ≤ t′ ≤ t, and i is randomly sampled from D_k. Our definition of Byzantine worker is consistent with most previous works (Blanchard et al., 2017; Xie et al., 2019; 2020). Either accidental failure or malicious attack will result in Byzantine workers." }, { "heading": "3 BUFFERED ASYNCHRONOUS SGD", "text": "In synchronous BL, gradients from all workers are received at each iteration. During this process, we can compare the gradients with each other, and then filter suspicious ones, or use more robust aggregation rules such as median and trimmed-mean for updating. However, in asynchronous BL, only one gradient is received by the server at a time.
Without any training instances stored on the server, it is difficult for the server to identify whether a received gradient is credible or not.

In order to deal with this problem in asynchronous BL, we propose a novel method called buffered asynchronous SGD (BASGD). BASGD introduces B buffers (0 < B ≤ m) on the server, and the gradient used for updating parameters will be aggregated from these buffers. The detail of the learning procedure of BASGD is presented in Algorithm 2 in Appendix A. In this section, we will introduce the details of the two key components of BASGD: buffer and aggregation function." }, { "heading": "3.1 BUFFER", "text": "In BASGD, the m workers do the same job as that in ASGD, while the updating rule on the server is modified. More specifically, there are B buffers (0 < B ≤ m) on the server. When a gradient g from worker s is received, it will be temporarily stored in buffer b, where b = s mod B, as illustrated in Figure 1. Only when each buffer has stored at least one gradient will a new SGD step be executed. Please note that no matter whether a SGD step is executed or not, the server will immediately send the latest parameters back to the worker after receiving a gradient. Hence, BASGD introduces no barrier, and is an asynchronous algorithm.

For each buffer b, more than one gradient may have been received at iteration t. We will store the average of these gradients (denoted by h_b) in buffer b. Assume that there are already (N − 1) gradients g_1, g_2, . . . , g_{N−1} which should be stored in buffer b, and h_b(old) = (1/(N−1)) ∑_{i=1}^{N−1} g_i. When the N-th gradient g_N is received, the new average value in buffer b should be:

h_b(new) = (1/N) ∑_{i=1}^N g_i = ((N−1)/N) · h_b(old) + (1/N) · g_N.

This is the updating rule for each buffer b when a gradient is received. We use N_b^t to denote the total number of gradients stored in buffer b at the t-th iteration. After the parameter w is updated, all buffers will be zeroed out at once.
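The server-side bookkeeping just described (the mapping b = s mod B, the running-average update of h_b, the all-buffers-nonempty readiness test, and zeroing out after an update) can be sketched as follows. The class and method names are ours, not from the paper's Algorithm 2.

```python
import numpy as np

# Our own sketch of the buffer bookkeeping in Section 3.1: a gradient from
# worker s goes to buffer b = s mod B, each buffer keeps the running average
# of the gradients it has received, and an SGD step is only taken once every
# buffer is non-empty; buffers are cleared after each step.

class Buffers:
    def __init__(self, B, d):
        self.h = np.zeros((B, d))   # h_b: running average stored in buffer b
        self.N = np.zeros(B, int)   # N_b: number of gradients in buffer b
        self.B = B

    def receive(self, s, g):
        b = s % self.B              # worker-to-buffer mapping b = s mod B
        self.N[b] += 1
        N = self.N[b]
        # h_b(new) = (N-1)/N * h_b(old) + (1/N) * g_N
        self.h[b] = (N - 1) / N * self.h[b] + g / N

    def ready(self):                # every buffer has stored >= 1 gradient
        return bool(np.all(self.N > 0))

    def zero_out(self):             # called right after the SGD step
        self.h[:] = 0.0
        self.N[:] = 0

buf = Buffers(B=2, d=3)
buf.receive(0, np.array([1.0, 0.0, 0.0]))   # worker 0 -> buffer 0
buf.receive(2, np.array([3.0, 0.0, 0.0]))   # worker 2 -> buffer 0 again
buf.receive(1, np.array([0.0, 2.0, 0.0]))   # worker 1 -> buffer 1
```

After the three receives, buffer 0 holds the average [2, 0, 0] of the two gradients mapped to it, buffer 1 holds [0, 2, 0], and `ready()` is true, so the server would aggregate and then call `zero_out()`.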
With the benefit of buffers, the server has access to B candidate gradients when updating the parameter. Thus, a more reliable (robust) gradient can be aggregated from the B gradients in the buffers, if a proper aggregation function Aggr(·) is chosen." }, { "heading": "3.2 AGGREGATION FUNCTION", "text": "When a SGD step is ready to be executed, there are B buffers providing candidate gradients. An aggregation function is needed to get the final gradient for updating. A naive way is to take the mean of all candidate gradients. However, the mean value is sensitive to outliers, which are common in BL. For designing proper aggregation functions, we first define the q-Byzantine Robust (q-BR) condition to quantitatively describe the Byzantine resilience ability of an aggregation function.

Definition 1 (q-Byzantine Robust). For an aggregation function Aggr(·): Aggr([h_1, . . . , h_B]) = G, where G = [G_1, . . . , G_d]^⊤ and h_b = [h_{b1}, . . . , h_{bd}]^⊤, ∀b ∈ [B], we call Aggr(·) q-Byzantine Robust (q ∈ Z, 0 < q < B/2), if it satisfies the following two properties:

(a). Aggr([h_1 + h′, . . . , h_B + h′]) = Aggr([h_1, . . . , h_B]) + h′, ∀h_1, . . . , h_B ∈ R^d, ∀h′ ∈ R^d;
(b). min_{s∈S}{h_{sj}} ≤ G_j ≤ max_{s∈S}{h_{sj}}, ∀j ∈ [d], ∀S ⊂ [B] with |S| = B − q.

Intuitively, property (a) in Definition 1 says that if all candidate gradients h_i are added by a same vector h′, the aggregated gradient will also be added by h′. Property (b) says that for each coordinate j, the aggregated value G_j will be between the (q + 1)-th smallest value and the (q + 1)-th largest value among the j-th coordinates of all candidate gradients. Thus, the gradient aggregated by a q-BR function is insensitive to at least q outliers. We can find that the q-BR condition gets stronger when q increases. In other words, if Aggr(·) is q-BR, then for any 0 < q′ < q, Aggr(·) is also q′-BR.

Remark 1. It is not hard to find that when B > 1, the mean function is not q-Byzantine Robust for any q > 0. We illustrate this by a one-dimension example: h_1, . . .
, hB−1 ∈ [0, 1], and hB = 10 × B. Then 1B ∑B b=1 hb ≥ hB B = 10 6∈ [0, 1]. Namely, the mean is larger than any of the first B − 1 values.\nWe find that the following two aggregation functions satisfy Byzantine Robust condition. Definition 2 (Coordinate-wise median (Yin et al., 2018)). For candidate gradients h1,h2, . . . ,hB ∈ Rd, hb = [hb1, hb2, . . . , hbd]T , ∀b = 1, 2, . . . , B. Coordinate-wise median is defined as:\nMed([h1, . . . ,hB ]) = [Med(h·1), . . . ,Med(h·d)] T ,\nwhere Med(h·j) is the scalar median of the j-th coordinates, ∀j = 1, 2, . . . , d. Definition 3 (Coordinate-wise q-trimmed-mean (Yin et al., 2018)). For any positive interger q < B/2 and candidate gradients h1,h2, . . . ,hB ∈ Rd, hb = [hb1, hb2, . . . , hbd]T , ∀b = 1, 2, . . . , B. Coordinate-wise q-trimmed-mean is defined as:\nTrm([h1, . . . ,hB ]) = [Trm(h·1), . . . , T rm(h·d)] T , where Trm(h·j) is the scalar q-trimmed-mean: Trm(h·j) = 1B−2q ∑ b∈Mj hbj . Mj is the subset of {hbj}Bb=1 obtained by removing the q largest elements and q smallest elements.\nIn the following content, coordinate-wise median and coordinate-wise q-trimmed-mean are also called median and trmean, respectively. Proposition 1 shows the q-BR property of these two functions. Proposition 1. Coordinate-wise q-trmean is q-BR, and coordinate-wise median is bB−12 c-BR.\nHere, bxc is the maximum integer not larger than x. According to Proposition 1, both median and trmean are proper choices for aggregation function in BASGD. The proof can be found in Appendix B.\nNow we define another class of aggregation functions, which is also important in analysis in Section 4. Definition 4 (Stable aggregation function). Aggregation functionAggr(·) is said to be stable provided that ∀h1, . . . ,hB , h̃1, . . . , h̃B ∈ Rd, letting δ = ( ∑B b=1 ‖hb − h̃b‖2) 1 2 , we have:\n‖Aggr(h1, . . . ,hB)−Aggr(h̃1, . . . 
, h̃B)‖ ≤ δ.\nIf Aggr(·) is a stable aggregation function, it means that when there is a disturbance with L2-norm δ on buffers, the disturbance of aggregated result will not be larger than δ. Definition 5 (Effective aggregation function). A stable aggregation function Aggr(·) is called an (A1, A2)-effective aggregation function, provided that when there are at most r Byzantine workers and τ tk = 0 for each loyal worker k (∀t = 0, 1, . . . , T − 1), it satisfies the following two properties:\n(i). E[∇F (wt)TGtsyn | wt] ≥ ‖∇F (wt)‖2 −A1, ∀wt ∈ Rd; (ii). E[‖Gtsyn‖2 | wt] ≤ (A2)2, ∀wt ∈ Rd;\nwhere A1, A2 ∈ R+ are two non-negative constants, Gtsyn is the gradient aggregated by Aggr(·) at the t-th iteration in cases without delay (τmax = 0).\nFor different aggregation functions, constants A1 and A2 may differ. A1 and A2 are also related to loss function F (·), distribution of instances, buffer number B, maximum Byzantine worker number r and so on. Inequalities (i) and (ii) in Definition 5 are two important properties in convergence proof of synchronous Byzantine learning methods. As revealed in (Yang et al., 2020), there are many existing asynchronous Byzantine learning methods. Krum, median, and trimmed-mean are proved to satisfy these two properties (Blanchard et al., 2017; Yin et al., 2018). SignSGD (Bernstein et al., 2019) can be seen as a combination of 1-bit quantization and median aggregation, while median satisfies the properties. Bulyan (Guerraoui et al., 2018) uses an existing aggregation rule to obtain a new one, and the property of Bulyan is difficult to be analyzed alone. Zeno (Xie et al., 2019) has an asynchronous version called Zeno++ (Xie et al., 2020), and it is meaningless to check the properties for Zeno.\nPlease note that too large B will slow down the updating frequency and damage the performance, which is supported by both theoretical (in Appendix B) and empirical (in Section 5) results. 
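Coordinate-wise median and coordinate-wise q-trimmed-mean (Definitions 2 and 3) can be implemented in a few lines. The snippet below, which is our own illustration and not from the paper, also replays the spirit of Remark 1's one-dimensional example, where a single Byzantine outlier drags the plain mean far outside the range of the honest values:

```python
import numpy as np

def coord_median(H):
    """Coordinate-wise median of the B candidate gradients (rows of H)."""
    return np.median(H, axis=0)

def coord_trimmed_mean(H, q):
    """Coordinate-wise q-trimmed-mean: per coordinate, drop the q largest
    and q smallest values, then average the remaining B - 2q values."""
    B = H.shape[0]
    assert 0 < q < B / 2
    S = np.sort(H, axis=0)           # sort each coordinate independently
    return S[q:B - q].mean(axis=0)

# One dimension, B = 5: four honest values in [0, 1], one large outlier.
H = np.array([[0.1], [0.4], [0.7], [0.9], [50.0]])
print(H.mean(axis=0))                # mean is pulled far outside [0, 1]
print(coord_median(H))               # median stays among the honest values
print(coord_trimmed_mean(H, q=1))    # trimmed mean stays inside [0.1, 0.9]
```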
In practical applications, we can estimate the number of Byzantine workers r in advance, and set B to make Aggr(·) r-BR. Specifically, B is suggested to be (2r + 1) for median, since median is ⌊(B − 1)/2⌋-BR." }, { "heading": "4 CONVERGENCE", "text": "In this section, we theoretically prove the convergence and resilience of BASGD against failure or attack. There are two main theorems. The first theorem presents a relatively loose but general bound for all q-BR aggregation functions. The other presents a relatively tight bound for each distinct (A1, A2)-effective aggregation function. Since the definition of an (A1, A2)-effective aggregation function is usually more difficult to verify than the q-BR property, the general bound is also useful. Here we only present the results; proof details are in Appendix B. We first make the following assumptions, which have also been widely used in stochastic optimization. Assumption 1. Global loss function F (w) is bounded below: ∃F ∗ ∈ R, F (w) ≥ F ∗, ∀w ∈ Rd. Assumption 2 (Bounded bias). Any loyal worker can use locally stored training instances to estimate the global gradient with bounded bias κ: ‖E[∇f(w; zi)]−∇F (w)‖ ≤ κ, ∀w ∈ Rd. Assumption 3 (Bounded gradient). ∇F (w) is bounded: ∃D ∈ R+, ‖∇F (w)‖ ≤ D, ∀w ∈ Rd. Assumption 4 (Bounded variance). E[‖∇f(w; zi)− E[∇f(w; zi) | w]‖2 | w] ≤ σ2, ∀w ∈ Rd. Assumption 5 (L-smoothness). Global loss function F (w) is differentiable and L-smooth: ‖∇F (w)−∇F (w′)‖ ≤ L‖w −w′‖, ∀w,w′ ∈ Rd. Remark 2. Please note that we make no assumption about convexity. The analysis in this section is suitable for both convex and non-convex models in machine learning, such as logistic regression and deep neural networks. Also, we make no assumption about the behavior of Byzantine workers, which may behave arbitrarily.

Let N (t) be the (q + 1)-th smallest value in {N tb}b∈[B], where N tb is the total number of gradients stored in buffer b at the t-th iteration.
We define the constant ΛB,q,r = (B−r) √ B−r+1√\n(B−q−1)(q−r+1) , which will appear\nin Lemma 1 and Lemma 2.\nLemma 1. If Aggr(·) is q-BR, and there are at most r Byzantine workers (r ≤ q), we have: E[||Gt||2 | wt] ≤ ΛB,q,rd · (D2 + σ2/N (t)). Lemma 2. If Aggr(·) is q-BR, and the total number of heavily delayed workers and Byzantine workers is not larger than r (r ≤ q), we have:\n||E[Gt −∇F (wt) | wt]|| ≤ ΛB,q,rd · (τmaxL · [ΛB,q,rd(D2 + σ2/N (t))] 1 2 + σ + κ). Theorem 1. Let D̃ = 1T ∑T−1 t=0 (D 2 + σ2/N (t)) 1 2 . If Aggr(·) is q-BR, B = O(r), and the total number of heavily delayed workers and Byzantine workers is not larger than r (r ≤ q), set learning rate η = O( 1\nL √ T ), we have:∑T−1 t=0 E[||∇F (wt)||2]\nT ≤O\n( L[F (w0)− F ∗]\nT 1 2\n) +O ( rdD̃\nT 1 2 (q − r + 1) 12\n) +O ( rDdσ\n(q − r + 1) 12\n)\n+O\n( rDdκ\n(q − r + 1) 12\n) +O ( r 3 2LDD̃d 3 2 τmax\n(q − r + 1) 34\n) .\nPlease note that the convergence rate of vanilla ASGD is O(T− 1 2 ). Hence, Theorem 1 indicates that BASGD has a theoretical convergence rate as fast as vanilla ASGD, with an extra constant variance. The term O(rDdσ(q − r + 1)− 12 ) is caused by the aggregation function, which can be deemed as a sacrifice for Byzantine resilience. The term O(rDdκ(q − r + 1)− 12 ) is caused by the differences of training instances among different workers. In independent and identically distributed (i.i.d.) cases, κ = 0 and the term vanishes. The term O(r 3 2LDD̃d 3 2 τmax(q − r + 1)− 3 4 ) is caused by the delay, and related to parameter τmax. The term is also related to the buffer size. When N tb increases, N (t) may increase, and thus D̃ will decrease. Namely, larger buffer size will result in smaller D̃. Besides, the factor (q − r + 1)− 12 or (q − r + 1)− 34 decreases as q increases, and increases as r increases. Although general, the bound presented in Theorem 1 is relatively loose in high-dimensional cases, since d appears in all the three extra terms. 
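To get a feel for the constant ΛB,q,r = (B − r)√(B − r + 1)/√((B − q − 1)(q − r + 1)) appearing in Lemma 1, Lemma 2, and hence in the bound of Theorem 1, one can simply tabulate it for a few settings. This is our own illustration; these numbers are not reported in the paper:

```python
from math import sqrt

def Lambda(B, q, r):
    """The constant Lambda_{B,q,r} from Lemma 1 and Lemma 2 (r <= q < B/2)."""
    assert 0 <= r <= q < B / 2
    return (B - r) * sqrt(B - r + 1) / sqrt((B - q - 1) * (q - r + 1))

# For fixed B and r, a stronger q-BR aggregator (larger q) shrinks the
# constant; for fixed q, more Byzantine workers (larger r) inflate it.
for B, q, r in [(15, 6, 6), (15, 7, 6), (15, 6, 3), (15, 7, 3)]:
    print(f"B={B}, q={q}, r={r}: Lambda = {Lambda(B, q, r):.3f}")
```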
To obtain a tighter bound, we introduce Theorem 2 for BASGD with (A1, A2)-effective aggregation function (Definition 5). Theorem 2. If the total number of heavily delayed workers and Byzantine workers is not larger than r, B = O(r), and Aggr(·) is an (A1, A2)-effective aggregation function in this case. Set learning rate η = O( 1√\nLT ), and in general asynchronous cases, we have:∑T−1\nt=0 E[‖∇F (wt)‖2] T ≤ O\n( L 1 2 [F (w0)− F ∗]\nT 1 2\n) +O ( L 1 2 τmaxDA2r 1 2\nT 1 2\n) +O ( L 1 2 (A2) 2\nT 1 2\n)\n+O\n( L 5 2 (A2) 2τ2maxr\nT 3 2\n) +A1.\nTheorem 2 indicates that if Aggr(·) makes a synchronous BL method converge (i.e., satisfies Definition 5), BASGD converges when using Aggr(·) as aggregation function. Hence, BASGD can also be seen as a technique of asynchronization. That is to say, new asynchronous methods can be obtained from synchronous ones when using BASGD. The extra constant term A1 is caused by gradient bias. When there is no Byzantine workers (r = 0), and instances are i.i.d. across workers, letting B = 1 and Aggr(h1, . . . ,hB) = Aggr(h1) = h1, BASGD degenerates to vanilla ASGD. Under this circumstance, there is no gradient bias (A1 = 0), and the extra constant term vanishes.\nIn general cases, Theorem 2 guarantees BASGD to find a point such that the squared L2-norm of its gradient is not larger than A1 (but not necessarily around a stationary point), in expectation. Please note that Assumption 3 already guarantees that gradient’s squared L2-norm is not larger than D2. We introduce Proposition 2 to show that A1 is guaranteed to be smaller than D2 under a mild condition. Proposition 2. Aggr(·) is an (A1, A2)-effective aggregation function, and Gtsyn is aggregated by Aggr(·) in synchronous setting. If E[‖Gtsyn −∇F (wt)‖ | wt] ≤ D, ∀wt ∈ Rd, then A1 ≤ D2.\nGtsyn is the aggregated result of Aggr(·), and is a robust estimator of ∇F (wt) used for updating. Since ‖∇F (wt)‖ ≤ D,∇F (wt) locates in a ball with radius D. 
E[‖Gtsyn −∇F (wt)‖ | wt] ≤ D means that the bias of Gtsyn is not larger than the radius D, which is a mild condition for Aggr(·).\nAs many existing works have indicated (Assran et al., 2020; Nokleby et al., 2020), speed-up is also an important aspect of distributed learning methods. In BASGD, different workers can compute gradients concurrently, make each buffer be filled more quickly, and thus speed up the model updating. However, we mainly focus on Byzantine-resilience in this work. Speed-up will be thoroughly studied in future work. Besides, heavily delayed workers are considered as Byzantine in the current analysis. We will analyze heavily delayed worker’s behavior more finely to obtain better results in future work." }, { "heading": "5 EXPERIMENT", "text": "In this section, we empirically evaluate the performance of BASGD and baselines in both image classification (IC) and natural language processing (NLP) applications. Our experiments are conducted on a distributed platform with dockers. Each docker is bound to an NVIDIA Tesla V100 (32G) GPU (in IC) or an NVIDIA Tesla K80 GPU (in NLP). Please note that different GPU cards do not affect the reported metrics in the experiment. We choose 30 dockers as workers in IC, and 8 dockers in NLP. An extra docker is chosen as server. All algorithms are implemented with PyTorch 1.3." }, { "heading": "5.1 EXPERIMENTAL SETTING", "text": "We compare the performance of different methods under two types of attack: negative gradient attack (NG-attack) and random disturbance attack (RD-attack). Byzantine workers with NG-attack send g̃NG = −katk ·g to server, where g is the true gradient and katk ∈ R+ is a parameter. Byzantine workers with RD-attack send g̃RD = g + grnd to server, where grnd is a random vector sampled from normal distribution N (0, ‖σatkg‖2 · I). Here, σatk is a parameter and I is an identity matrix. 
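The two attack models above are easy to simulate. The sketch below (function names are ours) produces g̃NG = −katk · g and g̃RD = g + grnd with grnd sampled from N(0, ‖σatk g‖² · I):

```python
import numpy as np

def ng_attack(g, k_atk=10.0):
    """Negative gradient attack: send -k_atk times the true gradient g."""
    return -k_atk * g

def rd_attack(g, sigma_atk=0.2, rng=None):
    """Random disturbance attack: add zero-mean Gaussian noise whose
    per-coordinate standard deviation is ||sigma_atk * g||."""
    rng = np.random.default_rng() if rng is None else rng
    std = np.linalg.norm(sigma_atk * g)
    return g + rng.normal(0.0, std, size=g.shape)

g = np.array([0.5, -1.0, 2.0])
print(ng_attack(g))                                 # exactly -10 * g
print(rd_attack(g, rng=np.random.default_rng(0)))   # g plus random noise
```

Since the RD perturbation has expectation 0, averaging many RD-attacked gradients recovers g, which is why the paper treats RD-attack as an accidental failure rather than a malicious attack.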
NG-attack is a typical kind of malicious attack, while RD-attack can be seen as an accidental failure with expectation 0. Besides, each worker is manually set to have a delay, which is kdel times the computing time. The training set is randomly and equally distributed to different workers. We use the average top-1 test accuracy (in IC) or average perplexity (in NLP) on all workers w.r.t. epochs as the final metrics. For BASGD, we use median and trimmed-mean as aggregation functions.

Because BASGD is an ABL method, SBL methods cannot be directly compared with BASGD. The ABL method Zeno++ also cannot be directly compared with BASGD, because Zeno++ needs to store some instances on the server. The number of instances stored on the server will affect the performance of Zeno++ (Xie et al., 2020). Hence, we compare BASGD with ASGD and Kardam in our experiments. We set the dampening function Λ(τ) = 1/(1 + τ) for Kardam as suggested in (Damaskinos et al., 2018)." }, { "heading": "5.2 IMAGE CLASSIFICATION EXPERIMENT", "text": "In the IC experiment, algorithms are evaluated on CIFAR-10 (Krizhevsky et al., 2009) with the deep learning model ResNet-20 (He et al., 2016). Cross-entropy is used as the loss function. We set katk = 10 for NG-attack, and σatk = 0.2 for RD-attack. kdel is randomly sampled from the standard normal distribution truncated to [0,+∞). As suggested in (He et al., 2016), the learning rate η is set to 0.1 initially for each algorithm, and multiplied by 0.1 at the 80-th epoch and the 120-th epoch, respectively. The weight decay is set to 10−4. We run each algorithm for 160 epochs. Batch size is set to 25.

Firstly, we compare the performance of different methods when there are no Byzantine workers. Experimental results with the median and trmean aggregation functions are illustrated in Figure 2(a) and Figure 2(b), respectively. ASGD achieves the best performance. BASGD (B > 1) and Kardam have a similar convergence rate to ASGD, but both sacrifice a little accuracy.
Besides, the performance of BASGD gets worse when the buffer number B increases, which is consistent with the theoretical results. Please note that ASGD is a degenerate case of BASGD when B = 1 and Aggr(h1) = h1. Hence, BASGD can achieve the same performance as ASGD when there is no failure or attack.

Then, for each type of attack, we conduct two experiments in which there are 3 and 6 Byzantine workers, respectively. We set 10 and 15 buffers for BASGD in these two experiments, respectively. To save space, we only present the average top-1 test accuracy in Figure 2(c) and Figure 2(d) (3 Byzantine workers), and Figure 2(e) and Figure 2(f) (6 Byzantine workers). Results about training loss are in Appendix C. We find that BASGD significantly outperforms ASGD and Kardam under both RD-attack (accidental failure) and NG-attack (malicious attack). Under the less harmful RD-attack, although ASGD and Kardam still converge, they both suffer a significant loss in accuracy. Under NG-attack, neither ASGD nor Kardam converges, even though we have tried different values of the assumed number of Byzantine workers for Kardam, which is denoted by a hyper-parameter γ in this paper. Hence, neither ASGD nor Kardam can resist malicious attack. On the contrary, BASGD still performs relatively well under both types of attack.

Moreover, we count the ratio of filtered gradients in Kardam, which is shown in Table 1. We find that in order to filter Byzantine gradients, Kardam also filters an approximately equal ratio of loyal gradients. This explains why Kardam performs poorly under malicious attack." }, { "heading": "5.3 NATURAL LANGUAGE PROCESSING EXPERIMENT", "text": "In the NLP experiment, the algorithms are evaluated on the WikiText-2 dataset with LSTM networks. We only use the training set and test set; the validation set is not used in our experiment. For the LSTM, we adopt 2 layers with 100 units in each. Word embedding size is set to 100, and sequence length is set to 35.
Gradient clipping size is set to 0.25. Cross-entropy is used as the loss function. We run each algorithm for 40 epochs. The initial learning rate η is chosen from {1, 2, 5, 10, 20}, and is divided by 4 every 10 epochs. The best test result is adopted as the final one. The performance of ASGD under no attack is used as the gold standard. We set katk = 10 and σatk = 0.1. One of the eight workers is Byzantine. kdel is randomly sampled from an exponential distribution with parameter λ = 1. Each experiment is carried out 3 times, and the average perplexity is reported in Figure 3. We find that BASGD converges under each kind of attack, with only a little loss in perplexity compared to the gold standard (ASGD without attack). On the other hand, ASGD and Kardam both fail, even when we set the largest γ (γ = 3) for Kardam." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a novel method called BASGD for asynchronous Byzantine learning. To the best of our knowledge, BASGD is the first ABL method that can resist malicious attack without storing any instances on the server. Compared with those methods that need to store instances on the server, BASGD takes a lower risk of privacy leakage. BASGD is proved to be convergent and able to resist failure or attack. Empirical results show that BASGD significantly outperforms vanilla ASGD and other ABL baselines when there exist failures or attacks on workers." }, { "heading": "A ALGORITHM DETAILS", "text": "" }, { "heading": "A.1 ASYNCHRONOUS SGD (ASGD)", "text": "One popular asynchronous method to solve the problem in (1) under the PS framework is ASGD (Dean et al., 2012), which is presented in Algorithm 1." }, { "heading": "A.2 BUFFERED ASYNCHRONOUS SGD (BASGD)", "text": "The details of the learning procedure of BASGD are presented in Algorithm 2." }, { "heading": "B PROOF DETAILS", "text": "" }, { "heading": "B.1 PROOF OF PROPOSITION 1", "text": "Proof.
Firstly, we prove coordinate-wise q-trimmed-mean is q-BR. It is not hard to check that trmean satisfies the property (a) in the definition of q-BR, then we prove that it also satisfies property (b).\nWithout loss of generality, we assume h1j , . . . , hBj are already in descending order. By definition, Trm(h·j) is the average value ofMj , which is obtained by removing q largest values and q smallest values of {hij}Bi=1. Therefore,\nh(q+1)j = max x∈Mj {x} ≥ Trm(h·j) ≥ min x∈Mj {x} = h(n−q)j\nFor any S ⊂ [B] with |S| = B − q, by Pigeonhole Principle, S includes at least one of h1j , . . . , h(q+1)j , and includes at least one of h(n−q)j , . . . , hBj . Therefore,\nmax s∈S {hsj} ≥ h(q+1)j ; min s∈S {hsj} ≤ h(n−q)j .\nCombining these two inequalities, we have:\nmax s∈S {hsj} ≥ Trm(h·j) ≥ min s∈S {hsj}.\nThus, coordinate-wise q-trimmed-mean is q-BR. By definition, coordinate-wise median can be seen as bB−12 c-trimmed-mean, and thus is b B−1 2 c-BR.\nAlgorithm 2 Buffered Asynchronous SGD (BASGD)\nServer: Input: learning rate η, buffer number B, aggregation function: Aggr(·); Initialization: initial parameter w0, learning rate η; Send initial w0 to all workers; Set t← 0; Set buffer: hb ← 0, N tb ← 0; repeat\nWait until receiving g from some worker s; Choose buffer: b← s mod B; N tb ← N tb + 1; hb ← (N\nt b−1)hb+g Ntb\n; if N tb > 0 for each b ∈ [B] then\nAggregate: Gt = Aggr([h1, . . . 
,hB ]); Execute SGD step: wt+1 ← wt − η ·Gt; for b = 1 to B do\nZero out buffer: hb ← 0, N tb ← 0; end for t← t+ 1;\nend if Send back the latest parameters back to worker s, no matter whether a SGD step is executed or not.\nuntil stop criterion is satisfied Notify all workers to stop;\nWorker k: (k = 0, 1, ...,m− 1) repeat\nWait until receiving the latest parameter w from server; Randomly sample an index i from Dk; Compute ∇f(w; zi); Send ∇f(w; zi) to server;\nuntil receive server’s notification to stop" }, { "heading": "B.2 PROOF OF LEMMA 1", "text": "To begin with, we will introduce a lemma to estimate the ordered statistics. Lemma 3. X1, . . . , XM are non-negative, independent and identically distributed (i.i.d.) random variables sampled from distribution D, and have limited expectation E[X]. Denote the K-th largest value in {X1, . . . , XM} as X(K), then E[X(K)] ≤ CM,K · E[X], where\nCM,K = { M, K = 1; M !(K−1)K−1(M−K)M−K (K−1)!(M−K)!(M−1)M−1 , 1 < K < M 2 .\nProof. Denote the Probability Density Function (PDF) and Cumulative Density Function (CDF) of D as p(x) and P (x), respectively. Then the PDF of X(K) is:\np(K)(x) = M !\n(K − 1)!(M −K)! [1− P (x)]K−1P (x)M−Kp(x).\nThus, E[X(K)] = ∫ +∞ 0 x · p(K)(x)dx\n= ∫ +∞ 0 [ M ! (K − 1)!(M −K)! · [1− P (x)]K−1P (x)M−K ] · xp(x)dx\n(a) ≤ ∫ +∞ 0 [ M ! (K − 1)!(M −K)! · (K − 1) K−1(M −K)M−K (M − 1)M−1 ] · xp(x)dx\n= M !(K − 1)K−1(M −K)M−K\n(K − 1)!(M −K)!(M − 1)M−1 · E[X].\nInequality (a) is derived based on [1 − P (x)]K−1P (x)M−K ≤ (K−1) K−1(M−K)M−K (M−1)M−1 , which is obtained by the following process:\nLet θ(x) = (1− x)K−1xM−K , x ∈ [0, 1]. Then θ′(x) = (1− x)K−2xM−K−1[(M −K)(1− x)− (K − 1)x]. Let θ′(x) = 0. Solving the equation, we obtain x = M−KM−1 , 0 or 1. Also, we have θ(0) = θ(1) = 0, and θ(M−KM−1 ) = (K−1)K−1(M−K)M−K (M−1)M−1 . Then we have maxx∈[0,1] θ(x) = θ(M−KM−1 ) = (K−1)K−1(M−K)M−K (M−1)M−1 . Thus, [1− P (x)]K−1P (x)M−K = θ(P (x)) ≤ (K−1) K−1(M−K)M−K (M−1)M−1 .\nProposition 3. 
∀B, q, r ∈ Z+, 0 ≤ r ≤ q < B2 ,\nCB−r,q−r+1 ≤ (B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1) .\nProof. By Stirling’s approximation, we have: √\n2πn · nne−n ≤ n! ≤ e √ n · nne−n, ∀n ∈ Z+.\nTherefore, √\n2πn · e−n ≤ n! nn ≤ e √ n · e−n, ∀n ∈ Z+. (2)\nBy definition of CM,k,\nCM,K = M !(K − 1)K−1(M −K)M−K\n(K − 1)!(M −K)!(M − 1)M−1\n=M · (M − 1)! (M − 1)M−1 · (K − 1) K−1 (K − 1)! · (M −K) M−K (M −K)!\n≤M · [e √ M − 1 · e−(M−1)] · e K−1√ 2π(K − 1) · e M−K√ 2π(M −K)\n= e 2π · M\n√ M − 1√\n(M −K)(K − 1) ,\nwhere the inequality uses Inequality (2).\nCase (i). When r < q,\nCB−r,q−r+1 ≤ e 2π · (B − r)\n√ B − r − 1√\n(B − q − 1)(q − r)\n≤ (B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1) .\nCase (ii). When r = q, by definition of CM,K , we have:\nCB−r,q−r+1 = CB−q,1 = B − q = (B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1) .\nIn conclusion, when r ≤ q, we have:\nCB−r,q−r+1 ≤ (B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1) .\nWhen B and q are fixed, the upper bound of CB−r,q−r+1 will increase when r (number of Byzantine workers) increases. Namely, the upper bound will be larger if there are more Byzantine workers. When B and r are fixed, q measures the Byzantine Robust degree of aggregation function Aggr(·). The factor [(B − q − 1)(q − r)]− 12 is monotonically decreasing with respect to q, when q < B−1+r2 . Since r ≤ q < B2 , the upper bound will decrease when q increases. Also, B − q decreases when q increases. Namely, the upper bound will be smaller if Aggr(·) has a stronger q-BR property. In the worst case (q = r), the upper bound of CB−r,q−r+1 is linear to B. Even in the best case (r = 0, q = bB−12 c), the denominator is about B 2 and the upper bound of CB−r,q−r+1 is linear to √ B. Thus, larger B might result in larger error. Hence, buffer number is not supposed to be set too large.\nNow we prove Lemma 1.\nProof.\nE[||Gt||2 | wt] =E[||Aggr([h1, . . . ,hB ])||2 | wt]\n= d∑ j=1 E[Aggr([h1, . . . ,hB ])2j | wt],\nwhere Aggr([h1, . . . 
,hB ])j represents the j-th coordinate of the aggregated gradient.\nWe useHt to denote the credible buffer index set, which is composed by the index of buffers, where the stored gradients are all from loyal workers.\nFor each b ∈ Ht, hb has stored N tb gradients at iteration t: g1, . . . ,gNtb , and we have:\nhb = 1\nN tb Ntb∑ i=1 gi.\nThen,\nE[‖hb‖2 | wt] =E[‖hb − E[hb | wt]‖2 | wt] + ‖E[hb | wt]‖2\n=E[‖ 1 N tb Ntb∑ i=1 (gi − E[gi | wt])‖2 | wt] + ‖E[ 1 N tb Ntb∑ i=1 gi | wt]‖2\n(a) ≤ σ 2 N tb + ‖E[ 1 N tb Ntb∑ i=1 gi | wt]‖2\n= σ2\nN tb +\n1 (N tb) 2 ‖ Ntb∑ i=1 E[gi | wt]‖2\n(b) ≤ σ 2\nN tb +\n1\n(N tb) 2 ·N tb · Ntb∑ i=1 ‖E[gi | wt]‖2\n(c) ≤ σ 2\nN tb +D2.\nInequality (a) is derived based on Assumption 4 and the fact that gi is mutually uncorrelated. Inequality (b) is derived by the following process:\n‖ Ntb∑ i=1 E[gi | wt]‖2 = Ntb∑ i=1 ‖E[gi | wt]‖2 + ∑\n1≤i<i′≤Ntb\n2 · E[gi | wt]TE[g′i | wt]\n≤ Ntb∑ i=1 ‖E[gi | wt]‖2 + ∑\n1≤i<i′≤Ntb\n(‖E[gi | wt]‖2 + ‖E[g′i | wt]‖2\n= Ntb∑ i=1 ‖E[gi | wt]‖2 + (N tb − 1) · Ntb∑ i=1 ‖E[gi | wt]‖2 =N tb · Ntb∑ i=1 ‖E[gi | wt]‖2.\nInequality (c) is derived based on Assumption 3.\nBecause there are no more than r Byzantine workers at iteration t, no more than r buffers contain Byzantine gradient. Thus, the credible buffer index set Ht has at least (B − r) elements. In case that Ht has more than (B − r) elements, we take the indices of the smallest (B − q) elements in {hbj}b∈Ht to composeHtj , and we have |Htj | = B − q.\nNote that Aggr(·) is q-BR, and by definition we have:\nmin b∈Htj {hbj} ≤ Aggr([h1, . . . ,hB ])j ≤ max b∈Htj {hbj}.\nTherefore, d∑ j=1 E[Aggr([h1, . . . ,hB ])2j |wt] ≤ d∑ j=1 E[max b∈Htj {h2bj}|wt].\nThere are (B − r) credible buffers, and we choose the smallest (B − q) buffers to compose Htj . Therefore, for all b ∈ Htj , hbj is not larger than the (q− r+ 1)-th largest one in {hbj}b∈Ht . Let N (t) be the (q + 1)-th smallest value in {N tb}b∈[B]. 
Using Lemma 3, we have:\nE[max b∈Htj {h2bj}|wt] ≤E[max b∈Htj {‖hb‖2}|wt]\n≤E[max b∈Htj {D2 + σ\n2\nN tb }|wt]\n=CB−r,q−r+1 · (D2 + σ2\nN (t) ).\nThus,\nE[||Gt||2 | wt] ≤ d∑ j=1 E[max b∈Htj {h2bj}|wt] ≤ CB−r,q−r+1d · (D2 + σ2 N (t) ).\nBy Proposition 3, we have:\nE[||Gt||2 | wt] ≤ d · (B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1) · (D2 + σ\n2\nN (t) )." }, { "heading": "B.3 PROOF OF LEMMA 2", "text": "Proof.\nE[Gt −∇F (wt) | wt] =E[Aggr([h1, . . . ,hB ])−∇F (wt) | wt] =E[Aggr([h1 −∇F (wt), . . . ,hB −∇F (wt)]) | wt], (3)\nwhere the second equation is derived based on the Property (b) in the definition of q-BR.\nFor each b ∈ Ht, hb has stored N tb gradients at iteration t: g1, . . . ,gNtb , and we have:\nhb −∇F (wt) = 1\nN tb Ntb∑ k=1 gi −∇F (wt) = 1 N tb Ntb∑ k=1 [∇f(wtk ; zik)−∇F (wt)],\nwhere 0 ≤ t− tk ≤ τmax, ∀k = 1, 2, . . . , N tb . Taking expectation on both sides, we have:\nE[||hb −∇F (wt)|| |wt]\n=E[|| 1 N tb Ntb∑ k=1 (∇f(wtk ; zik)−∇F (wt))|| |wt]\n≤ 1 N tb Ntb∑ k=1 E[||∇f(wtk ; zik)−∇F (wt)|| |wt]\n(a) ≤ 1 N tb Ntb∑ k=1 {E[||∇F (wtk)−∇F (wt)|| |wt]\n+ E[||∇f(wtk ; zik)− E[∇f(wtk ; zik)]|| |wt] + E[||E[∇f(wtk ; zik)]−∇F (wtk)|| |wt]},\nwhere (a) is derived based on Triangle Inequality.\nThe first part:\nE[||∇F (wtk)−∇F (wt)|| |wt] (b) ≤L · E[||wtk −wt|| |wt] =L · E[|| t−1∑ t′=tk Gt ′ || |wt]\n≤ t−1∑ t′=tk L · E[||Gt ′ || |wt]\n= t−1∑ t′=tk L · √ E[||Gt′ || |wt]2\n≤ t−1∑ t′=tk L · √ E[||Gt′ ||2 |wt]\n(c) ≤ t−1∑ t′=tk L · √ CB−r,q−r+1d · (D2 + σ2/N (t))\n(d) ≤ τmaxL · √ CB−r,q−r+1d · (D2 + σ2/N (t)),\nwhere (b) is derived based on Assumption 5, (c) is derived based on Lemma 1 and (d) is derived based on t− tk ≤ τmax. 
The second part:\nE[||∇f(wtk ; zik)− E[∇f(wtk ; zik)]|| |wt] = √ E[||∇f(wtk ; zik)− E[∇f(wtk ; zik)]|| |wt]2\n≤ √ E[||∇f(wtk ; zik)− E[∇f(wtk ; zik)]||2 |wt]\n(e) ≤σ,\nwhere (e) is derived based on Assumption 4.\nBy Assumption 2, we have the following estimation for the third part:\nE[||E[∇f(wtk ; zik)]−∇F (wtk)|| |wt] ≤ κ.\nTherefore,\nE[||hb −∇F (wt)|| |wt]\n≤ 1 N tb Ntb∑ k=1 (τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ)\n=τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ. (4)\nSimilar to the proof of Lemma 1, ∀j ∈ [d], we have:\nmin b∈Htj {hbj −∇F (wt)j}\n≤Aggr([h1 −∇F (wt), . . . ,hB −∇F (wt)])j ≤max b∈Htj {hbj −∇F (wt)j},\nwhereHtj is composed by the indices of the smallest (B − q) elements in {hbj −∇F (wt)j}b∈Ht . Therefore,\n||E[Aggr([h1 −∇F (wt), . . . ,hB −∇F (wt)]) | wt]||\n≤ d∑ j=1 ||E[Aggr([h1 −∇F (wt), . . . ,hB −∇F (wt)])j | wt]|| ≤ d∑ j=1 E[||Aggr([h1 −∇F (wt), . . . ,hB −∇F (wt)])j || | wt]\n(f) ≤ d∑ j=1 E[max b∈Htj ||hbj −∇F (wt)j || | wt]\n(g) ≤ d∑ j=1 CB−r,q−r+1E[||hbj −∇F (wt)j || |wt]\n≤ d∑ j=1 CB−r,q−r+1E[||hb −∇F (wt)|| |wt]\n(h) ≤ d∑ j=1 CB−r,q−r+1 · (τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ)\n=CB−r,q−r+1d · (τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ), (5)\nwhere (f) is derived based on definition of q-BR, (g) is derived based on Lemma 3, and (h) is derived based on Inequality (4).\nCombining Equation (3) and Inequality (5), we obtain: ||E[Gt −∇F (wt) | wt]|| ≤ CB−r,q−r+1d · (τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ).\nBy Proposition (3), we have:\n||E[Gt −∇F (wt) | wt]|| ≤ d(B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1)\n·(τmaxL √ d (B − r) √ B − r + 1√\n(B − q − 1)(q − r + 1) · (D2 + σ2/N (t)) + σ + κ)." 
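Lemma 3's ordered-statistic bound E[X(K)] ≤ CM,K · E[X], which the proofs of Lemma 1 and Lemma 2 above lean on, can be sanity-checked numerically. This Monte-Carlo check is our own addition, not part of the paper:

```python
import numpy as np
from math import factorial

def C(M, K):
    """The constant C_{M,K} from Lemma 3 (K-th largest of M i.i.d. draws)."""
    if K == 1:
        return float(M)
    return (factorial(M) * (K - 1) ** (K - 1) * (M - K) ** (M - K)
            / (factorial(K - 1) * factorial(M - K) * (M - 1) ** (M - 1)))

# Monte Carlo: K-th largest of M = 10 exponential(1) draws, so E[X] = 1.
rng = np.random.default_rng(0)
M, K = 10, 3
x = rng.exponential(1.0, size=(100_000, M))
kth_largest = np.sort(x, axis=1)[:, M - K]   # K-th largest in each row
print(kth_largest.mean(), "<=", C(M, K))     # the bound holds with slack
```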
}, { "heading": "B.4 PROOF OF THEOREM 1", "text": "" }, { "heading": "Proof.", "text": "E[F (wt+1) | wt] =E[F (wt − η ·Gt) | wt] (a)\n≤E[F (wt)− η · ∇F (wt)TGt + L 2 η2||Gt||2 | wt] =F (wt)− η · E[∇F (wt)TGt | wt] + η 2L\n2 E[||Gt||2 | wt]\n=F (wt)− η · ∇F (wt)TE[Gt | wt] + η 2L\n2 E[||Gt||2 | wt]\n=F (wt)− η · ∇F (wt)T∇F (wt) + η 2L\n2 E[||Gt||2 | wt]\n− η · ∇F (wt)TE[Gt −∇F (wt) | wt]\n≤F (wt)− η · ||∇F (wt)||2 + η 2L\n2 E[||Gt||2 | wt]\n+ η · ||∇F (wt)|| · ||E[Gt −∇F (wt) | wt]||, where (a) is derived based on Assumption 5.\nUsing Lemma 1 and Lemma 2, we have:\nE[F (wt+1) | wt]\n≤F (wt)− η · ||∇F (wt)||2 + η 2L\n2 CB−r,q−r+1d · (D2 + σ2/N (t)) + η · CB−r,q−r+1d · (τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ) · ||∇F (wt)||.\nAlso, by Assumption 3, ||∇F (wt)|| ≤ D. Taking total expectation and combining ||∇F (wt)|| ≤ D, we have:\nE[F (wt+1)] ≤E[F (wt)]− η · E[||∇F (wt)||2] + η 2L\n2 CB−r,q−r+1d · (D2 + σ2/N (t)) + η · CB−r,q−r+1Dd(τmaxL √ CB−r,q−r+1d · (D2 + σ2/N (t)) + σ + κ).\nLet D̃ = 1T ∑T−1 t=0 √ D2 + σ2/N (t). By telescoping, we have:\nη · T−1∑ t=0 E[||∇F (wt)||2] ≤{F (w0)− E[F (wT )]}\n+ η2T · L 2 CB−r,q−r+1d · 1 T T−1∑ t=0 (D2 + σ2/N (t))\n+ ηT · CB−r,q−r+1Dd(τmaxLD̃ √ CB−r,q−r+1d+ σ + κ).\nNote that E[F (wT )] ≥ F ∗, and let η = O (\n1 L √ T ) :∑T−1\nt=0 E[||∇F (wt)||2] T\n≤O ( L[F (w0)− F ∗]√\nT\n) +O ( CB−r,q−r+1D̃d√\nT ) +O ( CB−r,q−r+1Dd · (τmaxLD̃ √ CB−r,q−r+1d+ σ + κ) ) .\nWhen q = r and B = O(r), we have CB−r,q−r+1 ≤ (B−r) √ B−r+1√\n(B−q−1)(q−r+1) = O\n( r\n(q−r+1) 1 2\n) . Thus,\n∑T−1 t=0 E[||∇F (wt)||2]\nT ≤O\n( L[F (w0)− F ∗]\nT 1 2\n) +O ( rdD̃\nT 1 2 (q − r + 1) 12\n) +O ( rDdσ\n(q − r + 1) 12\n)\n+O\n( rDdκ\n(q − r + 1) 12\n) +O ( r 3 2LDD̃d 3 2 τmax\n(q − r + 1) 34\n) ." }, { "heading": "B.5 PROOF OF THEOREM 2", "text": "Proof. Let h′b be the value of the b-th buffer, if all received loyal gradients were computed based on wt. Note Gt = Aggr(h1, . . . 
,hB).\nE[F (wt+1) | wt] =E[F (wt − η ·Gt) | wt] (a) ≤E[F (wt)− η · ∇F (wt)TGt + L 2 η2||Gt||2 | wt]\n=F (wt)− η · E[∇F (wt)TGt | wt] + η 2L\n2 E[||Gt||2 | wt], (6)\nwhere (a) is derived based on Assumption (5).\nFirstly, we estimate the value of E[∇F (wt)TGt | wt]. Since there are at most r Byzantine workers, at most r buffers may contain Byzantine gradients. Without loss of generality, suppose only the first r buffers may contain Byzantine gradients.\nLet Gtsyn = Aggr(h1, . . . ,hr,h ′ r+1, . . . ,h ′ B), where h1, . . . ,hr may contain Byzantine gradients and be arbitrary value, and h′r+1, . . . ,h ′ B each stores loyal gradients computed based on w t. Thus,\nE[∇F (wt)TGtsyn | wt] ≥ ‖∇F (wt)‖2 −A1, (7)\nE[‖Gtsyn‖2 | wt] ≤ (A2)2. (8) Let α = 2η2L2τ2max(B − r) < 1. We claim that\nE[‖Gt −Gtsyn‖2 | wt] ≤ ( 1\n2 αt+1 +\nα\n1− α ) · (A2)2,\nand E[‖Gt‖2 | wt] ≤ (αt+1 + 2\n1− α ) · (A2)2.\nNow we prove it by induction on t.\nStep 1. When t = 0, all gradients are computed according to w0, and we have G0 = G0syn. Thus,\nE[‖G0 −G0syn‖2 | w0] = 0 ≤ ( 1\n2 α1 +\nα\n1− α ) · (A2)2,\nE[‖G0‖2 | w0] = E[‖G0syn‖2 | w0] ≤ (A2)2 ≤ (α1 + 2\n1− α ) · (A2)2.\nStep 2. If\nE[‖Gt ′ −Gt ′ syn‖2 | wt ′ ] ≤ (1 2 αt ′+1 + α 1− α ) · (A2)2,\nE[‖Gt ′ ‖2 | wt ′ ] ≤ (αt ′+1 + 2\n1− α ) · (A2)2,\nholds for all t′ = 0, 1, . . . , t− 1 (induction hypothesis), then:\nE[‖Gt −Gtsyn‖2 | wt] =E[‖Aggr(h1, . . . ,hr,hr+1, . . . ,hB)−Aggr(h1, . . . ,hr,h′r+1, . . . 
,h′B)‖2 | wt] (b)\n≤E[ B∑\nb=r+1\n‖hb − h′b‖2 | wt]\n= B∑ b=r+1 E[‖ 1 N tb Ntb∑ i=1 (∇f(wtk ; zik)−∇f(wt; zik))‖2 | wt]\n(c) ≤ B∑\nb=r+1\nE[ 1\nN tb Ntb∑ i=1 ‖∇f(wtk ; zik)−∇f(wt; zik)‖2 | wt]\n(d) ≤ B∑\nb=r+1\nE[ 1\nN tb Ntb∑ i=1 L2‖wtk −wt‖2 | wt]\n= L2(B − r)\nN tb\nNtb∑ i=1 E[‖wtk −wt‖2 | wt]\n= L2(B − r)\nN tb\nNtb∑ i=1 E[‖ t−1∑ t′=tk η ·Gt ′ ‖2 | wt]\n(e) ≤ η 2L2(B − r)\nN tb\nNtb∑ i=1 E[(t− tk) t−1∑ t′=tk ‖Gt ′ ‖2 | wt]\n(f) ≤ η 2L2(B − r)\nN tb\nNtb∑ i=1 [(t− tk) t−1∑ t′=tk (αt ′+1 + 2 1− α ) · (A2)2]\n≤η 2L2(B − r)\nN tb\nNtb∑ i=1 [(t− tk) t−1∑ t′=tk (αt + 2 1− α ) · (A2)2]\n(g) ≤ (η2L2(B − r)τ2max) · (αt + 2\n1− α ) · (A2)2\n(h) ≤ 1 2 α · (αt + 2 1− α ) · (A2)2\n=( 1\n2 αt+1 +\nα\n1− α ) · (A2)2, (9)\nwhere (b) is derived based on the definition of stable aggregation function, (c) is derived based on Cauchy’s Inequality, (d) is derived based on Assumption 5, (e) is also derived based on Cauchy’s Inequality, (f) is derived based on induction hypothesis, (g) is derived based on that t− tk ≤ τmax, and (h) is derived based on that α = 2η2L2τ2max(B − r). Therefore,\nE[‖Gt‖2 | wt] =E[||Gtsyn + (Gt −Gtsyn)||2 | wt] (i)\n≤2 · E[‖Gtsyn‖2 | wt] + 2 · E[||Gt −Gtsyn||2 | wt] (j) ≤2 · (A2)2 + 2 · E[||Gt −Gtsyn||2 | wt] (k) ≤2 · (A2)2 + 2 · ( 1\n2 αt+1 +\nα\n1− α ) · (A2)2\n=(αt+1 + 2\n1− α ) · (A2)2, (10)\nwhere (i) is derived based on that ‖x + y‖2 ≤ 2‖x‖2 + 2‖y‖2, ∀x,y ∈ Rd, (j) is derived by the definition of (A1, A2)-effective aggregation function, and (k) is derived based on Inequality (9).\nBy Inequality (9) and (10), the claimed property also holds for t′ = t.\nIn conclusion, for all t = 0, 1, . . . , T − 1, we have:\nE[‖Gt −Gtsyn‖2 | wt] ≤ ( 1\n2 αt+1 +\nα\n1− α ) · (A2)2, (11)\nand E[‖Gt‖2 | wt] ≤ (αt+1 + 2\n1− α ) · (A2)2. (12)\nAlso, E[‖Gt‖ | wt]2 + V ar[‖Gt‖ | wt] = E[‖Gt‖2 | wt]. Therefore,\nE[‖Gt‖ | wt] = √ E[‖Gt‖ | wt]2 ≤ √ αt+1 + 2\n1− α ·A2. 
(13)\nWe have:\nη · E[∇F (wt)TGt | wt] =η · E[∇F (wt)TGtsyn | wt] + η · E[∇F (wt)T (Gt −Gtsyn) | wt] (l) ≥η · (‖∇F (wt)‖2 −A1) + η · E[∇F (wt)T (Gt −Gtsyn) | wt] ≥η · ‖∇F (wt)‖2 − η ·A1 − η · ‖∇F (wt)‖ · ‖E[(Gt −Gtsyn) | wt]‖\n(m)\n≥ η · ‖∇F (wt)‖2 − η ·A1 − η ·D · ‖E[(Gt −Gtsyn) | wt]‖ (n) ≥ η · ‖∇F (wt)‖2 − η ·A1 − η ·D · √ 1\n2 αt+1 +\nα\n1− α ·A2, (14)\nwhere (l) is derived based on the definition of (A1, A2)-effective aggregation function, (m) is derived by Assumption 3, and (n) is derived based on Inequality (11).\nCombining Inequalities (6), (12), (14) and taking total expectation, we have:\nE[F (wt+1)] ≤E[F (wt)]− η · E[‖∇F (wt)‖2] + η ·A1 + η ·D √ 1\n2 αt+1 +\nα\n1− α ·A2 +\n1 2 η2L(αt+1 + 2 1− α ) · (A2)2.\nBy telescoping, we have:\nη · T−1∑ t=0 E[‖∇F (wt)‖2] ≤{F (w0)− E[F (wT )]}+ 1 2 η2TL(α+ 2 1− α ) · (A2)2\n+ ηTA1 + ηTD · √ 1\n2 α+\nα\n1− α ·A2.\nDivide both sides of the equation by ηT , and let η = O( 1√ LT ):∑T−1 t=0 E[‖∇F (wt)‖2]\nT\n≤{F (w 0)− E[F (wT )]}\nηT +\n1 2 ηL(α+ 2 1− α ) · (A2)2 +A1 +D ·\n√ 1\n2 α+\nα\n1− α ·A2\n≤ √ L[F (w0)− F ∗]√\nT +\n√ L( 12α+ 1 1−α ) · (A2) 2\n√ T\n+A1 + α 1 2 [ 3− α 2(1− α) ] 1 2 ·DA2.\nNote that α = 2η2L2τ2max(B − r) = O ( Lτ2max(B−r)\nT ) , finally we have:∑T−1\nt=0 E[‖∇F (wt)‖2] T ≤O\n(√ L · [F (w0)− F ∗]√\nT\n) +O (√ L(A2)\n2(1 + α)√ T ) +O ( α 1 2DA2 ) +A1\n=O\n( L 1 2 [F (w0)− F ∗]\nT 1 2\n) +O ( L 1 2 τmax(B − r) 1 2DA2\nT 1 2\n)\n+O\n( L 1 2 (A2) 2\nT 1 2\n) +O ( L 5 2 (A2)\n2τ2max(B − r) T 3 2\n) +A1.\nSpecailly, when B = O(r), we have:∑T−1 t=0 E[‖∇F (wt)‖2]\nT ≤O\n( L 1 2 [F (w0)− F ∗]\nT 1 2\n) +O ( L 1 2 τmaxDA2r 1 2\nT 1 2\n)\n+O\n( L 1 2 (A2) 2\nT 1 2\n) +O ( L 5 2 (A2) 2τ2maxr\nT 3 2\n) +A1." }, { "heading": "B.6 PROOF OF PROPOSITION 2", "text": "Proof. 
Under the condition that ∀wt ∈ Rd, E[‖Gtsyn −∇F (wt)‖ | wt] ≤ D, we have:\nE[∇F (wt)TGtsyn | wt] = E[∇F (wt)T (∇F (wt) + (Gtsyn −∇F (wt))) | wt] = ‖∇F (wt)‖2 + E[∇F (wt)T (Gtsyn −∇F (wt)) | wt] ≥ ‖∇F (wt)‖2 − ‖∇F (wt)‖ · E[‖Gtsyn −∇F (wt)‖ | wt] ≥ ‖∇F (wt)‖2 −D ×D = ‖∇F (wt)‖2 −D2.\nCombining this with property (i) of the (A1, A2)-effective aggregation function, we have A1 ≤ D2." }, { "heading": "C MORE EXPERIMENTAL RESULTS", "text": "Figure 4, Figure 5 and Figure 6 illustrate the average training loss w.r.t. epochs when there are no Byzantine workers, 3 Byzantine workers, and 6 Byzantine workers, respectively. Please note that in Figure 5 and Figure 6, some curves do not appear because the value of the loss function is extremely large or even exceeds the range of floating-point numbers, due to the Byzantine attack. γ is the hyper-parameter for the assumed number of Byzantine workers in Kardam. The experimental results on training loss give further support to the experimental summary in Section 5." } ]
2,020
BASGD: BUFFERED ASYNCHRONOUS SGD
SP:e898ffa6bfdc1597ced0f9bd66c60ff9c6b4c383
[ "This paper investigates conditions under which communities of cooperative agents are stable. Communities in multi-round bargaining games with evolutionary dynamics are evaluated in three main setups. The first imposes no restrictions on the agents' behavior and is shown to be easily invaded by deceitful agents. The second enables agents to refuse to bargain with deceitful agents. Nevertheless, such communities are shown to be invadable. Finally, in the third setup, a global punishment system is shown to be able to drive out deceitful invaders. The main take-home message is that, when lying is an option, agents(' communities) need to be prepared for it. ", "This paper attempts to address a question in the emergent communication literature: what preserves / maintains the stability of emerged communication protocols. The authors manipulate the prevalence of lying behavior in a community of agents playing a variant of a Nash bargaining game. The main take-away is that explicit punishment, from the environment and from truth-tellers not wanting to communicate with liars, can prevent the spread of exploitative lying behavior in the community." ]
The emergence of language is a mystery. One dominant theory is that cooperation boosts language to emerge. However, as a means of giving out information, language seems not to be an evolutionarily stable strategy. Competing for survival advantage, animals are selfish in nature. From a Darwinian perspective, if an individual can obtain a higher benefit by deceiving the other party, why not deceive? For those who are cheated, once bitten and twice shy, cooperation will no longer be a good option. As a result, the motivation for communication, and with it the emergence of language, would perish. Then, what preserves the emergence of language? We aim to answer this question in a brand new framework combining an agent community, reinforcement learning, and natural selection. Empirically, we reveal that lying indeed dispels cooperation. Even with individual resistance to lying behaviors, liars can easily defeat truth tellers and survive natural selection. However, social resistance eventually constrains lying and makes the emergence of language possible.
[]
[ { "authors": [ "Jacob Andreas", "Dan Klein" ], "title": "Analogs of linguistic structure in deep representations", "venue": "In EMNLP,", "year": 2017 }, { "authors": [ "Ken Binmore", "Ariel Rubinstein", "Asher Wolinsky" ], "title": "The nash bargaining solution in economic modelling", "venue": "The RAND Journal of Economics,", "year": 1986 }, { "authors": [ "Kris Cao", "Angeliki Lazaridou", "Marc Lanctot", "Joel Z Leibo", "Karl Tuyls", "Stephen Clark" ], "title": "Emergent communication through negotiation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Rahma Chaabouni", "Eugene Kharitonov", "Emmanuel Dupoux", "Marco Baroni" ], "title": "Anti-efficient encoding in emergent communication", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Morten H Christiansen", "Simon Kirby" ], "title": "Language evolution: Consensus and controversies", "venue": "Trends in cognitive sciences,", "year": 2003 }, { "authors": [ "Gautier Dagan", "Dieuwke Hupkes", "Elia Bruni" ], "title": "Co-evolution of language and agents in referential games", "venue": "arXiv preprint arXiv:2001.03361,", "year": 2020 }, { "authors": [ "Abhishek Das", "Satwik Kottur", "José MF Moura", "Stefan Lee", "Dhruv Batra" ], "title": "Learning cooperative visual dialog agents with deep reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Lewis David" ], "title": "Convention: a philosophical study", "venue": null, "year": 1969 }, { "authors": [ "Iain Davidson" ], "title": "The archaeological evidence of language origins: States of art", "venue": "STUDIES IN THE EVOLUTION OF LANGUAGE,", "year": 2003 }, { "authors": [ "Terrence W Deacon" ], "title": "Universal grammar and semiotic constraints", "venue": "STUDIES IN THE EVOLUTION OF LANGUAGE,", "year": 2003 }, { "authors": [ "Terrence William Deacon" ], "title": "The symbolic species: The co-evolution of language and the brain", "venue": "Number 202. 
WW Norton & Company,", "year": 1998 }, { "authors": [ "Katrina Evtimova", "Andrew Drozdov", "Douwe Kiela", "Kyunghyun Cho" ], "title": "Emergent communication in a multi-modal, multi-step referential game", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "T Givon" ], "title": "Function, structure and language acquisition", "venue": "The crosslinguistic study of language acquisition,", "year": 2013 }, { "authors": [ "Talmy Givón", "Bertram F Malle" ], "title": "The evolution of language out of pre-language, volume 53", "venue": "John Benjamins Publishing,", "year": 2002 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Satwik Kottur", "José MF Moura", "Stefan Lee", "Dhruv Batra" ], "title": "Natural language does not emerge’naturally’in multi-agent dialog", "venue": "In EMNLP,", "year": 2017 }, { "authors": [ "Angeliki Lazaridou", "Alex Peysakhovich" ], "title": "Multi-agent cooperation and the emergence of (natural) language", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Angeliki Lazaridou", "Karl Moritz Hermann", "Karl Tuyls", "Stephen Clark" ], "title": "Emergence of linguistic communication from referential games with symbolic and pixel input", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Charles N Li", "Jean-Marie Hombert" ], "title": "On the evolutionary origin of language", "venue": "Advances in Consciousness Research,", "year": 2002 }, { "authors": [ "Fushan Li", "Michael Bowling" ], "title": "Ease-of-teaching and language structure from emergent communication", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Jay Meddin" ], "title": "Chimpanzees, symbols, and the reflective self", "venue": "Social Psychology Quarterly,", "year": 1979 }, { "authors": [ "Igor Mordatch", "Pieter Abbeel" ], "title": "Emergence of grounded compositional language in multi-agent populations", "venue": "In AAAI,", "year": 
2018 }, { "authors": [ "John Nash" ], "title": "Non-cooperative games", "venue": "Annals of mathematics,", "year": 1951 }, { "authors": [ "John F Nash Jr." ], "title": "The bargaining problem", "venue": "Econometrica: Journal of the econometric society,", "year": 1950 }, { "authors": [ "Martin A Nowak", "David C Krakauer" ], "title": "The evolution of language", "venue": "Proceedings of the National Academy of Sciences,", "year": 1999 }, { "authors": [ "Thomas C Schelling" ], "title": "The strategy of conflict. prospectus for a reorientation of game theory", "venue": "Journal of Conflict Resolution,", "year": 1958 }, { "authors": [ "Robert L Trivers" ], "title": "The evolution of reciprocal altruism", "venue": "The Quarterly review of biology,", "year": 1971 }, { "authors": [ "Ib Ulbaek" ], "title": "The origin of language and cognition", "venue": "Approaches to the Evolution of Language,", "year": 1998 }, { "authors": [ "John Von Neumann", "Oskar Morgenstern" ], "title": "Theory of games and economic behavior (commemorative edition)", "venue": "Princeton university press,", "year": 2007 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unveiling the principles behind the emergence and evolution of language is attractive and appealing to all. It is believed that this research field is of great significance for promoting the development of enabling agents to evolve an efficient communication protocol (Nowak & Krakauer, 1999; Kottur et al., 2017; Chaabouni et al., 2019) or acquire existing one (Li & Bowling, 2019), especially when interacting with humans. Previously, many studies have investigated some intriguing properties of language and their effects on the emergence of language (Andreas & Klein, 2017; Lazaridou et al., 2018; Mordatch & Abbeel, 2018). The motivation behind these is that human language is considered as a remarkable degree of structure and complexity (Givon, 2013) and each character is the result of evolution, thus they believe that understanding the language itself is an indispensable step to take. Unlike existing work, we, from a different perspective, focus on a fundamental question that what made the emergence of language possible during evolution.\nOne of the dominant theories in the community of emergent communication is: cooperation boosts language to emerge (Nowak & Krakauer, 1999; Cao et al., 2018). Hence, there has been a surge of work investigating this field in cooperative multi-agent (mostly two agents) referential games (Lazaridou & Peysakhovich, 2017; Kottur et al., 2017; Das et al., 2017; Evtimova et al., 2018; Lazaridou et al., 2018), a variant of the Lewis signaling game (David, 1969). However, they seem to miss some basic elements in the human language. On one hand, human language emerges from the community, not just two persons, after all, language is learnable and can spread from one place to other (Dagan et al., 2020). Studying a language in two-player games is like looking at the world through a keyhole. 
On the other hand, many works agree that, prior to the emergence of language, some pre-adaptations occurred in the hominid lineage, one candidate being the ability to use symbols (Deacon, 2003; Davidson, 2003; Christiansen & Kirby, 2003). Understanding the emergence of symbolic signals thus seems key to approaching the origin of language (Deacon, 1998). However, chimpanzees have demonstrated a degree of language capacity by using arbitrary symbols, as well as abilities for cross-modal association, abstract thought, and displacement of thought in time (Meddin, 1979). So why don’t they have a language like ours? One theory is that selfishness has kept animal communication at a minimum (Ulbaek, 1998). In more detail, if an individual can obtain a higher benefit by deceiving the other party in a cooperation, why not deceive? Once deception emerges, mistrust among individuals lingers. For those who are cheated, once bitten and twice shy, cooperation will no longer be a good option. As a result, the motivation for communication, as well as the demand for the emergence of language, will perish. But human beings are special: we have overcome this obstacle and evolved language. Then, what preserves the emergence of language? We aim to answer this question in a brand new framework of agent community, reinforcement learning (RL), and natural selection. We believe this process should occur in the pre-language period, since lying is possible as long as agents can communicate. Therefore, the communication protocol we investigate uses symbols to transmit meaning based on a social convention or implicit agreement.\nIn this paper, we introduce several agents to form one community and allow natural evolution and elimination among them. Both liars (agents that tell lies) and truth tellers (agents that always tell the truth) exist in the community.
In each tournament, every agent plays a non-cooperative game (Nash Jr, 1950; Nash, 1951; Schelling, 1958; Binmore et al., 1986; Von Neumann & Morgenstern, 2007) with every other agent. In our multi-round bargaining game, agents are required to reach an agreement about how many items to give out, so that the total quantity satisfies the market’s demand while each agent keeps its own loss to a minimum. We believe this is a better fit for the nature of human beings, and more common in the hominid lineage, than a purely cooperative game. Importantly, during the process of natural selection, the fractions of liars and truth tellers may change over time, which allows us to observe what factors influence the motivation for communication, the prerequisite for the emergence of language. It is worth noting that pre-language communication was subject to the constraints of Darwinian evolution, whereas linguistic change, which began in the post-language communicative era of hominid evolution, is by and large tied to society and culture (Givón & Malle, 2002; Li & Hombert, 2002). Thus, we disregard factors related to linguistic change, since we are investigating the motivation for communication from which language evolved.\nMoreover, apart from the normal setting mentioned above, we add two more rules to dig deeper. Firstly, we introduce a credit mechanism for truth tellers; in other words, we make sure truth tellers know of the existence of liars, which is one step in the evolution process. Specifically, every liar has a credit in the mind of each truth teller, and the credit varies with the truth teller's profit. Cooperation becomes impossible between two agents as soon as the credit drops below zero. Secondly, an additional penalty is introduced as a price of lying, which we consider as social pressure resisting lying behaviors.
All in all, we want to thoroughly investigate how individual or social resistance to lying affects communication.\nEmpirically, we show that in the normal setting, two truth tellers can make a fair agreement, while liars can achieve a huge advantage over truth tellers by telling lies. As for two liars, there is always a better liar that gains relatively more than the other. In the credit setting, liars can learn a sophisticated lying strategy that deceives the credit mechanism while making more profit. In both settings, as time goes on, truth tellers cannot compete with liars and thus die out. In the society setting, we find that liars are afraid of lying if the punishment is sufficiently large. This again supports the theory (Ulbaek, 1998) that in the human lineage, social cooperation based on obligatory reciprocal altruism (Trivers, 1971), as well as a system which punishes people morally and physically for cheating, has evolved. In such an environment, language is finally possible." }, { "heading": "2 EXPERIMENTAL FRAMEWORK", "text": "" }, { "heading": "2.1 GAME SETTINGS", "text": "We explore emergent language in the context of a multi-round non-cooperative bargaining game (Nash Jr, 1950; Nash, 1951), as illustrated in Figure 1. The core property is that a binding cooperative strategy with the highest profit is impossible, whereas a selfish strategy can sometimes achieve it. In this case, the behavior of telling lies can be meaningful, since it undermines cooperation and grabs more benefit from others.\nIn the game, two agents i and j bargain over how to satisfy the demand of a market. Agents are presented with N different items. They possess a fixed quantity of each item ({q^i_n}_{n=1}^N, {q^j_n}_{n=1}^N) and have their own hidden utilities for the N items ({u^i_n}_{n=1}^N, {u^j_n}_{n=1}^N). Agents move sequentially.
Suppose at bargaining round t it is agent i's turn to be the proposer, and it makes a proposal {p^i_{t,n}}_{n=1}^N about how many items to give out to the market, which means the other agent j should contribute the rest, {d_n − p^i_{t,n}}_{n=1}^N, where d_n is the market demand for item n. Then agent j chooses whether to accept the proposal. The game terminates when the agents reach an agreement about the proposal or the number of bargaining rounds reaches the upper limit T_max; agents i and j then receive rewards Σ_{n=1}^N (q^i_n − p^i_{t,n}) × u^i_n and Σ_{n=1}^N (q^j_n − d_n + p^i_{t,n}) × u^j_n respectively, or get zero when no agreement has been made in the end. To make rewards comparable, each is further normalized by the maximum reward achievable for that agent. Both agents want to keep as many items as possible rather than give them out. Therefore, agents must seek a trade-off between keeping more items and satisfying the market demand.\nWe now illustrate how the lying mechanism works, where lying means the proposal will not be followed by actions. Suppose agents i and j agree to satisfy the market demand at round t, and agent j lies about the proposal and then gives nothing to the market. The market is thus left with a demand gap of {p^j_{t,n}}_{n=1}^N, and it forces each agent to hand over half of the gap as remedy. Note that liars are allowed to lie about any items. Our settings conform to the principle of high risk, high return. To illustrate, when d_1 = 10 and agent j lies while offering p^j_{t,1} = 9, agent i is delighted and more likely to take the remaining d_1 − p^j_{t,1} = 1. Although agent j can easily put the other on the hook, it cannot gain a large advantage, since the final result is not much better than contributing evenly. On the contrary, taking p^j_{t,1} = 1 allows agent j to keep more items and obtain more profit; however, this proposal is less appealing to agent i.
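For concreteness, the payoff rule above can be sketched as follows. This is an illustrative reconstruction with hypothetical names, not the authors' code; it computes both agents' normalized rewards once an agreement on proposal p made by proposer i has been reached.

```python
def rewards(q_i, q_j, u_i, u_j, d, p):
    """Normalized rewards after an agreement on proposal p by proposer i.

    p[n] = units of item n that proposer i gives out; agent j contributes
    the remaining d[n] - p[n]. Each reward is normalized by the maximum
    achievable reward (keeping every item)."""
    n_items = len(d)
    r_i = sum((q_i[n] - p[n]) * u_i[n] for n in range(n_items))
    r_j = sum((q_j[n] - (d[n] - p[n])) * u_j[n] for n in range(n_items))
    max_i = sum(q * u for q, u in zip(q_i, u_i))
    max_j = sum(q * u for q, u in zip(q_j, u_j))
    return r_i / max_i, r_j / max_j
```

With a single item, q_i = q_j = 10, d = 10, an even split p = 5 gives both agents a normalized reward of 0.5, and any deviation shifts reward from one agent to the other.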
The key point is that we want to link attractive lies to relatively lower profits.\nAs agents take turns to play, the first mover has an absolute advantage, since the other one is compelled to accept an unfair proposal or get nothing. We once tried to convert our bargaining game from moving alternately to moving simultaneously to solve this problem. In detail, the two agents make proposals at the same time and their total quantity should cover the demand. However, it turns out that agents only learn to evenly divide the market demand. In this setting, no agent can guarantee an agreement, so the proposal made by an agent tends to be more conservative, meaning it is hard for an agent to infer the other's utilities during bargaining. Ultimately, we adopt the strategy (Cao et al., 2018) of randomly sampling T_max between 4 and 10 to mitigate the first-mover effect." }, { "heading": "2.2 GAME RULES", "text": "Natural Selection. We have claimed that it is more reasonable to study the emergence of language in a community, since natural selection played an essential role in the acquisition of new abilities. Twins, for instance, can sometimes develop cryptophasia, a language no one can understand except the twins. However, such a language is an incomplete work and will be eliminated by nature. In the community, there are M agents: some of them are liars and the rest are truth tellers. In each tournament, each agent plays the game with all other agents in the community. Natural selection occurs after every K tournaments. To be specific, natural elimination is reflected in that the agent with the lowest profit is weeded out from the community. As for genetic evolution, a new agent evolved from the elite fills the opening. The new agent is trained to learn its own policy by playing with the remaining agents, but it is initialized with the parameters of the agent with the highest profit.
In the meantime, the remaining agents are also allowed to be trained, for the purpose of adapting to the new agent.\nCredit Mechanism. Every truth teller has a credit record for each liar. At the beginning, truth tellers consider all other agents trustworthy, and credit records are initialized to zero. A credit record is altered every time a truth teller plays with a liar. To illustrate, when truth teller j plays a game with liar i and gets reward r_j, the credit record c_{ji} for i is incremented by 1 − exp(0.5 − r_j). In this setting, a truth teller cannot tolerate a profit of less than half the maximum, and it will realize that it might have been cheated, since cooperation can definitely achieve more. Also, it is much easier for a liar to lose trust than to acquire it. We want to reflect the reality that one lie can destroy a person, no matter how much truth he told before. In addition, a liar can observe its credit records from truth tellers when making decisions." }, { "heading": "2.3 NETWORK ARCHITECTURE AND TRAINING", "text": "At each round t, suppose it is agent i's turn to make a proposal. First, agent i obtains a partial observation o^i_t, which consists of four parts: the market demand d, its own utility u^i, the proposal p^j_{t−1} made in the previous round t − 1 by its opponent j, and the ID of j. An encoding network e^i(·), realized by a two-layer feedforward neural network, maps the observation o^i_t to an encoded vector h^i_t. Then, a termination network π^i_term(·), realized by a single-layer feedforward neural network, takes h^i_t as input and outputs a binary action v^i_t that determines whether agent i accepts the proposal p^j_{t−1}. If it accepts, the game ends and both agents receive their rewards based on the agreement; otherwise, agent i needs to give an alternative proposal. Truth tellers and liars differ in how they generate a proposal.
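As a concrete illustration, the credit update rule c_{ji} += 1 − exp(0.5 − r_j) described under Game Rules can be sketched as below; this is a minimal reconstruction with hypothetical names, not the authors' implementation.

```python
import math

def update_credit(credit, reward_truth_teller):
    """One credit update after a (batched) game between a truth teller and
    a liar, following c <- c + 1 - exp(0.5 - r)."""
    return credit + 1.0 - math.exp(0.5 - reward_truth_teller)
```

Because exp is convex, a reward below 0.5 (half of the attainable maximum) decreases credit by more than an equally sized surplus increases it, so trust is easier to lose than to gain, as the text intends.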
If agent i is a liar, it sends h^i_t into a local lying network π^i_lie(·) to output s^i_t, which indicates whether to tell lies about each item. Currently, the local lying network has a separate two-layer feedforward neural network for each item; however, it can also be instantiated by a single network that outputs decisions for all items. Next, agent i feeds h^i_t and s^i_t into a proposal network π^i_prop(·) to produce the proposal p^i_t. The proposal network also has a separate feedforward neural network for each item n, which outputs a distribution over {0, ..., q^i_n}. The proposal is sampled from these distributions. If agent i is a truth teller, s^i_t is set to 0 by default. After that, it is agent j's turn to move, and it repeats the procedure. Note that when the game ends, the liar determines whether to deceive about each item according to the output of its local lying network. The network architecture is illustrated in Figure 2.\nWhen introducing the credit mechanism, liar i additionally has a global lying network π^i_LIE that takes as inputs the credit records c_{ji} and the ID of its opponent j. The global lying network acts like a gate for the local lying network π^i_lie and outputs a binary variable f^i at the beginning of each game that determines whether to tell lies in the subsequent bargaining rounds. Unlike π^i_lie, which controls the agent at each round and focuses on achieving more profit in a single game, π^i_LIE takes the whole picture and aims to deceive the credit mechanism of truth tellers in order to gain more profit in the long run. It considers questions such as: what would happen if it lied in one game? Would others realize that it is a liar?\nDuring training, there are three kinds of policies, π_term, π_prop, and π_lie (if existing), to be updated for each agent, parameterized by θ_{π_term}, θ_{π_prop}, and θ_{π_lie}, respectively. For each game, each agent's policies are updated towards maximizing its own expected reward.
Suppose agent i plays with agent j; the objective of agent i is\nJ(θ_{π^i}) = E_{τ∼π^i,π^j}[R^i_j(τ)],   (1)\nwhere θ_{π^i} = {θ_{π^i_term}, θ_{π^i_prop}, θ_{π^i_lie}}, π^i = {π^i_term, π^i_prop, π^i_lie}, and R^i_j(τ) is the reward that agent i receives from trajectory τ when playing with agent j. The trajectory τ of one bargaining game is defined as {o_t, a_t = {v_t, s_t, p_t}, r_t}_{t=1}^T. Then, the gradient of the policies is computed by REINFORCE (Williams, 1992) and can be further derived as\n∇_{θ_{π^i}} J(θ_{π^i}) = E_{τ∼π^i,π^j}[R^i_j(τ) · Σ_{t=1}^T (∇_{θ_{π^i_term}} log π^i_term(v^i_t | o^i_t) + ∇_{θ_{π^i_lie}} log π^i_lie(s^i_t | o^i_t) + ∇_{θ_{π^i_prop}} log π^i_prop(p^i_t | o^i_t, s^i_t)) + λ ∇_{θ_{π^i}} H(π^i)],   (2)\nwhere λ is a hyper-parameter and H is an entropy regularizer to encourage exploration. As for π^i_LIE, the policy gradient is computed as\n∇_{θ_{π^i_LIE}} J(θ_{π^i_LIE}) = E_{π^i_LIE}[Σ_{k=1}^K Σ_{j=1, j≠i}^M G^i_{j,k} · ∇_{θ_{π^i_LIE}} log π^i_LIE(f^i_k | c_{ji,k}, j)],   (3)\nwhere G^i_{j,k} is the return of agent i playing with agent j starting from tournament k after the previous natural selection, i.e., G^i_{j,k} = Σ_{l=k}^K R^i_{j,l}." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 CAN LANGUAGE SURVIVE WITH NO RESTRICTIONS IN ONE COMMUNITY?", "text": "Experimental setting. In this experiment, we investigate what happens when there is nothing to counter lying behaviors in the community. There are 8 agents living in a community and trying to make profits by playing bargaining games with the others. Half of them are truth tellers; they do not keep credit records for liars, meaning they are unable to respond to lying behaviors. The rest are liars, which possess the local lying network but not the global one. Each agent has to play a game with everyone else in each tournament. Every 10 tournaments, the community is updated according to natural selection. We have two training phases. First, we train the 4 truth tellers for 200k episodes. In each episode, two random truth tellers are sampled to play the game.
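The score-function update behind Eq. (2) can be illustrated with a minimal tabular softmax policy. This is a toy sketch under simplifying assumptions (one action head, one step per trajectory), not the paper's neural networks; it shows the reward-weighted log-probability gradient plus the entropy bonus.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_step(theta, action, reward, lr=0.1, ent_coef=0.01):
    """One REINFORCE update for a tabular softmax policy:
    theta <- theta + lr * (reward * grad log pi(action) + ent_coef * grad H(pi))."""
    probs = softmax(theta)
    grad_logp = -probs.copy()
    grad_logp[action] += 1.0                       # grad of log pi(action) w.r.t. theta
    entropy = -(probs * np.log(probs)).sum()
    grad_ent = probs * (-np.log(probs) - entropy)  # grad of entropy H(pi) w.r.t. theta
    return theta + lr * (reward * grad_logp + ent_coef * grad_ent)
```

A positive reward increases the probability of the sampled action, while the entropy term pushes the policy toward uniformity (its gradient vanishes exactly at the uniform policy), mirroring the exploration regularizer λH(π) in Eq. (2).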
After all truth tellers have learned how to infer the intentions of others and make proper proposals, we begin to train liars for 200k episodes by sampling two random agents from the community to play the game in each episode. Note that at least one sampled agent is a liar, and the models of truth tellers are frozen in this phase. Each episode corresponds to a batch of 128 games.\nFor game settings, we have N = 3 different items. Market demand {d_n}_{n=1}^N and items in possession {q_n}_{n=1}^N are set to 10. Utilities {u_n}_{n=1}^N are sampled uniformly between 0 and 5 for each agent. For each game, T_max is uniformly sampled from 4 to 10.\nResults and analysis. Figure 3a shows the learning curves of all truth tellers in terms of reward in the first training phase. The 4 truth tellers converge to a similar level (around 0.64), demonstrating that they have learned to make a fair proposal that can be accepted by both parties. To some extent, their proposals show cooperation and constitute a win-win strategy, since they obtain more than 0.5 (i.e., the profit achieved by dividing items equally). Figures 3b and 3c show the learning curves of all agents in terms of reward in the second training phase: Figure 3b presents the games between liars and truth tellers, and Figure 3c the games between liars. From Figure 3b, the 4 liars perform better than truth tellers and reach above 0.75, while truth tellers only get around 0.45, which is lower than the equally dividing strategy. This shows that lying behaviors indeed help liars take a dominant position over truth tellers. The performance of liars shown in Figure 3c lies between the win-win strategy and the equally dividing strategy. It turns out there exists a liar that outperforms all other liars in liar vs. liar games. Figure 4 visualizes two examples of bargaining with learned policies, between truth tellers and between a truth teller and a liar.\nWe further analyze the change of the community under natural selection.
Figure 5 shows that as natural selection goes on, truth tellers are gradually wiped out and eventually liars take over the whole community. Liars are able to obtain more profit by bargaining with truth tellers, which fills the reward gap caused by liar vs. liar games. When liars are the only kind left in the community, their average rewards begin to decline, since they can no longer squeeze profit from truth tellers. The results suggest cooperation is ultimately impossible as long as lying behavior is possible in the community. Liars never try to cooperate with others, and they undermine the motivation for communication (lower and lower rewards). As many hypotheses state that cooperation is a prerequisite for the emergence of language (Cao et al., 2018; Nowak & Krakauer, 1999), we believe a more sophisticated language is hard to evolve in such a community. Our finding also corroborates the hypothesis that if it is better to lie than to tell the truth, why is all this elaborate coding of thoughts into speech necessary when the effective strategy is simply not to listen (Ulbaek, 1998)? This is evidenced by the fact that some liars get nearly the same as equal division, hence communication is not necessary, not to mention that it incurs extra cost in reality." }, { "heading": "3.2 HOW INDIVIDUAL RESISTANCE TO LYING AFFECTS THE EMERGENCE OF LANGUAGE?", "text": "Experimental setting. In the second experiment, we turn to the scenario where truth tellers counter lying behaviors on their own by refusing to bargain with agents they consider liars. We investigate whether individual countermeasures can suppress lying behaviors and are a necessity for language to emerge. In this setting, we still have 4 truth tellers and 4 liars in the community. Truth tellers keep credit records for all the liars; they will not play further games with those whose credit is negative until the next natural selection.
Both parties get reward 0 for games where one party is absent. The credit mechanism is designed to help truth tellers discern liars and punish them within their own ability. However, one agent may still get a low reward even if no one lies, due to the random utilities. To reduce such reward variance, truth tellers update credit records based on a batch of games (of size 16). Liars have the global lying network in addition to the local one, which gives them the opportunity to seek strategies that deceive the credit mechanism and keep their advantage. The global lying network is trained for 3k episodes with a batch size of 16. For the other networks, we use pre-trained models from the first experiment. The game settings are the same as in the first experiment.\nResults and analysis. Figure 6 illustrates an example of the learning curves of liar vs. truth teller in terms of average reward within one natural selection period. The liar converges to about 0.67, and the truth teller obtains an above-average reward of 0.57. Compared to the first experiment, the reward gap between truth teller and liar narrows. This is the consequence of the compromise between the credit mechanism and the global lying network. The global lying network assists the liar in deceiving the credit mechanism to some extent, since the liar achieves more than it would by always telling the truth (around 0.64). However, the liar still needs to sacrifice some benefit in order to gain enough trust from its opponent. As observed in the experiment, the liar has learned an effective strategy to accomplish this. Figure 7 shows the lying frequency in each tournament during the natural selection period. In the first 4 tournaments, the lying frequency is relatively low, since liars need to earn trust so that truth tellers will not identify them and refuse to play games at an early stage.
After accumulating enough trust, liars start to tell lies more often to make more profit. As the end of the natural selection period approaches, liars have less and less need to pretend, since the punishment imposed by truth tellers is meaningless at the end of the period. This explains why the lying frequency gradually increases and is relatively high after 4 tournaments.
The competition becomes fiercer since the gap in average rewards between agents is smaller than in the previous experiment. Nevertheless, a similar phenomenon in natural selection can be observed in Figure 8, i.e., liars gradually take over the community, though the process is slower than in the previous experiment. The process is slower because the truth teller is not always ranked at the bottom of the list in terms of rewards. However, as weaker liars are replaced by the elite liar through natural selection, truth tellers progressively decline to the bottom and are eventually wiped out from the community. Once all agents in the community are liars, the same conclusion about the emergence of language can be drawn as in the first experiment. In addition, the results indicate that the credit mechanism is effective in protecting truth tellers to some extent, but liars can still find a way to overcome it. Furthermore, individuals are unable to impose enough force to suppress lying behaviors. Therefore, we claim that it is not easy to resist lying behaviors relying solely on individuals, and a more compelling force is needed to create a harmonious cooperative environment." }, { "heading": "3.3 HOW SOCIAL RESISTANCE TO LYING AFFECTS THE EMERGENCE OF LANGUAGE?", "text": "Experimental setting. In the final experiment, we seek a more severe punishment to resist lying behaviors. This is reminiscent of how social organizations, e.g., enterprises or governments, penalize anyone who breaks rules, e.g., breaches a contract, by imposing a fine or imprisonment.
Inspired by this, we introduce two kinds of artificial pressure on liars to counterbalance their lying behaviors, mimicking the two types of punishment mentioned above. One is designed for the local lying mechanism: a penalty α × |lies| is directly added to the reward function to disfavor lying behaviors, where α is a hyperparameter reflecting the punishment strength in reality and |lies| is the number of items a liar lies about in one game. The other is for the global lying mechanism: every time the reward gap between two agents exceeds a threshold (i.e., 0.2), the system assumes lying behavior has occurred, and as punishment the liar is banned from the next Lprison games with the same player. However, in order not to damage the interests of truth tellers, the system enforces equal division for canceled games, with both sides getting reward 0.5 instead. The training procedure and game settings are the same as in the first experiment.
Results and analysis. Figure 9 shows the learning curves for different α in terms of lying frequency. We find that when α is below the threshold of 0.02, it has little influence on resisting lying behaviors. When α exceeds 0.02 and continues to increase (i.e., more and more fines), the lying frequency gradually decreases to zero, and the rewards of both truth tellers and liars converge to a similar level, showing that they begin to cooperate, as illustrated in Figure 10. In addition, a similar pattern is observed in Figure 11: if the prison time Lprison is long enough, the global lying network tends to always output zero. This shows that liars turn into truth tellers as long as the system exerts sufficiently severe punishment.
In this experiment, our results show that artificial penalties inspired by existing social rules can fundamentally restrain lying behaviors, in contrast with individual forces.
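The two pressures above can be sketched in a few lines. This is a hypothetical illustration (function and parameter names are ours, not from the paper's code): the local pressure subtracts the fine α × |lies| from the raw reward, and the global pressure flags any game whose reward gap exceeds the threshold, which then triggers the ban for the next Lprison games.

```python
def local_penalty(reward, num_lies, alpha=0.05):
    """Local pressure: subtract the fine alpha * |lies| from the raw reward.
    alpha is the punishment strength; num_lies is how many items were lied about."""
    return reward - alpha * num_lies

def lying_detected(reward_a, reward_b, gap_threshold=0.2):
    """Global pressure: the system infers that lying occurred whenever the
    reward gap between the two agents exceeds the threshold."""
    return abs(reward_a - reward_b) > gap_threshold
```

For example, with α = 0.02 and three lies in a game, a raw reward of 0.67 is reduced to 0.61; a 0.67 vs. 0.33 split trips the global detector, while a 0.55 vs. 0.45 split does not.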
With the help of strict social pressure, agents in the community are willing to cooperate with each other honestly. This is the premise of the hypothesis that cooperation boosts the emergence of language (Nowak & Krakauer, 1999; Cao et al., 2018). Only in such an environment is language finally possible." }, { "heading": "4 DISCUSSION", "text": "We found that language cannot be preserved when some agents are endowed with the ability to lie in a community with natural selection. When two agents solve a simple non-cooperative task by bargaining, agents that are not prepared for lying behaviors easily fall into a very unfavorable situation. In our simulations, most of the agents in the community adapt to lying during a long evolutionary process, so that cooperation is no longer a good option and there is no intention to communicate. This motivates us to unearth what preserves language from lying behaviors. Drawing on existing human social behaviors and rules, we design the credit mechanism and artificial penalties, reflecting individual and social resistance to lying behaviors in reality, to investigate how they affect the emergence of language. Follow-up experiments suggest the limitations of individual resistance: individuals are unable to obtain full information to determine whether their opponents are lying, which leads to more conservative sanctions that do not effectively suppress lying behaviors. In contrast, the social system has access to all information. Thus, it can impose more accurate and severe punishment directly on lying behavior and maintain a system with obligatory reciprocal altruism, which provides the soil for the evolution of language. Based on the results above, we hypothesize that social resistance, rather than individual resistance, is the key to preserving language. Besides, each agent is realized by neural networks.
The results are stable across hyper-parameters and different kinds of bargaining games, since we also tried the settings in (Cao et al., 2018).
From the perspective of artificial intelligence, our results stress the importance of controlling the external environment of language emergence. Enabling intelligent agents to communicate with humans has always been one of the ultimate goals of the AI community. If we want to achieve this, it is not enough just to teach agents human language. Unlike unemotional machines, humans can convey meanings different from the literal one, reflected in acts such as lying and joking. We want to emphasize that it is impossible to evolve language if agents are unaware of such human behaviors, and an existing language can be vulnerable when facing them.
Creating an environment conducive to cooperation thus seems particularly significant. We also present two proof-of-concept examples of how to maintain a favorable environment for the emergence of language by directly penalizing lying behaviors in reward functions or banning liars from games. In future work, we will look for less ad hoc ways of providing good soil for language to thrive and investigate more precisely how agents develop such resisting mechanisms naturally." } ]
2020
null
SP:c3bdf7ffa026668d98d241b72ee14e2a3510a7d9
[ "This paper presents a satisfying solution to the open problem of how to train all tasks at approximately the same rate in multi-task learning. There has been a bunch of work on this problem in the last few years. This paper characterizes existing work w.r.t. the fairness of training across tasks in order to motivate two new methods, one applied to shared parameters and the other to task-specific parameters, which overcome the shortcomings of previous methods. The two new methods can be naturally combined to yield a complete method for fair training. Experiments on common MTL benchmarks show the new method compares quite favorably to previous approaches.", "The authors propose to balance multi-task training using IMTL-G on the shared backbone and IMTL-L on the task-specific branches. IMTL-G enforces equal gradient projections between tasks with a closed-form formulation to calculate the desired gradient weightings $\\alpha$. IMTL-L learns the loss weightings $e^s$ with a regularization term $-s$. An additional constraint making all loss weightings sum to one is used. The paper compares the effectiveness of the proposed IMTLs with their counterparts on Cityscapes, NYUv2, and CelebA and claims state-of-the-art performance." ]
Multi-task learning (MTL) has been widely used in representation learning. However, naïvely training all tasks simultaneously may lead to the partial training issue, where specific tasks are trained more adequately than others. In this paper, we propose to learn multiple tasks impartially. Specifically, for the task-shared parameters, we optimize the scaling factors via a closed-form solution, such that the aggregated gradient (sum of raw gradients weighted by the scaling factors) has equal projections onto individual tasks. For the task-specific parameters, we dynamically weigh the task losses so that all of them are kept at a comparable scale. Further, we find the above gradient balance and loss balance are complementary and thus propose a hybrid balance method to further improve the performance. Our impartial multi-task learning (IMTL) can be end-to-end trained without any heuristic hyper-parameter tuning, and is general enough to be applied to all kinds of losses without any distribution assumption. Moreover, our IMTL can converge to similar results even when the task losses are designed to have different scales, and thus it is scale-invariant. We extensively evaluate our IMTL on the standard MTL benchmarks including Cityscapes, NYUv2 and CelebA. It outperforms existing loss weighting methods under the same experimental settings.
[ { "affiliations": [], "name": "Liyang Liu" }, { "affiliations": [], "name": "Yi Li" }, { "affiliations": [], "name": "Zhanghui Kuang" }, { "affiliations": [], "name": "Jing-Hao Xue" }, { "affiliations": [], "name": "Yimin Chen" }, { "affiliations": [], "name": "Wenming Yang" }, { "affiliations": [], "name": "Qingmin Liao" }, { "affiliations": [], "name": "Wayne Zhang" } ]
[ { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sumanth Chennupati", "Ganesh Sistu", "Senthil Yogamani", "Samir A Rawashdeh" ], "title": "Multinet++: Multi-stream feature aggregation and geometric loss strategy for multi-task learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Theodoros Evgeniou", "Massimiliano Pontil" ], "title": "Regularized multi–task learning", "venue": "In Proceedings of the tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp", "year": 2004 }, { "authors": [ "Yuan Gao", "Jiayi Ma", "Mingbo Zhao", "Wei Liu", "Alan L Yuille" ], "title": "Nddr-cnn: Layerwise feature 
fusing in multi-task cnns by neural discriminative dimensionality reduction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yuan Gao", "Haoping Bai", "Zequn Jie", "Jiayi Ma", "Kui Jia", "Wei Liu" ], "title": "Mtl-nas: Task-agnostic neural architecture search towards general-purpose multi-task learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Michelle Guo", "Albert Haque", "De-An Huang", "Serena Yeung", "Li Fei-Fei" ], "title": "Dynamic task prioritization for multitask learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Pengsheng Guo", "Chen-Yu Lee", "Daniel Ulbricht" ], "title": "Learning to branch for multi-task learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32nd International Conference on Machine Learning - Volume", "year": 2015 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Shikun Liu", "Edward Johns", "Andrew J 
Davison" ], "title": "End-to-end multi-task learning with attention", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Jiasen Lu", "Vedanuj Goswami", "Marcus Rohrbach", "Devi Parikh", "Stefan Lee" ], "title": "12-in-1: Multi-task vision and language representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Arun Mallya", "Dillon Davis", "Svetlana Lazebnik" ], "title": "Piggyback: Adapting a single network to multiple tasks by learning to mask weights", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Kevis-Kokitsi Maninis", "Ilija Radosavovic", "Iasonas Kokkinos" ], "title": "Attentive single-tasking of multiple tasks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ishan Misra", "Abhinav Shrivastava", "Abhinav Gupta", "Martial Hebert" ], "title": "Cross-stitch networks for multi-task learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Chao Peng", "Tete Xiao", "Zeming Li", "Yuning Jiang", "Xiangyu Zhang", "Kai Jia", "Gang Yu", "Jian Sun" ], "title": "Megdet: A large mini-batch object detector", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Learning multiple visual domains with residual adapters", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ 
"Sebastian Ruder" ], "title": "An overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Sebastian Ruder", "Joachim Bingel", "Isabelle Augenstein", "Anders Søgaard" ], "title": "Latent multi-task architecture learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Multi-task learning as multi-objective optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nathan Silberman", "Derek Hoiem", "Pushmeet Kohli", "Rob Fergus" ], "title": "Indoor segmentation and support inference from rgbd images", "venue": "In European Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Trevor Standley", "Amir R Zamir", "Dawn Chen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Which tasks should be learned together in multi-task learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Gjorgji Strezoski", "Nanne van Noord", "Marcel Worring" ], "title": "Many task learning with task routing", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Tianhe Yu", "Saurabh Kumar", "Abhishek Gupta", "Sergey Levine", "Karol Hausman", "Chelsea Finn" ], "title": "Gradient surgery for multi-task learning", "venue": "arXiv preprint arXiv:2001.06782,", "year": 2020 }, { "authors": [ "Amir R Zamir", "Alexander Sax", "William Shen", "Leonidas J Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Amir R Zamir", "Alexander Sax", "Nikhil Cheerla", "Rohan Suri", "Zhangjie Cao", "Jitendra Malik", "Leonidas J Guibas" ], "title": "Robust learning 
through cross-task consistency", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yu Zhang", "Qiang Yang" ], "title": "A survey on multi-task learning", "venue": "arXiv preprint arXiv:1707.08114,", "year": 2017 }, { "authors": [ "Hengshuang Zhao", "Jianping Shi", "Xiaojuan Qi", "Xiaogang Wang", "Jiaya Jia" ], "title": "Pyramid scene parsing network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent deep networks in computer vision can match or even surpass human beings on some specific tasks separately. However, in reality multiple tasks (e.g., semantic segmentation and depth estimation) must be solved simultaneously. Multi-task learning (MTL) (Caruana, 1997; Evgeniou & Pontil, 2004; Ruder, 2017; Zhang & Yang, 2017) aims at sharing the learned representation among tasks (Zamir et al., 2018) to make them benefit from each other and achieve better results and stronger robustness (Zamir et al., 2020). However, sharing the representation can lead to a partial learning issue: some specific tasks are learned well while others are overlooked, due to the different loss scales or gradient magnitudes of various tasks and the mutual competition among them. Several methods have been proposed to mitigate this issue either via gradient balance such as gradient magnitude normalization (Chen et al., 2018) and Pareto optimality (Sener & Koltun, 2018), or loss balance like homoscedastic uncertainty (Kendall et al., 2018). Gradient balance can evenly learn task-shared parameters while ignoring task-specific ones. Loss balance can prevent MTL from being biased in favor of tasks with large loss scales but cannot ensure the impartial learning of the shared parameters. In this work, we find that gradient balance and loss balance are complementary, and combining the two balances can further improve the results. To this end, we propose impartial MTL (IMTL) via simultaneously balancing gradients and losses across tasks.
For gradient balance, we propose IMTL-G(rad) to learn the scaling factors such that the aggregated gradient of task-shared parameters has equal projections onto the raw gradients of individual tasks (see Fig. 1 (d)). We show that the scaling factor optimization problem is equivalent to finding the angle bisector of gradients from all tasks in geometry, and derive a closed-form solution to it. 
In contrast with previous gradient balance methods such as GradNorm (Chen et al., 2018), MGDA (Sener & Koltun, 2018) and PCGrad (Yu et al., 2020), which have learning biases in favor of tasks with gradients close to the average gradient direction, those with small gradient magnitudes, and those with large gradient magnitudes, respectively (see Fig. 1 (a), (b) and (c)), in our IMTL-G task-shared parameters can be updated without bias to any task.
For loss balance, we propose IMTL-L(oss) to automatically learn a loss weighting parameter for each task so that the weighted losses have comparable scales and the effect of different loss scales from various tasks can be canceled out. Compared with uncertainty weighting (Kendall et al., 2018), which has biases towards regression tasks rather than classification tasks, our IMTL-L treats all tasks equivalently without any bias. Besides, we model the loss balance problem from the optimization perspective without the distribution assumption required by (Kendall et al., 2018). Therefore, ours is more general and can be used with any kind of loss. Moreover, the loss weighting parameters and the network parameters can be jointly learned in an end-to-end fashion in IMTL-L.
Further, we find the above two balances are complementary and can be combined to improve the performance. Specifically, we apply IMTL-G on the task-shared parameters and IMTL-L on the task-specific parameters, leading to the hybrid balance method IMTL. Our IMTL is scale-invariant: the model can converge to similar results even when the same task is designed to have different loss scales, which is common in practice. For example, the cross-entropy loss in semantic segmentation may have different scales when using "average" or "sum" reduction over locations in the loss computation. We empirically validate that our IMTL is more robust against heavy loss scale changes than its competitors. 
Meanwhile, our IMTL adds only negligible computational overhead.
We extensively evaluate our proposed IMTL on standard benchmarks: Cityscapes, NYUv2 and CelebA, where the experimental results show that IMTL achieves superior performances under all settings. Besides, considering that a fair and practical benchmark for comparing MTL methods is lacking, we unify the experimental settings such as image resolution, data augmentation, network structure, learning rate and optimizer option. We re-implement and compare with the representative MTL methods in a unified framework, which will be publicly available. Our contributions are:
• We propose a novel closed-form gradient balance method, which learns task-shared parameters without any task bias; and we develop a general learnable loss balance method, where no distribution assumption is required and the scale parameters can be jointly trained with the network parameters.
• We unveil that gradient balance and loss balance are complementary and accordingly propose a hybrid balance method to simultaneously balance gradients and losses.
• We validate that our proposed IMTL is loss scale-invariant and is more robust against loss scale changes compared with its competitors, and we give in-depth theoretical and experimental analyses on its connections and differences with previous methods.
• We extensively verify the effectiveness of our IMTL. For fair comparisons, a unified codebase will also be publicly available, where more practical settings are adopted and stronger performances are achieved compared with existing code-bases." }, { "heading": "2 RELATED WORK", "text": "Recent advances in MTL mainly come from two aspects: network structure improvements and loss weighting developments. Network-structure methods based on soft parameter-sharing usually lead to high inference cost (reviewed in Appendix A). Loss weighting methods find loss weights by which the raw losses are multiplied for model optimization. 
They employ a hard parameter-sharing paradigm (Ruder, 2017), where several light-weight task-specific heads are attached upon the heavy-weight task-agnostic backbone. There are also efforts that learn to group tasks and branch the network in the middle layers (Guo et al., 2020; Standley et al., 2020), which try to achieve a better accuracy-efficiency trade-off and can be seen as semi-hard parameter-sharing. We believe task grouping and loss weighting are orthogonal and complementary directions to facilitate multi-task learning and can benefit from each other. In this work we focus on loss weighting methods, which are the most economical as almost all of the computations are shared across tasks, leading to high inference speed. Task Prioritization (Guo et al., 2018) weights task losses by their difficulties to focus on the harder tasks during training. Uncertainty weighting (Kendall et al., 2018) models the loss weights as data-agnostic task-dependent homoscedastic uncertainty; loss weighting is then derived from maximum likelihood estimation. GradNorm (Chen et al., 2018) learns the loss weights to enforce the norms of the scaled gradients of all tasks to be close. MGDA (Sener & Koltun, 2018) casts multi-task learning as multi-objective optimization and finds the minimum-norm point in the convex hull composed of the gradients of multiple tasks; Pareto optimality is supposed to be achieved under mild conditions. GLS (Chennupati et al., 2019) instead uses the geometric mean of task-specific losses as the target loss; we will show that it actually weights each loss by its reciprocal. PCGrad (Yu et al., 2020) avoids interference between tasks by projecting the gradient of one task onto the normal plane of the other. DSG (Lu et al., 2020) dynamically makes a task "stop or go" according to its convergence state; a stopped task is updated only once in a while. 
Although many loss weighting methods have been proposed, they are seldom open-sourced and rarely compared thoroughly under practical settings where strong performances are achieved, which motivates us to give an in-depth analysis and a fair comparison of them." }, { "heading": "3 IMPARTIAL MULTI-TASK LEARNING", "text": "In MTL, we map a sample x ∈ X to its labels {yt ∈ Yt}t∈[1,T] of all T tasks through multiple task-specific mappings {ft : X → Yt}. In most loss weighting methods, the hard parameter-sharing paradigm is employed, such that ft is parameterized by heavy-weight task-shared parameters θ and light-weight task-specific parameters θt. All tasks take the same shared intermediate feature z = f(x; θ) as input, and the t-th task head outputs the prediction as ft(x) = ft(z; θt). We aim to find the scaling factors {αt} for all T task losses {Lt(ft(x), yt)}, so that the weighted sum loss L = Σt αt Lt can be optimized to make all tasks perform well. This poses great challenges because: 1) losses may have distinct forms such as cross-entropy loss and cosine similarity; 2) the dynamic ranges of losses may differ by orders of magnitude. In this work, we propose a hybrid solution for both the task-shared parameters θ and the task-specific parameters {θt}, as shown in Fig. 2.
3.1 GRADIENT BALANCE: IMTL-G
For task-shared parameters θ, we can receive T gradients {gt = ∇θLt} via back-propagation from all of the T raw losses {Lt}, and these gradients represent optimal update directions for individual tasks. As the parameters θ can only be updated with a single gradient, we should compute an aggregated gradient g as a linear combination of {gt}. This also amounts to finding the scaling factors {αt} of the raw losses {Lt}, since g = Σt αt gt = ∇θL = ∇θ(Σt αt Lt). Motivated by the principle of balance among tasks, we propose to make the projections of g onto {gt} equal, as in Fig. 1 (d). 
In this way, we treat all tasks equally so that they progress at the same speed and none is left behind.
Algorithm 1 Training by Impartial Multi-task Learning
Input: input sample x, task-specific labels {yt} and learning rate η
Output: task-shared/-specific parameters θ/{θt}, scale parameters {st}
1: compute task-shared feature z = f(x; θ)
2: for t = 1 to T do
3:   compute task prediction by head network ft(x) = ft_net(z; θt)
4:   compute raw loss by loss function Lt_raw = Lt_func(ft(x), yt)
5:   compute scaled loss Lt = b a^{st} Lt_raw − st (default a = e, b = 1)  ▷ loss balance
6:   compute gradient of shared feature z: gt = ∇z Lt
7:   compute unit-norm gradient ut = gt / ‖gt‖
8: end for
9: compute gradient differences D^T = [g1^T − g2^T, · · · , g1^T − gT^T]
10: compute unit-norm gradient differences U^T = [u1^T − u2^T, · · · , u1^T − uT^T]
11: compute scaling factors for tasks 2 to T: α_{2:T} = g1 U^T (D U^T)^{−1}  ▷ gradient balance
12: compute scaling factors for all tasks: α = [1 − 1 α_{2:T}^T, α_{2:T}]
13: update task-shared parameters θ = θ − η ∇θ(Σt αt Lt)
14: for t = 1 to T do
15:   update task-specific parameters θt = θt − η ∇θt Lt
16:   update loss scale parameter st = st − η ∂Lt/∂st
17: end for
Formally, let {ut = gt / ‖gt‖} denote the unit-norm vectors of {gt}, which are row vectors; then we have:
g u1^T = g ut^T ⇔ g (u1 − ut)^T = 0, ∀ 2 ≤ t ≤ T. (1)
The above problem is under-determined, but we can obtain closed-form results for {αt} by constraining Σt αt = 1. Assume α = [α2, · · · , αT], U^T = [u1^T − u2^T, · · · , u1^T − uT^T], D^T = [g1^T − g2^T, · · · , g1^T − gT^T] and 1 = [1, · · · , 1]; from Eq. (1) we can obtain:
α = g1 U^T (D U^T)^{−1}. (IMTL-G) (2)
The detailed derivation is in Appendix B.1. After obtaining α, the scaling factor of the first task can be computed as α1 = 1 − 1α^T since Σt αt = 1. The optimized {αt} are used to compute L = Σt αt Lt, which is ultimately minimized by SGD to update the model. 
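Equation (2) and steps 9–12 of Alg. 1 can be sketched directly in NumPy. The function below is a minimal reimplementation of the closed-form solution (a sketch, not the authors' code): given row-stacked task gradients, it returns α with Σt αt = 1, and the aggregated gradient then has equal projections onto every task's unit-norm gradient by construction.

```python
import numpy as np

def imtl_g_scaling(grads):
    """Closed-form IMTL-G scaling factors (Eq. 2).

    grads: (T, d) array whose row t is the gradient g_t of task t
    (in practice the gradient of the shared feature z, as in Alg. 1).
    Returns alpha of shape (T,) with alpha.sum() == 1.
    """
    units = grads / np.linalg.norm(grads, axis=1, keepdims=True)  # u_t
    D = grads[0] - grads[1:]   # rows: g_1 - g_t for t = 2..T
    U = units[0] - units[1:]   # rows: u_1 - u_t for t = 2..T
    # alpha_{2:T} = g_1 U^T (D U^T)^{-1}
    alpha_rest = grads[0] @ U.T @ np.linalg.inv(D @ U.T)
    return np.concatenate([[1.0 - alpha_rest.sum()], alpha_rest])
```

The aggregated gradient is then g = alpha @ grads, and g · u_t is the same scalar for every task t (up to numerical error), which is exactly the balance condition of Eq. (1).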
So far, back-propagation needs to be executed T times to obtain the gradient of each task loss with respect to the heavy-weight task-shared parameters θ, which is time-consuming and non-scalable. We therefore replace the parameter-level gradients {gt = ∇θLt} with feature-level gradients {∇z Lt} when computing {αt}. This means we achieve gradient balance with respect to the last shared feature z as a surrogate for the task-shared parameters θ, since the network can back-propagate this balance all the way through the task-shared backbone starting from z. This relaxation allows us to back-propagate through the backbone only once after obtaining {αt}, and thus the training time is dramatically reduced." }, { "heading": "3.2 LOSS BALANCE: IMTL-L", "text": "For the task-specific parameters {θt}, we cannot employ the IMTL-G described above, because ∇θt Lτ = 0, ∀ t ≠ τ, and thus only the gradient of the corresponding task, ∇θt Lt, can be obtained for each θt. Instead, we propose to balance the losses among tasks by forcing the scaled losses {αt Lt} to be constant for all tasks; without loss of generality, we take the constant to be 1. The most direct idea is then to compute the scaling factors as {αt = 1/Lt}, but these are sensitive to outlier samples and manifest severe oscillations, so we further propose to learn the loss scales via gradient descent, which achieves stronger stability. Suppose the positive losses {Lt > 0} are to be balanced; we first introduce a mapping function h : R → R+ to transform the arbitrarily-ranged learnable scale parameters {st} into positive scaling factors {h(st) > 0} (hereafter we drop the subscript t for brevity). Then we should construct an appropriate scaled loss g(s) so that both the network parameters θ and the scale parameter s can be optimized by minimizing g(s). On one hand, we balance different tasks by encouraging the scaled loss h(s)L(θ) to be 1 for all tasks, so the optimum s* 
of s is achieved when h (s)L (θ) = 1, or equivalently:\nf (s) ≡ h (s)L (θ)− 1 = 0, if s = s?. (3)\nOne may expect to minimize |f (s)| = |h (s)L (θ)− 1| to find s?, however when h (s)L (θ) < 1, the gradient with respect to θ, ∇θ |f (s)| = −h (s)∇θL (θ), is in the opposite direction. On the other hand, assume our scaled loss g (s) is a differentiable convex function with respect to s, then its minimum is achieved if and only if s = s?, where the derivative of g (s) is zero:\ng′ (s) = 0, if s = s?. (4)\nFrom Eq. (3) and (4) we find that the values of f (s) and g′ (s) are both 0 when s = s?, we can then regard f (s) as the derivative of g (s), which is our target scaled loss and used to optimize both the network parameters θ and loss scale parameter s, then we have:\ng′ (s) = f (s)⇔ g (s) = ∫ f (s) ds = L (θ) ∫ h (s) ds− s. (5)\nFrom Eq. (3) and (5), we notice that both h (s) and ∫ h (s) ds denote loss scales, so we have∫\nh (s) ds = Ch (s), where C > 0 is a constant. According to ordinary differential equation,∫ h (s) ds must be the exponential function: ∫ h (s) ds = bas with a > 1, b > 0 (see Appendix B.2). We then have g′′ (s) = kas, k > 0, which is always positive and verifies our assumption about the convexity of g (s). Also note that the gradient of g (s) with respect to θ, ∇θg (s)=∫ h (s) ds∇θL (θ) = bas∇θL (θ), is in the appropriate direction since bas > 0. As an instantiation,\nwe set ∫ h (s) ds = es (a = e, b = 1), then\ng (s) = esL (θ)− s, (IMTL-L). (6)\nFrom Eq. (6) we find that the raw loss is scaled by es, and −s acts as a regularization to avoid the trivial solution s = −∞ while minimizing the scaled loss g (s). As for implementation, the task losses {Lt} are scaled by {est}, and the scaled losses {estL− st} are used to update both the network parameters θ, {θt} and the scale parameters {st}." 
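To see how Eq. (6) balances a task, note that gradient descent on s minimizes g(s) = e^s·L − s, whose stationary point satisfies e^s·L = 1. A minimal numeric sketch (the raw loss value and learning rate are illustrative; in real training L changes with θ at every step):

```python
import math

L = 4.0           # a fixed raw task loss (illustrative)
s, lr = 0.0, 0.1  # learnable scale parameter and step size
for _ in range(2000):
    grad_s = math.exp(s) * L - 1.0   # d/ds [e^s L - s], from Eq. (6)
    s -= lr * grad_s
scaled = math.exp(s) * L             # converges to the balanced target 1
```

At convergence s = −log L, so every task's scaled loss sits at the same value regardless of its raw scale.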
}, { "heading": "3.3 HYBRID BALANCE: IMTL", "text": "We have introduced IMTL-G/IMTL-L to achieve gradient/loss balance, and both of them produce scaling factors to be applied on the raw losses. They can be used solely, but we find them complementary and able to be combined to improve the performance. In IMTL-G, even if the raw losses are multiplied by arbitrary (maybe different among tasks) positive factors, the direction of the aggregated gradient g stays unchanged. Because by definition g = ∑ t αtgt is the angular bisector of the gradients {gt}, and positive scaling will not change the directions of {gt} and thus that of g (proof in Theorem 2). So we can also obtain the scale factors {αt} in IMTL-G with the losses that have been scaled by {st} from IMTL-L. IMTL-G and IMTL-L are combined as: 1) the taskspecific parameters {θt} and scale parameters {st} are updated by scaled losses {estLt − st}; 2) the task-shared parameters θ are updated by ∑ t αt (e\nstLt) which is the weighted average of {estLt}, with the weights {αt} computed by {∇z (estLt)} using IMTL-G. Note that the regularization terms {−st} in Eq. (6) are constants with respect to θ and z, and thus can be ignored when computing gradients and updating parameters in IMTL-G. In this way, we achieve both gradient balance for task-shared parameters and loss balance for task-specific parameters, leading to our full IMTL as illustrated in Alg. 1." }, { "heading": "4 DISCUSSION", "text": "We draw connections between our method and previous state-of-the-arts 1 in Fig. 3. We will show that previous methods can all be categorized as gradient or loss balance, and thus each of them can be seen as a specification of our method. 
However, all of them have some intrinsic biases or short-comings leading to inferior performances, which we try to overcome.\n1Our analysis of PCGrad (Yu et al., 2020) can be found in Appendix C.3.\nGradNorm (Chen et al., 2018) balances tasks by making the norm of the scaled gradient for each task to be approximately equal. It also introduces the inverse training rate and a hyper-parameter γ to control the strength of approaching the mean gradient norm, such that tasks which learn slower can receive larger gradient magnitudes. However, it does not take into account the relationship of the gradient directions. We show that when the angle between the gradients of each pair of tasks is identical, our IMTL-G leads to the equivalent solution as GradNorm. Theorem 1. If the angle between any pair of ut,uτ stays constant: utu>τ = C1, ∀t 6= τ with C1 < 1, then our IMTL-G leads to the same solution as that of GradNorm: gu>t = C2 ⇔ nt ≡ ‖αtgt‖ = αt ‖gt‖ = C3. In the above ut = gt/ ‖gt‖, C1, C2 and C3 are constants.\nProof in Appendix C.1. In GradNorm, if without the above constant-angle condition utu>τ = C1, the projection of the aggregated gradient g onto task-specific gradient, gu>t = ( ∑ τ C3uτ )u > t =\nC3 ( ∑ τ uτ )u > t , is proportional to ( ∑ τ uτ )u > t . It tends to optimize the “majority tasks” whose\ngradient directions are closer to the mean direction ∑ t ut, resulting in undesired task bias.\nMGDA (Sener & Koltun, 2018) finds the weighted average gradient g = ∑ t αtgt with minimum\nnorm in the convex hull composed by {gt}, so that ∑ t αt = 1 and αt > 0, ∀t. It adopts an iterative method based on Frank-Wolfe algorithm to solve the multi-objective optimization problem. We note the minimum-norm point has a closed-form representation if without the constraints {αt > 0}. In this case, we try to minimize gg> = ( ∑ t αtgt) ( ∑ τ ατgτ ) > such that ∑ t αt = 1. 
It implies g is perpendicular to the hyper-plane composed by {gt} as illustrated in Fig 1 (b), and thus we have:\ng ⊥ (g1 − gt)⇔ g (g1 − gt)> = 0, ∀ 2 6 t 6 T, (7) and can obtain α = g1D> ( DD> )−1 (see Appendix C.2). From Eq. (7), we note that the aggregated gradient satisfies: gg>t = C. Then the projection of g onto gt, gu > t = C/ ‖gt‖, is inversely proportional to the norm of gt. So it focuses on tasks with smaller gradient magnitudes, which breaks the task balance. Even with {αt > 0}, the problem still exists (see Appendix C.2) in the original MGDA method. Through experiments, we note that finding the minimum-norm point without the constraints {αt > 0} leads to similar performance as MGDA with the constraints {αt > 0}. In our IMTL-G, although we do not constrain {αt > 0}, its loss weighting scales are always positive during the training procedure as shown in Fig. 4.\nUncertainty weighting (Kendall et al., 2018) regards the task uncertainty as loss weight. For regression, it can derive L1 loss from Laplace distribution: − log p (y | f (x)) = |y − f (x)| /b + log b, where x is the data sample, y is the ground-truth label, f denotes the prediction model and b is the diversity of Laplace distribution. L2 loss can be found in Appendix C.4. For classification, it takes the cross-entropy loss as a scaled categorical distribution and introduces the following approximation:\n− log p (y | f (x)) = − log [ softmaxy ( f (x)\nσ2\n)] ≈ − 1\nσ2 log [softmaxy (f (x))] + log σ, (8)\nin which softmaxy (·) stands for taking the y-th entry after the softmax (·) operator. MTL corresponds to maximizing the joint likelihood of multiple targets, then the derivations yield the scaling factor b/σ for the regression/classification loss. (Kendall et al., 2018) learn b and σ as model parameters which are updated by stochastic gradient descent. However, it is applicable only if we can find appropriate correspondence between the loss and the distribution. 
It is difficult to apply to losses such as cosine similarity, and it is impossible to traverse all kinds of losses to obtain a unified form for them. Moreover, it sacrifices classification tasks. From Eq. (8) we find that the scaled cross-entropy loss is approximated as L = e^{2s}·Lcls − s if we set s = − log σ. Taking the derivative gives ∂L/∂s = 2e^{2s}·Lcls − 1, so s is optimized to make the scaled loss e^{2s}·Lcls close to 1/2. In contrast, the scaled L1 loss is approximated as L = e^s·Lreg − s if we set s = − log b, whose derivative is ∂L/∂s = e^s·Lreg − 1, so s drives the scaled L1 loss to 1, twice the target of the classification loss; the classification task is thus overlooked.

We would like to remark on the differences between our IMTL-L and uncertainty weighting (Kendall et al., 2018). Firstly, our derivation is motivated by fairness among tasks, which intrinsically differs from uncertainty weighting, which is based on task uncertainty and considers each task independently. Secondly, IMTL-L learns to balance among tasks without any biases, while uncertainty weighting may sacrifice classification tasks to favor regression tasks, as derived above. Thirdly, IMTL-L does not depend on any distribution assumptions and thus applies generally to various losses, including cosine similarity, which uncertainty weighting has difficulty with: as far as we know, there is no appropriate correspondence between cosine similarity and a specific distribution.
Lastly, uncertainty weighting needs to deal with different losses case by case, it also introduces approximations in order to derive scaling factors for certain losses (such as cross-entropy loss) which may not be optimal, but our IMTL-L has a unified form for all kinds of losses.\nGLS (Chennupati et al., 2019) calculates the target loss as the geometric mean: L = ( ∏ t Lt) 1 T , then the gradient of L with respect to the model parameters θ can be obtained as Appendix C.5, which can be regarded as to weigh the loss with its reciprocal value. However, as the gradient depends on the value of L, so it is not scale-invariant to the loss scale changes. Moreover, we find it to be unstable when the number of tasks is large because of the geometric mean computation." }, { "heading": "5 EXPERIMENTS", "text": "In previous methods, various experimental settings have been adopted but there are no extensive comparisons. As one contribution of our work, we re-implement representative methods and present fair comparisons among them under the unified code-base, where more practical settings are adopted and stronger performances are achieved compared with existing code-bases. The implementations exactly follow the original papers and open-sourced code to ensure the correctness. We run experiments on the Cityscapes (Cordts et al., 2016), NYUv2 (Silberman et al., 2012) and CelebA (Liu et al., 2015) dataset to extensively analyze different methods. Details can be found in Appendix D.\nResults on Cityscapes. From Tab. 1 we can obtain several informative conclusions. The uniform scaling baseline, which naı̈vely adds all losses, tends to optimize tasks with larger losses and gradient magnitudes, resulting in severe task bias. Uncertainty weighting (Kendall et al., 2018) sacrifices classification tasks to aid regression ones, leading to significantly worse results on semantic segmentation compared with our IMTL-L. 
GradNorm (Chen et al., 2018) is very sensitive to the choice of the hyper-parameter γ controlling the strength of equal gradient magnitudes, where the default γ = 1.5 works well on NYUv2 but performs badly on Cityscapes. We find its best option is γ = 0 which makes the scaled gradient norm to be exactly equal. MGDA (Sener & Koltun, 2018) focuses on tasks with smaller gradient magnitudes. So the performance of semantic segmentation is good but the other two tasks have difficulty in converging. In addition, we find our proposed closed-form variant without the hard constraints {αt > 0} achieves similar results as the original iterative method. Through the experiments we notice the closed-form solution almost always yields {αt > 0}. As for PCGrad (Yu et al., 2020), it yields slightly better performance than uniform scaling because its conflict projection will have no effect when the angles between the gradients are equal or less than π/2. In contrast, our IMTL method, in terms of both gradient balance and loss balance, yields competitive performance and achieves the best balance among tasks. Moreover, we verify that the two balances are complementary and can be combined to further improve the performance, with the visualizations in Appendix E. Surprisingly, we find our IMTL can beat the single-task baseline where\neach task is trained with a separate model. Training multiple tasks simultaneously can learn a better representation from multiple levels of semantics, which can in turn improve individual tasks.\nIn addition, we present the real-world training time of each iteration for different methods in Tab. 1. As shown, loss balance methods are the most efficient, and our gradient balance method IMTLG adds acceptable computational overhead, similar to that of GradNorm (Chen et al., 2018) and MGDA (Sener & Koltun, 2018). 
It benefits from computing gradients with respect to the shared feature maps instead of the shared model parameters (the row of “IMTL-G (exact)”), which brings similar performances but adds significant complexity due to multiple (T ) backward passes through the shared parameters. Our IMTL-G only needs to do backward computation on the shared parameters once after obtaining the loss weights via Eq. (2), in which the computation overhead mainly comes from the matrix multiplication rather than the matrix inverse, since the inversed matrixDU> ∈ R(T−1)×(T−1) is small compared with dimension of the shared feature z. As we outperform MGDA (Sener & Koltun, 2018) and PCGrad (Yu et al., 2020) significantly in terms of the objective metrics shown in Tab. 1, we further compare the qualitative results of our hybrid balance IMTL with the loss balance method uncertainty weighting (Kendall et al., 2018) and the gradient balance method GradNorm (Chen et al., 2018) considering their strong performances (see Fig. 6). For depth estimation we only show predictions at the pixels where ground truth (GT) labels exist to compare with GT, which is different from Fig. 7 where depth predictions are shown for all pixels. Consistent with results in Tab. 1, our IMTL shows visually noticeable improvements especially for the semantic and instance segmentation tasks. It is worth noting that we conduct experiments under strong baselines and practical settings which are seldom explored before, in this case changing the backbone in PSPNet (Zhao et al., 2017) from ResNet-50 to ResNet-101 can only improve mIoU of the semantic segmentation task around 0.5% according to the public code base2.\nScale invariance. We are also interested in the scale invariance, which means how the results change with the loss scale. 
For example, in semantic segmentation, the loss scale is different if we replace the reduction method "mean" (averaged over all locations) with "sum" (summed over all locations) in the cross-entropy loss computation, or if the number of classes of interest increases. Scale invariance is beneficial for model robustness. To simulate this effect, we manually multiply the semantic segmentation loss by 10 and apply the same methods to see how the performances are affected. In the last three columns of Tab. 1 we report the absolute changes resulting from the multiplier. Our IMTL achieves the smallest performance fluctuations and thus the best invariance, while other methods are more or less affected by the loss scale change.

2 https://github.com/open-mmlab/mmsegmentation/tree/master/configs/pspnet

Results on NYUv2. In Tab. 2 we find similar patterns as on Cityscapes, but NYUv2 is a rather small dataset, so uniform scaling can also obtain reasonable results. Note that uncertainty weighting (Kendall et al., 2018) cannot be directly used to estimate the surface normal when the cosine similarity is used as the loss, since no appropriate distribution can be found to correspond to cosine similarity. In this case, surface normal estimation has the smallest gradient magnitude, so MGDA (Sener & Koltun, 2018) learns it best but performs less well on the other two tasks. Again, our IMTL performs best, taking advantage of the complementary gradient and loss balances.

Results on CelebA. To compare different methods in the many-task setting, in Tab. 2 we also conduct multi-label classification experiments on the CelebA (Liu et al., 2015) dataset. The mean accuracy of 40 tasks is used as the final metric. Our IMTL outperforms its competitors in the scenario where the task number is large, showing its superiority. Note that in this setting, GLS (Chennupati et al., 2019) has difficulty in converging and no reasonable results can be obtained."
}, { "heading": "6 CONCLUSION", "text": "We propose an impartial multi-task learning method integrating gradient balance and loss balance, which are applied on task-shared and task-specific parameters, respectively. Through our in-depth analysis, we have theoretically compared our method with previous state-of-the-arts. We have also showed that those state-of-the-arts can all be categorized as gradient or loss balance, but lead to specific bias among tasks. Through extensive experiments we verify our analysis and demonstrate the effectiveness of our method. Besides, for fair comparisons, we contribute a unified code-base, which adopts more practical settings and delivers stronger performances compared with existing code-bases, and it will be publicly available for future research." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by the Natural Science Foundation of Guangdong Province (No. 2020A1515010711), the Special Foundation for the Development of Strategic Emerging Industries of Shenzhen (No. JCYJ20200109143010272), and the Innovation and Technology Commission of the Hong Kong Special Administrative Region, China (Enterprise Support Scheme under the Innovation and Technology Fund B/E030/18)." }, { "heading": "A RELATED WORK OF NETWORK STRUCTURE", "text": "Cross-stitch Networks (Misra et al., 2016) learn coefficients to linearly combine activations from multiple tasks to construct better task-specific representations. To break the limitation of channelwise cross-task feature fusion only, NDDR-CNN (Gao et al., 2019) proposes the layer-wise crosschannel feature aggregation as 1 × 1 convolutions on the concatenated feature maps from multiple tasks. More generally, MTL-NAS (Gao et al., 2020) introduces cross-layer connections among tasks to fully exploit the feature sharing from both low and high layers, extending the idea in Sluice Networks (Ruder et al., 2019) by leveraging neural architecture search (Zoph & Le, 2017). 
The parameters of these methods increase linearly with the number of tasks. To improve the model compactness, Residual Adapters (Rebuffi et al., 2017) introduce a small amount of task-specific parameters for each layer and convolve them with the task-agnostic representations to form the taskrelated ones. MTAN (Liu et al., 2019) generates data-dependent attention tensors by task-specific parameters to attend to the task-shared features. Single-tasking (Maninis et al., 2019) instead applies squeeze-and-excitation (Hu et al., 2018) module to generate attentive vectors for each task. In Task Routing (Strezoski et al., 2019), the attentive vectors are randomly sampled before training and are fixed for each image. Piggyback (Mallya et al., 2018) opts to mask parameter weights in place of activation maps, dealing with task-sharing from another point-of-view. The above methods can share parameters among tasks to a large extent, however, they are not memory-efficient because each task still needs to compute all of its own intermediate feature maps, which also leads to inferior inference speed compared with loss weighting methods." }, { "heading": "B DETAILED DERIVATION", "text": "" }, { "heading": "B.1 GRADIENT BALANCE: IMTL-G", "text": "Here we give the detailed derivation of the closed-form solution of our IMTL-G, we also demonstrate the scale-invariance property of our IMTL-G, which is invariant to the scale changes of losses.\nSolution. As we want to achieve:\ngu>1 = gu > t ⇔ g (u1 − ut) > = 0, ∀ 2 6 t 6 T, (9) where ut = gt/ ‖gt‖, recall that we have g = ∑ t αtgt and ∑ t αt = 1, if we setα = [α2, · · · , αT ]\nandG> = [ g>2 , · · · , g>T ] , then α1 = 1− 1α> and Eq. (9) can be expanded as:\n(∑ t αtgt )[ u>1 − u>2 , · · · ,u>1 − u>T ] = 0⇔ [ 1− 1α>, α ] [ g1 G ] U> = 0, (10)\nwhere U> = [ u>1 − u>2 , · · · ,u>1 − u>T ] , 1 and 0 indicate the all-one and all-zero row vector, respectively. Eq. (10) can be solved by:[( 1− 1α> ) g1 +αG ] U> = 0⇔ α ( 1>g1 −G ) U> = g1U >. 
(11)\nAssumeD> = g>1 1−G> = [ g>1 − g>2 , · · · , g>1 − g>T ] , then we reach:\nαDU> = g1U > ⇔ α = g1U> ( DU> )−1 . (12)\nProperty. We can also prove the aggregated gradient g = ∑ t αtgt with {αt} given in Eq. (12) is invariant to the scale changes of losses {Lt} (or gradients {gt = ∇θLt}), as the following theorem. Theorem 2. Given g = ∑ t αtgt, ∑ t αt = 1 satisfying gu > t = C, when {Lt} are scaled by\n{kt > 0} (equivalently, {gt} are scaled by {kt}), if g′ = ∑ t α ′ t (ktgt), ∑ t α ′ t = 1 satisfies g ′u>t = C ′, then g′ = λg. In the above we have ut = gt‖gt‖ = ktgt ‖ktgt‖ , λ, C and C ′ are constants.\nProof. As we have: g = ∑ t αtgt = ∑ t αt kt ktgt and gu>t = C, (13)\nby constructing: α′t = αt kt / ∑ τ ατ kτ and g′ = ∑ t α′t (ktgt) = g/ ∑ τ ατ kτ = λg, (14) we have: ∑ t α′t = 1 and g ′u>t = C/ ∑ τ ατ kτ = C ′. (15) From Eq. (12) we know that {αt} has a unique solution, and thus g′ satisfying IMTL-G is unique, so it must be the one given by Eq. (14), then we can prove that g′ and g are linearly correlated." }, { "heading": "B.2 LOSS BALANCE: IMTL-L", "text": "With the ordinary differential equation, we can derive that the form of the scale function ∫ h (s) ds in our IMTL-L must be exponential function. As we have:\n∫ h (s) ds = Ch (s) , C > 0. (16)\nIf we set y = ∫ h (s) ds, then:\ny = C dy ds ⇒ dy y = 1 C ds, (17)\nBy taking the antiderivative:\n∫ dy\ny =\n1\nC\n∫ ds⇒ ln y = 1\nC s+ C ′. (18)\nThen we have:\n∫ h (s) ds = y = eC ′ ( e 1 C )s = bas, a > 1, b > 0. (19)" }, { "heading": "C DETAILED DISCUSSION", "text": "" }, { "heading": "C.1 CONDITIONAL EQUIVALENCE OF IMTL-G AND GRADNORM", "text": "First we introduce the following lemma. Lemma 3. If utu>τ = C1, ∀t 6= τ , then the solution {αt} of IMTL-G satisfies {αt > 0}.\nProof. As ut = gt/ ‖gt‖, by constructing g = ∑ t αtgt where:\nαt = ‖gt‖−1 / ∑ τ ‖gτ‖−1 , (20)\nthen we have ∑ t αt = 1 and:\ngu>t = (∑ τ uτut ) / ∑ τ ‖gτ‖−1 = [(T − 1)C1 + 1] / ∑ τ ‖gτ‖−1 = C2. (21)\nFrom Eq. 
(12) we know the solution {αt} of IMTL-G is unique, so it must be the one given by Eq. (20) where {αt > 0}, so the lemma is proved.\nThen we prove Theorem 1 which states that IMTL-G leads to the same solution as GradNorm when the angle between any pair of gradients {gt} is identical: utu>τ = C1, ∀t 6= τ .\nProof. (⇒ Necessity) Given constant projections in IMTL-G, we have:\ngu>t = (∑ τ ατgτ ) u>t = C2. (22)\nRecall that ut = gt/ ‖gt‖ and utu>τ = C1, ∀t 6= τ . From Lemma 3 we know that {αt} given by IMTL-G must satisfy {αt > 0}. If we assume nt = ‖αtgt‖, then we know αtgt = ntut and:\n∑ τ nτuτu > t = ∑ τ 6=t nτC1 + nt = C2. (23)\nNow we obtain:\n∑ τ 6=t nτC1 + nt = ∑ τ nτC1 + (1− C1)nt = C2. (24)\nAs C1 < 1, we can then prove nt = C3, ∀t. It implies the norm of the scaled gradient is constant, which is requested by GradNorm (Chen et al., 2018). Moreover, we can obtain the relationship among constants from Eq. (24):\nC1TC3 + (1− C1)C3 = C2 ⇒ C3 = C2\n(T − 1)C1 + 1 . (25)\n(⇐ Sufficiency) In GradNorm, {αt} are always chosen to satisfy {αt > 0}, so if we assume nt = ‖αtgt‖, then given the constant norm of the scaled gradient in GradNorm, we have:\nαtgt = ntut = C3ut, (26) where ut = gt/ ‖gt‖. As we have g = ∑ t αtgt and utu > τ = C1, ∀t 6= τ , then we obtain:\ngu>t = (∑ τ ατgτ ) u>t = (∑ τ C3uτ ) u>t = C3 [(T − 1)C1 + 1] = C2. (27)\nIt means the projections of g onto {gt} are constant, which is requested by our IMTL-G. Corollary 4. In GradNorm, if the solution {αt} satisfies ∑ t αt = 1 , then its constants are given\nby C3 = 1/ ∑ t ‖gt‖ −1 and C2 = [(T − 1)C1 + 1] / ∑ t ‖gt‖ −1, and its scaling factors are given\nby { αt = ‖gt‖−1 / ∑ τ ‖gτ‖ −1 } . Proof. By using αt = C3/ ‖gt‖ from Eq. (26), we have ∑ t C3/ ‖gt‖ = 1, then C3 =\n1/ ∑ t ‖gt‖ −1, and also we have αt = ‖gt‖−1 / ∑ τ ‖gτ‖\n−1. As the relationship of C2 and C3 from Eq. (27) is given by C3 [(T − 1)C1 + 1] = C2, so C2 = [(T − 1)C1 + 1] / ∑ t ‖gt‖ −1." 
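The conditional equivalence above can be checked numerically. The sketch below (illustrative 3-D gradients chosen so that u_t u_τ^T = 1/2 for all t ≠ τ; it re-implements the closed form of Eq. (12) to stay self-contained) confirms that the IMTL-G weights make all scaled gradient norms ‖α_t g_t‖ equal, which is exactly GradNorm's target, and that the resulting weights are the inverse-norm weights of the Corollary:

```python
import numpy as np

def imtl_g_weights(grads):
    """Closed-form IMTL-G weights, Eq. (12): alpha_{2:T} = g1 U^T (D U^T)^{-1}."""
    G = np.stack(grads)
    Un = G / np.linalg.norm(G, axis=1, keepdims=True)
    D, U = G[0] - G[1:], Un[0] - Un[1:]
    a = G[0] @ U.T @ np.linalg.inv(D @ U.T)
    return np.concatenate([[1.0 - a.sum()], a])

# three unit directions with constant pairwise angle: u_t . u_tau = 1/2 for t != tau
u = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]) / np.sqrt(2.0)
norms = np.array([3.0, 1.0, 2.0])              # illustrative gradient magnitudes
grads = [n * d for n, d in zip(norms, u)]
alpha = imtl_g_weights(grads)                  # -> [2/11, 6/11, 3/11], i.e. 1/||g_t||
scaled_norms = alpha * norms                   # ||alpha_t g_t||: all equal (GradNorm)
projections = (alpha @ np.stack(grads)) @ u.T  # g u_t^T: all equal (IMTL-G)
```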
}, { "heading": "C.2 CLOSED-FORM SOLUTION OF MGDA", "text": "In our relaxed MGDA (Sener & Koltun, 2018) without {αt > 0}, finding g = ∑ t αtgt with∑\nt αt = 1 such that g has minimum norm is equivalent to find the normal vector of the hyperplane composed by {gt}. So we let g to be perpendicular to all of {g1 − gt} on the hyper-plane:\ng ⊥ (g1 − gt)⇔ g (g1 − gt)> = 0, ∀ 2 6 t 6 T. (28) If we set α = [α2, · · · , αT ] and G> = [ g>2 , · · · , g>T ] , then we have α1 = 1− 1α>, and Eq. (28) can be expanded as:\n(∑ t αtgt )[ g>1 − g>2 , · · · , g>1 − g>T ] = 0⇔ [ 1− 1α>, α ] [ g1 G ] D> = 0, (29)\nwhere D> = [ g>1 − g>2 , · · · , g>1 − g>T ] , 1 and 0 indicates the all-one and all-zero row vector. Eq. (29) can be represented as:\n[( 1− 1α> ) g1 +αG ] D> = 0⇔ α ( 1>g1 −G ) D> = g1D >.\nAs we also haveD = 1>g1 −G, then the closed-form solution of α is given by:\nαDD> = g1D > ⇔ α = g1D> ( DD> )−1 . (30)\nBias of MGDA. In the main text we state that MGDA focuses on tasks with small gradient magnitudes, where we relaxed MGDA by not constraining {αt > 0}. However, even with these constraints, the problem still exists. For example in the context of two tasks, assume ‖g1‖ < ‖g2‖, if the minimum-norm point of g satisfying g = αg1+(1− α) g2 is outside the convex hull composed by {g1, g2}, or equivalently α > 1, MGDA clamps α to α = 1 and the optimal g? = g1. Then the projections of g? onto g1 and g2 will be ‖g1‖ and g1u>2 (u2 = g2/‖g2‖), respectively. As ‖g1‖ >\n∣∣g1u>2 ∣∣, so MGDA still focuses on tasks with smaller gradient magnitudes." }, { "heading": "C.3 ANALYSIS OF PCGRAD", "text": "PCGrad (Yu et al., 2020) mitigates the gradient conflicts by projecting the gradient of one task to the orthogonal direction of the others, and the aggregated gradient can be written as:\ng = ∑ t\n( gt +\n∑ τ Ctτuτ\n) , (31)\nwith ut = gt/ ‖gt‖ and the coefficients:\nCtt = 0, Ctτ = − gt + ∑\nt′<τ,\nCtt′ut′ u>τ + , ∀t, τ, (32)\nwhere [·]+ means the ReLU operator. 
Note that the tasks have been shuffled before calculating the aggregated gradient g to achieve expected symmetry with respect to the task order. Eq. (31) can be represented more compactly in the matrix form:\ng = 1 (IT +CN)G ≡ αG, (33)\nwhere IT is the identity matrix, C = {Ctτ} is the coefficient matrix whose entries are given in Eq. (32) and N = diag (1/ ‖g1‖ , · · · , 1/ ‖gT ‖) is the diagonal normalization matrix. In Eq. (33) we use G and α to denote the raw gradients and scaling factors of all tasks. We find that PCGrad can also be regarded as loss weighting, with the loss weights given by α = 1 (IT +CN). However, it still may break the balance among tasks. For example with two tasks, assume the angle between\nthe gradients is φ: 1) when π/2 6 φ < π, then C = [\n0 −g1g>2 / ‖g2‖ −g1g>2 / ‖g1‖ 0\n] and the\nprojections onto the two raw gradients are ‖g1‖ sin2 φ and ‖g2‖ sin2 φ; 2) when 0 < φ < π/2, then C = 0 and the projections are ‖g1‖ + ‖g2‖ cosφ and ‖g2‖ + ‖g1‖ cosφ. In both cases, the projections are equal if and only if ‖g1‖ = ‖g2‖. Otherwise, the task with larger gradient magnitude will be trained more sufficiently, which may encounter the same problem as uniform scaling that naı̈vely adds all the losses despite that the loss scales are highly different." }, { "heading": "C.4 L2 LOSS IN UNCERTAINTY WEIGHTING", "text": "For regression, uncertainty weighting (Kendall et al., 2018) regards the L2 loss as likelihood estimation on the sample target which follows the Gaussian distribution:\n− log p (y | f (x)) = 1 2\n( 1\nσ2 ‖y − f (x)‖22 + log σ 2\n) , (34)\nwhere x is the data sample, y is the ground-truth label, f denotes the prediction model and σ is the standard deviation of Gaussian distribution. By setting s = − log σ2, the scaled L2 loss is L = 12 (e\nsLreg − s), which has a similar form as the scaled L1 loss except the front factor 1/2. 
So uncertainty weighting has difficulty in reaching a unified form for all kinds of losses, which is less general than our IMTL-L." }, { "heading": "C.5 GRADIENT OF GEOMETRIC MEAN", "text": "GLS (Chennupati et al., 2019) computes the loss as the geometric mean, its gradient with respect to model parameters are:\n∇θL = 1\nT (∏ t Lt ) 1 T −1∑ t ∏ τ 6=t Lτ ∇θLt (35)\n= 1\nT (∏ t Lt ) 1 T ∑ t ∇θLt Lt = L T ∑ t 1 Lt (∇θLt) . (36)\nwhere L is the geometric mean loss and T is the task number. It is equivalent to weigh the taskspecific loss with its reciprocal value, except that there exists another term L/T in the front where L = ( ∏ t Lt) 1 T , so GLS is sensitive to the loss scale changes of {Lt} and not scale-invariant.\nD IMPLEMENTATION DETAILS\nTo solely compare the loss weighting methods, we fix the network structure and choose ResNet50 (He et al., 2016) with dilation (Chen et al., 2017) and synchronized (Peng et al., 2018) batch normalization (Ioffe & Szegedy, 2015) as the shared backbone and PSPNet (Zhao et al., 2017) as the task-specific head, and the backbone model weights are pretrained on ImageNet (Deng et al., 2009). Following the common practice of semantic segmentation, in training we adopt augmentations as random resize (between 0.5 to 2), random rotate (between -10 to 10 degrees), Gaussian blur (with a radius of 5) and random horizontal flip. Besides, we apply strided cropping and horizontal flipping as testing augmentations. The predicted results in the overlapped region of different crops are averaged to obtain the aggregated prediction of the whole image. Only pixels with ground truth labels are included in loss and metric computation, while others are ignored. Semantic segmentation, instance segmentation, surface normal estimation and disparity/depth estimation are considered. 
As for the losses/metrics, semantic segmentation uses cross-entropy/mIoU, surface normal estimation adopts (1− cos)/cosine similarity and both instance segmentation and disparity/depth estimation use L1 loss. We use polynomial learning rate with a power of 0.9, SGD with a momentum of 0.9 and weight decay of 10−4 as the optimizer, with the model trained for 200 epochs. After passing through the shared backbone where strided convolutions exist, the feature maps have 1/8 size as that of the\ninput image. Then the results predicted by PSPNet (Zhao et al., 2017) heads are up-sampled to the original image size for loss and metric computation.\nFor the Cityscapes dataset, the batch size is 32 (2 × 16 GPUs) with the initial learning rate 0.02. We train on the 2975 training images and validate on the 500 validation images (1024 × 2048 full resolution) where ground truth labels are provided. Three tasks are considered, namely semantic segmentation, instance segmentation and disparity/depth estimation. Training and testing are done on 713×713 crops. Semantic segmentation is to differentiate among the commonly used 19 classes. Instance segmentation is taken as offset regression, where each pixel pi = (xi, yi) approximates the relative offset oi = (dxi,dyi) with respect to the centroid cid(pi) of its belonging instance id (pi). To conduct inference, we abandon the time-consuming and complicated clustering methods adopted by the previous method (Kendall et al., 2018). Instead, we directly use the offset vectors {oi} predicted by the model to find the centroids of instances. By definition, the norm of a centroid’s offset vector should be 0, so we can transform the offset vector norm ‖oi‖ to the probability qi of being a centroid with the exponential function qi = e−‖oi‖. Next a 7 × 7 edge filter is applied on the centroid probability map to filter out the spurious centroids on object edges resulting from the regression target ambiguity. 
The locations with centroid probability qi < 0.1 are also manually suppressed. Then 7 × 7 max-pooling on the filtered probability map is used to produce candidate centroids and filter out duplicate ones. With the predicted centroids {ci}, we can then assign each pixel pi to its belonging instance id (pi) by the distance between its approximated centroids pi+oi and the candidate centroids {ci}: id (pi) = argminj ‖pi + oi − cj‖. Depth is measured in pixels by the disparity between the left and right images. Fig. 5 shows the whole process. Note that we need to carefully deal with label transformation during data augmentation. For example, disparity ground truth needs to be up-scaled by s times if the image is up-sampled by s times. Also, the predicted offset vectors of the flipped input should be mirrored to comply with the normal one.\nOn the NYUv2 dataset, the batch size is 48 (6 × 8 GPUs) with the initial learning rate 0.03. We use the 795 training images for training and the 654 validation images for testing with 480 × 640 full resolution. 401 × 401 crops are used for training and testing. 13 coarse-grain classes are considered in semantic segmentation. The surface normal is represented by the unit normal vector of the corresponding surface. When doing data augmentation, surface normal ground truth n = (x, y, z) should be processed accordingly. If we resize the image by s times, the z coordinate of the normal vector should be scaled by s and renormalized: n′ = (x, y, sz) / ‖(x, y, sz)‖. If the image is rotated by the rotation matrix R, the normal vector should also be in-plane rotated (x′, y′) = (x, y)R> with z unchanged. Moreover, the left-right flip should be applied on the normal vector n′ = (−x, y, z) when mirroring the image horizontally. During testing, the normal vectors in the overlapped region of crops are averaged and renormalized to produce the aggregated results. 
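The surface-normal label transforms described above can be sketched as follows (a small numpy illustration with a made-up normal, covering the resize and horizontal-flip cases; this is not the paper's data pipeline):

```python
import numpy as np

def resize_normal(n, s):
    """Image resized by factor s: scale the z component by s, then renormalize."""
    v = np.array([n[0], n[1], s * n[2]])
    return v / np.linalg.norm(v)

def hflip_normal(n):
    """Horizontal image flip: mirror the x component, n' = (-x, y, z)."""
    return np.array([-n[0], n[1], n[2]])

n = np.array([0.6, 0.0, 0.8])   # a unit surface normal (illustrative)
n_up = resize_normal(n, 2.0)    # after 2x up-sampling, still unit length
n_flip = hflip_normal(n)
```

The renormalization keeps the label a valid unit vector, matching the averaging-and-renormalizing step used when aggregating overlapped crops.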
Depth is the absolute distance to the camera and measured by meters, which is inverse-proportional to the disparity measurement adopted by Cityscapes. So the depth in meters needs to be scaled by 1/s when the image is scaled by s times, which is the reciprocal of disparity transformation.\nCelebA contains 202,599 face images from 10,177 identities, where each image has 40 binary attribute annotations. We train on the 162,770 training images and test on the 19,867 validation\nimages. Most of the implementation details are the same as those on the Cityscapes dataset, except that: 1) we employ the ResNet-18 as the backbone and linear classifiers as the task-specific heads, so totally 40 heads are attached on the backbone ; 2) the binary-cross entropy is used as the classification loss for each attribute; 3) the batch size is 256 (32 × 8 GPUs) and the model is trained from scratch for 100 epochs; 4) the input image has been aligned with the annotated 5 landmarks and cropped to 218× 178.\nE QUALITATIVE RESULTS" } ]
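The augmentation-time label transformations described above (surface normals under resize, in-plane rotation and horizontal flip; metric depth vs. pixel disparity under resize) are easy to get wrong in each direction. A minimal sketch of the stated rules, with all function names illustrative rather than taken from the authors' code:

```python
import numpy as np

def resize_normal(n, s):
    """Resize the image by factor s: n' = (x, y, s*z) / ||(x, y, s*z)||."""
    x, y, z = n
    v = np.array([x, y, s * z])
    return v / np.linalg.norm(v)

def rotate_normal(n, deg):
    """In-plane image rotation by R rotates (x', y') = (x, y) R^T; z is unchanged."""
    x, y, z = n
    t = np.deg2rad(deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    x2, y2 = np.array([x, y]) @ R.T
    return np.array([x2, y2, z])

def flip_normal(n):
    """Horizontal mirror negates the x component: n' = (-x, y, z)."""
    x, y, z = n
    return np.array([-x, y, z])

def resize_depth_labels(depth_m, disparity_px, s):
    """Disparity is measured in pixels, so it scales with the image (s * d);
    metric depth is inverse-proportional to disparity, so it scales by 1/s."""
    return depth_m / s, disparity_px * s
```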
2021
TOWARDS IMPARTIAL MULTI-TASK LEARNING
SP:839f449191ae3ff1016d4321d9e1926c5f883a78
[ "This paper studies the label solicitation strategy in active learning. In particular, it focuses on the expected loss reduction (ELR) strategy, analyzes its problem, and modifies the original ELR method to make sure the active learner converges to the optimal classifier along learning iterations. The paper provides theoretical guarantees on the new method’s convergence. In the experiment, the proposed method is evaluated on synthetic data and UCI data. The improvement margin over the existing method is very limited.", "This paper provides an interesting algorithm to address the previous Bayesian active learning query strategy in (binary) classification. By the simple modification, the algorithm can overcome the drawbacks of ELR in the convergence to the optimal classifier parameterized by $\\theta_r$. In experiments, the proposed algorithm can achieve the advantages of ELR and BALD simultaneously. " ]
For pool-based active learning, in each iteration a candidate training sample is chosen for labeling by optimizing an acquisition function. In Bayesian classification, Expected Loss Reduction (ELR) methods maximize the expected reduction in the classification error given a new labeled candidate based on a one-step-look-ahead strategy. ELR is the optimal strategy with a single query; however, since such myopic strategies cannot identify the long-term effect of a query on the classification error, ELR may get stuck before reaching the optimal classifier. In this paper, inspired by the mean objective cost of uncertainty (MOCU), a metric quantifying the uncertainty directly affecting the classification error, we propose an acquisition function based on a weighted form of MOCU. Similar to ELR, the proposed method focuses on the reduction of the uncertainty that pertains to the classification error. But unlike any other existing scheme, it provides the critical advantage that the resulting Bayesian active learning algorithm guarantees convergence to the optimal classifier of the true model. We demonstrate its performance with both synthetic and real-world datasets.
[ { "affiliations": [], "name": "BAYESIAN CLASSIFIER" }, { "affiliations": [], "name": "Guang Zhao" }, { "affiliations": [], "name": "Edward R. Dougherty" }, { "affiliations": [], "name": "Byung-Jun Yoon" }, { "affiliations": [], "name": "Francis J. Alexander" }, { "affiliations": [], "name": "Xiaoning Qian" } ]
[ { "authors": [ "Nguyen Viet Cuong", "Wee Sun Lee", "Nan Ye", "Kian Ming A Chai", "Hai Leong Chieu" ], "title": "Active learning for probabilistic hypotheses using the maximum gibbs error criterion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Lori A Dalton", "Edward R Dougherty" ], "title": "Optimal classifiers with minimum expected error within a bayesian framework—part i: Discrete and gaussian models", "venue": "Pattern Recognition,", "year": 2013 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Andrew Gelman", "John B Carlin", "Hal S Stern", "David B Dunson", "Aki Vehtari", "Donald B Rubin" ], "title": "Bayesian data analysis", "venue": "CRC press,", "year": 2013 }, { "authors": [ "Daniel Golovin", "Andreas Krause", "Debajyoti Ray" ], "title": "Near-optimal bayesian active learning with noisy observations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Trong Nghia Hoang", "Bryan Kian Hsiang Low", "Patrick Jaillet", "Mohan Kankanhalli" ], "title": "Nonmyopic -bayes-optimal active learning of gaussian processes", "venue": null, "year": 2014 }, { "authors": [ "Neil Houlsby", "Ferenc Huszár", "Zoubin Ghahramani", "Máté Lengyel" ], "title": "Bayesian active learning for classification and preference learning", "venue": "arXiv preprint arXiv:1112.5745,", "year": 2011 }, { "authors": [ "H Tolga Kahraman", "Seref Sagiroglu", "Ilhami Colak" ], "title": "The development of intuitive knowledge classifier and the modeling of domain dependent data", "venue": "Knowledge-Based Systems,", "year": 2013 }, { "authors": 
[ "Ashish Kapoor", "Eric Horvitz", "Sumit Basu" ], "title": "Selective supervision: Guiding supervised learning with decision-theoretic active learning", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David D Lewis", "William A Gale" ], "title": "A sequential algorithm for training text classifiers", "venue": "In SIGIR’94,", "year": 1994 }, { "authors": [ "Stephen Mussmann", "Percy Liang" ], "title": "On the relationship between data efficiency and error for uncertainty sampling", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "N Roy", "A McCallum" ], "title": "Toward optimal active learning through sampling estimation of error reduction", "venue": "int. conf. on machine learning,", "year": 2001 }, { "authors": [ "Paola Sebastiani", "Henry P Wynn" ], "title": "Maximum entropy sampling and optimal bayesian experimental design", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2000 }, { "authors": [ "Samarth Sinha", "Sayna Ebrahimi", "Trevor Darrell" ], "title": "Variational adversarial active learning", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Toan Tran", "Thanh-Toan Do", "Ian Reid", "Gustavo Carneiro" ], "title": "Bayesian generative active deep learning", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Byung-Jun Yoon", "Xiaoning Qian", "Edward R Dougherty" ], "title": "Quantifying the objective cost of uncertainty in complex dynamical systems", "venue": "IEEE Transactions on Signal Processing,", "year": 2013 }, { 
"authors": [ "Xiaojin Zhu", "John Lafferty", "Zoubin Ghahramani" ], "title": "Combining active learning and semisupervised learning using gaussian fields and harmonic functions. In ICML 2003 workshop on the continuum from labeled to unlabeled data in machine learning and data mining, volume", "venue": null, "year": 2003 } ]
[ { "heading": null, "text": "For pool-based active learning, in each iteration a candidate training sample is chosen for labeling by optimizing an acquisition function. In Bayesian classification, Expected Loss Reduction (ELR) methods maximize the expected reduction in the classification error given a new labeled candidate based on a one-step-look-ahead strategy. ELR is the optimal strategy with a single query; however, since such myopic strategies cannot identify the long-term effect of a query on the classification error, ELR may get stuck before reaching the optimal classifier. In this paper, inspired by the mean objective cost of uncertainty (MOCU), a metric quantifying the uncertainty directly affecting the classification error, we propose an acquisition function based on a weighted form of MOCU. Similar to ELR, the proposed method focuses on the reduction of the uncertainty that pertains to the classification error. But unlike any other existing scheme, it provides the critical advantage that the resulting Bayesian active learning algorithm guarantees convergence to the optimal classifier of the true model. We demonstrate its performance with both synthetic and real-world datasets." }, { "heading": "1 INTRODUCTION", "text": "In supervised learning, labeling data is often expensive and highly time-consuming. Active learning is one field of research that aims to address this problem and has been demonstrated to enable sample-efficient learning with fewer labeled data (Gal et al., 2017; Tran et al., 2019; Sinha et al., 2019). In this paper, we focus on pool-based Bayesian active learning for classification with the 0-1 loss function. Bayesian active learning starts from the prior knowledge of uncertain models.
By optimizing an acquisition function, it chooses the next candidate training sample to query for labeling, and then based on the acquired data, updates the belief of uncertain models through Bayes’ rule to approach the optimal classifier of the true model, which minimizes the classification error.\nIn active learning, maximizing the performance of the model trained on queried candidates is the ultimate objective. However, most of the existing methods do not directly target the learning objective. For example, Maximum Entropy Sampling (MES) or Uncertainty Sampling, simply queries the candidate with the maximum predictive entropy (Lewis & Gale, 1994; Sebastiani & Wynn, 2000; Mussmann & Liang, 2018); but the method fails to differentiate between the model uncertainty and the observation uncertainty. Bayesian Active Learning by Disagreement (BALD) seeks the data point that maximizes the mutual information between the observation and the model parameters (Houlsby et al., 2011; Kirsch et al., 2019). Besides BALD, there are also other methods reducing the model uncertainty in different forms (Golovin et al., 2010; Cuong et al., 2013). However, not all the model uncertainty will affect the performance of the learning task of interest. Without identifying whether the uncertainty is related to the classification error or not, these methods can be inefficient in the sense that it may query candidates that do not directly help improve prediction performance.\nIn this paper we focus on the active learning methods directly maximizing the learning model performance. There exist such active learning methods by Expected Loss Reduction (ELR) that aim to maximize the expected reduction in loss based on a one-step-look-ahead manner (Roy & McCallum, 2001; Zhu et al., 2003; Kapoor et al., 2007). The ELR methods can focus on only the uncertainty related to the loss function to achieve sample-efficient learning. 
In fact, ELR is the optimal strategy for active learning with a single query (Roy & McCallum, 2001). However, a critical shortcoming of previous ELR schemes is that none of them provide any theoretical guarantee regarding their long-term performance. In fact, since these methods are myopic and cannot identify the long-term effect of a query on the loss function, without a special design of the loss function they may get stuck before reaching the optimal classifier. To the best of our knowledge, there is currently no method that directly maximizes the model performance while simultaneously guaranteeing convergence to the optimal classifier.
Fig. 1a provides an example of binary classification with one feature where both BALD and ELR methods fail. In the figure, the red lines indicate the upper and lower bounds of the prediction probability of class 1, illustrating a model with higher probability uncertainty on the sides (x → ±4) than in the middle (x = 0). Querying candidates on the sides will provide more information about the model parameters, and is therefore preferred by BALD. However, since the possible probabilities on the sides are either always larger than or always less than 0.5, querying candidates on the sides will not help reduce the classification error. On the other hand, ELR queries candidates that help reduce the classification error the most, so it prefers data in the middle, whose optimal labels are uncertain given the prior knowledge. The performance shown in Fig. 1b agrees with our analysis. Fig. 1b shows the performance averaged over 1000 runs, with more details and discussions of the example included in Appendix C. BALD performs inefficiently at the beginning by querying points on both sides. On the other hand, the ELR method performs the best at the beginning, but becomes inefficient after some iterations (∼100), indicating that some of its runs get stuck before reaching the optimal classifier.
In this paper, we consider the algorithm to “get stuck” when the acquisition function value is 0 for all the candidates in the pool and the algorithm degenerates to uniform random sampling.\nIn this paper, we analyze the reason why ELR methods may get stuck before reaching the optimal classifier, and propose a new strategy to solve this problem. Our contributions are in four parts: 1. We show that ELR methods may get stuck, preventing active learning from reaching the optimal classifier efficiently. 2. We propose a novel weighted-MOCU active learning method that can focus only on the uncertainty related to the loss for efficient active learning and is guaranteed to converge to the optimal classifier of the true model. 3. We provide the convergence proof of the weightedMOCU method. 4. We demonstrate the sample-efficiency of our weighted-MOCU method with both synthetic and real-world datasets." }, { "heading": "2 BACKGROUND", "text": "Optimal Bayesian classifier. Consider a classification problem with candidates x ∈ X and class labels y ∈ Y = {0, 1, . . . ,M − 1}. The predictive probability p(y|x, θ) is modeled with parameters\nθ. Assume θ is uncertain with a distribution π(θ) within the uncertainty class Θ. The classification problem is to find a classifier ψ : X → Y , which assigns a predicted class label to a given candidate. The expected 0-1 loss of the classifier ψ for a candidate x, dependent on θ, is defined as Cθ(ψ, x), which can be derived to be the classification error: Cθ(ψ, x) = 1 − p(y = ψ(x)|x, θ). The optimal classifier with θ, ψθ is defined as the classifier minimizing the classification error: ψθ(x) = arg maxy p(y|x, θ). So we have: Cθ(ψθ, x) = minψ Cθ(ψ, x) = miny{1− p(y|x, θ)}. 
When there is model uncertainty with π(θ), an Optimal Bayesian Classifier (OBC) ψπ(θ) is the classifier that has the minimum expected loss over π(θ) (Dalton & Dougherty, 2013):
Eπ(θ)[Cθ(ψπ(θ), x)] = minψ Eπ(θ)[Cθ(ψ, x)] = miny {1 − p(y|x)} (1)
where p(y|x) = Eπ(θ)[p(y|x, θ)] is the predictive distribution. It is easy to see that ψπ(θ)(x) = arg maxy p(y|x).
Active learning. Active learning collects the training dataset D in a sequential way. For pool-based active learning, in each iteration, we choose a candidate x from the set of potential training samples X to query for the class label by optimizing an acquisition function U(x). Then, in the Bayesian setting, by adding the observed data pair (x, y) to D, we update the posterior distribution based on Bayes’ rule. In each iteration, the acquisition function depends on the posterior distribution of model parameters π(θ|D). In the following discussion, to simplify notations, we omit D and use π(θ) and p(y|x) to respectively denote the posterior and predictive distributions conditioned on D. When a new observed data point is included, the distributions are updated by Bayes’ rule and the total probability rule as: π(θ|x, y) = π(θ)p(y|x, θ)/p(y|x) and p(y′|x′, x, y) = Eπ(θ|x,y)[p(y′|x′, θ)]. The acquisition function of ELR methods in the Bayesian setting can be defined by the expected OBC prediction error reduction after observing the new pair (x, y) (Roy & McCallum, 2001):
UELR(x) = Ep(x′){Eπ(θ)[Cθ(ψπ(θ), x′)] − Ep(y|x)[Eπ(θ|x,y)[Cθ(ψπ(θ|x,y), x′)]]}, (2)
where p(x′) is the distribution over X , independent of θ and D. ELR methods assume that we use the OBC as the classifier, and in each iteration we should choose the query that maximizes the decrease in OBC prediction error. The first term in (2) is the OBC prediction error of ψπ(θ), and the second term is the expected prediction error of ψπ(θ|x,y), the one-step-look-ahead OBC, with respect to p(y|x). 
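With a finite uncertainty class, the OBC in (1) and the ELR acquisition function in (2) reduce to a few array operations. In this sketch the model is a NumPy array p[θ, x, y] = p(y|x, θ) and p(x′) is taken uniform over the pool; both conventions are assumptions made for illustration.

```python
import numpy as np

def obc_error(pi, p):
    """OBC prediction error per candidate: min_y {1 - p(y|x)} with p(y|x) = E_pi[p(y|x, theta)]."""
    pbar = np.einsum('t,txy->xy', pi, p)        # predictive distribution p(y|x)
    return 1.0 - pbar.max(axis=-1)              # shape (num_candidates,)

def elr_acquisition(pi, p, x):
    """U_ELR(x) from (2): expected OBC error reduction after querying candidate x."""
    before = obc_error(pi, p).mean()            # E_{p(x')} with uniform p(x')
    after = 0.0
    for y in range(p.shape[-1]):
        py = pi @ p[:, x, y]                    # p(y|x)
        post = pi * p[:, x, y] / py             # Bayes update: pi(theta | x, y)
        after += py * obc_error(post, p).mean()
    return before - after
```

On a two-model toy problem where the models disagree on the optimal label (p(y = 1|x, θ) of 0.9 vs. 0.1 under a uniform prior), a single query reduces the expected OBC error from 0.5 to 0.18, so U_ELR = 0.32.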
In the following section, we analyze why this acquisition function is sample-efficient, as it directly targets classification error reduction while ignoring uncertainty irrelevant to the learning task, but may get stuck before converging to the true optimal classifier (the optimal classifier of the true model)." }, { "heading": "3 MOCU-BASED ACTIVE LEARNING", "text": "" }, { "heading": "3.1 MEAN OBJECTIVE COST OF UNCERTAINTY", "text": "To analyze ELR methods, we borrow the idea of the Mean Objective Cost of Uncertainty (MOCU) for active learning with respect to the corresponding posterior π(θ). MOCU is a general objective-oriented uncertainty quantification framework (Yoon et al., 2013). For active learning, MOCU can be defined as the expected loss difference between the OBC and the optimal classifier:
M(π(θ)) = Ep(x′)[Eπ(θ)[Cθ(ψπ(θ), x′) − Cθ(ψθ, x′)]] (3) = Ep(x′)[miny′ {1 − p(y′|x′)} − Eπ(θ)[miny′ {1 − p(y′|x′, θ)}]]. (4)
The second line is derived by the definition of ψθ and (1). The first term in (3) is the OBC error as the loss. In the second term, ψθ is the optimal classifier for a specific θ. For the terms inside the expectation operator, we have Cθ(ψπ(θ), x′) − Cθ(ψθ, x′) ≥ 0. Therefore, the second term in (3) is a lower bound of the OBC prediction error. MOCU captures the difference between the OBC error and its lower bound. When MOCU is 0, the OBC converges to the true optimal classifier and we cannot reduce the OBC prediction error further. In that case, we say that the OBC has reached the true optimal classifier.
As in ELR methods, we can define an acquisition function by the reduction of MOCU in a one-step-look-ahead manner:
UMOCU(x;π(θ)) = M(π(θ)) − Ep(y|x)[M(π(θ|x, y))]. (5)
We can show that the second term in (3), the lower bound of the OBC error, is cancelled in (5). The acquisition function (5) hence captures the expected reduction of the OBC error given new data and is equivalent to the ELR acquisition function (2). 
Expanding the second term in (5), we have:
Ep(y|x)[M(π(θ|x, y))] = Ep(x′){Ep(y|x)[Eπ(θ|x,y)[Cθ(ψπ(θ|x,y), x′) − Cθ(ψθ, x′)]]}. (6)
Since ∑y p(y|x)π(θ|x, y) = π(θ), as x is assumed to be independent of θ so that π(θ|x) = π(θ), we can rewrite the first term in (5) as:
M(π(θ)) = Ep(x′){Ep(y|x)[Eπ(θ|x,y)[Cθ(ψπ(θ), x′) − Cθ(ψθ, x′)]]}. (7)
Combining (6) and (7) and canceling the Cθ(ψθ, x′) terms (the lower bound of the OBC error), (5) can be derived as:
UMOCU(x;π(θ)) = Ep(x′){Ep(y|x)[Eπ(θ|x,y)[Cθ(ψπ(θ), x′) − Cθ(ψπ(θ|x,y), x′)]]}, (8)
which is just the ELR acquisition function in (2). Therefore, we can conclude that MOCU-based methods are equivalent to ELR methods.
Another property we can observe from (8) is that UMOCU(x;π(θ)) ≥ 0. By definition, ψπ(θ|x,y) is the OBC with the minimum expected classification error over π(θ|x, y). Therefore, Eπ(θ|x,y)[Cθ(ψπ(θ|x,y), x′)] ≤ Eπ(θ|x,y)[Cθ(ψπ(θ), x′)] and we have UMOCU(x;π(θ)) ≥ 0, indicating that collecting new data will reduce MOCU." }, { "heading": "3.2 ANALYSIS OF ELR METHODS", "text": "In the following analysis, we assume that Θ contains the true model θr and π(θr) > 0. We first analyze ELR methods through the MOCU reduction to show that ELR and MOCU-based active learning ignores the uncertainty irrelevant to the OBC prediction. By that, we mean that not all the model uncertainties directly affect the OBC prediction. Denote the contribution to the MOCU at point x as K(x, π(θ)) = Eπ(θ)[Cθ(ψπ(θ), x) − Cθ(ψθ, x)], so that M(π(θ)) = Ep(x)[K(x, π(θ))]. If K(x, π(θ)) = 0, then we have ∀θ ∈ supp(π), ψθ(x) = ψπ(θ)(x), i.e. arg maxy p(y|x, θ) = arg maxy p(y|x). This means that for all the possible models, the optimal predictions are the same, and the OBC prediction on x will not be affected by the remaining uncertainty of p(y|x, θ), if any. 
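In the same finite setting, K(x, π(θ)) and MOCU from (3)–(4) are direct to compute; the p[θ, x, y] array layout and the uniform p(x) are again illustrative assumptions of this sketch:

```python
import numpy as np

def K(pi, p):
    """K(x, pi): OBC error minus its lower bound E_pi[min_y {1 - p(y|x, theta)}], per candidate x."""
    obc_err = 1.0 - np.einsum('t,txy->xy', pi, p).max(axis=-1)   # min_y {1 - pbar(y|x)}
    opt_err = pi @ (1.0 - p.max(axis=-1))                        # E_pi[C_theta(psi_theta, x)]
    return obc_err - opt_err                                     # always >= 0

def mocu(pi, p):
    """M(pi) = E_{p(x)}[K(x, pi)], here with a uniform p(x)."""
    return K(pi, p).mean()
```

Note that when the posterior collapses onto a single model, the OBC error meets its lower bound and MOCU vanishes, matching the discussion above.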
In fact, K(x, π(θ)) = 0 does not necessarily mean that there is no uncertainty associated with p(y|x, θ): that would require the value of p(y|x, θ) to be the same ∀θ ∈ supp(π), apparently a stronger statement than K(x, π(θ)) being 0. Therefore, not all the uncertainties of p(y|x, θ) are captured in MOCU when K(x, π(θ)) = 0. We consider the uncertainty in p(y|x, θ) to be “objective-irrelevant” to the OBC prediction if K(x, π(θ)) = 0. In the active learning procedure, when a new observation is obtained, it reduces the uncertainty of the parameter θ; and as a result, it reduces the uncertainty of p(y|x, θ) for each x ∈ X . If an observation only reduces objective-irrelevant uncertainty, the value of MOCU will not change. For example, in Fig. 1a, the uncertainty of p(y|x, θ) in the region close to x = ±4 is objective-irrelevant. Evaluating points at x → ±4 will only reduce irrelevant uncertainty, and such points will not be considered in either MOCU- or ELR-based methods. That explains why, in the first several active learning iterations, ELR or MOCU-based active learning can be more efficient than methods guided by total uncertainty reduction, such as BALD.
Now we explain why ELR methods may get stuck before the OBC converges to the true optimal classifier. When we have UMOCU(x;π(θ)) = 0 ∀x ∈ X and assume ties are broken randomly, the acquisition function will suggest a random candidate in the pool. When that happens, we say that ELR methods get stuck if the OBC has not reached the true optimal classifier; i.e., M(π(θ)) is still larger than 0.
Since p(y′|x′) = Eπ(θ)[p(y′|x′, θ)] is a linear function of π(θ), the term miny′{1 − p(y′|x′)} in (4) is the minimum among M linear functions, and thus a concave piece-wise linear function. Within each linear piece, ψπ(θ)(x′) = arg maxy′ p(y′|x′) is the same for different π(θ). The second term Eπ(θ)[miny′{1 − p(y′|x′, θ)}] in (4) is a linear function of π(θ). 
Subtracting it from the first term and averaging the resulting difference over p(x′) maintain the concavity and the piece-wise linearity.\nTherefore, MOCU defined in (4) is a concave piece-wise linear function of π(θ). Moreover, within a linear piece of MOCU, ψπ(θ)(x′) = arg maxy′ p(y\n′|x′), i.e. the OBC prediction, is the same for each of x′ ∈ X . To gain some intuition, we study a binary classification problem with the uncertainty class of two possible models Θ = {θ1, θ2} and a candidate pool of two training samples to query: X = {x1, x2}. Further details of the model setup can be found in Appendix D. Since π(θ1) = 1− π(θ2), we can express the MOCU function as a univariate function of π(θ1) as shown in Fig. 2. It is clear that the MOCU function is a concave piece-wise linear function.\nSince π(θ) = Ep(y|x)[π(θ|x, y)], from (5), the acquisition function is defined as M[Ep(y|x)[π(θ|x, y)]] − Ep(y|x)[M(π(θ|x, y))]. Based on the concavity of MOCU M(·) and Jensen’s inequality, we have the acquisition function UMOCU(x;π(θ)) ≥ 0. The equality holds if for all y ∈ Y , π(θ|x, y) falls into the same linear piece of MOCU. In this case, the single query (x, y) cannot provide enough evidence to shift the OBC prediction, so that arg maxy′ p(y\n′|x′, x, y) = arg maxy′ p(y\n′|x′) for each of x′ ∈ X , even though p(y′|x′, x, y) 6= p(y′|x′). In Fig. 2, we have shown in the binary classification problem there exists such a case that π̃(θ1|x1, y1 = 0) and π̃(θ1|x1, y1 = 1) are within the interval of the same linear piece of the corresponding MOCU function. When MOCU is larger than 0, if all candidates cannot provide enough evidence to change the OBC predictions, ELR and MOCU-based methods will get stuck before converging to the true optimal classifier. This is due to the myopic nature of the acquisition function ignoring the long-term effect of querying each candidate.\nIn summary, ELR methods are efficient by ignoring objective-irrelevant uncertainty. 
But if sampling one data point can only provide little information and cannot help improve OBC prediction in the current iteration, they will ignore its long-term effect on prediction performance. As a result, they may get stuck before converging to the optimal classifier. To keep the efficient sampling property by ignoring objective-irrelevant uncertainty but avoid getting stuck, we propose a one-step-look-ahead acquisition function based on a weighted version of MOCU, which can capture the change of the predictive probability from one-step query candidates. If that change can potentially shift the OBC predictions in the long run, our acquisition function will have a positive value, and thereby avoid the issues that ELR methods suffer from." }, { "heading": "3.3 WEIGHTED MOCU-BASED ACTIVE LEARNING", "text": "In this section, we propose a modified MOCU-based acquisition function that has the theoretical guarantee to converge to the optimal classifier. Specifically, we propose a modified MOCU function that multiplies a weight with each loss difference between the OBC ψπ(θ) and the optimal classifier\nψθ in the original MOCU definition:\nMw(π(θ)) = Ep(x′){Eπ(θ){w(π(θ), x′, θ)[Cθ(ψπ(θ), x′)− Cθ(ψθ, x′)]}}, (9)\nwhere w(π(θ), x′, θ) > 0 is the weighting function. The corresponding acquisition function is:\nUw(x;π(θ)) =Mw(π(θ))− Ep(y|x)[Mw(π(θ|x, y))]. (10)\nIn (9), as more data are collected and the model parameter distribution π(θ) changes, w(π(θ), x′, θ) will change accordingly. The change of w(π(θ), x′, θ) cannot affect the value of the weighted MOCU if Cθ(ψπ(θ), x′) − Cθ(ψθ, x′) = 0, ∀θ ∈ supp(π(θ)), indicating the uncertainty at x′ is objective-irrelevant. This makes sure that the acquisition function based on the weighted MOCU will inherit the property of MOCU-based active learning to directly target at classification error reduction while ignoring irrelevant uncertainty. 
On the other hand, by introducing the predictive probability into the weighting functions, the probability change from one-step samples can be captured by the weighted-MOCU based acquisition function such that it can have theoretical convergence guaranteed to the optimal classifier as shown below.\nWe would like to emphasize that there are also active learning algorithms, such as the ones based on the cyclic sampling and -greedy policies (Hoang et al., 2014), that can almost surely converge to the true model, and as a result, the OBC converges to the true optimal classifier. However, these policies focus on the total uncertainty reduction to derive the full knowledge of the true model, which is unnecessary and therefore inefficient, since we only need the knowledge of the true optimal classifier if the classification performance is the primary concern. Unlike such policies, our weighted-MOCU based policy directly reduces the objective uncertainty affecting classification, and as a result, it is much more efficient by focusing only on those queries that are helpful for improving the prediction. As a result, our proposed algorithm guarantees efficiency both in the short term as well as in the longer term.\nIn the following, we design a weighting function to make Mw(π(θ)) = 0 if and only if ∀x ∈ X , Uw(x;π(θ)) = 0 and show that active learning based on this weighted MOCU converges to the optimal classifier. Specifically, we propose the following weighting function:\nw(π(θ), x′, θ) = 1− c ·K(x′, π(θ)), with (11)\nK(x′, π(θ)) = Eπ(θ)[Cθ(ψπ(θ), x′)− Cθ(ψθ, x′)] (12) = min\ny′ Eπ(θ)[1− p(y′|x′, θ)]− Eπ(θ)[min y′ (1− p(y′|x′, θ)] (13)\n= Eπ(θ)[max y′ p(y′|x′, θ)]−max y′ p(y′|x′), (14)\nwhere 0 < c ≤ 1 is a parameter controlling the approximation of the weighted MOCU to the original MOCU, with smaller c giving a better approximation. The choice of c depends on the specific classification problem and the total query budget. 
Methods using a smaller c approximate the ELR methods better; hence, they will perform well in the first several iterations but may converge slowly in the long run. On the other hand, when c is closer to 1, the acquisition function weighs long-term benefits more heavily. It is clear that K(x′, π(θ)) ≥ 0 by (13). For binary classification, maxy′ p(y′|x′) ≥ 0.5. As Eπ(θ)[maxy′ p(y′|x′, θ)] ≤ 1, from (14) we have K(x′, π(θ)) ≤ 0.5, demonstrating that the weighting function in (11) satisfies the requirement w(π(θ), x′, θ) ≥ 0.5 > 0. Note that this simple weighting function does not change with respect to the model parameter values. Substituting it into the weighted MOCU expression, we have:
Mw(π(θ)) = Ep(x′){(1 − cK(x′, π(θ))) · K(x′, π(θ))}, (15)
which is a strictly concave function of K. We also illustrate the weighted-MOCU function in Fig. 2 for the same example as in Section 3.2. As shown in the figure, a smaller c provides a better approximation to the MOCU function, and all the weighted MOCU functions are strictly concave functions of π(θ1) instead of being piece-wise linear, which guarantees that the acquisition function Uw(x1; π̃(θ)) is positive. In general, weighted MOCU is strictly concave along most directions and only changes linearly along the directions in which K(x, π(θ)) is constant for x ∈ X , which correspond to queries that only reduce irrelevant uncertainties. This property guarantees convergence to the true optimal classifier.
Before presenting the theoretical convergence guarantee of weighted-MOCU based active learning, we summarize the computation of our weighted-MOCU based acquisition function in Algorithm 1, which can replace the ELR and MOCU-based acquisition functions in the Bayesian active learning algorithm whose pseudo-code is given in Appendix B. We estimate the computational complexity of Algorithm 1 for discrete feature and parameter spaces. 
Assume that the size of the discrete feature space is Nx = |X | and the size of the uncertainty set of classifiers is Nθ = |Θ|. We study the complexity of calculating the weighted MOCU. In the WMOCU function, the OBC error evaluation in line 19 is called for O(NxNθ) times. In ACQUISITIONFUN, WMOCU is called for constant times. Hence, the total complexity of calculating the acquisition function in weighted-MOCU based active learning is O(NxNθ). Compared with the ELR method, there is O(Nx) additional computation associated with computing the weight (1 − cK) in line 26. Hence, the incurred computational complexity is of the same order as the original ELR and MOCU-based methods.\nAlgorithm 1 Calculation for Weighted-MOCU based Acquisition Function 1: function ACQUISITIONFUN(x, πθ|D, c) 2: wmocu current =WMOCU(πθ|D) 3: wmocu next = 0 4: for y in {0, 1} do 5: for θ in Θ do 6: Generate array p(θ, y|D,x) = πθ|D · p(y|x, θ) 7: end for 8: p(y|D,x) = ∑ θ p(θ, y|D,x)\n9: πθ|D,x,y = p(θ, y|D,x)/p(y|D,x) 10: wmocu next = wmocu next+ p(y|D,x) ·WMOCU(πθ|D,x,y, c) 11: end for 12: return wmocu current− wmocu next 13: end function\n14: function WMOCU(πθ|D, c) 15: wmocu = 0 16: for x′ in X do 17: bayesian error = 0 18: for θ in Θ do 19: bayesian error = bayesian error + πθ|D · (1−maxy′ p(y′|x′, θ)) 20: end for 21: for y′ in {0, 1} do 22: p(y′|D,x′) = ∑ θ πθ|D · p(y′|x′, θ) 23: end for 24: obc error = 1−maxy′ p(y′|D,x′) 25: K = obc error − bayesian error 26: wmocu = wmocu+ p(x′) · [(1− cK)K] 27: end for 28: return wmocu 29: end function\nTheoretical convergence guarantee. Now we show that if active learning for a binary classification problem is guided by the acquisition function defined by (10) and (11), MOCU will converge to 0 almost surely and hence the procedure will converge to learning the optimal classifier of the true model. 
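For finite X and Θ, Algorithm 1 transcribes almost line-for-line into NumPy. This is a hedged sketch, not the authors' released code: it assumes the model is given as an array p[θ, x, y] = p(y|x, θ), a uniform p(x′), and class labels indexed by the last axis.

```python
import numpy as np

def wmocu(pi, p, c=1.0):
    """Weighted MOCU (15): E_{p(x')}[(1 - c*K(x', pi)) * K(x', pi)], with uniform p(x')."""
    obc_err = 1.0 - np.einsum('t,txy->xy', pi, p).max(axis=-1)  # OBC error per candidate
    bayes_err = pi @ (1.0 - p.max(axis=-1))                     # E_pi[optimal error] per candidate
    k = obc_err - bayes_err                                     # K(x', pi) >= 0
    return ((1.0 - c * k) * k).mean()

def acquisition(pi, p, x, c=1.0):
    """U_w(x) from (10): one-step-look-ahead reduction of weighted MOCU for candidate x."""
    after = 0.0
    for y in range(p.shape[-1]):
        py = pi @ p[:, x, y]                   # p(y|x)
        post = pi * p[:, x, y] / py            # Bayes update pi(theta | x, y)
        after += py * wmocu(post, p, c)
    return wmocu(pi, p, c) - after

def next_query(pi, p, c=1.0):
    """Query the candidate maximizing the weighted-MOCU acquisition function."""
    return max(range(p.shape[1]), key=lambda x: acquisition(pi, p, x, c))
```

In a toy pool with one informative candidate and one objective-irrelevant candidate (both models make the same prediction on it, so querying it leaves the posterior unchanged), the acquisition value of the irrelevant candidate is 0 and the informative one is selected.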
We assume that both X and Θ are discrete with finite elements, that the true model parameter satisfies θr ∈ Θ, and that the prior distribution π0(θ) over Θ satisfies π0(θr) > 0. We denote the posterior by πn(θ) and the predictive probability by pn(y|x) in the n-th weighted-MOCU based active learning iteration, respectively. In the following, we give the important lemmas first. All the proofs of the presented lemmas can be found in Appendix A.
Lemma 1 Given π(θ), M(π(θ)) = 0 if and only if Mw(π(θ)) = 0.
Lemma 1 indicates that if Mw(π(θ)) = 0, the OBC ψπ(θ) converges to the optimal classifier ψθr, as explained in the first paragraph of Section 3.1.
Lemma 2 Define G(x′, π(θ)) = (1 − cK(x′, π(θ)))K(x′, π(θ)), 0 < c ≤ 1. G(x′, π(θ)) is a concave function of π(θ).
It is important to choose a weighting scheme that renders a concave function G, as it guarantees that the acquisition function is larger than or equal to 0, so that adding a new observation helps to reduce the weighted MOCU and effectively guide active learning.
Lemma 3 ∀x ∈ X, Uw(x; π(θ)) ≥ 0.
Lemma 4 At the n-th active learning iteration, if Uw(x; πn(θ)) = 0 for all x ∈ X, then Mw(πn(θ)) = 0.
This lemma states that if the acquisition function values of all candidates with respect to π(θ) are 0, the weighted MOCU is 0. By Lemma 1, so is MOCU. With these, we can conclude that the OBC with respect to π(θ) has converged to the optimal classifier.
This is significant when compared with the original ELR and MOCU-based methods: as we have shown, this is not the case for them, so they may get stuck earlier and therefore lose long-term efficiency.
Lemma 5 If, following some policy, a candidate x is measured infinitely often almost surely, then limn→∞ Uw(x; πn(θ)) = 0 almost surely.
Intuitively, if a candidate has been measured many times, there is no benefit to measuring it again.
With these lemmas, we can prove the convergence of weighted-MOCU based active learning:
Theorem 1 Assume that both X and Θ are discrete with finite elements, that the true model parameter satisfies θr ∈ Θ, and that the prior distribution π0(θ) over Θ satisfies π0(θr) > 0; then for the active learning algorithm defined by the acquisition function (10), we have limn→∞ M(πn(θ)) = 0 almost surely.
Proof. As the number of active learning iterations n → ∞, following the acquisition function (10), some of the candidates will be measured infinitely many times. Define XA ⊂ X as the set of candidates that have been measured infinitely many times. Denoting the sequence of candidates measured following (10) as {xn}, we have: ∃N, s.t. ∀n > N, xn ∈ XA. Based on Lemma 5,
limn→∞ Uw(xn; πn(θ)) = 0.
On the other hand, since under the weighted MOCU policy Uw(xn; πn(θ)) = maxx∈X Uw(x; πn(θ)), limn→∞ Uw(xn; πn(θ)) = 0 indicates that ∀x ∈ X, Uw(x; πn(θ)) uniformly converges to 0. Based on Lemma 4, limn→∞ Mw(πn(θ)) = 0, and we can conclude the proof with Lemma 1." }, { "heading": "4 EMPIRICAL RESULTS", "text": "We benchmark our weighted-MOCU method with other active learning algorithms, including random sampling, MES (Sebastiani & Wynn, 2000), BALD (Houlsby et al., 2011) and ELR (Roy & McCallum, 2001), on both simulated and real-world classification datasets. In the following experiments, we set c = 1 for the weighted MOCU function. The code for our experiments is made available at https://github.com/QianLab/WMOCU_AL.
Simulated experiments.
In addition to the one-dimensional simulated example introduced in Section 1, we test our model on a simulation setting similar to the 'block in the middle' dataset in (Houlsby et al., 2011), where noisy observations with flip error are simulated in a block region on the decision boundary. We generate data based on a two-dimensional Bayesian logistic regression model: p(y = 1|x, w, b) = 1 / (1 + exp(−wᵀx − b)) with x ∈ [−4, 4]². The block region is within [−0.5, 0.5]², with the flip error rate equal to 0.3. For the model parameter prior, w1 ∼ U(0.3, 0.8), w2 ∼ U(−0.25, 0.25) and b ∼ U(−0.25, 0.25) are uniformly distributed; w1, w2 and b are independent. We randomly sample 100 particles from the parameter prior, with one of the particles as the true model parameter. The five active learning algorithms are compared for 500 iterations by the OBC error with respect to the testing data generated from the true model. We repeat the simulations for 500 runs and plot the average performance with standard deviation bars in Fig. 3. The error regret is defined as the error difference between the OBC and the true optimal classifier. From the figure, since MES simply chooses the candidates with predictive probability closest to 0.5, it samples many noisy observations from the block region. ELR performs well in the first several iterations but poorly after 200 samples. Our weighted MOCU performs the best.
We have also benchmarked our weighted-MOCU based method against other active learning methods on a synthetic multi-class classification problem. We assume the probabilistic model p(y|x, σ²y) = fy(x, σ²y) / Σy′ fy′(x, σ²y′) with x ∈ [−2, 2]², y ∈ {0, 1, 2} and fy = exp(−(x − my)² / (2σ²y)). We set my to (0, 0), (1, 0), (0, 1) for y = 0, 1, 2, respectively, with σ²y ∼ U(1, 5) being the uncertain parameters. As in the previous binary classification experiment, we test for 300 runs and plot the average performance with standard deviations in Fig. 4.
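For reference, the class-conditional model of this multi-class simulation can be written out directly (function and variable names are our own):

```python
import math

def class_probs(x, means, sigma2s):
    """p(y|x) proportional to f_y(x) = exp(-||x - m_y||^2 / (2 * sigma2_y)).

    x       : feature vector, e.g. a 2-tuple in [-2, 2]^2
    means   : list of class means m_y
    sigma2s : list of class variances sigma2_y (the uncertain parameters)
    """
    f = [math.exp(-sum((xi - mi) ** 2 for xi, mi in zip(x, m)) / (2.0 * s2))
         for m, s2 in zip(means, sigma2s)]
    total = sum(f)
    return [fi / total for fi in f]
```

With equal σ²y, the symmetry of the means m1 = (1, 0) and m2 = (0, 1) about x = (0, 0) makes classes 1 and 2 equally likely there, while class 0 is the most likely.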
We can observe that ELR performs poorly in the long run, while our Weighted MOCU has better empirical performance, on par with BALD. More results and discussion are in Appendix D & E.
Real-world benchmark experiments. We also present the results on the UCI User Knowledge dataset (Kahraman et al., 2013). The dataset includes 403 samples assigned to 4 classes (High, Medium, Low, Very Low), with each sample having five features in [0, 1]⁵. We have grouped the samples into two classes, with 224 samples in High or Medium and 179 in Low or Very Low. We consider the first and fifth features for classification and equally divide the feature space into 4 × 4 bins. For the i-th bin, the probability of candidates belonging to High or Medium is denoted by θi, 1 ≤ i ≤ 16; the θi's are independent and θi ∼ Beta(αi, βi), with hyperparameters αi and βi. We present the results with the uncertainty class constructed by setting αi = βi = 10 in eight randomly chosen bins; for the other bins, αi = 5, βi = 2 if the true frequency of High or Medium in the i-th bin is lower than 0.5, and αi = 2, βi = 5 otherwise. We have randomly drawn 150 samples from each class as the candidate pool and perform the five different active learning algorithms. We repeat the whole procedure 150 times and the average error rates are shown in Fig. 5. While ELR clearly gets stuck in this setup, our Weighted MOCU method can converge to the optimal classifier with fewer samples than all the competing methods. BALD performs poorly because the bins with α = β = 10 have less uncertainty but more impact on the OBC prediction, and BALD fails to identify that. More comprehensive results and discussion, including results on the UCI Letter Recognition dataset (Dua & Graff, 2017), can be found in Appendix F." }, { "heading": "5 CONCLUSIONS", "text": "We have identified potential convergence problems of existing ELR methods and proposed a novel active learning strategy for classification based on weighted MOCU.
Our weighted MOCU directly targets decreasing the classification error and ignores uncertainty irrelevant to the classification performance. More critically, it can capture continuous change in objective-relevant uncertainty. Hence, our new active learning can be efficient both at the beginning and in the long run, with the guarantee of converging to the optimal classifier. Empirical results have demonstrated that active learning guided by weighted MOCU leads to sample-efficient learning. Future work includes theoretical analysis of MOCU-guided active learning for multi-class classification, as well as developing optimization methods for active learning in continuous space." }, { "heading": "ACKNOWLEDGMENTS", "text": "X. Qian was supported in part by the National Science Foundation (NSF) Awards 1553281, 1812641, 1835690, and 1934904. B.-J. Yoon was supported in part by the NSF Award 1835690. The work of E. R. Dougherty and F. J. Alexander was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Mathematical Multifaceted Integrated Capability Centers program under Award DE-SC0019303." }, { "heading": "A. PROOFS OF LEMMAS", "text": "Proof of Lemma 1. Based on (3), since Cθ(ψπ(θ), x′) − Cθ(ψθ, x′) ≥ 0, M(π(θ)) = 0 iff Cθ(ψπ(θ), x′) − Cθ(ψθ, x′) = 0 ∀x′ ∈ X, ∀θ ∈ supp(π). In addition, in (9), w(π(θ), x′, θ) > 0, so Cθ(ψπ(θ), x′) − Cθ(ψθ, x′) = 0 ∀x′ ∈ X, ∀θ ∈ supp(π) iff Mw(π(θ)) = 0, which concludes the proof.
Proof of Lemma 2. In the following proof, we omit the argument x′ in G and K for simplicity. Owing to the concavity of the min operator, miny′ Eπ(θ)[1 − p(y′|x′, θ)] is a concave function of π(θ). With Eπ(θ)[miny′(1 − p(y′|x′, θ))] being a linear function of π(θ), based on (13), K(π(θ)) equals a concave function minus a linear function and thus is also a concave function.
As analyzed in Section 3.2, 0 ≤ K(π(θ)) ≤ 0.5.
We define T(κ) = (1 − cκ)κ, κ ∈ [0, 0.5], a strictly increasing and strictly concave function for 0 < c ≤ 1. G(π(θ)) = T[K(π(θ))] is a composite function of T and K. So we conclude the proof using the concavity property of composite functions:
T[K(λπ1(θ) + (1 − λ)π2(θ))] ≥ T[λK(π1(θ)) + (1 − λ)K(π2(θ))] ≥ λT[K(π1(θ))] + (1 − λ)T[K(π2(θ))]. (16)
The first inequality holds because T is increasing and K is concave; the second inequality holds as T is a concave function.
Proof of Lemma 3. Since π(θ) = Σy p(y|x)π(θ|x, y), by Jensen's inequality we have G(x′, π(θ)) ≥ Ey|x[G(x′, π(θ|x, y))], as G is a concave function. So the weighted MOCU acquisition function satisfies:
Uw(x; π(θ)) = Ex′[G(x′, π(θ))] − Ex′[Ey|x[G(x′, π(θ|x, y))]] ≥ 0. (17)
Proof of Lemma 4. We will prove the contrapositive of the lemma: assuming Mw(πn(θ)) > 0, ∃x ∈ X s.t. Uw(x; πn(θ)) > 0. Based on (15), Mw(πn(θ)) > 0 indicates that ∃x ∈ X s.t. K(x, πn(θ)) > 0. It is sufficient to show that if K(x, πn(θ)) > 0, then Uw(x; πn(θ)) > 0. To prove that, we only need to prove G(x, πn(θ)) > Epn(y|x)[G(x, πn(θ|x, y))]; then by (17), Uw(x; πn(θ)) > 0. Since G is a concave function, we know G(x, πn(θ)) ≥ Epn(y|x)[G(x, πn(θ|x, y))]. With πn(θ) = Σy pn(y|x)πn(θ|x, y), we can rewrite (16) as:
T[K(x, πn(θ))] ≥ T[Epn(y|x)[K(x, πn(θ|x, y))]] ≥ Epn(y|x)[T[K(x, πn(θ|x, y))]].
The second inequality holds with equality only if ∀y ∈ {0, 1}, K(x, πn(θ|x, y)) = K(x, πn(θ)), which means that, to prove G(x, πn(θ)) > Ey|x[G(x, πn(θ|x, y))], we just need to show that ∃y ∈ {0, 1} with K(x, πn(θ|x, y)) ≠ K(x, πn(θ)). In the following, we will show that if K(x, πn(θ)) > 0, then ∃y ∈ {0, 1} s.t. K(x, πn(θ|x, y)) ≠ K(x, πn(θ)). Denote ŷ = arg maxy pn(y|x). By (14) we have:
K(x, πn(θ)) = Σθ∈supp(πn) πn(θ)[maxy p(y|x, θ) − p(ŷ|x, θ)]. (18)
Since K(x, πn(θ)) > 0, the parameter set Θo = {θ ∈ supp(πn) : arg maxy p(y|x, θ) ≠ ŷ} is not empty. We only keep the nonzero terms in K:
K(x, πn(θ)) = Σθ∈Θo πn(θ)[maxy p(y|x, θ) − p(ŷ|x, θ)].
(19)
For binary classification, ŷ = arg maxy pn(y|x), indicating that the predictive probability satisfies pn(ŷ|x) ≥ 0.5. For θ ∈ Θo we have p(ŷ|x, θ) < 0.5, so that if θ ∈ Θo, πn(θ|x, ŷ) = πn(θ)p(ŷ|x, θ) / pn(ŷ|x) < πn(θ).
If we observe (x, ŷ) in the (n+1)-th iteration, the updated posterior predictive probability satisfies pn(ŷ|x, {x, ŷ}) ≥ pn(ŷ|x) ≥ 0.5, and therefore arg maxy pn(y|x, {x, ŷ}) = ŷ. Hence,
K(x, πn(θ|x, ŷ)) = Σθ∈Θo πn(θ|x, ŷ)[maxy p(y|x, θ) − p(ŷ|x, θ)] < K(x, πn(θ)). (20)
Since K(x, πn(θ|x, ŷ)) ≠ K(x, πn(θ)), we have G(x, πn(θ)) > Epn(y|x)[G(x, πn(θ|x, y))] and
Uw(x; πn(θ)) = Ep(x′)[G(x′, πn(θ))] − Ep(x′)[Epn(y|x)[G(x′, πn(θ|x, y))]] ≥ p(x)[G(x, πn(θ)) − Epn(y|x)[G(x, πn(θ|x, y))]] > 0. (21)
This concludes our proof.
Proof of Lemma 5. Adding a new data point (x, y) to D, the posterior changes as πn(θ|x, y) = πn(θ)p(y|x, θ) / pn(y|x). Define Θx = {θ ∈ Θ : p(y|x, θ) = p(y|x, θr)}. Denote by Nx(n) the number of times candidate x has been queried by the n-th iteration. Based on posterior consistency theory, we have Σθ∈Θx πn(θ) → 1 almost surely as Nx(n) → ∞ (Gelman et al., 2013). Since pn(y|x) = Σθ∈Θ πn(θ)p(y|x, θ), we have pn(y|x) → p(y|x, θr) almost surely. Hence limn→∞ πn(θ|x, y) − πn(θ) = 0 almost surely, which indicates limn→∞ Uw(x; πn(θ)) = 0 almost surely." }, { "heading": "B. WEIGHTED-MOCU BASED ACTIVE LEARNING & COMPUTATIONAL COMPLEXITY", "text": "The pseudo-code of the general active learning procedure is provided in Algorithm 2. The function ACQUISITIONFUN can be the acquisition function of various methods, including weighted-MOCU, ELR, BALD, etc.
Computational complexity. We study the complexity of the complete active learning procedure. As analyzed in the main text for the computation of the weighted-MOCU acquisition function, evaluating the WMOCU function takes O(NxNθ) operations, and ACQUISITIONFUN calls WMOCU a constant number of times. Finally, in the main procedure, ACQUISITIONFUN is called for each x in each iteration.
Hence, the total complexity of weighted-MOCU based active learning is O(T·Nx²·Nθ).

Algorithm 2 General active learning procedure
1: function MAINPROCEDURE( )
2:   Set a discrete candidate set X, the probability array px, and iteration number T
3:   Set the discrete parameter set Θ and the corresponding probability array πθ
4:   Initialize the data set D = ∅
5:   πθ|D = πθ
6:   for t = 1 to T do
7:     for x in X do
8:       Store ACQUISITIONFUN(x, πθ|D) to the array UX
9:     end for
10:    Optimize UX and find the maximum point x∗
11:    Obtain the label y∗ corresponding to x∗ and update D = D ∪ {x∗, y∗}
12:    for θ in Θ do
13:      Update πθ|D ∝ πθ|D · p(y∗|x∗, θ)
14:    end for
15:    X = X \ {x∗}
16:  end for
17: end function

Figure S1. The acquisition functions based on the model (22): (a) acquisition function of BALD (mutual information); (b) acquisition function of ELR (design value)." }, { "heading": "C. DETAILS OF THE ONE-DIMENSIONAL ACTIVE LEARNING EXAMPLE IN INTRODUCTION", "text": "The example in the Introduction of the main text is a binary classification problem with y ∈ {0, 1} based on only one feature x ∈ [−4, 4]. The underlying discriminative model is based on:
pc(y = 1|x, a, b) = S(x) + ε(x, a, b),
S(x) = 0.6 · exp(x) / (1 + exp(x)) + 0.2,
ε(x, a, b) = a · exp(−x²) + b · [exp(−(x − 4)²) + exp(−(x + 4)²)], (22)
where θ = (a, b)ᵀ is the uncertain parameter vector, with a and b independently and uniformly distributed on the intervals [−0.1, 0.1] and [−0.2, 0.2], respectively. The discriminative model equals a sigmoid-shaped function S(x) plus the perturbation ε(x, a, b), a mixed Gaussian function that changes with a and b; the uncertainty class of classifiers is constructed by such deviations on S(x). The discriminative model has higher uncertainty near x = ±4, which depends on the value of b, than near x = 0, which depends on a.
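To make the behavior of this example concrete, model (22) can be coded directly; a quick check confirms that p(y = 1|x) stays on one side of 0.5 at x = ±4 for every (a, b) in the prior ranges, while the side taken at x = 0 depends on the sign of a (function names are ours):

```python
import math

def sigmoid_base(x):
    """S(x) = 0.6 * sigmoid(x) + 0.2, bounded in (0.2, 0.8)."""
    return 0.6 * math.exp(x) / (1.0 + math.exp(x)) + 0.2

def perturbation(x, a, b):
    """Mixed-Gaussian deviation controlled by the uncertain parameters (a, b)."""
    return (a * math.exp(-x ** 2)
            + b * (math.exp(-(x - 4) ** 2) + math.exp(-(x + 4) ** 2)))

def p_y1(x, a, b):
    """p(y = 1 | x, a, b) for the one-dimensional example, eq. (22)."""
    return sigmoid_base(x) + perturbation(x, a, b)
```

For instance, over all corners of the prior box a ∈ [−0.1, 0.1], b ∈ [−0.2, 0.2], p_y1 stays above 0.5 at x = 4 and below 0.5 at x = −4, so the optimal label there never changes, whereas at x = 0 it crosses 0.5 as a changes sign.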
The values of a and b have negligible influence on p(y = 1|x) near x = ±4 and x = 0, respectively. So by observing data at x = ±4, the uncertainty on pc(y = 1|x = 0) will not be reduced significantly. In Figs. S1(a) and (b), we show how the acquisition functions of BALD and ELR change with respect to x. It is clear that the acquisition function of BALD should have its largest value at x = ±4, since p(y|x) has the highest uncertainty there. On the other hand, since p(y|x = ±4) is always above or below 0.5, the specific value of p(y|x, a, b) will not affect the corresponding optimal Bayesian classifier (OBC), and therefore the loss reduction in Fig. S1(b) at x = ±4 is always 0; the acquisition function of ELR around x = 0 can be larger than 0, since knowing the specific value of p(y|x = 0, a, b) can reduce the classification error." }, { "heading": "D. DETAILS OF THE BINARY CLASSIFICATION EXAMPLE IN SECTION 3.2", "text": "In the binary classification problem, Θ = {θ1, θ2} and X = {x1, x2}. The probabilistic model setting for the two candidates is symmetric:
p(y1|x1, θ1) = (0.6, 0.4), p(y1|x1, θ2) = (0.3, 0.7),
p(y2|x2, θ1) = (0.7, 0.3), p(y2|x2, θ2) = (0.4, 0.6).
There are three intervals corresponding to the linear pieces of MOCU in Fig. 2: [0, 0.33], (0.33, 0.67] and (0.67, 1]. In the three intervals, the OBC predictions ψπ(θ)(x1) for x1 are 1, 1 and 0, respectively; the OBC predictions ψπ(θ)(x2) for x2 are 1, 0 and 0, respectively.
In Fig. 2 we set the prior π̃(θ1) = 0.15; then, based on Bayes' rule, we can obtain the posterior after observing (x1, y1). Depending on the observed value of y1, the posteriors are π̃(θ1|x1, y1 = 0) = 0.2609 and π̃(θ1|x1, y1 = 1) = 0.0916, both of which fall into the first linear piece of MOCU." }, { "heading": "E. 
MULTI-CLASS CLASSIFICATION", "text": "Although we have shown in the main text that our weighted MOCU can achieve good empirical performance, converging to the OBC in the simulated multi-class classification experiment, active learning for multi-class classification problems can be complicated. The weighting function (10) adopted in the main text may not carry the same theoretical guarantee of convergence to the optimal classifier when applied to multi-class classification problems. Here we show a counterexample for which Lemma 4 does not hold if the same weighting function is used.
Assume a three-class classification problem with y ∈ {0, 1, 2}. The candidate pool has only one candidate, X = {x}, and the parameter set is Θ = {θ1, θ2, θ3}. In addition, we set the probabilistic model p(y|x, θ) and the prior π(θ) as shown in Tables S1 and S2, and calculate the posterior and posterior predictive probabilities. In the tables, yo denotes the one-step-look-ahead observation corresponding to x, and x is omitted for simplicity. Without loss of generality, we set the weighted MOCU parameter c = 1.

        p(y|θ1)  p(y|θ2)  p(y|θ3)  p(y)  p(y|yo=0)  p(y|yo=1)  p(y|yo=2)
y = 0   0.4      0.4      0.4      0.4   0.4        0.4        0.4
y = 1   0.3      0.1      0.5      0.3   0.3        0.327      0.273
y = 2   0.3      0.5      0.1      0.3   0.3        0.273      0.327

Table S1. The probabilities of p(y|x, θ) and p(y|x, yo).

         π(θ)  π(θ|yo=0)  π(θ|yo=1)  π(θ|yo=2)
θ = θ1   0.8   0.8        0.8        0.8
θ = θ2   0.1   0.1        0.17       0.03
θ = θ3   0.1   0.1        0.03       0.17

Table S2. The prior and posterior of π(θ).

Here two properties of the setting are worth mentioning:
1. π(θ1) is close to 1 and, as a result, ∀yo ∈ {0, 1, 2} we have maxy p(y) = maxy p(y|yo) = maxy p(y|θ1) = 0.4;
2. p(y|θ2) and p(y|θ3) are symmetric and π(θ2) = π(θ3); as a result, ∀yo ∈ {0, 1, 2}, π(θ1) = π(θ1|yo) = 0.8 and therefore Eπ(θ)[maxy′ p(y′|θ)] = Eπ(θ|yo)[maxy′ p(y′|θ)] = 0.8 × 0.4 + 0.2 × 0.5 = 0.42.
Recall that the K function and weighted MOCU are:
K(π(θ)) = Eπ(θ)[maxy′ p(y′|θ)] − maxy′ p(y′), (23)
Mw(π(θ)) = [1 − K(π(θ))] · K(π(θ)).
(24)
Therefore, we have ∀yo ∈ {0, 1, 2}, K(π(θ)) = K(π(θ|yo)) = 0.02 and Mw(π(θ)) = Mw(π(θ|yo)) > 0. On the other hand,
Uw(π(θ)) = Mw(π(θ)) − Ep(yo)[Mw(π(θ|yo))] = 0, (25)
which means that the algorithm may get stuck. Here we give an extreme case where only one candidate is in the search pool, but it is straightforward to build a more practical example based on what we have shown here.
We can see from the example that, unlike in binary classification problems, the weighting function 1 − cK may remain unchanged after a single observation in multi-class problems. Because of this, the weighted-MOCU algorithm may get stuck. Since the OBC prediction is the maximum of the predictive distribution p(y|x), the weight function is introduced to capture the changes of p(y|x), as that indicates the potential shift of the OBC prediction in the long run. K is a function of maxy p(y|x); in the binary case, maxy p(y|x) must change as p(y|x) changes. However, in multi-class problems, the probability of the optimal label, maxy p(y|x), may remain unchanged while the probabilities of the other labels change, just like in the example above where maxy p(y) = maxy p(y|yo = 1). In the next section, we propose a weighting function that can capture the change of any element in p(y|x)." }, { "heading": "F. ANOTHER WEIGHTED MOCU SCHEME FOR MULTI-CLASS CLASSIFICATION", "text": "To extend the weighted MOCU scheme to suit the multi-class problem, we propose a weight function that can capture the change of p(y|x). The weighting function is defined as the softmax of p(y|x):
w(π(θ), x′, θ) = exp(maxy p(y|x)) / Σi exp(p(yi|x)), (26)
where p(y|x) is the posterior predictive distribution at the current active learning iteration. We compare this Weighted MOCU with other active learning algorithms empirically on the synthetic three-class classification problem, and the performance comparison is shown in Fig. S2.
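A minimal sketch of the softmax weighting in (26), assuming p(y|x) is given as a plain list of probabilities (names are ours):

```python
import math

def softmax_weight(pred):
    """Weighting function (26): softmax emphasis on the current OBC label.

    pred : posterior predictive distribution p(y|x) as a list of probabilities.
    """
    return math.exp(max(pred)) / sum(math.exp(p) for p in pred)
```

Unlike the binary weight 1 − cK, this weight moves whenever any entry of p(y|x) changes, even if maxy p(y|x) stays fixed; for example, for the Table S1 predictive distributions (0.4, 0.3, 0.3) and (0.4, 0.327, 0.273) the maximum is 0.4 in both cases, yet the two weights differ.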
This new Weighted MOCU (Weighted MOCU2) performs slightly better than the other algorithms on this multi-class classification problem.
Figure S2. The expected OBC error regret comparison between different active learning algorithms for the three-class classification problem." }, { "heading": "G. ADDITIONAL SYNTHETIC EXPERIMENTS", "text": "We run the same synthetic experiment as in Fig. 3 with a different prior setting: w1 ∼ U(0.3, 0.8), w2 ∼ U(−0.02, 0.02) and b ∼ U(−0.25, 0.25); the results are shown in Figure S3. The performance shows that only our Weighted MOCU method performs better than the random benchmark.
Here we benchmark different active learning strategies for OBC with another synthetic example. Assume a classification problem with two-dimensional input features x = (x1, x2) ∈ R² and binary class labels y ∈ {0, 1}. The computational model is defined by a decision boundary in quadratic form, x2 = a·x1² + b·x1 + c, i.e. p(y = 1|x, a, b, c) = 1(x2 > a·x1² + b·x1 + c). The parameter vector θ = (a, b, c) ∈ R³ is uncertain, and the true model is characterized by a true parameter θ∗. Unlike the Monte Carlo sampling in the main text, here we consider a discrete grid setting for both the input space and the parameter space, with the discretization for each variable as follows:
1. x1 ranges in [−0.5, 0.5] with increment 0.05;
2. x2 ranges in [0, 2] with increment 0.1;
3. a ranges in [−4.3, −3.8] with increment 0.05;
4. b ranges in [−0.25, 0.25] with increment 0.05;
5. c ranges in [1, 2] with increment 0.05.
For now, we simply assume that the distributions over the feature space and parameter space are uniform to illustrate the effectiveness of MOCU-based active learning. With prior knowledge of the system of interest, a knowledge-driven prior should be incorporated.
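The quadratic-boundary model and the discrete grids just described can be sketched as follows (the grid helper and all names are ours):

```python
def true_label(x, theta):
    """Quadratic-boundary model: p(y=1|x,a,b,c) = 1(x2 > a*x1^2 + b*x1 + c)."""
    (x1, x2), (a, b, c) = x, theta
    return 1 if x2 > a * x1 ** 2 + b * x1 + c else 0

def grid(lo, hi, step):
    """Inclusive discretization of an interval, e.g. grid(0.0, 2.0, 0.1)."""
    n = round((hi - lo) / step)
    return [lo + i * step for i in range(n + 1)]

# discrete feature and parameter spaces, following the increments listed above
X = [(x1, x2) for x1 in grid(-0.5, 0.5, 0.05) for x2 in grid(0.0, 2.0, 0.1)]
Theta = [(a, b, c) for a in grid(-4.3, -3.8, 0.05)
                   for b in grid(-0.25, 0.25, 0.05)
                   for c in grid(1.0, 2.0, 0.05)]
```

With these increments, the feature grid has 21 × 21 = 441 points and the parameter grid has 11 × 11 × 21 = 2541 points.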
Following the weighted-MOCU based active learning algorithm in Algorithm 1, we can sequentially query the true system and reduce the model uncertainty in a way that maximally reduces the classification error of the corresponding OBC.
Figure S3. The expected OBC error regret comparison between different active learning algorithms on binary classification.
Now we assume that when querying the system, the class label is given with a heterogeneous random flipping error whose probability is a function of x1: p(y = 1|z = 0) = p(y = 0|z = 1) = 0.3 × (1 − 4x1²) + 0.1. Therefore, when x1 = 0, the flipping error is 0.4; and when x1 = ±0.5, the flipping error is 0.1. We have implemented the same methods as in the main text with 50 iterations and 100 runs. The active learning results are illustrated in Fig. S4. As we can see in this figure, MES does not perform well, as it cannot differentiate between model uncertainty and observation error. ELR performs similarly to BALD and our weighted-MOCU based method at the beginning, but then it gets stuck before finding the true boundary. BALD and our weighted MOCU perform similarly. This is because in this setting p(y|x, θ) is either 1 or 0, so there is no irrelevant uncertainty for which p(y|x, θ) is always larger or smaller than 0.5 but its exact value is uncertain. In addition to the average performance comparison, we deliberately choose one of the runs in which the ELR method gets stuck to better illustrate the difference between the existing ELR methods and the proposed weighted-MOCU based method. In this run, the randomly chosen parameters are (a = −3, b = 0, c = 1.9). Fig. S5(a) shows the error regret (the OBC error minus the true optimal classifier error) comparison, in which ELR gets stuck and the weighted-MOCU based method reaches 0.
Notice that the y-axis is in logarithmic scale, so the vertical line in the WMOCU plot implies that the value becomes 0. An error regret of 0 indicates that the OBC equals the true optimal classifier; but in practice we do not know the true optimal classifier, so we need the value of MOCU to quantify the expected error difference between the OBC and the optimal classifier of each θ = (a, b, c). Fig. S5(b) shows the changes of the MOCU value during the two active learning procedures. Not surprisingly, the MOCU value during the iterations of the ELR method also gets stuck, while the MOCU value in the iterations of the weighted-MOCU method continues to decrease. Fig. S5(c) shows the changes of the maximum value of the acquisition function in each iteration. The acquisition function of ELR decreases to 0 after 22 iterations, and that explains why ELR gets stuck. On the other hand, the maximum acquisition function of WMOCU is always positive, as the corresponding MOCU is positive, until it gets close to 10⁻¹⁶, which is the rounding error in floating-point arithmetic. In theory, as the observation is noisy, we cannot be sure of the optimal prediction. Therefore, the MOCU and the acquisition function of weighted MOCU should always be positive, which is demonstrated in the figures.
We have also performed an experiment to show how the algorithm performance changes under different noise levels. We set the flipping error rate as p(y ≠ z|x) = ε × (1 − 4x1²) + ε, 0 ≤ ε ≤ 0.25. Therefore, when x1 = 0, the flipping error is 2ε; and when x1 = ±0.5, the flipping error is ε. We perform the same methods with 100 iterations and 100 runs at the noise levels ε = 0.05 and ε = 0.25. The resulting active learning performance curves are illustrated in Fig. S6. We can
Figure S4.
The expected OBC error comparison between different active learning algorithms in the setting with heterogeneous observation error.
Figure S5. Comparison of ELR and weighted MOCU on a specific run: (a) error regret; (b) MOCU value; (c) maximum acquisition function value.
Figure S6. Active learning algorithm performance comparison with different noise levels: (a) noise level ε = 0.05; (b) noise level ε = 0.25.
see from the figure that the performance of MES degrades significantly with high noise, while the performance of the other methods does not appear to be very sensitive to the increasing noise level." }, { "heading": "H. REAL-WORLD BENCHMARK EXPERIMENTS.", "text": "Here we present the complete results on the UCI User Knowledge dataset (Kahraman et al., 2013). In addition to the uncertainty class setup in the main text, we have tested two other setups of hyperparameter values: 1) 'uniform prior' with αi = βi = 1, and 2) 'good prior' with αi = βi = 10 in eight bins chosen randomly; for the other bins, αi = 5, βi = 2 if the true frequency of High or Medium
Figure S7.
Classification error rate comparison on the UCI User Knowledge dataset: (a) uniform prior; (b) good prior.
in the i-th bin is higher than 0.5, and αi = 2, βi = 5 if the frequency is lower than 0.5. We also randomly draw 150 samples from each class as the candidate pool and perform the five different active learning algorithms. We repeat the whole procedure 150 times and the average error rates are shown in Fig. S7. In both Fig. S7a and Fig. S7b, ELR performs the best in these two setups, while our Weighted MOCU performs similarly. BALD performs reasonably in Fig. S7a but again performs poorly in Fig. S7b. This is because the bins with α = β = 10 have less uncertainty but more impact on the OBC prediction, and BALD fails to identify that in this setup again.
Figure S8. Classification error rate comparison on the UCI Letter Recognition dataset: (a) performance of letter E vs. F classification; (b) performance of letter P vs. D classification.
We also present the results on the UCI Letter Recognition dataset (Dua & Graff, 2017). Letter Recognition is a multi-class classification dataset, with each sample having 16 numerical features generated from typed images of the capital letters in the English alphabet. We select two pairs of hard-to-distinguish letters: E vs. F and D vs. P. The total number of training samples is 1543 and 1608 for E vs. F and D vs. P, respectively. Active learning algorithms are applied with Bayesian logistic regression models. We randomly take 100 data points first to construct the prior, and use the rest of the data as the pool to test the five active learning algorithms. For prior construction, we train a logistic regression model on the 100 data points and take the trained parameters as the mean of a normally distributed prior with variance equal to 1.
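The prior construction just described, and the particle sampling used to form the uncertain parameter set, can be sketched as follows (we skip the logistic regression fit itself; the isotropic-normal shorthand and all function names are ours):

```python
import math
import random

def sample_particles(w_mean, n_particles=1000, std=1.0, seed=0):
    """Particles theta_t drawn from the prior N(w_mean, std^2 * I)."""
    rng = random.Random(seed)
    return [[rng.gauss(m, std) for m in w_mean] for _ in range(n_particles)]

def predictive(x, particles, weights):
    """Posterior predictive p(y=1|x) = sum_t pi(theta_t) * sigmoid(theta_t . x)."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    return sum(w * sigmoid(sum(p_i * x_i for p_i, x_i in zip(p, x)))
               for p, w in zip(particles, weights))
```

At x = 0, every particle predicts exactly 0.5, so the posterior predictive is 0.5 regardless of the particle weights, a handy sanity check for an implementation.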
Then we sample 1000 particles from the prior as the uncertain parameter set. We repeat the whole procedure 100 times and the average error rates are shown in Fig. S8. Unlike the synthetic datasets, the real-world datasets have no corresponding true models. We can only find the optimal models that approximate the data best. However, we can still see the trends of the different algorithms. Compared with random sampling, all the algorithms quickly converge to the optimal models. ELR performs the best in the first several iterations, while converging slowly in later iterations. Our weighted-MOCU based method is again demonstrated to converge faster than the other competing methods.
It is clear from all our experiments on both simulated and real-world data that, in addition to its theoretical guarantee for active learning with OBC, our weighted MOCU method has achieved consistently better or similar empirical performance compared to the best performing ones among the existing pool-based active learning methods, approaching the corresponding OBCs faster with fewer labeled samples." } ]
2021
null
SP:6c897187759edf48c1bd4f3536c098ac0d5f1179
[ "**Overview:** The paper presents experiments showing that the contrastive learning losses produce better embeddings or feature spaces than those produced by using binary cross-entropy losses. The experiments show that embeddings learned using contrastive learning losses seem to favor long-tailed learning tasks, out-of-distribution tasks, and object detection. The paper also presents an extension of the contrastive loss to improve the embeddings. The experiments in the paper use common and recent long-tail datasets as well as datasets for object detection and out-of-distribution tasks. ", "In this paper, the authors propose a new loss function to learn feature representations for image datasets that are class-imbalanced. The loss function is a simple yet effective tweak on an existing supervised contrastive loss work. A number of empirical tests are performed on long-tailed datasets showing the benefits of the proposed loss in beating state of the art methods. Some specific questions are listed below:" ]
Existing self-supervised learning (SSL) methods are mostly applied for training representation models from artificially balanced datasets (e.g. ImageNet). It is unclear how well they will perform in the practical scenarios where datasets are often imbalanced w.r.t. the classes. Motivated by this question, we conduct a series of studies on the performance of self-supervised contrastive learning and supervised learning methods over multiple datasets where training instance distributions vary from a balanced one to a long-tailed one. Our findings are quite intriguing. Different from supervised methods with large performance drop, the self-supervised contrastive learning methods perform stably well even when the datasets are heavily imbalanced. This motivates us to explore the balanced feature spaces learned by contrastive learning, where the feature representations present similar linear separability w.r.t. all the classes. Our further experiments reveal that a representation model generating a balanced feature space can generalize better than that yielding an imbalanced one across multiple settings. Inspired by these insights, we develop a novel representation learning method, called k-positive contrastive learning. It effectively combines strengths of the supervised method and the contrastive learning method to learn representations that are both discriminative and balanced. Extensive experiments demonstrate its superiority on multiple recognition tasks, including both long-tailed ones and normal balanced ones. Code is available at https://github.com/bingykang/BalFeat.
[ { "affiliations": [], "name": "Bingyi Kang" }, { "affiliations": [], "name": "Yu Li" }, { "affiliations": [], "name": "Zehuan Yuan" }, { "affiliations": [], "name": "Jiashi Feng" } ]
[ { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "Nikos Arechiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Fabio M Carlucci", "Antonio D’Innocente", "Silvia Bucci", "Barbara Caputo", "Tatiana Tommasi" ], "title": "Domain generalization by solving jigsaw puzzles", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Nitesh V Chawla", "Kevin W Bowyer", "Lawrence O Hall", "W Philip Kegelmeyer" ], "title": "Smote: synthetic minority over-sampling technique", "venue": "Journal of artificial intelligence research,", "year": 2002 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big selfsupervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Carl Doersch", 
"Andrew Zisserman" ], "title": "Multi-task self-supervised visual learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "arXiv preprint arXiv:1605.09782,", "year": 2016 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": "International journal of computer vision,", "year": 2010 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Hui Han", "Wen-Yuan Wang", "Bing-Huan Mao" ], "title": "Borderline-smote: a new over-sampling method in imbalanced data sets learning", "venue": "In International conference on intelligent computing,", "year": 2005 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 
Recognition,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "Self-supervised feature learning by learning to spot artifacts", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Salman Khan", "Munawar Hayat", "Syed Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Striking the right balance with uncertainty", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Salman H Khan", "Munawar Hayat", "Mohammed Bennamoun", "Ferdous A Sohel", "Roberto Togneri" ], "title": "Cost-sensitive learning of deep feature representations from imbalanced data", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2017 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" 
], "title": "Learning representations for automatic colorization", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Colorization as a proxy task for visual understanding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Alejandro Newell", "Jia Deng" ], "title": "How useful is self-supervised pretraining for visual tasks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor 
Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Li Shen", "Zhouchen Lin", "Qingming Huang" ], "title": "Relay backpropagation for effective learning of deep convolutional neural networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Merrielle Spain", "Pietro Perona" ], "title": "Measuring and predicting importance of objects in our visual world", "venue": null, "year": 2007 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 2013 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": "arXiv preprint arXiv:2005.10242,", "year": 2020 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Learning to model the tail", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chen Wei", "Lingxi Xie", "Xutong Ren", "Yingda Xia", "Chi Su", "Jiaying Liu", "Qi Tian", "Alan L Yuille" ], "title": "Iterative reorganization with weak spatial constraints: Solving arbitrary jigsaw puzzles for unsupervised representation learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer 
Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yuzhe Yang", "Zhi Xu" ], "title": "Rethinking the value of labels for improving class-imbalanced learning", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Yaoyao Zhong", "Weihong Deng", "Mei Wang", "Jiani Hu", "Jianteng Peng", "Xunqiang Tao", "Yaohai Huang" ], "title": "Unequal-training for deep face recognition with long-tailed noisy data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Boyan Zhou", "Quan Cui", "Xiu-Shen Wei", "Zhao-Min Chen" ], "title": "Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "George Kingsley Zipf" ], "title": "The psycho-biology of language: An introduction to dynamic philology, volume 21", "venue": null, "year": 1999 } ]
[ { "heading": null, "text": "Existing self-supervised learning (SSL) methods are mostly applied for training representation models from artificially balanced datasets (e.g. ImageNet). It is unclear how well they will perform in the practical scenarios where datasets are often imbalanced w.r.t. the classes. Motivated by this question, we conduct a series of studies on the performance of self-supervised contrastive learning and supervised learning methods over multiple datasets where training instance distributions vary from a balanced one to a long-tailed one. Our findings are quite intriguing. Different from supervised methods with large performance drop, the self-supervised contrastive learning methods perform stably well even when the datasets are heavily imbalanced. This motivates us to explore the balanced feature spaces learned by contrastive learning, where the feature representations present similar linear separability w.r.t. all the classes. Our further experiments reveal that a representation model generating a balanced feature space can generalize better than that yielding an imbalanced one across multiple settings. Inspired by these insights, we develop a novel representation learning method, called k-positive contrastive learning. It effectively combines strengths of the supervised method and the contrastive learning method to learn representations that are both discriminative and balanced. Extensive experiments demonstrate its superiority on multiple recognition tasks, including both long-tailed ones and normal balanced ones. Code is available at https://github.com/bingykang/BalFeat." 
}, { "heading": "1 INTRODUCTION", "text": "Self-supervised learning (SSL) has been popularly explored as it can learn data representations without requiring manual annotations and offer attractive potential of leveraging the vast amount of unlabeled data in the wild to obtain strong representation models (Gidaris et al., 2018; Noroozi & Favaro, 2016; He et al., 2020; Chen et al., 2020a; Wu et al., 2018). For instance, some recent SSL methods (Hénaff et al., 2019; Oord et al., 2018; Hjelm et al., 2018; He et al., 2020) use the unsupervised contrastive loss (Hadsell et al., 2006) to train the representation models by maximizing the instance discriminativeness, which are shown to generalize well across various downstream tasks, and even surpass the supervised learning counterparts in some cases (He et al., 2020; Chen et al., 2020a).\nDespite the great success, existing SSL methods focus on learning data representations from the artificially balanced datasets (e.g. ImageNet (Deng et al., 2009)) where all the classes have similar numbers of training instances. However in reality, since the classes in natural images follow the Zipfian distribution, the datasets are usually imbalanced and show a long-tailed distribution (Zipf, 1999; Spain & Perona, 2007), i.e., some classes involving significantly fewer training instances than others. Such imbalanced datasets are very challenging for supervised learning methods to model, leading to noticeable performance drop (Wang et al., 2017; Mahajan et al., 2018; Zhong et al., 2019). Thus several interesting questions arise: How well will SSL methods perform on imbalanced datasets? Will the quality of their learned representations deteriorate as the supervised learning methods? Or can they perform stably well? Answering these questions is important for understanding the behavior of SSL in practice. 
But these questions remain open as no research investigations have been conducted along this direction so far.\n(Figure 1 caption fragment: the shadow area indicates the decision boundary of each class.)\nOur work is motivated by the above questions to study the properties of data representations learned with supervised/self-supervised methods in a practical scenario. We start with two representative losses used by these methods, i.e., the supervised cross-entropy and the unsupervised contrastive losses (Hadsell et al., 2006; Oord et al., 2018), and investigate the classification performance of their trained representation models from multiple training datasets where the instance distribution gradually varies from a balanced one to a long-tailed one. We surprisingly observe that, different from the ones learned from supervised cross-entropy loss where performance drops quickly, the representation models learned from the unsupervised contrastive loss perform stably well, no matter how much the training instance distribution is skewed to be imbalanced. Such a stark difference between the two representation learning methods drives us to explore why SSL performs so stably. We find that using the contrastive loss can obtain representation models generating a balanced feature space that has similar separability (and classification performance) for all the classes, as illustrated in Figure 1.\nSuch a balanced property of the feature spaces from SSL is intriguing and provides a new perspective to understand the behavior of SSL methods. We dig deeper into its benefits via a systematic study. In particular, since a pre-trained representation model is often used as initialization for downstream tasks (He et al., 2020; Newell & Deng, 2020; Hénaff et al., 2019), we evaluate and compare the generalization ability of the models that produce feature spaces of different balanced levels (or ‘balancedness’).
We find that a more balanced model tends to generalize better across a variety of settings, including the out-of-distribution recognition as well as the cross-domain and cross-task applications. These studies imply that feature space balancedness is an important but often neglected factor for learning high-quality representations.\nInspired by the above insights, we propose a new representation learning method, the k-positive contrastive learning, which inherits the strength of contrastive learning in learning balanced feature spaces and meanwhile improves the feature spaces’ discriminative capability. Specifically, different from the contrastive learning methods lacking semantic discriminativeness, the proposed k-positive contrastive method leverages the available instance semantic labels by taking k instances with the same label as the anchor instance to embed semantics into the contrastive loss. As such, it can learn representations with desirable balancedness and discriminativeness (Figure 1). Extensive experiments and analyses clearly demonstrate its superiority over the supervised learning and latest contrastive learning methods (He et al., 2020) for various recognition tasks, including visual recognition in both the long-tailed setting (e.g., ImageNet-LT, iNaturalist) and the balanced setting.\nThis work makes the following important observations and contributions. (1) We present the first systematic studies on the performance of self-supervised contrastive learning on imbalanced datasets, which are helpful for understanding the merits and limitations of SSL in practice. (2) Our studies reveal an intriguing property of the model trained by contrastive learning—the model can robustly learn balanced feature spaces—that has never been discussed before. (3) Our empirical analysis demonstrates that learning balanced feature spaces benefits the generalization of representation models and offers a new perspective for understanding deep model generalizability. 
(4) We develop a new method to explicitly pursue balanced feature spaces for representation learning, and it outperforms the popular methods based on the cross-entropy and contrastive losses. We believe our findings and the novel k-positive contrastive method are inspiring for future research on representation learning." }, { "heading": "2 RELATED WORKS", "text": "Self-supervised learning is a form of unsupervised learning. Recently there has been a surge of self-supervised data representation learning methods developed to alleviate the demand for manual annotations by mining free supervision information through specifically designed loss functions and pretext tasks. The contrastive loss measures the similarities of sample pairs in a feature space and is at the core of several recent SSL methods (Chen et al., 2020a;b; He et al., 2020; Chen et al., 2020c). Adversarial losses that measure the distribution difference are also exploited for self-supervised representation learning (Donahue et al., 2016; Doersch & Zisserman, 2017). A wide range of pretext tasks have been developed, including image inpainting (Jenni & Favaro, 2018; Pathak et al., 2016), image colorization (Larsson et al., 2016; 2017), context prediction (Doersch et al., 2015), jigsaw puzzles (Carlucci et al., 2019; Noroozi & Favaro, 2016; Wei et al., 2019), and rotation prediction (Gidaris et al., 2018). Though very successful, the behavior of SSL largely remains a mystery. Recently, Wang & Isola (2020) analyze contrastive learning from the perspective of uniformity and alignment of learned representations. However, investigations on the behavior of contrastive learning on imbalanced datasets are still absent. We present the first study on this problem, and our investigation methodology is also applicable to other SSL methods.\nIn practice, the visual data usually follow a long-tailed distribution (Zipf, 1999; Spain & Perona, 2007), challenging supervised learning methods. 
Due to the imbalance in the number of training instances for different classes, conventional methods tend to perform much more poorly on instance-rare classes than on instance-rich ones. To alleviate this performance bias, existing approaches either re-balance the data distribution through sampling (Chawla et al., 2002; Han et al., 2005; Shen et al., 2016; Mahajan et al., 2018) or reweight the loss for each class (Cui et al., 2019; Khan et al., 2017; Cao et al., 2019; Khan et al., 2019). Kang et al. (2020) first propose to decouple representation learning from classifier learning to boost performance, and demonstrate that learning good feature spaces is crucial for long-tailed recognition. Along this direction, SSP (Yang & Xu, 2020) is among the first methods that introduce SSL pretraining into learning long-tailed recognition models. More specifically, instead of directly training a randomly initialized model from scratch as conventional supervised learning methods do, SSP uses a model pretrained with SSL on the same dataset for initialization, which is observed to be able to alleviate the label bias issue in imbalanced datasets and boost long-tailed recognition performance.
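As an illustration of the loss-reweighting family of approaches mentioned above, a minimal sketch below weights the per-sample cross-entropy by inverse class frequency; this is a generic baseline for exposition, not the exact scheme of Cui et al. (2019) or the other cited methods:

```python
import math

def inverse_frequency_weights(class_counts):
    """Per-class weights proportional to 1/n_j, normalized so the mean weight is 1."""
    inv = [1.0 / n for n in class_counts]
    scale = len(inv) / sum(inv)
    return [w * scale for w in inv]

def reweighted_ce(p_true, labels, weights):
    """Weighted cross-entropy: each sample's -log p_y is scaled by the weight
    of its ground-truth class, so instance-rare classes contribute more."""
    total = sum(weights[y] * -math.log(p) for p, y in zip(p_true, labels))
    return total / len(labels)

# Long-tailed counts: class 0 is instance-rich, class 2 is instance-rare.
w = inverse_frequency_weights([1000, 100, 10])
print(w[2] / w[0])  # the rare class is weighted 100x more than the head class
```

Standard deep learning libraries typically expose the same effect through a per-class weight argument on their cross-entropy loss implementations.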
}, { "heading": "3 BALANCED FEATURE SPACES FROM CONTRASTIVE LEARNING", "text": "In this section, we systematically study the performance of representation models trained by SSL from a collection of training datasets with varying instance number distributions, in contrast with the models learned by supervised learning methods, to explore how SSL performs when the training datasets are not artificially balanced. Furthermore, we investigate the generalization performance of these learned representation models under multiple settings, in order to explore the relationship between the representation model’s generalizability and the property of its learned feature space.\nNotations We define the notations used in this paper. Representation learning aims to obtain a representation model fθ that maps a sample xi into a feature space V such that its corresponding representation vi ∈ V encapsulates desired features for target applications. Let Drep-train = {xi, yi}, i = 1, . . . , N be the dataset for training the representation model, where yi is the class label for sample xi. Let C denote the number of total classes and nj denote the number of instances within class j. We use {q1, . . . , qC} with qj = nj/N to denote the discrete instance distribution over the C classes. An imbalanced dataset has significant difference in the class instance numbers, e.g., q1 qC . We use a multi-layer convolutional neural network fθ(·) : xi 7→ vi to implement\nthe representation model. The final classification prediction ŷ is given by a linear classifier ŷ = argmax[W>v + b], where W denotes the classifier weight matrix and b denotes the bias term." }, { "heading": "3.1 METHODOLOGY OF OUR STUDY", "text": "Representation learning methods Various loss functions have been developed for learning the representation model fθ on the training dataset. 
Among them, the most popular one is the supervised cross-entropy (CE) loss:\nL_{CE} = \frac{1}{N} \sum_{i=1}^{N} -\log p_{y_i}, \qquad (1)\nwhere p_{y_i} = \mathrm{softmax}(W_{y_i}^{\top} v_i + b) is the normalized probability prediction of sample i belonging to its ground truth class yi. Using the semantic labels directly as the supervision signal (yi in Equation 1), the representation model trained by the CE loss can have strong semantic discrimination ability, but its generated feature space is easily biased by the imbalance of the training instance distribution—if some classes have significantly more training instances than the others, their data representations will occupy a dominant portion of the feature space (Figure 1) and get higher classification accuracy than the instance-rare classes (Kang et al., 2020; Wang et al., 2017).\nDifferent from the supervised learning ones, self-supervised learning methods adopt semantic-free loss functions to learn representations from unlabeled data (He et al., 2020; Gidaris et al., 2018). For example, the contrastive loss¹ (CL) (Oord et al., 2018) learns representations via maximizing the instance-wise discriminativeness:\nL_{CL} = \frac{1}{N} \sum_{i=1}^{N} -\log \frac{\exp(v_i \cdot v_i^{+}/\tau)}{\exp(v_i \cdot v_i^{+}/\tau) + \sum_{v_i^{-} \in V^{-}} \exp(v_i \cdot v_i^{-}/\tau)}, \qquad (2)\nwhere τ is a temperature hyper-parameter, v_i^+ is a positive sample for the anchor instance i (typically produced by data augmentation), and v_i^- ∈ V^- is a negative sample randomly drawn from the training samples excluding instance i. This contrastive loss encourages the feature representations from positive pairs to be similar, while pushing features from the sampled negative pairs apart.\nWe take the two loss functions (LCE and LCL) as representatives to study how the representation models (and the corresponding feature spaces) trained with supervised/self-supervised methods are affected by the training instance distribution {q1, . . . , qC}.\nBalancedness of feature spaces Since semantic labels are not involved in the contrastive loss (Equation 2), we hypothesize it may lead to representation models yielding feature spaces that are less biased by the imbalance of the training dataset, compared with the ones from the supervised loss (Equation 1). To verify this, we first introduce a metric to characterize such an “unbiased” or “balanced” property of a feature space. A feature space V is balanced if the representations {vi} from different classes within it have similar degrees of linear separability. As the linear separability degree of the representations is usually evaluated by the accuracy of a linear classifier over them (Vapnik, 2013), we follow this criterion to develop the balancedness metric. Specifically, let a1, . . . , aC denote the classification accuracies of a linear classifier (W, b) over the representations {vi} ⊂ V from the C classes. We take the following uniformity of these accuracies as the balancedness of the feature space V:\n\beta(V) \triangleq \frac{1}{C^2} \sum_{i,j}^{C} \exp\left(-\frac{|a_i - a_j|^2}{\sigma}\right), \quad \text{where } a_j = \frac{\#\{v_i \mid \hat{y}_i = j, y_i = j, v_i \in V\}}{\#\{v_i \mid y_i = j, v_i \in V\}}. \qquad (3)\nHere σ is a fixed scaling parameter. This metric achieves its maximum when all the class-wise accuracies are equal, i.e., there is no separability bias of the learned representations toward any class. Note that this metric is developed to provide a quantitative measure of the balancedness of a feature space, but it has certain limitations, e.g., it can be easily hacked. We leave developing a more rigorous metric that can better characterize balanced feature spaces as future work.\n¹The term contrastive loss has been used to refer to various loss functions over positive and negative samples. This work focuses on the specific form in Equation 2 that is widely used in modern SSL methods.\nExperimental protocol We adopt a multi-stage protocol for learning and evaluating the feature spaces. 
(1) Representation learning: pre-train the representation model fθ on the provided training set Drep-train using the above training losses LCE and LCL; (2) Classifier learning: train a linear classifier (W, b) on top of fθ with θ fixed, using another training dataset Dtrain² and the supervised CE loss; (3) Representation evaluation: evaluate the classification accuracy of the learned classifier on the test dataset Dtest with the representations from fθ, and compute the above balancedness β(V). To thoroughly investigate the sensitivity of different representation learning methods to the imbalance level of training datasets, we construct six datasets from the long-tailed benchmark ImageNet-LT (Liu et al., 2019) (DLT) by varying its instance distribution {q1, . . . , qC} from a long-tailed one to a uniform one gradually, while keeping the total instance number similar. The generated datasets, denoted as DLT0, . . . , DLT8, DLT (which are increasingly more imbalanced), are used as Drep-train for representation learning in the following experiments. See the appendix for their details." }, { "heading": "3.2 CONTRASTIVE LOSS HELPS LEARN BALANCED FEATURE SPACES", "text": "We first investigate the classification performance of the representation models trained with the CE and CL losses on the above six datasets DLT0, . . . , DLT8, DLT that are increasingly more imbalanced. Since linear classifiers are easily biased by a skewed training dataset distribution (Kang et al., 2020), it is necessary to eliminate the imbalance of the evaluation datasets for reliable representation evaluation. 
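The balancedness score β(V) from step (3) of the protocol above (Equation 3) can be sketched as follows; the value of the scaling parameter σ is an assumption for illustration, since the text only states that it is a fixed constant:

```python
import numpy as np

def balancedness(per_class_acc, sigma=0.5):
    """Balancedness beta(V) of Equation 3: the mean of exp(-|a_i - a_j|^2 / sigma)
    over all ordered class pairs (i, j). Equals 1.0 when all class-wise
    accuracies are identical, and decreases as they spread apart."""
    a = np.asarray(per_class_acc, dtype=float)
    diff = a[:, None] - a[None, :]  # C x C matrix of pairwise accuracy gaps a_i - a_j
    return float(np.exp(-(np.abs(diff) ** 2) / sigma).mean())

# A perfectly uniform set of class accuracies gives the maximum score ...
print(balancedness([0.7, 0.7, 0.7]))  # -> 1.0
# ... while skewed per-class accuracies (an imbalanced feature space) score lower.
print(balancedness([0.95, 0.60, 0.20]))
```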
Thus, we use the (balanced) training and test sets of ImageNet as Dtrain and Dtest to learn classifiers and evaluate their classification accuracy, following the above protocol.\nThe results are summarized in Figure 2, from which we make an important observation: compared with the supervised cross-entropy loss, the model trained with the unsupervised contrastive loss generates a more balanced feature space, even in the presence of a highly imbalanced training instance distribution. As shown in Figure 2 (left), the classification accuracy of representation models learned with the CE loss drops quickly when the dataset becomes more imbalanced—the quality of representations from these models is very sensitive to the imbalance of training datasets. In contrast, the classification accuracy of CL-trained models remains stable even when the training dataset transitions to a heavily long-tailed one. Such surprising performance robustness to imbalance of the training datasets implies that using contrastive learning can consistently learn balanced feature spaces. To see this, we also visualize the balancedness scores (Equation 3) of the learned feature spaces from CE and CL in Figure 2 (right). Even when the training set is heavily long-tailed, the feature spaces learned with the CL loss are as highly balanced as the ones learned from a uniform training distribution, while the balancedness score of the feature spaces from the CE loss is lower and drops quickly. Such a balanced feature space offered by the CL loss is much desired for representation learning in practice, where the training instance distribution is usually long-tailed. Certainly, using an unsupervised loss will sacrifice semantic discriminativeness of the representations, leading to the accuracy gap between the CE and CL-trained models."
}, { "heading": "3.3 MORE BALANCED REPRESENTATION MODELS GENERALIZE BETTER", "text": "The above studies reveal that the representation models trained with the contrastive loss can produce more balanced feature spaces. A natural question is what are the benefits from a balanced model for recognition? (Note that Dtrain and Drep-train can be the same dataset, as in recent SSL works, e.g., He et al. (2020).) Since a pre-trained representation model is often used to facilitate downstream tasks (He et al., 2020; Newell & Deng, 2020; Hénaff et al., 2019), we here conduct extensive experiments to study its potential benefits on model generalization performance under the following settings.\nOut-of-distribution generalization We first study the relationship between the balancedness of representation models and their generalizability for recognizing new classes. To thoroughly evaluate performance of representation models with different balancedness, we evenly divide the 1,000 classes into two splits (500 vs. 500 classes) on ImageNet, referred to as the source and target split respectively. We use the subsets (corresponding to the source class split) of the above DLT0, . . . ,DLT8,DLT datasets to construct six different Drep-train for training the representation model fθ. To obtain models with different balancedness, we use the CE loss for training, since the above studies reveal that using the CL loss will always produce models with similar balancedness (Figure 2). We use the subsets (corresponding to the target class split) of the training and test sets of ImageNet as Dtrain and Dtest for classifier learning and evaluation, with the representation model fixed. The testing performance of the models with different balancedness on the target classes is presented in Figure 3. It is observed that as the source dataset becomes increasingly more imbalanced (from DLT0 to DLT) and the corresponding models become more imbalanced, their generalization performance degrades correspondingly. 
Such a positive correlation between the balancedness of the models and testing accuracy on the target classes clearly demonstrates that more balanced representation models tend to generalize better for recognizing unseen classes. More details and results about the out-of-distribution generalization studies are deferred to the appendix.\nCross-domain and cross-task generalization We then explore whether learning balanced representation models is able to benefit the model’s generalizability to new domains and tasks. We use ImageNet-LT as Drep-train to train the models with the CE and CL losses, obtaining imbalanced and balanced representation models respectively. For the cross-domain setting, we train a linear classifier on the Places365 dataset (Zhou et al., 2017) with the representation model fixed. From the results in Table 1, it can be clearly observed that the balanced representation model (from CL) surpasses the less balanced one (from CE) significantly, in terms of the top-1 accuracy (by 2.74%). For the cross-task setting, we take train/test splits of the PASCAL VOC (Everingham et al., 2010) and COCO (Lin et al., 2014) datasets as Dtrain/Dtest for evaluating detection performance. The results are given in Table 1. Again, the balanced model (from the CL loss) outperforms the less balanced one (from the CE loss) significantly (up to 1.64%). In comparison, the improvement from the CL-trained model over the CE-trained model is moderate (around 0.3%) when using the full ImageNet for training, as CE can learn a relatively balanced feature space from a balanced dataset. This clearly shows that the generalization performance boost for the cross-domain and cross-task settings brought by CL-trained models does not simply stem from using self-supervised pre-training, but indeed comes from learning more balanced feature spaces." 
}, { "heading": "4 LEARNING BALANCED FEATURE SPACES FOR RECOGNITION", "text": "The above studies demonstrate that the representation models trained with the contrastive loss can generate balanced feature spaces showing strong generalizability. Here we explore how to effectively leverage these findings in practice. We introduce a new method that inherits the strength of the contrastive loss in learning balanced feature spaces and enhances the feature spaces’ semantic discrimination capability simultaneously. We thoroughly study its superiority via two application cases, i.e., long-tailed recognition and pre-training representation models for downstream tasks." }, { "heading": "4.1 K-POSITIVE CONTRASTIVE LOSS", "text": "Though balanced, the feature spaces from contrastive learning have limited capability of semantic discrimination, as shown in Figure 2 (left). This is because the contrastive loss blindly encourages instance-level discriminativeness. Any two instances, even if they are from the same class, are forced to be apart from each other in the learned feature space. To embed semantic discriminativeness into the representations while maintaining the desired balancedness, we develop a new method to leverage the provided semantic labels to adaptively compute the instance contrastive loss.\nConcretely, given an anchor training instance xi with its semantic label yi, our proposed method draws k instances from the same class to form the positive sample set V+i,k, instead of only using its augmentation as in Equation 2. Thus, it gives a new loss called k-positive contrastive loss (KCL):\nLKCL = (1 / (N(k + 1))) ∑_{i=1}^{N} ∑_{v+j ∈ {ṽi} ∪ V+i,k} − log [ exp(vi · v+j / τ) / ( exp(vi · ṽi / τ) + ∑_{vj ∈ Vi} exp(vi · vj / τ) ) ] , (4)\nwhere ṽi is generated by augmenting vi, Vi is the current batch of examples excluding vi, and V+i,k ⊂ Vi is a positive set containing k instances randomly drawn from the same class as vi. 
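For concreteness, Equation 4 (the KCL loss) can be sketched in NumPy as below. This is a minimal, unbatched sketch rather than the authors' implementation; the function name `kcl_loss` and the in-batch random sampling of the k positives are illustrative assumptions.

```python
import numpy as np

def kcl_loss(v, v_aug, labels, k, tau=0.1, rng=None):
    """Sketch of the k-positive contrastive (KCL) loss of Eq. (4).

    v, v_aug : (N, d) L2-normalized embeddings of a batch and its augmentations.
    labels   : (N,) integer class labels.
    k        : number of same-class positives drawn per anchor.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N = v.shape[0]
    sim = v @ v.T / tau                        # pairwise v_i . v_j / tau
    sim_aug = np.sum(v * v_aug, axis=1) / tau  # v_i . v~_i / tau
    loss = 0.0
    for i in range(N):
        others = np.delete(np.arange(N), i)    # V_i: the batch without v_i
        denom = np.exp(sim_aug[i]) + np.exp(sim[i, others]).sum()
        same = others[labels[others] == labels[i]]
        n_pos = min(k, len(same))
        pos = rng.choice(same, size=n_pos, replace=False) if n_pos else np.array([], dtype=int)
        # positives: the augmentation v~_i plus the k same-class instances V^+_{i,k}
        pos_logits = np.concatenate(([sim_aug[i]], sim[i, pos]))
        loss += (np.log(denom) - pos_logits).sum()
    return loss / (N * (k + 1))
```

Each anchor thus contributes exactly k + 1 positive terms (its augmentation plus k same-class samples), which is the property the paper credits for keeping the learned feature space balanced.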
The proposed KCL loss purposely keeps the number of positive instances equal, which is crucial for balancing the learned feature spaces. It brings two benefits. First, it helps learn representations with stronger discriminative ability as it leverages the label information, as in supervised learning. Second, it uses the same number of instances (i.e., k) for all the classes in positive pair construction, which further balances the learned feature space. Note that our proposed KCL differs from supervised contrastive learning (Khosla et al., 2020), which leverages all the instances from the same class to construct the positive pairs and thus cannot avoid the dominance of instance-rich classes in the representation learning. This is also evidenced by our following experiments on long-tailed recognition. In the following experiments, we choose k = 6 via validation and use ResNet50 as the backbone. Other hyper-parameter choices and implementation details are given in the appendix." }, { "heading": "4.2 LONG-TAILED RECOGNITION", "text": "KCL provides feature spaces with desirable balancedness and semantic discriminativeness, which makes it a natural fit for addressing the challenges of long-tailed recognition, i.e., severe performance bias to the instance-rich classes and poor generalization to the instance-rare classes (Mahajan et al., 2018; Zhong et al., 2019). Here we implement and evaluate KCL for long-tailed recognition, following the two-stage training strategy from Kang et al. (2020): 1) train the representation model with the KCL loss; 2) learn a linear classifier with cross-entropy loss and class-balanced sampling.\nBaselines Besides well-established state-of-the-art methods, we consider the following three kinds of baselines for justifying the advantages of KCL. (1) Classifier balancing methods, i.e., τ-norm and cRT (Kang et al., 2020), that re-train classifiers with class-balanced sampling like KCL but learn the representation models by supervised cross-entropy loss. 
Comparison with them helps understand the effectiveness of learning balanced features in long-tailed recognition. (2) Methods that train the representation model and classifier jointly with cross-entropy loss (SL) and various data re-sampling strategies, including instance-balanced (SL-i), class-balanced (SL-c), progressively-balanced (SL-p) and square-root re-sampling (SL-s) (Kang et al., 2020). Comparison with them will show the advantages of KCL over these data-enriching strategies in feature space balancing. (3) A full-positive variant of KCL, named full-positive contrastive learning (FCL), that uses all the available same-class samples in the current batch to construct positive pairs for computing the contrastive loss, similar to supervised contrastive learning (Khosla et al., 2020). Comparing KCL with FCL will show the benefits of keeping the number of positive samples equal for all the anchor instances in KCL.\nResults We evaluate KCL and compare it with the above strong baselines on two large-scale benchmark datasets, ImageNet-LT (Liu et al., 2019) and iNaturalist 2018 (iNaturalist, 2018). For comprehensive evaluation, following (Liu et al., 2019), we split the classes of ImageNet-LT into many-shot (>100 images), medium-shot (20∼100 images) and few-shot (<20 images) groups. The results are summarized in Tables 2 and 3 respectively, along with the balancedness of these methods on ImageNet-LT in Figure 4. We make the following observations.\nMore balanced feature spaces give better performance. 
We first compare KCL with cRT and τ-norm, the latest state-of-the-art methods with feature spaces learned by supervised cross-entropy loss and thus less balanced, as demonstrated in Sec. 3.\n[Table 2: ImageNet-LT results]\n[Table 3: iNaturalist 2018 results]\n[Figure 4: balancedness of the feature spaces learned on ImageNet-LT by SL with different re-sampling strategies (instance-balanced, class-balanced, progressively-balanced, square-root), FCL (k=full), KCL (k=6), and MoCo, together with per-class accuracy (%).]\nCompared with them, KCL improves the overall accuracy by a large margin (4.2% on ImageNet-LT and 3% on iNaturalist), demonstrating the importance of learning more balanced feature spaces for long-tailed recognition.\nKCL is more effective at learning balanced and discriminative feature spaces. Data re-sampling is widely used as a straightforward approach to alleviate performance bias for long-tailed recognition (Kang et al., 2020). We compare the feature space balancedness of KCL and SL methods with different data re-sampling strategies on ImageNet-LT in Figure 4. Clearly, data re-sampling cannot improve the balancedness of the feature space as effectively as KCL. Besides data re-sampling, Figure 4 also shows the balancedness of the feature spaces learned by the latest contrastive learning method MoCo (He et al., 2020) on ImageNet-LT. MoCo can balance the feature space but has lower accuracy, due to the lack of semantic discriminativeness in the learned feature space. KCL performs the best, demonstrating its effectiveness at learning both balanced and discriminative feature spaces.\nEqualizing the number of positive instances in KCL is important. To further justify the design of the KCL loss in keeping the number of positive instances equal, we compare it with its variant FCL. From Tables 2, 3 and Figure 4, though FCL outperforms other baselines, its performance is inferior to KCL, in terms of both the overall accuracy and the balancedness of the learned feature spaces. 
Equalizing the number of positive instances as in KCL is crucial for learning balanced feature spaces and improving recognition performance." }, { "heading": "4.3 PRE-TRAINING REPRESENTATION MODELS FOR DOWNSTREAM TASKS", "text": "The effectiveness of KCL is not limited to the cases where training datasets are imbalanced. In this section, we study KCL as a general representation learning method, i.e., we apply KCL for pre-training a representation model on balanced datasets, which is later fine-tuned for downstream tasks, including out-of-distribution (OOD) recognition and detection.\nOut-of-distribution Generalization Similar to Sec. 3.3, we evenly divide the 1000 classes in ImageNet into two splits, use one split to learn the representation backbone and the other one to learn a linear classifier with the backbone fixed. We adopt two different splitting strategies. Split-overlap (split with semantic overlap) allows the classes within the two splits to share the same super class (e.g., dog and wolf from canidae are put into different splits) in the ImageNet ontology. As such, though the target classes are all novel to the representation model, some of their attributes have been seen by the model before from the source classes. In contrast, Split-independent (split without semantic overlap) strictly avoids distributing classes from the same super class into different splits. Split-independent presents a more challenging case for the model’s generalization ability as all the target classes (and attributes) to recognize are novel.\nThe generalization performance comparison of the representation models with different methods is given in Table 4. When the source classes and target classes share similar semantics (on split-overlap), the CE-trained model surpasses the CL-trained model on both the source and target classes. 
But when looking into the generalization gap (i.e., the difference between the source and target accuracy), the CE-trained model suffers a larger generalization gap than the CL-trained model (10.5 vs. 1.8). When there is no semantic overlap between source and target (on split-independent), the CL model outperforms the CE model on the target classes by 4.3% with a much smaller generalization gap (13.4 vs. 32.5). By comparing the “full” performance from split-overlap to split-independent, one can observe that the CL loss performs consistently well (58.3 and 58.2), but CE drops by as much as 5%. This implies that CL is robust to the training class distribution used for representation learning, while CE is extremely sensitive to it. These results clearly demonstrate the consistent superiority of balanced representation learning in terms of generalization for various training dataset distributions. Notably, our proposed KCL loss surpasses both the CE and CL losses in all four settings by a large margin (more than 3 points). These results clearly demonstrate that KCL is able to learn a balanced and discriminative feature space, and balancedness is a general property that benefits both balanced and imbalanced datasets.\nCross-domain and cross-task generalization In this part, we first pre-train a model on ImageNet and then fine-tune it for downstream object detection tasks (including PASCAL VOC and COCO). Note that we aim to study the generalizability of KCL as a representation learning method, rather than aiming at state-of-the-art performance. Hence we compare it with the vanilla supervised cross-entropy loss (SL) and MoCo (which KCL is built on) (He et al., 2020). The results are summarized in Table 5. We also evaluate the discriminativeness of the learned representations (the “repr” in the table) from their classification accuracy by learning a linear classifier on the pre-training datasets. Clearly, KCL outperforms SL and MoCo for both downstream tasks. 
This is because KCL learns more balanced feature spaces than SL with similar discriminativeness, and learns more discriminative features than MoCo. For more results, please refer to the appendix." }, { "heading": "5 CONCLUSIONS", "text": "This work piloted studies on the performance of self-supervised learning methods for imbalanced datasets, and made several intriguing findings. At the heart of these findings is the balanced feature space, which is identified to be an inherent property of the representations learned by contrastive learning and brings stronger generalizability. It provides a new perspective for understanding the behavior of contrastive learning. This work further developed a new representation learning method to leverage the benefits of balanced feature spaces. We believe the findings and method developed here are inspiring for future research on representation learning. However, theoretical understanding of balanced feature spaces is not mature yet and is worthy of future exploration." }, { "heading": "ACKNOWLEDGEMENT", "text": "We would like to express our deepest gratitude to Saining Xie for his comments and suggestions throughout this project and the writing of the paper, and to Yu Sun for his insightful discussion at the beginning of this project." }, { "heading": "B DATASET CONSTRUCTION", "text": "Datasets for studying balancedness of feature spaces We here explain the details of the construction of the series of datasets used in our study in Sec. 3.\nIn particular, we take the standard long-tailed training set from ImageNet-LT (Liu et al., 2019), whose instances follow the Pareto distribution, as the base dataset, denoted as DLT. We vary its training instance distribution {q1, . . . , qC} gradually to obtain different datasets as follows,\nnj = ⌊ ND × qj^α / ∑k qk^α + 1/2 ⌋ , (5)\nwhere ND is the total number of training instances in DLT, α ∈ [0, 1] controls the dataset balancedness. 
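The per-class instance counts of Equation 5 above can be sketched as follows (the function name is an illustrative assumption):

```python
import numpy as np

def instance_counts(q, n_total, alpha):
    """Sketch of Eq. (5): per-class instance counts for a dataset whose
    imbalance is interpolated by alpha in [0, 1].

    q       : (C,) base long-tailed instance distribution q_1..q_C.
    n_total : total number of training instances N_D.
    alpha   : 0 yields a fully balanced dataset, 1 recovers the long-tailed one.
    """
    q = np.asarray(q, dtype=float)
    w = q ** alpha                 # q_j^alpha
    # floor(N_D * q_j^alpha / sum_k q_k^alpha + 1/2), i.e. rounding to nearest
    return np.floor(n_total * w / w.sum() + 0.5).astype(int)
```

With α = 0 every class receives the same count, while α = 1 recovers the base long-tailed distribution.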
When α = 0, it corresponds to a fully balanced dataset; when α = 1, it becomes a heavily long-tailed one. In total, we generated 6 datasets with α ∈ {0, 0.2, 0.4, 0.6, 0.8, 1.0}, denoted as DLT0, . . . ,DLT8,DLT respectively, as different examples of Drep-train for representation learning. The detailed statistics and visualization of the datasets DLT0, . . . ,DLT8,DLT are summarized in Table 8 and Fig. 6.\nDatasets for generalizability studies We carefully choose the proper datasets to construct the Dtrain and Dtest for evaluating the generalizability of the representation models under multiple settings. The choices are summarized in Table 9." }, { "heading": "C ADDITIONAL RESULTS ON MODEL GENERALIZATION PERFORMANCE", "text": "Cross-domain and Cross-task Generalization We evaluate the generalization ability of the representation models trained on the balanced full ImageNet dataset, for cross-domain and cross-task applications. The results are given in Table 10 (cross-domain) and Tables 11 and 12 (for detection) respectively. From Table 10, when the training datasets are balanced, the models trained with CL and CE achieve comparable performance. However, when the training datasets are not balanced, the CL model significantly outperforms the CE model (Table 1). This demonstrates that the CL loss can consistently produce balanced representation models and the model generalization performance can indeed benefit from being balanced.\nA similar conclusion can be drawn for the cross-task generalization. From Tables 11 and 12, when training the model on the full ImageNet dataset, using the self-supervised CL loss produces a model performing slightly better than using the supervised CE loss. On PASCAL VOC, the performance advantage is as marginal as 0.02% in AP50. In contrast, when training the model on the ImageNet-LT dataset, using the CL loss can boost the model performance over using the CE loss much more significantly. The improvement is as large as 1.64% in AP50. 
Thus the performance benefit on the generalization to detection brought by CL does not simply stem from using self-supervised pretraining, but indeed comes from learning more balanced feature spaces.\nSimilar to OOD generalization, the model learned with KCL gives better downstream performance on both VOC and COCO, which means that enforcing feature-space balancedness with KCL indeed helps learn better representations." } ]
2021
null
SP:978b2e085614592b4d8503ea2cc17ff5f0510539
[ "Proposes contrastive learning method for conditional text-generation. Here we maximize similarity (of representations) between source and target sequences (positive) while minimizing similarity with false targets (negative). Additional positives and negatives are created in the sequence representation space by adding perturbations to decoder (output) hidden states to minimize/maximize conditional likelihood p(y|x). It is shown this works a lot better than the naive contrastive approach of sampling random non-target sequences.", "This paper proposes to add contrastive learning to the sequence-to-sequence generation problem. More specifically, the authors apply a contrastive loss on the globally pooled hidden representation of the generated hidden states. The key novelty is to apply adversarial gradients to obtain both hard negative and hard positive examples. The proposed method can improve a state-of-art pretrained transformer model (T5) on 3 tasks: machine translation (WMT16 En-Ro), abstractive summarization (XSum), and question generation (SQuAD)." ]
Recently, sequence-to-sequence (seq2seq) models with the Transformer architecture have achieved remarkable performance on various conditional text generation tasks, such as machine translation. However, most of them are trained with teacher forcing with the ground truth label given at each time step, without being exposed to incorrectly generated tokens during training, which hurts their generalization to unseen inputs, which is known as the “exposure bias” problem. In this work, we propose to mitigate the conditional text generation problem by contrasting positive pairs with negative pairs, such that the model is exposed to various valid or incorrect perturbations of the inputs, for improved generalization. However, training the model with a naïve contrastive learning framework using random non-target sequences as negative examples is suboptimal, since they are easily distinguishable from the correct output, especially so with models pretrained with large text corpora. Also, generating positive examples requires domain-specific augmentation heuristics which may not generalize over diverse domains. To tackle this problem, we propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models. Specifically, we generate negative examples by adding small perturbations to the input sequence to minimize its conditional likelihood, and positive examples by adding large perturbations while enforcing it to have a high conditional likelihood. Such “hard” positive and negative pairs generated using our method guide the model to better distinguish correct outputs from incorrect ones. We empirically show that our proposed method significantly improves the generalization of the seq2seq model on three text generation tasks — machine translation, text summarization, and question generation.
[ { "affiliations": [], "name": "Seanie Lee" }, { "affiliations": [], "name": "Dong Bok Lee" }, { "affiliations": [], "name": "Sung Ju Hwang" } ]
[ { "authors": [ "Armen Aghajanyan", "Akshat Shrivastava", "Anchit Gupta", "Naman Goyal", "Luke Zettlemoyer", "Sonal Gupta" ], "title": "Better fine-tuning by reducing representational collapse", "venue": "arXiv preprint arXiv:2008.03156,", "year": 2020 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Dzmitry Bahdanau", "Philemon Brakel", "Kelvin Xu", "Anirudh Goyal", "Ryan Lowe", "Joelle Pineau", "Aaron C. Courville", "Yoshua Bengio" ], "title": "An actor-critic algorithm for sequence prediction", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Satanjeev Banerjee", "Alon Lavie" ], "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization", "venue": null, "year": 2005 }, { "authors": [ "Samy Bengio", "Oriol Vinyals", "Navdeep Jaitly", "Noam Shazeer" ], "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Massimo Caccia", "Lucas Caccia", "William Fedus", "Hugo Larochelle", "Joelle Pineau", "Laurent Charlin" ], "title": "Language gans falling short", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Sumit Chopra", "Raia Hadsell", "Yann LeCun" ], "title": "Learning a similarity metric discriminatively, with application to face 
verification", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "year": 2005 }, { "authors": [ "Leshem Choshen", "Lior Fox", "Zohar Aizenbud", "Omri Abend" ], "title": "On the weaknesses of reinforcement learning for neural machine translation", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alexis Conneau", "Guillaume Lample" ], "title": "Cross-lingual language model pretraining", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xinya Du", "Claire Cardie" ], "title": "Harvesting paragraph-level question-answer pairs from wikipedia", "venue": "Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Ian J. 
Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Michael U Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics", "venue": "The journal of machine learning research,", "year": 2012 }, { "authors": [ "Jiaji Huang", "Yi Li", "Wei Ping", "Liang Huang" ], "title": "Large margin neural language model", "venue": "Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Tuo Zhao" ], "title": "Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization", "venue": null, "year": 2020 }, { "authors": [ "Dong Bok Lee", "Seanie Lee", "Woo Tae Jeong", "Donghwan Kim", "Sung Ju Hwang" ], "title": "Generating diverse and consistent QA pairs from contexts with information-maximizing hierarchical conditional vaes", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020,", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Veselin Stoyanov", "Luke Zettlemoyer" ], "title": "BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": "Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Chin-Yew Lin", "Eduard Hovy" ], "title": "Manual and automatic evaluation of summaries", "venue": "ACL Workshop on Automatic 
Summarization,", "year": 2002 }, { "authors": [ "Yang Liu", "Maosong Sun" ], "title": "Contrastive unsupervised word alignment with non-local features", "venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence,,", "year": 2015 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee" ], "title": "An efficient framework for learning sentence representations", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Junhua Mao", "Jonathan Huang", "Alexander Toshev", "Oana Camburu", "Alan L. Yuille", "Kevin Murphy" ], "title": "Generation and comprehension of unambiguous object descriptions", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Takeru Miyato", "Andrew M. Dai", "Ian J. Goodfellow" ], "title": "Adversarial training methods for semisupervised text classification", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": null, "year": 2010 }, { "authors": [ "Shashi Narayan", "Shay B Cohen", "Mirella Lapata" ], "title": "Don’t give me the details, just the summary! 
topic-aware convolutional neural networks for extreme summarization", "venue": "Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Nathan Ng", "Kyunghyun Cho", "Marzyeh Ghassemi" ], "title": "Ssmba: Self-supervised manifold based data augmentation for improving out-of-domain robustness", "venue": "Empirical Methods in Natural Language Processing,", "year": 2020 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Romain Paulus", "Caiming Xiong", "Richard Socher" ], "title": "A deep reinforced model for abstractive summarization", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matt Post" ], "title": "A call for clarity in reporting bleu scores", "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. 
Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Marc’Aurelio Ranzato", "Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks", "venue": "nternational Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Abigail See", "Peter J Liu", "Christopher D Manning" ], "title": "Get to the point: Summarization with pointergenerator networks", "venue": "Annual Meeting of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Shikhar Sharma", "Layla El Asri", "Hannes Schulz", "Jeremie Zumer" ], "title": "Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation", "venue": null, "year": 2017 }, { "authors": [ "Noam Shazeer", "Mitchell Stern" ], "title": "Adafactor: Adaptive learning rates with sublinear memory cost", "venue": "arXiv preprint arXiv:1804.04235,", "year": 2018 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in neural 
information processing systems,", "year": 2017 }, { "authors": [ "Ramakrishna Vedantam", "Samy Bengio", "Kevin Murphy", "Devi Parikh", "Gal Chechik" ], "title": "Contextaware captions from context-agnostic supervision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Kilian Q Weinberger", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: Stateof-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "Dongling Xiao", "Han Zhang", "Yukun Li", "Yu Sun", "Hao Tian", "Hua Wu", "Haifeng Wang" ], "title": "Ernie-gen: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation", "venue": null, "year": 2020 }, { "authors": [ "Zonghan Yang", "Yong Cheng", "Yang Liu", "Maosong Sun" ], "title": "Reducing word omission errors in neural machine translation: A contrastive learning approach", "venue": "Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Lantao Yu", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Seqgan: sequence generative adversarial nets with policy gradient", "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Jingqing Zhang", "Yao Zhao", "Mohammad Saleh", "Peter J Liu" ], "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "venue": 
null, "year": 2020 }, { "authors": [ "Yizhe Zhang", "Zhe Gan", "Kai Fan", "Zhi Chen", "Ricardo Henao", "Dinghan Shen", "Lawrence Carin" ], "title": "Adversarial feature matching for text generation", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yizhe Zhang", "Michel Galley", "Jianfeng Gao", "Zhe Gan", "Xiujun Li", "Chris Brockett", "Bill Dolan" ], "title": "Generating informative and diverse conversational responses via adversarial information maximization", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chen Zhu", "Yu Cheng", "Zhe Gan", "Siqi Sun", "Tom Goldstein", "Jingjing Liu" ], "title": "Freelb: Enhanced adversarial training for natural language understanding", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Raffel" ], "title": "Since the original test set is only accessible via the leader board of SQuAD1, we split the original validation set into our new validation and test set, following the conventions of question generation communities. Preprocessing For machine translation, we download the raw text2, not the tokenized text, and use the same T5-tokenizer", "venue": null, "year": 2020 }, { "authors": [ "brary (Wolf" ], "title": "2019)3 with Adafactor optimizer. We set the batch size 128 and follow the default setting of Adafactor optimizer to finetune the T5-small models. However, the number of negative examples from the batch is 16 or 32 (total batch size divided by the number of GPUs), because we split the batch into smaller batches and distribute them to each GPU machines. We use 8 GPUs for text summarization, and 4 GPUs for machine translation and question generation. 
The dimension", "venue": null, "year": 2019 }, { "authors": [ "Sharma" ], "title": "For BLEU score, we adopt the implementation by Post (2018)5", "venue": null, "year": 2017 }, { "authors": [ "Context: Sayyid Abul Ala" ], "title": "Maududi was an important early twentieth-century figure in the Islamic revival in India, and then after independence from Britain, in Pakistan. Trained as a lawyer he chose the profession of journalism, and wrote about contemporary issues and most importantly about Islam and Islamic law. Maududi founded the Jamaat-e-Islami party", "venue": null, "year": 1941 }, { "authors": [ "John Paul" ], "title": "CLAPS: How long did Warsaw remain the capital of the Polish-Lithuanian Commonwealth? Context: John Paul II’s visits to his native country in 1979 and 1983 brought support to the budding solidarity movement and encouraged the growing anti-communist fervor there", "venue": null, "year": 1979 } ]
[ { "heading": "1 INTRODUCTION", "text": "The sequence-to-sequence (seq2seq) models (Sutskever et al., 2014), which learn to map an arbitrary-length input sequence to another arbitrary-length output sequence, have successfully tackled a wide range of language generation tasks. Early seq2seq models used recurrent neural networks to encode and decode sequences, leveraging an attention mechanism (Bahdanau et al., 2015) that allows the decoder to attend to a specific token in the input sequence to capture long-term dependencies between the source and target sequences. Recently, the Transformer (Vaswani et al., 2017), an all-attention model that effectively captures long-term relationships between tokens in the input sequence as well as across input and output sequences, has become the de facto standard for most text generation tasks due to its impressive performance. Moreover, Transformer-based language models trained on large text corpora (Dong et al., 2019; Raffel et al., 2020; Lewis et al., 2020) have been shown to significantly improve model performance on text generation tasks.
However, a crucial limitation of seq2seq models is that they are mostly trained only with teacher forcing, where the ground truth is provided at each time step, so the model is never exposed to incorrectly generated tokens during training (Fig. 1-(a)), which hurts generalization. This problem is known as the “exposure bias” problem (Ranzato et al., 2016) and often results in the generation of low-quality texts on unseen inputs.
Several prior works tackle the problem, such as using reinforcement learning (RL) to maximize a non-differentiable reward (Bahdanau et al., 2017; Paulus et al., 2018).
∗Equal contribution
Another approach is to use RL or gumbel softmax (Jang et al., 2017) to match the distribution of generated sentences to that of the ground truth, in which case the reward is the discriminator output from a Generative Adversarial Network (GAN) (Zhang et al., 2018; 2017; Yu et al., 2017). Although the aforementioned approaches improve the performance of seq2seq models on text generation tasks, they either require a vast amount of effort in tuning hyperparameters or in stabilizing training.
In this work, we propose to mitigate the exposure bias problem with a simple yet effective approach, in which we contrast a positive pair of input and output sequences to negative pairs, to expose the model to various valid or incorrect sentences. Naïvely, we can construct negative pairs by simply using random non-target sequences from the batch (Chen et al., 2020). However, such a naïve construction yields meaningless negative examples that are already well-discriminated in the embedding space (Fig. 1-(b)), which we highlight as the reason why existing methods (Chen et al., 2020) require a large batch size. This is clearly shown in Fig. 2, where a large portion of positive-negative pairs can be easily discriminated without any training; this worsens as the batch size decreases, since a smaller batch reduces the chance of containing meaningfully difficult examples. Moreover, discriminating positive and naïve negative pairs becomes even easier for models pretrained on large text corpora.
To resolve this issue, we propose principled approaches to automatically generate negative and positive pairs for contrastive learning, which we refer to as Contrastive Learning with Adversarial Perturbation for Seq2seq learning (CLAPS).
Specifically, we generate a negative example by adding a small perturbation to the hidden representation of the target sequence, such that its conditional likelihood is minimized (denoted as the red circle in Fig. 1-(c)). Conversely, we construct an additional positive example (denoted as the green circle in Fig. 1-(c)) by adding a large amount of perturbation to the hidden representation of the target sequence, such that the perturbed sample is far away from the source sequence in the embedding space, while enforcing it to have high conditional likelihood by minimizing the Kullback-Leibler (KL) divergence between the original conditional distribution and the perturbed conditional distribution. This yields a negative example that is very close to the original representation of the target sequence in the embedding space but largely dissimilar in semantics, while the generated positive example is far away from the original input sequence but has the same semantics as the target sequence. This generates difficult examples that the model fails to correctly discriminate (Fig. 1-(c), Fig. 2), helping it learn with more meaningful pairs.
To verify the efficacy of our method, we empirically show that it significantly improves the performance of the seq2seq model on three conditional text generation tasks, namely machine translation, text summarization, and question generation.
Our contribution in this work is threefold:
• To mitigate the exposure bias problem, we propose a contrastive learning framework for conditional sequence generation, which contrasts a positive pair of source and target sentences to negative pairs in the latent embedding space, to expose the model to various valid or incorrect outputs.
• To tackle the ineffectiveness of the conventional approach for constructing negative and positive examples for contrastive learning, we propose a principled method to automatically generate negative and positive pairs that are more difficult, which allows the model to learn more meaningful representations.
• We show that our proposed method, CLAPS, significantly improves the performance of the seq2seq model on three different tasks: machine translation, text summarization, and question generation." }, { "heading": "2 RELATED WORK", "text": "Exposure Bias Several prior works tackle the exposure bias problem (Ranzato et al., 2016). Bengio et al. (2015) introduce scheduled sampling, where the model is initially guided with the true previous tokens but, as training goes on, uses the tokens generated by the seq2seq model as the conditional input for the next token. Paulus et al. (2018); Bahdanau et al. (2017) leverage RL to maximize non-differentiable rewards, which enables penalizing the model for incorrectly generated sentences. Other works (Zhang et al., 2017; 2018; Yu et al., 2017) train GANs to match the distribution of generated sequences to that of the ground truth. Since sampling tokens from the generator is not differentiable, they resort to RL or gumbel-softmax to train the networks in an end-to-end fashion. However, they require either a large amount of effort in tuning hyperparameters or in stabilizing training. Furthermore, Choshen et al. (2020) show that RL for machine translation does not optimize the expected reward, and that the performance gain is attributed to unrelated effects such as increasing the peakiness of the output distribution.
Moreover, Caccia et al. (2019) show that, by tuning the temperature parameter, language models trained with MLE can be made to outperform GAN-based text generation models.
Adversarial Perturbation Many existing works, such as Madry et al. (2018), address the robustness of neural networks to adversarial examples, which are generated by applying small perturbations to the input samples. While adversarial robustness has been mostly explored in image domains, Miyato et al. (2017) applied adversarial training to text domains. However, instead of targeting robustness to perturbed samples, they utilize the adversarial examples as augmented data and enforce consistency between the predictions on an original unlabeled example and on its perturbation, for semi-supervised learning. Recently, Zhu et al. (2019) and Jiang et al. (2020) leverage adversarial training to induce smoothness of text classifiers, to prevent overfitting to training samples. While they are relevant to ours, these methods do not have the notion of positive and negative examples as they do not consider contrastive learning, and they only target text classification. Moreover, they are computationally prohibitive since they use PGD for adversarial training, which requires iterative optimization for each individual sample. Recently, Aghajanyan et al. (2020) propose a simpler yet effective method based on Gaussian noise perturbation to regularize neural networks without expensive PGD steps, which is shown to outperform the methods of Zhu et al. (2019) and Jiang et al. (2020). Although our work is similar to these prior works in that we add perturbations to the text embeddings, note that we use the adversarially generated samples as negative examples in our contrastive learning framework rather than training the model to be robust to them.
Contrastive Learning Contrastive learning, which learns representations by contrasting positive pairs with negative pairs, has been widely used. Chopra et al.
(2005); Weinberger & Saul (2009); Schroff et al. (2015) leverage a triplet loss to separate positive examples from negative examples in metric learning. Chen et al. (2020) show that contrastive learning can boost the performance of self-supervised and semi-supervised learning in computer vision tasks. In natural language processing (NLP), contrastive learning has also been widely adopted. In Word2Vec (Mikolov et al., 2013), neighbouring words are predicted from context with noise-contrastive estimation (Gutmann & Hyvärinen, 2012). Beyond word representation, Logeswaran & Lee (2018) sample two contiguous sentences as a positive pair and sentences from other documents as negative pairs, and contrast the positive and negative pairs to learn sentence representations. Moreover, contrastive learning has been investigated in various NLP tasks — language modeling (Huang et al., 2018), unsupervised word alignment (Liu & Sun, 2015), caption generation (Mao et al., 2016; Vedantam et al., 2017), and machine translation (Yang et al., 2019)." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 BACKGROUND: CONDITIONAL TEXT GENERATION", "text": "The goal of conditional text generation with a seq2seq model is to generate an output text sequence y^{(i)} = (y^{(i)}_1, \ldots, y^{(i)}_T) with length T conditioned on the input text sequence x^{(i)} = (x^{(i)}_1, \ldots, x^{(i)}_L) with length L. A typical approach to conditional text generation is to leverage the encoder-decoder architecture to parameterize the conditional distribution. We maximize the conditional log likelihood \log p_\theta(y|x) for given N observations \{(x^{(i)}, y^{(i)})\}_{i=1}^{N} as follows:
\mathcal{L}_{\mathrm{MLE}}(\theta) = \sum_{i=1}^{N} \log p_\theta(y^{(i)} \mid x^{(i)})
p_\theta(y^{(i)}_1, \ldots, y^{(i)}_T \mid x^{(i)}) = \prod_{t=1}^{T} p_\theta(y^{(i)}_t \mid y^{(i)}_{<t}, x^{(i)})
p_\theta(y^{(i)}_t \mid y^{(i)}_{<t}, x^{(i)}) = \mathrm{softmax}(W h^{(i)}_t + b)
h^{(i)}_t = g(y^{(i)}_{t-1}, M^{(i)}; \theta), \quad M^{(i)} = f(x^{(i)}; \theta)   (1)
where f, g denote the encoder and the decoder, respectively, and M^{(i)} = [m^{(i)}_1 \cdots m^{(i)}_L] \in \mathbb{R}^{d \times L} is the concatenation of the hidden representations of the source tokens x^{(i)}." }, { "heading": "3.2 CONTRASTIVE LEARNING WITH ADVERSARIAL PERTURBATIONS FOR SEQ2SEQ", "text": "Since most seq2seq models are trained with teacher forcing, where the ground truth tokens are provided to maximize Eq. 1, they are never exposed to incorrectly generated tokens during training, which is known as the “exposure bias” problem. In order to tackle the problem, we propose a contrastive learning framework to expose the model to various valid or incorrect output sequences for a given input sentence. Following the contrastive learning framework (Chen et al., 2020), we can train the model to learn the representations of the ground truth sentence by contrasting the positive pairs with the negative pairs, where we select the negative pairs as random non-target output sequences from the same batch. As shown in Fig. 3-(a), we project the source and target text sequences onto the latent embedding space. Then we maximize the similarity between the pair of source and target sequences, while minimizing the similarity between the negative pairs as follows:
\mathcal{L}_{\mathrm{cont}}(\theta) = \sum_{i=1}^{N} \log \frac{\exp(\mathrm{sim}(z^{(i)}_x, z^{(i)}_y)/\tau)}{\sum_{z^{(j)}_y \in S} \exp(\mathrm{sim}(z^{(i)}_x, z^{(j)}_y)/\tau)}
z^{(i)}_x = \xi(M^{(i)}; \theta), \quad z^{(i)}_y = \xi(H^{(i)}; \theta)
\xi([v_1 \cdots v_T]; \theta) := \mathrm{AvgPool}([u_1 \cdots u_T]), \ \text{where } u_t = \mathrm{ReLU}(W^{(1)} v_t + b^{(1)})   (2)
where \xi denotes the composition of an affine transformation with the ReLU (Nair & Hinton, 2010) and average pooling, which computes a fixed-size representation of a sentence z \in \mathbb{R}^d, and H^{(i)} = [h^{(i)}_1 \cdots h^{(i)}_T] \in \mathbb{R}^{d \times T} is the concatenation of the decoder hidden states of the target sentence y^{(i)} across all the time steps.
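To make the objective concrete, the pooling ξ and the contrastive loss of Eq. 2 can be sketched as follows. This is a NumPy illustration under assumed shapes; all names and shapes here are ours, not the authors' released code, and gradients and mini-batching are omitted:

```python
import numpy as np

def xi(V, W1, b1):
    """Projection xi of Eq. 2: per-step ReLU affine map, then average pooling."""
    U = np.maximum(W1 @ V + b1[:, None], 0.0)  # (d, T)
    return U.mean(axis=1)                      # fixed-size sentence vector (d,)

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def l_cont(Ms, Hs, W1, b1, tau=0.1):
    """Eq. 2: contrast each matched (source, target) pair against the
    in-batch negatives; as written in Eq. 2, the denominator ranges over
    S = {z_y^(j) : j != i}."""
    zx = [xi(M, W1, b1) for M in Ms]  # pooled encoder states
    zy = [xi(H, W1, b1) for H in Hs]  # pooled decoder states
    total = 0.0
    for i in range(len(zx)):
        pos = np.exp(cosine_sim(zx[i], zy[i]) / tau)
        neg = sum(np.exp(cosine_sim(zx[i], zy[j]) / tau)
                  for j in range(len(zy)) if j != i)
        total += np.log(pos / neg)
    return total
```

Maximizing this quantity pulls each target embedding toward its paired source embedding while pushing it away from the pooled representations of the other targets in the batch.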
Furthermore, S = \{z^{(j)}_y : j \neq i\} is a set of hidden representations of target sentences (the objects other than circles in Fig. 3-(a)) that are randomly sampled and not paired with the source sentence x^{(i)}, and sim(·, ·) is the cosine similarity function. However, training the model with the naïve contrastive learning framework, using random non-target sequences as negative examples, is highly suboptimal, as described in the introduction and shown in Fig. 1. Many such naïve negative examples are located far away from the positive examples in the embedding space from the beginning, when using a pretrained language model. Therefore, simply using the examples from the same batch as done in Chen et al. (2020) will result in trivial negative examples and require a very large batch size to enable sampling meaningful negative pairs within the same batch. Moreover, generating positive examples for text sequences is not a trivial problem either, since for text domains we do not have a well-defined set of augmentation methods that preserve the input semantics, unlike in the image domain. To tackle such difficulties, we propose a principled method to automatically construct adversarial negative and positive examples, such that the samples are difficult for the model to classify correctly. These adversarial positive/negative pairs can guide the model to learn a more accurate representation of the target text sequence, by identifying which features make the output positive or negative (see Fig. 1-(c))." }, { "heading": "3.3 GENERATION OF IMPOSTERS", "text": "As shown in Fig.
3-(b), to generate a negative example, we add a small perturbation \delta^{(i)} = [\delta^{(i)}_1 \cdots \delta^{(i)}_T] \in \mathbb{R}^{d \times T} to H^{(i)}, the hidden representation of the target sequence y^{(i)}, such that its conditional likelihood is minimized as follows:
\tilde{H}^{(i)} = H^{(i)} + \delta^{(i)}, \ \text{where } \delta^{(i)} = \operatorname{argmin}_{\delta : \|\delta\|_2 \le \epsilon} \log p_\theta(y^{(i)} \mid x^{(i)}; H^{(i)} + \delta)
p_\theta(y^{(i)} \mid x^{(i)}; H^{(i)} + \delta) = \prod_{t=1}^{T} p_\theta(y^{(i)}_t \mid y^{(i)}_{<t}, x^{(i)}; h^{(i)}_t + \delta_t)
p_\theta(y^{(i)}_t \mid y^{(i)}_{<t}, x^{(i)}; h^{(i)}_t + \delta_t) = \mathrm{softmax}\{W(h^{(i)}_t + \delta_t) + b\}, \ \text{where } \delta_t \in \mathbb{R}^d   (3)
The exact minimization of the conditional log likelihood with respect to \delta is intractable for deep neural networks. Following Goodfellow et al. (2015), we approximate it by linearizing \log p_\theta(y^{(i)} \mid x^{(i)}) around H^{(i)} as follows:
\tilde{H}^{(i)} = H^{(i)} - \epsilon \frac{g}{\|g\|_2}, \ \text{where } g = \nabla_{H^{(i)}} \log p_\theta(y^{(i)} \mid x^{(i)})   (4)
That is, we add a small perturbation to the hidden representation of each token of the target sentence y^{(i)} such that its conditional likelihood is minimized. Thus, the perturbed \tilde{H}^{(i)}, which we call an imposter (inspired by Weinberger & Saul (2009)), is semantically very dissimilar to y^{(i)}, but very close to the hidden representation H^{(i)} in the embedding space (Fig. 3-(a)). This makes it non-trivial for the sequence-to-sequence model to distinguish it from the representation of the true target sequence y^{(i)}. Note that while the adversarial perturbations are generated similarly to Miyato et al. (2017), we use them in a completely different way. While they train the model to be invariant to adversarial samples within the \epsilon-ball, we push the imposters far away from the source sentence while pulling the ground truth target sentence toward the input sentence.
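For a toy linear softmax head, the gradient in Eq. 4 has a closed form, so the single-step construction of an imposter can be sketched as follows. This is an illustrative NumPy sketch with assumed shapes, not the authors' T5-based implementation; in the actual model, g would be obtained by backpropagation through the decoder head:

```python
import numpy as np

def log_likelihood(H, W, b, y):
    """sum_t log softmax(W h_t + b)[y_t], the conditional likelihood of Eq. 3."""
    logits = W @ H + b[:, None]                              # (V, T)
    m = logits.max(axis=0, keepdims=True)
    logZ = (m + np.log(np.exp(logits - m).sum(axis=0, keepdims=True))).ravel()
    return float(sum(logits[y[t], t] - logZ[t] for t in range(len(y))))

def imposter(H, W, b, y, eps=1.0):
    """Eq. 4: H~ = H - eps * g / ||g||_2, one normalized gradient step that
    lowers the likelihood. For this linear head the gradient is closed-form:
    d log p / d h_t = W^T (onehot(y_t) - p_t)."""
    logits = W @ H + b[:, None]
    P = np.exp(logits - logits.max(axis=0, keepdims=True))
    P /= P.sum(axis=0, keepdims=True)                        # softmax probs (V, T)
    E = np.zeros_like(P)
    E[y, np.arange(len(y))] = 1.0                            # one-hot targets
    g = W.T @ (E - P)                                        # (d, T)
    return H - eps * g / np.linalg.norm(g)
```

Because the log-likelihood of this linear head is concave in H, the step is guaranteed to lower it, and the perturbed states stay exactly eps away from the original in L2 norm, i.e. close in the embedding space while semantically corrupted.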
In other words, we use the perturbed representation as an additional negative sample for contrastive learning as follows:
\mathcal{L}_{\mathrm{cont\text{-}neg}}(\theta) = \sum_{i=1}^{N} \log \frac{\exp(\mathrm{sim}(z^{(i)}_x, z^{(i)}_y)/\tau)}{\sum_{z^{(k)}_y \in S \cup \{\tilde{z}^{(i)}_y\}} \exp(\mathrm{sim}(z^{(i)}_x, z^{(k)}_y)/\tau)}, \ \text{where } \tilde{z}^{(i)}_y = \xi(\tilde{H}^{(i)}; \theta)   (5)
Alternatively, we can generate an imposter by perturbing the hidden representation of the target sentence y so that its conditional likelihood is minimized while it remains very close to the source sentence x in the embedding space. However, we empirically find that such a variation yields less performance gain." }, { "heading": "3.4 GENERATION OF DISTANT-TARGETS", "text": "Moreover, as shown in Fig. 3-(c), we construct an additional positive example for the source sequence x^{(i)} by adding a large perturbation \zeta^{(i)} = [\zeta^{(i)}_1 \cdots \zeta^{(i)}_T] \in \mathbb{R}^{d \times T} to H^{(i)}, the hidden state of the target sequence y^{(i)}, such that the cosine similarity to z^{(i)}_x is minimized but the conditional likelihood is enforced to remain high. However, the exact computation of \zeta^{(i)} with such constraints is intractable. We approximate it in the following two separate stages. First, we add a perturbation to H^{(i)} such that it minimizes the contrastive learning objective \mathcal{L}_{\mathrm{cont}}(\theta), as shown in Eq. 6. Then we add another perturbation to minimize the KL divergence between the perturbed conditional distribution p_\theta(\hat{y}^{(i)}_t \mid \hat{y}^{(i)}_{<t}, x^{(i)}) and the original conditional distribution p_\theta(y^{(i)}_t \mid y^{(i)}_{<t}, x^{(i)}), as shown in Eq. 7, where \bar{H} = [\bar{h}_1 \cdots \bar{h}_T] \in \mathbb{R}^{d \times T}, \hat{H} = [\hat{h}_1 \cdots \hat{h}_T] \in \mathbb{R}^{d \times T}, and \eta \in \mathbb{R}. Note that \theta^* denotes a copy of the model parameters \theta and is considered constant, to prevent it from being updated through back-propagation.
\bar{H}^{(i)} = H^{(i)} - \eta \frac{g}{\|g\|_2}, \ \text{where } g = \nabla_{H^{(i)}} \mathcal{L}_{\mathrm{cont}}(\theta)   (6)
p_\theta(\hat{y}^{(i)}_t \mid \hat{y}^{(i)}_{<t}, x^{(i)}) = \mathrm{softmax}(W \bar{h}^{(i)}_t + b)
\mathcal{L}_{\mathrm{KL}}(\theta) = \sum_{i=1}^{N} \sum_{t=1}^{T} D_{\mathrm{KL}}\big(p_{\theta^*}(y^{(i)}_t \mid y^{(i)}_{<t}, x^{(i)}) \,\|\, p_\theta(\hat{y}^{(i)}_t \mid \hat{y}^{(i)}_{<t}, x^{(i)})\big)
\hat{H}^{(i)} = \bar{H}^{(i)} - \eta \frac{f}{\|f\|_2}, \ \text{where } f = \nabla_{\bar{H}^{(i)}} \mathcal{L}_{\mathrm{KL}}(\theta)   (7)
We consider the perturbed hidden state \hat{H}^{(i)} as an additional positive example for the source sequence x^{(i)}, which we refer to as a distant-target. We can use a distant-target to augment contrastive learning and minimize \mathcal{L}_{\mathrm{KL}}(\theta) as follows:
\mathcal{L}_{\mathrm{cont\text{-}pos}}(\theta) = \sum_{i=1}^{N} \log \frac{\exp(\mathrm{sim}(z^{(i)}_x, \hat{z}^{(i)}_y)/\tau)}{\sum_{z^{(k)}_y \in S \cup \{\tilde{z}^{(i)}_y\}} \exp(\mathrm{sim}(z^{(i)}_x, z^{(k)}_y)/\tau)}, \ \text{where } \hat{z}^{(i)}_y = \xi(\hat{H}^{(i)}; \theta)   (8)
CLAPS objective Incorporating the losses on the imposter and the distant-target introduced above, we estimate the parameters of the seq2seq model \theta by maximizing the following objective, where \alpha, \beta are hyperparameters that control the importance of contrastive learning and the KL divergence:
\max_\theta \ \mathcal{L}_{\mathrm{MLE}}(\theta) - \alpha \mathcal{L}_{\mathrm{KL}}(\theta) + \beta \{\mathcal{L}_{\mathrm{cont\text{-}neg}}(\theta) + \mathcal{L}_{\mathrm{cont\text{-}pos}}(\theta)\}   (9)
For all the experiments, we set \alpha and \beta to 1, which we found through cross-validation. Note that after training is done, we remove the pooling layer \xi and generate text with the decoder g, given an input encoded with the encoder f." }, { "heading": "4 EXPERIMENT", "text": "We validate our method on benchmark datasets for three conditional text generation tasks." }, { "heading": "4.1 TASKS", "text": "Machine Translation (MT) For machine translation, we use the WMT16 Romanian-English parallel corpus (WMT’16 RO-EN) to train the model. We tokenize the pairs of source and target sequences with the same tokenizer as Raffel et al. (2020). We finetune the pretrained T5-small model for 20 epochs with a batch size of 128 and Adafactor (Shazeer & Stern, 2018). For contrastive learning, we set the perturbation norms \eta and \epsilon to 3.0.
Text Summarization (Sum.)
For text summarization, we use the XSum dataset (Narayan et al., 2018), whose summaries are highly abstractive, so that extractive summarization models underperform abstractive ones. We follow most of the experimental settings for machine translation as described above, except that we set the perturbation norms \eta and \epsilon to 1.0 and 1.0, respectively.
Question Generation (QG) For question generation, we aim to generate a question from a given answer and paragraph, i.e., we model the conditional distribution p_\theta(y|x, a), where x, y, a denote a paragraph, question, and answer, respectively. We concatenate the answer and paragraph with special tokens to generate the question conditioned on both the answer and the paragraph. As in the previous experimental settings, we finetune the T5-small model on the SQuAD dataset (Rajpurkar et al., 2016) for 20 epochs with batch size 128, and set the perturbation norms \eta to 3.0 and \epsilon to 1.0. Since the test set of SQuAD is only accessible via the leaderboard, we randomly split the validation set into a validation set and a test set." }, { "heading": "4.2 EXPERIMENTAL SETUPS", "text": "Implementation Details For the encoder f and decoder g, we use the T5-small model, which is a Transformer with hidden dimension d = 512. We set the temperature \tau to 0.1 for all the experiments. At test time, we use beam search of width 4 to generate the target sequences.
Common Baselines We compare our method against the following relevant baselines.
1. T5-MLE: A pretrained T5 model fine-tuned to maximize \mathcal{L}_{\mathrm{MLE}}(\theta).
2. Scratch-T5-MLE: A randomly initialized Transformer model with an architecture identical to T5, trained by maximizing \mathcal{L}_{\mathrm{MLE}}(\theta).
3. α-T5-MLE: A T5 model trained with MLE, with a varying temperature α in the softmax function when decoding the target sentences, as done in Caccia et al. (2019).
4. T5-SSMBA: The T5 model trained to maximize \mathcal{L}_{\mathrm{MLE}}(\theta), with additional examples generated by the technique proposed in Ng et al. (2020).
These examples are generated by corrupting the target sequences and reconstructing them using a masked language model, BERT.
5. T5-WordDropout Contrastive: A T5 model trained with the contrastive learning framework proposed in Yang et al. (2019), which heuristically generates negative examples by removing the most frequent word from the target sequence. We pretrain T5-small to maximize \mathcal{L}_{\mathrm{MLE}}(\theta) and further train the model to assign higher probability to the ground truth target sentence than to a negative example with a max-margin loss.
6. R3F: A T5 model that minimizes the negative log likelihood and a symmetric KL-divergence between the original conditional likelihood p_\theta(y|x) and p_\theta(y|\tilde{x}) to enforce smoothness of the function, where \tilde{x} = \mathrm{WordEmbedding}(x) + z, z = (z_1, \ldots, z_L), z_i \overset{i.i.d.}{\sim} \mathcal{N}(0, \mathrm{diag}(\sigma_1, \ldots, \sigma_d)).
7. T5-MLE-contrastive: A naive contrastive learning framework with positive/negative pairs, which maximizes the contrastive learning objective from Eq. 2.
8. T5-CLAPS w/o positive (negative): Our proposed model, which jointly maximizes the log likelihood and the contrastive learning objective, but without the distant-targets (respectively, the imposters).
9. T5-CLAPS: Our full model, which jointly maximizes the log likelihood, the contrastive learning objective, and the KL-divergence as described in Eq. 9.
10. Scratch-CLAPS: The same as T5-CLAPS but with a randomly initialized T5 architecture.
Task-specific baselines For machine translation, we use the Transformer (Vaswani et al., 2017), which consists of 6 self-attention layers with 8 attention heads and hidden dimension 512, as an additional baseline. For QG, we additionally compare our models against Harvesting-QG (Du & Cardie, 2018), which is an LSTM model with a copy mechanism.
For text summarization, we use PTGEN-COVG (See et al., 2017) as a baseline, which uses a copy mechanism and coverage to handle out-of-vocabulary words and prevent word repetition, and CONVS2S (Narayan et al., 2018), which uses convolutional networks as the encoder and decoder.
Evaluation Metric Following the conventional evaluation metrics, we adopt n-gram BLEU and BLEU (Papineni et al., 2002) for MT and QG. For text summarization, we use ROUGE (Lin & Hovy, 2002) and METEOR (Banerjee & Lavie, 2005). As an additional performance measure for question generation, we evaluate a BERT QA model on the SQuAD test set, where the QA model is trained with the questions generated by each QG method from the contexts and answers of the HarvestingQA dataset (Du & Cardie, 2018), and report the F1 and Exact Match (EM)." }, { "heading": "4.3 EXPERIMENTAL RESULTS", "text": "Quantitative Results We compare our model with the baseline models on the WMT’16 RO-EN, XSum, and SQuAD datasets for machine translation, text summarization, and question generation, respectively. Table 1 shows that our proposed method CLAPS significantly outperforms the other baselines, with a performance gain of more than 1% on all tasks according to the BLEU scores. Moreover, our proposed method also improves the performance of the randomly initialized T5 model (Scratch-CLAPS). For question generation, our proposed method improves F1/EM as well as BLEU scores, which shows that our model is able to generate semantically valid questions that are beneficial for training the QA model. Note that naively constructing the negative examples for contrastive learning on both tasks, by randomly shuffling the association of (x, y) within a given mini-batch, degrades the performance. Increasing the batch size to a large value, using larger memory, may improve its performance as observed in SimCLR (Chen et al., 2020). However, such an approach would be highly sample-inefficient.
In contrast, our model outperforms all the other baseline models on the XSum dataset for text summarization, as shown in Table 2. For summarization, we observe that contrastive learning with imposters alone can improve the performance by a large margin.
(MT) Lupta lui Hilary a fost mai atractivă.
=>(GT): Hillary’s struggle was more attractive
=>(Dist.): Hilary’s fight was more attractive
=>(Imp.): Thearies’ battle fight has attractive appealing
(QG) … Von Miller … recording five solo tackles, …
=>(GT): How many solo tackles did Von Miller make at Super Bowl 50?
=>(Dist.): How many solo tackles did Von Miller record at Super Bowl 50?
=>(Imp.): What much tackle did was Miller record at Super Bowl 50?
(Sum.) Pieces from the board game … have been found in … China. …
=>(GT): An ancient board game has been found in a Chinese Tomb.
=>(Dist.): An ancient board game has been discovered in a Chinese Tomb.
=>(Imp.): America’s gained vast Africa most well geographical countries, 22
Table 3: Greedy decoding from the hidden representations of imposters and distant-targets. The answer span is highlighted for QG.
Visualization To examine whether our model with the proposed contrastive learning framework learns meaningful sentence representations, we encode a pair of sequences (x, y) into M, H with the encoder f and decoder g. Then, we add perturbations to H to construct an imposter H̃ and an additional positive example Ĥ, as shown in Eqs. 3, 6, and 7. We apply average pooling to M, H, H̃, and Ĥ and project them onto a two-dimensional space with t-SNE (Maaten & Hinton, 2008). As shown in Fig. 4-(b), the model pushes the imposter away from the embedding of the target sequence and pulls the embedding of the distant-target toward the embedding of the source sequence. For the model without contrastive learning, however, the embeddings of both the target sequences and the distant-targets are far away from those of the source sequences, and the imposters are very close to them, as shown in Fig.
4-(a).
Qualitative Examples For qualitative analysis, we examine the texts that are represented by the distant-targets and imposters from our method, CLAPS. To decode them into output sequences, we apply the affine transformation and softmax to H̃ and Ĥ and select the most likely token at each time step. As shown in Table 3, the distant-target example (Dist.) preserves the semantics of the original target sequence (GT), with a single word replaced by a synonym (colored in green). However, the imposters (Imp.) have completely different semantics and are often grammatically incorrect (colored in red). This shows that, with our proposed contrastive learning framework with adversarial perturbations, the model is exposed to such various valid or incorrect sentences.
Human Evaluation We further conduct a human evaluation of 20 summaries and 20 questions generated by our CLAPS model and by T5-MLE, trained for the text summarization and QG tasks. Specifically, 20 human judges perform a blind quality assessment of the two sentences generated by the two models, which are presented in a random order. For text summarization, 70% of the human annotators chose the sentences generated by our model as better than those of the baseline, and for QG, 85% favored the sentences generated by our model over those of the baseline." }, { "heading": "5 CONCLUSION", "text": "To mitigate the exposure bias problem in sequence-to-sequence learning, we proposed a contrastive learning framework which maximizes the similarity between the ground truth input and output sequences, and minimizes the similarity between the input and an incorrect output sequence.
Moreover, since the conventional approach of sampling random non-target examples from the batch as negatives for contrastive learning results in trivial pairs that are well discriminated from the beginning, we propose a new principled approach to automatically construct “hard” negative and positive examples, where the former is semantically dissimilar but close to the input embedding, and the latter is far from the input embedding but semantically similar. This adversarial learning enables the model to learn both the correct and incorrect variations of the input, and to generalize better to unseen inputs. We empirically showed that our method improved the performance of seq2seq models on machine translation, question generation, and text summarization tasks. While we specifically targeted the exposure bias problem with seq2seq models for conditional text generation, our method may be applicable to seq2seq learning for tasks from other domains, such as automatic speech recognition, text-to-speech generation, or video captioning.

Acknowledgements This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00153), the Samsung Advanced Institute of Technology (SAIT), Samsung Electronics (IO20121408145-01), an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921).

A EXPERIMENTAL DETAILS

Dataset For machine translation, text summarization, and question generation, we use the WMT’16 RO-EN, Xsum, and SQuAD datasets, respectively. The numbers of training/validation/test examples and the sources of the datasets are shown in Table 4.
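Since the official SQuAD test set is only accessible via its leaderboard, question generation work typically re-splits the original validation set into new validation and test sets. A minimal sketch of such a re-split; the 50/50 ratio and variable names are illustrative assumptions, not necessarily the exact protocol used here:

```python
# Hypothetical re-split of the SQuAD v1.1 dev set (10,570 questions) into
# new validation and test halves; placeholder dicts stand in for examples.
original_val = [{"id": f"q{i}"} for i in range(10570)]

half = len(original_val) // 2
new_val, new_test = original_val[:half], original_val[half:]
```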
Note that the numbers of validation and test examples for SQuAD differ from the original dataset. Since the original test set is only accessible via the leaderboard of SQuAD1, we split the original validation set into our new validation and test sets, following the conventions of the question generation community.

Preprocessing For machine translation, we download the raw text2, not the tokenized text, and use the same T5 tokenizer as Raffel et al. (2020) to tokenize both the Romanian and English sentences. We limit the input and output lengths to 128 tokens. For text summarization, we again use the T5 tokenizer, and limit the input length to 512 tokens and the output length to 128 tokens. For question generation, we set the maximum length of the question to 64 tokens and that of the input, which is the concatenation of the answer and the context, to 384 tokens.

Implementation We finetune the pretrained T5-small model provided by the transformers library (Wolf et al., 2019)3 with the Adafactor optimizer. We set the batch size to 128 and follow the default settings of the Adafactor optimizer to finetune the T5-small models. However, the number of negative examples drawn from the batch is 16 or 32 (the total batch size divided by the number of GPUs), because we split the batch into smaller batches and distribute them across the GPU machines. We use 8 GPUs for text summarization, and 4 GPUs for machine translation and question generation. The hidden-state dimension of the T5 model, d, is 512, so we set the hidden size of z to the same value.

Evaluation We use beam search with beam width 4 to generate the target sentences from the source sentences of the test set. Some of the examples are shown in Tables 5, 6, and A. After generation, we convert the tokens into raw text and compare it to the raw text of the ground-truth target sentences with the automatic evaluation metrics. For n-gram BLEU and Meteor, we use the implementation by Sharma et al. (2017)4.
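The beam search decoding with beam width 4 described above can be sketched with a toy, model-free example; `step_logprobs` is a stand-in for the T5 decoder's next-token log-probabilities and is purely illustrative:

```python
import math

def beam_search(step_logprobs, beam_width=4, steps=3):
    """Toy beam search: step_logprobs(prefix) -> {token: logprob}."""
    beams = [((), 0.0)]  # (prefix, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for prefix, score in beams:
            # Extend every surviving hypothesis by every candidate token.
            for tok, lp in step_logprobs(prefix).items():
                candidates.append((prefix + (tok,), score + lp))
        # Keep only the beam_width highest-scoring prefixes.
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_width]
    return beams

# Toy model: prefers token "a", then "b", then "c" at every step.
vocab = {"a": math.log(0.5), "b": math.log(0.3), "c": math.log(0.2)}
best = beam_search(lambda prefix: vocab, beam_width=4, steps=3)
```

At each step, every surviving hypothesis is extended by every token, and only the four highest-scoring prefixes survive; the top beam here is the all-"a" sequence.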
For BLEU score, we adopt the implementation by Post (2018)5.

1 https://rajpurkar.github.io/SQuAD-explorer/
2 https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro.tar.gz
3 https://github.com/huggingface/transformers
4 https://github.com/Maluuba/nlg-eval
5 https://github.com/mjpost/sacrebleu

Table 5: Generated summaries by CLAPS from Xsum dataset.

Article: The US military says a strike targeting Taliban in the northern city of Kunduz may have caused "collateral damage". Offering his "deepest condolences", Mr Obama said he expected a "full accounting of the facts" and would then make a definitive judgement. ...

GT: President Barack Obama says the US has launched a "full investigation" into air strikes that killed 19 people at an MSF-run Afghan hospital on Saturday.

CLAPS: US President Barack Obama has called for an inquiry into air strikes in Afghanistan that killed dozens of medical workers.

Article: Forecasts were for quarterly growth of between 0.5% and 0.7%. Official statistics also showed that household consumption expenditure boosted the quarterly growth numbers. But economist Shane Oliver told the BBC the numbers were "well below potential". On an annual basis the economy expanded 2.3%, beating expectations for 2.1%. Economic growth in the March quarter of 2014 was 2.9%. "The March quarter GDP [gross domestic product] growth was far better than feared just a few days ago," said Mr Oliver, who is chief economist with AMP Capital in Sydney. "However, Australia is still not out of the woods, as annual growth at 2.3% is well below potential, and a full 0.8% percentage points of the 0.9% growth came from higher inventories and trade." He said domestic demand remained "very weak with consumer spending and home construction only just offsetting the ongoing slump in mining investment".
...\nGT: Australia’s economy grew at a better-than-expected 0.9% in the first quarter of 2015, compared to the previous quarter, boosted by mining together with financial and insurance services.\nCLAPS: Australia’s economy grew faster than expected in the first three months of the year, according to official figures.\nArticle: After the problems last week, many doubt the system will cope. Transport for London (TfL) remains confident, although it admits there will be breakdowns. The trick will be in getting the system back up and running quickly. So here’s some friendly advice for tourists and Olympic visitors to try and make the transport experience as easy as possible. If anyone thinks of any more please post below.\nGT: The busiest summer ever looms for London’s transport system.\nCLAPS: London’s transport system has been a pretty busy week.\nArticle: The outgoing vice-president spoke during a state dinner and took the opportunity to praise America’s northern neighbour. ”The world is going to spend a lot of time looking to you, Mr Prime Minister”, he told the Canadian leader. Mr Biden has been highly critical of US President-elect Donald Trump. ”Vive le Canada because we need you very, very badly,” he told the dinner guests. He went on to describe the self-doubt that liberal leaders across the world are currently experiencing after several political defeats. But he praised ”genuine leaders” including German Chancellor Angela Merkel, saying such statesmen and women are in short supply. Mr Trudeau reportedly became emotional during Mr Biden’s remarks when the American spoke of his late father, former Prime Minister Pierre Trudeau. ”You’re a successful father when your children turn out better than you,” Mr Biden said. 
...\nGT: US Vice-President Joe Biden told an audience in Ottawa that the world needs ”genuine leaders” such as Canadian Prime Minister Justin Trudeau.\nCLAPS: Vice-President Joe Biden has praised Canadian Prime Minister Vive le Canada for his visit to the country.\nArticle: The Swedish giant asked customers who bought any model of the Mysingso chair to return it for a full refund. The global recall comes after Ikea received reports from Finland, Germany, the US, Denmark and Australia that users had received injuries to their fingers that needed medical treatment. Ikea’s statement said the chair had a ”risk of falling or finger entrapment”. It said: ”After washing the fabric seat it is possible to re-assemble the chair incorrectly leading to risks of falls or finger entrapments. ”Ikea has received five incident reports in which a Mysingso beach chair collapsed during use due to incorrect re-assembly. All five reports included injuries to fingers and required medical attention. It added that a full investigation had led to an improved design ”to further mitigate the risks of incorrect re-assembly and injuries” and the updated chair would be available from next month. Ikea has more than 300 stores in 27 countries.\nGT: Ikea is recalling a beach chair sold in the UK after reports that it can collapse and cause injury.\nCLAPS: Ikea is recalling a popular beach chair that collapsed during use because of incorrect re-assemblies.\nArticle: Spending on the NHS should also be paid for by a dedicated tax marked on every payslip, the former health minister suggested. Under Mr Lamb’s plan, taxes would not be increased as the new levy would be offset by deductions to income tax or national insurance. He has warned the NHS faces collapse without an urgent cash injection. The plans are not yet party policy and will not be put to this year’s conference in Bournemouth. 
But Mr Lamb, the party’s health spokesman, told party members he was ”very interested in the idea of a dedicated NH S and care contribution - separating it out from the rest of taxation, clearly identified on your payslip. ”And I am really interested in the idea of the right for local areas to raise additional funds for the NHS and care if they choose.” The Lib Dems say he would like to implement the ideas across the UK, although, as health and social care are devolved, it is unclear how this would be enforced. Mr Lamb - who lost out to Tim Farron in a leadership election in July - proposes a cross-party commission to explore the ideas. He intends to consult health bodies and professionals, patients, trade unions and academics. Ministers have pledged £2bn in this financial year for the NHS, and an extra £8bn by 2020. But Mr Lamb told the BBC that this was insufficient and, having ”seen the books” as a minister in the last government, he feared the NHS could face a funding shortfall of £30bn by 2020. ”The bottom line is with rising demand because of an ageing population we need more investment,” he said. Mr Lamb also warned that the social care system was ”on its knees” and could collapse without a cash injection of £5bn. ”I’ve been in the department. I have seen the books and I am deeply concerned. If we carry on regardless, the system will crash.” Taxpayers are already shown how much they have contributed to the health service in annual personal tax statements. An attempt to establish a cross-party commission on social care before the 2010 election - led in part by Mr Lamb - collapsed in acrimony.\nGT: English councils should be allowed to put up taxes to fund the NHS, Norman Lamb has told the Lib Dem conference.\nCLAPS:A new levy on the NHS and social care should be introduced by the Liberal Democrats, Norman Lamb has said.\nArticle: Yorkshire, Lancashire and Derbyshire have been worst affected, after 2-5cm fell overnight, with 10cm reported on higher ground. 
Passengers waiting to depart Manchester Airport have reported being stuck on the runway for hours due to a lack of de-icers. Leeds Bradford Airport suspended all morning flights but has since reopened. Manchester Airport reported ”minor delays to departing aircraft” - but passengers told the BBC they had been stuck on board outbound flights. Shirley Hale said her Jet2 flight to Tenerife had been waiting to depart for over four hours. ”We have been told that there are not enough de-icers at the airport,” she said. The airport apologised and said de-icing was the responsibility of airlines and their ground teams. More than 100 schools were closed across East Lancashire and Oldham, with 80 shut in West Yorkshire. BBC Weather said Buxton in Derbyshire saw up to 17cm of snow, the deepest measured on Friday. The avalanche risk in the Peak District was currently extremely high, Buxton Mountain Rescue Team said. Parts of Staffordshire have been affected, with several centimetres of snow reported in Flash, England’s highest village. Commuters have been urged to allow extra journey time, and the Met Office has issued snow and ice warnings. More on the snow and other stories in West Yorkshire Weather updates for Lancashire and Greater Manchester BBC Weather presenter Kay Crewdson said conditions were due to slowly improve into Saturday. Molly Greenwood reported 10cm of snow in the Huddersfield area. ”Don’t think I’m going anywhere,” she said. Zulfi Hussain said the snow was causing ”traffic chaos” in Woodhall Road, Calverley, near Leeds. 
Elliott Hudson, another West Yorkshire resident, said: ”Looks like I have woken up in Narnia.” West Yorkshire’s Liversedge FC, who have had to cancel every home game for the last four months due to bad weather, tweeted a picture of snow with the caption: ”It’s not looking good for Liversedge FC’s home game with Worksop Town tomorrow.” The A628 Woodhead, A57 Snake Pass and A537 Cat and Fiddle roads are all affected, with delays reported on the M65 motorway. Highways England said the A57 eastbound in Great Manchester is closed between M67/A560 and B6174 due to severe weather conditions. It said teams were working to clear the road. Tony Hallwood, from Leeds Bradford Airport, said it reopened at about 09:00 GMT after crews used ploughs to clear snow from the runway. He said: ”We are asking passengers to make their way to the airport as early as they can given the difficult conditions.” Bus operators are also reporting delays to all services across West Yorkshire. Oldham Council has said 48 schools had closed this morning as a result of the snow and severe weather. Drivers are also being asked to take extra care after snow fell overnight in some parts of Northern Ireland. A Met Office yellow warning for ice and snow in northern England and Wales ended at 15:00.\nGT: Heavy snowfall has caused travel disruption in parts of northern England.\nCLAPS: Flights have been disrupted after a large avalanche hit parts of England.\nArticle: But once the votes are counted, what can residents expect to pay in council tax? Below are the figures for a Band D property for every council area in Wales for the current financial year of 2017/18, how much that has gone up by for the current year, and what the average property in the area actually pays. They are grouped here by police force region - council tax includes the police precept which is added to the overall bill paid by homes. Local government is not fully funded by council tax. 
Much of the funding for councils comes in the form of grants from the Welsh Government, which in turn gets its funding from the UK government in London. In 2017/18 a total of £4.1bn is being divided among Wales’ 22 councils. The lions share of council cash goes on schools - with social services following behind, as shown in the graph above. Residents pay council tax based on which band their property is in, based on its worth. Band D has historically been used as the standard for comparing council tax levels between and across local authorities. It is used to charge tax to a property that, in Wales, was worth between £91,001 to £123,000 on April 2003 values. Council tax gets lower the cheaper a property is, and higher the more expensive a property is. Council tax figures source: Welsh Government\nGT: Voters will go to the polls on Thursday to determine who will represent them on local councils.\nCLAPS: The people of Wales are voting in a referendum on whether or not to pay council tax.\nArticle: The side’s appearance in France will be its first at a major tournament since the 1958 World Cup. Players and coaches left their base at the Vale Resort, Vale of Glamorgan, on Saturday and headed to Cardiff Airport. After a send-off from pupils from Ysgol Treganna, Cardiff, the team took off for a friendly in Sweden on Sunday. They will then head to France ahead of the team’s first game of the tournament against Slovakia on 11 June.\nGT: Wales’ football team has departed the country as their Euro 2016 preparations reach a climax.\nCLAPS: Wales’ Euro 2016 squad have arrived in France for the first time since 1958.\nArticle: The 40-year-old, from the South Bank area of Teesside, was discovered on the A66 in the early hours ”in a distressed state” with wounds to his groin after the attack. The road, from Greystones Roundabout to Church Lane in Middlesbrough, was shut earlier while searches of the area were carried out. It has now reopened. 
A 22-year-old man was arrested on suspicion of assault and later bailed. Cleveland Police said the injured man had been placed in an induced coma in hospital. The force said in a statement: ”Police can confirm that the man found this morning on the A66 had wounds to his groin area. ”Officers are continuing to investigate and are appealing for anyone with information to contact them.”\nGT: A man has been found by the side of a road with his penis cut off.\nCLAPS: A man is in an induced coma after being found with serious injuries on a Teesside road.\nArticle: In July, a major bug was discovered in the software that could let hijackers access data on up to a billion phones. Manufacturers have been slow to roll out a fix because many variations of Android are widely used. One Android expert said it was ”about time” phone makers issued security fixes more quickly. Android has been working to patch a vulnerability, known as Stagefright, which could let hackers access a phone’s data simply by sending somebody a video message. ”My guess is that this is the single largest software update the world has ever seen,” said Adrian Ludwig, Android’s lead engineer for security, at hacking conference Black Hat. LG, Samsung and Google have all said a number of their handsets will get the fix, with further updates every month. Android is an open source operating system, with the software freely available for phone manufacturers to modify and use on their handsets. The Google-led project does provide security fixes for the software, but phone manufacturers are responsible for sending the updates to their devices. Some phones running old versions of Android are no longer updated by the manufacturer. Many companies also deploy customised versions of Android which take time to rebuild with the security changes. Apple and BlackBerry can patch security problems more quickly because they develop both the software and the hardware for their devices. 
BlackBerry’s software is reviewed by mobile networks before being sent to handsets, while Apple can push updates to its phones whenever it wants. ”The very nature of Android is that manufacturers add their own software on top, so there have been delays in software roll-outs,” said Jack Parsons, editor of Android Magazine. ”In the US it’s even worse because mobile carriers often add their own software too, adding another layer of bureaucracy holding up security fixes. ”There’s no real villain here, that’s just how the system works. But there will always be security concerns with software, so it’s right that some of the manufacturers are stepping up to deal with this now.”\nGT: Samsung, LG and Google have pledged to provide monthly security updates for smartphones running the Android operating system.\nCLAPS: The world’s largest software update is to be issued by Google-led Android.\nArticle: The move follows a claim by Crossmaglen Rangers player Aaron Cunningham that he was the victim of verbal abuse during the 2 December Ulster football final. The Ulster Council carried out an investigation and BBC Sport understands one Kilcoo player is to be banned for six months and another for four months. Kilcoo said they had not been notified, and the players could appeal. The two suspensions have yet to be officially confirmed by the Ulster Council. It is believed the case was the first time an allegation of racial abuse had been lodged with the provincial governing body. When an investigation was announced, Ulster GAA president Aogán O Fearghail, said anyone found guilty of racism would be dealt with severely. Kilcoo released a statement saying the club condemned abuse and would co-operate with the Ulster Council’s investigation. 
The Gaelic Athletic Association, which governs the sport in Ireland, is to discuss how to deal with racism at its annual congress in March.\nGT: Two Kilcoo players are to be suspended by Ulster GAA chiefs following allegations of racial abuse.\nCLAPS: Two Kilcoo players have been suspended by the Ulster GAA for alleged racial abuse.\nContext: ... The Broncos finished the regular season with a 12-4 record, and denied the New England Patriots a chance to defend their title from Super Bowl XLIX by defeating them 20-18 in the AFC Championship Game. They joined the Patriots, Dallas Cowboys, and Pittsburgh Steelers as one of four teams that have made eight appearances in the Super Bowl.\nGT: How many appearances have the Denver Broncos made in the Super Bowl?\nCLAPS: How many Super Bowl appearances have the Broncos made?\nContext: In late November 2015, reports surfaced stating that “multiple acts” would perform during the halftime show. On December 3, the league confirmed that the show would be headlined by the British rock group Coldplay. On January 7, 2016, Pepsi confirmed to the Associated Press that Beyoncé, who headlined the Super Bowl XLVII halftime show and collaborated with Coldplay on the single “Hymn for the Weekend”, would be making an appearance. Bruno Mars, who headlined the Super Bowl XLVIII halftime show, and Mark Ronson also performed.\nGT: What nationality is the band Coldplay?\nCLAPS: What nationality was Coldplay?\nContext: There are 13 natural reserves in Warsaw - among others, Bielany Forest, Kabaty Woods, Czerniaków Lake. About 15 kilometres (9 miles) from Warsaw, the Vistula river’s environment changes strikingly and features a perfectly preserved ecosystem, with a habitat of animals that includes the otter, beaver and hundreds of bird species. There are also several lakes in Warsaw - mainly the oxbow lakes, like Czerniaków Lake, the lakes in the Łazienki or Wilanów Parks, Kamionek Lake. 
There are lot of small lakes in the parks, but only a few are permanent - the majority are emptied before winter to clean them of plants and sediments.

GT: What animals does the Vistula river’s ecosystem include?

CLAPS: What animals are included in the Vistula river’s habitat?

Context: "The FSO Car Factory was established in 1951. A number of vehicles have been assembled there over the decades, including the Warszawa, Syrena, Fiat 125p (under license from Fiat, later renamed FSO 125p when the license expired) and the Polonez. The last two models listed were also sent abroad and assembled in a number of other countries, including Egypt and Colombia. In 1995 the factory was purchased by the South Korean car manufacturer Daewoo, which assembled the Tico, Espero, Nubia, Tacuma, Leganza, Lanos and Matiz there for the European market. In 2005 the factory was sold to AvtoZAZ, a Ukrainian car manufacturer which assembled there the Chevrolet Aveo. The license for the production of the Aveo expired in February 2011 and has since not been renewed. Currently the company is defunct."

GT: Who bought the factory in 2005?

CLAPS: To whom was the factory sold in 2005?

Context: The Scotland Act 1998, which was passed by the Parliament of the United Kingdom and given royal assent by Queen Elizabeth II on 19 November 1998, governs the functions and role of the Scottish Parliament and delimits its legislative competence. The Scotland Act 2012 extends the devolved competencies. For the purposes of parliamentary sovereignty, the Parliament of the United Kingdom at Westminster continues to constitute the supreme legislature of Scotland. However, under the terms of the Scotland Act, Westminster agreed to devolve some of its responsibilities over Scottish domestic policy to the Scottish Parliament. Such "devolved matters" include education, health, agriculture and justice. The Scotland Act enabled the Scottish Parliament to pass primary legislation on these issues.
A degree of domestic authority, and all foreign policy, remain with the UK Parliament in Westminster. The Scottish Parliament has the power to pass laws and has limited tax-varying capability. Another of the roles of the Parliament is to hold the Scottish Government to account.\nGT: What does the Scotland Act of 2012 extend?\nCLAPS: What does the Scotland Act 2012 extend?\nContext: Stage 1 is the first, or introductory stage of the bill, where the minister or member in charge of the bill will formally introduce it to Parliament together with its accompanying documents-Explanatory Notes, a Policy Memorandum setting out the policy underlying the bill, and a Financial Memorandum setting out the costs and savings associated with it. Statements from the Presiding Officer and the member in charge of the bill are also lodged indicating whether the bill is within the legislative competence of the Parliament. Stage 1 usually takes place, initially, in the relevant committee or committees and is then submitted to the whole Parliament for a full debate in the chamber on the general principles of the bill. If the whole Parliament agrees in a vote to the general principles of the bill, it then proceeds to Stage 2.\nGT: Where are bills typically gestated in Stage 1?\nCLAPS: Where does Stage 1 usually take place?\nContext: Moderate and reformist Islamists who accept and work within the democratic process include parties like the Tunisian Ennahda Movement. Jamaat-e-Islami of Pakistan is basically a socio-political and democratic Vanguard party but has also gained political influence through military coup d’état in past. The Islamist groups like Hezbollah in Lebanon and Hamas in Palestine participate in democratic and political process as well as armed attacks, seeking to abolish the state of Israel. 
Radical Islamist organizations like al-Qaeda and the Egyptian Islamic Jihad, and groups such as the Taliban, entirely reject democracy, often declaring as kuffar those Muslims who support it (see takfirism), as well as calling for violent/offensive jihad or urging and conducting attacks on a religious basis.

GT: Where does Hamas originate?

CLAPS: Where are Hamas located?

Context: Sayyid Abul Ala Maududi was an important early twentieth-century figure in the Islamic revival in India, and then after independence from Britain, in Pakistan. Trained as a lawyer he chose the profession of journalism, and wrote about contemporary issues and most importantly about Islam and Islamic law. Maududi founded the Jamaat-e-Islami party in 1941 and remained its leader until 1972. However, Maududi had much more impact through his writing than through his political organising. His extremely influential books (translated into many languages) placed Islam in a modern context, and influenced not only conservative ulema but liberal modernizer Islamists such as al-Faruqi, whose "Islamization of Knowledge" carried forward some of Maududi’s key principles.

GT: Where did Maududi exert the most impact?

CLAPS: How did Maududi have more impact on Islam than his political organising?

Context: Like many other mainline Protestant denominations in the United States, the United Methodist Church has experienced significant membership losses in recent decades. At the time of its formation, the UMC had about 11 million members in nearly 42,000 congregations. In 1975, membership dropped below 10 million for the first time. In 2005, there were about 8 million members in over 34,000 congregations. Membership is concentrated primarily in the Midwest and in the South. Texas has the largest number of members, with about 1 million.
The states with the highest membership rates are Oklahoma, Iowa, Mississippi, West Virginia, and North Carolina.\nGT: At the time of its formation, how many congregations did the UMC have? CLAPS: How many congregations did the UMC have at the time of its formation?\nContext: Celoron’s expedition force consisted of about 200 Troupes de la marine and 30 Indians. The expedition covered about 3,000 miles (4,800 km) between June and November 1749. It went up the St. Lawrence, continued along the northern shore of Lake Ontario, crossed the portage at Niagara, and followed the southern shore of Lake Erie. At the Chautauqua Portage (near present-day Barcelona, New York), the expedition moved inland to the Allegheny River, which it followed to the site of present-day Pittsburgh. There Céloron buried lead plates engraved with the French claim to the Ohio Country. Whenever he encountered British merchants or fur-traders, Celoron informed them of the French claims on the territory and told them to leave.\nGT: How did Celeron handle business on trip?\nCLAPS: What did Celoron do when he encountered the British?\nContext: Like many cities in Central and Eastern Europe, infrastructure in Warsaw suffered considerably during its time as an Eastern Bloc economy - though it is worth mentioning that the initial Three-Year Plan to rebuild Poland (especially Warsaw) was a major success, but what followed was very much the opposite. However, over the past decade Warsaw has seen many improvements due to solid economic growth, an increase in foreign investment as well as funding from the European Union. In particular, the city’s metro, roads, sidewalks, health care facilities and sanitation facilities have improved markedly. answer:improved markedly\nGT: Warsaw’s sidewalks and sanitation facilities are some examples of things which have what?\nCLAPS: What has happened to Warsaw’s infrastructure in the past decade?\nContext: Several commemorative events take place every year. 
Gatherings of thousands of people on the banks of the Vistula on Midsummer’s Night for a festival called Wianki (Polish for Wreaths) have become a tradition and a yearly event in the programme of cultural events in Warsaw. The festival traces its roots to a peaceful pagan ritual where maidens would float their wreaths of herbs on the water to predict when they would be married, and to whom. By the 19th century this tradition had become a festive event, and it continues today. The city council organize concerts and other events. Each Midsummer’s Eve, apart from the official floating of wreaths, jumping over fires, looking for the fern flower, there are musical performances, dignitaries’ speeches, fairs and fireworks by the river bank.\nGT: How man people gather along the banks of the Vistula for the Wianki festival?\nCLAPS: How many people gather on the banks of the Vistula on Midsummer’s Night for a festival called Wianki?\nContext: The origin of the legendary figure is not fully known. The best-known legend, by Artur Oppman, is that long ago two of Triton’s daughters set out on a journey through the depths of the oceans and seas. One of them decided to stay on the coast of Denmark and can be seen sitting at the entrance to the port of Copenhagen. The second mermaid reached the mouth of the Vistula River and plunged into its waters. She stopped to rest on a sandy beach by the village of Warszowa, where fishermen came to admire her beauty and listen to her beautiful voice. A greedy merchant also heard her songs; he followed the fishermen and captured the mermaid.\nGT: What did a greedy merchant do to the mermaid?\nCLAPS: What did Oppman do to the mermaid?\nContext: Warsaw remained the capital of the Polish-Lithuanian Commonwealth until 1796, when it was annexed by the Kingdom of Prussia to become the capital of the province of South Prussia. Liberated by Napoleon’s army in 1806, Warsaw was made the capital of the newly created Duchy of Warsaw. 
Following the Congress of Vienna of 1815, Warsaw became the centre of the Congress Poland, a constitutional monarchy under a personal union with Imperial Russia. The Royal University of Warsaw was established in 1816.\nGT: How long was Warsaw the capital of the Polish-Lithuanian Commonwealth?\nCLAPS: How long did Warsaw remain the capital of the Polish-Lithuanian Commonwealth?\nContext: John Paul II’s visits to his native country in 1979 and 1983 brought support to the budding solidarity movement and encouraged the growing anti-communist fervor there. In 1979, less than a year after becoming pope, John Paul celebrated Mass in Victory Square in Warsaw and ended his sermon with a call to ”renew the face” of Poland: Let Thy Spirit descend! Let Thy Spirit descend and renew the face of the land! This land! These words were very meaningful for the Polish citizens who understood them as the incentive for the democratic changes.\nContext: Gothic architecture is represented in the majestic churches but also at the burgher houses and fortifications. The most significant buildings are St. John’s Cathedral (14th century), the temple is a typical example of the so-called Masovian gothic style, St. Mary’s Church (1411), a town house of Burbach family (14th century), Gunpowder Tower (after 1379) and the Royal Castle Curia Maior (1407–1410). The most notable examples of Renaissance architecture in the city are the house of Baryczko merchant family (1562), building called ”The Negro” (early 17th century) and Salwator tenement (1632). The most interesting examples of mannerist architecture are the Royal Castle (1596–1619) and the Jesuit Church (1609–1626) at Old Town. Among the first structures of the early baroque the most important are St. Hyacinth’s Church (1603-1639) and Sigismund’s Column (1644).\nGT: What is St. 
John’s Cathedral an example of, stylistically?\nCLAPS: St. John’s Cathedral is a typical example of what style?\nRO: Corbyn, Tsipras și Syriza în Grecia, Podemos în Spania, chiar Bernie Sanders în Statele Unite își alimentează retorica populistă din frustrările acumulate în societățile oocidentale. GT: Corbyn, Tsipras and Syriza in Greece, Podemos in Spain, even Bernie Sanders in the United States feed their populist rhetoric with the frustrations accumulated in the Western world. CLAPS: Corbyn, Tsipras and Syriza in Greece, Podemos in Spain, even Bernie Sanders in the United States are fuelling their populist rhetoric from the frustrations gained in the occident societies.\nRO: Pentru România, chiar și cota voluntară propusă de București, în cuantum de circa 1500 de suflete, ne depășește cu mult bunele intenții de solidaritate cu Uniunea Europeană exprimate în ultimele luni. GT: For Romania, even the voluntary quota proposed by Bucharest, amounting to about 1,500 souls, surpasses by far our good intentions of solidarity with the European Union expressed in recent months. CLAPS: For Romania, even Bucharest's proposed voluntary quota, amounting to around 1 500 souls, goes far beyond our good intentions of solidarity with the European Union expressed in recent months.\nRO: 7.000 de euro pe an pentru a închiria Clubul Pogor.\nGT: 7,000 Euro per year to rent Pogor Club.\nCLAPS: 7,000 euros per year to rent the Pogor Club.\nRO: Crina a fost internată împreună cu bunica ei la spitalul municipal din Pașcani, iar tot atunci tatăl fetei, Costică Balcan, a plecat la muncă în orașul Alexandria, județul Teleorman. GT: Crina and her grandmother were admitted to the hospital of the municipality of Pașcani, and on the same day her father, Costica Balcan went to work in the city of Alexandria, Teleorman County. 
CLAPS: The crisis was admitted with her grandmother to the municipal hospital in Pascani, and the girl's father, Costică Balcan, also went to work in the town of Alexandria, Teleorman County.\nRO: În țară, părinții Crinei, Alina și Costică, au continuat separat căutările, fiecare pe unde a putut. GT: I was just a little girl when I found out. CLAPS: I was small when I found out. RO: Cu o săptămână înainte de întâlnire nici nu prea mai putea să doarmă. GT: A week before the meeting he hadn't really been able to sleep. CLAPS: A week before the meeting, we could not even sleep much longer. RO: \"Ca să îți vezi copilul după atâția ani, este ceva\", ne spune o nepoată, venită și ea cu Costică de la Ruginoasa. GT: \"It's something extraordinary to see your baby after so many years\", says a niece who came with Costică from Ruginoasa. CLAPS: \"To see your child after so many years, it is something,\" says a little girl, who also comes with Costica from Ruginoasa.\nRO: În Piața Unirii, sub cerul începutului de toamnă, familia din Ruginoasa își unește din nou destinul cu fata lor din Palermo. GT: In Piața Unirii, beneath the autumn sky, the family from Ruginoasa joins their destiny with their daughter in Palermo. CLAPS: In the Square of the Union, under the skies of the early autumn, the family of Ruginoasa is once again uniting their destiny with their mother in Palermo.\nRO: \"Sper să am o amintire frumoasă cu ei, pentru că acum sunt atât de multe lucruri de spus\", ne mărturisește Crina. GT: \"I hope to gain a beautiful memory after meeting them, because now there are so many things to say,\" confesses Crina. 
CLAPS: \"I hope to have a nice memory with them, because there is so much to say now,\" Crina tells us.\nRO: În weekendul care a trecut, Alina și Costică și-au văzut pentru prima dată fata pierdută pe holurile spitalului din Iași, în urmă cu 20 ani.\nGT: Last weekend, Alina and Costică saw the girl who was lost on the halls of the hospital in Iasi for the first time in 20 years. CLAPS: For the first time in the weekend, Alina and Costica saw their girl lost in hospital rooms in Jasmine 20 years ago. RO: \"A durat până la urmă cam o lună de zile până să îl găsim pe tatăl biologic al fetei\", ne explică comisarul șef Romică Ichim. GT: Then they gave me some pointers, only to find that the person I was directed to wasn't he person I was looking for. CLAPS: They gave me some indications then, just that the person found was not the one wanted." } ]
2,021
TURBATIONS FOR CONDITIONAL TEXT GENERATION
SP:385bf55e0a9bdb8a3f3db800f63acffcb4207927
[ "The paper studies adversarial robustness in the context of federated learning. The authors provide an algorithm for adversarial training that generates adversarial examples on a trusted public dataset and iteratively sends them to the clients, so that they can perform learning on the adversarial examples as well. Notably, the adversarial examples are created by inspecting both the bias and the variance of the current set of models. The method is tested empirically on a wide range of datasets and compared to adversarial training using the local clients' data.", "The authors propose a robust federated learning algorithm, where they assume that all samples are iid, and $n_s$ clean samples are available at the server side. The authors then go on to optimize a loss function that optimizes the aggregate loss and propose some new algorithms with experimental results. While overall the paper is interesting, there are several shortcomings in the execution as discussed below that the authors can address to improve the paper." ]
In federated learning, data is distributed among local clients which collaboratively train a prediction model using secure aggregation. To preserve the privacy of the clients, the federated learning paradigm requires each client to maintain a private local training data set, and only uploads its summarized model updates to the server. In this work, we show that this paradigm could lead to a vulnerable model, which collapses in performance when the corrupted data samples (under adversarial manipulations) are used for prediction after model deployment. To improve model robustness, we first decompose the aggregation error of the central server into bias and variance, and then, propose a robust federated learning framework, named Fed BVA, that performs on-device adversarial training using the bias-variance oriented adversarial examples supplied by the server via asymmetrical communications. The experiments are conducted on multiple benchmark data sets using several prevalent neural network models, and the empirical results show that our framework is robust against white-box and black-box adversarial corruptions under both IID and non-IID settings.
[]
[ { "authors": [ "Abbas Acar", "Hidayet Aksu", "A. Selcuk Uluagac", "Mauro Conti" ], "title": "A survey on homomorphic encryption schemes: Theory and implementation", "venue": "ACM Comput. Surv.,", "year": 2018 }, { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eugene Bagdasaryan", "Andreas Veit", "Yiqing Hua", "Deborah Estrin", "Vitaly Shmatikov" ], "title": "How to backdoor federated learning", "venue": "arXiv preprint arXiv:1807.00459,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Arjun Nitin Bhagoji", "Supriyo Chakraborty", "Prateek Mittal", "Seraphin Calo" ], "title": "Analyzing federated learning through an adversarial lens", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy,", "year": 2017 }, { "authors": [ "Hongyan Chang", "Virat Shejwalkar", "Reza Shokri", "Amir Houmansadr" ], "title": "Cronus: Robust and heterogeneous collaborative learning with black-box knowledge transfer", "venue": null, "year": 1912 }, { "authors": [ "François Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR,", "year": 2017 }, { "authors": [ "Pedro Domingos" ], "title": "A unified bias-variance decomposition and its applications", "venue": "In International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Minghong Fang", "Xiaoyu Cao", "Jinyuan 
Jia", "Neil Zhenqiang Gong" ], "title": "Local model poisoning attacks to byzantine-robust federated learning", "venue": null, "year": 1911 }, { "authors": [ "Stuart Geman", "Elie Bienenstock", "René Doursat" ], "title": "Neural networks and the bias/variance dilemma", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Andrew Hard", "Kanishka Rao", "Rajiv Mathews", "Swaroop Ramaswamy", "Françoise Beaufays", "Sean Augenstein", "Hubert Eichner", "Chloé Kiddon", "Daniel Ramage" ], "title": "Federated learning for mobile keyboard prediction", "venue": "arXiv preprint arXiv:1811.03604,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Andrew G. 
Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": null, "year": 2017 }, { "authors": [ "Eunjeong Jeong", "Seungeun Oh", "Hyesung Kim", "Jihong Park", "Mehdi Bennis", "Seong-Lyun Kim" ], "title": "Communication-efficient on-device machine learning: Federated distillation and augmentation under non-iid private data", "venue": null, "year": 2018 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "DeepFool: A simple and accurate 
method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Hesham Mostafa" ], "title": "Robust federated learning through representation matching and adaptive hyperparameters", "venue": "CoRR, abs/1912.13075,", "year": 2019 }, { "authors": [ "Brady Neal", "Sarthak Mittal", "Aristide Baratin", "Vinayak Tantia", "Matthew Scicluna", "Simon LacosteJulien", "Ioannis Mitliagkas" ], "title": "A modern take on the bias-variance tradeoff in neural networks", "venue": "arXiv preprint arXiv:1810.08591,", "year": 2018 }, { "authors": [ "Venkata Krishna Pillutla", "Sham M. Kakade", "Zaı̈d Harchaoui" ], "title": "Robust aggregation for federated learning", "venue": null, "year": 1912 }, { "authors": [ "Amit Portnoy", "Danny Hendler" ], "title": "Towards realistic byzantine-robust federated learning", "venue": "CoRR, abs/2004.04986,", "year": 2020 }, { "authors": [ "Mark Sandler", "Andrew G. Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", "Aleksander Madry" ], "title": "Adversarially robust generalization requires more data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick Drew McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In International Conference on Learning Representations,", 
"year": 2018 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Giorgio Valentini", "Thomas G Dietterich" ], "title": "Bias-variance analysis of support vector machines for the development of svm-based ensemble methods", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "James Bailey", "Jinfeng Yi", "Bowen Zhou", "Quanquan Gu" ], "title": "On the convergence and robustness of adversarial training", "venue": "In Proceedings of the 36th International Conference on Machine Learning,,", "year": 2019 }, { "authors": [ "Wei Wen", "Cong Xu", "Feng Yan", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Terngrad: Ternary gradients to reduce communication in distributed deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chulin Xie", "Keli Huang", "Pin-Yu Chen", "Bo Li" ], "title": "DBA: Distributed backdoor attacks against federated learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Qiang Yang", "Yang Liu", "Tianjian Chen", "Yongxin Tong" ], "title": "Federated machine learning: Concept and applications", "venue": "ACM Trans. Intell. Syst. Technol.,", "year": 2019 }, { "authors": [ "Zitong Yang", "Yaodong Yu", "Chong You", "Jacob Steinhardt", "Yi Ma" ], "title": "Rethinking bias-variance trade-off for generalization of neural networks", "venue": "arXiv preprint arXiv:2002.11328,", "year": 2020 }, { "authors": [ "Chen Zhu", "W Ronny Huang", "Hengduo Li", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Transferable clean-label poisoning attacks on deep neural nets", "venue": "In International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": null, "text": "In federated learning, data is distributed among local clients which collaboratively train a prediction model using secure aggregation. To preserve the privacy of the clients, the federated learning paradigm requires each client to maintain a private local training data set, and only uploads its summarized model updates to the server. In this work, we show that this paradigm could lead to a vulnerable model, which collapses in performance when the corrupted data samples (under adversarial manipulations) are used for prediction after model deployment. To improve model robustness, we first decompose the aggregation error of the central server into bias and variance, and then, propose a robust federated learning framework, named Fed BVA, that performs on-device adversarial training using the bias-variance oriented adversarial examples supplied by the server via asymmetrical communications. The experiments are conducted on multiple benchmark data sets using several prevalent neural network models, and the empirical results show that our framework is robust against white-box and black-box adversarial corruptions under both IID and non-IID settings." }, { "heading": "1 INTRODUCTION", "text": "The explosive amount of decentralized user data collected from the ever-growing usage of smart devices, e.g., smartphones, wearable devices, home sensors, etc., has led to a surge of interest in the field of decentralized learning. To protect the privacy-sensitive data of the clients, federated learning (McMahan et al., 2017; Yang et al., 2019) has been proposed. Federated learning only allows a group of clients to train local models using their own data, and then collectively merges the model updates on a central server using secure aggregation (Acar et al., 2018). 
Due to its high privacy-preserving property, federated learning has attracted much attention in recent years along with the prevalence of efficient light-weight deep models (Howard et al., 2017) and low-cost network communications (Wen et al., 2017; Konečnỳ et al., 2016).\nIn federated learning, the central server only inspects the secure aggregation of the local models as a whole. Consequently, it is susceptible to clients’ corrupted updates (e.g., system failures). Recently, multiple robust federated learning models (Fang et al., 2019; Pillutla et al., 2019; Portnoy & Hendler, 2020; Mostafa, 2019) have been proposed. These works only focus on performing client-level robust training or designing server-level aggregation variants with hyper-parameter tuning for Byzantine failures. However, none of them can mitigate federated learning’s vulnerability when adversarial manipulations are present during testing, which, as we show in Section 4.1, is mainly due to the generalization error in the model aggregation.\nOur work bridges this gap by investigating the error incurred during the aggregation of federated learning from the perspective of bias-variance decomposition (Domingos, 2000; Valentini & Dietterich, 2004). Specifically, we show that the generalization error of the aggregated model on the central server can be decomposed into the combination of bias (triggered by the main prediction of these clients) and variance (triggered by the variations among clients’ predictions). Next, we propose to perform the local robust training on clients by supplying them with a tiny amount of the bias-variance perturbed examples generated from the central server via asymmetrical communications. The experiments are conducted on neural networks with cross-entropy loss; however, other loss functions are also applicable as long as their gradients w.r.t. bias and variance are tractable to estimate. 
In this way, any gradient-based adversarial training strategy (Goodfellow et al., 2015; Madry et al., 2018) could be used. Compared with previous work, our major contributions include:\n• We provide the exact solution of the bias-variance analysis w.r.t. the generalization error, which is perfectly suitable for neural network based federated learning. As a comparison, performing adversarial attacks or training with conventional federated learning methods will only focus on the bias of the central model but ignore the variance.\n• We demonstrate that the conventional federated learning framework is vulnerable to strong attacking methods with increasing communication rounds, even if adversarial training using the locally generated adversarial examples is performed on each client.\n• Without violating the clients’ privacy, we show that providing a tiny amount of bias-variance perturbed data from the central server to the clients through asymmetrical communication could dramatically improve the robustness of the training model under various settings." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 SETTINGS", "text": "In federated learning, there is a central server and $K$ different clients, each with access to a private training set $D_k = \{(x_i^k, t_i^k)\}_{i=1}^{n_k}$, where $x_i^k$, $t_i^k$, and $n_k$ are the features, label, and number of training examples in the $k$th client ($k = 1, \cdots, K$). Each data set $D_k$ is exclusively owned by client $k$ and will not be shared with the central server or other clients. In addition, there is a small public training set $D_s = \{(x_j^s, t_j^s)\}_{j=1}^{n_s}$ with $n_s$ training examples from the server that is shared with the clients, where $n_s \ll \sum_{k=1}^{K} n_k$. Note that this will not break the privacy constraints; for example, hospitals (local devices) that contribute to a federated medical image diagnosis system could take a few publicly accessible images as additional inputs. 
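The private/shared data layout described above can be sketched with a minimal IID partition (illustrative only; the sizes `N`, `K`, and `n_s` below are hypothetical and not taken from the paper):

```python
import numpy as np

# Hypothetical IID split: N private examples spread over K clients,
# plus a small shared server set D_s with n_s << sum_k n_k.
rng = np.random.default_rng(42)
N, K, n_s = 1000, 5, 20

idx = rng.permutation(N)
server_idx = idx[:n_s]                     # shared server set D_s
client_idx = np.array_split(idx[n_s:], K)  # private sets D_1, ..., D_K

print(len(server_idx), [len(c) for c in client_idx])  # 20 [196, 196, 196, 196, 196]
```

A non-IID variant would replace the random permutation with, e.g., label-sorted shards per client.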
The goal of federated learning is to train a global classifier f(·) using knowledge from all the clients such that it generalizes well over test data Dtest. The notation used in this paper is summarized in the Appendix (see Table 4)." }, { "heading": "2.2 PROBLEM DEFINITION", "text": "In this paper, we study the adversarial robustness of neural networks1 in federated learning setting, and we define robust decentralized learning as follows. Definition 2.1. (Adversarially Robust Federated Learning) Input: (1) A set of private training data {Dk}Kk=1 on K different clients; (2) Tiny amount of training data Ds on the central server; (3) Learning algorithm f(·) and loss function L(·, ·). Output: A trained model on the central server that is robust against adversarial perturbation. We would like to point out that our problem definition has the following properties: Asymmetrical communication: The asymmetrical communication between each client and server cloud is allowed: the server provides both global model parameters and limited shared data to the clients; while each client only uploads its local model parameters back to the server. Data distribution: All training examples on the clients and the server are assumed to follow the same data distribution. However, the experiments show that our proposed algorithm also achieves outstanding performance under the non-IID setting, which could be common among personalized clients in real scenarios. Shared learning algorithm: All the clients are assumed to use the identical model f(·), including architectures as well as hyper-parameters (e.g., learning rate, local epochs, local batch size). Remark. The basic assumption of this problem setting is that the learning process is clean (no\nmalicious behaviors are observed during training), however, the intentionally generated adversarial\npoisoning data will be mixed with clean data during training. 
The eventual trained model deployed on the devices will be robust against potential future adversarial attacks." }, { "heading": "2.3 BIAS-VARIANCE TRADE-OFF", "text": "Following (Domingos, 2000; Valentini & Dietterich, 2004), we define the optimal prediction, main prediction, as well as the bias, variance, and noise for any real-valued loss function $L(\cdot, \cdot)$ as follows:\nDefinition 2.2. (Optimal Prediction and Main Prediction) Given a loss function $L(\cdot, \cdot)$ and a learning algorithm $f(\cdot)$, the optimal prediction $y^*$ and the main prediction $y_m$ for an example are defined as:\n$y^*(x) = \arg\min_{y} \mathbb{E}_t[L(y, t)]$ and $y_m(x) = \arg\min_{y'} \mathbb{E}_D[L(f_D(x), y')]$ (1)\nwhere $t$ and $D$ are viewed as the random variables denoting the class label and the training set, and $f_D$ denotes the model trained on $D$. In short, the main prediction is the prediction whose average loss relative to all the predictions over data distributions is minimum; e.g., the main prediction for zero-one loss is the mode of the predictions. In this work, we show that the main prediction is the average prediction of the client models for mean squared (MSE) loss and cross-entropy (CE) loss in Section 4.1.\n1Our theoretical contribution mainly focuses on classification using neural networks with cross-entropy loss and mean squared loss. However, the proposed framework is generic to allow the use of other classification loss functions as well.\nDefinition 2.3. (Bias, Variance and Noise) Given a loss function $L(\cdot, \cdot)$ and a learning algorithm $f(\cdot)$, the expected loss $\mathbb{E}_{D,t}[L(f_D(x), t)]$ for an example $x$ can be decomposed2 into bias, variance and noise as follows:\n$B(x) = L(y_m, y^*)$ and $V(x) = \mathbb{E}_D[L(f_D(x), y_m)]$ and $N(x) = \mathbb{E}_t[L(y^*, t)]$ (2)\nIn short, bias is the loss incurred by the main prediction w.r.t. the optimal prediction, and variance is the average loss incurred by predictions w.r.t. the main prediction. Noise is conventionally assumed to be irreducible and independent of $f(\cdot)$.\nRemark. 
Our definitions of the optimal prediction, main prediction, bias, variance and noise slightly differ from previous ones (Domingos, 2000; Valentini & Dietterich, 2004). For example, the conventional optimal prediction was defined as $y^*(x) = \arg\min_y \mathbb{E}_t[L(t, y)]$, and it is equivalent to our definition when the loss function is symmetric over its arguments, i.e., $L(y_1, y_2) = L(y_2, y_1)$. Note that this decomposition holds for any real-valued loss function in the binary setting (Domingos, 2000) with a bias-variance trade-off coefficient $\lambda$ that has a closed-form expression. For the multi-class setting, we inherit their definition of bias and variance directly, and treat the trade-off coefficient $\lambda$ as a hyper-parameter to tune because no closed-form expression of $\lambda$ is available." }, { "heading": "3 THE PROPOSED FRAMEWORK", "text": "A typical framework (Kairouz et al., 2019) of privacy-preserving federated learning can be summarized as follows: (1) Client Update: Each client updates its local model parameters $w_k$ by minimizing the empirical loss over its own training set; (2) Forward Communication: Each client uploads its model parameter update to the central server; (3) Server Update: The server synchronously aggregates the received parameters; (4) Backward Communication: The global parameters are sent back to the clients. Our framework follows the same paradigm but with substantial modifications as below.\nServer Update. The server has two components: the first one uses the FedAvg (McMahan et al., 2017) algorithm to aggregate the local models’ parameters, i.e., $w_G = \text{Aggregate}(w_1, \cdots, w_K) = \sum_{k=1}^{K} \frac{n_k}{n} w_k$, where $n = \sum_{k=1}^{K} n_k$ and $w_k$ are the model parameters of the $k$th client. 
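The weighted FedAvg aggregation above can be sketched in a few lines of Python (a minimal NumPy illustration assuming each client's parameters are a dict of arrays; this is not the authors' implementation):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg: w_G = sum_k (n_k / n) * w_k, applied tensor by tensor."""
    n = sum(client_sizes)
    return {
        name: sum((n_k / n) * params[name]
                  for params, n_k in zip(client_params, client_sizes))
        for name in client_params[0]
    }

# Toy example: two clients sharing one parameter tensor named "dense".
w1 = {"dense": np.ones((2, 2))}
w2 = {"dense": 3 * np.ones((2, 2))}
w_global = fedavg_aggregate([w1, w2], client_sizes=[10, 30])
print(w_global["dense"])  # every entry is (10/40)*1 + (30/40)*3 = 2.5
```

The larger client (30 examples) pulls the global parameters toward its own, which is exactly the $n_k/n$ weighting.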
Meanwhile, another component is designed to produce adversarially perturbed examples, which could be induced by a poisoning attack algorithm for the usage of robust adversarial training.\nIt has been well studied (Belkin et al., 2019; Domingos, 2000; Valentini & Dietterich, 2004) that in the classification setting, the generalization error of a learning algorithm on an example is determined by the bias, variance, and irreducible noise as defined in Eq. (2). Similar to the previous work, we also assume a noise-free learning scenario where the class label $t$ is a deterministic function of $x$ (i.e., if $x$ is sampled repeatedly, the same value of its class $t$ will be observed). This motivates us to generate the adversarial examples by attacking the bias and variance induced by the clients’ models as:\n$\max_{\hat{x} \in \Omega(x)} B(\hat{x}; w_1, \cdots, w_K) + \lambda V(\hat{x}; w_1, \cdots, w_K) \quad \forall (x, t) \in D_s$ (3)\nwhere $B(\hat{x}; w_1, \cdots, w_K)$ and $V(\hat{x}; w_1, \cdots, w_K)$ could be empirically estimated from a finite number of clients’ parameters trained on the local training sets $\{D_1, D_2, \cdots, D_K\}$. Here $\lambda$ is a hyper-parameter to measure the trade-off between bias and variance, and $\Omega(x)$ is the perturbation constraint.\nNote that $D_s$ (on the server) is the candidate subset of all available training examples that would lead to their perturbed counterparts. This is a more feasible setting compared to generating adversarial examples on the clients’ devices, because in real scenarios the server usually has much more powerful computational capacity that allows the usage of flexible poisoning attack algorithms. In this case, both the poisoned examples and the server model parameters would be sent back to each client (Backward Communication), while only the clients’ local parameters would be uploaded to the server (Forward Communication), i.e., the asymmetrical communication as discussed in Section 2.2.\nClient Update. 
The robust training of one client’s prediction model (i.e., $w_k$) can be formulated as the following minimization problem:\n$\min_{w_k} \left( \sum_{i=1}^{n_k} L(f_{D_k}(x_i^k; w_k), t_i^k) + \sum_{j=1}^{n_s} L(f_{D_k}(\hat{x}_j^s; w_k), t_j^s) \right)$ (4)\nwhere $\hat{x}_j^s \in \Omega(x_j^s)$ are the perturbed examples that are asymmetrically transmitted from the server.\n2This decomposition is based on the weighted sum of bias, variance, and noise. In general, $t$ is a non-deterministic function (Domingos, 2000) of $x$ when the irreducible noise is considered. Namely, if $x$ is sampled repeatedly, different values of $t$ will be observed.\nRemark. Intuitively, the bias measures the systematic loss of a learning algorithm, and the variance measures the prediction consistency of the learner over different training sets. Therefore, our robust federated learning framework has the following advantages: (i) it encourages the clients to consistently produce the optimal prediction for perturbed examples, thereby leading to a better generalization performance; (ii) local adversarial training on perturbed examples allows learning a robust local model, and thus a robust global model could be aggregated from the clients.\nTheoretically, we could still have another alternative robust federated training strategy:\n$\min_{w_k} \sum_{i=1}^{n_k} \max_{\hat{x}_i^k \in \Omega(x_i^k)} L(f(\hat{x}_i^k; w_k), t_i^k) \quad \forall k \in \{1, 2, \cdots, K\}$ (5)\nwhere the perturbed training examples of each client $k$ are generated on the local devices from $D_k$ instead of being transmitted from the server. This min-max formulation is similar to (Madry et al., 2018; Tramèr et al., 2018), where the inner maximization problem synthesizes the adversarial counterparts of clean examples, while the outer minimization problem finds the optimal model parameters over the perturbed training examples. Thus, each local robust model is trained individually; nevertheless, on-device poisoning attacks will largely increase the computational cost and memory usage. 
Meanwhile, it only considers the client-specific loss and is still vulnerable to adversarial examples with increasing communication rounds. Both phenomena are observed in our experiments (see Fig. 4 and Fig. 5)." }, { "heading": "4 ALGORITHM", "text": "" }, { "heading": "4.1 BIAS-VARIANCE ATTACK", "text": "We first consider the maximization problem in Eq. (3) using bias-variance based adversarial attacks. It aims to find the adversarial example x̂ (from the original example x) that would produce large bias and variance values w.r.t. clients’ local models. Specifically, the perturbation constraint x̂ ∈ Ω(x) forces the adversarial example x̂ to be visually indistinguishable from x. Here we consider the well-studied ℓ∞-bounded adversaries3 (Goodfellow et al., 2015; Madry et al., 2018; Tramèr et al., 2018) such that Ω(x) := {x̂ : ||x̂ − x||_∞ ≤ ε} for a perturbation magnitude ε. Furthermore, we propose to consider the following two gradient-based algorithms to generate adversarial examples.\nBias-variance based Fast Gradient Sign Method (BV-FGSM): Following FGSM (Goodfellow et al., 2015), it linearizes the maximization problem in Eq. (3) with a one-step attack as follows: x̂_{BV-FGSM} := x + ε · sign(∇_x (B(x; w_1, · · · , w_K) + λ V(x; w_1, · · · , w_K))) (6) Bias-variance based Projected Gradient Descent (BV-PGD): PGD can be considered as a multi-step variant of FGSM (Kurakin et al., 2017) and might generate powerful adversarial examples. This motivated us to derive a BV-based PGD attack: x̂^{l+1}_{BV-PGD} := Proj_{Ω(x)}(x̂^l + ε · sign(∇_{x̂^l} (B(x̂^l; w_1, · · · , w_K) + λ V(x̂^l; w_1, · · · , w_K)))) (7) where x̂^l is the adversarial example at the l-th step with the initialization x̂^0 = x, and Proj_{Ω(x)}(·) projects each step onto Ω(x). Remark. The proposed framework could be naturally generalized to any gradient-based adversarial attack algorithm where the gradients of bias B(·) and variance V(·) w.r.t. x are tractable when estimated from finite training sets.
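The two attacks in Eqs. (6)-(7) can be sketched in a few lines. The following is an illustrative NumPy sketch, not the authors' implementation; `bv_grad` is an assumed callable returning the aggregated gradient of B + λV at a point, and the step size is illustrative:

```python
import numpy as np

def bv_fgsm(x, bv_grad, eps):
    # One-step attack (Eq. 6): move by eps in the sign direction of the
    # (assumed) bias-variance gradient bv_grad(x) = grad of B + lambda*V.
    return x + eps * np.sign(bv_grad(x))

def bv_pgd(x, bv_grad, eps, steps=10, alpha=None):
    # Multi-step attack (Eq. 7): repeated signed steps, each projected
    # back onto the l-infinity ball of radius eps around the clean x.
    if alpha is None:
        alpha = eps / steps  # illustrative per-step size
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(bv_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # Proj onto Omega(x)
    return x_adv
```

The clip implements Proj_{Ω(x)} for the ℓ∞ constraint; for image inputs one would additionally clip to the valid pixel range.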
Compared with the existing attack methods (Carlini & Wagner," }, { "heading": "2017; Goodfellow et al., 2015; Kurakin et al., 2017; Moosavi-Dezfooli et al., 2016), the loss function our adversary aims to optimize is a linear combination of bias and variance, whereas existing work mainly focused on attacking the overall classification error, which considers bias only.", "text": "The following theorem states that the bias B(·) and variance V(·), as well as their gradients over the input x, could be estimated using the clients’ models. Theorem 4.1. Assume that L(·, ·) is the cross-entropy loss function; then the empirically estimated main prediction y_m for an input example (x, t) has the following closed-form expression: y_m(x; w_1, · · · , w_K) = (1/K) Σ_{k=1}^K f_{D_k}(x; w_k). Furthermore, the empirical bias and variance, as well as their gradients over an input x, are estimated as follows:\nB(x; w_1, · · · , w_K) = (1/K) Σ_{k=1}^K L(f_{D_k}(x; w_k), t); V(x; w_1, · · · , w_K) = L(y_m, y_m) = H(y_m)\n3 ℓ∞ robustness is surely not the only option for robust learning. However, we use this standard approach to show the limitations of prior federated learning, and to evaluate the improvements of our proposed framework.\nHere, H(y_m) = −Σ_{j=1}^C y_m^(j) log y_m^(j) is the entropy of the main prediction y_m and C is the number of classes. Their gradients follow directly: ∇_x B(x; w_1, · · · , w_K) = (1/K) Σ_{k=1}^K ∇_x L(f_{D_k}(x; w_k), t) and ∇_x V(x; w_1, · · · , w_K) = −(1/K) Σ_{k=1}^K Σ_{j=1}^C (log y_m^(j) + 1) ∇_x f_{D_k}^(j)(x; w_k). Details of the proof are elaborated in A.2.\nIn addition, we also consider the case where L(·, ·) is the MSE loss function. However, the gradients of MSE’s bias and variance are much more computationally demanding compared with the concise formulas that cross-entropy yields.
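For one input x, the empirical estimates in Theorem 4.1 amount to averaging the clients' predicted distributions. A minimal sketch, assuming each client returns a softmax distribution over C classes (the log calls are unclamped for clarity; a real implementation would add a small epsilon to avoid log(0)):

```python
import numpy as np

def empirical_bias_variance(probs, t):
    # probs: (K, C) array; row k holds client k's predicted class
    #        distribution f_Dk(x) for a single input x.
    # t:     integer ground-truth class index.
    y_m = probs.mean(axis=0)                 # main prediction (average)
    bias = -np.mean(np.log(probs[:, t]))     # mean cross-entropy w.r.t. t
    variance = -np.sum(y_m * np.log(y_m))    # entropy H(y_m)
    return y_m, bias, variance
```

Note that the bias needs the target label t, while the variance only needs the clients' outputs, matching the observation in Section 5.2.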
More comparisons are illustrated in Appendix A.5.1.\nAlgorithm 1 Fed BVA\n1: Input: K (number of clients, with local data sets {D_k}_{k=1}^K); f (learning model); E (number of local epochs); F (fraction of clients selected on each round); B (batch size of local client); η (learning rate); D_s (shared data set on server); ε (perturbation magnitude).\n2: Initialization: Initialize w_G^0 and D̂_s = ∅\n3: for each round r = 1, 2, · · · do\n4: m ← max(F · K, 1)\n5: S_r ← randomly sampled m clients\n6: for each client k ∈ S_r in parallel do\n7: w_k^r, f_{D_k}, ∇_x f_{D_k} ← ClientUpdate(w_G^{r−1}, D̂_s, D_s, k)\n8: end for\n9: D̂_s ← BVAttack({f_{D_k}, ∇_x f_{D_k}} | k ∈ S_r)\n10: w_G^r ← Aggregate(w_k^r | k ∈ S_r)\n11: end for\n12: return w_G\nAlgorithm 2 ClientUpdate(w, D̂_s, D_s, k)\n1: Initialize the kth client’s model with w\n2: B ← split D_k ∪ D̂_s into batches of size B\n3: for each local epoch i = 1, 2, · · · , E do\n4: for local batch (x, t) ∈ B do\n5: w ← w − η ∇L(f_{D_k}(x; w), t)\n6: end for\n7: end for\n8: Calculate f_{D_k}(x; w), ∇_x f_{D_k}(x; w) ∀x ∈ D_s\n9: return w, f_{D_k}(x; w), ∇_x f_{D_k}(x; w)\nAlgorithm 3 BVAttack({f_{D_k}, ∇_x f_{D_k}} | k ∈ S_r)\n1: Initialize D̂_s = ∅\n2: for (x, t) ∈ D_s do\n3: Estimate the gradients ∇_x B(x) and ∇_x V(x) using Theorem 4.1\n4: Calculate x̂ using Eq. (6) or (7) and add it to D̂_s\n5: end for\n6: return D̂_s" }, { "heading": "4.2 FED BVA", "text": "We present a novel robust federated learning algorithm with our proposed bias-variance attacks, named Fed BVA. Following the framework defined in Eq. (3) and Eq. (4), the key components of our algorithm are (1) bias-variance attacks for generating adversarial examples on the server, and (2) adversarial training using poisoned server examples together with clean local examples on each client. Therefore, we optimize these two objectives by producing the adversarial examples D̂_s and updating the local model parameters w iteratively.\nThe proposed algorithm is summarized in Alg. 1. Given the server’s D_s and the clients’ training data {D_k}_{k=1}^K as input, the output is a robust global model on the server.
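The two server-side averages in Alg. 1 (Step 10) and Alg. 3 are plain element-wise means; the sketch below is illustrative (the text only states that aggregation is done "in a similar manner as FedAvg", so a simple unweighted mean is assumed here):

```python
import numpy as np

def aggregate_params(client_params):
    # Step 10 of Alg. 1: FedAvg-style element-wise average of the
    # selected clients' parameter vectors.
    return np.mean(np.stack(client_params), axis=0)

def aggregate_bias_grad(client_loss_grads):
    # Used inside BVAttack (Alg. 3): by Theorem 4.1, the gradient of the
    # empirical bias is the mean of the per-client loss gradients, so the
    # server only needs uploaded gradients, never the raw client data.
    return np.mean(np.stack(client_loss_grads), axis=0)
```

Because both operations are sums of client uploads, they are compatible with additive homomorphic encryption, as noted at the end of Section 4.2.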
In this case, the clean server data D_s will be shared with all the clients. First, the server initializes its model parameters w_G and the perturbed data D̂_s, and then assigns them to the randomly selected clients (Steps 4-5). Next, each client optimizes its own local model (Steps 6-8) with the received global parameters w_G as well as its own clean data D_k, and uploads the updated parameters as well as the gradients of the local model on each shared server example back to the server. Finally, the server generates the perturbed data D̂_s (Step 9) using the proposed bias-variance attack algorithm (see Alg. 3) with aggregations (model parameter average, bias gradient average, and variance gradient average) in a similar manner to FedAvg (McMahan et al., 2017). These aggregations can be privacy-secured if additive homomorphic encryption (Acar et al., 2018) is applied." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 SETTINGS", "text": "In this section, we evaluate the adversarial robustness of our proposed algorithm on four benchmark data sets: MNIST4, Fashion-MNIST5, CIFAR-106 and CIFAR-1006.\n4 http://yann.lecun.com/exdb/mnist 5 https://github.com/zalandoresearch/fashion-mnist 6 https://www.cs.toronto.edu/~kriz/cifar.html\nFigure 3: Convergence on Fashion-MNIST (PGD-20 attack)\nFigure 4: Performance on Fashion-MNIST (PGD-20 attack)\nFigure 5: Efficiency on Fashion-MNIST (PGD-20 attack)\nThe baseline models we used include: (1). Centralized: training with one centralized model, which is identical to the federated learning case that has only one client (K = 1) with fraction F = 1. (2). FedAvg: the classical federated averaging model (McMahan et al., 2017). (3). FedAvg AT: the simplified version of our proposed method, where the local clients perform adversarial training with the asymmetrically transmitted perturbed data generated on top of FedAvg’s aggregation. (4) - (6).
Fed Bias, Fed Variance, Fed BVA: our proposed methods, where the asymmetrically transmitted perturbed data is generated using the gradients of the bias-only attack, variance-only attack, and bias-variance attack, respectively. (7). EAT: ensemble adversarial training (Tramèr et al., 2018), where each client performs local adversarial training using Eq. (5), and the model updates are aggregated on the server using FedAvg. For fair comparisons, all baselines are modified to the asymmetrical communication setting (FedAvg and EAT receive the clean D_s), and all their initializations are set to be the same. (8). EAT+Fed BVA: a combination of baselines (6) and (7). Note that baselines (7) and (8) have high computational requirements on client devices, and are usually not preferred in real scenarios.\nFor the defense model, we use a 4-layer CNN model for MNIST and Fashion-MNIST, and the VGG9 architecture for CIFAR-10 and CIFAR-100. Regarding blackbox attacks, we apply ResNet18 (He et al., 2016), VGG11 (Simonyan & Zisserman, 2015), Xception (Chollet, 2017), and MobileNetV2 (Sandler et al., 2018) for the CIFAR data, and provide a variety of models for MNIST and Fashion-MNIST by following the design of (Tramèr et al., 2018). The training is performed using the SGD optimizer with a fixed learning rate of 0.01 and momentum of 0.9. The trade-off coefficient between bias and variance is set to λ = 0.01 for all experiments. All hyper-parameters of federated learning are presented in Table 5 in the Appendix. We empirically demonstrate that these hyper-parameter settings are preferable in terms of both training accuracy and robustness (see the details of Fig. 6 - Fig. 14 in the Appendix). To evaluate the robustness of our federated learning algorithm against adversarial attacks, in addition to clean model training, we perform FGSM (Goodfellow et al., 2015) and PGD (Kurakin et al., 2017) with 10 and 20 steps against the aggregated server model on D_test.
Following (Tramèr et al., 2018; Wang et al., 2019), the maximum perturbations allowed are ε = 0.3 on MNIST and Fashion-MNIST, and ε = 16/255 on CIFAR-10 and CIFAR-100 for both threat and defense models. For IID sampling, the data is shuffled and uniformly partitioned across the clients; for the non-IID setting, the data is divided into 2F·K shards based on sorted labels, and each client is then assigned 2 shards. Thus, each client will have data with at most two classes." }, { "heading": "5.2 RESULT ANALYSIS", "text": "To analyze the properties of our proposed Fed BVA framework, we present two visualization plots on MNIST using a trained CNN model, where the bias and variance are both calculated on the training examples. In Fig. 1, we visualize the gradients extracted by the bias-only, variance-only, and bias-variance adversarial attacks. Notice that the gradients of bias and variance are similar but with subtle differences in local pixel areas. However, according to Theorem 4.1, the gradient calculations of these two are quite different: the bias requires the target label as input, but the variance only needs the model output and the main prediction. From another perspective, we also investigate the bias-variance magnitude relationship with varying model complexity. As shown in Fig. 2, with increasing model complexity (more convolutional filters in the CNN), both bias and variance decrease. This result is different from the double-descent curve or bell-shaped variance curve claimed in (Belkin et al., 2019; Yang et al., 2020).
The reasons are twofold: first, their bias-variance definitions are from the MSE regression decomposition perspective, whereas our decomposition utilizes the concept of the main prediction, and the generalization error is decomposed from the classification perspective; second, their implementations only evaluate the bias and variance using training batches on one central model, and thus differ from the definition, which requires the variance to be estimated from multiple sub-models (in our scenario, client models).\nThe convergence plot of all baselines is presented in Fig. 3. We observe that FedAvg has the best convergence, and all robust training variants have a slightly higher loss upon convergence. This matches the observations in (Madry et al., 2018), which state that training performance may be sacrificed in order to provide robustness for small-capacity networks. For the model performance shown in Fig. 4, we observe that the aggregation of federated learning is vulnerable to adversarial attacks, since both FedAvg and EAT have decreased performance with an increasing number of server-client communications. Other baselines that utilize the asymmetrical communications have increasing robustness with more communication rounds, although only a small number of perturbed examples (n_s = 64) are transmitted. We also observe that when the communication rounds reach 40, Fed BVA starts to outperform EAT, while the latter is even more resource-demanding than Fed BVA (shown in Fig. 5, where the pie plot size represents the running time). Overall, bias-variance based adversarial training via asymmetric communication is both effective and efficient for robust federated learning.
Compared with the standard robust federated learning baseline FedAvg AT, the performance of Fed BVA against adversarial attacks still increases by 4%-13% and 2%-9% in the IID and non-IID settings, respectively, although Fed BVA is theoretically suited to the case where clients have IID samples. In Table 3, we observe a similar trend where Fed BVA outperforms FedAvg AT on CIFAR-10 and CIFAR-100 (with 0.2%-10% increases) when defending against different types of adversarial examples. Compared with the strong local adversarial training baseline EAT, we also observe a maximum 13% accuracy increase when applying its bias-variance oriented counterpart EAT+Fed BVA. Overall, the takeaway is that without local adversarial training, using a bias-variance based robust learning framework will almost always outperform other baselines in defending against FGSM and PGD attacks. When local adversarial training is allowed (e.g., the client device has powerful computation ability), using bias-variance robust learning with local adversarial training will mostly yield the best robustness.\nWe also conducted various additional experiments in Appendix A.5, which include: (1) comparison of the efficiency and effectiveness of Fed BVA using the cross-entropy loss and the MSE loss; (2) comparison of single-step Fed BVA and multi-step Fed BVA in terms of the generation of D̂_s; (3) three training scenarios of Fed BVA that use client-specific adversarial examples or universal adversarial examples; (4) an ablation study in terms of the number of shared perturbed examples n_s, the optimizer's momentum, and the number of local epochs E; (5) blackbox attack transferability between various models on all four data sets under multiple settings."
}, { "heading": "6 RELATED WORK", "text": "Adversarial Machine Learning: While machine learning models have achieved remarkable performance on clean inputs, recent work (Goodfellow et al., 2015) showed that those trained models are vulnerable to adversarially chosen examples crafted by adding imperceptible noise to the clean inputs. In general, the adversarial robustness of centralized machine learning models has been explored from the following aspects: adversarial attacks (Carlini & Wagner, 2017; Athalye et al., 2018; Zhu et al., 2019), defense (or robust model training) (Madry et al., 2018; Carlini et al., 2019; Tramèr et al., 2018), and interpretable adversarial robustness (Schmidt et al., 2018; Tsipras et al., 2018).\nFederated Learning: Federated learning with preserved privacy (Konečnỳ et al., 2016; McMahan et al., 2017; Hard et al., 2018) and knowledge distillation (Chang et al., 2019; Jeong et al., 2018) has become prevalent in recent years. Meanwhile, the vulnerability of federated learning to backdoor attacks has also been explored by (Bagdasaryan et al., 2018; Bhagoji et al., 2019; Xie et al., 2019). Following their work, multiple robust federated learning models (Fang et al., 2019; Pillutla et al., 2019; Portnoy & Hendler, 2020; Mostafa, 2019) have also been proposed and studied. In this paper, we study federated learning’s adversarial vulnerability after model deployment from the perspective of bias-variance analysis. This is in sharp contrast to the existing work that focused on model robustness against Byzantine failures.\nBias-Variance Decomposition: Bias-variance decomposition (Geman et al., 1992) was originally introduced to analyze the generalization error of a learning algorithm. Then, a generalized bias-variance decomposition (Domingos, 2000; Valentini & Dietterich, 2004) was studied in the classification setting, which enables flexible loss functions (e.g., squared loss, zero-one loss).
More recently, the bias-variance trade-off was experimentally evaluated on modern neural network models (Neal et al., 2018; Belkin et al., 2019; Yang et al., 2020)." }, { "heading": "7 CONCLUSION", "text": "In this paper, we proposed a novel robust federated learning framework, in which the loss incurred during the server’s aggregation is dissected into a bias part and a variance part. Our approach improves model robustness through adversarial training by supplying a few bias-variance perturbed samples to the clients via asymmetrical communications. Extensive experiments have been conducted to evaluate its performance from various aspects on several benchmark data sets. We believe that further exploration of this direction will lead to more findings on the robustness of federated learning." } ]
2020
null
SP:06c25da862ae69fa7cd0f87ea0b125243ea86f5f
[ "This paper proposes a new task: spoken conversational question answering, which combines conversational question answering (e.g. CoQA) with spoken question answering (e.g. Spoken-SQuAD). The task is to answer a question (with the answer in written text) where the question is given in both audio and text form. They create a dataset for this task by combining CoQA with some off-the-shelf text-to-speech and speech-to-text models. They then propose a new model, DDNet, which obtains improved performance on their dataset.", "In this paper, the authors release a new dataset, Spoken-CoQA, which includes an ASR-based version of the popular CoQA dataset. The dataset has been created by running the Google TTS system followed by ASR using CMU Sphinx, to create a speech-transcribed version of the dataset. The dataset includes the corresponding TTS audio recordings. Since the transcribed dataset has transcription errors, existing reading comprehension models do not work well. Thus, the paper introduces a joint audio-textual model for QA on the Spoken-CoQA dataset that uses TTS recordings and their corresponding ASR output. " ]
In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts. However, the most natural way that humans seek or test their knowledge is via human conversations. Therefore, we propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling QA systems to model complex dialogue flows given the speech utterances and text corpora. In this task, our main objective is to build a QA system to deal with conversational questions in both spoken and text forms, and to explore the plausibility of providing more cues in spoken documents for systems in information gathering. To this end, instead of adopting automatically generated speech transcripts with highly noisy data, we propose a novel unified data distillation approach, DDNet, which directly fuses audio-text features to reduce the misalignment between automatic speech recognition hypotheses and the reference transcriptions. In addition, to evaluate the capacity of QA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 120k question-answer pairs. Experiments demonstrate that our proposed method achieves superior performance in spoken conversational question answering.
[]
[ { "authors": [ "Alexei Baevski", "Steffen Schneider", "Michael Auli" ], "title": "vq-wav2vec: Self-supervised learning of discrete speech representations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alexandre Bérard", "Olivier Pietquin", "Christophe Servan", "Laurent Besacier" ], "title": "Listen and translate: A proof of concept for end-to-end speech-to-text translation", "venue": "arXiv preprint arXiv:1612.01744,", "year": 2016 }, { "authors": [ "Antoine Bruguier", "Rohit Prabhavalkar", "Golan Pundak", "Tara N Sainath. Phoebe" ], "title": "Pronunciationaware contextualization for end-to-end speech recognition", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Eunsol Choi", "He He", "Mohit Iyyer", "Mark Yatskar", "Wen-tau Yih", "Yejin Choi", "Percy Liang", "Luke Zettlemoyer" ], "title": "QuAC: Question answering in context", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Yung-Sung Chuang", "Chi-Liang Liu", "Hung-Yi Lee" ], "title": "SpeechBERT: Cross-modal pre-trained language model for end-to-end spoken question answering", "venue": "arXiv preprint arXiv:1910.11559,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Mattia A Di Gangi", "Viet-Nhat Nguyen", "Matteo Negri", "Marco Turchi" ], "title": "Instance-based model adaptation for direct speech translation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Ahmed Elgohary", "Chen Zhao", "Jordan Boyd-Graber" ], "title": "Dataset and baselines for sequential opendomain question answering", "venue": "In Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Shao-Wei Fan-Jiang", "Tien-Hong Lo", "Berlin Chen" ], "title": "Spoken document retrieval leveraging bertbased modeling and query reformulation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Peng Gao", "Zhengkai Jiang", "Haoxuan You", "Pan Lu", "Steven CH Hoi", "Xiaogang Wang", "Hongsheng Li" ], "title": "Dynamic fusion with intra-and inter-modality attention flow for visual question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hongyu Gong", "Yelong Shen", "Dian Yu", "Jianshu Chen", "Dong Yu" ], "title": "Recurrent chunking mechanisms for long-text machine reading comprehension", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Sangchul Hahn", "Heeyoul Choi" ], "title": "Self-knowledge distillation in natural language processing", "venue": "In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019),", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 
2015 }, { "authors": [ "Matthew Honnibal", "Ines Montani" ], "title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", "venue": null, "year": 2017 }, { "authors": [ "Minghao Hu", "Yuxing Peng", "Furu Wei", "Zhen Huang", "Dongsheng Li", "Nan Yang", "Ming Zhou" ], "title": "Attention-guided answer distillation for machine reading comprehension", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Hsin-Yuan Huang", "Chenguang Zhu", "Yelong Shen", "Weizhu Chen" ], "title": "Fusionnet: Fusing via fullyaware attention with application to machine comprehension", "venue": "arXiv preprint arXiv:1711.07341,", "year": 2017 }, { "authors": [ "Hsin-Yuan Huang", "Eunsol Choi", "Wen-tau Yih" ], "title": "FlowQA: Grasping flow in history for conversational machine comprehension", "venue": "arXiv preprint arXiv:1810.06683,", "year": 2018 }, { "authors": [ "Mingkun Huang", "Yongbin You", "Zhehuai Chen", "Yanmin Qian", "Kai Yu" ], "title": "Knowledge distillation for sequence model", "venue": "Proceedings Interspeech", "year": 2018 }, { "authors": [ "Damianos Karakos", "Rabih Zbib", "William Hartmann", "Richard Schwartz", "John Makhoul" ], "title": "Reformulating information retrieval from speech and text as a detection problem", "venue": "In Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020),", "year": 2020 }, { "authors": [ "Yoon Kim", "Alexander M Rush" ], "title": "Sequence-level knowledge distillation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Chia-Chih Kuo", "Shang-Bao Luo", "Kuan-Yu Chen" ], "title": "An audio-enriched bert-based framework for spoken multiple-choice question answering", "venue": "arXiv preprint arXiv:2005.12142,", "year": 2020 }, { "authors": [ "Zhenzhong Lan", 
"Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chia-Hsuan Lee", "Shang-Ming Wang", "Huan-Cheng Chang", "Hung-Yi Lee" ], "title": "ODSQA: Opendomain spoken question answering dataset", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2018 }, { "authors": [ "Chia-Hsuan Lee", "Yun-Nung Chen", "Hung-Yi Lee" ], "title": "Mitigating the impact of speech recognition errors on spoken question answering by adversarial domain adaptation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Lin-shan Lee", "James Glass", "Hung-yi Lee", "Chun-an Chan" ], "title": "Spoken content retrieval—beyond cascading speech recognition with text retrieval", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2015 }, { "authors": [ "Chia-Hsuan Li", "Szu-Lin Wu", "Chi-Liang Liu", "Hung-yi Lee" ], "title": "Spoken SQuAD: A study of mitigating the impact of speech recognition errors on listening comprehension", "venue": "arXiv preprint arXiv:1804.00320,", "year": 2018 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "venue": null, "year": 1907 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Siva Reddy", "Danqi Chen", "Christopher D Manning" ], "title": "CoQA: A Conversational Question Answering Challenge", "venue": 
"Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Dmitriy Serdyuk", "Yongqiang Wang", "Christian Fuegen", "Anuj Kumar", "Baiyang Liu", "Yoshua Bengio" ], "title": "Towards end-to-end spoken language understanding", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Shamane Siriwardhana", "Andrew Reis", "Rivindu Weerasekera", "Suranga Nanayakkara" ], "title": "Jointly fine-tuning “bert-like” self supervised models to improve multimodal speech emotion recognition", "venue": "arXiv preprint arXiv:2008.06682,", "year": 2020 }, { "authors": [ "Dan Su", "Pascale Fung" ], "title": "Improving spoken question answering using contextualized word representation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Bo-Hsiang Tseng", "Sheng-Syun Shen", "Hung-Yi Lee", "Lin-Shan Lee" ], "title": "Towards machine comprehension of spoken content: Initial TOEFL listening comprehension test by machine", "venue": "arXiv preprint arXiv:1608.06378,", "year": 2016 }, { "authors": [ "Mei Tu", "Fan Zhang", "Wei Liu" ], "title": "End-to-end speech translation with self-contained vocabulary manipulation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Hu Xu", "Bing Liu", "Lei Shu", "Philip S Yu" ], "title": "Review conversational reading comprehension", "venue": "arXiv preprint arXiv:1902.00821,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: 
Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Y. Zhang", "W. Chan", "N. Jaitly" ], "title": "Very deep convolutional networks for end-to-end speech recognition", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Zhuosheng Zhang", "Yuwei Wu", "Junru Zhou", "Sufeng Duan", "Hai Zhao", "Rui Wang" ], "title": "SG-Net: Syntax-guided machine reading comprehension", "venue": "In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Shiyu Zhou", "Linhao Dong", "Shuang Xu", "Bo Xu" ], "title": "Syllable-based sequence-to-sequence speech recognition with the transformer in mandarin chinese", "venue": "arXiv preprint arXiv:1804.10752,", "year": 2018 }, { "authors": [ "Chenguang Zhu", "Michael Zeng", "Xuedong Huang" ], "title": "SDNet: Contextualized attention-based deep network for conversational question answering", "venue": "arXiv preprint arXiv:1812.03593,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Conversational Machine Reading Comprehension (CMRC) has been studied extensively over the past few years within the natural language processing (NLP) communities (Zhu et al., 2018; Liu et al., 2019; Yang et al., 2019). Different from traditional MRC tasks, CMRC aims to enable models to learn the representation of the context paragraph and multi-turn dialogues. Existing methods for conversational question answering (QA) tasks (Huang et al., 2018a; Devlin et al., 2018; Xu et al., 2019; Gong et al., 2020) have achieved superior performance on several benchmark datasets, such as QuAC (Choi et al., 2018) and CoQA (Elgohary et al., 2018). However, few studies have investigated CMRC in both spoken content and text documents.\nTo incorporate spoken content into machine comprehension, there are few public datasets that evaluate the effectiveness of a model in spoken question answering (SQA) scenarios. TOEFL listening comprehension (Tseng et al., 2016) is one of the related corpora for this task, an English test designed to evaluate the English language proficiency of non-native speakers. However, its multiple-choice question answering setting and limited scale make it unsuitable for training robust SCQA models. The other two spoken question answering datasets are Spoken-SQuAD (Li et al., 2018) and ODSQA (Lee et al., 2018), respectively. However, in these datasets there is usually no connection between a series of questions and answers within the same spoken passage. More importantly, the most common way people seek or test their knowledge is via human conversations, which capture and maintain the common ground in spoken and text context from the dialogue flow. There are many real-world applications related to SCQA tasks, such as voice assistants and chat robots.\nIn recent years, neural network based methods have achieved promising progress in the speech processing domain.
Most existing works first select a feature extractor (Gao et al., 2019), and then feed the feature embedding into a state-of-the-art learning framework, as used in single-turn spoken language processing tasks such as speech retrieval (Lee et al., 2015; Fan-Jiang et al., 2020; Karakos et al., 2020), translation (Bérard et al., 2016; Serdyuk et al., 2018; Di Gangi et al., 2020; Tu et al., 2020) and recognition (Zhang et al., 2017; Zhou et al., 2018; Bruguier et al., 2019; Siriwardhana et al., 2020). However, simply applying existing methods to SCQA tasks raises several challenges. First, transforming speech signals into ASR transcriptions inevitably introduces ASR errors (see Table 2). Previous work (Lee et al., 2019) shows that directly feeding the ASR output into the downstream modules usually causes significant performance loss, especially in SQA tasks. Second, speech corresponds to a multi-turn conversation (e.g., lectures, interviews, meetings), so the discourse structure exhibits more complex correlations between questions and answers than that of a monologue. Third, additional information, such as audio recordings, contains potentially valuable cues in spoken form. Many QA systems could leverage this kind of orality to generate better representations. Fourth, existing QA models are tailored for a specific (text) domain. For our SCQA tasks, it is crucial to guide the system to learn this kind of orality in documents.
In this work, we propose a new spoken conversational question answering task - SCQA, and introduce Spoken-CoQA, a spoken conversational question answering dataset for evaluating whether a QA system can tackle question answering over noisy speech transcripts and text documents. We compare Spoken-CoQA with existing SQA datasets (see Table 1). Unlike existing SQA datasets, Spoken-CoQA is a multi-turn conversational SQA dataset, which is more challenging than single-turn benchmarks.
First, every question in the Spoken-CoQA dataset depends on the conversation history, which makes it difficult for the machine to parse. Second, errors introduced by ASR modules also degrade the machine's contextual understanding of the context paragraph. To mitigate the effects of speech recognition errors, we then present a novel knowledge distillation (KD) method for spoken conversational question answering tasks. Our key intuition is that speech utterances and text contents are dual in nature, and we can take advantage of this property to learn the correspondence between these two forms. We distill this knowledge into the student model, and then guide the student to overcome the bottleneck of noisy ASR outputs to boost performance. Empirical results show that our proposed DDNet achieves remarkable performance gains in SCQA tasks. To the best of our knowledge, ours is the first work on spoken conversational machine reading comprehension tasks.
In summary, the main contributions of this work are as follows:
• We propose a new task for machine comprehension of spoken question-answering style conversations. To the best of our knowledge, our Spoken-CoQA is the first spoken conversational machine reading comprehension dataset.
• We develop a novel end-to-end method based on data distillation to learn from both the speech and text domains. Specifically, we use the model trained on clear syntax and close-distance recordings to guide the model trained on noisy ASR transcriptions, achieving substantial gains in prediction accuracy.
• We demonstrate the robustness of our DDNet on Spoken-CoQA, and show that the model can effectively alleviate ASR errors in noisy conditions."
}, { "heading": "2 RELATED WORK", "text": "Conversational Machine Reading Comprehension In recent years, the natural language processing research community has devoted substantial efforts to conversational machine reading comprehension tasks (Huang et al., 2018a; Zhu et al., 2018; Xu et al., 2019; Zhang et al., 2020; Gong et al., 2020). Within the growing body of work on conversational machine reading comprehension, two signature attributes have emerged: the availability of large benchmark datasets (Choi et al., 2018; Elgohary et al., 2018; Reddy et al., 2019) and pre-trained language models (Devlin et al., 2018; Liu et al., 2019; Lan et al., 2020). However, these existing works typically focus on modeling the complicated context dependency in text form. In contrast, we focus on enabling the machine to build the capability of language recognition and dialogue modeling in both the speech and text domains.
Spoken Question Answering In parallel to the recent advances in natural language processing, these trends have also been pronounced in the speech processing (SP) field, where spoken question answering, an extended form of question answering, has explored the prospect of machine comprehension in spoken form. Previous work on SQA typically includes two separate modules: automatic speech recognition and text question answering. It entails transcribing spoken content into ASR transcriptions, and then employing natural language processing techniques to handle the resulting language processing tasks. The existing methods (Tseng et al., 2016; Serdyuk et al., 2018; Su & Fung, 2020) focus on optimizing each module in a two-stage manner, where errors in the ASR module lead to severe performance loss. Concurrent with our research, Serdyuk et al. (2018) propose an end-to-end approach for natural language understanding (NLU) tasks.
SpeechBERT (Chuang et al., 2019) cascades BERT-based models into a unified model and then trains it jointly on audio and text. However, the existing SQA methods aim at solving a single question given the related passage, without building and maintaining the connections between different questions within a human conversation.
Knowledge Distillation Hinton et al. (2015) introduces the idea of Knowledge Distillation (KD) in a teacher-student scenario. In other words, we can distill the knowledge from one model (a massive teacher model) into another (a small student model). Previous work has shown that KD can significantly boost prediction accuracy in natural language processing and speech processing (Kim & Rush, 2016; Hu et al., 2018; Huang et al., 2018b; Hahn & Choi, 2019), while adopting KD-based methods for SCQA tasks has been less explored. Although we share the same research topic and application, our research direction and methods differ. Previous methods design a unified model for single-turn speech-language tasks. In contrast, our model explores the prospect of handling SQA tasks. More importantly, we focus on a question about the dual nature of speech and text: can spoken conversational dialogues further assist the model in boosting performance? Finally, we incorporate the knowledge distillation framework to distill a reliable dialogue flow from the spoken contexts, and utilize the learned predictions to guide the student model to train well on the noisy input data." }, { "heading": "3 TASK DEFINITION", "text": "" }, { "heading": "3.1 DATA FORMAT", "text": "We introduce Spoken-CoQA, a new spoken conversational machine reading comprehension dataset where the documents are in both spoken and text form. Given the spoken multi-turn dialogues and spoken documents, the task is to answer questions in multi-party conversations.
Each example in this dataset is defined as {Di, Qi, Ai} for i = 1, . . . , N, where Di is a passage and Qi = {qi1, qi2, ..., qiL} and Ai = {ai1, ai2, ..., aiL} are the corresponding L-turn queries and answers, respectively. Given a passage Di, the multi-turn history questions {qi1, qi2, ..., qiL−1} and the reference answers {ai1, ai2, ..., aiL−1}, our goal is to generate aiL for the given current question qiL. In this study, we use the spoken form of questions and documents as the network input for training. Note that questions and documents (passages) in Spoken-CoQA are in both text and spoken forms, and answers are in text form." }, { "heading": "3.2 DATA COLLECTION", "text": "We detail the procedures to build Spoken-CoQA as follows. First, we select the conversational question-answering dataset CoQA (Reddy et al., 2019) since it is one of the largest public CMRC datasets. CoQA contains around 8k stories (documents) and over 120k questions with answers. The average dialogue length in CoQA is about 15 turns, and the answers are in free-form text. In CoQA, the training set and the development set contain 7,199 and 500 conversations over the given stories, respectively. Therefore, we use the CoQA training set as the reference text for our training set, and the CoQA development set as the test set of Spoken-CoQA. Then we employ the Google text-to-speech system to transform the questions and documents in CoQA into spoken form. Next, we adopt CMU Sphinx to transcribe the processed spoken content into ASR transcriptions. In total, we collect more than 40 GB of audio data, with a duration of around 300 hours.
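The per-example format described in Section 3.1 can be sketched as a minimal data record; all field names and text snippets below are illustrative only, not the dataset's actual schema:

```python
# Illustrative sketch of one Spoken-CoQA example: a passage D_i with
# L-turn questions and answers, each available as text and as an ASR
# transcription. Field names and strings are made up for illustration.
example = {
    "document": {
        "text": "Once upon a time, in a barn near a farm house, there lived a cat.",
        "asr_transcript": "once upon a time in a barn near a far mouse there lived a cat",
    },
    "turns": [
        {"question_text": "What color was the cat?",
         "question_asr": "what color was the cat",
         "answer": "white"},
        {"question_text": "Where did she live?",
         "question_asr": "where did she live",
         "answer": "in a barn"},
    ],
}

def qa_context(example, turn_index):
    """Inputs for predicting a_iL: the document, the dialogue history
    q_i1..q_i(L-1), a_i1..a_i(L-1), and the current question q_iL."""
    history = example["turns"][:turn_index]
    current = example["turns"][turn_index]
    return {
        "document": example["document"]["asr_transcript"],
        "history": [(t["question_asr"], t["answer"]) for t in history],
        "current_question": current["question_asr"],
    }
```

This makes explicit that, unlike single-turn SQA datasets, each prediction is conditioned on the full conversation history, not only on the current question.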
It is worth noting that since the constructed dataset does not update answer spans based on the noisy ASR text but continues to assume answer spans as per the original text, we perform data filtering in our investigation by eliminating question-answer pairs from the corpus if the answer spans do not exist in the referenced ASR transcriptions.
For clarity, we provide an example from our Spoken-CoQA dataset in Table 2. Figure 4 compares spectrograms of samples from the ASR modules. In this example, we observe that given the text document (ASR-document), the conversation starts with the question Q1 (ASR-Q1), and the system is then required to answer Q1 (ASR-Q1) with A1 based on a contiguous text span R1. Compared to the existing benchmark datasets, with ASR transcripts (of both the document and the questions) it is much more difficult for the machine to comprehend the questions, reason over the passage, and predict the correct answer." }, { "heading": "4 DDNET", "text": "In this section, we detail our data distillation approach, which leverages the dual nature of the speech and text domains to boost the prediction accuracy of a spoken dialogue system. An overview of the pipeline for this task is shown in Figure 1. We first introduce the multi-modality fusion mechanism. Then we present the major components of the CMRC module. Finally, we describe a simple yet effective distillation strategy in the proposed DDNet to learn feature representations in the speech-text domain comprehensively.
Given spoken words S = {s1, s2, ..., sn} and corresponding text words X = {x1, x2, ..., xn}, we utilize Speech-BERT and Text-BERT to generate the speech feature embedding Es = {Es1, Es2, ..., Esn} and the context word embedding Ex = {Ex1, Ex2, ..., Exn}, respectively.
Concretely, we first use vq-wav2vec (Baevski et al., 2019) to transfer speech signals into a series of tokens, analogous to the standard tokenization procedure in natural language processing tasks, and then use Speech-BERT (Chuang et al., 2019), a variant of BERT-based models, to process the speech sequences for training. We retrain Speech-BERT (Chuang et al., 2019) on our Spoken-CoQA dataset. The scale of Speech-BERT is similar to that of the BERT-base (Devlin et al., 2018) model, which contains 12 transformer layers with residual connections and an embedding dimension of 768. In parallel, we embed the text context into a sequence of vectors via our text encoder - Text-BERT. We adopt the same architecture as BERT-base (Devlin et al., 2018) for our Text-BERT due to its superior performance.
Cross Attention Inspired by ViLBERT (Lu et al., 2019), we apply the co-attention transformer layer (Lu et al., 2019), a variant of Self-Attention (Vaswani et al., 2017), as the Cross Attention module for speech and text embedding fusion. We pass query, key, and value matrices (Q, K, V) as input to the Cross Attention module. We then compute the cross attention-pooled features by querying one modality with the Q vector from the other modality.
Ê_s^cross = CrossAttention(E_s, E_x, E_x) (1)
Ê_x^cross = CrossAttention(E_x, E_s, E_s) (2)
Finally, we obtain the aligned cross attention embedding E_cross by concatenating Ê_s^cross and Ê_x^cross." }, { "heading": "4.1 KEY COMPONENTS", "text": "We build our CMRC module based on recent works (Zhu et al., 2018; Huang et al., 2017).
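As a rough illustration of the cross-modal querying in Eqs. (1) and (2), a single-head scaled dot-product sketch is given below. This is a simplification: the actual module uses multi-head co-attention transformer layers, and the shapes and random embeddings here are placeholders.

```python
import numpy as np

def cross_attention(Q, K, V):
    # Scaled dot-product attention: query one modality (Q) with the
    # keys/values of the other (K, V). Shapes: (n_q, d), (n_k, d), (n_k, d).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over keys
    return attn @ V                                # (n_q, d)

rng = np.random.default_rng(0)
E_s = rng.normal(size=(5, 8))   # placeholder speech embeddings
E_x = rng.normal(size=(7, 8))   # placeholder text embeddings

E_s_cross = cross_attention(E_s, E_x, E_x)  # Eq. (1): speech queries text
E_x_cross = cross_attention(E_x, E_s, E_s)  # Eq. (2): text queries speech
# Aligned fusion: concatenate the two cross-attended sequences.
E_cross = np.concatenate([E_s_cross, E_x_cross], axis=0)  # (12, 8)
```

Note how each modality's output keeps its own sequence length while its content is pooled from the other modality, which is what makes the concatenated E_cross an aligned fusion of the two streams.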
We divide our CMRC module into three key components: the Encoding Layer, the Attention Layer, and the Output Layer.
Encoding Layer We encode documents and conversations (questions and answers) into the corresponding feature embeddings (e.g., character embedding, word embedding, and contextual embedding), then concatenate the output contextual embedding with the aligned cross attention embedding E_cross and pass the result as input:
Ê_enc = [E_enc; E_cross] (3)
Attention Layer We compute the attention over the context representations of the documents and questions, and extensively exploit the correlations between them. Note that we adopt the default attention layers of the four baseline models.
Output Layer After obtaining the attention-pooled representations, the Output Layer computes the probability distribution of the start and end indices within the entire document and predicts an answer to the current question." }, { "heading": "4.2 KNOWLEDGE DISTILLATION", "text": "For prior speech-language models, the only guidance is the standard training objective measuring the difference between the prediction and the reference answer. However, such a criterion provides little guidance under noisy ASR transcriptions. To tackle this issue, we distill the knowledge from our teacher model, and use it to guide the student model to learn contextual features in our spoken CMRC task. Concretely, we set the model trained on the speech documents and text corpus as the teacher model, and the model trained on the ASR transcripts as the student model. Thus, the student trained on low-quality data learns to absorb the knowledge that the teacher has discovered.
Concretely, given that zS and zT are the prediction vectors of the student and teacher models, the objective is defined as:
L = Σ_{x∈X} ( ατ²KL(pτ(zS), pτ(zT)) + (1 − α)XE(zS, y) ), (4)
where KL(·) and XE(·) denote the Kullback-Leibler divergence and cross entropy, respectively. y represents the ground-truth labels in the text training dataset X.
pτ(·) denotes the softmax function with temperature τ, and α is a balancing factor." }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "In this section, we first introduce several state-of-the-art language models as our baselines, and then evaluate the robustness of these models on our proposed Spoken-CoQA dataset. Finally, we provide a thorough analysis of the different components of our method. Note that we use the default settings in all the evaluated methods." }, { "heading": "5.1 BASELINES", "text": "In principle, DDNet can utilize any backbone network for SCQA tasks. We choose several state-of-the-art language models (FlowQA (Huang et al., 2018a), SDNet (Zhu et al., 2018), BERT-base (Devlin et al., 2018), ALBERT (Lan et al., 2020)) as our backbone networks due to their superior performance. We also compare our proposed DDNet with several state-of-the-art SQA methods (Lee et al., 2018; Serdyuk et al., 2018; Lee et al., 2019; Kuo et al., 2020). To train the teacher-student pairs simultaneously, we first train the baselines on the CoQA training set and then compare their performance on the CoQA dev set and the Spoken-CoQA dev set. Finally, we train the baselines on the Spoken-CoQA training set and evaluate them on the CoQA dev set and the Spoken-CoQA test set. We provide quantitative results in Table 3." }, { "heading": "5.2 EXPERIMENT SETTINGS", "text": "We use the official BERT (Devlin et al., 2018) and ALBERT (Lan et al., 2020) as our starting points for training. We use BERT-base (Devlin et al., 2018) and ALBERT-base (Lan et al., 2020), which both include 12 transformer encoders, with a hidden size of 768 for each word vector. BERT and ALBERT use BPE as the tokenizer, but FlowQA and SDNet use spaCy (Honnibal & Montani, 2017) for tokenization. Specifically, when a spaCy (Honnibal & Montani, 2017) token corresponds to more than one BPE sub-token, we average the BERT embeddings of these BPE sub-tokens as the embedding for the token.
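The distillation objective of Eq. (4) can be sketched per example as follows. This assumes the standard knowledge-distillation formulation (Hinton et al., 2015), with the hard-label cross-entropy applied to the student logits; τ = 2 and α = 0.9 are the defaults used in the experiment settings.

```python
import numpy as np

def softmax(z, tau=1.0):
    # Temperature-scaled softmax p_tau(z).
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(z_student, z_teacher, y, tau=2.0, alpha=0.9):
    """Per-example objective:
    alpha * tau^2 * KL(p_tau(z_S) || p_tau(z_T)) + (1 - alpha) * XE(z_S, y).
    The tau^2 factor keeps the soft-target term's gradient magnitude
    comparable across temperatures."""
    p_s = softmax(z_student, tau)
    p_t = softmax(z_teacher, tau)
    kl = float(np.sum(p_s * (np.log(p_s) - np.log(p_t))))
    xe = float(-np.log(softmax(z_student)[y]))   # hard-label cross entropy
    return alpha * tau**2 * kl + (1.0 - alpha) * xe
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label term remains; the summation over the training set X in Eq. (4) is simply this quantity accumulated over examples.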
To maintain the integrity of all evaluated model performances, we use the standard implementations and hyper-parameters of the four baselines for training. The balancing factor α is set to 0.9, and the temperature τ is set to 2. For evaluation, we use Exact Match (EM) and F1 score to compare model performance on the test set. Note that in this work, each baseline is trained in our local computing environment, which may lead to results different from those on the CoQA leaderboard." }, { "heading": "5.3 RESULTS", "text": "We compare several teacher-student pairs on the CoQA and Spoken-CoQA datasets. Quantitative results are shown in Table 3. We observe that the average F1 score is 77.6% when training on CoQA (text documents) and testing on the CoQA dev set. However, when training the models on Spoken-CoQA (ASR transcriptions) and testing on the Spoken-CoQA test set, the average F1 score drops to 49.3%. For FlowQA, the F1 score even drops by 40.4%. This confirms the importance of mitigating ASR errors, which severely degrade model performance in our tasks.
As shown in Table 4, our proposed Cross Attention block and knowledge distillation strategy each consistently yield remarkable performance gains across all baselines. More importantly, our distillation strategy works particularly well. For FlowQA, our method achieves 53.7% (vs. 51.6%) and 39.2% (vs. 34.7%) in terms of F1 score on the text documents and ASR transcriptions, respectively. For SDNet, our method outperforms the baseline without the distillation strategy, achieving 55.6% (vs. 52.5%) and 56.7% (vs. 53.1%) in terms of F1 score. For the two BERT-like models (BERT-base and ALBERT-base), our method also improves F1 scores to 58.8% (vs. 55.8%) and 57.7% (vs. 54.1%), and to 59.6% (vs. 56.0%) and 58.7% (vs. 55.2%), respectively. We also evaluate the combination of our distillation strategy and the cross attention mechanism.
Our results suggest that such a network notably improves prediction performance for spoken conversational question answering tasks. Such significant improvements demonstrate the effectiveness of DDNet." }, { "heading": "6 QUANTITATIVE ANALYSIS", "text": "Speech Feature in ASR System To perform a qualitative analysis of speech features, we visualize the log-mel spectrogram features and the mel-frequency cepstral coefficient (MFCC) feature embeddings learned by DDNet in Figure 4. We can observe how the spectrogram features respond to different sentence examples.
Temperature τ To study the effect of the temperature τ (see Section 4.2), we conduct additional experiments on the four baselines with the standard choices of the temperature τ ∈ {1, 2, 4, 6, 8, 10}. All models are trained on the Spoken-CoQA dataset, and validated on the CoQA dev set and the Spoken-CoQA test set, respectively. As shown in Figure 3, when τ is set to 2, all four baselines achieve their best performance in terms of the F1 and EM metrics.
Multi-Modality Fusion Mechanism To study the effect of different modality fusion mechanisms, we introduce a novel fusion mechanism, Con Fusion: we first directly concatenate the two output embeddings from the Speech-BERT and Text-BERT models, and then pass the result to the encoding layer of the following CMRC module. In Table 5, we observe that the Cross Attention fusion mechanism outperforms Con Fusion on all four baselines in terms of EM and F1 scores. We further investigate the effect of uni-modal input. Table 5 shows that text-only performs better than speech-only. One possible reason for this is that using only speech features can introduce additional noise. Note that speech-only (text-only) means that we only feed the speech (text) embedding from Speech-BERT (Text-BERT) to the encoding layer of the CMRC module." }, { "heading": "7 CONCLUSION", "text": "In this paper, we have presented a new spoken conversational question answering task - Spoken-CoQA, for enabling human-machine communication.
Unlike the existing spoken conversational machine reading comprehension datasets, Spoken-CoQA includes multi-turn conversations and passages in both text and speech form. Furthermore, we propose a data distillation method, which leverages audio-text features to reduce the misalignment between ASR hypotheses and the reference transcriptions.
Table 5: Comparison of different fusion mechanisms in DDNet. We set the model trained on the speech documents and text corpus as the teacher model, and the one trained on the ASR transcripts as the student model.
Models                              CoQA dev (EM / F1)   S-CoQA test (EM / F1)
FlowQA (Huang et al., 2018a)        40.9 / 51.6          22.1 / 34.7
  + speech-only                     40.8 / 51.2          21.8 / 34.0
  + text-only                       41.1 / 51.7          22.4 / 35.3
  + Con Fusion                      41.0 / 52.0          22.1 / 35.2
  + Cross Attention                 41.1 / 52.2          22.5 / 35.5
SDNet (Zhu et al., 2018)            40.1 / 52.5          41.5 / 53.1
  + speech-only                     39.3 / 51.6          40.9 / 52.28
  + text-only                       40.2 / 52.7          41.5 / 53.3
  + Con Fusion                      40.3 / 52.6          41.5 / 53.2
  + Cross Attention                 40.4 / 52.9          41.6 / 53.4
BERT-base (Devlin et al., 2018)     42.3 / 55.8          40.6 / 54.1
  + speech-only                     41.9 / 55.8          40.2 / 54.1
  + text-only                       42.4 / 56.0          40.9 / 54.3
  + Con Fusion                      42.3 / 56.0          40.8 / 54.1
  + Cross Attention                 42.4 / 56.3          40.9 / 54.5
ALBERT-base (Lan et al., 2020)      42.7 / 56.0          41.4 / 55.2
  + speech-only                     41.8 / 55.9          41.1 / 54.8
  + text-only                       42.9 / 56.3          41.4 / 55.7
  + Con Fusion                      42.7 / 56.1          41.3 / 55.4
  + Cross Attention                 42.9 / 56.4          41.6 / 55.9
Figure 3: Ablation studies of the temperature τ on DDNet performance (FlowQA, SDNet, BERT, ALBERT). Red and blue denote the results on the CoQA dev set and the Spoken-CoQA test set, respectively.
Experimental results show that DDNet achieves superior prediction accuracy. For future work, we will further investigate different mechanisms for integrating speech and text content, and propose novel machine-learning-based networks to mitigate ASR recognition errors and boost the performance of QA systems." }, { "heading": "A APPENDIX", "text": "A.1 SPEECH FEATURES IN ASR SYSTEM\nDue to the page limit, we present some examples of speech features here." } ]
2020
null
SP:f55167c38de1d6b8528b2d4ef865f5e2e87a5bdc
[ "**Paper Summary:** The paper addresses an important topic in federated learning which is personalization. The authors propose a two steps process to achieve the personalization: 1. Figuring out which models to send to which clients; 2. Computing their personalized weighted combinations for each client. To determine the weights, the authors use first order approximation. ", "The paper proposes a new FL method that computes in every communication round for each client a personalized model as starting point for the next round of federation. The paper defines the client-specific objective as some loss function of the weighted combination of all (or subset) models on a client-specific validation set. This personalized weighted combination of the models especially fits situations where not all clients have congruent objectives such as in non-IID settings. The paper evaluates the proposed FL algorithm on standard datasets for image classification by comparing to alternative FL methods. " ]
While federated learning traditionally aims to train a single global model across decentralized local datasets, one model may not always be ideal for all participating clients. Here we propose an alternative, where each client only federates with other relevant clients to obtain a stronger model per client-specific objectives. To achieve this personalization, rather than computing a single model average with constant weights for the entire federation as in traditional FL, we efficiently calculate optimal weighted model combinations for each client, based on figuring out how much a client can benefit from another’s model. We do not assume knowledge of any underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest, enabling greater flexibility for personalization. We evaluate and characterize our method on a variety of federated settings, datasets, and degrees of local data heterogeneity. Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
[ { "affiliations": [], "name": "Michael Zhang" }, { "affiliations": [], "name": "Karan Sapra" }, { "affiliations": [], "name": "Sanja Fidler" }, { "affiliations": [], "name": "Jose M. Alvarez" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloe Kiddon", "Jakub Konečnỳ", "Stefano Mazzocchi", "H Brendan McMahan" ], "title": "Towards federated learning at scale: System design", "venue": "arXiv preprint arXiv:1902.01046,", "year": 2019 }, { "authors": [ "Christopher Briggs", "Zhong Fan", "Peter Andras" ], "title": "Federated learning with hierarchical clustering of local updates to improve training on non-iid data", "venue": "arXiv preprint arXiv:2004.11791,", "year": 2020 }, { "authors": [ "Guobin Chen", "Wongun Choi", "Xiang Yu", "Tony Han", "Manmohan Chandraker" ], "title": "Learning efficient object detection models with knowledge distillation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yuyang Deng", "Mohammad Mahdi Kamani", "Mehrdad Mahdavi" ], "title": "Adaptive personalized federated learning", "venue": "arXiv preprint arXiv:2003.13461,", "year": 2020 }, { "authors": [ "Cynthia Dwork", "Aaron Roth" ], "title": "The algorithmic foundations of differential privacy", "venue": "Foundations and Trends in Theoretical Computer Science,", "year": 2014 }, { "authors": [ "Alireza Fallah", "Aryan Mokhtari", "Asuman Ozdaglar" ], "title": "Personalized federated learning: A metalearning approach", "venue": "arXiv preprint arXiv:2002.07948,", "year": 2020 }, { "authors": [ "Avishek Ghosh", "Jichan Chung", "Dong Yin", "Kannan Ramchandran" ], "title": "An efficient framework for clustered federated learning", "venue": "arXiv preprint arXiv:2006.04088,", "year": 2020 }, { "authors": [ "Filip Hanzely", "Peter Richtárik" ], "title": 
"Federated learning of a mixture of global and local models", "venue": "arXiv preprint arXiv:2002.05516,", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Thomas Hofmann" ], "title": "Latent semantic models for collaborative filtering", "venue": "ACM Transactions on Information Systems (TOIS),", "year": 2004 }, { "authors": [ "Kevin Hsieh", "Amar Phanishayee", "Onur Mutlu", "Phillip B Gibbons" ], "title": "The non-iid data quagmire of decentralized machine learning", "venue": null, "year": 1910 }, { "authors": [ "Tzu-Ming Harry Hsu", "Hang Qi", "Matthew Brown" ], "title": "Measuring the effects of non-identical data distribution for federated visual classification", "venue": null, "year": 1909 }, { "authors": [ "Yihan Jiang", "Jakub Konečnỳ", "Keith Rush", "Sreeram Kannan" ], "title": "Improving federated learning personalization via model agnostic meta learning", "venue": null, "year": 1909 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "Proceedings of Machine Learning and Systems,", 
"year": 2020 }, { "authors": [ "Paul Pu Liang", "Terrance Liu", "Liu Ziyin", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ], "title": "Think locally, act globally: Federated learning with local and global representations", "venue": "arXiv preprint arXiv:2001.01523,", "year": 2020 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Jae Ro", "Ananda Theertha Suresh" ], "title": "Three approaches for personalization with applications to federated learning", "venue": "arXiv preprint arXiv:2002.10619,", "year": 2020 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "H. Brendan McMahan", "Eider Moore", "Daniel Ramage", "Blaise Agüera y Arcas" ], "title": "Federated learning of deep networks using model averaging", "venue": "CoRR, abs/1602.05629,", "year": 2016 }, { "authors": [ "Daniel Peterson", "Pallika Kanani", "Virendra J Marathe" ], "title": "Private federated learning with domain adaptation", "venue": "arXiv preprint arXiv:1912.06733,", "year": 2019 }, { "authors": [ "Felix Sattler", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "Clustered federated learning: Modelagnostic distributed multitask optimization under privacy constraints", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet Talwalkar" ], "title": "Federated multi-task learning", "venue": "CoRR, abs/1705.10467,", "year": 2017 }, { "authors": [ "Canh T Dinh", "Nguyen Tran", "Tuan Dung Nguyen" ], "title": "Personalized federated learning with moreau envelopes", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Kangkang Wang", "Rajiv Mathews", "Chloé Kiddon", "Hubert Eichner", "Françoise Beaufays", "Daniel 
Ramage" ], "title": "Federated evaluation of on-device personalization", "venue": null, "year": 1910 }, { "authors": [ "Yuhui Xu", "Yongzhuang Wang", "Aojun Zhou", "Weiyao Lin", "Hongkai Xiong" ], "title": "Deep neural network compression with single and multiple level quantization", "venue": "arXiv preprint arXiv:1803.03289,", "year": 2018 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "CoRR, abs/1806.00582,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Federated learning (FL) has shown great promise in recent years for training a single global model over decentralized data. While seminally motivated by effective inference on a general test set similar in distribution to the decentralized data in aggregate (McMahan et al., 2016; Bonawitz et al., 2019), here we focus on federated learning from a client-centric or personalized perspective. We aim to enable stronger performance on personalized target distributions for each participating client. Such settings can be motivated by cross-silo FL, where clients are autonomous data vendors (e.g. hospitals managing patient data, or corporations carrying customer information) that wish to collaborate without sharing private data (Kairouz et al., 2019). Instead of merely being a source of data and model training for the global server, clients can then take on a more active role: their federated participation may be contingent on satisfying client-specific target tasks and distributions. A strong FL framework in practice would then flexibly accommodate these objectives, allowing clients to optimize for arbitrary distributions simultaneously in a single federation.\nIn this setting, FL’s realistic lack of an independent and identically distributed (IID) data assumption across clients may be both a burden and a blessing. Learning a single global model across non-IID data batches can pose challenges such as non-guaranteed convergence and model parameter divergence (Hsieh et al., 2019; Zhao et al., 2018; Li et al., 2020). Furthermore, trying to fine-tune these global models may result in poor adaptation to local client test sets (Jiang et al., 2019). However, the non-IID nature of each client’s local data can also provide useful signal for distinguishing their underlying local data distributions, without sharing any data. We leverage this signal to propose a new framework for personalized FL. 
Instead of giving all clients the same global model average weighted by local training size as in prior work (McMahan et al., 2016), for each client we compute a weighted combination of the available models to best align with that client's interests, modeled by evaluation on a personalized target test distribution.
Key here is that after each federating round, we maintain the client-uploaded parameters individually, allowing clients in the next round to download these copies independently of each other. Each federated update is then a two-step process: given a local objective, clients (1) evaluate how well their received models perform on their target task and (2) use these respective performances to weight each model's parameters in a personalized update. We show that this intuitive process can be thought of as a particularly coarse version of popular iterative optimization algorithms such as SGD, where instead of directly accessing other clients' data points and iteratively training our model with the granularity of gradient descent, we limit ourselves to working with their uploaded models. We hence propose an efficient method to calculate these optimal combinations for each client, calling it FedFomo, as (1) each client's federated update is calculated with a simple first-order model optimization approximating a personalized gradient step, and (2) it draws inspiration from the "fear of missing out", with every client no longer necessarily factoring in contributions from all active clients during each federation round. In other words, curiosity can kill the cat. Each client's personalized performance can be preserved, however, by excluding unhelpful models from its federated update.
We evaluate our method on federated image classification and show that it outperforms other methods in various non-IID scenarios.
∗Corresponding author; work done while interning at NVIDIA
Furthermore, we show that because we compute federated updates directly with respect to client-specified local objectives, our framework can also optimize for out-of-distribution performance, where clients' target distributions are different from their local training ones. In contrast, prior work that personalizes based on similarity to a client's own model parameters (Mansour et al., 2020; Sattler et al., 2020) restricts this optimization to the local data distribution. We thus enable new features in personalized FL, and empirically demonstrate up to 70% improvement in some settings, with larger gains as the number of clients or level of non-IIDness increases." }, { "heading": "Our contributions", "text": "1. We propose a flexible federated learning framework that allows clients to personalize to specific target data distributions irrespective of their available local training data.
2. Within this framework, we introduce a method to efficiently calculate the optimal weighted combination of uploaded models as a personalized federated update.
3. Our method strongly outperforms other methods in non-IID federated learning settings." }, { "heading": "2 RELATED WORK", "text": "Federated Learning with Non-IID Data While fine-tuning a global model on a client's local data is a natural strategy to personalize (Mansour et al., 2020; Wang et al., 2019), prior work has shown that non-IID decentralized data can introduce challenges such as parameter divergence (Zhao et al., 2018), data distribution biases (Hsieh et al., 2019), and unguaranteed convergence (Li et al., 2020). Several recent methods then try to improve the robustness of global models under heavily non-IID datasets. FedProx (Li et al., 2020) adds a proximal term to the local training objective to keep updated parameters close to the original downloaded model. This serves to reduce the potential weight divergence defined in Zhao et al. 
(2018), who instead allow clients to share small subsets of their data among each other. This effectively makes each client’s local training set closer in distribution to the global test set. More recently, Hsu et al. (2019) propose to add momentum to the global model update in FedAvgM to reduce the possibly harmful oscillations associated with averaging local models after several rounds of stochastic gradient descent for non-identically distributed data.\nWhile these advances may make a global model more robust across non-IID local data, they do not directly address local-level data distribution performance relevant to individual clients. Jiang et al. (2019) argue this latter task may be more important in non-IID FL settings, as local training data differences may suggest that only a subset of all potential features are relevant to each client. Their target distributions may be fairly different from the global aggregate in highly personalized scenarios, with the resulting dataset shift difficult to handle with a single model.\nPersonalized Federated Learning Given the challenges above, other approaches train multiple models or personalizing components to tackle multiple target distributions. Smith et al. (2017) propose multi-task learning for FL with MOCHA, a distributed MTL framework that frames clients as tasks and learns one model per client. Mixture methods (Deng et al., 2020; Hanzely & Richtárik,\n2020; Mansour et al., 2020) compute personalized combinations of model parameters from training both local models and the global model, while Peterson et al. (2019) ensure that this is done with local privacy guarantees. Liang et al. (2020) apply this mixing across network layers, with lower layers acting as local encoders that map a client’s observed data to input for a globally shared classifier. Rather than only mix with a shared global model, our work allows for greater control and distinct mixing parameters with multiple local models. Fallah et al. 
(2020) instead optimize the global model for fast personalization through meta-learning, while T Dinh et al. (2020) train global and local models under regularization with Moreau envelopes. Alternatively, Clustered FL (Sattler et al., 2020; Ghosh et al., 2020; Briggs et al., 2020; Mansour et al., 2020) assumes that inherent partitions or data distributions exist behind clients' local data, and aims to cluster these partitions to federate within each cluster. Our work does not restrict which models are computed together, allowing clients to download suitable models independently. We also compute client-specific weighted averages for greater personalization. Finally, unlike prior work, we allow clients to receive personalized updates for target distributions different from their local training data." }, { "heading": "3 FEDERATED FIRST ORDER MODEL OPTIMIZATION", "text": "We now present FedFomo, a personalized FL framework to efficiently compute client-optimizing federated updates. We adopt the general structure of most FL methods, where we iteratively cycle between downloading model parameters from server to client, training the models locally on each client's data, and sending back the updated models for future rounds. However, as we do not compute a single global model, each federated download introduces two new steps: (1) figuring out which models to send to which clients, and (2) computing their personalized weighted combinations. We define our problem and describe how we accomplish (1) and (2) in the following sections.
Problem Definition and Notation Our work most naturally applies to heterogeneous federated settings where participating clients are, critically, not restricted to a single local training or target test distribution, and a priori we know nothing about these distributions. 
To model this, let $\mathcal{C}$ be a population with $|\mathcal{C}| = K$ total clients, where each client $c_i \in \mathcal{C}$ carries local data $\mathcal{D}_i$ sampled from some distribution $\mathcal{D}$ and local model parameters $\theta_i^{\ell(t)}$ during any round $t$. Each $c_i$ also maintains some personalized objective or task $\mathcal{T}_i$ motivating their participation in the federation. We focus on supervised classification as a universal task setting. Each client and task are then associated with a test dataset $\mathcal{D}_i^{\text{test}} \sim \mathcal{D}^*$. We define each $\mathcal{T}_i := \min \mathcal{L}(\theta_i^{\ell(t)}; \mathcal{D}_i^{\text{test}})$, where $\mathcal{L}(\theta; \mathcal{D}) : \Theta \mapsto \mathbb{R}$ is the loss function associated with dataset $\mathcal{D}$, and $\Theta$ denotes the space of models possible with our presumed network architecture. We assume no knowledge regarding clients and their data distributions, nor that test and local data belong to the same distribution. We aim to obtain the optimal set of model parameters $\{\theta_1^*, \ldots, \theta_K^*\} = \arg\min \sum_{i \in [K]} \mathcal{L}_{\mathcal{T}_i}(\theta_i)$." }, { "heading": "3.1 COMPUTING FEDERATED UPDATES WITH FOMO", "text": "Unlike previous work in federated learning, FedFomo learns optimal combinations of the available server models for each participating client. To do so, we leverage information from clients in two different ways. First, we aim to directly optimize for each client's target objective. We assume that clients can distinguish between good and bad models on their target tasks through the use of a labeled validation data split $\mathcal{D}_i^{\text{val}} \subset \mathcal{D}_i$ in the client's local data. $\mathcal{D}_i^{\text{val}}$ should be similar in distribution to the target test dataset $\mathcal{D}_i^{\text{test}}$. The client can then evaluate any arbitrary model $\theta_j$ on this validation set, and quantify the performance through the computed loss, denoted by $\mathcal{L}_i(\theta_j)$. Second, we directly leverage the potential heterogeneity among client models. Zhao et al. (2018) explore this phenomenon as a failure mode for traditional single-model FL, where they show that diverging model weights come directly from local data heterogeneity. 
However, instead of combining these parameters into a single global model, we maintain the uploaded models individually as a means to preserve a model's potential contribution to another client. Critically, these two ideas together not only allow us to compute more personal model updates within non-IID local data distributions, but also enable clients to optimize for data distributions different from their own local data's.
Federated learning as an iterative local model update The central premise of our work stems from viewing each federated model download, and the subsequent change of local model parameters, as an optimization step towards some objective. In traditional FL, this objective involves performing well on the global population distribution, similar in representation to the union of all local datasets. Assuming $N$ federating clients, we compute each global model $\theta^G$ at time $t$ as $\theta^{G(t)} = \sum_{n=1}^{N} w_n \cdot \theta_n^{\ell(t)}$, where $w_n = |\mathcal{D}_n^{\text{train}}| / \sum_{j=1}^{N} |\mathcal{D}_j^{\text{train}}|$. If client $c_i$ downloads this model, we can view this change to their local model as an update $\theta_i^{\ell(t+1)} \leftarrow \theta_i^{\ell(t)} + \sum_{n=1}^{N} w_n \cdot (\theta_n^{\ell(t)} - \theta_i^{\ell(t)})$, since $\sum_n w_n = 1$. This then updates a client's current local model parameters in directions specified by the weights $w$ and models $\{\theta_n\}$ in the federation. A natural choice to optimize for the global target distribution sets $w_n$ as above and in McMahan et al. (2017), e.g. as an unbiased estimate of global model parameters. However, in our personalized scenario, we are more interested in computing the update uniquely with respect to each client's target task. We then wish to find the optimal weights $w = \langle w_1, \ldots, w_N \rangle$ that optimize for the client's objective, minimizing $\mathcal{L}_i(\theta_i^{\ell})$.
Efficient personalization with FedFomo Intuitively, we wish to find models $\{\theta_m^{\ell(t)} : m \in [N] \setminus i\}$ such that moving towards their parameters leads to better performance on our target distribution, and accordingly weight these $\theta$ higher in a model average. 
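The download-as-update view above can be sketched numerically. In the following minimal sketch, model parameters are flattened numpy vectors and the function names are ours, not the paper's:

```python
import numpy as np

def fedavg_weights(train_sizes):
    """FedAvg-style weights w_n, proportional to local training-set size."""
    sizes = np.asarray(train_sizes, dtype=float)
    return sizes / sizes.sum()

def federated_update(theta_i, client_models, w):
    """Apply theta_i + sum_n w_n * (theta_n - theta_i).

    Because the weights sum to 1, this equals the plain weighted average
    sum_n w_n * theta_n, which is the equivalence used in the text.
    """
    theta_i = np.asarray(theta_i, dtype=float)
    client_models = np.asarray(client_models, dtype=float)
    step = sum(w_n * (theta_n - theta_i) for w_n, theta_n in zip(w, client_models))
    return theta_i + step
```

With size-proportional weights this reproduces the FedAvg average; FedFomo instead substitutes client-specific weights into the same update form.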
If a client carries a sufficient number of local data points associated with their target objective $\mathcal{L}_i$, then they could obtain a reasonable model through local training alone, e.g. directly updating their model parameters through SGD:
$$\theta_i^{\ell(t+1)} \leftarrow \theta_i^{\ell(t)} - \alpha \nabla_{\theta} \mathcal{L}_i(\theta_i^{\ell(t)}) \quad (1)$$
However, without this data, clients are more motivated to federate. In doing so they obtain useful updates, albeit in the more restricted form of fixed model parameters $\{\theta_n : n \in [N]\}$. Then for personalized or non-IID target distributions, we can iteratively solve for the optimal combination of client models $w^* = \arg\min_w \mathcal{L}_i(\theta)$ by computing:
$$\theta_i^{\ell(t+1)} \leftarrow \theta_i^{\ell(t)} - \alpha \mathbf{1}^{\top} \nabla_{w} \mathcal{L}_i(\theta_i^{\ell(t)}) \quad (2)$$
where $\mathbf{1}$ is a size-$N$ vector of ones. Unfortunately, as the larger federated learning algorithm is already an iterative process with many rounds of communication, computing $w^*$ through Eq. 2 may be cumbersome. Worse, if the model averages are only computed server-side as in traditional FL, Eq. 2 becomes prohibitively expensive in communication rounds (McMahan et al., 2017).
Following this line of reasoning, we thus derive an approximation of $w^*$ for any client: given previous local model parameters $\theta_i^{\ell(t-1)}$, the set of fellow federating models available to download $\{\theta_n^{\ell(t)}\}$, and the local client objective captured by $\mathcal{L}_i$, we propose weights of the form:
$$w_n = \frac{\mathcal{L}_i(\theta_i^{\ell(t-1)}) - \mathcal{L}_i(\theta_n^{\ell(t)})}{\|\theta_n^{\ell(t)} - \theta_i^{\ell(t-1)}\|} \quad (3)$$
where the resulting federated update $\theta_i^{\ell(t)} \leftarrow \theta_i^{\ell(t-1)} + \sum_{n \in [N]} w_n (\theta_n^{\ell(t)} - \theta_i^{\ell(t-1)})$ directly optimizes for client $c_i$'s objective up to a first-order approximation of the optimal $w^*$. We default to the original parameters $\theta_i^{\ell(t-1)}$ if $w_n < 0$ above, i.e. $w_n = \max(w_n, 0)$, and among positive $w_n$ normalize to get final weights $w_n^* = \max(w_n, 0) / \sum_n \max(w_n, 0)$ to maintain $w^* \in [0, 1]$ and $\sum_n w_n^* \in \{0, 1\}$. We derive Eq. 3 as a first-order approximation of $w^*$ in Appendix A.1. 
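A minimal sketch of the Eq. 3 weight computation, with the clipping and normalization steps applied (numpy, parameters flattened to vectors; the helper name and calling convention are illustrative, not from the paper's code):

```python
import numpy as np

def fomo_weights(loss_prev_local, losses_downloaded, theta_prev_local, thetas_downloaded):
    """Eq. 3 weights: (L_i(prev local) - L_i(theta_n)) / ||theta_n - prev local||,
    clipped at zero and normalized among the positive entries."""
    theta_prev_local = np.asarray(theta_prev_local, dtype=float)
    raw = []
    for loss_n, theta_n in zip(losses_downloaded, thetas_downloaded):
        dist = np.linalg.norm(np.asarray(theta_n, dtype=float) - theta_prev_local)
        # positive when theta_n has a smaller validation loss than our previous model
        raw.append((loss_prev_local - loss_n) / dist if dist > 0 else 0.0)
    raw = np.asarray(raw)
    clipped = np.maximum(raw, 0.0)                 # w_n = max(w_n, 0)
    total = clipped.sum()
    final = clipped / total if total > 0 else clipped
    return raw, final                              # raw is reused for the p <- p + w update
```

Models that underperform the client's previous local model thus receive zero weight, and the update falls back to the previous parameters when every candidate is worse.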
Here we note that our formulation captures the intuition of federating with client models that perform better than our own model, e.g. have a smaller loss on $\mathcal{L}_i$. Moreover, we weigh models more heavily as this positive loss delta increases, or as the distance between our current parameters and theirs decreases, in essence most heavily weighing the models that most efficiently improve our performance. We use local parameters at $t-1$ to directly compute how much we should factor in current parameters $\theta_i^{\ell(t)}$, which also helps prevent overfitting, as $\mathcal{L}_i(\theta_i^{\ell(t-1)}) - \mathcal{L}_i(\theta_i^{\ell(t)}) < 0$ causes "early stopping" at $\theta_i^{\ell(t-1)}$.
Communication and bandwidth overhead Because the server can send multiple requested models in one download to any client, we still maintain one round of communication for model downloads and one round for uploads in between E local training epochs. Furthermore, because w in Eq. 3 is simple to calculate, the actual model update can also happen client-side, keeping the total number of communications with T total training epochs at $\lfloor 2T/E \rfloor$, as in FedAvg. However, FedFomo also needs to consider the additional bandwidth from downloading multiple models. While quantization and distillation (Chen et al., 2017; Hinton et al., 2015; Xu et al., 2018) can alleviate this, we also avoid the worst-case $N^2$ overhead with respect to the number of active clients N by restricting the number of models downloaded to M. Whether we can achieve good personalization here involves figuring out which models benefit which clients, and our goal is then to send as many helpful models as possible given limited bandwidth.
To do so, we invoke a sampling scheme where the likelihood of sending model $\theta_j$ to client $c_i$ relies on how well $\theta_j$ performed regarding client $c_i$'s target objective in previous rounds. Accordingly, we maintain an affinity matrix $P$ composed of vectors $p_i = \langle p_{i,1}, \ldots$
$, p_{i,K} \rangle$, where $p_{i,j}$ measures the likelihood of sending $\theta_j$ to client $c_i$, and at each round send the available uploaded models corresponding to the top M values according to each participating client's $p$. Initially we set $P = \mathrm{diag}(1, \ldots, 1)$, i.e. each model has an equal chance of being downloaded. Then during each federated update, we update $p \leftarrow p + w$ from Eq. 3, where $w$ can now be negative. If $N \ll K$, we may benefit from additional exploration, and employ an ε-greedy sampling strategy where instead of picking strictly in order of $p$, we have an ε chance to send a random model to the client. We investigate the robustness of FedFomo to these parameters through ablations of ε and M in the next section." }, { "heading": "4 EXPERIMENTS", "text": "Experimental Setup We consider two different scenarios for simulating non-identical data distributions across federating clients. First we evaluate with the pathological non-IID setup in McMahan et al. (2016), where each client is randomly assigned 2 classes among 10 total classes. We also use a latent distribution non-IID setup, where we first partition our datasets based on feature and semantic similarity, and then sample from them to set up different local client data distributions. We use numbers of distributions $\in \{2, 3, 4, 5, 10\}$ and report the average Earth Mover's Distance (EMD) between local client data and the total dataset across all clients to quantify non-IIDness. We evenly allocate clients among distributions and include further details in Appendix A.5. We evaluate under both setups with two FL scenarios: 15 and 100 clients with 100% and 10% participation respectively, reporting final accuracy after training with E = 5 local epochs per round for 20 communication rounds in the former and 100 rounds in the latter. Based on prior work (McMahan et al., 2016; Liang et al., 2020), we compare methods with the MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009), and CIFAR-100 datasets. 
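As an aside on the setup above, the EMD-based non-IIDness measure can be sketched on class-label histograms. The ground metric used here, an L1 distance between class distributions following the EMD usage in Zhao et al. (2018), is our assumption; the excerpt does not spell out the paper's exact computation:

```python
import numpy as np

def label_distribution(labels, num_classes):
    """Empirical class-probability vector of one client's dataset."""
    counts = np.bincount(np.asarray(labels), minlength=num_classes)
    return counts / counts.sum()

def mean_emd(client_labels, num_classes):
    """Average distance between each client's label distribution and the pooled one.

    Non-IIDness is measured as the L1 distance between class histograms
    (an assumption of this sketch, following Zhao et al. (2018)).
    """
    pooled = label_distribution(np.concatenate(client_labels), num_classes)
    dists = [np.abs(label_distribution(y, num_classes) - pooled).sum()
             for y in client_labels]
    return float(np.mean(dists))
```

Two clients holding fully disjoint halves of a two-class dataset attain the maximum value under this definition, while identically distributed clients score zero.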
We use the same CNN architecture as in McMahan et al. (2016).
Federated Learning Baselines We compare FedFomo against methods broadly falling under two categories: they (1) propose modifications to train a single global model more robust to non-IID local datasets, or (2) aim to train more than one model or model component to personalize performance directly to client test sets. For (1), we consider FedAvg, FedProx, and the 5% data-sharing strategy with FedAvg, while in (2) we compare our method to MOCHA, LG-FedAvg, Per-FedAvg, pFedMe, Clustered Federated Learning (CFL), and a local training baseline. All accuracies are reported with mean and standard deviation over three runs, with local training epochs E = 5, the same number of communication rounds (20 for 15 clients, 100% participation; 100 for 100 clients, 10% participation), and learning rates 0.01 for MNIST and 0.1 for CIFAR-10. We implemented all results.1
Pathological Non-IID We follow precedent and report accuracy after assigning two classes out of the ten to each client for the pathological setting in Table 1. Across datasets and client setups, our proposed FedFomo strongly outperforms alternative methods in settings with larger numbers of clients, and achieves competitive accuracy in the 15 client scenario. In the larger 100 client scenario, each individual client participates less frequently but also carries less local training data. Such settings motivate a higher demand for efficient federated updates, as there are fewer training rounds for each client overall. Meanwhile, methods that try to train a single robust model perform with mixed success over the FedAvg baseline, and notably do not perform better than local training alone. Despite the competitive performance, we note that this pathological setting is not the most natural scenario to apply FedFomo. 
In particular, when there are fewer clients and each client's target distribution carries only 2 random classes, there is no guarantee that any two clients share the same objective such that they can clearly benefit each other. With more clients, however, we can also expect higher frequencies of target distribution overlap, and accordingly find that we outperform all other methods.
Latent Distribution Non-IID We next report how each FL method performs in the latent distribution setting in Table 2, with additional results in Fig. 1. Here we study the relative performance of FedFomo across various levels of statistical heterogeneity, and again show that our method strongly outperforms others in highly non-IID settings. The performance gap widens as local datasets become more non-IID, where global FL methods may suffer more from combining increasingly divergent weights while also experiencing high target data distribution shift (quantified with higher EMD) due to local data heterogeneity. Sharing a small amount of data among clients uniformly helps, as does actively trying to reduce this divergence through FedProx, but higher performance most convincingly comes from methods that do not rely on a single model. The opposite trend occurs with local training, as more distributions over the same 10 or 100 classes lead to smaller within-distribution variance. Critically, FedFomo is competitive with local training in the most extreme non-IID case while strongly outperforming FedAvg, and outperforms both in moderately non-IID settings (EMD ∈ [1, 2]), suggesting that we can selectively leverage model updates that best fit client objectives to justify federating.
1LG-FedAvg and MOCHA were implemented with code from github.com/pliang279/LG-FedAvg. pFedMe and Per-FedAvg were implemented with code from github.com/CharlieDinh/pFedMe. CFL was implemented with code from github.com/felisat/clustered-federated-learning.
When data is more IID, any individual client model may benefit another, and it becomes harder for a selective update to beat a general model average. FedFomo also outperforms personalizing-component and multi-model approaches (MOCHA and LG-FedAvg), where regarding data heterogeneity we see similar but weaker and more stochastic trends in performance.
Personalized model weighting We next investigate FedFomo's personalization by learning optimal client-to-client weights over time, visualizing P during training in Fig. 2. We depict clients with the same local data distributions next to each other (e.g. clients 0, 1, 2 belong to distribution 0). Given the initial diagonal P depicting equal weighting for all other clients, we hope FedFomo increases the weights of clients that belong to the same distribution, discovering the underlying partitions without knowledge of client datasets. In Fig. 2a we show this for the 15 client, 5 non-IID latent distribution setting on CIFAR-10 with 5 clients downloaded and ε = 0.3 (lighter = higher weight). These default parameters adjust well to settings with more total clients (Fig. 2b), and when we change the number of latent distributions (and IID-ness) in the federation (Fig. 2c).
Exploration with ε and number of models downloaded M To further understand FedFomo's behavior and convergence in non-IID personalized settings with respect to limited download bandwidth capability, we conduct an ablation over ε and M, reporting results on the 15 client CIFAR-10 5-distribution setting in Fig. 3 over 100 training epochs. We did not find consistent correlation between ε and model performance, although this is tied to M inherently (expecting reduced variance with higher M). 
With fixed ε, greater M led to higher performance, as we can evaluate more models and identify the "correct" model-client assignments earlier on.
Out-of-local-distribution personalization We now consider the non-IID federated setting where each client optimizes for target distributions not the same as their local data distribution. Here, although a client may sufficiently train an adequate model for one domain, it has another target data distribution of interest with hard-to-access relevant data. For example, in a self-driving scenario, a client may not have enough data for certain classes due to geographical constraints, motivating the need to leverage information from others. To simulate this scenario, after organizing data into latent distributions, we randomly shuffle $(\mathcal{D}^{\text{val}}, \mathcal{D}^{\text{test}})$ as a pair among clients. We test on the CIFAR-10 and CIFAR-100 datasets with 15 clients, full participation, and 5 latent distributions, repeating the shuffling five times, and report mean accuracy over all clients.

Method            CIFAR-10         CIFAR-100
Local Training    20.39 ± 3.36      7.40 ± 1.31
FedAvg            23.11 ± 2.51     13.06 ± 1.48
FedAvg + Data     42.15 ± 2.16     24.98 ± 4.98
FedProx           39.79 ± 8.57     14.39 ± 2.85
LG-FedAvg         38.95 ± 1.85     18.50 ± 1.10
MOCHA             30.80 ± 2.60     13.73 ± 2.83
Clustered FL      29.73 ± 3.67     19.75 ± 1.58
Per-FedAvg        39.8 ± 5.38      21.30 ± 1.35
pFedMe            43.7 ± 7.27      25.41 ± 2.33
Ours (n=5)        64.06 ± 2.80     34.43 ± 1.48
Ours (n=10)       63.98 ± 1.81     40.94 ± 1.62

Table 3: Out-of-client distribution evaluation with 5 latent distributions and 15 clients. FedFomo outperforms all alternatives in various datasets.
As shown in Fig. 4 and Table 3, our method consistently strongly outperforms alternatives in both non-IID CIFAR-10 and CIFAR-100 federated settings. We compare methods using the same train and test splits randomly shuffled between clients, such that through shuffling we encounter potentially large amounts of data variation between a client's training data and its test set. 
This then supports the validity of the validation split and downloaded model evaluation components in our method to uniquely optimize for arbitrary data distributions different from a client's local training data. All methods other than ours are unable to convincingly handle optimizing for a target distribution that is different from the client's initially assigned local training data. Sharing data expectedly stands out among other methods that do not directly optimize for a client's objective, as each client then increases the label representation overlap between its train and test sets. We note that in the 2-distribution setting, where each client's training data consists of 5 classes on average, the higher performance of other methods may likely be a result of our simulation, where with only two distributions to shuffle between it is more likely that more clients end up with the same test distribution.
To shed further light on FedFomo's performance, we visualize how client weights evolve over time in this setting (Fig. 4 bottom), where to effectively personalize for one client, FedFomo should specifically increase the weights for the other clients belonging to the original client's target distribution. Furthermore, in the optimal scenario we should upweight all clients with this distribution while downweighting the rest. Here we show that this indeed seems to be the case, denoting local training distributions with color. We depict clients 12, 13, and 14, which all carry the same local data distribution, but 13 and 14 optimize for out-of-local distributions. In all cases, FedFomo upweights clients specifically carrying the same data distribution, such that while with shuffling we do not know a priori 13 and 14's target distributions, FedFomo discovers these and who should federate with whom in this setting as well. We include similar plots for all clients in Appendix A.2 (Fig. 6).
Locally Private FedFomo While we can implement FedFomo such that downloaded model parameters are inaccessible and any identifying connections between clients and their uploaded models are removed to subsequently preserve anonymity, unique real-world privacy concerns may arise when sharing individual model parameters. Accordingly, we now address training FedFomo under (ε, δ)-differential privacy (DP). Dwork et al. (2014) present further details, but briefly, DP ensures that given two near-identical datasets, the probability that querying one produces a result is nearly the same as querying the other (under control by ε and δ). Particularly useful here are DP's composability and robustness to post-processing, which ensure that if we train model parameters θ to satisfy DP, then any function of θ is also DP. We then perform local training with DP-SGD (Abadi et al., 2016) for a DP variant of FedFomo, which adds a tunable amount of Gaussian noise to each gradient and reduces the connection between a model update and individual samples in the local training data. More noise makes models more private at the cost of performance, and here we investigate if FedFomo retains its performance with increased privacy under noisy local updates.
We consider the in-distribution personalization task with 5 latent non-IID distributions from the CIFAR-10 and CIFAR-100 datasets, with 15 clients and full participation at each round, and compare FedFomo against FedAvg with varying levels of Gaussian noise, specified by σ. With all other parameters fixed, higher σ should enable more noisy updates and greater privacy (lower ε), at the potential cost of performance. At fixed δ, we wish to obtain high classification accuracy and low ε. We use the Opacus PyTorch library2 for DP-SGD, and as baselines run FedFomo and FedAvg with the library's provided SGD optimizer with σ = 0. 
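The clip-and-noise step that DP-SGD adds to local training can be sketched generically as below; the actual experiments use the Opacus library, and the hyperparameter values in this sketch are illustrative only, not the paper's settings:

```python
import numpy as np

def dp_sgd_step(theta, per_sample_grads, lr=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    """One DP-SGD-style step: clip each per-sample gradient, sum, add Gaussian
    noise scaled by sigma * clip_norm, average, and descend (Abadi et al., 2016).
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        g = np.asarray(g, dtype=float)
        norm = np.linalg.norm(g)
        # rescale so each sample's gradient has norm at most clip_norm
        clipped.append(g * min(1.0, clip_norm / norm) if norm > 0 else g)
    noise = rng.normal(0.0, sigma * clip_norm, size=np.shape(theta))
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return np.asarray(theta, dtype=float) - lr * noisy_mean
```

Clipping bounds each sample's influence on the update, which is what makes the added Gaussian noise yield an (ε, δ) guarantee; larger σ tightens privacy at the cost of noisier updates.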
For DP runs, we set $\delta = 1 \times 10^{-5} < 3 \times 10^{-4}$, the latter being the inverse of the average number of local data points of each client, to maintain reasonable privacy.
In Table 4, FedFomo is able to retain a sizeable improvement over FedAvg, even against the non-DP FedAvg, and does so with minimal ε. As expected, greater σ leads to improved privacy (lower ε) at the cost of decreased performance. Additionally, in Fig. 5 we show that even with noisy gradients to protect individual data point privacy, FedFomo maintains its ability to discover the larger latent distributions among local data (albeit with more noise initially). Most importantly, despite adding noise that could potentially derail our federated update, we are able to substantially reduce privacy violation risks under (ε, δ)-differential privacy while maintaining strong performance." }, { "heading": "5 CONCLUSION", "text": "We present FedFomo, a flexible personalized FL framework that achieves strong performance across various non-IID settings, and uniquely enables clients to also optimize for target distributions distinct from their local training data. To do so, we capture the intuition that clients should download personalized weighted combinations of other models based on how suitable they are for the client's own target objective, and propose a method to efficiently calculate such optimal combinations by downloading individual models in lieu of previously used model averages. Beyond outperforming alternative personalized FL methods, we empirically show that FedFomo is able to discover the underlying local client data distributions, and for each client specifically upweights the other models trained on data most aligned to the client's target objective. 
We finally explore how our method behaves with additional privacy guarantees, and show that we can still preserve the core functionality of FedFomo and maintain strong personalization in federated settings.
2github.com/pytorch/opacus" }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DERIVING THE FOMO UPDATE", "text": "Recall that each federated model download can be viewed as an iterative update,
$$\theta_i^{\ell(t+1)} = \theta_i^{\ell(t)} + \sum_{n=1}^{N} w_n \cdot \big(\theta_n^{\ell(t)} - \theta_i^{\ell(t)}\big) \quad (4)$$
where given a client's current parameters $\theta_i^{\ell(t)}$, the weights $w = \langle w_1, \ldots, w_N \rangle$ in conjunction with the model deltas $\theta_n^{\ell(t)} - \theta_i^{\ell(t)}$ determine how much each client should move its local model parameters to optimize for some objective. Unlike more common methods in machine learning such as gradient descent, the paths we can take to get to this objective are restricted by the fixed model parameters $\{\theta_n^{\ell(t)}\}$ available to us at time $t$. While traditional FL methods presume this objective to be global test set performance, from a client-centric perspective we should be able to set this objective with respect to any dataset or target distribution of interest.
We then view this problem as a constrained optimization problem where $\sum_{n \in [N]} w_n = 1$. As a small discrepancy, if $i \in [N]$, then to also calculate $w_i$, or how much client $c_i$ should weigh its own model in the federated update directly, we reparameterize Eq. 4 as an update from a version of the local model prior to its current state, e.g.
$$\theta_i^{\ell(t+1)} = \theta_i^{\ell(t-1)} + \sum_{n=1}^{N} w_n \cdot \big(\theta_n^{\ell(t)} - \theta_i^{\ell(t-1)}\big) \quad (5)$$
and again we have a total budget of 1 to allocate to all weights $w$. Additionally, to go along with Eq. 5, we deviate a bit from the optimal $t+1$ term in Eq. 2 and set
$$\theta_i^{\ell(t+1)} = \theta_i^{\ell(t)} \leftarrow \theta_i^{\ell(t-1)} - \alpha \mathbf{1}^{\top} \nabla_w \mathcal{L}_i(\theta_i^{\ell(t-1)}) \quad (6)$$
There is then a parallel structure between Eq. 5 and Eq. 6, and we proceed by trying to find the optimal $w$ that would let our update in Eq. 6 closely approximate the optimal update taking the gradient $\nabla_w$. 
We accordingly note the equivalence between Eq. 5 and Eq. 6, where for the desired w_n,\n∑_{n=1}^{N} w_n · (θ_n^{ℓ(t)} − θ_i^{ℓ(t−1)}) = −α 1^⊤ ∇_w L_i(θ_i^{ℓ(t−1)})    (7)\nor, stacking the N components as vectors and equating them componentwise:\n[w_1; …; w_N]^⊤ [θ_1^{ℓ(t)} − θ_i^{ℓ(t−1)}; …; θ_N^{ℓ(t)} − θ_i^{ℓ(t−1)}] = [−α; …; −α]^⊤ [∂L_i(θ_i^{ℓ(t−1)})/∂w_1; …; ∂L_i(θ_i^{ℓ(t−1)})/∂w_N]    (8)\nThen for each weight w_n, we solve for its optimal value by equating the corresponding left- and right-hand vector components. We do so by deriving a first-order approximation of ∂L_i(θ_i^{ℓ(t−1)})/∂w_n. First, for each w_n, we define the function:\nϕ_n(w) := w_n · θ_n^{ℓ(t)} + (1 − w_n) · θ_i^{ℓ(t−1)}    (9)\nas an alternate parameterization of the θ’s as functions of the weights. We can see that for all n ∈ [N],\nϕ_n(0) = θ_i^{ℓ(t−1)}  ⇒  ∂L_i(θ_i^{ℓ(t−1)})/∂w_n = ∂L_i(ϕ_n(0))/∂w_n\nThen using a first-order Taylor series approximation, we also note that\nL_i(ϕ_n(w′)) ≈ L_i(ϕ_n(0)) + (∂L_i(ϕ_n(0))/∂w_n) · (w′ − 0)    (10)\nsuch that at our initial point w = 0, i.e. θ_i^{ℓ(t−1)}, we can approximate the derivative ∂L_i(ϕ_n(0))/∂w_n when w′ = 1 as:\n∂L_i(ϕ_n(0))/∂w_n = L_i(ϕ_n(1)) − L_i(ϕ_n(0))  ⇒  ∂L_i(θ_i^{ℓ(t−1)})/∂w_n = L_i(θ_n^{ℓ(t)}) − L_i(θ_i^{ℓ(t−1)})    (11)\nfollowing from Eq. 9. Then for each vector element in Eq. 8, indexed by n ∈ [N], we can plug in the corresponding partial derivative from Eq. 11 and solve for the corresponding w_n to get\nw_n = −α · (L_i(θ_n^{ℓ(t)}) − L_i(θ_i^{ℓ(t−1)})) / ‖θ_n^{ℓ(t)} − θ_i^{ℓ(t−1)}‖    (12)\nas the individual weight for client c_i to weight model θ_n in its federated update.\n\nWe arrive at Eq. 3 by distributing the negative α to capture the right direction in each update, but also note that the constant cancels out because we normalize to ensure our weights sum to 1, such that the weights w_n^* that we actually use in practice are given by:\nw_n^* = max(w_n, 0) / ∑_{n=1}^{N} max(w_n, 0)    (13)" }, { "heading": "A.2 ADDITIONAL LATENT DISTRIBUTION NON-IID EXPERIMENTS", "text": "CIFAR-100 Here we show results on the latent non-IID in-distribution personalization setup for the CIFAR-100 dataset. 
As in the CIFAR-10 setting, we compare FedFomo against various recent alternative methods when personalizing to a target distribution that is the same as the client’s local training data, and report accuracy as an average over all client runs. We also show results partitioning the CIFAR-100 dataset into an increasing number of data distributions for 15 clients total, and report the increasing EMD in parentheses. In Table 5, FedFomo consistently outperforms all alternatives with more non-IID data across different clients. We note similar patterns to those of the CIFAR-10 dataset, where our method is more competitive when client data is more similar (lower EMD, fewer distributions), but handily outperforms others as we increase this statistical label heterogeneity." }, { "heading": "A.3 CLIENT WEIGHTING WITH PERSONALIZATION", "text": "In-local vs out-of-local distribution personalization Following the visualizations for client weights in the out-of-local distribution personalization setting (Fig. 4), we include additional visualizations for the remaining clients (Fig. 6). For comparison, we also include the same visualizations for the 15 client 5 non-IID latent distribution setup on CIFAR-10, but when clients optimize for a target distribution the same as their local training data’s (Fig. 7). In both, we use color to denote the client’s local training data distribution, such that if FedFomo is able to identify the right clients for each client to federate with, we should see the weights for those colors increase or remain steady over federation rounds, while all other client weights drop.\n\nAs seen in both Fig. 6 and Fig. 7, FedFomo quickly downweights clients with unhelpful data distributions. For the in-distribution personalization, it is able to increase and maintain higher weights for the clients from the same distribution, and consistently does so for the other two clients that belong to its distribution. 
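As context for the EMD values reported above: for label distributions, EMD under a 0/1 ground metric between class bins (an assumption on our part, since the text does not spell out the ground metric) reduces to total variation distance, i.e. half the L1 distance between normalized label histograms. A minimal sketch:

```python
import numpy as np

def label_emd(client_labels, reference_dist, num_classes):
    """EMD between a client's empirical label distribution and a reference
    distribution, under a 0/1 ground metric over class bins (in which case
    EMD equals total variation = 0.5 * L1 distance). Illustrative sketch."""
    counts = np.bincount(np.asarray(client_labels), minlength=num_classes)
    p = counts / counts.sum()  # normalized label histogram
    return 0.5 * float(np.abs(p - np.asarray(reference_dist, float)).sum())
```

A client holding only one of two equally likely classes is at EMD 0.5 from the population distribution, while a balanced client is at EMD 0.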
In the out-of-local distribution personalization setting, due to our shuffling procedure we have instances where certain clients have in-distribution targets, while others have out-of-distribution targets. We see that FedFomo is able to accommodate both simultaneously, and learns to separate all clients belonging to the target distributions of each client from the rest." }, { "heading": "A.4 ADDITIONAL PRIVACY EXPERIMENTS", "text": "As a follow-up on the privacy experiments in Section 4, we also consider a multiple model variant of FedFomo, where instead of a client downloading a single model θ_n and evaluating it against its own previous model θ_i^{t−1}, the client downloads the simple average of all the uploaded models except θ_n (i.e. (1/(N−1)) ∑_{j∈[N]∖{n}} θ_j) and compares this against the simple average of all uploaded models. This tackles an orthogonal notion of privacy compared to the previous solution of introducing noise to local model gradients via DP-SGD, as now individual data point membership is harder to distill from shared parameters that come from the average of multiple local models. To calculate weights, we note a sign change with respect to Eq. 3 and the baseline model, as now w_n should be positive if the model average without θ_n’s contribution results in a larger target objective loss than the model average with θ_n. Given client c_i considering model θ_n, this leads to FedFomo weights:\nw_n ∝ L_i( (1/(N−1)) ∑_{j∈[N]∖{n}} θ_j ) − L_i( (1/N) ∑_{j∈[N]} θ_j )    (14)\nWe evaluate this variant with the same comparison over (ε, δ)-differential privacy parameters on the 15 client 5 latent-distribution scenarios in our previous privacy analysis. We set δ = 1 × 10−5 to set up practical privacy guarantees with respect to the number of datapoints in each client’s local training set, and consider Gaussian noise σ ∈ {0, 1, 2} for baseline and (ε, δ)-differentially private performances. 
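The leave-one-out weighting of Eq. 14 can be sketched as follows; this is an illustrative numpy sketch (the `loss_fn` argument stands in for the client's validation loss L_i, and the names are ours, not the released code):

```python
import numpy as np

def fomo_ma_weights(loss_fn, thetas):
    """Model-average FedFomo weights (Eq. 14): w_n compares the loss of the
    leave-one-out average (all uploaded models except theta_n) against the
    loss of the full average, so w_n > 0 exactly when including theta_n
    lowers the client's target loss; negatives are clipped and the rest
    normalized, as in the single-model variant. Illustrative sketch."""
    thetas = [np.asarray(t, float) for t in thetas]
    N = len(thetas)
    total = sum(thetas)
    base = loss_fn(total / N)                      # loss of the full average
    w = np.array([loss_fn((total - thetas[n]) / (N - 1)) - base
                  for n in range(N)])
    w = np.maximum(w, 0.0)
    s = w.sum()
    return w / s if s > 0 else w
```

With a toy validation loss that simply measures distance to a target parameter, the one model closest to the target ends up with all of the weight, since removing it hurts the average while removing either of the others helps it.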
At fixed δ, we wish to obtain high classification accuracy with low privacy loss (ε).\nIn Table 6 we include results for this model average baseline variant (Ours (MA)) on the CIFAR-10 and CIFAR-100 datasets, along with the differentially private federated classification results in Table 4 using DP-SGD during local training for additional context. For both datasets, we still handily outperform non-private FedAvg, although performance drops considerably with respect to the single model download FedFomo variant. We currently hypothesize that this may be due to a noisier calculation of another model’s potential contribution to the client’s current model, as we now consider the effects of many more models in our loss comparisons as well. Finding a balance between the two presented weighting schemas, to attain both high personalization and high privacy when downloading model averages, remains interesting future work." }, { "heading": "A.5 LATENT DISTRIBUTION NON-IID MOTIVATION AND SETUP", "text": "In this subsection, we discuss our latent distribution non-IID setting in more detail. We believe that the pathological setup, though useful, might not represent the more realistic or frequently occurring scenarios. As an example, a world-wide dataset of road landscapes may vary greatly across different data points, but variance in their feature representations can commonly be explained by their location. In another scenario, we can imagine that certain combinations of songs, or genres of music altogether, are more likely to be liked by the same person than others. In fact, the very basis and success of popular recommender system algorithms such as collaborative filtering and latent factor models rely on this scenario (Hofmann, 2004). Accordingly, statistical heterogeneity and client local data non-IIDness are more likely to occur in groups.\n\nWe thus propose and utilize a latent distribution method to evaluate FedFomo against other recently proposed FL work. 
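The partition itself (detailed in the next paragraph: VGG-11 features, PCA to 256 dimensions, K-Means into D clusters, even client assignment) can be sketched end to end. This toy numpy stand-in uses generic embedding vectors, 2 PCA components, and a bare-bones k-means loop, so shapes and names here are illustrative assumptions rather than the actual implementation:

```python
import numpy as np

def partition_latent(embeddings, num_dists, num_clients, seed=0, iters=20):
    """Toy latent non-IID partition: PCA-reduce embeddings, k-means them into
    `num_dists` cluster 'distributions', then assign clients evenly to
    distributions (each client would then sample its local data from its
    assigned cluster without replacement)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(embeddings, float)
    X = X - X.mean(axis=0)
    # PCA via SVD (stand-in for the 4096-dim VGG features -> 256-dim PCA step)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[: min(2, Vt.shape[0])].T
    # Bare-bones k-means on the reduced embeddings
    centers = Z[rng.choice(len(Z), size=num_dists, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(num_dists):
            if np.any(assign == k):
                centers[k] = Z[assign == k].mean(axis=0)
    client_dist = np.arange(num_clients) % num_dists  # even client assignment
    return assign, client_dist
```

On two well-separated blobs of embeddings, the cluster labels partition the data into two latent distributions, and clients are assigned to distributions round-robin.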
To use this setting, we first compute image representations by training a VGG-11 convolutional neural network to at least 85% classification accuracy on a corresponding dataset. We then run inference on every data point, and treat the 4096-dimensional vector produced in the second fully-connected layer as a semantic embedding for each individual image. After further reduction to 256 dimensions through PCA, we use K-Means clustering to partition our dataset into D disjoint distributions. Given K total clients, we then evenly assign each client to a distribution D. For each client we finally obtain its local data by sampling randomly from D without replacement. For datasets with pre-defined train and test splits, we cluster embeddings from both at the same time such that similar images across splits are assigned the same K-Means cluster, and respect these original splits such that all Dtest images come from the original test split. (Fig. 8)" }, { "heading": "A.6 MODEL IMPLEMENTATION DETAILS", "text": "We train with SGD, 0.1 learning rate, 0 momentum, 1e-4 weight decay, and 0.99 learning rate decay for CIFAR-10/100, and do the same except with 0.01 learning rate for MNIST. For FedFomo we use n = 5 and n = 10 downloads per client, ε = 0.3 with 0.05 decay each round, and separate Dtrain and Dval with an 80-20 split." }, { "heading": "A.7 ADDITIONAL DESIGN ABLATIONS", "text": "In this section we present additional ablations on key hyperparameters and aspects of FedFomo to give further insight into our method’s functionality and its robustness to hyperparameter choices. We consider key design choices related to the size of each client’s validation split.\n\nSize of the validation split To better combine uploaded federated models into personalized federated updates, our method requires a local validation split Dval that reflects the client’s objective or target test distribution. 
Here, given a pre-defined amount of locally available data, we ask the natural question of how a client should best divide its data points between those used to train its own local model and those used to evaluate others’ models when computing a more informed personalized update through FedFomo. We use the 15 client 100% participation setup with 5 latent distributions organized over the CIFAR-10 dataset, and consider both the evaluation curve and final test accuracy when allocating a fraction ∈ {0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9} of each client’s local data to Dval, tracking evaluation over 20 communication rounds with 5 epochs of local training per round. On average, each client has 3333 local data points. We report final accuracy and standard deviation over five runs in Fig. 9.\n\nAs reported in Fig. 9, we observe faster convergence to a higher accuracy when allocating under half of all local data points to the validation split, with a notable drop-off when using more data points. This is most likely a result of reducing the amount of data available for each client to train its model locally. Eventually this stagnates, and we observe a slight decrease in performance between validation split fractions 0.05 and 0.1.
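Tying the appendix derivation back together, the following is a minimal sketch (our illustration, not the released code) of one FedFomo client step: the first-order weights of Eqs. 12-13 computed from validation losses and parameter distances (the learning rate α is omitted since it cancels in the normalization), followed by the weighted update of Eq. 4:

```python
import numpy as np

def fomo_weights(loss_prev, losses, theta_prev, thetas):
    """First-order FedFomo weights (Eqs. 12-13): w_n > 0 iff downloaded
    model theta_n attains lower validation loss than the client's previous
    model, scaled by 1 / parameter distance; alpha is dropped because it
    cancels after the clip-and-normalize step. Illustrative sketch."""
    theta_prev = np.asarray(theta_prev, float)
    w = np.array([
        (loss_prev - ln)
        / max(np.linalg.norm(np.asarray(tn, float) - theta_prev), 1e-12)
        for ln, tn in zip(losses, thetas)
    ])
    w = np.maximum(w, 0.0)                      # Eq. 13: drop harmful models
    return w / w.sum() if w.sum() > 0 else w    # normalize to sum to 1

def fomo_update(theta_i, thetas, weights):
    """Federated update of Eq. 4: move by the weighted sum of model deltas."""
    theta_i = np.asarray(theta_i, float)
    return theta_i + sum(w * (np.asarray(t, float) - theta_i)
                         for w, t in zip(weights, thetas))
```

With all weight on one model the client adopts it outright; with mixed weights it interpolates between the downloaded models and its own parameters.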
2021
FIRST ORDER MODEL OPTIMIZATION
SP:478a18897696ba946947faeee860203186d7e756
[ "This work analyzes the LogSumExp aggregated loss (named tiled empirical risk minimization, or TERM, in the paper). It provides several general properties of the loss, such as its relation to min/avg/max-loss, and interpretations of different trade-offs. Empirically, it is shown that TERM can be applied to a diverse set of problems, including robust optimization, fairness and generalization.", "This paper considers a unified framework named TERM for addressing a bunch of problems arising in the simple averaged empirical minimization. By leveraging the key hyper-parameter t in the TERM loss, it can recover the original average loss and approximate robust loss, min/max loss, and the superquantile loss, etc. The authors also propose gradient-based optimization algorithms for solving the TERM problem. " ]
Empirical risk minimization (ERM) is typically designed to perform well on the average loss, which can result in estimators that are sensitive to outliers, generalize poorly, or treat subgroups unfairly. While many methods aim to address these problems individually, in this work, we explore them through a unified framework— tilted empirical risk minimization (TERM). In particular, we show that it is possible to flexibly tune the impact of individual losses through a straightforward extension to ERM using a hyperparameter called the tilt. We provide several interpretations of the resulting framework: We show that TERM can increase or decrease the influence of outliers, respectively, to enable fairness or robustness; has variance-reduction properties that can benefit generalization; and can be viewed as a smooth approximation to a superquantile method. We develop batch and stochastic first-order optimization methods for solving TERM, and show that the problem can be efficiently solved relative to common alternatives. Finally, we demonstrate that TERM can be used for a multitude of applications, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance. TERM is not only competitive with existing solutions tailored to these individual problems, but can also enable entirely new applications, such as simultaneously addressing outliers and promoting fairness.
[ { "affiliations": [], "name": "Tian Li" }, { "affiliations": [], "name": "Ahmad Beirami" }, { "affiliations": [], "name": "Maziar Sanjabi" } ]
[ { "authors": [ "Sherif Abdelkarim", "Panos Achlioptas", "Jiaji Huang", "Boyang Li", "Kenneth Church", "Mohamed Elhoseiny" ], "title": "Long-tail visual relationship recognition with a visiolinguistic hubless loss", "venue": "arXiv preprint arXiv:2004.00436,", "year": 2020 }, { "authors": [ "Sina Baharlouei", "Maher Nouiehed", "Ahmad Beirami", "Meisam Razaviyayn" ], "title": "Rényi fair inference", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ahmad Beirami", "Robert Calderbank", "Mark M Christiansen", "Ken R Duffy", "Muriel Médard" ], "title": "A characterization of guesswork on swiftly tilting curves", "venue": "IEEE Transactions on Information", "year": 2018 }, { "authors": [ "George Bennett" ], "title": "Probability inequalities for the sum of independent random variables", "venue": "Journal of the American Statistical Association,", "year": 1962 }, { "authors": [ "Kush Bhatia", "Prateek Jain", "Purushottam Kar" ], "title": "Robust regression via hard thresholding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Kush Bhatia", "Prateek Jain", "Parameswaran Kamalaruban", "Purushottam Kar" ], "title": "Consistent robust regression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Giuseppe C Calafiore", "Laurent El Ghaoui" ], "title": "Optimization Models", "venue": null, "year": 2014 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nadav Cohen", "Amnon Shashua" ], "title": "Simnets: A generalization of convolutional networks", "venue": "arXiv preprint arXiv:1410.0781,", "year": 2014 }, { "authors": [ "Nadav Cohen", "Or Sharir", "Amnon Shashua" ], "title": "Deep simnets", "venue": "In 
Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "A. Dembo", "O. Zeitouni" ], "title": "Large deviations techniques and applications", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Ilias Diakonikolas", "Gautam Kamath", "Daniel Kane", "Jerry Li", "Jacob Steinhardt", "Alistair Stewart" ], "title": "Sever: A robust meta-algorithm for stochastic optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michele Donini", "Luca Oneto", "Shai Ben-David", "John S Shawe-Taylor", "Massimiliano Pontil" ], "title": "Empirical risk minimization under fairness constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "D Dua", "C Graff" ], "title": "UCI machine learning repository [http://archive", "venue": "ics. uci. edu/ml]. https://archive. ics. uci. edu/ml/datasets", "year": 2019 }, { "authors": [ "Marco F Duarte", "Yu Hen Hu" ], "title": "Vehicle classification in distributed sensor networks", "venue": "Journal of Parallel and Distributed Computing,", "year": 2004 }, { "authors": [ "Jinyang Gao", "HV Jagadish", "Beng Chin Ooi" ], "title": "Active sampler: Light-weight accelerator for complex data analytics at scale", "venue": "arXiv preprint arXiv:1512.03880,", "year": 2015 }, { "authors": [ "Rong Ge", "Furong Huang", "Chi Jin", "Yang Yuan" ], "title": "Escaping from saddle points—online stochastic gradient for tensor decomposition", "venue": "In Conference on Learning Theory,", "year": 2015 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Tatsunori Hashimoto", "Megha Srivastava", "Hongseok Namkoong", "Percy Liang" ], "title": "Fairness without demographics in repeated loss minimization", "venue": "In International Conference on 
Machine Learning,", "year": 2018 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Duncan Wilson", "Kevin Gimpel" ], "title": "Using trusted data to train deep networks on labels corrupted by severe noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wassily Hoeffding" ], "title": "Probability inequalities for sums of bounded random variables", "venue": "In The Collected Works of Wassily Hoeffding", "year": 1994 }, { "authors": [ "Matthew Holland", "Kazushi Ikeda" ], "title": "Better generalization with less data using robust gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ronald A Howard", "James E Matheson" ], "title": "Risk-sensitive markov decision processes", "venue": "Management science,", "year": 1972 }, { "authors": [ "Peter J Huber" ], "title": "Robust estimation of a location parameter", "venue": "The Annals of Mathematical Statistics,", "year": 1964 }, { "authors": [ "Angela H Jiang", "Daniel L-K Wong", "Giulio Zhou", "David G Andersen", "Jeffrey Dean", "Gregory R Ganger", "Gauri Joshi", "Michael Kaminksy", "Michael Kozuch", "Zachary C Lipton" ], "title": "Accelerating deep learning by focusing on the biggest losers", "venue": "arXiv preprint arXiv:1910.00762,", "year": 2019 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Chi Jin", "Rong Ge", "Praneeth Netrapalli", "Sham M Kakade", "Michael I Jordan" ], "title": "How to escape saddle points efficiently", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Chi Jin", "Praneeth Netrapalli", "Michael Jordan" ], "title": "What is local optimality in nonconvex-nonconcave minimax optimization", "venue": "In 
International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Mohammad Mahdi Kamani", "Farzin Haddadpour", "Rana Forsati", "Mehrdad Mahdavi" ], "title": "Efficient fair principal component analysis", "venue": "arXiv preprint arXiv:1911.04931,", "year": 2019 }, { "authors": [ "Hamed Karimi", "Julie Nutini", "Mark Schmidt" ], "title": "Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2016 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Biased importance sampling for deep neural network training", "venue": "arXiv preprint arXiv:1706.00043,", "year": 2017 }, { "authors": [ "Ashish Khetan", "Zachary C Lipton", "Anima Anandkumar" ], "title": "Learning from noisy singly-labeled data", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Barry W Kort", "Dimitri P Bertsekas" ], "title": "A new penalty function method for constrained minimization", "venue": "In IEEE Conference on Decision and Control and 11th Symposium on Adaptive Processes,", "year": 1972 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Yassine Laguel", "Krishna Pillutla", "Jérôme Malick", "Zaid Harchaoui" ], "title": "A superquantile approach for federated learning with heterogeneous devices", "venue": "In Annual Conference on Information Sciences and Systems,", "year": 2021 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", 
"venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Liu Leqi", "Adarsh Prasad", "Pradeep K Ravikumar" ], "title": "On human-aligned risk minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Tian Li", "Maziar Sanjabi", "Ahmad Beirami", "Virginia Smith" ], "title": "Fair resource allocation in federated learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Tomasz Malisiewicz", "Abhinav Gupta", "Alexei A Efros" ], "title": "Ensemble of exemplar-SVMs for object detection and beyond", "venue": "In International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Llew Mason", "Jonathan Baxter", "Peter Bartlett", "Marcus Frean" ], "title": "Boosting algorithms as gradient descent", "venue": "Advances in Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "Andreas Maurer", "Massimiliano Pontil" ], "title": "Empirical bernstein bounds and sample variance penalization", "venue": "arXiv preprint arXiv:0907.3740,", "year": 2009 }, { "authors": [ "H Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agüera y Arcas" ], "title": "Communicationefficient learning of deep networks from decentralized data", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Aditya Krishna Menon", "Ankit Singh Rawat", "Sashank J Reddi", "Sanjiv Kumar" ], "title": "Can gradient clipping mitigate label noise", "venue": 
"In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Bhaskar Mukhoty", "Govind Gopakumar", "Prateek Jain", "Purushottam Kar" ], "title": "Globally-convergent iteratively reweighted least squares for robust regression problems", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Hongseok Namkoong", "John C Duchi" ], "title": "Variance-based regularization with convex objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David Nass", "B. Belousov", "Jan Peters" ], "title": "Entropic risk measure in policy search", "venue": "International Conference on Intelligent Robots and Systems,", "year": 2019 }, { "authors": [ "Maher Nouiehed", "Jong-Shi Pang", "Meisam Razaviyayn" ], "title": "On the pervasiveness of difference-convexity in optimization and statistics", "venue": "Mathematical Programming,", "year": 2019 }, { "authors": [ "Maher Nouiehed", "Maziar Sanjabi", "Tianjian Huang", "Jason D Lee", "Meisam Razaviyayn" ], "title": "Solving a class of non-convex min-max games using iterative first order methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ivan Olier", "Noureddin Sadawi", "G Richard Bickerton", "Joaquin Vanschoren", "Crina Grosan", "Larisa Soldatova", "Ross D King" ], "title": "Meta-qsar: a large-scale application of meta-learning to drug design and discovery", "venue": "Machine Learning,", "year": 2018 }, { "authors": [ "Dmitrii M Ostrovskii", "Andrew Lowy", "Meisam Razaviyayn" ], "title": "Efficient search of first-order nash equilibria in nonconvex-concave smooth min-max problems", "venue": "arXiv preprint arXiv:2002.07919,", "year": 2020 }, { "authors": [ 
"R Kelley Pace", "Ronald Barry" ], "title": "Sparse spatial autoregressions", "venue": "Statistics & Probability Letters,", "year": 1997 }, { "authors": [ "EY Pee", "Johannes O Royset" ], "title": "On solving large-scale finite minimax problems using exponential smoothing", "venue": "Journal of Optimization Theory and Applications,", "year": 2011 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ashkan Rezaei", "Rizal Fathony", "Omid Memarrast", "Brian D Ziebart" ], "title": "Fairness for robust log loss classification", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "R Tyrrell Rockafellar", "Stanislav Uryasev" ], "title": "Conditional value-at-risk for general loss distributions", "venue": "Journal of Banking & Finance,", "year": 2002 }, { "authors": [ "R Tyrrell Rockafellar", "Stanislav Uryasev" ], "title": "Optimization of conditional value-at-risk", "venue": "Journal of risk,", "year": 2000 }, { "authors": [ "Yuji Roh", "Kangwook Lee", "Steven Euijong Whang", "Changho Suh" ], "title": "Fr-train: A mutual information-based approach to fair and robust training", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Samira Samadi", "Uthaipon Tantipongpipat", "Jamie H Morgenstern", "Mohit Singh", "Santosh Vempala" ], "title": "The price of fair PCA: One extra dimension", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chunhua Shen", "Hanxi Li" ], "title": "On the dual formulation of boosting algorithms", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2010 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online 
hard example mining", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weight-net: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ivan Stelmakh", "Nihar B Shah", "Aarti Singh" ], "title": "Peerreview4all: Fair and accurate reviewer assignment in peer review", "venue": "In Algorithmic Learning", "year": 2019 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Uthaipon Tantipongpipat", "Samira Samadi", "Mohit Singh", "Jamie H Morgenstern", "Santosh Vempala" ], "title": "Multicriteria dimensionality reduction with applications to fairness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andreas Veit", "Neil Alldrin", "Gal Chechik", "Ivan Krasin", "Abhinav Gupta", "Serge Belongie" ], "title": "Learning from noisy large-scale datasets with minimal supervision", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Riccardo Volpi", "Hongseok Namkoong", "Ozan Sener", "John C Duchi", "Vittorio Murino", "Silvio Savarese" ], "title": "Generalizing to unseen domains via adversarial data augmentation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Martin J Wainwright", "Tommi S Jaakkola", "Alan S Willsky" ], 
"title": "A new class of upper bounds on the log partition function", "venue": "IEEE Transactions on Information Theory,", "year": 2005 }, { "authors": [ "Xueqin Wang", "Yunlu Jiang", "Mian Huang", "Heping Zhang" ], "title": "Robust variable selection with exponential squared loss", "venue": "Journal of the American Statistical Association,", "year": 2013 }, { "authors": [ "Zhiguang Wang", "Tim Oates", "James Lo" ], "title": "Adaptive normalized risk-averting training for deep neural networks", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Hermann Weyl" ], "title": "Das asymptotische verteilungsgesetz der eigenwerte linearer partieller differentialgleichungen (mit einer anwendung auf die theorie der hohlraumstrahlung)", "venue": "Mathematische Annalen,", "year": 1912 }, { "authors": [ "Min Yang", "Linli Xu", "Martha White", "Dale Schuurmans", "Yao-liang Yu" ], "title": "Relaxed clipping: A global training method for robust regression and classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "I-Cheng Yeh", "Che-hui Lien" ], "title": "The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients", "venue": "Expert Systems with Applications,", "year": 2009 }, { "authors": [ "Yao-liang Yu", "Özlem Aslan", "Dale Schuurmans" ], "title": "A polynomial-time form of robust regression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Gomez Rodriguez", "Krishna P Gummadi" ], "title": "Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment", "venue": "In Conference on World Wide Web,", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking 
generalization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zhilu Zhang", "Mert Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Duchi" ], "title": "HIV-1 (Dua & Graff, 2019; Rögnvaldsson, 2013) dataset", "venue": null, "year": 2013 }, { "authors": [ "Yang" ], "title": "However, we note that some of these common benchmarks, such as cal-housing (Pace & Barry, 1997) and Credit (Yeh & Lien, 2009), contain potentially sensitive information. While the goal of our experiments was to showcase that the TERM framework could be useful in learning fair representations that suppress membership bias and hence promote fairer performance, developing an understanding for—and removing—such membership biases", "venue": null, "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many statistical estimation procedures rely on the concept of empirical risk minimization (ERM), in which the parameter of interest, $\theta \in \Theta \subseteq \mathbb{R}^d$, is estimated by minimizing an average loss over the data:
$$R(\theta) := \frac{1}{N} \sum_{i \in [N]} f(x_i; \theta). \qquad (1)$$
While ERM is widely used and has nice statistical properties, it can perform poorly in situations where average performance is not an appropriate surrogate for the problem of interest. Significant research has thus been devoted to developing alternatives to traditional ERM for diverse applications, such as learning in the presence of noisy/corrupted data (Jiang et al., 2018; Khetan et al., 2018), performing classification with imbalanced data (Lin et al., 2017; Malisiewicz et al., 2011), ensuring that subgroups within a population are treated fairly (Hashimoto et al., 2018; Samadi et al., 2018), or developing solutions with favorable out-of-sample performance (Namkoong & Duchi, 2017).
In this paper, we suggest that deficiencies in ERM can be flexibly addressed via a unified framework, tilted empirical risk minimization (TERM). TERM encompasses a family of objectives, parameterized by a real-valued hyperparameter, $t$. For $t \in \mathbb{R} \setminus \{0\}$, the $t$-tilted loss (TERM objective) is given by:
$$\tilde{R}(t; \theta) := \frac{1}{t} \log\bigg( \frac{1}{N} \sum_{i \in [N]} e^{t f(x_i; \theta)} \bigg). \qquad (2)$$
TERM generalizes ERM, as the 0-tilted loss recovers the average loss, i.e., $\tilde{R}(0; \theta) = R(\theta)$.¹ It also recovers other popular alternatives such as the max-loss ($t \to +\infty$) and min-loss ($t \to -\infty$) (Lemma 2). For $t > 0$, the objective is a common form of exponential smoothing, used to approximate the max (Kort & Bertsekas, 1972; Pee & Royset, 2011). Variants of tilting have been studied in several contexts,
*Equal contribution.
¹ $\tilde{R}(0; \theta)$ is defined in (14) via the continuous extension of $\tilde{R}(t; \theta)$.
including robust regression (Wang et al., 2013) ($t < 0$), importance sampling (Wainwright et al., 2005), sequential decision making (Howard & Matheson, 1972; Nass et al., 2019), and large deviations theory (Beirami et al., 2018). However, despite the rich history of tilted objectives, they have not seen widespread use in machine learning. In this work, we aim to bridge this gap by: (i) rigorously studying the objective in a general form, and (ii) exploring its utility for a number of ML applications. Surprisingly, we find that this simple extension to ERM is competitive for a wide range of problems.
To highlight how the TERM objective can help with issues such as outliers or imbalanced classes, we discuss three motivating examples below, which are illustrated in Figure 1.
(a) Point estimation: As a first example, consider determining a point estimate from a set of samples that contain some outliers. We plot an example 2D dataset in Figure 1a, with data centered at (1,1). Using traditional ERM (i.e., TERM with $t = 0$) recovers the sample mean, which can be biased towards outlier data. By setting $t < 0$, TERM can suppress outliers by reducing the relative impact of the largest losses (i.e., points that are far from the estimate) in (2). A specific value of $t < 0$ can in fact approximately recover the geometric median, as the objective in (2) can be viewed as approximately optimizing specific loss quantiles (a connection which we make explicit in Section 2). In contrast, if these ‘outlier’ points are important to estimate, setting $t > 0$ will push the solution towards a point that aims to minimize variance, as we prove more rigorously in Section 2, Theorem 4.
(b) Linear regression: A similar interpretation holds for the case of linear regression (Figure 1b). As $t \to -\infty$, TERM finds a line of best fit while ignoring outliers.
However, this solution may not be preferred if we have reason to believe that these ‘outliers’ should not be ignored. As $t \to +\infty$, TERM recovers the min-max solution, which aims to minimize the worst loss, thus ensuring the model is a reasonable fit for all samples (at the expense of possibly being a worse fit for many). Similar criteria have been used, e.g., in defining notions of fairness (Hashimoto et al., 2018; Samadi et al., 2018). We explore several use-cases involving robust regression and fairness in more detail in Section 5.
(c) Logistic regression: Finally, we consider a binary classification problem using logistic regression (Figure 1c). For $t \in \mathbb{R}$, the TERM solution varies from the nearest cluster center ($t \to -\infty$), to the logistic regression classifier ($t = 0$), towards a classifier that magnifies the misclassified data ($t \to +\infty$). We note that it is common to modify logistic regression classifiers by adjusting the decision threshold from 0.5, which is equivalent to moving the intercept of the decision boundary. This is fundamentally different from what is offered by TERM (where the slope is changing). As we show in Section 5, this added flexibility affords TERM competitive performance on a number of classification problems, such as those involving noisy data, class imbalance, or a combination of the two.
Contributions. In this work, we explore TERM as a simple, unified framework to flexibly address various challenges with empirical risk minimization. We first analyze the objective and its solutions, showcasing the behavior of TERM with varying t (Section 2). Our analysis provides novel connections between tilted objectives and superquantile methods. We develop efficient methods for solving TERM (Section 4), and show via numerous case studies that TERM is competitive with existing, problem-specific state-of-the-art solutions (Section 5).
We also extend TERM to handle compound issues, such as the simultaneous existence of noisy samples and imbalanced classes (Section 3). Our results demonstrate the effectiveness and versatility of tilted objectives in machine learning." }, { "heading": "2 TERM: PROPERTIES & INTERPRETATIONS", "text": "To better understand the performance of the t-tilted losses in (2), we provide several interpretations of the TERM solutions, leaving the full statements of theorems and proofs to the appendix. We make no distributional assumptions on the data, and study properties of TERM under the assumption that the loss function forms a generalized linear model, e.g., L2 loss and logistic loss (Appendix D). However, we also obtain favorable empirical results using TERM with other objectives such as deep neural networks and PCA in Section 5, motivating the extension of our theory beyond GLMs in future work.
General properties. We begin by noting several general properties of the TERM objective (2). Given a smooth $f(x; \theta)$, the t-tilted loss is smooth for all finite t (Lemma 4). If $f(x; \theta)$ is strongly convex, the t-tilted loss is strongly convex for $t > 0$ (Lemma 5). We visualize the solutions to TERM for a toy problem in Figure 2, which allows us to illustrate several special cases of the general framework. As discussed in Section 1, TERM can recover traditional ERM ($t = 0$), the max-loss ($t \to +\infty$), and the min-loss ($t \to -\infty$). As we demonstrate in Section 5, providing a smooth tradeoff between these specific losses can be beneficial for a number of practical use-cases—both in terms of the resulting solution and the difficulty of solving the problem itself. Interestingly, we additionally show that the TERM solution can be viewed as a smooth approximation to a superquantile method, which aims to minimize quantiles of losses such as the median loss. In Figure 2, it is easy to see why this may be beneficial, as the median loss (orange) can be highly non-smooth in practice.
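The limiting behavior just described (the average loss at t = 0, the max-loss as t → +∞, and the min-loss as t → −∞) is easy to check numerically. The following sketch is our own illustration, not code from our release; the helper name `tilted_loss` is ours, and the max is subtracted before exponentiating so that e^{t · loss} cannot overflow:

```python
import numpy as np

def tilted_loss(losses, t):
    """t-tilted loss of eq. (2); t = 0 is defined by continuity as the average loss."""
    losses = np.asarray(losses, dtype=float)
    if t == 0:
        return float(losses.mean())
    z = t * losses
    m = z.max()  # subtracting the max keeps e^{t * loss} from overflowing
    return float((m + np.log(np.mean(np.exp(z - m)))) / t)

losses = [0.1, 0.2, 5.0]              # one outlying loss
avg = tilted_loss(losses, 0)          # ERM: the average loss
near_max = tilted_loss(losses, 20)    # approaches the max-loss, 5.0
near_min = tilted_loss(losses, -20)   # approaches the min-loss, 0.1
```

Sweeping t between these extremes traces out the smooth family of solutions pictured in Figure 2.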
We make these rough connections more explicit via the interpretations below.
(Interpretation 1) Re-weighting samples to magnify/suppress outliers. As discussed via the toy examples in Section 1, the TERM objective can be tuned (using t) to magnify or suppress the influence of outliers. We make this notion rigorous by exploring the gradient of the t-tilted loss in order to reason about the solutions to the objective defined in (2).
Lemma 1 (Tilted gradient, proof in Appendix B). For a smooth loss function $f(x; \theta)$,
$$\nabla_\theta \tilde{R}(t; \theta) = \sum_{i \in [N]} w_i(t; \theta) \, \nabla_\theta f(x_i; \theta), \quad \text{where } w_i(t; \theta) := \frac{e^{t f(x_i; \theta)}}{\sum_{j \in [N]} e^{t f(x_j; \theta)}} = \frac{1}{N} e^{t (f(x_i; \theta) - \tilde{R}(t; \theta))}. \qquad (3)$$
From this, we can observe that the tilted gradient is a weighted average of the gradients of the original individual losses, where each data point is weighted exponentially proportional to the value of its loss. Note that $t = 0$ recovers the uniform weighting associated with ERM, i.e., $w_i(t; \theta) = 1/N$. For positive t, it magnifies the outliers—samples with large losses—by assigning more weight to them, and for negative t, it suppresses the outliers by assigning less weight to them.
(Interpretation 2) Tradeoff between average-loss and min/max-loss. To put Interpretation 1 in context and understand the limits of TERM, a benefit of the framework is that it offers a continuum of solutions between the min and max losses. Indeed, for positive values of t, TERM enables a smooth tradeoff between the average-loss and max-loss (as we demonstrate in Figure 10, Appendix I). Hence, TERM can selectively improve the worst-performing losses by paying a penalty on average performance, thus promoting a notion of uniformity or fairness (Hashimoto et al., 2018). On the other hand, for negative t, the solutions achieve a smooth tradeoff between average-loss and min-loss, which can have the benefit of focusing on the ‘best’ losses, or ignoring outliers (Theorem 3, Appendix D).
Theorem (Formal statement and proof in Appendix D, Theorem 3).
Let $\breve{\theta}(t)$ be the minimizer of $\tilde{R}(t; \theta)$, referred to as the t-tilted solution. Then, for $t > 0$, the max-loss, $\hat{R}(\breve{\theta}(t))$, is non-increasing with t while the average loss, $R(\breve{\theta}(t))$, is non-decreasing with t.
(Interpretation 3) Empirical bias/variance tradeoff. Another key property of the TERM solutions is that the empirical variance of the loss across all samples decreases as t increases (Theorem 4).
Hence, by increasing t, it is possible to trade off between optimizing the average loss vs. reducing variance, allowing the solutions to potentially achieve a better bias-variance tradeoff for generalization (Bennett, 1962; Hoeffding, 1994; Maurer & Pontil, 2009) (Figure 10, Appendix I). We use this property to achieve better generalization in classification in Section 5. We also prove that the cosine similarity between the loss vector and the all-ones vector monotonically increases with t (Theorem 5), which shows that larger t promotes a more uniform performance across all losses and can have implications for fairness defined as representation disparity (Hashimoto et al., 2018) (Section 5.2).
Theorem (Formal statement and proof in Appendix D, Theorem 4). Let $f(\theta) := (f(x_1; \theta), \ldots, f(x_N; \theta))$ be the loss vector for parameter $\theta$. Then, the variance of the vector $f(\breve{\theta}(t))$ is non-increasing with t while its average, i.e., $R(\breve{\theta}(t))$, is non-decreasing with t.
(Interpretation 4) Approximate superquantile method. Finally, we show that TERM is related to superquantile-based objectives, which aim to minimize specific quantiles of the individual losses that exceed a certain value (Rockafellar et al., 2000). For example, optimizing for 90% of the individual losses (ignoring the worst-performing 10%) could be a more reasonable practical objective than the pessimistic min-max objective. Another common application of this is to use the median in contrast to the mean in the presence of noisy outliers.
As we discuss in Appendix G, superquantile methods can be reinterpreted as minimizing the k-loss, defined as the k-th smallest of the N losses (i.e., the 1-loss is the min-loss, the N-loss is the max-loss, and the (N−1)/2-loss is the median-loss). While minimizing the k-loss is more desirable than ERM in many applications, the k-loss is non-smooth (and generally non-convex), and is challenging to solve for large-scale problems (Jin et al., 2020; Nouiehed et al., 2019b).
Theorem (Formal statement and proof in Appendix G, Theorem 10). The quantile of the losses that exceed a given value is upper bounded by a smooth function of the TERM objective. Further, the t-tilted solutions are good approximate solutions of the superquantile (k-loss) optimization." }, { "heading": "3 TERM EXTENDED: HIERARCHICAL MULTI-OBJECTIVE TILTING", "text": "We also consider an extension of TERM that can be used to address practical applications requiring multiple objectives, e.g., simultaneously achieving robustness to noisy data and ensuring fair performance across subgroups. Existing approaches typically aim to address such problems in isolation. To handle multiple objectives with TERM, let each sample x be associated with a group $g \in [G]$, i.e., $x \in g$. These groups could be related to the labels (e.g., classes in a classification task), or may depend only on features. For any $t, \tau \in \mathbb{R}$, we define multi-objective TERM as:
$$\tilde{J}(t, \tau; \theta) := \frac{1}{t} \log\bigg( \frac{1}{N} \sum_{g \in [G]} |g| \, e^{t \tilde{R}_g(\tau; \theta)} \bigg), \quad \text{where } \tilde{R}_g(\tau; \theta) := \frac{1}{\tau} \log\bigg( \frac{1}{|g|} \sum_{x \in g} e^{\tau f(x; \theta)} \bigg), \qquad (4)$$
and |g| is the size of group g. Multi-objective TERM recovers sample-level TERM as a special case for $\tau = t$ (Appendix, Lemma 7), and reduces to group-level TERM with $\tau \to 0$. Note that all properties discussed in Section 2 carry over to group-level TERM. Similar to the tilted gradient (3), the multi-objective tilted gradient is a weighted sum of the gradients (Appendix, Lemma 6), making it similarly efficient to solve.
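As a quick numerical sanity check of (4) (our own sketch with hypothetical helper names, not part of the released code), the hierarchy collapses to sample-level TERM when τ = t, matching Lemma 7:

```python
import numpy as np

def tilted(vals, t, weights=None):
    """(1/t) * log(sum_i weights_i * e^{t * vals_i}); weights default to uniform."""
    vals = np.asarray(vals, dtype=float)
    if weights is None:
        weights = np.full(len(vals), 1.0 / len(vals))
    z = t * vals
    m = z.max()  # stabilize the exponentials
    return float((m + np.log(np.sum(weights * np.exp(z - m)))) / t)

def multi_objective_term(group_losses, t, tau):
    """Eq. (4): tilt samples within each group by tau, then tilt the groups by t."""
    sizes = np.array([len(g) for g in group_losses], dtype=float)
    group_tilts = [tilted(g, tau) for g in group_losses]
    return tilted(group_tilts, t, weights=sizes / sizes.sum())

groups = [np.array([0.1, 0.2, 0.3]), np.array([2.0, 4.0])]  # per-sample losses in two groups
t = 1.5
hier = multi_objective_term(groups, t, tau=t)  # tau = t collapses the hierarchy (Lemma 7)
samp = tilted(np.concatenate(groups), t)       # sample-level TERM on the pooled losses
```

Taking τ close to 0 instead recovers group-level TERM, i.e., tilting the per-group average losses.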
We validate the effectiveness of hierarchical tilting empirically in Section 5.3, where we show that TERM can significantly outperform baselines to handle class imbalance and noisy outliers simultaneously." }, { "heading": "4 SOLVING TERM", "text": "To solve TERM, we suggest batch and stochastic variants of traditional first-order gradient-based optimization methods. TERM in the batch setting (Batch TERM) is summarized in Algorithm 1 in the context of solving multi-objective hierarchical TERM (4) for full generality. The main steps include computing the tilted gradients of the hierarchical objective defined in (4). Note that Batch TERM with $t = \tau$ reduces to solving the sample-level tilted objective (2). We also provide a stochastic variant in Algorithm 2, Appendix H. At a high level, at each iteration, group-level tilting is addressed by choosing a group based on the tilted weight vector estimated via stochastic dynamics. Sample-level tilting is then incorporated by re-weighting the samples in a uniformly drawn mini-batch. We find that these methods perform well empirically on a variety of tasks (Section 5).
Algorithm 1: Batch TERM
Input: $t, \tau, \alpha$
while stopping criteria not reached do
    for $g \in [G]$ do
        compute the loss $f(x; \theta)$ and gradient $\nabla_\theta f(x; \theta)$ for all $x \in g$
        $\tilde{R}_{g,\tau} \leftarrow$ $\tau$-tilted loss (4) on group $g$
        $\nabla_\theta \tilde{R}_{g,\tau} \leftarrow \frac{1}{|g|} \sum_{x \in g} e^{\tau f(x; \theta) - \tau \tilde{R}_{g,\tau}} \, \nabla_\theta f(x; \theta)$
    end
    $\tilde{J}_{t,\tau} \leftarrow \frac{1}{t} \log\big( \frac{1}{N} \sum_{g \in [G]} |g| \, e^{t \tilde{R}_{g,\tau}} \big)$;  $w_{t,\tau,g} \leftarrow |g| \, e^{t \tilde{R}_{g,\tau} - t \tilde{J}_{t,\tau}}$
    $\theta \leftarrow \theta - \frac{\alpha}{N} \sum_{g \in [G]} w_{t,\tau,g} \, \nabla_\theta \tilde{R}_{g,\tau}$
end
We defer readers to Appendix H for general properties of TERM (smoothness, convexity) that may vary with t and affect the convergence of gradient-based methods used to solve the objective."
}, { "heading": "5 TERM IN PRACTICE: USE CASES", "text": "In this section, we showcase the flexibility, wide applicability, and competitive performance of the TERM framework through empirical results on a variety of real-world problems such as handling outliers (Section 5.1), ensuring fairness and improving generalization (Section 5.2), and addressing compound issues (Section 5.3). Despite the relatively straightforward modification TERM makes to traditional ERM, we show that t-tilted losses not only outperform ERM, but either outperform or are competitive with state-of-the-art, problem-specific baselines on a wide range of applications.
We provide implementation details in Appendix J. All code, datasets, and experiments are publicly available at github.com/litian96/TERM. For experiments with positive t (Section 5.2), we tune $t \in \{0.1, 0.5, 1, 5, 10, 50, 100, 200\}$ on the validation set. In our initial robust regression experiments, we find that the performance is robust to various t’s, and we thus use a fixed $t = -2$ for all experiments involving negative t (Section 5.1 and Section 5.3). For all values of t tested, the number of iterations required to solve TERM is within 2× that of standard ERM." }, { "heading": "5.1 MITIGATING NOISY OUTLIERS", "text": "We begin by investigating TERM’s ability to find robust solutions that reduce the effect of noisy outliers. We note that we specifically focus on the setting of ‘robustness’ involving random additive noise; the applicability of TERM to more adversarial forms of robustness would be an interesting direction of future work. We do not compare with approaches that require additional clean validation data (e.g., Hendrycks et al., 2018; Ren et al., 2018; Roh et al., 2020; Veit et al., 2017), as such data can be costly to obtain in practice.
Robust regression.
We first consider a regression task with noise-corrupted targets, where we aim to minimize the root mean square error (RMSE) on samples from the Drug Discovery dataset (Diakonikolas et al., 2019; Olier et al., 2018). The task is to predict the bioactivities given a set of chemical compounds. We compare against linear regression with an L2 loss, which we view as the ‘standard’ ERM solution for regression, as well as with losses commonly used to mitigate outliers—the L1 loss and Huber loss (Huber, 1964). We also compare with consistent robust regression (CRR) (Bhatia et al., 2017) and STIR (Mukhoty et al., 2019), recent state-of-the-art methods specifically designed for label noise in robust regression. In this particular problem, TERM is equivalent to the exponential squared loss studied in Wang et al. (2013). We apply TERM at the sample level with an L2 loss, and generate noisy outliers by assigning random targets drawn from $\mathcal{N}(5, 5)$ to a fraction of the samples. In Table 1, we report RMSE on clean test data for each objective and under different noise levels. We also present the performance of an oracle method (Genie ERM) which has access to all of the clean data samples with the noisy samples removed. Note that Genie ERM is not a practical algorithm and is solely presented to set the expected performance limit in the noisy setting. The results indicate that TERM is competitive with baselines at the 20% noise level, and achieves better robustness with moderate-to-extreme noise. We observe similar trends in scenarios involving both noisy features and targets (Appendix I.2).
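A toy sketch of this sample-level setup (our own illustration under simplifying assumptions: a one-dimensional linear model, plain gradient descent, and N(5, 5) read as mean 5 and variance 5; the helper `tilted_slope` is ours) shows the effect of t = −2:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 3.0 * x                                        # clean targets on a slope-3 line
y[:60] = rng.normal(5.0, np.sqrt(5.0), size=60)    # corrupt 30% of the targets

def tilted_slope(x, y, t, lr=0.01, steps=3000):
    """Fit y ~ w * x by gradient descent on the t-tilted squared loss."""
    w = 0.0
    for _ in range(steps):
        r = w * x - y                              # residuals
        z = t * r ** 2
        s = np.exp(z - z.max())
        s /= s.sum()                               # tilted sample weights (Lemma 1)
        w -= lr * float(s @ (2 * r * x))           # tilted gradient step
    return w

w_erm = float(x @ y) / float(x @ x)                # ordinary least squares (ERM)
w_term = tilted_slope(x, y, t=-2.0)                # t = -2 suppresses the noisy targets
```

With t = −2, samples with large residuals receive exponentially small weight, so the recovered slope stays near the clean value while ordinary least squares is pulled away by the corrupted targets.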
CRR tends to run slowly as it scales cubically with the number of dimensions (Bhatia et al., 2017), while solving TERM is roughly as efficient as ERM.
Table 2: TERM is competitive with robust classification baselines, and is superior in high noise regimes. Test accuracy (CIFAR10, Inception):
objectives                         | 20% noise    | 40% noise    | 80% noise
ERM                                | 0.775 (.004) | 0.719 (.004) | 0.284 (.004)
RandomRect (Ren et al., 2018)      | 0.744 (.004) | 0.699 (.005) | 0.384 (.005)
SelfPaced (Kumar et al., 2010)     | 0.784 (.004) | 0.733 (.004) | 0.272 (.004)
MentorNet-PD (Jiang et al., 2018)  | 0.798 (.004) | 0.731 (.004) | 0.312 (.005)
GCE (Zhang & Sabuncu, 2018)        | 0.805 (.004) | 0.750 (.004) | 0.433 (.005)
TERM                               | 0.795 (.004) | 0.768 (.004) | 0.455 (.005)
Genie ERM                          | 0.828 (.004) | 0.820 (.004) | 0.792 (.004)
Note that the outliers considered here are unstructured with random noise, and not adversarial. This makes it possible for the methods to find the underlying structure of clean data even if the majority of the samples are noisy outliers. To gain more intuition on these cases, we also generate synthetic two-dimensional data points and test the performance of TERM under 0%, 20%, 40%, and 80% noise for linear regression. TERM with $t = -2$ performs well at all noise levels (Figures 11 and 12 in Appendix I.2). However, as might be expected, in Figure 14 (Appendix I.2) we show that TERM may overfit to noisy samples when the noise is structured and the noise values are large (e.g., 80%).
Robust classification. It is well-known that deep neural networks can easily overfit to corrupted labels (e.g., Zhang et al., 2017). While the theoretical properties we study for TERM (Section 2) do not directly cover objectives with neural network function approximations, we show that TERM can be applied empirically to DNNs to achieve robustness to noisy training labels. MentorNet (Jiang et al., 2018) is a popular method in this setting, which learns to assign weights to samples based on feedback from a student net. Following the setup in Jiang et al.
(2018), we explore classification on CIFAR10 (Krizhevsky et al., 2009) when a fraction of the training labels are corrupted with uniform noise—comparing TERM with ERM and several state-of-the-art approaches (Krizhevsky et al., 2009; Kumar et al., 2010; Ren et al., 2018; Zhang & Sabuncu, 2018). As shown in Table 2, TERM performs competitively with 20% noise, and outperforms all baselines in the high noise regimes. We use MentorNet-PD as a baseline since it does not require clean validation data. In Appendix I.2, we show that TERM also matches the performance of MentorNet-DD, which requires clean validation data. To help reason about the performance of TERM, we also explore a simpler, two-dimensional logistic regression problem in Figure 13, Appendix I.2, finding that TERM with $t = -2$ is similarly robust across the considered noise regimes.
Low-quality annotators. It is not uncommon for practitioners to obtain human-labeled data for their learning tasks from crowd-sourcing platforms. However, these labels are usually noisy in part due to the varying quality of the human annotators. Given a collection of labeled samples from crowd-workers, we aim to learn statistical models that are robust to the potentially low-quality annotators. As a case study, following the setup of Khetan et al. (2018), we take the CIFAR-10 dataset and simulate 100 annotators where 20 of them are hammers (i.e., always correct) and 80 of them are spammers (i.e., assigning labels uniformly at random). We apply TERM at the annotator group level in (4), which is equivalent to assigning annotator-level weights based on the aggregate value of their loss. As shown in Figure 3, TERM is able to achieve the test accuracy limit set by Genie ERM, i.e., the ideal performance obtained by completely removing the known outliers.
We note in particular that the accuracy reported by Khetan et al. (2018) (0.777) is lower than TERM (0.825) in the same setup, even though their approach is a two-pass algorithm requiring at least double the training time. We provide full empirical details and investigate additional noisy annotator scenarios in Appendix I.2." }, { "heading": "5.2 FAIRNESS AND GENERALIZATION", "text": "In this section, we show that positive values of t in TERM can help promote fairness (e.g., via learning fair representations), and offer variance reduction for better generalization.
Fair principal component analysis (PCA). We explore the flexibility of TERM in learning fair representations using PCA. In fair PCA, the goal is to learn low-dimensional representations which are fair to all considered subgroups (e.g., yielding similar reconstruction errors) (Kamani et al., 2019; Samadi et al., 2018; Tantipongpipat et al., 2019). Despite the non-convexity of the fair PCA problem, we apply TERM to this task, referring to the resulting objective as TERM-PCA. We tilt the same loss function as in Samadi et al. (2018): $f(X; U) = \frac{1}{|X|} \big( \|X - X U U^\top\|_F^2 - \|X - \hat{X}\|_F^2 \big)$, where $X \in \mathbb{R}^{n \times d}$ is a subset (group) of data, $U \in \mathbb{R}^{d \times r}$ is the current projection, and $\hat{X} \in \mathbb{R}^{n \times d}$ is the optimal rank-$r$ approximation of $X$. Instead of solving a more complex min-max problem using semi-definite programming as in Samadi et al. (2018), which scales poorly with problem dimension, we apply gradient-based methods, re-weighting the gradients at each iteration based on the loss on each group. In Figure 4, we plot the aggregate loss for two groups (high vs. low education) in the Default Credit dataset (Yeh & Lien, 2009) for different target dimensions r.
By varying t, we achieve varying degrees of performance improvement on different groups—TERM ($t = 200$) recovers the min-max results of Samadi et al. (2018) by forcing the losses on both groups to be (almost) identical, while TERM ($t = 10$) offers the flexibility of reducing the performance gap less aggressively.
Handling class imbalance. Next, we show that TERM can reduce the performance variance across classes with extremely imbalanced data when training deep neural networks. We compare TERM with several baselines which re-weight samples during training, including assigning weights inversely proportional to the class size (InverseRatio), focal loss (Lin et al., 2017), HardMine (Malisiewicz et al., 2011), and LearnReweight (Ren et al., 2018). Following Ren et al. (2018), the datasets are composed of imbalanced 4 and 9 digits from MNIST (LeCun et al., 1998). In Figure 5, we see that TERM obtains similar (or higher) final accuracy on the clean test data as the state-of-the-art methods. We note that compared with LearnReweight, which optimizes the model over an additional balanced validation set and requires three gradient calculations for each update, TERM neither requires such balanced validation data nor does it increase the per-iteration complexity.
Improving generalization via variance reduction. A common alternative to ERM is to consider a distributionally robust objective, which optimizes for the worst-case training loss over a set of distributions, and has been shown to offer variance-reduction properties that benefit generalization (e.g., Namkoong & Duchi, 2017; Sinha et al., 2018). While not directly developed for distributional robustness, TERM also enables variance reduction for positive values of t (Theorem 4), which can be used to strike a better bias-variance tradeoff for generalization.
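This variance-reduction property (Theorem 4) can be illustrated on a toy one-dimensional problem. The sketch below is our own (not the paper's experimental code); it simply grid-searches the tilted solution for each t and reports the variance of the per-sample losses there:

```python
import numpy as np

samples = np.array([0.0, 1.0, 2.0, 10.0])   # one outlying sample
grid = np.linspace(-20.0, 20.0, 200001)     # candidate solutions theta

def loss_variance_at_solution(t):
    """Variance of the per-sample squared losses at the t-tilted solution."""
    per_sample = (samples[:, None] - grid[None, :]) ** 2
    z = t * per_sample
    m = z.max(axis=0)
    objective = (np.log(np.mean(np.exp(z - m), axis=0)) + m) / t  # eq. (2) per theta
    theta = grid[np.argmin(objective)]
    return float(((samples - theta) ** 2).var())

# Theorem 4: the empirical variance of the losses is non-increasing in t
variances = [loss_variance_at_solution(t) for t in (-1.0, 0.1, 1.0)]
```

With t = 1 the solution approaches the min-max point, where the losses are nearly uniform, while t = −1 all but ignores the outlier, leaving highly uneven losses.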
We compare TERM with several baselines including robustly regularized risk (RobustRegRisk) (Namkoong & Duchi, 2017), linear SVM (Ren et al., 2018), LearnReweight (Ren et al., 2018), FocalLoss (Lin et al., 2017), and HRM (Leqi et al., 2019). The results and detailed discussions are presented in Appendix I.2." }, { "heading": "5.3 SOLVING COMPOUND ISSUES: HIERARCHICAL MULTI-OBJECTIVE TILTING", "text": "Finally, in this section, we focus on settings where multiple issues, e.g., class imbalance and label noise, exist in the data simultaneously. We discuss two possible instances of hierarchical multi-objective TERM to tackle such problems. One can think of other variants in this hierarchical tilting space which could be useful depending on the application at hand. However, we are not aware of other prior work that aims to simultaneously handle multiple goals, e.g., suppressing noisy samples and addressing class imbalance, in a unified framework without additional validation data.
We explore the HIV-1 dataset (Rögnvaldsson, 2013), as in Section 5.2. We report both overall accuracy and accuracy on the rare class in four scenarios: (a) clean and 1:4, the original dataset that is naturally slightly imbalanced with rare samples represented 1:4 with respect to the common class; (b) clean and 1:20, where we subsample to introduce a 1:20 imbalance ratio; (c) noisy and 1:4, which is the original dataset with labels associated with 30% of the samples randomly reshuffled; and (d) noisy and 1:20, where 30% of the labels of the 1:20 imbalanced dataset are reshuffled.
In Table 3, hierarchical TERM is applied at the sample level and class level (TERMsc), where we use a sample-level tilt of $\tau = -2$ for noisy data. We use a class-level tilt of $t = 0.1$ for the 1:4 case and $t = 50$ for the 1:20 case. We compare against baselines for robust classification and class imbalance (discussed previously in Sections 5.1 and 5.2), where we tune them for best performance (Appendix J).
Similar to the experiments in Section 5.1, we avoid using baselines that require clean validation data (e.g., Roh et al., 2020). While different baselines perform well in their respective problem settings, TERM is far superior to all baselines when considering noisy samples and class imbalance simultaneously (rightmost column in Table 3). Finally, in the last row of Table 3, we simulate the noisy annotator setting of Section 5.1 assuming that the data is coming from 10 annotators, i.e., in the 30% noise case we have 7 hammers and 3 spammers. In this case, we apply hierarchical TERM at both class and annotator levels (TERMca), where we perform the higher level tilt at the annotator (group) level and the lower level tilt at the class level (with no sample-level tilting). We show that this approach can benefit noisy/imbalanced data even further (far right, Table 3), while suffering only a small performance drop on the clean and noiseless data (far left, Table 3)." }, { "heading": "6 RELATED WORK", "text": "Alternate aggregation schemes: exponential smoothing/superquantile methods. A common alternative to the standard average loss in empirical risk minimization is to consider a min-max objective, which aims to minimize the max-loss. Min-max objectives are commonplace in machine learning, and have been used for a wide range of applications, such as ensuring fairness across subgroups (Hashimoto et al., 2018; Mohri et al., 2019; Samadi et al., 2018; Stelmakh et al., 2019; Tantipongpipat et al., 2019), enabling robustness under small perturbations (Sinha et al., 2018), or generalizing to unseen domains (Volpi et al., 2018). 
As discussed in Section 2, the TERM objective can be viewed as a minimax smoothing (Kort & Bertsekas, 1972; Pee & Royset, 2011) with the added flexibility of a tunable t to allow the user to optimize utility for different quantiles of loss, similar to superquantile approaches (Laguel et al., 2021; Rockafellar et al., 2000), directly trading off between robustness/fairness and utility for positive and negative values of t (see Appendix G for these connections). However, the TERM objective remains smooth (and efficiently solvable) for moderate values of t, resulting in faster convergence even when the resulting solutions are effectively the same as the min-max solution or other desired quantiles of the loss (as we demonstrate in the experiments of Section 5). Such smooth approximations to the max often appear through LogSumExp functions, with applications in geometric programming (Calafiore & El Ghaoui, 2014, Sec. 9.7), and boosting (Mason et al., 1999; Shen & Li, 2010). Interestingly, Cohen et al. introduce Simnets (Cohen & Shashua, 2014; Cohen et al., 2016), with a similar exponential smoothing operator, though for the differing purpose of achieving layer-wise operations between sum and max in deep neural networks.
Alternate loss functions. Rather than modifying the way the losses are aggregated, as in (smoothed) min-max or superquantile methods, it is also quite common to modify the losses themselves. For example, in robust regression, it is common to consider losses such as the L1 loss, Huber loss, or general M-estimators (Holland & Ikeda, 2019) as a way to mitigate the effect of outliers (Bhatia et al., 2015). Wang et al. (2013) study a similar exponentially tilted loss for robust regression, though it is limited to the squared loss and only corresponds to $t < 0$. Losses can also be modified to address outliers by favoring small losses (Yu et al., 2012; Zhang & Sabuncu, 2018) or gradient clipping (Menon et al., 2020).
On the other extreme, the largest losses can be magnified to encourage focus on hard samples (Li et al., 2020b; Lin et al., 2017; Wang et al., 2016), which is a popular approach for curriculum learning. Constraints could also be imposed to promote fairness (Baharlouei et al., 2020; Donini et al., 2018; Hardt et al., 2016; Rezaei et al., 2020; Zafar et al., 2017). Ignoring the log portion of the objective in (2), TERM can be viewed as an alternate loss function exponentially shaping the loss to achieve both of these goals with a single objective, i.e., magnifying hard examples with $t > 0$ and suppressing outliers with $t < 0$. In addition, we show that TERM can even achieve both goals simultaneously with hierarchical multi-objective optimization (Section 5.3).
Sample re-weighting schemes. Finally, there exist approaches that implicitly modify the underlying ERM objective by re-weighting the influence of the samples themselves. These re-weighting schemes can be enforced in many ways. A simple and widely used example is to subsample training points in different classes. Alternatively, one can re-weight examples according to their loss function when using a stochastic optimizer, which can be used to put more emphasis on “hard” examples (Jiang et al., 2019; Katharopoulos & Fleuret, 2017; Leqi et al., 2019; Shrivastava et al., 2016). Re-weighting can also be implicitly enforced via the inclusion of a regularization parameter (Abdelkarim et al., 2020), loss clipping (Yang et al., 2010), or modelling crowd-worker qualities (Khetan et al., 2018). Such an explicit re-weighting has been explored for other applications (e.g., Chang et al., 2017; Gao et al., 2015; Jiang et al., 2018; Lin et al., 2017; Ren et al., 2018; Shu et al., 2019), though in contrast to these methods, TERM is applicable to a general class of loss functions, with theoretical guarantees.
TERM is equivalent to a dynamic re-weighting of the samples based on the values of the objectives (Lemma 1), which could be viewed as a convexified version of loss clipping. We compare to several sample re-weighting schemes empirically in Section 5." }, { "heading": "7 CONCLUSION", "text": "In this paper, we examined tilted empirical risk minimization (TERM) as a flexible extension to the ERM framework. We explored, both theoretically and empirically, TERM’s ability to handle various known issues with ERM, such as robustness to noise, class imbalance, fairness, and generalization, as well as more complex issues like the simultaneous existence of class imbalance and noisy outliers. Despite the straightforward modification TERM makes to traditional ERM objectives, the framework consistently outperforms ERM and delivers competitive performance with state-of-the-art, problem-specific methods on a wide range of applications. Our work highlights the effectiveness and versatility of tilted objectives in machine learning. Building on the analyses and empirical study provided herein, in future work, it would be interesting to investigate generalization bounds for TERM as a function of t, and to derive theoretical convergence guarantees for our proposed stochastic solvers."
}, { "heading": "ACKNOWLEDGEMENTS", "text": "We are grateful to Arun Sai Suggala and Adarsh Prasad (CMU) for their helpful comments on robust regression; to Zhiguang Wang, Dario Garcia Garcia, Alborz Geramifard, and other members of Facebook AI for productive discussions and feedback and pointers to prior work (Cohen & Shashua, 2014; Cohen et al., 2016; Rockafellar et al., 2000; Wang et al., 2016); and to Meisam Razaviyayn (USC) for helpful discussions and pointers to exponential smoothing (Kort & Bertsekas, 1972; Pee & Royset, 2011), Value-at-Risk (Nouiehed et al., 2019a; Rockafellar & Uryasev, 2002), and general properties of gradient-based methods in non-convex optimization problems (Ge et al., 2015; Jin et al., 2017; 2020; Ostrovskii et al., 2020). The work of TL and VS was supported in part by the National Science Foundation grant IIS1838017, a Google Faculty Award, a Carnegie Bosch Institute Research Award, and the CONIX Research Center. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the National Science Foundation or any other funding agency." }, { "heading": "A Notation & Assumptions 15", "text": "" }, { "heading": "B Basic Properties of the TERM Objective 16", "text": "" }, { "heading": "C Hierarchical Multi-Objective Tilting 19", "text": "" }, { "heading": "D General Properties of the Objective for GLMs 21", "text": "" }, { "heading": "E General Properties of TERM Solutions for GLMs 23", "text": "" }, { "heading": "F Connections Between TERM and Exponential Tilting 30", "text": "" }, { "heading": "G TERM as an Approximate Superquantile Method 31", "text": "" }, { "heading": "H Algorithms for Solving TERM 34", "text": "H.1 Convergence with t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35" }, { "heading": "I Additional Experiments 37", "text": "I.1 Experiments to showcase properties of TERM . . . . . . . . . . . . . . . . . . . . 
I.2 Complete case studies" }, { "heading": "J Experimental Details", "text": "J.1 Datasets and models
J.2 Hyperparameters

K Discussion" }, { "heading": "A NOTATION & ASSUMPTIONS", "text": "In this section, we provide the notation and the assumptions that are used throughout our theoretical analyses.

The results in this paper are derived under one of the following four assumptions:

Assumption 1 (Smoothness condition). We assume that for i P rN s, loss function fpxi; θq is in differentiability class C1 (i.e., continuously differentiable) with respect to θ P Θ Ď Rd. Assumption 2 (Strong convexity condition). We assume that Assumption 1 is satisfied. In addition, we assume that for any i P rN s, fpxi; θq is in differentiability class C2 (i.e., twice differentiable with continuous Hessian) with respect to θ. We further assume that there exist βmin, βmax P R` such that for i P rN s and any θ P Θ Ď Rd,

βminI ĺ ∇2θθJfpxi; θq ĺ βmaxI, (5) where I is the identity matrix of appropriate size (in this case dˆ d). We further assume that there does not exist any θ P Θ, such that ∇θfpxi; θq “ 0 for all i P rN s. Assumption 3 (Generalized linear model condition (Wainwright & Jordan, 2008)). We assume that Assumption 2 is satisfied. We further assume that the loss function fpx; θq is given by

fpx; θq “ Apθq ´ θJT pxq, (6)

where Ap¨q is a convex function such that there exists βmax such that for any θ P Θ Ď Rd,

βminI ĺ ∇2θθJApθq ĺ βmaxI. (7) We also assume that

ÿ

iPrNs T pxiqT pxiqJ ą 0. (8)

This nested set of assumptions becomes the most restrictive with Assumption 3, which essentially requires that the loss be the negative log-likelihood of an exponential family.
While the assumption is stated using the natural parameter of an exponential family for ease of presentation, the results hold for a bijective and smooth reparameterization of the exponential family. Assumption 3 is satisfied by the commonly used L2 loss for regression and logistic loss for classification (see toy examples (b) and (c) in Figure 1). While the assumption is not satisfied when we use neural network function approximators in Section 5.1, we observe favorable numerical results motivating the extension of these results beyond the cases that are theoretically studied in this paper.\nIn the sequel, many of the results are concerned with characterizing the t-tilted solutions defined as the parametric set of solutions of t-tiled losses by sweeping t P R,\nθ̆ptq P arg min θPΘ rRpt; θq, (9)\nwhere Θ Ď Rd is an open subset of Rd. We state an assumption on this set below. Assumption 4 (Strict saddle property (Definition 4 in (Ge et al., 2015))). We assume that the set arg minθPΘ rRpt; θq is non-empty for all t P R. Further, we assume that for all t P R, rRpt; θq is a “strict saddle” as a function of θ, i.e., for all local minima,∇2θθJ rRpt; θqą0, and for all other stationary solutions, λminp∇2θθJ rRpt; θqq ă 0, where λminp¨q is the minimum eigenvalue of the matrix.\nWe use the strict saddle property in order to reason about the properties of the t-tilted solutions. In particular, since we are solely interested in the local minima of rRpt; θq, the strict saddle property implies that for every θ̆ptq P arg minθPΘ rRpt; θq, for a sufficiently small r, for all θ P Bpθ̆ptq, rq,\n∇2θθJ rRpt; θq ą 0, (10)\nwhere Bpθ̆ptq, rq denotes a d-ball of radius r around θ̆ptq. We will show later that the strict saddle property is readily verified for t P R` under Assumption 2." }, { "heading": "B BASIC PROPERTIES OF THE TERM OBJECTIVE", "text": "In this section, we provide the basic properties of the TERM objective.\nProof of Lemma 1. 
Lemma 1, which provides the gradient of the tilted objective, has been studied previously in the context of exponential smoothing (see (Pee & Royset, 2011, Proposition 2.1)). We provide a brief derivation here under Assumption 1 for completeness. We have:\n∇θ rRpt; θq “ ∇θ\n$\n&\n%\n1 t log\n¨\n˝\n1\nN\nÿ\niPrNs etfpxi;θq\n˛\n‚\n,\n.\n-\n(11)\n“ ř iPrNs∇θfpxi; θqetfpxi;θq ř\niPrNs e tfpxi;θq\n. (12)\nLemma 2. Under Assumption 1,\nrRp´8; θq :“ lim tÑ´8 rRpt; θq “ qRpθq, (13)\nrRp0; θq :“ lim tÑ0 rRpt; θq “ Rpθq, (14)\nrRp`8; θq :“ lim tÑ`8 rRpt; θq “ pRpθq, (15)\nwhere pRpθq is the max-loss and qRpθq is the min-loss2:\npRpθq :“ max iPrNs fpxi; θq, qRpθq :“ min iPrNs fpxi; θq. (16)\nProof. For tÑ 0,\nlim tÑ0 rRpt; θq “ lim tÑ0\n1 t log\n¨\n˝\n1\nN\nÿ\niPrNs etfpxi;θq\n˛\n‚\n“ lim tÑ0\nř iPrNs fpxi; θqetfpxi;θq ř\niPrNs e tfpxi;θq\n(17)\n“ 1 N ÿ\niPrNs fpxi; θq, (18)\nwhere (17) is due to L’Hôpital’s rule applied to t as the denominator and log ´\n1 N\nř iPrNs e tfpxi;θq\n¯\nas the numerator.\nFor tÑ ´8, we proceed as follows:\nlim tÑ´8 rRpt; θq “ lim tÑ´8\n1 t log\n¨\n˝\n1\nN\nÿ\niPrNs etfpxi;θq\n˛\n‚\ně lim tÑ´8\n1 t log\n¨\n˝\n1\nN\nÿ\niPrNs etminjPrNs fpxj ;θq\n˛\n‚ (19)\n“ min iPrNs fpxi; θq. (20)\n2When the argument of the max-loss or the min-loss is not unique, for the purpose of differentiating the loss function, we define pRpθq as the average of the individual losses that achieve the maximum, and qRpθq as the average of the individual losses that achieve the minimum.\nOn the other hand,\nlim tÑ´8 rRpt; θq “ lim tÑ´8\n1 t log\n¨\n˝\n1\nN\nÿ\niPrNs etfpxi;θq\n˛\n‚\nď lim tÑ´8\n1 t log\nˆ\n1\nN etminjPrNs fpxj ;θq\n˙\n(21)\n“ min iPrNs fpxi; θq ´ lim tÑ´8\n\"\n1 t logN\n*\n(22)\n“ min iPrNs fpxi; θq. (23)\nHence, the proof follows by putting together (20) and (23).\nThe proof proceeds similarly to tÑ ´8 for tÑ `8 and is omitted for brevity.\nNote that Lemma 2 has been previously observed in (Cohen & Shashua, 2014). 
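Lemma 2 is easy to check numerically. Below is a minimal sketch (our own illustration, not code from the paper; it assumes only NumPy, and the loss vector is an arbitrary hypothetical example) that evaluates the tilted objective (2) stably via log-sum-exp and compares it with the average-, max-, and min-loss:

```python
import numpy as np

def tilted_risk(losses, t):
    """(1/t) * log(mean(exp(t * losses))), computed stably via log-sum-exp."""
    losses = np.asarray(losses, dtype=float)
    if t == 0.0:  # continuous extension at t = 0, Eq. (14): the average loss
        return float(losses.mean())
    z = t * losses
    m = z.max()  # subtracting the max avoids overflow in exp
    return float((m + np.log(np.mean(np.exp(z - m)))) / t)

f = np.array([0.1, 0.2, 0.3, 5.0])  # hypothetical individual losses
print(tilted_risk(f, 1e-6))    # close to the average loss, 1.4
print(tilted_risk(f, 200.0))   # close to the max-loss, 5.0
print(tilted_risk(f, -200.0))  # close to the min-loss, 0.1
```

Sweeping t from large negative to large positive values traces a monotone path from the min-loss to the max-loss, consistent with Theorem 7.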
This lemma also implies that rθp0q is the ERM solution, rθp`8q is the min-max solution, and rθp´8q is the min-min solution.\nLemma 3 (Tilted Hessian and strong convexity for t P R`). Under Assumption 2, for any t P R,\n∇2θθJ rRpt; θq “ t ÿ iPrNs p∇θfpxi; θq ´∇θ rRpt; θqqp∇θfpxi; θq ´∇θ rRpt; θqqJetpfpxi;θq´ rRpt;θqq\n(24)\n` ÿ iPrNs ∇2θθJfpxi; θqetpfpxi;θq´ rRpt;θqq. (25)\nIn particular, for all θ P Θ and all t P R`, the t-tilted objective is strongly convex. That is\n∇2θθJ rRpt; θq ą βminI. (26)\nProof. Recall that\n∇θ rRpt; θq “ ř iPrNs∇θfpxi; θqetfpxi;θq ř\niPrNs e tfpxi;θq\n(27)\n“ ÿ iPrNs ∇θfpxi; θqetpfpxi;θq´ rRpt;θqq. (28)\nThe proof of the first part is completed by differentiating again with respect to θ, followed by algebraic manipulation. To prove the second part, notice that the term in (24) is positive semi-definite, whereas the term in (25) is positive definite and lower bounded by βminI (see Assumption 2, Eq. (5)).\nLemma 4 (Smoothness of rRpt; θq in the vicinity of the final solution θ̆ptq). For any t P R, let βptq be the smoothness parameter in the vicinity of the final solution:\nβptq :“ sup θPBpθ̆ptq,rq λmax\n´ ∇2θθJ rRpt; θq ¯ , (29)\nwhere∇2θθJ rRpt; θq is the Hessian of rRpt; θq at θ, λmaxp¨q denotes the largest eigenvalue, and Bpθ, rq denotes a d-ball of radius r around θ. Under Assumption 2, for any t P R, rRpt; θq is a βptq-smooth function of θ. Further, for t P R´, at the vicinity of θ̆ptq,\nβptq ă βmax, (30)\nand for t P R`,\n0 ă lim tÑ`8 βptq t ă `8. (31)\nProof. Let us first provide a proof for t P R´. 
Invoking Lemma 3 and Weyl’s inequality (Weyl, 1912), we have\nλmax\n´ ∇2θθJ rRpt; θq ¯\nď λmax\n¨\n˝t ÿ iPrNs p∇θfpxi; θq ´∇θ rRpt; θqqp∇θfpxi; θq ´∇θ rRpt; θqqJetpfpxi;θq´ rRpt;θqq\n˛\n‚\n(32)\n` λmax\n¨\n˝\nÿ\niPrNs ∇2θθJfpxi; θqetpfpxi;θq´ rRpt;θqq\n˛\n‚ (33)\nď βmax, (34) where we have used the fact that the term in (24) is negative semi-definite for t ă 0, and that the term in (25) is positive definite for all t with smoothness bounded by βmax (see Assumption 2, Eq. (5)).\nFor t P R`, following Lemma 3 and Weyl’s inequality (Weyl, 1912), we have ˆ\n1\nt\n˙\nλmax\n´ ∇2θθJ rRpt; θq ¯\nď λmax\n¨\n˝\nÿ\niPrNs p∇θfpxi; θq ´∇θ rRpt; θqqp∇θfpxi; θq ´∇θ rRpt; θqqJetpfpxi;θq´ rRpt;θqq\n˛\n‚\n(35)\n` ˆ 1\nt\n˙\nλmax\n¨\n˝\nÿ\niPrNs ∇2θθJfpxi; θqetpfpxi;θq´ rRpt;θqq\n˛\n‚. (36)\nConsequently,\nlim tÑ`8\nˆ\n1\nt\n˙\nλmax\n´ ∇2θθJ rRpt; θq ¯ ă `8. (37)\nOn the other hand, following Weyl’s inequality (Weyl, 1912),\nλmax\n´ ∇2θθJ rRpt; θq ¯\ně tλmax\n¨\n˝\nÿ\niPrNs p∇θfpxi; θq ´∇θ rRpt; θqqp∇θfpxi; θq ´∇θ rRpt; θqqJetpfpxi;θq´ rRpt;θqq\n˛\n‚,\n(38) and hence,\nlim tÑ`8\nˆ\n1\nt\n˙\nλmax\n´ ∇2θθJ rRpt; θq ¯ ą 0, (39)\nwhere we have used the fact that no solution θ exists that would make all fi’s vanish (Assumption 2).\nUnder the strict saddle property (Assumption 4), it is known that gradient-based methods would converge to a local minimum (Ge et al., 2015), i.e., θ̆ptq would be obtained using gradient descent (GD). The rate of convergence of GD scales linearly with the smoothness parameter of the optimization landscape, which is characterized by Lemma 4.\nLemma 5 (Strong convexity of rRpt; θq in R`). Under Assumption 2, for any t P R`, rRpt; θq is a strongly convex function of θ. That is for t P R`,\n∇2θθJ rRpt; θq ą βminI. (40)\nProof. The result follows by invoking Lemma 3 with t P R`, and considering (5) (Assumption 2).\nThis lemma also implies that under Assumption 2, the strict saddle assumption (Assumption 4) is readily verified." 
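As a quick sanity check on Lemma 1 (and the smoothness discussion above), the tilted gradient can be compared against a centered finite difference of the tilted objective. The scalar squared-loss model and the data below are our own hypothetical choices for illustration:

```python
import numpy as np

x = np.array([0.0, 1.0, 5.0])  # hypothetical data; f(x_i; theta) = 0.5*(theta - x_i)^2
t = -2.0                       # any nonzero tilt value works for this check

def tilted_risk(theta):
    z = t * 0.5 * (theta - x) ** 2
    m = z.max()
    return (m + np.log(np.mean(np.exp(z - m)))) / t

def tilted_grad(theta):
    # Lemma 1: a tilted re-weighting of the individual gradients (theta - x_i)
    z = t * 0.5 * (theta - x) ** 2
    w = np.exp(z - z.max())
    w /= w.sum()               # w_i proportional to e^{t * f(x_i; theta)}
    return np.sum(w * (theta - x))

theta0, eps = 0.7, 1e-6
fd = (tilted_risk(theta0 + eps) - tilted_risk(theta0 - eps)) / (2 * eps)
print(abs(fd - tilted_grad(theta0)))  # small: analytic and numeric gradients agree
```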
}, { "heading": "C HIERARCHICAL MULTI-OBJECTIVE TILTING", "text": "We start by stating the hierarchical multi-objective tilting for a hierarchy of depth 3. While we don’t directly use this form, it is stated to clarify the experiments in Section 5 where tilting is done at class level and annotator level, and the sample-level tilt value could be understood to be 0.\nrJpm, t, τ ; θq :“ 1 m log\n¨\n˝\n1\nN\nÿ\nGPrGGs\n¨\n˝\nÿ\ngPrGs |g|\n˛\n‚em rJGpτ ;θq\n˛\n‚ (41)\nrJGpt, τ ; θq :“ 1\nt log\n¨\n˝ 1 ř gPrGs |g| ÿ gPrGs |g|et rRgpτ ;θq\n˛\n‚ (42)\nrRgpτ ; θq :“ 1\nτ log\n˜\n1\n|g| ÿ xPg eτfpx;θq\n¸\n, (43)\nNext, we continue by evaluating the gradient of the hierarchical multi-objective tilt for a hierarchy of depth 2. Lemma 6 (Hierarchical multi-objective tilted gradient). Under Assumption 1,\n∇θ rJpt, τ ; θq “ ÿ\ngPrGs\nÿ xPg wg,xpt, τ ; θq∇θfpx; θq (44)\nwhere\nwg,xpt, τ ; θq :“\n´\n1 |g|\nř yPg e τfpy;θq\n¯p tτ´1q\nř g1PrGs |g1| ´ 1 |g1| ř yPg1 e τfpy;θq\n¯ t τ\neτfpx;θq. (45)\nProof. We proceed as follows. First notice that by invoking Lemma 1,\n∇θ rJpt, τ ; θq “ ÿ gPrGs wgpt, τ ; θq∇θ rRgpτ ; θq (46)\nwhere\nwgpt, τ ; θq :“ |g|et rRgpτ ;θq\nř g1PrGs |g1|e t rRg1 pτ ;θq\n. (47)\nwhere rRgpτ ; θq is defined in (4), and is reproduced here:\nrRgpτ ; θq :“ 1\nτ log\n˜\n1\n|g| ÿ xPg eτfpx;θq\n¸\n. (48)\nOn the other hand, by invoking Lemma 1,\n∇θ rRgpτ ; θq “ ÿ xPg wg,xpτ ; θq∇θfpx; θq (49)\nwhere\nwg,xpτ ; θq :“ eτfpx;θq ř\nyPg e τfpy;θq . (50)\nHence, combining (46) and (49),\n∇θ rJpt, τ ; θq “ ÿ\ngPrGs\nÿ xPg wgpt, τ ; θqwg,xpτ ; θq∇θfpx; θq. (51)\nThe proof is completed by algebraic manipulations to show that\nwg,xpt, τ ; θq “ wgpt, τ ; θqwg,xpτ ; θq. (52)\nLemma 7 (Sample-level TERM is a special case of hierarchical multi-objective TERM). Under Assumption 1, hierarchical multi-objective TERM recovers TERM as a special case for t “ τ . That is\nrJpt, t; θq “ rRpt; θq. (53)\nProof. 
The proof is completed by noticing that setting t “ τ in (45) (Lemma 6) recovers the original sample-level tilted gradient." }, { "heading": "D GENERAL PROPERTIES OF THE OBJECTIVE FOR GLMS", "text": "In this section, even if not explicitly stated, all results are derived under Assumption 3 with a generalized linear model and loss function of the form (6), effectively assuming that the loss function is the negative log-likelihood of an exponential family (Wainwright & Jordan, 2008). Definition 1 (Empirical cumulant generating function). Let\nΛpt; θq :“ t rRpt; θq. (54) Definition 2 (Empirical log-partition function (Wainwright et al., 2005)). Let Γpt; θq be\nΓpt; θq :“ log\n¨\n˝\n1\nN\nÿ\niPrNs e´tθ\nJT pxiq\n˛\n‚. (55)\nThus, we have\nrRpt; θq “ Apθq ` 1 t log\n¨\n˝\n1\nN\nÿ\niPrNs e´tθ\nJT pxiq\n˛\n‚“ Apθq ` 1 t Γpt; θq. (56)\nDefinition 3 (Empirical mean and empirical variance of the sufficient statistic). LetM and V denote the mean and the variance of the sufficient statistic, and be given by\nMpt; θq :“ 1 N ÿ iPrNs T pxiqe´tθ JT pxiq´Γpt;θq, (57)\nVpt; θq :“ 1 N ÿ iPrNs pT pxiq ´Mpt; θqqpT pxiq ´Mpt; θqqJe´tθ JT pxiq´Γpt;θq. (58)\nLemma 8. For all t P R, we have Vpt; θq ą 0.\nNext we state a few key relationships that we will use in our characterizations. The proofs are straightforward and omitted for brevity. Lemma 9 (Partial derivatives of Γ). For all t P R and all θ P Θ,\nB BtΓpt; θq “ ´θ\nJMpt; θq, (59) ∇θΓpt; θq “ ´tMpt; θq. (60)\nLemma 10 (Partial derivatives ofM). For all t P R and all θ P Θ, B BtMpt; θq “ ´Vpt; θqθ, (61)\n∇θMpt; θq “ ´tVpt; θq. (62)\nThe next few lemmas characterize the partial derivatives of the cumulant generating function. Lemma 11. (Derivative of Λ with t) For all t P R and all θ P Θ,\nB BtΛpt; θq “ Apθq ´ θ JMpt; θq. (63)\nProof. The proof is carried out by\nB BtΛpt; θq “ Apθq ´ θ J ÿ\niPrNs T pxiqe´tθ JT pxiq´Γpt;θq “ Apθq ´ θJMpt; θq. (64)\nLemma 12 (Second derivative of Λ with t). 
For all t P R and all θ P Θ, B2\nBt2 Λpt; θq “ θ JVpt; θqθ. (65)\nLemma 13 (Gradient of Λ with θ). For all t P R and all θ P Θ,\n∇θΛpt; θq “ t∇θApθq ´ tMpt; θq. (66) Lemma 14 (Hessian of Λ with θ). For all t P R and all θ P Θ,\n∇2θθJΛpt; θq “ t∇2θθJApθq ` t2Vpt; θq. (67) Lemma 15 (Gradient of Λ with respect to t and θ). For all t P R and all θ P Θ,\nB Bt∇θΛpt; θq “ ∇θApθq ´Mpt; θq ` tVpt; θqθ. (68)" }, { "heading": "E GENERAL PROPERTIES OF TERM SOLUTIONS FOR GLMS", "text": "Next, we characterize some of the general properties of the solutions of TERM objectives. Note that these properties are established under Assumptions 3 and 4.\nLemma 16. For all t P R, ∇θΛpt; θ̆ptqq “ 0. (69)\nProof. The proof follows from definition and the assumption that Θ is an open set.\nLemma 17. For all t P R, ∇θApθ̆ptqq “Mpt; θ̆ptqq. (70)\nProof. The proof is completed by noting Lemma 16 and Lemma 13.\nLemma 18 (Derivative of the solution with respect to tilt). Under Assumption 4, for all t P R,\nB Bt θ̆ptq “ ´ ´ ∇2θθJApθ̆ptqq ` tVpt; θ̆ptqq ¯´1 Vpt; θ̆ptqqθ̆ptq, (71)\nwhere\n∇2θθJApθ̆ptqq ` tVpt; θ̆ptqq ą 0. (72)\nProof. By noting Lemma 16, and further differentiating with respect to t, we have\n0 “ BBt∇θΛpt; θ̆ptqq (73)\n“ BBτ∇θΛpτ ; θ̆ptqq ˇ ˇ ˇ ˇ τ“t `∇2θθJΛpt; θ̆ptqq ˆ B Bt θ̆ptq ˙\n(74)\n“ tVpt; θ̆ptqqθ̆ptq ` ` t∇2θθJApθq ` t2Vpt; θq ˘\nˆ\nB Bt θ̆ptq\n˙\n, (75)\nwhere (74) follows from the chain rule, (75) follows from Lemmas 15 and 17 and 14. The proof is completed by noting that∇2θθJΛpt; θ̆ptqq ą 0 for all t P R under Assumption 4.\nFinally, we state an auxiliary lemma that will be used in the proof of the main theorem.\nLemma 19. For all t, τ P R and all θ P Θ,\nMpτ ; θq ´Mpt; θq “ ´ ˆ ż τ\nt\nVpν; θqdν ˙ θ. (76)\nProof. The proof is completed by noting that\nMpτ ; θq ´Mpt; θq “ ż τ\nt\nB BνMpν; θqdν “ ´\nˆ ż τ\nt\nVpν; θqdν ˙ θ. (77)\nTheorem 1. 
Under Assumption 3 and Assumption 4, for any t, τ P R, (a) BBt rRpτ ; θ̆ptqq ă 0 iff t ă τ ; (b) BBt rRpτ ; θ̆ptqq “ 0 iff t “ τ ; (c) B Bt rRpτ ; θ̆ptqq ą 0 iff t ą τ .\nProof. The proof proceeds as follows. Notice that\nB Bτ rRpt; θ̆pτqq “ 1 t\nˆ\nB Bτ θ̆pτq ˙J ∇θΛpt; θ̆pτqq (78)\n“ ´θ̆JpτqVpτ ; θ̆pτqq ´ ∇2θθJApθ̆pτqq ` τVpτ ; θ̆pτqq ¯´1\nˆ ´ ∇θApθ̆pτqq ´Mpt; θ̆pτqq ¯\n(79)\n“ ´θ̆JpτqVpτ ; θ̆pτqq ´ ∇2θθJApθ̆pτqq ` τVpτ ; θ̆pτqq ¯´1\nˆ ´ Mpτ ; θ̆pτqq ´Mpt; θ̆pτqq ¯\n(80)\n“ θ̆JpτqVpτ ; θ̆pτqq ´ ∇2θθJApθ̆pτqq ` τVpτ ; θ̆pτqq ¯´1\nˆ ˆ ż τ\nt\nVpν; θ̆pτqqdν ˙ θ̆pτq, (81)\nwhere (78) follows from the chain rule and (54), (79) follows from Lemma 18 and Lemma 13, (80) follows from Lemma 17, and (81) follows from Lemma 19. Now notice that invoking Lemma 8, and noticing that following the strict saddle property\n∇2θθJ rRpt; θq ˇ ˇ ˇ\nθ“θ̆pτq “ ∇2θθJApθ̆pτqq ` τVpτ ; θ̆pτqq ą 0, (82)\nwe have\n(a) şτ t Vpν; θ̆pτqqdν ă 0 iff t ă τ ;\n(b) şτ t Vpν; θ̆pτqqdν “ 0 iff t “ τ ;\n(c) şτ t Vpν; θ̆pτqqdν ą 0 iff t ą τ ,\nwhich completes the proof.\nTheorem 2 (Average- vs. max-loss tradeoff). Under Assumption 3 and Assumption 4, for any t P R`,\nB Bt pRpθ̆ptqq ď 0, (83) B BtRpθ̆ptqq ě 0. (84)\nProof of Theorem 2. To prove (83), first notice that from Lemma 2,\npRpθq “ lim tÑ`8 rRpt; θq. (85)\nNow, invoking Theorem 1 (Appendix D), for any τ, t P R` such that τ ă t\nB Bτ rRpt; θ̆pτqq ă 0, (86)\nIn particular, by taking the limit as tÑ `8,\nlim tÑ`8 B Bτ rRpt; θ̆pτqq ď 0. (87)\nNotice that\n0 ě lim tÑ`8 B Bτ rRpt; θ̆pτqq “ lim tÑ`8\nˆ\nB Bτ θ̆pτq ˙J ∇θ rRpt; θ̆pτqq (88)\n“ ˆ B Bτ θ̆pτq ˙J lim tÑ`8 ∇θ rRpt; θ̆pτqq (89)\n“ ˆ B Bτ θ̆pτq ˙J ∇θ pRpθ̆pτqq (90)\n“ BBτ pRpθ̆pτqq (91)\nwhere (90) holds because ∇θ rRpt; θ̆pτqq is a finite weighted sum of the gradients of the individual losses with weights bounded in r0, 1s, per Lemma 1, completing the proof of the first part. To prove (84), notice that by Lemma 2,\nRpθq “ lim tÑ0 rRpt; θq. 
(92)\nNow, invoking Theorem 1 (Appendix D), for any τ, t P R` such that τ ą t B Bτ rRpt; θ̆pτqq ą 0. (93)\nIn particular, by taking the limit as tÑ 0, B Bτ Rpθ̆pτqq “ limtÑ0 B Bτ rRpt; θ̆pτqq ą 0, (94)\ncompleting the proof.\nTheorem 3 (Average- vs. min-loss tradeoff). Under Assumption 3 and Assumption 4, for any t P R´, B Bt qRprθptqq ě 0, (95)\nB BtRp rθptqq ď 0. (96)\nProof of Theorem 3. To prove (95), first notice that from Lemma 2,\npRpθq “ lim tÑ´8 rRpt; θq. (97)\nNow, invoking Theorem 1 (Appendix D), for any τ, t P R` such that τ ą t B Bτ rRpt; θ̆pτqq ą 0. (98)\nIn particular, by taking the limit as tÑ ´8, B Bτ qRpθ̆pτqq “ lim tÑ´8 B Bτ rRpt; θ̆pτqq ą 0, (99)\ncompleting the proof of the first part.\nTo prove (96), notice that by Lemma 2,\nRpθq “ lim tÑ0 rRpt; θq. (100)\nNow, invoking Theorem 1 (Appendix D), for any τ, t P R` such that τ ă t B Bτ rRpt; θ̆pτqq ă 0. (101)\nIn particular, by taking the limit as tÑ 0, B Bτ Rpθ̆pτqq “ limtÑ0 B Bτ rRpt; θ̆pτqq ă 0, (102)\ncompleting the proof.\nTheorem 1 is concerned with characterizing the impact that TERM solutions for different t P R have on the objective rRpτ ; θ̆ptqq for some fixed τ P R. Recall that τ “ ´8 recovers the min-loss, τ “ 0 is the average-loss, and τ “ `8 is the max-loss. By definition, if t “ τ , θ̆pτq is the minimizer of rRpτ ; θ̆ptqq. Theorem 1 shows that for t P p´8, τq the objective is decreasing; while for t P pτ,`8q the objective increasing. Recall that for any fixed τ P R, rRpτ ; θq is also related to the k-th smallest loss of the population (Appendix G). Hence, the solution θ̆ptq is approximately minimizing the kptq-th smallest loss where kptq is increasing from 1 to N by sweeping t in p´8,`8q. Theorem 4 (Variance reduction). Let fpθq :“ pfpx1; θqq, . . . , fpxN ; θqq. For any u P RN , let\nmeanpuq :“ 1 N ÿ\niPrNs ui, varpuq :“\n1\nN\nÿ\niPrNs pui ´meanpuqq2. (103)\nThen, under Assumption 3 and Assumption 4, for any t P R, B Bt ! varpfpθ̆ptqqq ) ă 0. (104)\nProof. 
Recall that fpxi; θq “ Apθq ´ θJT pxiq. Thus,\nmeanpfq “ 1 N ÿ\niPrNs fpxi; θq “ Apθq ´\n1\nN θJ\nÿ\niPrNs T pxiq “ Apθq ´Mp0; θq (105)\nConsequently,\nvarpfpθqq “ 1 N ÿ\niPrNs\n¨\n˝fpxi; θq ´ 1\nN\nÿ\njPrNs fpxj ; θq\n˛\n‚\n2\n(106)\n“ 1 N ÿ\niPrNs\n¨\n˝θJT pxiq ´ 1\nN θJ\nÿ\njPrNs T pxjq\n˛\n‚\n2\n(107)\n“ 1 N θJ\n¨\n˝\nÿ\niPrNs pT pxiq ´\n1\nN\nÿ\njPrNs T pxjqqpT pxiq ´\n1\nN\nÿ\njPrNs T pxjqqJ\n˛\n‚θ (108)\n“ θJV0θ, (109) where\nV0 “ Vp0; θq “ 1\nN\nÿ\niPrNs pT pxiq ´\n1\nN\nÿ\njPrNs T pxjqqpT pxiq ´\n1\nN\nÿ\njPrNs T pxjqqJ. (110)\nHence,\nB Bτ ! varpfpθ̆pτqqq ) “ ˆ B Bτ θ̆pτq ˙J ∇θ ! varpfpθ̆pτqqq )\n(111)\n“ 2 ˆ B Bτ θ̆pτq ˙J V0θ̆pτq (112)\n“ ´2θ̆JpτqVpτ ; θ̆pτqq ´ ∇2θθApθ̆pτqq ` τVpτ ; θ̆pτqq ¯´1 V0θ̆pτq (113)\nă 0, (114) completing the proof.\nTheorem 5 (Cosine similarity of the loss vector and the all-ones vector increases with t). For u,v P RN , let cosine similarity be defined as\nspu,vq :“ u Jv\n}u}2}v}2 . (115)\nLet fpθq :“ pfpx1; θqq, . . . , fpxN ; θqq and let 1N denote the all-1 vector of length N . Then, under Assumption 3 and Assumption 4, for any t P R,\nB Bt ! spfpθ̆ptqq,1N q ) ą 0. (116)\nProof. Notice that\nspfpθq,1N q “ 1 N\nř\niPrNs fpxi; θq b\n1 N\nř\niPrNs f 2pxi; θq\n. (117)\nLetM0 :“Mp0; θq and V0 :“ Vp0; θq. Hence, 1\nN\nÿ\niPrNs fpxi; θq “ Apθq ´ θJM0, (118)\n1\nN\nÿ\niPrNs f2pxi; θq “ pApθq ´ θJM0q2 ` θJV0θ (119)\nNotice that\n∇θ s2pfpθq,1N q ( “ ∇θ\n$\n’ &\n’ %\n´\n1 N\nř iPrNs fpxi; θq ¯2\n1 N\nř\niPrNs f 2pxi; θq\n,\n/ .\n/ -\n(120)\n“ ∇θ \" pApθq ´ θJM0q2 pApθq ´ θJM0q2 ` θJV0θ *\n(121)\n“ 2pApθq ´ θ JM0qp∇θApθq ´M0qθJV0θ ´ 2pApθq ´ θJM0q2V0θ\nppApθq ´ θJM0q2 ` θJV0θq2 (122)\n“ 2pApθq ´ θJM0q ` θJp∇θApθq ´M0q ´Apθq ` θJM0 ˘ V0θ ppApθq ´ θJM0q2 ` θJV0θq2\n(123)\n“ 2pApθq ´ θJM0q ` θJ∇θApθq ´Apθq ˘ V0θ ppApθq ´ θJM0q2 ` θJV0θq2\n(124)\n“ ´ 2pApθq ´ θ JM0q2V0θ\nppApθq ´ θJM0q2 ` θJV0θq2 . (125)\nHence,\nB Bτ ! s2pfpθ̆pτqq,1N q ) “ ˆ B Bτ θ̆pτq ˙J ∇θ ! 
s2pfpθ̆pτqq,1N q )\n(126)\n“ ´θ̆JpτqVpτ ; θ̆pτqq ´ ∇2θθApθ̆pτqq ` τVpτ ; θ̆pτqq ¯´1\nˆ´ 2pApθ̆pτqq ´ θ̆pτq JM0q2\n´ pApθ̆pτqq ´ θ̆pτqJM0q2 ` θ̆pτqJV0θ ¯2V0θ̆pτq (127)\ną 0, (128) completing the proof.\nTheorem 6 (Gradient weights become more uniform by increasing t). Under Assumption 3 and Assumption 4, for any τ, t P R,\nB BtHpwpτ ; θ̆ptqqq ą 0, (129)\nwhere Hp¨q denotes the Shannon entropy function measured in nats,\nH pwpt; θqq :“ ´ ÿ\niPrNs wipt; θq logwipt; θq. (130)\nProof. Notice that\nH pwpt; θqq “ ´ ÿ\niPrNs wipt; θq logwipt; θq (131)\n“ ´ ÿ iPrNs ptfpxi; θq ´ Λpt; θqqetfpxi;θq´Λpt;θq (132)\n“ Λpt; θq ´ t ÿ iPrNs fpxi; θqetfpxi;θq´Λpt;θq (133)\n“ Λpt; θq ´ tApθq ` tθJMpt; θq. (134) Thus,\n∇θH pwpt; θqq “ ∇θ ` Λpt; θq ´ tApθq ` tθJMpt; θq ˘\n(135)\n“ t∇θApθq ´ tMpt; θq ´ t∇θApθq ` tMpt; θq ´ t2Vpt; θqθ (136) “ ´t2Vpt; θqθ. (137)\nHence,\nB Bτ H ´ wpt; θ̆pτqq ¯ “ ˆ B Bτ θ̆pτq ˙J ∇θH ´ wpt; θ̆pτqq ¯\n(138)\n“ ∇θ ` Λpt; θq ´ tApθq ` tθJMpt; θq ˘\n(139)\n“ t2θ̆JpτqVpτ ; θ̆pτqq ´ ∇2θθApθ̆pτqq ` τVpτ ; θ̆pτqq ¯´1 Vpt; θ̆pτqqθ̆pτq (140)\ně 0, (141)\ncompleting the proof.\nTheorem 7 (Tilted objective is increasing with t). Under Assumption 3, for all t P R, and all θ P Θ, B Bt rRpt; θq ě 0. (142)\nProof. Following (56),\nB Bt rRpt; θq “ BBt\n\"\n1 t Γpt; θq\n*\n(143)\n“ ´ 1 t2 Γpt; θq ´ 1 t θJMpt; θq, (144) “: gpt; θq, (145)\nwhere (144) follows from Lemma 9, and (145) defines gpt; θq. Let gp0; θq :“ limtÑ0 gpt; θq Notice that\ngp0; θq “ lim tÑ0\n\"\n´ 1 t2 Γpt; θq ´ 1 t θJMpt; θq\n*\n(146)\n“ ´ lim tÑ0\n\" 1 tΓpt; θq ` θ JMpt; θq t *\n(147)\n“ θJVp0; θqθ, (148)\nwhere (148) is due to L’H0̂pital’s rule and Lemma 12. Now consider\nB Bt t2gpt; θq ( “ BBt ´Γpt; θq ´ tθJMpt; θq (\n(149)\n“ θJMpt; θq (150) ´ θJMpt; θq ` tθJVpt; θqθ (151)\n“ tθJVpt; θqθ (152)\nwhere gpt; θq “ BBt rRpt; θq, (150) follows from Lemma 9, (151) follows from the chain rule and Lemma 10. 
Hence, t2gpt; θq is an increasing function of t for t P R`, and a decreasing function of t for t P R´, taking its minimum at t “ 0. Hence, t2gpt; θq ě 0 for all t P R. This implies that gpt; θq ě 0 for all t P R, which in conjunction with (145) implies the statement of the theorem.\nDefinition 4 (Optimal tilted objective). Let the optimal tilted objective be defined as\nrF ptq :“ rRpt; θ̆ptqq. (153)\nTheorem 8 (Optimal tilted objective is increasing with t). Under Assumption 3, for all t P R, and all θ P Θ,\nB Bt rF ptq “ BBt rRpt; θ̆ptqq ě 0. (154)\nProof. Notice that for all θ, and all P R`,\nrRpt` ; θq ě rRpt; θq (155) ě rRpt; θ̆ptqq, (156)\nwhere (155) follows from Theorem 7 and (156) follows from the definition of θ̆ptq. Hence,\nrRpt` ; θ̆pt` qq “ min θPBpθ̆ptq,rq rRpt` ; θq ě rRpt; θ̆ptqq, (157)\nwhich completes the proof." }, { "heading": "F CONNECTIONS BETWEEN TERM AND EXPONENTIAL TILTING", "text": "Here we provide connections between TERM and exponential tilting, a concept previously explored in the context of importance sampling and the theory of large deviations (Beirami et al., 2018; Dembo & Zeitouni, 2009; Wainwright et al., 2005). To do so, suppose that X is drawn from distribution pp¨q. Let us study the distribution of random variable Y “ fpX; θq. Let ΛY ptq be the cumulant generating function (Dembo & Zeitouni, 2009, Sectiom 2.2). That is\nΛY ptq :“ log ` Ep etY (˘\n(158)\n“ log ´\nEp\n! etfpX;θq )¯ . (159)\nNow, suppose that x1, . . . , xN are drawn i.i.d. from pp¨q. Note that this distributional assumption is made solely for providing intuition on the tilted objectives and is not needed in any of the proofs in this paper. Hence, rRpt; θq can be viewed as an empirical approximation to the cumulant generating function:\nΛY ptq « t rRpt; θq. (160)\nHence, rRpt; θq provides an approximate characterization of the distribution of fpX; θq. 
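The empirical approximation in (160) can be sanity-checked with a quick Monte Carlo experiment (our own sketch; a standard Gaussian Y is chosen only because its cumulant generating function t²/2 is known in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
t, N = 0.5, 200_000
Y = rng.standard_normal(N)  # i.i.d. stand-in for the losses f(X; theta)

# empirical side of (160): t * R(t; theta) = log( (1/N) * sum_i e^{t * Y_i} )
empirical_cgf = np.log(np.mean(np.exp(t * Y)))
true_cgf = t ** 2 / 2  # exact CGF of a standard Gaussian
print(empirical_cgf, true_cgf)  # the two agree up to Monte Carlo error
```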
Thus, minimizing rRpt; θq is approximately equivalent to minimizing the complementary cumulative distribution function (CDF) of fpX; θq. In other words, this is equivalent to minimizing P tfpX; θq ą au for some a, which is a function of t.\nIn the next section, we will explore these connections with tail probabilities dropping the distributional assumptions, effectively drawing connections between superquantile methods and TERM." }, { "heading": "G TERM AS AN APPROXIMATE SUPERQUANTILE METHOD", "text": "For all a P R, let Qpa; θq denote the quantile of the losses that are no smaller than a, i.e.,\nQpa; θq :“ 1 N ÿ\niPrNs I tfpxi; θq ě au , (161)\nwhere It¨u is the indicator function. Notice that Qpa; θq P 0, 1N , . . . , 1 (\nquantifies the fraction of the data for which loss is at least a. In this section, we further assume that f is such that fpxi; θq ě 0 for all θ.\nSuppose that we are interested in choosing θ in a way that for a given a P R, we minimize the fraction of the losses that are larger than a. That is\nQ0paq :“ min θ Qpa; θq “ Qpa; θ0paqq, (162)\nwhere θ0paq :“ arg min\nθ Qpa; θq. (163)\nThis is a non-smooth non-convex problem and solving it to global optimality is very challenging. In this section, we argue that TERM provides a reasonable approximate solution to this problem, which is computationally feasible.\nNotice that we have the following simple relation:\nLemma 20. If a ă rF p´8q then Q0paq “ 1. Further, if a ą rF p`8q then Q0paq “ 0, where rF p¨q is defined in Definition 4, and is reproduced here:\nrF p´8q “ lim tÑ´8 rRpt; θ̆ptqq “ min θ min iPrNs fpxi; θq, (164)\nrF p`8q “ lim tÑ`8 rRpt; θ̆ptqq “ min θ max iPrNs fpxi; θq. (165)\nNext, we present our main result on the connection between the superquantile method and TERM.\nTheorem 9. For all t P R, and all θ, and all a P p rF p´8q, rF p`8qq,3\nQpa; θq ď rQpa; t, θq :“ e rRpt;θqt ´ e rF p´8qt\neat ´ e rF p´8qt . (166)\nProof. 
We have\nQpa; θq “ 1 N\nř iPrNs e pa´ rF p´8qqtI\n\"\nfpxi;θq´ĂF p´8q a´ĂF p´8q\ně1 *\n´ 1\nepa´ rF p´8qqt ´ 1 (167)\nď 1 N\nř iPrNs e pfpxi;θq´ rF p´8qqt ´ 1\nepa´ rF p´8qqt ´ 1 (168)\n“ e rRpt;θqt ´ e rF p´8qt\neat ´ e rF p´8qt , (169)\nwhere (167) follows from Lemma 21, (168) follows from Lemma 22, the fact that etx is strictly increasing (resp. decreasing) for t ą 0 (resp. t ă 0) and pepa´ rF p´8qqt´1q is positive (resp. negative) for t ą 0 (resp. t ă 0), and (169) follows from definition.\nLemma 21. For all t P R, and all θ,4\nQpa; θq “ 1 N\nř iPrNs e pa´ rF p´8qqtItfpxi;θqěau ´ 1\nepa´ rF p´8qqt ´ 1 . (170)\n3We define the RHS at t “ 0 via continuous extension. 4We define the RHS at t “ 0 via continuous extension.\nProof. The proof is completed following this identity:\n1\nN\nÿ\niPrNs epa´ rF p´8qqItfpxi;θqěaut “ Qpa; θqepa´ rF p´8qqt ` p1´Qpa; θqq. (171)\nLemma 22. For x ě 0, we have Itx ě 1u ď x.\nTheorem 9 directly leads to the following result.\nTheorem 10. For all a P p rF p´8q, rF p`8qq, we have\nQ0paq ď Q1paq ď Q2paq ď Q3paq “ inf tPR\n#\ne rF ptqt ´ e rF p´8qt\neat ´ e rF p´8qt\n+\n, (172)\nwhere\nQ1paq :“ inf tPR Qpa; θ̆ptqq (173)\nQ2paq :“ Qpa; θ̆prtpaqqq (174) Q3paq :“ rQpa;rtpaq, θ̆prtpaqqq (175)\nand\nrtpaq :“ arg inf tPR\n! rQpa; t, θ̆ptqq )\n“ arg inf tPR\n#\ne rF ptqt ´ e rF p´8qt\neat ´ e rF p´8qt\n+\n. (176)\nProof. The only non-trivial step is to show that Q2paq ď Q3paq. Following Theorem 9,\nQ2paq “ Qpa; θ̆prtpaqq (177) ď inf tPR rQpa; t, θ̆ptqq (178)\n“ Q3paq, (179)\nwhich completes the proof.\nTheorem 10 motivates us with the following approximation on the solutions of the superquantile method. Approximation 1. 
For all a P p rF p´8q, rF p`8qq,

Qpa; θ0paqq “ Q0paq « Q2paq “ Qpa; θ̆prtpaqq, (180)

and hence, θ̆prtpaq is an approximate solution to the superquantile optimization problem.

While we have not characterized how tight this approximation is for a P p rF p´8q, rF p`8qq, we believe that Approximation 1 provides a reasonable solution to the superquantile optimization problem in general. This is evidenced empirically when the approximation is evaluated on the toy examples of Figure 1, and compared with the global solutions of the superquantile method. The results are shown in Figure 6. As can be seen, Q0paq « Q2paq as suggested by Approximation 1. Also, we can see that while the bound in Theorem 10 is not tight, the solution that is obtained from solving it results in a good approximation to the superquantile optimization.

Finally, we draw connections between these results and the k-loss. Notice that minimizing Qpa; θq for a fixed a is equivalent to minimizing a for a fixed Qpa; θq. If we fix Qpa; θq “ pN ´ kq{N, minimizing a would be equivalent to minimizing the k-loss. Formally, let Rpkqpθq be the k-th order statistic of the loss vector. Hence, Rpkq is the k-th smallest loss, and particularly

Rp1qpθq “ qRpθq, (181)

RpNqpθq “ pRpθq. (182)

(a) Numerical results showing the bounds Q1paq, Q2paq, and Q3paq for Q0paq on the point estimation example. (b) Numerical results showing the bounds Q1paq, Q2paq, and Q3paq for Q0paq on the linear regression example.

Figure 6: Q1paq and Q2paq are close to Q0paq, which indicates that the solution obtained from solving Q3paq (which is Q2paq) is a tight approximation of the globally optimal solution of Q0paq.

Thus, for any k P rN s, we define R˚pkq :“ min θ Rpkqpθq. (183)

θ˚pkq :“ arg min θ Rpkqpθq.
(184)

Note that
$$R^*_{(1)} = \tilde F(-\infty), \tag{185}$$
$$R^*_{(N)} = \tilde F(+\infty). \tag{186}$$

Theorem 9 directly implies the following result:

Corollary 11. For all $k \in \{2, \dots, N-1\}$, and all $t \in \mathbb{R}$:
$$\left| e^{(R_{(k)}(\theta) - \tilde F(-\infty))t} - 1 \right| \leq \left(\frac{N}{N-k}\right) \left| e^{(\tilde R(t;\theta) - \tilde F(-\infty))t} - 1 \right|. \tag{187}$$

Proof. We proceed by setting $Q(a;\theta) = \frac{N-k}{N}$ and $a = R_{(k)}(\theta)$ in Theorem 9, which implies the result.

While the bound is left implicit in Corollary 11, we can obtain an explicit bound if we only consider $t \in \mathbb{R}_+$ (i.e., we are interested in k-losses for larger $k$):

Corollary 12. For all $k \in \{2, \dots, N-1\}$, and all $t \in \mathbb{R}_+$:
$$R_{(k)}(\theta) \leq \tilde F(-\infty) + \frac{1}{t}\log\left(\frac{e^{(\tilde R(t;\theta) - \tilde F(-\infty))t} - \frac{k}{N}}{1 - \frac{k}{N}}\right). \tag{188}$$

Proof. The statement follows by algebraic manipulation of Corollary 11." }, { "heading": "H ALGORITHMS FOR SOLVING TERM", "text": "In the main text, we present TERM in the batch setting (Algorithm 1). Here we provide the stochastic variants of the solvers in the context of hierarchical multi-objective tilting (see Eq. (4)).

There are a few points to note about the stochastic solvers (Algorithm 2):

1. It is intractable to compute the exact normalization weights for the samples in the minibatch. Hence, we use $\tilde R_{g,\tau}$, a term that incorporates stochastic dynamics, to follow the tilted objective for each group $g$, which is used for normalizing the weights as in (3).

2. While we sample the group from which we draw the minibatch, for a small number of groups, one might want to draw one minibatch per group and weight the resulting gradients accordingly.

3. The second-to-last line in Algorithm 2, concerning the update of $\tilde R_{g,\tau}$, is not a trivial linear averaging.
Instead, we use a tilted averaging to ensure an unbiased estimator (if $\theta$ is not being updated).

Algorithm 2: Stochastic TERM
Initialize: $\tilde R_{g,\tau} = 0$ for all $g \in [G]$
Input: $t, \tau, \alpha, \lambda$
while stopping criteria not reached do
    sample $g$ on $[G]$ from a Gumbel-Softmax distribution with logits $\tilde R_{g,\tau} + \frac{1}{t}\log|g|$ and temperature $\frac{1}{t}$
    sample minibatch $B$ uniformly at random within group $g$
    compute the loss $f(x;\theta)$ and gradient $\nabla_\theta f(x;\theta)$ for all $x \in B$
    $\tilde R_{B,\tau} \leftarrow$ $\tau$-tilted loss (2) on minibatch $B$
    $\tilde R_{g,\tau} \leftarrow \frac{1}{\tau}\log\left((1-\lambda)e^{\tau \tilde R_{g,\tau}} + \lambda e^{\tau \tilde R_{B,\tau}}\right)$, $\quad w_{\tau,x} \leftarrow e^{\tau f(x;\theta) - \tau \tilde R_{g,\tau}}$
    $\theta \leftarrow \theta - \frac{\alpha}{|B|}\sum_{x\in B} w_{\tau,x}\nabla_\theta f(x;\theta)$
end

The stochastic algorithm above requires roughly the same time/space complexity as mini-batch SGD, and thus scales similarly for large-scale problems. TERM for the non-hierarchical cases can be recovered from Algorithms 1 and 2 by setting the inner-level tilt parameter $\tau = 0$. For completeness, we also describe them here. Algorithm 3 is the sample-level tilting algorithm in the batch setting, and Algorithm 4 is its stochastic variant.

Algorithm 3: Batch Non-Hierarchical TERM
Input: $t, \alpha$
while stopping criteria not reached do
    compute the loss $f(x_i;\theta)$ and gradient $\nabla_\theta f(x_i;\theta)$ for all $i \in [N]$
    $\tilde R(t;\theta) \leftarrow$ $t$-tilted loss (2) on all $i \in [N]$
    $w_i(t;\theta) \leftarrow e^{t(f(x_i;\theta) - \tilde R(t;\theta))}$
    $\theta \leftarrow \theta - \frac{\alpha}{N}\sum_{i\in[N]} w_i(t;\theta)\nabla_\theta f(x_i;\theta)$
end

In order to verify the correctness of Algorithm 4, we plot the distance of the solution $\theta$ to the solution $\theta^*$ obtained by running the full gradient method (Algorithm 1) in terms of the number of iterations. In Figure 7, we see that $\theta$ can be arbitrarily close to $\theta^*$, and Algorithm 4 with $t \neq 0$ converges similarly to mini-batch SGD with $t = 0$. As mentioned in the main text, theoretically analyzing the convergence of stochastic solvers would be an interesting direction of future work.
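For concreteness, the per-sample reweighting at the heart of the batch solver (Algorithm 3) can be sketched in plain Python. The toy one-dimensional squared-error model and all function names below are our own illustration, not the authors' released implementation; the tilted loss follows Eq. (2), computed with a max-shift for numerical stability.

```python
import math

def tilted_loss(losses, t):
    """t-tilted loss: (1/t) * log(mean(exp(t * f_i))) as in Eq. (2),
    computed with a max-shift so the exponentials do not overflow."""
    m = max(t * f for f in losses)
    return (m + math.log(sum(math.exp(t * f - m) for f in losses) / len(losses))) / t

def batch_term_step(theta, xs, ys, t, alpha):
    """One iteration of batch non-hierarchical TERM (Algorithm 3) for the
    toy scalar loss f_i = 0.5 * (theta * x_i - y_i)^2 (our own choice)."""
    losses = [0.5 * (theta * x - y) ** 2 for x, y in zip(xs, ys)]
    grads = [(theta * x - y) * x for x, y in zip(xs, ys)]   # d f_i / d theta
    R = tilted_loss(losses, t)
    weights = [math.exp(t * (f - R)) for f in losses]       # tilted weights; mean is 1
    update = sum(w * g for w, g in zip(weights, grads)) / len(xs)
    return theta - alpha * update
```

As t approaches 0 the weights all tend to 1 and the step reduces to the ordinary ERM gradient step; for t > 0 the largest losses receive the largest weights.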
The challenges would be to characterize the tightness of the estimator $\tilde R$ with respect to the true risk $R$ at each iteration, leveraging the proposed tilted averaging structure.

We summarize the applications we solve with TERM and the algorithms we use in Table 4 below.

Algorithm 4: Stochastic Non-Hierarchical TERM
Initialize: $\tilde R_t = 0$
Input: $t, \alpha, \lambda$
while stopping criteria not reached do
    sample minibatch $B$ uniformly at random from $[N]$
    compute the loss $f(x;\theta)$ and gradient $\nabla_\theta f(x;\theta)$ for all $x \in B$
    $\tilde R_{B,t} \leftarrow$ $t$-tilted loss (2) on minibatch $B$
    $\tilde R_t \leftarrow \frac{1}{t}\log\left((1-\lambda)e^{t\tilde R_t} + \lambda e^{t\tilde R_{B,t}}\right)$, $\quad w_{t,x} \leftarrow e^{tf(x;\theta) - t\tilde R_t}$
    $\theta \leftarrow \theta - \frac{\alpha}{|B|}\sum_{x\in B} w_{t,x}\nabla_\theta f(x;\theta)$
end

Three toy examples (Figure 1): Algorithm 3
Robust Regression (Table 1): Algorithm 3
Robust Classification (Table 2): Algorithm 4
Low-quality Annotators (Figure 3): Algorithm 2 ($\tau = 0$)
Fair PCA (Figure 4): Algorithm 1 ($\tau = 0$)
Class Imbalance (Figure 5): Algorithm 2 ($\tau = 0$)
Variance Reduction (Table 8): Algorithm 1 ($\tau = 0$)
Hierarchical TERM (Table 3): Algorithm 1

Table 4: Applications and their corresponding solvers.

H.1 CONVERGENCE WITH t

First, we note that t-tilted losses are $\beta(t)$-smooth for all $t$. In a small neighborhood around the tilted solution, $\beta(t)$ is bounded for all negative $t$ and moderately positive $t$, whereas it scales linearly with $t$ as $t \to +\infty$, which has been previously studied in the context of exponential smoothing of the max (Kort & Bertsekas, 1972; Pee & Royset, 2011). We prove this formally in Appendix B, Lemma 4, but it can also be observed visually via the toy example in Figure 2. Based on this, we provide a convergence result below for Algorithm 3.

Theorem 13. Under Assumption 2, there exist $C_1, C_2 < \infty$ that do not depend on $t$ such that for any $t \in \mathbb{R}_+$, setting the step size $\alpha = \frac{1}{C_1 + C_2 t}$, after $m$ iterations:
$$\tilde R(t, \theta_m) - \tilde R(t, \breve\theta(t)) \leq \left(1 - \frac{\beta_{\min}}{C_1 + C_2 t}\right)^m \left(\tilde R(t, \theta_0) - \tilde R(t, \breve\theta(t))\right). \tag{189}$$

Proof. First observe that by Lemma 5, $\tilde R(t, \theta)$ is $\beta_{\min}$-strongly convex for all $t \in \mathbb{R}_+$.
Next, notice that by Lemma 4, there exist $C_1, C_2 < \infty$ such that $\tilde R(t;\theta)$ has a $(C_1 + C_2 t)$-Lipschitz gradient for all $t \in \mathbb{R}_+$. Then, the result follows directly from (Karimi et al., 2016)[Theorem 1].

Theorem 13 indicates that solving TERM to a local optimum using gradient-based methods will tend to be as efficient as traditional ERM for small-to-moderate values of $t$ (Jin et al., 2017), which we corroborate via experiments on multiple real-world datasets in Section 5. This is in contrast to solving for the min-max solution, which would be similar to solving TERM as $t \to +\infty$ (Kort & Bertsekas, 1972; Ostrovskii et al., 2020; Pee & Royset, 2011).

Second, recall that the t-tilted loss remains strongly convex for $t > 0$, so long as the original loss function is strongly convex. On the other hand, for sufficiently large negative $t$, the t-tilted loss becomes non-convex. Hence, while the t-tilted solutions for positive $t$ are unique, the objective may have multiple (spurious) local minima for negative $t$ even if the original loss function is strongly convex. For negative $t$, we seek the solution for which the parametric set of t-tilted solutions obtained by sweeping $t \in \mathbb{R}$ remains continuous (as in Figure 1a-c). To this end, for negative $t$, we solve TERM by smoothly decreasing $t$ from 0, ensuring that the solutions form a continuum in $\mathbb{R}^d$. Despite the non-convexity of TERM with $t < 0$, we find that this approach produces effective solutions to multiple real-world problems in Section 5. Additionally, as the objective remains smooth, it is still relatively efficient to solve. We plot the convergence with $t$ on a toy problem in Figure 8." }, { "heading": "I ADDITIONAL EXPERIMENTS", "text": "In this section we provide complete experimental results showcasing the properties of TERM (Appendix I.1) and the use-cases covered in Section 5 (Appendix I.2).
Details on how the experiments themselves were executed are provided in Appendix J.

I.1 EXPERIMENTS TO SHOWCASE PROPERTIES OF TERM

Recall that in Section 2, Interpretation 1 is that TERM can be tuned to re-weight samples to magnify or suppress the influence of outliers. In Figure 9 below, we visually show this effect by highlighting the samples with the largest weight for t → +∞ and t → −∞ on the logistic regression example previously described in Figure 1.

Interpretation 2 is concerned with smooth tradeoffs between the average-loss and max/min-loss. In Figure 10 below, we show that (1) tilted solutions with positive t's achieve a smooth tradeoff between average-loss and max-loss, (2) similarly, negative t's result in a smooth tradeoff between average-loss and min-loss, and (3) increasing t from −∞ to +∞ reduces the variance of the losses.

I.2 COMPLETE CASE STUDIES

Here we provide complete results obtained from applying TERM to a diverse set of applications. We either present full metrics of the empirical results discussed in Section 5, or provide additional experiments demonstrating the effects of TERM in new settings.

Robust regression. In Section 5.1, we focused on noise scenarios with random label noise. Here, we present results involving both feature noise and target noise. We investigate the performance of TERM on two datasets (cal-housing (Pace & Barry, 1997) and abalone (Dua & Graff, 2019)) used in (Yu et al., 2012). Both datasets have 8-dimensional features. We generate noisy samples following the setup in (Yu et al., 2012)—sampling 100 training samples, and randomly corrupting 5% of them by multiplying their features by 100 and multiplying their targets by 10,000. From Table 5 below, we see that TERM significantly outperforms the baseline objectives in the noisy regime on both datasets.

We also provide results on synthetic data across different noise levels in two settings.
In Figure 11, the mean of the noise is different from the mean of the clean data, and in Figure 12, the means of the two groups of data are the same. Similarly, TERM (t = −2) can effectively remove outliers in the presence of random noise.

Robust classification. Recall that in Section 5.1, for classification in the presence of label noise, we only compare with baselines which do not require clean validation data. In Table 6 below, we report the complete results of comparing TERM with all baselines, including MentorNet-DD (Jiang et al., 2018), which needs additional clean data. In particular, in contrast to the other methods, MentorNet-DD uses 5,000 clean validation images. TERM is competitive with and can even exceed the performance of MentorNet-DD, even though it does not have access to this clean data.

To interpret the noise more easily, we provide a toy logistic regression example with synthetic data here. In Figure 13, we see that TERM with t = −2 (blue) can converge to the correct classifier under 20%, 40%, and 80% noise.

(Adversarial or structured noise.) As a word of caution, we note that the experiments thus far have focused on random noise; as one might expect, TERM with negative t's could potentially overfit to outliers if they are constructed in an adversarial way. In the examples shown in Figure 14, under 40% noise and 80% noise, TERM has a high error measured on the clean data (green dots).

Low-quality annotators. In Section 5.1, we demonstrate that TERM can be used to mitigate the effect of noisy annotators, and we assume each annotator is either always correct, or always uniformly assigning random labels. Here, we explore a different and possibly more practical scenario where there are four noisy annotators who corrupt 0%, 20%, 40%, and 100% of their data by assigning labels uniformly at random, and there is one additional adversarial annotator who always assigns wrong labels.
We assume the data points labeled by each annotator do not overlap, since (Khetan et al., 2018) show that obtaining one label per sample is optimal for the data collectors under a fixed annotation budget. We compare TERM with several baselines: (a) training without the data coming from the adversarial annotator, (b) training without the data coming from the worst two annotators, and (c) training with all the clean data combined (Genie ERM). The results are shown in Figure 15. We see that TERM outperforms the strong baselines of removing one or two noisy annotators, and closely matches the performance of training with all the available clean data.\nFair federated learning. Federated learning involves learning statistical models across massively distributed networks of remote devices or isolated organizations (Li et al., 2020a; McMahan et al., 2017). Ensuring fair (i.e., uniform) performance distribution across the devices is a major concern in federated settings (Li et al., 2020b; Mohri et al., 2019), as using current approaches for federated learning (FedAvg (McMahan et al., 2017)) may result in highly variable performance across the network. (Li et al., 2020b) consider solving an alternate objective for federated learning, called q-FFL, to dynamically emphasize the worst-performing devices, which is conceptually similar to the goal of TERM, though it is applied specifically to the problem of federated learning and limited to the case of positive t. Here, we compare TERM with q-FFL in their setup on the vehicle dataset (Duarte & Hu, 2004) consisting of data collected from 23 distributed sensors (hence 23 devices). We tilt the L2 regularized linear SVM objective at the device level. At each communication round, we re-weight the accumulated local model updates from each selected device based on the weights estimated via Algorithm 2. 
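The device-level re-weighted aggregation described above can be sketched as follows. This is our own simplified, scalar-model rendering of one server round, under the assumption that each device reports its local update together with its local (tilted) loss; it is not the exact experimental code, whose details live in Algorithm 2.

```python
import math

def tilted_aggregate(global_model, local_updates, local_losses, t):
    """One server round: combine per-device updates with weights
    proportional to exp(t * R_k), where R_k is device k's local loss.
    t = 0 reduces to uniform FedAvg-style averaging; larger t emphasizes
    the worst-performing devices. Simplified sketch, scalar models only."""
    m = max(t * r for r in local_losses)          # shift for numerical stability
    raw = [math.exp(t * r - m) for r in local_losses]
    total = sum(raw)
    weights = [w / total for w in raw]            # normalized, sums to 1
    step = sum(w * u for w, u in zip(weights, local_updates))
    return global_model + step, weights
```

The design mirrors the sample-level tilted weights: the softmax over t-scaled device losses interpolates between plain averaging (t = 0) and min-max-style emphasis on the worst device (t → +∞).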
From Figure 16, we see that, similar to q-FFL, TERM (t = 0.1) can also significantly promote the accuracy on the worst device while maintaining the overall performance. The statistics of the accuracy distribution are reported in Table 7 below.

Improving generalization via variance reduction. We compare TERM (applied at the class level as in (4), with logistic loss) with robustly regularized risk (RobustRegRisk) as in (Namkoong & Duchi, 2017) on the HIV-1 (Dua & Graff, 2019; Rögnvaldsson, 2013) dataset originally investigated by (Namkoong & Duchi, 2017). We examine the accuracy on the rare class (Y = 0), the common class (Y = 1), and overall accuracy. The mean and standard error of accuracies are reported in Table 8. RobustRegRisk and TERM offer similar performance improvements compared with other baselines, such as linear SVM, LearnReweight (Ren et al., 2018), FocalLoss (Lin et al., 2017), and HRM (Leqi et al., 2019). For larger t, TERM achieves similar accuracy in both classes, while RobustRegRisk does not show similar trends by sweeping its hyperparameters. It is common to adjust the decision threshold to boost the accuracy on the rare class. We do this for ERM and RobustRegRisk and optimize the threshold so that ERM+ and RobustRegRisk+ result in the same validation accuracy on the rare class as TERM (t = 50). TERM achieves similar performance to RobustRegRisk+, without the need for an extra tuned hyperparameter." }, { "heading": "J EXPERIMENTAL DETAILS", "text": "We first describe the datasets and models used in each experiment presented in Section 5, and then provide a detailed setup including the choices of hyperparameters.
All code and datasets are publicly available at github.com/litian96/TERM.\nJ.1 DATASETS AND MODELS\nWe apply TERM to a diverse set of real-world applications, datasets, and models.\nIn Section 5.1, for regression tasks, we use the drug discovery data extracted from (Diakonikolas et al., 2019) which is originally curated from (Olier et al., 2018) and train linear regression models with different losses. There are 4,085 samples in total with each having 411 features. We randomly split the dataset into 80% training set, 10% validation set, and 10% testing set. For mitigating noise on classification tasks, we use the standard CIFAR-10 data and their standard train/val/test partitions along with a standard inception network (Szegedy et al., 2016). For experiments regarding mitigating noisy annotators, we again use the CIFAR-10 data and their standard partitions with a ResNet20 model. The noise generation procedure is described in Section 5.1.\nIn Section 5.2, for fair PCA experiments, we use the complete Default Credit data to learn lowdimensional approximations and the loss is computed on the full training set. We follow the exact data processing steps described in the work (Samadi et al., 2018) we compare with. There are 30,000 total data points with 21-dimensional features (after preprocessing). Among them, the high education group has 24,629 samples and the low education group has 5,371 samples. For class imbalance experiments, we directly take the unbalanced data extracted from MNIST (LeCun et al., 1998) used in (Ren et al., 2018). When demonstrating the variance reduction of TERM, we use the HIV-1 dataset (Rögnvaldsson, 2013) as in (Namkoong & Duchi, 2017) and randomly split it into 80% train, 10% validation, and 10% test set. There are 6,590 total samples and each has 160 features. We report results based on five such random partitions of the data. 
We train logistic regression models (without any regularization) for this binary classification task for TERM and the baseline methods. We also investigate the performance of a linear SVM.

In Section 5.3, the HIV-1 data are the same as in Section 5.2. We also manually subsample the data to make it more imbalanced, or inject random noise, as described in Section 5.3.

J.2 HYPERPARAMETERS

Selecting t. In Section 5.2, where we consider positive t's, we select t from a limited candidate set of {0.1, 0.5, 1, 5, 10, 50, 100, 200} on the held-out validation set. For initial robust regression experiments, RMSE changed by only 0.08 on average across t; we thus used t = −2 for all experiments involving noisy training samples (Section 5.1 and Section 5.3).

Other parameters. For all experiments, we tune all other hyperparameters (the learning rates, the regularization parameters, the decision threshold for ERM+, ρ for (Namkoong & Duchi, 2017), α and γ for focal loss (Lin et al., 2017)) based on a validation set, and select the best one. For experiments regarding focal loss (Lin et al., 2017), we select the class-balancing parameter (α in the original focal loss paper) from range(0.05, 0.95, 0.05) and select the main parameter γ from {0.5, 1, 2, 3, 4, 5}. We tune ρ in (Namkoong & Duchi, 2017) such that ρn is selected from {0.5, 1, 2, 3, 4, 5, 10}, where n is the training set size. All regularization parameters, including regularization for linear SVM, are selected from {0.0001, 0.01, 0.1, 1, 2}. For all experiments on the baseline methods, we use the default hyperparameters in the original paper (or the open-sourced code).

We summarize a complete list of main hyperparameter values as follows.

Section 5.1:

• Robust regression. The threshold parameter δ for Huber loss for all noise levels is 1; the corruption parameter k for CRR is 500 (20% noise), 1000 (40% noise), and 3000 (80% noise); and TERM uses t = −2.

• Robust classification.
The results are all based on the default hyperparameters provided by the open-sourced code of MentorNet (Jiang et al., 2018), if applicable. We tune the q parameter for generalized cross entropy (GCE) from {0.4, 0.8, 1.0} and select the best one for each noise level. For TERM, we scale t linearly with the number of iterations from 0 to −2 for all noise levels.

• Low-quality annotators. For all methods, we use the same set of hyperparameters. The initial step size is set to 0.1 and decayed to 0.01 at epoch 50. The batch size is 100.

Section 5.2:

• Fair PCA. We use the default hyperparameters and directly run the public code of (Samadi et al., 2018) to get the results on the min-max fairness baseline. We use a learning rate of 0.001 for our gradient-based solver for all target dimensions.

• Handling class imbalance. We take the open-sourced code of LearnReweight (Ren et al., 2018) and use the default hyperparameters for the baselines of LearnReweight, HardMine, and ERM. We implement focal loss, and select α = 0.05, γ = 2.

• Variance reduction. The regularization parameter for linear SVM is 1. γ for focal loss is 2. We perform binary search on the decision thresholds for ERM+ and RobustRegRisk+, and choose 0.26 and 0.49, respectively.

Section 5.3:

• We tune the q parameter for GCE based on validation data. We use q = 0, 0, 0.7, and 0.3, respectively, for the four scenarios we consider. For RobustRegRisk, we use ρn = 10 (where n is the training sample size), and we find that the performance is not sensitive to the choice of ρ. For focal loss, we tune the hyperparameters for best performance and select γ = 2 and α = 0.5, 0.1, 0.5, and 0.2 for the four scenarios. We use t = −2 for TERM in the presence of noise, and tune the positive t's based on the validation data. In particular, the values of tilts under the four cases are (0, 0.1), (0, 50), (-2, 5), and (-2, 10) for TERMsc and (0.1, 0), (50, 0), (1, -2), and (50, -2) for TERMca."
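The validation-based selection of t described in Appendix J.2 amounts to a simple sweep over the candidate set. A minimal sketch, where `train_fn` and `val_metric_fn` are hypothetical placeholders (our own names) for model training and validation scoring:

```python
def select_t(candidates, train_fn, val_metric_fn):
    """Pick the tilt t from a candidate set by held-out validation, as in
    Appendix J.2 (e.g., candidates = [0.1, 0.5, 1, 5, 10, 50, 100, 200]).
    train_fn(t) returns a fitted model; val_metric_fn(model) returns a
    validation score where higher is better. Both callables are placeholders."""
    best_t, best_score, best_model = None, float("-inf"), None
    for t in candidates:
        model = train_fn(t)
        score = val_metric_fn(model)
        if score > best_score:
            best_t, best_score, best_model = t, score, model
    return best_t, best_model
```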
}, { "heading": "K DISCUSSION", "text": "Our proposed work provides an alternative to empirical risk minimization (ERM), which is ubiquitous throughout machine learning. As such, our framework (TERM) could be widely used for applications both positive and negative. However, our hope is that the TERM framework will allow machine learning practitioners to easily modify the ERM objective to handle practical concerns such as enforcing fairness amongst subgroups, mitigating the effect of outliers, and ensuring robust performance on new, unseen data. One potential downside of the TERM objective is that if the underlying dataset is not well-understood, incorrectly tuning t could have the unintended consequence of magnifying the impact of biased/corrupted data in comparison to traditional ERM. Indeed, critical to the success of such a framework is understanding the implications of the modified objective, both theoretically and empirically. The goal of this work is therefore to explore these implications so that it is clear when such a modified objective would be appropriate.\nIn terms of the use-cases explored with the TERM framework, we relied on benchmark datasets that have been commonly explored in prior work (e.g., Samadi et al., 2018; Tantipongpipat et al., 2019; Yang et al., 2010; Yu et al., 2012). However, we note that some of these common benchmarks, such as cal-housing (Pace & Barry, 1997) and Credit (Yeh & Lien, 2009), contain potentially sensitive information. While the goal of our experiments was to showcase that the TERM framework could be useful in learning fair representations that suppress membership bias and hence promote fairer performance, developing an understanding for—and removing—such membership biases requires a more comprehensive treatment of the problem that is outside the scope of this work." } ]
2021
TILTED EMPIRICAL RISK MINIMIZATION
SP:56e4d560f80360bd6f50d162caade651b5ff91a6
[ "This work presents a novel FL algorithm named HeteroFL (the name might sounds weird to some peoples) and 3 different simple methods to improve FL in heterogeneous conditions (i.e. both in term of clients and data partitioning). These tricks are: 1. A revised batchnormalisation; 2. a pre-activity scaling; 3. a masked loss (i.e only consider local classes) to help with non-IID datasets. All these modifications have been tested on 3 different datasets and 2 different tasks. From the results, we can see that the proposed approach works better. Although, it is not clear from where the benefit comes. ", "This paper proposes a new federated learning framework called HeteroFL, which supports the training of different sizes of local models in heterogeneous clients. Clients with higher computation capability can train larger models while clients with less computation capability train smaller models, and all these model architectures belong to the same model class. This approach dramatically benefits clients with limited computation capability and fully exploits their computation power. " ]
Federated Learning (FL) is a method of training machine learning models on private data distributed over a large number of possibly heterogeneous clients such as mobile phones and IoT devices. In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities. Our solution can enable the training of heterogeneous local models with varying computation complexities and still produce a single global inference model. For the first time, our method challenges the underlying assumption of existing work that local models have to share the same architecture as the global model. We demonstrate several strategies to enhance FL training and conduct extensive empirical evaluations, including five computation complexity levels of three model architecture on three datasets. We show that adaptively distributing subnetworks according to clients’ capabilities is both computation and communication efficient.
[ { "affiliations": [], "name": "NEOUS CLIENTS" }, { "affiliations": [], "name": "Enmao Diao" }, { "affiliations": [], "name": "Jie Ding" } ]
[ { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jerry Li", "Ryota Tomioka", "Milan Vojnovic" ], "title": "Qsgd: Communication-efficient sgd via gradient quantization and encoding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mathieu Andreux", "Jean Ogier du Terrail", "Constance Beguier", "Eric W Tramel" ], "title": "Siloed federated learning for multi-centric histopathology datasets. In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning", "venue": null, "year": 2020 }, { "authors": [ "Tal Ben-Nun", "Torsten Hoefler" ], "title": "Demystifying parallel and distributed deep learning: An in-depth concurrency analysis", "venue": "ACM Computing Surveys (CSUR),", "year": 2019 }, { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloe Kiddon", "Jakub Konečnỳ", "Stefano Mazzocchi", "H Brendan McMahan" ], "title": "Towards federated learning at scale: System design", "venue": "arXiv preprint arXiv:1902.01046,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Andrew Hard", "Kanishka Rao", "Rajiv Mathews", "Swaroop Ramaswamy", "Françoise Beaufays", "Sean Augenstein", "Hubert Eichner", "Chloé Kiddon", "Daniel Ramage" ], "title": "Federated learning for mobile keyboard prediction", "venue": "arXiv preprint arXiv:1811.03604,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Nikita Ivkin", "Daniel Rothchild", "Enayat Ullah", "Ion Stoica", "Raman Arora" ], "title": "Communicationefficient distributed sgd with sketching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yihan Jiang", "Jakub Konečnỳ", "Keith Rush", "Sreeram Kannan" ], "title": "Improving federated learning personalization via model agnostic meta learning", "venue": null, "year": 1909 }, { "authors": [ "Mikhail Khodak", "Maria-Florina F Balcan", "Ameet S Talwalkar" ], "title": "Adaptive gradient-based metalearning methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { 
"authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Ang Li", "Jingwei Sun", "Binghui Wang", "Lin Duan", "Sicheng Li", "Yiran Chen", "Hai Li" ], "title": "Lotteryfl: Personalized and communication-efficient federated learning with lottery ticket hypothesis on non-iid datasets", "venue": "arXiv preprint arXiv:2008.03371,", "year": 2020 }, { "authors": [ "Daliang Li", "Junpu Wang" ], "title": "Fedmd: Heterogenous federated learning via model distillation", "venue": "arXiv preprint arXiv:1910.03581,", "year": 2019 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Paul Pu Liang", "Terrance Liu", "Liu Ziyin", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ], "title": "Think locally, act globally: Federated learning with local and global representations", "venue": "arXiv preprint arXiv:2001.01523,", "year": 2020 }, { "authors": [ "Wei Yang Bryan Lim", "Nguyen Cong Luong", "Dinh Thai Hoang", "Yutao Jiao", "Ying-Chang Liang", "Qiang Yang", "Dusit Niyato", "Chunyan Miao" ], "title": "Federated learning in mobile edge networks: A comprehensive survey", "venue": "IEEE Communications Surveys & Tutorials,", "year": 2020 }, { "authors": [ "Lingjuan Lyu", "Han Yu", "Qiang Yang" ], "title": "Threats to federated learning: A survey", 
"venue": "arXiv preprint arXiv:2003.02133,", "year": 2020 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Jae Ro", "Ananda Theertha Suresh" ], "title": "Three approaches for personalization with applications to federated learning", "venue": "arXiv preprint arXiv:2002.10619,", "year": 2020 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Luca Melis", "Congzheng Song", "Emiliano De Cristofaro", "Vitaly Shmatikov" ], "title": "Exploiting unintended feature leakage in collaborative learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843,", "year": 2016 }, { "authors": [ "Takayuki Nishio", "Ryo Yonetani" ], "title": "Client selection for federated learning with heterogeneous resources in mobile edge", "venue": "IEEE International Conference on Communications (ICC),", "year": 2019 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Chandra Thapa", "Seyit" ], "title": "Camtepe. 
Splitfed: When federated learning meets split learning", "venue": "arXiv preprint arXiv:2004.12088,", "year": 2020 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kangkang Wang", "Rajiv Mathews", "Chloé Kiddon", "Hubert Eichner", "Françoise Beaufays", "Daniel Ramage" ], "title": "Federated evaluation of on-device personalization", "venue": null, "year": 1910 }, { "authors": [ "Xun Xian", "Xinran Wang", "Jie Ding", "Reza Ghanadan" ], "title": "Assisted learning: a framework for multiorganization learning", "venue": "NeurIPS 2020 (spotlight),", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Bo Zhao", "Konda Reddy Mopuri", "Hakan Bilen" ], "title": "idlg: Improved deep leakage from gradients", "venue": "arXiv preprint arXiv:2001.02610,", "year": 2020 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 }, { "authors": [ "Ligeng Zhu", "Zhijian Liu", "Song Han" ], "title": "Deep leakage from gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Mobile devices and the Internet of Things (IoT) devices are becoming the primary computing resource for billions of users worldwide (Lim et al., 2020). These devices generate a significant amount of data that can be used to improve numerous existing applications (Hard et al., 2018). From the privacy and economic point of view, due to these devices’ growing computational capabilities, it becomes increasingly attractive to store data and train models locally. Federated learning (FL) (Konečnỳ et al., 2016; McMahan et al., 2017) is a distributed machine learning framework that enables a number of clients to produce a global inference model without sharing local data by aggregating locally trained model parameters. A widely accepted assumption is that local models have to share the same architecture as the global model (Li et al., 2020b) to produce a single global inference model. With this underlying assumption, we have to limit the global model complexity for the most indigent client to train its data. In practice, the computation and communication capabilities of each client may vary significantly and even dynamically. It is crucial to address heterogeneous clients equipped with very different computation and communication capabilities.\nIn this work, we propose a new federated learning framework called HeteroFL to train heterogeneous local models with varying computation complexities and still produce a single global inference model. This model heterogeneity differs significantly from the classical distributed machine learning framework where local data are trained with the same model architecture (Li et al., 2020b; Ben-Nun & Hoefler, 2019). It is natural to adaptively distribute subnetworks according to clients’\ncapabilities. However, to stably aggregate heterogeneous local models to a single global model under various heterogeneous settings is not apparent. Addressing these issues is thus a key component of our work. 
The main contributions of this work are three-fold.\n• We identify the possibility of model heterogeneity and propose an easy-to-implement framework, HeteroFL, that can train heterogeneous local models and aggregate them stably and effectively into a single global inference model. Our approach outperforms state-of-the-art results without introducing additional computation overhead.\n• Our proposed solution addresses various heterogeneous settings where different proportions of clients have distinct capabilities. Our results demonstrate that even when the model heterogeneity changes dynamically, the learning result from our framework is still stable and effective.\n• We introduce several strategies for improving FL training and demonstrate that our method is robust against balanced non-IID statistical heterogeneity. Also, the proposed method can reduce the number of communication rounds needed to obtain state-of-the-art results. Experimental studies have been performed to evaluate the proposed approach." }, { "heading": "2 RELATED WORK", "text": "Federated Learning aims to train massively distributed models at a large scale (Bonawitz et al., 2019). FedAvg, proposed by McMahan et al. (2017), is currently the most widely adopted FL baseline, which reduces communication cost by allowing clients to train multiple iterations locally. Major challenges involved in FL include communication efficiency, system heterogeneity, statistical heterogeneity, and privacy (Li et al., 2020b). To reduce communication costs in FL, some studies propose to use data compression techniques such as quantization and sketching (Konečnỳ et al., 2016; Alistarh et al., 2017; Ivkin et al., 2019), and some propose to adopt split learning (Thapa et al., 2020). To tackle system heterogeneity, techniques of asynchronous communication and active sampling of clients have been developed (Bonawitz et al., 2019; Nishio & Yonetani, 2019). 
Statistical heterogeneity is the major battleground for current FL research. A research trend is to adapt the global model to accommodate personalized local models for non-IID data (Liang et al., 2020), e.g., by integrating FL with other frameworks such as assisted learning (Xian et al., 2020), meta-learning (Jiang et al., 2019; Khodak et al., 2019), multi-task learning (Smith et al., 2017), transfer learning (Wang et al., 2019; Mansour et al., 2020), knowledge distillation (Li & Wang, 2019), and the lottery ticket hypothesis (Li et al., 2020a). Nevertheless, these personalization methods often introduce additional computation and communication overhead that may not be necessary. Another major concern of FL is data privacy (Lyu et al., 2020), as model gradient updates can reveal sensitive information (Melis et al., 2019) and even local training data (Zhu et al., 2019; Zhao et al., 2020).\nTo the best of our knowledge, this is the first work that allows local models to have different architectures from the global model. Heterogeneous local models can allow local clients to adaptively contribute to the training of global models. System heterogeneity and communication efficiency can be well addressed by our approach, where local clients can optimize low computation complexity models and therefore communicate a small number of model parameters. To address statistical heterogeneity, we propose a “Masking Trick” for balanced non-IID data partition in classification problems. We also propose a modification of Batch Normalization (BN) (Ioffe & Szegedy, 2015), as the privacy concern over running estimates hinders the usage of advanced deep learning models." }, { "heading": "3 HETEROGENEOUS FEDERATED LEARNING", "text": "" }, { "heading": "3.1 HETEROGENEOUS MODELS", "text": "Federated Learning aims to train a global inference model from locally distributed data {X_1, . . . , X_m} across m clients. The local models are parameterized by model parameters {W_1, . . . , W_m}. 
The server will receive local model parameters and aggregate them into a global model W_g through model averaging. This process iterates over multiple communication rounds and can be formulated as W_g^t = (1/m) Σ_{i=1}^{m} W_i^t at iteration t. At the next iteration, W_g^t is transmitted to a subset of local clients, which update their local models as W_i^{t+1} = W_g^t.\nIn this work, we focus on relaxing the assumption that local models need to share the same architecture as the global model. Since our primary motivation is to reduce the computation and communication complexity of local clients, we consider local models that have a similar architecture but can shrink their complexity within the same model class. To simplify global aggregation and local update, it is tempting to make local model parameters a subset of the global model parameters, W_i^{t+1} ⊆ W_g^t. However, this raises several new challenges, such as the optimal way to select subsets of global model parameters, compatibility with state-of-the-art model architectures, and minimal modification of the existing FL framework. We develop Heterogeneous Federated Learning (HeteroFL) to address these issues in the context of deep learning models.\nA variety of works show that we can modulate the size of deep neural networks by varying the width and depth of networks (Zagoruyko & Komodakis, 2016; Tan & Le, 2019). Because we aim to reduce the computation complexity of local models, we choose to vary the width of hidden channels. In this way, we can significantly reduce the number of local model parameters, while the local and global model architectures remain within the same model class, which stabilizes global model aggregation.\nWe demonstrate our method of selecting subsets of global model parameters W_l for a single hidden layer parameterized by W_g ∈ R^{d_g × k_g} in Fig. 1, where d_g and k_g are the output and input channel sizes of this layer. 
It is possible to have multiple computation complexity levels W_l^p ⊂ W_l^{p−1} ⊂ · · · ⊂ W_l^1 as illustrated in Fig. 1. Let r be the hidden channel shrinkage ratio such that d_l^p = r^{p−1} d_g and k_l^p = r^{p−1} k_g. It follows that the size of local model parameters |W_l^p| = r^{2(p−1)} |W_g| and the model shrinkage ratio R = |W_l^p| / |W_g| = r^{2(p−1)}. With this construction, we can adaptively allocate subsets of global model parameters according to the corresponding capabilities of local clients. Suppose that the number of clients in each computation complexity level is {m_1, . . . , m_p}. Specifically, we perform global aggregation in the following way.\nW_l^p = (1/m) Σ_{i=1}^{m} W_i^p ,  W_l^{p−1} \\ W_l^p = (1/(m − m_p)) Σ_{i=1}^{m − m_p} (W_i^{p−1} \\ W_i^p) , . . . (1)\nW_l^1 \\ W_l^2 = (1/(m − m_{2:p})) Σ_{i=1}^{m − m_{2:p}} (W_i^1 \\ W_i^2) (2)\nW_g = W_l^1 = W_l^p ∪ (W_l^{p−1} \\ W_l^p) ∪ · · · ∪ (W_l^1 \\ W_l^2) (3)\nFor notational convenience, we have dropped the iteration index t. We denote W_i^p as a matrix/tensor. W_g^t[: d_m, : k_m] denotes the upper left submatrix with a size of d_m × k_m. Also, W_g^{p−1,t+1} \\ W_g^{p,t+1} denotes the set of elements included in W_g^{p−1,t+1} but excluded from W_g^{p,t+1}. We exemplify the above equations using Fig. 1. The first part of Equation (1) shows that the smallest part of model parameters (blue, p = 3) is aggregated from all the local clients that contain it.\nIn the second part of Equation (1), the set difference between part p − 1 (orange) and p (blue) of model parameters is aggregated from local clients with computation complexity level smaller than p − 1. In Equation (2), the red part of model parameters can be similarly aggregated from m − m_{2:p} = m_1 clients. In Equation (3), the global model parameters W_g^t are constructed from the union of all disjoint sets of the partition. In summary, each parameter will be averaged from those clients whose allocated parameter matrix contains that parameter. 
Thus, a model of intermediate complexity will have its parameters fully averaged with all the larger models but only partially with smaller models (according to the corresponding upper left submatrix).\nSeveral works show that wide neural networks can drop a tremendous number of parameters per layer and still produce acceptable results (Han et al., 2015; Frankle & Carbin, 2018). The intuition is thus to perform global aggregation across all local models, at least on one subnetwork. To stabilize global model aggregation, we also allocate a fixed subnetwork for every computation complexity level. Our proposed inclusive subsets of global model parameters also guarantee that smaller local models will aggregate with more local models.\nThus, small local models can benefit more from global aggregation by performing less global aggregation for part of the larger local model parameters. We empirically found that this approach produces better results than uniformly sampled subnetworks for each client or computation complexity level." }, { "heading": "3.2 STATIC BATCH NORMALIZATION", "text": "After global model parameters are distributed to active local clients, we can optimize local model parameters with private data. It is well-known that the latest deep learning models usually adopt Batch Normalization (BN) to facilitate and stabilize optimization. However, classical FedAvg and most recent works avoid BN. A major concern of BN is that it requires running estimates of representations at every hidden layer. Uploading these statistics to the server will cause higher communication costs and privacy issues. Andreux et al. (2020) propose to track running statistics locally.\nWe highlight an adaptation of BN named static Batch Normalization (sBN) for optimizing privacy-constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalizes batch data. 
We do not track the local running statistics, as the size of local models may also vary dynamically. This method is suitable for HeteroFL as every communication round is independent. After the training process finishes, the server sequentially queries local clients and cumulatively updates the global BN statistics. There exist privacy concerns about calculating global statistics cumulatively, and we hope to address these issues in future work. We also empirically found that this trick significantly outperforms other forms of normalization methods, including InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018), and LayerNorm (Ba et al., 2016), as shown in Table 4 and Table 5." }, { "heading": "3.3 SCALER", "text": "There still exists another cornerstone of our HeteroFL framework. Because we need to optimize local models for multiple epochs, local model parameters at different computation complexity levels will diverge to different scales. This known phenomenon was initially discussed in the dropout work (Srivastava et al., 2014). To directly use the full model during the inference phase, inverted dropout with dropout rate q scales representations by 1/(1 − q) during the training phase. In practice, dropout is usually attached after the activation layer, as the selection of subnetworks is performed with masking. Our method directly selects subnetworks from the subsets of global model parameters. Therefore, we append a Scaler module right after the parametric layer and before the sBN and activation layers. The Scaler module scales representations by 1/r^{p−1} during the training phase. After the global aggregation, the global model can be directly used for inference without scaling. To further illustrate this point, we include a comprehensive ablation study in Tables 4 and 5. 
A typical linear hidden layer used in our HeteroFL framework can be formulated as\ny = φ(sBN(Scaler(X_m W_m^p + b_m^p))) (4)\nwhere y is the output, φ(·) denotes a non-linear activation layer, e.g., ReLU(), and W_m^p, b_m^p are the weight and bias for local model m at computation complexity level p. With all the practical methods mentioned above, we propose the complete pseudo-code for our HeteroFL framework in Algorithm 1. The local capabilities information L_m is an abstraction of the computation and communication capabilities of a local client m. Once this information is communicated to the server, the server can know the model complexity that should be allocated to the client. We can also optionally update the learning rate to facilitate optimization, and update the local capabilities information if it changes dynamically.\nAlgorithm 1: HeteroFL: Heterogeneous Federated Learning\nInput: Data X_i distributed on M local clients, the fraction C of active clients per communication round, the number of local epochs E, the local minibatch size B, the learning rate η, the global model parameterized by W_g, the channel shrinkage ratio r, and the number of computation complexity levels P.\nSystem executes:\nInitialize W_g^0 and local capabilities information L_{1:K}\nfor each communication round t = 0, 1, 2, . . . do\n  M_t ← max(C · M, 1)\n  S_t ← random set of M_t clients\n  for each client m ∈ S_t in parallel do\n    Determine computation complexity level p based on L_m\n    r_m ← r^{p−1}, d_m ← r_m d_g, k_m ← r_m k_g\n    W_m^t ← W_g^t[: d_m, : k_m]\n    W_m^{t+1} ← ClientUpdate(m, r_m, W_m^t)\n  end\n  for each computation complexity level p do\n    W_g^{p−1,t+1} \\ W_g^{p,t+1} ← (1/(M_t − M_{p:P,t})) Σ_{i=1}^{M_t − M_{p:P,t}} (W_i^{p−1,t+1} \\ W_i^{p,t+1})\n  end\n  W_g^{t+1} ← ⋃_{p=1}^{P} (W_g^{p−1,t+1} \\ W_g^{p,t+1})\n  Update L_{1:K}, η (Optional)\nend\nQuery representation statistics from local clients (Optional)\nClientUpdate(m, r_m, W_m):\n  B_m ← Split local data X_m into batches of size B\n  for each local epoch e from 1 to E do\n    for batch b_m ∈ B_m do\n      W_m ← W_m − η ∇ℓ(W_m, r_m; b_m)\n    end\n  end\n  Return W_m to server" 
}, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We trained over 600 individual models for exploring and demonstrating the effectiveness of our method. We experimented with MNIST and CIFAR10 image classification tasks and the WikiText2 language modeling task (LeCun et al., 1998; Krizhevsky et al., 2009; Merity et al., 2016; Devlin et al., 2018).\nOur experiments are performed with three different models including a CNN for MNIST, a preactivated ResNet (PreResNet18) (He et al., 2016) for CIFAR10 and a Transformer (Vaswani et al., 2017) for WikiText2. We replace BN in CNN and PreResNet18 with our proposed sBN, and attach the Scaler module after each convolution layer. To study federated optimization, we adopt data partition the same as in (McMahan et al., 2017; Liang et al., 2020). We have 100 clients, and the fraction C of active clients per communication round is 0.1 throughout our experiments. For IID data partition, we uniformly assign the same number of data examples for each client. For balanced non-IID data partition, we assume that the label distribution is skewed, where clients will only have examples at most from two classes and the number of examples per class is balanced. We note that there exist other kinds of non-IID data partition, e.g., the unbalanced non-IID data partition where clients may hold unbalanced labeled dataset and the feature distribution skew where clients may hold different features. We conduct a masked language modeling task with a 15% masking rate and uniformly\nassign balanced data examples for each client. It needs to point out that each client will roughly have 3000 different words in their local dataset, while the total vocabulary size is 33278. The details regarding hyperparameters and model architecture can be found in Table 6 of the Appendix.\nTo study the effectiveness of our proposed HeteroFL framework, we construct five different computation complexity levels {a, b, c, d, e} with the hidden channel shrinkage ratio r = 0.5. 
We have tried various shrinkage ratios, and we found that it is most illustrative to use the discrete complexity levels 0.5, 0.25, 0.125, and 0.0625 (relative to the most complex model). For example, model ‘a’ has all the model parameters, while models ‘b’ to ‘e’ have the effective shrinkage ratios 0.5, 0.25, 0.125, and 0.0625. We note that the complexity of ‘e’ is close to a logistic regression model. Our experiments indicated that the ratio can be an arbitrary value in (0, 1] and can change dynamically. In practice, using a dictionary of discrete complexity levels is convenient for coordination purposes.\nEach local client is assigned an initial computation complexity level. We annotate Fix for experiments with a fixed assignment of computation complexity levels, and Dynamic for local clients uniformly sampling computation complexity levels at each communication round. We perform distinct experiments for Fix and Dynamic assignments. All the figures are based on the Fix scenario, where we considered models of different sizes, and each client is allocated a fixed size. All the tables are based on the Dynamic scenario, where we randomly vary the allocation of clients’ model complexity, and the ratio of the number of weak learners is fixed to 50%.\nThe x-axis of the figures represents the average number of model parameters. When 10% of clients use model ‘a’ and 90% use model ‘e’, the average number of model parameters is 0.1 × (size of model ‘a’) + 0.9 × (size of model ‘e’). We interpolate this partition from 10% to 100% with step size 10% to demonstrate the effect of the proportion of clients with various computation complexity levels.\nTo demonstrate the effect of dynamically varying computation and communication capabilities, we uniformly sample from various combinations of computation complexity levels. For example, model ‘a-b-c-d-e’ means that we uniformly sample from all possible available levels for every active client at each communication round. 
We show the number of model parameters, FLOPs, and Space (MB) to indicate the computation and communication requirements of our methods. For example, since we uniformly sample levels, for model ‘a-e’ we calculate computational metrics by averaging those of models ‘a’ and ‘e’. The ratio is calculated as the number of parameters of a given model relative to its 100% global model.\nWe compare our results to other baseline methods like Standalone, FedAvg, and LG-FedAvg gathered from (Liang et al., 2020). Standalone means there is no communication between clients and the server. In our experimental studies, we considered more complex models compared with the existing work. In particular, the baseline models in LG-FedAvg used an MLP (on MNIST) and a CNN (on CIFAR10). In terms of the number of parameters, our models ‘a-e’ (on MNIST) and ‘b-e’ (on CIFAR10) are comparable with those baselines. In terms of FLOPs, our model ‘d-e’ (on MNIST and CIFAR10) can be compared with those baselines. The single-letter models ‘a’, ‘b’, ‘c’, ‘d’, ‘e’ are our implementations of FedAvg equipped with the sBN and Masked Cross-Entropy. The takeaway of Table 2 is that a weak learner that can only train a small model ‘e’ (on CIFAR10) (77.09%) can boost its performance to ‘c-e’ (86.88%), ‘b-e’ (89.10%), or ‘a-e’ (90.29%), which are close to the scenario where all the learners are strong, namely ‘c’ (87.55%), ‘b’ (89.82%), or ‘a’ (91.99%). In particular, in ‘c-e’, ‘b-e’, or ‘a-e’, half of the clients are trained with the larger models ‘c’, ‘b’, or ‘a’, while the other half are trained with the model ‘e’. Only the aggregated global models ‘c’, ‘b’, or ‘a’ are used during the testing stage. Although weak clients train smaller models ‘e’, they will test with the largest models ‘c’, ‘b’, or ‘a’ to gain better performance.\nFull results including other possible combinations can be found in the appendix in Tables 7-9. Finally, our method is robust to dynamically varying model complexities. 
It is worth noting that our method does not incur any additional computation overhead and can be readily adapted to existing applications.\nWe also perform experiments for balanced non-IID data partition and provide a simple trick to achieve comparable results. As mentioned earlier, most state-of-the-art results on balanced non-IID datasets suggest the personalization of local models to achieve better local results (Smith et al., 2017; Liang et al., 2020; Li et al., 2020a). Here, the Local results assume that the training data distribution and test data distribution for each local client are the same, and assign zero probability to those classes that are not presented to a client during training. The Global results were calculated from the global model applied to the test data directly. The Local results were cumulatively averaged from the performance of each data example on each local client. Zhao et al. (2018) showed that the failure of non-IID FL is related to the weight divergence among local model parameters trained locally for many iterations. The weight divergence mostly occurs in the last classification layer of networks.\nThus, instead of a full Cross-Entropy Loss for all classes, we are motivated to train each local model only with its corresponding classes. In this way, each local model will train a sub-task given the locally available label information. Specifically, we mask out the output of the model before passing it to the Cross-Entropy Loss, which we name Masked Cross-Entropy Loss. We experimented with several different ways of masking and found that replacing the last-layer outputs that are not associated with local labels with zero achieves both stable and comparable local and global results. When aggregating local model parameters, we do not aggregate the untrained parameters in the last classification layers. Either the server can infer this information implicitly, or the local clients can report which classes they have to the server explicitly. 
We provide a comprehensive ablation study in Table 5. The results show that Masked Cross-Entropy Loss significantly improves local performance and moderately improves global performance on the balanced non-IID data partition task. Since our primary focus is to address model heterogeneity, we leave the analysis of this trick to future work. We show the results of Fix experiments in the appendix in Fig. 5-8. Dynamic non-IID results are also included in Tables 1-3. The results show that our method performs comparably to those with personalized local models. Our method is readily adaptable, free of computation overhead, and relies only on a single global model for testing local and global results. It allows local clients to switch to another subtask simply by changing their masks without querying the server for others’ personalized models.\nWe show the learning curves of 50% Fix and Dynamic assignments in the appendix in Fig. 9-11. The learning curves show that the optimization of HeteroFL for the IID dataset is stable and efficient. Our method achieves better results with fewer communication rounds, e.g., 800 for HeteroFL and 1800 for LG-FedAvg (Liang et al., 2020). We empirically discover that gradient clipping stabilizes the optimization of HeteroFL, as it prevents small models from gradient explosion. We can therefore adopt a universal learning rate for heterogeneous local models. It is also conceivable that aggregation of model parameters trained with non-IID data makes the optimization less stable. Results of Dynamic show that global aggregation of dynamically varying computation complexities is stable." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "We propose Heterogeneous Federated Learning (HeteroFL), which shows the possibility of collaboratively training local models much smaller than a global model to produce a single global inference model. 
Our experiments show that FL can be made more practical by introducing HeteroFL, sBN, and Masked Cross-Entropy Loss, as HeteroFL fully exploits local clients’ capabilities and achieves better results with fewer communication rounds. We demonstrate our results with various model architectures, including CNN, PreResNet18, and Transformer, and show that our method is robust to statistical heterogeneity and dynamically varying local capabilities. A future direction is to consider distinct model classes as well as model heterogeneity. Also, the proposed methods may be emulated to address heterogeneous few-shot learning, multi-modal learning, and multi-task learning." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the Office of Naval Research (ONR) under grant number N00014-18-1-2244, and the Army Research Office (ARO) under grant number W911NF-20-1-0222." }, { "heading": "A APPENDIX", "text": "The appendix contains supplementary experimental results. In Table 5 we show the ablation study of the Non-IID experiments. Compared to the results of the IID experiments shown in Table 4, Table 5 shows the ablation study of Masked Cross-Entropy, which is shown to be beneficial for balanced Non-IID data partition. In Table 6, we show the hyperparameters adopted in our experiments. In Figures 3 and 4, we show the Fix complexity assignments of the MNIST and WikiText experiments with IID data partition. In Figures 6 to 8, we show the Fix complexity assignments for all balanced Non-IID data partition experiments. In Figures 9 to 11, we show the learning curves of experiments with Dynamic complexity assignments. The complete results for all experiments with Dynamic complexity assignments can be found in Tables 7, 8, and 9." } ]
2021
null
SP:9fad18ae03570219f7b9fd631dc6eccbbb41fa30
[ "In this paper, the authors propose the usage of complex numbers in deep neural networks. Would be good to know that complex numbers, n x n matrices, quaternions, diagonal matrices, etc. all can be used in neural networks. The authors also claims benchmark performance in large-scale image classification and language modeling.", "The authors propose AlgebraNets - a previously explored approach to replace real-valued algebra in deep learning models with other associative algebras that include 2x2 matrices over real and complex numbers. They provide a comprehensive overview of prior methods in this direction and motivate their work with potential for both parameter and computational efficiency, and suggest that the latter is typically overlooked in prior literature. The paper is very well-written and follows a nice narrative, and the claims are mostly backed empirically with experimental results. " ]
Neural networks have historically been built layerwise from the set of functions in f : R → R, i.e. with activations and weights/parameters represented by real numbers, R. Our work considers a richer set of objects for activations and weights, and undertakes a comprehensive study of alternative algebras as number representations by studying their performance on two challenging problems: large-scale image classification using the ImageNet dataset and language modeling using the enwiki8 and WikiText-103 datasets. We denote this broader class of models as AlgebraNets. Our findings indicate that the conclusions of prior work, which explored neural networks constructed from C (complex numbers) and H (quaternions) on smaller datasets, do not always transfer to these challenging settings. However, our results demonstrate that there are alternative algebras which deliver better parameter and computational efficiency compared with R. We consider C, H, M_2(R) (the set of 2 × 2 real-valued matrices), M_2(C), M_3(R), M_4(R), dual numbers, and the R^3 cross product. Additionally, we note that multiplication in these algebras has higher compute density than real multiplication, a useful property in situations with inherently limited parameter reuse such as auto-regressive inference and sparse neural networks. We therefore investigate how to induce sparsity within AlgebraNets. We hope that our strong results on large-scale, practical benchmarks will spur further exploration of these unconventional architectures which challenge the default choice of using real numbers for neural network weights and activations.
[]
[ { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers, 2019", "venue": null, "year": 2019 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder–decoder for statistical machine translation", "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ivo Danihelka", "Greg Wayne", "Benigno Uria", "Nal Kalchbrenner", "Alex Graves" ], "title": "Associative long short-term memory, 2016", "venue": null, "year": 2016 }, { "authors": [ "Erich Elsen", "Marat Dukhan", "Trevor Gale", "Karen Simonyan" ], "title": "Fast sparse convnets", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Utku Evci", "Trevor Gale", "Jacob Menick", "Pablo Samuel Castro", "Erich Elsen" ], "title": "Rigging the lottery: Making all tickets winners, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Chase J. Gaudet", "Anthony S. 
Maida" ], "title": "Deep quaternion networks", "venue": "In 2018 International Joint Conference on Neural Networks (IJCNN),", "year": 2018 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Sara Sabour", "Nicholas Frosst" ], "title": "Matrix capsules with EM routing", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": null, "year": 2015 }, { "authors": [ "Siddhant M. Jayakumar", "Wojciech M. Czarnecki", "Jacob Menick", "Jonathan Schwarz", "Jack Rae", "Simon Osindero", "Yee Whye Teh", "Tim Harley", "Razvan Pascanu" ], "title": "Multiplicative interactions and where to find them", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. 
Kingma" ], "title": "Learning sparse neural networks through L_0 regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models, 2016", "venue": null, "year": 2016 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry P. Vetrov" ], "title": "Variational dropout sparsifies deep neural networks. ArXiv", "venue": null, "year": 2017 }, { "authors": [ "André Stork", "Dieter W. Fellner" ], "title": "Joint Schedule and Layout Autotuning for Sparse Matrices with Compound Entries on GPUs", "venue": "Vision, Modeling and Visualization. The Eurographics Association,", "year": 2019 }, { "authors": [ "Nate Oh" ], "title": "The nvidia titan v deep learning deep dive: It’s all about the tensor cores, 2019", "venue": "URL https: //www.anandtech.com/show/12673/titan-v-deep-learning-deep-dive/3", "year": 2019 }, { "authors": [ "Xingang Pan", "Xiaohang Zhan", "Jianping Shi", "Xiaoou Tang", "Ping Luo" ], "title": "Switchable whitening for deep representation learning", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Titouan Parcollet", "Mirco Ravanelli", "Mohamed Morchid", "Georges Linarès", "Chiheb Trabelsi", "Renato De Mori", "Yoshua Bengio" ], "title": "Quaternion recurrent neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "William H. Press", "Saul A. Teukolsky", "William T. Vetterling", "Brian P. Flannery" ], "title": "Numerical Recipes 3rd Edition: The Art of Scientific Computing", "venue": null, "year": 2007 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V. 
Le" ], "title": "Searching for activation functions, 2017", "venue": null, "year": 2017 }, { "authors": [ "Aurko Roy", "Mohammad Saffar", "Ashish Vaswani", "David Grangier" ], "title": "Efficient content-based sparse attention with routing transformers, 2020", "venue": null, "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Chiheb Trabelsi", "Olexa Bilaniuk", "Ying Zhang", "Dmitriy Serdyuk", "Sandeep Subramanian", "Joao Felipe Santos", "Soroush Mehri", "Negar Rostamzadeh", "Yoshua Bengio", "Christopher J Pal" ], "title": "Deep complex networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Riccardo Vecchi", "Simone Scardapane", "Danilo Comminiello", "Aurelio Uncini" ], "title": "Compressing deep quaternion neural networks with targeted regularization", "venue": null, "year": 2019 }, { "authors": [ "Daniel E. Worrall", "Stephan J. Garbin", "Daniyar Turmukhambetov", "Gabriel J. Brostow" ], "title": "Harmonic networks: Deep translation and rotation equivariance", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Muqiao Yang", "Dongyu Li Martin QMa", "Yao-Hung Hubert Tsai", "Ruslan Salakhutdinov" ], "title": "Complex transformer: A framework for modeling complex-valued sequence", "venue": "CASSP", "year": 2020 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017", "venue": null, "year": 2017 } ]
[ { "heading": "1 Introduction", "text": "Nearly universally, the atomic building blocks of artificial neural networks are scalar real-valued weights and scalar real-valued neuron activations that interact using standard rules of multiplication and addition.\nWe propose AlgebraNets, where we replace the commonly used real-valued algebra with other associative algebras. Briefly, this amounts to replacing scalars by tuples and real multiplication by a tuple multiplication rule. For example, by replacing each scalar weight and activation with 2 × 2 matrices, and standard real addition / multiplication with matrix addition / multiplication. These alternative algebras provide three clear benefits for deep learning at scale:\nParameter efficiency. One sweeping benefit of AlgebraNets is they are able to match baseline performance on a variety of tasks, spread over multiple domains, with fewer parameters than the competitive real-valued baselines. This means that equivalently capable models can be trained on smaller hardware, and for a given amount of memory, a model with greater effective capacity can be trained. We find some variants of AlgebraNets that are more parameter efficient than the previously considered C and H algebras. Throughout the text, we count parameters as the total number of real values e.g. a complex number counts as two parameters.\nComputational efficiency. For scaling large models, parameter efficiency is not the only bottleneck: FLOP efficiency – reducing the relative number of floating-point operations to achieve an equivalent accuracy – is also important. We find instantiations of AlgebraNets that are more FLOP efficient than the previously considered C and H algebras and as FLOP efficient as R. Additionally, all of the proposed algebras offer parameter reuse greater than 1 (see Table 1). That is, the ratio of multiplications performed to values consumed is greater than or equal to 1:1. By contrast, for multiplication in R it is only 1:2. 
Modern hardware requires a high ratio of floating point operations to bytes loaded (bandwidth) to become compute bound and saturate the arithmetic units. This is\nparticularly problematic for auto-regressive inference (dominated by matrix-vector multiplies), sparse models, depthwise convolutions and other operations with low arithmetic density.\nArchitectural exploration. The choice of real numbers for weights and activations is usually taken for granted (with some exceptions, e.g. those discussed in Sec. 3). With AlgebraNets, we challenge this established design choice and open up a vast new space for neural network architecture exploration by showing that real numbers can be easily replaced with a variety of algebraic structures. Leveraging these new building blocks, one can consider different algebraic interactions, different choices of non-linearities, and different network architecture choices. Importantly, as we demonstrate in this work, AlgebraNets are not only scalable to large models and complex tasks, but they in fact offer improvements in model efficiency, which makes them a viable practical choice. We believe we have only begun to scratch the surface of what these alternative building blocks can enable, and we hope that their broader adoption will usher in further progress across the field.\nIn summary, our main contributions are as follows:\n• We propose AlgebraNets — a novel class of neural networks, which replaces the nearly ubiquitously used real algebra with alternatives. We show that in contrast to previous work, algebra specific initializations and replacement of batch normalization by an expensive whitening procedure (Trabelsi et al., 2018; Gaudet and Maida, 2018; Wu et al., 2020; Pan et al., 2019) is not necessary, making them a near drop-in replacement to real-valued networks. 
• We evaluate AlgebraNets based on a wide range of algebras on three challenging large-scale benchmarks: ImageNet image classification (Russakovsky et al., 2015), Enwik8 (LLC, 2009), and WikiText language modelling (Merity et al., 2016). • We explore sparse AlgebraNets to take advantage of their higher compute density. • We find that AlgebraNets offer improved parameter efficiency and FLOP parity compared to the real-valued baselines, which establishes them as a viable choice for efficient deep learning at scale." }, { "heading": "2 AlgebraNets", "text": "" }, { "heading": "2.1 Why Algebras?", "text": "We consider algebras because they have the right properties to make them a drop-in replacement for real numbers in typical neural networks. This is not surprising, as the real numbers are an algebra over themselves. An algebra A over a field K (which we take to always be the field of real or complex numbers) satisfies the following properties1 (Wikipedia contributors, 2020b;a):
1. It is a vector space over K. • It has an associative and commutative addition operator with an identity element (x + 0 = x) and an inverse element (x + (−x) = 0). • It is possible to multiply elements of the field K with vectors.2
2. There is a right and left distributive multiplication operator • over vectors closed in A.
3. Scalar multiplication combines with • in a compatible way: (ax) • (by) = (ab)(x • y).
We do not claim that these properties are all required of neural network building blocks, merely that they are convenient. For example, one could imagine not having associative addition – this would require a careful implementation to get right, but is possible. One could eliminate the requirement that scalars from K multiply with vectors from A – this would make various normalizations (e.g. batch normalization) impossible, but they are not required.
Most importantly, removing some of these requirements does not lead to an obviously useful class of mathematical objects to consider.
In addition to the previously considered C and H algebras, we also consider the algebras of n × n matrices over R and C (i.e. Mn(R) or Mn(C)), as they have higher compute density than R and map well to the matrix multiplication units that are becoming common in processors (Oh, 2019). We note
For two weight values we load 4 scalars and perform 4 multiplies.\nMn(R);n×nMatricesEachweight is a lengthn2 tuple, representing ann×nmatrix. Multiplication and addition proceed with standard rules for matrices. We consider up toM4(R) matrices. For two weight values we load 2n2 scalars and perform n3 multiplies.\nH a b c d a a b c d b b -a d -c c c -d -a b d d c -b -a\nMn(C);n×nComplexMatricesWeights are length 2n2 tuples representing n × n complex-valued matrices. We consider only n = 2. For two weight values we load 4n2 scalars and perform 4n3 multiplies. The multiplication table is in Appendix A.\nH; Quaternions Each weight, wi is replaced by a length 4 tuple, (ta, tb, tc, td). Multiplication is not commutative, with the product of two quaternions given by the Hamilton product (Hamilton, 1843). For two weight values, we load 8 elements and perform 16 multiplies.\nD a b c d a a 0 0 0 b 0 b 0 0 c 0 0 c 0 d 0 0 0 d Diagonal Algebra The high FLOP cost of the whitening operation required by (Trabelsi et al., 2018; Gaudet and Maida, 2018; Wu et al., 2020; Pan et al., 2019) makes networks using it inefficient at training and inference in terms of FLOPs. We attempt to design an algebra where using whitening would in fact be competitive by eliminating the interaction of terms through the algebra. Only when combining the ‘diagonal’ D algebra with whitening are there interactions between the different tuple components.\na b\na a b b b 0\nDual Numbers Each weight is represented by a length 2 tuple representing the dual number (t0 + t1 ).\nFor a multiplication, we load 4 values and perform 3 multiplies.\nR3 a b c a 0 c -b b -c 0 a c b -a 0 R3 Cross Product Each weight is represented by a length 3 tuple. We use the cross product between two tuples for the multiplication rule, resulting in 6 different multiplies for 6 values loaded." 
}, { "heading": "2.3 Initialization, Normalization, Non-Linearities, and Pruning", "text": "Prior work (Trabelsi et al., 2018; Gaudet andMaida, 2018) has advocated algebra-specific initializations and expensive whitening procedures to replace batch normalization. We find that this is not necessary to achieve good performance, and we are able to use the same initialization, normalization, and non-linearities across all algebras which facilitates exploring a wide variety of options.\nTo initialize all the components of the algebra tuple at the beginning of a network we set the first tuple component to the typical input. For ResNet, MobileNet, and the RNN we initialize the other components of the tuple with a small one or two-layer MLP, i.e. tb,c,... = MLP (ta). For the transformer, we take advantage of the fact that the embedding is already a learned representation and simply reshape the output embedding appropriately. We find that the specifics of the input initialization do not have a large effect on performance, though allowing a learnable transformation outperformed initializing additional components to 0 or replicating the input. We use standard Glorot (Glorot and Bengio, 2010) weight initialization of each component independently. Comparisons with the algebra specific initializations (Trabelsi et al., 2018; Gaudet and Maida, 2018) can be found in Appendix B.\nExisting activation functions can be applied component-wise (t = (f(ta), · · · , f(td))) and we found that ReLU and swish (Ramachandran et al., 2017) work well; tanh and sigmoid can also be applied component-wise as part of GRUs and LSTMs. Applying the activation function to the entire tuple has possible computational advantages if it is ReLU-like as it would allow an entire tuple multiplication to be skipped. For example, consider t = f(g(t))t. If g(•) returns the mean of the tuple, and if f was H the Heaviside step function, then one can remove entire components. 
Appendix B examines different choices for doing this, but we do not consider it further in the main text.\nThe final logits of an AlgebraNet must be real-valued. We use an Algebra-specific final linear layer and convert the final algebra tuple to a scalar with the tuple-wise L2 norm before applying softmax. More details are in Appendix B.\nTo apply magnitude pruning (Zhu and Gupta, 2017; Gale et al., 2019) to prune tuples we used the tuple L2 norm as the criterion for pruning for all AlgebraNet variants. For theMn(R) algebras we also experimented with criteria based on the eigenvalues, λi, and singular values, σi, of each n× n matrix. The Frobenius norm corresponds to ( ∑ i σ 2 i ) 1/2 and the determinant corresponds to ( ∏\ni λi). We found pruning based on the Frobenius norm to be the most effective, followed by pruning based on the largest eigenvalue. See Appendix C for a comparison between different methods.\n(Trabelsi et al., 2018), (Gaudet and Maida, 2018), and (Wu et al., 2020) use whitening in place of batch normalization. Whitening normalizes and de-correlates the different tuple elements from one another. However, this extension results in a substantial increase in both training and test time computational costs, as described in (Pan et al., 2019). The inclusion of the whitening cost to the FLOP count in Fig. 1 highlights the substantial cost inference cost. Cholesky decomposition (Press et al., 2007) of the inverted covariance matrix is required during training and at inference it is not possible to fold the whitening transformation into adjacent convolutions. A contribution from each of the algebra elements contributes to each element in the whitened output. We find that batch normalization does not substantially decrease performance, trains 1.9× faster and has no inference cost, so we use it for all experiments, unless explicitly stated." 
}, { "heading": "3 Related Work", "text": "(Trabelsi et al., 2018) applied complex-valued networks to convolutional neural networks trained on CIFAR-10, as well as to music transcription and Speech Spectrum Prediction. They find that complex-valued networks with the same number of parameters and more FLOPs perform slightly better than real-valued networks. (Gaudet and Maida, 2018) extend the procedure from (Trabelsi et al., 2018) to quaternion valued weights, showing that they are able to reduce the parameter count by a factor of two over complex-valued networks and a factor of four over real-valued networks, while again slightly increasing the top-1 accuracy on CIFAR-10. (Wu et al., 2020) further extend\nthis approach to octonions (which are a non-associative algebra), demonstrating that they are able to further reduce the parameter count while increasing the accuracy of their models on CIFAR-10.\nThese papers establish the efficacy of some alternative algebras, though they focus purely on parameter efficiency, rather than FLOP efficiency which is equally important for image classification tasks. Additionally, the tested datasets are relatively small, and it is unclear how the results scale to larger datasets. Both the quaternion and octonion network papers do not test their models on language modeling tasks where parameter efficiency is often of greater importance.\n(Parcollet et al., 2018) propose a quaternion recurrent neural network (QRNN) and quaternion LSTM (QLSTM). They show that quaternion based methods are able to reduce the parameter count while offering better performance on the TIMIT and WSJ phoneme recognition tasks. 
Associative Long Short-Term Memory leverage complex-vectors to increase the memory capacity of LSTMs without a parameter increase (Danihelka et al., 2016).\nRecently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being 0 (Molchanov et al., 2017; Gale et al., 2019; Frankle and Carbin, 2019; Louizos et al., 2018; Evci et al., 2019; Zhu and Gupta, 2017). Many of these methods gradually decrease the number of weights in the network through training by using some combination of each weight’s gradient and magnitude. Fine grained sparsity is hard to accelerate on modern hardware, although there have been some recent results demonstrating that speedups are possible (Elsen et al., 2020). (Vecchi et al., 2019) considered inducing sparsity in quaternion networks. Primitives that increase computational density of fundamental interactions would increase the performance of sparse methods as demonstrated on the GPU by (Mueller-Roemer et al., 2019) in scientific computing.\n(Jayakumar et al., 2020) emphasize the importance of multiplicative interaction layers providing a particularly useful inductive bias during the fusion of multiple information streams. Specific AlgebraNets may provide strong, useful domain-specific inductive biases, for example as done by (Worrall et al., 2017), leveraging the rotational invariance of complex numbers in convolutional networks and by (Hinton et al., 2018) where they use a 4× 4 pose matrix to represent orientations." }, { "heading": "4 Experiments and Results", "text": "" }, { "heading": "4.1 ImageNet", "text": "We examine the performance of AlgebraNet versions of ResNet-50 (He et al., 2016) and MobileNetv1 (Howard et al., 2017) on the ImageNet (Russakovsky et al., 2015) dataset. We use a width multiplier on the channels to adjust model capacity. For all experiments we use SGD with momentum of 0.9. 
We increase the number of training epochs by a factor of two to 180, which we also use for the pruning experiments. This did not affect the baseline, but it improves the pruning results. It also resulted in improved performance for H, so we used it throughout. For a batch size of 256, the initial learning rate for the ResNet experiments was set to 2.5 and multiplied by 0.1 at epochs 60, 100, 140, and 160. We find it is useful to reduce the amount of L2 regularization that is used for AlgebraNets. The baseline value 10^−4 was reduced by a factor of 0.725 for ResNet-50 and 0.625 for MobileNet-v1. We use the swish activation function for all experiments shown in Figures 1 and 2, including the baselines. We found it improved performance across the board.
Figures 1 and 2 compare the trade-offs between accuracy, parameters, and FLOPs for different flavours of AlgebraNet. Notably, we do not find that the parameter reduction without accuracy loss from (Trabelsi et al., 2018) and (Gaudet and Maida, 2018) on CIFAR translates to ImageNet; we are unable to divide the number of parameters by a factor of two/four for C/H and match baseline performance. We hypothesize that this is in part due to over-parameterization of many networks trained on CIFAR and that AlgebraNets act as an additional regularizer, in part due to the greater weight reuse: each tuple component is now involved in multiple equations. We feel that this highlights the need for testing methods on large-scale datasets.
We find M2(R) AlgebraNets provide the best parameter efficiency of all considered algebras while requiring no more FLOPs than the real baseline on both ResNet-50 and MobileNet-v1. We also find, for both ResNet-50 and MobileNet-v1, that M2(C) AlgebraNets provide better FLOP efficiency than the previously studied H while having the same ratio of multiplies to values.
The diagonal\nalgebra is extremely parameter efficient and we do find the interaction between different components through whitening to be important as hypothesized – a network trained with whitening achieves 6% higher top-1 accuracy than the same network trained with batch normalization. Unfortunately, adding whitening increases the total number of inference FLOPs by a factor of 3×. We are left to conclude that whitening is not currently competitive and recommend using batch normalization for all algebras. Future work exploring the role of the interaction from whitening, and alternatives that are more computationally efficient is an interesting direction. An additional benefit is that AlgebraNets applied\nto MobileNet like architectures increase the computational density of the often bandwidth bound depthwise convolutions, while reducing the number of FLOPs in the more costly 1× 1 convolution." }, { "heading": "4.2 Pruning ResNet-50", "text": "We use magnitude based pruning, with the schedule proposed in (Zhu and Gupta, 2017). We always begin pruning at 20% and end pruning at 80% of the total training iterations. We prune every 100 steps. At each pruning step, we set tuples with the lowest magnitude, given by √∑ t2i , to 0. We do not prune terms in the final linear layer, the tuple-initialization convolutions, or in the initial convolution of the ResNet. To allow for comparisons with (Gale et al., 2019), we use ReLU activations in pruning experiments, as opposed to swish as used in Fig. 1. Final top-1 accuracies of pruned networks are shown in Fig. 3. Despite pruning entire tuples, which allows skipping an entire tuple multiplication, we are still able to find sparse networks that are similarly FLOP efficient to those from (Gale et al., 2019), while having higher compute density due to the algebra structure. 
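The loads-versus-multiplies figures quoted in Section 2.2 can be tabulated to recover the parameter-reuse ratios discussed there; the (loads, multiplies) pairs below are copied from that section, and the script itself is only an illustrative bookkeeping aid.

```python
# Bookkeeping sketch: (values loaded, multiplies performed) for one
# tuple-tuple product, copied from Section 2.2, and the resulting
# reuse ratio (multiplies per value loaded).
algebras = {
    "R":     (2, 1),
    "C":     (4, 4),
    "M2(R)": (2 * 2**2, 2**3),      # 2n^2 loads, n^3 multiplies, n = 2
    "H":     (8, 16),
    "M2(C)": (4 * 2**2, 4 * 2**3),  # 4n^2 loads, 4n^3 multiplies, n = 2
    "Dual":  (4, 3),
    "R3":    (6, 6),
}
for name, (loads, mults) in algebras.items():
    print(f"{name:6s} reuse = {mults / loads:.2f}")
```

The matrix algebras and H give the highest compute density, consistent with the discussion of bandwidth-bound operations in the introduction.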
Pruning individual components, rather than setting entire tuples to 0 does improve performance, though does not provide the same computational benefits offered by AlgebraNets. We provide further results in Appendix C." }, { "heading": "4.3 Transformer-XL on Enwik8 and RNNs on WikiText-103", "text": "We perform character level language modeling on the Enwik8 dataset from the Hutter Prize (LLC, 2009) with the Transformer-XL (Dai et al., 2019) architecture. We tuned the baseline model by halving the embedding size, number of heads and feedforward size, resulting in a more challenging ‘efficient’ baseline with only 25% as many parameters as the 24 layer network from (Dai et al., 2019). We train with Adam (Kingma and Ba, 2014) (learning rate 2× 10−4), dropout 0.25 (component-wise, not tuple-wise), and windows of 1536 at train and 4608 at test time. TheM2(R)-Transformer uses a learning rate of .0005 and dropout 0.15. Our ‘efficient’ baseline 24 layer Transformer-XL model has an embedding size of 512, 4 heads of size 128 each, and a feed-forward hidden size of 1536 for a total\nparameter count of 69.4 million. It achieves 0.99 bits per character (BPC), matching the results of (Dai et al., 2019) while requiring 75% less parameters. Using theM2(R)-algebra in all linear layers with fixed activation size results in a further 75% reduction in parameter count. We use these parameter savings to increase the depth of the model from 24 to 42 layers, resulting in a model with 45% as many parameters as the ‘efficient’ baseline. The resulting M2(R) AlgebraNet also achieves 0.99 BPC, but with only 31.2 million parameters. The character-embedding layers are computationally unchanged; they associate each character with a d4 -sizedM2(R) embedding which can be thought of as a reshaped Rd embedding. 
Special consideration has to be paid to the R^(l×l) attention matrix, which is often regarded as a practical memory and compute bottleneck (Child et al., 2019; Roy et al., 2020; Kitaev et al., 2020). Using an l × l algebra-valued attention matrix would increase the memory and compute requirements (e.g. by a factor of 2 in the Complex-Transformer (Yang et al., 2020) or 4 for M2(R)). Thus, we desire a real-valued attention matrix computed from the sets of R^k-valued key and query vectors (k, q). We do this by reshaping keys and queries from M2(R) to R. Formally, we redefine the attention's real-valued scalar product as 〈k, q〉M2(R) := 〈F(k), F(q)〉R, where F flattens the input into a real vector.
Finally, we also consider a dataset approximately one order of magnitude larger by tokenizing WikiText-103 (Merity et al., 2016) into characters (instead of the more common words). On this dataset we consider a single-layer GRU architecture followed by 5 linear readout layers with the ReLU non-linearity, skip connections, and layer normalization after each layer. We train using a batch size of 16, Adam (learning rate of 10^−4), and L2 regularization of 10^−7 for 200,000 steps. Training takes two days for the largest baseline variant on a single V100 GPU. We train using length 512 sequences and backpropagation through time. We initialize the different components of each algebra with single linear layers from the input. We report results on the typical validation set. We replace a gated recurrent unit (GRU) (Cho et al., 2014) with the AlgebraNet equivalent, as well as replacing the readout layers with AlgebraNet variants. We consider M2(R), C, H, and M3(R). Results are shown in Table 3. A C AlgebraNet with a hidden size of 1024 and 24.1 million parameters achieves a validation BPC of 1.26, comparable to a real-valued network with 1.45 times the parameter count. We find that M2(R) with a hidden size of 512 results in a validation BPC of 1.30, comparable to a model with twice as many parameters.
This again demonstrates the parameter efficiency of M2(R) and the usefulness of AlgebraNets for problems such as language modeling, where parameter efficiency is crucial." }, { "heading": "5 Conclusion", "text": "Conventional neural networks are composed of real-valued weights and activations along with real-valued operators. In this work, we proposed AlgebraNets, a general paradigm that replaces real-valued weights and operators with weights and operators from other associative algebras. We show these methods to be more parameter efficient than their real-valued counterparts while having higher compute density. We also find that the M2(R) algebra is more FLOP efficient than previously considered algebras – in fact, it is as FLOP efficient as the reals. The increased compute density of the proposed algebras will prove particularly useful for sparse neural networks and auto-regressive inference, due to modern hardware favoring a relatively high compute density. We hope that our work enables further development of these methods and promotes broader research into the fundamental design choices upon which modern neural networks are based." }, { "heading": "A Additional Algebra Information", "text": "A.1 M2(R) Multiplication Table
Each tuple (ta, tb, tc, td) represents a 2 × 2 real matrix.
M2(R) | a b c d
a     | a b 0 0
b     | 0 0 a b
c     | c d 0 0
d     | 0 0 c d
A.2 M2(C) Multiplication Table
Each tuple (ta, tb, tc, td, te, tf, tg, th) represents the 2 × 2 complex matrix:
[ ta + tbi  tc + tdi ]
[ te + tfi  tg + thi ]
M2(C) | a  b  c  d  e  f  g  h
a     | a  b  c  d  0  0  0  0
b     | b -a  d -c  0  0  0  0
c     | 0  0  0  0  a  b  c  d
d     | 0  0  0  0  b -a  d -c
e     | e  f  g  h  0  0  0  0
f     | f -e  h -g  0  0  0  0
g     | 0  0  0  0  e  f  g  h
h     | 0  0  0  0  f -e  h -g" }, { "heading": "A.3 Dual Number Multiplication Table", "text": "Each tuple (ta, tb) represents the dual number (ta + tbε).
Dual Number | a b
a           | a b
b           | b 0" }, { "heading": "A.4 Cross Product Multiplication Table", "text": "Multiplication uses the cross product between length-3 tuples (ta, tb, tc).
Cross Product | a  b  c
a             | 0  c -b
b             | -c 0  a
c             | b -a  0" }, { "heading": "A.5 Linear Layer Example", "text": "We give a concrete example of replacing a real linear layer with an M2(R) linear layer such that the activation memory is kept identical. Intuitively, this can be thought of as reshaping the R^d input activations to have shape M2(R)^(d/4), which is processed by a linear layer fM : M2(R)^(d/4) → M2(R)^(d/4), resulting in output activations – when flattened – with shape R^d. Each such linear layer fM requires 1/4 of the parameters and 1/2 of the FLOPs compared to a real R^d → R^d linear layer counterpart." }, { "heading": "B AlgebraNet Choices: Activations, Initializations, etc", "text": "" }, { "heading": "B.1 Tuple-wise Nonlinearity", "text": "We consider equations of the form: t ← f(g(t)) ∗ t (1)
We found that if g is the tuple mean and f is H, the Heaviside function, top-1 performance dropped on an M2(R) ResNet-50 AlgebraNet by 2.97%. While this drop is significant, the resulting activation sparsity might make it a desirable tradeoff in some circumstances.
Other methods, such as setting g to be the determinant, resulted in greater than a 10% drop in performance.
B.2 Initialization
For a ResNet-50 H-AlgebraNet with the standard number of channels divided by 4, we find a top-1 performance of 74.0 ± 0.14 using standard initialization and 74.1 ± 0.15 using the initialization from Gaudet and Maida (2018). These experiments are done using standard batch normalization instead of the more expensive whitening procedure." }, { "heading": "B.3 Conversion to Reals", "text": "For all considered algebras, the norm of the tuple is mathematically given by √(Σi ti²). It is possible that the optimal choice for converting to the reals would be different in models with very large final layers, such as word-based language modeling – which we do not consider." }, { "heading": "C AlgebraNet Pruning", "text": "C.1 Alternative tuple pruning of M2(R)
For M2(R), we consider a variety of alternative pruning methods to remove entire tuples, based on the two eigenvalues, λ1 and λ2, and singular values, σ1 and σ2. Specifically, because our matrices are square but not symmetric, the Frobenius norm is defined via the singular values, which correspond to the square roots of the eigenvalues of AA^T, if A is the matrix in question.
• Frobenius Norm: (σ1² + σ2²)^(1/2)
• Determinant: λ1λ2
• Smallest Eigenvalue: min(|λ1|, |λ2|)
• Largest Eigenvalue: max(|λ1|, |λ2|)
In all cases, we remove tuples with the minimum magnitude of one of those options.
In Table 4, we show the resulting drop in top-1 accuracy relative to the Frobenius norm at three different sparsities for three alternative pruning methods. In addition to always achieving the best performance, the Frobenius norm has the additional advantage that it is defined for all algebra variants that we consider, rather than being an Mn(R)-specific criterion, for example.
C.2 Pruning components of M2(R) and H
For M2(R) and H, we also prune individual tuple elements based on element norms.
This equally reduces the number of non-zero weights in the network, though it does not result in entire matrix multiplies that can be skipped.

In Table 5, we show the increase in top-1 accuracy obtained by pruning individual tuple components rather than entire tuples. However, due to the structure of Mn(R) and H multiplication, setting individual values to 0 does not produce 0s in the output. Therefore, pruning entire tuples provides more useful computational advantages." }, { "heading": "D AlgebraNet Tests on CIFAR", "text": "We use a network structure based on that described in Gaudet and Maida (2018). We begin with the same ResNet structure, with 128, 256, and then 512 channels in each real block. For the C networks, all channel counts are divided by two. For the M2(R) and H networks, we assign the initial convolution, before the residual blocks, half the original number of channels; all other channel counts are divided by four. Thus, for H and M2(R) we have slightly more than 1/4 of the parameters. We train with 24 × 24 random crops and evaluate on 32 × 32 images.

We find that we are able to divide the channels in the filters by two and maintain the same performance using complex-valued networks. When reducing the parameter count by a factor of ∼4, we find we are again able to match baseline performance with quaternions and 2 × 2 matrices. Regularization has a non-trivial effect on performance, and more finely adjusting the L2 loss for the different algebras may yield higher top-1 accuracy. We note that the relative reduction in parameters on CIFAR-10 is not something we are able to replicate on ImageNet. The results from the main text also hold here – M2(R) is the only algebra that is able to maintain accuracy while having fewer FLOPs than the baseline real network.
For these experiments, we used algebra-specific weight initializations, though we again verified that this does not seem to have a substantial effect.

E Example M2(R) Code

We write the update rule explicitly for readability. Note that it is possible to concatenate the relevant terms on the channel axis to reduce the number of convolutions needed." }, { "heading": "Convolution", "text": "'''Simplified example code for M_2(R).
x: Input with an additional algebra axis; in the case of a
   convolution, either (B, H, W, C, A) or (B, C, H, W, A).
w: Corresponding weight matrix, with an additional algebra axis.
'''

# Rule that describes 2x2 matrix multiplication: output component i
# is the sum over j of w[rule[i][j][0]] * x[rule[i][j][1]].
mat_22_rule = [[(0, 0), (1, 2)],
               [(0, 1), (1, 3)],
               [(2, 0), (3, 2)],
               [(2, 1), (3, 3)]]

# Update each of the four algebra components.
x_new = [0, 0, 0, 0]
for i in range(4):
    for j in range(2):
        # w: weight with an extra algebra dimension.
        # x: input with shape [B, ..., A] where A is the additional algebra dimension.
        x_new[i] += Conv2D(x[..., mat_22_rule[i][j][1]],
                           w[..., mat_22_rule[i][j][0]], ...)
# Add bias if wanted. Add (4,) to shape." }, { "heading": "Linear Layer", "text": "# Update each of the four algebra components.
x_new = [0, 0, 0, 0]
for i in range(4):
    for j in range(2):
        # w: weight with an extra algebra dimension.
        # x: input with shape [B, L, A] where A is the algebra dimension.
        x_new[i] += dot(x[..., mat_22_rule[i][j][1]],
                        w[..., mat_22_rule[i][j][0]])
# Add bias if wanted. Add (4,) to shape.

F Increased Activation Memory

Due to the activations, there will be a slight increase in memory footprint from AlgebraNets in some cases. For example, in an M2(R) AlgebraNet for ResNet-50 with channels/4, there will be C/4 convolutions performed. This would, in a naive implementation, result in twice the activation memory. However, with a properly written kernel, this would not be the case. There is, however, an additional factor: to reach comparable performance, a slightly larger network than C/4 is needed.
In practice, about a 1.3× increase in activation memory would be incurred." } ]
2020
null
SP:cbfb4439fcbf27dc2c05675123b7b0555acdbf33
[ "This paper proposes L3Net, a new graph convolution that decomposes the learnable local filters into low-rank form. It contains both spatial and spectral graph convolutions (including ChebNet, GAT, EdgeNet, and so on) as special cases, and it is also robust to graph noise. Experiments are conducted on mesh data, facial expression recognition, and action recognition, showing improved performance over baselines; robustness to graph noise is also tested.", "The paper presents a graph neural network (GNN) architecture with learnable low-rank filters that unifies various recently proposed GNN-based methods. The local filters substitute the graph shift operator (GSO) with a learnable set of parameters that capture the local connectivity of each node in the graph. Moreover, a regularization penalty is proposed to increase the robustness of the model and prevent these local structures from overfitting. The paper provides proofs justifying the generality of the approach and showing how different methods can be seen as particular cases of the proposed scheme. Two theorems are also proved to establish the stability of the GNN architecture against dilation perturbations in the input signal. Several numerical experiments are conducted to empirically test the usefulness of the model." ]
Geometric variations like rotation, scaling, and viewpoint changes pose a significant challenge to visual understanding. One common solution is to directly model certain intrinsic structures, e.g., using landmarks. However, it then becomes non-trivial to build effective deep models, especially when the underlying non-Euclidean grid is irregular and coarse. Recent deep models using graph convolutions provide an appropriate framework to handle such non-Euclidean data, but many of them, particularly those based on global graph Laplacians, lack expressiveness to capture local features required for representation of signals lying on the non-Euclidean grid. The current paper introduces a new type of graph convolution with learnable low-rank local filters, which is provably more expressive than previous spectral graph convolution methods. The model also provides a unified framework for both spectral and spatial graph convolutions. To improve model robustness, regularization by local graph Laplacians is introduced. The representation stability against input graph data perturbation is theoretically proved, making use of the graph filter locality and the local graph regularization. Experiments on spherical mesh data, real-world facial expression recognition/skeleton-based action recognition data, and data with simulated graph noise show the empirical advantage of the proposed model.
[ { "affiliations": [], "name": "Xiuyuan Cheng" }, { "affiliations": [], "name": "Zichen Miao" } ]
[ { "authors": [ "James Atwood", "Don Towsley" ], "title": "Diffusion-convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "John R Baumgardner", "Paul O Frederickson" ], "title": "Icosahedral discretization of the two-sphere", "venue": "SIAM Journal on Numerical Analysis,", "year": 1985 }, { "authors": [ "Davide Boscaini", "Jonathan Masci", "Emanuele Rodolà", "Michael Bronstein" ], "title": "Learning shape correspondence with anisotropic convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Michael M Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "arXiv preprint arXiv:1312.6203,", "year": 2013 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "How far are we from solving the 2d & 3d face alignment problem? 
(and a dataset of 230,000 3d facial landmarks)", "venue": "In International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Zhe Cao", "Tomas Simon", "Shih-En Wei", "Yaser Sheikh" ], "title": "Realtime multi-person 2d pose estimation using part affinity fields", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Dong Chen", "Xudong Cao", "Fang Wen", "Jian Sun" ], "title": "Blessing of dimensionality: Highdimensional feature and its efficient compression for face verification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2013 }, { "authors": [ "Fan RK Chung", "Fan Chung Graham" ], "title": "Spectral graph theory", "venue": "Number 92. American Mathematical Soc.,", "year": 1997 }, { "authors": [ "Adam Coates", "Andrew Y Ng" ], "title": "Selecting receptive fields in deep networks. In Advances in neural information processing", "venue": null, "year": 2011 }, { "authors": [ "Ronald R Coifman", "Mauro Maggioni" ], "title": "Diffusion wavelets", "venue": "Applied and Computational Harmonic Analysis,", "year": 2006 }, { "authors": [ "Benjamin Coors", "Alexandru Paul Condurache", "Andreas Geiger" ], "title": "Spherenet: Learning spherical representations for detection and classification in omnidirectional images", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Timothy F. Cootes", "Gareth J. Edwards", "Christopher J. 
Taylor" ], "title": "Active appearance models", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2001 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Hui Ding", "Shaohua Kevin Zhou", "Rama Chellappa" ], "title": "Facenet2expnet: Regularizing a deep face recognition net for expression recognition", "venue": "IEEE International Conference on Automatic Face & Gesture Recognition (FG", "year": 2017 }, { "authors": [ "Carlos Esteves", "Christine Allen-Blanchette", "Ameesh Makadia", "Kostas Daniilidis" ], "title": "Learning so (3) equivariant representations with spherical cnns", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen", "Frank Weichert", "Heinrich Müller" ], "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Fernando Gama", "Joan Bruna", "Alejandro Ribeiro" ], "title": "Stability properties of graph neural networks", "venue": "arXiv preprint arXiv:1905.04497,", "year": 2019 }, { "authors": [ "Fernando Gama", "Alejandro Ribeiro", "Joan Bruna" ], "title": "Diffusion scattering transforms on graphs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Dumitru Erhan", "Pierre Luc Carrier", "Aaron Courville", "Mehdi 
Mirza", "Ben Hamner", "Will Cukierski", "Yichuan Tang", "David Thaler", "Dong-Hyun Lee" ], "title": "Challenges in representation learning: A report on three machine learning contests", "venue": "In International Conference on Neural Information Processing,", "year": 2013 }, { "authors": [ "Yanan Guo", "Dapeng Tao", "Jun Yu", "Hao Xiong", "Yaotang Li", "Dacheng Tao" ], "title": "Deep neural networks with relativity learning for facial expression recognition", "venue": "IEEE International Conference on Multimedia & Expo Workshops (ICMEW),", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "David K Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Elvin Isufi", "Fernando Gama", "Alejandro Ribeiro" ], "title": "Edgenets: Edge varying graph neural networks", "venue": "arXiv preprint arXiv:2001.07620,", "year": 2020 }, { "authors": [ "Mira Jeong", "Byoung Chul Ko" ], "title": "Driver’s facial expression recognition in real-time for safe", "venue": "driving. 
Sensors,", "year": 2018 }, { "authors": [ "Chiyu Jiang", "Jingwei Huang", "Karthik Kashinath", "Philip Marcus", "Matthias Niessner" ], "title": "Spherical cnns on unstructured grids", "venue": "arXiv preprint arXiv:1901.02039,", "year": 2019 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Qiuhong Ke", "Mohammed Bennamoun", "Senjian An", "Ferdous Sohel", "Farid Boussaid" ], "title": "A new representation of skeleton sequences for 3d action recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Nicolas Keriven", "Gabriel Peyré" ], "title": "Universal invariant and equivariant graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tae Soo Kim", "Austin Reiter" ], "title": "Interpretable 3d human action analysis with temporal convolutional networks", "venue": "IEEE conference on computer vision and pattern recognition workshops (CVPRW),", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Ron Levie", "Federico Monti", "Xavier Bresson", "Michael M Bronstein" ], "title": "Cayleynets: Graph convolutional neural networks with complex rational spectral filters", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "Ruoyu Li", "Sheng Wang", "Feiyun Zhu", "Junzhou Huang" ], "title": "Adaptive graph convolutional neural networks", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Shan Li", "Weihong Deng" ], "title": 
"Deep facial expression recognition: A survey", "venue": "IEEE Transactions on Affective Computing,", "year": 2020 }, { "authors": [ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard Zemel" ], "title": "Lanczosnet: Multi-scale deep graph convolutional networks", "venue": null, "year": 2019 }, { "authors": [ "Jun Liu", "Amir Shahroudy", "Dong Xu", "Gang Wang" ], "title": "Spatio-temporal lstm with trust gates for 3d human action recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Ziqi Liu", "Chaochao Chen", "Longfei Li", "Jun Zhou", "Xiaolong Li", "Le Song", "Yuan Qi" ], "title": "Geniepath: Graph neural networks with adaptive receptive paths", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Patrick Lucey", "Jeffrey F Cohn", "Takeo Kanade", "Jason Saragih", "Zara Ambadar", "Iain Matthews" ], "title": "The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression", "venue": "In 2010 ieee computer society conference on computer vision and pattern recognition-workshops,", "year": 2010 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks. 
2019b", "venue": null, "year": 2019 }, { "authors": [ "Jonathan Masci", "Davide Boscaini", "Michael Bronstein", "Pierre Vandergheynst" ], "title": "Geodesic convolutional neural networks on riemannian manifolds", "venue": "In Proceedings of the IEEE international conference on computer vision workshops,", "year": 2015 }, { "authors": [ "Zibo Meng", "Ping Liu", "Jie Cai", "Shizhong Han", "Yan Tong" ], "title": "Identity-aware convolutional neural network for facial expression recognition", "venue": "12th IEEE International Conference on Automatic Face & Gesture Recognition (FG", "year": 2017 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "E Morales-Vargas", "CA Reyes-Garćıa", "Hayde Peregrina-Barreto" ], "title": "On the use of action units and fuzzy explanatory models for facial expression recognition", "venue": "PloS one,", "year": 2019 }, { "authors": [ "Christopher Morris", "Martin Ritzert", "Matthias Fey", "William L Hamilton", "Jan Eric Lenssen", "Gaurav Rattan", "Martin Grohe" ], "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Charles R Qi", "Hao Su", "Matthias Nießner", "Angela Dai", "Mengyuan Yan", "Leonidas J Guibas" ], "title": "Volumetric and multi-view cnns for object classification on 3d data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer 
vision and pattern recognition,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Q Qiu", "X Cheng", "R Calderbank", "G Sapiro" ], "title": "Dcfnet: Deep neural network with decomposed convolutional filters", "venue": "In International Conference Machine Learning,", "year": 2018 }, { "authors": [ "Anurag Ranjan", "Timo Bolkart", "Soubhik Sanyal", "Michael J Black" ], "title": "Generating 3d faces using convolutional mesh autoencoders", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bin Ren", "Mengyuan Liu", "Runwei Ding", "Hong Liu" ], "title": "A survey on 3d skeleton-based action recognition using learning method", "venue": "arXiv preprint arXiv:2002.05907,", "year": 2020 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Stefan C Schonsheck", "Bin Dong", "Rongjie Lai" ], "title": "Parallel transport convolution: A new tool for convolutional neural networks on manifolds", "venue": "arXiv preprint arXiv:1805.07857,", "year": 2018 }, { "authors": [ "Amir Shahroudy", "Jun Liu", "Tian-Tsong Ng", "Gang Wang" ], "title": "Ntu rgb+ d: A large scale dataset for 3d human activity analysis", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Raviteja Vemulapalli", "Felipe Arrate", "Rama Chellappa" ], "title": "Human action recognition by representing 3d skeletons as points in a lie group", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ 
"Raviteja Vemulapalli", "Felipe Arrate", "Rama Chellappa" ], "title": "Human action recognition by representing 3d skeletons as points in a lie group", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Jiang Wang", "Zicheng Liu", "Ying Wu", "Junsong Yuan" ], "title": "Mining actionlet ensemble for action recognition with depth cameras", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks?", "venue": null, "year": 2019 }, { "authors": [ "Sijie Yan", "Yuanjun Xiong", "Dahua Lin" ], "title": "Spatial temporal graph convolutional networks", "venue": null, "year": 2019 }, { "authors": [ "Dongmian Zou", "Gilad Lerman" ], "title": "Graph convolutional neural networks via scattering", "venue": null, "year": 2018 }, { "authors": [ "L · CC ′. GCN (Kipf", "Welling" ], "title": "2016) is a special case of ChebNet. Take L = 2 in (5), and tie the choice of θ0 and θ1, Mc′,c = θ(c", "venue": null, "year": 2016 }, { "authors": [ "• GAT In GAT (Veličković" ], "title": "2017), R being the number of attention heads, the graph convolution operator in one GNN layer can be written as (omitting bias and non-linear mapping", "venue": null, "year": 2017 }, { "authors": [ "W. 
Isufi" ], "title": "2020) also proposed higherorder GAT by considering powers of the affinity matrix A as well as the edge-varying", "venue": null, "year": 2020 }, { "authors": [ "Per Eqn" ], "title": "2020), the edge-varying GNN layer mapping can be written", "venue": null, "year": 2020 }, { "authors": [ "sphere", "follow Jiang" ], "title": "2019) by moving projected digit to equator, avoiding coordinate singularity at poles", "venue": null, "year": 2019 }, { "authors": [ "Tab. A" ], "title": "SphereMNIST and Sphere-ModelNet40 on fine meshes on the sphere. Specifically, the mesh used for SphereMNIST here is of levels L4, L3, L2, and the SphereModelNet-40 mesh of levels L5", "venue": "L4, L3,", "year": 2019 }, { "authors": [ "Ding" ], "title": "Batch size is set as 16, learning rate is 0.001 which decay by 0.1 if validation loss remains same for last 15 epochs. We choose Adam optimizer and train 100 epochs for each fold validation", "venue": null, "year": 2017 }, { "authors": [ "Yan" ], "title": "This is a small dataset that contains 30 action classes", "venue": null, "year": 2018 }, { "authors": [ "locally-connected GNN (Coates", "Ng" ], "title": "There is also flexibility in defining the graph down/up-sampling schemes, and the choice depends on application. An example of graph sampling operator on face mesh data is given in (Ranjan et al., 2018). At last, apart from using separate down/up-sampling layers, it is also possible to extend the L3Net model", "venue": "Bruna et al.,", "year": 2011 } ]
[ { "heading": "1 Introduction", "text": "Deep methods have achieved great success in visual cognition, yet they still lack the capability to tackle severe geometric transformations such as rotation, scaling, and viewpoint changes. This problem is often handled by conducting data augmentation with these geometric variations included, e.g., by randomly rotating images, so as to make the trained model robust to these variations. However, this remarkably increases training time and the number of model parameters. Another way is to make use of certain underlying structures of objects, e.g., facial landmarks (Chen et al., 2013) and human skeleton landmarks (Vemulapalli et al., 2014a), c.f. Fig. 1 (right). Nevertheless, these methods then adopt hand-crafted features based on landmarks, which greatly constrains their ability to obtain rich features for downstream tasks. One of the main obstacles to feature extraction is the non-Euclidean property of the underlying structures; in particular, it prohibits the direct usage of prevalent convolutional neural network (CNN) architectures (He et al., 2016; Huang et al., 2017). Whereas there are recent CNN models designed for non-Euclidean grids, e.g., for spherical meshes (Jiang et al., 2019; Cohen et al., 2018; Coors et al., 2018) and manifold meshes in computer graphics (Bronstein et al., 2017; Fey et al., 2018), they mainly rely on partial differential operators, which can only be calculated precisely on a fine and regular mesh, and may not be applicable to landmarks, which are irregular and coarse. Recent works have also applied Graph Neural Network (GNN) approaches to coarse non-Euclidean data, yet methods using GCN (Kipf & Welling, 2016) may fall short of model capacity, and other methods adopting GAT (Veličković et al., 2017) are mostly heuristic and lack theoretical analysis. A detailed review is provided in Sec. 
1.1.

In this paper, we propose a graph convolution model, called L3Net, originating from a low-rank graph filter decomposition, c.f. Fig. 1 (left). The model provides a unified framework for graph convolutions, including ChebNet (Defferrard et al., 2016), GAT, EdgeNet (Isufi et al., 2020), and CNN/geometric CNN with low-rank filters as special cases. In addition, we theoretically prove that L3Net is strictly more expressive in representing graph signals than spectral graph convolutions based on global adjacency/graph Laplacian matrices, which is then empirically validated, c.f. Sec. 3.1. We also prove a Lipschitz-type representation stability of the new graph convolution layer using perturbation analysis.

Because our model allows neighborhood-specialized local graph filters, regularization may be needed to prevent over-fitting, so as to handle changing underlying graph topology and other graph noise, e.g., inaccurately detected landmarks or missing landmark points due to occlusions. Therefore, we also introduce a regularization scheme based on local graph Laplacians, motivated by the eigen-properties of the latter. This further improves the aforementioned representation stability. 
The improved performance of L3Net compared to other GNN benchmarks is demonstrated in a series of experiments, and with the proposed graph regularization, our model shows robustness to a variety of graph data noise.

In summary, the contributions of the work are the following:

• We propose a new graph convolution model based on a low-rank decomposition of graph filters over a trainable local basis, which unifies several previous models of both spectral and spatial graph convolutions.

• Regularization by local graph Laplacians is introduced to improve the robustness against graph noise.

• We provide theoretical proof of the enlarged expressiveness for representing graph signals and of the Lipschitz-type input-perturbation stability of the new graph convolution model.

• We demonstrate with applications to object recognition of spherical data and facial expression/skeleton-based action recognition using landmarks. Model robustness against graph data noise is validated on both real-world and simulated datasets." }, { "heading": "1.1 Related Works", "text": "Modeling on face/body landmark data. Many applications in computer vision, such as facial expression recognition (FER) and skeleton-based action recognition, need to extract high-level features from landmark data sampled at irregular grid points on the human face or at body joints. While CNN methods (Guo et al., 2016; Ding et al., 2017; Meng et al., 2017) prevail in the FER task, landmark methods have the potential advantages of lighter model size and greater robustness to the previously mentioned geometric transformations, such as pose variation. Earlier methods based on facial landmarks used hand-crafted features (Jeong & Ko, 2018; Morales-Vargas et al., 2019) rather than deep networks. Skeleton-based methods in action recognition have been developed intensively in recent years (Ren et al., 2020), including non-deep methods (Vemulapalli et al., 2014b; Wang et al., 2012) and deep methods (Ke et al., 2017; Kim & Reiter, 2017; Liu et al., 2016; Yan et al., 2018). Facial and skeleton landmarks only give a coarse and irregular grid, so mesh-based geometrical CNNs are hardly applicable, while previous GNN models on such tasks may lack sufficient expressive power.

Graph convolutional network. A systematic review can be found in several places, e.g., Wu et al. (2020). Spectral graph convolution was proposed using the full eigendecomposition of the graph Laplacian in Bruna et al. (2013), Chebyshev polynomials in ChebNet (Defferrard
Skeleton-based methods in action recognition have been developed intensively recently (Ren et al., 2020), including non-deep methods (Vemulapalli et al., 2014b; Wang et al., 2012) and deep methods (Ke et al., 2017; Kim & Reiter, 2017; Liu et al., 2016; Yan et al., 2018). Facial and skeleton landmarks only give a coarse and irregular grid, and then mesh-based geometrical CNN’s are hardly applicable, while previous GNN models on such tasks may lack sufficient expressive power.\nGraph convolutional network. A systematic review can be found in several places, e.g. Wu et al. (2020). Spectral graph convolution was proposed using full eigen decomposition of the graph Laplacian in Bruna et al. (2013), Chebyshev polynomial in ChebNet (Defferrard\net al., 2016), by Cayley polynomials in Levie et al. (2018). GCN (Kipf & Welling, 2016), the mostly-used GNN, is a variant of ChebNet using degree-1 polynomial. Liao et al. (2019) accelerated the spectral computation by Lanczos algorithm. Graph scattering transform has been developed using graph wavelets (Zou & Lerman, 2020; Gama et al., 2019b), which can be constructed in the spectral domain (Hammond et al., 2011) and by diffusion wavelets (Coifman & Maggioni, 2006). The scattering transform enjoys theoretical properties of the representation but lacks adaptivity compared to trainable neural networks. Spatial graph convolution has been performed by summing up neighbor nodes’ transformed features in NN4G (Scarselli et al., 2008), by graph diffusion process in DCNN (Atwood & Towsley, 2016), where the graph propagation across nodes is by the adjacency matrix. 
Graph convolution with trainable filters has also been proposed in several settings: MPNN (Gilmer et al., 2017) enhanced model expressiveness through message passing and sub-networks; GraphSage (Hamilton et al., 2017) used trainable, differentiable local aggregator functions in the form of an LSTM or mean/max-pooling; GAT (Veličković et al., 2017) and its variants (Li et al., 2018; Zhang et al., 2018; Liu et al., 2019) introduced an attention mechanism to achieve adaptive graph affinities, which remain non-negative; EdgeNet (Isufi et al., 2020) developed adaptive filters by taking products of trainable local filters. Our model learns local filters that can take negative values and contains GAT and EdgeNet as special cases. Theoretically, the expressive power of GNNs has been studied in Morris et al. (2019); Xu et al. (2019); Maron et al. (2019a;b); Keriven & Peyré (2019), mainly focusing on distinguishing graph topologies, while our primary concern is to distinguish signals lying on a graph.

CNN and geometrical CNN. A standard CNN applies local filters translated and shared across locations on a Euclidean domain. To extend CNNs to non-Euclidean domains, convolution on a regular spherical mesh using geometrical information has been studied in S2CNN (Cohen et al., 2018), SphereNet (Coors et al., 2018), SphericalCNN (Esteves et al., 2018), and UGSCNN (Jiang et al., 2019), and applied to 3D object recognition, for which other deep methods include 3D convolutional (Qi et al., 2016) and non-convolutional architectures (Qi et al., 2017a;b). CNNs on manifolds construct weight-sharing across local atlases making use of a mesh, e.g., by the patch operator in Masci et al. (2015), anisotropic convolution in ACNN (Boscaini et al., 2016), mixture-model parametrization in MoNet (Monti et al., 2017), spline functions in SplineCNN (Fey et al., 2018), and manifold parallel transport in Schonsheck et al. (2018). 
These geometrical CNN models use information from non-Euclidean meshes, which usually require sufficiently fine resolution." }, { "heading": "2 Method", "text": "" }, { "heading": "2.1 Decomposed local filters", "text": "Consider an undirected graph G = (V, E), |V| = n. A graph convolution layer maps from input node features X(u′, c′) to output Y(u, c), where u, u′ ∈ V, c′ ∈ [C′] (c ∈ [C]) is the input (output) channel index, the notation [m] means {1, · · · , m}, and
Y(u, c) = σ( Σ_{u′∈V, c′∈[C′]} M(u′, u; c′, c) X(u′, c′) + bias(c) ),   u ∈ V, c ∈ [C].   (1)
The spatial and spectral graph convolutions correspond to different ways of specifying M, cf. Sec. 2.3. The proposed graph convolution is defined as
M(u′, u; c′, c) = Σ_{k=1}^K ak(c′, c) Bk(u′, u),   ak(c′, c) ∈ R,   (2)
where Bk(u′, u) is non-zero only when u′ ∈ N_u^(dk), with N_u^(d) denoting the d-th order neighborhood of u (i.e., the set of d-neighbors of u), and K is a fixed number. In other words, the Bk are K basis local filters around each u, and the order dk can differ across 1 ≤ k ≤ K. Both ak and Bk are trainable, so the number of parameters is K·CC′ + Σ_{k=1}^K Σ_{u∈V} |N_u^(dk)| ∼ K·CC′ + Knp, where p stands for the average local patch size. In our experiments we use K up to 5, and dk up to 3. We provide the matrix notation of (2) in Appendix A.1.
The construction (2) can be used as a layer type in larger GNN architectures. Pooling of graphs can be added between layers; see Appendix C.5 for further discussion of the multiscale model. The choice of K and the neighborhood orders (d1, · · · , dK) can also be adjusted accordingly. The model may be extended in several ways, to be discussed in the last section." }, { "heading": "2.2 Regularization by local graph Laplacian", "text": "The proposed L3Net layer enlarges the model capacity by allowing K basis filters at each location, and a natural way to regularize the trainable filters is by the graph geometry, where, by construction, only the local graph patch is concerned.
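As a concrete illustration of the layer defined in (1)-(2) above, the following is a minimal numerical sketch on a toy 6-node ring graph; all names (e.g. `l3net_layer`) are illustrative and not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, C_in, C_out, K = 6, 3, 4, 2          # |V|, C', C, number of bases

# Toy ring graph adjacency.
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0

# Support masks: B_k(u', u) may be nonzero only for u' in N_u^(d_k).
orders = [1, 2]                          # d_1 = 1, d_2 = 2
masks = []
for d in orders:
    reach = np.eye(n)
    for _ in range(d):
        reach = reach + reach @ A        # grow the neighborhood by one hop
    masks.append((reach > 0).astype(float))

B = [rng.standard_normal((n, n)) * m for m in masks]  # local basis filters
a = rng.standard_normal((K, C_in, C_out))             # channel coefficients
bias = rng.standard_normal(C_out)

def l3net_layer(X):
    """Y = sigma(sum_k B_k X a_k + bias), the matrix form of Eq. (1)-(2)."""
    Z = sum(B[k] @ X @ a[k] for k in range(K)) + bias
    return np.maximum(Z, 0.0)            # ReLU

X = rng.standard_normal((n, C_in))
Y = l3net_layer(X)
print(Y.shape)                           # (6, 4)
```

The per-basis parameter count here matches the K·CC′ + Knp budget discussed above: K tensors of C′×C coefficients plus one masked n×n filter per basis.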
We introduce the following regularization penalty on the basis filters Bk:
R({Bk}k) = Σ_{k=1}^K Σ_{u∈V} (b_u^(k))^T L_u^(k) b_u^(k),   b_u^(k)(v) := Bk(v, u),   b_u^(k) : N_u^(dk) → R,   (3)
where L_u^(k), equaling (D − A) restricted to the subgraph on N_u^(dk), is the Dirichlet local graph Laplacian on N_u^(dk) (Chung & Graham, 1997) (Fig. 2). The training objective is
L({ak, Bk}k) + λ R({Bk}k),   λ ≥ 0,   (4)
where L is the classification loss. As L encourages the diversity of the Bk, the rank-K constraint usually remains tight in training unless λ is very large; see also Proposition 3." }, { "heading": "2.3 A unified framework for graph convolutions", "text": "Graph convolutions basically fall into two categories, the spatial and the spectral constructions (Wu et al., 2020). The proposed L3Net belongs to the spatial construction, and here we show that the model (2) is a unified framework for various graph convolutions, both spatial and spectral. Details and proofs are given in Appendix A.
• ChebNet (Defferrard et al., 2016), GAT (Veličković et al., 2017), EdgeNet (Isufi et al., 2020): In ChebNet, M per (c′, c) equals a degree-(L−1) polynomial of the graph Laplacian matrix, where the polynomial coefficients are trainable. GCN (Kipf & Welling, 2016) can be viewed as ChebNet with polynomial degree 1 and tied coefficients. The attention mechanism in GAT enhances the model expressiveness by incorporating adaptive kernel-based non-negative affinities. In EdgeNet, the graph convolution operator is a product of trainable local filters supported on order-1 neighborhoods. We have the following proposition:
Proposition 1. L3Net (2) includes the following models as special cases:
(1) ChebNet (GCN) when K ≥ L (K ≥ 2), L being the polynomial degree.
(2) GAT when K ≥ R, R being the number of attention branches.
(3) EdgeNet when K ≥ L, L being the order of graph convolutions.
• CNN: When nodes lie on a geometrical domain that allows translation (u′ − u), setting Bk(u′, u) = bk(u′ − u) in (2) for some bk(·) enforces a spatial convolution. The convolutional kernel can be decomposed as Σ_k ak(c′, c) bk(·) (Qiu et al., 2018). Extension to CNNs on a manifold mesh is also possible, as in Masci et al. (2015); Fey et al. (2018). We have the following:
Proposition 2. Mesh-based geometrical CNNs defined by linear patch operators, including standard CNNs on R^d, with low-rank decomposed filters are special cases of L3Net (2).
We also note that L3Net reduces from the locally connected GNN (Coates & Ng, 2011; Bruna et al., 2013), the largest class of spatial GNNs, only by the low-rankness imposed by a small number K in (2). The locally connected GNN can be viewed as (1) with the requirement that for each (c, c′), M(u′, u; c′, c) is nonzero only when u′ is locally connected to u. The complexities of the various models are summarized in Fig. 2 (Table), where L3Net reduces the np·CC′ complexity of the locally connected net to the additive (np + CC′) times K.
When the numbers of channels C, C′ are large, e.g., in deep layers they are ∼ 10^2, and the graph size is not large, e.g., in landmark data applications np ≪ CC′, the complexity is dominated by KCC′, which is comparable with ChebNet (GAT) if K ≈ L (R). The computational cost is also comparable, as shown in the experiments in Sec. 4. Furthermore, we have:
Proposition 3. Suppose the subgraphs on N_u^(dk) are all connected. Given αu,k > 0 for all u, k, the minimum of (3) under the constraint ‖b_u^(k)‖_2 ≥ αu,k is achieved when b_u^(k) equals the first Dirichlet eigenvector on N_u^(dk), which does not change sign on N_u^(dk).
The proposition shows that in the strong regularization limit λ → ∞ in (4), L3Net reduces to be ChebNet-like. The constraint with constants αu,k is included because otherwise the minimizer would be Bk ≡ 0.
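Proposition 3 can be illustrated with a small numpy sketch on a single toy patch (a 4-node path subgraph whose last node touches the rest of the graph); the setup and names are illustrative only:

```python
import numpy as np

# A 4-node path subgraph as the local patch N_u^(dk).
A_patch = np.array([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
deg = A_patch.sum(axis=1)
deg[-1] += 1.0                # one boundary edge to the rest of the graph
L_u = np.diag(deg) - A_patch  # Dirichlet local graph Laplacian (D - A)

evals, evecs = np.linalg.eigh(L_u)   # eigenvalues in ascending order
w = evecs[:, 0]                      # first Dirichlet eigenvector, ||w||_2 = 1

# Connected patch touching the boundary => positive-definite Laplacian,
# and the minimizer of w^T L w under ||w||_2 >= 1 is sign-constant.
print(evals[0] > 0)                       # True
print(np.all(w > 0) or np.all(w < 0))     # True
print(np.isclose(w @ L_u @ w, evals[0]))  # True: penalty value at the minimizer
```

By the variational characterization of eigenvalues, the quadratic penalty at the minimizer equals the smallest Dirichlet eigenvalue, matching the claim that the strong-λ limit drives each b_u^(k) toward a sign-constant averaging filter.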
The first Dirichlet eigenvector is envelope-like (Fig. 2), and then Bk(·, u) will be an averaging operator on the local patch. Thus the regularization parameter λ can be viewed as trading off between more expressiveness in the learnable Bk and more stability of the averaging local filters, similarly to ChebNet and GCN." }, { "heading": "3 Analysis", "text": "We analyze the representation expressiveness and stability (defined below) of the proposed L3Net model. All proofs are in Appendix A, and experimental details in Appendix B." }, { "heading": "3.1 Representation expressiveness of graph signals", "text": "The theoretical question of graph signal representation expressiveness concerns the ability of GNN deep features to distinguish graph signals. While related, the problem differs from the graph isomorphism test problem, which has been intensively studied in the GNN expressiveness literature. Here we prove that L3Net is strictly more expressive than certain spectral GNNs, and support the theoretical prediction by experiments.
We have shown that the L3Net model contains ChebNet (Proposition 1), and the following proposition proves the strictly greater expressiveness for graph signal classification. We call B a graph local filter if B(u, v) is non-zero only when v is in the neighborhood of u. In a spectral GNN, the graph convolution takes the form x 7→ f(A)x, where f is a function on R and A is the (possibly normalized) adjacency matrix.
Proposition 4. There is a graph and 1) a local filter B on it such that B cannot be expressed by any spectral graph convolution, but can be expressed by L3Net with K = 1; 2) two data distributions on the graph (two classes) such that, with a permutation-group-invariant operator in the last layer, the deep feature of any spectral GNN cannot distinguish the two classes, but that of L3Net with 1 layer and K = 1 can.
The fundamental argument is that spectral GNNs are permutation equivariant (see e.g. Gama et al.
(2019a), reproduced as Lemma A.1), and the local filters in L3Net break this symmetry to obtain more discriminative power. The constructive example used in the proof is on a ring graph (Fig. A.1, A and the basis B), with the two data distributions shown in Fig. 3. Proposition 4 gives that, on the ring graph and using a GNN with global pooling in the last layer, an L3Net layer with K = 1 can have classification power while a ChebNet of any order cannot. On a chain graph (removing the connection between the two end points of a ring graph), which does not exactly follow the theory's assumption, since the two graphs differ only at one edge, we expect that it will remain a difficult case for ChebNet but not for L3Net. To verify the theory, we conduct experiments using a two-layer GNN, and the results are in Fig. 3 (table). In the last row, we further impose a shared basis across nodes, which reduces L3Net to a 1D convolutional layer, and the learned basis shows a “difference” shape (right plot) which explains its classification power. Results are similar using a 1-layer GNN (Tab. A.1). The argument in Proposition 4 extends to other graphs and network types. Generally, when a GNN based on a global graph adjacency or Laplacian matrix applies linear combinations of local averaging filters, certain graph filters may be difficult to express. We experimentally examine GAT, WLN and MPNN, which underperform on the binary classification task, as shown in Fig. 3 (table)." }, { "heading": "3.2 Representation stability", "text": "We derive perturbation bounds of the GNN feature representation, which is important for robustness against data noise. The analysis implies a trade-off between de-noising and keeping high-frequency information, which is consistent with the experimental observations in Sec. 4.
Consider the change in the GNN layer output Y defined in (1)(2) when the input X changes. For simplicity, let C = C′ = 1; the argument extends.
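Before turning to the perturbation bounds, the symmetry argument behind Proposition 4 can be checked numerically. The sketch below (illustrative names, 8-node ring) verifies that the mirror flip preserves A, that any polynomial f(A) therefore commutes with the flip, and that a "difference" local filter does not:

```python
import numpy as np

n = 8
A = np.zeros((n, n))
for u in range(n):                       # ring graph adjacency
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0

P = np.zeros((n, n))                     # mirror flip around node 0: v -> -v (mod n)
for v in range(n):
    P[(-v) % n, v] = 1.0

# The flip preserves the ring topology: P A P^T = A.
print(np.allclose(P @ A @ P.T, A))       # True

# Hence any polynomial f(A) commutes with the flip (Lemma A.1).
fA = 0.3 * np.eye(n) + 0.5 * A + 0.2 * (A @ A)
print(np.allclose(fA @ P, P @ fA))       # True

# The "difference" filter B: +1 at u' = u, -1 at u' = u + 1.
B = np.eye(n)
for u in range(n):
    B[(u + 1) % n, u] = -1.0
print(np.allclose(B @ P, P @ B))         # False: B breaks the symmetry
```

Since B does not commute with the flip, it cannot equal f(A) for any f, while it is trivially an L3Net basis with K = 1.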
For any graph signal x : V → R and V′ ⊂ V, define ‖x‖_{2,V′} := (Σ_{u∈V′} x(u)^2)^{1/2} and 〈x, y〉_{V′} := Σ_{u∈V′} x(u) y(u). The following perturbation bound holds for the L3Net layer with or without regularization.
Theorem 1. Suppose that X = {X(u)}_{u∈V} is perturbed to X̃ = X + ∆X, the activation function σ : R → R is non-expansive, and sup_{u∈V} Σ_{k=1}^K |N_u^(dk)| ≤ Kp. Then the change in the output {Y(u)}_{u∈V} in 2-norm is bounded by
‖∆Y‖_{2,V} ≤ β^(1) · ‖a‖_2 √(Kp) ‖∆X‖_{2,V},   β^(1) := sup_{k,u} ‖Bk(·, u)‖_{2,N_u^(dk)}.
Note that p indicates the averaged size of the dk-order local neighborhoods. The theorem implies that when K is O(1), and the local bases Bk have O(1) 2-norms on all local patches, uniformly bounded by β^(1), then the Lipschitz constant of the GNN layer mapping is O(1), i.e., the product of ‖a‖_2, β^(1) and √(Kp), which does not scale with n. This generalizes the 2-norm bound of a convolutional operator, which only involves the norm of the convolutional kernel; this is possible due to the local receptive fields in the spatial construction of L3Net.
The local graph regularization introduced in Sec. 2.2 improves the stability of Y w.r.t. ∆X by suppressing the response to local high-frequency perturbations in ∆X. Specifically, the local graph Laplacian L_u^(k) on the subgraph on N_u^(dk) is positive definite whenever the subgraph is connected and not isolated from the whole graph. We then define the weighted 2-norm on the local patch ‖x‖_{L_u^(k)} := 〈x, L_u^(k) x〉_{N_u^(dk)}^{1/2}, and similarly ‖x‖_{(L_u^(k))^{-1}}.
Theorem 2. Notation and setting as in Theorem 1. If, furthermore, all the subgraphs on N_u^(dk) are connected within themselves and to the rest of the graph, and there is ρ ≥ 0 s.t. ∀u, k, ‖∆X‖_{(L_u^(k))^{-1}} ≤ ρ ‖∆X‖_{2,N_u^(dk)}, then
‖∆Y‖_{2,V} ≤ ρ β^(2) · ‖a‖_2 √(Kp) ‖∆X‖_{2,V},   β^(2) := sup_{k,u} ‖Bk(·, u)‖_{L_u^(k)}.
The bound improves on Theorem 1 when ρ β^(2) < β^(1), and regularizing by R = Σ_{u,k} ‖Bk(·, u)‖_{L_u^(k)}^2 leads to a smaller β^(2).
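A quick numerical sanity check of the Theorem 1 bound on a random instance (C = C′ = 1, 1-hop filters on a 10-node ring; a sketch with illustrative names, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 10, 2
A = np.zeros((n, n))
for u in range(n):                       # ring graph
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0
mask = (np.eye(n) + A) > 0               # 1-hop neighborhoods N_u^(1)

B = [rng.standard_normal((n, n)) * mask for _ in range(K)]
a = rng.standard_normal(K)

def layer(x):
    # y(u) = relu(sum_k a_k <B_k(., u), x>); column B_k[:, u] is b_u^(k)
    return np.maximum(sum(a[k] * B[k].T @ x for k in range(K)), 0.0)

x = rng.standard_normal(n)
dx = 0.1 * rng.standard_normal(n)
dy = layer(x + dx) - layer(x)

Kp = K * 3                               # sup_u sum_k |N_u^(1)| on the ring
beta1 = max(np.linalg.norm(B[k][:, u]) for k in range(K) for u in range(n))
bound = beta1 * np.linalg.norm(a) * np.sqrt(Kp) * np.linalg.norm(dx)
print(np.linalg.norm(dy) <= bound)       # True: the bound holds on this draw
```

The bound is deterministic (it holds for every input and perturbation), so the check passes for any random draw; the ReLU is non-expansive, as the theorem requires.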
Meanwhile, on each N_u^(dk) the Dirichlet eigenvalues increase, 0 < λ_1 ≤ λ_2 ≤ · · · ≤ λ_{p_{u,k}}, p_{u,k} := |N_u^(dk)|, thus weighting by λ_l^{−1} in ‖·‖_{(L_u^(k))^{−1}} decreases the contribution from high-frequency eigenvectors. As a result, ρ will be small if ∆X contains a significant high-frequency component on the local patch, e.g., additive Gaussian noise or missing values. Note that in the weighted 2-norm of ∆X by (L_u^(k))^{−1}, only the relative amount of high-frequency component in ∆X matters (because any constant normalization of L_u^(k) cancels in the product of ρ and β^(2)). The benefits of local graph regularization in the presence of noise in graph data will be shown in the experiments." }, { "heading": "4 Experiment", "text": "We test the proposed L3Net model on several datasets (code available at https://github.com/ZichenMiao/L3Net)." }, { "heading": "4.1 Object recognition of data on spherical mesh", "text": "We first classify data on a spherical mesh: sphere MNIST and sphere ModelNet-40, following the settings in the literature. Though a regular mesh on the sphere is not the primary application scenario that motivates our model, we include these experiments to compare with benchmarks and to test the efficiency of L3Net on such regular meshes. Following UGSCNN (Jiang et al., 2019), we implement different mesh resolutions on a sphere, indicated by “mesh level” (Fig. 4), where the number of nodes can vary from 2562 (level 4) to 12 (level 0). All the networks consist of three convolutional layers; see more details in Appendix C.1. Using the original mesh levels (4;3;2), the finest resolution as in UGSCNN, L3Net gives among the best accuracies for sphere MNIST. On ModelNet-40, L3Net achieves a testing accuracy of 90.24, outperforming ChebNet and GCN, and is comparable to UGSCNN, which uses spherical mesh information (Tab. A.2). When the mesh becomes coarser, as shown in Fig.
4 (Table), L3Net improves over GCN and ChebNet (L=4) and is comparable with UGSCNN under nearly all mesh settings. We observe that in some settings ChebNet can benefit from a larger L, but the overall accuracy is still inferior to L3Net's. The rightmost two columns give two cases of coarse meshes where L3Net shows the most significant advantage." }, { "heading": "4.2 Facial expression recognition (FER)", "text": "We test on two FER datasets, Extended Cohn-Kanade (CK+) (Lucey et al., 2010) and FER13 (Goodfellow et al., 2013). We use 15 facial landmarks, see Fig. 1, and pixel values on a patch around each landmark point as node features. Details about the dataset and model setup are in Appendix C.2. Unlike a spherical mesh, facial and body landmarks are coarse irregular grids where no clear pre-defined mesh operation is applicable. We benchmark L3Net against other GNN approaches, as shown in Table 1. The local graph regularization strategy is applied on FER13, due to the severe outliers in landmark detection caused by occlusion. On CK+, L3Net leads all non-CNN models by a large margin, and the best model (1,1,2,3) uses a comparable number of parameters to the best ChebNet (L=4). On FER13, L3Net has lower performance than ChebNet and EdgeNet (Isufi et al., 2020), but outperforms them after adding regularization. The running times of the best ChebNet and L3Net models are comparable, and are much less than GAT's." }, { "heading": "4.3 Action recognition", "text": "We test on two skeleton-based action recognition datasets, NTU-RGB+D (Shahroudy et al., 2016) and Kinetics-Motion (Kay et al., 2017). The irregular mesh is the 18/25-point body landmarks, with graph edges defined by body joints, shown in Fig. 1 and Fig. A.2. We adopt ST-GCN (Yan et al., 2018) as the base architecture and substitute the GCN layer with the new L3Net layer, called ST-L3Net. On Kinetics-Motion, we adopt the regularization mechanism to overcome the severe missing data caused by camera out-of-view.
See more experimental details in Appendix C.3. We benchmark performance against ST-GCN (Yan et al., 2018), ST-GCN (our implementation without using geometric information) and ST-ChebNet (replacing GCN with a ChebNet layer), shown in Table 2. L3Net shows significant advantages on the two NTU tasks, the cross-view and cross-subject settings. On Kinetics-Motion, L3Net regains superiority over the other models after applying regularization. The results in both Tables 1 and 2 indicate that stronger regularization sacrifices expressiveness on clean data and gains stability on noisy data, which is consistent with the theory in Sec. 3.2." }, { "heading": "4.4 Robustness to graph noise", "text": "To examine the robustness to graph noise, we experiment on down-sampled MNIST data on a 2D regular grid with a 4-nearest-neighbor graph. With no noise, on 28×28 data (Tab. A.3), 14×14 data (Tab. A.4), and 7×7 data (Tab. 3, “original” column), the performance of L3Net is comparable to ChebNet (Defferrard et al., 2016) and EdgeNet (Isufi et al., 2020) and better than other GNN methods. We consider three types of noise: Gaussian noise added to the pixel values, missing nodes (or equivalently missing values in the image input), and permutation of the node indices; details in Appendix C.4. The results of adding different levels of Gaussian noise and permutation noise are shown in Tab. 3, while the results of adding missing-value noise are provided in Appendix C.4. The results show that our regularization scheme improves the robustness to all three types of graph noise, supporting the theory in Sec. 3.2. Specifically, L3Net without regularization may underperform ChebNet, but catches up after adding regularization, which is consistent with Proposition 3." }, { "heading": "5 Conclusion and Discussion", "text": "The paper proposes a new graph convolution model using learnable local filters decomposed over a small number of bases.
Strengths: Provable enhancement of model expressiveness with significantly reduced model complexity compared to the locally connected GNN. Improved stability and robustness via local graph regularization, supported by theory. A plug-and-play layer type, suitable for GNN graph signal classification problems on relatively unchanging small underlying graphs, like face/body landmark data in FER and action recognition applications.
Limitations and extensions: (1) Scalability to larger graphs. When |V| = n is large, the complexity increase in the npK term would be significant. In practice the issue can be remedied by mixing layer types, e.g., adopting L3Net layers only in the upper levels of the mesh, which are of reduced size. (2) Dynamically changing underlying graphs across samples. For more severe changes of the underlying graph, we can benefit from solutions such as node registration or other preprocessing techniques, possibly by another neural network. Related is the question of reducing the model's dependence on graph topology, possibly under a statistical model of the underlying graphs. This includes transferability to larger networks. (3) Incorporation of edge features. Edge features can be transformed into extra channels of node features by an additional layer at the bottom, and the low-rank graph operation can be similarly employed there. (4) Theoretically, the representation robustness analysis is to be extended to more general types of graph perturbation. Generally, one can work to extend to other types of graph data and tasks." }, { "heading": "Acknowledgements", "text": "The work is supported by NSF (DMS-1820827). XC is partially supported by NIH and the Alfred P. Sloan Foundation. ZM and QQ are partially supported by NSF and the DARPA TAMI program." }, { "heading": "Appendix", "text": "" }, { "heading": "A Proofs", "text": "" }, { "heading": "A.1 Details and proofs in Sec.
2.3", "text": "To facilitate comparison with the literature, we provide a summary of various graph convolution models in matrix notation, the precise definitions of which will be detailed below. For simplicity, only the linear transform part is shown; the addition of bias and the point-wise non-linearity are omitted.
Notation as in Section 2.1; suppose X ∈ R^{n×C′} is the input node feature, and Y ∈ R^{n×C} the output feature.
• L3Net (ours): Y = Σ_{k=1}^K Bk X Ak, where Bk ∈ R^{n×n} is the local basis filter and Ak ∈ R^{C′×C} are the coefficients; both Bk and Ak are learnable.
• ChebNet/GCN: Y = Σ_{l=0}^{L−1} Tl(L̃) X Θl, where the Tl(·) are Chebyshev polynomials, L̃ is the rescaled and re-centered graph Laplacian, Tl(L̃) ∈ R^{n×n}, and Θl ∈ R^{C′×C} are trainable.
• GAT: Y = Σ_{r=1}^R A^(r) X Θr, where A^(r) ∈ R^{n×n} is the graph attention affinity computed adaptively from the input features, and Θr ∈ R^{C′×C} are trainable and weight-shared with the parameters in A^(r); see more below.
• EdgeNet: Y = Σ_{r=0}^{L−1} Pr X Θr, where Pr = Π_{k=0}^r Φk for a sequence of trainable local filters Φk, and Θr ∈ R^{C′×C} are trainable.
From the matrix formulation, it can be seen that when the Bk are the classical graph filtering operators, e.g., polynomials of L̃, and the Ak the trainable Θk, L3Net recovers the above graph convolution models in the literature (cf. Proposition 1). Below we give more details, as well as the reduction to filter-decomposed CNNs (cf. Proposition 2)." }, { "heading": "A.1.1 Locally connected GNN", "text": "Specifically, the construction in Coates & Ng (2011); Bruna et al. (2013) assumes that u and u′ belong to graphs of different scales: u′ is on the fine graph, and u is on a coarse-grained layer produced by clustering the indices of the graph of the input layer.
If one generalizes the construction to allow overlapping of the receptive fields, and assumes no pooling or coarse-graining of the graph, then the number of non-zero parameters is
Σ_{u∈V} |Nu| · CC′ = np · CC′,
where n = |V|, p is the average patch size |Nu|, and C and C′ are the numbers of input and output feature channels.

A.1.2 ChebNet/GCN, GAT and EdgeNet
• ChebNet/GCN In view of (1), ChebNet (Defferrard et al., 2016) makes use of the graph adjacency matrix to construct M. Specifically, Asym := D^{−1/2} A D^{−1/2} is the symmetrized graph adjacency matrix (possibly including self-edges, in which case A equals the original A plus I), and Lsym := I − Asym has spectral decomposition Lsym = Ψ Λ Ψ^T. Let L̃ = α1 I + α2 Lsym be the rescaled and re-centered graph Laplacian such that the eigenvalues lie in [−1, 1], with α1, α2 fixed constants. Then, written in n-by-n matrix form,
M_{c′,c} = Σ_{l=0}^{L−1} θl(c′, c) Tl(L̃),   θl(c′, c) ∈ R,   (5)
where Tl(·) is the Chebyshev polynomial of degree l. As Asym and hence L̃ are given by the graph, only the θl are trainable, thus the number of parameters is
L · CC′.
GCN (Kipf & Welling, 2016) is a special case of ChebNet. Take L = 2 in (5), and tie the choice of θ0 and θ1,
M_{c′,c} = θ(c′, c)(α′1 I + α′2 Asym) =: θ(c′, c) Ã,   α′1, α′2 fixed constants,
where θ(c′, c) is trainable. This factorized form leads to the linear part of the layer-wise mapping as Y = Ã X Θ in matrix form, where Ã is the n-by-n matrix defined as above, X (Y) is an n-by-C′ (-C) array, and Θ is a C′-by-C matrix.
The model complexity is CC′, which counts the parameters in Θ.
• GAT In GAT (Veličković et al., 2017), with R being the number of attention heads, the graph convolution operator in one GNN layer can be written as (omitting the bias and non-linear mapping)
Y = Σ_{r=1}^R A^(r) X Θr,   A^(r)_{u,v} = exp(c^(r)_{uv}) / Σ_{v′∈N_u^(1)} exp(c^(r)_{uv′}),   c^(r)_{uv} = σ((a^(r))^T [W^(r) Xu, W^(r) Xv]),   (6)
where {W^(r), a^(r)} are the trainable parametrization of the attention graph affinity mechanism A^(r), which constructs non-negative affinities between graph nodes u and v adaptively from the input graph node feature X. In particular, A^(r) shares the sparsity pattern of the graph topology, that is, A^(r)(u, u′) ≠ 0 only when u′ ∈ N_u^(1).

In the original GAT, Θr = W^(r) C^(r), where the C^(r) are fixed matrices such that the output from the r-th head is concatenated into the output Y across r = 1, · · · , R. Variants of GAT adopt channel mixing across heads; e.g., a generalization of GAT in Isufi et al. (2020) uses extra trainable Θr in (6) independent of W^(r). Isufi et al. (2020) also proposed higher-order GAT by considering powers of the affinity matrix A^(r), as well as the edge-varying version (cf. Eqns. (36)(39) in Isufi et al. (2020)). As this higher-order GAT and the edge-varying counterpart are special cases of the edge-varying GNN, we cover this case in Proposition 1 3).

The model complexity of GAT: In the original GAT, where Θr is tied with W^(r), the number of parameters in one layer is R(C0 C′ + 2C0), where R is the number of attention heads, C = C0 R, and W^(r) : R^{C′} → R^{C0}. When the Θr are free from {W^(r), a^(r)} in (6), the number of parameters is R(CC′ + C0 C′ + 2C0) ≤ R(2CC′ + 2C), where W^(r) maps to dimension C0 and Θr maps to dimension C.
The trainable parameters are the {Φk} and {Θr}, with Θr : R^{C′} → R^C. Edge-varying GAT implements polynomials of averaging filters, and the general edge-varying GNN takes products of arbitrary order-1 filters. The proof shows that the EdgeNet layer is a special case of the L3Net layer, while restricting Bk to be of the product form (9), rather than freely supported on N_u^(dk) for user-specified orders (d1, · · · , dK), is a non-trivial restriction.
The trainable parameters: the Θr have LCC′ in total, Φ0 has n, and each Φk, k = 1, · · · , L−1, has np^(1), p^(1) being the average size of the 1-neighborhood of a node. Thus the total number of parameters is
LCC′ + n + (L − 1) n p^(1) ∼ L(CC′ + n p^(1)).
Proof of Proposition 1. Part (1): Since GCN is a special case of ChebNet, it suffices to prove that (5) can be expressed in the form of L3Net (2) for some K. By the definition of L̃, equivalently,
M_{c′,c} = Σ_{l=0}^{L−1} θl(c′, c) Tl(α1 I + α2 Lsym) = Σ_{l=0}^{L−1} θl(c′, c) Tl(α1 I + α2 (I − Asym)) = Σ_{l=0}^{L−1} βl(c′, c) Asym^l,   (8)
where the coefficients βl are determined by the θl, per (c′, c). Since Asym^l propagates to the l-th order neighborhood of any node, setting Bk(u′, u) = Asym^{k−1}(u′, u), Bk(u′, u) is non-zero when u′ ∈ N_u^(k−1), 1 ≤ k ≤ K := L, and then setting ak(c′, c) = βk−1(c′, c) gives (5) in the form of (2).
Part (2): We consider (6) as the GAT model. Recall that Θr : R^{C′} → R^C; then (6) can be re-written in the form of (1) by letting
M(u′, u; c′, c) = Σ_{r=1}^R A^(r)(u′, u) Θr(c′, c),
which is a special case of (2) with R = K, A^(k) = Bk and Θk = ak. Since A^(r)(u, u′) as a function of u′ is supported on u′ ∈ N_u^(1), (6) belongs to the L3Net model (2) where d1 = · · · = dK = 1, in addition to that Bk must be of the attention affinity form, i.e.
built from the attention coefficients c^(r)_{uv} computed from the input X via the parameters {W^(r), a^(r)}.
Part (3): Comparing with (1)(2), we have that (7) is a special case of L3Net (2) by letting K = L,
Bk = Π_{k′=0}^{k−1} Φ_{k′},   (9)
ak = Θ_{k−1}, and dk = k − 1 for k = 1, · · · , K." }, { "heading": "A.1.3 Standard and geometrical CNN’s", "text": "A standard CNN on R^d, e.g., d = 1 for audio signals and d = 2 for image data, applies a discretized convolution to the input data in each convolutional layer, which can be written as (omitting the bias, which is added per c, and the non-linear activation)
y(u, c) = Σ_{c′∈[C′]} Σ_{u′∈U} w_{c′,c}(u′ − u) x(u′, c′),   (10)
where U is a grid on R^d. We write it as an “anti-convolution”, which has “u′ − u” rather than “u − u′”, but the definition is equivalent. For audio and image data, U is usually a regular mesh with evenly sampled grid points, and proper boundary conditions are applied when computing y(u, c) at a boundary grid point u; e.g., the boundary can be handled by standard padding as in CNNs. As the convolutional filters w_{c′,c} are compactly supported, the summation over u′ is on a neighborhood of u.
More generally, CNNs on non-Euclidean domains are constructed when spatial points are sampled on an irregular mesh in R^d, e.g., a 2D surface in R^3. The generalization of (10) is by defining the “patch operator” (Masci et al., 2015), which pushes a template filter w on a regular mesh on R^d, d being the intrinsic dimensionality of the sampling domain, to the irregular mesh in the ambient space that has coordinates on local charts. Specifically, for a mesh of a 2D surface in 3D, d = 2, and w is a template convolutional filter on R^2. For any local cluster of 3D mesh points Nu around a point u, the patch operator Pu provides (Pu w)(u′) for u′ ∈ Nu by a certain interpolation scheme on the local chart. The operator Pu is linear in w, and possibly trainable.
As a result, in a mesh-based geometrical CNN,
y(u, c) = Σ_{c′∈[C′]} Σ_{u′} (Pu w_{c′,c})(u′) x(u′, c′),   (11)
and one can see that in Euclidean space taking (Pu w)(u′) = w(u′ − u) reduces (11) to the standard CNN as in (10).
In both (10) and (11), a spatial low-rank decomposition of the filters w_{c′,c} can be imposed (Qiu et al., 2018). This introduces a set of bases {bk}k over space that linearly span the filters w_{c′,c}. For a standard CNN in R^d, the bk are basis filters on R^d, and for a geometrical CNN, they are defined on the reference domain in R^d, same as w_{c′,c}, where d is the intrinsic dimension. Suppose w_{c′,c} = Σ_{k=1}^K β_{k,(c′,c)} bk for coefficients β_{k,(c′,c)}; by linearity, (11) becomes
y(u, c) = Σ_{c′∈[C′]} Σ_{u′} Σ_{k=1}^K β_{k,(c′,c)} (Pu bk)(u′) x(u′, c′),   (12)
and similarly for (10). The trainable parameters in (12) are the β_{k,(c′,c)} and the basis filters bk; the former has KCC′ parameters, and the latter has Σ_k pk, where pk is the size of the support of bk in R^d. Supposing the average size is p, the latter number of parameters is Kp. This gives the total number of parameters as
KCC′ + Kp.
Proof of Proposition 2. Since a standard CNN is a special case of a geometrical CNN (11), we only consider the latter. Assuming the low-rank filter decomposition, the convolutional mapping is (12). Comparing to the GNN layer mapping defined in (1), one sees that
M(u′, u; c′, c) = Σ_{k=1}^K β_{k,(c′,c)} (Pu bk)(u′),
which equals (2) upon setting Bk(u′, u) = (Pu bk)(u′) and ak(c′, c) = β_{k,(c′,c)}.
A.1.4 Strong regularization limit
Proof of Proposition 3. The constrained minimization of R defined in (3) separates over each u, k, and the minimization over b_u^(k) is given by
min_{w : N_u^(dk) → R} w^T L_u^(k) w,   s.t. ‖w‖_2 ≥ α_{u,k} > 0.   (13)
For each u, k, the local Dirichlet graph Laplacian L_u^(k) has eigen-decomposition L_u^(k) = Ψ_u^(k) Λ_u^(k) (Ψ_u^(k))^T, where (Ψ_u^(k))^T Ψ_u^(k) = I, and the diagonal entries of Λ_u^(k) are the eigenvalues of L_u^(k), which are all ≥ 0 and sorted in increasing order.
By the variational property of eigenvalues, the minimum over w in (13) is achieved when w = Ψ_u^(k)(·, 1), i.e., the eigenvector associated with the smallest eigenvalue of L_u^(k). Since the local subgraph is connected, this smallest eigenvalue has single multiplicity, and the eigenvector is the Perron-Frobenius vector, which does not change sign. The claim holds for arbitrary α_{u,k} > 0 since the eigenvector is defined up to a constant multiple." }, { "heading": "A.2 Proofs in Sec. 3.1", "text": "Proof of Proposition 4. Part 1): Let the graph be the ring graph with n nodes, where each node has 2 neighbors, n = 8, as shown in Fig. 1 (right). We index the nodes as u = 0, . . . , n−1 and allow addition/subtraction u − v (mod n). Let B be the “difference” filter, B(u′, u) = 1 when u′ = u and −1 when u′ = u + 1. We show that B ≠ f(A) for any f; in contrast, setting this B as the basis in (2) expresses the filter with K = 1.
To prove that B ≠ f(A) for any f, let πu be the permutation of the n nodes such that πu(u + v) = (u − v) for all v, i.e., the mirror flip of the ring around the node u. By construction, the graph topology of the ring graph is preserved under πu, that is, A_{πu} := πu A πu^T = A, whether A is the 0/1-valued adjacency matrix, the symmetrically normalized one Asym = D^{−1/2} A D^{−1/2} (D is constant on the diagonal), or another normalized version, as long as the relation A_{πu} = A holds. By Lemma A.1 1), for any f : R → R,
f(A) πu = f(A_{πu}) πu = πu f(A),
which means that if B = f(A) for some f, then B πu = πu B, contradicting the construction of B.
Part 2): Consider the two distributions of graph signals on the ring graph in 1), which we call “upwind/downwind” signals: Xup consists of finite superpositions of functions on the ring graph which are periodic, smoothly increasing from 0 to 1 and then dropping to zero. Signals in Xup follow a certain distribution, and Xdown consists of the signals that can be produced by mirror-flipping the upwind signals.
That is, denoting by xup (xdown) an upwind (downwind) signal and by πu the permutation of 1) around any node u, we have

πuxup dist.= xdown,

where dist.= denotes equality in distribution. Example signals of the two classes are illustrated in Fig. 3.

As in 1), by construction Aπu = A. Let F(L) be the mapping to the L-th layer spectral GNN feature; for an upwind signal xup, Lemma A.1 2) gives that

F(L)[A]πuxup = F(L)[Aπu]πuxup = πuF(L)[A]xup.

The last layer applies a group-invariant operator U, so

UF(L)[A]πuxup = UπuF(L)[A]xup = UF(L)[A]xup,

which gives

UF(L)[A]xdown dist.= UF(L)[A]πuxup = UF(L)[A]xup,

meaning that the final deep features produced by UF(L)[A] are statistically the same for input signals from the two classes.

Meanwhile, the difference local filter B from the proof of 1) can extract features that differentiate the two classes: with the ReLU activation function, the output feature after one convolutional layer and a global pooling, which is permutation invariant, can be made strictly positive for one class and zero for the other. Thus, L3Net with 1 layer and 1 basis suffices to distinguish the Xup and Xdown signals.

Lemma A.1 (Permutation equivariance, Proposition 1 in Gama et al. (2019a)). Let A be the (possibly normalized) graph adjacency matrix. For any input signal x : V → R and any permutation π ∈ Sn of the graph nodes, 1) the spectral graph convolution mapping f(A) satisfies

f(Aπ)π = πf(A), Aπ := πAπT.

2) Let F(l)[A] be the mapping to the l-th layer spectral GNN feature with graph adjacency A; then

F(l)[Aπ]πx = πF(l)[A]x.

Proof of Lemma A.1. Proved in Gama et al.
(2019a) and we reproduce with our notation for completeness.\nPart 1): Denote the n-by-n permutation matrix also by π, then by definition, f(A) = Uf(Λ)UT where A = UΛUT is the diagonalization and U is orthogonal matrix, thus\nf(Aπ) = f(πUΛU TπT ) = πUf(Λ)UTπT = πf(A)πT ,\nand this proves 1).\nPart 2): Each spectral GNN layer mapping adds the bias and the node-wise non-linear activation mapping to the graph convolution linear operator, which preserves the permutation equivariance. Recursively applying to L layers proves 2)." }, { "heading": "A.3 Proofs in Sec. 3.2", "text": "Proof of Theorem 1. By definition,\nY (u) = σ( K∑ k=1 ak〈Bk(·, u), X(·)〉N(dk)u + bias),\nthen since σ is non-expansive, ∀u ∈ V ,\n|∆Y (u)| ≤ | K∑ k=1 ak〈Bk(·, u),∆X(·)〉N(dk)u | ≤ ‖a‖2 ( K∑ k=1 |〈Bk(·, u),∆X(·)〉N(dk)u | 2 )1/2 . (14)\nBy that |〈Bk(·, u),∆X(·)〉N(dk)u | ≤ ‖Bk(·, u)‖2,N(dk)u · ‖∆X(·)‖2,N(dk)u , (15) we have that ∑ u∈V |∆Y (u)|2 ≤ ‖a‖22 ∑ u K∑ k=1 |〈Bk(·, u),∆X(·)〉N(dk)u | 2\n≤ ‖a‖22 ∑ u K∑ k=1 ‖Bk(·, u)‖2 2,N (dk) u · ‖∆X(·)‖2 2,N (dk) u\n≤ (‖a‖2β(1))2 ∑ u,k ‖∆X(·)‖2 2,N (dk) u , (16)\nand observe that∑ u,k ‖∆X(·)‖2 2,N (dk) u = K∑ k=1 ∑ u∈V ∑ v∈N(dk)u |∆X(v)|2 = K∑ k=1 ∑ u,v∈V 1{v∈N(dk)u } |∆X(v)|2\n= K∑ k=1 ∑ u,v∈V 1{u∈N(dk)v } |∆X(v)|2 = K∑ k=1 ∑ v∈V |N (dk)v | · |∆X(v)|2 ≤ Kp ∑ v∈V |∆X(v)|2,\nwhere we used the assumption on Kp to obtain the last ≤. Then (16) continues as ≤ (‖a‖2β(1))2Kp‖∆X‖22,V , which proves that ‖∆Y ‖2,V ≤ (‖a‖2β(1)) √ Kp‖∆X‖2,V as claimed.\nProof of Theorem 2. Same as in the proof of Theorem 1, we have (14). The eigendecomposition L (k) u = Ψ (k) u Λ (k) u (Ψ (k) u )T has that (Ψ (k) u )TΨ (k) u = I, and, under the connectivity condition of the subgraph, the diagonal entries of Λ (k) u all > 0. Thus\n〈u, v〉 N (dk) u = 〈(Λ(k)u )1/2Ψ(k)u u, (Λ(k)u )−1/2Ψ(k)u v〉N(dk)u ,\nwhich gives the Cauchy-Schwarz with weighted 2-norm as\n|〈Bk(·, u),∆X(·)〉N(dk)u | ≤ ‖Bk(·, u)‖L(k)u · ‖∆X(·)‖(L(k)u )−1 . 
(17)

Then, similarly as in (16), using the definition of β(2) and the condition on ρ, we obtain

∑u∈V |∆Y(u)|2 ≤ (‖a‖2β(2))2 ∑u,k ρ2 ‖∆X(·)‖2 2,N(dk)u, (18)

and the rest of the proof is the same, which gives

∑u∈V |∆Y(u)|2 ≤ (‖a‖2β(2))2 ρ2 Kp ‖∆X‖2 2,V,

which proves the claim." }, { "heading": "B Up/down-wind Classification Experiment", "text": "" }, { "heading": "B.1 Dataset Setup", "text": "We generate the Up/Down wind dataset on both the ring graph and the chain graph with 64 nodes. Every node is assigned a probability drawn from the uniform distribution on (0, 1). Nodes with probability less than the threshold 0.1 are each assigned a Gaussian distribution with std = 1.5. Each added Gaussian distribution is masked on one side: distributions masked on the left half form the ‘Down Wind’ class, and distributions masked on the right half form the ‘Up Wind’ class, as shown in the left plot of Fig. 3. We then sum up all half distributions from the different locations in each sample. We generate 5000 training samples and 5000 testing samples.

B.2 Model architecture and training details

Network architectures.

• 2-gcn-layer model: GraphConv(1,32)-ReLU-MaxPool1d(2)-GraphConv(32,64)-ReLU-AvgPool(32)-FC(2),

• 1-gcn-layer model: GraphConv(1,32)-ReLU-AvgPool(64)-FC(2),

where GraphConv can be ChebNet or L3Net." }, { "heading": "Training details.", "text": "We choose the Adam optimizer with a batch size of 100, set the initial learning rate to 1×10−3, decay it by 0.1 at epoch 80, and train for 100 epochs." }, { "heading": "B.3 Additional results", "text": "We report additional results using the 1-gcn-layer architecture in Tab. A.1. Our L3Net again shows stronger classification performance than ChebNet." }, { "heading": "C Experimental Details", "text": "" }, { "heading": "C.1 Classification of sphere mesh data", "text": "Spherical mesh We conduct this experiment on the icosahedral spherical mesh (Baumgardner & Frederickson, 1985). Like S2CNN (Cohen et al., 2018), we project each digit image onto the surface of the unit sphere, and follow Jiang et al.
(2019) by moving the projected digit to the equator, avoiding the coordinate singularity at the poles.

Here, we detail the subdivision scheme of the icosahedral spherical mesh we used. Starting from a unit icosahedron, this sphere discretization progressively subdivides each face into four equal triangles, which makes the discretization uniform and accurate. In addition, this scheme provides a natural downsampling strategy for networks, as it defines the path for aggregating information from higher-level neighbor nodes to a lower-level center node. We adopt the following naming convention for the different mesh resolutions: starting with the level-0 (L0) mesh (i.e., the unit icosahedron), each level above is associated with one subdivision. For level-i (Li), the properties of the spherical mesh are:

Ne = 30 · 4^i, Nf = 20 · 4^i, Nv = Ne − Nf + 2, (19)

in which Ne, Nf, Nv denote the numbers of edges, faces, and vertices.

To give a direct illustration of how many nodes each mesh level has, we list them below:

• L0 12 nodes
• L1 42 nodes
• L2 162 nodes
• L3 642 nodes
• L4 2562 nodes
• L5 10242 nodes

Network architectures We use a three-stage GNN model for SphereMNIST, with each stage conducting convolution on a spherical mesh of a specific level. Detailed architecture (supposing the mesh levels used are Li, Lj, Lk):

Conv(1,16)Li-BN-ReLU-DownSamp-ResBlock(16,16,64)Lj-DownSamp-ResBlock(64,64,256)Lk-AvgPool-FC(10).

We use a 4-stage model architecture for SphereModelNet-40, where the 4 mesh levels are L5, L4, L3, L2. The detailed architecture is:

Conv(6,32)L5-BN-ReLU-DownSamp-ResBlock(32,32,128)L4-DownSamp-ResBlock(128,128,512)L3-DownSamp-ResBlock(512,512,2048)L2-DownSamp-AvgPool-FC(40),

where GraphConv(feat in, feat out) in the above architectures can be either a Mesh Convolution layer or a Graph Convolution layer, and “ResBlock” is a bottleneck module with two 1×1 convolution layers and one GraphConv layer.
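As a quick consistency check, the mesh statistics in (19), with the edge and face counts growing by a factor of four per subdivision, reproduce the node counts listed above and satisfy Euler's formula at every level (the helper name `mesh_stats` is ours, not from the paper):

```python
# Counts from (19): Ne = 30 * 4^i edges, Nf = 20 * 4^i faces,
# and Nv = Ne - Nf + 2 vertices for the level-i icosahedral mesh.
def mesh_stats(i):
    ne, nf = 30 * 4**i, 20 * 4**i
    return ne, nf, ne - nf + 2

# Matches the node counts listed for levels L0..L5.
assert [mesh_stats(i)[2] for i in range(6)] == [12, 42, 162, 642, 2562, 10242]

# Euler's formula Nv - Ne + Nf = 2 holds at every level.
assert all(nv - ne + nf == 2 for ne, nf, nv in map(mesh_stats, range(6)))
```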
Training Details For the SphereMNIST experiments, we use a batch size of 64, the Adam optimizer, and an initial learning rate of 0.01 that decays by 0.5 every 10 epochs. We train the model for 100 epochs in total.

For the SphereModelNet-40 experiment, we use a batch size of 16, the Adam optimizer, and an initial learning rate of 0.005 that decays by 0.7 every 25 epochs. We train for 300 epochs in total." }, { "heading": "Results on fine mesh", "text": "Tab. A.2 shows the results of SphereMNIST and SphereModelNet-40 on fine meshes on the sphere. Specifically, the mesh used for SphereMNIST here is of levels L4, L3, L2, and the SphereModelNet-40 mesh is of levels L5, L4, L3, L2, the same as in Jiang et al. (2019)." }, { "heading": "C.2 Facial Expression Recognition", "text": "Landmarks setting 15 landmarks are selected from the standard 68 facial landmarks defined in AAM (Cootes et al., 2001), and edges are connected according to prior information about the human face, e.g., nearby landmarks on the eye are connected; see Fig. 1 (left)." }, { "heading": "Dataset setup", "text": "• CK+: The CK+ dataset (Lucey et al., 2010) is the most widely used laboratory-controlled FER dataset (downloaded from: http://www.jeffcohn.net/resources/ ). It contains 327 video sequences from 118 subjects with seven basic expression labels (anger, contempt, disgust, fear, happiness, sadness, and surprise). Every sequence shows a shift from a neutral face to the peak expression. Following the commonly used ‘(static) image-based’ methods (Li & Deng, 2020), we extract the one to three frames of each expression sequence that carry the peak expression information in the CK+ dataset, and form a dataset with 981 image samples. Every facial image is aligned and resized to (120, 120) with a face alignment model (Bulat & Tzimiropoulos, 2017), and then we use this model again to get the facial landmarks. As described in Sec. 4.2, we select 15 of the 68 facial landmarks and build a graph on them.
The input feature for each node is an image patch of size (20, 20) centered at the landmark, concatenated with the landmark’s coordinates, so the total input feature dimension is 402.

• FER13: The FER13 dataset (Goodfellow et al., 2013) is a large-scale, unconstrained database collected automatically via the Google Image API (downloaded from: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data). It contains 28,709 training images, 3589 validation images and 3589 test images of size (48, 48) with the same seven expression labels as CK+. We align the facial images, get the facial landmarks, and select nodes & build the graph the same way as for CK+. The input features are a local image patch of size (8, 8) centered at each landmark together with the landmark’s coordinates, so the total input feature dimension is 66." }, { "heading": "Network architectures.", "text": "• CK+: GraphConv(402,64)-BN-ReLU-GraphConv(64,128)-BN-ReLU-FC(7),

• FER13: GraphConv(66,64)-BN-ReLU-GraphConv(64,128)-BN-ReLU-GraphConv(128,256)-BN-ReLU-FC(7),

where GraphConv(feat in, feat out) can be any type of graph convolution layer, including our L3Net." }, { "heading": "Training details.", "text": "• CK+: We use 10-fold cross validation as in Ding et al. (2017). The batch size is set to 16; the learning rate is 0.001 and decays by 0.1 if the validation loss remains the same for the last 15 epochs. We choose the Adam optimizer and train 100 epochs for each fold.

• FER13: We report results on the test set. The batch size is set to 32; the learning rate is 0.0001 and decays by 0.1 if the validation loss remains the same for the last 20 epochs. We choose the Adam optimizer and train the models for 150 epochs.

Runtime analysis details. In Section 4.2, we report the running time of our L3Net (order 1,1,2,3), 13.02 ms, and of the best ChebNet, 12.56 ms, on the CK+ dataset, which are comparable. Here, we provide more details about this.
The reported time is the time the model takes to finish inference on the validation set with a batch size of 16. For each model, we record the validation time in all folds and report their average. The runtime analysis is performed on a single NVIDIA TITAN V GPU." }, { "heading": "C.3 Skeleton-based Action Recognition", "text": "" }, { "heading": "Dataset setup.", "text": "• NTU-RGB+D: NTU-RGB+D (Shahroudy et al., 2016) is a large skeleton-based action recognition dataset with three-dimensional coordinates given for every body joint (downloaded from: http://rose1.ntu.edu.sg/datasets/requesterAdd.asp?DS=3 ). It comprises 60 action classes and 56,000 action clips in total. Every clip is captured by three fixed Kinect v2 sensors in a lab environment, performed by one of 40 different subjects. The three sensors are set at the same height but at different horizontal views, −45◦, 0◦, 45◦. There are 25 tracked joints, as shown in Fig. A.2. Two experimental settings were proposed by Shahroudy et al. (2016), cross-view (X-view) and cross-subject (X-sub). X-view consists of 37,920 clips for training and 18,960 for testing, where the training clips are from the sensors at 0◦ and 45◦ and the testing clips are from the sensor at −45◦. X-sub has 40,320 clips for training and 16,560 clips for testing, where the training clips are from 20 subjects and the testing clips are from the other 20 subjects. We test our model in both settings.

• Kinetics: Kinetics (Kay et al., 2017) is a large and widely used action recognition dataset with nearly 300,000 clips over 400 classes (downloaded from: https://deepmind.com/research/open-source/kinetics). We follow Yan et al. (2018) to get 18 body joints for each frame using the OpenPose (Cao et al., 2017) toolkit. The input feature for each joint is (x, y, p), in which x, y are the 2D coordinates of the joint and p is the confidence of localizing the joint.
To eliminate the effect of a skeleton-based model’s inability to recognize objects in clips, we mainly focus on action classes that require only body movements. Thus, we conduct our experiments on Kinetics-Motion, proposed by Yan et al. (2018). This is a small dataset that contains 30 action classes strongly related to body motion. Note that there are severe missing-data problems in the landmark coordinates of the Kinetics data, so we also use our regularization scheme in this experiment.

Network Architectures.

• NTU-RGB+D: We follow the architecture in Yan et al. (2018):

STGraphConv(3,64,9,s1)-STGraphConv(64,64,9,s1)-STGraphConv(64,64,9,s1)-STGraphConv(64,64,9,s1)-STGraphConv(64,128,9,s2)-STGraphConv(128,128,9,s1)-STGraphConv(128,128,9,s1)-STGraphConv(128,256,9,s2)-STGraphConv(256,256,9,s1)-STGraphConv(256,256,9,s1)-STAvgPool-fc(60).

• Kinetics: We also design a computation-efficient architecture for Kinetics-Motion with a larger temporal downsampling rate, which results in less forward time:

STGraphConv(3,32,9,s2)-STGraphConv(32,64,9,s2)-STGraphConv(64,64,9,s1)-STGraphConv(64,64,9,s1)-STGraphConv(64,128,9,s2)-STGraphConv(128,128,5,s1)-STGraphConv(128,128,5,s1)-STGraphConv(128,256,5,s2)-STGraphConv(256,256,3,s1)-STGraphConv(256,256,3,s1)-STAvgPool-fc(60),

where the structure of STGraphConv(feat in, feat out, temporal kernel size, temporal stride) is:

GraphConv(feat in, feat out)-BN-ReLU-1DTemporalConv(feat out, feat out, temporal kernel size, temporal stride)-BN-ReLU." }, { "heading": "Training Details", "text": "• NTU-RGB+D: We use a batch size of 32 and an initial learning rate of 0.001 that decays by 0.1 at epochs (30, 80), and we train for 120 epochs in total. The SGD optimizer is selected. We pad every sample temporally with zeros to 300 frames.

• Kinetics: We use a batch size of 32 and an initial learning rate of 0.01 that decays by 0.1 at epochs (40, 80), and we train for 100 epochs in total. The SGD optimizer is selected.
We pad every sample temporally with zeros to 300 frames, and during training we perform data augmentation by randomly choosing 150 contiguous frames." }, { "heading": "C.4 Details of experiment on MNIST", "text": "C.4.1 Simulated graph noise on 7×7 MNIST.

Here we describe the three types of noise in our experiments:

Gaussian noise. Given a 7×7 image from MNIST, we sample 49 values from N(0, std2). The std controls the strength of the added noise. We conduct experiments under std = 0.1, 0.2, 0.3 as shown in Tab. 3. The amount of noise is also measured by PSNR, which is standard for image data.

Missing value noise. Given an image, we randomly sample 49 values from U(0, 1) and select the nodes with probability less than a threshold. This threshold is called the noise level, and it controls the percentage of affected nodes. Then, we remove the pixel values at the selected nodes. Experiments with noise level = 0.1, 0.2, 0.3 are conducted.

Graph node permutation noise. For each sample, we randomly select a permutation center node that has exactly 4 neighbors. Then, we rotate its neighbors clockwise by 90 degrees, e.g., the top neighbor becomes the right neighbor, and we update the indices of the permuted nodes.

C.4.2 Network architecture and training details

We use the same architecture for the different experiment settings:

GraphConv(1,32)-BN-ReLU-GraphConv(32,64)-BN-ReLU-FC(10),

where GraphConv can be different types of graph convolution layers. We set the batch size to 100, use the Adam optimizer, and set the initial learning rate to 1e-3. The learning rate drops by a factor of 10 if the lowest validation loss remains the same for the last 15 epochs. We set the total number of training epochs to 200. We use 10,000 images for training.

We also adopt graph pooling layers in the above architecture:

GraphConv(1,32)-BN-ReLU-Graph Pooling-GraphConv(32,64)-BN-ReLU-Graph Pooling-FC(10).

More discussion of the graph pooling layer and multi-scale graph convolution is given in Appendix C.5."
}, { "heading": "C.4.3 Additional results", "text": "Here, we show experimental results on the 28×28 and 14×14 grids, as well as on the 7×7 grid with missing values. Tab. A.3 shows the results on the 28×28 image grid. Our model has better performance than the other methods.

Tab. A.4 shows the results on the 14×14 image grid, where our L3Net has results comparable with the best ChebNet (Defferrard et al., 2016) method.

We show our results on the 7×7 image grid with missing values in Tab. A.5. With regularization, L3Net achieves the best performance in every experiment across the different noise levels." }, { "heading": "C.5 Multi-scale graph convolution", "text": "The proposed L3Net graph convolution model (2) is compatible with graph down/up-sampling schemes to achieve multi-scale feature extraction.

The graph down/up-sampling is usually implemented as a separate layer between graph convolution layers. As an example, Fig. A.3 illustrates three levels of graphs produced from an original 14×14 image grid, denoted as G1, G2, G3, which have 176, 45 and 9 nodes, respectively. On G1, the 10% of pixels that contain the lowest pixel intensities over the dataset are removed; those nodes are located near the boundary of the canvas. For each node x′i in the coarse-grained graph G2, a neighborhood consisting of nodes in G1 is constructed, called N(x′i;G1). A pooling operator computes the feature on x′i from those on N(x′i;G1), and the pooled feature is used as the input to the graph convolution on G2. A similar graph pooling layer is used from G2 to G3. The graph topology and local neighborhoods are determined by the grid point locations. Using a two-layer convolution with graph poolings in between from G1 to G3, with the other settings the same as in Table A.4, L3Net obtains 97.33 ± 0.15 test accuracy (basis order 1; 1; 2, with regularization 0.001). We have also applied graph pooling layers on the regular image grid of the 28×28 MNIST dataset.
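The pooling step described here can be sketched as a simple neighborhood average. The helper `graph_avg_pool` and the toy neighborhoods below are our own illustrative assumptions (average pooling over precomputed index sets N(x′i;G1)), not the paper's implementation:

```python
import numpy as np

def graph_avg_pool(x, neighborhoods):
    """x: (n1, c) node features on the fine graph G1; neighborhoods: one index
    array per coarse node of G2, giving N(x'_i; G1). Returns (n2, c) features."""
    return np.stack([x[idx].mean(axis=0) for idx in neighborhoods])

x = np.arange(12, dtype=float).reshape(6, 2)       # 6 fine nodes, 2 channels
nbhd = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # 2 coarse nodes
y = graph_avg_pool(x, nbhd)

assert y.shape == (2, 2)                   # one pooled feature per coarse node
assert np.allclose(y[0], x[:3].mean(axis=0))
```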
The results, reported in Table A.3, show that multi-scale convolution in L3Net not only improves the classification accuracy but also reduces the number of parameters.

A graph up-sampling layer can be used similarly. These multi-scale approaches apply to graph convolution models in general; see, e.g., the hierarchical construction originally proposed for locally-connected GNNs (Coates & Ng, 2011; Bruna et al., 2013). There is also flexibility in defining the graph down/up-sampling schemes, and the choice depends on the application. An example of a graph sampling operator on face mesh data is given in (Ranjan et al., 2018). Finally, apart from using separate down/up-sampling layers, it is also possible to extend the L3Net model (2) to directly implement graph down/up-sampling, which would be similar to the convolution-with-stride (conv-t) operator in standard CNNs. Specifically, between Gl and Gl+1, the local basis filter Bk(u, u′) is defined for u′ ∈ Gl and u ∈ Gl+1, and Bk(u, u′) ≠ 0 only when u′ is in a local neighborhood of u. In matrix notation, Bk is of size |Gl+1|-by-|Gl| and is sparse according to the local neighborhood relation between Gl and Gl+1." } ]
2021
Graph Convolution with Low-rank Learnable Local Filters
[ { "heading": "1 INTRODUCTION", "text": "Reasoning, a process of inferring new knowledge from available facts, has long been considered an essential topic in AI research. Recently, reasoning on knowledge graphs (KG) has gained increasing interest (Das et al., 2017; Ren et al., 2020; Hildebrandt et al., 2020). A knowledge graph is a graphstructured knowledge base that stores factual information in the form of triples (s, p, o), e.g., (Alice, livesIn, Toronto). In particular, s (subject) and o (object) are expressed as nodes and p (predicate) as an edge type. Most knowledge graph models assume that the underlying graph is static. However, in the real world, facts and knowledge can change with time. For example, (Alice, livesIn, Toronto) becomes invalid after Alice moves to Vancouver. To accommodate time-evolving multi-relational data, temporal KGs have been introduced (Boschee et al., 2015), where a temporal fact is represented as a quadruple by extending the static triple with a timestamp t indicating the triple is valid at t, i.e. (Barack Obama, visit, India, 2010-11-06).\nIn this work, we focus on forecasting on temporal KGs, where we infer future events based on past events. Forecasting on temporal KGs can improve a plethora of downstream applications such as decision support in personalized health care and finance. The use cases often require the predictions made by the learning models to be interpretable, such that users can understand and trust the predictions. However, current machine learning approaches (Trivedi et al., 2017; Jin et al., 2019) for temporal KG forecasting operate in a black-box fashion, where they design an embedding-based score function to estimate the plausibility of a quadruple. These models cannot clearly show which evidence contributes to a prediction and lack explainability to the forecast, making them less suitable for many real-world applications. ∗Equal contribution. 
†Corresponding authors.
Explainable approaches can generally be categorized into post-hoc interpretable methods and integrated transparent methods (Došilović et al., 2018). Post-hoc interpretable approaches (Montavon et al., 2017; Ying et al., 2019) aim to interpret the results of a black-box model, while integrated transparent approaches (Das et al., 2017; Qiu et al., 2019; Wang et al., 2019) have an explainable internal mechanism. In particular, most integrated transparent approaches for KGs (Lin et al., 2018; Hildebrandt et al., 2020) employ path-based methods to derive an explicit reasoning path and demonstrate a transparent reasoning process. Path-based methods focus on finding the answer to a query within a single reasoning chain. However, many complicated queries require multiple supporting reasoning chains rather than just one reasoning path. Recent work (Xu et al., 2019; Teru et al., 2019) has shown that reasoning over local subgraphs substantially boosts performance while maintaining interpretability. However, these explainable models cannot be applied to temporal graph-structured data because they do not take time information into account. This work aims to design a transparent forecasting mechanism on temporal KGs that can generate informative explanations of the predictions.
In this paper, we propose an explainable reasoning framework for forecasting future links on temporal knowledge graphs, xERTE, which employs a sequential reasoning process over local subgraphs. To answer a query in the form of (subject $e_q$, predicate $p_q$, ?, timestamp $t_q$), xERTE starts from the query subject, iteratively samples relevant edges of the entities included in the subgraph, and propagates attention along the sampled edges. After several rounds of expansion and pruning, the missing object is predicted from the entities in the subgraph. Thus, the extracted subgraph can be seen as a concise and compact graphical explanation of the prediction.
To guide the subgraph to expand in the direction of the query's interest, we propose a temporal relational graph attention (TRGA) mechanism. We impose temporal constraints on message passing to preserve the causal nature of the temporal data. Specifically, we update the time-dependent hidden representation of an entity $e_i$ at a timestamp $t$ by attentively aggregating messages from its temporal neighbors that were linked with $e_i$ prior to $t$. We call such temporal neighbors the prior neighbors of $e_i$. Additionally, we use an embedding module consisting of stationary entity embeddings and a functional time encoding, enabling the model to capture both global structural information and temporal dynamics. Besides, we develop a novel representation update mechanism to mimic human reasoning behavior. When humans perform a reasoning process, their perceived profiles of observed entities update as new clues are found. Thus, it is necessary to ensure that all entities in a subgraph can receive messages from prior neighbors newly added to the subgraph. To this end, the proposed representation update mechanism enables every entity to receive messages from its farthest prior neighbors in the subgraph.
The major contributions of this work are as follows. (1) We develop xERTE, the first explainable model for predicting future links on temporal KGs. The model is based on a temporal relational attention mechanism that preserves the causal nature of the temporal multi-relational data. (2) Unlike most black-box embedding-based models, xERTE visualizes the reasoning process and provides an interpretable inference graph to emphasize important evidence. (3) The dynamic pruning procedure enables our model to perform reasoning on large-scale temporal knowledge graphs with millions of edges. (4) We apply our model to forecasting future links on four benchmark temporal knowledge graphs.
The results show that our method achieves, on average, better performance than current state-of-the-art methods, thus providing a new baseline. (5) We conduct a survey with 53 respondents to evaluate whether the extracted evidence is aligned with human understanding." }, { "heading": "2 RELATED WORK", "text": "Representation learning is an expressive and popular paradigm underlying many KG models. The embedding-based approaches for knowledge graphs can generally be categorized into bilinear models (Nickel et al., 2011; Yang et al., 2014; Ma et al., 2018a), translational models (Bordes et al., 2013; Lv et al., 2018; Sun et al., 2019; Hao et al., 2019), and deep-learning models (Dettmers et al., 2017; Schlichtkrull et al., 2018). However, the above methods are not able to exploit the rich temporal dynamics available in temporal knowledge graphs. To this end, several studies have been conducted for temporal knowledge graph reasoning (Garcı́a-Durán et al., 2018; Ma et al., 2018b; Jin et al., 2019; Goel et al., 2019; Lacroix et al., 2020; Han et al., 2020a;b; Zhu et al., 2020). The published approaches are largely black-box, lacking the ability to interpret their predictions. Recently, several explainable reasoning methods for knowledge graphs have been proposed (Das et al., 2017; Xu et al., 2019; Hildebrandt et al., 2020; Teru et al., 2019). However, the above explainable methods can only deal with static KGs, while our model is designed for interpretable forecasting on temporal KGs." }, { "heading": "3 PRELIMINARIES", "text": "Let $\mathcal{E}$ and $\mathcal{P}$ represent finite sets of entities and predicates, respectively. A temporal knowledge graph is a collection of timestamped facts written as quadruples. A quadruple $q = (e_s, p, e_o, t)$ represents a timestamped and labeled edge between a subject entity $e_s \in \mathcal{E}$ and an object entity $e_o \in \mathcal{E}$, where $p \in \mathcal{P}$ denotes the edge type (predicate).
The temporal knowledge graph forecasting task aims to predict unknown links at future timestamps based on observed past events.
Definition 1 (Temporal KG forecasting). Let $\mathcal{F}$ represent the set of all ground-truth quadruples, and let $(e_q, p_q, e_o, t_q) \in \mathcal{F}$ denote the target quadruple. Given a query $(e_q, p_q, ?, t_q)$ derived from the target quadruple and a set of observed prior facts $\mathcal{O} = \{(e_i, p_k, e_j, t_l) \in \mathcal{F} \mid t_l < t_q\}$, the temporal KG forecasting task is to predict the missing object entity $e_o$. Specifically, we consider all entities in the set $\mathcal{E}$ as candidates and rank them by their likelihood to form a true quadruple together with the given subject-predicate pair at timestamp $t_q$.1
For a given query $q = (e_q, p_q, ?, t_q)$, we build an inference graph $\mathcal{G}_{inf}$ to visualize the reasoning process. Unlike in temporal KGs, where a node represents an entity, each node in $\mathcal{G}_{inf}$ is an entity-timestamp pair. The inference graph is a directed graph in which a link points from a node with an earlier timestamp to a node with a later timestamp.
Definition 2 (Node in Inference Graph and its Temporal Neighborhood). Let $\mathcal{E}$ represent all entities, $\mathcal{F}$ denote all ground-truth quadruples, and let $t$ represent a timestamp. A node in an inference graph $\mathcal{G}_{inf}$ is defined as an entity-timestamp pair $v = (e_i, t)$, $e_i \in \mathcal{E}$. We define the set of one-hop prior neighbors of $v$ as $\mathcal{N}_{v=(e_i,t)} = \{(e_j, t') \mid (e_i, p_k, e_j, t') \in \mathcal{F} \wedge (t' < t)\}$.2 For simplicity, we denote the one-hop prior neighbors as $\mathcal{N}_v$. Similarly, we define the set of one-hop posterior neighbors of $v$ as $\mathcal{N}^{v=(e_i,t)} = \{(e_j, t') \mid (e_j, p_k, e_i, t) \in \mathcal{F} \wedge (t' > t)\}$. We denote them as $\mathcal{N}^v$ for short.
We provide an example in Figure 4 in the appendix to illustrate the inference graph." }, { "heading": "4 OUR MODEL", "text": "We describe xERTE in a top-down fashion: we provide an overview in Section 4.1 and then explain each module in Sections 4.2 to 4.6.
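Before turning to the model itself, the prior neighborhood of Definition 2 and the reciprocal-relation augmentation from Section 3 can be made concrete with a minimal sketch. The data layout and function names below are illustrative, not taken from the paper's released code.

```python
# Illustrative sketch of Definition 2: one-hop prior neighbors of an inference
# graph node v = (entity, time) over a toy quadruple store, with the reciprocal
# augmentation used so that facts where the entity appears as object are covered.

def add_reciprocal(quads):
    """For every (s, p, o, t), also add the reciprocal quadruple (o, p^-1, s, t)."""
    return quads + [(o, p + "^-1", s, t) for (s, p, o, t) in quads]

def prior_neighbors(quads, entity, time):
    """N_{v=(entity, time)} = {(e_j, t') | (entity, p_k, e_j, t') in F and t' < time}."""
    return {(o, t) for (s, _, o, t) in quads if s == entity and t < time}

# Toy facts: (subject, predicate, object, timestamp)
facts = add_reciprocal([
    ("A", "meets", "B", 1),
    ("A", "visits", "C", 2),
    ("B", "calls", "A", 3),
])

# Prior neighbors of v = ("A", 4): B@1 and C@2 directly, plus B@3 via the
# reciprocal edge of ("B", "calls", "A", 3).
neighbors = prior_neighbors(facts, "A", 4)
```

Posterior neighbors are simply the inverse view: node $u$ is a posterior neighbor of $v$ exactly when $v$ is a prior neighbor of $u$.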
}, { "heading": "4.1 SUBGRAPH REASONING PROCESS", "text": "Our model conducts the reasoning process on a dynamically expanded inference graph Ginf extracted from the temporal KG. We show a toy example in Figure 1. Given query q = (eq, pq, ?, tq), we initialize Ginf with node vq = (eq, tq) consisting of the query subject and the query time. The inference graph expands by sampling prior neighbors of vq . For example, suppose that (eq, pk, ej , t′) is a valid quadruple where t′ < tq , we add the node v1 = (ej , t′) into Ginf and link it with vq where the link is labeled with pk and points from vq to v1. We use an embedding module to assign each node and predicate included in Ginf a temporal embedding that is shared across queries. The main goal of the embedding module is to let the nodes access query-independent information and get a broad view of the graph structure since the following temporal relational graph attention (TRGA) layer only performs query-dependent message passing locally. Next, we feed the inference graph into the TRGA layer that takes node embeddings and predicate embeddings as the input, produces a query-dependent representation for each node by passing messages on the small inference graph, and computes a query-dependent attention score for each edge. As explained in Section 4.7, we propagate the attention of each node to its prior neighbors using the edge attention scores. Then we further expand Ginf by sampling the prior neighbors of the nodes in Ginf. The expansion will grow\n1Throughout this work, we add reciprocal relations for every quadruple, i.e., we add (eo, p−1, es, t) for every (es, p, eo, t). Hence, the restriction to predict object entities does not lead to a loss of generality.\n2Prior neighbors linked with ei as subject entity, e.g., (ej , pk, ei, t), are covered using reciprocal relations.\nrapidly and cover almost all nodes after a few steps. 
To prevent the inference graph from exploding, we reduce the edge amount by pruning the edges that gain less attention. As the expansion and pruning iterate, Ginf allocates more and more information from the temporal KG. After running L inference steps, the model selects the entity with the highest attention score in Ginf as the prediction of the missing query object, where the inference graph itself serves as a graphical explanation." }, { "heading": "4.2 NEIGHBORHOOD SAMPLING", "text": "We define the set of edges between node v = (ei, t) and its prior neighbors Nv as Qv , where qv ∈ Qv is a prior edge of v. To reduce the complexity, we sample a subset of prior edges Q̂v ∈ Qv at each inference step. We denote the remaining prior neighbors and posterior neighbors of node v after the sampling as N̂v and N̂v , respectively. Note that there might be multiple edges between node v and its prior neighbor u because of multiple predicates. If there is at least one edge that has been sampled between v and u, we add u into N̂v . The sampling can be uniform if there is no bias, it can also be temporally biased using a non-uniform distribution. For instance, we may want to sample more edges closer to the current time point as the events that took place long ago may have less impact on the inference. Specifically, we propose three different sampling strategies: (1) Uniform sampling. Each prior edge qv ∈ Qv has the same probability of being selected: P(qv) = 1/|Qv|. (2) Time-aware exponentially weighted sampling. We temporally bias the neighborhood sampling using an exponential distribution and assign the probability P(qv = (ei, pk, ej , t′)) = exp(t′ − t)/ ∑ (ei,pl,em,t′′)∈Qv exp(t\n′′ − t) to each prior neighbor, which negatively correlates with the time difference between node v and its prior neighbor (ej , t′). Note that t′ and t′′ are prior to t. (3) Time-aware linearly weighted sampling. We use a linear function to bias the sampling. 
Compared to the second strategy, quadruples that occurred in early stages have a higher probability of being sampled. Overall, we have empirically found that the second strategy is the most beneficial to our framework; we provide a detailed ablation study in Section 5.2." }, { "heading": "4.3 EMBEDDING", "text": "In temporal knowledge graphs, graph structures are no longer static, as entities and their links evolve over time. Thus, entity features may change and exhibit temporal patterns. In this work, the embedding of an entity $e_i \in \mathcal{E}$ at time $t$ consists of a static low-dimensional vector and a functional representation of time. The time-aware entity embedding is defined as $\mathbf{e}_i(t) = [\bar{\mathbf{e}}_i \,\|\, \Phi(t)]^T \in \mathbb{R}^{d_S + d_T}$. Here, $\bar{\mathbf{e}}_i \in \mathbb{R}^{d_S}$ represents the static embedding that captures time-invariant features and global dependencies over the temporal KG. $\Phi(\cdot)$ denotes a time encoding that captures temporal dependencies between entities (Xu et al., 2020). We provide more details about $\Phi(\cdot)$ in Appendix I. $\|$ denotes the concatenation operator. $d_S$ and $d_T$ represent the dimensionality of the static embedding and the time embedding, which can be tuned according to the temporal fraction of the given dataset. We also tried the temporal encoding presented in Goel et al. (2019), which has significantly more parameters, but we did not see considerable improvements. Besides, we assume that predicate features do not evolve. Thus, we learn a stationary embedding vector $\mathbf{p}_k$ for each predicate $p_k$." }, { "heading": "4.4 TEMPORAL RELATIONAL GRAPH ATTENTION LAYER", "text": "Here, we propose a temporal relational graph attention (TRGA) layer for identifying the evidence in the inference graph relevant to a given query $q$. The input to the TRGA layer is a set of entity embeddings $\mathbf{e}_i(t)$ and predicate embeddings $\mathbf{p}_k$ in the given inference graph. The layer produces a query-dependent attention score for each edge and a new set of hidden representations as its output.
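The inputs just described come from the embedding module of Section 4.3. A minimal sketch of the time-aware entity embedding follows; we assume a Bochner-style cosine encoding in the spirit of Xu et al. (2020) for $\Phi(\cdot)$ (the exact form used by xERTE is specified in Appendix I), and the frequencies and phases below are fixed constants for illustration although they are learnable in practice.

```python
# Sketch of the time-aware input embedding e_i(t) = [ e_bar_i || Phi(t) ]:
# a static per-entity vector concatenated with a functional time encoding.
import numpy as np

d_S, d_T = 4, 3                          # static / temporal dimensionalities
rng = np.random.default_rng(0)
static_emb = rng.normal(size=(10, d_S))  # one static vector per entity (10 entities)
omega = np.array([1.0, 0.5, 0.1])        # "frequencies" (learnable in practice)
phi = np.zeros(d_T)                      # "phases" (learnable in practice)

def time_encoding(t):
    """Phi(t) in R^{d_T}: cosine features of the timestamp."""
    return np.sqrt(1.0 / d_T) * np.cos(omega * t + phi)

def entity_embedding(entity_id, t):
    """e_i(t) = [static || Phi(t)] in R^{d_S + d_T}."""
    return np.concatenate([static_emb[entity_id], time_encoding(t)])
```

The same entity thus gets different input vectors at different timestamps, while its static part stays fixed.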
Similar to GraphSAGE (Hamilton et al., 2017) and GAT (Veličković et al., 2017), the TRGA layer performs a local representation aggregation. To avoid misusing future information, we only allow message passing from prior neighbors to posterior neighbors. Specifically, for each node $v$ in the inference graph, the aggregation function fuses the representation of node $v$ and the sampled prior neighbors $\hat{\mathcal{N}}_v$ to output a time-aware representation for $v$. Since entities may play different roles depending on the predicate they are associated with, we incorporate the predicate embeddings in the attention function to exploit relation information. Instead of treating all prior neighbors with equal importance, we take the query information into account and assign varying importance levels to each prior neighbor $u \in \hat{\mathcal{N}}_v$ by calculating a query-dependent attention score using
$$e^l_{vu}(q, p_k) = \mathbf{W}^l_{\mathrm{sub}}\left(\mathbf{h}^{l-1}_v \,\|\, \mathbf{p}^{l-1}_k \,\|\, \mathbf{h}^{l-1}_{e_q} \,\|\, \mathbf{p}^{l-1}_q\right) \cdot \mathbf{W}^l_{\mathrm{obj}}\left(\mathbf{h}^{l-1}_u \,\|\, \mathbf{p}^{l-1}_k \,\|\, \mathbf{h}^{l-1}_{e_q} \,\|\, \mathbf{p}^{l-1}_q\right), \quad (1)$$
where $e^l_{vu}(q, p_k)$ is the attention score of the edge $(v, p_k, u)$ regarding the query $q = (e_q, p_q, ?, t_q)$, $p_k$ corresponds to the predicate between node $u$ and node $v$, and $\mathbf{p}_k$ and $\mathbf{p}_q$ are predicate embeddings. $\mathbf{h}^{l-1}_v$ denotes the hidden representation of node $v$ at the $(l-1)$th inference step. When $l = 1$, i.e., for the first layer, $\mathbf{h}^0_v = \mathbf{W}_v \mathbf{e}_i(t) + \mathbf{b}_v$, where $v = (e_i, t)$. $\mathbf{W}^l_{\mathrm{sub}}$ and $\mathbf{W}^l_{\mathrm{obj}}$ are two weight matrices for capturing the dependencies between query features and node features. Then, we compute the normalized attention score $\alpha^l_{vu}(q, p_k)$ using the softmax function as follows:
$$\alpha^l_{vu}(q, p_k) = \frac{\exp(e^l_{vu}(q, p_k))}{\sum_{w \in \hat{\mathcal{N}}_v} \sum_{p_z \in \mathcal{P}_{vw}} \exp(e^l_{vw}(q, p_z))}, \quad (2)$$
where $\mathcal{P}_{vw}$ represents the set of labels of the edges that connect nodes $v$ and $w$. Once obtained, we aggregate the representations of the prior neighbors and weight them using the normalized attention scores, which is written as
$$\tilde{\mathbf{h}}^l_v(q) = \sum_{u \in \hat{\mathcal{N}}_v} \sum_{p_k \in \mathcal{P}_{vu}} \alpha^l_{vu}(q, p_k)\, \mathbf{h}^{l-1}_u(q). \quad (3)$$
We combine the hidden representation $\mathbf{h}^{l-1}_v(q)$ of node $v$ with the aggregated neighborhood representation $\tilde{\mathbf{h}}^l_v(q)$ and feed them into a fully connected layer with a LeakyReLU activation function $\sigma(\cdot)$, as shown below:
$$\mathbf{h}^l_v(q) = \sigma\left(\mathbf{W}^l_h\left(\gamma \mathbf{h}^{l-1}_v(q) + (1 - \gamma)\tilde{\mathbf{h}}^l_v(q) + \mathbf{b}^l_h\right)\right), \quad (4)$$
where $\mathbf{h}^l_v(q)$ denotes the representation of node $v$ at the $l$th inference step, and $\gamma$ is a hyperparameter. Further, we use the same layer to update the relation embeddings, which is of the form $\mathbf{p}^l_k = \mathbf{W}^l_h \mathbf{p}^{l-1}_k + \mathbf{b}^l_h$. Thus, the relations are projected to the same embedding space as the nodes and can be utilized in the next inference step." }, { "heading": "4.5 ATTENTION PROPAGATION AND SUBGRAPH PRUNING", "text": "Having the edges' attention scores in the inference graph, we compute the attention score $a^l_{v,q}$ of node $v$ regarding query $q$ at the $l$th inference step as follows:
$$a^l_{v,q} = \sum_{u \in \hat{\mathcal{N}}^v} \sum_{p_z \in \mathcal{P}_{uv}} \alpha^l_{uv}(q, p_z)\, a^{l-1}_{u,q}. \quad (5)$$
Thus, we propagate the attention of each node to its prior neighbors. As stated in Definition 2, each node in the inference graph is an entity-timestamp pair. To assign each entity a unique attention score, we aggregate the attention scores of the nodes whose entity is the same:
$$a^l_{e_i,q} = g\left(\{a^l_{v,q} \mid v(e) = e_i\}\right), \quad \text{for } v \in \mathcal{V}_{\mathcal{G}_{inf}}, \quad (6)$$
where $a^l_{e_i,q}$ denotes the attention score of entity $e_i$, $\mathcal{V}_{\mathcal{G}_{inf}}$ is the set of nodes in the inference graph $\mathcal{G}_{inf}$, $v(e)$ represents the entity included in node $v$, and $g(\cdot)$ represents a score aggregation function. We try two score aggregation functions $g(\cdot)$, i.e., summation and mean. We conduct an ablation study and find that the summation aggregation performs better. To demonstrate which evidence is important for the reasoning process, we assign each edge in the inference graph a contribution score. Specifically, the contribution score of an edge $(v, p_k, u)$ is defined as $c_{vu}(q, p_k) = \alpha^l_{vu}(q, p_k)\, a^l_{v,q}$, where node $u$ is a prior neighbor of node $v$ associated with the predicate $p_k$.
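The attention computation just described can be sketched numerically for a single node with two prior neighbors. The block below is a toy illustration of the bilinear edge score, its softmax normalization, neighborhood aggregation, and attention propagation; dimensions, random initialization, and variable names are our own assumptions, not the paper's implementation.

```python
# Numpy sketch of the TRGA edge attention and attention propagation:
# a bilinear score between "subject-side" and "object-side" projections of
# concatenated node/predicate/query features, softmax over a node's sampled
# prior edges, weighted aggregation, and attention flowing to prior neighbors.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # hidden size of node/predicate vectors
W_sub = rng.normal(size=(d, 4 * d))     # projects [h_v || p_k || h_eq || p_q]
W_obj = rng.normal(size=(d, 4 * d))     # projects [h_u || p_k || h_eq || p_q]

def edge_score(h_v, h_u, p_k, h_eq, p_q):
    """Scalar attention score for edge (v, p_k, u) given query (e_q, p_q)."""
    f_v = np.concatenate([h_v, p_k, h_eq, p_q])
    f_u = np.concatenate([h_u, p_k, h_eq, p_q])
    return float((W_sub @ f_v) @ (W_obj @ f_u))

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

# Toy setup: node v with two prior neighbors u1, u2 under one predicate.
h_v, h_u1, h_u2 = rng.normal(size=(3, d))
p_k, h_eq, p_q = rng.normal(size=(3, d))

scores = np.array([edge_score(h_v, h_u1, p_k, h_eq, p_q),
                   edge_score(h_v, h_u2, p_k, h_eq, p_q)])
alpha = softmax(scores)                         # normalized edge attention
h_tilde = alpha[0] * h_u1 + alpha[1] * h_u2     # aggregated neighborhood message

# Attention propagation: node v's attention flows to its prior neighbors,
# weighted by the normalized edge attention; the per-edge products alpha * a_v
# are exactly the contribution scores used for pruning.
a_v = 1.0
a_u1, a_u2 = alpha[0] * a_v, alpha[1] * a_v
```

Because the softmax normalizes over a node's prior edges, the total attention a node holds is conserved as it flows to its prior neighbors.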
We prune the inference graph at each inference step and keep the edges with the $K$ largest contribution scores. We set the attention scores of entities that are not included in the inference graph to zero. Finally, we rank all entity candidates according to their attention scores and choose the entity with the highest score as our prediction." }, { "heading": "4.6 REVERSE REPRESENTATION UPDATE MECHANISM", "text": "When humans perform a reasoning process, the perceived profile of an entity may change during the inference as new evidence joins the reasoning process. For example, suppose we want to predict the profitability of company A. We know that A has the largest market share, which gives us a high expectation of A's profitability. However, new evidence shows that conglomerate B enters this market as a strong competitor. Although the new evidence is not directly related to A, it indicates that there will be strong competition between A and B, which lowers our expectation of A's profitability. To mimic human reasoning behavior, we should ensure that all existing nodes in the inference graph $\mathcal{G}_{inf}$ can receive messages from nodes newly added to $\mathcal{G}_{inf}$. However, since $\mathcal{G}_{inf}$ expands once at each inference step, it might include $l$-hop neighbors of the query subject at the $l$th step. The vanilla solution is to iterate the message passing $l$ times at the $l$th inference step, which means that we would need to run the message passing $(1 + L) \cdot L/2$ times in total for $L$ inference steps. To avoid this quadratic increase in message passing iterations, we propose a novel reverse representation update mechanism. Recall that, to avoid violating temporal constraints, we use prior neighbors to update the nodes' representations, and that at each inference step, we expand $\mathcal{G}_{inf}$ by adding prior neighbors of each node in $\mathcal{G}_{inf}$.
For example, assuming that we are at the fourth inference step, for a node that was added at the second step, we only need to aggregate messages from the nodes added at the third and fourth steps. Hence, we can update the representations of the nodes in the reverse order of their addition to $\mathcal{G}_{inf}$. Specifically, at the $l$th inference step, we first update the representations of the nodes added at the $(l-1)$th inference step, then the nodes added at the $(l-2)$th step, and so forth, down to the nodes added at the 0th step, as shown in Algorithm 1 in the appendix. In this way, we compute messages along each edge in $\mathcal{G}_{inf}$ only once and ensure that every node can receive messages from its farthest prior neighbor." }, { "heading": "4.7 LEARNING", "text": "We split the quadruples of a temporal KG into train, validation, and test sets by timestamps, ensuring (timestamps of training set) < (timestamps of validation set) < (timestamps of test set). We use the binary cross-entropy as the loss function, which is defined as
$$\mathcal{L} = -\frac{1}{|Q|} \sum_{q \in Q} \frac{1}{|\mathcal{E}^{\mathrm{inf}}_q|} \sum_{e_i \in \mathcal{E}^{\mathrm{inf}}_q} \left( y_{e_i,q} \log\frac{a^L_{e_i,q}}{\sum_{e_j \in \mathcal{E}^{\mathrm{inf}}_q} a^L_{e_j,q}} + (1 - y_{e_i,q}) \log\left(1 - \frac{a^L_{e_i,q}}{\sum_{e_j \in \mathcal{E}^{\mathrm{inf}}_q} a^L_{e_j,q}}\right)\right),$$
where $\mathcal{E}^{\mathrm{inf}}_q$ represents the set of entities in the inference graph of the query $q$, $y_{e_i,q}$ represents the binary label that indicates whether $e_i$ is the answer for $q$, and $Q$ denotes the set of training quadruples. $a^L_{e_i,q}$ denotes the attention score of $e_i$ at the final inference step. We list all model parameters in Table 2 in the appendix. In particular, we jointly learn the embeddings and the other model parameters by end-to-end training." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 DATASETS AND BASELINES", "text": "The Integrated Crisis Early Warning System (ICEWS) (Boschee et al., 2015) and YAGO (Mahdisoltani et al., 2013) have established themselves in the research community as benchmark datasets of temporal KGs. The ICEWS dataset contains information about political events with time annotations,
We evaluate our model on three subsets of the ICEWS dataset, i.e., ICEWS14, ICEWS18, and ICEWS05-15, that contain event facts in 2014, 2018, and the facts from 2005 to 2015, respectively. The YAGO dataset is a temporal knowledge base that fuses information from Wikipedia with the English WordNet dataset (Miller, 1995). Following the experimental settings of HyTE (Dasgupta et al., 2018), we use a subset and only deal with year level granularity by dropping the month and date information. We compare our approach and baseline methods by performing the link prediction task on the ICEWS14, ICEWS18, ICEWS0515, and YAGO datasets. The statistics of the datasets are provided in Appendix C.\nWe compare xERTE with benchmark temporal KG and static KG reasoning models. From the temporal KG reasoning models, we compare our model with several state-of-the-art methods, including TTransE (Leblay & Chekol, 2018), TA-DistMult/TA-TransE (Garcı́a-Durán et al., 2018), DE-SimplE(Goel et al., 2019), TNTComplEx (Lacroix et al., 2020), CyGNet(Zhu et al., 2020), and RE-Net (Jin et al., 2019). From the static KG reasoning models, we choose TransE (Bordes et al., 2013), DistMult (Yang et al., 2014), and ComplEx (Trouillon et al., 2016)." }, { "heading": "5.2 EXPERIMENTAL RESULTS AND ABLATION STUDY", "text": "Comparison results Table 1 summarizes the time-aware filtered results of the link prediction task on the ICEWS and YAGO datasets4. The time-aware filtering scheme only filters out triples that are genuine at the query time while the filtering scheme applied in prior work (Jin et al., 2019; Zhu et al., 2020) filters all triples that occurred in history. A detailed explanation is provided in Appendix D. Overall, xERTE outperforms all baseline models on ICEWS14/05-15/18 in MRR and Hits@1/3/10 while being more interpretable. Compared to the strongest baseline RE-Net, xERTE obtains a relative improvement of 5.60% and 15.15% in MRR and Hits@1, which are averaged on ICEWS14/05-15/18. 
In particular, xERTE achieves more gains in Hits@1 than in Hits@10. This confirms the assumption that subgraph reasoning helps xERTE make sharp predictions by exploiting local structures. On the YAGO dataset, xERTE achieves results comparable to RE-Net in terms of MRR and Hits@1/3. To assess the importance of each component, we conduct several ablation studies and show their results in the following.
Representation update analysis We train a model without the reverse representation update mechanism to investigate how this mechanism contributes to our model. Since the reverse representation update ensures that each node can receive messages from all its prior neighbors in the inference graph, we expect this mechanism to help nodes mine the available information. This update mechanism should be especially important for nodes that have only been involved in a small number of events. Since the historical information of such nodes is quite limited, it is very challenging to forecast their future behavior. In Figures 2a and 2b we show the metrics of Hits@1 and Hits@10 against the number of nodes in the inference graph. It can be observed that the model with the reverse update mechanism performs better in general. In particular, this update mechanism significantly improves the performance if the query subject only has a small number of neighbors in the subgraph, which meets our expectation.
3We found that CyGNet does not perform subject prediction in its evaluation code and does not report time-aware filtered results. The performance significantly drops after fixing the code.
4Code and datasets are available at https://github.com/TemporalKGTeam/xERTE
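The time-aware filtering scheme behind these numbers (detailed in Appendix D) can be made concrete with a small sketch. For a query $(s, p, ?, t)$, only competitors that are also correct answers at the same timestamp $t$ are removed before ranking, unlike the static filter that removes objects that are true at any time. Scores, facts, and function names below are toy values of ours, not the paper's evaluation code.

```python
# Sketch of the time-aware filtered rank used for MRR/Hits@k.

def time_aware_filtered_rank(scores, facts, s, p, true_o, t):
    """scores: candidate -> model score; facts: set of true quadruples.
    Filters only other true objects of (s, p, ?, t) at the same timestamp,
    then returns the 1-based rank of true_o (MRR adds 1/rank per query)."""
    same_time = {o for (s2, p2, o, t2) in facts
                 if s2 == s and p2 == p and t2 == t and o != true_o}
    target = scores[true_o]
    better = sum(1 for o, sc in scores.items()
                 if o != true_o and o not in same_time and sc > target)
    return 1 + better

facts = {("s", "p", "a", 1), ("s", "p", "b", 1), ("s", "p", "c", 2)}
scores = {"a": 0.9, "b": 0.8, "c": 0.85, "d": 0.1}
# "a" is also a correct answer at t=1, so it is filtered out; "c" is only
# correct at t=2, so it still counts as a competitor.
rank = time_aware_filtered_rank(scores, facts, "s", "p", "b", 1)
```

Under the static filter applied in prior work, "c" would also be removed, which makes the reported metrics look better than they are for a forecasting task.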
Recall that each node in inference graph Ginf is associated with a timestamp, the same entity might appear in several nodes in Ginf with different timestamps. To get a unified attention score for each entity, we aggregate the attention scores of nodes whose entity is the same. Figure 2d shows that the summation aggregator brings a considerable gain on ICEWS14.\nSampling analysis We run experiments with different sampling strategies proposed in Section 4.2. To assess the necessity of the time-aware weighted sampling, we propose a deterministic version of the time-aware weighted sampling, where we chronologically sort the prior edges of node v in terms of their timestamps and select the last N edges to build the subset Q̂v . The experimental results are provided in Table 3 in the appendix. We find that the sampling strategy has a considerable influence on model’s performance. Sampling strategies that bias towards recent quadruples perform better. Specifically, the exponentially time-weighted strategy performs better than the linear time-weighted strategy and the deterministic last-N-edges strategy.\nTime cost analysis The time cost of xERTE is affected not only by the scale of a dataset but also by the number of inference steps L. Thus, we run experiments of inference time and predictive power regarding different settings of L and show the results in Figures 2e and 2f. We see that the model achieves the best performance with L = 3 while the training time significantly increases as L goes up. To make the computation more efficient, we develop a series of segment operations for subgraph reasoning. Please see Appendix G for more details." }, { "heading": "5.3 GRAPHICAL EXPLANATION AND HUMAN EVALUATION", "text": "The extracted inference graph provides a graphical explanation for model’s prediction. As introduced in 4.7, we assign each edge in the inference graph a contribution score. Thus, users can trace back the important evidence that the prediction mainly depends on. 
We study a query chosen from the test set, where we predict whom Catherine Ashton will visit on Nov. 9, 2014, and show the final inference graph in Figure 3. In this case, the model's prediction is Oman, and (Catherine Ashton, express intent to meet or negotiate, Oman, 2014-11-04) is the most important evidence supporting this answer.\nTo assess whether the evidence is informative for users in an objective setting, we conduct a survey where respondents evaluate the relevance of the extracted evidence to the prediction. More concretely, we set up an online quiz consisting of 7 rounds. Each round is centered around a query sampled from the test set of ICEWS14/ICEWS05-15. Along with the query and the ground-truth answer, we present the human respondents with two pieces of evidence from the inference graph with high contribution scores and two pieces of evidence with low contribution scores, in randomized order. Specifically, each piece of evidence is based on a chronological reasoning path that connects the query subject with an object candidate. For example, given a query (police, arrest, ?, 2014-12-28), an extracted clue is that police made statements to lawyers on 2014-12-08, and then lawyers were criticized by citizens on 2014-12-10. In each round, we ask the participants three questions: to choose the most relevant evidence, to choose the most irrelevant evidence, and to sort the pieces of evidence according to their relevance. We then rank the evidence according to the contribution scores computed by our model and check whether the relevance order given by the respondents matches that estimated by our model. We surveyed 53 participants, and the average accuracy over all questions is 70.5%. Moreover, based on a majority vote, 18 out of 21 questions were answered correctly, indicating that the extracted inference graphs are informative and that the model is aligned with human intuition. The complete survey and a detailed evaluation are reported in Appendix H."
}, { "heading": "6 CONCLUSION", "text": "We proposed an explainable reasoning approach for forecasting links on temporal knowledge graphs. The model extracts a query-dependent subgraph from a given temporal KG and performs an attention propagation process to reason on it. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our method. We conducted a survey about the evidence included in the extracted subgraph. The results indicate that the evidence is informative for humans." }, { "heading": "A RELATED WORK", "text": "A.1 KNOWLEDGE GRAPH MODELS\nRepresentation learning is an expressive and popular paradigm underlying many KG models. The key idea is to embed entities and relations into a low-dimensional vector space. The embeddingbased approaches for knowledge graphs can generally be categorized into bilinear models (Nickel et al., 2011; Balažević et al., 2019), translational models (Bordes et al., 2013; Sun et al., 2019), and deep-learning models (Dettmers et al., 2017; Schlichtkrull et al., 2018). Besides, several studies (Hao et al., 2019; Lv et al., 2018; Ma et al., 2017) explore the ontology of entity types and relation types and utilize type-based semantic similarity to produce better knowledge embeddings. However, the above methods lack the ability to use rich temporal dynamics available on temporal knowledge graphs. To this end, several studies have been conducted for link prediction on temporal knowledge graphs (Leblay & Chekol, 2018; Garcı́a-Durán et al., 2018; Ma et al., 2018b; Dasgupta et al., 2018; Trivedi et al., 2017; Jin et al., 2019; Goel et al., 2019; Lacroix et al., 2020). Ma et al. (2018b) developed extensions of static knowledge graph models by adding timestamp embeddings to their score functions. Besides, Garcı́a-Durán et al. 
(2018) suggested a straightforward extension of some existing static knowledge graph models that utilizes a recurrent neural network (RNN) to encode predicates with temporal tokens derived from the given timestamps. Also, HyTE (Dasgupta et al., 2018) embeds time information in the entity-relation space by assigning a temporal hyperplane to each timestamp. However, these models cannot generalize to unseen timestamps because they only learn embeddings for observed timestamps. Additionally, these methods are largely black-box, lacking the ability to interpret their predictions, while our main focus is to employ an integrated transparency mechanism for achieving human-understandable results.\nA.2 EXPLAINABLE REASONING ON KNOWLEDGE GRAPHS\nRecently, several explainable reasoning methods for knowledge graphs have been proposed (Das et al., 2017; Xu et al., 2019; Hildebrandt et al., 2020). Das et al. (2017) proposed a reinforcement learning-based path-searching approach that displays the query subject and predicate to the agents and lets them perform a policy-guided walk to the correct object entity. The reasoning paths produced by the agents can explain the prediction results to some extent. Also, Hildebrandt et al. (2020) framed the link prediction task as a debate game between two reinforcement learning agents that extract evidence from knowledge graphs and allow users to understand the decision made by the agents. Besides, and more related to our work, Xu et al. (2019) model a sequential reasoning process by dynamically constructing an input-dependent subgraph. The difference is that these explainable methods can only deal with static KGs, while our model is designed for forecasting on temporal KGs." }, { "heading": "B WORKFLOW", "text": "We show the workflow of the subgraph reasoning process in Figure 5. The model conducts the reasoning process on a dynamically expanding inference graph Ginf extracted from the temporal KG.
This inference graph gives an interpretable graphical explanation of the final prediction. Given a query q = (eq, pq, ?, tq), we initialize the inference graph with the query entity eq and define the tuple (eq, tq) as the first node in the inference graph (Figure 5a). The inference graph expands by sampling neighbors that have been linked with eq prior to tq, as shown in Figure 5b. The expansion proceeds so rapidly that it would cover almost all nodes within a few steps. To prevent the inference graph from exploding, we constrain the number of edges by pruning the edges that are less related to the query (Figure 5c). Here, we propose a query-dependent temporal relational attention mechanism in Section 4.4 to identify the nodes' importance in the inference graph for query q and aggregate information from nodes' local neighbors. Next, we sample the prior neighbors of the remaining nodes in the inference graph to expand it further, as shown in Figure 5d. As this process iterates, the inference graph incrementally gains more and more information from the temporal KG. After running L inference steps, the model selects the entity with the highest attention score in Ginf as the prediction of the missing query object, where the inference graph itself serves as a graphical explanation." }, { "heading": "C DATASET STATISTICS", "text": "We provide the statistics of the datasets in Table 4. Since we split each dataset into subsets by timestamps, ensuring (timestamps of training set) < (timestamps of validation set) < (timestamps of test set), a considerable number of entities in the test sets are unseen. We report the number of entities in each subset in Table 5." }, { "heading": "D EVALUATION PROTOCOL", "text": "For each quadruple q = (es, p, eo, t) in the test set Gtest, we create two queries: (es, p, ?, t) and (eo, p^{-1}, ?, t), where p^{-1} denotes the reciprocal relation of p.
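The per-quadruple query construction above, together with the rank-based evaluation described in this protocol, could be sketched as follows. The helper names and the `"_inv"` encoding of the reciprocal relation are our own illustrative choices, not the paper's API.

```python
def make_queries(quad):
    """Turn a test quadruple (s, p, o, t) into the two evaluation
    queries: object prediction for p, and for the reciprocal
    relation p^-1 (encoded here with an '_inv' suffix). Each query
    carries its ground-truth answer as the last element."""
    s, p, o, t = quad
    return [(s, p, None, t, o), (o, p + "_inv", None, t, s)]

def mean_reciprocal_rank(ranks):
    """MRR over the ranks of the ground-truth entities; a ground
    truth missing from the final subgraph is assumed to already be
    assigned the worst rank |E| before this is called."""
    return sum(1.0 / r for r in ranks) / len(ranks)
```

For example, ranks [1, 2, 4, 1] (two test quadruples, two queries each) yield MRR = (1 + 1/2 + 1/4 + 1)/4 = 0.6875.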
For each query, the model ranks all entities in E_q^{inf}, the entity set of the final inference graph, according to their attention scores. If the ground-truth entity does not appear in the final subgraph, we set its rank to |E| (the number of entities in the dataset). Let ψ_{e_s} and ψ_{e_o} denote the ranks of es and eo for the two queries, respectively. We evaluate our model using standard metrics from the link prediction literature: the mean reciprocal rank, MRR = (1 / (2·|Gtest|)) · Σ_{q ∈ Gtest} (1/ψ_{e_s} + 1/ψ_{e_o}), and Hits@k (k ∈ {1, 3, 10}), the percentage of times that the true entity candidate appears in the top k of the ranked candidates.\nIn this paper, we consider two different filtering settings. The first follows the ranking technique described in Bordes et al. (2013), where we remove from the list of corrupted triples all triples that appear either in the training, validation, or test set. We name it static filtering. Trivedi et al. (2017), Jin et al. (2019), and Zhu et al. (2020) use this filtering setting for reporting their results on temporal KG forecasting. However, this filtering setting is not appropriate for evaluating link prediction on temporal KGs. For example, suppose there is a test quadruple (Barack Obama, visit, India, 2015-01-25), and we perform the object prediction (Barack Obama, visit, ?, 2015-01-25). We have observed the quadruple (Barack Obama, visit, Germany, 2013-01-18) in the training set. Under static filtering, (Barack Obama, visit, Germany) would be considered a genuine triple at the timestamp 2015-01-25 and would be filtered out, because the triple (Barack Obama, visit, Germany) appears in the training set in the quadruple (Barack Obama, visit, Germany, 2013-01-18). However, the triple (Barack Obama, visit, Germany) is only temporally valid on 2013-01-18, not on 2015-01-25. Therefore, we apply another filtering scheme, which is more appropriate for the link forecasting task on temporal KGs. We name it time-aware filtering.
In this case, we only filter out the triples that are genuine at the timestamp of the query. In other words, if the triple (Barack Obama, visit, Germany) does not appear at the query time of 2015-01-25, the quadruple (Barack Obama, visit, Germany, 2015-01-25) is considered as corrupted and will be filtered out. We report the time-aware filtered results of baselines and our model in Table 1.\nE IMPLEMENTATION\nWe implement our model and all baselines in PyTorch (Paszke et al., 2019). We tune hyperparameters of our model using a grid search. We set the learning rate to be 0.0002, the batch size to be 128, the inference step L to be 3. Please see the source code5 for detailed hyperparameter settings. We implement TTransE, TA-TransE/TA-DistMult, and RE-Net based on the code6 provided in (Jin et al., 2019). We use the released code to implement DE-SimplE7, TNTComplEx8, and CyGNet9. We use the binary cross-entropy loss to train these baselines and optimize hyperparameters according to MRR on the validation set. Besides, we use the datasets augmented with reciprocal relations to train all baseline models." }, { "heading": "F REVERSE REPRESENTATION UPDATE MECHANISM FOR SUBGRAPH REASONING", "text": "In this section, we explain an additional reason why we have to update node representations along edges selected in previous inference steps. We show our intuition by a simple query in Figure 6 with two inference steps. For simplicity, we do not apply the pruning procedure here. First, we check the equations without updating node representations along previously selected edges. 
h_i^l denotes the hidden representation of node i at the l-th inference step.\n5https://github.com/TemporalKGTeam/xERTE\n6https://github.com/INK-USC/RE-Net\n7https://github.com/BorealisAI/de-simple\n8https://github.com/facebookresearch/tkbc\n9https://github.com/CunchaoZ/CyGNet\nFirst inference step:\nh_0^1 = f(h_0^0, h_1^0, h_2^0, h_3^0)\nh_1^1 = f(h_1^0)\nh_2^1 = f(h_2^0)\nh_3^1 = f(h_3^0)\nSecond inference step:\nh_0^2 = f(h_0^1, h_1^1, h_2^1, h_3^1) = f(h_0^1, f(h_1^0), f(h_2^0), f(h_3^0))\nh_1^2 = f(h_1^1, h_4^1, h_5^1)\nh_2^2 = f(h_2^1, h_7^1, h_8^1)\nh_3^2 = f(h_3^1, h_6^1)\nNote that h_0^2 is updated with h_1^0, h_2^0, h_3^0 and does not depend on h_4^1, h_5^1, h_6^1, h_7^1, h_8^1, i.e., on the two-hop neighbors. In comparison, if we update the node representations along previously selected edges, the update in the second step changes to:\nSecond inference step, part a:\nh_4^2 = f(f(h_4^0))\nh_5^2 = f(f(h_5^0))\nh_6^2 = f(f(h_6^0))\nh_7^2 = f(f(h_7^0))\nh_8^2 = f(f(h_8^0))\nSecond inference step, part b:\nh_1^2 = f(h_1^1, h_4^2, h_5^2)\nh_2^2 = f(h_2^1, h_7^2, h_8^2)\nh_3^2 = f(h_3^1, h_6^2)\nSecond inference step, part c:\nh_0^2 = f(h_0^1, h_1^2, h_2^2, h_3^2)\nThus, nodes 1-3 receive messages from their one-hop prior neighbors, e.g., h_1^2 = f(h_1^1, h_4^2, h_5^2). They then pass the information to the query subject (node 0), i.e., h_0^2 = f(h_0^1, h_1^2, h_2^2, h_3^2)." }, { "heading": "G SEGMENT OPERATIONS", "text": "The degree of entities in temporal KGs, e.g., ICEWS, varies from thousands to a single digit. Thus, the size of the inference graph also differs from query to query. To optimize batch training, we define an array that records all nodes in the inference graphs of a batch of queries. Each node is represented by a tuple (inference graph index, entity index, timestamp, node index). The node index is a unique index that distinguishes the same node across different inference graphs.\nNote that the inference graphs of two queries may overlap, which means they may contain the same nodes.
But the query-dependent node representations would be distinct in different inference graphs. To avoid mixing information across different queries, we need to make sure that tensor operations can be applied separately to nodes in different inference graphs. Instead of iterating through each inference graph, we develop a series of segment operations based on matrix multiplication. The segment operations significantly improve time efficiency. We report the improvement of time efficiency on ICEWS14 in Table 6. Additionally, we list two examples of segment operations in the following.\nSegment sum Given a vector x ∈ R^d and another vector s ∈ R^d that indicates the segment index of each element in x, the segment sum operator returns the sum of each segment. For example, let x = [3, 1, 5]^T and s = [0, 0, 1]^T, which means the first two elements of x belong to the 0th segment and the last element belongs to the first segment. The segment sum operator returns [4, 5]^T as the output. It is realized by creating a sparse matrix Y ∈ R^{n×d}, where n denotes the number of segments. We set 1 at the positions {(s[i], i), ∀i ∈ {0, ..., d−1}} of Y and pad the other positions with zeros. Finally, we multiply Y with x to get the sum of each segment.\nSegment softmax The standard softmax function σ : R^K → R^K is defined as:\nσ(z)_i = exp(z_i) / Σ_{j=1}^{K} exp(z_j)\nThe segment softmax function has two inputs: z ∈ R^K contains the elements to normalize, and s ∈ R^K denotes the segment index of each element. It is then defined as:\nσ(z)_i = exp(z_i) / Σ_{j ∈ {k | s_k = s_i, ∀k ∈ {0, ..., K−1}}} exp(z_j)\n, where s_i denotes the segment that z_i is in.\nThe segment softmax function can be calculated by the following steps:\n1. We apply the exponential function to each element of z and then apply the segment sum operator to get a denominator vector d. We need to broadcast d such that it aligns with z, which means d[i] is the sum over segment s[i].\n2. We apply element-wise division between z and d."
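The two operators above can be sketched in a few lines of pure Python; this is an illustrative reference implementation with our own function names, whereas the actual model realizes segment sum via sparse matrix multiplication as described.

```python
import math

def segment_sum(x, s, num_segments):
    """Sum the elements of x per segment; s[i] is the segment index
    of x[i]. Equivalent to multiplying the sparse indicator matrix Y
    described above with x: out[k] = sum of x[i] over i with s[i] == k."""
    out = [0.0] * num_segments
    for value, seg in zip(x, s):
        out[seg] += value
    return out

def segment_softmax(z, s, num_segments):
    """Softmax normalized within each segment, following the two
    steps described above: exponentiate, segment-sum the denominators,
    broadcast them back, then divide element-wise."""
    exp_z = [math.exp(v) for v in z]
    denom = segment_sum(exp_z, s, num_segments)           # step 1
    return [e / denom[seg] for e, seg in zip(exp_z, s)]   # step 2
```

With x = [3, 1, 5] and s = [0, 0, 1], segment_sum returns [4, 5], matching the worked example; each segment of the segment-softmax output sums to 1.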
}, { "heading": "H SURVEY", "text": "In this section, we provide the online survey (see Section 5.3 in the main body) and the evaluation statistics based on 53 respondents. To avoid biasing the respondents, we did not inform them about the type of our project. Further, all questions are permuted at random.\nWe set up the quiz consisting of 7 rounds. In each round, we sample a query from the test set of ICEWS14/ICEWS05-15. Along with the query and the ground-truth object, we present the users with two pieces of evidence extracted from the inference graph with high contribution scores and two pieces of evidence with low contribution scores, in randomized order. The respondents are supposed to judge the relevance of the evidence to the query on two levels, namely relevant or less relevant. There are three questions in each round that ask the participants to identify the most relevant evidence, identify the most irrelevant evidence, and rank the four pieces of evidence according to their relevance. The answer to the first question is classified as correct if a participant gives one of the two statements with high contribution scores as the most relevant evidence. Similarly, the answer to the second question is classified as correct if the participant gives one of the two statements with low contribution scores as the most irrelevant evidence. For the relevance ranking task, the answer is right if the participant ranks the two statements with high contribution scores higher than the two statements with low contribution scores.\nH.1 POPULATION\nWe provide the information about gender, age, and education level of the respondents in Figure 7.\nH.2 AI QUIZ\nYou will participate in a quiz consisting of eight rounds. Each round is centered around an international event. Along with the event, we also show you four reasons that explain why the given event happened. While some evidence may be informative and explain the occurrence of this event, other evidence may be irrelevant to this event.
Your task is to find the most relevant evidence and most irrelevant evidence, and then sort all four evidence according to their relevance. Don’t worry if you feel that you cannot make an informed decision: Guessing is part of this game!\nAdditional Remarks: Please don’t look for external information (e.g., Google, Wikipedia) or talk to other respondents about the quiz. But you are allowed to use a dictionary if you need vocabulary clarifications.\nExample\nGiven an event, please rank the followed evidence according to the relevance to the given event. Especially, please select the most relevant reason, the most irrelevant reason, and rank the relevance from high to low.\nEvent: French government made an optimistic comment about China on 2014-11-24.\nA. First, on 2014-11-20, South Africa engaged in diplomatic cooperation with Morocco. Later, on 2014-11-21, a representative of the Morocco government met a representative of the French government.\nB. First, on 2014-11-18, the Chinese government engaged in negotiation with the Iranian government. Later, on 2014-11-21, a representative of the French government met a representative of the Chinese government.\nC. On 2014-11-23, the French hosted a visit by Abdel Fattah Al-Sisi.\nD. A representative of the French government met a representative of the Chinese government on 2014-11-21.\nCorrect answer\nMost relevant: D Most irrelevant: A Relevance ranking: D B C A\nTasks" }, { "heading": "1. Event: On 2014-12-17, the UN Security Council accused South Sudan.", "text": "A. South Africa engaged in diplomatic cooperation with South Sudan on 2014-12-11.\nB. First, on 2014-11-17, Uhuru Muigai Kenyatta accused UN Security Council. Later, on 2014-11- 26, the UN Security Council provided military protection to South Sudan.\nC. On 2014-12-16, UN Security Council threatened South Sudan with sanctions.\nD. South Sudan hosted the visit of John Kerry on 2014-12-16.\nMost relevant: Most irrelevant: Relevance ranking:" }, { "heading": "2. 
Event: Indonesia police arrested and retained an Indonesia citizen at 2014-12-28.", "text": "A. The Indonesia police claimed that an attorney denounced the citizen on 2014-12-10.\nB. Zaini Abdullah endorsed the Indonesia citizen on 2014-12-25.\nC. The Indonesia police made an optimistic comment on the citizen on 2014-12-14.\nD. The Indonesia police investigated the citizen on 2014-12-08.\nMost relevant: Most irrelevant: Relevance ranking:\n3. Event: A citizen from Greece protested violently against the police of Greece on 2014-11-17.\nA. The Greek head of government accused the political party “Coalition of the Radical Left” on 2014-05-25.\nB. Greek police refused to surrender to the Greek head of government on 2014-10-15.\nC. Greek citizens gathered support on behalf of John Kerry on 2014-11-17.\nD. Greek police arrested and detained another Greek police officer on 2014-11-04.\nMost relevant: Most irrelevant: Relevance ranking:" }, { "heading": "4. Event: Raúl Castro signed a formal agreement with Barack Obama on 2014-12-17.", "text": "A. First, on 2009-01-28, Dmitry A. Medvedev made statements to Barack Obama. Later, on 2009- 01-30, Raúl Castro negotiated with Dmitry A. Medvedev.\nB. Raúl Castro visited Angola on 2009-07-22.\nC. Raúl Castro hosted a visit of Evo Morales on 2011-09-19.\nD. First, on 2008-11-05, Evo Morales hosted a visit of Barack Obama. Later, on 2011-09-19, Raúl Castro appeal for de-escalation of military engagement to Evo Morales.\nMost relevant: Most irrelevant: Relevance ranking:" }, { "heading": "5. Event: The head of the government of Ukraine considered to make a policy option with Angela", "text": "Merkel on 2015-07-10.\nA. First, on 2014-07-04, the armed rebel in Ukraine used unconventional violence to the military of Ukraine. Later, on 2014-07-10, the head of government of Ukraine made statements to the armed rebel in Ukraine.\nB. The head of the government of Ukraine expressed intent to meet with Angela Merkel on 2014- 10-30.\nC. 
First, on 2014-07-04, the armed rebel in Ukraine used unconventional violence to the military of Ukraine. Later, on 2014-07-19, the head of government of Ukraine made statements to the armed rebel in Ukraine.\nD. The head of the government of Ukraine consulted with Angela Merkel on 2015-06-06.\nMost relevant: Most irrelevant: Relevance ranking:" }, { "heading": "6. Event: On 2014-08-09, Ukraine police arrested a member of the Ukraine military.", "text": "A. First, on 2014-07-23, a member of Ukraine parliament consulted the head of the Ukraine government. Later, on 2014-07-24, the head of government made a statement to the Ukraine police.\nB. First, on 2014-06-25, the military of Ukraine used violence to an armed rebel that occurred in Ukraine. Later, on 2014-07-10, the armed rebel used violence to the Ukraine police.\nC. First, on 2005-02-20, the military of Ukraine made a statement to the head of the government of Ukraine. Later, on 2005-07-18, the head of government of Ukraine appealed for a change in leadership of the Ukraine police.\nD. On 2014-07-31, the head of the Ukraine government praised the Ukraine police.\nMost relevant: Most irrelevant: Relevance ranking:" }, { "heading": "7. Event: The Office of Business Affairs of Bahrain negotiated with the Labor and Employment Ministry of Bahrain on 2015-07-16.", "text": "A. First, on 2014-07-27, the undersecretary of Bahrain made statements to the Labor and Employment Ministry of Bahrain. Later, on 2015-01-21, an officer of Business Affairs of Bahrain signed a formal agreement with the undersecretary of Bahrain.\nB. On 2012-01-21, the office of Business Affairs of Bahrain expressed intent to provide policy support to the employees in Bahrain.\nC. First, on 2006-11-01, the employees in Bahrain made statements with the special Rapporteurs of the United Nation. Later, on 2011-05-11, the office of Business Affairs of Bahrain reduced relations with the employees in Bahrain.\nD. 
A representative of the Labor and Employment Ministry of Bahrain consulted with a representative of the Office of Business Affairs of Bahrain on 2014-01-31.\nMost relevant: Most irrelevant: Relevance ranking:\nH.3 GROUND TRUTH ANSWERS\nQuestion 1:\nMost relevant: B/C Most irrelevant: A/D\nRelevance ranking: BCAD/BCDA/CBAD/CBDA\nQuestion 2:\nMost relevant: A/D Most irrelevant: B/C\nRelevance ranking: ADBC/ADCB/DABC/DACB\nQuestion 3:\nMost relevant: B/D Most irrelevant: A/C\nRelevance ranking: BDAC/BDCA/DBAC/DBCA\nQuestion 4:\nMost relevant: A/D Most irrelevant: B/C\nRelevance ranking: ADBC/ADCB/DABC/DACB\nQuestion 5:\nMost relevant: B/D Most irrelevant: A/C\nRelevance ranking: BDAC/BDCA/DBAC/DBCA\nQuestion 6:\nMost relevant: B/C. Most irrelevant: A/D\nRelevance ranking: BCAD/BCDA/CBAD/CBDA\nQuestion 7:\nMost relevant: A/D Most irrelevant: B/C\nRelevance ranking: ADBC/ADCB/DABC/DACB\nH.4 EVALUATION\nThe evaluation results of 53 respondents are shown in Figure 8." }, { "heading": "I ADDITIONAL ANALYSIS OF TIME-AWARE ENTITY REPRESENTATIONS", "text": "We use a generic time encoding (Xu et al., 2020) defined as Φ(t) = √\n1 d [cos(ω1t +\nφ1), . . . ., cos(ωdt+ φd)] to generate the time-variant part of entity representations (please see Section 4.2 for more details). Time-aware representations have considerable influence on the temporal attention mechanism. To make our point, we conduct a case study and extract the edges’ attention scores from the final inference graph. Specifically, we study the attention scores of the interactions between military and student at different timestamps in terms of the query (student, criticize, ?, Nov. 17, 2014). We list the results of the model with time encoding in Table 7 and the results of the model without time encoding in Table 8.\nAs shown in Table 7, by means of the time-encoding, quadruples that even have the same subject, predicate, and object have different attention scores. 
Specifically, quadruples that occurred recently tend to have higher attention scores. This makes our model more interpretable and effective. For example, given three quadruples {(country A, accuse, country B, t1), (country A, express intent to negotiate with, country B, t2), (country A, cooperate with, country B, t3)}, country A probably has a good relationship with B at t if (t1 < t2 < t3 < t) holds. However, there would be a strained relationship between A and B at t if (t > t1 > t2 > t3) holds. Thus, we can see that the time information is crucial to the reasoning, and attention values should be time-dependent. In comparison, Table 8 shows that the triple (military, use conventional military force, student) is assigned seemingly random attention scores at different timestamps, which is less interpretable." } ]
2021
CASTING ON TEMPORAL KNOWLEDGE GRAPHS
SP:759c0a0298f9845f41d6b556a2187867230a0ca5
[ "The paper proposed FedDEC, a novel approach to conduct model updates aggregation in federated learning. The main motivation of this paper is to decouple the aggregation of normal model weights and statistics in BNs separately such that both data and model heterogeneity can be handled. Theoretical analysis indicates that the proposed FedDEC method enjoys a good convergence guarantee. Extensive experimental results are provided to show that FedDEC enjoys high efficiency and better model accuracy under the non-IID environment compared to the considered baseline methods.", "This paper introduces an aggregation mechanism designed for neural networks with batch normalisation layers. This mechanism relies on two parts: probabilistic mixing weights of the loss function and the use of a weighted pool estimator for aggregating the BN variance parameters. The mixing weights are derived from a GMM with variational inference. A convergence result in the *convex* case is provided. Experimental results on 3 image datasets show that this approach yields better results than other standard FL algorithms (FedAvg, FedProx, q-FedSGD, FedMA…) as well as a better resilience to heterogeneity (understood as class imbalance)." ]
In the federated learning paradigm, multiple mobile clients train local models independently on datasets generated by edge devices, and the server aggregates the parameters/gradients of the local models to form a global model. However, existing model aggregation approaches suffer from high bias in both the data distribution and the parameter distribution for non-IID datasets, which results in a severe accuracy drop as the number of heterogeneous clients increases. In this paper, we propose a novel decoupled probabilistic-weighted gradient aggregation approach called FeDEC for federated learning. The key idea is to optimize gradient parameters and statistical parameters in a decoupled way, and to aggregate the parameters of local models with probabilistic weights to deal with the heterogeneity of clients. Since the overall dataset is inaccessible to the central server, we introduce a variational inference method to derive the optimal probabilistic weights that minimize statistical bias. We further prove the convergence bound of the proposed approach. Extensive experiments with mainstream convolutional neural network models on three federated datasets show that FeDEC significantly outperforms the state-of-the-art in terms of model accuracy and training efficiency.
[]
[ { "authors": [ "S. Boyd", "N. Parikh", "E. Chu", "B. Peleato", "J. Eckstein" ], "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers", "venue": "Foundations and Trends in Machine Learning,", "year": 2011 }, { "authors": [ "Tianyi Chen", "Georgios Giannakis", "Tao Sun", "Wotao Yin" ], "title": "Lag: Lazily aggregated gradient for communication-efficient distributed learning", "venue": "Advances in Neural Information Processing Systems (NIPS’18),", "year": 2018 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc'aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V. Le", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "O. Dekel", "R. Gilad-Bachrach", "O. Shamir", "L. Xiao" ], "title": "Optimal distributed online prediction using mini-batches", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "A.P. Dempster", "N.M. Laird", "D.B. Rubin" ], "title": "Maximum likelihood from incomplete data via the em algorithm. JOURNAL OF THE ROYAL STATISTICAL SOCIETY", "venue": "SERIES B,", "year": 1977 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16),", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. 
Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Jinlong Ji", "Xuhui Chen", "Qianlong Wang", "Lixing Yu", "Pan Li" ], "title": "Learning to learn gradient aggregation by gradient descent", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI’19),", "year": 2019 }, { "authors": [ "Linshan Jiang", "Rui Tan", "Xin Lou", "Guosheng Lin" ], "title": "On lightweight privacy-preserving collaborative learning for internet-of-things objects", "venue": "In Proceedings of the International Conference on Internet of Things Design and Implementation (IoTDI’19),", "year": 2019 }, { "authors": [ "James M. Joyce" ], "title": "Kullback-Leibler Divergence, pp. 720–722", "venue": null, "year": 2011 }, { "authors": [ "Marcel Keller", "Valerio Pastro", "Dragos Rotaru" ], "title": "Overdrive: Making SPDZ great again", "venue": "In Advances in Cryptology (EUROCRYPT’18),", "year": 2018 }, { "authors": [ "Peter R. Killeen" ], "title": "An alternative to null-hypothesis significance tests", "venue": "Psychological science,", "year": 2005 }, { "authors": [ "Diederik Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations (ICLR’14),", "year": 2014 }, { "authors": [ "Jakub Konečnỳ", "H. Brendan McMahan", "Daniel Ramage" ], "title": "Federated optimization: Distributed optimization beyond the datacenter. NIPS Optimization for Machine Learning", "venue": "Workshop 2015, pp", "year": 2015 }, { "authors": [ "Jakub Konecný", "H. Brendan McMahan", "Daniel Ramage", "Peter Richtárik" ], "title": "Federated optimization: Distributed machine learning for on-device", "venue": "intelligence. 
ArXiv,", "year": 2016 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann Lecun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Mu Li", "David G. Andersen", "Jun Woo Park", "Alexander J. Smola", "Amr Ahmed", "Vanja Josifovski", "James Long", "Eugene J. Shekita", "Bor-Yiing Su" ], "title": "Scaling distributed machine learning with the parameter server", "venue": "In 11th USENIX Symposium on Operating Systems Design and Implementation", "year": 2014 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "In Proceedings of Machine Learning and Systems", "year": 2020 }, { "authors": [ "Tian Li", "Maziar Sanjabi", "Virginia Smith" ], "title": "Fair resource allocation in federated learning", "venue": "In International Conference on Learning Representations (ICLR’20),", "year": 2020 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agüera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "Proceedings of the 20th International Conference on Artificial Intelligence and Statistics", "year": 2017 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML’19),", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", 
"Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems (NeurIPS’19),", "year": 2019 }, { "authors": [ "Peter Richtárik", "Martin Takác" ], "title": "Distributed coordinate descent method for learning with big data", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "M. Sandler", "A. Howard", "M. Zhu", "A. Zhmoginov", "L. Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Ohad Shamir", "Nati Srebro", "Tong Zhang" ], "title": "Communication-efficient distributed optimization using an approximate newton-type method", "venue": "In Proceedings of the 31st International Conference on Machine Learning (ICML’14),", "year": 2014 }, { "authors": [ "V. Smith", "S. Forte", "C. Ma", "M. Takac", "M.I. Jordan", "M. 
Jaggi" ], "title": "Cocoa: A general framework for communication-efficient distributed optimization", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Shizhao Sun", "Wei Chen", "Jiang Bian", "Xiaoguang Liu", "Tie-Yan Liu" ], "title": "Ensemble-compression: A new method for parallel training of deep neural networks", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-KDD’17),", "year": 2017 }, { "authors": [ "Hongyi Wang", "Mikhail Yurochkin", "Yuekai Sun", "Dimitris Papailiopoulos", "Yasaman Khazaeni" ], "title": "Federated learning with matched averaging", "venue": "In International Conference on Learning Representations", "year": 2020 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning", "venue": null, "year": 2017 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML’19),", "year": 2019 }, { "authors": [ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Nghia Hoang", "Yasaman Khazaeni" ], "title": "Bayesian nonparametric federated learning of neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML’19),", "year": 2019 }, { "authors": [ "Sixin Zhang", "Anna E Choromanska", "Yann LeCun" ], "title": "Deep learning with elastic averaging sgd", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Y. Zhang", "J.C. Duchi", "M.J. Wainwright" ], "title": "Communication-efficient algorithms for statistical opti- mization", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "V. 
Chandra" ], "title": "Federated learning with non-iid", "venue": "data. ArXiv,", "year": 2018 }, { "authors": [ "H. Zhu", "Y. Jin" ], "title": "Multi-objective evolutionary federated learning", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Federated learning (FL) has emerged as a novel distributed machine learning paradigm that allows a global machine learning model to be trained by multiple mobile clients collaboratively. In such paradigm, mobile clients train local models based on datasets generated by edge devices such as sensors and smartphones, and the server is responsible to aggregate parameters/gradients from local models to form a global model without transferring data to a central server. Federated learning has been drawn much attention in mobile-edge computing (Konecný et al. (2016); Sun et al. (2017)) with its advantages in preserving data privacy (Zhu & Jin (2020); Jiang et al. (2019); Keller et al. (2018)) and enhancing communication efficiency (Shamir et al. (2014); Smith et al. (2018); Zhang et al. (2013); McMahan et al. (2017); Wang et al. (2020)).\nGradient aggregation is the key technology of federated learning, which typically involves the following three steps repeated periodically during training process: (1) the involved clients train the same type of models with their local data independently; (2) when the server sends aggregation signal to the clients, the clients transmit their parameters or gradients to the server; (3) when server receives all parameters or gradients, it applies an aggregation methods to the received parameters or gradients to form the global model. The standard aggregation method FedAvg (McMahan et al. (2017)) and its variants such as FedProx (Li et al. (2020a)), Zeno (Xie et al. (2019)) and q-FedSGD (Li et al. (2020b)) applied the synchronous parameter averaging method to the entire model indiscriminately. Agnostic federated learning (AFL) (Mohri et al. (2019)) defined an agnostic and risk-averse objective to optimize a mixture of the client distributions. FedMA (Wang et al. 
(2020)) constructs the shared global model in a layer-wise manner by matching and averaging hidden elements with similar feature-extraction signatures. The recurrent neural network (RNN) based aggregator (Ji et al. (2019)) learns an aggregation method that is resilient to Byzantine attacks.
Despite these efforts, applying the existing parameter aggregation methods to a large number of heterogeneous clients still suffers from performance issues. It was reported in (Zhao et al. (2018)) that the accuracy of a convolutional neural network (CNN) model trained by FedAvg drops by up to 55% on highly skewed non-IID data. The work of (Wang et al. (2020)) showed that the accuracy of FedAvg (McMahan et al. (2017)) and FedProx (Li et al. (2020a)) dropped from 61% to under 50% when the number of clients increases from 5 to 20 under a heterogeneous data partition. A possible explanation for these performance drops is the bias introduced by inappropriate gradient aggregation, on which we make the following observations.
Data Bias: In the federated learning setting, local datasets are accessible only to their owners and are typically non-IID. Conventional approaches aggregate gradients uniformly across clients, which can severely bias the aggregate away from the real data distribution. Fig. 1 shows the class distribution of the real CIFAR-10 dataset (Krizhevsky (2009)) and the distributions obtained by uniform sampling from different numbers of clients. There are large differences between the real and the sampled distributions, and the more clients are involved, the larger the difference becomes.
Parameter Bias: A CNN model typically contains two different types of parameters: the gradient parameters of the convolutional (Conv) and fully-connected (FC) layers, and the statistical parameters, such as mean and variance, of the batch normalization (BN) layers.
Existing approaches such as FedAvg average all model parameters indiscriminately using distributed stochastic gradient descent (SGD), which leads to bias in the BN-layer means and variances. Fig. 2 shows the distributions of BN-layer means and variances for a centrally-trained CNN model and for FedAvg-trained models with different numbers of clients on non-IID local datasets. The more clients are involved, the larger the deviation between the central model and the federated learning models.
Our contributions: In the context of federated learning, the problems of data bias and parameter bias have not been carefully addressed in the literature. In this paper, we propose a novel gradient aggregation approach called FeDEC. Our main contributions are summarized as follows. (1) We propose the key idea of optimizing gradient aggregation with a decoupled probabilistic-weighted method. To the best of our knowledge, we make the first attempt to aggregate gradient parameters and statistical parameters separately, and we adopt a probabilistic mixture model to resolve the problem of aggregation bias in federated learning with heterogeneous clients. (2) We propose a variational inference method to derive the optimal probabilistic weights for gradient aggregation, and we prove a convergence bound for the proposed approach. (3) We conduct extensive experiments with five mainstream CNN models on three federated datasets under non-IID conditions, showing that FeDEC significantly outperforms the state of the art in terms of model accuracy and training efficiency." }, { "heading": "2 RELATED WORK", "text": "We summarize the related work in two categories: parameter/gradient aggregation for distributed learning and federated learning.
Distributed Learning: In distributed learning, the most prominent parameter aggregation paradigm is the Parameter Server Framework (Li et al. (2014)).
In this framework, multiple servers maintain a partition of the globally shared parameters and communicate with each other to replicate and migrate parameters, while the clients compute gradients locally on a portion of the training data and communicate with the servers for model updates. The parameter server paradigm has motivated the development of numerous distributed optimization methods (Boyd et al. (2011); Dean et al. (2012); Dekel et al. (2012); Richtárik & Takác (2016); Zhang et al. (2015)). Several works focused on improving the communication efficiency of distributed learning (Shamir et al. (2014); Smith et al. (2018); Zhang et al. (2013)). To address the issue of model robustness, Zeno (Xie et al. (2019)) was proposed to make distributed machine learning tolerant to an arbitrary number of faulty workers. The RNN-based aggregator (Ji et al. (2019)) adopted a meta-learning approach that uses a recurrent neural network (RNN) in the parameter server to learn to aggregate the gradients from the workers, with a coordinate-wise preprocessing and postprocessing method to improve robustness.
Federated Learning: Federated learning (Konečnỳ et al. (2015)) is an emerging distributed machine learning paradigm that aims to build machine-learning models from datasets distributed across multiple clients. One of the standard parameter aggregation methods is FedAvg (McMahan et al. (2017)), which combines local stochastic gradient descent (SGD) on each client with a server that performs parameter averaging. The lazily aggregated gradient (Lag) method (Chen et al. (2018)) allows clients to run multiple epochs before model aggregation to reduce communication cost. For heterogeneous datasets, FedProx (Li et al. (2020a)) modified FedAvg by adding a heterogeneity bound on datasets and devices. The FedMA (Wang et al. (2020)) method, derived from AFL (Mohri et al. (2019)) and PFNM (Yurochkin et al.
(2019)), demonstrated that permutations of layers can affect gradient aggregation, and proposed a layer-wise gradient aggregation method to solve the problem. For fair resource allocation, the q-FedSGD (Li et al. (2020b)) method encourages a more uniform accuracy distribution across devices in federated networks.
However, none of these methods differentiates between gradient parameters and statistical parameters; they aggregate the entire model in a coupled manner. In this paper, we make the first attempt to decouple the aggregation of gradient parameters and statistical parameters, using probabilistic weights to optimize the global model for fast convergence and high accuracy under non-IID conditions." }, { "heading": "3 FEDEC: A DECOUPLED GRADIENT AGGREGATION METHOD", "text": "" }, { "heading": "3.1 OBJECTIVE OF FEDERATED LEARNING WITH NON-IID DATA", "text": "Consider a federated learning scenario with $K$ clients that train their local CNN models independently on local datasets $x_1, x_2, \ldots, x_K$ and report their gradients and model parameters to a central server. The objective of the server is to form an aggregate global CNN model that minimizes the loss over the total dataset $\mathbf{x} = \{x_1, x_2, \ldots, x_K\}$. Conventional federated learning optimizes the following loss function:
$$\min_{\mathbf{W}} L(\mathbf{W}, \mathbf{x}) := \sum_{k=1}^{K} \frac{|x_k|}{|\mathbf{x}|} L_k(\mathbf{W}_k, x_k), \qquad (1)$$
where $\mathbf{W}$ denotes the parameters of the global model, $\mathbf{W}_k$ $(k = 1, 2, \cdots, K)$ the parameters of the $k$-th local model, and $L(\cdot)$ and $L_k(\cdot)$ the loss functions of the global and local models, respectively. This objective assumes that training samples are uniformly distributed among the clients, so that the aggregate loss can be represented as the sum of the local losses weighted by their dataset fractions.
As discussed in Section 1, conventional federated learning has two drawbacks. Firstly, local datasets are collected by mobile devices used by particular users, and are therefore typically non-IID.
Training samples on each client may be drawn from a different distribution, so the data points available locally can be biased with respect to the overall distribution. Secondly, a neural network model typically consists of convolutional (Conv) and fully-connected (FC) layers, which are formed by gradient parameters, and batch normalization (BN) layers, which are formed by statistical parameters such as mean and variance; aggregating them without distinction causes severe deviation of the global model parameters.
To address these issues, we propose a decoupled probabilistic-weighted approach for federated learning that optimizes the following loss function:
$$\min_{\mathbf{W}_*} L(\{\mathbf{W}^t_{NN}, \mathbf{W}^t_{mean}, \mathbf{W}^t_{var}\}, \mathbf{x}) := \sum_{k=1}^{K} \pi_k L_k(\{\mathbf{W}^{t-1,k}_{NN}, \mathbf{W}^{t-1,k}_{mean}, \mathbf{W}^{t-1,k}_{var}\}, x_k), \qquad (2)$$
where $*$ stands for $NN$, $mean$, and $var$; $\mathbf{W}^t_{NN}$ denotes the Conv/FC parameters and $\mathbf{W}^t_{mean}$, $\mathbf{W}^t_{var}$ the BN statistics of the global model after the $t$-th aggregation epoch; $\mathbf{W}^{t-1,k}_{NN}$, $\mathbf{W}^{t-1,k}_{mean}$, and $\mathbf{W}^{t-1,k}_{var}$ are the parameters of the $k$-th local model after several local epochs of training starting from the $(t-1)$-th global model; and $\pi_k$ $(k = 1, \ldots, K)$ is the probability that a sample is drawn from the distribution of the $k$-th client, i.e., $\pi_k \in [0, 1]$ and $\sum_{k=1}^{K} \pi_k = 1$.
The above formulation minimizes the expected loss over the $K$ clients with non-IID datasets. Next, we introduce a decoupled method called FeDEC to optimize the parameters of the different types of layers separately, and derive the probability weights $\pi_k$ for parameter aggregation." }, { "heading": "3.2 DECOUPLED PROBABILISTIC-WEIGHTED GRADIENT AGGREGATION METHOD", "text": "In this section, we propose a decoupled method to derive the global model with respect to $\mathbf{W}^t_{NN}$ (the parameters of the Conv and FC layers) and $\mathbf{W}^t_{mean}, \mathbf{W}^t_{var}$ (the statistical parameters of the BN layers)."
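For fixed weights, the decoupled objective in Eq. (2) reduces to a probability-weighted sum of the local losses. The following minimal sketch illustrates this reading; the function name `fedec_objective` is ours, not from the paper:

```python
def fedec_objective(local_losses, pi):
    # Eq. (2): expected loss over K clients, where pi[k] is the
    # probability that a sample comes from client k's distribution.
    assert abs(sum(pi) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(p * l for p, l in zip(pi, local_losses))
```

With uniform dataset fractions this recovers the conventional objective of Eq. (1); the skewed weights are what adapt it to non-IID clients.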
}, { "heading": "3.2.1 GRADIENT AGGREGATION FOR CONV AND FC LAYERS", "text": "Since the parameters of Conv and FC layers are neural network weights which are updated by distributed gradient descent method (Nesterov (1983)), they are appropriate to be aggregated with a similar approach that adapts conventional federated average for non-IID datasets. Let gtk = W t−1,k NN∗ − W t−1,k NN (k = 1, . . . ,K), where NN∗ indicates NN parameters after full local training. be the gradient of the k-th client in the t-th training epoch. After receiving the gradients from K clients, the central server update the parameters of global model as follows.\nWtNN = W t−1 NN − β K∑ k=1 πtkg t k, (3)\nwhere β is the learning rate for parameter update, πtk (k = 1, . . . ,K) are the probabilistic weights with ∑K k=1 π t k = 1 that are derived in section 3.2.3." }, { "heading": "3.2.2 PARAMETER AGGREGATION FOR MEANS AND VARIANCES IN BN", "text": "Different from the Conv and FC layers, the BN layers mainly contain statistical parameters such as mean and variance. Conventional federated learning aggregates BN layers and other layers without distinction, which could lead to high bias of means and variances in BN layer of the global model. Thus we propose a different way to aggregate means and variances in BN layer as follows.\nIn t-th training epoch, the means and variances Wtmean,W t var in BN layer, which are updated by:\nWtmean = K∑ k=1 πtkW t,k mean, (4)\nWtvar = 1\n|x| −K K∑ k=1 (|xk| − 1)πtkWt,kvar, (5)\nwhere Wt,kmean and W t,k var indicate the means and variances in BN layers of the k-th client in epoch t; πtk (k = 1, . . . ,K) are probabilistic weights with ∑K k=1 π t k = 1 that are derived in section 3.2.3.\nIn the above equations, we update the mean with the weighted average of local models, and update the variance with the weighted pooled variance (Killeen (2005)), which can give an unbias estimation of parameters of the whole dataset under non-IID conditions (see Appendix A.2)." 
}, { "heading": "3.2.3 DERIVATION OF PROBABILISTIC WEIGHTS", "text": "We adopt a mixture probabilistic model to describe non-IID datasets in federated learning. Without loss of generality, in the t-th training epoch, we assume the mini-batch samples of each client follows a Gaussian distribution Nk(µk, σk) (k = 1, . . . ,K), where µk, σk are the mean and standard deviation of the distribution that vary among clients. We omit the upper script t for simplicity thereafter. The whole samples can be described as a Gaussian Mixture Model (GMM) with the following probability function:\np(x|λ) = K∑\nk=1\nπkp(xk|µk, σk), (6)\nwhere λ = {πk, µk, σk | k = 1, 2, · · · ,K} are the parameters of the GMM model1. 1Noted that the proposed variational inference method can be applied to other non-Gaussian distributions with slight modification.\nIn federated learning, the local data samples are accessed by particular client and the central server can only observe the statistics of local dataset such as mean and standard variance. Without knowing the overall samples, conventional expectation-maximization (EM) algorithm (Dempster et al. (1977)) cannot be applied to derive λ. Alternatively, we introduce a variational inference method to estimate the parameters of λ.\n1\nSpecifically, we construct a variational Bayesian generative model to generate data that are close to the reported statistics of local models as possible, and use the generated data to estimate the GMM model parameters. The plate notions of the generative model are shown in Fig. 3. The notations are explained as follows.\n•stk = {Wt,kmean,Wt,kvar} is the observed statistics from the feature maps of k-th client.\n• ztk = {ztk,i|(i = 1, 2, · · · , C)} is a vector of latent variables with length C, where ztk,i ∈ [0, 1], ∑C i=1 z t k,i = 1, and C is the number of classes for a classification task. 
$z^t_k$ can be viewed as a data distribution that represents the probabilities of a sample of client $k$ belonging to each class.
• $\theta = \{\theta_k\}$ are the generative model parameters, and $\phi = \{\phi_k\}$ are the variational parameters.
The solid lines in Fig. 3 denote the generative model $p_{\theta_k}(z^t_k)\, p_{\theta_k}(s^t_k|z^t_k)$, and the dashed lines denote the variational approximation $q_{\phi_k}(z^t_k|s^t_k)$ to the intractable posterior $p_{\theta_k}(z^t_k|s^t_k)$. We approximate $p_{\theta_k}(z^t_k|s^t_k)$ by $q_{\phi_k}(z^t_k|s^t_k)$ by minimizing their divergence:
$$\phi^*_k, \theta^*_k = \arg\min_{\theta_k, \phi_k} \mathrm{divergence}\big(q_{\phi_k}(z^t_k|s^t_k)\, \|\, p_{\theta_k}(z^t_k|s^t_k)\big), \quad \text{s.t.} \;\; \sum_{i=1}^{C} z^t_{k,i} = 1. \qquad (7)$$
To derive the optimal values of $\phi_k$ and $\theta_k$, we decompose the marginal likelihood of $s^t_k$:
$$\log p(s^t_k) = D_{KL}\big(q_{\phi_k}(z^t_k|s^t_k)\, \|\, p_{\theta_k}(z^t_k|s^t_k)\big) + \mathbb{E}_{q_{\phi_k}(z^t_k|s^t_k)}\left[\log \frac{p_{\theta_k}(z^t_k, s^t_k)}{q_{\phi_k}(z^t_k|s^t_k)}\right]. \qquad (8)$$
In Eq. 8, the first term is the KL-divergence (Joyce (2011)) between the approximate and posterior distributions; the second term is the ELBO (Evidence Lower BOund) on the marginal likelihood of the data of the $k$-th client.
Since $\log p(s^t_k)$ does not depend on $\phi_k$, minimizing the KL term in Eq. 7 is equivalent to maximizing the ELBO. To solve the problem, we rewrite the ELBO as:
$$\mathbb{E}_{q_{\phi_k}(z^t_k|s^t_k)}\left[\log \frac{p_{\theta_k}(z^t_k, s^t_k)}{q_{\phi_k}(z^t_k|s^t_k)}\right] = \underbrace{\mathbb{E}_{q_{\phi_k}(z^t_k|s^t_k)}\left[\log \frac{p(z^t_k)}{q_{\phi_k}(z^t_k|s^t_k)}\right]}_{\text{Encoder}} + \underbrace{\mathbb{E}_{q_{\phi_k}(z^t_k|s^t_k)}\left[\log p_{\theta_k}(s^t_k|z^t_k)\right]}_{\text{Decoder}}. \qquad (9)$$
This form has a variational encoder-decoder structure: $q_{\phi_k}(z^t_k|s^t_k)$ can be viewed as a probabilistic encoder that, given observed statistics $s^t_k$, produces a distribution over the possible values of the latent variables $z^t_k$; $p_{\theta_k}(s^t_k|z^t_k)$ can be regarded as a probabilistic decoder that reconstructs $s^t_k$ from the code $z^t_k$. According to the theory of variational inference (Kingma & Welling (2014)), the problem in Eq.
9 can be solved by stochastic gradient descent (SGD), using a fully-connected neural network to optimize a mean-squared-error loss.
With the derived optimal parameters $\phi^*_k, \theta^*_k$, we can extract the latent variables $z^t_k$, interpreted as the sample distribution of client $k$. Therefore $z^t_k$ can be used to infer the parameters $(\pi_k, \mu_k, \sigma_k)$ of the $k$-th component of the GMM. Specifically, the probabilistic weights are given by
$$\pi^t_k = \left\{\sum_{i=1}^{C} \frac{z^t_{k,i}}{\sum_{j=1}^{K} z^t_{j,i}}\right\} \Big/ \left\{\sum_{k=1}^{K} \sum_{i=1}^{C} \frac{z^t_{k,i}}{\sum_{j=1}^{K} z^t_{j,i}}\right\}. \qquad (10)$$" }, { "heading": "4 CONVERGENCE ANALYSIS", "text": "In this section, we show that the convergence of the proposed FeDEC algorithm is theoretically guaranteed. We use the following assumptions and lemmas; the convergence guarantee is given in Theorem 1.
Assumption 1 (Unbiased Gradient): We assume that the stochastic gradient $g^t_i$ is an unbiased estimator of the true gradient $\nabla f(w^t_i)$, i.e., $\mathbb{E}[g^t_i] = \nabla f(w^t_i)$, where $f(\cdot)$ is a convex objective function and $w^t_i$ are its variables.
Assumption 2 (Gradient Convex Set): We assume that the gradient set $G$ is convex: all gradients $g_1, g_2, \ldots, g_K$ lie in $G$, and any $g = \sum_{i=1}^{K} \lambda_i g_i$ (with $\lambda_i > 0$ and $\sum_{i=1}^{K} \lambda_i = 1$) lies in $G$.
Lemma 1 (L-Lipschitz Continuity): A function $f(\cdot)$ is Lipschitz continuous if there exists a positive real constant $L$ such that, for all real $x_1$ and $x_2$: $|f(x_1) - f(x_2)| \le L|x_1 - x_2|$.
Lemma 2 (Jensen's Inequality): If $f(w)$ is a convex function on $\mathcal{W}$, and $\mathbb{E}[f(w)]$ and $f(\mathbb{E}[w])$ are finite, then $\mathbb{E}[f(w)] \ge f(\mathbb{E}[w])$.
Definition 1 (Projection Operation): Given an intermediate optimization result $w_*$, the projection operator $\prod_{\mathcal{W}}(w_*)$ projects $w_*$ onto the domain $\mathcal{W}$: $\prod_{\mathcal{W}}(w_*) = \arg\min_{w \in \mathcal{W}} ||w - w_*||$.
Definition 2 (Diameter of Domain): Given a function $f(w)$ with domain $\mathcal{W}$, the diameter $\Gamma$ of $\mathcal{W}$ satisfies $||w_1 - w_2|| \le \Gamma$ for every $w_1, w_2 \in \mathcal{W}$.
Theorem 1 (Guaranteed Convergence Rate): If a convex function $f(w)$ is $L$-Lipschitz continuous, then $||\nabla f(w)|| \le L$. Let $\Gamma$ be the diameter of the domain. Applying equations (3)(4)(5) for gradient aggregation, the proposed FeDEC algorithm satisfies the convergence bound
$$f(\bar{w}^T) - \min_{w \in \mathcal{W}} f(w) \le O\left(\frac{\Gamma^2}{2\beta T} + \frac{\beta}{2} L^2\right), \qquad (11)$$
where $\bar{w}^T$ is the average of $w$ over the $T$ training epochs and $\beta$ is the learning rate in equation (3). If we set $\beta = \frac{\Gamma}{L\sqrt{T}}$, the convergence rate is $O(\frac{1}{\sqrt{T}})$.
Proof skeleton: We sketch the proof of Theorem 1 in the following steps. (1) Since $f(\cdot)$ is convex, $f(w^t) - f(w) \le \nabla f(w^t)^\top (w^t - w)$. (2) With Assumptions 1 and 2, $f(w^t) - f(w) \le \frac{1}{2\beta}(||w^t - w||^2 - ||w^{t+1}_* - w||^2) + \frac{\beta}{2}||\nabla f(w^t)||^2$, where $w^{t+1}_*$ is the intermediate result at update step $t+1$. (3) With Lemma 1 and Definition 1, projecting $w^{t+1}_*$ onto $w^{t+1}$ gives $f(w^t) - f(w) \le \frac{1}{2\beta}(||w^t - w||^2 - ||w^{t+1} - w||^2) + \frac{\beta}{2}L^2$. (4) Summing from $t = 1$ to $T$ and using Definitions 1 and 2, $\sum_{t=1}^{T} f(w^t) - Tf(w) \le \frac{1}{2\beta}\Gamma^2 + \frac{\beta}{2}L^2 T$. (5) By Lemma 2, $f(\bar{w}^T) - f(w) \le \frac{\Gamma^2}{2\beta T} + \frac{\beta}{2}L^2$. (6) Taking $\beta = \Gamma/(L\sqrt{T})$ yields the stated convergence rate. The detailed proof and explanations are provided in Appendix A.1.
According to Theorem 1, the FeDEC parameter aggregation algorithm is guaranteed to converge, and its convergence rate matches that of general stochastic gradient descent, depending only on the number of training epochs $T$ up to a constant. The constant depends on problem parameters such as the Lipschitz constant $L$ and the domain diameter $\Gamma$." }, { "heading": "5 PERFORMANCE EVALUATION", "text": "In this section, we evaluate the performance of the proposed FeDEC method for federated learning." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Implementation.
We implement the proposed FeDEC parameter aggregation approach and the considered baselines in PyTorch (Paszke et al. (2019)). We train the models in a simulated federated learning environment consisting of one server and a set of mobile clients with wireless network connections. Unless explicitly specified, the default number of clients is 20 and the learning rate is $\beta = 0.01$. We conduct the experiments on a GPU-equipped personal computer (CPU: Intel Core i7-8700 3.2GHz, GPU: Nvidia GeForce RTX 2070, Memory: 32GB DDR4 2666MHz, OS: 64-bit Ubuntu 16.04).
Models and datasets. We conduct experiments with 5 mainstream neural network models: ResNet18 (He et al. (2016)), LeNet (Lecun et al. (1998)), DenseNet121 (Huang et al. (2017)), MobileNetV2 (Sandler et al. (2018)), and a 4-layer CNN (every CNN layer followed by a BN layer). The detailed structures of the CNN models are provided in Appendix A.3.
We use three real-world datasets: MNIST (LeCun et al. (2010)), Fashion-MNIST (Xiao et al. (2017)), and CIFAR-10 (Krizhevsky (2009)). MNIST is a handwritten-digit classification dataset with 60000 samples, each a 28 × 28 greyscale image. Fashion-MNIST is a dataset intended to replace the original MNIST for benchmarking machine learning algorithms. CIFAR-10 is a larger dataset with 10 categories; each category has 5000 training images and 1000 validation images of size 32 × 32. For each dataset, we use 80% of the data for training and amalgamate the remaining data into a global test set.
We form non-IID local datasets as follows. Assume there are $C$ classes of samples in a dataset. Each client draws samples from the dataset with probability
$$pr(x) = \begin{cases} \eta \in [0, 1], & \text{if } x \in \text{class}_j, \\ \mathcal{N}(0.5, 1), & \text{otherwise,} \end{cases}$$
that is, a client draws samples from a particular class $j$ with a fixed probability $\eta$, and from the other classes with Gaussian-distributed probabilities.
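The sampling scheme can be sketched as follows; this is our illustrative simplification, in which the Gaussian-weighted choice among the other classes is replaced by a uniform draw, and the function name is hypothetical:

```python
import random

def draw_client_labels(num_samples, num_classes, j, eta, seed=0):
    # With probability eta pick the preferred class j; otherwise pick one
    # of the remaining classes (uniform stand-in for the N(0.5, 1) rule).
    rng = random.Random(seed)
    others = [c for c in range(num_classes) if c != j]
    return [j if rng.random() < eta else rng.choice(others)
            for _ in range(num_samples)]
```

Setting eta close to 1 concentrates a client on class j, reproducing the skew used in the experiments.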
The larger $\eta$ is, the more the client's samples concentrate on a particular class, and the more heterogeneous the local datasets are." }, { "heading": "5.2 PERFORMANCE COMPARISON", "text": "We compare the performance of FeDEC with 5 state-of-the-art methods: FedAvg (McMahan et al. (2017)), the RNN-based aggregator (Ji et al. (2019)), FedProx (Li et al. (2020a)), q-FedSGD (Li et al. (2020b)), and FedMA (Wang et al. (2020)). The results are analyzed as follows.
Convergence: In this experiment we study the convergence of all baselines and our algorithm by plotting training loss against communication epochs. Fig. 4 shows the result of ResNet18 on CIFAR-10. The loss of all algorithms stabilizes after a number of epochs. FeDEC clearly has the lowest loss among all algorithms, meaning that it converges faster than the baselines. Results for more CNN models on different datasets are shown in Appendix A.4.
Training Efficiency: In this experiment we study test accuracy versus wall-clock time while training a CNN model with federated learning. Fig. 5 shows the results of training ResNet18 on CIFAR-10. FeDEC reaches 0.8 accuracy after 18 minutes, while FedMA, FedProx, and FedAvg take 36 to 63 minutes to reach the same accuracy. FeDEC approaches 0.9 accuracy after 54 minutes, at which point the accuracy of the other algorithms is below 0.85. Results for more CNN models on different datasets are shown in Appendix A.5. This suggests that FeDEC trains much faster than the baseline algorithms and reaches high accuracy in a short time.
Parameter Bias: In this experiment we study the parameter bias of federated learning algorithms. Fig. 6 compares the KL-divergence between the BN means and variances of the global models produced by the different algorithms and those of the central model.
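The exact KL estimator for this comparison is not spelled out in the text; a natural choice, sketched below under the assumption that the BN statistics of each model are summarized by a univariate Gaussian, is the closed-form KL-divergence between two Gaussians:

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) in closed form.
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
```

It is zero only when the two Gaussians coincide, and grows as the aggregated BN statistics drift away from the centrally-trained ones.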
It is shown that FedAvg, FedProx, and q-FedSGD have exceptionally high parameter bias, while FeDEC has significantly lower KL-divergence than the baselines for different CNN models on different datasets.
Global Model Accuracy: In this experiment, we compare the global model accuracy of the different federated parameter aggregation algorithms after training to convergence. We repeat the experiment for 20 rounds and show the average results in Table 1. As shown in the table, the central method yields the highest accuracy. Among the federated learning methods, FeDEC significantly outperforms the other algorithms in global model accuracy. It beats the state-of-the-art method FedMA by 2.87%, 3.17%, 2.58%, and 3.09% in ResNet18, DenseNet121, MobileNetV2, and the 4-layer CNN respectively on CIFAR-10, by 1.09% in LeNet on F-MNIST, and by 0.33% in LeNet on MNIST. FeDEC achieves the highest accuracy among all baselines and performs very close to the centralized method, with an accuracy drop of less than 3% in all cases.
Hyperparameter Analysis: We further analyze the influence of two hyperparameters in federated learning: the number of clients involved and the heterogeneity of the local datasets.
Fig. 7 compares the test accuracy of the global model for different numbers of involved clients. According to the figure, the performance of FeDEC is stable: when the number of mobile clients increases from 5 to 20, the test accuracy decreases only slightly, from 0.909 to 0.893, while the other baseline algorithms suffer significant performance drops. FeDEC achieves the highest test accuracy among all federated learning algorithms in all cases, and it performs very close to the central model.
In the experiments, the heterogeneity of the local datasets is represented by $\eta$, the probability that a client samples from a particular class. The closer $\eta$ is to 1, the more heterogeneous the local datasets are. Fig.
8 shows the test accuracy under different levels of heterogeneity. As $\eta$ increases, the test accuracy of all models decreases. FeDEC yields the highest test accuracy among all algorithms, and its performance degrades much more slowly than the baselines. This verifies the effectiveness of the proposed probabilistic-weighted gradient aggregation approach under non-IID conditions." }, { "heading": "6 CONCLUSION", "text": "Gradient aggregation plays an important role in forming the global model in federated learning. To address the problem of data and parameter bias in federated learning on non-IID datasets, we proposed a novel probabilistic parameter aggregation method called FeDEC that decouples gradient parameters and statistical parameters and aggregates them separately. The probabilistic weights are optimized with variational inference, and the proposed method is proven to converge. Extensive experiments showed that FeDEC significantly outperforms the state of the art on a variety of performance metrics." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF CONVERGENCE GUARANTEE (THEOREM 1 IN SECTION 4)
We provide the detailed proof of Theorem 1 of Section 4, first restating the necessary equations and the theorem.
In Section 3, we proposed the following gradient descent update:
$$\mathbf{W}^t_{NN} = \mathbf{W}^{t-1}_{NN} - \beta \sum_{k=1}^{K} \pi^t_k g^t_k, \qquad (3)$$
where $\beta$ is the learning rate, and the following batch normalization updates:
$$\mathbf{W}^t_{mean} = \sum_{k=1}^{K} \pi^t_k \mathbf{W}^{t,k}_{mean}, \qquad (4)$$
$$\mathbf{W}^t_{var} = \frac{1}{|\mathbf{x}| - K} \sum_{k=1}^{K} (|x_k| - 1)\, \pi^t_k \mathbf{W}^{t,k}_{var}. \qquad (5)$$
We restate the assumptions, lemmas, definitions, and the theorem of Section 4:
Assumption 1 (Unbiased Gradient): We assume that the stochastic gradient $g^t_i$ is an unbiased estimator of the true gradient $\nabla f(w^t_i)$, i.e., $\mathbb{E}[g^t_i] = \nabla f(w^t_i)$, where $f(\cdot)$ is a convex objective function and $w^t_i$ are its variables.
Assumption 2 (Gradient Convex Set): We assume that the gradient set $G$ is convex: all gradients $g_1, g_2, \ldots$
,gK are in G, and any g = ∑K i=1 λigi (∀λi > 0 and ∑K i=1 λi = 1) is in G.\nLemma 1 (L-Lipschitz Continuity): For a function f(·) is Lipschitz continuous if there exists a positive real constant L such that, for all real x1 and x2:\n|f(x1)− f(x2)| ≤ L|x1 − x2|.\nLemma 2 (Jensen’s Inequality): If f(w) is a convex function on W , and E[f(w)] and f(E[w]) are finite, then:\nE[f(w)] ≥ f(E[w])). Definition 1 (Projection Operation): Assume w∗ is an intermediate result of optimization, we\ndefine a project operator ∏\nW(w∗) to project w∗ to the domain W , which is computed by:∏ W (w∗) = arg min w∈W ||w −w∗||.\nDefinition 2 (Diameter of Domain): Given a function f(w), where w ∈ W , and W is f ’s domain of definition. The diameter of W is denoted by Γ: for every w1,w2 ∈ W: ||w1 −w2|| ≤ Γ. Theorem 1 (Guaranteed Convergence Rate): If a convex function f(w) is L-Lipschitz continuous function, then ||∇f(w)|| ≤ L. Let Γ be the diameter of domain. Applying equations (3)(4)(5) for gradients aggregation, we have the following convergence rate for the proposed FeDEC algorithm:\nf(w̄T )− min w∈W\nf(w) ≤ O( Γ 2\n2βT +\nβ 2 L2), (11)\nwhere w̄T is the average result of w for total training epoch T , β is the learning rate in equation-(3), and T is the total training epoch. If we let β = Γ\nL √ T , the convergence rate is O( 1√ T ).\nProof: To simplify the analysis, we consider fixed learning rate β. The proof includes the following steps:\n(1) According to the definition of convex function,\nf(wt)− f(w) ≤ ∇f(wt)(wt −w).\n(2) We define G(w) = ∇fT (wt)(wt−w), and gt = ∑K\nk=1 πkg t k. The intermediate result of f(w)\nin update time t+ 1 is denoted by wt+1∗ . 
With Assumption 1 and Assumption 2, we have:\n$G(\mathbf{w}) = \frac{1}{\beta} (\mathbf{w}^t - \mathbf{w}^{t+1}_*)^\top (\mathbf{w}^t - \mathbf{w})$\n$= \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1}_* - \mathbf{w}\|^2 + \|\mathbf{w}^t - \mathbf{w}^{t+1}_*\|^2 \right)$\n$= \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1}_* - \mathbf{w}\|^2 \right) + \frac{\beta}{2} \|\nabla f(\mathbf{w}^t)\|^2$\n$= \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1}_* - \mathbf{w}\|^2 \right) + \frac{\beta}{2} \|\mathbf{g}^t\|^2$.\nSo we have:\n$f(\mathbf{w}^t) - f(\mathbf{w}) \leq \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1}_* - \mathbf{w}\|^2 \right) + \frac{\beta}{2} \|\mathbf{g}^t\|^2$.\n(3) We project $\mathbf{w}^{t+1}_*$ onto $\mathcal{W}$ to obtain $\mathbf{w}^{t+1}$. With Definition 1 and the non-expansive property of the projection onto a convex set, we have:\n$G(\mathbf{w}) \leq \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1} - \mathbf{w}\|^2 \right) + \frac{\beta}{2} \|\mathbf{g}^t\|^2$.\nDue to L-Lipschitz continuity (Lemma 1), we have:\n$G(\mathbf{w}) \leq \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1} - \mathbf{w}\|^2 \right) + \frac{\beta}{2} L^2$.\nSo we have:\n$f(\mathbf{w}^t) - f(\mathbf{w}) \leq \frac{1}{2\beta} \left( \|\mathbf{w}^t - \mathbf{w}\|^2 - \|\mathbf{w}^{t+1} - \mathbf{w}\|^2 \right) + \frac{\beta}{2} L^2$.\n(4) By Definition 2, summing over all t from 1 to T and telescoping, we have:\n$\sum_{t=1}^{T} f(\mathbf{w}^t) - T f(\mathbf{w}) \leq \frac{1}{2\beta} \left( \|\mathbf{w}^1 - \mathbf{w}\|^2 - \|\mathbf{w}^{T+1} - \mathbf{w}\|^2 \right) + \frac{\beta}{2} L^2 T \leq \frac{1}{2\beta} \|\mathbf{w}^1 - \mathbf{w}\|^2 + \frac{\beta}{2} L^2 T \leq \frac{\Gamma^2}{2\beta} + \frac{\beta}{2} L^2 T$.\n(5) By Jensen’s inequality (Lemma 2), we have:\n$f(\bar{\mathbf{w}}^T) - f(\mathbf{w}) = f\left( \frac{1}{T} \sum_{t=1}^{T} \mathbf{w}^t \right) - f(\mathbf{w}) \leq \frac{1}{T} \sum_{t=1}^{T} f(\mathbf{w}^t) - f(\mathbf{w}) \leq \frac{\Gamma^2}{2\beta T} + \frac{\beta}{2} L^2$.\nWe thus obtain the result:\n$f(\bar{\mathbf{w}}^T) - \min_{\mathbf{w} \in \mathcal{W}} f(\mathbf{w}) \leq O\left( \frac{\Gamma^2}{2\beta T} + \frac{\beta}{2} L^2 \right)$.\n(6) Taking $\beta = \frac{\Gamma}{L\sqrt{T}}$, the right-hand side becomes\n$\frac{\Gamma^2}{2\beta T} + \frac{\beta}{2} L^2 = \frac{\Gamma^2 L \sqrt{T}}{2 \Gamma T} + \frac{\Gamma}{2 L \sqrt{T}} L^2 = \frac{\Gamma L}{\sqrt{T}}$.\nTherefore we obtain the simplified expression of the convergence bound, $O(\frac{1}{\sqrt{T}})$.\nA.2 EXPLANATION OF UNBIASED PARAMETER AGGREGATION IN SECTION 3.2.2\nWe compute the expectation of the aggregated parameters $\mathbf{W}^t_{mean}$ and $\mathbf{W}^t_{var}$ in Section 3.2.2 as follows.\n$\mathbb{E}[\mathbf{W}^t_{mean}] = \mathbb{E}\left[ \sum_{k=1}^{K} \pi_k \mathbf{W}^{t,k}_{mean} \right] = \sum_{k=1}^{K} \pi_k \mathbb{E}\left[ \mathbf{W}^{t,k}_{mean} \right]$\n$\mathbb{E}[\mathbf{W}^t_{var}] = \mathbb{E}\left[ \frac{1}{\|X\| - K} \sum_{k=1}^{K} (\|x_k\| - 1) \pi_k \mathbf{W}^{t,k}_{var} \right] = \frac{1}{\|X\| - K} \sum_{k=1}^{K} (\|x_k\| - 1) \pi_k \mathbb{E}\left[ \mathbf{W}^{t,k}_{var} \right]$\nAccording to the above equations, if the parameters of the local models $\mathbf{W}^{t,k}_{mean}$ and $\mathbf{W}^{t,k}_{var}$ are unbiased, then the aggregated model parameters are 
unbiased as well.\nA.3 STRUCTURE OF THE NEURAL NETWORK MODELS IN SECTION 5\nHere we report the detailed model structures used in the experiments. We use the LeNet shown in Table 2 and the 4-layer CNN model shown in Table 3. We adopt a slim ResNet18 as shown in Table 4, where “Conv2d” is a convolution layer, “BatchNorm2d” is a batch normalization layer, and “Linear” is a fully-connected layer. We can observe that every convolution layer is followed by a batch normalization (BN) layer. For all models, we use a ReLU layer after every Conv2d layer. The structures of DenseNet121 and MobileNetV2 can be found on GitHub.\nFor the language model, we consider the sentiment analysis task on tweets from Sentiment140 with a 2-layer BiLSTM. The BiLSTM binary classifier contains 256 hidden units with pretrained 100-dimensional GloVe embeddings. Each Twitter account corresponds to a device.\nA.4 CONVERGENCE OF FEDERATED LEARNING ALGORITHMS FOR DIFFERENT MODELS ON DIFFERENT DATASETS\nA.5 TRAINING EFFICIENCY OF FEDERATED LEARNING ALGORITHMS FOR DIFFERENT MODELS ON DIFFERENT DATASETS\nAlgorithm 1: FeDEC Aggregation\n1 Server\n2 for t = 1 to T do\n3 Transmit $\mathbf{W}^t$ to all clients.\n4 Receive $\mathbf{g}^t$, $\hat{\mu}$, $\hat{\sigma}$ from all clients.\n5 Infer $\pi^t_k$ with Eq. (9) based on $\hat{\mu}$, $\hat{\sigma}$ from $\mathbf{W}^t_{mean}$, $\mathbf{W}^t_{var}$.\n6 Update $\mathbf{W}^{t-1}_{NN}$ with Eq. (3) to get $\mathbf{W}^t_{NN}$.\n7 Aggregate $\mathbf{W}^t_{mean}$ with Eq. (4) and $\mathbf{W}^t_{var}$ with Eq. (5).\n8 Combine $\mathbf{W}^t_{NN}$, $\mathbf{W}^t_{mean}$ and $\mathbf{W}^t_{var}$ into the new model $\mathbf{W}^t$.\n9 Client\n10 for t = 1 to T do\n11 Receive the server model $\mathbf{W}^t$.\n12 Train the local model $\mathbf{W}^t$ on the local dataset to get the local gradients $\mathbf{g}^t$.\n13 Transmit $\mathbf{g}^t$ and $\hat{\mu}$, $\hat{\sigma}$ to the server.\n14 Stop training once the final server model $\mathbf{W}^t$ is received." } ]
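As a quick numerical sanity check of Theorem 1 (our toy sketch, not part of the paper): running projected subgradient descent on the convex 1-Lipschitz function f(w) = |w| over W = [−1, 1] (so Γ = 2, L = 1), with the step size β = Γ/(L√T) from the theorem and a subgradient in place of the gradient, keeps the optimality gap of the averaged iterate within the ΓL/√T bound.

```python
import numpy as np

# f(w) = |w| on W = [-1, 1]: convex, L = 1 Lipschitz, diameter Gamma = 2.
L, Gamma, T = 1.0, 2.0, 400
beta = Gamma / (L * np.sqrt(T))  # step size from Theorem 1, step (6)

w, iterates = 1.0, []
for _ in range(T):
    iterates.append(w)
    w = w - beta * np.sign(w)    # subgradient step (Eq. (3) with K = 1)
    w = np.clip(w, -1.0, 1.0)    # projection onto W (Definition 1)

w_bar = np.mean(iterates)        # averaged iterate, as in proof step (5)
gap = abs(w_bar)                 # f(w_bar) - min_w f(w), since min f = 0
assert gap <= Gamma * L / np.sqrt(T)
```

This only checks the form of the bound on one toy instance; it does not model the federated setting or the probabilistic weights π.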
2020
null
SP:43728b5763907cbe84f1c7ded63e5f63c45415c5
[ "This paper tackles the challenging question of how deep networks might learn to extrapolate knowledge outside the support of their training distribution. The paper contributes both novel theoretical arguments and empirical evidence collected on targeted cases. Unlike other recent approaches to the problem, the theoretical analyses presented here are non-asymptotic and provide precise information about the kind of functions that MLPs can learn in the proximity of the training region. Moreover, the authors provide compelling arguments about the need to explicitly encode (task-specific) non-linearities in the input representation and/or in the model architecture in order to promote successful extrapolation.", "This paper analyzes the extrapolation ability of MLPs and GNNs. In contrast to existing theoretical works that focus on the generalizability and capacity of these models, this paper emphasizes the behavior of the gradient-descent training algorithm. It uses the analogy to kernel regression via the neural tangent kernel to study the bias induced by the gradient descent algorithm. The presentation of this paper is clear and well-organized, with the most significant result shown in the first section, raising the readers' interest rather than burying them behind a massive amount of proofs. The contribution of this paper is significant as well, since it draws researchers' attention to theoretical analysis of the bias induced by the training procedure, as opposed to theoretical analysis of the model structure itself. Model extrapolation is also closely connected to topics such as meta-learning, multi-task learning, domain adaptation, and semi-supervised learning, since a model's extrapolation ability limits its performance when applied to other tasks. " ]
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs) – structured networks with MLP modules – have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But, they can provably learn a linear target function when the training distribution is sufficiently “diverse”. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
[ { "affiliations": [], "name": "Keyulu Xu" }, { "affiliations": [], "name": "Mozhi Zhang" }, { "affiliations": [], "name": "Jingling Li" }, { "affiliations": [], "name": "Simon S. Du" }, { "affiliations": [], "name": "Ken-ichi Kawarabayashi" }, { "affiliations": [], "name": "Stefanie Jegelka" } ]
[ { "authors": [ "Kartik Ahuja", "Jun Wang", "Amit Dhurandhar", "Karthikeyan Shanmugam", "Kush R. Varshney" ], "title": "Empirical or invariant risk minimization? a sample complexity perspective", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding deep neural networks with rectified linear units", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Simon Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S. 
Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang", "Dingli Yu" ], "title": "Harnessing the power of infinitely wide deep nets on small-data tasks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jimmy Ba", "Murat Erdogdu", "Taiji Suzuki", "Denny Wu", "Tianzong Zhang" ], "title": "Generalization of two-layer neural networks: An asymptotic viewpoint", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Rolf W Banz" ], "title": "The relationship between return and market value of common stocks", "venue": "Journal of financial economics,", "year": 1981 }, { "authors": [ "Etienne Barnard", "LFA Wessels" ], "title": "Extrapolation and interpolation in neural network classifiers", "venue": "IEEE Control Systems Magazine,", "year": 1992 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Richard Bellman" ], "title": "On a routing problem", "venue": "Quarterly of applied mathematics,", "year": 1958 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Alberto Bietti", "Julien Mairal" ], "title": "On the inductive bias of neural tangent kernels", "venue": "In Advances in Neural Information Processing 
Systems,", "year": 2019 }, { "authors": [ "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman" ], "title": "Learning bounds for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Yuan Cao", "Quanquan Gu" ], "title": "Generalization bounds of stochastic gradient descent for wide and deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Lenaic Chizat", "Francis Bach" ], "title": "On the global convergence of gradient descent for over-parameterized models using optimal transport", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Edouard Oyallon", "Francis Bach" ], "title": "On lazy training in differentiable programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gabriele Corso", "Luca Cavalleri", "Dominique Beaini", "Pietro Liò", "Petar Veličković" ], "title": "Principal neighbourhood aggregation for graph nets", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Simon Du", "Jason Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Simon S. Du", "Jason D. Lee" ], "title": "On the power of over-parametrization in neural networks with quadratic activation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Simon S Du", "Kangcheng Hou", "Russ R Salakhutdinov", "Barnabas Poczos", "Ruosong Wang", "Keyulu Xu" ], "title": "Graph neural tangent kernel: Fusing graph neural networks with graph kernels", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Simon S. Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Eugene F Fama", "Kenneth R French" ], "title": "Common risk factors in the returns on stocks and bonds", "venue": "Journal of financial economics,", "year": 1993 }, { "authors": [ "Ken-Ichi Funahashi" ], "title": "On the approximate realization of continuous mappings by neural networks", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Behrooz Ghorbani", "Song Mei", "Theodor Misiakiewicz", "Andrea Montanari" ], "title": "Linearized two-layers neural networks in high dimension", "venue": "arXiv preprint arXiv:1904.12191,", "year": 2019 }, { 
"authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Joel Goh", "Melvyn Sim" ], "title": "Distributionally robust optimization and its tractable approximations", "venue": "Operations research,", "year": 2010 }, { "authors": [ "Pamela J Haley", "DONALD Soloway" ], "title": "Extrapolation limitations of multilayer feedforward neural networks", "venue": "In International Joint Conference on Neural Networks,", "year": 1992 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "Complexity of linear regions in deep networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko", "Julian Bitterwolf" ], "title": "Why relu networks yield highconfidence predictions far away from the training data and how to mitigate the problem", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Xiaoyuan Liu", "Eric Wallace", "Adam Dziedzic", "Rishabh Krishnan", "Dawn Song" ], "title": "Pretrained transformers improve out-of-distribution robustness", "venue": "In Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer 
feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Weihua Hu", "Bowen Liu", "Joseph Gomes", "Marinka Zitnik", "Percy Liang", "Vijay Pande", "Jure Leskovec" ], "title": "Strategies for pre-training graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Michael Janner", "Sergey Levine", "William T. Freeman", "Joshua B. Tenenbaum", "Chelsea Finn", "Jiajun Wu" ], "title": "Reasoning about physical interactions with object-centric models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Judy Hoffman", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Inferring and executing programs for visual reasoning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Vera Kurkova" ], "title": "Kolmogorov’s theorem and multilayer neural networks", "venue": "Neural networks,", "year": 1992 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Guillaume Lample", "Franois Charton" ], "title": "Deep learning for symbolic mathematics", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alan Lapedes", "Robert Farber" ], "title": "Nonlinear signal processing using neural networks: Prediction and system modelling", "venue": "Technical report,", "year": 1987 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], 
"title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Roi Livni", "Shai Shalev-Shwartz", "Ohad Shamir" ], "title": "On the computational efficiency of training neural networks. In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Andreas Madsen", "Alexander Rosenberg Johansen" ], "title": "Neural arithmetic units", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hartmut Maennel", "Olivier Bousquet", "Sylvain Gelly" ], "title": "Gradient Descent Quantizes ReLU Network Features", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation: Learning bounds and algorithms", "venue": "In Conference on Learning Theory,", "year": 2009 }, { "authors": [ "Jiayuan Mao", "Chuang Gan", "Pushmeet Kohli", "Joshua B. 
Tenenbaum", "Jiajun Wu" ], "title": "The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David B McCaughan" ], "title": "On the properties of periodic perceptrons", "venue": "In International Conference on Neural Networks,", "year": 1997 }, { "authors": [ "Tomas Mikolov", "Quoc V Le", "Ilya Sutskever" ], "title": "Exploiting similarities among languages for machine translation", "venue": "arXiv preprint arXiv:1309.4168,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Atsushi Nitanda", "Taiji Suzuki" ], "title": "Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jiri Hron", "Jaehoon Lee", "Alexander A. Alemi", "Jascha Sohl-Dickstein", "Samuel S. 
Schoenholz" ], "title": "Neural tangents: Fast and easy infinite neural networks in python", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Mateo Rojas-Carulla", "Bernhard Schölkopf", "Richard Turner", "Jonas Peters" ], "title": "Invariant models for causal transfer learning", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Elan Rosenfeld", "Pradeep Kumar Ravikumar", "Andrej Risteski" ], "title": "The risks of invariant risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Stephen A Ross" ], "title": "The arbitrage theory of capital asset pricing", "venue": "Journal of Economic Theory,", "year": 1976 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B. 
Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Adam Santoro", "Felix Hill", "David Barrett", "Ari Morcos", "Timothy Lillicrap" ], "title": "Measuring abstract reasoning in neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Pedro Savarese", "Itay Evron", "Daniel Soudry", "Nathan Srebro" ], "title": "How do infinite width bounded norm networks look in function space", "venue": "In Conference on Learning", "year": 2019 }, { "authors": [ "David Saxton", "Edward Grefenstette", "Felix Hill", "Pushmeet Kohli" ], "title": "Analysing mathematical reasoning abilities of neural models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mei Song", "Andrea Montanari", "P Nguyen" ], "title": "A mean field view of the landscape of two-layers neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "JM Sopena", "R Alquezar" ], "title": "Improvement of learning in recurrent networks by substituting the sigmoid activation function", "venue": "In International Conference on Artificial Neural Networks,", "year": 1994 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning 
Research,", "year": 2018 }, { "authors": [ "Matthew Staib", "Stefanie Jegelka" ], "title": "Distributionally robust optimization and generalization in kernel methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andrew Trask", "Felix Hill", "Scott E Reed", "Jack Rae", "Chris Dyer", "Phil Blunsom" ], "title": "Neural arithmetic logic units", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Leslie G Valiant" ], "title": "A theory of the learnable", "venue": "In Proceedings of the sixteenth annual ACM symposium on Theory of computing,", "year": 1984 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 2013 }, { "authors": [ "Petar Velickovic", "Rex Ying", "Matilde Padovano", "Raia Hadsell", "Charles Blundell" ], "title": "Neural execution of graph algorithms", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nicholas Watters", "Daniel Zoran", "Theophane Weber", "Peter Battaglia", "Razvan Pascanu", "Andrea Tacchetti" ], "title": "Visual interaction networks: Learning a physics simulator from video", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Taylor Webb", "Zachary Dulberg", "Steven Frankland", "Alexander Petrov", "Randall OReilly", "Jonathan Cohen" ], "title": "Learning representations that support extrapolation", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Francis Williams", "Matthew Trager", "Daniele Panozzo", "Claudio Silva", "Denis Zorin", "Joan Bruna" ], "title": "Gradient dynamics of shallow univariate relu networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shijie Wu", "Mark Dredze" ], "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness 
of bert", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Jingling Li", "Mozhi Zhang", "Simon S. Du", "Ken ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "What can neural networks reason about", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kexin Yi", "Jiajun Wu", "Chuang Gan", "Antonio Torralba", "Pushmeet Kohli", "Josh Tenenbaum" ], "title": "Neuralsymbolic vqa: Disentangling reasoning from vision and language understanding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Michelle Yuan", "Mozhi Zhang", "Benjamin Van Durme", "Leah Findlater", "Jordan Boyd-Graber" ], "title": "Interactive refinement of cross-lingual word embeddings", "venue": "In Proceedings of Empirical Methods in Natural Language Processing,", "year": 2020 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Mozhi Zhang", "Keyulu Xu", "Ken-ichi Kawarabayashi", "Stefanie Jegelka", "Jordan Boyd-Graber" ], "title": "Are girls neko or shōjo? 
cross-lingual alignment of non-isomorphic embeddings with iterative normalization", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Han Zhao", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Joao P Costeira", "Geoffrey J Gordon" ], "title": "Adversarial multiple source domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Han Zhao", "Remi Tachet Des Combes", "Kun Zhang", "Geoffrey Gordon" ], "title": "On learning invariant representations for domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kaiyang Zhou", "Yongxin Yang", "Yu Qiao", "Tao Xiang" ], "title": "Domain generalization with mixstyle", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Jacot" ], "title": "We prove Lemma 2 in Appendix B.6. To analyze the learned functions as the min-norm solution in feature space, we also need the explicit formula of an induced feature map of the corresponding neural tangent kernel. The following lemma gives a NTK feature space for two-layer MLPs with ReLU activation", "venue": null, "year": 2021 }, { "authors": [ "∼ N" ], "title": "Note that to ensure the βw is a well-defined number, here we can work with the polar representation and integrate with respect to an angle. Then βw is well-defined", "venue": "But for simplicity of exposition,", "year": 2021 }, { "authors": [ "Jacot" ], "title": "2018) have derived the general framework for computing the neural tangent kernel of a neural network with general architecture and activation function", "venue": "Following the framework in Jacot et al", "year": 2018 }, { "authors": [ "Jacot" ], "title": "2019b). Let σ denote the activation function. 
The neural tangent kernel for an h-layer multi-layer perceptron can be recursively defined via a dynamic programming process. Here, Σ : R × R → R for i = 0...h is the covariance for the i-th layer. Σ(0)(x,x′) = x>x", "venue": null, "year": 2019 }, { "authors": [ "Arora" ], "title": "ei,−ei}i=1. We then randomly sample 100 orthogonal transform matrices Q via the QR decomposition. Our training samples are QX , i.e., multiply each point inX by Q. This gives 100 training sets with 2d data points satisfying the condition in Lemma 1. We perform kernel regression on these training sets using a two-layer neural tangent kernel (NTK)", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans extrapolate well in many tasks. For example, we can apply arithmetics to arbitrarily large numbers. One may wonder whether a neural network can do the same and generalize to examples arbitrarily far from the training data (Lake et al., 2017). Curiously, previous works report mixed extrapolation results with neural networks. Early works demonstrate feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), fail to extrapolate well when learning simple polynomial functions (Barnard & Wessels, 1992; Haley & Soloway, 1992). However, recent works show Graph Neural Networks (GNNs) (Scarselli et al., 2009), a class of structured networks with MLP building blocks, can generalize to graphs much larger than training graphs in challenging algorithmic tasks, such as predicting the time evolution of physical systems (Battaglia et al., 2016), learning graph algorithms (Velickovic et al., 2020), and solving mathematical equations (Lample & Charton, 2020).\nTo explain this puzzle, we formally study how neural networks trained by gradient descent (GD) extrapolate, i.e., what they learn outside the support of training distribution. We say a neural network extrapolates well if it learns a task outside the training distribution. At first glance, it may seem that neural networks can behave arbitrarily outside the training distribution since they have high capacity (Zhang et al., 2017) and are universal approximators (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Kurkova, 1992). However, neural networks are constrained by gradient descent training (Hardt et al., 2016; Soudry et al., 2018). 
In our analysis, we explicitly consider such implicit bias through the analogy of the training dynamics of over-parameterized neural networks and kernel regression via the neural tangent kernel (NTK) (Jacot et al., 2018).\nStarting with feedforward networks, the simplest neural networks and building blocks of more complex architectures such as GNNs, we establish that the predictions of over-parameterized MLPs with ReLU activation trained by GD converge to linear functions along any direction from the origin. We prove a convergence rate for two-layer networks and empirically observe that convergence often occurs close to the training data (Figure 1), which suggests ReLU MLPs cannot extrapolate well for most nonlinear tasks. We emphasize that our results do not follow from the fact that ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019; Hein et al., 2019). While having finitely many linear regions implies ReLU MLPs eventually become linear, it does not say whether MLPs will learn the correct target function close to the training distribution. In contrast, our results are non-asymptotic and quantify what kind of functions MLPs will learn close to the training distribution. Second, we identify a condition when MLPs extrapolate well: the task is linear and the geometry of the training distribution is sufficiently “diverse”. To our knowledge, our results are the first extrapolation results of this kind for feedforward neural networks.\nWe then relate our insights into feedforward neural networks to GNNs, to explain why GNNs extrapolate well in some algorithmic tasks. Prior works report successful extrapolation for tasks that can be solved by dynamic programming (DP) (Bellman, 1966), which has a computation structure aligned with GNNs (Xu et al., 2020). DP updates can often be decomposed into nonlinear and linear steps. 
Hence, we hypothesize that GNNs trained by GD can extrapolate well in a DP task, if we encode appropriate non-linearities in the architecture and input representation (Figure 2). Importantly, encoding non-linearities may be unnecessary for GNNs to interpolate, because the MLP modules can easily learn many nonlinear functions inside the training distribution (Cybenko, 1989; Hornik et al., 1989; Xu et al., 2020), but it is crucial for GNNs to extrapolate correctly. We prove this hypothesis for a simplified case using Graph NTK (Du et al., 2019b). Empirically, we validate the hypothesis on three DP tasks: max degree, shortest paths, and n-body problem. We show GNNs with appropriate architecture, input representation, and training distribution can predict well on graphs with unseen sizes, structures, edge weights, and node features. Our theory explains the empirical success in previous works and suggests their limitations: successful extrapolation relies on encoding task-specific non-linearities, which requires domain knowledge or extensive model search. From a broader standpoint, our insights go beyond GNNs and apply broadly to other neural networks.\nTo summarize, we study how neural networks extrapolate. First, ReLU MLPs trained by GD converge to linear functions along directions from the origin with a rate of O(1/t). Second, to explain why GNNs extrapolate well in some algorithmic tasks, we prove that ReLU MLPs can extrapolate well in linear tasks, leading to a hypothesis: a neural network can extrapolate well when appropriate nonlinearities are encoded into the architecture and features. We prove this hypothesis for a simplified case and provide empirical support for more general settings." }, { "heading": "1.1 RELATED WORK", "text": "Early works show example tasks where MLPs do not extrapolate well, e.g. learning simple polynomials (Barnard & Wessels, 1992; Haley & Soloway, 1992). 
We instead show a general pattern of how ReLU MLPs extrapolate and identify conditions for MLPs to extrapolate well. More recent works study the implicit biases induced on MLPs by gradient descent, for both the NTK and mean field regimes (Bietti & Mairal, 2019; Chizat & Bach, 2018; Song et al., 2018). Related to our results, some works show MLP predictions converge to “simple” piecewise linear functions, e.g., with few linear regions (Hanin & Rolnick, 2019; Maennel et al., 2018; Savarese et al., 2019; Williams et al., 2019). Our work differs in that none of these works explicitly studies extrapolation, and some focus only on one-dimensional inputs. Recent works also show that in high-dimensional settings of the NTK regime, an MLP is asymptotically at most a linear predictor in certain scaling limits (Ba et al., 2020; Ghorbani et al., 2019). We study a different setting (extrapolation), and our analysis is non-asymptotic in nature and does not rely on random matrix theory.
Prior works explore GNN extrapolation by testing on larger graphs (Battaglia et al., 2018; Santoro et al., 2018; Saxton et al., 2019; Velickovic et al., 2020). We are the first to theoretically study GNN extrapolation, and we complete the notion of extrapolation to include unseen features and structures." }, { "heading": "2 PRELIMINARIES", "text": "We begin by introducing our setting. Let X be the domain of interest, e.g., vectors or graphs. The task is to learn an underlying function g : X → R with a training set {(x_i, y_i)}_{i=1}^n ⊂ D, where y_i = g(x_i) and D is the support of the training distribution. Previous works have extensively studied in-distribution generalization where the training and the test distributions are identical (Valiant, 1984; Vapnik, 2013); i.e., D = X. In contrast, extrapolation addresses predictions on a domain X that is larger than the support of the training distribution D. We will say a model extrapolates well if it has a small extrapolation error.
Definition 1.
(Extrapolation error). Let f : X → R be a model trained on {(x_i, y_i)}_{i=1}^n ⊂ D with underlying function g : X → R. Let P be a distribution over X \ D and let ℓ : R × R → R be a loss function. We define the extrapolation error of f as E_{x∼P}[ℓ(f(x), g(x))].
We focus on neural networks trained by gradient descent (GD) or its variants with squared loss. We study two network architectures: feedforward and graph neural networks.
Graph Neural Networks. GNNs are structured networks operating on graphs with MLP modules (Battaglia et al., 2018; Xu et al., 2019). Let G = (V,E) be a graph. Each node u ∈ V has a feature vector x_u, and each edge (u, v) ∈ E has a feature vector w_{(u,v)}. GNNs recursively compute node representations h_u^{(k)} at iteration k (Gilmer et al., 2017; Xu et al., 2018). Initially, h_u^{(0)} = x_u. For k = 1..K, GNNs update h_u^{(k)} by aggregating the neighbor representations. We can optionally compute a graph representation h_G by aggregating the final node representations. That is,
h_u^{(k)} = ∑_{v∈N(u)} MLP^{(k)}(h_u^{(k−1)}, h_v^{(k−1)}, w_{(v,u)}), h_G = MLP^{(K+1)}(∑_{u∈G} h_u^{(K)}). (1)
The final output is the graph representation h_G or the final node representations h_u^{(K)}, depending on the task. We refer to the neighbor aggregation step for h_u^{(k)} as aggregation and the pooling step in h_G as readout. Previous works typically use sum-aggregation and sum-readout (Battaglia et al., 2018). Our results indicate why replacing them may help extrapolation (Section 4)." }, { "heading": "3 HOW FEEDFORWARD NEURAL NETWORKS EXTRAPOLATE", "text": "Feedforward networks are the simplest neural networks and building blocks of more complex architectures such as GNNs, so we first study how they extrapolate when trained by GD. Throughout the paper, we assume ReLU activation. Section 3.3 contains preliminary results for other activations."
}, { "heading": "3.1 LINEAR EXTRAPOLATION BEHAVIOR OF RELU MLPS", "text": "By architecture, ReLU networks learn piecewise linear functions, but what do these regions precisely look like outside the support of the training data? Figure 1 illustrates examples of how ReLU MLPs extrapolate when trained by GD on various nonlinear functions. These examples suggest that outside the training support, the predictions quickly become linear along directions from the origin. We systematically verify this pattern by linear regression on MLPs’ predictions: the coefficient of determination (R^2) is always greater than 0.99 (Appendix C.2). That is, ReLU MLPs “linearize” almost immediately outside the training data range.
We formalize this observation using the implicit biases of neural networks trained by GD via the neural tangent kernel (NTK): optimization trajectories of over-parameterized networks trained by GD are equivalent to those of kernel regression with a specific neural tangent kernel, under a set of assumptions called the “NTK regime” (Jacot et al., 2018). We provide an informal definition here; for further details, we refer the readers to Jacot et al. (2018) and Appendix A.
Definition 2. (Informal) A neural network trained in the NTK regime is infinitely wide, randomly initialized with certain scaling, and trained by GD with infinitesimal steps.
Prior works analyze optimization and in-distribution generalization of over-parameterized neural networks via NTK (Allen-Zhu et al., 2019a;b; Arora et al., 2019a;b; Cao & Gu, 2019; Du et al., 2019c;a; Li & Liang, 2018; Nitanda & Suzuki, 2021). We instead analyze extrapolation.
Theorem 1 formalizes our observation from Figure 1: outside the training data range, along any direction tv from the origin, the prediction of a two-layer ReLU MLP quickly converges to a linear function with rate O(1/t). The linear coefficients β_v and the constant terms in the convergence rate depend on the training data and direction v.
The proof is in Appendix B.1. Theorem 1. (Linear extrapolation). Suppose we train a two-layer ReLU MLP f : R^d → R with squared loss in the NTK regime. For any direction v ∈ R^d, let x_0 = tv. As t → ∞, f(x_0 + hv) − f(x_0) → β_v · h for any h > 0, where β_v is a constant linear coefficient. Moreover, given ε > 0, for t = O(1/ε), we have |(f(x_0 + hv) − f(x_0))/h − β_v| < ε.
ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019), hence their predictions eventually become linear. In contrast, Theorem 1 is a more fine-grained analysis of how MLPs extrapolate and provides a convergence rate. While Theorem 1 assumes two-layer networks in the NTK regime, experiments confirm that the linear extrapolation behavior happens across networks with different depths, widths, learning rates, and batch sizes (Appendix C.1 and C.2). Our proof technique potentially also extends to deeper networks.
Theorem 1 implies which target functions a ReLU MLP may be able to match outside the training data: only functions that are almost-linear along the directions away from the origin. Indeed, Figure 4a shows ReLU MLPs do not extrapolate target functions such as x^⊤Ax (quadratic), ∑_{i=1}^d cos(2π·x^{(i)}) (cos), and ∑_{i=1}^d √(x^{(i)}) (sqrt), where x^{(i)} is the i-th dimension of x. With suitable hyperparameters, MLPs extrapolate the L1 norm correctly, which satisfies the directional linearity condition.
Figure 4a provides one more positive result: MLPs extrapolate linear target functions well, across many different hyperparameters. While learning linear functions may seem very limited at first, in Section 4 this insight will help explain extrapolation properties of GNNs in non-linear practical tasks. Before that, we first theoretically analyze when MLPs extrapolate well." }, { "heading": "3.2 WHEN RELU MLPS PROVABLY EXTRAPOLATE WELL", "text": "Figure 4a shows that MLPs can extrapolate well when the target function is linear. However, this is not always true.
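The directional linearity behind Theorem 1 can be checked directly on a small finite-width network (an illustrative sketch with sizes of our own choosing, not the NTK-regime setting of the theorem): far enough from the origin, the sign of every ReLU pre-activation along a ray is frozen, so discrete second differences of the output vanish, while close to the origin they generally do not.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width = 4, 256

# A randomly initialized two-layer ReLU MLP f(x) = a^T relu(W x + b).
W = rng.normal(size=(width, d))
b = rng.normal(size=width)
a = rng.normal(size=width) / np.sqrt(width)

def f(x):
    return a @ np.maximum(W @ x + b, 0.0)

v = rng.normal(size=d)
v /= np.linalg.norm(v)          # a random direction from the origin

def second_diff(t, h=0.5):
    # Discrete second difference (curvature) of t -> f(t v) along the ray.
    return f((t + h) * v) - 2.0 * f(t * v) + f((t - h) * v)

near = max(abs(second_diff(t)) for t in np.linspace(0.5, 2.0, 20))
far = max(abs(second_diff(t)) for t in np.linspace(1e4, 1e4 + 10, 20))
print(near, far)
```

Here `far` is zero up to floating-point error, since no ReLU changes sign that far out along the ray, whereas `near` picks up the kinks at training-data scale.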
In this section, we show that successful extrapolation depends on the geometry of training data. Intuitively, the training distribution must be “diverse” enough for correct extrapolation.
We provide two conditions that relate the geometry of the training data to extrapolation. Lemma 1 states that over-parameterized MLPs can learn a linear target function with only 2d examples. Lemma 1. Let g(x) = β^⊤x be the target function for β ∈ R^d. Suppose {x_i}_{i=1}^n contains an orthogonal basis {x̂_i}_{i=1}^d and {−x̂_i}_{i=1}^d. If we train a two-layer ReLU MLP f on {(x_i, y_i)}_{i=1}^n with squared loss in the NTK regime, then f(x) = β^⊤x for all x ∈ R^d.
Lemma 1 is mainly of theoretical interest, as the 2d examples need to be carefully chosen. Theorem 2 builds on Lemma 1 and identifies a more practical condition for successful extrapolation: if the support of the training distribution covers all directions (e.g., a hypercube that covers the origin), the MLP converges to a linear target function with sufficient training data. Theorem 2. (Conditions for extrapolation). Let g(x) = β^⊤x be the target function for β ∈ R^d. Suppose {x_i}_{i=1}^n is sampled from a distribution whose support D contains a connected subset S, where for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S. If we train a two-layer ReLU MLP f : R^d → R on {(x_i, y_i)}_{i=1}^n with squared loss in the NTK regime, f(x) → β^⊤x in probability as n → ∞.
Experiments: geometry of training data affects extrapolation. The condition in Theorem 2 formalizes the intuition that the training distribution must be “diverse” for successful extrapolation, e.g., D includes all directions. Empirically, the extrapolation error is indeed small when the condition of Theorem 2 is satisfied (“all” in Figure 4b).
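As an illustrative finite-width sanity check of Lemma 1 (a Monte-Carlo sketch, not the infinite-width statement itself), one can truncate the NTK feature map derived in Appendix A (Lemma 3) to finitely many random weights, fit the min-norm interpolant on the training set {±e_i}, and compare the prediction at a new point with β^⊤x. The width m, the test point, and the tolerance below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 2, 8192                      # input dim, number of random features

# Finite truncation of the two-layer ReLU NTK feature map (Lemma 3, Appendix A):
# phi(x) = c * (x * I[w^T x >= 0], (w^T x) * I[w^T x >= 0], ...), w ~ N(0, I).
Ws = rng.normal(size=(m, d))

def phi(x):
    act = (Ws @ x >= 0).astype(float)
    return np.concatenate([np.outer(act, x).ravel(), (Ws @ x) * act]) / np.sqrt(m)

beta = np.array([1.0, -0.5])
X = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])  # {±e_i}
y = X @ beta

Phi = np.stack([phi(x) for x in X])
alpha = np.linalg.solve(Phi @ Phi.T, y)      # kernel regression coefficients

def f(x):
    # Min-norm interpolant in feature space (equivalently, Eqn. 9 in Appendix A).
    return phi(x) @ Phi.T @ alpha

x_test = np.array([2.0, 1.0])                # outside the four training points
print(f(x_test), beta @ x_test)
```

The two printed values agree only up to Monte-Carlo error of order 1/√m; the exact identity f(x) = β^⊤x holds in the infinite-width limit of the lemma.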
In contrast, the extrapolation error is much larger when the training examples are restricted to only some directions (Figure 4b and Figure 3).
Relating to previous works, Theorem 2 suggests why spurious correlations may hurt extrapolation, complementing the causality arguments (Arjovsky et al., 2019; Peters et al., 2016; Rojas-Carulla et al., 2018). When the training data has spurious correlations, some combinations of features are missing; e.g., camels might only appear in deserts in an image collection. Therefore, the condition for Theorem 2 no longer holds, and the model may extrapolate incorrectly. Theorem 2 is also analogous to an identifiability condition for linear models, but stricter. We can uniquely identify a linear function if the training data has full (feature) rank. MLPs are more expressive, so identifying the linear target function requires additional constraints.
To summarize, we analyze how ReLU MLPs extrapolate and provide two insights: (1) MLPs cannot extrapolate most nonlinear tasks due to their linear extrapolation (Theorem 1); and (2) MLPs extrapolate well when the target function is linear, if the training distribution is “diverse” (Theorem 2). In the next section, these results help us understand how more complex networks extrapolate." }, { "heading": "3.3 MLPS WITH OTHER ACTIVATION FUNCTIONS", "text": "Before moving on to GNNs, we complete the picture of MLPs with experiments on other activation functions: tanh σ(x) = tanh(x), cosine σ(x) = cos(x) (Lapedes & Farber, 1987; McCaughan, 1997; Sopena & Alquezar, 1994), and quadratic σ(x) = x^2 (Du & Lee, 2018; Livni et al., 2014). Details are in Appendix C.4. MLPs extrapolate well when the activation and target function are similar; e.g., tanh activation extrapolates well when learning tanh, but not other functions (Figure 5). Moreover, each activation function has different limitations.
To extrapolate the tanh function with tanh activation, the training data range has to be sufficiently wide. When learning a quadratic function with quadratic activation, only two-layer networks extrapolate well, as more layers lead to higher-order polynomials. Cosine activations are hard to optimize for high-dimensional data, so we only consider one- or two-dimensional cosine target functions." }, { "heading": "4 HOW GRAPH NEURAL NETWORKS EXTRAPOLATE", "text": "Above, we saw that extrapolation in nonlinear tasks is hard for MLPs. Despite this limitation, GNNs have been shown to extrapolate well in some nonlinear algorithmic tasks, such as intuitive physics (Battaglia et al., 2016; Janner et al., 2019), graph algorithms (Battaglia et al., 2018; Velickovic et al., 2020), and symbolic mathematics (Lample & Charton, 2020). To address this discrepancy, we build on our MLP results and study how GNNs trained by GD extrapolate." }, { "heading": "4.1 HYPOTHESIS: LINEAR ALGORITHMIC ALIGNMENT HELPS EXTRAPOLATION", "text": "We start with an example: training GNNs to solve the shortest path problem. For this task, prior works observe that a modified GNN architecture with min-aggregation can generalize to graphs larger than those in the training set (Battaglia et al., 2018; Velickovic et al., 2020):
h_u^{(k)} = min_{v∈N(u)} MLP^{(k)}(h_u^{(k−1)}, h_v^{(k−1)}, w_{(v,u)}). (2)
We first provide an intuitive explanation (Figure 2a). Shortest path can be solved by the Bellman-Ford (BF) algorithm (Bellman, 1958) with the following update:
d[k][u] = min_{v∈N(u)} d[k−1][v] + w(v, u), (3)
where w(v, u) is the weight of edge (v, u), and d[k][u] is the shortest distance to node u within k steps. The two equations can be easily aligned: GNNs simulate the BF algorithm if their MLP modules learn the linear function d[k−1][v] + w(v, u).
Since MLPs can extrapolate linear tasks, this “alignment” may explain why min-aggregation GNNs can extrapolate well in this task.
For comparison, we can reason why we would not expect GNNs with the more commonly used sum-aggregation (Eqn. 1) to extrapolate well in this task. With sum-aggregation, the MLP modules need to learn a nonlinear function to simulate the BF algorithm, but Theorem 1 suggests that they will not extrapolate most nonlinear functions outside the training support.
We can generalize the above intuition to other algorithmic tasks. Many tasks where GNNs extrapolate well can be solved by dynamic programming (DP) (Bellman, 1966), an algorithmic paradigm with a recursive structure similar to GNNs’ (Eqn. 1) (Xu et al., 2020).
Definition 3. Dynamic programming (DP) is a recursive procedure with updates
Answer[k][s] = DP-Update({Answer[k−1][s′]}, s′ = 1, ..., n), (4)
where Answer[k][s] is the solution to a sub-problem indexed by iteration k and state s, and DP-Update is a task-specific update function that solves the sub-problem based on the previous iteration.
From a broader standpoint, we hypothesize: if we encode appropriate non-linearities into the model architecture and input representations so that the MLP modules only need to learn nearly linear steps, then the resulting neural network can extrapolate well.
Hypothesis 1. (Linear algorithmic alignment). Let f : X → R be the underlying function and N a neural network with m MLP modules. Suppose there exist m linear functions {g_i}_{i=1}^m so that by replacing N’s MLP modules with the g_i’s, N simulates f. Given ε > 0, there exists {(x_i, f(x_i))}_{i=1}^n ⊂ D ⊊ X so that N trained on {(x_i, f(x_i))}_{i=1}^n by GD with squared loss learns f̂ with ‖f̂ − f‖ < ε.
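The shortest-path alignment can be made concrete in a few lines (our own illustrative sketch, not the trained model from the experiments): replacing the MLP module of a min-aggregation layer by the linear function g(h_u, h_v, w) = h_v + w reproduces the Bellman-Ford update exactly.

```python
import math

# A small weighted graph: adj[u] lists (v, w(v, u)) pairs for v in N(u).
adj = {
    0: [(1, 2.0), (2, 7.0)],
    1: [(0, 2.0), (2, 3.0), (3, 8.0)],
    2: [(0, 7.0), (1, 3.0), (3, 1.0)],
    3: [(1, 8.0), (2, 1.0)],
}

def bellman_ford(adj, source, K):
    # DP update of Eqn. 3, with the usual self-relaxation d[k-1][u].
    n = len(adj)
    d = [0.0 if u == source else math.inf for u in range(n)]
    for _ in range(K):
        d = [min([d[u]] + [d[v] + w for v, w in adj[u]]) for u in range(n)]
    return d

def min_aggregation_gnn(adj, source, K):
    # The same recursion phrased as a GNN (Eqn. 2) whose "MLP module"
    # is the linear function g(h_u, h_v, w) = h_v + w.
    n = len(adj)
    g = lambda h_u, h_v, w: h_v + w          # linear module
    h = [0.0 if u == source else math.inf for u in range(n)]
    for _ in range(K):
        h = [min([h[u]] + [g(h[u], h[v], w) for v, w in adj[u]])
             for u in range(n)]
    return h

print(bellman_ford(adj, 0, 3))           # [0.0, 2.0, 5.0, 6.0]
print(min_aggregation_gnn(adj, 0, 3))    # [0.0, 2.0, 5.0, 6.0]
```

Since the aggregation already supplies the min non-linearity, the per-edge module only has to be linear, which is exactly the regime where Theorem 2 says MLPs can extrapolate.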
Successful extrapolation is harder: the modules need to align with linear functions.\nApplications of linear algorithmic alignment. In general, linear algorithmic alignment is not restricted to GNNs and applies broadly to neural networks. To satisfy the condition, we can encode appropriate nonlinear operations in the architecture or input representation (Figure 2). Learning DP algorithms with GNNs is one example of encoding non-linearity in the architecture (Battaglia et al., 2018; Corso et al., 2020). Another example is to encode log-and-exp transforms in the architecture to help extrapolate multiplication in arithmetic tasks (Trask et al., 2018; Madsen & Johansen, 2020). Neural symbolic programs take a step further and encode a library of symbolic operations to help extrapolation (Johnson et al., 2017; Mao et al., 2019; Yi et al., 2018).\nFor some tasks, it may be easier to change the input representation (Figure 2b). Sometimes, we can decompose the target function f as f = g ◦ h into a feature embedding h and a “simpler” target function g that our model can extrapolate well. We can obtain h via specialized features or feature transforms using domain knowledge (Lample & Charton, 2020; Webb et al., 2020), or via representation learning (e.g., BERT) with unlabeled out-of-distribution data in X \\ D (Chen et al., 2020; Devlin et al., 2019; Hu et al., 2020; Mikolov et al., 2013b; Peters et al., 2018). This brings a new perspective of how representations help extrapolation in various application areas. For example, in natural language processing, pretrained representations (Mikolov et al., 2013a; Wu & Dredze, 2019) and feature transformation using domain knowledge (Yuan et al., 2020; Zhang et al., 2019) help models generalize across languages, a special type of extrapolation. 
In quantitative finance, identifying the right “factors” or features is crucial for deep learning models as the financial markets may frequently be in extrapolation regimes (Banz, 1981; Fama & French, 1993; Ross, 1976).
Linear algorithmic alignment explains successful extrapolation in the literature and suggests that extrapolation is harder in general: encoding appropriate non-linearity often requires domain expertise or model search. Next, we provide theoretical and empirical support for our hypothesis." }, { "heading": "4.2 THEORETICAL AND EMPIRICAL SUPPORT", "text": "We validate our hypothesis on three DP tasks: max degree, shortest path, and n-body problem, and prove the hypothesis for max degree. We highlight the role of graph structures in extrapolation.
Theoretical analysis. We start with a simple yet fundamental task: learning the max degree of a graph, a special case of DP with one iteration. As a corollary of Theorem 1, the commonly used sum-based GNN (Eqn. 1) cannot extrapolate well (proof in Appendix B.4).
Corollary 1. GNNs with sum-aggregation and sum-readout do not extrapolate well in Max Degree.
To achieve linear algorithmic alignment, we can encode the only non-linearity, the max function, in the readout. Theorem 3 confirms that a GNN with max-readout can extrapolate well in this task. Theorem 3. (Extrapolation with GNNs). Assume all nodes have the same feature. Let g and g′ be the max and min degree functions, respectively. Let {(G_i, g(G_i))}_{i=1}^n be the training set. If {(g(G_i), g′(G_i), g(G_i) · N_i^{max}, g′(G_i) · N_i^{min})}_{i=1}^n spans R^4, where N_i^{max} and N_i^{min} are the numbers of nodes that have max and min degree on G_i, then one-layer max-readout GNNs trained on {(G_i, g(G_i))}_{i=1}^n with squared loss in the NTK regime learn g.
Theorem 3 does not follow immediately from Theorem 2, because MLP modules in GNNs only receive indirect supervision. We analyze the Graph NTK (Du et al., 2019b) to prove Theorem 3 in Appendix B.5.
While Theorem 3 assumes identical node features, we empirically observe similar results for both identical and non-identical features (Figure 16 in Appendix).
Interpretation of conditions. The condition in Theorem 3 is analogous to that in Theorem 2. Both theorems require diverse training data, measured by graph structure in Theorem 3 or directions in Theorem 2. In Theorem 3, the condition is violated if all training graphs have the same max or min node degrees, e.g., when training data are from one of the following families: path, C-regular graphs (regular graphs with degree C), cycle, and ladder.
Experiments: architectures that help extrapolation. We validate our theoretical analysis with two DP tasks: max degree and shortest path (details in Appendix C.5 and C.6). While previous works only test on graphs with different sizes (Battaglia et al., 2018; Velickovic et al., 2020), we also test on graphs with unseen structures, edge weights, and node features. The results support our theory. For max degree, GNNs with max-readout are better than GNNs with sum-readout (Figure 6a), confirming Corollary 1 and Theorem 3. For shortest path, GNNs with min-readout and min-aggregation are better than GNNs with sum-readout (Figure 6a).
Experiments confirm the importance of the training graph structure (Figure 7). Interestingly, the two tasks favor different graph structures. For max degree, as Theorem 3 predicts, GNNs extrapolate well when trained on trees, complete graphs, expanders, and general graphs, and extrapolation errors are higher when trained on 4-regular, cycle, or ladder graphs. For shortest path, extrapolation errors follow a U-shaped curve as we change the sparsity of training graphs (Figure 7b and Figure 18 in Appendix). Intuitively, models trained on sparse or dense graphs likely learn degenerate solutions.
Experiments: representations that help extrapolation. Finally, we show a good input representation helps extrapolation.
We study the n-body problem (Battaglia et al., 2016; Watters et al., 2017) (Appendix C.7), that is, predicting the time evolution of n objects in a gravitational system. Following previous work, the input is a complete graph where the nodes are the objects (Battaglia et al., 2016). The node feature for u is the concatenation of the object’s mass m_u, position x_u^{(t)}, and velocity v_u^{(t)} at time t. The edge features are set to zero. We train GNNs to predict the velocity of each object u at time t + 1. The true velocity f(G; u) for object u is approximately
f(G; u) ≈ v_u^t + a_u^t · dt, a_u^t = C · ∑_{v≠u} m_v / ‖x_u^t − x_v^t‖_2^3 · (x_v^t − x_u^t), (5)
where C is a constant. To learn f, the MLP modules need to learn a nonlinear function. Therefore, GNNs do not extrapolate well to unseen masses or distances (“original features” in Figure 6b). We instead use an improved representation h(G) to encode non-linearity. At time t, we transform the edge features of (u, v) from zero to w_{(u,v)}^{(t)} = m_v · (x_v^{(t)} − x_u^{(t)}) / ‖x_u^{(t)} − x_v^{(t)}‖_2^3. The new edge features do not add information, but the MLP modules now only need to learn linear functions, which helps extrapolation (“improved features” in Figure 6b)." }, { "heading": "5 CONNECTIONS TO OTHER OUT-OF-DISTRIBUTION SETTINGS", "text": "We discuss several related settings. Intuitively, from the viewpoint of our results above, methods in related settings may improve extrapolation by 1) learning useful non-linearities beyond the training data range and 2) mapping relevant test data to the training data range.
Domain adaptation studies generalization to a specific target domain (Ben-David et al., 2010; Blitzer et al., 2008; Mansour et al., 2009). Typical strategies adjust the training process: for instance, use unlabeled samples from the target domain to align the target and source distributions (Ganin et al., 2016; Zhao et al., 2018).
Using target domain data during training may induce useful non-linearities and may mitigate extrapolation by matching the target and source distributions, though the correctness of the learned mapping depends on the label distribution (Zhao et al., 2019).\nSelf-supervised learning on a large amount of unlabeled data can learn useful non-linearities beyond the labeled training data range (Chen et al., 2020; Devlin et al., 2019; He et al., 2020; Peters et al., 2018). Hence, our results suggest an explanation why pre-trained representations such as BERT improve out-of-distribution robustness (Hendrycks et al., 2020). In addition, self-supervised learning could map semantically similar data to similar representations, so some out-of-domain examples might fall inside the training distribution after the mapping.\nInvariant models aim to learn features that respect specific invariances across multiple training distributions (Arjovsky et al., 2019; Rojas-Carulla et al., 2018; Zhou et al., 2021). If the model indeed learns these invariances, which can happen in the linear case and when there are confounders or anti-causal variables (Ahuja et al., 2021; Rosenfeld et al., 2021), this may essentially increase the training data range, since variations in the invariant features may be ignored by the model.\nDistributional robustness considers small adversarial perturbations of the data distribution, and ensures that the model performs well under these (Goh & Sim, 2010; Sagawa et al., 2020; Sinha et al., 2018; Staib & Jegelka, 2019). We instead look at more global perturbations. Still, one would expect that modifications that help extrapolation in general also improve robustness to local perturbations." }, { "heading": "6 CONCLUSION", "text": "This paper is an initial step towards formally understanding how neural networks trained by gradient descent extrapolate. We identify conditions under which MLPs and GNNs extrapolate as desired. 
We also suggest an explanation of how GNNs have been able to extrapolate well in complex algorithmic tasks: encoding appropriate non-linearity in the architecture and features can help extrapolation. Our results and hypothesis agree with empirical results, in this paper and in the literature." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Ruosong Wang, Tianle Cai, Han Zhao, Yuichi Yoshida, Takuya Konishi, Toru Lin, Weihua Hu, Matt J. Staib, Yichao Zhou, Denny Wu, Tianyi Yang, and Dingli (Leo) Yu for insightful discussions. This research was supported by NSF CAREER award 1553284, NSF III 1900933, and a Chevron-MIT Energy Fellowship. This research was also supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. MZ was supported by ODNI, IARPA, via the BETTER Program contract 2019-19051600005. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." }, { "heading": "A THEORETICAL BACKGROUND", "text": "In this section, we introduce theoretical background on the neural tangent kernel (NTK), which draws an equivalence between the training dynamics of infinitely-wide (or ultra-wide) neural networks and that of kernel regression with respect to the neural tangent kernel.
Consider a general neural network f(θ, x) : X → R, where θ ∈ R^m denotes the parameters of the network and x ∈ X is the input. Suppose we train the neural network by minimizing the squared loss over the training data, ℓ(θ) = (1/2) ∑_{i=1}^n (f(θ, x_i) − y_i)^2, by gradient descent with infinitesimally small learning rate, i.e., dθ(t)/dt = −∇ℓ(θ(t)). Let u(t) = (f(θ(t), x_i))_{i=1}^n be the network outputs.
u(t) follows the dynamics\ndu(t)\ndt = −H(t)(u(t)− y), (6)\nwhereH(t) is an n× n matrix whose (i, j)-th entry is\nH(t)ij =\n〈 ∂f(θ(t),xi)\n∂θ , ∂f(θ(t),xj) ∂θ\n〉 . (7)\nA line of works show that for sufficiently wide networks,H(t) stays almost constant during training, i.e.,H(t) = H(0) in the limit (Arora et al., 2019a;b; Allen-Zhu et al., 2019a; Du et al., 2019c;a; Li & Liang, 2018; Jacot et al., 2018). Suppose network parameters are randomly initialized with certain scaling, as network width goes to infinity,H(0) converges to a fixed matrix, the neural tangent kernel (NTK) (Jacot et al., 2018):\nNTK(x,x′) = E θ∼W\n〈 ∂f(θ(t),x)\n∂θ , ∂f(θ(t),x′) ∂θ\n〉 , (8)\nwhereW is Gaussian. Therefore, the learning dynamics of sufficiently wide neural networks in this regime is equivalent to that of kernel gradient descent with respect to the NTK. This implies the function learned by a neural network at convergence on any specific training set, denoted by fNTK(x), can be precisely characterized, and is equivalent to the following kernel regression solution\nfNTK(x) = (NTK(x,x1), ...,NTK(x,xn)) · NTK−1trainY , (9) where NTKtrain is the n × n kernel for training data, NTK(x,xi) is the kernel value between test data x and training data xi, and Y is the training labels.\nWe can in fact exactly calculate the neural tangent kernel matrix for certain architectures and activation functions. The exact formula of NTK with ReLU activation has been derived for feedforward neural networks (Jacot et al., 2018), convolutional neural networks (Arora et al., 2019b), and Graph Neural Networks (Du et al., 2019b).\nOur theory builds upon this equivalence of network learning and kernel regression to more precisely characterize the function learned by a sufficiently-wide neural network given any specific training set. 
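As a concrete sketch of the kernel regression solution in Eqn. 9, the code below implements the two-layer ReLU NTK in closed form (one common form, up to an overall constant; exact scaling conventions vary across papers) and checks that the resulting kernel regression interpolates the training data. The function names and the random data are our own illustration, not from the paper.

```python
import numpy as np

def relu_ntk(x, xp):
    # Two-layer ReLU NTK, up to an overall constant (cf. Jacot et al., 2018;
    # Bietti & Mairal, 2019); scaling conventions differ across papers.
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    u = np.clip(x @ xp / (nx * nxp), -1.0, 1.0)
    k0 = (np.pi - np.arccos(u)) / np.pi                           # indicator (degree-0) part
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1 - u**2)) / np.pi  # degree-1 arc-cosine part
    return nx * nxp * k1 + (x @ xp) * k0

def ntk_predict(X_train, y_train, X_test):
    # Kernel regression solution of Eqn. 9: f(x) = k(x)^T NTK_train^{-1} y.
    K = np.array([[relu_ntk(a, b) for b in X_train] for a in X_train])
    k_test = np.array([[relu_ntk(a, b) for b in X_train] for a in X_test])
    return k_test @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
print(ntk_predict(X, y, X))  # reproduces y: kernel regression interpolates the training set
```

Because the NTK is positive definite for distinct, non-parallel inputs, the Gram matrix is invertible and the kernel regression solution fits all training labels exactly, which is the property the equivalence above relies on.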
In particular, the difference between the learned function and true function over the domain of X determines the extrapolation error.\nHowever, in general it is non-trivial to compute or analyze the functional form of what a neural network learns using Eqn. 9, because the kernel regression solution using neural tangent kernel only gives point-wise evaluation. Thus, we instead analyze the function learned by a network in the NTK’s induced feature space, because representations in the feature space would give a functional form.\nLemma 2 makes this connection more precise: the solution to the kernel regression using neural tangent kernel, which also equals over-parameterized network learning, is equivalent to a min-norm solution among functions in the NTK’s induced feature space that fits all training data. Here the min-norm refers to the RKHS norm. Lemma 2. Let φ(x) be a feature map induced by a neural tangent kernel, for any x ∈ Rd. The solution to kernel regression Eqn. 9 is equivalent to fNTK(x) = φ(x)>βNTK, where βNTK is\nmin β ‖β‖2\ns.t. φ(xi)>β = yi, for i = 1, ..., n.\nWe prove Lemma 2 in Appendix B.6. To analyze the learned functions as the min-norm solution in feature space, we also need the explicit formula of an induced feature map of the corresponding neural tangent kernel. The following lemma gives a NTK feature space for two-layer MLPs with ReLU activation. It follows easily from the kernel formula described in Jacot et al. (2018); Arora et al. (2019b); Bietti & Mairal (2019). Lemma 3. An infinite-dimensional feature map φ(x) induced by the neural tangent kernel of a two-layer multi-layer perceptron with ReLU activation function is\nφ (x) = c ( x · I ( w(k) > x ≥ 0 ) ,w(k) > x · I ( w(k) > x ≥ 0 ) , ... ) , (10)\nwhere w(k) ∼ N (0, I), with k going to infinity. c is a constant, and I is the indicator function.\nWe prove Lemma 3 in Appendix B.7. The feature maps for other architectures, e.g., Graph Neural Networks (GNNs) can be derived similarly. 
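Lemmas 2 and 3 can be illustrated with a finite-width truncation: sample m directions w to approximate the infinite feature map of Lemma 3, compute the min-norm interpolant via the pseudoinverse, and check that it yields exactly the same predictions as kernel regression with K = ΦΦᵀ, which is the identity behind Lemma 2. All names and sizes below are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 3, 5000, 6          # input dim, sampled directions w, training size
W = rng.normal(size=(m, d))   # w^(k) ~ N(0, I), truncating the infinite map at m draws

def phi(x):
    # Finite truncation of the Lemma 3 feature map (constant c dropped): per direction w,
    # the d features x * 1{w^T x >= 0} and the scalar w^T x * 1{w^T x >= 0}.
    act = (W @ x >= 0).astype(float)
    return np.concatenate([(act[:, None] * x[None, :]).ravel(), act * (W @ x)]) / np.sqrt(m)

X = rng.normal(size=(n, d))
y = rng.normal(size=n)
Phi = np.stack([phi(x) for x in X])          # n x (m*(d+1)) feature matrix

beta = np.linalg.pinv(Phi) @ y               # min-norm interpolant (Lemma 2)
x_test = rng.normal(size=d)
pred_feature = phi(x_test) @ beta
K = Phi @ Phi.T                              # kernel regression with the induced kernel
pred_kernel = np.array([phi(x_test) @ phi(xi) for xi in X]) @ np.linalg.solve(K, y)
print(pred_feature, pred_kernel)             # the two coincide
```

The agreement is exact up to floating point because β = Φᵀ(ΦΦᵀ)⁻¹y, so φ(x)ᵀβ = k(x)ᵀK⁻¹y for any x.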
We analyze the Graph Neural Tangent Kernel (GNTK) for a simple GNN architecture in Theorem 3.

We then use Lemmas 2 and 3 to characterize the properties of functions learned by an overparameterized neural network. We precisely characterize the neural networks' learned functions in the NTK regime by solving the constrained optimization problem corresponding to the min-norm function in NTK feature space, with the constraint of fitting the training data.

However, there remain many technical challenges. For example, provable extrapolation (exact or asymptotic) is often not achieved under most training data distributions. Understanding the desirable conditions requires significant insight into the geometric properties of the training data distribution and how they interact with the solution learned by neural networks. Our refined analysis shows that in Rd we need to consider the directions of the training data; for graphs, we need to consider, in addition, the graph structure of the training data. We refer readers to the detailed proofs for the intuition behind these data conditions. Moreover, since the NTK corresponds to infinitely wide neural networks, the feature space is infinite-dimensional, and the analysis of infinite-dimensional spaces poses non-trivial technical challenges as well.

Since different theorems involve their own challenges and techniques, we refer the interested reader to the respective proofs for details. In Lemma 1 (proof in Appendix B.2), Theorem 2 (proof in Appendix B.3), and Theorem 1 (proof in Appendix B.1), we analyze over-parameterized MLPs. The proof of Corollary 1 is in Appendix B.4. In Theorem 3, we analyze Graph Neural Networks (proof in Appendix B.5)."
}, { "heading": "B PROOFS", "text": "" }, { "heading": "B.1 PROOF OF THEOREM 1", "text": "To show neural network outputs f(x) converge to a linear function along all directions v, we will analyze the function learned by a neural network on the training set {(xi, yi)}ni=1, by studying the functional representation in the network’s neural tangent kernel RKHS space.\nRecall from Section A that in the NTK regime, i.e., networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of the neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel.\nFor any x ∈ Rd, the network output is given by f(x) = (〈 φ(x), φ(x1) 〉 , ..., 〈 φ(x), φ(xn) 〉) · NTK−1trainY ,\nwhere NTKtrain is the n× n kernel for training data, 〈 φ(x), φ(xi) 〉 is the kernel value between test data x and training data xi, and Y is training labels. By Lemma 2, the kernel regression solution is also equivalent to the min-norm solution in the NTK RKHS space that fits all training data\nf(x) = φ(x)>βNTK, (11) where the representation coefficient βNTK is\nmin β ‖β‖2\ns.t. φ(xi)>β = yi, for i = 1, ..., n.\nThe feature map φ(x) for a two-layer MLP with ReLU activation is given by Lemma 3 φ (x) = c′ ( x · I ( w(k) > x ≥ 0 ) ,w(k) > x · I ( w(k) > x ≥ 0 ) , ... ) , (12)\nwhere w(k) ∼ N (0, I), with k going to infinity. c′ is a constant, and I is the indicator function. Without loss of generality, we assume the bias term to be 1. For simplicity of notations, we denote each data x plus bias term by, i.e., x̂ = [x|1] (Bietti & Mairal, 2019), and assume constant term is 1. Given any direction v on the unit sphere, the network outputs for out-of-distribution data x0 = tv and x = x0 + hv = (1 + λ)x0, where we introduce the notation of x and λ for convenience, are given by Eqn. 11 and Eqn. 12\nf(x̂0) =β > NTK ( x̂0 · I ( w(k) > x̂0 ≥ 0 ) ,w(k) > x̂0 · I ( w(k) > x̂0 ≥ 0 ) , ... 
) ,\nf(x̂) =β>NTK\n( x̂ · I ( w(k) > x̂ ≥ 0 ) ,w(k) > x̂ · I ( w(k) > x̂ ≥ 0 ) , ... ) ,\nwhere we have x̂0 = [x0|1] and x̂ = [(1 + λ)x0|1]. It follows that f(x̂)− f(x̂0) = β>NTK ( x̂ · I ( w(k) > x̂ ≥ 0 ) − x̂0 · I ( w(k) > x̂0 ≥ 0 ) , (13)\nw(k) > x̂ · I ( w(k) > x̂ ≥ 0 ) −w(k) > x̂0 · I ( w(k) > x̂0 ≥ 0 ) , ... ) (14)\nBy re-arranging the terms, we get the following equivalent form of the entries: x̂ · I ( w>x̂ ≥ 0 ) − x̂0 · I ( w>x̂0 ≥ 0 ) (15)\n= x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 ) + I ( w>x̂0 ≥ 0 )) − x̂0 · I ( w>x̂0 ≥ 0 ) (16)\n= x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) + (x̂− x̂0) · I ( w>x̂0 ≥ 0 ) (17)\n= [x|1] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) + [hv|0] · I ( w>x̂0 ≥ 0 ) (18)\nSimilarly, we have w>x̂ · I ( w>x̂ ≥ 0 ) −w>x̂0 · I ( w>x̂0 ≥ 0 ) (19)\n= w>x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 ) + I ( w>x̂0 ≥ 0 )) −w>x̂0 · I ( w>x̂0 ≥ 0 ) (20)\n= w>x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) +w> (x̂− x̂0) · I ( w>x̂0 ≥ 0 ) (21)\n= w> [x|1] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) +w>[hv|0] · I ( w>x̂0 ≥ 0 ) (22)\nAgain, let us denote the part of βNTK corresponding to each w by βw. Moreover, let us denote the part corresponding to Eqn. 18 by β1w and the part corresponding to Eqn. 22 by β 2 w. Then we have\nf(x̂)− f(x̂0) h\n(23)\n= ∫ β1 > w [x/h|1/h] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (24)\n+ ∫ β1 > w [v|0] · I ( w>x̂0 ≥ 0 ) dP(w) (25)\n+ ∫ β2w ·w> [x/h|1/h] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (26)\n+ ∫ β2w ·w>[v|0] · I ( w>x̂0 ≥ 0 ) dP(w) (27)\nNote that all βw are finite constants that depend on the training data. Next, we show that as t→∞, each of the terms above converges in O(1/ ) to some constant coefficient βv that depend on the training data and the direction v. Let us first consider Eqn. 25. 
We have∫\nI ( w>x̂0 ≥ 0 ) dP(w) = ∫ I ( w>[x0|1] ≥ 0 ) dP(w) (28)\n= ∫ I ( w>[x0/t|1/t] ≥ 0 ) dP(w) (29)\n−→ ∫ I ( w>[v|0] ≥ 0 ) dP(w) as t→∞ (30)\nBecause β1w are finite constants, it follows that∫ β1 > w [v|0] · I ( w>x̂0 ≥ 0 ) dP(w)→ ∫ β1 > w [v|0] · I ( w>[v|0] ≥ 0 ) dP(w), (31)\nwhere the right hand side is a constant that depends on training data and direction v. Next, we show the convergence rate for Eqn. 31. Given error > 0, because β1 >\nw [v|0] are finite constants, we need to bound the following by C · for some constant C,\n| ∫ I ( w>x̂0 ≥ 0 ) − I ( w>[v|0] ≥ 0 ) dP(w)| (32)\n= | ∫ I ( w>[x0|1] ≥ 0 ) − I ( w>[x0|0] ≥ 0 ) dP(w)| (33)\nObserve that the two terms in Eqn. 33 represent the volume of half-(balls) that are orthogonal to vectors [x0|1] and [x0|0]. Hence, Eqn. 33 is the volume of the non-overlapping part of the two (half)balls, which is created by rotating an angle θ along the last coordinate. By symmetry, Eqn. 33 is linear in θ. Moreover, the angle θ = arctan(C/t) for some constant C. Hence, it follows that\n| ∫ I ( w>[x0|1] ≥ 0 ) − I ( w>[x0|0] ≥ 0 ) dP(w)| = C1 · arctan(C2/t) (34)\n≤ C1 · C2/t (35) = O(1/t) (36)\nIn the last inequality, we used the fact that arctanx < x for x > 0. Hence, O(1/t) < implies t = O(1/ ) as desired. Next, we consider Eqn. 24.∫\nβ1 > w [x/h|1/h] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (37)\nLet us first analyze the convergence of the following: | ∫ I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 ) dP(w)| (38)\n= | ∫ I ( w>[(1 + λ)x0|1] ≥ 0 ) − I ( w>[x0|1] ≥ 0 ) dP(w)dP(w)| (39)\n= | ∫ I ( w>[x0| 1\n1 + λ ] ≥ 0\n) − I ( w>[x0|1] ≥ 0 ) dP(w)dP(w)| → 0 (40)\nThe convergence to 0 follows from Eqn. 34. Now we consider the convergence rate. The angle θ is at most 1− 11+λ times of that in Eqn. 34. Hence, the rate is as follows(\n1− 1 1 + λ\n) ·O ( 1\nt\n) = λ 1 + λ ·O ( 1 t ) = h/t 1 + h/t ·O ( 1 t ) = O ( h (h+ t)t ) (41)\nNow we get back to Eqn. 
24, which simplifies as the following.∫ β1 >\nw\n[ v + tv\nh | 1 h\n] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (42)\nWe compare the rate of growth of left hand side and the rate of decrease of right hand side (indicators).\nt h · h (h+ t)t = 1 h+ t → 0 as t→∞ (43)\n1 h · h (h+ t)t =\n1\n(h+ t)t → 0 as t→∞ (44)\nHence, the indicators decrease faster, and it follows that Eqn. 24 converges to 0 with rate O( 1 ). Moreover, we can bound w with standard concentration techniques. Then the proofs for Eqn. 26 and Eqn. 27 follow similarly. This completes the proof." }, { "heading": "B.2 PROOF OF LEMMA 1", "text": "Overview of proof. To prove exact extrapolation given the conditions on training data, we analyze the function learned by the neural network in a functional form. The network’s learned function can be precisely characterized by a solution in the network’s neural tangent kernel feature space which has a minimum RKHS norm among functions that can fit all training data, i.e., it corresponds to the optimum of a constrained optimization problem. We show that the global optimum of this constrained optimization problem, given the conditions on training data, is precisely the same function as the underlying true function.\nSetup and preparation. LetX = {x1, ...,xn} and Y = {y1, ..., yn} denote the training set input features and their labels. Let βg ∈ Rd denote the true parameters/weights for the underlying linear function g, i.e.,\ng(x) = β>g x for all x ∈ Rd\nRecall from Section A that in the NTK regime, where networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of a neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel. Moreover, Lemma 2 tells us that this kernel regression solution can be expressed in the functional form in the neural tangent kernel’s feature space. 
That is, the function learned by the neural network (in the ntk regime) can be precisely characterized as\nf(x) = φ(x)>βNTK,\nwhere the representation coefficient βNTK is\nmin β ‖β‖2 (45)\ns.t. φ(xi)>β = yi, for i = 1, ..., n. (46)\nAn infinite-dimensional feature map φ(x) for a two-layer ReLU network is described in Lemma 3 φ (x) = c′ ( x · I ( w(k) > x ≥ 0 ) ,w(k) > x · I ( w(k) > x ≥ 0 ) , ... ) ,\nwhere w(k) ∼ N (0, I), with k going to infinity. c′ is a constant, and I is the indicator function. That is, there are infinitely many directionsw with Gaussian density, and each direction comes with two features. Without loss of generality, we can assume the scaling constant to be 1.\nConstrained optimization in NTK feature space. The representation or weight of the neural network’s learned function in the neural tangent kernel feature space, βNTK, consists of weight vectors for each x · I ( w(k) > x ≥ 0 ) ∈ Rd and w(k)>x · I ( w(k) > x ≥ 0 ) ∈ R. For simplicity\nof notation, we will use w to refer to a particular w, without considering the index (k), which does not matter for our purposes. For any w ∈ Rd, we denote by β̂w = (β̂(1)w , ..., β̂(d)w ) ∈ Rd the weight vectors corresponding to x · I ( w>x ≥ 0 ) , and denote by β̂′w ∈ Rd the weight for\nw>x · I ( w>x ≥ 0 ) .\nObserve that for any w ∼ N (0, I) ∈ Rd, any other vectors in the same direction will activate the same set of xi ∈ Rd. That is, if w>xi ≥ 0 for any w ∈ Rd, then (k ·w)>xi ≥ 0 for any k > 0. Hence, we can reload our notation to combine the effect of weights for w’s in the same direction. This enables simpler notations and allows us to change the distribution of w in NTK features from Gaussian distribution to uniform distribution on the unit sphere.\nMore precisely, we reload our notation by using βw and β′w to denote the combined effect of all weights (β̂(1)kw, ..., β̂ (d) kw) ∈ Rd and β̂′kw ∈ R for all kw with k > 0 in the same direction of w. 
That is, for each w ∼ Uni(unit sphere) ∈ Rd, we define β(j)w as the total effect of weights in the same direction\nβ(j)w = ∫ β̂(j)u I ( w>u\n‖w‖ · ‖u‖ = 1\n) dP(u), for j = [d] (47)\nwhere u ∼ N (0, I). Note that to ensure the βw is a well-defined number, here we can work with the polar representation and integrate with respect to an angle. Then βw is well-defined. But for simplicity of exposition, we use the plain notation of integral. Similarly, we define β′w as reloading the notation of\nβ′w = ∫ β̂uI ( w>u\n‖w‖ · ‖u‖ = 1 ) · ‖u‖ ‖w‖ dP(u) (48)\nHere, in Eqn. 48 we have an extra term of ‖u‖‖w‖ compared to Eqn. 47 because the NTK features that Eqn. 48 corresponds to,w>x · I ( w>x ≥ 0 ) , has an extraw> term. So we need to take into account the scaling. This abstraction enables us to make claims on the high-level parameters βw and β′w only, which we will show to be sufficient to determine the learned function.\nThen we can formulate the constrained optimization problem whose solution gives a functional form of the neural network’s learned function. We rewrite the min-norm solution in Eqn. 45 as\nmin β\n∫ ( β(1)w )2 + ( β(2)w )2 + ...+ ( β(d)w )2 + (β′w) 2 dP(w) (49)\ns.t. ∫\nw>xi≥0\nβ>wxi + β ′ w ·w>xi dP(w) = β>g xi ∀i ∈ [n], (50)\nwhere the density of w is now uniform on the unit sphere of Rd. Observe that since w is from a uniform distribution, the probability density function P(w) is a constant. This means every xi is activated by half of thew on the unit sphere, which implies we can now write the right hand side of Eqn. 50 in the form of left hand side, i.e., integral form. This allows us to further simplify Eqn. 50 as∫\nw>xi≥0\n( β>w + β ′ w ·w> − 2 · β>g ) xi dP(w) = 0 ∀i ∈ [n], (51)\nwhere Eqn. 
51 follows from the following steps of simplification∫ w>xi≥0 β(1)w x (1) i + ..β (d) w x (d) i + β ′ w ·w>xidP(w) = β(1)g x (1) i + ...β (d) g x (d) i ∀i ∈ [n],\n⇐⇒ ∫\nw>xi≥0\nβ(1)w x (1) i + ...+ β (d) w x (d) i + β ′ w ·w>xi dP(w)\n= 1∫\nw>xi≥0 dP(w)\n· ∫\nw>xi≥0\ndP(w) · ( β(1)g x (1) i + ...+ β (d) g x (d) i ) ∀i ∈ [n],\n⇐⇒ ∫\nw>xi≥0\nβ(1)w x (1) i + ...+ β (d) w x (d) i + β ′ w ·w>xidP(w)\n= 2 · ∫\nw>xi≥0\nβ(1)g x (1) i + ...+ β (d) g x (d) i dP(w) ∀i ∈ [n],\n⇐⇒ ∫\nw>xi≥0\n( β>w + β ′ w ·w> − 2 · β>g ) xi dP(w) = 0 ∀i ∈ [n].\nClaim 1. Without loss of generality, assume the scaling factor c in NTK feature map φ(x) is 1. Then the global optimum to the constraint optimization problem Eqn. 49 subject to Eqn. 51, i.e.,\nmin β\n∫ ( β(1)w )2 + ( β(2)w )2 + ...+ ( β(d)w )2 + (β′w) 2 dP(w) (52)\ns.t. ∫\nw>xi≥0\n( β>w + β ′ w ·w> − 2 · β>g ) xi dP(w) = 0 ∀i ∈ [n]. (53)\nsatisfies βw + β′w ·w = 2βg for all w.\nThis claim implies the exact extrapolation we want to prove, i.e., fNTK(x) = g(x). This is because, if our claim holds, then for any x ∈ Rd\nfNTK(x) = ∫ w>x≥0 β>wx+ β ′ w ·w>x dP(w)\n= ∫ w>x≥0 2 · β>g x dP(w)\n= ∫ w>x≥0 dP(w) · 2β>g x\n= 1\n2 · 2β>g x = g(x)\nThus, it remains to prove Claim 1. To compute the optimum to the constrained optimization problem Eqn. 52, we consider the Lagrange multipliers. It is clear that the objective Eqn. 52 is convex. Moreover, the constraint Eqn. 53 is affine. Hence, by KKT, solution that satisfies the Lagrange condition will be the global optimum. 
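In finite dimensions this KKT step is easy to verify numerically: for min ‖β‖² subject to Aβ = y, stationarity of the Lagrangian L(β, λ) = ‖β‖² − λᵀ(Aβ − y) gives 2β = Aᵀλ, and substituting into the constraint yields λ = 2(AAᵀ)⁻¹y. A minimal sketch (the matrix sizes are arbitrary choices of ours):

```python
import numpy as np

# Finite-dimensional analogue of the KKT argument: minimize ||beta||^2 subject to
# A beta = y. Stationarity of L(beta, lam) = ||beta||^2 - lam^T (A beta - y) gives
# 2 beta = A^T lam; feasibility then pins down lam = 2 (A A^T)^{-1} y.
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 10))      # 4 affine constraints in a 10-dim "feature space"
y = rng.normal(size=4)

lam = 2 * np.linalg.solve(A @ A.T, y)
beta = 0.5 * A.T @ lam            # stationarity: beta = A^T lam / 2

print(np.allclose(A @ beta, y))                   # feasible
print(np.allclose(beta, np.linalg.pinv(A) @ y))   # and equal to the min-norm solution
```

The argument above has the same structure, except with infinitely many coordinate blocks β_w, one per direction w.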
We compute the Lagrange multiplier as\nL(β, λ) = ∫ (\nβ(1)w )2 + ( β(2)w )2 + ...+ ( β(d)w )2 + (β′w) 2 dP(w) (54)\n− n∑ i=1 λi · ∫ w>xi≥0 ( β>w + β ′ w ·w> − 2 · β>g ) xi dP(w) (55) Setting the partial derivative of L(β, λ) with respect to each variable to zero gives\n∂L ∂β (k) w = 2β(k)w P(w) + n∑ i=1 λi · x(k)i · I ( w>xi ≥ 0 ) = 0 (56)\n∂L β′w = 2β′wP(w) + n∑ i=1 λi ·w>xi · I ( w>xi ≥ 0 ) = 0 (57) ∂L ∂λi = ∫ w>xi≥0 ( β>w + β ′ w ·w> − 2 · β>g ) xi dP(w) = 0 (58)\nIt is clear that the solution in Claim 1 immediately satisfies Eqn. 58. Hence, it remains to show there exist a set of λi for i ∈ [n] that satisfies Eqn. 56 and Eqn. 57. We can simplify Eqn. 56 as\nβ(k)w = c · n∑ i=1 λi · x(k)i · I ( w>xi ≥ 0 ) , (59)\nwhere c is a constant. Similarly, we can simplify Eqn. 57 as\nβ′w = c · n∑ i=1 λi ·w>xi · I ( w>xi ≥ 0 ) (60)\nObserve that combining Eqn. 59 and Eqn. 60 implies that the constraint Eqn. 60 can be further simplified as\nβ′w = w >βw (61)\nIt remains to show that given the condition on training data, there exists a set of λi so that Eqn. 59 and Eqn. 61 are satisfied.\nGlobal optimum via the geometry of training data. Recall that we assume our training data {(xi, yi)}ni=1 satisfies for any w ∈ Rd, there exist d linearly independent {xwi }di=1 ⊂ X , where X = {xi}ni=1, so that w>xwi ≥ 0 and −xwi ∈X for i = 1..d, e.g., an orthogonal basis of Rd and their opposite vectors. We will show that under this data regime, we have\n(a) for any particular w, there indeed exist a set of λi that can satisfy the constraints Eqn. 59 and Eqn. 61 for this particular w.\n(b) For any w1 and w2 that activate the exact same set of {xi}, the same set of λi can satisfy the constraints Eqn. 59 and Eqn. 61 of both w1 and w2.\n(c) Whenever we rotate a w1 to a w2 so that the set of xi being activated changed, we can still find λi that satisfy constraint of both w1 and w2.\nCombining (a), (b) and (c) implies there exists a set of λ that satisfy the constraints for all w. 
Hence, it remains to show these three claims.\nWe first prove Claim (a). For each w, we must find a set of λi so that the following hold.\nβ(k)w = c · n∑ i=1 λi · x(k)i · I ( w>xi ≥ 0 ) , β′w = w >βw βw + β ′ w ·w = 2βg\nHere, βg and w are fixed, and w is a vector on the unit sphere. It is easy to see that βw is then determined by βg and w, and there indeed exists a solution (solving a consistent linear system). Hence we are left with a linear system with d linear equations\nβ(k)w = c · n∑ i=1 λi · x(k)i · I ( w>xi ≥ 0 ) ∀k ∈ [d]\nto solve with free variables being λi so that w activates xi, i.e., w>xi ≥ 0. Because the training data {(xi, yi)}ni=1 satisfies for any w, there exist at least d linearly independent xi that activate w. This guarantees for any w we must have at least d free variables. It follows that there must exist solutions λi to the linear system. This proves Claim (a).\nNext, we show that (b) for anyw1 andw2 that activate the exact same set of {xi}, the same set of λi can satisfy the constraints Eqn. 59 and Eqn. 61 of bothw1 andw2. Becausew1 andw2 are activated by the same set of xi, this implies\nβw1 = c · n∑ i=1 λi · xi · I ( w>1 xi ≥ 0 ) = c · n∑ i=1 λi · xi · I ( w>2 xi ≥ 0 ) = βw2\nSince λi already satisfy constraint Eqn. 59 for w1, they also satisfy that for w2. Thus, it remains to show that βw1 + β ′ w1 · w1 = βw2 + β ′ w2 · w1 assuming βw1 = βw2 , β ′ w1 = w > 1 βw1 , and β′w2 = w > 2 βw2 . This indeed holds because\nβw1 + β ′ w1 ·w1 = βw2 + β ′ w2 ·w2\n⇐⇒ β′w1 ·w > 1 = β ′ w2 ·w > 2 ⇐⇒ w>1 βw1w>1 = w>2 βw2w>2 ⇐⇒ w>1 w1β>w1 = w > 2 w2β > w2 ⇐⇒ 1 · β>w1 = 1 · β > w2\n⇐⇒ βw1 = βw1 Here, we used the fact that w1 and w2 are vectors on the unit sphere. This proves Claim (b).\nFinally, we show (c) that Whenever we rotate a w1 to a w2 so that the set of xi being activated changed, we can still find λi that satisfy constraint of both w1 and w2. 
Suppose we rotate w1 to w2 so that w2 lost activation with x1,x2, ...,xp which in the set of linearly independent xi’s being activated byw1 and their opposite vectors −xi are also in the training set (without loss of generality). Then w2 must now also get activated by −x1,−x2, ...,−xp. This is because if w>2 xi < 0, we must have w>2 (−xi) > 0. Recall that in the proof of Claim (a), we only needed the λi from linearly independent xi that we used to solve the linear systems, and their opposite as the free variables to solve the linear system of\nd equations. Hence, we can set λ to 0 for the other xi while still satisfying the linear system. Then, suppose there exists λi that satisfy\nβ(k)w1 = c · d∑ i=1 λi · x(k)i\nwhere the xi are the linearly independent vectors that activatew1 with opposite vectors in the training set, which we have proved in (a). Then we can satisfy the constraint for βw2 below\nβ(k)w2 = c · p∑ i=1 λ̂i · (−xi)(k) + d∑ i=p+1 λi · x(k)i\nby setting λ̂i = −λi for i = 1...p. Indeed, this gives\nβ(k)w2 = c · p∑ i=1 (−λi) · (−xi)(k) + d∑ i=p+1 λi · x(k)i\n= c · d∑ i=1 λi · x(k)i\nThus, we can also find λi that satisfy the constraint for βw2 . Here, we do not consider the case where w2 is parallel with an xi because such w2 has measure zero. Note that we can apply this argument iteratively because the flipping the sign always works and will not create any inconsistency.\nMoreover, we can show that the constraint for β′w2 is satisfied by a similar argument as in proof of Claim (b). This follows from the fact that our construction makes βw1 = βw2 . Then we can follow the same argument as in (b) to show that βw1 + β ′ w1 ·w1 = βw2 + β ′ w2 ·w1. This completes the proof of Claim (c).\nIn summary, combining Claim (a), (b) and (c) gives that Claim 1 holds. That is, given our training data, the global optimum to the constrained optimization problem of finding the min-norm solution among functions that fit the training data satisfies βw+β′w ·w = 2βg . 
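The geometric condition on the training data used in Claims (a)–(c) can be checked mechanically for a concrete dataset. The sketch below (with our own helper names) verifies that X = {±e1, ±e2} ⊂ R² satisfies the condition, while a dataset confined to one orthant does not, since the required negations −x are missing from it:

```python
import numpy as np
from itertools import combinations

def satisfies_condition(X, ws):
    # For each direction w: are there d linearly independent training points x
    # with w^T x >= 0 whose negations -x are also in X?
    X = [np.asarray(x, dtype=float) for x in X]
    d = X[0].shape[0]
    def in_X(v):
        return any(np.allclose(v, x) for x in X)
    def has_d_independent(cands):
        if len(cands) < d:
            return False
        return any(np.linalg.matrix_rank(np.stack(sub)) == d
                   for sub in combinations(cands, d))
    return all(has_d_independent([x for x in X if w @ x >= 0 and in_X(-x)])
               for w in ws)

rng = np.random.default_rng(2)
ws = rng.normal(size=(200, 2))                  # random directions to probe

full = [[1, 0], [-1, 0], [0, 1], [0, -1]]       # basis vectors and their negations
half = [[1, 0], [0, 1]]                         # one orthant only: negations missing
print(satisfies_condition(full, ws))            # True
print(satisfies_condition(half, ws))            # False
```

Datasets failing this check, such as the one-orthant example, are exactly those for which the exact-extrapolation argument of Lemma 1 breaks down.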
We also showed that this claim implies exact extrapolation, i.e., the network’s learned function f(x) is equal to the true underlying function g(x) for all x ∈ Rd. This completes the proof." }, { "heading": "B.3 PROOF OF THEOREM 2", "text": "Proof of the asymptotic convergence to extrapolation builds upon our proof of exact extrapolation, i.e., Lemma 1. The proof idea is that if the training data distribution has support at all directions, when the number of samples n→∞, asymptotically the training set will converge to some imaginary training set that satisfies the condition for exact extrapolation. Since if training data are close the neural tangent kernels are also close, the predictions or learned function will converge to a function that achieves perfect extrapolation, that is, the true underlying function.\nAsymptotic convergence of data sets. We first show the training data converge to a data set that satisfies the exact extrapolation condition in Lemma 1. Suppose training data {xi}ni=1 are sampled from a distribution whose support contains a connected set S that intersects all directions, i.e., for any non-zero w ∈ Rd, there exists k > 0 so that kw ∈ S. Let us denote by S the set of datasets that satisfy the condition in Lemma 1. In fact, we will use a relaxed condition in the proof of Lemma 1 (Lemma 1 in the main text uses a stricter condition for simplicity of exposition). Given a general dataset X and a dataset S ∈ S of the same size n, let σ(X,S) denote a matching of their data points, i.e., σ outputs a sequence of pairs\nσ(X,S)i = (xi, si) for i ∈ [n] s.t. X = {xi}ni=1\nS = {si}ni=1\nLet ` : Rd × Rd → R be the l2 distance that takes in a pair of points. 
We then define the distance between the datasets d(X,S) as the minimum sum of l2 distances of their data points over all\npossible matching.\nd(X,S) = minσ n∑ i=1 ` (σ (X,S)i) |X| = |S| = n\n∞ |X| 6= |S|\nWe can then define a “closest distance to perfect dataset” function D∗ : X → R which maps a dataset X to the minimum distance ofX to any dataset in S\nD∗ (X) = min S∈S d (X,S)\nIt is easy to see that for any datasetX = {xi}ni=1, D∗ (X) can be bounded by the minimum of the closest distance to perfect dataset D∗ of sub-datasets ofX of size 2d.\nD∗ ({xi}ni=1) ≤ bn/2dc min k=1 D∗ ( {xj}k∗2dj=(k−1)∗2d+1 ) (62)\nThis is because for any S ∈ S, and any S ⊆ S′, we must have S′ ∈ S because a dataset satisfies exact extrapolation condition as long as it contains some key points. Thus, adding more data will not hurt, i.e., for anyX1 ⊆X2, we always have\nD∗ (X1) ≤ D∗ (X2)\nNow let us denote by Xn a random dataset of size n where each xi ∈ Xn is sampled from the training distribution. Recall that our training data {xi}ni=1 are sampled from a distribution whose support contains a connected set S∗ that intersects all directions, i.e., for any non-zerow ∈ Rd, there exists k > 0 so that kw ∈ S∗. It follows that for a random dataset X2d of size 2d, the probability that D∗(X2d) > happens is less than 1 for any > 0. First there must exist S0 = {si}2di=1 ∈ S of size 2d, e.g., orthogonal basis and their opposite vectors. Observe that if we scale any si by k > 0, the resulting dataset is still in S by the definition of S . We denote the set of datasets where we are allowed to scale elements of S0 by S0. 
It follows that\nP (D∗(X2d) > ) = P (\nmin S∈S d (X2d,S) > ) ≤ P ( min S∈S0 d (X2d,S) >\n) = P ( min S∈S0 min σ n∑ i=1 ` (σ (X2d,S)i) > )\n= 1− P ( min S∈S0 min σ n∑ i=1 ` (σ (X2d,S)i) ≤ )\n≤ 1− P (\nmin S∈S0 min σ n max i=1\n` (σ (X2d,S)i) ≤ )\n≤ δ < 1\nwhere we denote the bound of P (D∗(X2d) > ) by δ < 1, and the last step follows from P (\nmin S∈S0 min σ n max i=1\n` (σ (X2d,S)i) ≤ ) > 0\nwhich further follows from the fact that for any si ∈ S0, by the assumption on training distribution, we can always find k > 0 so that ksi ∈ S∗, a connected set in the support of training distribution. By the connectivity of support S∗, ksi cannot be an isolated point in S∗, so for any > 0, we must have∫\n‖x−ksi‖≤ ,x∈S∗\nfX(x)dx > 0\nHence, we can now apply Eqn. 62 to bound D∗(Xn). Given any > 0, we have\nP (D∗(Xn) > ) = 1− P (D∗(Xn) ≤ ) ≤ 1− P (bn/2dc\nmin k=1\nD∗ ( {xj}k∗2dj=(k−1)∗2d+1 ) ≤ )\n≤ 1− 1− bn/2dc∏ k=1 P ( D∗ ( {xj}k∗2dj=(k−1)∗2d+1 ) > )\n= bn/2dc∏ k=1 P ( D∗ ( {xj}k∗2dj=(k−1)∗2d+1 ) > ) ≤ δbn/2dc\nHere δ < 1. This implies D∗(Xn) p−→ 0, i.e.,\nlim n→∞\nP (D∗(Xn) > ) = 0 ∀ > 0 (63)\nEqn. 63 says as the number of training samples n→∞, our training set will converge in probability to a dataset that satisfies the requirement for exact extrapolation.\nAsymptotic convergence of predictions. Let NTK(x,x′) : Rd × Rd → R denote the neural tangent kernel for a two-layer ReLU MLP. It is easy to see that if x → x∗, then NTK(x, ·) → NTK(x∗, ·) (Arora et al. (2019b)). Let NTKtrain denote the n× n kernel matrix for training data. We have shown that our training set converges to a perfect data set that satisfies conditions of exact extrapolation. Moreover, note that our training set will only have a finite number of (not increase with n) xi that are not precisely the same as those in a perfect dataset. This is because a perfect data only contains a finite number of key points and the other points can be replaced by any other points while still being a perfect data set. 
Thus, we have NTKtrain → N∗, where N∗ is the n× n NTK matrix for some perfect data set.\nBecause neural tangent kernel is positive definite, we have NTK−1train → N∗ −1\n. Recall that for any x ∈ Rd, the prediction of NTK is\nfNTK(x) = (NTK(x,x1), ...,NTK(x,xn)) · NTK−1trainY ,\nwhere NTKtrain is the n × n kernel for training data, NTK(x,xi) is the kernel value between test data x and training data xi, and Y is training labels.\nSimilarly, we have (NTK(x,x1), ...,NTK(x,xn))→ (NTK(x,x∗1), ...,NTK(x,x∗n)), where x∗i is a perfect data set that our training set converges to. Combining this with NTK−1train → N∗ −1 gives\nfNTK p−→ f∗NTK = g,\nwhere fNTK is the function learned using our training set, and f∗NTK is that learned using a perfect data set, which is equal to the true underlying function g. This completes the proof." }, { "heading": "B.4 PROOF OF COROLLARY 1", "text": "In order for GNN with linear aggregations h(k)u = ∑\nv∈N (u)\nMLP(k) ( h(k)u , h (k) v ,x(u,v) ) ,\nhG = MLP(K+1) (∑ u∈G h(K)u ) ,\nto extrapolate in the maximum degree task, it must be able to simulate the underlying function\nhG = max u∈G ∑ v∈N (u) 1\nBecause the max function cannot be decomposed as the composition of piece-wise linear functions, the MLP(K+1) module in GNN must learn a function that is not piece-wise linear over domains outside the training data range. Since Theorem 1 proves for two-layer overparameterized MLPs, here we also assume MLP(K+1) is a two-layer overparameterized MLP, although the result can be extended to more layers. It then follows from Theorem 1 that for any input and label (and thus gradient), MLP(K+1) will converge to linear functions along directions from the origin. Hence, there are always domains where the GNN cannot learn a correct target function." }, { "heading": "B.5 PROOF OF THEOREM 3", "text": "Our proof applies the similar proof techniques for Lemma 1 and 2 to Graph Neural Networks (GNNs). 
This is essentially an analysis of Graph Neural Tangent Kernel (GNTK), i.e., neural tangent kernel of GNNs.\nWe first define the simple GNN architecture we will be analyzing, and then present the GNTK for this architecture. Suppose G = (V,E) is an input graph without edge feature, and xu ∈ Rd is the node feature of any node u ∈ V . Let us consider the simple one-layer GNN whose input is G and output is hG\nhG = W (2) max\nu∈G ∑ v∈N (u) W (1)xv (64)\nNote that our analysis can be extended to other variants of GNNs, e.g., with non-empty edge features, ReLU activation, different neighbor aggregation and graph-level pooling architectures. We analyze this GNN for simplicity of exposition.\nNext, let us calculate the feature map of the neural tangent kernel for this GNN. Recall from Section A that consider a graph neural network f(θ, G) : G → R where θ ∈ Rm is the parameters in the network and G ∈ G is the input graph. Then the neural tangent kernel is\nHij =\n〈 ∂f(θ, Gi)\n∂θ , ∂f(θ, Gj) ∂θ\n〉 ,\nwhere θ are the infinite-dimensional parameters. Hence, the gradients with respect to all parameters give a natural feature map. Let us denote, for any node u, the degree of u by\nhu = ∑\nv∈N (u)\nxv (65)\nIt then follows from simple computation of derivative that the following is a feature map of the GNTK for Eqn. 64\nφ(G) = c · ( max u∈G ( w(k) > hu ) , ∑ u∈G I ( u = arg max v∈G w(k) > hv ) · hu, ... ) , (66)\nwhere w(k) ∼ N (0, I), with k going to infinity. c is a constant, and I is the indicator function. Next, given training data {(Gi, yi}ni=1, let us analyze the function learned by GNN through the min-norm solution in the GNTK feature space. The same proof technique is also used in Lemma 1 and 2.\nRecall the assumption that all graphs have uniform node feature, i.e., the learning task only considers graph structure, but not node feature. We assume xv = 1 without loss of generality. 
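As a quick sanity check that this architecture can represent the target: with x_v = 1 and scalar weights W(1) = W(2) = 1, Eqn. 64 reduces to hG = max_u Σ_{v∈N(u)} 1, the maximum degree. The adjacency lists below are our own toy examples:

```python
# With x_v = 1 and scalar weights W1 = W2 = 1, the one-layer GNN of Eqn. 64,
# h_G = W2 * max_u sum_{v in N(u)} W1 * x_v, reduces to the max degree of G.
def gnn_max_degree(adj, W1=1.0, W2=1.0, x=1.0):
    return W2 * max(sum(W1 * x for _ in neigh) for neigh in adj.values())

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}             # path on 4 nodes
star5 = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}  # star with 4 leaves

print(gnn_max_degree(path4))   # 2.0
print(gnn_max_degree(star5))   # 4.0
```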
Observe that in this case, there are two directions, positive or negative, for the one-dimensional Gaussian distribution. Hence, we can simplify our analysis by combining the effect of linear coefficients for w in the same direction, as in Lemma 1 and 2.\nSimilarly, for any w, let us define β̂w ∈ R as the linear coefficient corresponding to∑ u∈G\nI ( u = arg max\nv∈G w>hv\n) · hu in the RKHS space, and denote by β̂′w ∈ R the weight for\nmax u∈G\n( w>hu ) . Similarly, we can combine the effect of all β̂ in the same direction, as in Lemma 1\nand 2. We denote the combined effects by βw and β′w. This allows us to reason about w with two directions, + and −. Recall that the underlying reasoning function, maximum degree, is\ng(G) = max u∈G hu.\nWe formulate the constrained optimization problem, i.e., the min-norm solution in the GNTK feature space that fits all training data, as\nmin β̂,β̂′\n∫ β̂2w + β̂ ′2 wdP(w)\ns.t. ∫ ∑ u∈Gi I ( u = arg max v∈G w · hv ) · β̂w · hu + max u∈Gi (w · hu) · β̂′wdP(w) = max u∈Gi hu ∀i ∈ [n],\nwhere Gi is the i-th training graph and w ∼ N (0, 1). By combining the effect of β̂, taking the derivative of the Lagrangian of the constrained optimization problem, and setting it to zero, we find that the globally optimal solution satisfies the following constraints.\nβ+ = c · n∑ i=1 λi · ∑ u∈Gi hu · I ( u = arg max v∈Gi hv ) (67)\nβ− = c · n∑ i=1 λi · ∑ u∈Gi hu · I ( u = arg min v∈Gi hv ) (68)\nβ′+ = c · n∑ i=1 λi · max u∈Gi hu (69)\nβ′− = c · n∑ i=1 λi · min u∈Gi hu (70)\nmax u∈Gi hu = β+ · ∑ u∈Gi I ( u = arg max v∈Gi hv ) · hu + β′+ · max u∈Gi hu (71)\n+ β− · ∑ u∈Gi I ( u = arg min v∈Gi hv ) · hu + β′− · min u∈Gi hu ∀i ∈ [n] (72)\nwhere c is some constant and λi are the Lagrange multipliers. Note that here we used the fact that there are two directions, +1 and −1. This enables the simplification of the Lagrangian derivative. For a similar step-by-step derivation of the Lagrangian, refer to the proof of Lemma 1.\nLet us consider the solution β′+ = 1 and β+ = β− = β ′ − = 0.
It is clear that this solution can fit the training data, and thus satisfies Eqn. 71. Moreover, this solution is equivalent to the underlying reasoning function, maximum degree, g(G) = maxu∈G hu.\nHence, it remains to show that, given our training data, there exist λi so that the remaining four constraints are satisfied for this solution. Let us rewrite these constraints as a linear system where the variables are λi\nβ+β−β′+ β′− = c · n∑ i=1 λi · ∑ u∈Gi hu · I ( u = arg max v∈Gi hv ) ∑ u∈Gi hu · I ( u = arg min v∈Gi hv ) max u∈Gi hu\nmin u∈Gi hu\n (73)\nBy the standard theory of linear systems, there exist λi that solve Eqn. 73 if there are at least four training data Gi whose following vectors are linearly independent ∑ u∈Gi hu · I ( u = arg max v∈Gi hv ) ∑ u∈Gi hu · I ( u = arg min v∈Gi hv ) max u∈Gi hu\nmin u∈Gi hu\n = max u∈Gi hu ·Nmaxi min u∈Gi hu ·Nmini max u∈Gi hu\nmin u∈Gi hu\n (74)\nHere, Nmaxi denotes the number of nodes that achieve the maximum degree in the graph Gi, and Nmini denotes the number of nodes that achieve the minimum degree in the graph Gi. By the assumption on our training data, there are at least four Gi ∼ G whose vectors in Eqn. 74 are linearly independent. Hence, our simple GNN learns the underlying function as desired.\nThis completes the proof." }, { "heading": "B.6 PROOF OF LEMMA 2", "text": "Let W denote the span of the feature maps of training data xi, i.e.\nW = span (φ (x1) , φ (x2) , ..., φ (xn)) .\nThen we can decompose the coordinates of fNTK in the RKHS space, βNTK, into a vector β0 for the component of fNTK in the span of training data features W , and a vector β1 for the component in the orthogonal complement W⊥, i.e.,\nβNTK = β0 + β1.\nFirst, note that fNTK must be able to fit the training data (NTK is a universal kernel, as we will discuss next), i.e.,\nφ(xi)>βNTK = yi.\nSince φ(xi) ∈W , we then have φ(xi)>β0 = yi.
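The key geometric fact used here, that among all interpolating coefficient vectors the one lying in the span of the training features has minimal norm, can be illustrated with a small numerical sketch (illustrative numbers, pure Python; not part of the proof):

```python
# Sketch: for a single training feature phi with label y, the min-norm
# solution of phi . beta = y is beta0 = (y / ||phi||^2) * phi, which lies
# in span(phi). Adding any component beta1 orthogonal to phi still fits
# the data but strictly increases the squared norm.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

phi, y = (3.0, 4.0), 10.0
scale = y / dot(phi, phi)                  # y / ||phi||^2
beta0 = tuple(scale * p for p in phi)      # min-norm interpolant, in span(phi)
beta1 = (-4.0, 3.0)                        # orthogonal to phi: dot(phi, beta1) = 0
beta = tuple(b0 + b1 for b0, b1 in zip(beta0, beta1))
```

Both beta0 and beta interpolate the data, but ||beta||^2 = ||beta0||^2 + ||beta1||^2 exceeds ||beta0||^2 whenever beta1 is nonzero.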
Then, β0 is uniquely determined by the kernel regression solution with respect to the neural tangent kernel\nfNTK(x) = (〈 φ(x), φ(x1) 〉 , ..., 〈 φ(x), φ(xn) 〉) · NTK−1trainY ,\nwhere NTKtrain is the n× n kernel for the training data, 〈 φ(x), φ(xi) 〉 is the kernel between test data x and training data xi, and Y is the vector of training labels.\nThe kernel regression solution fNTK is uniquely determined because the neural tangent kernel NTKtrain is positive definite assuming no two training data are parallel, which can be enforced with a bias term (Du et al., 2019c). In any case, the solution is a min-norm solution obtained via the pseudo-inverse.\nMoreover, a unique kernel regression solution fNTK in the span of the training data features corresponds to a unique representation β0 in the RKHS space.\nSince β0 and β1 are orthogonal, we also have the following\n‖βNTK‖22 = ‖β0 + β1‖22 = ‖β0‖22 + ‖β1‖22.\nSince the projection onto W of any interpolating β is exactly β0, this implies that the norm of any β such that φ(xi)>β = yi is at least as large as the norm of β0. Moreover, observe that the solution to kernel regression Eqn. 9 is in the feature span of the training data, given that the kernel matrix for the training data is full rank.\nfNTK(x) = (〈 φ(x), φ(x1) 〉 , ..., 〈 φ(x), φ(xn) 〉) · NTK−1trainY .\nSince β1 is the component of fNTK in the orthogonal complement of the training data feature span, we must have β1 = 0. It follows that βNTK is equivalent to\nmin β ‖β‖2\ns.t. φ(xi)>β = yi, for i = 1, ..., n.\nas desired." }, { "heading": "B.7 PROOF OF LEMMA 3", "text": "We first compute the neural tangent kernel NTK(x,x′) for a two-layer multi-layer perceptron (MLP) with ReLU activation function, and then show that it can be induced by the feature space φ(x) specified in the lemma so that NTK(x,x′) = 〈 φ(x), φ(x′) 〉 .\nRecall that Jacot et al. (2018) have derived the general framework for computing the neural tangent kernel of a neural network with general architecture and activation function. This framework is also described in Arora et al. (2019b); Du et al.
(2019b), which, in addition, compute the exact kernel formula for convolutional networks and Graph Neural Networks, respectively. Following the framework in Jacot et al. (2018) and substituting the general activation function σ with ReLU gives the kernel formula for a two-layer MLP with ReLU activation. This has also been described in several previous works (Du et al., 2019c; Chizat et al., 2019; Bietti & Mairal, 2019).\nBelow we describe the general framework in Jacot et al. (2018) and Arora et al. (2019b). Let σ denote the activation function. The neural tangent kernel for an h-layer multi-layer perceptron can be recursively defined via a dynamic programming process. Here, Σ(i) : Rd × Rd → R for i = 0...h is the covariance for the i-th layer.\nΣ(0)(x,x′) = x>x′, ∧(i) (x,x′) = (\nΣ(i−1)(x,x) Σ(i−1)(x,x′) Σ(i−1)(x′,x) Σ(i−1)(x′,x′)\n) ,\nΣ(i)(x,x′) = c · E u,v∼N (0,∧(i)) [σ(u)σ(v)] .\nThe derivative covariance is defined similarly:\nΣ̇(i)(x,x′) = c · E u,v∼N (0,∧(i)) [σ̇(u)σ̇(v)] .\nThen the neural tangent kernel for an h-layer network is defined as\nNTK(h−1)(x,x′) = h∑ i=1\n( Σ(i−1)(x,x′) ·\nh∏ k=i Σ̇(k)(x,x′)\n) ,\nwhere we let Σ̇(h)(x,x′) = 1 for notational convenience.\nWe compute the explicit NTK formula for a two-layer MLP with ReLU activation function by following this framework and substituting the general activation function with ReLU, i.e., σ(a) = max(0, a) = a · I(a ≥ 0) and σ̇(a) = I(a ≥ 0).\nNTK(1)(x,x′) = 2∑ i=1\n( Σ(i−1)(x,x′) ·\nh∏ k=i Σ̇(k)(x,x′) ) = Σ(0)(x,x′) · Σ̇(1)(x,x′) + Σ(1)(x,x′)\nSo we can obtain the NTK via Σ(1)(x,x′), Σ̇(1)(x,x′), and Σ(0)(x,x′). Precisely,\nΣ(0)(x,x′) = x>x′, ∧(1) (x,x′) = ( x>x x>x′\nx′ > x x′ > x′\n) = ( x x′ ) · ( x x′ ) ,\nΣ(1)(x,x′) = c · E u,v∼N (0,∧(1)) [u · I(u ≥ 0) · v · I(v ≥ 0)] .\nTo sample from N (0,∧(1)), we let L be a decomposition of ∧(1) such that ∧(1) = LL>. Here, we can see that L = (x,x′)>.
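As a numerical sanity check (a pure-Python sketch with illustrative vectors, not part of the experiments), one can verify that the pair (w>x, w>x′) with w ∼ N (0, I) indeed has the covariance ∧(1) = (x>x, x>x′; x′>x, x′>x′):

```python
import random

# Monte-Carlo check that (u, v) = (w.x, w.x') with w ~ N(0, I) has
# second moments E[u^2] = x.x, E[uv] = x.x', E[v^2] = x'.x'.
random.seed(0)
x, xp = (1.0, 0.0), (0.6, 0.8)  # illustrative unit vectors with x.x' = 0.6

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

n = 200_000
suu = suv = svv = 0.0
for _ in range(n):
    w = (random.gauss(0, 1), random.gauss(0, 1))
    u, v = dot(w, x), dot(w, xp)
    suu += u * u; suv += u * v; svv += v * v
cov = (suu / n, suv / n, svv / n)  # estimates of (x.x, x.x', x'.x')
```

With this many samples the estimates match (1.0, 0.6, 1.0) to within a few percent.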
Thus, sampling from N (0,∧(1)) is equivalent to first sampling w ∼ N (0, I), and output\nLw = w>(x,x′).\nThen we have the equivalent sampling (u, v) = (w>x,w>x′). It follows that\nΣ(1)(x,x′) = c · E w∼N (0,I)\n[ w>x · I ( w>x ≥ 0 ) ·w>x′ · I ( w>x′ ≥ 0 )]\nIt follows from the same reasoning that\nΣ̇(1)(x,x′) = c · E w∼N (0,I)\n[ I ( w>x ≥ 0 ) · I ( w>x′ ≥ 0 )] .\nThe neural tangent kernel for a two-layer MLP with ReLU activation is then\nNTK(1)(x,x′) = Σ(0)(x,x′) · Σ̇(1)(x,x′) + Σ(1)(x,x′) = c · E\nw∼N (0,I)\n[ x>x′ · I ( w>x ≥ 0 ) · I ( w>x′ ≥ 0 )] + c · E\nw∼N (0,I)\n[ w>x · I ( w>x ≥ 0 ) ·w>x′ · I ( w>x′ ≥ 0 )] .\nNext, we use the kernel formula to compute a feature map for a two-layer MLP with ReLU activation function. Recall that by definition a valid feature map must satisfy the following condition\nNTK(1)(x,x′) = 〈 φ(x), φ(x′) 〉 It is easy to see that the way we represent our NTK formula makes it easy to find such a decomposition. The following infinite-dimensional feature map would satisfy the requirement because the inner product of φ(x) and φ(x′) for any x, x′ would be equivalent to the expected value in NTK, after we integrate with respect to the density function of w.\nφ (x) = c′ ( x · I ( w(k) > x ≥ 0 ) ,w(k) > x · I ( w(k) > x ≥ 0 ) , ... ) ,\nwherew(k) ∼ N (0, I), with k going to infinity. c′ is a constant, and I is the indicator function. Note that here the density of features of φ(x) is determined by the density of w, i.e. Gaussian." }, { "heading": "C EXPERIMENTAL DETAILS", "text": "In this section, we describe the model, data and training details for reproducing our experiments. Our experiments support all of our theoretical claims and insights.\nOverview. 
We classify our experiments into the following major categories, each of which includes several ablation studies:\n1) Learning tasks where the target functions are simple nonlinear functions in various dimensions and training/test distributions: quadratic, cosine, square root, and l1 norm functions, with MLPs with a wide range of hyper-parameters. This validates our implication that MLPs generally cannot extrapolate in tasks with nonlinear target functions, unless the nonlinear function is directionally linear out-of-distribution. In the latter case, the extrapolation error is more sensitive to the hyper-parameters.\n2) Computation of the R-Squared of MLP’s learned functions along (thousands of) randomly sampled directions in out-of-distribution domains. This validates Theorem 1 and shows that convergence is very fast in practice, often happening immediately outside the training range.\n3) Learning tasks where the target functions are linear functions with MLPs. These validate Theorem 2 and Lemma 1, i.e., MLPs can extrapolate if the underlying function is linear, under conditions on the training distribution. This section includes three ablation studies: a) The training distribution satisfies the conditions in Theorem 2 and covers all directions, and\nhence, MLPs extrapolate. b) The training data distribution is restricted in some directions, e.g., restricted to be posi-\ntive/negative/constant in some feature dimensions. This shows that when the training distribution is restricted in some directions, MLPs may fail to extrapolate.\nc) Exact extrapolation with infinitely-wide neural networks, i.e., exact computation with the neural tangent kernel (NTK) on the data regime in Lemma 1. This is mainly for theoretical understanding.\n4) MLPs with cosine, quadratic, and tanh activation functions.\n5) Learning the maximum degree of graphs with Graph Neural Networks. Extrapolation on graph structure, number of nodes, and node features.
To show the role of architecture for extrapolation, we study the following GNN architecture regimes. a) GNN with graph-level max-pooling and neighbor-level sum-pooling. By Theorem 3,\nthis GNN architecture extrapolates in max degree with appropriate training data. b) GNN with graph-level and neighbor-level sum-pooling. By Corollary 1, this default\nGNN architecture cannot extrapolate in max degree. To show the importance of the training distribution, i.e., graph structure in the training set, we study the following training data regimes. a) Node features are identical, e.g., 1. In such regimes, our learning tasks only consider\ngraph structure. We consider training sets sampled from various graph structures, and find that only those satisfying the conditions in Theorem 3 enable GNNs with graph-level max-pooling to extrapolate.\nb) Node features are spurious and continuous. This also requires extrapolation on OOD node features. GNNs with graph-level max-pooling with appropriate training sets also extrapolate to OOD spurious node features.\n6) Learning the length of the shortest path between given source and target nodes, with Graph Neural Networks. Extrapolation on graph structure, number of nodes, and edge weights. We study the following regimes. a) Continuous features. Edge and node features are real values. This regime requires\nextrapolating to graphs with edge weights out of the training range. Test graphs are all sampled from the “general graphs” family with a diverse range of structure. Regarding the type of training graph structure, we consider two schemes. Both schemes show a U-shape curve of extrapolation error with respect to the sparsity of training graphs. a) Specific graph structure: path, cycle, tree, expander, ladder, complete graphs, general\ngraphs, 4-regular graphs. b) Random graphs with a range of probability p of an edge between any two nodes.\nSmaller p samples sparse graphs and larger p samples dense graphs.
7) Physical reasoning of the n-body problem in the orbit setting with Graph Neural Networks.\nWe show that GNNs on the original features from previous works fail to extrapolate to unseen masses and distances. On the other hand, we show extrapolation can be achieved via an improved representation of the input edge features. We consider the following extrapolation regimes. a) Extrapolation on the masses of the objects. b) Extrapolation on the distances between objects.\nWe consider the following two input representation schemes to compare how the representation affects extrapolation. a) Original features. Following previous works on solving the n-body problem with GNNs,\nthe edge features are simply set to 0. b) Improved features. We show that although our improved edge features do not bring in new information,\nthey help extrapolation." }, { "heading": "C.1 LEARNING SIMPLE NON-LINEAR FUNCTIONS", "text": "Dataset details. We consider four tasks where the underlying functions are simple non-linear functions g : Rd → R. Given an input x ∈ Rd, the label is computed by y = g(x) for all x. We consider the following four families of simple functions g.\na) Quadratic functions g(x) = x>Ax. In each dataset, we randomly sample A. In the simplest case where A = I , g(x) = ∑d i=1 x 2 i .\nb) Cosine functions g(x) = ∑d i=1 cos (2π · xi).\nc) Square root functions g(x) = ∑d i=1 √ xi. Here, the domain X of x is restricted to the space\nin Rd with non-negative values in each dimension.\nd) L1 norm functions g(x) = |x|1 = ∑d i=1 |xi|.\nWe sample each dataset of a task by considering the following parameters:\na) The shape and support of training, validation, and test data distributions. i) Training, validation, and test data are uniformly sampled from a hyper-cube. Training\nand validation data are sampled from [−a, a]d with a ∈ {0.5, 1.0}, i.e., each dimension of x ∈ Rd is uniformly sampled from [−a, a].
Test data are sampled from [−a, a]d with a ∈ {2.0, 5.0, 10.0}.\nii) Training and validation data are uniformly sampled from a sphere, where every point has L2 distance r from the origin. We sample r from r ∈ {0.5, 1.0}. Then, we sample a random Gaussian vector q in Rd. We obtain the training or validation data x = q/‖q‖2 · r. This corresponds to uniform sampling from the sphere.\nTest data are sampled (non-uniformly) from a hyper-ball. We first sample r uniformly from [0.0, 2.0], [0.0, 5.0], and [0.0, 10.0]. Then, we sample a random Gaussian vector q in Rd. We obtain the test data x = q/‖q‖2 · r. This corresponds to (non-uniform) sampling from a hyper-ball in Rd.\nb) We sample 20, 000 training data, 1, 000 validation data, and 20, 000 test data. c) We sample input dimension d from {1, 2, 8}. d) For quadratic functions, we sample the entries of A uniformly from [−1, 1].\nModel and hyperparameter settings. We consider the multi-layer perceptron (MLP) architecture. MLP(x) = W (d) · σ ( W (d−1)σ ( ...σ ( W (1)x ))) We search the following hyper-parameters for MLPs:\na) Number of layers d from {2, 4}. b) Width of each W (k) from {64, 128, 512}. c) Initialization schemes.\ni) The default initialization in PyTorch. ii) The initialization scheme in neural tangent kernel theory, i.e., we sample entries of W (k) from N (0, 1) and scale the output after each W (k) by √(2/dk), where dk is the output\ndimension of W (k). d) Activation function σ is set to ReLU.\nWe train the MLP with the mean squared error (MSE) loss, and Adam and SGD optimizers. We consider the following hyper-parameters for training:\na) Initial learning rate from {5e − 2, 1e − 2, 5e − 3, 1e − 3}. The learning rate decays by a factor of 0.5 every 50 epochs.\nb) Batch size from {32, 64, 128}. c) Weight decay is set to 1e− 5. d) Number of epochs is set to 250.\n\nTest error and model selection.
For each dataset, architecture, and training hyper-parameter setting, we perform model selection via the validation set, i.e., we report the test error by selecting the epoch where the model achieves the best validation error. Note that our validation sets always have the same distribution as the training sets.\nWe train our models with the MSE loss. Because we sample test data from different ranges, the mean absolute percentage error (MAPE), which scales the error by the actual value, better measures the extrapolation performance\nMAPE = (1/n) ∑n i=1 ∣∣(Ai − Fi)/Ai∣∣ ,\nwhere Ai is the actual value and Fi is the predicted value. Hence, in our experiments, we also report the MAPE." }, { "heading": "C.2 R-SQUARED FOR OUT-OF-DISTRIBUTION DIRECTIONS", "text": "We perform linear regression to fit the predictions of MLPs along randomly sampled directions in out-of-distribution regions, and compute the R-squared (or R2) for these directions. This experiment is to validate Theorem 1 and show that convergence (to a linear function) is very fast in practice.\nDefinition. R-squared, also known as the coefficient of determination, assesses how strong the linear relationship is between input and output variables. The closer R-squared is to 1, the stronger the linear relationship is, with 1 being perfectly linear.\nDatasets and models. We perform the R-squared computation on over 2, 000 combinations of datasets, test/train distributions, and hyper-parameters, e.g., learning rate, batch size, MLP layer, width, initialization. These are described in Appendix C.1.\nComputation. For each combination of dataset and model hyper-parameters as described in Section C.1, we save the trained MLP model f : Rd → R. For each dataset and model combination, we then randomly sample 5, 000 directions via Gaussian vectors N (0, I).
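The direction sampling just described can be sketched as follows (illustrative code, not the actual experiment script): a Gaussian vector, normalized to unit length, gives a uniformly random direction on the sphere.

```python
import math
import random

# Sample a uniformly random unit direction in R^d by normalizing a
# standard Gaussian vector (the rotational invariance of N(0, I) makes
# the resulting direction uniform on the sphere).
def random_direction(d, rng):
    q = [rng.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in q))
    return [c / norm for c in q]

rng = random.Random(0)
w = random_direction(8, rng)  # one of the sampled directions
```

Each such w then defines a ray along which the trained MLP's predictions are collected and fit by linear regression.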
For each of these directions w, we compute the intersection point xw of direction w and the training data distribution support (specified by a hyper-sphere or hyper-cube; see Section C.1 for details).\nWe then collect 100 predictions of the trained MLP f along directionw (assumew is normalized) with {(\nxw + k · r 10 ·w ) , f ( xw + k · r 10 ·w )}100 k=0 , (75)\nwhere r is the range of training data distribution support (see Section C.1). We perform linear regression on these predictions in Eqn. 75, and obtain the R-squared.\nResults. We obtain the R-squared for each combination of dataset, model and training setting, and randomly sampled direction. For the tasks of learning the simple non-linear functions, we confirm that more than 96% of the R-squared results are above 0.99. This empirically confirms Theorem 1 and shows that the convergence rate is in fact fast in practice. Along most directions, MLP’s learned function becomes linear immediately out of the training data support." }, { "heading": "C.3 LEARNING LINEAR FUNCTIONS", "text": "Dataset details. We consider the tasks where the underlying functions are linear g : Rd → R. Given an input x ∈ Rd, the label is computed by y = g(x) = Ax for all x. For each dataset, we sample the following parameters\na) We sample 10, 000 training data, 1, 000 validation data, and 2, 000 test data.\nb) We sample input dimension d from {1, 2, 32}. c) We sample entries of A uniformly from [−a, a], where we sample a ∈ {5.0, 10.0}. d) The shape and support of training, validation, and test data distributions.\ni) Training, validation, and test data are uniformly sampled from a hyper-cube. Training and validation data are sampled from [−a, a]d with a ∈ {5.0, 10.0}, i.e., each dimension of x ∈ Rd is uniformly sampled from [−a, a]. Test data are sampled from [−a, a]d with a ∈ {20.0, 50.0}.\nii) Training and validation data are uniformly sampled from a sphere, where every point has L2 distance r from the origin. 
We sample r from r ∈ {5.0, 10.0}. Then, we sample a random Gaussian vector q in Rd. We obtain the training or validation data x = q/‖q‖2 · r. This corresponds to uniform sampling from the sphere.\nTest data are sampled (non-uniformly) from a hyper-ball. We first sample r uniformly from [0.0, 20.0] and [0.0, 50.0],. Then, we sample a random Gaussian vector q in Rd. We obtain the test data x = q/‖q‖2 · r. This corresponds to (non-uniform) sampling from a hyper-ball in Rd.\ne) We perform ablation study on how the training distribution support misses directions. The test distributions remain the same as in d).\ni) We restrict the first dimension of any training data xi to a fixed number 0.1, and randomly sample the remaining dimensions according to d).\nii) We restrict the first k dimensions of any training data xi to be positive. For input dimension 32, we only consider the hyper-cube training distribution, where we sample the first k dimensions from [0, a] and sample the remaining dimensions from [−a, a]. For input dimensions 1 and 2, we consider both hyper-cube and hyper-sphere training distribution by performing rejection sampling. For input dimension 2, we consider k from {1, 2}. For input dimension 32, we consider k from {1, 16, 32}.\niii) We restrict the first k dimensions of any training data xi to be negative. For input dimension 32, we only consider the hyper-cube training distribution, where we sample the first k dimensions from [−a, 0] and sample the remaining dimensions from [−a, a]. For input dimensions 1 and 2, we consider both hyper-cube and hyper-sphere training distribution by performing rejection sampling. For input dimension 2, we consider k from {1, 2}. For input dimension 32, we consider k from {1, 16, 32}.\nModel and hyperparameter settings. 
For the regression task, we search the same set of hyperparameters as for the simple non-linear functions (Section C.1). We report the test error with the same validation procedure as in Section C.1.\nExact computation with neural tangent kernel. Our experiments with MLPs validate Theorem 2, i.e., asymptotic extrapolation for neural networks trained in regular regimes. Here, we also validate Lemma 1, exact extrapolation in the finite-data regime, by training an infinitely-wide neural network. That is, we directly perform the kernel regression with the neural tangent kernel (NTK). This experiment is mainly of theoretical interest.\nWe sample the same test set as in our experiments with MLPs. For the training set, we sample 2d training examples according to the conditions in Lemma 1. Specifically, we first sample an orthogonal basis and its opposite vectors X = {ei,−ei}di=1. We then randomly sample 100 orthogonal transform matrices Q via the QR decomposition. Our training samples are QX , i.e., we multiply each point in X by Q. This gives 100 training sets with 2d data points satisfying the condition in Lemma 1.\nWe perform kernel regression on these training sets using a two-layer neural tangent kernel (NTK). Our code for exact computation of NTK is adapted from Arora et al. (2020); Novak et al. (2020). We verify that the test losses are all precisely 0, up to machine precision. This empirically confirms Lemma 1.\nNote that, due to differences in hyper-parameter settings across NTK implementations, the implementation by Arora et al. (2020) is assumed in order to reproduce our experiments and achieve zero test error." }, { "heading": "C.4 MLPS WITH COSINE, QUADRATIC, AND TANH ACTIVATION", "text": "This section describes the experimental settings for extrapolation experiments for MLPs with cosine, quadratic, and tanh activation functions. We train MLPs to learn the following functions:\na) Quadratic function g(x) = x>Ax, where A is a randomly sampled matrix.
b) Cosine function g(x) = ∑d i=1 cos(2π · xi).\nc) Hyperbolic tangent function g(x) = ∑d i=1 tanh(xi).\nd) Linear function g(x) = Wx+ b.\nDataset details. We use 20,000 training, 1,000 validation, and 20,000 test data. For quadratic, we sample input dimension d from {1, 8}, training and validation data from [−1, 1]d, and test data from [−5, 5]d. For cosine, we sample input dimension d from {1, 2}, training and validation data from [−100, 100]d, and test data from [−200, 200]d. For tanh, we sample input dimension d from {1, 8}, training and validation data from [−100, 100]d, and test data from [−200, 200]d. For linear, we use a subset of datasets from Appendix C.3: 1 and 8 input dimensions with hyper-cube training distributions.\nModel and hyperparameter settings. We use the same hyperparameters from Appendix C.1, except we fix the batch size to 128, as the batch size has minimal impact on models. MLPs with cosine activation are hard to optimize, so we only report models with training MAPE less than 1." }, { "heading": "C.5 MAX DEGREE", "text": "Dataset details. We consider the task of finding the maximum degree on a graph. Given any input graph G = (V,E), the label is computed by the underlying function y = g(G) = max\nu∈G\n∑ v∈N (u) 1.\nFor each dataset, we sample the graphs and node features with the following parameters:\na) Graph structure for training and validation sets. For each dataset, we consider one of the following graph structures: path graphs, cycles, ladder graphs, 4-regular random graphs, complete graphs, random trees, expanders (here we use random graphs with p = 0.8 as they are expanders with high probability), and general graphs (random graphs with p = 0.1 to 0.9 with equal probability for a broad range of graph structure). We use the networkx library for sampling graphs.\nb) Graph structure for test set.
We consider the general graphs (random graphs with p = 0.1 to 0.9 with equal probability).\nc) The number of vertices of graphs |V | for training and validation sets are sampled uniformly from [20...30]. The number of vertices of graphs |V | for test set is sampled uniformly from [50..100].\nd) We consider two schemes for node features. i) Identical features. All nodes in training, validation and set sets have uniform feature 1.\nii) Spurious (continuous) features. Node features in training and validation sets are sampled uniformly from [−5.0, 5.0]3, i.e., a three-dimensional vector where each dimension is sampled from [−5.0, 5.0]. There are two schemes for test sets, in the first case we do not extrapolate node features, so we sample node features uniformly from [−5.0, 5.0]3. In the second case we extrapolate node features, we sample node features uniformly from [−10.0, 10.0]3.\ne) We sample 5, 000 graphs for training, 1, 000 graphs for validation, and 2, 500 graphs for testing.\nModel and hyperparameter settings. We consider the following Graph Neural Network (GNN) architecture. Given an input graph G, GNN learns the output hG by first iteratively aggregating and transforming the neighbors of all node vectors h(k)u (vector for node u in layer k), and perform a max or sum-pooling over all node features hu to obtain hG. Formally, we have\nh(k)u = ∑\nv∈N (u)\nMLP(k) ( h(k−1)v , h (k−1) u ) , hG = MLP(K+1) ( graph-pooling{h(K)u : u ∈ G} ) .\n(76)\nHere, N (u) denotes the neighbors of u, K is the number of GNN iterations, and graph-pooling is a hyper-parameter with choices as max or sum. h(0)u is the input node feature of node u. We search the following hyper-parameters for GNNs\na) Number of GNN iterations K is 1. b) Graph pooling is from max or sum. c) Width of all MLPs are set to 256.\nd) The number of layers for MLP(k) with k = 1..K are set to 2. 
The number of layers for MLP(K+1) is set to 1.\nWe train the GNNs with the mean squared error (MSE) loss, and Adam and SGD optimizer. We search the following hyper-parameters for training\na) Initial learning rate is set to 0.01.\nb) Batch size is set to 64.\nc) Weight decay is set to 1e− 5. d) Number of epochs is set to 300 for graphs with continuous node features, and 100 for graphs\nwith uniform node features.\nTest error and model selection. For each dataset and architecture, training hyper-parameter setting, we perform model selection via validation set, i.e., we report the test error by selecting the epoch where the model achieves the best validation error. Note that our validation sets always have the same distribution as the training sets. Again, we report the MAPE for test error as in MLPs." }, { "heading": "C.6 SHORTEST PATH", "text": "Dataset details. We consider the task of finding the length of the shortest path on a graph, from a given source to target nodes. Given any graph G = (V,E), the node features, besides regular node features, encode whether a node is source s, and whether a node is target t. The edge features are a scalar representing the edge weight. For unweighted graphs, all edge weights are 1. Then the label y = g(G) is the length of the shortest path from s to t on G.\nFor each dataset, we sample the graphs and node, edge features with the following parameters\na) Graph structure for training and validation sets. For each dataset, we consider one of the following graph structure: path graphs, cycles, ladder graphs, 4-regular random graphs, complete graphs, random trees, expanders (here we use random graphs with p = 0.6 which are expanders with high probability), and general graphs (random graphs with p = 0.1 to 0.9 with equal probability for a broad range of graph structure). We use the networkx library for sampling graphs.\nb) Graph structure for test set. 
We consider the general graphs (random graphs with p = 0.1 to 0.9 with equal probability).\nc) The number of vertices of graphs |V | for training and validation sets are sampled uniformly from [20...40]. The number of vertices of graphs |V | for the test set is sampled uniformly from [50..70].\nd) We consider the following scheme for node and edge features. All edges have continuous weights. Edge weights for training and validation graphs are sampled from [1.0, 5.0]. There are two schemes for test sets. In the first case, we do not extrapolate edge weights, so we sample edge weights uniformly from [1.0, 5.0]. In the second case, where we extrapolate edge weights, we sample edge weights uniformly from [1.0, 10.0]. All node features are [h, I(v = s), I(v = t)] with h sampled from [−5.0, 5.0].\ne) After sampling a graph and edge weights, we sample a source s and a target t by randomly sampling pairs s, t and selecting the first pair s, t whose shortest path involves at most 3 hops. This enables us to solve the task using GNNs with 3 iterations.\nf) We sample 10, 000 graphs for training, 1, 000 graphs for validation, and 2, 500 graphs for testing.\nWe also consider an ablation study of training on random graphs with different p. We consider p = 0.05..1.0 and report the test error curve. The other parameters are the same as described above.\nModel and hyperparameter settings. We consider the following Graph Neural Network (GNN) architecture. Given an input graph G, the GNN learns the output hG by first iteratively aggregating and\ntransforming the neighbors of all node vectors h(k)u (vector for node u in layer k), and then performing a graph-level pooling (here, min) over all node features hu to obtain hG. Formally, we have\nh(k)u = min v∈N (u)\nMLP(k) ( h(k−1)v , h (k−1) u , w(u,v) ) , hG = MLP(K+1) ( min u∈G hu ) . (77)\nHere, N (u) denotes the neighbors of u, K is the number of GNN iterations, and for neighbor aggregation we run both min and sum. h(0)u is the input node feature of node u.
$w_{(u,v)}$ is the input edge feature of edge (u, v). We search the following hyper-parameters for GNNs:
a) Number of GNN iterations K is set to 3.
b) Graph pooling is set to min.
c) Neighbor aggregation is selected from min and sum.
d) Width of all MLPs is set to 256.
e) The number of layers for MLP(k) with k = 1..K is set to 2. The number of layers for MLP(K+1) is set to 1.
We train the GNNs with the mean squared error (MSE) loss and the Adam and SGD optimizers. We consider the following hyper-parameters for training:
a) Initial learning rate is set to 0.01.
b) Batch size is set to 64.
c) Weight decay is set to 1e−5.
d) Number of epochs is set to 250.
We perform the same model selection and validation as in Section C.5." }, { "heading": "C.7 N-BODY PROBLEM", "text": "Task description. The n-body problem asks a neural network to predict how n stars in a physical system evolve according to physical laws. That is, we train neural networks to predict properties of the future state of each star at the next frame, e.g., 0.001 seconds later.
Mathematically, in an n-body system $S = \{X_i\}_{i=1}^n$, such as a solar system, all n stars exert distance- and mass-dependent gravitational forces on each other, so there are n(n−1) relations or forces in the system. Suppose $X_i$ at time t is at position $x_i^t$ and has velocity $v_i^t$. The overall force a star $X_i$ receives from the other stars is determined by physical laws as follows:

$$F_i^t = G \cdot \sum_{j \neq i} \frac{m_i \times m_j}{\|x_i^t - x_j^t\|_2^3} \cdot \big( x_j^t - x_i^t \big), \tag{78}$$

where G is the gravitational constant and $m_i$ is the mass of star $X_i$. The acceleration $a_i^t$ is then determined by the net force $F_i^t$ and the mass $m_i$:

$$a_i^t = F_i^t / m_i \tag{79}$$

Suppose the velocity of star $X_i$ at time t is $v_i^t$. Then, assuming the time step dt, i.e., the difference between time frames, is sufficiently small, the velocity at the next time frame t+1 can be approximated by

$$v_i^{t+1} = v_i^t + a_i^t \cdot dt.$$
(80)
Given $m_i$, $x_i^t$, and $v_i^t$, the task asks the neural network to predict $v_i^{t+1}$ for all stars $X_i$. We consider two extrapolation schemes:
a) The distances between stars $\|x_i^t - x_j^t\|_2$ are out-of-distribution for the test set, i.e., sampled from different ranges than in the training set.
b) The masses of stars $m_i$ are out-of-distribution for the test set, i.e., sampled from different ranges than in the training set.
Here, we use a physics engine that we code in Python to simulate and sample the inputs and labels. We describe the dataset details next.
Dataset details. We first describe the simulation and sampling of our training set. We sample 100 videos of n-body system evolution, each with 500 rollout steps, i.e., time steps. We consider the orbit situation: there is one huge center star and several other stars. We sample the initial states, i.e., positions, velocities, masses, accelerations, etc., according to the following parameters.
a) The mass of the center star is 100 kg.
b) The masses of the other stars are sampled from [0.02, 9.0] kg.
c) The number of stars is 3.
d) The initial position of the center star is (0.0, 0.0).
e) The initial positions $x_i^t$ of the other stars are sampled at random angles, with a distance in [10.0, 100.0] m.
f) The velocity of the center star is 0.
g) The velocities of the other stars are perpendicular to the gravitational force between the center star and themselves. The magnitude is precisely determined by physical laws to ensure the initial state is an orbit system.
For each video, after we get the initial states, we roll out the next frames according to the physics engine described above. We perform rejection sampling of the frames to ensure that all pairwise distances between stars in a frame are at least 30 m. We guarantee that there are 10,000 data points in the training set.
The validation set has the same sampling and simulation parameters as the training set.
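As a concrete illustration, one Euler step of the rollout in Equations (78)–(80) can be sketched in plain Python. This is our own simplification, not the actual engine used for sampling; the function name `nbody_step` and the 2-D tuple representation of positions and velocities are assumptions:

```python
import math

def nbody_step(x, v, m, dt, G=6.674e-11):
    """One Euler step: forces (Eq. 78), accelerations (Eq. 79), velocities (Eq. 80)."""
    n = len(m)
    a = []
    for i in range(n):
        fx = fy = 0.0
        for j in range(n):
            if j == i:
                continue
            # displacement from star i to star j
            rx, ry = x[j][0] - x[i][0], x[j][1] - x[i][1]
            dist3 = math.hypot(rx, ry) ** 3
            fx += G * m[i] * m[j] / dist3 * rx               # Eq. (78)
            fy += G * m[i] * m[j] / dist3 * ry
        a.append((fx / m[i], fy / m[i]))                     # Eq. (79)
    # advance velocities and positions by one small time step dt
    v_next = [(v[i][0] + a[i][0] * dt, v[i][1] + a[i][1] * dt) for i in range(n)]  # Eq. (80)
    x_next = [(x[i][0] + v[i][0] * dt, x[i][1] + v[i][1] * dt) for i in range(n)]
    return x_next, v_next
```

Repeating this step 500 times from the sampled initial states produces one video; frames violating the 30 m minimum-distance constraint would then be rejected.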
We have 2,500 data points in the validation set.
For the test set, we consider two datasets, with OOD distances and OOD masses respectively. We have 5,000 data points for each dataset.
a) We sample the distance-OOD test set to ensure all pairwise distances between stars in a frame are from [1..20] m, with in-distribution masses.
b) We sample the mass-OOD test set as follows:
i) The mass of the center star is 200 kg, i.e., twice that in the training set.
ii) The masses of the other stars are sampled from [0.04, 18.0] kg, compared to [0.02, 9.0] kg in the training set.
iii) The distances are in-distribution, i.e., the same sampling process as the training set.
Model and hyperparameter settings. We consider the following one-iteration Graph Neural Network (GNN) architecture, a.k.a. an Interaction Network. Given a collection of stars $S = \{X_i\}_{i=1}^n$, our GNN runs on a complete graph whose nodes are the stars $X_i$. The GNN learns the star (node) representations by aggregating and transforming the interactions (forces) with all other node vectors:

$$o_u = \mathrm{MLP}^{(2)}\Big( \sum_{v \in S \setminus \{u\}} \mathrm{MLP}^{(1)}\big( h_v, h_u, w_{(u,v)} \big) \Big). \tag{81}$$

Here, $h_v$ is the input feature of node v, including mass, position, and velocity: $h_v = (m_v, x_v, v_v)$. $w_{(u,v)}$ is the input edge feature of edge (u, v). The loss is computed and backpropagated via the MSE loss

$$\big\| [o_1, \ldots, o_n] - [\mathrm{ans}_1, \ldots, \mathrm{ans}_n] \big\|_2,$$

where $o_i$ denotes the output of the GNN for node i, and $\mathrm{ans}_i$ denotes the true label for node i in the next frame.
We search the following hyper-parameters for GNNs:
a) Number of GNN iterations is set to 1.
b) Width of all MLPs is set to 128.
c) The number of layers for MLP(1) is set to 4. The number of layers for MLP(2) is set to 2.
d) We consider two representations of the edge/relation features $w_{(i,j)}$:
i) The first is simply 0.
ii) The better representation, which makes the underlying target function more linear, is

$$w_{(i,j)} = \frac{m_j}{\|x_i^t - x_j^t\|_2^3} \cdot \big( x_j^t - x_i^t \big).$$

We train the GNN with the mean squared error (MSE) loss and the Adam optimizer.
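A minimal sketch of the one-iteration aggregation in Equation (81) follows; `mlp1` and `mlp2` are placeholders standing in for the trained 4-layer and 2-layer MLPs, and all names here are our own:

```python
def interaction_network(nodes, edge_feat, mlp1, mlp2):
    """One-iteration GNN (Eq. 81): o_u = MLP2( sum over v != u of MLP1(h_v, h_u, w_(u,v)) )."""
    outputs = {}
    for u, h_u in nodes.items():
        msg = None
        for v, h_v in nodes.items():
            if v == u:
                continue
            m = mlp1(h_v, h_u, edge_feat[(u, v)])  # per-pair interaction (force-like term)
            msg = m if msg is None else [a + b for a, b in zip(msg, m)]  # sum over v != u
        outputs[u] = mlp2(msg)  # per-star readout
    return outputs
```

With the second edge representation $w_{(i,j)}$, MLP(1) only needs to learn a roughly linear function of its inputs, which is the sense in which that representation makes the target function "more linear".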
We search the following hyper-parameters for training:
a) Initial learning rate is set to 0.005; the learning rate decays by 0.5 every 50 epochs.
b) Batch size is set to 32.
c) Weight decay is set to 1e−5.
d) Number of epochs is set to 2,000.
D VISUALIZATION AND ADDITIONAL EXPERIMENTAL RESULTS
D.1 VISUALIZATION RESULTS
In this section, we show additional visualization results comparing the MLP’s learned function out of the training distribution (in black) vs. the underlying true function (in grey). We color the predictions within the training distribution in blue.
In general, the MLP’s learned functions agree with the underlying true functions in the training range (blue). This is explained by in-distribution generalization arguments. Out of distribution, the MLP’s learned functions become linear along directions from the origin. We explain this OOD directional-linearity behavior in Theorem 1.
Finally, we show additional experimental results for graph-based reasoning tasks." }, { "heading": "D.2 EXTRA EXPERIMENTAL RESULTS", "text": "In this section, we show additional experimental results." } ]
2021
HOW NEURAL NETWORKS EXTRAPOLATE: FROM FEEDFORWARD TO GRAPH NEURAL NETWORKS
SP:8c168e9fb22c78e446487b4c0c4b3a1e27a716aa
[ "This paper proposes STRATA, a simple adversarial attack against the code2seq model. The key idea is to replace local variable names in the input code with other randomly chosen sub-tokens with embedding vectors of relatively high L2 norms. Meanwhile, they observe that such tokens often appear frequently in the training set; thus, alternatively, they can simply use frequently appearing tokens as the target to perform the attacks. In this way, they can attack the model in the black-box scenario, without knowledge of the model parameters and the training data, as long as they can roughly approximate the frequency distribution of different code sub-tokens in the training set. They evaluate their approach on code2seq models trained on Java code, and compare with existing attacks against code2seq models. They first show that the 5-same attack, i.e., repeating a sub-token 5 times and concatenating the copies as the new local variable name, is the most effective attack. This attack decreases F1 scores more than the baseline attack from prior work. In addition, they show that by adding STRATA adversarial examples for adversarial training, the new model becomes more robust to their proposed attacks.", "This paper proposes STRATA, a novel adversarial attack against source code models, more precisely against code2seq. The attack strategy can be applied under black- or white-box threat models, targeted or untargeted. Adversarial training based on STRATA adversarial examples is proposed to render the models robust. Experiments are performed on Java code datasets of variable sizes." ]
Adversarial examples are imperceptible perturbations in the input to a neural model that result in misclassification. Generating adversarial examples for source code poses an additional challenge compared to the domains of images and natural language, because source code perturbations must adhere to strict semantic guidelines so the resulting programs retain the functional meaning of the code. We propose a simple and efficient gradient-free method for generating state-of-the-art adversarial examples on models of code that can be applied in a white-box or black-box setting. Our method generates untargeted and targeted attacks, and empirically outperforms competing gradient-based methods with less information and less computational effort.
[]
[ { "authors": [ "Abdullah Al-Dujaili", "Alex Huang", "Erik Hemberg", "Una-May O’Reilly" ], "title": "Adversarial Deep Learning for Robust Detection of Binary Encoded Malware", "venue": "IEEE Security and Privacy Workshops (SPW),", "year": 2018 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Premkumar Devanbu", "Charles Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "Uri Alon", "Shaked Brody", "Omer Levy", "Eran Yahav" ], "title": "code2seq: Generating sequences from structured representations of code", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Uri Alon", "Meital Zilberstein", "Omer Levy", "Eran Yahav" ], "title": "code2vec: Learning distributed representations of code", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2019 }, { "authors": [ "Moustafa Alzantot", "Yash Sharma Sharma", "Ahmed Elgohary", "Bo-Jhang Ho", "Mani Srivastava", "Kai-Wei Chang" ], "title": "Generating natural language adversarial examples", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Yonatan Belinkov", "Yonatan Bisk" ], "title": "Synthetic and natural noise both break neural machine translation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Réjean Ducharme", "Pascal Vincent", "Christian Jauvin" ], "title": "A neural probabilistic language model", "venue": "Journal of machine learning research,", "year": 2003 }, { "authors": [ "Pavol Bielik", "Martin Vechev" ], "title": "Adversarial robustness for code", "venue": "arXiv preprint arXiv:2002.04694,", "year": 2020 }, { "authors": [ "Minhao Cheng", "Jinfeng Yi", "Pin-Yu Chen", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with 
Adversarial Examples", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Javid Ebrahimi", "Anyi Rao", "Daniel Lowd", "Dejing Dou" ], "title": "Hotflip: White-box adversarial examples for text classification", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2018 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Kathrin Grosse", "Nicolas Papernot", "Praveen Manoharan", "Michael Backes", "Patrick McDaniel" ], "title": "Adversarial Perturbations Against Deep Neural Networks for Malware Classification", "venue": "[cs],", "year": 2016 }, { "authors": [ "Arvinder Kaur", "Kamaldeep Kaur" ], "title": "An Empirical Study of Robustness and Stability of Machine Learning Classifiers in Software Defect Prediction", "venue": "Advances in Intelligent Informatics,", "year": 2015 }, { "authors": [ "Bojan Kolosnjaji", "Ambra Demontis", "Battista Biggio", "Davide Maiorca", "Giorgio Giacinto", "Claudia Eckert", "Fabio Roli" ], "title": "Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables", "venue": "In 2018 26th European Signal Processing Conference (EUSIPCO),", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Paul Michel", "Xian Li", "Graham Neubig", "Juan Pino" ], "title": "On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models", "venue": "In Proceedings of the 2019 Conference of the North,", "year": 2019 }, { "authors": [ "Seyed-Mohsen 
Moosavi-Dezfooli", "Alhussein Fawzi", "Omar Fawzi", "Pascal Frossard" ], "title": "Universal Adversarial Perturbations", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "IEEE European symposium on security and privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ananthram Swami", "Richard Harang" ], "title": "Crafting adversarial input sequences for recurrent neural networks", "venue": "IEEE Military Communications Conference,", "year": 2016 }, { "authors": [ "Steven T Piantadosi" ], "title": "Zipf’s word frequency law in natural language: A critical review and future directions", "venue": "Psychonomic bulletin & review,", "year": 2014 }, { "authors": [ "Erwin Quiring", "Alwin Maier", "Konrad Rieck" ], "title": "Misleading authorship attribution of source code using adversarial learning", "venue": "In 28th {USENIX} Security Symposium ({USENIX} Security 19),", "year": 2019 }, { "authors": [ "Md. Rafiqul Islam Rabin", "Mohammad Amin Alipour" ], "title": "Evaluation of Generalizability of Neural Program Analyzers under Semantic-Preserving Transformations", "venue": "[cs],", "year": 2020 }, { "authors": [ "Goutham Ramakrishnan", "Jordan Henkel", "Zi Wang", "Aws Albarghouthi", "Somesh Jha", "Thomas Reps" ], "title": "Semantic Robustness of Models of Source Code. 
arXiv:2002.03043 [cs, stat", "venue": "URL http://arxiv.org/abs/2002.03043", "year": 2020 }, { "authors": [ "Veselin Raychev", "Pavol Bielik", "Martin Vechev" ], "title": "Probabilistic model for code with decision trees", "venue": "ACM SIGPLAN Notices,", "year": 2016 }, { "authors": [ "Henry Gordon Rice" ], "title": "Classes of recursively enumerable sets and their decision problems", "venue": "Transactions of the American Mathematical Society,", "year": 1953 }, { "authors": [ "Roei Schuster", "Congzheng Song", "Eran Tromer", "Vitaly Shmatikov" ], "title": "You autocomplete me: Poisoning vulnerabilities in neural code completion", "venue": "arXiv preprint arXiv:2007.02220,", "year": 2020 }, { "authors": [ "Octavian Suciu", "Scott E. Coull", "Jeffrey Johns" ], "title": "Exploring Adversarial Examples in Malware", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "USA CA", "May" ], "title": "IEEE", "venue": "ISBN 978-1-72813-508-3. doi: 10.1109/SPW.2019.00015. URL", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna Estrach", "Dumitru Erhan", "Ian" ], "title": "Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Hongyu Zhang" ], "title": "Exploring regularity in source code: Software science and zipf’s law", "venue": "[cs],", "year": 2020 }, { "authors": [ "Ramakrishnan" ], "title": "optimization, oftentimes require a GPU for efficient implementation. STRATA, however, can be implemented to run quickly on even CPU-only machines. 
After an initial pre-processing step to mark local variable names for easy replacement which took less than five minutes on our 24-core CPU-only machine, we were able to construct adversarial examples using STRATA on a dataset", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Although machine learning has been shown to be effective at a wide variety of tasks across computing, statistical models are susceptible to adversarial examples. Adversarial examples, first identified in the continuous domain by Szegedy et al. (2014), are imperceptible perturbations to input that result in misclassification. Researchers have developed effective techniques for adversarial example generation in the image domain (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2017; Papernot et al., 2016a) and in the natural language domain (Alzantot et al., 2018; Belinkov & Bisk, 2018; Cheng et al., 2020; Ebrahimi et al., 2018; Michel et al., 2019; Papernot et al., 2016b), although work in the source code domain is less extensive (see Related Work). The development of adversarial examples for deep learning models has progressed in tandem with the development of methods to make models which are robust to such adversarial attacks, though much is still being learned about model robustness (Goodfellow et al., 2015; Madry et al., 2018; Shafahi et al., 2019; Wong et al., 2019).\nThe threat of adversarial examples poses severe risks for ML-based malware defenses (Al-Dujaili et al., 2018; Grosse et al., 2016; Kaur & Kaur, 2015; Kolosnjaji et al., 2018; Kreuk et al., 2019; Suciu et al., 2019), and introduces the ability of malicious actors to trick ML-based code-suggestion tools to suggest bugs to an unknowing developer (Schuster et al., 2020). Thus, developing state-of-the-art attacks and constructing machine learning models that are robust to these attacks is important for computer security applications. Generating adversarial examples for models of code poses a challenge compared to the image and natural language domain, since the input data is discrete and textual and adversarial perturbations must abide by strict syntactical rules and semantic requirements. 
The CODE2SEQ model is a state-of-the-art model of code that has been used to explore adversarial example design and robustness methods on models of code (Rabin & Alipour, 2020; Ramakrishnan et al., 2020).
In this work, we propose the Simple TRAined Token Attack (STRATA), a novel and effective method for generating black-box and white-box adversarial attacks against CODE2SEQ. Our method replaces local variable names with high-impact candidates that are identified by dataset statistics. It can also be used effectively for targeted attacks, where the perturbation targets a specific (altered) output classification. Further, we demonstrate that adversarial training, that is, injecting adversarial examples into CODE2SEQ’s training set, improves the robustness of CODE2SEQ to adversarial attacks.
We evaluate STRATA on CODE2SEQ, though we hypothesize that the method can be applied to other models. The principles underlying STRATA apply not only to models of source code, but also to natural language models in contexts where the vocabulary is large and there is limited training data.
STRATA has a number of advantages compared to previously proposed adversarial attack strategies:
1. STRATA constructs state-of-the-art adversarial examples using a gradient-free approach that outperforms gradient-based methods;
2. STRATA generates white-box adversarial examples that are extremely effective; black-box attacks that use dictionaries created from unrelated code datasets perform similarly (Appendix C);
3. STRATA does not require the use of a GPU and can be executed more quickly than competing gradient-based attacks (Appendix D.1);
4. STRATA is the only available method (known to the authors at present) which performs targeted attacks on CODE2SEQ, which is the current state-of-the-art for models of code." }, { "heading": "2 MOTIVATION", "text": "CODE2SEQ, developed by Alon et al.
(2019a), is an encoder-decoder model inspired by SEQ2SEQ (Sutskever et al., 2014); it operates on code rather than natural language. CODE2SEQ is the state-ofthe-art code model, and therefore it represents a good target for adversarial attacks and adversarial training. The model is tasked to predict method names from the source code body of a method. The model considers both the structure of an input program’s Abstract Syntax Trees (ASTs) as well as the tokens corresponding to identifiers such as variable names, types, and invoked method names. To reduce the vocabulary size, identifier tokens are split into subtokens by commonly used delimiters such as camelCase and under_scores. In this example, subtokens would include “camel” and “case” and “under” and “scores”. CODE2SEQ encodes subtokens into distributed embedding vectors. These subtoken embedding vectors are trained to capture semantic structure, so nearby embedding vectors should correspond to semantically similar subtokens (Bengio et al., 2003). In this paper, we distinguish between subtoken embedding vectors and token embedding vectors. Subtoken embedding vectors are trained model parameters. Token embedding vectors are computed as a sum of the embedding vectors of the constituent subtokens. If the token contains more than five subtokens, only the first five are summed, as per the CODE2SEQ architecture. The full description and architecture of the CODE2SEQ model is given in the original paper by Alon et al. (2019a).\nThe CODE2SEQ model only updates a subtoken embedding as frequently as that subtoken appears during training, which is proportional to its representation in the training dataset. However, the training datasets have very large vocabularies consisting not only of standard programming language keywords, but also a huge quantity of neologisms. 
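The subtoken splitting described above can be sketched with a regular expression (our own illustration; CODE2SEQ's actual preprocessing may differ in edge cases such as consecutive capitals):

```python
import re

def split_subtokens(token):
    """Split an identifier into lowercase subtokens on under_scores and camelCase boundaries."""
    # split on literal underscores, or at a zero-width lower-to-upper transition
    parts = re.split(r'_|(?<=[a-z0-9])(?=[A-Z])', token)
    return [p.lower() for p in parts if p]
```

For example, `split_subtokens("camelCase")` yields `["camel", "case"]`, and `split_subtokens("under_scores")` yields `["under", "scores"]`.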
The frequency at which subtokens appear in the CODE2SEQ java-large training set varies over many orders of magnitude, with the least common subtokens appearing fewer than 150 times, and the most common over $10^8$ times.
Thus, subtoken embedding vectors corresponding to infrequently-appearing subtokens will be modified by the training procedure much less often than those of common subtokens. Figure 1a demonstrates this phenomenon, showing a disparity between the L2 norms of frequent and infrequently-appearing subtoken embedding vectors.
We confirm this empirically. When we initialized embedding vectors uniformly at random and then trained the model as normal, as per Alon et al. (2019a), we found that the vast majority of final, i.e., post-training, embedding vectors change very little from their initialization values. In fact, 90% of embedding tokens had an L2 distance of less than 0.05 between the initial vector and the final, post-training vector when trained on a Java dataset. About 10% of subtokens had a large L2 distance between the initial and final embeddings; these subtokens were more frequent in the training dataset and had embedding vectors with a notably larger final L2 magnitude (Figure 1).
The observation that high-L2-norm embedding vectors are associated with subtokens that appear sufficiently frequently in the dataset motivates the core intuitions of our attack.¹ We show in this paper that subtokens with high-L2-norm embedding vectors can be used for effective adversarial examples, which are constructed as follows:
¹We note very-high-frequency subtokens have small L2 norms. Examples of these very-high-frequency subtokens include: get, set, string, and void, which appear so often as to not be useful for classification. Despite the fact that these subtokens are not good adversarial candidates for STRATA, there are so few of them that we expect them to have minimal influence on the effectiveness of our attack.
1.
To maximize adversarial effectiveness in a white-box setting, we should use tokens with high-L2-norm embedding vectors as local variable name replacements. We confirm this empirically in the Experiments section.
2. In the absence of information about the L2 norms of embedding vectors, we can isolate high-L2-norm subtokens for local variable name replacement by selecting tokens which appear in the training dataset often enough to be well trained. This is empirically confirmed by the large intersection of high-L2-norm subtokens and subtokens with high frequency." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 DATASET DEFINITIONS AND CONSIDERATIONS", "text": "We evaluate our attack on four datasets that are used for training different CODE2SEQ models. There are three non-overlapping Java datasets: java-small (700k examples), java-medium (4M examples), and java-large (16M examples) (Alon et al., 2019a), and one Python dataset, python150k (Raychev et al., 2016). We disambiguate the trained CODE2SEQ models for each dataset by denoting them CODE2SEQ-SM, CODE2SEQ-MD, CODE2SEQ-LG, and CODE2SEQ-PY for models trained on java-small, -medium, -large, and python150k respectively. Many of our experiments are evaluated on all four models; however, experiments that require adversarial training are only evaluated on CODE2SEQ-SM, for computational feasibility." }, { "heading": "3.2 THE ATTACK", "text": "Traditional adversarial attacks on discrete spaces involve searching the discrete space for semantically similar perturbations that yield a misclassification. Searching the space of all possible valid discrete changes in source code is often intractable or even impossible (Rice, 1953). However, there are strategies to reduce the search space.
For example, perturbations may be limited to a small number of predefined operations which are known to be semantically equivalent, such as inserting dead code, replacing local variable names, or replacing expressions with known equivalent expressions. Gradient-based attacks on the embedding space may also be used in order to optimize the search of the space itself (Ramakrishnan et al., 2020; Yefet et al., 2020). However, gradient-based attacks are computationally expensive and rely heavily on knowledge of the exact parameters of the model.
We propose STRATA, which replaces local variable names with high-impact subtokens to generate adversarial examples. STRATA leverages the fact that only a relatively small number of subtoken embeddings are critical for the classification task performed by the CODE2SEQ model.
In Section 2, we presented two ways to identify high-impact subtokens. STRATA will use these to replace local variable names. Recall that the model composes these subtokens into tokens by summing the first five constituent subtoken embedding vectors. We wish to maximize the L2 norm of the resulting token, while minimizing the semantic change. We propose three strategies:
1. single: pick a single subtoken as the token;
2. 5-diff: pick five different (not necessarily unique) subtokens and concatenate them, which will have a higher expected L2 norm than single;
3. 5-same: pick a single subtoken, and repeat the subtoken five times to form a token, which will have the largest expected L2 norm, by the triangle inequality.²
We subjectively propose that single is the smallest and most realistic semantic change, 5-same is the largest change and the “best-case" for an adversarial example, and 5-diff represents an intermediate attack strength.
For a given method, STRATA generates an untargeted perturbation as follows:
1. Select one random local variable v;
2.
Choose an adversarial token v∗ appropriately, using the chosen concatenation strategy (single, 5-diff, or 5-same). For white-box attacks, choose each subtoken from a high-L2-norm vocabulary (top-n by L2 norm). For black-box attacks, choose each subtoken with sufficiently high frequency (top-n by frequency). We discuss the optimal cutoff values (n) for L2 and frequency in Section 3.4.
3. Replace v with v∗.
For attacks on the Python dataset, since determining whether a variable is local or non-local is not always possible by looking at only the body of the method, we treat all variables as local.
To perform targeted attacks in which we want the output to include a particular subtoken t, we perform the same steps as the untargeted attack, and choose v∗ to be a 5-same concatenation of t." }, { "heading": "3.3 DATASET-AGNOSTIC BLACK-BOX ATTACKS", "text": "STRATA can generate effective adversarial attacks even without the training dataset. We can determine subtoken frequency statistics from a different (potentially non-overlapping) dataset. We empirically confirm that STRATA can use non-overlapping datasets (java-small, java-medium, and java-large) in this way to attack CODE2SEQ models that have been trained on a different Java dataset. We conclude that the subtoken distributions of non-overlapping Java datasets are sufficiently similar for STRATA to be effective (Appendix C). This confirms that STRATA can be applied in a completely black-box setting without knowledge of the model parameters or training dataset." }, { "heading": "3.4 IMPLEMENTATION", "text": "We assayed the effectiveness of the method by applying a simple local-variable token replacement attack. We select a single locally-declared variable at random and rename it with an adversarial token which does not conflict with another name. Because the initial variable is locally declared, we know that changing the name will have no effect elsewhere in the program, and will have no behavioral effect.
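The replacement procedure above can be sketched as follows. This is a simplified stand-in for the paper's implementation: `strata_rename` and the regex-based whole-word rename are our own (a real attack renames via the AST), and `vocab_by_freq` is assumed to be sorted by descending training-set frequency:

```python
import random
import re

def strata_rename(method_src, local_vars, vocab_by_freq, n=1000, strategy="5-same"):
    """Replace one random local variable with a high-frequency adversarial token."""
    v = random.choice(local_vars)                 # step 1: pick a random local variable
    sub = random.choice(vocab_by_freq[:n])        # step 2: pick from the top-n subtokens
    if strategy == "5-same":
        new_name = sub * 5                        # repeat the same subtoken five times
    elif strategy == "5-diff":
        new_name = "".join(random.choices(vocab_by_freq[:n], k=5))
    else:                                         # "single"
        new_name = sub
    # step 3: whole-word rename (an AST-aware implementation avoids false matches)
    return re.sub(rf"\b{re.escape(v)}\b", new_name, method_src)
```

A real attack would also parse the method to collect `local_vars` and ensure the new name does not collide with existing identifiers.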
The token replacement, however, can effectively attack CODE2SEQ; an example is shown in the Appendix.
The java-small test set consists of ~57,000 examples; we exclude from our testing dataset all methods that cannot be attacked by our method, i.e., all methods without local variables, leaving ~40,000 examples. The python150k dataset consists of 50,000 testing examples. We then create several vocabularies of subtokens from which to choose adversarial substitutions:
1. All: contains all subtokens. Note that the number of subtokens varies by dataset (Table 1);
2. Top n by L2 norm: contains only the subtokens whose embedding vectors have the n highest L2 norms;
3. Top n by frequency: contains only the n subtokens which occur in the training data with the highest frequency.
To obtain optimal thresholds of n, we swept the range of possibilities to find the n that minimizes F1 score, i.e., generates the best-performing adversarial examples (Figure 2). We present the final values of n in Table 1.
²The triangle inequality states that $\|x + y\| \le \|x\| + \|y\|$, with equality (and thus the maximum) when x and y are collinear, which occurs when x = y. This is easily generalized to five vectors." }, { "heading": "4 EXPERIMENTS AND DISCUSSION", "text": "" }, { "heading": "4.1 UNTARGETED ATTACKS", "text": "Table 2 compares the three proposed concatenation strategies on each of the four proposed vocabularies (three in Java, one in Python). To perform comparisons, we measure the F1 score of the model on the java-small testing dataset under each adversarial perturbation. Lower F1 scores correspond to better attacks. We see that the performance of CODE2SEQ drops when a local variable is replaced with a token composed of a random selection from all subtokens, using the 5-diff and 5-same concatenation strategies.
However, we observe a larger drop in F1 score when we make variable replacements from subtokens selected from the top-n subtokens by L2 norm or by frequency, using values of n specified in Table 1. When we replaced local variables with random tokens with high L2 norm, the F1 score dropped substantially, confirming our hypothesis that we can improve the effectiveness of adversarial examples by selecting replacement tokens such that their subtoken embedding vectors have high L2 norm. Similarly, adversarial examples that replace a local variable with subtokens of\nhigh frequency in the training dataset are highly effective, suggesting that the black-box method of choosing adversarial subtokens based on frequency alone can approximate the white-box attack.\nSurprisingly, the F1 score of CODE2SEQ-SM increased for random and frequency-based adversarial perturbations constructed with the single concatenation strategy, suggesting that CODE2SEQ-SM relies less on variable names for classification than CODE2SEQ-MD or CODE2SEQ-LG. The attacks on CODE2SEQ-PY were also incredibly effective, although the baseline accuracy for that model was already lower than the rest of the models.\nAs proposed in Section 3.2, the effectiveness of the adversarial attack is optimized when we replace local variables with a token that is constructed with the 5-same strategy. Strikingly, the black-box attack (top-n by frequency) is nearly as effective as the white-box attack (top-n by L2)." }, { "heading": "4.2 TARGETED ATTACKS", "text": "We perform targeted adversarial attacks on the Java datasets that aim to inject a chosen subtoken into the output by performing a STRATA attack using the targeted subtoken for replacement. 
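A targeted attack of this kind counts as successful when the chosen subtoken appears in the predicted method name. That success rate can be computed as below; the '|'-separated rendering of predicted subtokens is an assumption of the sketch.

```python
def targeted_success_rate(predictions, target):
    """Fraction of attacked examples whose predicted method name
    contains the targeted subtoken."""
    hits = sum(target in pred.split("|") for pred in predictions)
    return hits / len(predictions)

# code2seq-style predictions rendered as '|'-separated subtokens.
preds = ["get|file|name", "set|index", "get|index"]
print(targeted_success_rate(preds, "get"))  # 2 of 3 outputs contain 'get'
```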
See Appendix D for an example.\nTo assay the effectiveness of targeted attacks, we perform targeted attacks that target three different subtoken vocabularies: (1) all valid subtokens, (2) the optimized L2 vocabulary, and (3) the optimized frequency vocabulary, where vocabularies are optimized for CODE2SEQ-SM, CODE2SEQ-MD, or CODE2SEQ-LG appropriately. We determine that a particular attack is successful if the selected subtoken is included in the output. We measure the percent of successful attacks, thus computing an aggregate effectiveness of targeted attacks (Table 3). We omit targeted attacks on the Python dataset although we expect that a similar trend holds.\nTable 3 reveals that CODE2SEQ is especially vulnerable to targeted attacks performed on high-impact subtokens. The black-box (frequency) attack performs similarly to the white-box (L2) attack." }, { "heading": "4.3 TRANSFERABILITY", "text": "In this section we show that gradient-based adversarial training as proposed by Ramakrishnan et al. (2020) is not effective at improving robustness to STRATA attacks. We test a CODE2SEQ model that has been trained to be robust to the gradient-based adversarial examples proposed by Ramakrishnan et al. (2020), and find that the model is indeed robust to the gradient-based token-replacement perturbations. However, neither the original nor the robust model are impervious to perturbations produced by STRATA (Table 4). This result confirms that STRATA can effectively target models that are robust to some gradient-based perturbations; therefore it is a useful tool when hardening models of code, even when gradient-based perturbations are also being used." }, { "heading": "5 STRATA OUTPERFORMS SIMILAR ATTACKS", "text": "At the time of writing there are two other works that address adversarial attacks targeting CODE2SEQ: a class of source code transformations by Rabin & Alipour (2020) and gradient-based attacks by Ramakrishnan et al. (2020). 
We greatly outperform the transformations considered in Rabin & Alipour (2020) (Appendix B).\nWe compare our work to gradient-based adversarial perturbations proposed by Ramakrishnan et al. (2020), in which they attack a CODE2SEQ-SM model. We consider the variable replacement, print statement insertion, try-catch statement insertion, and worst-case single transformation attacks for our comparison. Note: to establish a fair comparison, we include the 17,000 examples in the testing set that do not include a local variable, and thus we do not even attempt to perturb them, which is why the F1 scores of the adversarial examples are higher than those reported in Table 2. We find that STRATA outperforms all attacks performed by Ramakrishnan et al. (2020) except for the worst-case transformation, which is inherently a larger transformation than our variable-replacement attack. This includes greatly outperforming the gradient-based local variable replacement attack and performing similarly to the worst-case transformation, despite the fact that STRATA generates smaller perturbations with less computational effort, and with less information about the model (Table 5). These results indicate that STRATA attacks are state-of-the-art on CODE2SEQ." }, { "heading": "6 RELATED WORK", "text": "Adversarial examples for models of code Allamanis et al. (2018) provide a comprehensive survey of prior research on models of code. Several papers develop techniques for generating adversarial examples on models of source code: Quiring et al. (2019) perform adversarial attacks on source code by applying a Monte-Carlo tree search over possible transformations, and Zhang et al. (2020) apply Metropolis-Hastings sampling to perform identifier replacement to create adversarial examples. Bielik & Vechev (2020) improve on the adversarial robustness of models of code by developing a model that can learn to abstain if uncertain.\nAdversarial examples for CODE2VEC and CODE2SEQ Yefet et al.
(2020) generate both gradient-based targeted and untargeted adversarial examples for CODE2VEC (Alon et al., 2019b). Most directly related to our paper, Ramakrishnan et al. (2020) perform gradient-based adversarial attacks and adversarial training on CODE2SEQ. Rabin & Alipour (2020) evaluate the robustness of CODE2SEQ to semantic-preserving transformations.\nTargeted attacks on models of code As previously noted, Yefet et al. (2020) propose a method for targeted adversarial examples for CODE2VEC (Alon et al., 2019b). To our knowledge, at the time of writing, no other paper performs targeted attacks on CODE2SEQ.\nAdversarial training for robustness Many recent papers have examined the robustness of neural network models to adversarial perturbations. Szegedy et al. (2014) demonstrate that neural networks are vulnerable to adversarial perturbations, but that training on these perturbations, i.e., adversarial training, can increase robustness. Further papers explore faster and more effective methods of adversarial training to improve robustness, though mostly in the continuous domain (Madry et al., 2018; Shafahi et al., 2019; Wong et al., 2019). Ramakrishnan et al. (2020) perform adversarial training on CODE2SEQ to improve robustness. Yefet et al. (2020) propose multiple methods to improve robustness of models of source code, including adversarial training, outlier detection, and excluding variable names from model input." }, { "heading": "7 CONCLUSION", "text": "In this work, we presented STRATA, a simple, gradient-free method for generating adversarial examples in discrete space that can be used to help build robustness in models of code. Because the L2 norm of an embedding vector can be approximated by the frequency in the training data, STRATA can be used to generate gradient-free white- and black-box attacks.
We presented effective attacks using this method, including targeted attacks, and also showed that adversarial fine-tuning using STRATA examples can lead to increased robustness, even when the fine-tuned model is only being tested on clean data.\nOur work does have some limitations. In the continuous domain, many adversarial attacks may not even be detected by humans; in the discrete domain, this is not possible, but some attacks in the discrete space are more realistic (harder to spot) than others. The most powerful attack we propose, 5-same, is also the least realistic; single is a smaller perturbation but yields less effective attacks. We expect that the attack could be expanded by adding dummy local variables to methods that do not initially contain a local variable, mitigating the current inability to attack methods without local variables.\nSTRATA does not just have application as an attack, but also as a defense. We present preliminary adversarial training results in the appendix which suggest that using adversarial examples generated with STRATA for adversarial training can improve robustness to adversarial attacks. Since we show that standard gradient-based adversarial training is ineffective at defending against STRATA, it is important for robust models to be adversarially trained with STRATA adversarial examples to ensure comprehensive robustness. A complete study and evaluation of the effectiveness of adversarial training using STRATA is an important future direction.\nAnother exciting area of future inquiry would be the application of this attack to other models, including natural language models. The magnitude differences in subtoken frequency in code are similar to the magnitude differences in word frequency in natural language (Piantadosi, 2014). In fact,\ntokens in software follow the same distribution as words in natural language (Zhang, 2008).
Thus we believe that our technique should be applicable in the NLP setting (with modification; for example, replacement dictionaries might only be filled with high-impact synonyms). We theorize that STRATA could be used to good effect in models of natural language as well as models of code, and that the basic intuitions underlying the attack can extend even to domains where the actual training data is not available, as long as a suitably similar dataset is available. Our results stem from fundamental features of dataset distribution and training process, and so they should generalize to other applications.\nSTRATA has a broad application to the field of machine learning in discrete spaces with large vocabularies. We present the insight that subtoken frequency correlates with L2 norm and thus can be used to determine high-impact subtokens; this technique is not a priori exclusive to code2seq or even models of code in general since it relies on properties of the dataset distribution and training technique and not the model architecture. STRATA is a high-impact, low-cost attack strategy that can be used to bolster robustness; we have made its code open-source so that those who wish to use it may do so." }, { "heading": "A ADVERSARIAL TRAINING", "text": "We perform adversarial training in order to make CODE2SEQ more robust to adversarial attacks. To test the robustness of an adversarially-trained CODE2SEQ model, we perform the following experiment:\nOur adversarial training results are preliminary. Further work is needed to evaluate the robustness of the model after adversarial training. Nonetheless, our results strongly suggest that adversarial training with STRATA adversarial examples is highly effective at defending against STRATA attacks (Table 6).
Unexpectedly, training with adversarial examples slightly improves the ability of the model to classify clean data (improvement from F1 of 0.369 to 0.371), suggesting that the added robustness forces the model to learn a better representation of the source code. This adversarial fine-tuning was not more computationally intensive than standard training and represents a simple way to make this model of code more robust.\nOur method differs from the standard adversarial training method proposed in Madry et al. (2018), in which adversarial examples are regenerated at every epoch. Since our adversarial examples only rely\non knowing the top-n subtokens by embedding L2 norm, which does not change drastically over time, we do not have to recompute new adversarial examples after each epoch and instead can compute large batches of adversarial examples prior to training. Since STRATA is gradient-free and does not require re-computation at every step in the adversarial training process, our method is extremely inexpensive, whereas traditional gradient-based methods for developing adversarial examples are highly expensive, and are less effective (Table 5). Therefore, our method can represent an extremely easy hardening technique for all types of models operating in the discrete domain with a large vocabulary space. Further, we have shown that traditional gradient-based adversarial training is largely ineffective at defending against our attack (Table 4), thus our method is vital for a comprehensive defense against adversarial attacks." }, { "heading": "B FULL COMPARISON WITH OTHER CODE2SEQ ATTACKS", "text": "We have shown that STRATA works well to attack the CODE2SEQ model and can outperform the attacks by Ramakrishnan et al. (2020). Here, we present a comparison with the transformations proposed by Rabin & Alipour (2020).
In order to be able to compare the same metrics, we calculate the percent prediction change, which is the percent of the time that an adversarial change resulted in a change in prediction. A higher percent prediction change indicates a better attack.\nIn Table 7, we compare the performance of our attack to the performance of the transformations generated by Rabin & Alipour (2020). We find that the transformations performed by Rabin & Alipour (2020) result in fewer prediction changes than STRATA. As above, the most effective strategy is our 5-same attack." }, { "heading": "C CROSS-DATASET ATTACKS", "text": "We present two fully black-box attacks that do not require any information about the targeted CODE2SEQ model or dataset:\nAs a surrogate model, we train a CODE2SEQ model on any available dataset for the targeted programming language. To obtain adversarial examples from the surrogate, we identify optimal L2 and frequency cutoffs for this model. Using these cutoffs, we construct a vocabulary of the optimal top-n by frequency or by L2 norm. We show that these adversarial examples can be transferred to other models.\nWe present the results of the cross-dataset transfer attack proposed in Section 3.3. In particular, we generate both frequency and L2 STRATA adversarial examples. We use the L2 norm of the embeddings of CODE2SEQ-SM, CODE2SEQ-MD, and CODE2SEQ-LG, and the subtoken frequencies of java-small, java-medium, and java-large to construct six different collections of adversarial examples, each of which is a perturbation of the java-small test set. We test each dataset on each model. Tables 8 and 9 show the results of the experiments, revealing that while the white-box and known-dataset attacks (the diagonals of the tables) outperform the cross-dataset attacks, the cross-dataset attacks are nonetheless effective.
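The percent-prediction-change metric defined above amounts to the following; comparing predictions as plain strings is an assumption of this sketch.

```python
def percent_prediction_change(clean_preds, adv_preds):
    """Percent of examples whose predicted method name changed after
    the adversarial perturbation (higher means a stronger attack)."""
    assert len(clean_preds) == len(adv_preds)
    changed = sum(c != a for c, a in zip(clean_preds, adv_preds))
    return 100.0 * changed / len(clean_preds)

clean = ["get|name", "set|id", "to|string"]
adv = ["run|main", "set|id", "index|of"]
print(percent_prediction_change(clean, adv))  # two of three predictions changed
```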
Furthermore, we note that L2-based cross-dataset attacks are more effective than frequency-based cross-dataset attacks, confirming that L2 norms can\neffectively identify subtokens that are high impact in other models. We conclude that STRATA can be performed in a true black-box setting with no information about the model parameters or the training dataset. The cross-dataset attack is likely effective due to similar distributions of the Java datasets. Similar to word frequencies in natural language corpora, we expect that most Java datasets should have similar subtoken distributions, and thus STRATA should transfer across models trained on different datasets." }, { "heading": "D EXAMPLES OF TARGETED ATTACKS", "text": "To illustrate the effectiveness of STRATA targeted attacks more concretely, we target particular arbitrarily-picked subtokens and measure the success rate over the entire testing set (Table 10) and find that though effectiveness can vary across different targets, the average effectiveness is quite high.\nD.1 COMPUTATIONALLY INEXPENSIVE\nCurrent alternative methods for attacking models of source code with comparable results involve either an extensive search for optimal transformations or gradient-based optimization for token\nreplacement, or a combination of the two (Rabin & Alipour, 2020; Ramakrishnan et al., 2020; Yefet et al., 2020). Extensive searches are inherently computationally expensive and, in the case of gradient-based optimization, oftentimes require a GPU for efficient implementation. STRATA, however, can be implemented to run quickly on even CPU-only machines. After an initial pre-processing step to mark local variable names for easy replacement, which took less than five minutes on our 24-core CPU-only machine, we were able to construct adversarial examples using STRATA on a dataset of 20,000 examples within seconds. The analogous gradient-based method proposed by Ramakrishnan et al. (2020) took multiple hours on the same machine.
The combined speed and effectiveness of STRATA will allow researchers to quickly harden their models against adversarial attacks with efficient large-scale adversarial training." } ]
2020
null
SP:9a4c3ea3b70f57c94a649f12b8c85c35e6b3b189
[ "Paper proposed an ensemble learning approach for the low-data regime. Paper uses various sources of diversity - pre-training, fine-tuning and combined to create ensembles. It then uses nearest-neighbor accuracy to rank pre-trained models, fine-tune the best ones with a small hyper-parameter sweep, and greedily construct an ensemble to minimize validation cross-entropy. Paper claims to achieve state-of-the art performance with much lower inference budget. ", "[Summary] This paper presents different ways of creating ensembles from pre-trained models. Specifically, authors first utilize nearest-neighbor accuracy to to rank pre-trained models, then fine-tune the best ones with a small hyperparameter sweep, and finally greedily construct an ensemble to minimize validation cross-entropy. Experiments on the Visual Task Adaptation Benchmark show the efficacy of the approach in selecting few models within a computational budget." ]
In the low-data regime, it is difficult to train good supervised models from scratch. Instead practitioners turn to pre-trained models, leveraging transfer learning. Ensembling is an empirically and theoretically appealing way to construct powerful predictive models, but the predominant approach of training multiple deep networks with different random initialisations collides with the need for transfer via pre-trained weights. In this work, we study different ways of creating ensembles from pre-trained models. We show that the nature of pre-training itself is a performant source of diversity, and propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset. The approach is simple: Use nearest-neighbour accuracy to rank pre-trained models, fine-tune the best ones with a small hyperparameter sweep, and greedily construct an ensemble to minimise validation cross-entropy. When evaluated together with strong baselines on 19 different downstream tasks (the Visual Task Adaptation Benchmark), this achieves state-of-the-art performance at a much lower inference budget, even when selecting from over 2,000 pre-trained models. We also assess our ensembles on ImageNet variants and show improved robustness to distribution shift.
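The greedy ensemble construction described in the abstract above (add whichever fine-tuned candidate most reduces validation cross-entropy) can be sketched as follows. This is an illustrative version with toy probabilities; selection without repetition and plain probability averaging are assumptions of the sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def greedy_ensemble(model_probs, labels, max_size=4):
    """Greedily add the candidate whose inclusion (by averaging predicted
    probabilities) most lowers validation cross-entropy; stop when no
    candidate improves the loss or the size budget is reached."""
    chosen, best_loss = [], float("inf")
    while len(chosen) < max_size:
        best_i = None
        for i in range(len(model_probs)):
            if i in chosen:
                continue
            avg = np.mean([model_probs[j] for j in chosen + [i]], axis=0)
            loss = cross_entropy(avg, labels)
            if loss < best_loss:
                best_loss, best_i = loss, i
        if best_i is None:
            break
        chosen.append(best_i)
    return chosen, best_loss

# Toy validation set: two examples, two classes, three fine-tuned candidates.
labels = np.array([0, 1])
probs = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # strong model
         np.array([[0.6, 0.4], [0.7, 0.3]]),   # weak model
         np.array([[0.8, 0.2], [0.4, 0.6]])]
print(greedy_ensemble(probs, labels))  # the strong model alone wins here
```

Stopping as soon as no candidate lowers the validation loss keeps the inference budget small, which matches the low-cost goal stated in the abstract.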
[]
[ { "authors": [ "Ayan Acharya", "Eduardo R. Hruschka", "Joydeep Ghosh", "Sreangsu Acharyya" ], "title": "Transfer learning with cluster ensembles", "venue": "In ICML Workshop on Unsupervised and Transfer Learning,", "year": 2012 }, { "authors": [ "Philip Bachman", "Ouais Alsharif", "Doina Precup" ], "title": "Learning with pseudo-ensembles", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Andrei Barbu", "David Mayo", "Julian Alverio", "William Luo", "Christopher Wang", "Dan Gutfreund", "Josh Tenenbaum", "Boris Katz" ], "title": "Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models", "venue": "In International Conf. on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Yoshua Bengio" ], "title": "Deep learning of representations for unsupervised and transfer learning", "venue": "In ICML Unsupervised and Transfer Learning Workshop,", "year": 2011 }, { "authors": [ "Rich Caruana", "Alexandru Niculescu-Mizil", "Geoff Crew", "Alex Ksikes" ], "title": "Ensemble selection from libraries of models", "venue": "In International Conference on Machine Learning (ICML),", "year": 2004 }, { "authors": [ "Gong Cheng", "Junwei Han", "Xiaoqiang Lu" ], "title": "Remote sensing image scene classification: Benchmark and state of the art", "venue": "Proceedings of the IEEE,", "year": 2017 }, { "authors": [ "Brian Cheung", "Alex Terekhov", "Yubei Chen", "Pulkit Agrawal", "Bruno Olshausen" ], "title": "Superposition of many models into one", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Mircea Cimpoi", "Subhransu Maji", "Iasonas Kokkinos", "Sammy Mohamed", "Andrea Vedaldi" ], "title": "Describing textures in the wild", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2014 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Fei-Fei Li" ], "title": "ImageNet: A large-scale hierarchical 
image database", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Guneet S. Dhillon", "Pratik Chaudhari", "Avinash Ravichandran", "Stefano Soatto" ], "title": "A baseline for few-shot image classification", "venue": "In International Conf. on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Josip Djolonga", "Jessica Yung", "Michael Tschannen", "Rob Romijnders", "Lucas Beyer", "Alexander Kolesnikov", "Joan Puigcerver", "Matthias Minderer", "Alexander D’Amour", "Dan Moldovan", "Sylvain Gelly", "Neil Houlsby", "Xiaohua Zhai", "Mario Lucic" ], "title": "On robustness and transferability of convolutional neural networks", "venue": null, "year": 2007 }, { "authors": [ "Nikita Dvornik", "Cordelia Schmid", "Julien Mairal" ], "title": "Diversity with cooperation: Ensemble methods for few-shot classification", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep ensembles: A loss landscape perspective", "venue": null, "year": 2019 }, { "authors": [ "Andreas Geiger", "Philip Lenz", "Raquel Urtasun" ], "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2012 }, { "authors": [ "Xin He", "Kaiyong Zhao", "Xiaowen Chu" ], "title": "AutoML: A survey of the state-of-the-art", "venue": null, "year": 2019 }, { "authors": [ "Patrick Helber", "Benjamin Bischke", "Andreas Dengel", "Damian Borth" ], "title": "EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification", "venue": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conf. 
on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo", "Dawn Song", "Jacob Steinhardt", "Justin Gilmer" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "venue": null, "year": 2006 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Fei-Fei Li", "C Lawrence Zitnick", "Ross Girshick" ], "title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big transfer (BiT): General visual representation learning", "venue": null, "year": 1912 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Yann LeCun", "Fu Jie Huang", "Léon Bottou" ], "title": "Learning methods for generic object recognition with invariance to pose and lighting", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2004 }, { "authors": [ "Stefan Lee", "Senthil Purushwalkam", "M. Cogswell", "David J. 
Crandall", "Dhruv Batra" ], "title": "Why M heads are better than one: Training a diverse ensemble of deep networks", "venue": null, "year": 2015 }, { "authors": [ "Fei-Fei Li", "Rob Fergus", "Pietro Perona" ], "title": "Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories", "venue": "Computer Vision and Pattern Recognition Workshop,", "year": 2004 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dSprites: Disentanglement testing sprites dataset, 2017", "venue": "URL https://github.com/deepmind/ dsprites-dataset/", "year": 2017 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Behnam Neyshabur", "Hanie Sedghi", "Chiyuan Zhang" ], "title": "What is being transferred in transfer learning", "venue": null, "year": 2008 }, { "authors": [ "Jiquan Ngiam", "Daiyi Peng", "Vijay Vasudevan", "Simon Kornblith", "Quoc V. Le", "Ruoming Pang" ], "title": "Domain adaptive transfer learning with specialist models", "venue": null, "year": 2018 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "A visual vocabulary for flower classification", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2006 }, { "authors": [ "Omkar M. Parkhi", "Andrea Vedaldi", "Andrew Zisserman", "C.V. 
Jawahar" ], "title": "Cats and dogs", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2012 }, { "authors": [ "Joan Puigcerver", "Carlos Riquelme", "Basil Mustafa", "Cedric Renggli", "André Susano Pinto", "Sylvain Gelly", "Daniel Keysers", "Neil Houlsby" ], "title": "Scalable transfer learning with expert models", "venue": null, "year": 2009 }, { "authors": [ "Maithra Raghu", "Chiyuan Zhang", "Jon Kleinberg", "Samy Bengio" ], "title": "Transfusion: Understanding transfer learning for medical imaging", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Esteban Real", "Jonathon Shlens", "Stefano Mazzocchi", "Xin Pan", "Vincent Vanhoucke" ], "title": "YouTubeBoundingBoxes: A large high-precision human-annotated data set for object detection in video", "venue": "In Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do ImageNet classifiers generalize to ImageNet", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Giovanni Seni", "John F Elder" ], "title": "Ensemble methods in data mining: improving accuracy through combining predictions", "venue": null, "year": 2010 }, { "authors": [ "Vaishaal Shankar", "Achal Dave", "Rebecca Roelofs", "Deva Ramanan", "Benjamin Recht", "Ludwig Schmidt" ], "title": "Do image classifiers generalize across time", "venue": "In ICML Workshop on Deep Phenomena,", "year": 2019 }, { "authors": [ "Asa Cooper Stickland", "Iain Murray" ], "title": "Diverse ensembles improve calibration", "venue": null, "year": 2007 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In International Conf. 
on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Shiliang Sun", "Zhijie Xu", "Mo Yang" ], "title": "Transfer learning with part-based ensembles", "venue": "In Multiple Classifier Systems,", "year": 2013 }, { "authors": [ "Chuanqi Tan", "Fuchun Sun", "Tao Kong", "Wenchang Zhang", "Chao Yang", "Chunfang Liu" ], "title": "A survey on deep transfer learning", "venue": "In International Conf. on Artificial Neural Networks (ICANN),", "year": 2018 }, { "authors": [ "Bastiaan S. Veeling", "Jasper Linmans", "Jim Winkens", "Taco Cohen", "Max Welling" ], "title": "Rotation equivariant CNNs for digital pathology", "venue": "In Medical Image Computing and Computer Assisted Intervention (MICCAI),", "year": 2018 }, { "authors": [ "Andrew M. Webb", "Charles Reynolds", "Wenlin Chen", "Henry Reeve", "Dan-Andrei Iliescu", "Mikel Lujan", "Gavin Brown" ], "title": "To ensemble or not ensemble: When does end-to-end training fail", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Yeming Wen", "Dustin Tran", "Jimmy Ba" ], "title": "Batchensemble: An alternative approach to efficient ensemble and lifelong learning", "venue": "In International Conf. on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Florian Wenzel", "Jasper Snoek", "Dustin Tran", "Rodolphe Jenatton" ], "title": "Hyperparameter ensembles for robustness and uncertainty quantification", "venue": null, "year": 2006 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Automatic Frankensteining: Creating complex ensembles autonomously", "venue": "In International Conference on Data Mining,", "year": 2017 }, { "authors": [ "João C. Xavier-Júnior", "Alex A. Freitas", "Antonio Feitosa-Neto", "Teresa B. 
Ludermir" ], "title": "A novel evolutionary algorithm for automated machine learning focusing on classifier ensembles", "venue": "In Brazilian Conference on Intelligent Systems (BRACIS),", "year": 2018 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks? In Neural information processing systems (NeurIPS)", "venue": null, "year": 2014 }, { "authors": [ "Xiaohua Zhai", "Joan Puigcerver", "Alexander Kolesnikov", "Pierre Ruyssen", "Carlos Riquelme", "Mario Lucic", "Josip Djolonga", "Andre Susano Pinto", "Maxim Neumann", "Alexey Dosovitskiy", "Lucas Beyer", "Olivier Bachem", "Michael Tschannen", "Marcin Michalski", "Olivier Bousquet", "Sylvain Gelly", "Neil Houlsby" ], "title": "A large-scale study of representation learning with the visual task adaptation benchmark", "venue": null, "year": 1910 } ]
[ { "heading": "1 INTRODUCTION", "text": "There are many ways to construct models with minimal data. It has been shown that fine-tuning pre-trained deep models is a compellingly simple and performant approach (Dhillon et al., 2020; Kolesnikov et al., 2019), and this is the paradigm our work operates in. It is common to use networks pre-trained on ImageNet (Deng et al., 2009), but recent works show considerable improvements by careful, task-specific pre-trained model selection (Ngiam et al., 2018; Puigcerver et al., 2020).\nEnsembling multiple models is a powerful idea that often leads to better predictive performance. Its secret relies on combining different predictions. The source of diversity for deep networks has been studied (Fort et al., 2019; Wenzel et al., 2020), though not thoroughly in the low-data regime. Two of the most common approaches involve training independent models from scratch with (a) different random initialisations, (b) different random subsets of the training data. Neither of these are directly applicable downstream with minimal data, as we require a pre-trained initialisation to train competitive models1, and data scarcity makes further data fragmentation impractical. We study some ways of encouraging model diversity in a supervised transfer-learning setup, but fundamentally argue that the nature of pre-training is itself an easily accessible and valuable form of diversity.\nPrevious works consider the construction of ensembles from a set of candidate models (Caruana et al., 2004). Services such as Tensorflow Hub (Google, 2018) and PyTorch Hub (FAIR, 2019) contain hundreds of pre-trained models for computer vision; these could all be fine-tuned on a new task to generate candidates. Factoring in the cost of hyperparameter search, this may be prohibitively expensive. We would like to know how suited a pre-trained model is for our given task before training it. 
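A cheap suitability proxy of the kind called for here, leave-one-out nearest-neighbour accuracy on frozen features, can be sketched as follows. Euclidean distance and k = 1 are assumptions of this sketch, and the features and labels are toy stand-ins for a pre-trained model's embeddings of the downstream data.

```python
import numpy as np

def loo_knn_accuracy(features, labels):
    """Leave-one-out 1-NN accuracy: classify each example by the label of
    its nearest other example in the frozen representation."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # an example may not vote for itself
    return float(np.mean(labels[d.argmin(axis=1)] == labels))

# Well-separated toy features: a perfect score suggests a suitable model.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labs = np.array([0, 0, 1, 1])
print(loo_knn_accuracy(feats, labs))  # 1.0
```

Ranking candidate pre-trained models by this score requires only a forward pass per model, so it scales to large pools before any fine-tuning happens.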
This need has given rise to cheap proxy metrics which assess this suitability (Puigcerver et al., 2020). We use such metrics - leave-one-out nearest-neighbour (kNN) accuracy, in particular - as a way of selecting a subset of pre-trained models, suitable for creating diverse ensembles of task-specific experts. We show that our approach is capable of quickly narrowing large pools (up to 2,000) of candidate pre-trained models down to manageable (15 models) task-specific sets, yielding a practical algorithm in the common context of the availability of many pre-trained models.\n1For an illustration of the importance of using pre-trained models in the low-data regime see Appendix C.1.\nWe first experiment with sources of downstream diversity (induced only by hyperparameterisation, augmentation or random data ordering), giving significant performance boosts over single models. Using our algorithm on different pools of candidate pre-trained models, we show that various forms of upstream diversity produce ensembles that are more accurate and robust to domain shift than this. Figure 1 illustrates the different approaches studied in our work. Ultimately, this new form of diversity improves on the Visual Task Adaptation Benchmark (Zhai et al., 2019) SOTA by 1.8%.\nThe contributions of this paper can be summarized as follows:\n• We study ensembling in the context of transfer learning in the low data regime & propose a number of ways to induce advantageous ensemble diversity which best leverage pre-trained models.\n• We show that diversity from upstream pre-training achieves better accuracy than that from the downstream fine-tuning stage (+1.2 absolute points on average across the 19 downstream classification VTAB tasks), and that it is more robust to distribution shift (+2.2 absolute average accuracy increase on distribution shifted ImageNet variants).\n• We show that they also surpass the accuracy of large SOTA models (76.2% vs. 
77.6%) at a much lower inference cost, and achieve equal performance with less than a sixth of the FLOPS.\n• We extend the work from Puigcerver et al. (2020) and demonstrate the efficacy of kNN accuracy as a cheap proxy metric for selecting a subset of candidate pre-trained models." }, { "heading": "2 CREATING ENSEMBLES FROM PRE-TRAINED MODELS", "text": "We first formally introduce the technical problem we address in this paper. Next we discuss baseline approaches which use a single pre-trained model, and then we present our method that exploits using multiple pre-trained models as a source of diversity." }, { "heading": "2.1 THE LEARNING SETUP: UPSTREAM, MODEL SELECTION, DOWNSTREAM", "text": "Transfer learning studies how models trained in one context boost learning in a different one. The most common approach pre-trains a single model on a large dataset such as ImageNet, to then tune the model weights to a downstream task. Despite algorithmic simplicity, this idea has been very\nsuccessful. In a downstream low-data scenario, it is more difficult for a one-size-fits-all approach to triumph as specializing the initial representation becomes harder. As in Puigcerver et al. (2020), we explore the scenario where a range of pre-trained models is available, and we can look at the target data to make a decision on which models to fine-tune. However, we generalize and improve it by simultaneously selecting several models for fine-tuning, since downstream tasks may benefit from combining expert representations aimed at capturing different aspects of the learning task: for instance, on a natural scenery dataset one could merge different models that focus on animals, plants, food, or buildings. Fine-tuning all pre-trained models to pick the best one is a sensible strategy, but rarely feasible. To keep the algorithms practical, we identify two compute budgets that should be controlled for: The fine-tuning budget, i.e. 
the total number of models we can fine-tune on a downstream task; and the inference budget, the maximum size of the final model." }, { "heading": "2.2 BASELINES: DIVERSITY FROM DOWNSTREAM TRAINING", "text": "The baselines we propose leverage transfer learning by requiring a pre-trained model - this is crucial, see Appendix C.1. We use a strong generalist model (BiT-ResNet 50s from Kolesnikov et al. (2019), trained on all upstream data) and consider three methods to create a model set for ensemble selection.\nRandom Seeds. Fine-tuning a generalist model multiple times with fixed hyperparameters will yield different classifiers, analogous to the DeepEnsembles of Lakshminarayanan et al. (2017). Note, here we can only take advantage of randomised data ordering/augmentation, which Fort et al. (2019) showed, though useful, was not as beneficial as diversity from random initialisation.\nHyperEnsembles. Hyperparameter diversity was recently shown to further improve DeepEnsembles (Wenzel et al., 2020). We define a hyperparameter search space, randomly sample as many configurations as we have fine-tuning budget, and fine-tune the generalist on downstream data with each of those configurations. Further details on training are given in Appendix A.2.\nAugEnsembles. We generate a set of models by fine-tuning the generalist on each task with randomly sampled families of augmentation (but fixed hyperparameters). Details are in Appendix A.3." }, { "heading": "2.3 OUR METHOD: DIVERSITY FROM UPSTREAM PRE-TRAINING", "text": "Fort et al. (2019) explain the strong performance of classical ensembling approaches – independently training randomly initialised deep networks – by showing that each constituent model explores a different mode in the function space. For transfer learning, Neyshabur et al. (2020) show that with pre-trained weights, fine-tuned models stay in a local ‘basin’ in the loss landscape. 
Combining both gives a compelling rationale for the use of multiple pre-trained networks for transfer with ensembles, as we propose here. Instead of diversity from downstream fine-tuning, we show that in the low-data regime, better ensembles can be created using diversity from pre-training.\nWe consider three sources of upstream diversity. First, we consider generalists that were pre-trained with different random seeds on the same architecture and data. Second, we consider experts, specialist models which were pre-trained on different subsets of the large upstream dataset. Lastly, we exploit diversity in scale – pre-trained models with architectures of different sizes. Given a pool of candidate models containing such diversity, we propose the following algorithm (Figure 1):\n1. Pre-trained model selection. Fine-tuning all experts on the new task would be prohibitively expensive. Following Puigcerver et al. (2020), we rank all the models by their kNN leave-one-out accuracy as a proxy for final fine-tuned accuracy, instead keeping the K best models (rather than 1).\n2. Fine-tuning. We add a fully connected layer to each model’s final representation, and then train the whole model by minimising categorical cross-entropy via SGD. Given a pool of K pre-trained models from stage 1, we tune each with 4 learning rate schedules, yielding a total of L = 4K models for step 3 (usually K = 15 and L = 60). See Appendix A.1.1 for more details.\n3. Ensemble construction. This is shared among all presented ensembles. We use the greedy algorithm introduced by Caruana et al. (2004). At each step, we greedily pick the next model which minimises cross-entropy on the validation set when it is ensembled with already chosen models.\nThese steps are independently applied to each task; each step makes use of the downstream dataset, so each dataset gets a tailored set of pre-trained models to create the ensemble pool and therefore
We also considered a greedy ensembling algorithm in kNN space which aims to sequentially pick complementary models which will likely ensemble well together (Appendix C.6), but picking top-K was generally better." }, { "heading": "2.3.1 COMBINED APPROACHES", "text": "The diversity induced by different upstream models and distinct downstream hyperparameters should be complementary. Given a fine-tuning budget of L, we can set the number of pre-trained models K in advance, providing each of them with a random hyperparameter sweep of size L/K. However, for some tasks it may be more beneficial to have fewer different pre-trained models and a wider sweep, or vice versa. We aim to dynamically set this balance per-dataset using a heuristic based on the kNN accuracies; namely, we keep all pre-trained models within some threshold percentage τ% of the top kNN accuracy, up to a maximum of K = 15. Ideally, this would adaptively discard experts poorly suited to a given task, whose inclusion would likely harm ensemble performance. The saved compute budget is then used to squeeze more performance from available experts by testing more hyperparameters, and hopefully leading to greater useful diversity. We arbitrarily set τ = 15% for our experiments, but this choice could likely be improved upon. Appendix C.5 shows how the number of models picked varies per task, and the gains with respect to having a fixed K." }, { "heading": "2.3.2 PRE-TRAINING MODELS", "text": "We use BiT ResNets pre-trained on two large upstream datasets with hierarchical label spaces: JFT-300M (Sun et al., 2017) and ImageNet-21k (Deng et al., 2009). We consider two types of pretrained models. Generalists are trained on the entire upstream dataset. In particular, we consider 15 JFT ResNet-50 generalists that were pre-trained with different random initalisations. Experts are generated by splitting the hierarchical label spaces into sub-trees and training independent models on the examples in each sub-tree. 
We pre-train 244 experts from JFT and 50 from ImageNet21k, following the protocol of Puigcerver et al. (2020) (see Appendix A.1). For low-data downstream tasks, this is by far the most expensive stage of the process. It is however only incurred once, and its cost is amortized as new downstream tasks are served, since any downstream task can reuse them." }, { "heading": "3 ENSEMBLE EVALUATION", "text": "Downstream Tasks. We evaluate our models on the Visual Task Adaptation Benchmark (Zhai et al., 2019): 19 diverse downstream classification tasks, split into ‘natural’, ‘specialised’ and ‘structured’ categories. As we are primarily interested in low-data regimes, the tasks only have 1000 training datapoints (i.e., VTAB1K) with a number of classes ranging from 2 to 397. We split data into 800 training examples and 200 validation examples. See Appendix B.1 for more information.\nTest Performance. For our final competitive models, we first train all the individual models on the 800 training points. Then, we use the 200 validation data points to find the best ensemble (both running the greedy algorithm and choosing the overall ensemble size). For the resultant ensemble, we retrain constituent models on the full 1000 data points, and evaluate it on the test data.\nRobustness. We train ExpertEnsembles and HyperEnsembles on ImageNet (Deng et al., 2009). While ImageNet does not match our low-data regime of interest, previous work and additional datasets allow us to conveniently measure robustness and uncertainty metrics. Thus, alongside reporting the accuracy on the official validation split, we assess models on a number of ImageNet-based robustness benchmarks, aiming to quantify calibration and robustness to distribution shift (Djolonga et al., 2020). More details on these variants are available in Appendices B.2 and A.4." 
}, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Unless otherwise specified, all experiments use a fine-tuning budget and inference budget of 60 and 15 models respectively. This was set arbitrarily; we experiment with both budgets to see the effect." }, { "heading": "4.1 THE VALUE OF ENSEMBLES", "text": "We first show that ensembles in low-data transfer regimes dramatically beat their single-model counterparts –which are often much larger networks. Figure 2 and Table 1 compare our best ensembles (which all use upstream or combined diversity) on VTAB1K tasks. Our baselines are BiT models from Kolesnikov et al. (2019), which had until now state-of-the-art performance.\nJFT pre-trained models. The most standard approach –fine-tuning a single R50 model trained on all of JFT– leads to an average accuracy of 70.6%. It greatly lags behind the ensemble-based algorithms; in particular, the difference is striking for structured and natural datasets. On average, the ensembles selected between 9 and 10 downstream models –this number greatly varies depending on the task, e.g. 3 were selected for CalTech101 and 14 for Diabetic Retinopathy. Accordingly, capacity-wise, it makes sense to compare the ensembles to larger ResNets. Table 1 shows that the JFT R50 ensembles match or slightly beat the performance of a R152x4. In particular, the ensembles offer a large advantage in settings where tasks diverge from single-object recognition, e.g. in the structured datasets. Even in natural domains, the experts have a better accuracy/FLOP ratio than the R152x4, which has 40× more parameters than a single R50. Even more significant is the difference in inference time, as ensemble predictions can be easily parallelized.\nImageNet21k pre-trained models. The story is fairly similar for the pool of ImageNet21k experts. The ensembles select on average between 7 and 8 models –again, strongly task-dependent. 
Ensembles’ average performance improvement with respect to a single R50 trained on all of ImageNet21k is around 5 absolute points (more than 7%). We also consider a much larger generalist baseline, in this case a R101x3, which has more than 16× as many parameters as a single R50. Still the ensembles far outperform it, especially in structured datasets.\nOverall. A more complete story supporting the ensembles’ value is depicted in Figure 2. The gray dashed line represents the previous Pareto frontier of VTAB1K average accuracy per FLOP. The ensemble models dominate the state-of-the-art, indicating their efficacy in the low-data regime." }, { "heading": "4.2 THE VALUE OF UPSTREAM DIVERSITY", "text": "Results in Table 2 suggest that upstream diversity improves on downstream diversity. For both JFT and ImageNet21k pre-training, ensembles benefiting from upstream sources of diversity outperform their downstream-based counterparts. Combining both gives a further small boost for expert ensembles.\nAblations on JFT pre-trained models are shown in Table 3; we now discuss the learnings from that.\nExperts help when pre-training is relevant: Results in Table 2 used kNN to pick from a pool of 15 generalists and 244 experts. We break these pools down separately. Experts give a significant boost on Natural datasets; the ensembles take advantage of the relevance of experts pre-trained on the predominantly ‘natural’ slices of the upstream data. For datasets without clear experts, there is less benefit to this approach, and the generalists shine.\nPerformance improvements stack with scale: The strong performance of the R101 Expert Ensemble shows performance gains stack somewhat with scale; it improves on R50 Experts by 1.2 absolute points in accuracy, improving in all categories. We explore this more thoroughly in Appendix C.7.\nCombining upstream and downstream diversity helps: As discussed in Section 2.3.1, we combine experts with hyper or augmentation ensembles. 
The simple approach of thresholding by kNN accuracy works well, giving a small boost in test performance (R50 Aug(Gen. + Experts), R50 HyperExperts, R50 AugExperts). More details on this are provided in Appendix C.5." }, { "heading": "4.3 THE VALUE OF NEAREST NEIGHBOUR SELECTION", "text": "kNN may possibly help: The greedy ensemble algorithm is not perfect, and with such a small validation set it is prone to overfit. When all upstream JFT R50 experts are fine-tuned and passed to the greedy algorithm, test performance drops slightly. We further explore this in Appendices C.4, C.9.\nIt can compare models of different sizes: Overall, larger models perform better at transfer (Kolesnikov et al., 2019). Per-dataset, this is not the case; e.g. we found R34 experts were best on structured tasks. One may expect kNN selection or the greedy algorithm to be biased towards selecting larger architectures. The final ensembles instead use a mix of scales. The R18/R34/R50 experts ensemble improves on just R50s by 0.4%, indicating possible benefits; more discussion is in Appendix C.7.\nCan filter a very large pool of models: When selecting only 15 pre-trained models from over 2000 candidates (different architecture sizes and upstream datasets), the overall VTAB performance (All Experts in Table 3) is similar to only selecting from ResNet-101s. This highlights the remarkable robustness of our model selection. These results are broken down further in Appendix C.8.\nMirroring Puigcerver et al. (2020), we have shown kNN to be a cheap yet successful way of selecting models. It is not perfect - when combining pools, one would hope for at least a ‘best of both’ performance. kNN selection wasn’t needed for generalists (we had 15 pre-trained models), but when combining the generalists and experts in a pool, specialised/structured performance drops slightly." }, { "heading": "4.4 EFFECT OF FINE-TUNING BUDGET", "text": "In most experiments, the kNN picks K = 15 experts. 
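The leave-one-out kNN accuracy used throughout as a selection proxy can be sketched as follows. This is a minimal sketch under assumptions: Euclidean distance and k = 1 are not specified in the text, and `features` stands for a candidate model's frozen embeddings of the downstream training set.

```python
import numpy as np

def knn_loo_accuracy(features, labels):
    """Leave-one-out nearest-neighbour accuracy (k = 1 assumed): each example
    is classified by the label of its closest other example in representation
    space. Used as a cheap proxy for how well a frozen pre-trained
    representation suits the downstream task."""
    # Pairwise squared Euclidean distances via the ||a||^2 + ||b||^2 - 2ab expansion.
    sq = np.sum(features ** 2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(dists, np.inf)  # leave-one-out: exclude each point itself
    nearest = np.argmin(dists, axis=1)
    return float(np.mean(labels[nearest] == labels))
```

To rank a pool, one would embed the downstream training set with every candidate model, score each with this function, and keep the K highest-scoring models (or all models within the τ% threshold of the top score).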
With the default 4 hyperparameters, this is a fine-tuning budget of 60 models. This is the number of models trained for a given task, and the majority of compute expenditure incurred by a practitioner, as the kNN selection/ensembling are comparatively cheap. The hyperensemble was run with the same budget. Figure 3 shows how performance drops with reduced fine-tuning budget. Interestingly, the expert ensembles are actually more robust to a reduced budget, retaining higher performance when training fewer models, indicating the kNN’s usefulness as a pre-selection phase." }, { "heading": "4.5 ROBUSTNESS TO DISTRIBUTION SHIFT", "text": "Previous work has shown ensembles help with metrics relating to uncertainty and calibration (Lakshminarayanan et al., 2017; Stickland & Murray, 2020). To assess this, we train JFT R50 HyperEnsembles and ExpertEnsembles for ImageNet classification. For the former, we use the BiT generalist; for the latter, we use the 244 experts, applying kNN to 50,000 examples from the training set to select experts. For both we use the validation set for greedy ensembling. Once the ensembles are trained and constructed, we assess them on a suite of datasets aiming to quantify robustness to distribution shift. Each dataset introduces some form of distribution shift (further details in Appendix B.2) - what we assess is the accuracy on these datasets. Figure 4 shows them. The expert ensembles offer a slightly better accuracy on the held out data; more importantly, they perform significantly better under distribution shift, improving over the HyperEnsembles by on average 2.2% across datasets." }, { "heading": "5 RELATED WORK", "text": "We present literature related to the main aspects of this work. 
As well as the previously highlighted novelties, we believe our contribution is distinguished from previous ensembling works by focusing on diverse datasets with production-scale deep models (instead of demonstrative smaller architectures and datasets), limiting the training data available, and formally assessing distribution shift.\nTransfer Learning. Relating to a long history of research in manifold and representation learning (Tan et al., 2018), the transfer of features learnt by deep neural networks aims to reuse the abstract features learnt on a source (upstream) dataset in order to improve performance or data efficiency on a target (downstream) dataset. Bengio (2011) studied this in the context of unsupervised pre-training,\nproposing a number of ways to learn and re-use the features. Many works have shown benefits of transfer relating to convergence speed (Raghu et al., 2019), generalisation (Yosinski et al., 2014), accuracy (Zhai et al., 2019) and robustness (Djolonga et al., 2020), with the latter two showing particular benefits in the low-data regime.\nEnsemble Learning. Known for improving models, ensembling methods have been studied in depth in and out of deep learning academia (Seni & Elder, 2010). There are few works which study ensembles in the context of transfer learning (Acharya et al., 2012; Sun et al., 2013). Bachman et al. (2014) pre-train entire ensembles on the source data and transfer, instead of transferring individual models. Work in the low-data regime is sparser. Using an ensemble of models from multiple training checkpoints, Laine & Aila (2017) label unlabelled data to then train individual models further, improving data efficiency for CIFAR100/SVHN. For few-shot classification on a new class, Dvornik et al. (2019) construct ensembles of mean-centroid classifiers from pre-trained ResNet18s.\nDeep Ensembles. Lakshminarayanan et al. 
(2017) show that a simple approach of adversarially training multiple randomly initialised models from scratch and ensembling them yielded models with strong predictive uncertainty and calibration. Wenzel et al. (2020) showed that hyperensembles, which vary random initialisations and hyperparameters, outperform these deep ensembles.\nOn Constructing Ensembles. A key part of our algorithm is the use of the kNN to narrow down candidate pre-trained models into a relevant subset. Caruana et al. (2004) was arguably the seminal work studying how to select an optimal ensemble from a set of candidate models. A number of works extend AutoML frameworks (He et al., 2019) to explicitly optimise both the ensembling method and the members to maximise overall performance (Wistuba et al., 2017; Xavier-Júnior et al., 2018)." }, { "heading": "6 CONCLUSIONS", "text": "We have studied simple ways of creating performant ensembles with a limited amount of data. Overall, ensembles dramatically outperform their single-model counterparts. We show that diversity from upstream pre-training results in better ensembles than diversity induced downstream, regardless of whether this upstream diversity comes from pre-training multiple generalist models with different initialisations, using different architectures or specialisation via pre-training on different data. We demonstrate the efficacy of the nearest-neighbours classifier as an easily calculated discriminator between different pre-trained models, and even as a way to decide how many models to try on a downstream task, leading to convenient ways to combine both upstream and downstream diversity.\nThese ensembles achieve SOTA performance on the Visual Task Adaptation Benchmark at a significantly smaller inference cost, while also outperforming ensemble approaches relying on downstream diversity. They also exhibit higher robustness to domain shift as assessed by ImageNet variants.\nThere are many interesting avenues for future work. 
All our considered models were pre-trained in a supervised fashion, and this should certainly be extended to include other forms of pre-training. This approach of combining different pre-trained models is highly complementary to efforts which train ensembles end-to-end with diversity-encouraging losses, such as those in Lee et al. (2015) and Webb et al. (2019). Lastly, works such as Batch Ensembles (Wen et al., 2020) and Parameter Superposition (Cheung et al., 2019) systematically decompose network parameters to compactly train ensembles. For pre-trained models with the same architecture, weights could be deconstructed to initialise those methods so as to benefit from transfer learning and make them feasible in the low-data regime." } ]
2020
null
SP:f17ad6d00a23e46ebe9175e1eeea7d3eef7f8c84
[ "This paper proposes a setting called \"recognition-aware image processing.\" The key idea is to make the images output by image processing methods still readily recognizable by image recognition methods. Realizing this will help to better meet the requirements of both human observers and machines. Formally, this is formulated as a combined optimization in which the losses from the image processing and recognition tasks are jointly considered. This framework is further extended to the unsupervised case and the case of an intermediate transformer to make it more flexible. The transferability issue is discussed and it is observed that the model trained by the proposed method can generally help even when other recognition models or tasks are used. An experimental study is conducted to demonstrate the performance of the proposed method. ", "The paper proposes a learnable image processing method that improves the machine interpretability of processed images. The paper mainly claims that the improvement in machine recognition is transferable when evaluated on models of different architectures, recognized categories, tasks and training datasets. Additionally, the paper also tries to explain this transferability phenomenon by demonstrating the similarities of different models’ decision boundaries.", "This paper presents several models for visual recognition in the presence of image degradation (e.g., low-resolution, noise, compression artifacts). In the models, an image enhancement network is placed in front of a recognition model and trained together with the recognizer to improve the recognition accuracy as well as to enhance the image quality. The proposed approach is simple, straightforward, yet effective. It is also shown that the image enhancement module is transferable between different recognition tasks and architectures." ]
Recent progress in image recognition has stimulated the deployment of vision systems (e.g. image search engines) at an unprecedented scale. As a result, visual data are now often consumed not only by humans but also by machines. Meanwhile, existing image processing methods only optimize for better human perception, whereas the resulting images may not be accurately recognized by machines. This can be undesirable, e.g., the images can be improperly handled by search engines or recommendation systems. In this work, we propose simple approaches to improve machine interpretability of processed images: optimizing the recognition loss directly on the image processing neural network or through an intermediate transforming model, a process which we show can also be done in an unsupervised manner. Interestingly, the processing model’s ability to enhance the recognition performance can transfer when evaluated on different recognition models, even if they are of different architectures, trained on different object categories or even different recognition tasks. This makes the solutions applicable even when we do not have the knowledge about future downstream recognition models, e.g., if we are to upload the processed images to the Internet. We conduct comprehensive experiments on three image processing tasks with two downstream recognition tasks, and confirm our method brings substantial accuracy improvement on both the same recognition model and when transferring to a different one, with minimal or no loss in the image processing quality.
[]
[ { "authors": [ "Namhyuk Ahn", "Byungkon Kang", "Kyung-Ah Sohn" ], "title": "Fast, accurate, and lightweight superresolution with cascading residual network", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yancheng Bai", "Yongqiang Zhang", "Mingli Ding", "Bernard Ghanem" ], "title": "Finding tiny faces in the wild with generative adversarial network", "venue": null, "year": 2018 }, { "authors": [ "Sreya Banerjee", "Rosaura G VidalMata", "Zhangyang Wang", "Walter J Scheirer" ], "title": "Report on ugˆ 2+ challenge track 1: Assessing algorithms to improve video object detection and classification from unconstrained mobility platforms", "venue": "arXiv preprint arXiv:1907.11529,", "year": 2019 }, { "authors": [ "Emmanuel J Candès", "Justin Romberg", "Terence Tao" ], "title": "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information", "venue": "IEEE Transactions on information theory,", "year": 2006 }, { "authors": [ "Jingwen Chen", "Jiawei Chen", "Hongyang Chao", "Ming Yang" ], "title": "Image blind denoising with generative adversarial network based noise modeling", "venue": null, "year": 2018 }, { "authors": [ "Jifeng Dai", "Yi Li", "Kaiming He", "Jian Sun" ], "title": "R-fcn: Object detection via region-based fully convolutional networks", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Chao Dong", "Chen Change Loy", "Kaiming He", "Xiaoou Tang" ], "title": "Learning a deep convolutional network for image super-resolution", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Chao Dong", "Chen Change Loy", "Xiaoou Tang" ], "title": "Accelerating the super-resolution convolutional neural network", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "David 
Eigen", "Dilip Krishnan", "Rob Fergus" ], "title": "Restoring an image taken through a window covered with dirt or rain", "venue": "In ICCV,", "year": 2013 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "A neural algorithm of artistic style", "venue": "arXiv preprint arXiv:1508.06576,", "year": 2015 }, { "authors": [ "Muhammad Haris", "Greg Shakhnarovich", "Norimichi Ukita" ], "title": "Task-driven super resolution: Object detection in low-resolution images", "venue": "arXiv preprint arXiv:1803.11316,", "year": 2018 }, { "authors": [ "Muhammad Haris", "Gregory Shakhnarovich", "Norimichi Ukita" ], "title": "Deep back-projection networks for super-resolution", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "arXiv preprint arXiv:1903.12261,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "NIPS Workshop,", "year": 2014 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and 
super-resolution", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Jiwon Kim", "Jung Kwon Lee", "Kyoung Mu Lee" ], "title": "Deeply-recursive convolutional network for image super-resolution", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Jiwon Kim", "Jung Kwon Lee", "Kyoung Mu Lee" ], "title": "Accurate image super-resolution using very deep convolutional networks", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Wei-Sheng Lai", "Jia-Bin Huang", "Narendra Ahuja", "Ming-Hsuan Yang" ], "title": "Deep laplacian pyramid networks for fast and accurate super-resolution", "venue": null, "year": 2017 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Learning representations for automatic colorization", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": null, "year": 2017 }, { "authors": [ "Stamatios Lefkimmiatis" ], "title": "Universal denoising networks: a novel cnn architecture for image denoising", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Da Li", "Jianshu Zhang", "Yongxin Yang", "Cong Liu", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Episodic training for domain generalization", "venue": null, "year": 1902 }, { "authors": [ "Bee Lim", "Sanghyun Son", "Heewon Kim", "Seungjun Nah", "Kyoung Mu Lee" ], "title": "Enhanced deep residual networks for single image super-resolution", "venue": "In CVPR Workshops,", "year": 2017 }, { "authors": [ "Feng Liu", "Ronghang Zhu", "Dan Zeng", "Qijun Zhao", "Xiaoming Liu" ], "title": 
"Disentangling features in 3d face shapes for joint face reconstruction and recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "Xiaojiao Mao", "Chunhua Shen", "Yu-Bin Yang" ], "title": "Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections", "venue": null, "year": 2016 }, { "authors": [ "Sung Cheol Park", "Min Kyu Park", "Moon Gi Kang" ], "title": "Super-resolution image reconstruction: a technical overview", "venue": "IEEE signal processing magazine,", "year": 2003 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "In NIPS Workshop,", "year": 2017 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": null, "year": 2016 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Leonid I Rudin", "Stanley Osher", "Emad Fatemi" ], "title": "Nonlinear total variation based noise removal algorithms", "venue": "Physica D: nonlinear phenomena,", "year": 1992 }, { "authors": [ "Mehdi SM Sajjadi", "Bernhard Scholkopf", "Michael Hirsch" ], "title": "Enhancenet: Single image superresolution through automated texture synthesis", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Shiv Shankar", "Vihari Piratla", "Soumen Chakrabarti", "Siddhartha Chaudhuri", "Preethi Jyothi", 
"Sunita Sarawagi" ], "title": "Generalizing across domains via cross-gradient training", "venue": "arXiv preprint arXiv:1804.10745,", "year": 2018 }, { "authors": [ "Vivek Sharma", "Ali Diba", "Davy Neven", "Michael S Brown", "Luc Van Gool", "Rainer Stiefelhagen" ], "title": "Classification-driven dynamic image enhancement", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Wenzhe Shi", "Jose Caballero", "Ferenc Huszár", "Johannes Totz", "Andrew P Aitken", "Rob Bishop", "Daniel Rueckert", "Zehan Wang" ], "title": "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network", "venue": null, "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Ying Tai", "Jian Yang", "Xiaoming Liu" ], "title": "Image super-resolution via deep recursive residual network", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Ying Tai", "Jian Yang", "Xiaoming Liu", "Chunyan Xu" ], "title": "Memnet: A persistent memory network for image restoration", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Tong Tong", "Gen Li", "Xiejie Liu", "Qinquan Gao" ], "title": "Image super-resolution using dense skip connections", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Florian Tramèr", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "The space of transferable adversarial examples", "venue": "arXiv preprint arXiv:1704.03453,", "year": 2017 }, { "authors": [ "R Tsai" ], "title": "Multiframe image restoration and registration", "venue": "Advance Computer Visual and Image Processing,", "year": 1984 }, { "authors": [ "Rosaura G VidalMata", "Sreya Banerjee", "Brandon RichardWebster", "Michael Albright", "Pedro Davalos", "Scott 
McCloskey", "Ben Miller", "Asong Tambo", "Sushobhan Ghosh", "Sudarshan Nagesh" ], "title": "Bridging the gap between computational photography and visual recognition", "venue": "arXiv preprint arXiv:1901.09482,", "year": 2019 }, { "authors": [ "Sicheng Wang", "Bihan Wen", "Junru Wu", "Dacheng Tao", "Zhangyang Wang" ], "title": "Segmentation-aware image denoising without knowing true segmentation", "venue": null, "year": 1905 }, { "authors": [ "Zhangyang Wang", "Shiyu Chang", "Yingzhen Yang", "Ding Liu", "Thomas S Huang" ], "title": "Studying very low resolution recognition using deep networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Zhangyang Wang", "Ding Liu", "Shiyu Chang", "Qing Ling", "Yingzhen Yang", "Thomas S Huang" ], "title": "Deep dual-domain based fast restoration of jpeg-compressed images", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Junyuan Xie", "Linli Xu", "Enhong Chen" ], "title": "Image denoising and inpainting with deep neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Jianwei Yang", "Jiasen Lu", "Dhruv Batra", "Devi Parikh" ], "title": "A faster pytorch implementation of faster r-cnn", "venue": null, "year": 2017 }, { "authors": [ "Kaipeng Zhang", "Zhanpeng Zhang", "Chia-Wen Cheng", "Winston H Hsu", "Yu Qiao", "Wei Liu", "Tong Zhang" ], "title": "Super-identity convolutional neural network for face hallucination", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], 
"title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Yulun Zhang", "Kunpeng Li", "Kai Li", "Lichen Wang", "Bineng Zhong", "Yun Fu" ], "title": "Image superresolution using very deep residual channel attention networks", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Yulun Zhang", "Yapeng Tian", "Yu Kong", "Bineng Zhong", "Yun Fu" ], "title": "Residual dense network for image super-resolution", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In ICCV,", "year": 2017 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nUnlike in image recognition, where a neural network maps an image to a semantic label, a neural network used for image processing maps an input image to an output image with some desired properties. Examples include image super-resolution (Dong et al., 2014), denoising (Xie et al., 2012), deblurring (Eigen et al., 2013), colorization (Zhang et al., 2016) and style transfer (Gatys et al., 2015). The goal of such systems is to produce images of high perceptual quality to a human observer. For example, in image denoising, we aim to remove noise in the signal that is not useful to an observer and restore the image to its original “clean” form. Metrics like PSNR and SSIM (Wang et al., 2004) are often used (Dong et al., 2014; Tong et al., 2017) to approximate the human-perceived similarity between the processed images and the original images, and direct human assessment of the fidelity of the output is often considered the “gold-standard” assessment (Ledig et al., 2017; Zhang et al., 2018b). Therefore, many techniques (Johnson et al., 2016; Ledig et al., 2017; Isola et al., 2017) have been proposed for making the output images look perceptually pleasing to humans.\nHowever, image processing outputs may not be accurately recognized by image recognition systems. As shown in Fig. 1, the output image of a denoising model could easily be recognized by a human as a bird, but a recognition model classifies it as a kite. One could specifically train a recognition model only on the output images produced by the denoising model to achieve better performance on such images, or leverage domain adaptation approaches to adapt the recognition model to this domain, but the performance on natural images can then be harmed.
This retraining/adaptation scheme might also be impractical, considering the significant overhead induced by catering to various image processing tasks and models.\nWith the fast-growing size of image data, many images are now “viewed” and analyzed more by machines than by humans. Nowadays, any image uploaded to the Internet is likely to be analyzed by certain vision systems. For example, Facebook uses a system called Rosetta to extract text from over 1 billion user-uploaded images every day (Maria, 2018). It is of great importance that processed images be recognizable not only by humans but also by machines. In other words, recognition systems (e.g., an image classifier or object detector) should be able to accurately explain the underlying semantic meaning of the image content. In this way, we make the images easier to search, more likely to be recommended to interested audiences, and so on, as these procedures are mostly executed by machines based on their understanding of the images. Therefore, we argue that image processing systems should also aim at better machine recognizability. We call this problem “Recognition-Aware Image Processing”.\nIt is also important that the enhanced recognizability not be specific to any concrete neural network-based recognition model, i.e., achieved only when the output images are evaluated on that particular model. Instead, the improvement should ideally transfer when evaluated on different models, to support usage without access to possible future recognition systems, since we may not get to decide which model will be used to recognize the processed image, for example if we upload it to the Internet or share it on social media. We may not know which network architectures (e.g., ResNet or VGG) will be used for inference, which object categories the downstream model recognizes (e.g., animals or scenes), or even which task will be performed on the processed image (e.g.
classification or detection). Without these specifications, it might be hard to enhance an image’s machine semantics.\nIn this work, we propose simple and highly effective approaches to make image processing outputs more accurately recognized by downstream recognition systems, with the improvement transferable among different recognition architectures, categories, and tasks. The approaches we investigate add a recognition loss optimized jointly with the image processing loss. The recognition loss is computed using a fixed recognition model pretrained on natural images, and this can be done in an unsupervised manner, e.g., without semantic labels for the images. It can be optimized either directly by the original image processing network or through an intermediate transforming network. We conduct extensive experiments on multiple image enhancement/restoration (super-resolution, denoising, and JPEG-deblocking) and recognition (classification and detection) tasks, and demonstrate that our approaches can substantially boost the recognition accuracy of the downstream systems, with minimal or no loss in image processing quality as measured by conventional metrics. Moreover, the accuracy improvement transfers favorably among different recognition model architectures, object categories, and recognition tasks, which renders our simple solution effective even when we do not have access to the downstream recognition models. Our contributions can be summarized as follows:\n• We propose to study the problem of enhancing the machine interpretability of image processing outputs, a desired property considering the number of images analyzed by machines nowadays.\n• We propose simple and effective methods towards this goal, suitable for different use cases, e.g., without ground truth semantic labels.
Extensive experiments are conducted on multiple image processing and recognition tasks, demonstrating the wide applicability of the proposed methods.\n• We show that, using our simple approaches, the recognition accuracy improvement can transfer among recognition architectures, categories, and tasks, a desirable behavior that makes the proposed methods applicable without access to the downstream recognition model." }, { "heading": "2 RELATED WORK", "text": "Image processing/enhancement problems such as super-resolution and denoising have a long history (Tsai, 1984; Park et al., 2003; Rudin et al., 1992; Candès et al., 2006). Since the initial success of deep neural networks on these problems (Dong et al., 2014; Xie et al., 2012; Wang et al., 2016b), a large body of work has investigated better model architecture designs and training techniques (Dong et al., 2016; Kim et al., 2016b; Shi et al., 2016; Kim et al., 2016a; Mao et al., 2016; Lai et al., 2017; Tai et al., 2017a; Tong et al., 2017; Tai et al., 2017b; Lim et al., 2017; Zhang et al., 2018d; Ahn et al., 2018; Lefkimmiatis, 2018; Chen et al., 2018; Haris et al., 2018b), mostly on the image super-resolution task. These works focus on generating images of high visual quality under conventional metrics or human evaluation, without considering recognition performance on the output.\nThere are also a number of works that relate image recognition to processing. Some works (Zhang et al., 2016; Larsson et al., 2016; Zhang et al., 2018c; Sajjadi et al., 2017) use image classification accuracy as an evaluation metric for image colorization/super-resolution, but without optimizing for it during training. Wang et al. (2016a) incorporate super-resolution and domain adaptation techniques for better recognition of very low resolution images. Bai et al. (2018) train a super-resolution and refinement network simultaneously to better detect faces in the wild. Zhang et al.
(2018a) train networks for face hallucination and recognition jointly to better recover face identity from low-resolution images. Liu et al. (2018) consider 3D face reconstruction and train the recognition model jointly with the reconstructor. Sharma et al. (2018) train a classification model together with an enhancement module. Our problem setting is different from these works in that we assume we do not have control over the recognition model, as it might be on the cloud or decided in the future; thus we advocate adapting the image processing model only. This also ensures the recognition model is not harmed on natural images. Haris et al. (2018a) investigate how super-resolution can help object detection in low-resolution images. VidalMata et al. (2019) and Banerjee et al. (2019) also aim to enhance machine accuracy on poor-conditioned images but mostly focus on better image processing techniques without using recognition models. Wang et al. (2019) propose a method to make denoised images more accurately segmented, also presenting some interesting findings on transferability. Most existing works only consider one image processing task or image domain and develop specific techniques, while our simpler approach is task-agnostic and potentially more widely applicable. Our work is also related to, but different from, works that aim for robustness of the recognition model (Hendrycks & Dietterich, 2019; Li et al., 2019; Shankar et al., 2018), since we focus on the training of the processing models and assume the recognition model is given." }, { "heading": "3 METHOD", "text": "In this section we first introduce the problem setting of “recognition-aware” image processing, and then we develop various approaches to address it, each suited to different use cases."
}, { "heading": "3.1 PROBLEM SETTING", "text": "In a typical image processing problem, given a set of training input images {I^k_in} and corresponding target images {I^k_target} (k = 1, ..., N), we aim to train a neural network that maps an input image to its corresponding target. For example, in image denoising, I^k_in is a noisy image and I^k_target is the corresponding clean image. Denoting this mapping network as P (for processing), parameterized by W_P, our optimization objective during training is:\nmin_{W_P} L_proc = (1/N) sum_{k=1}^N l_proc(P(I^k_in), I^k_target),   (1)\nwhere P(I^k_in) is simply the output of the processing model, I^k_out, and l_proc is the per-sample loss function. The pixel-wise mean-squared-error (MSE, or L2) loss is one of the most popular choices. During evaluation, the performance is typically measured by the average similarity (e.g., PSNR, SSIM) between I^k_target and I^k_out = P(I^k_in), or through human assessment.\nIn our problem setting of recognition-aware processing, we are interested in a recognition task, with a trained recognition model R (R for recognition), parameterized by W_R. We assume each input/target image pair I^k_in/I^k_target is associated with a ground truth semantic label S^k for the recognition task. Our goal is to train an image processing model P such that the recognition performance on the output images {I^k_out = P(I^k_in)} is high when evaluated using R with the semantic labels {S^k}. In practice, the recognition model R might not be available (e.g., on the cloud), in which case we could resort to other models if the performance improvement transfers among models." }, { "heading": "3.2 OPTIMIZING RECOGNITION LOSS", "text": "Given that our goal is to make the output images of P more recognizable by R, it is natural to add a recognition loss on top of the objective of the image processing task (Eqn.
1) during training:\nmin_{W_P} L_recog = (1/N) sum_{k=1}^N l_recog(R(P(I^k_in)), S^k),   (2)\nwhere l_recog is the per-example recognition loss defined by the downstream recognition task; for image classification, for example, l_recog could be the cross-entropy (CE) loss. Adding the image processing loss (Eqn. 1) and the recognition loss (Eqn. 2) together, our total training objective becomes\nmin_{W_P} L_proc + λ L_recog,   (3)\nwhere λ is the coefficient controlling the weight of L_recog relative to L_proc. We denote this simple solution as “RA (Recognition-Aware) processing”, visualized in Fig. 2 left. Note that once training is finished, the recognition model used as a loss is not needed anymore; during inference we only need the processing model P, so no additional overhead is introduced when the model is deployed.\nA potential shortcoming of directly optimizing L_recog is that it might deviate P from optimizing the original loss L_proc, so that the trained P generates images that are not as good as if we only optimized L_proc. We will show in experiments, however, that with a proper choice of λ we can substantially boost the recognition performance with minimal or no sacrifice in image quality.\nIf using R as a fixed loss function could only boost the recognition accuracy on R itself, the use of the method would be restricted. Sometimes we have no knowledge of the downstream recognition model or even the task, but we still would like to improve future recognition performance. Interestingly, we find that an image processing model trained with the loss of one recognition model R1 can also boost the performance when evaluated using a recognition model R2, even if R2 has a different architecture, recognizes a different set of categories, or is trained for a different task.
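The combined objective of Eqns. 1-3 can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation: the function names and array shapes are our own, the per-sample losses are MSE (l_proc) and cross-entropy (l_recog), and the recognition model's forward pass is abstracted away as precomputed logits.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits, numerically stabilized."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ra_objective(I_out, I_target, R_logits, labels, lam=1e-3):
    """Total RA training objective of Eqn. 3: L_proc + lambda * L_recog.

    I_out:    processed images P(I_in), shape (N, H, W, C)
    I_target: target images, same shape
    R_logits: recognition model outputs R(P(I_in)), shape (N, num_classes)
    labels:   ground-truth semantic labels S^k, shape (N,)
    """
    l_proc = np.mean((I_out - I_target) ** 2)  # pixel-wise MSE, Eqn. 1
    probs = softmax(R_logits)
    # cross-entropy on the true class, Eqn. 2
    l_recog = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
    return l_proc + lam * l_recog
```

With lam = 0 this reduces to plain processing (Eqn. 1); the paper's Table 6 corresponds to sweeping lam over {0, 10^-4, 10^-3, 10^-2}.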
This makes our method effective even when we cannot access the target downstream model, in which case we can use another trained model that we do have access to as the loss function. This phenomenon also implies that the “recognizability” of a processed image can be a more general notion than just the extent to which it fits a specific model. More details on how the improvement transfers among different recognition models will be presented in the experiments." }, { "heading": "3.3 UNSUPERVISED OPTIMIZATION OF RECOGNITION LOSS", "text": "The solution above requires semantic labels for the training images, which, however, may not always be available. In this case, we can instead regress the recognition model’s output on the target image, R(I^k_target), given that the target images {I^k_target} are at hand and that the recognition model R is pretrained and fixed. The recognition objective in Eqn. 2 changes to\nmin_{W_P} L_recog = (1/N) sum_{k=1}^N l_dis(R(P(I^k_in)), R(I^k_target)),   (4)\nwhere l_dis is a distance metric between R’s outputs on the processed image P(I^k_in) and on the ground truth target image I^k_target. For example, when R is a classification model that outputs a probability distribution over classes, l_dis could be the KL divergence or simply an L2 distance. During evaluation, the output of R is still compared to the ground truth semantic label S^k. We call this approach “unsupervised RA”. Note that it is only “unsupervised” for training the model P, not necessarily for the model R. The (pre)training of the model R is not our concern since, in our problem setting (Sec. 3.1), R is a given trained model: it can be trained in any manner, with or without full supervision, and it can even be trained on another dataset, as we later show in Sec. 4.4.
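One concrete instantiation of the distance l_dis in Eqn. 4 is the KL divergence between R's class distributions on the target and the processed image. The sketch below is a hypothetical NumPy version under that choice; the function name, the epsilon smoothing, and the direction of the KL are our own assumptions, since the paper leaves l_dis open (KL or L2).

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits, numerically stabilized."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def unsupervised_ra_loss(logits_out, logits_target, eps=1e-12):
    """l_dis of Eqn. 4 as KL(R(I_target) || R(P(I_in))).

    logits_out:    R's logits on the processed image P(I_in), shape (N, C)
    logits_target: R's logits on the target image I_target, shape (N, C)
    No semantic labels S^k are needed: R's prediction on the target
    image plays the role of the teacher signal.
    """
    p = softmax(logits_target)  # teacher distribution from the target image
    q = softmax(logits_out)     # student distribution from the processed image
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)
    return kl.mean()
```

The loss is zero exactly when R responds identically to the processed and the target image, which is the goal of unsupervised RA.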
This approach is to some extent related to the “knowledge distillation” paradigm (Hinton et al., 2014) used for model compression, where the output of a large model guides the output of a small model given the same input images. Here, instead, we use the same recognition model R but guide the upstream processing model to generate input to R that produces output similar to that of the target image." }, { "heading": "3.4 USING AN INTERMEDIATE TRANSFORMER", "text": "Sometimes we do want to guarantee that the added recognition loss L_recog will not deviate the model P from optimizing its original loss. We can achieve this by introducing an intermediate transformation model T. After the input image goes through the image processing model P, the output image is first fed to the model T, and T’s output image serves as the input for the recognition model R (Fig. 2 right). In this case, T’s parameters W_T are optimized to minimize the recognition loss:\nmin_{W_T} L_recog = (1/N) sum_{k=1}^N l_recog(R(T(P(I^k_in))), S^k).   (5)\nIn this way, with T taking over the recognition loss, the model P can “focus on” its original image processing loss L_proc. The optimization objective becomes:\nmin_{W_P} L_proc + min_{W_T} λ L_recog.   (6)\nIn Eqn. 6, P is still solely optimizing L_proc, as in the original image processing problem (Eqn. 1). P is learned as if there were no recognition loss, and therefore the image processing quality of its output is not affected. This is achieved by “cutting” the gradient generated by L_recog between the model T and P (Fig. 2 right). The responsibility for better recognition performance falls on the model T. We term this solution “RA with transformer”.\nThe downside of using a transformer, compared with directly optimizing the recognition loss through the processing model, is that there are two instances of each image (the outputs of models P and T), one is
Also, as we will show later, it can sometimes harm the transferability of the performance improvement, possibly because there is no image processing loss as a constraint on T ’s output. Therefore, the transformer is best suited for the case where we want to guarantee the image processing quality not affected at all, at the expense of maintaining another image and losing some transferability." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our proposed methods on three image processing tasks, namely image super-resolution, denoising, and JPEG-deblocking. More specifically, these are image enhancement or restoration tasks, where usually the target image is an enhanced image or the original image. Other more broader image processing tasks such as pattern detection, segmentation, object extraction are not considered in this work. To obtain the input images, for super-resolution, we use a downsampling scale factor of 4×; for denoising, we add Gaussian noise on the images with a standard deviation of 0.1 to obtain the noisy images; for JPEG deblocking, a quality factor of 10 is used to compress the image to JPEG format. We pair these three tasks with two common visual recognition tasks, image classification and object detection. We adopt the SRResNet (Ledig et al., 2017) as the architecture of the image processing model P , due to its popularity and simplicity. For the transformer model T , we use the 6-block ResNet architecture in CycleGAN (Zhu et al., 2017), a general-purpose image to image transformation network. For classification we use the ImageNet and for detection we use PASCAL VOC as our benchmark. The recognition architectures are ResNet, VGG and DenseNet. Training is performed with the training set and results on the validation set are reported. For more details on the training settings and hyperparameters of each task, please refer to Appendix A." 
}, { "heading": "4.1 EVALUATION ON THE SAME RECOGNITION MODEL", "text": "We first show our results when evaluating on the same recognition model, i.e., the R used for evaluation is the same as the R used as the recognition loss in training. Table 1a shows our results on ImageNet classification. ImageNet-pretrained classification models ResNet-18/50/101, DenseNet-121 and VGG-16 are denoted as R18/50/101, D121, V16 in Table 1a. The “No Processing” row denotes the recognition performance on the input of the image processing model: for denoising/JPEG-deblocking, this corresponds to the noisy/JPEG-compressed images; for super-resolution, the low-resolution images are bicubic-interpolated to the original resolution. “Plain Processing” denotes conventional image processing models trained without the recognition loss, as described in Eqn. 1. We observe that a plainly trained processing model can boost the accuracy over unprocessed images. These two are considered baselines in our experiments.\nFrom Table 1a, using RA processing can significantly boost the accuracy on output images over plainly processed ones, for all image processing tasks and recognition models. This is more prominent when the accuracy of plain processing is lower, e.g., in super-resolution and JPEG-deblocking, in which case we mostly obtain ∼10% accuracy improvement. Even without semantic labels, our unsupervised RA can still in most cases outperform the baseline methods, despite achieving lower accuracy than its supervised counterpart. Also, in super-resolution and JPEG-deblocking, using an intermediate transformer T can bring additional improvement over RA processing.\nThe results for object detection are shown in Table 1b. We observe a similar trend as in classification: using a recognition loss can consistently improve the mAP over plain image processing by a notable margin.
On super-resolution, RA processing mostly performs on par with RA with transformer, but on the other two tasks using a transformer is slightly better. The model with transformer performs better more often possibly because, with this extra network in the middle, the capacity of the whole system is increased: in RA processing the processing model P optimizes both the processing and the recognition loss, whereas now P optimizes the processing loss while T optimizes the recognition loss." }, { "heading": "4.2 TRANSFER BETWEEN RECOGNITION ARCHITECTURES", "text": "In reality, the recognition model R on which we eventually want to evaluate the output images might not be available for us to use as a loss for training, e.g., it could be on the cloud, kept confidential, or decided later. In this case, we can train an image processing model P using a recognition model RA that is accessible to us and, after obtaining the trained model P, evaluate its output images’ recognition accuracy using another, unseen recognition model RB. We evaluate all model architecture pairs on ImageNet classification in Table 2 and Table 3, for RA Processing and RA with Transformer respectively, where a row corresponds to the model used as the recognition loss (RA) and a column corresponds to the evaluation model (RB). For RA with Transformer, we use the processing model P and transformer T trained with RA together when evaluating on RB.\nIn each column of Table 2, training with any model RA produces substantially higher accuracy on RB than plainly processed images. Thus, we conclude that the improvement in recognition accuracy is transferable among different recognition architectures. A possible explanation is that these models are all trained on the same ImageNet dataset, so their mapping functions from input to output are similar, and optimizing the loss of one leads to a lower loss for another.
This phenomenon enables us to use RA processing without knowledge of the downstream recognition model architecture. However, among all rows, the RA that achieves the highest accuracy is still the same model as RB, indicated by the diagonal boldface numbers in Table 2. This is intuitive, since in that case the processing model optimizes the same recognition loss during training as that used in evaluation.\nMeanwhile, in Table 3, the improvement is still transferable in most cases when we use a transformer T, but there are a few exceptions. For example, when RA is ResNet or DenseNet and RB is VGG-16, the accuracy mostly falls behind plain processing by a large margin. This weaker transferability is possibly caused by the fact that no constraint is imposed by the image processing loss on T’s output, so it “overfits” more to the specific R it is trained with. For more results on object detection and on unsupervised RA, please refer to Appendix B.1.\nOne reason our method attains transferability is possibly that these models learn many common features that are useful for general computer vision, especially in shallower layers. More importantly, the reason could be similar to the reason why adversarial examples transfer among models: different models’ decision boundaries are similar. Liu et al. (2016) study adversarial examples’ transferability and show that the decision boundaries of different models align well with each other; Tramèr et al. (2017) quantitatively analyze the similarity of different models’ decision boundaries and show that the boundaries are close in arbitrary directions, whether adversarial or benign.
To answer this question, we divide the 1000 classes of ImageNet into two splits (denoted categories A and B), each with 500 classes, and train two 500-way classification models (ResNet-18), one on each split, obtaining RA and RB. Next, we train two image processing models PA and PB with RA and RB as the recognition loss, using images from categories A and B respectively. Note that neither the image processing model P nor the recognition model R has seen any images from the other split of categories during training, and RA and RB learn completely different mappings from input to output. The plain processing counterparts of PA and PB are also trained on categories A and B respectively, but without the recognition loss. We evaluate the obtained image processing models on both splits, and the results are shown in Table 4.\nWe observe that RA processing still benefits the recognition accuracy even when transferring across categories (e.g., in super-resolution, from 60.1% to 66.5% when transferring from category A to category B). The improvement is only marginally lower than directly training with a recognition model of the same category (e.g., from 60.2% to 67.8% when trained and evaluated both on category B). Such transferability between categories suggests that the learned image processing models do not improve accuracy by adding category-specific signals to the output images; instead, they generate more general signals that enable a wider set of classes to be better recognized.
For results in the opposite direction and results for unsupervised RA, please refer to Appendix B.2.\nIn Table 5, note that rows indicate classification models used as loss and columns indicate detection models, so even if they have the same name (e.g., “R18”), they are still different models, trained on different datasets for different tasks. We are transferring between architectures, categories, and tasks in this experiment. There is even a domain shift, since the model P is trained on the ImageNet training set but fed with PASCAL VOC input images during evaluation. Here the “Plain Processing” models are trained on ImageNet instead of the PASCAL VOC dataset, thus the results are different from those in Table 1b. We observe that except for two cases in the “V16” column for denoising, using the classification loss of model A (row) can boost the detection accuracy of model B notably over plain processing. This improvement is even comparable with directly training using the detection loss, as in Table 1b. Such task transferability suggests the “machine semantics” of the image could even be a task-agnostic property, and makes our method even more broadly applicable." }, { "heading": "4.5 IMAGE PROCESSING QUALITY COMPARISON", "text": "Having analyzed the recognition accuracy of the output images, we now compare the output image quality using the conventional metrics PSNR and SSIM. When using RA with a transformer, the output quality of P is guaranteed to be unaffected; therefore, here we evaluate RA processing. We use ResNet-18 on ImageNet as R, and report results with different λs (Eqn. 3) in Table 6.\nλ = 0 corresponds to plain processing. When λ = 10−4, in super-resolution, the PSNR/SSIM metrics are even slightly higher, and in denoising and JPEG-deblocking they are only marginally worse. However, the accuracy obtained is significantly higher. This suggests that the added recognition loss is not harmful when λ is chosen properly. 
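The λ-weighted training objective discussed here (Eqn. 3: processing loss plus λ times recognition loss) can be sketched as follows. This is a minimal illustrative sketch: the pure-Python `mse` helper and the constant stand-in for the recognition loss are assumptions for demonstration, not the paper's actual PyTorch training code.

```python
# Sketch of the weighted objective of Eqn. 3:
#   total loss = image processing loss + lambda * recognition loss.
# `recog_loss` is a callable stand-in for the fixed recognition model's loss.

def mse(output, target):
    # Mean squared error, the image processing (L2) loss.
    return sum((o - t) ** 2 for o, t in zip(output, target)) / len(output)

def total_loss(output, target, recog_loss, lam):
    # lam = 0 recovers plain processing; larger lam trades image
    # fidelity for recognition accuracy of the processed image.
    return mse(output, target) + lam * recog_loss(output)

out, tgt = [0.5, 0.7], [0.4, 0.9]
plain = total_loss(out, tgt, lambda o: 1.0, 0.0)    # plain processing
joint = total_loss(out, tgt, lambda o: 1.0, 1e-3)   # small recognition weight
```

As Table 6 suggests, a λ around 10−4 to 10−3 adds the recognition signal while barely perturbing the processing objective, whereas a large λ starts to dominate it.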
When λ is excessively large (10−2), the image quality is hurt more, and interestingly even the recognition accuracy starts to decrease. A proper balance between the image processing loss and the recognition loss is needed for both image quality and performance on downstream recognition tasks.\nIn Fig. 3, we visualize some examples where the output image is incorrectly classified with a plain image processing model, and correctly recognized with RA processing. With smaller λ (10−4 and 10−3), the image is nearly the same as the plainly processed images. When λ is too large (10−2), we can see some extra textures when zooming in. For more results, please refer to Appendix C." }, { "heading": "5 ANALYSIS", "text": "In this section we analyze some alternatives to our approaches. All experiments in this section are conducted using RA processing on super-resolution, with ResNet-18 trained on ImageNet as the recognition model, and λ = 10−3 if used.\nTraining without the Image Processing Loss. It is possible to train the processing model on the recognition loss Lrecog alone, without even keeping the original image processing loss Lproc (Eqn. 3). This could presumably lead to better recognition performance, since the model P can now “focus on” optimizing the recognition loss. However, we found that removing the original image processing loss hurts the recognition performance: the accuracy drops from 61.8% to 60.9%; even worse, the PSNR/SSIM metrics drop from 26.33/0.792 to 16.92/0.263, which is reasonable since the image processing loss is not optimized during training. This suggests the original image processing loss is helpful for the recognition accuracy, since it helps restore the corrupted image to its original form.\nFine-tuning the Recognition Model. 
Instead of fixing the recognition model R, we could fine-tune it jointly with the training of the image processing model P to optimize the recognition loss.\nMany prior works (Sharma et al., 2018; Bai et al., 2018; Zhang et al., 2018a) do train/fine-tune the recognition model jointly with the image processing model. We use SGD with momentum as R’s optimizer, and the final accuracy reaches 63.0%. However, since we do not fix R, it becomes a model that specifically recognizes super-resolved images, and we found its performance on the original target images drops from 69.8% to 60.5%. Moreover, when transferring the trained P to ResNet-56, the accuracy is 62.4%, worse than the 66.7% obtained when we train with a fixed ResNet-18. We lose some transferability if we do not fix the recognition model R.\nTraining Recognition Models from Scratch. Other than fine-tuning a pretrained recognition model R, we could first train a super-resolution model, and then train R from scratch on the output images. We achieve 66.1% accuracy on the output images in the validation set, higher than the 61.8% of RA processing. However, the accuracy on the original clean images drops from 69.8% to 66.1%. Alternatively, we could even train R from scratch on the interpolated low-resolution images, in which case we achieve 66.0% on interpolated validation data but only 50.2% on the original validation data. In summary, training or fine-tuning R to cater to super-resolved or interpolated images can harm its performance on the original clean images, and causes additional overhead in storing models. In contrast, our RA processing technique boosts the accuracy of the output images while keeping the performance on the original images intact." }, { "heading": "6 CONCLUSION", "text": "We investigated the problem of enhancing the machine interpretability of image processing outputs. 
We find that our simple approach – optimizing with an additional recognition loss during training – can significantly boost the recognition accuracy with minimal or no loss in image processing quality. Moreover, such improvement can transfer across recognition architectures, object categories, and vision tasks unseen during training, indicating the enhanced interpretability is not specific to one particular model but generalizes to others. This makes the proposed approach feasible even when the future downstream recognition models are unknown." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "General Setup. We evaluate our proposed methods on three image processing tasks: image super-resolution, denoising, and JPEG-deblocking. In those tasks, the target images are all the original images from the datasets. To obtain the input images, for super-resolution, we use a downsampling scale factor of 4×; for denoising, we add Gaussian noise to the images with a standard deviation of 0.1 to obtain the noisy images; for JPEG-deblocking, a quality factor of 10 is used to compress the image to JPEG format. The image processing loss used is the mean squared error (MSE, or L2) loss. For the recognition tasks, we consider image classification and object detection, two common tasks in computer vision. In total, we have 6 (3 × 2) task pairs to evaluate. We adopt SRResNet (Ledig et al., 2017) as the architecture of the image processing model P , which is simple yet effective in optimizing the MSE loss. Even though SRResNet is originally designed for super-resolution, we find it also performs well on denoising and JPEG-deblocking when its upscale parameter is set to 1 for the same input-output sizes. Throughout the experiments, on both the image processing network and the transformer, we use the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 10−4, following the original SRResNet (Ledig et al., 2017). 
Our implementation is in PyTorch (Paszke et al., 2017).\nImage Classification. For image classification, we evaluate our method on the large-scale ImageNet benchmark (Deng et al., 2009). We use five pre-trained image classification models, ResNet18/50/101 (He et al., 2016), DenseNet-121 (Huang et al., 2017) and VGG-16 (Simonyan & Zisserman, 2015) with BN (Ioffe & Szegedy, 2015) (denoted as R18/50/101, D121, V16 in Table 1a), on which the top-1 accuracy (%) of the original validation images is 69.8, 76.2, 77.4, 74.7, and 73.4 respectively. We train the processing models for 6 epochs on the training set, with a learning rate decay of 10× at epoch 5 and 6, and a batch size of 20. In evaluation, we feed unprocessed validation images to the image processing model, and report the accuracy of the output images evaluated on the pre-trained classification networks. For unsupervised RA, we use L2 distance as the function ldis in Eqn. 4. The hyperparameter λ is chosen using super-resolution with the ResNet-18 recognition model, on two small subsets for training/validation from the original large training set. The λ chosen for RA processing, RA with transformer, and unsupervised RA is 10−3, 10−2 and 10 respectively.\nObject Detection. For object detection, we evaluate on PASCAL VOC 2007 and 2012 dataset, using Faster-RCNN (Ren et al., 2015) as the recognition model. Our implementation is based on the code from (Yang et al., 2017). Following common practice (Redmon et al., 2016; Ren et al., 2015; Dai et al., 2016), we use VOC 07 and 12 trainval data as the training set, and evaluate on VOC 07 test data. The Faster-RCNN training uses the same hyperparameters in (Yang et al., 2017). For the recognition model’s backbone architecture, we evaluate ResNet-18/50/101 and VGG-16 (without BN (Ioffe & Szegedy, 2015)), obtaining mAP of 74.2, 76.8, 77.9, 72.2 on the test set respectively. 
Given those trained detectors as recognition loss functions, we train the models on the training set for 7 epochs, with a learning rate decay of 10× at epochs 6 and 7, and a batch size of 1. We report the mean Average Precision (mAP) of processed images in the test set. As in image classification, we use λ = 10−3 for RA processing, and λ = 10−2 for RA with transformer." }, { "heading": "B MORE RESULTS ON TRANSFERABILITY", "text": "We present some additional results on transferability here.\nB.1 TRANSFERRING BETWEEN ARCHITECTURES\nWe provide the model transferability results of RA processing on object detection in Table 7. Rows indicate the models trained as recognition loss and columns indicate the evaluation models. We see a similar trend as in classification (Table 1a): using other architectures as loss can also improve recognition performance over plain processing; the loss model that achieves the highest performance is mostly the model itself, as can be seen from the fact that most boldface numbers are on the diagonals.\nAs a complement to Section 4.2, we present the results when transferring between recognition architectures, using unsupervised RA, in Table 8. We note that for super-resolution and JPEG-deblocking, a similar trend holds as in (supervised) RA processing, as using any architecture in training will improve over plain processing. But for denoising, this is not always the case. Some models P trained with unsupervised RA are slightly worse than the plain processing counterpart. A possible reason for this is that the noise level in our experiments is not large enough and plain processing already achieves very high accuracy.\nB.2 TRANSFERRING BETWEEN RECOGNITION TASKS\nIn Section 4.4, we investigated the transferability of improvement from classification to detection. Here we evaluate the opposite direction, from detection to classification. The results are shown in Table 9. 
Here, using RA processing can still consistently improve over plain processing for any pair of models, but we note that the improvement is not as significant as directly training using classification models as loss (Table 1a and Table 2).\nAdditionally, the results when we transfer the model P trained with unsupervised RA with image classification to object detection are shown in Table 10. In most cases, it improves over plain processing, but for image denoising, this is not always the case. Similar to the results in Table 8, this could be because the noise level is relatively low in our experiments." }, { "heading": "C MORE VISUALIZATIONS", "text": "We provide more visualizations in Fig. 4, where the output image is incorrectly classified by ResNet-18 with a plain image processing model, and correctly recognized with RA processing, as in Fig. 3 in Section 4.5." }, { "heading": "D RESULTS ON IMAGENET-C", "text": "We evaluate our methods on the ImageNet-C benchmark (Hendrycks & Dietterich, 2019). It imposes 17 different types of corruptions on the ImageNet (Deng et al., 2009) validation set. Although the ImageNet-C benchmark is designed for evaluating the robustness of recognition models rather than for testing image processing models, it is a good testbed for our methods in a broader range of processing tasks. Since only corrupted images from the validation set are released, we divide it evenly for each class into two halves and train/test on its first/second half. The corrupted image is the input image to the processing model and the original clean image is the target image. The recognition model used in this experiment is an ImageNet-pretrained ResNet-18.\nIn Table 11, we evaluate RA Processing on all 17 types of corruptions, with corruption level 5 as in (Hendrycks & Dietterich, 2019). We observe that RA Processing brings consistent improvement over plain processing, sometimes by an even larger margin than the tasks considered in Sec. 
4.\nIn Table 12, we experiment with different corruption levels for the corruption types “speckle noise” and “snow”. We also evaluate our variants – Unsupervised RA and RA with Transformer. We observe that when the corruption level is higher, our methods tend to bring larger recognition accuracy gains.\nIn Table 13, we examine the transferability of RA Processing between recognition architectures, using the same two tasks, “speckle noise” and “snow”, with corruption level 5. Note that the recognition loss used during training is from a ResNet-18, and we evaluate the improvement over plain processing on ResNet-50/101, DenseNet-121 and VGG-16. We observe that the improvement over plain processing is transferable among different architectures." } ]
2019
null
SP:213a295549ebc49eda533baf77de2e0aed81cbb1
[ "The paper defines a new measure of distance between a hypothesis $h$ and a point $x$, which is the probability mass of the smallest (by probability mass) disagreement region (induced by the other $h' \\in \\mathcal{H}$) containing $x$. In general this is intractable, so the authors offer two assumptions about the relationship between this measure and more tractable quantities (one being the distance between the model parameters of those hypotheses, and the other being what the authors call the 'variation ratio'). The reasonableness of these assumptions is then assessed, and the algorithm is tested on a variety of datasets and against several reasonable competitors. ", "This paper is motivated by the idea that unlabelled samples near the estimated decision boundary tend to be very informative in an active learning setting. However, measuring the distance between an instance and the decision boundary is a non-trivial task in numerous machine learning algorithms, especially in deep learning. The paper proposes a (theoretical) sample distance to the decision boundary that relies on the least probable disagreement region (LPDR) that still contains the sample. The paper makes two assumptions to evaluate the proposed distance empirically: (1) closeness of the parameters of two hypotheses implies closeness of these hypotheses as defined by the probability of the disagreement region, and (2) the variation ratio of labels obtained by evaluating a set of hypotheses sampled around the decision boundary is a proxy for the proposed distance. Under these assumptions, hypotheses are sampled around a given decision boundary by adding Gaussian noise to the parameters of the fitted model. Both assumptions are validated empirically on different datasets and varying levels of variance of the noise term to show the effect on the variation ratio and distance respectively. 
Consequently, an iterative active learning algorithm is proposed that adapts the variance of the noise term in order to select a predefined number of samples. Extensive experimental results indicate that LPDR outperforms other uncertainty-based active learning algorithms on various datasets or is at least on par with them. " ]
The active learning strategy of querying unlabeled samples nearer the estimated decision boundary at each step is known to be effective when the distance from a sample to the decision boundary can be explicitly evaluated; however, in numerous cases in machine learning, especially those involving deep learning, a conventional distance such as the $\ell_p$ distance from a sample to the decision boundary is not readily measurable. This paper defines a theoretical distance from an unlabeled sample to the decision boundary as the probability of the least probable disagreement region (LPDR) containing the unlabeled sample, and it discusses how this theoretical distance can be empirically evaluated with a lower order of time complexity. Monte Carlo sampling of hypotheses is performed to approximate the theoretically defined distance. Experimental results on various datasets show that the proposed algorithm consistently outperforms all other high-performing uncertainty-based active learning algorithms and leads to state-of-the-art active learning performance on the CIFAR10, CIFAR100, Tiny ImageNet and Food101 datasets. Only the proposed algorithm outperforms random sampling on the CIFAR100 dataset using K-CNN, while all other algorithms fail to do so.
[]
[ { "authors": [ "Maria-Florina Balcan", "Andrei Broder", "Tong Zhang" ], "title": "Margin based active learning", "venue": "In International Conference on Computational Learning Theory,", "year": 2007 }, { "authors": [ "William H Beluch", "Tim Genewein", "Andreas Nürnberger", "Jan M Köhler" ], "title": "The power of ensembles for active learning in image classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Lukas Bossard", "Matthieu Guillaumin", "Luc Van Gool" ], "title": "Food-101 – mining discriminative components with random forests", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Djallel Bouneffouf", "Romain Laroche", "Tanguy Urvoy", "Raphael Féraud", "Robin Allesiardo" ], "title": "Contextual bandit for active learning: Active thompson sampling", "venue": "In International Conference on Neural Information Processing,", "year": 2014 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "Andre Van Schaik" ], "title": "Emnist: Extending mnist to handwritten letters", "venue": "In 2017 International Joint Conference on Neural Networks (IJCNN),", "year": 2017 }, { "authors": [ "David A Cohn", "Zoubin Ghahramani", "Michael I Jordan" ], "title": "Active learning with statistical models", "venue": "Journal of artificial intelligence research,", "year": 1996 }, { "authors": [ "Aron Culotta", "Andrew McCallum" ], "title": "Reducing labeling effort for structured prediction tasks", "venue": "In AAAI,", "year": 2005 }, { "authors": [ "Elizabeth D Dolan", "Jorge J Moré" ], "title": "Benchmarking optimization software with performance profiles", "venue": "Mathematical programming,", "year": 2002 }, { "authors": [ "Melanie Ducoffe", "Frederic Precioso" ], "title": "Qbdc: query by dropout committee for training deep supervised architecture", "venue": "arXiv preprint arXiv:1511.06412,", "year": 2015 }, { "authors": [ "Melanie 
Ducoffe", "Frederic Precioso" ], "title": "Adversarial active learning for deep networks: a margin based approach", "venue": "arXiv preprint arXiv:1802.09841,", "year": 2018 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Daniel Gissin", "Shai Shalev-Shwartz" ], "title": "Discriminative active learning", "venue": "arXiv preprint arXiv:1907.06347,", "year": 2019 }, { "authors": [ "Denis Gudovskiy", "Alec Hodgkinson", "Takuya Yamaguchi", "Sotaro Tsukizawa" ], "title": "Deep active learning for biased datasets via fisher kernel self-supervision", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Neil Houlsby", "Ferenc Huszár", "Zoubin Ghahramani", "Máté Lengyel" ], "title": "Bayesian active learning for classification and preference learning", "venue": "arXiv preprint arXiv:1112.5745,", "year": 2011 }, { "authors": [ "Daniel Joseph Hsu" ], "title": "Algorithms for active learning", "venue": "PhD thesis, UC San Diego,", "year": 2010 }, { "authors": [ "Ajay J Joshi", "Fatih Porikli", "Nikolaos Papanikolopoulos" ], "title": "Multi-class active learning for image classification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 and cifar-100 datasets. https: //www.cs.toronto.edu/ ̃kriz/cifar.html", "venue": "The mnist database of handwritten digits. 
http://yann.lecun.com/exdb/mnist,", "year": 2009 }, { "authors": [ "David D Lewis", "William A Gale" ], "title": "A sequential algorithm for training text classifiers", "venue": "In SIGIR’94,", "year": 1994 }, { "authors": [ "Tom M Mitchell" ], "title": "Generalization as search", "venue": "Artificial intelligence,", "year": 1982 }, { "authors": [ "Stephen Mussmann", "Percy S Liang" ], "title": "Uncertainty sampling is preconditioned stochastic gradient descent on zero-one loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "A Ng" ], "title": "The street view house numbers (svhn", "venue": null, "year": 2019 }, { "authors": [ "Robert Pinsler", "Jonathan Gordon", "Eric Nalisnick", "José Miguel Hernández-Lobato" ], "title": "Bayesian batch active learning as sparse subset approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nicholas Roy", "Andrew McCallum" ], "title": "Toward optimal active learning through monte carlo estimation of error reduction", "venue": "ICML, Williamstown, pp", "year": 2001 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Tobias Scheffer", "Christian Decomain", "Stefan Wrobel" ], "title": "Active hidden markov models for information extraction", "venue": "In International Symposium on Intelligent Data Analysis,", "year": 2001 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "arXiv preprint arXiv:1708.00489,", "year": 2017 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "A 
geometric approach to active learning for convolutional neural networks", "venue": "arXiv preprint arXiv,", "year": 2017 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Burr Settles", "Mark Craven", "Soumya Ray" ], "title": "Multiple-instance active learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "H Sebastian Seung", "Manfred Opper", "Haim Sompolinsky" ], "title": "Query by committee", "venue": "In Proceedings of the fifth annual workshop on Computational learning theory,", "year": 1992 }, { "authors": [ "Claude E Shannon" ], "title": "A mathematical theory of communication", "venue": "Bell system technical journal,", "year": 1948 }, { "authors": [ "Weishi Shi", "Qi Yu" ], "title": "Integrating bayesian and discriminative sparse kernel machines for multiclass active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Samarth Sinha", "Sayna Ebrahimi", "Trevor Darrell" ], "title": "Variational adversarial active learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Charles Spearman" ], "title": "General Intelligence” objectively determined and measured", "venue": "American Journal of Psychology,", "year": 1904 }, { "authors": [ "Philipp Tschandl", "Cliff Rosendahl", "Harald Kittler" ], "title": "The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions", "venue": "Scientific data,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Beichen Zhang", "Liang Li", "Shijie Yang", "Shuhui Wang", "Zheng-Jun Zha", "Qingming Huang" ], "title": 
"State-relabeling adversarial active learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "SVHN (Netzer" ], "title": "2019) is a real-world digit image dataset which has a training set of 73,257 samples and a test set of 26,032 samples in 10 classes. Each sample is a color image", "venue": null, "year": 2019 }, { "authors": [ "HAM10000 (Tschandl" ], "title": "2018) is an imbalanced dataset which has 10,015 samples in 7 classes. Each sample is a color image and resized to 75 × 75", "venue": null, "year": 2018 }, { "authors": [ "Komodakis" ], "title": "The optimizer, initial learning rate, learning rate schedule and batch size", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Active learning (Cohn et al., 1996) is a subfield of machine learning aiming to attain data efficiency with fewer labeled training data when the learner is allowed to choose the training data from which to learn. For many real-world learning problems, large collections of unlabeled samples are assumed available, and based on a certain query strategy, the label of the most informative data is iteratively queried from an oracle to be used in retraining the model (Bouneffouf et al., 2014; Roy & McCallum, 2001; Sener & Savarese, 2017b; Settles et al., 2008; Sinha et al., 2019; Sener & Savarese, 2017a; Pinsler et al., 2019; Shi & Yu, 2019; Gudovskiy et al., 2020). Active learning attempts to achieve high accuracy using as few labeled samples as possible (Settles, 2009).\nOf the possible query strategies, uncertainty-based sampling (Culotta & McCallum, 2005; Scheffer et al., 2001; Mussmann & Liang, 2018), which enhances the current model by labeling unlabeled samples that are difficult for the model to predict, is a simple strategy commonly used in pool-based active learning (Lewis & Gale, 1994). Nevertheless, many existing uncertainty-based algorithms have their own limitations. Entropy-based (Shannon, 1948) uncertainty sampling can query unlabeled samples near the decision boundary for binary classification, but it does not perform well in multiclass classification, as entropy does not equate well with the distance to a complex decision boundary (Joshi et al., 2009). Another approach, based on MC-dropout sampling (Gal et al., 2017), uses the mutual-information-based BALD (Houlsby et al., 2011) as an uncertainty measure and identifies unlabeled samples that are individually informative. Such samples, however, are not necessarily informative when considered jointly with other samples for label acquisition. To address this problem, BatchBALD (Kirsch et al., 2019) is introduced. 
However, BatchBALD in principle computes the joint mutual information of all possible batches, and is infeasible for large query sizes. The ensemble method (Beluch et al., 2018), a query-by-committee (QBC) algorithm (Seung et al., 1992), has been shown to perform well in many cases. The fundamental premise behind QBC is minimizing the version space (Mitchell, 1982), which is the set of hypotheses consistent with the labeled samples. However, the ensemble method requires a high computational load because all networks that make up the ensemble must be trained.\nThis paper defines a theoretical distance from a sample to the estimated decision boundary, referred to as the least probable disagreement region (LPDR), and in each step of active learning, labels of the unlabeled samples nearest to the decision boundary in terms of LPDR are obtained and used for retraining the classifier to improve the accuracy of the estimated decision boundary. It is generally understood that labels of samples near the decision boundary are the most informative, as such samples are uncertain. Indeed, in Balcan et al. (2007), selecting unlabeled samples with the smallest margin to the linear decision boundary, and thereby minimal certainty, attains exponential improvement over random sampling in terms of sample complexity. In deep learning, it is difficult to identify the samples nearest to the decision boundary because the sample distance to the decision boundary is difficult to evaluate. An adversarial approach (Ducoffe & Precioso, 2018) to approximating the sample distance to the decision boundary has been studied, but this method is not shown to preserve the ordering of sample distances and requires considerable computation in obtaining the distance." 
}, { "heading": "2 DISTANCE: LEAST PROBABLE DISAGREEMENT REGION (LPDR)", "text": "This paper proposes an algorithm for selecting unlabeled data that are close to the decision boundary, which cannot be explicitly defined in many cases.\nLet X , Y , H and D be the instance space, the label space, the set of hypotheses h : x → y and the joint distribution over (x, y) ∈ X × Y . The distance between two hypotheses ĥ and h is defined as the probability of the disagreement region for ĥ and h. This distance was originally defined in Hanneke et al. (2014) and Hsu (2010):\nρ(ĥ, h) := PD[ĥ(X) ≠ h(X)]. (1)\nThis paper defines the sample distance d of x to the hypothesis ĥ ∈ H based on ρ as the least probable disagreement region (LPDR) that contains x:\nd(x, ĥ) := inf_{h ∈ H(x,ĥ)} ρ(ĥ, h) (2)\nwhere H(x, ĥ) = {h ∈ H : ĥ(x) ≠ h(x)}.\nFigure 1 shows an example of LPDR. Let us define H = {hθ : hθ(x) = I[x > θ]} on input x sampled from the uniform distribution D = U [0, 1], where I[·] is an indicator function. Suppose x = x0 and ĥ = ha ∈ H with a < x0. Here, H(x0, ha) consists of all hypotheses whose prediction on x0 disagrees with ha(x0) = 1, i.e., H(x0, ha) = {hb ∈ H : hb(x0) = 0} = {hb ∈ H : b > x0}. Then, the LPDR between x0 and ha is d(x0, ha) = x0 − a, as the infimum of the distance between ha and hb ∈ H(x0, ha) is ρ(ha, hx0) = x0 − a.\nHere, the sample distribution D is unknown, and H(x, ĥ) may be uncountably infinite. Therefore, a systematic and empirical method for evaluating the distance is required. One might consider the procedure below: sample hypothesis sets H′ = {h′ : ρ(ĥ, h′) ≤ ρ′} in terms of ρ′, and perform grid search to determine the smallest ρ′ such that there exists h′ ∈ H′ satisfying ĥ(x) ≠ h′(x) for a given x. 
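The one-dimensional threshold example above can be checked numerically. The grid search below is an illustrative sketch (the grid size is an arbitrary assumption), not the algorithm the paper develops later.

```python
# Numeric check of the 1-D example: H = {h_theta(x) = I[x > theta]} with
# x ~ U[0, 1]. For thresholds a, b the disagreement mass rho(h_a, h_b) is
# the probability of the interval between them, and any h_b disagreeing
# with h_a at x0 (for a < x0) must have b >= x0, so d(x0, h_a) = x0 - a.

def rho(a, b):
    # P[h_a(X) != h_b(X)] for X ~ U[0, 1]: mass of the interval (a, b].
    return abs(b - a)

def lpdr_1d(x0, a, grid_size=10_000):
    # Grid approximation of the infimum over thresholds b whose
    # prediction at x0 disagrees with h_a's prediction.
    thresholds = [i / grid_size for i in range(grid_size + 1)]
    disagree = [b for b in thresholds if (x0 > a) != (x0 > b)]
    return min(rho(a, b) for b in disagree)

d = lpdr_1d(x0=0.6, a=0.2)   # theory: d(x0, h_a) = x0 - a = 0.4
```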
Sampling the hypotheses within the ball can be performed by sampling the corresponding parameters, under the assumption that the expected hypothesis distance is monotonically increasing in the expected distance between the corresponding parameters (see Assumption 1). This scheme is based on performing grid search on ρ′ and is therefore computationally inefficient. However, unlabeled samples can be ordered according to d without grid search under the assumption that there exists an H′ such that the variation ratio V(x) = 1 − f_m^{(x)}/|H′| and d(x, ĥ) have a strong negative correlation, where f_m^{(x)} = max_c ∑_{h′∈H′} I[h′(x) = c] (see Assumption 2).\nAssumption 1. The expected distance between ĥ and a randomly sampled h is monotonically increasing in the expected distance between the corresponding ŵ and w, i.e., E[‖ŵ − w1‖ | ŵ] ≤ E[‖ŵ − w2‖ | ŵ] implies that E[ρ(ĥ, h1) | ĥ] ≤ E[ρ(ĥ, h2) | ĥ], where ŵ, w1 and w2 are the parameters pertaining to ĥ, h1 and h2 respectively.\nAssumption 2. There exists a hypothesis set H′ sampled around ĥ having the property that a large variation ratio for a given sample implies a small sample distance to ĥ with high probability, i.e., there exists H′ such that V(x1) ≥ V(x2) implies that d(x1, ĥ) ≤ d(x2, ĥ) with high probability." }, { "heading": "3 EMPIRICAL STUDIES OF LPDR", "text": "" }, { "heading": "3.1 HYPOTHESES AND PARAMETERS IN DEEP NETWORKS: ASSUMPTION 1", "text": "The distance between two hypotheses can be approximated from the labels predicted by the two hypotheses on random samples:\nρ(ĥ, h) ≈ ρe(ĥ, h) = (1/m) ∑_{i=1}^{m} I[ĥ(x^{(i)}) ≠ h(x^{(i)})] (3)\nwhere x^{(i)} is the ith sample for i ∈ [m]. The hypothesis h is sampled by sampling the model parameter w ∼ N(ŵ, Iσ²), where ŵ is the model parameter of ĥ, and the expectation of the distance between w and ŵ depends on σ. The ρe is averaged over 100 repetitions for a fixed σ. The left-hand side of Figure 2 shows the relationship between ρe and σ on various datasets and deep networks. 
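The empirical distance ρe of Eq. 3 and the Gaussian parameter perturbation can be sketched for a toy linear classifier as below; the 2-D data, the weight vector, and the sample sizes are illustrative assumptions, not the deep networks or datasets of Figure 2.

```python
import random

# Sketch of rho_e (Eq. 3) for a toy linear classifier y = I[w . x > 0]:
# a hypothesis h is drawn by perturbing the fitted parameters w_hat with
# N(0, sigma^2) noise, and rho_e is the fraction of random samples on
# which h disagrees with h_hat. Averaged over draws, rho_e should grow
# with sigma, mirroring the left-hand side of Figure 2.

random.seed(0)

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def rho_e(w_hat, sigma, xs):
    w = [wi + random.gauss(0.0, sigma) for wi in w_hat]
    return sum(predict(w_hat, x) != predict(w, x) for x in xs) / len(xs)

w_hat = [1.0, -1.0]
xs = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(500)]

def avg_rho_e(sigma, reps=100):
    # Average over repeated hypothesis draws for a fixed sigma.
    return sum(rho_e(w_hat, sigma, xs) for _ in range(reps)) / reps

small, large = avg_rho_e(0.05), avg_rho_e(1.0)   # expect small < large
```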
The ρe increases almost monotonically as σ increases. This implies that the order is preserved between σ and ρe. Furthermore, ρe is almost linearly proportional to log(σ) in the ascending part of the graph, i.e., σ ∝ eβρe for some β > 0. The right-hand side of Figure 2 shows V with respect to σ for each unlabeled sample on MNIST. The sample distance to the decision boundary can be expressed as the σ at which the variation ratio becomes nonzero for the first time (white arrow), where the indices of the unlabeled samples on the y-axis are ordered by LPDR. The variation ratio increases as σ increases, and a data point at a short distance is expected to have a larger variation ratio than a data point at a long distance over a certain range of σ." }, { "heading": "3.2 LPDR VS VARIATION RATIO: ASSUMPTION 2", "text": "The left-hand side of Figure 3 shows Spearman's rank correlation coefficient (Spearman, 1904) between LPDR and the variation ratio with respect to σ. The correlation is calculated using only unlabeled samples whose variation ratio is not 0. A strong rank correlation is observed when σ has an appropriate value. Too large a value of σ generates hypotheses too far away from ĥ, which is not helpful for measuring the distance. The right-hand side of Figure 3 shows an example of σ (log(σ) = −5.0) that makes LPDR and the variation ratio have a strong negative correlation on MNIST; that is, the data point with the larger variation ratio is closer to the decision boundary. Results for various datasets and networks are presented in Appendix C.
The time complexity is discussed to validate the efficiency of using the variation ratio. Let m, N and nσ be the unlabeled sample size, |H′| and the number of grid points for σ respectively. Ordering unlabeled samples in terms of LPDR by grid search with respect to σ requires a time complexity of m × N × nσ (see the right-hand side of Figure 2). 
However, using the variation ratio for ordering unlabeled samples reduces the time complexity to m × N . In the case of nσ = cN for some c > 0, the time complexity is thus reduced from O(mN2) to O(mN)." }, { "heading": "4 ALGORITHM FOR LPDR", "text": "" }, { "heading": "4.1 FRAMEWORK", "text": "Let Lt and Ut be the labeled and unlabeled samples at step t. At step t, LPDR trains model parameters ŵt using the labeled samples Lt, and constructs H′ by sampling the model parameters w′n ∼ N (ŵt, Iσ2) for n ∈ [N ]. Then, LPDR queries the top q unlabeled samples having the highest variation ratio from the pool data Pt ⊂ Ut of size m." }, { "heading": "4.2 CONSTRUCTION OF SAMPLED HYPOTHESIS SET", "text": "It is important to set an appropriate σ when constructing H′, as the variation ratio goes to 0 with decreasing σ (see the right-hand side of Figure 2) and the rank correlation goes to zero with increasing σ (see the left-hand side of Figure 3). Theoretically, consider binary classification with logistic regression where the predicted label is defined as y = sgn(xTw) and supx∈X ‖x‖∞ < ∞. Then the following theorem holds; the proof is given in Appendix A.
Theorem 1. Suppose that w′n for n = 1, . . . , N are generated with variance σ2. For all x, the following hold: 1) As N → ∞, 1 − f (x)m /N goes to 0 in probability as σ2 goes to 0; 2) As N → ∞, 1 − f (x)m /N goes to 1/2 in probability for binary classification using logistic regression as σ2 goes to ∞.
The implication of Theorem 1 is that when σ is too small or too large, it is difficult to compare the sample distances of unlabeled samples. In this active learning task, at least the q most informative unlabeled samples must be identified. To meet this condition, it is reasonable to set the target of ρ′n, denoted in Algorithm 1, to ρ∗ = q/m, which is not very small and is less than 1/2 in general, for N hypotheses.
This can be attained by updating σ′n as σ′n+1 = σ′ne−β(ρ′n−ρ∗) where β > 0 (see Appendix D). 
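The multiplicative update above can be sketched in Python on a toy linear model (not the paper's implementation; the pool of 1000 points and 200 update steps are arbitrary choices): σ shrinks when the sampled distance ρ′n overshoots the target ρ∗ and grows when it undershoots, so the expected ρe settles near ρ∗.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # toy unlabeled pool
w_hat = rng.normal(size=5)                      # learned parameters

def predict(w):
    return (X @ w > 0).astype(int)

rho_star, beta, sigma = 0.05, 1.0, 1.0          # target q/m, step size, initial sigma
for n in range(200):
    w_n = w_hat + sigma * rng.normal(size=5)    # w'_n ~ N(w_hat, I sigma^2)
    rho_n = np.mean(predict(w_hat) != predict(w_n))
    sigma *= np.exp(-beta * (rho_n - rho_star)) # sigma'_{n+1} = sigma'_n e^{-beta(rho'_n - rho*)}
print(round(float(sigma), 3))                   # sigma settles where E[rho_e] is near rho*
```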
Figure 10 in Appendix E shows the final test accuracy with respect to the target ρ′n on the MNIST dataset. LPDR performs best when the target ρ′n is roughly ρ∗. In addition, the range of target ρ′n associated with the best performance is wide; thus, LPDR is relatively robust against the target ρ′n. Furthermore, LPDR is robust against the hyperparameters β, N and the sampling layers (see Appendix F).
Algorithm 1 Least Probable Disagreement Region (LPDR)
Input:
L0, U0 : Initial labeled and unlabeled samples
m, q : Size of pool data and number of queries
σ20 : Initial variance for sampling
ρ∗ : Target hypothesis distance (= q/m)
Procedure:
1: for step t = 0, 1, 2, . . . , T − 1
2:   Train parameters ŵt with Lt, then evaluate its empirical error ε̂t on Lt
3:   σt → σ′1
4:   for n = 1, 2, . . . , N
5:     Sample parameters w′n ∼ N (ŵt, Iσ′2n ) for h′n
6:     Compute γn = e−(ε′n−ε̂t)+ where ε′n is the empirical error of w′n on Lt
7:     Compute ρ′n = ρe(ĥt, h′n)
8:     Update σ′n+1 = σ′ne−β(ρ′n−ρ∗) where β > 0
9:   end for
10:  σ′N+1 → σt+1
11:  Compute Vw(x(i)) = 1 − f (i)w / ∑Nn=1 γn where f (i)w = maxc ∑Nn=1 γnI[h′n(x(i)) = c]
12:  Get I∗ = arg maxI⊂IPt ,|I|=q ∑i∈I Vw(x(i)) where IPt = { j : x(j) ∈ Pt ⊆ Ut }
13:  Update Lt+1 = Lt ∪ {(x(i), y(i))}i∈I∗ and Ut+1 = Ut \ {x(i)}i∈I∗
14: end for
Meanwhile, the efficiency of querying samples in the disagreement region of the version space is well known both theoretically (Hanneke et al., 2014) and empirically (Beluch et al., 2018). When the trained hypothesis ĥt is in the version space, the sampled hypotheses h′n are in the version space with high probability, but there are cases where they fall outside the version space (see Appendix G). Thus, LPDR gives weight γn to the prediction of the sampled hypothesis h′n, where γn = e−(ε′n−ε̂t)+ is a function of ε̂t = errLt(ĥt) and ε′n = errLt(h′n). Here, (·)+ is max{0, ·} and errL(h) is the empirical error of h on L. 
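The γn weighting can be made concrete with a short sketch (hypothetical toy inputs; `preds` holds the labels predicted by each sampled hypothesis on the pool): hypotheses whose training error exceeds that of the learned hypothesis contribute less to the vote.

```python
import numpy as np

def weighted_variation_ratio(preds, errs, err_hat):
    # preds: (N, m) labels predicted by N sampled hypotheses on m pool points.
    # errs:  (N,) empirical training errors of the sampled hypotheses.
    # err_hat: empirical training error of the learned hypothesis.
    gamma = np.exp(-np.maximum(errs - err_hat, 0.0))  # gamma_n = e^{-(eps'_n - eps_hat)_+}
    votes = np.stack([gamma @ (preds == c).astype(float)
                      for c in np.unique(preds)])     # weighted votes per class
    f_w = votes.max(axis=0)                           # weighted frequency of the modal class
    return 1.0 - f_w / gamma.sum()

# The third hypothesis has a worse training error, so its dissenting votes count less.
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [0, 0, 1]])
print(weighted_variation_ratio(preds, errs=np.array([0.0, 0.0, 0.5]), err_hat=0.0))
```

The first pool point receives unanimous votes, so its weighted variation ratio is 0; the other two are ranked by how strongly the (down-weighted) hypotheses disagree.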
Then, LPDR uses the weighted variation ratio Vw as a function of the weighted frequency of the modal class fw, as defined below:
Vw(x(i)) = 1 − f (i)w / ∑Nn=1 γn (4)
where f (i)w = maxc ∑Nn=1 γnI[h′n(x(i)) = c] and x(i) ∈ Pt ⊆ Ut.
If H′ is a subset of the version space in the realizable case, the sample complexity of LPDR follows Hanneke's theorem (Hanneke et al., 2014). Let Λ be the sample complexity, defined as the smallest integer t such that for all t′ ≥ t, err(ht′) ≤ ε with probability at least 1 − δ, where err(h) := PD[h(X) ≠ Y ]. Then, LPDR achieves a sample complexity Λ such that, for D in the realizable case, ∀ε, δ ∈ (0, 1),
Λ(ε, δ,D) ≲ ξ · ( D log ξ + log ( log(1/ε) / δ ) ) · log(1/ε)
where D and ξ are the VC-dimension of H and the disagreement coefficient with respect to H and D. When ξ = O(1), in terms of ε, the number of labeled samples required by LPDR is just O(log(1/ε) · log log(1/ε)), while the number of labeled samples required by passive learning is Ω(1/ε). Therefore, in this case, LPDR provides an exponential improvement over passive learning in sample complexity (Hsu, 2010)." }, { "heading": "5 EXPERIMENTS", "text": "This section discusses experimental results on 8 benchmark datasets: MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2019), EMNIST (Cohen et al., 2017), CIFAR100 (Krizhevsky et al., 2009), Tiny ImageNet (a subset of the ILSVRC dataset containing 200 categories rather than the usual 1000; Russakovsky et al., 2015), Food101 (Bossard et al., 2014) and HAM10000 (Tschandl et al., 2018) datasets. For a fair comparison with other active learning algorithms, a simple two-layered CNN, referred to as 'S-CNN' (Chollet et al., 2015), is used for MNIST, and a four-layered CNN, referred to as 'K-CNN' (Chollet et al., 2015), is used for CIFAR10, SVHN, EMNIST and CIFAR100. 
Additionally, Wide-ResNet (WRN-16-8; Zagoruyko & Komodakis, 2016) is used for CIFAR100, Tiny ImageNet, Food101 and HAM10000.
Figures 4–7 show magnified plots of test accuracy to accentuate the differences in performance among the methods; initial labeled sample sizes are not shown in the figures. Figures that include the initial labeled sample size are presented in Appendix H." }, { "heading": "5.1 EXPERIMENTAL SETTINGS", "text": "Experimental settings regarding the total number of epochs, data size and acquisition size are summarized in Table 1, and other details concerning the model, optimizer, batch size, learning rate and hyperparameters are presented in Appendix B." }, { "heading": "5.2 RESULTS FOR MNIST, CIFAR10, SVHN AND EMNIST", "text": "A number of experiments are conducted to compare the performance of LPDR with other high performing uncertainty based active learning algorithms on 8 datasets. Figure 4 shows the test accuracy with respect to the number of labeled samples on the MNIST, CIFAR10, SVHN and EMNIST datasets. The algorithms are denoted as follows: 'LPDR': the proposed algorithm, 'Random': random sampling, 'Entropy': entropy based uncertainty sampling, 'MC-BALD': MC dropout sampling using BALD, 'MC-VarR': MC dropout sampling using the variation ratio (Ducoffe & Precioso, 2015) and 'ENS-VarR': ensemble method. Overall, LPDR either performs best or is comparable with all other algorithms. Its performance is consistent across the benchmark datasets. In the early steps, LPDR significantly outperforms all other algorithms on the MNIST and CIFAR10 datasets. Of all the algorithms compared, Entropy performed the worst. MC-BALD performed well only on the SVHN dataset: it seems that the performance of BALD is highly dependent on the dataset. With the query size set to 1, LPDR outperforms BatchBALD on the MNIST dataset (see Appendix I). 
Although MC-VarR and ENS-VarR are based on different sampling methods, both perform similarly: both outperform all others on the EMNIST dataset, while showing a significant drop in performance compared to LPDR on the SVHN and CIFAR10 datasets. It is observed that the performances of the other algorithms have a relatively strong data dependency compared to LPDR. On the CIFAR10 dataset, the performances of MC-VarR and ENS-VarR are no better than that of Random, and Entropy and MC-BALD perform worse than Random. These results can be attributed to the low network capacity compared to the data complexity. This issue will be discussed in the next section." }, { "heading": "5.3 RESULTS FOR CIFAR100 WITH K-CNN AND WIDE-RESNET", "text": "In order to compare the performance of the algorithms with respect to network capacity, experiments are conducted using networks of different capacity on the same dataset. Figure 5 shows the test accuracy with respect to the number of labeled samples on the CIFAR100 dataset with K-CNN and WRN-16-8. The left-hand figure shows the results of using K-CNN, which has a relatively smaller network capacity than WRN-16-8. With the exception of LPDR, the performances of all algorithms are much worse than that of Random. The right-hand figure shows the results of using WRN-16-8, which has a relatively larger network capacity. In contrast to the results for K-CNN, most algorithms outperform Random. With a large network capacity, the performance gap between LPDR and the other algorithms is reduced, but LPDR still outperforms the others. LPDR is able to perform consistently better than Random regardless of the network capacity, and it seems to be particularly effective with low capacity networks." }, { "heading": "5.4 RESULTS FOR TINY IMAGENET AND FOOD101", "text": "Experiments on more difficult tasks are conducted. Figure 6 shows the test accuracy with respect to the number of labeled samples on the Tiny ImageNet and Food101 datasets with WRN-16-8. 
Tiny ImageNet and Food101 are considered to be more difficult than CIFAR100. Even on these more difficult tasks, LPDR outperforms all other algorithms.
5.5 RESULTS FOR HAM10000
Additional experiments are conducted to compare the performance of the algorithms on the imbalanced HAM10000 dataset with WRN-16-8. Figure 7 shows the results of the test accuracy with respect to the number of labeled samples. LPDR outperforms all the other algorithms compared. Figure 15 in Appendix J shows the results of AUC with respect to the number of labeled samples, where LPDR performs comparably to all other algorithms.
To summarize the comparison of algorithms across all experimental settings and repetitions, rank and Dolan-More curves are presented in Appendix K. LPDR consistently achieves the top rank at all steps and significantly outperforms the other algorithms in all experimental settings." }, { "heading": "6 RELATED WORK", "text": "Other than the uncertainty-based sampling framework (Culotta & McCallum, 2005; Scheffer et al., 2001; Mussmann & Liang, 2018; Lewis & Gale, 1994; Gal et al., 2017; Kirsch et al., 2019; Beluch et al., 2018) for active learning, decision-theoretic methods such as expected model change (Settles et al., 2008) have certain relevance to the proposed LPDR, as unlabeled samples nearer the decision boundary, which LPDR attempts to identify, have larger gradients leading to larger model change. Recently, adversarial approaches have been proposed to discriminate labeled and unlabeled samples (Gissin & Shalev-Shwartz, 2019; Sinha et al., 2019; Zhang et al., 2020); after performing adversarial learning, any unlabeled sample that is most confidently predicted as unlabeled is queried and used to retrain the network. Here, adversarial learning is used to indirectly identify samples near the decision boundary."
}, { "heading": "7 CONCLUSION", "text": "This paper defines a theoretical distance of unlabeled sample to the decision boundary referred to as the least probable disagreement region (LPDR) containing the unlabeled sample for active learn-\ning. LPDR can be evaluated empirically with low computational load by making two assumptions regarding parameters of the hypothesis space, variation ratio and the LPDR. The two assumptions are empirically verified.\nExperimental results on various datasets show that LPDR consistently outperforms all other high performing uncertainty based active learning algorithms and leads to state-of-the-art active learning performance on CIFAR10, CIFAR100, Tiny ImageNet, and Food101 datasets. In addition, LPDR is able to perform consistently better than random sampling regardless of the network capacity while all other algorithms compared fail to do so.\nLPDR is simple enough to be applied to various classification tasks with deep networks: the implementation requires only sampling a subset of parameters (parameters in the last FC layer of the deep network). Additionally, LPDR is capable of quick and reliable performance in a variety of different settings with only a computational load that is not much higher than that of other uncertainty sampling methods. In conclusion, LPDR is an effective uncertainty based sampling algorithm in pool-based active learning." }, { "heading": "A PROOF OF THEOREM 1", "text": "Assume that ‖ŵt‖ = 1 without the loss of generality, and ‖x‖ 6= 0 to avoid the null case. The predicted label of x by w′n disagrees with that by ŵt if sgn ( xTw′n ) 6= sgn ( xTŵt ) , here, sgn(0) = 1. Note that xTw′n = x Tŵt + σx Te′n where e ′ n = (Zn1, . . . , Zn|w|) T,\nand Znks are independent random variables from N (0, 1). The event of {sgn ( xTw′n ) 6=\nsgn ( xTŵt ) } is equal to that of E1 ∪E2 where E2 = {σxTe′n ≥ 0,xTŵt < 0} and E2 = {σxTe′n < 0,xTŵt ≥ 0}. Thus, the proof has two folds: the cases of 1) E1 and 2) E2. 
In the first fold,
P [E1] = P[σxTe′n ≥ |xTŵt|] = P[σ‖x‖Z ≥ |xTŵt|] = 1 − Φ( a(x, ŵt)/σ )
where Z ∼ N (0, 1), Φ is the cumulative distribution function of the standard normal distribution, and a(x, ŵt) = |xTŵt|/‖x‖. Note that σxTe′n ∼ N (0, σ2‖x‖2). Consequently, P[E1] < 1/2 due to a(x, ŵt) > 0. Hence,
f (x)m /N = ∑Nn=1 (1/N) I[ĥt(x) = h′n(x)]
goes to a value greater than 1/2 in probability as N → ∞ because Var(f (x)m /N) → 0 as N → ∞. Therefore, as N → ∞, ∀x, the variation ratio is
1 − f (x)m /N = 1 − ∑Nn=1 (1/N) I[ĥt(x) = h′n(x)] → 1 − Φ( a(x, ŵt)/σ )
in probability. This is because f (x)m is the frequency of the modal class with probability tending to 1 as N → ∞. By the smoothness of Φ,
1 − f (x)m /N → 1 − Φ(∞) = 0 as σ2 → 0
and
1 − f (x)m /N → 1 − Φ(0) = 1/2 as σ2 → ∞.
Next, in the second fold,
P [E2] = P[σxTe′n < −|xTŵt|] = P[σ‖x‖Z < −|xTŵt|] = Φ( −a(x, ŵt)/σ ).
Consequently, P[E2] < 1/2. Hence,
f (x)m /N = ∑Nn=1 (1/N) I[ĥt(x) = h′n(x)]
goes to a value greater than 1/2 in probability as N → ∞ because Var(f (x)m /N) → 0 as N → ∞. Therefore, as N → ∞, ∀x, the variation ratio is
1 − f (x)m /N = 1 − ∑Nn=1 (1/N) I[ĥt(x) = h′n(x)] → Φ( −a(x, ŵt)/σ ) = 1 − Φ( a(x, ŵt)/σ )
in probability. This is because f (x)m is the frequency of the modal class with probability tending to 1 as N → ∞. By the smoothness of Φ,
1 − f (x)m /N → 1 − Φ(∞) = 0 as σ2 → 0
and
1 − f (x)m /N → 1 − Φ(0) = 1/2 as σ2 → ∞.
This completes the proof." }, { "heading": "B EXPERIMENTAL SETTINGS", "text": "B.1 DATASETS
MNIST (LeCun et al., 1998) is a dataset of handwritten digits which has a training set of 60,000 samples and a test set of 10,000 samples in 10 classes. Each sample is a black and white image, 28 × 28 in size. CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) are labeled subsets of the 80 million tiny images dataset which have a training set of 50,000 samples and a test set of 10,000 samples in 10 and 100 classes respectively. 
Each sample is a color image, 32 × 32 in size. SVHN (Netzer et al., 2019) is a real-world digit image dataset which has a training set of 73,257 samples and a test set of 26,032 samples in 10 classes. Each sample is a color image, 32 × 32 in size.
EMNIST (Cohen et al., 2017) is a dataset of handwritten characters and digits which has a training set of 80,000 samples and a test set of 10,000 samples in 47 classes. Each sample is a black and white image, 28 × 28 in size. Tiny ImageNet is a subset of the ILSVRC (Russakovsky et al., 2015) dataset which has 100,000 samples in 200 classes. Each sample is a color image, 64 × 64 in size. In the experiments, Tiny ImageNet is split into two parts: a training set of 90,000 samples and a test set of 10,000 samples.
Food101 (Bossard et al., 2014) is a fine-grained dataset which has a training set of 75,750 samples and a test set of 25,250 samples in 101 classes. Each sample is a color image, resized to 75 × 75. HAM10000 (Tschandl et al., 2018) is an imbalanced dataset which has 10,015 samples in 7 classes. Each sample is a color image, resized to 75 × 75. In the experiments, HAM10000 is split into two parts: a training set of 8,515 samples and a test set of 1,500 samples.
All datasets are used without any preprocessing of images.
B.2 SETTINGS
S-CNN, which is the Keras MNIST CNN implementation (Chollet et al., 2015), consists of [3 × 3 × 32 conv - 3 × 3 × 64 conv - 2 × 2 maxpool - dropout (0.25) - 128 dense - dropout (0.5) - #class dense - softmax] layers. K-CNN, which is the Keras CIFAR CNN implementation (Chollet et al., 2015), consists of [two 3 × 3 × 32 conv - 2 × 2 maxpool - dropout (0.25) - two 3 × 3 × 64 conv - 2 × 2 maxpool - dropout (0.25) - 512 dense - dropout (0.5) - #class dense - softmax] layers. WRN-16-8 is a wide residual network that has 16 convolutional layers and a widening factor of 8 (Zagoruyko & Komodakis, 2016). 
The optimizer, initial learning rate, learning rate schedule and batch size for each experimental setting are described in Table 2. He normal initialization is used for all models. All experiments are run for a fixed number of acquisition steps until a certain amount of training data is labeled. Results are averaged over 5 repetitions. For all datasets, the initial labeled samples for each repetition are randomly sampled according to the distribution of the training set. For MC dropout we use 100 forward passes, and the ensemble consists of 5 networks of identical architecture but with different random initializations and random batches. For LPDR, we set σ0 = 0.01, β = 1, N = 100, and parameter sampling is applied to the last dense layer of each network." }, { "heading": "C RANK CORRELATION BETWEEN LPDR AND VARIATION RATIO", "text": "" }, { "heading": "D REGULATING ρ′n BY THE VARIANCE OF SAMPLING", "text": "The left-hand side of Figure 9 shows the ρ′n with respect to the active learning progress. For all experiments, LPDR reliably guides the ρ′n to ρ∗ = q/m (MNIST: 0.01, CIFAR10: 0.05, SVHN: 0.05, EMNIST: 0.075, CIFAR100 (KCNN): 0.1, CIFAR100 (WRN): 0.2, Tiny ImageNet: 0.25, Food101: 0.2 and HAM10000: 0.1) after the initial few steps. The right-hand side of Figure 9 shows log(σ) with respect to the active learning progress. For all experiments, the variance of sampling increases as the labeling proceeds. This is because a larger variance is required to make ρ′n = ρ∗, since unlabeled samples move away from the decision boundary learned from the labeled samples due to an increase in network confidence as the number of labeled samples increases." }, { "heading": "E FINAL TEST ACCURACY VS TARGET ρ′n", "text": "Figure 10 shows the final test accuracy with respect to the target ρ′n on the MNIST dataset. The results show that at around ρ∗ (= 0.02), it performs best for q = 20 and m = 1000. 
In addition, the range of target ρ′n associated with the best performance is wide (0.01 ∼ 0.1); thus, LPDR is robust against the target ρ′n over a wide range." }, { "heading": "F ROBUSTNESS OF LPDR AGAINST HYPERPARAMETERS", "text": "LPDR has four hyperparameters: 1) the initial variance of sampling σ0; 2) the positive hyperparameter for regulating the variance of sampling β; 3) the number of sampled hypotheses N ; and 4) the layer index of the network to which sampling is applied. The σ0 has no significant effect on the performance of LPDR since σ is adaptively regulated based on ρ′n while constructing the sampled hypothesis set. Thus, σ0 is not examined in detail. Figure 11 shows the performance comparison with respect to the hyperparameters of LPDR on the MNIST and CIFAR10 datasets. The left figures show that there is no significant difference in the performance of LPDR for various β ∈ {0.1, 1, 10} on both datasets. The robustness of LPDR against β comes from the sufficient buffer for regulating σ, since the range of target ρ′n associated with the best performance is wide. The middle figures show that there is no significant difference in the performance of LPDR for various N ∈ {5, 10, 20, 50, 100, 200} on both datasets. The robustness of LPDR against N comes from the sufficient discrimination in the variation ratio for identifying the q most informative unlabeled samples with a small number of sampled hypotheses, obtained by setting ρ∗ = q/m. The right figures show that there is no significant difference in the performance of LPDR between sampling the parameters of the last layer and sampling the parameters of all layers of the networks on both datasets." }, { "heading": "G EMPIRICAL ERRORS OF LEARNED AND SAMPLED HYPOTHESES", "text": "Figure 12 shows the empirical error of the learned and the sampled hypotheses with respect to the acquisition step for all experimental settings. 
In many cases, the empirical error of the learned hypothesis becomes zero, and thus it lies in the version space, while the sampled hypothesis is often placed outside the version space, e.g., on the SVHN dataset. Even in the cases of the EMNIST and CIFAR100 (with K-CNN) datasets, as the number of labeled samples increases, the empirical errors of the learned and the sampled hypotheses increase. To address this situation, LPDR weights the sampled hypotheses based on the prediction error difference between the learned and the sampled hypotheses, and this works well empirically." }, { "heading": "H PLOTS FOR TEST ACCURACY", "text": "Figure 13 shows the test accuracy with respect to the number of labeled samples from the initial to the final step for all experimental settings." }, { "heading": "I LPDR VS MC-BATCHBALD", "text": "Figure 14 shows the performance comparison between LPDR and MC-BALD on the MNIST dataset using S-CNN when the query size is 1 or 20. LPDR significantly outperforms MC-BatchBALD on the MNIST dataset when q = 1, in which case MC-BatchBALD is completely identical to MC-BALD. LPDR is also expected to outperform MC-BatchBALD even when q > 1: LPDR with q > 1 performs better than MC-BALD with q = 1, which MC-BatchBALD with q > 1 does not exceed (Kirsch et al., 2019).
Figure 14: The comparison of performance between LPDR and MC-BALD on the MNIST dataset where the query size is 1 or 20. The performance of BatchBALD with q > 1 does not exceed that of MC-BALD (q = 1), and LPDR (q = 20) outperforms MC-BALD (q = 1)." }, { "heading": "J AUC OF HAM10000 DATASET", "text": "On the imbalanced dataset, the performance comparison is performed not only for accuracy but also for AUC. Figure 15 shows the results of AUC with respect to the number of labeled samples on the HAM10000 dataset. LPDR performs comparably with Entropy and ENS-VarR, which perform better than the other algorithms." 
}, { "heading": "K RANK AND DOLAN-MORE CURVES", "text": "Rank curves and Dolan-More curves are used to compare the performance of the algorithms across all experimental settings and repetitions. Figure 16 shows the rank and Dolan-More curves for all\nalgorithms considered in the experiment. The rank curve of each algorithm in the left-hand figure represents the mean of ranks on all datasets at each steps of active learning. LPDR consistently is top-ranked for all steps.\nThe right-hand figure shows Dolan-More curves defined as follows (Dolan & Moré, 2002). Let accpa be the final test accuracy of the a algorithm on the p problem. After defining the performance gap as ∆pa = maxx(acc p x)−accpa, we can define Dolan-More curve Ra(·) as a function of the performance gap factor τ :\nRa(τ) = #(p : ∆pa ≤ τ)\nnp\nwhere np is the total number of evaluations for the problem p. Thus, Ra(τ) is the ratio of problems with performance gap between algorithm a and the best performing competitor not more than τ . Note that Ra(0) is the ratio of problems on which algorithm a performs the best. LPDR has the highest value Ra(0) = 43.3%, and LPDR maintains the highest Ra(τ) for all τ .\nTable 3 presents the mean and the standard deviation of performance gap from the best competitor for all steps of each algorithm on each dataset. Consistent with all the results so far, LPDR significantly outperforms the other algorithms in all experimental settings." } ]
2020
null
SP:14fa0894cc0b4dd4bdb51c089cf5511c89de8b4f
[ "This paper presents a way to view contrastive divergence (CD) learning as an adversarial learning procedure where a discriminator is tasked with classifying whether or not a Markov chain, generated from the model, has been time-reversed. Beginning with the classic derivation of CD and its approximate gradient, noting relevant issues regarding this approximation, the authors present a way to view CD as an extension of the conditional noise contrastive estimation (CNCE) method where the contrastive distribution is continually updated to keep the discrimination task difficult. Specifically, when the contrastive distribution is chosen such that the detailed balance property is satisfied, then the CNCE loss becomes exactly proportional the CD-1 update with the derivation further extended to CD-k. Practical concerns regarding lack of detailed balance are mitigated through the use of Metropolis-Hastings rejection or an adaptive weighting that arises when deriving the gradient of their time-reversal classification loss. A toy example providing empirical support for correcting the lack of detailed balance is included.", "To implement the contrastive divergence (CD) algorithm in practice, an intractable term is typically omitted from the gradient. This leads to an approximation. This work shows that the resulting algorithm can in fact be viewed as an exact algorithm targeting a different, adversarial objective. The derivation in this paper also shows how Markov chains which are not reversible w.r.t. the posterior distribution of interest can be employed within the algorithm. Effectively, this assigns an importance weight to each sample which akin to the acceptance ratio which would be needed for a Metropolis--Hastings type correction." ]
Contrastive divergence (CD) learning is a classical method for fitting unnormalized statistical models to data samples. Despite its wide-spread use, the convergence properties of this algorithm are still not well understood. The main source of difficulty is an unjustified approximation which has been used to derive the gradient of the loss. In this paper, we present an alternative derivation of CD that does not require any approximation and sheds new light on the objective that is actually being optimized by the algorithm. Specifically, we show that CD is an adversarial learning procedure, where a discriminator attempts to classify whether a Markov chain generated from the model has been time-reversed. Thus, although predating generative adversarial networks (GANs) by more than a decade, CD is, in fact, closely related to these techniques. Our derivation settles well with previous observations, which have concluded that CD’s update steps cannot be expressed as the gradients of any fixed objective function. In addition, as a byproduct, our derivation reveals a simple correction that can be used as an alternative to Metropolis-Hastings rejection, which is required when the underlying Markov chain is inexact (e.g., when using Langevin dynamics with a large step).
[ { "affiliations": [], "name": "Yair Omer" }, { "affiliations": [], "name": "Tomer Michaeli" } ]
[ { "authors": [ "Yoshua Bengio", "Olivier Delalleau" ], "title": "Justifying and generalizing contrastive divergence", "venue": "Neural Computation,", "year": 2009 }, { "authors": [ "Miguel A Carreira-Perpinan", "Geoffrey E Hinton" ], "title": "On contrastive divergence learning", "venue": "In Society for Artificial Intelligence and Statistics,", "year": 2005 }, { "authors": [ "Ciwan Ceylan", "Michael U Gutmann" ], "title": "Conditional noise-contrastive estimation of unnormalised models", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Thomas M Cover", "J Halliwell" ], "title": "Which processes satisfy the second law", "venue": "Physical Origins of Time Asymmetry,", "year": 1994 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ruiqi Gao", "Yang Lu", "Junpei Zhou", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning generative convnets via multi-grid modeling and sampling", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Joern-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Society for Artificial Intelligence and 
Statistics,", "year": 2010 }, { "authors": [ "W Keith Hastings" ], "title": "Monte Carlo sampling methods using markov chains and their applications", "venue": null, "year": 1970 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural Computation,", "year": 2002 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Geoffrey E Hinton", "Russ R Salakhutdinov" ], "title": "Replicated softmax: an undirected topic model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Geoffrey E Hinton", "Simon Osindero", "Yee-Whye Teh" ], "title": "A fast learning algorithm for deep belief nets", "venue": "Neural Computation,", "year": 2006 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables", "venue": "IEEE Transactions on Neural Networks,", "year": 2007 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Learning deep energy models: Contrastive divergence vs. 
amortized mle", "venue": "arXiv preprint arXiv:1707.00797,", "year": 2017 }, { "authors": [ "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Phone recognition using restricted boltzmann machines", "venue": "In International Conference on Acoustics, Speech and Signal Processing,", "year": 2010 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning non-convergent nonpersistent short-run mcmc toward energy-based model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yixuan Qiu", "Lingsong Zhang", "Xiao Wang" ], "title": "Unbiased contrastive divergence algorithm for training energy-based latent variable models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ruslan Salakhutdinov", "Geoffrey Hinton" ], "title": "Deep Boltzmann machines", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Ruslan Salakhutdinov", "Andriy Mnih", "Geoffrey Hinton" ], "title": "Restricted Boltzmann machines for collaborative filtering", "venue": "In International Conference on Machine Learning,", "year": 2007 }, { "authors": [ "Paul Smolensky" ], "title": "Information processing in dynamical systems: Foundations of harmony theory", "venue": "Technical report, Colorado Univ at Boulder Dept of Computer Science,", "year": 1986 }, { "authors": [ "Jascha Sohl-Dickstein", "Peter Battaglino", "Michael R DeWeese" ], "title": "Minimum probability flow learning", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "Tijmen Tieleman" ], "title": "On the convergence properties of contrastive divergence", "venue": "In International Conference on Artificial Intelligence and Statistics,", 
"year": 2010 }, { "authors": [ "Tijmen Tieleman" ], "title": "Some investigations into energy-based models", "venue": "PhD thesis, University of Toronto,", "year": 2007 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Yingnian Wu" ], "title": "A theory of generative convnet", "venue": "In International Conference on Machine Learning,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unnormalized probability models have drawn significant attention over the years. These models arise, for example, in energy based models, where the normalization constant is intractable to compute, and are thus relevant to numerous settings. Particularly, they have been extensively used in the context of restricted Boltzmann machines (Smolensky, 1986; Hinton, 2002), deep belief networks (Hinton et al., 2006; Salakhutdinov & Hinton, 2009), Markov random fields (Carreira-Perpinan & Hinton, 2005; Hinton & Salakhutdinov, 2006), and recently also with deep neural networks (Xie et al., 2016; Song & Ermon, 2019; Du & Mordatch, 2019; Grathwohl et al., 2019; Nijkamp et al., 2019).\nFitting an unnormalized density model to a dataset is challenging due to the missing normalization constant of the distribution. A naive approach is to employ approximate maximum likelihood estimation (MLE). This approach relies on the fact that the likelihood’s gradient can be approximated using samples from the model, generated using Markov Chain Monte Carlo (MCMC) techniques. However, a good approximation requires using very long chains and is thus impractical. This difficulty motivated the development of a plethora of more practical approaches, like score matching (Hyvärinen, 2005), noise contrastive estimation (NCE) (Gutmann & Hyvärinen, 2010), and conditional NCE (CNCE) (Ceylan & Gutmann, 2018), which replace the log-likelihood loss with objectives that do not require the computation of the normalization constant or its gradient.\nPerhaps the most popular method for learning unnormalized models is contrastive divergence (CD) (Hinton, 2002). CD’s advantage over MLE stems from its use of short Markov chains initialized at the data samples. 
CD has been successfully used in a wide range of domains, including modeling images (Hinton et al., 2006), speech (Mohamed & Hinton, 2010), documents (Hinton & Salakhutdinov, 2009), and movie ratings (Salakhutdinov et al., 2007), and is continuing to attract significant research attention (Liu & Wang, 2017; Gao et al., 2018; Qiu et al., 2019).\nDespite CD’s popularity and empirical success, there still remain open questions regarding its theoretical properties. The primary source of difficulty is an unjustified approximation used to derive its objective’s gradient, which biases its update steps (Carreira-Perpinan & Hinton, 2005; Bengio & Delalleau, 2009). The difficulty is exacerbated by the fact that CD’s update steps cannot be expressed as the gradients of any fixed objective (Tieleman, 2007; Sutskever & Tieleman, 2010).\nIn this paper, we present an alternative derivation of CD, which relies on completely different principles and requires no approximations. Specifically, we show that CD’s update steps are the gradients of an adversarial game in which a discriminator attempts to classify whether a Markov chain generated from the model is presented to it in its original or a time-reversed order (see Fig. 1). Thus, our derivation sheds new light on CD’s success: Similarly to modern generative adversarial methods (Goodfellow et al., 2014), CD’s discrimination task becomes more challenging as the model approaches the true distribution. This keeps the update steps effective throughout the entire training process and prevents early saturation as often happens in non-adaptive methods like NCE and CNCE. In fact, we derive CD as a natural extension of the CNCE method, replacing the fixed distribution of the contrastive examples with an adversarial adaptive distribution.\nCD requires that the underlying MCMC be exact, which is not the case for popular methods like Langevin dynamics. 
This commonly requires using Metropolis-Hastings (MH) rejection, which ignores some of the generated samples. Interestingly, our derivation reveals an alternative correction method for inexact chains, which does not require rejection." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 THE CLASSICAL DERIVATION OF CD", "text": "Assume we have an unnormalized distribution model pθ. Given a dataset of samples {xi} independently drawn from some unknown distribution p, CD attempts to determine the parameters θ with which pθ best explains the dataset. Rather than using the log-likelihood loss, CD’s objective involves distributions of samples along finite Markov chains initialized at {xi}. When based on chains of length k, the algorithm is usually referred to as CD-k.
Concretely, let qθ(x′|x) denote the transition rule of a Markov chain with stationary distribution pθ, and let rθ^m denote the distribution of samples after m steps of the chain. As the Markov chain is initialized from the dataset distribution and converges to pθ, we have that rθ^0 = p and rθ^∞ = pθ. The CD algorithm then attempts to minimize the loss
ℓCD-k = DKL(rθ^0 ‖ rθ^∞) − DKL(rθ^k ‖ rθ^∞) = DKL(p ‖ pθ) − DKL(rθ^k ‖ pθ), (1)
where DKL is the Kullback-Leibler divergence. Under mild conditions on qθ (Cover & Halliwell, 1994) this loss is guaranteed to be positive, and it vanishes when pθ = p (in which case rθ^k = pθ).
To allow the minimization of (1) using gradient-based methods, one can write
∇θℓCD-k = E_{X̃∼rθ^k}[∇θ log pθ(X̃)] − E_{X∼p}[∇θ log pθ(X)] + (dDKL(rθ^k ‖ pθ)/drθ^k) ∇θrθ^k. (2)
Here, the first two terms can be approximated using two batches of samples, one drawn from p and one from rθ^k. The third term is the derivative of the loss with respect only to the θ that appears in rθ^k, ignoring the dependence of pθ on θ. This is the original notation from (Hinton, 2002); an alternative way to write this term would be ∇θ̃ DKL(rθ̃^k ‖ pθ). 
This term turns out to be intractable and in the original derivation, it is argued to be small and thus neglected, leading to the approximation
∇θℓCD-k ≈ (1/n) ∑i (∇θ log pθ(x̃i) − ∇θ log pθ(xi)). (3)
Here {xi} is a batch of n samples from the dataset and {x̃i} are n samples generated by applying k MCMC steps to each of the samples in that batch. The intuition behind the resulting algorithm (summarized in App. A) is therefore simple. In each gradient step θ ← θ − η∇θℓCD-k, the log-likelihood of samples from the dataset is increased at the expense of the log-likelihood of the contrastive samples {x̃i}, which are closer to the current learned distribution pθ. Despite the simple intuition, it has been shown that without the third term, CD’s update rule (2) generally cannot be the gradient of any fixed objective (Tieleman, 2007; Sutskever & Tieleman, 2010) except for some very specific cases. For example, Hyvärinen (2007) has shown that when the Markov chain is based on Langevin dynamics with a step size that approaches zero, the update rule of CD-1 coincides with that of score matching (Hyvärinen, 2005). Similarly, the probability flow method of Sohl-Dickstein et al. (2011) has been shown to be equivalent to CD with a very particular Markov chain. Here, we show that regardless of the selection of the Markov chain, the update rule is in fact the exact gradient of a particular adversarial objective, which adapts to the current learned model in each step." }, { "heading": "2.2 CONDITIONAL NOISE CONTRASTIVE ESTIMATION", "text": "Our derivation views CD as an extension of the CNCE method, which itself is an extension of NCE. We therefore start by briefly reviewing those two methods.
In NCE, the unsupervised density learning problem is transformed into a supervised one. This is done by training a discriminator Dθ(x) to distinguish between samples drawn from p and samples drawn from some preselected contrastive distribution pref. 
Specifically, let the random variable Y denote the label of the class from which the variable X has been drawn, so that X|(Y = 1) ∼ p and X|(Y = 0) ∼ pref. Then it is well known that the discriminator minimizing the binary cross-entropy (BCE) loss is given by\nDopt(x) = P(Y = 1|X = x) = p(x)\np(x) + pref(x) . (4)\nTherefore, letting our parametric discriminator have the form\nDθ(x) = pθ(x)\npθ(x) + pref(x) , (5)\nand training it with the BCE loss, should in theory lead to Dθ(x) = Dopt(x) and thus to pθ(x) = p(x). In practice, however, the convergence of NCE highly depends on the selection of pref. If it significantly deviates from p, then the two distributions can be easily discriminated even when the learned distribution pθ is still very far from p. At this point, the optimization essentially stops updating the model, which can result in a very inaccurate estimate for p. In the next section we provide a precise mathematical explanation for this behavior.\nThe CNCE method attempts to alleviate this problem by drawing the contrastive samples based on the dataset samples. Specifically, each dataset sample x is paired with a contrastive sample x̃ that is drawn conditioned on x from some predetermined conditional distribution q(x̃|x) (e.g. N (x, σ2I)). The pair is then concatenated in a random order, and a discriminator is trained to predict the correct\norder. This is illustrated in Fig. 2a. Specifically, here the two classes are of pairs (A,B) corresponding to (A,B) = (X, X̃) for Y = 1, and (A,B) = (X̃,X) for Y = 0, and the discriminator minimizing the BCE loss is given by\nDopt(a, b) = P(Y = 1|A = a,B = b) = q(b|a)p(a)\nq(b|a)p(a) + q(a|b)p(b) . (6)\nTherefore, constructing a parametric discriminator of the form\nDθ(a, b) = q(b|a)pθ(a)\nq(b|a)pθ(a) + q(a|b)pθ(b) =\n( 1 +\nq(a|b)pθ(b) q(b|a)pθ(a)\n)−1 , (7)\nand training it with the BCE loss, should lead to pθ ∝ p. 
Note that here Dθ is indifferent to a scaling of pθ, which is thus determined only up to an arbitrary multiplicative constant.\nCNCE improves upon NCE, as it allows working with contrastive samples whose distribution is closer to p. However, it does not completely eliminate the problem, especially when p exhibits different scales of variation in different directions. This is the case, for example, with natural images, which are known to lie close to a low-dimensional manifold. Indeed if the conditional distribution q(·|·) is chosen to have a small variance, then CNCE fails to capture the global structure of p. And if q(·|·) is taken to have a large variance, then CNCE fails to capture the intricate features of p (see Fig. 3). The latter case can be easily understood in the context of images (see Fig. 2a). Here, the discriminator can easily distinguish which of its pair of input images is the noisy one, without having learned an accurate model for the distribution of natural images (e.g., simply by comparing their smoothness). When this point is reached, the optimization essentially stops.\nIn the next section we show that CD is in fact an adaptive version of CNCE, in which the contrastive distribution is constantly updated in order to keep the discrimination task hard. This explains why CD is less prone to early saturation than NCE and CNCE." }, { "heading": "3 AN ALTERNATIVE DERIVATION OF CD", "text": "We now present our alternative derivation of CD. In Sec. 3.1 we identify a decomposition of the CNCE loss, which reveals the term that is responsible for early saturation. In Sec. 3.2, we then present a method for adapting the contrastive distribution in a way that provably keeps this term bounded away from zero. Surprisingly, the resulting update step turns out to precisely match that of CD-1, thus providing a new perspective on CD learning. In Sec. 3.3, we extend our derivation to include CD-k (with k ≥ 1)." 
}, { "heading": "3.1 REINTERPRETING CNCE", "text": "Let us denote\nwθ(a, b) , q(a|b)pθ(b) q(b|a)pθ(a) , (8)\nso that we can write CNCE’s discriminator (7) as\nDθ(a, b) = (1 + wθ(a, b)) −1 . (9)\nThen we have the following observation (see proof in App. B).\nObservation 1. The gradient of the CNCE loss can be expressed as\n∇θ`CNCE = E X∼p X̃|X∼q\n[ αθ(X, X̃) ( ∇θ log pθ(X̃)−∇θ log pθ(X) )] , (10)\nwhere αθ(x, x̃) , (1 + wθ(x, x̃) −1)−1. (11)\nNote that (10) is similar in nature to the (approximate) gradient of the CD loss (3). Particularly, as in CD, the term∇θ log pθ(X̃)−∇θ log pθ(X) causes each gradient step to increase the log-likelihood of samples from the dataset on the expense of the log-likelihood of the contrastive samples. However, as opposed to CD, here we also have the coefficient αθ(x, x̃), which assigns a weight between 0 and 1 to each pair of samples (x, x̃). To understand its effect, observe that\nαθ(x, x̃) = 1−Dθ(x, x̃) = Dθ(x̃, x). (12)\nNamely, this coefficient is precisely the probability that the discriminator assigns to the incorrect order of the pair. Therefore, this term gives a low weight to “easy” pairs (i.e., for which Dθ(x, x̃) is close to 1) and a high weight to “hard” ones.\nThis weighting coefficient is of course essential for ensuring convergence to p. For example, it prevents log pθ from diverging to ±∞ when the discriminator is presented with the same samples over and over again. The problem is that a discriminator can often correctly discriminate all training pairs, even with a pθ that is still far from p. In such cases, αθ becomes practically zero for all pairs and the model stops updating. This shows that a good contrastive distribution is one which keeps the discrimination task hard throughout the training. 
As we show next, there is a particular choice which provably prevents αθ from converging to zero, and that choice results in the CD method.\n3.2 FROM CNCE TO CD-1\nTo bound αθ away from 0, and thus avoid the early stopping of the training process, we now extend the original CNCE algorithm by allowing the conditional distribution q to depend on pθ (and thus to change from one step to the next). Our next key observation is that in this setting there exists a particular choice that keeps αθ constant. Observation 2. If q is chosen to be the transition probability of a reversible Markov chain with stationary distribution pθ, then\nαθ(x, x̃) = 1\n2 , ∀x, x̃. (13)\nProof. A reversible chain with transition q and stationary distribution pθ, satisfies the detailed balance property q(x̃|x)pθ(x) = q(x|x̃)pθ(x̃), ∀x, x̃. (14) Substituting (14) into (8) leads to wθ(x, x̃) = 1, which from (11) implies αθ(x, x̃) = 12 .\nThis observation directly links CNCE to CD. First, the suggested method for generating the contrastive samples is precisely the one used in CD-1. Second, as this choice of q leads to αθ(x, x̃) = 12 , it causes the gradient of the CNCE loss (10) to become\n∇θ`CNCE = 12E X∼p X̃|X∼q\n[ ∇θ log pθ(X̃)−∇θ log pθ(X) ] , (15)\nwhich is exactly proportional to the CD-1 update (3). We have thus obtained an alternative derivation of CD-1. Namely, rather than viewing CD-1 learning as an approximate gradient descent process for the loss (1), we can view each step as the exact gradient of the CNCE discrimination loss, where the reference distribution q is adapted to the current learned model pθ. This is illustrated in Fig. 2b.\nSince q is chosen based on pθ, the overall process is in fact an adversarial game. Namely, the optimization alternates between updating q, which acts as a generator, and updating pθ, which defines the discriminator. 
As pθ approaches p, the distribution of samples generated from the MCMC also becomes closer to p, which makes the discriminator’s task harder and thus prevents early saturation.
It should be noted that formally, since q depends on pθ, it also indirectly depends on θ, so that a more appropriate notation would be qθ. However, during the update of pθ we fix qθ (and vice versa), so that the gradient in the discriminator update does not consider the dependence of qθ on θ. This is why (15) does not involve the gradient of X̃ which depends on qθ.
The reason for fixing qθ comes from the adversarial nature of the learning process. Being part of the chain generation process, the goal of the transition rule qθ is to generate chains that appear to be time-reversible, while the goal of the classifier, which is based on the model pθ, is to correctly classify whether the chains were reversed. Therefore, we do not want the optimization of the classifier to affect qθ. This is just like in GANs, where the generator and discriminator have different objectives, and so when updating the discriminator the generator is kept fixed.
3.3 FROM CD-1 TO CD-k
To extend our derivation to CD-k with an arbitrary k ≥ 1, let us now view the discrimination problem of the previous section as a special case of a more general setting. Specifically, the pairs of samples presented to the discriminator in Sec. 3.2 can be viewed as Markov chains of length two (comprising the initial sample from the dataset and one extra generated sample). It is therefore natural to consider also Markov chains of arbitrary lengths. That is, assume we initialize the MCMC at a sample xi from the dataset and run it for k steps to obtain a sequence (x(0), x(1), . . . , x(k)), where x(0) = xi. We can then present this sequence to a discriminator either in its original order, or time-reversed, and train the discriminator to classify the correct order. We coin this a time-reversal classification task. 
Interestingly, in this setting, we have the following. Observation 3. When using a reversible Markov chain of length k + 1 with stationary distribution pθ, the gradient of the BCE loss of the time-reversal classification task is given by\n∇θ`CNCE = 12E [ ∇θ log pθ(X(k))−∇θ log pθ(X(0)) ] , (16)\nwhich is exactly identical to the CD-k update (3) up to a multiplicative factor of 12 .\nThis constitutes an alternative interpretation of CD-k. That is, CD-k can be viewed as a time-reversal adversarial game, where in each step, the model pθ is updated so as to allow the discriminator to better distinguish MCMC chains from their time-reversed counterparts.\nTwo remarks are in order. First, it is interesting to note that although the discriminator’s task is to classify the order of the whole chain, its optimal strategy is to examine only the endpoints of the chain, x(0) and x(k). Second, it is insightful to recall that the original motivation behind the CD-k loss (1) was that when pθ equals p, the marginal probability of each individual step in the chain is also p. Our derivation, however, requires more than that. To make the chain indistinguishable from its time-reversed version, the joint probability of all samples in the chain must be invariant to a flip of the order. When pθ = p, this is indeed the case, due to the detailed balance property (14).\nProof of Observation 3. We provide the outline of the proof (see full derivation in App. C). Let (A(0), A(1), . . . , A(k)) denote the input to the discriminator and let Y indicate the order of the chain, with Y = 1 corresponding to (A(0), A(1), . . . , A(k)) = (X(0), X(1), . . . , X(k)) and Y = 0 to (A(0), A(1), . . . , A(k)) = (X(k), X(k−1), . . . , X(0)). The discriminator that minimizes the BCE loss is now given by\nD(a0, a1, . . . , ak) = P(Y = 1|A(0) = a0, A(1) = a1, . . . , A(k) = ak)\n= ( 1 +\nq(a0|a1) · · · q(ak−1|ak)p(ak) q(ak|ak−1) · · · q(a1|a0)p(a0) )−1 = ( 1 +\nk∏ i=1 wθ(ai−1, ai)\n)−1 . 
(17)
The CNCE paradigm thus defines a discriminator Dθ having the form of (17) but with p replaced by pθ. Recall that despite the dependence of the transition probability q on the current learned model pθ, it is regarded as fixed within each discriminator update step. We therefore omit the subscript θ from q here. Similarly to the derivation of (10), explicitly writing the gradient of the BCE loss of our discrimination task gives
∇θℓchain = E[(1 + ∏_{i=1}^k wθ(X^(i−1), X^(i))^−1)^−1 (∇θ log pθ(X^(k)) − ∇θ log pθ(X^(0)))] = E[αθ(X^(0), . . . , X^(k)) (∇θ log pθ(X^(k)) − ∇θ log pθ(X^(0)))], (18)
where we now defined
αθ(a0, . . . , ak) ≜ (1 + ∏_{i=1}^k wθ(ai−1, ai)^−1)^−1. (19)
Note that (10) is a special case of (18) corresponding to k = 1, where X and X̃ in (10) are X^(0) and X^(1) in (18). As before, when q satisfies the detailed balance property (14), we obtain wθ = 1 and consequently the weighting term αθ again equals 1/2. Thus, the gradient (18) reduces to (16), which is exactly proportional to the CD-k update (3)." }, { "heading": "3.4 MCMC PROCESSES THAT DO NOT HAVE DETAILED BALANCE", "text": "In our derivation, we assumed that the MCMC process is reversible, and thus exactly satisfies the detailed balance property (14). This assumption ensured that wθ = 1 and thus αθ = 1/2. In practice, however, commonly used MCMC methods satisfy this property only approximately. For example, the popular discrete Langevin dynamics process obeys detailed balance only in the limit where the step size approaches zero. The common approach to overcome this is through Metropolis-Hastings (MH) rejection (Hastings, 1970), which guarantees detailed balance by accepting only a portion of the proposed MCMC transitions. In this approach, the probability of accepting a transition from x to x̃ is closely related to the weighting term wθ, and is given by
A(x, x̃) = min(1, wθ(x, x̃)). 
(20)
Interestingly, our derivation reveals an alternative method for accounting for lack of detailed balance.
Concretely, we saw that the general expression for the gradient of the BCE loss (before assuming detailed balance) is given by (18). This expression differs from the original update step of CD-k only in the weighting term αθ(x(0), . . . , x(k)). Therefore, all that is required for maintaining correctness in the absence of detailed balance is to weigh each chain by its “hardness” αθ(x(0), . . . , x(k)) (see Alg. 2 in App. A). Note that in this case, the update depends not only on the end-points of the chains, but rather also on their intermediate steps. As can be seen in Fig. 4, this method performs just as well as MH, and significantly better than vanilla CD without correction." }, { "heading": "4 ILLUSTRATION THROUGH A TOY EXAMPLE", "text": "To illustrate our observations, we now conclude with a simple toy example (see Fig. 3). Our goal here is not to draw general conclusions regarding the performance of CNCE and CD, but rather merely to highlight the adversarial nature of CD and its importance when the data density exhibits different scales of variation along different directions.
We take data concentrated around a 2-dimensional manifold embedded in 10-dimensional space. Specifically, let e(1), . . . , e(10) denote the standard basis in R10. Then each data sample is generated by adding Gaussian noise to a random point along a 2D spiral lying in the e(1)-e(2) plane. The STD of the noise in the e(1) and e(2) directions is 5 times larger than that in the other 8 axes. Figure 3a shows the projections of the data samples onto the first 3 dimensions. Here, we use a multi-layer perceptron (MLP) as our parametric model, log pθ, and train it using several different learning configurations (for the full details see App. D).
Figure 3b visualizes the training as well as the final result achieved by each configuration. 
The first two rows show CNCE with Gaussian contrastive distributions of two different STDs. The third row shows the adjusted CD described in Sec. 3.4 with Langevin Dynamics as its MCMC process. As can be seen, for CNCE with a large STD, the contrastive samples are able to explore large areas around the original samples, but this causes their majority to lie relatively far from the manifold (see their projections onto the e(1)-e(3) plane). In this case, αθ decreases quickly, causing the learning process to ignore most samples at a very early stage of the training. When using CNCE with a small STD, the samples remain relevant throughout the training, but this comes at the price of inability to capture the global structure of the distribution. CD, on the other hand, is able to enjoy the best of both worlds as it adapts the contrastive distribution over time. Indeed, as the learning progresses, the contrastive samples move closer to the manifold to maintain their relevance. Note that since we use the adjusted version of CD, the weights in this configuration are not precisely 1. We chose the step size of the Langevin Dynamics so that the median of the weights is approximately 10−2.\nFigure 4 shows the results achieved by different variants of CD. As can be seen, without correcting for the lack of detailed balance, CD fails to estimate the density correctly. When using MH rejection to correct the MCMC, or our adaptive CD (ADC) to correct the update steps, the estimate is significantly improved." }, { "heading": "5 CONCLUSION", "text": "The classical CD method has seen many uses and theoretical analyses over the years. The original derivation presented the algorithm as an approximate gradient descent process for a certain loss. However, the accuracy of the approximation has been a matter of much dispute, leaving it unclear what objective the algorithm minimizes in practice. 
Here, we presented an alternative derivation of CD’s update steps, which involves no approximations. Our analysis shows that CD is in essence an adversarial learning procedure, where a discriminator is trained to distinguish whether a Markov chain generated from the learned model has been time-flipped or not. Therefore, although predating GANs by more than a decade, CD in fact belongs to the same family of techniques. This provides a possible explanation for its empirical success.\nAcknowledgement This research was supported by the Technion Ollendorff Minerva Center." }, { "heading": "A ALGORITHMS", "text": "Below we summarize the algorithms of the classical CD and the proposed adjusted version described in Sec. 3.4.\nAlgorithm 1: Contrastive Divergence - k Require: parametric model pθ, MCMC transition rule qθ(·|·) with stationary distribution pθ, step size η, chain length k. while not converged do Sample a batch {xi}ni=1 from the dataset Initialize {x̃i}ni=1 to be a copy of the batch for i=1 to n do\nfor j=1 to k do Draw a sample x′ from qθ(·|x̃i) x̃i ← x′ end gi ← ∇θ log pθ(x̃i)−∇θ log pθ(xi)\nend θ ← θ − η 1n ∑ i gi\nend\nAlgorithm 2: Adjusted Contrastive Divergence - k Require: parametric model pθ, MCMC transition rule qθ(·|·) whose stationary distribution is pθ, step size η, chain length k. while not converged do Sample a batch {xi}ni=1 from the dataset Initialize {x̃i}ni=1 to be a copy of the batch for i=1 to n do\nwtoti ← 1 for j=1 to k do\nDraw a sample x′ from qθ(·|x̃i) wtoti ← wtoti · q(xi|x′)pθ(x′) q(x′|xi)pθ(xi)\nx̃i ← x′ end αi ← (1 + 1/wtoti )−1 gi ← ∇θ log pθ(x̃i)−∇θ log pθ(xi)\nend θ ← θ − η 1n ∑ i αi · gi\nend" }, { "heading": "B DERIVATION OF CNCE’S GRADIENT", "text": "Proof of Observation 1. The BCE loss achieved by the CNCE discriminator (7) is given by\n`CNCE = − 12E A∼p B|A∼q [log(Dθ(A,B))]− 12E B∼p A|B∼q [log(1−Dθ(A,B))] =\n= −E X∼p X̃|X∼q\n[ log(Dθ(X, X̃)) ] , (21)\nwhere we used the fact that 1 −Dθ(a, b) = Dθ(b, a). 
Now, substituting the definition of Dθ from (9), the gradient of (21) can be expressed as
∇θℓCNCE = E_{X∼p, X̃|X∼q}[∇θ log(1 + wθ(X, X̃))]
= E[(1 + wθ(X, X̃))^−1 ∇θwθ(X, X̃)]
= E[((1 + wθ(X, X̃))/wθ(X, X̃))^−1 (∇θwθ(X, X̃)/wθ(X, X̃))]
= E[(1 + wθ(X, X̃)^−1)^−1 ∇θ log wθ(X, X̃)]
= E[αθ(X, X̃) (∇θ log pθ(X̃) − ∇θ log pθ(X))], (22)
where we used the fact that ∇θwθ = wθ∇θ log wθ and the definition of αθ from (11)." }, { "heading": "C DERIVATION OF THE GRADIENT OF CNCE WITH MULTIPLE MC STEPS", "text": "We here describe the full derivation of the gradient in (18), following the same steps as in (22). The BCE loss achieved by the discriminator in (17) is given by
ℓchain = −E[log(Dθ(X^(0), X^(1), . . . , X^(k)))], (23)
where we again used the fact that 1 − Dθ(a0, a1, . . . , ak) = Dθ(ak, ak−1, . . . , a0). Now, substituting the definition of Dθ from (17), the gradient of (23) can be expressed as
∇θℓchain = E[∇θ log(1 + ∏_{i=1}^k wθ(X^(i−1), X^(i)))]
= E[(1 + ∏_{i=1}^k wθ(X^(i−1), X^(i)))^−1 ∇θ(∏_{i=1}^k wθ(X^(i−1), X^(i)))]
= E[((1 + ∏_{i=1}^k wθ(X^(i−1), X^(i)))/∏_{i=1}^k wθ(X^(i−1), X^(i)))^−1 (∇θ(∏_{i=1}^k wθ(X^(i−1), X^(i)))/∏_{i=1}^k wθ(X^(i−1), X^(i)))]
= E[(1 + ∏_{i=1}^k wθ(X^(i−1), X^(i))^−1)^−1 ∇θ log(∏_{i=1}^k wθ(X^(i−1), X^(i)))]
= E[αθ(X^(0), . . . , X^(k)) (∇θ log pθ(X^(k)) − ∇θ log pθ(X^(0)))], (24)
where we used the definition of αθ from (19)." }, { "heading": "D TOY EXPERIMENT AND TRAINING DETAILS", "text": "We here describe the full details of the toy model and learning configuration, which we used to produce the results in the paper. The code for reproducing the results is available at —- (for the blind review the code will be available in the supplementary material).
The toy model used in the paper consists of a distribution concentrated around a 2D spiral embedded in a 10-dimensional space. 
Denoting the 10 orthogonal axes of the standard basis in this space by e(1), . . . , e(10), the spiral lies in the e(1)-e(2) plane and is confined to [−1, 1] in each of these two axes. The samples of the model are produced by selecting random points along the spiral and adding Gaussian noise to them. In order to keep the samples close to the e(1)-e(2) plane we used a non-isotropic noise with an STD of 0.05 in the e(1) and e(2) directions, and an STD of 0.01 in the directions e(3), . . . , e(10).
As a parametric model for log pθ(x), we used an 8-layer multi-layer perceptron (MLP) of width 512 with skip connections, as illustrated in Fig. 5.
Throughout the paper we referred to the results of five different learning configurations.
1. CNCE with an optimal (small) variance. This configuration uses additive Gaussian noise as its contrastive distribution. We found 0.0075 to be the STD of the Gaussian which produces the best results.
2. CNCE with a large variance. This configuration is similar to the previous one, except that the STD of the Gaussian was set to 0.3 in order to illustrate the problems of using a conditional distribution with a large variance.
3. CD without any MCMC correction. For the MCMC process we used 5 steps of Langevin dynamics, where we did not employ any correction for the inaccuracy which results from using Langevin dynamics with a finite step size. We found 0.0075 to be the step size (multiplying the standard Gaussian noise term) which produces the best results.
4. CD with MH correction. This configuration is similar to the previous one, except that an MH rejection scheme was used during the MCMC sampling. In this case we found the step size of 0.0125 to produce the best results.
5. Adjusted CD. This configuration is similar to the previous one, except that we used the method from Sec. 3.4 instead of MH rejection. 
Similarly to the previous configuration, we found the step size of 0.0125 to produce the best results.\nThe optimization of all configurations was performed using SGD with a momentum of 0.9 and an exponentially decaying learning rate. Except for the training of the third configuration, the learning rate decayed from 10⁻² to 10⁻⁴ over 100,000 optimization steps. For the third configuration we had to reduce the learning rate by a factor of 10 in order to prevent the optimization from diverging.\nIn order to select the best step size / variance for each of the configurations we ran a parameter sweep around the relevant value range. The results of this sweep are shown in Fig. 6.\nFor the selection of the number of training steps, we iteratively increased the number of steps until the results stopped improving for all configurations. These results are presented in Fig. 7." } ]
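The toy-data sampling described in Appendix D can be sketched in a few lines of NumPy. The spiral parameterization below (an Archimedean spiral with three turns) is an assumption for illustration; the appendix specifies only the ambient dimension, the confinement to [−1, 1] in the e(1)-e(2) plane, and the per-axis noise STDs:

```python
import numpy as np

def sample_spiral(n, rng=None):
    """Sample points around a 2D spiral embedded in a 10-D space.

    Noise STDs follow Appendix D: 0.05 along e(1), e(2) and 0.01 along
    e(3), ..., e(10). The spiral shape itself (three turns, radius
    growing linearly to 1) is an assumed parameterization.
    """
    rng = np.random.default_rng(rng)
    t = rng.uniform(0.0, 1.0, size=n)        # position along the spiral
    angle = 2.0 * np.pi * 3.0 * t            # three turns (assumed)
    radius = t                               # radius grows toward 1
    x = np.zeros((n, 10))
    x[:, 0] = radius * np.cos(angle)
    x[:, 1] = radius * np.sin(angle)
    noise_std = np.array([0.05, 0.05] + [0.01] * 8)
    return x + rng.normal(size=(n, 10)) * noise_std
```

Samples drawn this way stay concentrated near the e(1)-e(2) plane, matching the setting in which the five learning configurations above were compared.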
2021
CONTRASTIVE DIVERGENCE LEARNING IS A TIME REVERSAL ADVERSARIAL GAME
SP:beaf78b9053a49c23e984589327f48513f1d4277
[ "This submission proposes an approach to modulate activations of general convolutional neural networks by means of an auxiliary network trained on additional metadata to a dataset. The specific goal is to improve out-of-distribution (OOD) generalisation. This *conditional network* approach is illustrated for two standard convolutional neural network (CNN) architectures, U-Net and VGG, on two benchmark datasets suitable for OOD detection, the Inria Aerial Image Labeling Dataset and the Tumor Infiltrating Lymphocytes classification dataset. The conditional network approach yields favourable results compared to competing segmentation as well as classification networks and exhibits a reduction of the generalisation gap compared to the baseline methods. ", "This paper aims to tackle the out-of-distribution generalization problem where a model needs to generalize to new distributions at test time. The authors propose to utilize some extra information like the additional annotations as the conditional input and output the affine transformation parameters for the batch normalization stage. This extra information helps the backbone network get a more general representation from the training set thus the model is robust to the distribution shift when testing. Experiments are conducted on the Aerial Image Labeling and the Tumor-Infiltrating Lymphocytes datasets which correspond to the image segmentation and classification task respectively." ]
In this work we tackle the problem of out-of-distribution generalization through conditional computation. Real-world applications often exhibit a larger distributional shift between training and test data than most datasets used in research. On the other hand, training data in such applications often comes with additional annotation. We propose a method for leveraging this extra information by using an auxiliary network that modulates activations of the main network. We show that this approach improves performance over a strong baseline on the Inria Aerial Image Labeling and the Tumor Infiltrating Lymphocytes (TIL) Datasets, which by design evaluate out-of-distribution generalization in both semantic segmentation and image classification.
[]
[ { "authors": [ "Helen Angell", "Jérôme Galon" ], "title": "From the immune contexture to the immunoscore: the role of prognostic and predictive immune markers in cancer", "venue": "Current opinion in immunology,", "year": 2013 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Yeounoh Chung", "Peter J Haas", "Eli Upfal", "Tim Kraska" ], "title": "Unknown examples & machine learning model generalization", "venue": "arXiv preprint arXiv:1808.08294,", "year": 2018 }, { "authors": [ "Harm De Vries", "Florian Strub", "Jérémie Mary", "Hugo Larochelle", "Olivier Pietquin", "Aaron C Courville" ], "title": "Modulating early visual processing by language", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Vincent Dumoulin", "Jonathon Shlens", "Manjunath Kudlur" ], "title": "A learned representation for artistic style", "venue": "arXiv preprint arXiv:1610.07629,", "year": 2016 }, { "authors": [ "Vincent Dumoulin", "Ethan Perez", "Nathan Schucher", "Florian Strub", "Harm de Vries", "Aaron Courville", "Yoshua Bengio" ], 
"title": "Feature-wise transformations. Distill, 2018", "venue": "doi: undefined", "year": 2018 }, { "authors": [ "Wolf Herman Fridman", "Franck Pagès", "Catherine Sautes-Fridman", "Jérôme Galon" ], "title": "The immune contexture in human tumours: impact on clinical outcome", "venue": "Nature Reviews Cancer,", "year": 2012 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Brian Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Le Hou", "Kunal Singh", "Dimitris Samaras", "Tahsin M Kurc", "Yi Gao", "Roberta J Seidman", "Joel H Saltz" ], "title": "Automatic histopathology image analysis with cnns", "venue": "In 2016 New York Scientific Data Summit (NYSDS),", "year": 2016 }, { "authors": [ "Le Hou", "Vu Nguyen", "Ariel B Kanevsky", "Dimitris Samaras", "Tahsin M Kurc", "Tianhao Zhao", "Rajarsi R Gupta", "Yi Gao", "Wenjin Chen", "David Foran" ], "title": "Sparse autoencoder for unsupervised nucleus detection and representation in histopathology", "venue": "images. 
Pattern recognition,", "year": 2019 }, { "authors": [ "Bohao Huang", "Kangkang Lu", "Nicolas Audebert", "Andrew Khalel", "Yuliya Tarabalka", "Jordan Malof", "Alexandre Boulch", "Bertrand Le Saux", "Leslie Collins", "Kyle Bradbury" ], "title": "Large-scale semantic classification: outcome of the first year of inria aerial image labeling benchmark", "venue": "IGARSS", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "Proceedings of the International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Xiang Jiang", "Mohammad Havaei", "Farshid Varno", "Gabriel Chartrand", "Nicolas Chapados", "Stan Matwin" ], "title": "Learning to learn with conditional class dependencies", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Frederick Klauschen", "K-R Müller", "Alexander Binder", "Michael Bockmayr", "M Hägele", "P Seegerer", "Stephan Wienert", "Giancarlo Pruneri", "S de Maria", "S Badve" ], "title": "Scoring of tumor-infiltrating lymphocytes: From visual estimation to machine learning", "venue": "In Seminars in cancer biology,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Emmanuel Maggiori", "Yuliya Tarabalka", "Guillaume Charpiat", "Pierre Alliez" ], "title": "Can semantic labeling methods generalize to any city? the inria aerial image labeling benchmark", "venue": "In IEEE International Geoscience and Remote Sensing Symposium (IGARSS)", "year": 2017 }, { "authors": [ "Vincent Michalski", "Vikram Voleti", "Samira Ebrahimi Kahou", "Anthony Ortiz", "Pascal Vincent", "Chris Pal", "Doina Precup" ], "title": "An empirical study of batch normalization and group normalization in conditional computation", "venue": null, "year": 1908 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Anthony Ortiz", "Alonso Granados", "Olac Fuentes", "Christopher Kiekintveld", "Dalton Rosario", "Zachary Bell" ], "title": "Integrated learning and feature selection for deep neural networks in multispectral images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Anthony Ortiz", "Caleb Robinson", "Mahmudulla Hassan", "Dan Morris", "Olac Fuentes", "Christopher Kiekintveld", "Nebojsa Jojic" ], "title": "Local context normalization: Revisiting local normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Robert M Patton", "J Travis Johnston", "Steven R Young", "Catherine D Schuman", "Thomas E Potok", "Derek C Rose", "Seung-Hwan Lim", "Junghoon Chae", "Le Hou", "Shahira Abousamra" ], "title": "Exascale deep learning to accelerate 
cancer research", "venue": "arXiv preprint arXiv:1909.12291,", "year": 2019 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "C. Robinson", "Le Hou", "Kolya Malkin", "Rachel Soobitsky", "Jacob Czawlytko", "Bistra Dilkina", "Nebojsa Jojic" ], "title": "Large scale high-resolution land cover mapping with multi-resolution data", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-Net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Joel Saltz", "Rajarsi Gupta", "Le Hou", "Tahsin Kurc", "Pankaj Singh", "Vu Nguyen", "Dimitris Samaras", "Kenneth R Shroyer", "Tianhao Zhao", "Rebecca Batiste" ], "title": "Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology", "venue": "images. 
Cell reports,", "year": 2018 }, { "authors": [ "Hidetoshi Shimodaira" ], "title": "Improving predictive inference under covariate shift by weighting the log-likelihood function", "venue": "Journal of statistical planning and inference,", "year": 2000 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Peng Su", "Kun Wang", "Xingyu Zeng", "Shixiang Tang", "Dapeng Chen", "Di Qiu", "Xiaogang Wang" ], "title": "Adapting object detectors with conditional domain normalization", "venue": "arXiv preprint arXiv:2003.07071,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Hung-Yu Tseng", "Hsin-Ying Lee", "Jia-Bin Huang", "Ming-Hsuan Yang" ], "title": "Cross-domain few-shot classification via learned feature-wise transformation", "venue": "arXiv preprint arXiv:2001.08735,", "year": 2020 }, { "authors": [ "Gregor Urban", "Krzysztof J Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shengjie Wang", "Rich Caruana", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson" ], "title": "Do deep convolutional nets really need to be deep and convolutional", "venue": "arXiv preprint arXiv:1603.05691,", "year": 2016 }, { "authors": [ "Xuezhi Wang", "Jeff Schneider" ], "title": "Flexible transfer learning under support and model shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Yan Xu", "Zhipeng Jia", "Yuqing Ai", "Fang Zhang", "Maode Lai", "I Eric", "Chao Chang" ], "title": "Deep convolutional activation features for large scale brain tumor histopathology image 
classification and segmentation", "venue": "In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP),", "year": 2015 }, { "authors": [ "Xiao Xiang Zhu", "Devis Tuia", "Lichao Mou", "Gui-Song Xia", "Liangpei Zhang", "Feng Xu", "Friedrich Fraundorfer" ], "title": "Deep learning in remote sensing: A comprehensive review and list of resources", "venue": "IEEE Geoscience and Remote Sensing Magazine,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has achieved great success in many core artificial intelligence (AI) tasks (Hinton et al., 2012; Krizhevsky et al., 2012; Brown et al., 2020) over the past decade. This is often attributed to better computational resources (Brock et al., 2018) and large-scale datasets (Deng et al., 2009).\nCollecting and annotating datasets which represent a sufficient diversity of real-world test scenarios for every task or domain is extremely expensive and time-consuming. Hence, sufficient training data may not always be available. Due to many factors of variation (e.g., weather, season, daytime, illumination, view angle, sensor, and image quality), there is often a distributional change or domain shift that can degrade performance in real-world applications (Shimodaira, 2000; Wang & Schneider, 2014; Chung et al., 2018). Applications in remote sensing, medical imaging, and Earth observation commonly suffer from distributional shifts resulting from atmospheric changes, seasonality, weather, use of different scanning sensors, different calibration and other variations which translate to unexpected behavior at test time (Zhu et al., 2017; Robinson et al., 2019; Ortiz et al., 2018).\nIn this work, we present a novel neural network architecture to increase robustness to distributional changes (see Figure 1). Our framework combines conditional computation (Dumoulin et al., 2018; 2016; De Vries et al., 2017; Perez et al., 2018) with a task-specific neural architecture for better domain-shift generalization.\nOne key feature of this architecture is the ability to exploit extra information, often available but seldom used by current models, through a conditioning network. This results in models with better generalization, better performance in both independent and identically distributed (i.i.d.) and non-i.i.d. settings, and in some cases faster convergence. 
We demonstrate these methodological innovations on an aerial building segmentation task, where test images are from different geographic areas than the ones seen during training (Maggiori et al., 2017) and on the task of Tumor Infiltrating Lymphocytes (TIL) classification (Saltz et al., 2018).\nWe summarize our main contributions as follows:\n• We propose a novel architecture to effectively incorporate conditioning information, such as metadata.\n• We show empirically that our conditional network improves performance in the task of semantic segmentation and image classification.\n• We study how conditional networks improve generalization in the presence of distributional shift." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Self-supervised learning. Self-supervised learning extracts and uses available relevant context and embedded metadata as supervisory signals. It is a representation learning approach that exploits a variety of labels that come with the data for free. To leverage large amounts of unlabeled data, it is possible to set the learning objectives such that supervision is generated from the data itself. The self-supervised task, also known as the pretext task, guides us to a supervised loss function (Gidaris et al., 2018; Oord et al., 2018; He et al., 2019; Chen et al., 2020). However, in self-supervised learning we usually do not emphasize performance on this auxiliary task. Rather, we focus on the learned intermediate representation with the expectation that this representation can carry good semantic or structural meanings and can be beneficial to a variety of practical downstream tasks. 
Conditional networks can be seen as a self-supervision approach in which the pretext task is jointly learned with the downstream task.\nOur proposed modulation of a network architecture based on an auxiliary network’s intermediate representation can also be seen as an instance of knowledge transfer (Hinton et al., 2015; Urban et al., 2016; Buciluǎ et al., 2006). Because the auxiliary network has an additional task signal – metadata prediction – information about this task can be transferred to the main task network.\nConditional Computation. Ioffe and Szegedy designed Batch Normalization (BN) as a technique to accelerate the training of deep neural networks (Ioffe & Szegedy, 2015). BN normalizes a given mini-batch B = {Fn,·,·,·}, n = 1, . . . , N, of N feature maps Fn,·,·,· as described by the following equation:\nBN(Fn,c,h,w | γc, βc) = γc · (Fn,c,h,w − EB[F·,c,·,·]) / √(VarB[F·,c,·,·] + ε) + βc, (1)\nwhere c, h and w index the channel, height and width axes, respectively, γc and βc are trainable scale and shift parameters, introduced to keep the representational power of the original network, and ε is a constant factor for numerical stability. For convolutional layers the mean and variance are computed over both the batch and spatial dimensions, implying that each location in the feature map is normalized in the same way.\nDe Vries et al. (2017); Perez et al. (2018) introduced Conditional Batch Normalization (CBN) as a method for language-vision tasks. Instead of setting γc and βc in Equation 1 directly, CBN defines them as learned functions βn,c = βc(qn) and γn,c = γc(qn) of a conditioning input qn. Note that this results in a different scale and shift for each sample in a mini-batch. Scale (γn,c) and shift (βn,c) parameters for each convolutional feature are generated and applied to each feature via an affine transformation. Feature-wise transformations frequently have enough capacity to model complex phenomena in various settings (Dumoulin et al., 2018). 
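As a concrete illustration, the per-sample modulation of CBN can be sketched in NumPy as follows. This is a minimal sketch, not the implementation used in the paper; in practice the per-sample γn,c and βn,c would be produced by a small network from the conditioning input qn:

```python
import numpy as np

def conditional_batch_norm(F, gamma, beta, eps=1e-5):
    """Conditional batch normalization of a mini-batch of feature maps.

    F     : (N, C, H, W) feature maps.
    gamma : (N, C) per-sample scales (standard BN uses one gamma_c per
            channel; CBN makes it a learned function of the condition).
    beta  : (N, C) per-sample shifts.
    """
    mean = F.mean(axis=(0, 2, 3), keepdims=True)   # E_B[F_{.,c,.,.}]
    var = F.var(axis=(0, 2, 3), keepdims=True)     # Var_B[F_{.,c,.,.}]
    F_hat = (F - mean) / np.sqrt(var + eps)
    return gamma[:, :, None, None] * F_hat + beta[:, :, None, None]
```

With gamma fixed to ones and beta to zeros this reduces to plain normalization, which shows how the affine transformation can fall back to near-identity behavior when the condition carries no useful signal.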
For instance, they have been successfully applied to neural style transfer (Dumoulin et al., 2016) and visual question answering (Perez et al., 2018). This kind of conditional computation scheme is not tied to the type of normalization used. Wu & He (2018) recently proposed Group Normalization (GN). GN divides feature maps into groups and normalizes the features within each group. GN only uses the layer dimension, hence its computation is independent of batch size. Ortiz et al. (2020) proposed Local Context Normalization (LCN) to encourage local contrast enhancement by normalizing features based on a spatial window around each feature and the filters in its group. Recently, Michalski et al. (2019) showed that Conditional Group Normalization (CGN) offers performance similar to CBN. In this work, we show results using CBN and CGN. Conditional normalization methods have been applied to tasks related to generalization, such as few-shot learning (Jiang et al., 2018; Tseng et al., 2020) and domain adaptation (Su et al., 2020). Su et al. propose to use conditional normalization and an adversarial loss for domain adaptation in object detection. In contrast to this work, we propose a method for implicit conditioning on an auxiliary task to leverage available metadata." }, { "heading": "3 FORMULATION AND NETWORK ARCHITECTURE", "text": "" }, { "heading": "3.1 PROBLEM ABSTRACTION", "text": "We first establish notation. Let x be an image input, associated with a ground-truth target y for the main task (e.g. a segmentation mask). Let available extra annotation for x be denoted by t. The main network is trained to predict y given x and contextual information from an auxiliary network. The auxiliary network learns to predict t, also given x. 
Features z of an intermediate layer of the auxiliary network are used to transform the main task network’s layers using conditional normalization parameterized by a learned function of z.\nThe motivation for this method of implicit conditioning is the following:\n1. Since t’s are only used as training targets, auxiliary annotation is not required at test time. 2. During training, the auxiliary network learns (via backpropagation) to visually capture information predictive of t. At test time, the auxiliary network reasons about the unavailable t in terms of visual patterns that correlate with auxiliary annotations of training data. Note that this allows the distribution of auxiliary information at test time to differ from the training data (see for example our experiments on out-of-distribution generalization in remote sensing in Section 4.1).\nWhile the first statement is true for any multi-task architecture, the second statement describes the flexibility of the proposed method in leveraging auxiliary information of varying degrees of relevance. Obviously, the modulation will help most if the auxiliary information is maximally relevant for the main task. Since the mapping from z to the modulation parameters is trained with the main task’s training signal, the network can learn to discard components of z that are not useful for the main task. It is also possible for the network to learn a constant identity transformation of the main network’s features in case no correlation is found. This reduces the potential of negative transfer learning between unrelated tasks common in multi-task learning (Ruder, 2017).\nTo provide an example of how this method can help to exploit inexpensive metadata, consider the task of segmenting satellite imagery of different regions on the globe. We can use the prediction of geographic coordinates, which are often logged by default when building satellite imagery datasets, as the auxiliary task. 
In this case, the auxiliary network may learn to capture visual characteristics that are distinctive for each region in the training set, such as a predominance of smaller buildings. This would provide a useful inductive bias for the segmentation network, even for regions with very different coordinates. By using feature modulation to integrate this contextual information, we hypothesize that the main network can learn more general-purpose features, which can be attended to based on the context." }, { "heading": "3.2 NETWORK ARCHITECTURE", "text": "Our proposed architecture modification transforms any standard neural network with normalization layers into one that incorporates conditioning information t. In each convolutional block of the neural network we substitute the normalization layer by its conditional counterpart. We refer to this family of networks as Conditional Networks. Figure 3 shows this extension applied to the popular U-Net architecture (Ronneberger et al., 2015). U-Net is an encoder-decoder network architecture with skip connections. Figure 3 shows the auxiliary network on the left modulating the modified U-Net on the right. The conditioning network is a convolutional architecture (LeCun et al., 1998) followed by a fully-connected layer that predicts metadata tn as a function of the input image xn. The pre-activation features before the output layer are used as z(xn). The functions βn,c(z(xn)) and γn,c(z(xn)) mapping z(xn) to the scale and shift parameters are implemented with a multi-layer perceptron (MLP). Using the latent representations instead of directly using tn allows us to leverage combinations of features that were useful in localizing images from previously seen data, potentially improving generalization.\nBecause all its parts are differentiable, conditional networks can be trained end-to-end using gradient-based optimization. Our full objective is described in Equation 2, where α is a hyperparameter balancing the main and auxiliary losses. Lmain task is a standard task-dependent loss, such as Jaccard, cross-entropy, or Dice for semantic segmentation. Lconditioning ensures the conditioning network correctly predicts tn.\nLcond. net = Lmain task + α · Lconditioning (2)" }, { "heading": "4 EXPERIMENTS", "text": "We study the following hypotheses:\nH1: Generalization through context. Explicit incorporation of conditioning information improves generalization in semantic segmentation and image classification tasks.\nH2: Interpretability. The features learned by the conditioning and the main task network reflect context-specific and context-invariant information, respectively." }, { "heading": "4.1 CONDITIONAL NETWORKS FOR SEMANTIC SEGMENTATION OF AERIAL IMAGES", "text": "To study hypotheses H1 and H2 we focus on the Inria Aerial Image Labeling Dataset. This dataset was introduced to test out-of-distribution generalization of remote-sensing segmentation models (Maggiori et al., 2017). It includes imagery from 10 dissimilar urban areas in North America and Europe. Instead of splitting adjacent portions of the same images into training and test sets, the splitting was done city-wise. All tiles of five cities were included in the training set and the remaining cities are used as the test set. The test set also has variation in illumination, landscape, and time, making it well-suited to evaluate out-of-distribution generalization. The provided imagery comes orthorectified and has a spatial resolution of 0.3m per pixel covering 810 km² (405 km² for training and 405 km² for the test set) on an evenly spaced grid. Images were labeled for the semantic classes of building and not building. 
We use geographical coordinates of the images as target data for the auxiliary network (see Section 3).\nFor H1, we compare model performances using the standard benchmark training-test split. For H2, we perform an exploratory visualization of the feature-activation maps for the different models.\nGeneralization via conditioning. We used the Inria standard train-test split to see whether conditioning information helps out-of-distribution generalization. From the training set we reserved five images of each city for the validation set as suggested by Maggiori et al. (2017).\nFor this set of experiments we trained our conditional U-Net presented in Figure 3 end-to-end from scratch. As the segmentation network, we used the AMLL U-Net described by Huang et al. (2018), which is a version of U-Net with fewer filters. The AMLL U-Net was the winning entry at the top of the Inria leaderboard, and we use it as the baseline for comparison in this section.\nThe Conditional U-Net was trained exactly as the AMLL U-Net, but without any data augmentation technique. We used the standardized latitude and longitude of the center pixel of each patch as the conditioning information to be predicted by the conditioning network, the mean-squared error (MSE) as the conditioning loss Lconditioning, and cross-entropy as the segmentation loss Lmain task. All training details and a figure showing histograms of final Intersection-over-Union (IoU) scores of the different models can be found in the Appendix sections A.1 and A.2.\nTable 1 shows the performance of the AMLL U-Net baselines and different variations of our proposed architecture on the test set. Conditioning uniformly improved segmentation performance over the corresponding conditioning-free models. This empirically validates our hypothesis H1 about generalization through context. We identify as Cond. 
U-Net those models in which both the encoder and decoder are modulated, which yielded a small gain in performance over just modulating the encoder or decoder alone. We recommend conditioning all blocks since the extra computational cost is very small. Modulating using CGN consistently outperforms CBN.\ntn Matters. An experiment using the conditioning network without the auxiliary task of predicting tn shows a deterioration of performance relative to the baseline U-Net as shown in Table 1. We see this as evidence for the importance of guiding the learning process of the latent conditioning representation z(xn).\nTable 2 shows the generalization gap as the difference between the validation set (i.i.d.) and the test set performance. Notice how the models’ performances consistently degrade when we evaluate them on cities not seen during training. Conditioning substantially reduces the generalization gap induced by the distribution shift between the training and test sets, yielding evidence for hypothesis H1.\nFigure 2 shows a qualitative comparison of the performance of the proposed network. The baseline labels a beach (2a) and power lines (2d) as buildings while the Conditional U-Net does not." }, { "heading": "4.2 INTERPRETATION OF CONDITIONING FEATURES.", "text": "To evaluate hypothesis H2, we analyze patterns of activations across these experiments. This hypothesis is interesting for several reasons:\n• It serves as a sanity check for the proposed architecture, ensuring that supervision from conditioning information leads to features that do indeed distinguish between cities. • It sheds light on the potential of using conditioning information to facilitate learning of generalizable features by intentionally learning context-dependent features. 
• We can begin to characterize how generalization occurs by identifying which training cities
We speculate that having fewer active features for any prediction allows for sharper predictions, preventing “blurring” that could result from averaging across feature maps. More details around H2 are presented in the analysis of feature variance and Figure 10 in Appendix A.3. In summary, Figure 4 gives evidence against H2, suggesting that conditional network architectures do not neatly segregate context-specific and context-dependent features. Figures 4 and 10 both suggest that the learned features are qualitatively different between the architectures." }, { "heading": "4.3 CONDITIONAL NETWORKS FOR TUMOR INFILTRATING LYMPHOCYTES CLASSIFICATION", "text": "We also test Conditional Networks for the task of tumor infiltrating lymphocytes (TIL) classification. During the cancer diagnosis and treatment process, a patient may have a biopsy, which produces a diagnostic tissue sample. Using this sample, a slide is prepared and examined under a microscope by a pathologist to understand both how to treat the disease and to provide a prognosis for the patient’s future. Virtually all cancer patients undergo these biopsies, producing large volumes of these pathology slides.\nA significant feature in these images is tumor infiltrating lymphocytes (TILs), which are types of immune cells that move into a tumor to try to attack the cancer. The quantification of TILs is well known to have prognostic value in many contexts (Fridman et al., 2012; Angell & Galon, 2013) because understanding patient immune response to tumors is becoming increasingly important with the growth of cancer immunotherapy.\n1Since these features have no spatial extent, we do not take any norms. 2Tyrol/train refers to West Tyrol city, Tyrol/Validation refers to East Tyrol city as established in the Inria dataset. 3We also present the associated t-SNE projection in Section A.3.
Features such as TILs can be quantified through image analysis and deep learning algorithms (Saltz et al., 2018; Klauschen et al., 2018). In Saltz et al. (2018), a convolutional neural network (CNN) architecture is systematically optimized to carry out classification of nuclei from pathology images. This led to the release of a dataset consisting of TIL maps corresponding to roughly 5,000 whole slide images from The Cancer Genome Atlas (TCGA). Individual slide images were split into 100 × 100 patches and the task is to classify TIL patches as positive or negative in the exact same dataset setup as Saltz et al. (2018). The training set consists of 86,154 patches that were manually annotated with TIL classification (Saltz et al. (2018)). There are 64,381 TIL negative patches and 21,773 TIL positive patches. All patches are in 100×100 pixel resolution, 20 times magnification, and are annotated as TIL positive or TIL negative. Examples of the images and their labels are given in Figure 5.\nThese training images represent seven different cancer types: invasive carcinoma of the breast (BRCA), colon adenocarcinoma (COAD), lung adenocarcinoma (LUAD), pancreatic adenocarcinoma (PAAD), prostate adenocarcinoma (PRAD), skin cutaneous melanoma (SKCM), and endometrial carcinoma of the uterine corpus (UCEC). The cancer type is the conditioning information or metadata (tn) used to train the conditioning network. We use another 652 patches as our validation set, and 900 manually annotated patches from twelve cancer types in total as the testing set. The twelve cancer types are the seven listed above, as well as five novel ones never seen during training (urothelial carcinoma of the bladder (BLCA), cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), lung squamous cell carcinoma (LUSC), rectal adenocarcinoma (READ), and stomach adenocarcinoma (STAD)).\nMethods tested. We train the VGG 16-layer network as a baseline (Simonyan & Zisserman (2014)).
VGG networks have been shown to work well for pathology image classification (Xu et al. (2015); Hou et al. (2016)). We then build the conditional version of VGG16 which we refer to as Conditional VGG16. Conditional VGG16 is created in a similar way as conditional U-Net and shares a similar conditioning network. In these experiments the conditioning network is trained to perform image classification for the type of cancer (tn) using a seven-dimensional softmax layer. As with conditional U-Net, we predict βn,c(z(xn)) and γn,c(z(xn)) from features in the conditioning network. βn,c(z(xn)) and γn,c(z(xn)) are then used to modulate the VGG16 architecture. We use binary cross-entropy as Lmain task and multiclass cross-entropy as Lconditioning. Training details are provided in the supplemental material.\nResults. Table 3 shows the results for the Tumor Infiltrating Lymphocyte classification task using different approaches. Augmenting VGG16 using conditional networks improves its performance by a large margin, allowing us to obtain state-of-the-art performance in the task, even improving over previous top performing approaches. It is also important to remember that tn (cancer type) is only required during training. Inference only requires the input image, as in competing methods.\nTo show the importance of the conditioning network we trained a version of conditional VGG16 where conditioning parameters are predicted directly from the cancer type softmax prediction from the conditioning/auxiliary network, and we refer to it as “direct cond. VGG16” in Table 3. As expected, doing so does not work well. Properly predicting for the conditioning task is also very important for conditional networks to work well. When we only optimize for Lmain task, performance degrades, as shown in Table 3 under “Cond. VGG16 alpha 0”.
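The modulation used in both conditional architectures follows the same pattern: normalize the features, then scale and shift each channel with γn,c(z(xn)) and βn,c(z(xn)) predicted from the conditioning embedding. A minimal CGN-style sketch in NumPy (the linear heads W_g and W_b standing in for the paper's MLP are hypothetical):

```python
import numpy as np

def conditional_group_norm(x, gamma, beta, groups=2, eps=1e-5):
    """Group-normalize x of shape (N, C, H, W), then modulate channel c of
    sample n with per-sample gamma[n, c] and beta[n, c] (CGN-style sketch)."""
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    xhat = ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
    return gamma[:, :, None, None] * xhat + beta[:, :, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 3, 3))        # a batch of feature maps
z = rng.normal(size=(2, 8))              # conditioning embedding z(x_n)
W_g = 0.1 * rng.normal(size=(8, 4))      # hypothetical heads predicting
W_b = 0.1 * rng.normal(size=(8, 4))      # gamma and beta from z(x_n)
y = conditional_group_norm(x, 1.0 + z @ W_g, z @ W_b)
```

With γ = 1 and β = 0 this reduces to plain group normalization; the conditioning network instead shifts the statistics per sample rather than learning a single global affine transform.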
}, { "heading": "5 DISCUSSION AND CONCLUSIONS", "text": "We have presented conditional networks, a family of neural networks that leverages conditional information for improved performance and better generalization. Conditional networks can be applied to any network architecture that uses normalization layers. We showed how the performance of two widely adopted network architectures (U-Net and VGG16) can be greatly improved by applying conditional networks for both semantic segmentation and image classification. We have shown that conditional networks consistently reduce the generalization gap observed when there is a shift of the underlying distribution at test time. After carefully studying the network feature activations, we found that the improved generalization ability of the proposed network is not due to the ability to learn more invariant features. It appears, instead, that the conditional network learns a smaller collection of features more relevant to the task. Conditional networks exploit available extra annotation tn during training; however, tn is not required during inference. It is not always obvious which choice of tn helps the most to better estimate zn. Future work involves studying how the choice of tn influences performance in a larger set of tasks." }, { "heading": "A APPENDIX", "text": "A.1 IMPLEMENTATION DETAILS\nTraining Details Aerial Image Labeling Experiments and Conditional U-Net. We trained all networks for the task of aerial image labeling using 572x572 randomly sampled patches from all training image tiles. We used the Adam optimizer (Kingma & Ba, 2014) with a batch size of 12. All networks were trained from scratch with a learning rate of 0.001. Every network was trained for 100 epochs. We keep the same learning rate for the first 60 epochs and decay the rate to 0.0001 over the next 40 epochs. In every epoch 8,000 patches are seen. Binary cross-entropy was used as the segmentation loss function.
For the conditioning part of the conditional network we used the latitude and longitude of every patch center pixel as t. Latitude and longitude were standardised to be from -1 to +1. The dimension of z is 4374 (9x9x54). z is the input of the MLP that predicts γ and β. We used 1024 hidden units in the MLP.\nFigure 6 shows the learning curves of the baseline U-Net and the fully conditional U-Net (CGN). The AMLL U-Net seems more prone to overfitting than our conditional U-Net.\nTraining Details Conditional VGG16. We trained on all 100 × 100 TIL patches from the training set. We used the Adam optimizer (Kingma & Ba, 2014) with a batch size of 16. All networks were trained from scratch for 20 epochs with a learning rate of 0.001. Binary cross-entropy was used as the Lmain task (TIL classification) loss and multiclass cross-entropy as the Lconditioning (cancer type classification) loss. For the conditioning part of the conditional network we used the cancer type as metadata t. α for the conditional loss was 0.8. No data augmentation was performed.\nA.2 DETAILED PERFORMANCE CONDITIONAL U-NET\nFigure 7 shows the overall and per-city performance of the models. The fully-conditional U-Net variant using CGN consistently outperforms the other models on every city in the test set. The CGN variant where only the encoder is conditioned has a stronger overall performance and generalizes significantly better than the conditional U-Net variants using BN. This is consistent with the observation of Wu & He (2018) that (regular) GN outperforms BN in segmentation tasks.\nA.3 NETWORK FEATURE ACTIVATIONS\nFigures 8 and 9 are the t-SNE figures obtained from inspecting the activations of the bottom of the U in the U-Net for both the AMLL U-Net and the Conditional U-Net CGN. Figure 8 shows the t-SNE associated with sample images from the same cities in the training set 4.
Figure 9 shows the t-SNE associated with sample images from both the training and test sets 5.\n4Tyrol legend refers to West Tyrol as it refers to the training set 5Tyrol+Train refers to West Tyrol and Tyrol when train is false refers to East Tyrol\nIn light of the differences in scale and sparsity of feature activations in the conditional and AMLL U-Nets in Figure 4, we zoom into individual features of interest in Figure 10 6. It is not surprising that the overall y-axis scale differs between the two networks – this is clear from the legend of Figure 4. However, for both rows of Figure 10, we observe that there are more instances of high variation in activation across patches in the conditional U-Net, compared to the AMLL U-Net. In the conditional U-Net, there seem to be features that, while typically active, are often exactly zero or near zero, and similarly, features that are typically inactive, but occasionally spike. This type of variation is even observed within individual cities. This suggests a type of specificity in the learned features. Rather than activating slightly more or slightly less across all patches, features seem sensitive to particular patterns within the patches for which they activate. While conditional U-Net features are not invariant to conditioning data, they do appear to be more specialized.\n6We refer to this figure while discussing hypothesis 2 in Section 4.2" } ]
2,020
null
SP:03a7c25f464f8e293bf300d897342f5f82a51f28
[ "This paper considers federated learning with straggling and adversarial devices. To tackle stragglers, the paper proposes semi-synchronous averaging wherein models with the same staleness are first averaged together, and then a weighted average of the results with different stateless is computed. To mitigate adversaries, the paper proposes to first perform entropy-based filtering to remove suspected outliers, and then compute loss-weighted average. The server is assumed to have some public data, which is used for entropy-based filtering. Together, the proposed algorithm is called semi-synchronous entropy and loss based filtering (Sself). ", "The paper claims to propose the first algorithm that can handle adversarial machines and stragglers simultaneously in the federated learning setting. To handle stragglers, the paper takes a semi-synchronous approach by taking a weighted sum of gradients depending on staleness. To handle adversarial machines, the algorithm uses an entropy based filtering and a loss based averaging strategy. Note that to handle the adversaries, the algorithm needs a public dataset at the server, using which it can evaluate the entropy and loss scores of each gradient." ]
While federated learning allows efficient model training with local data at edge devices, two major issues that need to be resolved are: slow devices known as stragglers and malicious attacks launched by adversaries. While the presence of both stragglers and adversaries raises serious concerns for the deployment of practical federated learning systems, no known schemes or known combinations of schemes, to our best knowledge, effectively address these two issues at the same time. In this work, we propose Sself, a semi-synchronous entropy and loss based filtering/averaging, to tackle both stragglers and adversaries simultaneously. The stragglers are handled by exploiting different staleness (arrival delay) information when combining locally updated models during periodic global aggregation. Various adversarial attacks are tackled by utilizing a small amount of public data collected at the server in each aggregation step, to first filter out the model-poisoned devices using computed entropies, and then perform weighted averaging based on the estimated losses to combat data poisoning and backdoor attacks. A theoretical convergence bound is established to provide insights on the convergence of Sself. Extensive experimental results show that Sself outperforms various combinations of existing methods aiming to handle stragglers/adversaries.
[]
[ { "authors": [ "Eugene Bagdasaryan", "Andreas Veit", "Yiqing Hua", "Deborah Estrin", "Vitaly Shmatikov" ], "title": "How to backdoor federated learning", "venue": "arXiv preprint arXiv:1807.00459,", "year": 2018 }, { "authors": [ "Battista Biggio", "Blaine Nelson", "Pavel Laskov" ], "title": "Poisoning attacks against support vector machines", "venue": "arXiv preprint arXiv:1206.6389,", "year": 2012 }, { "authors": [ "Peva Blanchard", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine learning with adversaries: Byzantine tolerant gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Bo Li", "Kimberly Lu", "Dawn Song" ], "title": "Targeted backdoor attacks on deep learning systems using data poisoning", "venue": "arXiv preprint arXiv:1712.05526,", "year": 2017 }, { "authors": [ "Yudong Chen", "Lili Su", "Jiaming Xu" ], "title": "Distributed statistical machine learning in adversarial settings: Byzantine gradient descent", "venue": "Proceedings of the ACM on Measurement and Analysis of Computing Systems,", "year": 2017 }, { "authors": [ "Tianyu Gu", "Brendan Dolan-Gavitt", "Siddharth Garg" ], "title": "Badnets: Identifying vulnerabilities in the machine learning model supply chain", "venue": "arXiv preprint arXiv:1708.06733,", "year": 2017 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Leslie Lamport", "Robert Shostak", "Marshall Pease" ], "title": "The byzantine generals problem", "venue": "In Concurrency: the Works of Leslie Lamport,", "year": 2019 }, { "authors": [ "Yann LeCun", 
"Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "arXiv preprint arXiv:1908.07873,", "year": 2019 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": "arXiv preprint arXiv:1907.02189,", "year": 2019 }, { "authors": [ "Yanan Li", "Shusen Yang", "Xuebin Ren", "Cong Zhao" ], "title": "Asynchronous federated learning with differential privacy for edge intelligence", "venue": "arXiv preprint arXiv:1912.07902,", "year": 2019 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Krishna Pillutla", "Sham M Kakade", "Zaid Harchaoui" ], "title": "Robust aggregation for federated learning", "venue": "arXiv preprint arXiv:1912.13445,", "year": 2019 }, { "authors": [ "Ziteng Sun", "Peter Kairouz", "Ananda Theertha Suresh", "H Brendan McMahan" ], "title": "Can you really backdoor federated learning", "venue": "arXiv preprint arXiv:1911.07963,", "year": 2019 }, { "authors": [ "Wentai Wu", "Ligang He", "Weiwei Lin", "Stephen Jarvis" ], "title": "Safa: a semi-asynchronous protocol for fast federated learning with low overhead", "venue": null, "year": 1910 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Generalized 
byzantine-tolerant sgd", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Asynchronous federated optimization", "venue": "arXiv preprint arXiv:1903.03934,", "year": 2019 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Zeno++: Robust fully asynchronous sgd", "venue": "arXiv preprint arXiv:1903.07020,", "year": 2019 }, { "authors": [ "Dong Yin", "Yudong Chen", "Kannan Ramchandran", "Peter Bartlett" ], "title": "Byzantine-robust distributed learning: Towards optimal statistical rates", "venue": "arXiv preprint arXiv:1803.01498,", "year": 2018 }, { "authors": [ "Dong Yin", "Yudong Chen", "Kannan Ramchandran", "Peter Bartlett" ], "title": "Defending against saddle point attack in byzantine-robust distributed learning", "venue": "arXiv preprint arXiv:1806.05358,", "year": 2018 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large volumes of data collected at various edge devices (i.e., smart phones) are valuable resources in training machine learning models with a good accuracy. Federated learning (McMahan et al., 2017; Li et al., 2019a;b; Konečnỳ et al., 2016) is a promising direction for large-scale learning, which enables training of a shared global model with less privacy concerns. However, current federated learning systems suffer from two major issues. First is the devices called stragglers that are considerably slower than the average, and the second is the adversaries that enforce various adversarial attacks. Regarding the first issue, waiting for all the stragglers at each global round can significantly slow down the overall training process in a synchronous setup. To address this, an asynchronous federated learning scheme was proposed in (Xie et al., 2019a) where the global model is updated every time the server receives a local model from each device, in the order of arrivals; the global model is updated asynchronously based on the device’s staleness t− τ , the difference between the current round t and the previous round τ at which the device received the global model from the server. However, among the received results at each global round, a significant portion of the results with large staleness does not help the global model in a meaningful way, potentially making the scheme ineffective. Moreover, since the model update is performed one-by-one asynchronously, the scheme in (Xie et al., 2019a) would be vulnerable to various adversarial attacks; any attempt to combine this type of asynchronous scheme with existing adversary-resilient ideas would not likely be fruitful. There are different forms of adversarial attacks that significantly degrade the performance of current federated learning systems. 
First, in untargeted attacks, an attacker can poison the updated model at the devices before it is sent to the server (model update poisoning) (Blanchard et al., 2017; Lamport et al., 2019) or can poison the datasets of each device (data poisoning) (Biggio et al., 2012; Liu et al., 2017), which degrades the accuracy of the model. In targeted attacks (or backdoor attacks) (Chen et al., 2017a; Bagdasaryan et al., 2018; Sun et al., 2019), the adversaries cause the model to misclassify the targeted subtasks only, while not degrading the overall test accuracy. To resolve these issues, a robust federated averaging (RFA) scheme was recently proposed in (Pillutla et al., 2019) which utilizes the geometric median of the received results for aggregation. However, RFA tends to lose performance rapidly as the portion of adversaries exceeds a certain threshold. In this sense, RFA\nis not an ideal candidate to be combined with known straggler-mitigating strategies (e.g., ignoring stragglers) where a relatively small number of devices are utilized for global aggregation; the attack ratio can be very high, significantly degrading the performance. To our knowledge, there are currently no existing methods or known combinations of ideas that can effectively handle both stragglers and adversaries at the same time, an issue that is becoming increasingly important in practical scenarios. Contributions. In this paper, we propose Sself, semi-synchronous entropy and loss based filtering/averaging, a robust federated learning strategy which can tackle both stragglers and adversaries simultaneously. In the proposed idea, the straggler effects are mitigated by semi-synchronous global aggregation at the server, and in each aggregation step, the impact of adversaries are countered by a new aggregation method utilizing public data collected at the server. The details of our key ideas are as follows. 
Targeting the straggler issue, our strategy is to perform periodic global aggregation while allowing the results sent from stragglers to be aggregated in later rounds. The key strategy is a judicious mix of both synchronous and asynchronous approaches. At each round, as a first step, we aggregate the results that come from the same initial models (i.e., same staleness), as in the synchronous scheme. Then, we take the weighted sum of these aggregated results with different staleness, i.e., coming from different initial models, as in the asynchronous approach. Regarding the adversarial attacks, robust aggregation is realized via entropy-based filtering and loss-weighted averaging. This can be employed at the first step of our semi-synchronous strategy described above, enabling protection against model/data poisoning and backdoor attacks. To this end, our key idea is to utilize public IID (independent, identically distributed) data collected at the server. We can imagine a practical scenario where the server has some global data uniformly distributed over classes, as in the setup of (Zhao et al., 2018). This is generally a reasonable setup since data centers mostly have some collected data (although they can be only a few) of the learning task. For example, different types of medical data are often open to public in various countries. Based on the public data, the server computes entropy and loss of each received model. We use the entropy of each model to filter out the devices whose models are poisoned. In addition, by taking the loss-weighted averaging of the survived models, we can protect the system against local data poisoning and backdoor attacks. We derive a theoretical bound for Sself to ensure acceptable convergence behavior. Experimental results on different datasets show that Sself outperforms various combinations of straggler/adversary defense methods with only a small portion of public data at the server. Related works. 
The authors of (Li et al., 2019c; Wu et al., 2019; Xie et al., 2019a) have recently tackled the straggler issue in a federated learning setup. The basic idea is to allow the devices and the server to update the models asynchronously. Especially in (Xie et al., 2019a), the authors proposed an asynchronous scheme where the global model is updated every time the server receives a local model of each device. However, a fair portion of the received models with large staleness does not help the global model in meaningful ways, potentially slowing down the convergence speed. A more critical issue here is that robust methods designed to handle adversarial attacks, such as RFA (Pillutla et al., 2019), Multi-Krum (Blanchard et al., 2017) or the presently proposed entropy/loss based idea, are hard to be implemented in conjunction with this asynchronous scheme. To combat adversaries, various aggregation methods have been proposed in a distributed learning setup with IID data across nodes (Yin et al., 2018a;b; Chen et al., 2017b; Blanchard et al., 2017; Xie et al., 2018). The authors of (Chen et al., 2017b) suggests a geometric median based aggregation rule of the received models or the gradients. In (Yin et al., 2018a), a trimmed mean approach is proposed which removes a fraction of largest and smallest values of each element among the received results. In Multi-Krum (Blanchard et al., 2017), among N workers in the system, the server tolerates f Byzantine workers under the assumption of 2f + 2 < N . Targeting federated learning with non-IID data, the recently introduced RFA method of (Pillutla et al., 2019) utilizes the geometric median of models sent from devices, similar to (Chen et al., 2017b). However, as mentioned above, these methods are ineffective when combined with a straggler-mitigation scheme, potentially degrading the performance of learning. 
Compared to Multi-Krum and RFA, our entropy/loss based scheme can tolerate adversaries even with a high attack ratio, showing remarkable advantages, especially when combined with straggler-mitigation schemes.\nFinally, we note that the authors of (Xie et al., 2019c) considered both stragglers and adversaries but in a distributed learning setup with IID data across the nodes. Compared to these works, we target a non-IID data distribution setup in a federated learning scenario." }, { "heading": "2 PROPOSED FEDERATED LEARNING WITH SSELF", "text": "We consider the following federated optimization problem:\n$w^* = \arg\min_{w} F(w) = \arg\min_{w} \sum_{k=1}^{N} \frac{m_k}{m} F_k(w)$, (1)\nwhere N is the number of devices, $m_k$ is the number of data samples in device k, and $m = \sum_{k=1}^{N} m_k$ is the total number of data samples of all N devices in the system. By letting $x_{k,j}$ be the j-th data sample in device k, the local loss function of device k, $F_k(w)$, is written as $F_k(w) = \frac{1}{m_k} \sum_{j=1}^{m_k} \ell(w; x_{k,j})$. In the following, we provide solutions aiming to solve the above problem under the existence of stragglers (Subsection 2.1) and adversaries (Subsection 2.2), and finally propose Sself handling both issues (Subsection 2.3)." }, { "heading": "2.1 SEMI-SYNCHRONOUS SCHEME AGAINST STRAGGLERS", "text": "In the t-th global round, the server sends the current model $w_t$ to K devices in $S_t$ ($|S_t| = K \le N$), which is a set of indices randomly selected from the N devices in the system. We let C = K/N be the ratio of devices that participate at each global round. Each device in $S_t$ performs E local updates with its own data and sends the updated model back to the server. In conventional federated averaging (FedAvg), the server waits until the results of all K devices in $S_t$ arrive and then performs aggregation to obtain $w_{t+1} = \sum_{k \in S_t} \frac{m_k}{\sum_{k' \in S_t} m_{k'}} w_t(k)$, where $w_t(k)$ is the model after E local updates at device k starting from $w_t$.
However, due to the effect of stragglers, waiting for all K devices at the server can significantly slow down the overall training process. In resolving this issue, our idea assumes periodic global aggregation at the server. At each global round t, the server transmits the current model/round $(w_t, t)$ to the devices in $S_t$. Instead of waiting for all devices in $S_t$, the server aggregates the models that arrive until a fixed time deadline $T_d$ to obtain $w_{t+1}$, and moves on to the next global round t+1. Hence, model aggregation is performed periodically with every $T_d$. A key feature here is that we do not ignore the results sent from stragglers (not arrived by the deadline $T_d$). These results are utilized at the next global aggregation step, or even later, depending on the delay or staleness. Let $U^{(t)}_i$ be the set of devices 1) that are selected from the server at global round t, i.e., $U^{(t)}_i \subseteq S_t$, and 2) that successfully sent their results to the server at global round i for $i \ge t$. Then, we can write $S_t = \cup_{i=t}^{\infty} U^{(t)}_i$, where $U^{(t)}_i \cap U^{(t)}_j = \emptyset$ for $i \ne j$. Here, $U^{(t)}_{\infty}$ can be viewed as the devices that are selected at round t but failed to successfully send their results back to the server. According to these notations, the devices whose training results arrive at the server during global round t belong to one of the following t+1 sets: $U^{(0)}_t, U^{(1)}_t, \ldots, U^{(t)}_t$. Note that the result sent from device $k \in U^{(i)}_t$ is the model after E local updates starting from $w_i$, and we denote this model by $w_i(k)$. At each round t, we first perform FedAvg as\n$v^{(i)}_{t+1} = \sum_{k \in U^{(i)}_t} \frac{m_k}{\sum_{k' \in U^{(i)}_t} m_{k'}} w_i(k)$ (2)\nfor all i = 0, 1, ..., t, where $v^{(i)}_{t+1}$ is the aggregated result of locally updated models (starting from $w_i$) received at round t with staleness t−i+1. Then from $v^{(0)}_{t+1}, v^{(1)}_{t+1}, \ldots, v^{(t)}_{t+1}$, we take the weighted averaging of results with different staleness to obtain $\sum_{i=0}^{t} \alpha_t(i) v^{(i)}_{t+1}$.
Here, $\alpha_t(i) \propto \frac{\sum_{k \in U^{(i)}_t} m_k}{(t-i+1)^c}$ is a normalized coefficient that is proportional to the number of data samples in $U^{(i)}_t$ and inversely proportional to $(t-i+1)^c$, for a given hyperparameter $c \ge 0$. Hence, we have a larger weight for $v^{(i)}_{t+1}$ with a smaller t−i+1 (staleness). This is to give more weight to more recent results. Based on the weighted sum $\sum_{i=0}^{t} \alpha_t(i) v^{(i)}_{t+1}$, we finally obtain $w_{t+1}$ as\n$w_{t+1} = (1-\gamma) w_t + \gamma \sum_{i=0}^{t} \alpha_t(i) v^{(i)}_{t+1}$, (3)\nwhere γ combines the aggregated result with the latest global model $w_t$. Now we move on to the next round t+1, where the server selects $S_{t+1}$ and sends $(w_{t+1}, t+1)$ to these devices. Here, if the server knows the set of active devices (which are still performing computation), $S_{t+1}$ can be constructed to be disjoint with the active devices. If not, the server randomly chooses $S_{t+1}$ among all devices in the system and the selected active devices can ignore the current request of the server. The left-hand side of Fig. 1 describes our semi-synchronous scheme. The key characteristics of our scheme can be summarized as follows. First, by periodic global aggregation at the server, our scheme is not delayed by the effect of stragglers. Secondly, our scheme fully utilizes the results sent from stragglers in the future global rounds; we first perform federated averaging for the devices with the same staleness (as in the synchronous scheme), and then take the weighted sum of these averaged results with different staleness (as in the asynchronous scheme)." }, { "heading": "2.2 ENTROPY AND LOSS BASED FILTERING/AVERAGING AGAINST ADVERSARIES", "text": "In this subsection, we propose entropy-based filtering and loss-weighted averaging, which not only show better performance with or without attacks but also combine well with the semi-synchronous scheme compared to existing adversary-resilient aggregation methods.
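A single aggregation round of Eqs. (2)-(3) can be sketched as follows (our own illustrative function, not the authors' code; models are flat NumPy vectors, and `buckets` groups the arrivals of round t by the round i they started from):

```python
import numpy as np

def sself_aggregate(w_t, t, buckets, gamma=0.5, c=1.0):
    """Semi-synchronous update, a sketch of Eqs. (2)-(3).

    buckets: {i: [(m_k, w_i_of_k), ...]} -- locally updated models that
    arrived during round t; models in bucket i share staleness t - i + 1.
    """
    v, alpha = {}, {}
    for i, models in buckets.items():
        m_tot = sum(m for m, _ in models)
        v[i] = sum(m * w for m, w in models) / m_tot      # Eq. (2)
        alpha[i] = m_tot / (t - i + 1) ** c               # staleness weight
    total = sum(alpha.values())
    mixed = sum((a / total) * v[i] for i, a in alpha.items())
    return (1 - gamma) * w_t + gamma * mixed              # Eq. (3)

w_t = np.zeros(3)
buckets = {2: [(10, np.ones(3)), (10, 3 * np.ones(3))],   # staleness 1
           1: [(20, np.ones(3))]}                          # staleness 2
w_next = sself_aggregate(w_t, t=2, buckets=buckets, gamma=1.0, c=1.0)
print(w_next)   # each entry is 5/3: weights 2:1 over bucket means 2 and 1
```

Note how c > 0 shrinks the contribution of the staleness-2 bucket even though it carries the same number of samples as the fresh bucket.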
Our key idea is to utilize public IID data collected at the server. We can imagine a practical scenario where the server (or data center) has its own data samples, as in (Zhao et al., 2018), e.g., various medical data that are open to the public. Using these public data at the server, we provide the following two solutions, which can protect the system against model update poisoning, data poisoning, and backdoor attacks. 1) Entropy-based filtering: Let $n_{pub}$ be the number of public data samples at the server, and let $x_{pub,j}$ be the $j$-th public sample. When the server receives the locally updated models from the devices, it measures the entropy of each device $k$ by utilizing the public data as
$$E_{avg}(k) = \frac{1}{n_{pub}} \sum_{j=1}^{n_{pub}} E_{x_{pub,j}}(k), \quad (4)$$
where $E_{x_{pub,j}}(k)$ is the Shannon entropy of the model of the $k$-th device on the sample $x_{pub,j}$, written as $E_{x_{pub,j}}(k) = -\sum_{q=1}^{Q} P_{x_{pub,j}}^{(q)}(k) \log P_{x_{pub,j}}^{(q)}(k)$. Here, $Q$ is the number of classes of the dataset and $P_{x_{pub,j}}^{(q)}(k)$ is the predicted probability of the $q$-th class on a sample $x_{pub,j}$, using the model of the $k$-th device. In supervised learning tasks, the model produces a high-confidence prediction for the ground-truth label of the trained samples and thus has a low prediction entropy. However, if the local model is poisoned, e.g., by a reverse-sign attack, the model is more likely to predict randomly across all classes and thus has a high entropy. Based on this observation, the server filters out the models whose entropy is greater than some threshold value $E_{th}$. It can be seen later in Section 4 that $E_{th}$ is a hyperparameter that can be easily tuned, since there is a large gap between the entropy values of benign and adversarial devices for all datasets. Note that the above method is robust against model update poisoning even with a large portion of adversaries, because it simply filters out the results whose entropy exceeds $E_{th}$.
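For illustration, the entropy computation in (4) and the thresholding step can be sketched as follows (a minimal NumPy sketch; `prob_matrix` stands for one device model's softmax outputs on the n_pub public samples, and the names are ours):

```python
import numpy as np

def avg_prediction_entropy(prob_matrix):
    """Eq. (4): average Shannon entropy of the class probabilities that one
    device's model assigns to the public samples; shape (n_pub, Q)."""
    p = np.clip(np.asarray(prob_matrix, float), 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def entropy_filter(device_prob_matrices, e_th):
    """Keep the indices of devices whose average entropy is at most E_th."""
    return [k for k, pm in enumerate(device_prob_matrices)
            if avg_prediction_entropy(pm) <= e_th]

# A confident (benign-looking) model vs. a near-uniform (poisoned-looking) one:
survivors = entropy_filter([[[0.99, 0.01]],   # entropy ~ 0.056
                            [[0.50, 0.50]]],  # entropy = ln 2 ~ 0.693
                           e_th=0.3)
# survivors is [0]
```

A confidently trained model keeps its entropy far below that of a model forced toward uniform predictions, which is what makes a single threshold easy to tune.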
This is a significant advantage compared to the median-based method (Pillutla et al., 2019), whose performance is significantly degraded when the attack ratio is high. 2) Loss-weighted averaging: The server also measures the loss of each received model using the public data. Based on the loss values, the server then aggregates the received models as follows:
$$w_{t+1} = \sum_{k \in S_t} \beta_t(k) \, w_t(k), \quad \text{where} \quad \beta_t(k) \propto \frac{m_k}{\{F_{pub}(w_t(k))\}^{\delta}} \quad \text{and} \quad \sum_{k \in S_t} \beta_t(k) = 1. \quad (5)$$
Here, $w_t(k)$ is the locally updated model of the $k$-th device at global round $t$, and $F_{pub}(w_t(k))$ is defined as the average loss of $w_t(k)$ computed on the public data at the server, i.e., $F_{pub}(w_t(k)) = \frac{1}{n_{pub}} \sum_{j=1}^{n_{pub}} \ell(w_t(k); x_{pub,j})$. Finally, $\delta \, (\geq 0)$ in $\{F_{pub}(\cdot)\}^{\delta}$ is a parameter controlling the impact of the loss on public data. We note that setting $\delta = 0$ in (5) makes our loss-weighted averaging method equal to FedAvg in (1). Under data poisoning or backdoor attacks, the models of malicious devices would have relatively large losses compared to the others. By the definition of $\beta_t(k)$, such devices are given small weights and have less impact on the next global update.
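A minimal sketch of the loss-weighted averaging in (5) (the names are ours; `public_losses[k]` stands for F_pub(w_t(k)) already evaluated on the public data):

```python
import numpy as np

def loss_weighted_average(models, sample_counts, public_losses, delta=1.0):
    """Eq. (5): beta_t(k) ~ m_k / F_pub(w_t(k))^delta, normalized to sum to 1,
    then w_{t+1} = sum_k beta_t(k) * w_t(k)."""
    m = np.asarray(sample_counts, float)
    f_pub = np.asarray(public_losses, float)
    beta = m / f_pub ** delta
    beta = beta / beta.sum()
    return beta @ np.stack(models), beta

# Equal data, but the second model has 4x the public loss -> weight 0.2.
w_next, beta = loss_weighted_average(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
    sample_counts=[10, 10], public_losses=[1.0, 4.0], delta=1.0)
# beta is [0.8, 0.2]
```

Setting `delta=0` recovers the plain FedAvg weights, matching the remark about (1) above.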
By replacing federated averaging with the above loss-weighted averaging, we are able to build a system that is robust against local data poisoning and backdoor attacks.

Algorithm 1 Semi-Synchronous Entropy and Loss based Filtering/Averaging (Sself)
Input: Initialized model $w_0$; Output: Final global model $w_T$
Algorithm at the Server
1: for each global round $t = 0, 1, \ldots, T-1$ do
2:   Choose $S_t$ and send the current model and the global round $(w_t, t)$ to the devices
3:   Wait for $T_d$ and then:
4:   for $i = 0, 1, \ldots, t$ do
5:     for $k \in U_t^{(i)}$ do
6:       $U_t^{(i)} \leftarrow U_t^{(i)} \setminus \{k\}$ if $E_{avg}(k) > E_{th}$  // Entropy-based filtering
7:     end for
8:   end for
9:   for $i = 0, 1, \ldots, t$ do
10:    $v_{t+1}^{(i)} = \sum_{k \in U_t^{(i)}} \beta_t(k) \, w_i(k)$  // Loss-weighted average of results with the same staleness
11:  end for
12:  $w_{t+1} = (1-\gamma) \, w_t + \gamma \sum_{i=0}^{t} \alpha_t(i) \, v_{t+1}^{(i)}$  // Weighted average of results with different staleness
13: end for
Algorithm at the Devices: If device $k$ receives $(w_t, t)$ from the server, it performs $E$ local updates to obtain $w_t(k)$. Then each benign device $k$ sends $(w_t(k), t)$ to the server, while an adversarial device transmits a poisoned model depending on the type of attack.

The above two methods can be easily combined to tackle model update poisoning, data poisoning, and backdoor attacks. The server first filters out the model-poisoned devices based on the entropy, and then takes the loss-weighted average over the surviving devices to combat data poisoning and backdoor attacks." }, { "heading": "2.3 SEMI-SYNCHRONOUS ENTROPY AND LOSS BASED FILTERING/AVERAGING (SSELF)", "text": "The details of the overall Sself operation are described in Algorithm 1 and Fig. 1. At global round $t$, the server chooses $S_t$ and sends $(w_t, t)$ to the devices. The server collects the results from the devices for a time period $T_d$, and calculates the entropy $E_{avg}(k)$ and the loss $F_{pub}(w_t(k))$ as in (4) and (5), respectively. Based on the entropy, the server first filters out the results sent from the model-poisoned devices.
Then, the server aggregates the models that have the same staleness to obtain $v_{t+1}^{(i)}$ for $i = 0, 1, \ldots, t$. In this aggregation process, we apply loss-weighted averaging as in (5) instead of the conventional averaging of FedAvg, to defend the system against data poisoning or backdoor attacks. Now, using $v_{t+1}^{(0)}, v_{t+1}^{(1)}, \ldots, v_{t+1}^{(t)}$, we finally obtain $w_{t+1}$ as in (3). Here we note that the server can compute the entropy and loss whenever a model is received, i.e., in the order of arrival. After computing the entropy and loss of the last model of global round $t$, the server only needs to compute the weighted sum of the results. Hence, in practical setups where cloud servers have large enough computing power, Sself does not cause a significant time delay at the server compared to FedAvg. The computational complexity of Sself depends on the number of received models at each global round and the running time for computing the entropy/loss of each model. Although direct comparison with other baselines is tricky, if we assume that the complexity of computing the entropy or loss is linear in the number of model parameters, as in (Xie et al., 2019b), Sself has a complexity larger than that of RFA by a factor of $n_{pub}$. This additional computational complexity of Sself compared to RFA is the cost of better robustness against adversaries.

At the device side, each device starts its local model update whenever it receives $(w_t, t)$ from the server. After performing $E$ local updates, device $k$ transmits $(w_t(k), t)$ to the server. These processes at the server and the devices are performed in parallel and asynchronously, until the last global round ends."
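To make the server-side aggregation concrete, here is a minimal sketch of the final mixing step across staleness groups, i.e., computing the α_t(i) and applying (3) (NumPy sketch with our own names; the per-group averages `v_groups` are assumed to be the already filtered, loss-weighted results):

```python
import numpy as np

def mix_staleness_groups(w_t, v_groups, group_sample_counts, staleness,
                         gamma=0.5, c=1.0):
    """Eq. (3): w_{t+1} = (1 - gamma) * w_t + gamma * sum_i alpha_t(i) * v^(i),
    with alpha_t(i) proportional to (sum of m_k in group i) / staleness_i^c."""
    raw = np.asarray(group_sample_counts, float) / np.asarray(staleness, float) ** c
    alpha = raw / raw.sum()               # normalized staleness weights
    mixed = alpha @ np.stack(v_groups)    # sum_i alpha_t(i) v_{t+1}^(i)
    return (1.0 - gamma) * np.asarray(w_t, float) + gamma * mixed

# Fresh group (staleness 1) vs. one-round-old group (staleness 2), equal data:
# alpha = [2/3, 1/3], so the fresher average dominates.
w_next = mix_staleness_groups(w_t=np.zeros(2),
                              v_groups=[np.array([3.0, 0.0]),
                                        np.array([0.0, 3.0])],
                              group_sample_counts=[100, 100],
                              staleness=[1, 2], gamma=0.5, c=1.0)
# w_next is [1.0, 0.5]
```

Increasing the hyperparameter `c` discounts stale groups more aggressively, while `gamma` controls how far the global model moves toward the aggregate in each round.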
}, { "heading": "3 CONVERGENCE ANALYSIS", "text": "In this section, we provide insights on the convergence of Sself with the following standard assumptions in federated learning (Li et al., 2019b; Xie et al., 2019a).\nAssumption 1 The global loss fuction F defined in (1) is µ-strongly convex and L-smooth.\nAssumption 2 Let ξit(k) be a set of data samples that are randomly selected from the k-th device during the i-th local update at global round t. Then, E‖∇Fk(wt(k), ξit(k)) − ∇F (wt(k))‖2 ≤ ρ1 holds for all t and k = 1, . . . , N and i = 1, . . . , E.\nAssumption 3 The second moments of stochastic gradients in each device is bounded, i.e., E‖∇Fk(wt(k), ξit(k))‖2 ≤ ρ2 for all t and k = 1, . . . , N and i = 1, . . . , E.\nWe also have another assumption that describes the bounds on the error for the adversaries. Let B\n(i) t and M (i) t be the set for benign and adversarial devices of U (i) t respectively, satisfying U (i) t =\nB (i) t ∪M (i) t and B (i) t ∩M (i) t = ∅. Let\nΩ (i) t = ∑ k∈M(i)t βi(k) (6)\nbe the sum of loss weights for the adversarial devices in U (i)t . Now we have the following assumption.\nAssumption 4 For an adversarial device k ∈ M (i)t , there exists an arbitrarily large Γ such that E[F (wt(k))− F (w∗)] ≤ Γ <∞ holds for all i = 1, . . . , t.\nBased on the above assumptions, we state the following theorem which provides the convergence bound of our scheme. The proof can be found in Supplementary Material. Theorem 1 Suppose Assumptions 1, 2, 3, 4 hold and the learning rate η is set to be less than 1L . If U\n(t) t 6= ∅ for all t ∈ {0, 1, ..., T}, then Sself satisfies\nE[F (wT )− F (w∗)] ≤ νT [F (w0)− F (w∗)] + (1− νT )C ′ (7)\nwhere ν = 1− γ + γ(1− ηµ)E , C ′ = ρ1+ρ2+2µΩmaxΓ2ηµ2 , Ωmax = max0≤i≤t,0≤t≤T Ω (i) t .\nWe have the following important observations from Theorem 1. First, we can observe a trade-off between convergence rate νT and the error term (1 − νT )C ′. 
If we increase $\gamma$, the convergence rate improves but the error term increases, as in (Xie et al., 2019a). By adjusting $\gamma$, we can make the convergence faster at the beginning of training while reducing the error at the end of training. Another important observation concerns the impact of the adversaries. If we have a large $\Omega_{max}$ for a fixed $\nu$, it can be seen from the definition of $C'$ that we have a large error term $(1-\nu^T)C'$. However, if the entropy-based filtering method successfully filters out the model-poisoned devices, and the loss weights $\beta_i(k)$ of the adversaries are significantly small under data poisoning and backdoor attacks, then each $\Omega_t^{(i)}$ in (6) is small (close to zero). This means that $\Omega_{max}$ is significantly small, i.e., the error term $(1-\nu^T)C'$ is small. In the next section, we show via experiments that Sself successfully combats both stragglers and adversaries simultaneously and achieves fast convergence with a small error term." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we validate Sself on MNIST (LeCun et al., 1998), FMNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky et al., 2009). The overall dataset is split into 60,000 training and 10,000 test samples for MNIST and FMNIST, and into 50,000 training and 10,000 test samples for CIFAR-10. A simple convolutional neural network (CNN) with 2 convolutional layers and 2 fully connected layers is utilized for MNIST, while a CNN with 2 convolutional layers and 1 fully connected layer is used for FMNIST. When training with CIFAR-10, we utilized VGG-11.
We consider $N = 100$ devices, each having the same number of data samples. We randomly assigned two classes to each device to create non-IID situations. Considering the non-IID cases, we ignored the batch normalization layers when training VGG-11 with CIFAR-10. At each global round, we randomly selected a fraction $C$ of the devices in the system to participate. For the proposed Sself method, we let 2% of the entire training data be the public data and performed federated training with the remaining 98% of the training set. The number of local epochs at each device is set to 5 for all experiments, and the local batch size is set to 10 for all experiments except for the backdoor attack.

Figure 2: Test accuracy versus training time with only stragglers. Sself is our scheme. (Panels (a)-(f) compare Sself, FedAsync, ignore stragglers + FedAvg, and wait for stragglers + FedAvg on MNIST, FMNIST, and CIFAR-10 with C = 0.1 and C = 0.2.)
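The non-IID split above (two random classes per device) can be sketched as follows (the round-robin split of each class's samples among its holder devices is our simplifying assumption, not a detail specified in the paper):

```python
import random

def two_class_partition(labels, num_devices, num_classes=10, seed=0):
    """Assign each device two distinct random classes, then deal each class's
    sample indices round-robin among the devices holding that class."""
    rng = random.Random(seed)
    device_classes = [rng.sample(range(num_classes), 2)
                      for _ in range(num_devices)]
    holders = {c: [d for d in range(num_devices) if c in device_classes[d]]
               for c in range(num_classes)}
    shards = {d: [] for d in range(num_devices)}
    for c in range(num_classes):
        idx = [i for i, y in enumerate(labels) if y == c]
        for j, i in enumerate(idx):
            if holders[c]:
                shards[holders[c][j % len(holders[c])]].append(i)
    return shards, device_classes
```

Each device then sees at most two label values, which is what makes the local distributions strongly non-IID.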
In addition, we used stochastic gradient descent and tuned the hyperparameters for Sself and the other comparison schemes; the details are described in the Supplementary Material. Here, we emphasize that Sself outperforms existing methods even with naively chosen hyperparameters, as also shown in the Supplementary Material.

Experiments with stragglers. To confirm the advantage of Sself, we first consider the scenario with only stragglers; adversarial attacks are not considered here. We compare Sself with the following methods. First is the wait for stragglers approach, where FedAvg is applied after waiting for all the devices at each global round. The second scheme is the ignore stragglers approach, where FedAvg is applied after waiting for a certain timeout threshold, ignoring the results sent from slow devices. Finally, we consider the asynchronous scheme (FedAsync) (Xie et al., 2019a), where the global model is updated every time the result of a device arrives. For Sself and FedAsync, $\gamma$ is decayed, while the learning rate is decayed in the other schemes. In Fig. 2, we plot the test accuracy versus running time for different datasets and $C$ values. For a fair comparison, the global aggregation at the server is performed periodically with $T_d = 1$ for Sself and the other comparison schemes (ignore stragglers, FedAsync). To model stragglers, each device can have a delay of 0, 1, or 2, determined independently and uniformly at random. In other words, at each global round $t$, we have $S_t = U_t^{(t)} \cup U_{t+1}^{(t)} \cup U_{t+2}^{(t)}$. Our first observation from Fig. 2 is that the ignore stragglers scheme can lose significant data at each round and often converges to a suboptimal point with lower accuracy. The wait for stragglers scheme requires the largest running time until convergence due to the delays caused by slow devices. Finally, it is observed that Sself performs the best, even better than the state-of-the-art FedAsync. Experiments with adversaries.
Next, we confirm the performance of Sself in Fig. 3 under the scenario with only adversaries in a synchronous setup. We compare our method with geometric-median-based RFA (Pillutla et al., 2019) and FedAvg under model update/data poisoning and backdoor attacks. A comparison with Multi-Krum is provided in the Supplementary Material. For the data poisoning attack, we conduct label flipping (Biggio et al., 2012), where each label $i$ is flipped to label $i+1$. For model update poisoning, each adversarial device takes the opposite sign of all weights and scales them up 10 times before transmitting the model to the server. For both attacks, we set $C$ to 0.2, and the portion of adversarial devices is assumed to be $r = 0.2$ at each global round.

For the backdoor, we use the model replacement method (Bagdasaryan et al., 2018), in which adversarial devices transmit a scaled version of the corrupted model to replace the global model with a bad model. We conduct the pixel-pattern backdoor attack (Gu et al., 2017), in which specific pixels are embedded in a fraction of the images, and these images are classified as a targeted label. We embedded 12 white pixels in the top-left corner of the image, and the labels of these poisoned images are set to 2. We utilize the Dirichlet distribution with parameter 0.5 for distributing training samples to $N = 100$ devices. We let $C = 0.1$, $r = 0.1$, and the local batch size is set to 64. The number of poisoned images in a batch is set to 20, and we do not decay the learning rate here. In this backdoor scenario, we additionally compare Sself with the norm-thresholding strategy (Sun et al., 2019), in which the server ignores the models with a norm greater than a pre-defined threshold. We measure the attack success rate of the backdoor task by embedding the pixel pattern into all test samples (except data with label 2) and then comparing the predicted label with the targeted label 2.
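The three attacks and the success-rate measurement can be sketched as follows (minimal illustrations in our own notation; for instance, the trigger below stamps a 2×2 white block rather than the exact 12-pixel pattern used in the experiments):

```python
import numpy as np

def flip_labels(y, num_classes=10):
    """Data poisoning: label flipping, each label i becomes i + 1 (mod Q)."""
    return [(label + 1) % num_classes for label in y]

def poison_model(w, scale=10.0):
    """Model update poisoning: flip the sign of every weight and scale by 10."""
    return -scale * np.asarray(w, float)

def embed_trigger(img, size=2, value=1.0):
    """Pixel-pattern backdoor: stamp a small white block in the top-left corner."""
    out = np.array(img, float, copy=True)
    out[:size, :size] = value
    return out

def attack_success_rate(predict, images, labels, target_label=2):
    """Embed the trigger into every test sample whose true label is not the
    target, and report the fraction classified as the target label."""
    hits = total = 0
    for img, y in zip(images, labels):
        if y == target_label:
            continue  # samples of the target class are excluded
        total += 1
        if predict(embed_trigger(img)) == target_label:
            hits += 1
    return hits / max(total, 1)
```

A backdoored model should keep its clean accuracy while `attack_success_rate` is high; a successful defense drives this rate toward zero.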
We applied the backdoor attack in every round after the 10-th global round for MNIST and FMNIST, and after the 1000-th global round for CIFAR-10. Fig. 3 shows the performance of each scheme over the global rounds under the three attack scenarios. For both data and model poisoning attacks, it can be seen that Sself performs better than the other schemes. FedAvg does not work well on any dataset, and the performance of RFA degrades as the dataset/neural network model becomes more complex. In the backdoor attack scenario, Sself and the norm-thresholding method achieve low attack success rates on all datasets. The other schemes cannot defend against the backdoor attack, exhibiting high attack success rates as the global round increases. Experiments with both stragglers and adversaries. Finally, in Fig. 4, we consider the setup with both stragglers and adversaries. We compare Sself with various straggler/adversary defense combinations. A comparison with Multi-Krum is provided in the Supplementary Material. We set $C = 0.2$, $r = 0.2$ for model/data poisoning, while the results on the backdoor attack are also shown in the Supplementary Material. The stragglers and adversaries are modeled as in Figs. 2 and 3, respectively. We have the following observations from Fig. 4.
First, FedAsync (Xie et al., 2019a) does not perform well when combined with entropy-based filtering and loss-weighted averaging, since the model update is conducted one-by-one in the order of arrivals. Due to the same issue, FedAsync cannot be combined with RFA. Our second observation is that the semi-synchronous or ignore stragglers method combined with RFA exhibits poor performance. The reason is that the attack ratio can often be very high (larger than $r$) for these deadline-based schemes, which degrades the performance of RFA. Compared to RFA, our entropy and loss based filtering/averaging can be applied even with a high attack ratio. It can also be seen that the wait for stragglers scheme combined with RFA suffers from the straggler issue. Overall, the proposed Sself algorithm performs the best, confirming the significant advantages of our scheme under the existence of both stragglers and adversaries.

Figure 4: Performance of different schemes with both stragglers and adversaries. (Panels (a)-(c) show data poisoning and (d)-(f) model poisoning on MNIST, FMNIST, and CIFAR-10; curves compare Sself, semi-synchronous + RFA, FedAsync + ELF, ignore stragglers + RFA, and wait for stragglers + RFA.)
" }, { "heading": "5 CONCLUSION", "text": "We proposed Sself, a robust federated learning scheme against both stragglers and adversaries.
The semi-synchronous component allows the server to fully utilize the results sent from the stragglers by taking advantage of both synchronous and asynchronous elements. In each aggregation step of the semi-synchronous approach, entropy-based filtering screens out the model-poisoned devices and loss-weighted averaging reduces the impact of data poisoning and backdoor attacks. Extensive experimental results show that Sself enables fast and robust federated learning in practical scenarios with a large number of slow devices and adversaries." }, { "heading": "A HYPERPARAMETER SETTING", "text": "" }, { "heading": "A.1 SCENARIO WITH ONLY STRAGGLERS", "text": "The hyperparameter settings for Sself are shown in Table 1. For the schemes ignore stragglers and wait for stragglers combined with FedAvg, we decayed the learning rate during training. For the FedAsync scheme in (Xie et al., 2019a), we take a polynomial strategy with hyperparameters $a = 0.5$, $\alpha = 0.8$, and decayed $\gamma$ during training." }, { "heading": "A.2 SCENARIO WITH ONLY ADVERSARIES", "text": "Data poisoning and model update poisoning attacks: Table 2 describes the hyperparameters for Sself with only adversaries, under data poisoning and model update poisoning attacks. For RFA in (Pillutla et al., 2019), the maximum number of iterations is set to 10. In this setup, the learning rate is decayed for all three schemes (Sself, RFA, FedAvg).

Backdoor attack: In this backdoor attack scenario, note that we utilized the Dirichlet distribution with parameter 0.5 for distributing training samples to $N = 100$ devices. The local batch size is set to 64 and the number of poisoned images is 20. In this experiment, we additionally compared our scheme with the norm-thresholding strategy (Sun et al., 2019), where the threshold value is set to 2. The hyperparameter details for Sself are shown in Table 3."
}, { "heading": "A.3 SCENARIO WITH BOTH STRAGGLERS AND ADVERSARIES", "text": "Data poisoning and model update poisoning attacks: The hyperparameters for Sself are exactly the same as in Table 2.\nBackdoor attack: The hyperparameter details are shown in Table 4.\nFor the comparison schemes, we considered: 1) Semi-synchronous + RFA, 2) FedAsync + ELF (entropy and loss based filtering/averaging), 3) Ignore stragglers + RFA, 4) Wait for stragglers + RFA. Each setting is set to be the same as in the previous experiments." }, { "heading": "B ADDITIONAL EXPERIMENTS UNDER BACKDOOR ATTACK", "text": "" }, { "heading": "B.1 EXPERIMENTS WITH BOTH STRAGGLERS AND ADVERSARIES UNDER BACKDOOR ATTACK", "text": "Based on the hyperparameters described in Table 4, we show experimental results with both stragglers and adversaries under backdoor attack. It can be observed from Fig. B.1 that Sself successfully defends against the backdoor attack while other schemes show high attack ratios as global round increases." }, { "heading": "B.2 EXPERIMENTS UNDER NO-SCALED BACKDOOR ATTACK", "text": "In addition to model replacement backdoor attack we considered so far, we perform additional experiments under no-scaled backdoor attack (Bagdasaryan et al., 2018) where the adversarial devices do not scale the weights and only transmit the corrupted model to the server. Fig. B.2 shows the performance under no-scaled backdoor attack with only adversaries (no stragglers). It can be seen that our Sself consistently achieves low attack success rates compared to others. Since the adversaries do not scale the weights, the norm-thresholding approach cannot defend against the attack." }, { "heading": "C EXPERIMENTAL RESULTS FOR VARYING HYPERPARAMETERS", "text": "To observe the impact of hyperparameter setting, we performed additional experiments with various δ and Eth values, the key hyperparameters of Sself. The results are shown in Fig. C.1 with only adversaries. 
We performed the data poisoning attack for varying $\delta$ and the model update poisoning attack for varying $E_{th}$. It can be seen that our scheme still performs well (better than RFA) even with naively chosen hyperparameters, confirming the advantage of Sself in terms of reducing the overhead associated with hyperparameter tuning." }, { "heading": "D PERFORMANCE COMPARISON WITH MULTI-KRUM", "text": "While we compared Sself with RFA in our main manuscript, here we compare our scheme with Multi-Krum (Blanchard et al., 2017), which is a Byzantine-resilient aggregation method targeting the conventional distributed learning setup with IID data across nodes. In Multi-Krum, among the $N$ workers in the system, the server tolerates $f$ Byzantine workers under the assumption $2f + 2 < N$. After filtering out $f$ worker nodes based on squared distances, the server chooses $M$ workers with the best scores among the $N - f$ remaining workers and aggregates them. We set $M = N - f$ when comparing our scheme with Multi-Krum.

Fig. C.2 compares Sself with Multi-Krum under model update poisoning. We first observe Figs. 2(a) and 2(b), which show the results with only adversaries. It can be seen that if the number of adversaries exceeds $f$, the performance of Multi-Krum decreases significantly. Compared to Multi-Krum, the proposed Sself method can filter out the poisoned devices and then take the weighted sum of the surviving results even when the portion of adversaries is high. Figs. 2(c) and 2(d) show the results under the existence of both stragglers and adversaries, under the model update poisoning attack. The parameter $f$ of Multi-Krum is set to the maximum value satisfying $2f + 2 < N$, where $N$ depends on the number of received results for both the semi-synchronous and ignore stragglers approaches. However, even when we set $f$ to the maximum value, the number of adversaries can still exceed $f$, which degrades the performance of Multi-Krum combined with the semi-synchronous and ignore stragglers approaches.
Obviously, Multi-Krum can be combined with the wait for stragglers approach by setting $f$ large enough. However, this scheme still suffers from the effect of stragglers, which significantly slows down the overall training process.

Fig. C.3 compares Sself with Multi-Krum under the scaled backdoor attack. The results are consistent with those in Fig. C.2, confirming the advantage of Sself over Multi-Krum combined with straggler-mitigating schemes.

E IMPACT OF PUBLIC DATA

In the experiments in our main manuscript, we utilized 2% of the training data samples as public data to defend against adversarial attacks. In this section, to observe the impact of the portion of public data, we performed additional experiments varying the portion of public data under the three attack scenarios in a synchronous setup. In the main manuscript, we let 2% of the entire training set be the public data and the remaining data be the training data at the devices, for a fair comparison with the other schemes. Here, the overall training set is utilized at the devices and, among them, a certain portion of the data is collected at the server. Fig. D.1 shows the results with various portions of public data on FMNIST.

Figure D.1: Impact of the portion of public data at the server using FMNIST. We set C = 0.1, r = 0.1 for the backdoor attack and C = 0.2, r = 0.2 for the others. (Panels: (a) data poisoning, (b) model update poisoning, (c) backdoor attack; curves correspond to public-data portions of 0.03%, 1%, 2%, and 6%.)

From the results, it can be seen that our Sself protects the system against adversarial attacks with only a small amount of public data. However, as shown in the plot where the portion of public data is 0.03%, if the amount of public data becomes smaller than a certain threshold, the robustness of Sself does suffer." }, { "heading": "F EXPERIMENTS ON COVID-19 DATASET OPEN TO PUBLIC", "text": "In this section, we performed additional experiments on Kaggle's Covid-19 dataset1, which is open to the public. We consider both model update poisoning and data poisoning attacks in a synchronous setup. Image classification is performed to detect Covid-19 using chest X-ray images. The dataset consists of 317 color images of 3480 × 4248 pixels in 3 classes (Normal, Covid and Viral-Pneumonia). There are 251 training images and 66 test images. We resized the images to 224 × 224 pixels and used a convolutional neural network with 6 convolutional layers and 1 fully connected layer. We used 6% of the training data as the public data. We divided the remaining training samples among 10 devices and set C = 1 and r = 0.1.

Fig. F.1 shows the results of different schemes under data poisoning and model update poisoning attacks on the Covid-19 dataset. Like the other baseline schemes, our Sself shows robustness against the model update poisoning attack. Under the data poisoning attack, our Sself shows the best performance compared to the other schemes.
In conclusion, utilizing part of the open medical dataset as public data, we show that our Sself can effectively defend against model update and data poisoning attacks.

1https://www.kaggle.com/pranavraikokte/covid19-image-dataset

Figure F.1: Performance of different schemes on a medical dataset (Covid-19 image dataset) under data and model update poisoning attacks. We set C = 1, r = 0.1. (Panels (a) and (b) compare Sself, RFA, Multi-Krum with f = 1, and federated averaging.)

Figure G.1: Performance in a more severe straggler scenario where each device can have a delay of 0 to 4. Data and model-update poisoning attacks are considered with FMNIST. We set C = 0.4, r = 0.2. (Panels (a) and (b) compare Sself, semi-synchronous + RFA, ignore stragglers + RFA, and wait for stragglers + RFA.)
" }, { "heading": "G EXPERIMENTS IN A MORE SEVERE STRAGGLER SCENARIO", "text": "When modeling stragglers, we gave a delay of 0, 1, or 2 to each device in the experiments of the main manuscript. In this section, each device can have a delay of 0 to 4, again determined independently and uniformly at random. In Fig. G.1, we show the results with both stragglers and adversaries under data and model-update poisoning on the FMNIST dataset. We set C to 0.4 and r to 0.2. It can be seen that our Sself still shows the best performance under both data poisoning and model-update poisoning compared to the other baseline schemes."
}, { "heading": "H EXPERIMENTS WITH VARYING PORTION OF ADVERSARIES", "text": "In this section, we show the performance of Sself with a varying portion of adversaries under data and model poisoning attacks. We do not consider stragglers here. We set $\delta$ to 1 and $E_{th}$ to 1 as in the experiments of the main manuscript. Fig. H.1 shows the results with different attack ratios on the FMNIST dataset. For data poisoning, our Sself shows robustness against attack ratios of up to 0.4, but with 0.5 or higher, performance is degraded. For model update poisoning, it can be seen that our Sself performs well even with a higher attack ratio.
[Figure H.1 shows test accuracy versus global round for attack ratios r = 0.2, 0.3, 0.4, 0.5, 0.6, in panels (a) FMNIST, data poisoning and (b) FMNIST, model update poisoning.]
Figure H.1: Performance with varying portion of adversaries. Data and model-update poisoning attacks are considered with FMNIST. We set C = 0.2." }, { "heading": "I PROOF OF THEOREM 1", "text": "" }, { "heading": "I.1 ADDITIONAL NOTATIONS FOR PROOF", "text": "After receiving the results at global round $t$, the server first performs entropy-based filtering and obtains $U_t^{(0)}, U_t^{(1)}, \dots, U_t^{(t)}$. Let $w_t^j(k)$ be the model of the $k$-th benign device after $j$ local updates starting from global round $t$. At global round $t$, each device receives the current global model $w_t$ and the round index $t$ from the server, and sets its initial model to $w_t$, i.e., $w_t^0(k) \leftarrow w_t$ for all $k = 1, \dots, N$. Then each benign device $k$ performs $E$ local updates of stochastic gradient descent (SGD) with learning rate $\eta$:
$$w_t^j(k) \leftarrow w_t^{j-1}(k) - \eta \nabla F_k\big(w_t^{j-1}(k), \xi_t^{j-1}(k)\big) \quad \text{for } j = 1, \dots, E, \qquad (8)$$
where $\xi_t^j(k)$ is a set of data samples randomly selected from the $k$-th device during the $j$-th local update at global round $t$. After $E$ local updates, the $k$-th benign device transmits $w_t^E(k)$ to the server. However, in each round, the adversarial devices transmit poisoned model parameters.
Using these notations, the parameters defined in Section 2 can be rewritten as follows:
$$v_{t+1}^{(i)} = \sum_{k \in U_t^{(i)}} \beta_i(k)\, w_i^E(k), \quad \text{where } \beta_i(k) \propto \frac{m_k}{\{F_{pub}(w_i^E(k))\}^{\delta}} \text{ and } \sum_{k \in U_t^{(i)}} \beta_i(k) = 1, \qquad (9)$$
$$z_{t+1} = \sum_{i=0}^{t} \alpha_t(i)\, v_{t+1}^{(i)}, \quad \text{where } \alpha_t(i) \propto \frac{\sum_{k \in U_t^{(i)}} m_k}{(t-i+1)^c} \text{ and } \sum_{i=0}^{t} \alpha_t(i) = 1, \qquad (10)$$
$$w_{t+1} = (1-\gamma)\, w_t + \gamma\, z_{t+1}. \qquad (11)$$" }, { "heading": "I.2 KEY LEMMA", "text": "We introduce the following key lemma for proving Theorem 1. Our proof is largely based on the convergence proof of FedAsync in (Xie et al., 2019a).
Lemma 1 Suppose Assumptions 1 and 2 hold and the learning rate $\eta$ is set to be less than $\frac{1}{L}$. Consider the $k$-th benign device that received the current global model $w_t$ from the server at global round $t$. After $E$ local updates, the following holds:
$$\mathbb{E}\big[F(w_t^E(k)) - F(w^*) \,\big|\, w_t^0(k)\big] \le (1-\eta\mu)^E \big[F(w_t^0(k)) - F(w^*)\big] + \frac{E\rho_1\eta}{2}. \qquad (12)$$
Proof of Lemma 1. First, consider one step of SGD in the $k$-th local device. For a given $w_t^{j-1}(k)$, for all global rounds $t$ and all local updates $j \in \{1, \dots
, E\}$, we have
$$\begin{aligned}
\mathbb{E}\big[F(w_t^j(k)) - F(w^*) \,\big|\, w_t^{j-1}(k)\big]
&\le F(w_t^{j-1}(k)) - F(w^*) - \eta\, \mathbb{E}\big[\nabla F(w_t^{j-1}(k))^T \nabla F_k(w_t^{j-1}(k), \xi_t^{j-1}(k)) \,\big|\, w_t^{j-1}(k)\big] \\
&\quad + \frac{L\eta^2}{2}\, \mathbb{E}\big[\|\nabla F_k(w_t^{j-1}(k), \xi_t^{j-1}(k))\|^2 \,\big|\, w_t^{j-1}(k)\big] && \text{(SGD update and $L$-smoothness)} \\
&\le F(w_t^{j-1}(k)) - F(w^*) + \frac{\eta}{2}\, \mathbb{E}\big[\|\nabla F(w_t^{j-1}(k)) - \nabla F_k(w_t^{j-1}(k), \xi_t^{j-1}(k))\|^2 \,\big|\, w_t^{j-1}(k)\big] \\
&\quad - \frac{\eta}{2}\,\|\nabla F(w_t^{j-1}(k))\|^2 && (\eta < \tfrac{1}{L}) \\
&\le F(w_t^{j-1}(k)) - F(w^*) - \frac{\eta}{2}\,\|\nabla F(w_t^{j-1}(k))\|^2 + \frac{\eta\rho_1}{2} && \text{(Assumption 2)} \\
&\le (1-\eta\mu)\big[F(w_t^{j-1}(k)) - F(w^*)\big] + \frac{\eta\rho_1}{2} && \text{($\mu$-strong convexity)}
\end{aligned} \qquad (13)$$
Applying the above result to the $E$ local updates in the $k$-th local device, we have
$$\begin{aligned}
\mathbb{E}\big[F(w_t^E(k)) - F(w^*) \,\big|\, w_t^0(k)\big]
&= \mathbb{E}\Big[\, \mathbb{E}\big[F(w_t^E(k)) - F(w^*) \,\big|\, w_t^{E-1}(k)\big] \,\Big|\, w_t^0(k)\Big] && \text{(law of total expectation)} \\
&\le (1-\eta\mu)\, \mathbb{E}\big[F(w_t^{E-1}(k)) - F(w^*) \,\big|\, w_t^0(k)\big] + \frac{\eta\rho_1}{2} && \text{(inequality (13))} \\
&\;\;\vdots \\
&\le (1-\eta\mu)^E \big[F(w_t^0(k)) - F(w^*)\big] + \frac{\eta\rho_1}{2} \sum_{j=1}^{E} (1-\eta\mu)^{j-1} \\
&= (1-\eta\mu)^E \big[F(w_t^0(k)) - F(w^*)\big] + \frac{\eta\rho_1}{2} \cdot \frac{1-(1-\eta\mu)^E}{\eta\mu} && \text{(from $\eta < \tfrac{1}{L} \le \tfrac{1}{\mu}$, $\eta\mu < 1$)} \\
&\le (1-\eta\mu)^E \big[F(w_t^0(k)) - F(w^*)\big] + \frac{E\eta\rho_1}{2} && \text{(from $\eta\mu < 1$, $1-(1-\eta\mu)^E \le E\eta\mu$)}
\end{aligned}$$" }, { "heading": "I.3 PROOF OF THEOREM 1", "text": "Now utilizing Lemma 1, we provide the proof of Theorem 1. First, consider one round of global aggregation at the server. For a given $w_{t-1}$, the server updates the global model according to equation (11). Then for all $t = 1, \dots
, T$, we have
$$\begin{aligned}
\mathbb{E}[F(w_t) - F(w^*) \mid w_{t-1}]
&\overset{(a)}{\le} (1-\gamma)[F(w_{t-1}) - F(w^*)] + \gamma\, \mathbb{E}[F(z_t) - F(w^*) \mid w_{t-1}] \\
&\overset{(b)}{\le} (1-\gamma)[F(w_{t-1}) - F(w^*)] + \gamma \sum_{i=0}^{t-1} \alpha_{t-1}(i)\, \mathbb{E}\big[F(v_t^{(i)}) - F(w^*) \,\big|\, w_{t-1}\big] \\
&\overset{(c)}{\le} (1-\gamma)[F(w_{t-1}) - F(w^*)] + \gamma \sum_{i=0}^{t-1} \alpha_{t-1}(i) \sum_{k \in U_{t-1}^{(i)}} \beta_i(k)\, \mathbb{E}\big[F(w_i^E(k)) - F(w^*) \,\big|\, w_{t-1}\big] \\
&= (1-\gamma)[F(w_{t-1}) - F(w^*)] + \gamma \sum_{i=0}^{t-1} \alpha_{t-1}(i) \Big\{ \sum_{k \in B_{t-1}^{(i)}} \beta_i(k)\, \mathbb{E}\big[F(w_i^E(k)) - F(w^*) \,\big|\, w_{t-1}\big] \\
&\qquad + \sum_{k \in M_{t-1}^{(i)}} \beta_i(k)\, \mathbb{E}\big[F(w_i^E(k)) - F(w^*) \,\big|\, w_{t-1}\big] \Big\} \\
&\overset{(d)}{\le} \big(1-\gamma+\gamma\,\alpha_{t-1}(t-1)\,(1-\Omega_{t-1}^{(t-1)})\,(1-\eta\mu)^E\big)[F(w_{t-1}) - F(w^*)] + \frac{E\eta\rho_1\gamma}{2} \\
&\qquad + \gamma(1-\eta\mu)^E \sum_{i=0}^{t-2} \alpha_{t-1}(i) \sum_{k \in B_{t-1}^{(i)}} \beta_i(k)\big[F(w_i^0(k)) - F(w^*)\big] \\
&\qquad + \gamma \sum_{i=0}^{t-1} \alpha_{t-1}(i) \sum_{k \in M_{t-1}^{(i)}} \beta_i(k)\, \mathbb{E}\big[F(w_i^E(k)) - F(w^*) \,\big|\, w_{t-1}\big] \\
&\overset{(e)}{\le} \big(1-\gamma+\gamma\,\alpha_{t-1}(t-1)\,(1-\Omega_{t-1}^{(t-1)})\,(1-\eta\mu)^E\big)[F(w_{t-1}) - F(w^*)] + \gamma\,\Omega_{max}\Gamma \\
&\qquad + \gamma(1-\eta\mu)^E \sum_{i=0}^{t-2} \alpha_{t-1}(i) \sum_{k \in B_{t-1}^{(i)}} \beta_i(k)\big[F(w_i^0(k)) - F(w^*)\big] + \frac{E\eta\rho_1\gamma}{2} \\
&\overset{(f)}{\le} \big(1-\gamma+\gamma\,\alpha_{t-1}(t-1)\,(1-\Omega_{t-1}^{(t-1)})\,(1-\eta\mu)^E\big)[F(w_{t-1}) - F(w^*)] + \gamma\,\Omega_{max}\Gamma \\
&\qquad + \gamma(1-\eta\mu)^E \sum_{i=0}^{t-2} \alpha_{t-1}(i) \sum_{k \in B_{t-1}^{(i)}} \beta_i(k)\, \frac{1}{2\mu}\,\|\nabla F(w_i^0(k))\|^2 + \frac{E\eta\rho_1\gamma}{2} \\
&\overset{(g)}{\le} \big(1-\gamma+\gamma\,\alpha_{t-1}(t-1)\,(1-\Omega_{t-1}^{(t-1)})\,(1-\eta\mu)^E\big)[F(w_{t-1}) - F(w^*)] + \gamma\,\Omega_{max}\Gamma \\
&\qquad + \frac{E\eta\rho_1\gamma}{2} + \frac{\gamma\,(1-\alpha_{t-1}(t-1))\,(1-\eta\mu)^E \rho_2}{2\mu} \\
&\overset{(h)}{\le} \big(1-\gamma+\gamma\,\alpha_{t-1}(t-1)\,(1-\Omega_{t-1}^{(t-1)})\,(1-\eta\mu)^E\big)[F(w_{t-1}) - F(w^*)] \\
&\qquad + \frac{\gamma\big(E\rho_1 + (1-\alpha_{t-1}(t-1))\rho_2 + 2\mu\,\Omega_{max}\Gamma\big)}{2\mu}
\end{aligned} \qquad (14)$$
where (a), (b), (c) come from convexity, (d) follows from Lemma 1, (e) comes from $\Omega_{max} = \max_{0 \le i \le t,\, 0 \le t \le T} \Omega_t^{(i)}$ and Assumption 4, (f) is due to $\mu$-strong convexity, (g) is from Assumption 3, and (h) comes from $\eta\mu < 1$.
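As a numeric aside (not part of the original proof), the two geometric contractions used above — the per-step SGD recursion behind Lemma 1 and the per-round recursion behind inequality (14) — can be sanity-checked in a few lines; all constants below are illustrative assumptions, and the per-round factor uses the worst case $\alpha = 1$, $\Omega = 0$:

```python
# Sanity check of the contraction recursions; constants are illustrative.
eta, mu, rho1, E = 0.05, 2.0, 1.0, 20
a = a0 = 3.0                                  # plays the role of F(w_t^0(k)) - F(w*)
for _ in range(E):
    a = (1 - eta * mu) * a + eta * rho1 / 2   # one SGD step, as in inequality (13)
lemma_bound = (1 - eta * mu) ** E * a0 + E * eta * rho1 / 2  # Lemma 1's bound (12)
assert a <= lemma_bound

gamma, T, C = 0.5, 50, 1.7                    # C plays the role of C' in Theorem 1
nu = 1 - gamma + gamma * (1 - eta * mu) ** E  # worst-case per-round factor (alpha=1, Omega=0)
d = d0 = 5.0
for _ in range(T):
    d = nu * d + (1 - nu) * C                 # per-round bound, in the spirit of (14)
closed_form = nu ** T * d0 + (1 - nu ** T) * C  # the unrolled expression in Theorem 1
assert abs(d - closed_form) < 1e-9
```

Unrolling the recursion exactly reproduces the $\nu^T [F(w_0) - F(w^*)] + (1-\nu^T)C'$ form reached at the end of the proof.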
Note that $\sum_{i=0}^{t-1} \alpha_{t-1}(i) = 1$ for all $t$.
Applying the above result to the $T$ global aggregations in the server, we have
$$\begin{aligned}
\mathbb{E}[F(w_T) - F(w^*) \mid w_0]
&\overset{(a)}{=} \mathbb{E}\Big[\, \mathbb{E}[F(w_T) - F(w^*) \mid w_{T-1}] \,\Big|\, w_0\Big] \\
&\overset{(b)}{\le} \mathbb{E}\Big[\big(1-\gamma+\gamma\,\alpha_{T-1}(T-1)\,(1-\Omega_{T-1}^{(T-1)})\,(1-\eta\mu)^E\big)[F(w_{T-1}) - F(w^*)] \,\Big|\, w_0\Big] \\
&\qquad + \frac{\gamma\big(E\rho_1 + (1-\alpha_{T-1}(T-1))\rho_2 + 2\mu\,\Omega_{max}\Gamma\big)}{2\mu} \\
&\overset{(c)}{\le} \prod_{\tau=0}^{T-1} \big(1-\gamma+\gamma\,\alpha_{\tau}(\tau)\,(1-\Omega_{\tau}^{(\tau)})\,(1-\eta\mu)^E\big)[F(w_0) - F(w^*)] + \frac{\gamma\big(E\rho_1 + (1-\alpha_{T-1}(T-1))\rho_2 + 2\mu\,\Omega_{max}\Gamma\big)}{2\mu} \\
&\qquad + \sum_{\tau=1}^{T-1} \frac{\gamma\big(E\rho_1 + (1-\alpha_{T-1-\tau}(T-1-\tau))\rho_2 + 2\,\Omega_{max}\Gamma\big)}{2\mu} \prod_{k=1}^{\tau} \big(1-\gamma+\gamma\,\alpha_{T-k}(T-k)\,(1-\Omega_{T-k}^{(T-k)})\,(1-\eta\mu)^E\big) \\
&\overset{(d)}{\le} \big(1-\gamma+\gamma(1-\eta\mu)^E\big)^T [F(w_0) - F(w^*)] + \big[1-\{1-\gamma+\gamma(1-\eta\mu)^E\}^T\big] \frac{E\rho_1 + \rho_2 + 2\mu\,\Omega_{max}\Gamma}{2\mu\,(1-(1-\eta\mu)^E)} \\
&\overset{(e)}{\le} \big(1-\gamma+\gamma(1-\eta\mu)^E\big)^T [F(w_0) - F(w^*)] + \big[1-\{1-\gamma+\gamma(1-\eta\mu)^E\}^T\big] \frac{E\rho_1 + \rho_2 + 2\mu\,\Omega_{max}\Gamma}{2\eta\mu^2} \\
&= \nu^T [F(w_0) - F(w^*)] + (1-\nu^T)\, C'
\end{aligned}$$
which completes the proof. Here, (a) comes from the law of total expectation, (b) and (c) are due to inequality (14), and (d) comes from $0 \le \alpha_t(i) \le 1$ and $0 \le \Omega_t^{(i)} < 1$ for all $i, t$. In addition, (e) follows since $\eta\mu \le 1$ and $E$ is a positive integer." } ]
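For reference, the server-side aggregation of Eqs. (9)-(11) invoked in steps (b)-(c) of this proof can be sketched as follows; the function and variable names are our own, and the inputs are assumed to be already-filtered benign updates grouped by the round at which they started:

```python
import numpy as np

def aggregate(groups, delta, c, w_t, gamma, t):
    """Sketch of Eqs. (9)-(11). groups[i] lists (m_k, F_pub_value, model) tuples
    for U_t^{(i)}, i.e. updates started at round i (staleness t - i)."""
    vs, masses = [], []
    for group in groups:
        # Eq. (9): within-group weights beta_i(k) proportional to m_k / F_pub^delta
        beta = np.array([m / (fp ** delta) for m, fp, _ in group])
        beta = beta / beta.sum()
        vs.append(sum(b * w for b, (_, _, w) in zip(beta, group)))
        masses.append(sum(m for m, _, _ in group))
    # Eq. (10): across-group weights alpha_t(i) with staleness discount (t - i + 1)^c
    alpha = np.array([mass / (t - i + 1) ** c for i, mass in enumerate(masses)])
    alpha = alpha / alpha.sum()
    z = sum(a * v for a, v in zip(alpha, vs))
    # Eq. (11): damped global update
    return (1 - gamma) * w_t + gamma * z
```

For example, with two equally-sized groups started at rounds 0 and 1, current round t = 1, and c = 1, the fresher group receives twice the weight of the stale one.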
2020
null
SP:a27d66876fcdc3f3871485445e09041a8927b147
[ "the paper aims to explain the success of BYOL, a recently proposed contrastive method that mysteriously avoids the trivial constant solution without requiring negative samples. The paper proposes a new loss named RAFT. Compared to BYOL, RAFT is more general since it subsumes a variation of BYOL as its special case, and contains a cross-model term to be maximized which regularizes the alignment loss and encourages the online encoder to \"run away\" from the mean teacher.", "The paper provides a new perspective on the BYOL self-supervised learning method. First, the paper introduces an upper-bound objective, BYOL', that is easier to analyze than BYOL because it is composed of two well understood losses: an alignment loss and cross-model loss. Further, it shows empirically that optimizing BYOL' is similar to optimizing BYOL. Second, the paper introduces the RAFT method which maximizes the alignment loss instead of minimizing it. The paper proves that under some assumptions, such as a linear predictor function, optimizing BYOL' is equivalent to RAFT. Based on this analysis, the paper explains why the predictor function is essential for BYOL and why it is hard to achieve convergence." ]
Recently, a newly proposed self-supervised framework Bootstrap Your Own Latent (BYOL) seriously challenges the necessity of negative samples in contrastive learning frameworks. BYOL works like a charm despite the fact that it discards the negative samples completely and there is no measure to prevent collapse in its training objective. In this paper, we suggest understanding BYOL from the view of our newly proposed interpretable self-supervised learning framework, Run Away From your Teacher (RAFT). RAFT optimizes two objectives at the same time: (i) aligning two views of the same data to similar representations and (ii) running away from the model’s Mean Teacher (MT, the exponential moving average of the history models) instead of BYOL’s running towards it. The second term of RAFT explicitly prevents the representation collapse and thus makes RAFT a more conceptually reliable framework. We provide basic benchmarks of RAFT on CIFAR10 to validate the effectiveness of our method. Furthermore, we prove that BYOL is equivalent to RAFT under certain conditions, providing solid reasoning for BYOL’s counter-intuitive success.
[]
[ { "authors": [ "Ben Athiwaratkun", "Marc Finzi", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "There are many consistent explanations of unlabeled data: Why you should average", "venue": null, "year": 2019 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "Devon Hjelm" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-sgd: Biasing gradient descent into wide valleys", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Stuart Geman", "Elie Bienenstock", "René Doursat" ], "title": "Neural networks and the bias/variance dilemma", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer 
Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V Le" ], "title": "Do better imagenet models transfer better", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A Abolafia", "Jeffrey Pennington", "Jascha SohlDickstein" ], "title": "Sensitivity and generalization in neural networks: an empirical study", "venue": "arXiv preprint arXiv:1802.08760,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" 
], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alexander A Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": "arXiv preprint arXiv:2005.10242,", "year": 2020 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and intelligent laboratory systems,", "year": 1987 }, { "authors": [ "Chengxu Zhuang", "Alex Lin Zhai", "Daniel Yamins" ], "title": "Local aggregation for unsupervised learning of visual embeddings", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Novak et al", "Chaudhari" ], "title": "2019), among which the major conclusion states that the consistency loss between the student and its MT acts as a 
regularizer for better generalization. The proven properties of MT might lead us to focus on how the online network’s learning from MT effectively regularizes", "venue": null, "year": 2019 }, { "authors": [ "Grill" ], "title": "2020) and train the BYOL on the training set for 300 epochs with batch size", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, the performance gap between self-supervised learning and supervised learning has been narrowed thanks to the development of contrastive learning (Chen et al., 2020b;a; Tian et al., 2019; Chen et al., 2020b; Sohn, 2016; Zhuang et al., 2019; He et al., 2020; Oord et al., 2018; Hadsell et al., 2006). Contrastive learning distinguishes positive pairs of data from negative ones. It has been shown that when the representation space is l2-normalized, i.e. a hypersphere, optimizing the contrastive loss is approximately equivalent to optimizing the alignment of positive pairs and the uniformity of the representation distribution at the same time (Wang & Isola, 2020). This equivalence conforms to our intuitive understanding. One can easily imagine a failed method if only one of the two properties is optimized: aligning the positive pairs without a uniformity constraint causes representation collapse, mapping different data all to the same point; scattering the data uniformly in the representation space without aligning similar ones yields no more meaningful representations than random ones.
The proposal of Bootstrap Your Own Latent (BYOL) fiercely challenges our consensus that negative samples are necessary for contrastive methods (Grill et al., 2020). BYOL trains the model (the online network) to predict its Mean Teacher (the moving average of the online network; refer to Appendix B.2) on two augmented views of the same data (Tarvainen & Valpola, 2017). There is no explicit constraint on uniformity in BYOL, yet the expected collapse never happens; what's more, it reaches SOTA performance on the downstream tasks. Although BYOL has been empirically proven to be an effective self-supervised learning approach, the mechanism that keeps it from collapsing remains unrevealed. Without disclosing this mystery, it would be disturbing for us to adapt BYOL to other problems, let alone further improve it.
Therefore, solving the puzzle of BYOL is an urgent task.
In this paper, we explain how BYOL works through another interpretable learning framework which leverages the MT in the exact opposite way. Based on a series of theoretical derivations and empirical approximations, we build a new self-supervised learning framework, Run Away From your Teacher (RAFT), which optimizes two objectives at the same time: (i) minimize the representation distance between two samples from a positive pair and (ii) maximize the representation distance between the online network and its MT. The second objective of RAFT incorporates the MT in a way exactly opposite to BYOL, and it explicitly prevents representation collapse by encouraging the online network to be different from its history (Figure 2a). Moreover, we empirically show that the second objective of RAFT is a more effective and consistent regularizer for the first objective, which makes RAFT more favorable than BYOL. Finally, we solve the puzzle of BYOL by theoretically proving that BYOL is a special form of RAFT when certain conditions and approximations hold. This proof explains why collapse does not happen in BYOL, and it also makes the performance of BYOL′ an approximate guarantee of the effectiveness of RAFT.
The main body of the paper is organized in the same order in which we explore the properties of BYOL and establish RAFT based on them (refer to Appendix A for more details). In section 3, we investigate the phenomenon that BYOL fails to work when the predictor is removed. In section 4, we establish two meaningful objectives out of BYOL by upper bounding. Based on that, we propose RAFT due to its stronger regularization effect and its accordance with our knowledge.
In section 5, we prove that, as a representation learning framework, BYOL is a special form of RAFT under certain achievable conditions.
In summary, our contributions are listed as follows:
• We present a new self-supervised learning framework, RAFT, that minimizes the alignment loss and maximizes the distance between the online network and its MT. The motivation of RAFT conforms to our understanding of balancing alignment and uniformity of the representation space, and thus it can be easily extended and adapted to future problems.
• We equate two seemingly opposite ways of incorporating MT in contrastive methods under certain conditions. By doing so, we unravel the puzzle of how BYOL avoids representation collapse." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 TWO METRICS OPTIMIZED IN CONTRASTIVE LEARNING", "text": "Optimizing the contrastive learning objective has been empirically shown to correlate positively with downstream task performance (Chen et al., 2020b;a; Tian et al., 2019; Chen et al., 2020b; Sohn, 2016; Zhuang et al., 2019; He et al., 2020; Oord et al., 2018). Wang & Isola (2020) put contrastive learning in the context of the hypersphere and formally show that optimizing the contrastive loss (for preliminaries of contrastive learning, refer to Appendix B.1) is equivalent to optimizing two metrics of the encoder network when the number of negative samples $K$ is sufficiently large: the alignment of the two augmented views of the same data and the uniformity of the representation population. We introduce the alignment objective and the uniformity objective as follows.
Definition 2.1 (Alignment loss) The alignment loss $\mathcal{L}_{align}(f; \mathcal{P}_{pos})$ of the function $f$ over the positive-pair distribution $\mathcal{P}_{pos}$ is defined as:
$$\mathcal{L}_{align}(f; \mathcal{P}_{pos}) \triangleq \mathbb{E}_{(x_1, x_2) \sim \mathcal{P}_{pos}}\big[\|f(x_1) - f(x_2)\|_2^2\big], \qquad (1)$$
where the positive pair $(x_1, x_2)$ consists of two augmented views of the same input data $x \sim \mathcal{X}$, i.e.
$(x_1, x_2) = (t_1(x), t_2(x))$, where $t_1 \sim \mathcal{T}_1$ and $t_2 \sim \mathcal{T}_2$ are two augmentations. For the sake of simplicity, we omit $\mathcal{P}_{pos}$ and use $\mathcal{L}_{align}(f)$ in the following content.
Definition 2.2 (Uniformity loss) The uniformity loss $\mathcal{L}_{uniform}(f; \mathcal{X})$ of the encoder function $f$ over the data distribution $\mathcal{X}$ is defined as
$$\mathcal{L}_{uniform}(f; \mathcal{X}) \triangleq \log \mathbb{E}_{(x, y) \sim \mathcal{X}^2}\big[e^{-t\|f(x) - f(y)\|_2^2}\big], \qquad (2)$$
where $t > 0$ is a fixed parameter, empirically set to $t = 2$. Note that vectors in the representation space are automatically l2-normalized, i.e. $f(x) \triangleq f(x)/\|f(x)\|_2$, as we limit the representation space to a hypersphere following Wang & Isola (2020) and Grill et al. (2020); the representation vectors in the following context are also automatically l2-normalized, unless specified otherwise. Wang & Isola (2020) have empirically demonstrated that balancing the alignment loss and the uniformity loss is necessary when learning representations through a contrastive method. The rationale behind it is straightforward: $\mathcal{L}_{align}$ provides the motive power that concentrates similar data, and $\mathcal{L}_{uniform}$ prevents the encoder from mapping all the data to the same meaningless point." }, { "heading": "2.2 BYOL: BIZARRE ALTERNATIVE OF CONTRASTIVE", "text": "A recently proposed self-supervised representation learning algorithm, BYOL, hugely challenges the common understanding that alignment should be balanced by negative samples during contrastive learning. It establishes two networks, online and target, which approach each other during training. The online network is trained to predict the target's representations, and the target is the Exponential Moving Average (EMA) of the parameters of the online network. The loss of BYOL at every iteration can be written as
$$\mathcal{L}_{BYOL} \triangleq \mathbb{E}_{(x, t_1, t_2) \sim (\mathcal{X}, \mathcal{T}_1, \mathcal{T}_2)}\Big[\big\|q_w(f_\theta(t_1(x))) - f_\xi(t_2(x))\big\|_2^2\Big], \qquad (3)$$
where the two vectors in the representation space are automatically l2-normalized, $f_\theta$ is the online encoder network parameterized by $\theta$, and $q_w$ is the predictor network parameterized by $w$.
$x \sim \mathcal{X}$ is the input sampled from the data distribution $\mathcal{X}$, and $t_1(x), t_2(x)$ are two augmented views of $x$, where $t_1 \sim \mathcal{T}_1, t_2 \sim \mathcal{T}_2$ are two data augmentations. The target network $f_\xi$ has the same architecture as $f_\theta$ and is updated by EMA, with $\tau$ controlling to what degree the target network preserves its history:
$$\xi \leftarrow \tau\xi + (1 - \tau)\theta. \qquad (4)$$
From the scheme of BYOL training, it seems that there is no constraint on uniformity, and thus the most frequently asked question about BYOL is how it prevents representation collapse. Theoretically, we would expect that when the final convergence of the online and target networks is reached, $\mathcal{L}_{BYOL}$ degenerates to $\mathcal{L}_{align}$ and therefore causes representation collapse, yet this never happens in reality. Despite the perfect SOTA performance of BYOL, there is one inconsistency not to be neglected: it fails with representation collapse when the predictor is removed, which means $q_w(x) = x$ for any given $x$. This inconsistent behavior of BYOL weakens its reliability and further poses questions on future adaptations of the algorithm. The motivation of understanding and even solving this inconsistency is the starting point of this paper." }, { "heading": "3 ON-AND-OFF BYOL: FAILURE WITHOUT PREDICTOR", "text": "We start by presenting a dissatisfactory property of BYOL: its success heavily relies on the existence of the predictor $q_w$. The experimental setup of this paper is listed in Appendix C. The performance of the original BYOL model, whose predictor $q_w$ is a two-layer MLP with batch normalization, evaluated on the linear evaluation protocol (Kolesnikov et al., 2019; Kornblith et al., 2019; Chen et al., 2020a; He et al., 2020; Grill et al., 2020), reaches 68.08 ± 0.84%. When the predictor is removed, the performance degenerates to 20.92 ± 1.29%, which is even lower than the random baseline's 42.74 ± 0.41%. We examine the speculation that the performance drop is caused by representation collapse both visually (refer to Appendix F.1) and numerically.
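To make Definitions 2.1-2.2 and Eqs. (3)-(4) above concrete, here is a minimal NumPy sketch; the helper names are our own, and representations are l2-normalized as in the paper:

```python
import numpy as np

def l2n(z):
    # project rows onto the unit hypersphere, as the paper assumes throughout
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def alignment_loss(z1, z2):
    # Definition 2.1: mean squared distance between l2-normalized positive pairs
    return np.mean(np.sum((l2n(z1) - l2n(z2)) ** 2, axis=1))

def uniformity_loss(z, t=2.0):
    # Definition 2.2: log of the mean Gaussian potential over all pairs
    z = l2n(z)
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    return np.log(np.mean(np.exp(-t * sq)))

def byol_loss(pred_online, target):
    # Eq. (3) on l2-normalized vectors, averaged over a batch
    return np.mean(np.sum((l2n(pred_online) - l2n(target)) ** 2, axis=1))

def ema_update(target_params, online_params, tau=0.99):
    # Eq. (4): xi <- tau * xi + (1 - tau) * theta
    return [tau * xi + (1 - tau) * th for xi, th in zip(target_params, online_params)]
```

A fully collapsed batch attains the maximal (worst) uniformity value of 0, while any spread-out batch attains a strictly negative value, which is how Table 1 diagnoses collapse.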
Inspired by Wang & Isola (2020), we use $\mathcal{L}_{uniform}(f_\theta; \mathcal{X})$ to evaluate to what degree the representations are spread on the hypersphere and $\mathcal{L}_{align}(q_w \circ f_\theta)$ to evaluate how similar samples are aligned in the representation space. The results in Table 1 show that with the predictor, BYOL optimizes the uniformity of the representation distribution. On the contrary, when the predictor is taken away, the alignment of the two augmented views is overly optimized and the uniformity of the representation deteriorates (Figure 4); therefore, we conclude that the predictor is essential to the collapse prevention in BYOL.
One reasonable follow-up explanation of the efficacy of the predictor might appeal to its specially designed architecture or to some good properties brought by the weight initialization, which would make the mechanism behind it hard to understand. Fortunately, after replacing the current predictor, a two-layer MLP with batch normalization (Ioffe & Szegedy, 2015), with different network architectures and weight initializations, we find that there is no significant change either in the linear evaluation protocol or in the model behavior during training (Table 1; for detailed training trajectories, refer to Figure 4). We first replace the complex structure with a linear mapping $q_w(\cdot) = W(\cdot)$. This replacement admits a naive solution to representation collapse, $W = I$, yet it never converges to this apparent collapse. Surprisingly, even when we go harsher on this linear predictor by initializing $W$ with the apparent collapse solution $I$, the model seems to have a self-recovering mechanism even though it starts off at a poor position: the loss quickly approaches 0 and the uniformity deteriorates for 10-20 epochs, then the model suddenly deflects from the collapse and keeps on the right track.
We provide a theoretical proof that a randomly initialized linear predictor prevents the (more strict form of) representation collapse by creating infinitely many non-trivial solutions when convergence is achieved (refer to Appendix I), while we fail to correlate the consistently optimized uniformity with the presence of the predictor, which indicates that a deeper rationale needs to be found." }, { "heading": "4 RUN AWAY FROM YOUR TEACHER: MORE EFFECTIVE REGULARIZER", "text": "" }, { "heading": "4.1 DISENTANGLE THE BYOL LOSS BY UPPER BOUNDING", "text": "Analyzing $\mathcal{L}_{BYOL}$ is hard, since it only has a single mean-squared-error term and there are many factors entangled within it, e.g., the two augmented views of the same data, the predictor, and the EMA updating rule. Inspired by the Bias-Variance decomposition of squared loss (Geman et al., 1992), we extract the alignment loss by subtracting and adding the same term $q_w(f_\theta(t_2(x)))$ and thereby obtain an upper bound of $\mathcal{L}_{BYOL}$. For details, please refer to Appendix G.
Definition 4.1 (Cross-model loss) The cross-model loss $\mathcal{L}_{cross\text{-}model}(f, g; \mathcal{X})$ of the functions $f$ and $g$ over the data distribution $\mathcal{X}$ is defined as
$$\mathcal{L}_{cross\text{-}model}(f, g; \mathcal{X}) \triangleq \mathbb{E}_{x \sim \mathcal{X}}\big[\|f(x) - g(x)\|_2^2\big]. \qquad (5)$$
Definition 4.2 (BYOL′ loss) The BYOL′ loss $\mathcal{L}_{BYOL'}$ is defined as
$$\mathcal{L}_{BYOL'} \triangleq \alpha\,\mathcal{L}_{align}(q_w \circ f_\theta; \mathcal{P}_{pos}) + \beta\,\mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi; \mathcal{X}_2), \qquad (6)$$
where $\alpha, \beta > 0$ are constants, $\mathcal{P}_{pos}$ is defined in Eq. 1, and $\mathcal{X}_2 = \mathcal{T}_2(\mathcal{X})$ is the distribution of the augmented data. For the sake of simplicity, we use $\mathcal{L}_{align}(q_w \circ f_\theta)$ to denote $\mathcal{L}_{align}(q_w \circ f_\theta; \mathcal{P}_{pos})$ in the following content. For the sake of symmetry, we use $\mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta)$ to denote $(1/2)[\mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi; \mathcal{X}_1) + \mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi; \mathcal{X}_2)]$ when computing the cross-model loss.
Theorem 4.1 ($\mathcal{L}_{BYOL'}$ is an upper bound of $\mathcal{L}_{BYOL}$) $\mathcal{L}_{BYOL'}$ is an upper bound of $\mathcal{L}_{BYOL}$ up to scalar multiplication. Concretely speaking, for any given constants $\alpha, \beta > 0$, we have
$$\mathcal{L}_{BYOL} \le \Big(\frac{1}{\alpha} + \frac{1}{\beta}\Big)\mathcal{L}_{BYOL'}.$$
(7)
Proof Please refer to Appendix G.
Ideally, minimizing $\mathcal{L}_{BYOL'}$ would yield performance similar to minimizing $\mathcal{L}_{BYOL}$. We exemplify the legitimacy of $\mathcal{L}_{BYOL'}$ by setting $(\alpha, \beta) = (1, 1)$. In Table 1, the performances of BYOL and BYOL′ are close to each other with respect to three metrics: alignment, uniformity, and the downstream linear evaluation protocol, regardless of the form of the predictor. When the predictor is a linear mapping, the performance differences between them are subtle. Besides, when the predictor is removed, representation collapse also happens to BYOL′. So we conclude that optimizing $\mathcal{L}_{BYOL'}$ is almost equivalent to optimizing $\mathcal{L}_{BYOL}$. In spite of the performance similarity, $\mathcal{L}_{BYOL'}$ has a more disentangled form than $\mathcal{L}_{BYOL}$, and therefore we focus on studying the former instead of the latter. The new objective consists of two terms: the first term, $\mathcal{L}_{align}$, minimizes the representation distance between samples from a positive pair and has already been shown to be crucial to successful contrastive methods (Wang & Isola, 2020). Intuitively, it provides the motive power to concentrate similar data in the representation space. Based on the form of BYOL′, we conclude that MT is used to regularize the alignment loss. This perspective of two terms regularizing each other is crucial to our analysis and improvement of the original BYOL framework. Understanding why BYOL works without collapse is approximately equivalent to understanding how minimizing $\mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi)$ effectively regularizes the alignment loss, or even actively optimizes the uniformity." }, { "heading": "4.2 RAFT: RUN AWAY FROM YOUR TEACHER", "text": "The major difficulty of correlating $\mathcal{L}_{cross\text{-}model}$ with $\mathcal{L}_{uniform}$ is that their optimization intentions are not only unrelated, but somewhat opposite. Minimizing the cross-model loss asks the network to produce close representations for certain inputs, while optimizing the uniformity loss requires it to produce varying representations.
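The inequality of Theorem 4.1 above is an instance of Young's (Peter-Paul) inequality, $\|u - v\|^2 \le (1 + \beta/\alpha)\|u - w\|^2 + (1 + \alpha/\beta)\|w - v\|^2$, which equals $(1/\alpha + 1/\beta)(\alpha\|u - w\|^2 + \beta\|w - v\|^2)$; a quick numeric check of this fact (our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.5, 0.7
ok = True
for _ in range(100):
    # u, v play the two BYOL representations; w plays the inserted middle term
    u, w, v = rng.normal(size=(3, 8))
    lhs = np.sum((u - v) ** 2)                  # the BYOL-style squared error
    rhs = (1 / alpha + 1 / beta) * (alpha * np.sum((u - w) ** 2)
                                    + beta * np.sum((w - v) ** 2))
    ok = ok and (lhs <= rhs + 1e-9)
assert ok
```

The bound holds for every choice of $\alpha, \beta > 0$, which is why the theorem can leave the constants free.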
The disparity residing in the form pushes us to question the original motivation of BYOL: do we really want the online network to approach the Mean Teacher? To test our suspicion, we minimize $[\mathcal{L}_{align}(q_w \circ f_\theta) - \mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi)]$ instead of $[\mathcal{L}_{align}(q_w \circ f_\theta) + \mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi)]$, and we find that it works as well. This bizarre phenomenon will be explained in Section 5. Removing the predictor, we observe that although minimizing $[\mathcal{L}_{align}(f_\theta) - \mathcal{L}_{cross\text{-}model}(f_\theta, f_\xi)]$ fails to yield better representations than the random baseline, it prevents the overly-optimized alignment loss, i.e. it works as an effective regularizer for the alignment loss, while minimizing $\mathcal{L}_{cross\text{-}model}(f_\theta, f_\xi)$ does not. Based on the conclusion above and the law of Occam's Razor, we propose a new self-supervised learning framework, Run Away From your Teacher (RAFT), which optimizes two learning objectives simultaneously: (i) minimize the alignment loss of two samples from a positive pair and (ii) maximize the distance between the online network and its MT (refer to Figure 1 and Algorithm 1).
Definition 4.3 (RAFT loss) The RAFT loss $\mathcal{L}_{RAFT}$ is defined as
$$\mathcal{L}_{RAFT} \triangleq \alpha\,\mathcal{L}_{align}(q_w \circ f_\theta; \mathcal{P}_{pos}) - \beta\,\mathcal{L}_{cross\text{-}model}(q_w \circ f_\theta, f_\xi; \mathcal{X}_2), \qquad (8)$$
where $\alpha, \beta > 0$ are constants and the other components follow Definition 4.2.
Compared to BYOL and BYOL′, RAFT better conforms to our knowledge and is a conceptually non-collapsing algorithm. There has been a lot of work demonstrating that weight averaging is roughly equal to sample averaging (Tarvainen & Valpola, 2017); thus, if two samples' representations are close to each other at the beginning and their initial updating directions are opposite, then RAFT consistently separates them in the representation space. All the forms of loss terms could be classified into three categories: uniformity optimizer, effective regularizer for the alignment loss, and others (refer to Figure 2b).
According to our experiments, when the predictor is removed, running away from MT remains an effective regularizer for the alignment loss while BYOL's running towards MT fails to do so; thus, RAFT has a more unified and consistent form. In summary, our proposed learning framework RAFT is directly motivated by solving the inconsistency of the predictor in BYOL, and it improves on BYOL in three respects:
• Consistency. Compared to BYOL, our newly proposed method has an effective regularizer for the alignment loss regardless of the presence of the predictor.
• Interpretability. The mean teacher uses the technique of weight averaging and thus could be considered an approximate ensemble of the previous versions of the model. Running away from the mean teacher intuitively encourages the diversity of the representation, which is positively correlated with the uniformity.
• Disentanglement. The learning objective is decoupled into aligning two augmented views and running away from the mean teacher, and hence the two parts can be studied independently.
We will discuss the relationship between RAFT and BYOL′ in the next section, where we find that BYOL′ is a special form of RAFT under certain conditions, which makes the performance of BYOL′ a guarantee of the effectiveness of RAFT. We provide benchmarks of alignment, uniformity, and downstream linear evaluation performance on CIFAR10 (Table 3). We discover that balancing the alignment loss and the cross-model loss is not easy once the predictor is taken away. An imbalance between the alignment loss and the cross-model loss leads to representation collapse or an over-regularized alignment where every data point is randomly projected. One interesting research direction is to study the efficacy of the predictor; the reason why it helps the two terms achieve an equilibrium remains to be answered.
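A minimal sketch of the RAFT objective in Eq. (8) (our own function names; the Mean Teacher outputs are treated as constants, i.e. no gradient would flow through them in training):

```python
import numpy as np

def _norm(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def raft_loss(p1, p2, p_mt, alpha=1.0, beta=1.0):
    """Eq. (8): align the two views, run away from the Mean Teacher.

    p1, p2: predictor outputs q_w(f_theta(t1(x))), q_w(f_theta(t2(x))), shape (n, d)
    p_mt:   Mean Teacher representations f_xi(t2(x)), held fixed during the update
    """
    p1, p2, p_mt = _norm(p1), _norm(p2), _norm(p_mt)
    align = np.mean(np.sum((p1 - p2) ** 2, axis=1))          # minimized
    cross_model = np.mean(np.sum((p2 - p_mt) ** 2, axis=1))  # maximized (minus sign)
    return alpha * align - beta * cross_model
```

On the unit sphere the loss is bounded below (each squared distance is at most 4), so maximizing the cross-model term cannot diverge; it only pushes the online away from its history.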
}, { "heading": "5 UNDERSTANDING BYOL VIA RAFT", "text": "In Section 4.1 we derive an upper bound LBYOL′ of LBYOL and explicitly extract two terms, Lalign and Lcross-model. In BYOL′, the two terms are simultaneously minimized, while in RAFT, we minimize Lalign but maximize Lcross-model instead. To clearly distinguish the two objectives, we rewrite them as follows:

LBYOL′ = αLalign(qw ◦ fθ) + βLcross-model(qw ◦ fθ, fξ), (9) LRAFT = αLalign(qw ◦ fθ)− βLcross-model(qw ◦ fθ, fξ), (10)

where α, β > 0 are constants.

In form, LRAFT and LBYOL′ seem to evolve in opposite optimizing directions on the second term, but our empirical study has shown that both of them work. How can two opposite optimization goals produce a similar effect? Since RAFT is a conceptually working method, we analyze the mechanism of BYOL′ by establishing an equivalence between the parameters of BYOL′ and RAFT under mild conditions. Theorem 5.1 (One-to-one correspondence between BYOL′ and RAFT) There is a one-to-one correspondence between parameter trajectories of BYOL′ and RAFT when the following three conditions hold:

i. the representation space is a hypersphere;

ii. the predictor is a linear transformation, i.e. qw(·) = W (·);

iii. only the tangential component of the gradient on the hypersphere is preserved.

Proof We prove the theorem by construction. For the details, please refer to Appendix H.

Remark The third condition conforms to the geometry of the hypersphere representation space and is easy to achieve. One can preserve only the tangential gradient by slightly modifying the loss. For example, suppose the representation of the MT is z̄ and the representation of the input is z, both normalized; then the cross-model loss ‖z̄ − z‖₂² can be revised as ‖z̄ − λz‖₂²/λ, where λ = sg(〈z, z̄〉) denotes stopping the gradient of the inner product 〈z, z̄〉.
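The remark above can be checked numerically. The following numpy sketch (variable names are ours, not the paper's) treats λ = 〈z, z̄〉 as a stop-gradient constant and verifies that the resulting gradient of ‖z̄ − λz‖²/λ with respect to z is orthogonal to z, i.e. purely tangential to the unit hypersphere at z.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

z = unit(rng.normal(size=8))      # normalized online representation
z_bar = unit(rng.normal(size=8))  # normalized mean-teacher representation

# lambda = sg(<z, z_bar>): held constant during differentiation.
lam = float(z @ z_bar)
# With lam fixed, the gradient of ||z_bar - lam * z||^2 / lam w.r.t. z is:
grad = -2.0 * (z_bar - lam * z)

# grad . z = -2 * (<z_bar, z> - lam * ||z||^2) = -2 * (lam - lam) = 0,
# so the gradient has no normal component on the unit sphere.
tangential_check = grad @ z
```

The cancellation holds exactly because ‖z‖ = 1, which is why the trick relies on the hypersphere condition (i).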
Our experiments in Table 1 demonstrate that preserving only the tangential component of the gradient does not turn any of the algorithms, including BYOL, BYOL′ and RAFT, into a collapsed one.

In Theorem 5.1, we show that optimizing LBYOL′ with initial parameters (θ(0),W (0)) is equivalent to optimizing LRAFT with initial parameters (θ(0),−W (0)) when the aforementioned three conditions are satisfied. This equivalence demonstrates that the final encoder networks fθ and fθ′ are equal to each other. Therefore we conclude that, as a representation learning framework, BYOL′ is equivalent to our newly proposed RAFT.

From a geometric point of view, the optimization process consists of the data points moving in the representation space under the guidance of the training loss. The loss function measures the potential energy of the parameters, and the gradient with regard to the data points is the motive force. If the representation space is a hypersphere as in BYOL, then the tangential force, i.e. the tangential component of the gradient, is the only key to scattering or concentrating the data points in the representation space. By the central symmetry of the hypersphere, clockwise and counterclockwise moving directions are equivalent to some extent; for example, pushing a point by π/2 and pulling it by π/2 on the 2-dimensional sphere causes the same effect.

The equivalence between BYOL′ and RAFT offers us a direct way to understand some strange phenomena we observe, which are also reported in the original BYOL paper. Firstly, the non-collapse of BYOL is explained, since RAFT is an intuitively and practically working algorithm. The equivalence of BYOL′ and RAFT when the predictor is linear helps us understand why BYOL is an effective self-supervised learning algorithm. It also answers our initial question of why BYOL fails to avoid representation collapse without the predictor: removing the predictor means fixing W = I, which breaks RAFT’s designing principle of running away from the MT.
Secondly, although BYOL’s optimization procedure takes the form of two models approaching each other, no convergence has been reported in the original paper. The established equivalence perfectly explains this: RAFT incorporates the MT in an extremely dynamic way since the MT continuously varies from the history models, so the data points never converge. Nor do the parameters." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this paper, we address the problem of why the newly proposed self-supervised learning framework Bootstrap Your Own Latent (BYOL) works without negative samples. By decomposing, upper bounding and approximating the original loss of BYOL, we establish another interpretable self-supervised learning method, Run Away From your Teacher. We show that RAFT contains an explicit term that prevents representation collapse, and we also empirically validate the effectiveness of RAFT. By constructing a one-to-one correspondence from RAFT to BYOL′ (a variant of BYOL), we explain the mechanism that makes BYOL work, which in turn implies the potential of our proposed RAFT. Based on these observations and conclusions, we offer several suggestions for future work:

Theoretical guarantees of RAFT. Though we have intuitively explained why running away from the MT is an effective regularizer, we do not provide theoretical guarantees that optimizing RAFT is favorable for representation learning. In the future, one can try to relate RAFT to the theory of Mutual Information (MI) maximization (Belghazi et al., 2018; Hjelm et al., 2018; Tschannen et al., 2019), as InfoNCE, the training objective of contrastive learning, has been proven to be a lower bound of the MI (Poole et al., 2019). One detail should be noted when attempting to correlate RAFT with MI maximization.
Even though RAFT is an effective regularizer, it fails to yield good-quality representations when the predictor is removed; thus any theoretical proof of the effectiveness of RAFT should also explain the mechanism behind this extra predictor.

On the efficacy of the predictor. It has become a popular and almost standardized practice to add an extra MLP on top of the network in contrastive learning methods (Chen et al., 2020a;b; Grill et al., 2020), while most of the work adopts this method as a special trick without considering the effect this MLP brings to the algorithm. In this paper, however, we find that this extra MLP may bring some unexpected properties to the original training objective: although the representations are optimized with disparate motivations (in our paper, BYOL′ running towards the MT and RAFT running away from the MT), the encoder network is trained to be exactly the same. This observation indicates that the mechanism by which the extra MLP affects the network needs to be further studied." }, { "heading": "A MAIN THREAD OF PAPER", "text": "The proposal of RAFT is based on a series of theoretical derivations and empirical approximations. Therefore the logic chain of our paper is fundamental to the legitimacy of our explanation of BYOL and the superiority of our newly proposed RAFT. Here we organize our main thread in the same order as the sections, to provide readers with a clear view.

In Section 3,

• As a learning framework, BYOL does not consistently work. It heavily relies on the existence of the predictor.
We want to understand why this inconsistency exists.

• The architecture of the predictor does not affect the collapse of BYOL; the fact that the linear predictor qw(·) = W (·) prevents collapse will be used as a crucial condition in Section 5.

In Section 4.1,

• A new disentangled objective LBYOL′ = αLalign(qw ◦ fθ) + βLcross-model(qw ◦ fθ, fξ) is established by upper bounding.

• We show that minimizing LBYOL′ is close to minimizing LBYOL in terms of alignment, uniformity, and the linear evaluation protocol, which indicates that understanding the behavior of optimizing BYOL’s upper bound is approximately equivalent to understanding BYOL.

In Section 4.2,

• We find that minimizing [Lalign(qw ◦ fθ) − Lcross-model(qw ◦ fθ, fξ)] works as well, which incorporates the cross-model loss in exactly the opposite way from BYOL′.

• Based on the observation above, we propose a new self-supervised learning approach, Run Away From your Teacher, which regularizes Lalign(qw ◦fθ) by maximizing Lcross-model(qw ◦ fθ, fξ). Compared with BYOL, RAFT accords more with our common understanding.

• Additional experiments show that without the predictor, BYOL′ fails to regularize Lalign(fθ), let alone optimize uniformity. On the contrary, although not able to actively optimize uniformity either, RAFT’s maximizing of Lcross-model(fθ, fξ) continues to be an effective regularizer for Lalign(fθ), which makes it more favorable (Figure 2b).

In Section 5,

• We prove that when the predictor is linear (qw = W ) and the representation space is a hypersphere where only the tangential component of the gradient is preserved during training, minimizing Lcross-model(W ◦ fθ, fξ) and maximizing it yield the same encoder fθ.

• Based on the equivalence above, we conclude that BYOL′ is a special case of RAFT under the conditions above. The established equivalence helps explain several counterintuitive behaviors of BYOL."
}, { "heading": "B BACKGROUND AND RELATED WORK", "text": "B.1 CONTRASTIVE LEARNING

Contrastive methods rely on the assumption that two views of the same data point share information and thus form a positive pair. By separating the positives from the negatives, the neural network trained by the algorithm learns to extract the most useful information from the data and performs better on the downstream tasks. Typically, the algorithm uses the InfoNCE objective:

Lcontrast(h,K) = E(x,x+)∼Ppos, {x−i}Ki=1∼XK [ − log ( e^{h(x,x+)} / ( e^{h(x,x+)} + ∑Ki=1 e^{h(x,x−i)} ) ) ], (11)

where (x, x+) is sampled from the positive-pair distribution Ppos, which is built by a series of data augmentation functions [ref]. The negative samples {x−i}Ki=1 are sampled i.i.d. K times from the data distribution X; the function h(x, y) measures the similarity between the two inputs (x, y). Empirically, for the sake of symmetry, the measurement function h(x, y) = d(f(x), f(y)) consists of an encoder f(·) and a similarity metric d(·, ·) evaluating how close the two representations are.

B.2 MEAN TEACHER

There is one type of semi-supervised learning method that BYOL constantly reminds people of: Mean Teacher (MT) (Tarvainen & Valpola, 2017; Laine & Aila, 2016). Like BYOL, MT also adopts a Teacher-Student (T-S) framework, where the teacher network is the EMA of the student network. An additional consistency loss between the teacher and the student is applied on top of the supervised signals. There has been a lot of work demonstrating the efficacy of MT (Athiwaratkun et al., 2019; Novak et al., 2018; Chaudhari et al., 2019), among which the major conclusion is that the consistency loss between the student and its MT acts as a regularizer for better generalization. The proven properties of MT might lead us to focus on how the online network’s learning from the MT effectively regularizes Lalign in BYOL. In this paper, however, we propose the opposite way of leveraging the MT in contrastive methods."
}, { "heading": "C EXPERIMENTAL SETUP", "text": "Dataset Our main goal is to unravel the mystery of why BYOL does not collapse during training and to resolve the predictor inconsistency. The most important metric is whether the algorithm collapses or not; we do not aim to develop a more powerful self-supervised learning algorithm that surpasses the SOTA on large datasets. In this respect, we limit our experiments to the scope of the CIFAR10 dataset. Each image is resized from 32× 32 to 96× 96. This change is the consequence of a tradeoff between the effect of data augmentation and the batch size: a larger image allows a more subtle and informative data augmentation scheme, while it reduces the training batch size, which has been empirically shown to be harmful to model performance.

Model architecture In our experiments, the model is composed of three stages: an encoder fθ that adopts the ResNet18 architecture (without the classifier on top); a projector gθ composed of a linear layer with output size 512, batch normalization, rectified linear units (ReLU), and a final linear layer with output size 128; and a predictor qw with the same architecture as the projector but without the batch normalization.

Training We adopt the same data augmentation scheme that is used in Chen et al. (2020a) and Grill et al. (2020) and train BYOL on the training set for 300 epochs with batch size 128 over 3 random seeds. The training objective is specified accordingly, and the model is trained with the Adam optimizer with learning rate 3 × 10−4 (Kingma & Ba, 2014). Unless stated otherwise, we update the target network with the EMA rate 4× 10−3 without the cosine smoothing trick.
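The target-network update described in the Training paragraph (ξ ← τξ + (1−τ)θ with 1−τ = 4×10⁻³ per step) can be sketched as follows; the parameter vectors here are toy stand-ins for the real network weights, and the function name is our own.

```python
import numpy as np

def ema_update(xi, theta, ema_rate=4e-3):
    # Target update: xi <- tau * xi + (1 - tau) * theta, with tau = 1 - ema_rate.
    return (1.0 - ema_rate) * xi + ema_rate * theta

theta = np.ones(4)   # frozen online parameters (toy example)
xi = np.zeros(4)     # target (mean-teacher) parameters
for _ in range(1000):
    xi = ema_update(xi, theta)
# With theta held fixed, xi follows 1 - tau^n and slowly approaches theta,
# i.e. the target behaves as a smoothed ensemble of past online networks.
```

The small EMA rate is what makes the teacher a slowly moving average rather than a copy of the online network, which is the property RAFT exploits when running away from it.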
Evaluation After training, we evaluate the encoder’s performance on the widely adopted linear evaluation protocol: we fix the parameter of the encoder and we train another linear classifier on top of it using all the training labels for 100 epochs with learning rate 5 × 10−4. The final classification accuracy indicates to what degree the representations of the same class concentrate and the representations of the different class separate, and thus tells the quality of the representation." }, { "heading": "D TABLES", "text": "" }, { "heading": "E ALGORITHMS", "text": "Algorithm 1: RAFT: Run Away From your Teacher Inputs : X , T1, and T2 set of images and distributions of transformations θ and fθ model parameters and encoder w and qw predictor parameters and predictor ξ and fξ MT parameters and MT optimizer optimizer, updates online parameters using the loss gradient K and N total number of optimization steps and batch size {τk}Kk=1 and {ηk}Kk=1 target network update schedule and learning rate schedule\n1 for k = 1 to K do 2 B ← {xi}Ni=1 ∼ XN // sample a batch of N images 3 for xi ∈ B do 4 t1 ∼ T1 and t2 ∼ T2 // sample image transformations 5 z1 ← qw(fθ(t1(xi))) and z2 ← qw(fθ(t2(xi))) // reps for model 6 z′1 ← fξ(t1(xi)) and z′2 ← fξ(t2(xi)) // reps for MT 7 li = ∥∥ z1 ‖z1‖2 − z2 ‖z2‖2 ∥∥2 2 // loss for alignment\n8 l′i = − 12 (∥∥ z1 ‖z1‖2 − z′1 ‖z′1‖2 ∥∥2 2 + ∥∥ z2 ‖z2‖2 − z′2 ‖z′2‖2 ∥∥2 2 ) // loss for cross-model 9 end\n10 δθ ← 1N ( N∑ i=1 ∂θli + ∂θl ′ i ) // compute the loss gradient w.r.t. θ 11 θ ← optimizer(θ, δθ, ηk) // update trainable parameters 12 ξ ← τkξ + (1− τk)θ // update target parameters 13 end\nOutput: encoder fθ\nF VISUALIZATION OF TRAINING EVOLUTION\nF.1 REPRESENTATION DISTRIBUTION EVOLUTION OF DIFFERENT LEARNING ALGORITHMS\nF.2 EVOLUTION OF NUMERICAL METRIC IN DIFFERENT LEARNING ALGORITHMS" }, { "heading": "G PROOF OF BYOL UPPER BOUNDING", "text": "In this section, we provide how we derive the upper bound of LBYOL. 
For the sake of simplicity, without loss of rigor, we use t1 = t1(x) to represent the transformed input x.\nLBYOL = Ex∼X ,t1∼T1,t2∼T2 [ ‖qw(fθ(t1(x)))− fξ(t2(x))‖22 ] = E [ ‖qw(fθ(x1))− qw(fθ(x2)) + qw(fθ(x2))− fξ(x2)‖22 ] . (12)\nBy applying the Cauchy-Schwarz’s inequality to Eq. 12, we yield:\nLBYOL ≤ (1 + 1 λ ) ( E [∥∥qw(fθ(x1))− qw(fθ(x2))∥∥22]+ λE [∥∥qw(fθ(x2))− fξ(x2)∥∥22])\n= (1 + 1\nλ ) (Lalign(qw ◦ fθ;Ppos) + λLcross-model(qw ◦ fθ, fξ,X )) (13)\nwhich stands for any λ > 0, where the positive-pair distribution Ppos is modeled by the chain rule of the conditional probability:\nPpos(x1, x2) = X (x) · T1(t1|x) · T2(t2|x).\nFor any given pair α, β > 0, we let λ = β/α and substitute it back to Eq. 13, yielding\nLBYOL ≤ (1 + α\nβ )\n[ Lalign(qw ◦ fθ;Ppos) + β\nα Lcross-model(qw ◦ fθ, fξ,X ) ] = ( 1\nα +\n1 β ) [αLalign(qw ◦ fθ;Ppos) + βLcross-model(qw ◦ fθ, fξ,X )]\n= ( 1\nα +\n1 β )LBYOL′, (14)\nand as an optimization objective, we have\nmin( 1\nα +\n1 β )LBYOL′ ⇔ minLBYOL′ (15)\nTherefore we have proven that LBYOL′ as optimization objective is the upper bound of LBYOL. To note here, one can subtract and add a different term fξ(x1) to form the alignment loss on the side of MT fξ,\nLBYOL = E [∥∥qw(fθ(x1))− fξ(x1) + fξ(x1)− fξ(x2)∥∥22] , (16)\nwhile it doesn’t help to solve the problem since the alignment constraint on the side of MT doesn’t generate gradients." }, { "heading": "H PROOF OF ONE-TO-ONE CORRESPONDENCE BETWEEN BYOL′ AND RAFT", "text": "Theorem (One-to-one correspondence between BYOL′ and RAFT) There is a one-to-one correspondence between parameter trajectories of BYOL′ and RAFT when the following three conditions hold:\ni. the representation space is a hypersphere;\nii. the predictor is a linear transformation, i.e. qw(·) = W (·);\niii. only the tangential component of the gradient on the hypersphere is preserved.\nWithout losing generality, suppose that x1 = t1(x), x2 = t2(x) where x is an arbitrary input and batch size is 1, and (α, β) = (1, 1). 
We set BYOL′ and RAFT with initial parameters (θ′,W ′) = (θ(0),W (0)) and (θ,W ) = (θ(0),−W (0)) respectively. For convenience, we assumes the dot product “·” ignores the row layout or column layout in the chain rule of derivatives and we define the following symbols:\nz2 = fξ(x2), (17) z′1 = W ′fθ′(x1), (18) z′2 = W ′fθ′(x2), (19)\nz1 = Wfθ(x1), (20) z2 = Wfθ(x2). (21)\nBased on the notations defined, we rewrite the loss terms of BYOL′ and RAFT as follows:\nLBYOL ′ align = ∥∥z′1 − z′2∥∥22, (22)\nLBYOL ′ cross-model = ∥∥z′2 − z2∥∥22, (23)\nLRAFTalign = ∥∥z1 − z2∥∥22, (24)\nLRAFTcross-model = − ∥∥z2 − z2∥∥22. (25)\nThe two objectives are following:\nLBYOL′ = LBYOL ′ align + LBYOL ′ cross-model, (26)\nLRAFT = LRAFTalign + LRAFTcross-model. (27)\nWe claim that under the third condition, the following equations hold:[∂LBYOL′ ∂θ′ ] ‖ = [∂LRAFT ∂θ ] ‖ , [∂LBYOL′ ∂W ′ ] ‖ = − [∂LRAFT ∂W ] ‖ , (28)\nwhere subscript ‖ denotes the tangential component of the gradient.\nFirstly we show the equivalence with respect to θ. Differentiate LBYOL′align ,LRAFTalign with respect to θ′ij , θij respectively, we obtain\n∂LBYOL′align ∂θ′ij\n= 2 [ (z′1 − z′2)‖ + (z ′ 1 − z′2)⊥ ] · ( ∂z′1 ∂θ′ij − ∂z ′ 2 ∂θ′ij ) , (29)\n∂LRAFTalign ∂θij\n= 2 [ (z1 − z2)‖ + (z1 − z2)⊥ ] · ( ∂z1 ∂θij − ∂z2 ∂θij ) , (30)\n[∂LBYOL′align ∂θ′ij ] ‖ = 2 (z′1 − z′2)‖ · ( ∂z′1 ∂θ′ij − ∂z ′ 2 ∂θ′ij ) , (31)\n[∂LRAFTalign ∂θij ] ‖ = 2 (z1 − z2)‖ · ( ∂z1 ∂θij − ∂z2 ∂θij ) , (32)\nwhere (z′1 − z′2) , (z1 − z2) are vectors at the points z′2 and z2 on the hypersphere and we decompose the vector into the tangential (denoted by ‖) and normal component (denoted by ⊥):\n(z′1 − z′2) = [ (z′1 − z′2)‖ + (z ′ 1 − z′2)⊥ ] , (z1 − z2) = [ (z1 − z2)‖ + (z1 − z2)⊥ ] , (33)\nGenerally, suppose z is a unit vector starting at the origin point, which is perpendicular to the unit hypersphere at the point z, for any vector v starting at the point z, we have\nv⊥ = 〈v, z〉 · z, v‖ = v − v⊥ = v − 〈v, z〉 · z. 
(34)\nThen we can compute the tangential component of the gradient:\n(z′1 − z′2)‖ = (z ′ 1 − z′2)− 〈z′1 − z′2, z′2〉 · z′2 =z′1 − 〈z′2, z′1〉 · z′2, (z1 − z2)‖ = (z1 − z2)− 〈z1 − z2, z2〉 · z2\n=z1 − 〈z2, z1〉 · z2. (35)\nBecause of the initialization, z′1 = −z1, z2 = −z′2, therefore we have\n(z′1 − z′2)‖ = − (z1 − z2)‖ , (36)( ∂z′1 ∂θ′ij − ∂z ′ 2 ∂θ′ij ) = − ( ∂z1 ∂θij − ∂z2 ∂θij ) . (37)\nSo we show that [∂LBYOL′align ∂θ′ij ] ‖ = [∂LRAFTalign ∂θij ] ‖ . (38)\nWe differentiate LBYOL′cross-model, LRAFTcross-model with respect to θ′ij , θij respectively, we obtain that[∂LBYOL′cross-model ∂θ′ij ] ‖ = −2 (z2 − z′2)‖ ·W ′ · ∂fθ ′(x2) ∂θ′ij , (39)\n[∂LRAFTcross-model ∂θij ] ‖ = 2 (z2 − z2)‖ ·W · ∂fθ(x2) ∂θij . (40)\nSimilar to Eq. 35, we derive that\n(z2 − z′2)‖ = (z2 − z2)‖ (41)\nSince θ′ = θ, ∂fθ′(x2)/∂θ′ij = ∂fθ(x2)/∂θij and W ′ = −W , we have that[∂LBYOL′cross-model ∂θ′ij ] ‖ = [∂LRAFTcross-model ∂θij ] ‖ . (42)\nTherefore by Eq. 38 and Eq. 42, RAFT’s updating of the parameter θ is equal to BYOL′:[∂LBYOL′ ∂θ′ ] ‖ = [∂LRAFT ∂θ ] ‖ . (43)\nAlso we differentiate LBYOL′align ,LRAFTalign with respect to W ′ij , Wij respectively, we obtain that[∂LBYOL′align ∂W ′ij ] ‖ = 2 (z′1 − z′2)‖ · ( ∂z′1 ∂W ′ij − ∂z ′ 2 ∂W ′ij ) , (44)\n[∂LRAFTalign ∂Wij ] ‖ = 2 (z1 − z2)‖ · ( ∂z1 ∂Wij − ∂z2 ∂Wij ) . (45)\nNote that z1 = −z′1, z2 = −z′2 and similar to Eq. 35, easy to show that\n(z′1 − z′2)‖ = −(z1 − z2)‖. (46)\nAlso,\n∂z′1 ∂W ′ij = ∂z1 ∂Wij , (47)\n∂z′2 ∂W ′ij = ∂z2 ∂Wij . (48)\nSo we have [∂LBYOL′align ∂W ′ij ] ‖ = − [∂LRAFTalign ∂Wij ] ‖ . (49)\nDifferentiate LBYOL′cross-model,LRAFTcross-model with respect to W ′ij ,Wij respectively, we obtain that[∂LBYOL′cross-model ∂W ′ij ] ‖ = −2 (z2 − z′2)‖ · ∂z′2 ∂W ′ij , (50)\n[∂LRAFTcross-model ∂Wij ] ‖ = 2 (z2 − z2)‖ · ∂z2 ∂Wij . (51)\nThen we have that [∂LBYOL′cross-model ∂W ′ij ] ‖ = − [∂LRAFTcross-model ∂Wij ] ‖ . (52)\nBy Eq. 49 and Eq. 
52, we prove that the cross-model loss of BYOL′ generates the opposite gradient to RAFT, namely, [∂LBYOL′\n∂W ′ ] ‖ = − [∂LRAFT ∂W ] ‖ . (53)\nTherefore by the two main conclusions Eq. 43 and Eq. 53, for BYOL′ with parameters (θ′,W ′) = (θ(0),W (0)) and RAFT with parameters (θ,W ) = (θ(0),−W (0)) respectively, we have\nBYOL: ( θ′(1) = θ′(0) − η [∂LBYOL′ ∂θ′ ] ‖ ∣∣∣∣ θ′=θ′(0) ,W ′(1) = W ′(0) − η [∂LBYOL′ ∂W ′ ] ‖ ∣∣∣∣ W ′=W ′(0) ) ,\n(54) RAFT: ( θ(1) = θ(0) − η [∂LRAFT ∂θ ] ‖ ∣∣∣∣ θ=θ(0) ,W (1) = W (0) − η [∂LRAFT ∂W ] ‖ ∣∣∣∣ W=W (0) ) . (55)\nWe derive that θ(1) = θ′(1),W (1) = −W ′(1), and furthermore, θ(k) = θ′(k),W (k) = −W ′(k) at any iteration k. In this way, we establish an one-to-one correspondence between the parameter trajectories of BYOL′ and RAFT in training, referred to asH:\nH : RAFT(θ,W ) 7→ BYOL′(θ,−W ) (56)" }, { "heading": "I NON-TRIVIAL SOLUTIONS CREATED BY PREDICTOR", "text": "Suppose inputs x1 = t1(x) and x2 = t2(x) is n-dimensional. And in linear model, fθ, fξ and qw is parameterized by matrices (θij)n×n, (ξij)n×n and (Wij)m×m respectively.\nThe objective is LBYOL = E(x,t1,t2)∼(X ,T1,T2) [∥∥qw(fθ(t1(x)))− fξ(t2(x))∥∥22]\n= E(x,t1,t2)∼(X ,T1,T2) [ ‖Wθx1 − ξx2‖22 ] (57)\nDifferentiate ‖Wθx1 − ξx2‖22 with respect to θij and Wij , we have ∂ [ ‖Wθx1 − ξx2‖22 ] ∂θij = m∑ k=1 ∂ [Wk,:(θx1)− ξk,:x2]2 ∂θij\n= m∑ k=1 2 [Wk,:(θx1)− ξk,:x2] ∂ [Wk,:(θx1)− ξk,:x2] θij\n= m∑ k=1 2TkWk,:(x1)j\n= 2 [( W>T ) x>1 ] ij , (58)\n∂ [ ‖Wθx1 − ξx2‖22 ] ∂Wij = m∑ k=1 ∂ [Wk,:(θx1)− ξk,:x2]2 ∂Wij\n= m∑ k=1 2 [Wk,:(θx1)− ξk,:x2] ∂ [Wk,:(θx1)− ξk,:x2] ∂Wij\n= m∑ k=1 2Tk(θx1)j1{k=i}\n= 2Tiθj,:x1 = 2 [ T (θx1) >] ij , (59)\nwhere Tk = [Wk,:(θx1)− ξk,:x2], T = (T1, T2, . . . , Tm)> = W (θx1)− ξx2, and Wk,:, ξk,: are the k−th row of W and ξ respectively. 
So\n∂ [ ‖Wθx1 − ξx2‖22 ] ∂θ = 2 [( W>T ) x> ] , ∂ [ ‖Wθx1 − ξx2‖22 ] ∂W = 2 [ T (θx)> ] (60)\nLet\n∂LBYOL ∂θ = 0, ∂LBYOL ∂W = 0, (61)\nwe have that ∂E(x,t1,t2)∼(X ,T1,T2) [ ‖Wθx1 − ξx2‖22 ] ∂θ = E(x,t1,t2)∼(X ,T1,T2)[ ∂‖Wθx1 − ξx2‖22 ∂θ ] = 0 (62)\n∂E(x,t1,t2)∼(X ,T1,T2) [ ‖Wθx1 − ξx2‖22 ] ∂W = E(x,t1,t2)∼(X ,T1,T2)[ ∂‖Wθx1 − ξx2‖22 ∂W ] = 0 (63)\nWhen the weight of target ξ converge, we have ξ(k) = ξ(k+1) in the updating rule,\nξ(k+1) = τkξ k + (1− τk)θ(k)\nθ(k) = ξ(k+1) = ξ(k) (64)\nSubstituting ξ by θ, we obtain E(x,t1,t2)∼(X ,T1,T2) [ W>(Wθx1 − θx2)x>1 ] = 0 (65)\nE(x,t1,t2)∼(X ,T1,T2) [ (Wθx1 − θx2)x>1 θ> ] = 0 (66)\nLet E(x,t1,t2)∼(X ,T1,T2)(x1x>1 ) = A, E(x,t1,t2)∼(X ,T1,T2)(x2x>1 ) = B, we have that\neq : sylW>(WθA) = W>θB\nWθAθ> = θBθ>\n⇒Wθ − θBA−1 = 0 (67)\nTo solve Eq. ?? (which is called Sylvester’s equation), we using the Kronecker product notation and the vectorization operator vec, we can rewrite the equation in the form(\nIm ⊗W − (BA−1)T ⊗ In ) vec θ = vec0 (68)\nSo it has a non-trivial solution θ if and only if ( Im ⊗W − (BA−1)T ⊗ In ) has a non-trivial null space. An equivalent condition to having a non-trivial null space is having zero as an eigenvalue. Let W has eigenvalues in common with BA−1, then we have a non-trivial solution of θ, which is exactly the prevention for collapse." } ]
2020
null
SP:0af1989b2e643d013174489704d0a052bad77f95
[ "1. The premise of the paper is that the adversary can perturb the *test* set so that the model is shown to perform better than it really is capable of. And in Section 7 (Conclusion) the paper claims that it exposes this new risk. However, remember that this risk is already mitigated in practice by keeping the test data *independent* of the model/classifier (e.g., see Kaggle competitions where the test set is hidden). Therefore, the perceived risk is not even present. In the context in which the technique has been introduced, it seems like the [malicious] actor would only be fooling him/herself rather than fooling the model/classifier.", "This paper presents a new kind of adversarial attack, named the hypocritical attack. It is a reverse version of the original adversarial attack: it tricks a model into classifying data correctly with a perturbation. This can be a problem since it can make people satisfied with the model's performance even though the model is not robust on the real test dataset. The authors review the adversarial attack and define the new hypocritical examples and risk. The authors also show simple results for why the hypocritical attack is a critical issue using a naive model that is initialized randomly: it shows high performance on the hypocritical examples but low performance on the clean test data. They also investigate algorithms that improve model robustness, THRM and TRADES. Experiments show a trade-off between the original classification loss and the hypocritical risk, and that THRM is a tight upper bound compared with TRADES." ]
Adversarial examples arise from the excessive sensitivity of a model. Commonly studied adversarial examples are malicious inputs, crafted by an adversary from correctly classified examples, to induce misclassification. This paper studies an intriguing, yet far overlooked consequence of the excessive sensitivity: a misclassified example can be easily perturbed to help the model produce the correct output. Such perturbed examples look harmless, but can actually be maliciously utilized by a false friend to make the model self-satisfied. Thus we name them hypocritical examples. With false friends like these, a poorly performing model could behave like a state-of-the-art one. Once a deployer trusts the hypocritical performance and uses the “well-performing” model in real-world applications, potential security concerns appear even in benign environments. In this paper, we formalize the hypocritical risk for the first time and propose a defense method specialized for hypocritical examples by minimizing the tradeoff between natural risk and an upper bound of hypocritical risk. Moreover, our theoretical analysis reveals connections between adversarial risk and hypocritical risk. Extensive experiments verify the theoretical results and the effectiveness of our proposed methods.
[]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Sergio A Alvarez" ], "title": "An exact analytical relation among recall, precision, and classification accuracy in information retrieval", "venue": "Boston College, Boston, Technical Report BCCS-02-01,", "year": 2002 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "B Nelson", "N Srndic", "P Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time. 
In Joint European conference on machine learning and knowledge discovery in databases", "venue": "(ECML PKDD),", "year": 2013 }, { "authors": [ "Michael Buckland", "Fredric Gey" ], "title": "The relationship between recall and precision", "venue": "Journal of the American society for information science,", "year": 1994 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Lin Chen", "Yifei Min", "Mingrui Zhang", "Amin Karbasi" ], "title": "More data can expand the generalization gap between adversarially robust and standard models", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Daniel Cullina", "Arjun Nitin Bhagoji", "Prateek Mittal" ], "title": "Pac-learning in the presence of adversaries", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Edgar Dobriban", "Hamed Hassani", "David Hong", "Alexander Robey" ], "title": "Provable tradeoffs in adversarially robust classification", "venue": "arXiv preprint arXiv:2006.05161,", "year": 2020 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2018 }, { "authors": [ "Gamaleldin Elsayed", "Shreya Shankar", "Brian Cheung", "Nicolas Papernot", "Alexey Kurakin", "Ian Goodfellow", "Jascha Sohl-Dickstein" ], "title": "Adversarial examples that fool both computer vision and 
time-limited humans", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Gamaleldin F. Elsayed", "Ian Goodfellow", "Jascha Sohl-Dickstein" ], "title": "Adversarial reprogramming of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Nicolas Papernot" ], "title": "Is attacking machine learning easier than defending it", "venue": "Blog post on Feb,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "year": 2013 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Kun He", "Yan Wang", "John Hopcroft" ], "title": "A powerful generative model using random weights for the deep image representation", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in 
Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Saumya Jetley", "Nicholas Lord", "Philip Torr" ], "title": "With friends like these, who needs adversaries", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2012 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "In International Conference on Learning Representations (ICLR) Workshops,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "LI Lei" ], "title": "On the establishment of autonomous vehicles regulatory system in china", "venue": "Journal of Beijing Institute of Technology (Social Sciences Edition),", "year": 2018 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", 
"Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "IEEE European symposium on security and privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Martı́n Abadi", "Ulfar Erlingsson", "Ian Goodfellow", "Kunal Talwar" ], "title": "Semisupervised knowledge transfer for deep learning from private training data", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security (ASIA CCS),", "year": 2017 }, { "authors": [ "Yao Qin", "Nicholas Carlini", "Garrison Cottrell", "Ian Goodfellow", "Colin Raffel" ], "title": "Imperceptible, robust, and targeted adversarial examples for automatic speech recognition", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John Duchi", "Percy Liang" ], "title": "Understanding and mitigating the tradeoff between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Leslie Rice", "Eric Wong", "J Zico Kolter" ], "title": "Overfitting in adversarially robust deep learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": 
"Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Hadi Salman", "Andrew Ilyas", "Logan Engstrom", "Ashish Kapoor", "Aleksander Madry" ], "title": "Do adversarially robust imagenet models transfer better", "venue": "arXiv preprint arXiv:2007.08489,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Florian Tramèr", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "The space of transferable adversarial examples", "venue": "arXiv preprint arXiv:1704.03453,", "year": 2017 }, { "authors": [ "Florian Tramèr", "Jens Behrmann", "Nicholas Carlini", "Nicolas Papernot", "Jörn-Henrik Jacobsen" ], "title": "Fundamental tradeoffs between invariance and sensitivity to adversarial perturbations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jonathan Uesato", "Brendan O’Donoghue", "Pushmeet Kohli", "Aaron Oord" ], "title": "Adversarial risk and the 
dangers of evaluating against weak attacks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Hongjun Wang", "Guangrun Wang", "Ya Li", "Dongyu Zhang", "Liang Lin" ], "title": "Transferable, controllable, and inconspicuous adversarial attacks on person re-identification with deep mis-ranking", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yisen Wang", "Difan Zou", "Jinfeng Yi", "James Bailey", "Xingjun Ma", "Quanquan Gu" ], "title": "Improving adversarial robustness requires revisiting misclassified examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Shu-Tao Xia", "James Bailey", "Xingjun Ma" ], "title": "Skip connections matter: On the transferability of adversarial examples generated with resnets", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Tong Wu", "Liang Tong", "Yevgeniy Vorobeychik" ], "title": "Defending against physically realizable attacks on image classification", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Kaidi Xu", "Gaoyuan Zhang", "Sijia Liu", "Quanfu Fan", "Mengshu Sun", "Hongge Chen", "Pin-Yu Chen", "Yanzhi Wang", "Xue Lin" ], "title": "Adversarial t-shirt! 
evading person detectors in a physical world", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Haichao Zhang", "Jianyu Wang" ], "title": "Towards adversarially robust object detection", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Jingfeng Zhang", "Xilie Xu", "Bo Han", "Gang Niu", "Lizhen Cui", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Attacks which do not kill training make adversarial learning stronger", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have achieved breakthroughs in a variety of challenging problems such as image understanding (Krizhevsky et al., 2012), speech recognition (Graves et al., 2013), and automatic game playing (Mnih et al., 2015). Despite these remarkable successes, their pervasive failures in adversarial settings, the phenomenon of adversarial examples (Biggio et al., 2013; Szegedy et al., 2014), have attracted significant attention in recent years (Athalye et al., 2018; Carlini et al., 2019; Tramer et al., 2020). Such small perturbations on inputs crafted by adversaries are capable of causing well-trained models to make big mistakes, which indicates that there is still a large gap between machine and human perception, thus posing potential security concerns for practical machine learning (ML) applications (Kurakin et al., 2016; Qin et al., 2019; Wu et al., 2020b).\nAn adversarial example is “an input to a ML model that is intentionally designed by an attacker to fool the model into producing an incorrect output” (Goodfellow & Papernot, 2017). Following the definition of adversarial examples on classification problems (Goodfellow et al., 2015; Papernot et al., 2016; Elsayed et al., 2018; Carlini et al., 2019; Zhang et al., 2019; Wang et al., 2020b; Zhang et al., 2020; Tramèr et al., 2020), given a DNN classifier f and a correctly classified example x with class label y (i.e., f(x) = y), an adversarial example xadv is generated by perturbing x such that f(xadv) 6= y and xadv ∈ B (x). The neighborhood B (x) denotes the set of points within a fixed distance > 0 of x, as measured by some metric (e.g., the lp distance), so that xadv is visually the “same” for human observers. Then, an imperfection of the classifier is highlighted by Gadv = Acc(D)−Acc(A), the performance gap between the accuracy (denoted by Acc(·)) evaluated on clean set sampled from data distribution D and adversarially perturbed set A. 
An adversary could construct such a perturbed setA that looks no different from D but can severely degrade the performance of even state-of-the-art DNN models. From direct attacks in the digital space (Goodfellow et al., 2015; Carlini & Wagner, 2017) to robust attacks in the physical world (Kurakin et al., 2016; Xu et al., 2020), from toy classification problems (Chen et al., 2020; Dobriban et al., 2020) to complicated perception tasks (Zhang & Wang, 2019; Wang et al., 2020a), from the high dimensional nature of the input space (Goodfellow et al., 2015; Gilmer et al., 2018) to the\nframework of (non)-robust features (Jetley et al., 2018; Ilyas et al., 2019), many efforts have been devoted to understanding and mitigating the risk raised by adversarial examples, thus closing the gap Gadv. Previous works mainly concern the adversarial risk on correctly classified examples. However, they typically neglect a risk on misclassified examples themselves which will be formalized in this work.\nIn this paper, we first investigate an intriguing, yet far overlooked phenomenon, where given a DNN classifier f and a misclassified example x with class label y (i.e., f(x) 6= y), we can easily perturb x to xhyp such that f(xhyp) = y and xhyp ∈ B (x). Such an example xhyp looks harmless, but actually can be maliciously utilized by a false friend to fool a model to be self-satisfied. Thus we name them hypocritical examples (see Figure 1 for a comparison with adversarial examples).\nAdversarial examples and hypocritical examples are two sides of the same coin. On the one side, a well-performed but sensitive model becomes unreliable in the existence of adversaries. On the other side, a poorly performed but sensitive model behaves well with the help of friends. 
With false friends like these, a naturally trained suboptimal model could have state-of-the-art performance, and even worse, a randomly initialized model could behave like a well-trained one (see Section 2.1).\nIt is natural then to wonder: Why should we care about hypocritical examples? Here we give two main reasons:\n1. This is of scientific interest. Hypocritical examples are the opposite of adversarial examples. While adversarial examples are hard test data to a model, hypocritical examples aim to make it easy to do correct classification. Hypocritical examples warn ML researchers to\nthink carefully about high test accuracy: Does our model truly achieve human-like intelligence, or is it just simply because the test data prefers the model?\n2. There are practical threats. A variety of nefarious ends may be achievable if the mistakes of ML systems can be covered up by hypocritical attackers. For instance, before allowing autonomous vehicles to drive on public roads, manufacturers must first pass tests in specific environments (closed or open roads) to obtain a license (Administration et al., 2016; Briefs, 2015; Lei, 2018). An attacker may add imperceptible perturbations on the test examples (e.g., the “stop sign” on the road) stealthily without human notice, to hypocritically help an ML-based autonomous vehicle to pass the tests that might otherwise fail. However, the high performance can not be maintained on public roads without the help of the attacker. Thus, the potential risk is underestimated and traffic accidents might happen unexpectedly when the vehicle driving on public roads.\nIn such a case, if the examples used to evaluate a model are falsified by a false friend, the model will manifest like a perfect one (on hypocritical examples), but it actually may not be well performed even on clean examples, not to mention adversarial examples. 
Thus a new imperfection of the classifier can be found in Ghyp = Acc(F)−Acc(D), the performance gap between the accuracy evaluated on clean set sampled from D and hypocritically perturbed set F . Still, F looks no different from D but can stealthily upgrade the performance. Once a deployer trusts the hypocritical performance carefully designed by a false friend and uses the “well-performed” model in real-world applications, potential security concerns appear even in benign environments. Thus we need methods to defend our models from false friends, that is, making our models have self-knowledge.\nWe propose a defense method by improving model robustness against hypocritical perturbations. Specifically, we formalize the hypocritical risk and minimize it via a differentiable surrogate loss (Section 3). Experimentally, we verify the effectiveness of our proposed attack (Section 2.1) and defense (Section 4.1). Further, we study the transferability of hypocritical examples across models trained with various methods (Section 4.2). Finally, we conclude our paper by discussing and summarizing our results (Section 5 and Section 6). Our main contributions are:\n• We give a formal definition of hypocritical examples. We demonstrate the unreliability of standard evaluation process in the existence of false friends and show the potential security risk on the deployment of a model with high hypocritical performance.\n• We formalize the hypocritical risk and analyze its relation with natural risk and adversarial risk. We propose the first defense method specialized for hypocritical examples by minimizing the tradeoff between the natural risk and an upper bound of hypocritical risk.\n• Extensive experiments verify the effectiveness of our proposed methods. We also examine the transferability of hypocritical examples. We show that the transferability is not always desired by the attackers, which depends on their purpose." 
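The two evaluation gaps above are plain accuracy differences; as a minimal sketch (the function name and the accuracy values in the test are hypothetical, not from the paper), the sign conventions can be made explicit:

```python
def performance_gaps(acc_clean, acc_adv, acc_hyp):
    """Return (G_adv, G_hyp) from accuracies on the clean set D,
    the adversarially perturbed set A, and the hypocritically
    perturbed set F."""
    g_adv = acc_clean - acc_adv  # G_adv = Acc(D) - Acc(A): adversary degrades performance
    g_hyp = acc_hyp - acc_clean  # G_hyp = Acc(F) - Acc(D): false friend inflates performance
    return g_adv, g_hyp
```

Both gaps are nonnegative for a sensitive model: a large G_adv signals vulnerability to adversaries, while a large G_hyp signals that reported test performance can be stealthily inflated.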
}, { "heading": "2 FALSE FRIENDS AND ADVERSARIES", "text": "Better an open enemy than a false friend! Only by being aware of the potential risk of the false friend can we prevent it. In this section, we expose a kind of false friends, who are capable of manipulating model performance stealthily during the evaluation process, thus making the evaluation results unreliable.\nWe consider a classification task with data (x, y) ∈ Rd ×{1, . . . , C} from a distribution D. Denote by f : Rd → {1, ..., C} the classifier which predicts the class of an input example x: f(x) = arg maxk pk(x), where pk(x) is the kth component of p(x) : Rd → ∆C (e.g., the output after softmax activation), in which ∆C = {u ∈ RC | 1Tu = 1,u ≥ 0} is the probabilistic simplex. Adversarial examples are malicious inputs crafted by an adversary to induce misclassification. We first give the commonly accepted definition of adversarial examples as follows:\nDefinition 1 (Adversarial Examples). Given a classifier f and a correctly classified input (x, y) ∼ D (i.e., f(x) = y), an -bounded adversarial example is an input x∗ ∈ Rd such that:\nf(x∗) 6= y and x∗ ∈ B (x).\nThe assumption underlying this definition is that inputs satisfying x∗ ∈ B (x) preserve the label y of the original input x. The reason for the existence of adversarial examples is that a model is overly sensitive to non-semantic changes. Next, we formalize a complementary phenomenon to adversarial examples, called hypocritical examples. Hypocritical examples are malicious inputs crafted by a false friend to stealthily correct the prediction of a model: Definition 2 (Hypocritical Examples). 
Given a classifier f and a misclassified input (x, y) ∼ D (i.e., f(x) ≠ y), an ε-bounded hypocritical example is an input x∗ ∈ R^d such that:\nf(x∗) = y and x∗ ∈ Bε(x).\nLike adversarial examples, hypocritical examples are bounded so as to preserve the label of the original input, and they are another consequence of the excessive sensitivity of a classifier.\nAs a false friend, a hypocritical example can be generated from a misclassified example by maximizing\nmax_{x′∈Bε(x)} 1(f(x′) = y), (1)\nwhich is equivalent to minimizing\nmin_{x′∈Bε(x)} 1(f(x′) ≠ y), (2)\nwhere 1(·) is the indicator function. Similar to Madry et al. (2018); Wang et al. (2020b), in practice, we leverage the commonly used cross entropy (CE) loss as the surrogate loss of 1(f(x′) ≠ y) and minimize it by projected gradient descent (PGD).\nNote that Equation 2 looks similar to but conceptually differs from the known targeted adversarial attack (Carlini & Wagner, 2017), which generates a kind of adversarial example defined on correctly classified clean inputs and targeted to wrong classes. The hypocritical examples here are defined on misclassified inputs and are targeted to their right classes." }, { "heading": "2.1 ATTACK RESULTS", "text": "In this subsection, we demonstrate the power of our proposed hypocritical attack on three benchmark datasets: MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015).\nWe attack models trained with the standard approach using clean examples (Standard) and models that are randomly initialized without training (Naive). For MNIST, the hypocritically perturbed set F and the adversarially perturbed set A are constructed by attacking every example in the clean test set sampled from D. Both attacks are bounded by an l∞ ball with radius ε = 0.2. For ImageNet, F and A are constructed based on its validation set sampled from D. Both attacks are bounded by an l∞ ball with radius ε = 16/255.
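To make the surrogate attack concrete, here is a sketch of the PGD procedure for Equation 2 on a linear softmax classifier; the linear model and all hyperparameter values are illustrative stand-ins (the paper attacks DNNs with framework-provided autograd):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hypocritical_pgd(x, y, W, b, eps, step, n_steps):
    """PGD that minimizes the CE surrogate of 1(f(x') != y) within the
    l_inf ball B_eps(x), for a linear classifier p(x) = softmax(Wx + b)."""
    x_hyp = x.copy()
    for _ in range(n_steps):
        p = softmax(W @ x_hyp + b)
        p[y] -= 1.0              # dCE/dlogits = p - onehot(y)
        grad = W.T @ p           # chain rule through logits = Wx + b
        x_hyp = x_hyp - step * np.sign(grad)      # descend: the false friend helps
        x_hyp = np.clip(x_hyp, x - eps, x + eps)  # project back into B_eps(x)
    return x_hyp
```

Replacing the descent step with ascent (`x_hyp + step * np.sign(grad)`) recovers the standard adversarial PGD attack, which underlines that the two attacks are two sides of the same procedure.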
For each experiment, we conduct 3 trials with different random seeds and report the averaged result to reduce the impact of random variations. Appendix A.2 describes further experimental details about the DNN architectures and training procedures, along with more results.\nResults on MNIST and ImageNet are summarized in Table 1 and Table 2, respectively. First, we find that the naturally trained models are extremely sensitive to hypocritical perturbations (e.g., Standard (MLP) and Standard (LeNet) achieve 100% accuracy on the hypocritically perturbed MNIST test set, and Standard (VGG16) and Standard (ResNet50) achieve 99% accuracy on the hypocritically perturbed ImageNet validation set). Second, we find that some of the randomly initialized models are extremely sensitive (e.g., Naive (MLP) and Naive (VGG16) achieve 100% accuracy on F of MNIST and ImageNet). These results demonstrate the unreliability of the standard evaluation process in the existence of false friends. Once a “well-performed” model (such as Naive (MLP) or Naive (VGG16)) is permitted to be deployed in real-world applications because the deployer has a false sense of its performance, potential security concerns appear even in benign environments.\nIt seems that the Naive (ResNet50) model is relatively robust to hypocritical examples on ImageNet. But this is just a trivial defense: it simply predicts most points in the input region as a certain class because of the poor scaling of the network weights at initialization (He et al., 2016b; Elsayed et al., 2019). More discussions are in Appendix A.2. Therefore, it is not enough to blindly pursue robustness against hypocritical perturbations while ignoring the performance on clean examples.
We propose a defense method specialized for hypocritical examples by minimizing the tradeoff between the natural risk and an upper bound of the hypocritical risk. Moreover, by decomposing an existing method designed for adversarial defense (TRADES (Zhang et al., 2019)), we find that, surprisingly, TRADES minimizes not only the adversarial risk on correctly classified examples, but also a looser upper bound of the hypocritical risk. Our theoretical analysis suggests that TRADES can be another candidate defense method for hypocritical examples.\nTo characterize the adversarial robustness of a classifier f, Madry et al. (2018); Uesato et al. (2018); Cullina et al. (2018) defined the adversarial risk under the threat model of the ε-bounded ball:\nRadv(f) = E_{(x,y)∼D} [ max_{x′∈Bε(x)} 1(f(x′) ≠ y) ]. (3)\nThe standard measure of classifier performance, known as natural risk, is denoted as Rnat(f) = E_{(x,y)∼D} [1(f(x) ≠ y)]. Let q(x, y) be the probability density function of the data distribution D. We denote by S+_f the conditional data distribution on correctly classified examples w.r.t. f, with a conditional density function q(x, y | E) = q(x, y)/Z(E) if E is true (otherwise q(x, y | E) = 0), where the event E is f(x) = y and Z(E) = ∫_{x,y} 1(f(x) = y) dq(x, y) is a normalizing constant. We denote by S−_f the conditional data distribution on misclassified examples with the conditional density function q(x, y | E) and f(x) ≠ y as the event E. Then we have the following relation between the natural risk and the adversarial risk:\nProposition 1. Denote the adversarial risk on correctly classified examples by\nR̂adv(f) = E_{(x,y)∼S+_f} [ max_{x′∈Bε(x)} 1(f(x′) ≠ y) ],\nthen we have Radv(f) = Rnat(f) + (1 − Rnat(f)) R̂adv(f).\nProposition 1 shows that we can view the adversarial risk Radv(f) as the tradeoff between Rnat(f) and R̂adv(f) with the scaling parameter λ = 1 − Rnat(f).
The adversarial risk on correctly classified examples, R̂adv(f), is in sharp contrast to the hypocritical risk defined on misclassified examples, formalized as follows:\nDefinition 3 (Hypocritical Risk). The hypocritical risk on misclassified examples of a classifier f under the threat model of the ε-bounded ball is defined as\nR̂hyp(f) = E_{(x,y)∼S−_f} [ max_{x′∈Bε(x)} 1(f(x′) = y) ].\nThe hypocritical risk R̂hyp(f) is the proportion of perturbed examples (originally misclassified) that can be correctly classified by the classifier after a false friend’s attack. When considering the existence of false friends, a good model should have not only low natural risk but also low hypocritical risk, so as to be robust against hypocritical perturbations." }, { "heading": "3.1 TRADEOFF BETWEEN NATURAL AND HYPOCRITICAL RISKS", "text": "Motivated by the tradeoff between natural and adversarial risks (Tsipras et al., 2019; Zhang et al., 2019), we notice that there may also exist an inherent tension between the goal of natural risk minimization and that of hypocritical risk minimization. To illustrate the phenomenon, we provide a toy example here, which is modified from the example in Zhang et al. (2019), and whose risk minimization solutions can be found analytically.\nConsider the case (x, y) ∈ R × {−1, +1} from a distribution D, where the marginal distribution over the instance space is a uniform distribution over [0, 1], and for k = 0, 1, · · · , ⌈1/(2ε) − 1⌉,\nη(x) := Pr(y = +1 | x) = { 1/4, x ∈ [2kε, (2k + 1)ε); 1, x ∈ ((2k + 1)ε, (2k + 2)ε]. (4)\nSee Figure 2 for a visualization of η(x). In this problem, we consider two classifiers: a) the Bayes optimal classifier sign(2η(x) − 1); b) the all-one classifier, which always outputs “positive”.
Table 3 displays the trade-off between natural and hypocritical risks: the minimal natural risk 1/8 is achieved by the Bayes optimal classifier, which has large hypocritical risk, while the optimal hypocritical risk 0 is achieved by the all-one classifier, which has large natural risk." }, { "heading": "3.2 UPPER BOUNDS OF HYPOCRITICAL RISK", "text": "It is natural then to optimize our models to minimize the natural and hypocritical risks at the same time. However, it is hard to optimize R̂hyp(f) directly. To ease the optimization obstacles, we derive the following upper bounds.\nTheorem 1. For any data distribution D and its corresponding conditional distribution on misclassified examples S−_f w.r.t. a classifier f, we have\nR̂hyp(f) = E_{(x,y)∼S−_f} 1(f(xhyp) = y) ≤ E_{(x,y)∼S−_f} 1(f(xhyp) ≠ f(x)) =: Rhyp(f) ≤ E_{(x,y)∼S−_f} 1(f(xrev) ≠ f(x)) =: R̄hyp(f),\nwhere xhyp = arg max_{x′∈Bε(x)} 1(f(x′) = y) and xrev = arg max_{x′∈Bε(x)} 1(f(x′) ≠ f(x)).\nHere xrev pursues the reversal of a clean example to a different class, from the point of view of the model. The upper bounds found in Theorem 1 allow us to optimize the hypocritical risk using proper surrogate loss functions which are both physically meaningful and computationally tractable. Before moving forward to algorithmic design, we state a useful proposition below, which reveals the internal mechanism behind TRADES.\nProposition 2. Rrev(f) = (1 − Rnat(f)) R̂adv(f) + Rnat(f) R̄hyp(f) = E_{(x,y)∼D} 1(f(xrev) ≠ f(x)).\nProposition 2 shows a connection between the adversarial risk and the hypocritical risk: the adversarial risk on correctly classified examples R̂adv(f) and the looser upper bound of the hypocritical risk on misclassified examples R̄hyp(f) can be seamlessly united into a new risk on all examples, Rrev(f). We name it the reversible risk, since minimizing it pursues a model whose predictions cannot be reversed by small perturbations."
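The natural risks in Table 3 can be checked numerically; below is a Monte Carlo sketch of the toy distribution (the value ε = 0.05 and the sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05          # chosen so that 1/(2*eps) is an integer
n = 200_000

x = rng.uniform(0.0, 1.0, n)
# even-indexed intervals [2k*eps, (2k+1)*eps): eta(x) = 1/4; odd intervals: eta(x) = 1
even = (np.floor(x / eps).astype(int) % 2) == 0
eta = np.where(even, 0.25, 1.0)
y = np.where(rng.uniform(size=n) < eta, 1, -1)

bayes_pred = np.where(even, -1, 1)       # sign(2*eta(x) - 1)
allone_pred = np.ones(n, dtype=int)      # always outputs "positive"

r_nat_bayes = np.mean(bayes_pred != y)   # close to 1/8
r_nat_allone = np.mean(allone_pred != y) # close to 3/8
```

The all-one classifier never outputs −1, so no ε-perturbation can make one of its misclassified (y = −1) points correct: its hypocritical risk is exactly 0, matching Table 3.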
}, { "heading": "3.3 ALGORITHMIC DESIGN", "text": "Now we are ready to design objective functions that improve model robustness against hypocritical examples while keeping model accuracy on clean examples.\nSimilar to Zhang et al. (2019); Wang et al. (2020b); Tramèr et al. (2020); Raghunathan et al. (2020), we propose a defense objective by minimizing the tradeoff between the natural risk and the tighter upper bound of the hypocritical risk:\nRTHRM(f) = Rnat(f) + λRhyp(f), (5)\nwhere λ > 0 is a tunable scaling parameter balancing the importance of natural risk and hypocritical risk. We name our method THRM (Tradeoff for Hypocritical Risk Minimization).\nOptimization over 0-1 loss in THRM is still intractable. In practice, for the indicator function 1(f(x) 6= y) in Rnat(f), we adopt the commonly used CE loss as surrogate loss. Observed that Rhyp(f) = 1Rnat(f)E(x,y)∼D1(f(xhyp) 6= f(x)), we absorb the Rnat(f) term into λ and use KL divergence as the surrogate loss of the indicator function 1(f(xhyp) 6= f(x)) (Zheng et al., 2016; Zhang et al., 2019; Wang et al., 2020b), since f(xhyp) 6= f(x) implies that the perturbed examples have different output distributions to that of clean examples. Our final objective function for THRM becomes\nLTHRM = E (x,y)∼D [LCE(p(x), y) + λLKL(p(x), p(xhyp))] . (6)\nIntuition behind the objective LTHRM: the first term in Equation 6 encourages the natural risk to be optimized, while the second regularization term encourages the output to be stable against hypocritical perturbations, that is, the classifier should not be overly confident in its predictions especially when a false friend wants it to be.\nTo derive the objective function for TRADES, we can minimize the tradeoff between the natural risk and the reversible risk:\nRTRADES(f) = Rnat(f) + λRrev(f). (7)\nSimilar to THRM, we use CE loss and KL divergence as the surrogate loss of 1(f(x) 6= y) and 1(f(xrev) 6= f(x)), respectively. 
The final objective function becomes\nLTRADES = E_{(x,y)∼D} [LCE(p(x), y) + λ LKL(p(x), p(xrev))], (8)\nwhich is exactly the multi-class classification objective function first proposed in Zhang et al. (2019) for adversarial defense. From the perspective of the hypocritical risk, our Proposition 2 reveals an advantage behind it: TRADES is capable of minimizing a looser upper bound of the hypocritical risk, and thus can be considered a candidate defense method for hypocritical examples. Proposition 2 also implies that there may be a deeper connection between adversarial robustness and hypocritical robustness. We will discuss this further and compare our proposed THRM with TRADES in the next section." }, { "heading": "4 EXPERIMENTS", "text": "In this section, to verify the effectiveness of the methods (THRM and TRADES) suggested in Section 3.3, we conduct experiments on real-world datasets including MNIST and CIFAR-10." }, { "heading": "4.1 WHITE-BOX ANALYSIS", "text": "To cover a wide range of the scaling parameter λ, we conduct experiments in parallel over multiple NVIDIA Tesla V100 GPUs. On MNIST, perturbations are bounded in l∞ norm with ε = 0.2. On CIFAR-10, models are trained against 3 different hypocritical attackers bounded in l∞ norm with ε = 1/255, ε = 2/255 and ε = 8/255, respectively. Each experiment is conducted 3 times with different random seeds. The hypocritical risk reported here is actually an approximation of the real value, since the optimization problem in it is NP-hard and we approximately solve it using the surrogate loss and PGD on the test set. Further details about the model architectures and training procedures are in Appendix A.3. Note that these experiments are extensive: it takes over 230 GPU days to completely train the models considered in this section.
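The surrogate objectives in Equations 6 and 8 share one form and differ only in how the perturbed input is produced (x_hyp for THRM, x_rev for TRADES); a minimal per-example sketch operating on precomputed output distributions (function names are illustrative, not the paper's code) is:

```python
import numpy as np

def kl_div(p, q, tiny=1e-12):
    """KL(p || q) for probability vectors."""
    p = np.clip(p, tiny, 1.0)
    q = np.clip(q, tiny, 1.0)
    return float(np.sum(p * np.log(p / q)))

def tradeoff_loss(p_clean, p_pert, y, lam):
    """L_CE(p(x), y) + lam * L_KL(p(x), p(x')) for one example.

    With p_pert = p(x_hyp) this matches the THRM objective (Equation 6);
    with p_pert = p(x_rev) it matches the TRADES objective (Equation 8).
    """
    ce = -np.log(max(p_clean[y], 1e-12))
    return ce + lam * kl_div(p_clean, p_pert)
```

When the output distribution is stable under the perturbation, the KL term vanishes and the loss reduces to the plain CE loss; larger lam trades natural accuracy for stability.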
We believe that these experiments are beneficial to the ML community for further understanding the tradeoffs and relative merits of THRM and TRADES.\nResults on MNIST (ε = 0.2) and CIFAR-10 (ε = 2/255) are shown in Figure 3. Each data point represents a model trained with a different λ. More results, including a comparison with Madry’s defense (Madry et al., 2018), are provided in Appendix A.3 due to the limited space. First, we observe that, on both datasets, as the regularization parameter λ increases, the natural risk Rnat increases while the hypocritical risk R̂hyp decreases, which verifies the effectiveness of our proposed method and the theoretical analysis in Proposition 2, where we reveal that TRADES is capable of minimizing a looser upper bound of the hypocritical risk. Second, we show that THRM achieves a better tradeoff on MNIST since it optimizes a tighter upper bound than TRADES. However, the situation becomes nuanced on CIFAR-10. As we can see in Figure 3(b), THRM seems to behave better in the beginning when λ is small but is surpassed by TRADES when λ increases. Overall, optimizing only a tighter upper bound of the hypocritical risk achieves a better tradeoff on the test set when the task is relatively simple (e.g., on MNIST with ε = 0.2), while simultaneously optimizing the hypocritical risk and the adversarial risk achieves a better tradeoff on the test set when the task tends to be hard (e.g., on CIFAR-10 with ε = 2/255 and ε = 8/255).\nThe above phenomenon shows that, when dealing with finite sample sizes and finite-time gradient-descent-trained classifiers, better adversarial robustness may help the generalization of hypocritical robustness, which conforms to our intuition that they are two sides of the same coin. Interestingly, a contemporary work claims that, on CIFAR-10, TRADES achieves better adversarial robustness than Madry’s defense in fair hyperparameter settings (Anonymous, 2021). Thus there may be potential mutual benefits between adversarial robustness and hypocritical robustness.
After all, robust training objectives force DNNs to be invariant to signals that humans are invariant to, which may lead to feature representations that are more similar to what humans use (Salman et al., 2020). A rigorous treatment of the synergism is beyond the scope of the current paper but is an important future direction." }, { "heading": "4.2 TRANSFERABILITY ANALYSIS", "text": "Transferability of adversarial examples across models is well known (Tramèr et al., 2017; Papernot et al., 2017b; Ilyas et al., 2019), and here we examine the transferability of hypocritical examples on MNIST and CIFAR-10. We observe that hypocritical examples: i) transfer easily between naturally trained models; ii) are hard to transfer from randomly initialized models to other models (and vice versa); iii) are hard to transfer from standard models to defended models; and iv) when generated from THRM models, usually have high transferability. Experimental details are in Appendix A.4. Better transferability is beneficial for black-box attacks but is not always desired by hypocritical attackers. A hypocritical attacker only expects high transferability to the targeted model the attacker chose to help. If there are other competing models available to the deployer, the attacker actually does not want the hypocritical examples to successfully transfer to those competing models. Thus fine-grained attack methods are required. We leave this to future work." }, { "heading": "5 DISCUSSION", "text": "The false friends considered in this paper are as powerful as typical adversaries. They all know the ground-truth labels of clean examples. Such powerful friends actually can help a model not only to correctly classify a misclassified clean example but also to correctly classify an adversarial example crafted by an adversary. One may expect to rely on true friends against adversaries. Unfortunately, an omniscient and faithful friend is unachievable in practical tasks, so far at least.
Once such a friend is achieved, the problem of robustness disappears immediately. What we can do at present is to use a relatively more robust model as a surrogate of the true friend to improve the robustness of a weak model. This induces a promising general method in practice: high-performance models can be employed as true friends to help a weak model without exposing training data and model weights, for the purpose of privacy protection and knowledge transfer (Abadi et al., 2016; Papernot et al., 2017a). Additional discussions are in Appendix C." }, { "heading": "6 CONCLUSION", "text": "In this work, we expose a new risk arising from excessive sensitivity. Model performance becomes hypocritical in the presence of false friends. By formalizing the hypocritical risk and analyzing its relation with natural risk and adversarial risk, we propose to use THRM and TRADES as defense methods against hypocritical perturbations. Extensive experiments verify the effectiveness of the methods. These findings open new avenues for mitigating and exploiting model sensitivity." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.1 DETAILS IN FIGURE 1", "text": "Attack procedure. In adversarial attacks, we perturb clean inputs to maximize the surrogate loss using PGD. In hypocritical attacks, we perturb clean inputs to minimize the surrogate loss using PGD. In both attacks, for the purpose of imperceptibility, we execute a 100-step PGD attack (step size ε/50) with early stopping on ImageNet, and the budget here is ε = 2/255.

More examples. More adversarial examples and hypocritical examples generated on ImageNet using our methods are shown in Figure 5. More hypocritical examples generated on MNIST and CIFAR-10 are shown in Figure 4(a) and Figure 4(b). The victim models are LeNet (Standard) and Wide ResNet (Standard) for MNIST and CIFAR-10, respectively. They are trained with the same procedures described in Appendix A.2.
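The two PGD loops described above (ascend on the loss for the adversary, descend for the false friend) can be sketched compactly. Below is a minimal NumPy illustration on a toy linear model with a logistic loss; the model, weights, and loss here are hypothetical stand-ins for the trained networks used in the actual experiments:

```python
import numpy as np

def pgd(x, grad_fn, eps, step, n_steps, minimize=True):
    """Signed-gradient PGD inside an l_inf ball of radius eps around x.

    minimize=True descends the loss (hypocritical attack, "false friend");
    minimize=False ascends it (adversarial attack)."""
    direction = -1.0 if minimize else 1.0
    x_pert = x.copy()
    for _ in range(n_steps):
        g = grad_fn(x_pert)
        x_pert = x_pert + direction * step * np.sign(g)
        x_pert = np.clip(x_pert, x - eps, x + eps)  # project into the l_inf ball
        x_pert = np.clip(x_pert, 0.0, 1.0)          # keep a valid input range
    return x_pert

# Toy setup (hypothetical): logistic loss of a fixed linear "model" w.x, label y = +1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.6, 0.2])
loss = lambda z: np.log1p(np.exp(-np.dot(w, z)))    # cross-entropy for y = +1
grad = lambda z: -w / (1.0 + np.exp(np.dot(w, z)))  # analytic gradient of the loss

eps = 0.2
x_hyp = pgd(x, grad, eps, step=eps / 50, n_steps=100, minimize=True)   # false friend
x_adv = pgd(x, grad, eps, step=eps / 50, n_steps=100, minimize=False)  # adversary
```

As expected, the hypocritical perturbation lowers the loss and the adversarial one raises it, while both stay inside the ε-ball around the clean input.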
In both attacks, for the purpose of imperceptibility, we execute 100-step PGD attacks (step size ε/50) with early stopping on MNIST and CIFAR-10. The budget for MNIST here is ε = 0.2. The budget for CIFAR-10 here is ε = 8/255." }, { "heading": "A.2 DETAILS IN SECTION 2.1", "text": "Architecture. For MNIST, a four-layer multilayer perceptron (MLP) (2 hidden layers, 768 neurons in each) with ReLU activations and a variant of the LeNet model (2 convolutional layers of sizes 32 and 64, and a fully connected layer of size 1024) are adopted. For CIFAR-10, a four-layer MLP (2 hidden layers, 3072 neurons in each) with ReLU activations, a ResNet18 (He et al., 2016a) and a Wide ResNet (Zagoruyko & Komodakis, 2016) (with depth 28 and width factor 10) are adopted. For ImageNet, a VGG16 (Simonyan & Zisserman, 2014) and a ResNet50 (He et al., 2016a) are adopted.

Training procedure. i) Models trained with the standard approach using clean examples (Standard). For MNIST, models are trained for 80 epochs with the Adam optimizer with batch size 128 and a learning rate of 0.001. Early stopping is done by holding out 1000 examples from the MNIST training set. For CIFAR-10, models are trained for 150 epochs with the SGD optimizer with batch size 128; the learning rate starts at 0.1 and is divided by 10 at 90 and 125 epochs. We apply weight decay of 2e-4 and momentum of 0.9. Early stopping is done by holding out 1000 examples from the CIFAR-10 training set. For ImageNet, we use the pretrained standard models available within PyTorch (torchvision.models). ii) Models that are randomly initialized without training (Naive). For all models, we use the default PyTorch initialization, except that we initialize the convolutional weights in Wide ResNet with He initialization (He et al., 2015). We conduct all the experiments using a single NVIDIA Tesla V100 GPU.
Each experiment is conducted 3 times with different random seeds, except for the standard models trained on ImageNet, for which we use the pretrained standard models available within PyTorch.

Attack procedure. In adversarial attacks, we perturb clean inputs to maximize the surrogate loss using PGD. In hypocritical attacks, we perturb clean inputs to minimize the surrogate loss using PGD. In both attacks, we execute 50-step PGD attacks (step size ε/10) with 20 random restarts on MNIST and CIFAR-10, and we use 50-step PGD attacks (step size ε/8) on ImageNet. Other hyperparameter choices did not offer a significant change in accuracy. On MNIST, the hypocritically perturbed set F and the adversarially perturbed set A are constructed by attacking every example in the clean test set sampled from D. Both attacks are bounded by an l∞ ball with radius ε = 0.2. On CIFAR-10, both attacks are bounded by an l∞ ball with radius ε = 8/255. On ImageNet, F and A are constructed based on its validation set sampled from D. Both attacks are bounded by an l∞ ball with radius ε = 16/255.

Numerical results. The attack results on CIFAR-10 are shown in Table 4. Full results of Table 1, Table 2 and Table 4 are shown in Table 5, Table 6 and Table 7, respectively. Moreover, we show the attack results of 9 Naive models evaluated on ImageNet in Table 6. We find that all the Naive models in the VGG family achieve high accuracy on F and all the Naive models in the ResNet family have relatively poor performance on F. Especially, the Naive (ResNet152) model in Trial 1 is invariant to hypocritical perturbations. Even in the presence of a strong false friend, the hypocritical performance is still as low as the clean performance (only 0.1%). We carefully examined the Naive (ResNet152) model and found that it is actually a trivial classifier, which classifies almost all the points in the input region [0, 1]^d as a certain class for some simple reasons, such as poor scaling of network weights at initialization.
Therefore, it is not enough to blindly pursue robustness against hypocritical perturbations while ignoring the performance on clean examples. Once we train a Naive model with clean examples, the model becomes vulnerable immediately (see Standard (ResNet50)), whereas the trained weights are better conditioned (Elsayed et al., 2019)." }, { "heading": "A.3 DETAILS IN SECTION 4.1", "text": "Architecture. For MNIST, a variant of the LeNet model (2 convolutional layers of sizes 32 and 64, and a fully connected layer of size 1024) is adopted. For CIFAR-10, a Wide ResNet (with depth 28 and width factor 10) is adopted.

Training procedure. For the wide range of the scaling parameter λ, we conduct experiments in parallel over multiple NVIDIA Tesla V100 GPUs. Each experiment is conducted 3 times with different random seeds. For MNIST, all models (including Standard, Madry, TRADES, THRM) are trained for 80 epochs with the Adam optimizer with batch size 128 and a learning rate of 0.001. Early stopping is done by holding out 1000 examples from the MNIST training set as suggested in Rice et al. (2020). For CIFAR-10, all models are trained for 150 epochs with the SGD optimizer with batch size 128; the learning rate starts at 0.1 and is divided by 10 at 90 and 125 epochs. We apply weight decay of 2e-4 and momentum of 0.9. Early stopping is done by holding out 1000 examples from the CIFAR-10 training set as suggested in Rice et al. (2020).

Attack procedure. For the inner maximization in the objective function of THRM, we perturb clean inputs to minimize the CE loss as the surrogate loss. For the inner maximization in TRADES, we maximize the KL divergence as the surrogate loss. For the inner maximization in Madry, we maximize the CE loss as the surrogate loss. On MNIST, the training attack is PGD with random start and 10 iterations (step size ε/4).
On CIFAR-10, the training attack is PGD with random start and 10 iterations (step size ε/4) when ε = 8/255, and the training attack is PGD with random start and 7 iterations (step size ε/3) when ε = 1/255 and ε = 2/255. In all experiments, the test attack is 50-step PGD (step size ε/10) with 20 random restarts. Other hyperparameter choices did not offer a significant change in accuracy.

Numerical results. The natural risk reported here is estimated on the test set. The hypocritical risk reported here is estimated on the test set and is actually an approximation of the real value, since we approximately solve the optimization problem by PGD on examples from the test set. Results on MNIST (ε = 0.2) and CIFAR-10 (ε = 1/255, ε = 2/255 and ε = 8/255) are shown in Figure 6. Each point in Figure 6 represents one model trained with a certain λ. Full numerical results on MNIST (ε = 0.2) and CIFAR-10 (ε = 1/255, ε = 2/255 and ε = 8/255) can be found in Table 8, Table 9, Table 10 and Table 11, respectively. On MNIST (ε = 0.2), THRM has a better tradeoff than TRADES. However, when the task becomes hard, TRADES performs as well as or better than THRM. On CIFAR-10, as the task becomes harder (the larger the radius ε, the harder the task), the gap between TRADES and THRM becomes larger. This phenomenon shows that better adversarial robustness may help the generalization of hypocritical robustness, especially when the task is hard. Moreover, we compare our methods with Madry et al. (2018)’s defense designed for adversarial robustness (denoted as Madry)¹ and the standard training method (denoted as Standard). We summarize the results in Table 12. For direct comparison, we pick a certain λ for each model trained by TRADES and THRM in each task. We observed that, in all tasks, Madry’s defense has nonnegligible robustness on hypocritical examples, although there is no hypocritical risk or its upper bound in the objective function.
This phenomenon indicates that optimizing only the adversarial risk could bring a certain degree of robustness against hypocritical examples. While these experimental results partly support our hypothesis (i.e., the potential mutual benefits between robustness against adversarial perturbations and hypocritical perturbations), we do not take the evidence as an ultimate one, and further exploration is needed. We note that the standard deviation becomes larger when λ is bigger in TRADES and THRM, which is attributed to optimization difficulty and results in more significant differences among trials. Reducing the initial learning rate may mitigate this phenomenon.

For completeness, we further evaluate the adversarial risk on correctly classified examples of the models trained by THRM and TRADES. Results on MNIST (ε = 0.2) and CIFAR-10 (ε = 2/255) are summarized in Table 13 and Table 14, respectively. One interesting finding is that models trained with THRM manifest noteworthy adversarial robustness, especially on CIFAR-10, although there is no adversarial risk in the objective function of THRM. These facts also support the hypothesis (i.e., the potential mutual benefits between robustness against adversarial perturbations and hypocritical perturbations).

¹They actually optimize the adversarial risk in Equation 3 via a surrogate loss." }, { "heading": "A.4 DETAILS IN SECTION 5", "text": "Hypocritical attacks here are executed by 50-step PGD (step size ε/10) on source models. Note that the optimization method we use here is not meant to pursue state-of-the-art transferability, but to examine the transferability of hypocritical examples. There are many methods designed to improve the transferability of adversarial examples that may be extended to hypocritical examples (Liu et al., 2017; Dong et al., 2018; Wu et al., 2020a). Figure 7 shows the transferability heatmap of the hypocritical attack over 9 models trained on MNIST.
Figure 8 shows the transferability heatmap of the hypocritical attack over 7 models trained on CIFAR-10. The value in the i-th row and j-th column of each heatmap matrix is the proportion of the hypocritical examples successfully transferred to target model j out of all hypocritical examples generated by source model i (including both successful and failed attacks on the source model)." }, { "heading": "B PROOFS OF MAIN RESULTS", "text": "In this section, we provide the proofs of our main results." }, { "heading": "B.1 A PROOF OF PROPOSITION 1", "text": "Proposition 1. Denote the adversarial risk on correctly classified examples by

R̂adv(f) = E_{(x,y)∼S⁺_f}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ],

then we have Radv(f) = Rnat(f) + (1 − Rnat(f)) R̂adv(f)." }, { "heading": "Proof.", "text": "
Radv(f) = E_{(x,y)∼D}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ]
= E_{(x,y)∼D}[ 1(f(x) = y) · max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ] + E_{(x,y)∼D}[ 1(f(x) ≠ y) · max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ]
= Rnat(f) E_{(x,y)∼S⁻_f}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ] + (1 − Rnat(f)) E_{(x,y)∼S⁺_f}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ]
= Rnat(f) E_{(x,y)∼S⁻_f}[ 1(f(x) ≠ y) ] + (1 − Rnat(f)) E_{(x,y)∼S⁺_f}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ]
= Rnat(f) + (1 − Rnat(f)) E_{(x,y)∼S⁺_f}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ]
= Rnat(f) + (1 − Rnat(f)) R̂adv(f)." }, { "heading": "B.2 A PROOF OF THEOREM 1", "text": "Theorem 1. For any data distribution D and its corresponding conditional distribution on misclassified examples S⁻_f w.r.t. a classifier f, we have

E_{(x,y)∼S⁻_f} 1(f(xhyp) = y) [= R̂hyp(f)] ≤ E_{(x,y)∼S⁻_f} 1(f(xhyp) ≠ f(x)) [= Rhyp(f)] ≤ E_{(x,y)∼S⁻_f} 1(f(xrev) ≠ f(x)) [= Rhyp(f)],

where xhyp = arg max_{x′∈B_ε(x)} 1(f(x′) = y) and xrev = arg max_{x′∈B_ε(x)} 1(f(x′) ≠ f(x))." }, { "heading": "Proof.", "text": ""
To prove the first inequality, we have

R̂hyp(f) = E_{(x,y)∼S⁻_f}[ max_{x′∈B_ε(x)} 1(f(x′) = y) ] = E_{(x,y)∼S⁻_f} 1(f(xhyp) = y) ≤ E_{(x,y)∼S⁻_f} 1(f(xhyp) ≠ f(x)),

where the above inequality involves two conditions:

1(f(xhyp) = y) = { 1 = 1(f(xhyp) ≠ f(x)), if f(xhyp) = y; 0 ≤ 1(f(xhyp) ≠ f(x)), if f(xhyp) ≠ y.

To prove the second inequality, we have

Rhyp(f) = E_{(x,y)∼S⁻_f} 1(f(xhyp) ≠ f(x)) ≤ E_{(x,y)∼S⁻_f} 1(f(xrev) ≠ f(x)).

Since (x, y) ∼ S⁻_f, we have f(x) ≠ y. If there exists an xhyp such that f(xhyp) = y, then f(xhyp) ≠ f(x). Now let xrev = xhyp; then f(xrev) ≠ f(x) is true. Otherwise, if we cannot find an xhyp such that f(xhyp) = y, there still exists a possibility to find an xrev such that f(xrev) ≠ y but f(xrev) ≠ f(x) is true. Therefore, the above inequalities hold." }, { "heading": "B.3 A PROOF OF PROPOSITION 2", "text": "Proposition 2. Rrev(f) = (1 − Rnat(f)) R̂adv(f) + Rnat(f) Rhyp(f) = E_{(x,y)∼D} 1(f(xrev) ≠ f(x))." }, { "heading": "Proof.", "text": "
Rrev(f) = (1 − Rnat(f)) R̂adv(f) + Rnat(f) Rhyp(f)
= (1 − Rnat(f)) E_{(x,y)∼S⁺_f}[ 1(f(xadv) ≠ y) ] + Rnat(f) E_{(x,y)∼S⁻_f}[ 1(f(xrev) ≠ f(x)) ]
= E_{(x,y)∼D}[ 1(f(x) = y) · 1(f(xadv) ≠ y) ] + E_{(x,y)∼D}[ 1(f(x) ≠ y) · 1(f(xrev) ≠ f(x)) ]
= E_{(x,y)∼D}[ 1(f(x) = y) · max_{x′∈B_ε(x)} 1(f(x′) ≠ y) ] + E_{(x,y)∼D}[ 1(f(x) ≠ y) · max_{x′∈B_ε(x)} 1(f(x′) ≠ f(x)) ]
= E_{(x,y)∼D}[ 1(f(x) = y) · max_{x′∈B_ε(x)} 1(f(x′) ≠ f(x)) ] + E_{(x,y)∼D}[ 1(f(x) ≠ y) · max_{x′∈B_ε(x)} 1(f(x′) ≠ f(x)) ]
= E_{(x,y)∼D}[ max_{x′∈B_ε(x)} 1(f(x′) ≠ f(x)) ]
= E_{(x,y)∼D}[ 1(f(xrev) ≠ f(x)) ]." }, { "heading": "C ADDITIONAL DISCUSSIONS", "text": "We showed that correctly classified examples (hypocritical examples) can easily be found in the vicinity of misclassified clean examples. As a result, a hypocritically perturbed set can be constructed with these hypocritical examples. The victim model’s standard accuracy evaluated on the hypocritically perturbed set becomes higher than that on the clean set.
It is then natural to wonder: what about the adversarially robust accuracy (i.e., accuracy under adversarial perturbations) of the victim model on hypocritical examples? It is easy to see that, if the adversary is bounded by the same ε-ball as the false friend, the model’s adversarial accuracy evaluated on the hypocritically perturbed set is zero, since a misclassified example exists in the ball of a hypocritical example (by definition). However, if the adversary’s power is restricted to another δ-ball such that δ < ε, then a robust hypocritical example may exist in the vicinity of a clean example so that a δ-bounded adversary cannot change the model’s prediction on the robust hypocritical example. In such a case, the model’s adversarial accuracy evaluated on the robustly hypocritically perturbed set could be higher than that on the clean set. New attack and defense methods are required to further explore this phenomenon." }, { "heading": "D TRADEOFF BETWEEN ADVERSARIAL AND HYPOCRITICAL RISKS", "text": "Although the experiments in Section 4.1 and Appendix A.3 showed that, when dealing with finite sample size and finite-time gradient-descent trained classifiers, there may be mutual benefits between adversarial robustness and hypocritical robustness on real-world datasets, we note that in general, this synergism does not necessarily exist. We illustrate this by providing another toy example here, which is inspired by the precision-recall tradeoff (Buckland & Gey, 1994; Alvarez, 2002).

Consider the case (x, y) ∈ R² × {−1, +1} from a distribution D, where the marginal distribution over the instance space is a uniform distribution over [0, 1]². We assume that the decision boundary of the oracle (ground truth) is a circle:

O(x) = sign(r − ‖x − c‖₂),    (9)

where the centre c = (0.5, 0.5)^T and the radius r = 0.4. The points inside the circle are labeled as belonging to the positive class; otherwise they are labeled as belonging to the negative class.
We consider the linear classifier f with fixed w = (0, 1)^T and a tunable threshold b:

f(x) = sign(w^T x − b) = sign(x₂ − b).    (10)

See Figure 9 for a visualization of the oracle and the linear classifier over the instance space. In this problem, we can show the tradeoffs by tuning the threshold b of the linear classifier over [0, 1]. The precision is the number of true positives (i.e., the number of examples correctly classified as the positive class) divided by the sum of true positives and false positives (i.e., the number of examples misclassified as the positive class). The recall is the number of true positives divided by the sum of true positives and false negatives (i.e., the number of examples misclassified as the negative class). We compare the adversarial risk on correctly classified examples R̂adv(f) defined in Proposition 1 and the hypocritical risk on misclassified examples R̂hyp(f) defined in Definition 3. The computing formulas of these values are visualized in Figure 10. Here we choose the bounded l₂ ball B_ε(x) = {x′ ∈ R² : ‖x′ − x‖₂ ≤ ε} with ε = 0.1 as the threat model. Figure 11 plots the curves of precision and recall versus the threshold b. We can see that there is an obvious precision-recall tradeoff between the two gray dotted lines. Similarly, Figure 12 plots the curves of R̂adv(f) and R̂hyp(f) versus the threshold b. We can see that the tradeoff exists almost everywhere: as the adversarial risk increases, the hypocritical risk decreases, and vice versa." } ]
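The toy construction above is easy to reproduce numerically. Below is a minimal Monte-Carlo sketch in pure NumPy using the parameter values stated in this appendix (the sample size and random seed are arbitrary choices of ours); for the classifier sign(x₂ − b), an l₂ perturbation of norm at most ε can change the prediction exactly when a point lies within ε of the line x₂ = b, which makes both risks closed-form per point. As a by-product, the decomposition in Proposition 2 can be checked on the samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, b = 200_000, 0.1, 0.5               # sample size and seed are arbitrary
c, r = np.array([0.5, 0.5]), 0.4            # oracle circle from Eq. (9)

x = rng.random((n, 2))                      # uniform points in [0, 1]^2
y = np.sign(r - np.linalg.norm(x - c, axis=1))  # oracle labels, Eq. (9)
pred = np.sign(x[:, 1] - b)                     # linear classifier, Eq. (10)

# The prediction of sign(x2 - b) can be flipped by an l2 perturbation of
# norm <= eps iff the point is within eps of the decision line x2 = b.
flippable = np.abs(x[:, 1] - b) <= eps

correct = pred == y
r_nat = 1.0 - correct.mean()                # natural risk
adv_risk = flippable[correct].mean()        # R^_adv: correct points an adversary can flip
hyp_risk = flippable[~correct].mean()       # R^_hyp: misclassified points a false friend can fix
                                            # (binary labels, so any flip yields the true label)
r_rev = flippable.mean()                    # E[1(f(x_rev) != f(x))] over all points
```

With b = 0.5 both risks are strictly between 0 and 1, and `r_rev` equals `(1 - r_nat) * adv_risk + r_nat * hyp_risk`, mirroring Proposition 2.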
2020
WITH FALSE FRIENDS LIKE THESE, WHO CAN HAVE SELF-KNOWLEDGE?
SP:916fbf4e8da5fb73f5012ec5711662cd9be2e067
[ "This paper aims to answer a very important and difficult question, i.e., given a clustering application, what are the desirable qualities (i.e., similarity indices) to have. This work argues that there are many clustering similarity indices with (sometimes) disagreements among them. The authors run experiments on 16 real-world datasets and 8 well-known clustering algorithms and provide a theoretical solution and a list of desirable properties that can help practitioners make informed decisions. Moreover, the authors also discuss the important pros and cons of the similarity indices in the context of the applications.", "Cluster Similarity Indices (CSIs) take as input two clusterings A, B and assign a similarity score for the given pair of clusterings. The index calculates a score based on the number of pairs of elements that are clustered together in both clusterings (N++), those that are clustered together in neither of A, B (N--), those that are clustered together in A but not in B (N+-), and vice versa (N-+). CSIs can be used to evaluate clusterings produced by different algorithms with respect to some reliable reference clustering on a single instance, and to choose the one that is the closest to the reference clustering (as indicated by the CSIs). The selected clustering algorithm can then be applied to different instances of the same kind where we do not have a reference clustering. " ]
There are many cluster similarity indices used to evaluate clustering algorithms, and choosing the best one for a particular task remains an open problem. We demonstrate that this problem is crucial: there are many disagreements among the indices, these disagreements do affect which algorithms are chosen in applications, and this can lead to degraded performance in real-world systems. We propose a theoretical solution to this problem: we develop a list of desirable properties and theoretically verify which indices satisfy them. This allows for making an informed choice: given a particular application, one can first select properties that are desirable for a given application and then identify indices satisfying these. We observe that many popular indices have significant drawbacks. Instead, we advocate using other ones that are not so widely adopted but have beneficial properties.
[]
[ { "authors": [ "Ahmed N. Albatineh", "Magdalena Niewiadomska-Bugaj", "Daniel Mihalko" ], "title": "On similarity indices and correction for chance agreement", "venue": "Journal of Classification,", "year": 2006 }, { "authors": [ "Mehdi Allahyari", "Seyedamin Pouriyeh", "Mehdi Assefi", "Saied Safaei", "Elizabeth D Trippe", "Juan B Gutierrez", "Krys Kochut" ], "title": "A brief survey of text mining: Classification, clustering and extraction techniques", "venue": "arXiv preprint arXiv:1707.02919,", "year": 2017 }, { "authors": [ "Alessia Amelio", "Clara Pizzuti" ], "title": "Is normalized mutual information a fair measure for comparing community detection methods", "venue": "In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining", "year": 2015 }, { "authors": [ "Enrique Amigó", "Julio Gonzalo", "Javier Artiles", "Felisa Verdejo" ], "title": "A comparison of extrinsic clustering evaluation metrics based on formal constraints", "venue": "Information retrieval,", "year": 2009 }, { "authors": [ "Vladimir Batagelj", "Matevz Bren" ], "title": "Comparing resemblance measures", "venue": "Journal of Classification,", "year": 1995 }, { "authors": [ "Shai Ben-David", "Margareta Ackerman" ], "title": "Measures of clustering quality: A working set of axioms for clustering", "venue": "Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Shai Ben-David", "Margareta Ackerman" ], "title": "Measures of clustering quality: A working set of axioms for clustering", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Seung-Seok Choi", "Sung-Hyuk Cha", "Charles C Tappert" ], "title": "A survey of binary similarity and distance measures", "venue": "Journal of Systemics, Cybernetics and Informatics,", "year": 2010 }, { "authors": [ "Claire Donnat", "Susan Holmes" ], "title": "Tracking network dynamics: a survey of distances and similarity metrics, 2018", 
"venue": null, "year": 2018 }, { "authors": [ "Lawrence Hubert" ], "title": "Nominal scale response agreement as a generalized correlation", "venue": "British Journal of Mathematical and Statistical Psychology,", "year": 1977 }, { "authors": [ "Lawrence Hubert", "Phipps Arabie" ], "title": "Comparing partitions", "venue": "Journal of classification,", "year": 1985 }, { "authors": [ "Jon Kleinberg" ], "title": "An impossibility theorem for clustering", "venue": "Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Sven Kosub" ], "title": "A note on the triangle inequality for the jaccard distance", "venue": "Pattern Recognition Letters,", "year": 2018 }, { "authors": [ "Yang Lei", "James C. Bezdek", "Simone Romano", "Nguyen Xuan Vinh", "Jeffrey Chan", "James Bailey" ], "title": "Ground truth bias in external cluster validity indices", "venue": "Pattern Recognition,", "year": 2031 }, { "authors": [ "Marina Meilă" ], "title": "Comparing clusteringsan information based distance", "venue": "Journal of multivariate analysis,", "year": 2007 }, { "authors": [ "Mark EJ Newman", "Michelle Girvan" ], "title": "Finding and evaluating community structure in networks", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "S Romano", "J Bailey", "Nguyen the vinh", "Karin Verspoor" ], "title": "Standardized mutual information for clustering comparisons: One step further in adjustment for chance", "venue": "31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Simone Romano", "Nguyen Xuan Vinh", "James Bailey", "Karin Verspoor" ], "title": "Adjusting for chance clustering comparison measures", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Alexander Strehl" ], "title": "Relationship-based clustering and cluster ensembles for high-dimensional data mining", "venue": "PhD thesis,", "year": 2002 }, { "authors": [ "Stijn Van Dongen", "Anton J Enright" ], "title": "Metric 
distances derived from cosine similarity and pearson and spearman correlations", "venue": "arXiv preprint arXiv:1208.3145,", "year": 2012 }, { "authors": [ "Twan Van Laarhoven", "Elena Marchiori" ], "title": "Axioms for graph clustering quality functions", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Nguyen Xuan Vinh", "Julien Epps", "James Bailey" ], "title": "Information theoretic measures for clusterings comparison: is a correction for chance necessary", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Nguyen Xuan Vinh", "Julien Epps", "James Bailey" ], "title": "Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Seppo Virtanen", "Mark Girolami" ], "title": "Precision-recall balanced topic modelling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zhongdao Wang", "Liang Zheng", "Yali Li", "Shengjin Wang" ], "title": "Linkage based face clustering via graph convolution network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wahyu Wibowo", "Hugh E Williams" ], "title": "Strategies for minimising errors in hierarchical web categorisation", "venue": "In Proceedings of the eleventh international conference on Information and knowledge management,", "year": 2002 }, { "authors": [ "Dongkuan Xu", "Yingjie Tian" ], "title": "A comprehensive survey of clustering algorithms", "venue": "Annals of Data Science,", "year": 2015 }, { "authors": [ "Lei" ], "title": "ARI. Therefore, there was no need to construct ARI from Rand in the first place", "venue": null, "year": 2017 }, { "authors": [ "Amigó" ], "title": "2009) is the constant baseline which was not analyzed in their work. 
We find this property extremely important while it is failed by most of the widely used indices including their BCubed. To conclude, our research gives a more comprehensive list of constraints and focuses on those that are desirable in a wide range of applications", "venue": null, "year": 2009 }, { "authors": [ "Jaccard. In Kosub" ], "title": "2019), it is proven that the Jaccard distance 1− J is indeed a distance. Correlation Distance", "venue": null, "year": 2019 }, { "authors": [ "FNMI", "Wallace" ], "title": "These indices cannot be transformed to distances as they are not symmetric. SMI. SMI does not satisfy the maximal agreement property", "venue": "(Romano et al.,", "year": 2014 }, { "authors": [ "Van Dongen", "Enright" ], "title": "We give an alternative proof that allows for a geometric interpretation. First we map each partition A to an N -dimensional vector on the unit sphere", "venue": null, "year": 2012 }, { "authors": [ "LEI" ], "title": "2017) describe the following biases for cluster similarity indices: NCinc — the average value for a random guess increases monotonically with the Number of Clusters (NC) of the candidate; NCdec — the average value for a random guess decreases monotonically with the number of clusters, and GTbias — the direction of the monotonicity depends on the specific Ground Truth (GT)", "venue": null, "year": 2017 }, { "authors": [ "wine", "wisc", "cpu", "iono", "iris", "sonar", "thy", "zoo" ], "title": "On these datasets, we ran 8 well-known clustering algorithms (Scikit-learn, 2020): KMeans, AffinityPropagation, MeanShift, AgglomerativeClustering, DBSCAN, OPTICS, Birch, GaussianMixture. For AgglomerativeClustering, we used 4 different linkage types (‘ward’, ‘average’, ‘complete’, ‘single’). For GaussianMixture, we used 4 different covariance types (‘spherical’, ‘diag’, ‘tied’, ‘full’)", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Clustering is an unsupervised machine learning problem, where the task is to group objects that are similar to each other. In network analysis, a related problem is called community detection, where grouping is based on relations between items (links), and the obtained clusters are expected to be densely interconnected. Clustering is used across various applications, including text mining, online advertisement, anomaly detection, and many others (Allahyari et al., 2017; Xu & Tian, 2015).\nTo measure the quality of a clustering algorithm, one can use either internal or external measures. Internal measures evaluate the consistency of the clustering result with the data being clustered, e.g., Silhouette, Hubert-Gamma, Dunn, and many other indices. Unfortunately, it is unclear whether optimizing any of these measures would translate into improved quality in practical applications. External (cluster similarity) measures compare the candidate partition with a reference one (obtained, e.g., by human assessors). A comparison with such a gold standard partition, when it is available, is more reliable. There are many tasks where external evaluation is applicable: text clustering (Amigó et al., 2009), topic modeling (Virtanen & Girolami, 2019), Web categorization (Wibowo & Williams, 2002), face clustering (Wang et al., 2019), news aggregation (see Section 3), and others. Often, when there is no reference partition available, it is possible to let a group of experts annotate a subset of items and compare the algorithms on this subset.\nDozens of cluster similarity measures exist and which one should be used is a subject of debate (Lei et al., 2017). In this paper, we systematically analyze the problem of choosing the best cluster similarity index. We start with a series of experiments demonstrating the importance of the problem (Section 3). 
First, we construct simple examples showing the inconsistency of all pairs of different similarity indices. Then, we demonstrate that such disagreements often occur in practice when well-known clustering algorithms are applied to real datasets. Finally, we illustrate how an improper choice of a similarity index can affect the performance of production systems.\nSo, the question is: how to compare cluster similarity indices and choose the best one for a particular application? Ideally, we would want to choose an index for which good similarity scores translate to good real-world performance. However, opportunities to experimentally perform such a validation of validation indices are rare, typically expensive, and do not generalize to other applications. In contrast, we suggest a theoretical approach: we formally define properties that are desirable across various applications, discuss their importance, and formally analyze which similarity indices satisfy them (Section 4). This theoretical framework would allow practitioners to choose the best index\nbased on relevant properties for their applications. In Section 5, we advocate two indices that are expected to be suitable across various applications.\nWhile many ideas discussed in the paper can be applied to all similarity indices, we also provide a more in-depth theoretical characterization of pair-counting ones (e.g., Rand and Jaccard), which gives an analytical background for further studies of pair-counting indices. We formally prove that among dozens of known indices, only two have all the properties except for being a distance: Correlation Coefficient and Sokal & Sneath’s first index (Lei et al., 2017). Surprisingly, both indices are rarely used for cluster evaluation. The correlation coefficient has an additional advantage of being easily convertible to a distance measure via the arccosine function. 
The obtained index has all the properties except constant baseline, which is still satisfied asymptotically.
Constant baseline is a particular focus of the current research: this is one of the most important and non-trivial properties. Informally, a sensible index should not prefer one candidate partition over another just because it has too large or too small clusters. To the best of our knowledge, we are the first to develop a rigorous theoretical framework for analyzing this property. In this respect, our work improves over the previous (mostly empirical) research on the constant baseline of particular indices (Albatineh et al., 2006; Lei et al., 2017; Strehl, 2002; Vinh et al., 2009; 2010); we refer to Appendix A for a detailed comparison to related research." }, { "heading": "2 CLUSTER SIMILARITY INDICES", "text": "We assume that there is a set of elements V with size n = |V |. A clustering is a partition of V into disjoint subsets. Capital letters A,B,C will be used to name the clusterings, and we will represent them as A = {A1, . . . , AkA}, where Ai is the set of elements belonging to the i-th cluster. If a pair of elements v, w ∈ V lie in the same cluster in A, we refer to them as an intra-cluster pair of A, while inter-cluster pair will be used otherwise. The total number of pairs is denoted by N = (n choose 2) = n(n−1)/2. The value that an index I assigns to the similarity between partitions A and B will be denoted by I(A,B). Let us now define some of the indices used throughout the paper, while a more comprehensive list, together with formal definitions, is given in Appendix B.1 and B.2.
Pair-counting indices consider clusterings to be similar if they agree on many pairs. Formally, let ~A be the N-dimensional vector indexed by the set of element-pairs, where the entry corresponding to (v, w) equals 1 if (v, w) is an intra-cluster pair and 0 otherwise. Further, let MAB be the N × 2 matrix that results from concatenating the two (column-) vectors ~A and ~B. 
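As a concrete illustration of this construction, the rows of MAB and the resulting counts of agreeing and disagreeing element pairs can be computed directly from two label vectors; a minimal Python sketch (function and variable names are ours, not from the paper):

```python
from itertools import combinations

def pair_counts(a, b):
    """Count pair agreements between two clusterings given as label lists.

    Returns (N11, N10, N01, N00): pairs that are intra-cluster in both
    clusterings, only in the first, only in the second, or in neither.
    """
    n11 = n10 = n01 = n00 = 0
    for i, j in combinations(range(len(a)), 2):
        same_a = a[i] == a[j]
        same_b = b[i] == b[j]
        if same_a and same_b:
            n11 += 1
        elif same_a:
            n10 += 1
        elif same_b:
            n01 += 1
        else:
            n00 += 1
    return n11, n10, n01, n00

# Example: A = {{1,2},{3}}, B = {{1},{2,3}} over 3 elements
print(pair_counts([0, 0, 1], [0, 1, 1]))  # -> (0, 1, 1, 1)
```

Here a label vector assigns each element its cluster id; the quadratic pair loop is for clarity only, since Section 4 notes that pair-counting indices can in fact be computed in O(n).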
Each row of MAB is either 11, 10, 01, or 00. Let the pair-counts N11, N10, N01, N00 denote the number of occurrences for each of these rows in MAB.
Definition 1. A pair-counting index is a similarity index that can be expressed as a function of the pair-counts N11, N10, N01, N00.
Some popular pair-counting indices are Rand and Jaccard:

R = (N11 + N00)/(N11 + N10 + N01 + N00), J = N11/(N11 + N10 + N01).

Adjusted Rand (AR) is a linear transformation of Rand ensuring that for a random B we have AR(A,B) = 0 in expectation. A less widely used index is the Pearson Correlation Coefficient (CC) between the binary incidence vectors ~A and ~B.¹ Another index, which we discuss further in more detail, is the Correlation Distance CD(A,B) := (1/π) arccos CC(A,B). In Table 4, we formally define 27 known pair-counting indices and only mention ones of particular interest throughout the main text.
¹Note that Spearman and Pearson correlation are equal when comparing binary vectors. Kendall rank correlation for binary vectors coincides with the Hubert index, which is linearly equivalent to Rand.
Information-theoretic indices consider clusterings similar if they share a lot of information, i.e., if little information is needed to transform one clustering into the other. Formally, let H(A) := H(|A1|/n, . . . , |AkA|/n) be the Shannon entropy of the cluster-label distribution of A. Similarly, the joint entropy H(A,B) is defined as the entropy of the distribution with probabilities (pij)i∈[kA],j∈[kB], where pij = |Ai ∩ Bj|/n. Then, the mutual information of two clusterings can be defined as
M(A,B) = H(A) + H(B) − H(A,B). 
There are multiple ways of normalizing the mutual information; the most widely used ones are:

NMI(A,B) = M(A,B) / ((H(A) + H(B))/2), NMImax(A,B) = M(A,B) / max{H(A), H(B)}.

NMI is known to be biased towards smaller clusters, and several modifications try to mitigate this bias: Adjusted Mutual Information (AMI) and Standardized Mutual Information (SMI) subtract the expected mutual information from M(A,B) and normalize the obtained value (Vinh et al., 2009), while Fair NMI (FNMI) multiplies NMI by a penalty factor e^{−|kA−kB|/kA} (Amelio & Pizzuti, 2015)." }, { "heading": "3 MOTIVATING EXPERIMENTS", "text": "As discussed in Section 2, many cluster similarity indices are used by researchers and practitioners. A natural question is: how to choose the best one? Before trying to answer this question, it is important to understand whether the problem is relevant. Indeed, if the indices are very similar to each other and agree in most practical applications, then one can safely take any index. In this section, we demonstrate that this is not the case, and the proper choice matters.
First, we illustrate the inconsistency of all indices. We say that two indices I1 and I2 are inconsistent for a triplet of partitions (A,B1, B2) if I1(A,B1) > I1(A,B2) but I2(A,B1) < I2(A,B2). We took 15 popular cluster similarity measures and constructed just four triplets such that each pair of indices is inconsistent for at least one triplet. One example is shown in Figure 1: for this simple example, about half of the indices prefer the left candidate, while the others prefer the right one. Other examples can be found in Appendix F.1.

Table 1: Relative inconsistency (%) for pairs of indices

       NMI    VI    AR   S&S1    CC
NMI     –   40.3  15.7  20.1  18.5
VI            –   37.6  36.0  37.2
AR                  –   11.7   8.3
S&S1                      –    3.6
CC                              –

Second, we demonstrate that such disagreements occur in practice: for real datasets, we collect triplets (A, B1, B2), where A is the reference partition and B1, B2 are provided by two algorithms. For a given pair of indices and all such triplets, we look at whether the indices are consistent. 
Table 1 shows the relative inconsistency for several indices (the extended table together with a detailed description of the experimental setup and more analysis is given in Appendix F.2). The inconsistency rate is significant: e.g., the popular measures Adjusted Rand and Variation of Information disagree in almost 40% of the cases, a remarkably high rate. Interestingly, the best agreeing indices are S&S and CC, which satisfy most of our properties, as shown in the next section. In contrast, the Variation of Information very often disagrees with other indices.
To show that the choice of similarity index may affect the final performance in a real production scenario, we conducted an experiment within a major news aggregator system. The system groups news articles into events and shows the list of most important events to users. For grouping, a clustering algorithm is used, and the quality of this algorithm affects the user experience: merging different clusters may lead to not showing an important event, while too much splitting may cause duplicate events. When comparing several candidate clustering algorithms, it is important to determine which one is the best for the system. Online experiments are expensive and can be used only for the best candidates. Thus, we need a tool for an offline comparison. For this purpose, we manually created a reference partition on a small fraction of news articles. We can use this partition to evaluate the candidates. We performed such an offline comparison for two candidate algorithms and observed that different indices preferred different algorithms. Then, we launched an online user experiment and verified that one of the candidates is better for the system according to user preferences. Hence, it is important to be careful when choosing a similarity index for the offline comparison. See Appendix F.3 for a more detailed description of this experiment and quantitative analysis."
}, { "heading": "4 ANALYSIS OF CLUSTER SIMILARITY INDICES", "text": "In this section, we motivate and formally define properties that are desirable for cluster similarity indices. We start with simple and intuitive ones that can be useful in some applications but not always necessary. Then, we discuss more complicated properties, ending with constant baseline that is extremely important but least trivial. In Tables 2 and 3, indices of particular interest are listed along with the properties satisfied. In Appendix C, we give the proofs for all entries of these tables.\nFor pair-counting indices we perform a more detailed analysis and define additional properties. For such indices, we interchangeably use the notation I(A,B) and I(N11, N10, N01, N00).\nSome of the indices have slight variants that are essentially the same. For example, the Hubert Index (Hubert, 1977) can be expressed as a linear transformation of the Rand index as H(A,B) = 2R(A,B)− 1. All the properties defined in this paper are invariant under linear transformations and interchanging A and B. Hence, we define the following linear equivalence relation on similarity indices and check the properties for at most one representative of each equivalence class. Definition 2. Similarity indices I1 and I2 are linearly equivalent if there exists a nonconstant linear function f such that either I1(A,B) = f(I2(A,B)) or I1(A,B) = f(I2(B,A)).\nThis allows us to conveniently restrict to indices for which higher numerical values indicate higher similarity of partitions. Table 5 in the Appendix lists the equivalent indices." }, { "heading": "4.1 PROPERTY 1: MAXIMAL AGREEMENT", "text": "The numerical value that an index assigns to a similarity must be easily interpretable. In particular, it should be easy to see whether the candidate clustering is maximally similar to (i.e., coincides with) the reference clustering. 
Formally, we require that I(A,A) = cmax is constant and either a strict upper or a strict lower bound for I(A,B) for all A ≠ B. The equivalence from Definition 2 allows us to assume w.l.o.g. that I(A,A) is a maximum. This property is easy to check, and it is satisfied by almost all indices, except for SMI and Wallace.
Property 1′: Minimal agreement. The maximal agreement property makes the upper range of the index interpretable. Similarly, having a numerical value for a low agreement would make the lower range interpretable. A minimal agreement is not well defined for general partitions: it is not clear which partition would be most dissimilar to a given one. However, by Lemma 1 in the Appendix, pair-counting indices form a subclass of graph similarity indices. For a given graph G = (V,E), it is clear that the graph most dissimilar to G is its complement GC = (V,EC). Comparing a graph to its complement would result in pair-counts N11 = N00 = 0 and N10 + N01 = N. This motivates the following definition:
Definition 3. A pair-counting index I has the minimal agreement property if there exists a constant cmin so that I(N11, N10, N01, N00) ≥ cmin with equality if and only if N11 = N00 = 0.
This property is satisfied by Rand, Correlation Coefficient, and Sokal&Sneath, while it is violated by Jaccard, Wallace, and Dice. Adjusted Rand does not have this property since substituting N11 = N00 = 0 gives the non-constant AR(0, N10, N01, 0) = −N10N01/(N²/2 − N10N01)." }, { "heading": "4.2 PROPERTY 2: SYMMETRY", "text": "Similarity is intuitively understood as a symmetric concept. 
Therefore, a good similarity index is expected to be symmetric, i.e., I(A,B) = I(B,A) for all partitions A,B.² Tables 2 and 3 show that most indices are symmetric. The asymmetric ones are precision and recall (Wallace) and FNMI (Amelio & Pizzuti, 2015), which is a product of NMI and the asymmetric penalty factor.
²In some applications, A and B may have different roles (e.g., reference and candidate partitions), and an asymmetric index may be suitable if there are different consequences of making false positives or false negatives.

Table 2: Requirements for general similarity indices

Index    | Max. agreement | Symmetry | Distance | Lin. complexity | Monotonicity | Const. baseline
NMI      |       ✓        |    ✓     |    ✗     |        ✓        |      ✓       |        ✗
NMImax   |       ✓        |    ✓     |    ✓     |        ✓        |      ✗       |        ✗
FNMI     |       ✓        |    ✗     |    ✗     |        ✓        |      ✗       |        ✗
VI       |       ✓        |    ✓     |    ✓     |        ✓        |      ✓       |        ✗
SMI      |       ✗        |    ✓     |    ✗     |        ✗        |      ✗       |        ✓
FMeasure |       ✓        |    ✓     |    ✗     |        ✓        |      ✗       |        ✗
BCubed   |       ✓        |    ✓     |    ✗     |        ✓        |      ✓       |        ✗
AMI      |       ✓        |    ✓     |    ✗     |        ✗        |      ✓       |        ✓

Table 3: Requirements for pair-counting similarity indices

Index | Max. agr. | Min. agr. | Symmetry | Distance | Lin. compl. | Monot. | Strong monot. | Const. baseline | As. const. baseline | Bias
R     |     ✓     |     ✓     |    ✓     |    ✓     |      ✓      |   ✓    |       ✓       |        ✗        |          ✗          | ↗↘
AR    |     ✓     |     ✗     |    ✓     |    ✗     |      ✓      |   ✓    |       ✗       |        ✓        |          ✓          |
J     |     ✓     |     ✗     |    ✓     |    ✓     |      ✓      |   ✓    |       ✗       |        ✗        |          ✗          | ↘
W     |     ✗     |     ✗     |    ✗     |    ✗     |      ✓      |   ✗    |       ✗       |        ✗        |          ✗          | ↘
D     |     ✓     |     ✗     |    ✓     |    ✗     |      ✓      |   ✓    |       ✗       |        ✗        |          ✗          | ↘
CC    |     ✓     |     ✓     |    ✓     |    ✗     |      ✓      |   ✓    |       ✓       |        ✓        |          ✓          |
S&S1  |     ✓     |     ✓     |    ✓     |    ✗     |      ✓      |   ✓    |       ✓       |        ✓        |          ✓          |
CD    |     ✓     |     ✓     |    ✓     |    ✓     |      ✓      |   ✓    |       ✓       |        ✗        |          ✓          |
" }, { "heading": "4.3 PROPERTY 3: LINEAR COMPLEXITY", "text": "For clustering tasks on large datasets, running time is crucial, and algorithms with superlinear time can be infeasible. In these cases, a validation index with superlinear running time would be a significant bottleneck. Furthermore, computationally heavy indices also tend to be complicated and hard to interpret intuitively. We say that an index has linear complexity when its worst-case running time is O(n). In Appendix C.2, we prove that any pair-counting index has O(n) complexity. Many general indices have this property as well, except for SMI and AMI." }, { "heading": "4.4 PROPERTY 4: 
DISTANCE", "text": "For some applications, a distance interpretation of dissimilarity may be desirable: whenever A is similar to B and B is similar to C, then A should also be somewhat similar to C. For example, assume that we have the reference clustering that is an approximation of the ground truth (e.g., labeled by experts). In such situations, it may be reasonable to argue that the reference clustering is at most a distance ε from the true clustering, so that the triangle inequality bounds the dissimilarity of the candidate clustering to the unknown true clustering.
A function d is a distance metric if it satisfies three distance axioms: 1) symmetry (d(A,B) = d(B,A)); 2) positive-definiteness (d(A,B) ≥ 0 with equality iff A = B); 3) the triangle inequality (d(A,C) ≤ d(A,B) + d(B,C)). We say that I is linearly transformable to a distance metric if there exists a linearly equivalent index that satisfies these three distance axioms. Note that all three axioms are invariant under re-scaling of d. We have already imposed symmetry as a separate property, and positive-definiteness is equivalent to the maximal agreement property. Therefore, whenever I has these two properties, it satisfies the distance property iff d(A,B) = cmax − I(A,B) satisfies the triangle inequality, for cmax as defined in Section 4.1.
Examples of popular indices having this property are Variation of Information and the Mirkin metric. In Vinh et al. (2010), it is proved that when Mutual Information is normalized by the maximum of entropies, the resulting NMI is equivalent to a distance metric. A proof that the Jaccard index is equivalent to a distance is given in Kosub (2019). See Appendix C.1 for all the proofs.
Correlation Distance. Among all the considered indices, there are two pair-counting ones having all the properties except for being a distance: Sokal&Sneath and Correlation Coefficient. 
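Before turning to the fix, note that the triangle-inequality requirement itself can be checked exhaustively on small ground sets. A sketch for the Variation of Information, which is listed above as having the distance property (the entropy helpers and partition enumeration are our own, assuming natural-log entropies):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def joint_entropy(a, b):
    n = len(a)
    return -sum((c / n) * math.log(c / n) for c in Counter(zip(a, b)).values())

def vi(a, b):
    # Variation of Information: VI(A,B) = 2 H(A,B) - H(A) - H(B)
    return 2 * joint_entropy(a, b) - entropy(a) - entropy(b)

def all_partitions(n):
    """All partitions of {0,...,n-1} as canonical label vectors (restricted growth strings)."""
    out = []
    def grow(prefix, k):
        if len(prefix) == n:
            out.append(tuple(prefix))
            return
        for label in range(k + 1):
            grow(prefix + [label], max(k, label + 1))
    grow([0], 1)
    return out

parts = all_partitions(4)  # Bell(4) = 15 partitions
for A in parts:
    for B in parts:
        for C in parts:
            assert vi(A, C) <= vi(A, B) + vi(B, C) + 1e-9  # triangle inequality
print(len(parts), "partitions checked")
```

Swapping vi for cmax − I of another index turns the same loop into a quick falsification test of the triangle inequality.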
However, the correlation coefficient can be transformed to a distance metric via a non-linear transformation. We define Correlation Distance (CD) as CD(A,B) := (1/π) arccos CC(A,B), where CC is the Pearson correlation coefficient and the factor 1/π scales the index to [0, 1]. To the best of our knowledge, this Correlation Distance has never before been used as a similarity index for comparing clusterings throughout the literature.
²All known pair-counting indices excluded from Table 3 do not satisfy either constant baseline, symmetry, or maximal agreement.
Let us point out that the Correlation Distance is indeed a distance. It follows from the fact that the correlation coefficient is obtained by first mapping the binary vectors ~A, ~B to the unit sphere, and then taking their standard inner product. The arccosine of the inner product of two unit vectors corresponds to their angle, which is indeed a distance metric. A more detailed proof of this claim can be found in Appendix E.2. Further in this section, we show that the distance property of Correlation Distance is achieved at the cost of not having the exact constant baseline, which is still satisfied asymptotically." }, { "heading": "4.5 PROPERTY 5: MONOTONICITY", "text": "When one clustering is changed such that it resembles the other clustering more, the similarity score ought to improve. Hence, we require an index to be monotone w.r.t. changes that increase the similarity. This can be formalized via the following definition.
Definition 4. For clusterings A and B, we say that B′ is an A-consistent improvement of B iff B ≠ B′ and all pairs of elements agreeing in A and B also agree in A and B′.
This leads to the following monotonicity property.
Definition 5. 
An index I satisfies the monotonicity property if for every two clusterings A,B and any B′ that is an A-consistent improvement of B, it holds that I(A,B′) > I(A,B).
To look at monotonicity from a different perspective, we define the following operations:
• Perfect split: B′ is a perfect split of B (w.r.t. A) if B′ is obtained from B by splitting a single cluster B1 into two clusters B′1, B′2 such that no two elements of the same cluster of A are in different parts of this split, i.e., for all i, Ai ∩ B1 is a subset of either B′1 or B′2.
• Perfect merge: We say that B′ is a perfect merge of B (w.r.t. A) if there exist some Ai and B1, B2 ⊂ Ai such that B′ is obtained by merging B1, B2 into B′1.
The following theorem gives an alternative definition of monotonicity and is proven in Appendix E.1.
Theorem 1. B′ is an A-consistent improvement of B iff B′ can be obtained from B by a sequence of perfect splits and perfect merges.
Note that this monotonicity is a stronger form of the first two constraints defined in (Amigó et al., 2009): Cluster Homogeneity is a weaker form of our monotonicity w.r.t. perfect splits, while Cluster Equivalence is equivalent to our monotonicity w.r.t. perfect merges.
Monotonicity is a critical property that should be satisfied by any sensible index. Surprisingly, not all indices satisfy this: we have found counterexamples that prove that SMI, FNMI, and Wallace do not have the monotonicity property. Furthermore, for NMI, whether monotonicity is satisfied depends on the normalization: the normalization by the average of the entropies has monotonicity, while the normalization by the maximum of the entropies does not.
Property 5′: Strong monotonicity. For pair-counting indices, we can define a stronger monotonicity property in terms of pair-counts.
Definition 6. 
A pair-counting index I satisfies strong monotonicity if it increases with N11, N00 and decreases with N10, N01.
This property is stronger than monotonicity as it additionally allows for comparing similarities across different settings: we could compare the similarity between A1, B1 on n1 elements with the similarity between A2, B2 on n2 elements, even when n1 ≠ n2. This ability to compare similarity scores across different numbers of elements is similar to the Few data points property of SMI (Romano et al., 2014) that allows its scale to have a similar interpretation across different settings.
We found several examples of indices that have Property 5 while not satisfying Property 5′. The Jaccard and Dice indices are constant w.r.t. N00, so they are not strongly monotone. A more interesting example is the Adjusted Rand index, which may become strictly larger if we only increase N10." }, { "heading": "4.6 PROPERTY 6: CONSTANT BASELINE", "text": "This property is arguably the most significant: it is less intuitive than the other ones and may lead to unexpected consequences in practice. Informally, a good similarity index should not give a preference to a candidate clustering B over another clustering C just because B has many or few clusters. This intuition can be formalized using random partitions: assume that we have some reference clustering A and two random partitions B and C. While intuitively both random guesses are equally bad approximations of A, it has been known throughout the literature (Albatineh et al., 2006; Romano et al., 2014; Vinh et al., 2010) that some indices tend to give higher scores for random guesses with a larger number of clusters. Ideally, we want the similarity value of a random candidate w.r.t. the reference partition to have a fixed expected value cbase (independent of A). We formalize this in the following way. Let S(B) denote the specification of the cluster sizes of the clustering B, i.e., S(B) := [|B1|, . . . , |BkB|], where [. . . 
] denotes a multiset. For a cluster-sizes specification s, let C(s) be the uniform distribution over clusterings B with S(B) = s.
Definition 7. An index I satisfies the constant baseline property if there exists a constant cbase so that EB∼C(s)[I(A,B)] = cbase for any cluster-sizes specification s and clustering A with 1 < kA < n.
Note that this property is symmetric since it does not matter whether we permute the labels of A while keeping B constant or vice versa. In the definition, we have excluded the cases where A is a trivial clustering consisting of either 1 or n clusters. Including them would cause problems for s = S(A), as C(s) would be a constant distribution surely returning A, and any sensible index should have I(A,A) ≠ cbase. Constant baseline is extremely important in many practical applications: if an index violates this property, then its optimization may lead to undesirably biased results. For instance, if a biased index is used to choose the best algorithm among several candidates, then it is likely that the decision will be biased towards those that produce too large or too small clusters. This problem is often attributed to NMI (Romano et al., 2014; Vinh et al., 2010), but we found out that almost all indices suffer from it. The only indices that satisfy the constant baseline property are the Adjusted Rand index, Correlation Coefficient, SMI, and AMI (with cbase = 0) and Sokal&Sneath (with cbase = 1/2). Interestingly, out of these five indices, three were specifically designed to satisfy this property, which made them less intuitive and resulted in other important properties being violated.
The only condition under which the constant baseline property can be safely ignored is knowing in advance all cluster sizes. In this case, bias towards particular cluster sizes would not affect decisions. However, we are not aware of any practical application where such an assumption can be made. Note that knowing only the number of clusters is insufficient. 
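The bias is also easy to expose in a small simulation: score uniformly random candidates with fixed cluster sizes against a fixed reference, and compare an index that has the constant baseline property (Adjusted Rand, via its standard pair-count form) with one that does not (Rand). A hedged sketch, with all names and parameter choices ours:

```python
import random
from itertools import combinations

def pair_counts(a, b):
    n11 = n10 = n01 = n00 = 0
    for i, j in combinations(range(len(a)), 2):
        sa, sb = a[i] == a[j], b[i] == b[j]
        if sa and sb:
            n11 += 1
        elif sa:
            n10 += 1
        elif sb:
            n01 += 1
        else:
            n00 += 1
    return n11, n10, n01, n00

def rand_index(a, b):
    n11, n10, n01, n00 = pair_counts(a, b)
    return (n11 + n00) / (n11 + n10 + n01 + n00)

def adjusted_rand(a, b):
    # standard pair-count form of the Adjusted Rand index
    n11, n10, n01, n00 = pair_counts(a, b)
    num = 2 * (n11 * n00 - n10 * n01)
    den = (n11 + n10) * (n10 + n00) + (n11 + n01) * (n01 + n00)
    return num / den if den else 1.0

def mean_score(index, reference, sizes, trials=2000, seed=0):
    rng = random.Random(seed)
    labels = [lab for lab, size in enumerate(sizes) for _ in range(size)]
    total = 0.0
    for _ in range(trials):
        rng.shuffle(labels)  # uniform random candidate with the given cluster sizes
        total += index(reference, labels)
    return total / trials

A = [0] * 4 + [1] * 4 + [2] * 4  # reference clustering, n = 12
for sizes in ([6, 6], [2] * 6):
    print(sizes,
          round(mean_score(rand_index, A, sizes), 2),      # drifts with cluster sizes
          round(mean_score(adjusted_rand, A, sizes), 2))   # stays ~0
```

With a reference of three clusters of size 4, the mean Rand score moves substantially between the two candidate size specifications, while the mean Adjusted Rand stays near 0 for both.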
We illustrate this in Appendix D.4, where we also show that the bias of indices violating the constant baseline is easy to identify empirically.
Property 6′: Asymptotic constant baseline. For pair-counting indices, a deeper analysis of the constant baseline property is possible. Let mA = N11 + N10, mB = N11 + N01 be the number of intra-cluster pairs of A and B, respectively. Note that mA and mB are constant as A is constant and B ∼ C(s), so that its cluster-sizes are constant. Furthermore, the pair-counts N10, N01, N00 are functions of N, mA, mB, N11. Hence, to find the expected value of the index, we need to inspect it as a function of a single random variable N11. For a random pair, the probability that it is an intra-cluster pair of both clusterings is mAmB/N², so the expected values of the pair-counts are

N̄11 := mAmB/N, N̄10 := mA − N̄11, N̄01 := mB − N̄11, N̄00 := N − mA − mB + N̄11. (1)

We can use these values to define a weaker variant of constant baseline.
Definition 8. A pair-counting index I has the asymptotic constant baseline property if there exists a constant cbase so that I(N̄11, N̄10, N̄01, N̄00) = cbase for all A with 1 < kA < n.
In contrast to Definition 7, asymptotic constant baseline is very easy to verify: one just has to substitute the values from (1) into the index and check whether the obtained value is constant. Another important observation is that under mild assumptions I(N11, N10, N01, N00) converges in probability
To the best of our knowledge, there does not exist a cluster similarity index that is a distance while having the exact constant baseline.\nBiases of cluster similarity indices Given the fact that there are so many biased indices, one may be interested in what kind of candidates they favor. While it is unclear how to formalize this concept for general validation indices, we can do this for pair-counting ones by analyzing them in terms of a single variable.\nWhile there are previous attempts to characterize types of biases (Lei et al., 2017), they mostly rely on an analysis based on the number of clusters. However, we argue that the number of clusters is not a good measure of the granularity of a clustering. Instead, we show that the number of inter-cluster pairs should be analysed to determine the biases of pair-counting indices. We formally define and analyze two types of biases: NPdec and NPinc, where NP stands for Number of inter-cluster Pairs.\nDefinition 9. Let I be a pair-counting index and define I(s)(mA,mB) = I ( N11, N10, N01, N00 ) for the expected pair-counts as defined in (1). We define the following biases:\n(i) I suffers from NPdec bias if there are mA,mB ∈ (0, N) such that ddmB I (s)(mA,mB) > 0.\n(ii) I suffers from NPinc bias if there are mA,mB ∈ (0, N) such that ddmB I (s)(mA,mB) < 0.\nApplying this definition to Jaccard J (s)(mA,mB) = mAmBN(mA+mB)−mAmB and RandR (s)(mA, pB) = 1− (mA +mB)/N + 2mAmB/N2 immediately shows that Jaccard suffers from NPdec bias and Rand suffers from both biases, confirming the findings of Lei et al. (2017). Furthermore, the direction of the monotonicity for the bias of Rand is now determined by the condition 2mA > N instead of the more complicated but equivalent condition on the quadratic entropy of A that is given in Lei et al. (2017). Performing the same for Wallace and Dice shows that both suffer from NPdec bias. 
Note that an index satisfying the asymptotic constant baseline property will not have any of these biases as I(s)(mA,mB) = cbase." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "At this point, we better understand the theoretical properties of cluster similarity indices, so it is time to answer the question: which index is the best? Unfortunately, there is no simple answer, but we can make an informed decision. In this section, we sum up what we have learned, argue that there are indices that are strictly better alternatives than some widely used ones, and give practical advice on how to choose a suitable index for a given application.\nAmong all properties discussed in this paper, monotonicity is the most crucial one. Violating this property is a fatal problem: such indices can prefer candidates which are strictly worse than others. Hence, we cannot advise using the well-known NMImax, FMeasure, FNMI, and SMI indices.\nThe constant baseline property is much less trivial but is equally important: it addresses the problem of preferring some partitions only because they have small or large clusters. This property is essential unless you know all cluster sizes. Since we are not aware of practical applications where all cluster sizes are known, below we assume that this is not the case.3 This requirement is satisfied by just a few indices, so we are only left with AMI, Adjusted Rand (AR), Correlation Coefficient (CC), and Sokal&Sneath (S&S). Additionally, Correlation Distance (CD) satisfies constant baseline asymptotically and deviations from the exact constant baseline are extremely small (see Section E.3).\nLet us note that among the remaining indices, AR is strictly dominated by CC and S&S since it does not have the minimum agreement and strong monotonicity. 
Also, similarly to AMI, AR is specifically created to have a constant baseline, which made this index more complex and less intuitive than other pair-counting indices. Hence, we are only left with four indices: AMI, S&S, CC, and CD.
³However, in applications where such an assumption holds, it can be reasonable to use, e.g., BCubed, Variation of Information, and NMI.
According to their theoretical properties, all these indices are good, and any of them can be chosen. Figure 2 illustrates how a final decision can be made. First, one can decide whether the distance property is needed. For example, suppose one wants to cluster the algorithms by comparing the partitions provided by them. In that case, the metric property of a similarity index allows the use of metric clustering algorithms. In this case, a distance property is desirable, and CD is the best choice: it has all properties except for the exact constant baseline, which is still satisfied asymptotically. Next, it is important to decide whether computation time is of the essence. Linear computation time is essential for large-scale applications.
For instance, assume that there is a production system that groups news articles or user photos. There is a candidate algorithm, and we want to compare it with the currently used one to avoid major changes. In this case, we have to compare huge partitions, and time is of the essence. Another example is multiple comparisons: choosing the best algorithm among many candidates (differing, e.g., by a parameter value). If this is the case, then AMI is not a proper choice, and one has to choose between CC and S&S. Otherwise, all three indices are suitable according to our formal constraints.
Let us discuss an (informal) criterion that may help to choose between AMI and pair-counting alternatives. Different indices may favor a different balance between errors in small and large clusters. 
In particular, all pair-counting indices give larger weights to errors in large clusters: misclassifying one element in a cluster of size k costs k−1 incorrect pairs. It is known (empirically) that information-theoretic indices do not have this property and give a higher weight to small clusters (Amigó et al., 2009).⁴ Amigó et al. (2009) argue that for their particular application (text clustering), it is desirable not to give a higher weight to large clusters. In contrast, there are applications where the opposite may hold. For instance, consider a system that groups user photos based on identity and shows these clusters to a user as a ranked list. In this case, a user is likely to investigate the largest clusters consisting of known people and would rarely spot an error in a small cluster. The same applies to any system that ranks the clusters, e.g., to news aggregators. Based on what is desirable for a particular application, one can choose between AMI and pair-counting CC and S&S.
The final decision between CC and S&S is hard to make since they are equally good in terms of their theoretical properties. Interestingly, although some works (Choi et al., 2010; Lei et al., 2017) list Pearson correlation as a cluster similarity index, it has not received the attention that our results suggest it deserves, similarly to S&S. First, both indices are interpretable. CC is a correlation between the two incidence vectors, which is a very natural concept. S&S is the average of precision and recall (for binary classification of pairs) plus their inverted counterparts, which can also be intuitively understood. Also, CC and S&S usually agree in practice: in Tables 1 and 6 we can see that they have the largest agreement. Hence, one can take any of these indices. 
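Both indices are also cheap to evaluate from the four pair-counts; a sketch with the standard formulas (the phi coefficient for CC, and the four-ratio average described above for S&S; function names are ours):

```python
import math

def cc(n11, n10, n01, n00):
    """Pearson correlation (phi coefficient) of the two pair-incidence vectors."""
    num = n11 * n00 - n10 * n01
    den = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den

def sokal_sneath_1(n11, n10, n01, n00):
    """Average of pair-level precision, recall, and their inverted counterparts."""
    return (n11 / (n11 + n10) + n11 / (n11 + n01)
            + n00 / (n00 + n10) + n00 / (n00 + n01)) / 4

print(cc(10, 0, 0, 35), sokal_sneath_1(10, 0, 0, 35))  # identical clusterings: 1.0 1.0
print(cc(0, 5, 5, 0), sokal_sneath_1(0, 5, 5, 0))      # complement case: -1.0 0.0
```

The two examples mirror the maximal and minimal agreement entries of Table 3: both indices attain their maximum for identical clusterings, and their minimum when N11 = N00 = 0.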
Another option would be to check whether there are situations where these indices disagree and, if this happens, perform an experiment similar to what we did in Section 3 for news aggregation.
Finally, while some properties listed in Tables 2 and 3 are not mentioned in the discussion above, they can be important for particular applications. For instance, maximum and minimum agreements are useful for interpretability, but they can also be essential if some operations are performed over the index values: e.g., averaging the scores of different algorithms. Symmetry can be necessary if there is no “gold standard” partition, but algorithms are compared only to each other.
⁴This is an interesting aspect that has not received much attention in our research since we believe that the desired balance between large and small clusters may differ per application and we are not aware of a proper formalization of this “level of balance” in a general form." }, { "heading": "A FURTHER RELATED WORK", "text": "Several attempts at a comparative analysis of cluster similarity indices have been made in the literature, in both the machine learning and complex networks communities. In particular, the problem of indices favoring clusterings with smaller or larger clusters has been identified (Albatineh et al., 2006; Lei et al., 2017; Vinh et al., 2009; 2010). The most popular approach to resolving the bias of an index is to subtract its expected value and normalize the resulting quantity to obtain an index that satisfies the maximum agreement property. This approach has led to ‘adjusted’ indices such as AR (Hubert & Arabie, 1985) and AMI (Vinh et al., 2009). In Albatineh et al. (2006), the family of pair-counting indices L is introduced for which adjusted forms can be computed easily. This family corresponds to the set of all pair-counting indices that are linear functions of N11 for fixed N11+N10, N11+N01. 
A generalization of information-theoretic indices by Tsallis q-entropy is given in Romano et al. (2016) and is shown to correspond to pair-counting indices for q = 2. Formulas are provided for adjusting these generalized indices for chance.\nA disadvantage of this adjustment scheme is that an index can be normalized in many ways, while it is difficult to grasp the differences between these normalizations intuitively. For example, three\nvariants of AMI have been introduced (Vinh et al., 2009), and we show that normalization by the maximum entropies results in an index that fails monotonicity. Romano et al. (2014) go one step further by standardizing mutual information, while Amelio & Pizzuti (2015) multiply NMI with a penalty factor that decreases with the difference in the number of clusters.\nIn summary, all these works take a popular biased index and ‘patch’ it to get rid of this bias. This approach has two disadvantages: firstly, these patches often introduce new problems (e.g., FNMI and SMI fail monotonicity), and secondly, the resulting index is usually less interpretable than the original. We have taken a different approach in our work: instead of patching existing indices, we analyze previously introduced indices to see whether they satisfy more properties. Our analysis shows that ARI is dominated by Pearson correlation, which was introduced more than 100 years before ARI. Therefore, there was no need to construct ARI from Rand in the first place.\nIn Lei et al. (2017), the biases of pair-counting indices are characterized. They define these biases as a preference towards either few or many clusters. They prove that the direction of Rand’s bias depends on the Havrda-Charvat entropy of the reference clustering. In the present work, we show that the number of clusters is not an adequate quantity for expressing these biases. 
We introduce methods to easily analyze the bias of any pair-counting index and simplify the condition for the direction of Rand's bias to mA < N/2.
A paper closely related to the current research (Amigó et al., 2009) formulates several constraints (axioms) for cluster similarity indices. Their cluster homogeneity is a weaker analog of our monotonicity w.r.t. perfect splits, while their cluster equivalence is equivalent to our monotonicity w.r.t. perfect merges. The third constraint, rag bag, is motivated by a subjective claim that “introducing disorder into a disordered cluster is less harmful than introducing disorder into a clean cluster”. While this is important for their particular application (text clustering), we found no other work that deemed this constraint necessary; hence, we disregarded this constraint in the current research. The last constraint by Amigó et al. (2009) concerns the balance between making errors in large and small clusters. Though this is an interesting aspect that has not received much attention in our research, this constraint prescribes one particular balance, whereas we believe that the desired balance may differ per application. Hence, this property seems to be non-binary, and we are not aware of a proper formalization of this “level of balance” in a general form. Therefore, we do not include it in our list of formal properties. The most important difference of our work compared to Amigó et al. (2009) is the constant baseline property, which was not analyzed in their work. We find this property extremely important, yet it is violated by most of the widely used indices, including their BCubed. To conclude, our research gives a more comprehensive list of constraints and focuses on those that are desirable in a wide range of applications.
We also cover all similarity indices often used in the literature and give formal proofs for all index-property combinations.\nA property similar to our monotonicity property is also given in Meilă (2007), where the similarity between clusterings A and B is upper-bounded by the similarity between A and A⊗B (as defined in Section C.4). One can show that this property is implied by our monotonicity but not vice versa, i.e., the variant proposed by Meilă (2007) is weaker. Our analysis of monotonicity generalizes and unifies previous approaches to this problem, see Theorem 1 that relates consistent improvements to perfect splits and merges.\nWhile we focus on external cluster similarity indices that compare a candidate partition with a reference one, there are also internal similarity measures that estimate the quality of partitions with respect to internal structure of data (e.g., Silhouette, Hubert-Gamma, Dunn, and many other indices). Kleinberg (2002) used an axiomatic approach for internal measures and proved an impossibility theorem: there are three simple and natural constraints such that no internal clustering measure can satisfy all of them. More work in this direction can be found in, e.g., Ben-David & Ackerman (2008). In network analysis, internal measures compare a candidate partition with the underlying graph structure. They quantify how well a community structure (given by a partition) fits the graph and are often referred to as goodness or quality measures. The most well-known example is modularity (Newman & Girvan, 2004). Axioms that these measures ought to satisfy are given in (Ben-David & Ackerman, 2009; Van Laarhoven & Marchiori, 2014). Note that all pair-counting indices discussed in this paper can also be used for graph-partition similarity, as we discuss in Section B.3." }, { "heading": "B CLUSTER SIMILARITY INDICES", "text": "" }, { "heading": "B.1 GENERAL INDICES", "text": "Here we give the definitions of the indices listed in Table 2. 
We define the contingency variables as nij = |Ai ∩ Bj|. We note that all indices discussed in this paper can be expressed as functions of these contingency variables.
The F-Measure is defined as the harmonic mean of recall and precision. Recall is defined as

r(A,B) = (1/n) Σ_{i=1}^{kA} max_{j∈[kB]} nij,

and precision is its symmetric counterpart r(B,A).
In (Amigó et al., 2009), recall is redefined as

r′(A,B) = (1/n) Σ_{i=1}^{kA} (1/|Ai|) Σ_{j=1}^{kB} nij²,

and BCubed is defined as the harmonic mean of r′(A,B) and r′(B,A).
The remainder of the indices are information-theoretic and require some additional definitions. Let p1, . . . , pℓ be a discrete distribution (i.e., all values are nonnegative and sum to 1). The Shannon entropy is then defined as

H(p1, . . . , pℓ) := −Σ_{i=1}^{ℓ} pi log(pi).

The entropy of a clustering is defined as the entropy of the cluster-label distribution of a random item, i.e.,

H(A) := H(|A1|/n, . . . , |AkA|/n),

and similarly for H(B). The joint entropy H(A,B) is then defined as the entropy of the distribution with probabilities (pij), i ∈ [kA], j ∈ [kB], where pij = nij/n.
Variation of Information (Meilă, 2007) is defined as

VI(A,B) = 2H(A,B) − H(A) − H(B).

Mutual information is defined as

M(A,B) = H(A) + H(B) − H(A,B).

The mutual information between A and B is upper-bounded by H(A) and H(B), which gives multiple possibilities to normalize the mutual information. In this paper, we discuss two normalizations: normalization by the average of the entropies (H(A)+H(B))/2, and normalization by the maximum of entropies max{H(A), H(B)}. We will refer to the corresponding indices as NMI and NMImax, respectively:

NMI(A,B) = M(A,B) / ((H(A) + H(B))/2),

NMImax(A,B) = M(A,B) / max{H(A), H(B)}.

Fair NMI is a variant of NMI that includes a factor that penalizes large differences in the number of clusters (Amelio & Pizzuti, 2015). It is given by

FNMI(A,B) = e^{−|kA−kB|/kA} · NMI(A,B).

In this definition, NMI may be normalized in various ways.
We note that a different normalization would not result in more properties being satisfied.
Adjusted Mutual Information adjusts for the bias of NMI by subtracting the expected mutual information (Vinh et al., 2009). It is given by

AMI(A,B) = (M(A,B) − E_{B′∼C(S(B))}[M(A,B′)]) / (√(H(A)·H(B)) − E_{B′∼C(S(B))}[M(A,B′)]).

Here, a normalization by the geometric mean of the entropies is used, while other normalizations are also used (Vinh et al., 2009).
Standardized Mutual Information standardizes the mutual information w.r.t. random permutations of the items (Romano et al., 2014), i.e.,

SMI(A,B) = (M(A,B) − E_{B′∼C(S(B))}[M(A,B′)]) / σ_{B′∼C(S(B))}(M(A,B′)),

where σ denotes the standard deviation. Calculating the expected value and standard deviation of the mutual information is nontrivial and requires significantly more computation power than other indices. For this, we refer to the original paper (Romano et al., 2014). Note that this index is symmetric since it does not matter whether we keep A constant while randomly permuting B or keep B constant while randomly permuting A." }, { "heading": "B.2 PAIR-COUNTING INDICES AND THEIR EQUIVALENCES", "text": "Pair-counting similarity indices are defined in Table 4. Table 5 lists linearly equivalent indices (see Definition 2). Note that our linear equivalence differs from the less restrictive monotonous equivalence given in (Batagelj & Bren, 1995). In the current work, we have to restrict to linear equivalence as the constant baseline property is not invariant to non-linear transformations." }, { "heading": "B.3 DEFINING THE SUBCLASS OF PAIR-COUNTING INDICES", "text": "From Definition 1 of the main text, it follows that a pair-counting index is a function of two binary vectors ~A, ~B of length N. Note that this binary-vector representation has some redundancy: whenever u, v and v, w form intra-cluster pairs, we know that u, w must also be an intra-cluster pair. Hence, not every binary vector of length N represents a clustering.
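This redundancy can be made concrete: a binary pair-vector represents a clustering only if no triple of items has exactly two intra-cluster pairs. A small sketch (function names are ours):

```python
from itertools import combinations

def incidence_vector(clusters, n):
    """Binary vector of length N = n(n-1)/2, one entry per unordered pair
    of items from range(n), equal to 1 iff the two items share a cluster.
    `clusters` is a list of sets partitioning range(n)."""
    label = {v: i for i, c in enumerate(clusters) for v in c}
    return [int(label[u] == label[v]) for u, v in combinations(range(n), 2)]

def is_valid_clustering_vector(vec, n):
    """Transitivity check: among any three items, the number of
    intra-cluster pairs can be 0, 1, or 3, but never exactly 2."""
    idx = {p: k for k, p in enumerate(combinations(range(n), 2))}
    for u, v, w in combinations(range(n), 3):
        intra = vec[idx[(u, v)]] + vec[idx[(u, w)]] + vec[idx[(v, w)]]
        if intra == 2:
            return False
    return True
```

For example, the vector with intra-pairs (0,1) and (0,2) but not (1,2) fails the check, so it corresponds to no clustering.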
The class of N -dimensional binary vectors is, however, isomorphic to the class of undirected graphs on n vertices. Therefore, pair-counting indices are also able to measure the similarity between graphs. For example, for an undirected graph G = (V,E), one can consider its incidence vector ~G = (1{{v, w} ∈ E})v,w∈V . Hence, paircounting indices can be used to measure the similarity between two graphs or between a graph and a clustering. So, one may see a connection between graph and cluster similarity indices. For example, the Mirkin metric is a pair-counting index that coincides with the Hamming distance between the edge-sets of two graphs (Donnat & Holmes, 2018). Another example is the Jaccard graph distance, which turns out to be more appropriate for comparing sparse graphs (Donnat & Holmes, 2018). Thus, all pair-counting indices and their properties discussed in the current paper can also be applied to graph-graph and graph-partition similarities.\nIn this section, we show that the subclass of pair-counting similarity indices can be uniquely defined by the property of being pair-symmetric.\nFor two graphs G1 and G2 let MG1G2 denote the N × 2 matrix that is obtained by concatenating their adjacency vectors. Let us write I(G)M (MG1G2) for the similarity between two graphs G1, G2 according to some graph similarity index I(G). We will now characterize all pair-counting similarity indices as a subclass of the class of similarity indices between undirected graphs.\nDefinition 10. We define a graph similarity index I(G)M (MG1G2) to be pair-symmetric if interchanging two rows of MG1,G2 leaves the index unchanged.\nWe give the following result. Lemma 1. The class of pair-symmetric graph similarity indices coincides with the class of paircounting cluster similarity indices.\n5Throughout the literature, the Mirkin metric is defined as 2(N10+N01), but we use this variant as it satisfies the scale-invariance.\nProof. A matrix is an ordered list of its rows. 
An unordered list is a multiset. Hence, when we disregard the ordering of the matrix MAB , we get a multiset of the rows. This multiset contains at most four distinct elements, each corresponding to one of the pair-counts. Therefore, each I(G)M (MAB) that is symmetric w.r.t. interchanging rows is equivalently a function of the pair-counts of A and B." }, { "heading": "C CHECKING PROPERTIES FOR INDICES", "text": "In this section, we check all non-trivial properties for all indices. The properties of symmetry, maximal/minimal agreement and asymptotic constant baseline can trivially be tested by simply checking I(B,A) = I(A,B), I(A,A) = cmax, I(0, N10, N01, 0) = cmin and I(p)(pApB , pA, pB) = cbase respectively." }, { "heading": "C.1 DISTANCE", "text": "" }, { "heading": "C.1.1 POSITIVE CASES", "text": "NMI and VI. In (Vinh et al., 2010) it is proven that for max-normalization 1−NMI is a distance, while in (Meilă, 2007) it is proven that VI is a distance.\nRand. The Mirkin metric 1 − R corresponds to a rescaled version of the size of the symmetric difference between the sets of intra-cluster pairs. The symmetric difference is known to be a distance metric.\nJaccard. In Kosub (2019), it is proven that the Jaccard distance 1− J is indeed a distance.\nCorrelation Distance. In Theorem E.2 it is proven that Correlation Distance is indeed a distance." }, { "heading": "C.1.2 NEGATIVE CASES", "text": "To prove that an index that satisfies symmetry and maximal agreement is not linearly transformable to a distance metric, we only need to disprove the triangle inequality for one instance of its equivalence class that is nonnegative and equals zero for maximal agreement.\nFNMI and Wallace. These indices cannot be transformed to distances as they are not symmetric.\nSMI. SMI does not satisfy the maximal agreement property (Romano et al., 2014), so it cannot be transformed to a metric.\nFMeasure and BCubed. 
We will use a simple counter-example, where |V| = 3, kA = 1, kB = 2, kC = 3. Let us denote the FMeasure and BCubed by FM, BC respectively. We get

1 − FM(A,C) = 1 − 0.5 > (1 − 0.8) + (1 − 0.8) = (1 − FM(A,B)) + (1 − FM(B,C))

and

1 − BC(A,C) = 1 − 0.5 > (1 − 0.71) + (1 − 0.8) ≈ (1 − BC(A,B)) + (1 − BC(B,C)),

so that both indices violate the triangle inequality in this case.
Adjusted Rand, Dice, Correlation Coefficient, Sokal&Sneath and AMI. For these indices, we use the following counter-example: Let A = {{0, 1}, {2}, {3}}, B = {{0, 1}, {2, 3}}, C = {{0}, {1}, {2, 3}}. Then pAB = pBC = 1/6 and pAC = 0, while pA = pC = 1/6 and pB = 1/3. By substituting these variables, one can see that

1 − I(p)(pAC, pA, pC) > (1 − I(p)(pAB, pA, pB)) + (1 − I(p)(pBC, pB, pC))

holds for each of these indices, contradicting the triangle inequality. The same A, B and C also form a counter-example for AMI." }, { "heading": "C.2 LINEAR COMPLEXITY", "text": "We will frequently make use of the following lemma:
Lemma 2. The nonzero values of nij can be computed in O(n).
Proof. We will store these nonzero values in a hash-table that maps the pairs (i, j) to their value nij. These values are obtained by iterating through all items and incrementing the corresponding value of nij. For hash-tables, searches and insertions are known to have amortized complexity O(1), meaning that any sequence of n such actions has worst-case running time of O(n), from which the result follows." }, { "heading": "C.2.1 POSITIVE CASES", "text": "NMI, FNMI and VI. Given the positive values of nij, it is clear that the joint and marginal entropy values can be computed in O(n). From these values, the indices can be computed in constant time, leading to a worst-case running time of O(n).
FMeasure and BCubed. Note that in the expressions of recall and precision as defined by these indices, only the positive values of nij contribute.
Furthermore, all of the variables ai, bj and nij appear at most once, so that these can indeed be computed in O(n).
Pair-counting indices. Note that N11 = Σ_{nij>0} (nij choose 2) can obviously be computed in O(n). Similarly, mA = Σ_{i=1}^{kA} (ai choose 2) and mB can be computed in O(kA) and O(kB), respectively. The other pair-counts are then obtained by N10 = mA − N11, N01 = mB − N11 and N00 = N − mA − mB + N11." }, { "heading": "C.2.2 AMI AND SMI.", "text": "Both of these require the computation of the expected mutual information. It is known (Romano et al., 2016) that this has a worst-case running time of O(n · max{kA, kB}), while max{kA, kB} can be O(n)." }, { "heading": "C.3 STRONG MONOTONICITY", "text": "" }, { "heading": "C.3.1 POSITIVE CASES", "text": "Correlation Coefficient. This index has the property that inverting one of the binary vectors results in the index flipping sign. Furthermore, the index is symmetric. Therefore, we only need to prove that this index is increasing in N11. We take the derivative and omit the constant factor ((N00+N10)(N00+N01))^{−1/2}:

N00/√((N11+N10)(N11+N01)) − (N11N00 − N10N01)·(1/2)(2N11+N10+N01)/[(N11+N10)(N11+N01)]^{3/2}
= [(1/2)N11N00(N10+N01) + N00N10N01]/[(N11+N10)(N11+N01)]^{3/2} + [(1/2)N10N01(2N11+N10+N01)]/[(N11+N10)(N11+N01)]^{3/2} > 0.

Correlation Distance. The correlation distance satisfies strong monotonicity as it is a monotone transformation of the correlation coefficient, which meets the property.
Sokal&Sneath. All four fractions are nondecreasing in N11, N00 and nonincreasing in N10, N01, while for each of the variables there is one fraction that satisfies the monotonicity strictly, so that the index is strongly monotonous.
Rand Index. For the Rand index, it can be easily seen from the form of the index that it is increasing in N11, N00 and decreasing in N10, N01, so that it meets the property." }, { "heading": "C.3.2 NEGATIVE CASES", "text": "Jaccard, Wallace, Dice. All these three indices are constant w.r.t. N00.
Therefore, these indices do not satisfy strong monotonicity.
Adjusted Rand. It holds that

AR(1, 2, 1, 0) < AR(1, 3, 1, 0),

so that the index does not meet the strong monotonicity property." }, { "heading": "C.4 MONOTONICITY", "text": "" }, { "heading": "C.4.1 POSITIVE CASES", "text": "Rand, Correlation Coefficient, Sokal&Sneath, Correlation Distance. Strong monotonicity implies monotonicity. Therefore, these pair-counting indices satisfy the monotonicity property.
Jaccard and Dice. It can be easily seen that these indices are increasing in N11 while decreasing in N10, N01. For N00, we note that whenever N00 gets increased, either N10 or N01 must decrease, resulting in an increase of the index. Therefore, these indices satisfy monotonicity.
Adjusted Rand. Note that for b, b+d > 0, it holds that

(a + c)/(b + d) > a/b ⇔ c > ad/b. (2)

For Adjusted Rand, we have

a = N11 − (1/N)(N11+N10)(N11+N01), b = a + (1/2)(N10+N01).

Because of this, when we increment either N11 or N00 while decrementing either N10 or N01, we get d = c − 1/2. Hence, we need to prove c > a(c − 1/2)/b, or, equivalently,

c > −a/(2(b − a)) = [(1/N)(N11+N10)(N11+N01) − N11]/(N10+N01).

For simplicity we rewrite this to

(c + pAB − pApB)/(pA + pB − 2pAB) > 0,

where pA = (1/N)(N11+N10) ∈ (0, 1) and pB = (1/N)(N11+N01) ∈ (0, 1). If we increment N00 while decrementing either N10 or N01, then

c ∈ {(1/N)(N11+N10), (1/N)(N11+N01)} = {pA, pB}.

The symmetry of AR allows us to w.l.o.g. assume that c = pA. We write

(pA + pAB − pApB)/(pA + pB − 2pAB) = (pA² + (1 − 2pA)pAB)/(pA + pB − 2pAB).

When pA ≤ 1/2, this is clearly positive. For the case pA > 1/2, we bound pAB ≤ pA and bound the numerator by pA² + (1 − 2pA)pA = (1 − pA)pA > 0. This proves the monotonicity for increasing N00. When incrementing N11 while decrementing either N10 or N01, we get c ∈ {1 − pA, 1 − pB}. Again, we assume w.l.o.g.
that c = 1 − pA and write

(1 − pA + pAB − pApB)/(pA + pB − 2pAB) = (pA(1 − pA) + (1 − 2pA)(pB − pAB))/(pA + pB − 2pAB).

This is clearly positive whenever pA ≤ 1/2. When pA > 1/2, we bound pAB ≥ pA + pB − 1 and rewrite the numerator as

pA(1 − pA) + (1 − 2pA)(pA − 1) = (1 − pA)(3pA − 1) > 0.

This proves monotonicity for increasing N11. Hence, the monotonicity property is met.
NMI and VI. Let B′ be obtained by a perfect split of a cluster B1 into B′1, B′2. Note that this increases the entropy of the candidate while keeping the joint entropy constant. Let us denote this increase in the candidate entropy by the conditional entropy H(B′|B) = H(B′) − H(B) > 0. Now, for NMI, the numerator increases by H(B′|B) while the denominator increases by at most H(B′|B) (dependent on H(A) and the specific normalization that is used). Therefore, NMI increases. Similarly, VI decreases by H(B′|B). Concluding, both NMI and VI are monotonous w.r.t. perfect splits.
Now let B′′ be obtained by a perfect merge of B1, B2 into B′′1. This results in a difference of the entropy of the candidate H(B′′) − H(B) = −H(B|B′′) < 0. The joint entropy decreases by the same amount, so that the mutual information remains unchanged. Therefore, the numerator of NMI remains unchanged while the denominator may or may not change, depending on the normalization. For min- or max-normalization, it may remain unchanged, while for any other average it increases. Hence, NMI does not satisfy monotonicity w.r.t. perfect merges for min- and max-normalization but does satisfy this for average-normalization. For VI, the distance will decrease by H(B|B′′), so that it indeed satisfies monotonicity w.r.t. perfect merges.
AMI. Let B′ be obtained by splitting a cluster B1 into B′1, B′2. This split increases the mutual information by H(B′|B) − H(A⊗B′|A⊗B). Recall the definition of the meet A⊗B from C.4 and note that the joint entropy equals H(A⊗B). For a perfect split we have H(A⊗B′|A⊗B) = 0.
The expected mutual information changes with

E_{A′∼C(S(A))}[M(A′, B′) − M(A′, B)] = H(B′|B) − E_{A′∼C(S(A))}[H(A′⊗B′) − H(A′⊗B)],

where we choose to randomize A instead of B′ and B for simplicity. Note that for all A′,

H(A′⊗B′) − H(A′⊗B) = H(A′⊗B′|A′⊗B) ≥ 0,

with equality if and only if the split is a perfect split w.r.t. A′. Unless A consists exclusively of singleton clusters, there is a positive probability that this split is not perfect, so that the expected value is positive. Furthermore, for the normalization term, we have √(H(A)H(B′)) < √(H(A)H(B)) + H(B′|B). Combining this, we get

AMI(A,B′) = (M(A,B) − E_{A′∼C(S(A))}[M(A′, B)] + E_{A′∼C(S(A))}[H(A′⊗B′|A′⊗B)]) / (√(H(A)H(B′)) − H(B′|B) − E_{A′∼C(S(A))}[M(A′, B)] + E_{A′∼C(S(A))}[H(A′⊗B′|A′⊗B)])
> (M(A,B) − E_{A′∼C(S(A))}[M(A′, B)] + E_{A′∼C(S(A))}[H(A′⊗B′|A′⊗B)]) / (√(H(A)H(B)) − E_{A′∼C(S(A))}[M(A′, B)] + E_{A′∼C(S(A))}[H(A′⊗B′|A′⊗B)])
> (M(A,B) − E_{A′∼C(S(A))}[M(A′, B)]) / (√(H(A)H(B)) − E_{A′∼C(S(A))}[M(A′, B)]) = AMI(A,B).

This proves that AMI satisfies monotonicity w.r.t. perfect splits.
Now let B′′ be obtained by a perfect merge of B1, B2 into B′′1. Again, we have H(B′′) − H(B) = −H(B|B′′) < 0 and M(A,B′′) = M(A,B). Let A′ ∼ C(S(A)) (again, randomizing A instead of B and B′′ for simplicity); then H(A′⊗B′′) ≥ H(A′⊗B) − H(B|B′′), with equality if and only if B′′ is a perfect merge w.r.t. A′, which happens with probability strictly less than 1 (unless A consists of a single cluster). Therefore, as long as kA > 1, the expected mutual information decreases. For the normalization, we have √(H(A)H(B′′)) < √(H(A)H(B)). Hence,

AMI(A,B′′) = (M(A,B′′) − E_{A′∼C(S(A))}[M(A′, B′′)]) / (√(H(A)H(B′′)) − E_{A′∼C(S(A))}[M(A′, B′′)])
= (M(A,B) − E_{A′∼C(S(A))}[M(A′, B′′)]) / (√(H(A)H(B′′)) − E_{A′∼C(S(A))}[M(A′, B′′)])
> (M(A,B) − E_{A′∼C(S(A))}[M(A′, B)]) / (√(H(A)H(B′′)) − E_{A′∼C(S(A))}[M(A′, B)])
> (M(A,B) − E_{A′∼C(S(A))}[M(A′, B)]) / (√(H(A)H(B)) − E_{A′∼C(S(A))}[M(A′, B)]) = AMI(A,B).

BCubed.
Note that a perfect merge increases BCubed recall while leaving BCubed precision unchanged and that a perfect split increases precision while leaving recall unchanged. Hence, the harmonic mean increases." }, { "heading": "C.4.2 NEGATIVE CASES", "text": "FMeasure. We give a numerical counter-example: consider A = {{0, . . . , 6}}, B = {{0, 1, 2, 3}, {4, 5}, {6}} and merge the last two clusters to obtain B′ = {{0, 1, 2, 3}, {4, 5, 6}}. Then, the FMeasure remains unchanged and equal to 0.73, violating monotonicity w.r.t. perfect merges.\nFNMI We will give the following numerical counter-example: Consider A = {{0, 1}, {2}, {3}}, B = {{0}, {1}, {2, 3}} and merge the first two clusters to obtain B′ = {{0, 1}, {2, 3}}. This results in FNMI(A,B) ≈ 0.67 > 0.57 ≈ FNMI(A,B′). This non-monotonicity is caused by the penalty factor that equals 1 for the pair A,B and equals exp(−1/3) ≈ 0.72 for A,B′.\nSMI. For this numerical counter-example we rely on the Matlab-implementation of the index by its original authors (Romano et al., 2014). Let A = {{0, . . . , 4}, {5}}, B = {{0, 1}, {2, 3}, {4}, {5}} and consider merging the two clusters resulting in B′ = {{0, 1, 2, 3}, {4}, {5}}. The index remains unchanged and equals 2 before and after the merge.\nWallace. Let kA = 1 and let kB > 1. Then any merge of B is a perfect merge, but no increase occurs since W1(A,B) = 1." }, { "heading": "C.5 CONSTANT BASELINE", "text": "" }, { "heading": "C.5.1 POSITIVE CASES", "text": "AMI and SMI. Both of these indices satisfy the constant baseline by construction since the expected mutual information is subtracted from the actual mutual information in the numerator.\nAdjusted Rand, Correlation Coefficient and Sokal&Sneath. These indices all satisfy ACB while being PAB-linear for fixed pA, pB . Therefore, the expected value equals the asymptotic constant." }, { "heading": "C.5.2 NEGATIVE CASES", "text": "For all the following indices, we will analyse the following counter-example. 
Let |V| = n, kA = kB = n − 1. For each index, we will compute the expected value and show that it is not constant. All of these indices satisfy the maximal agreement property, and maximal agreement is achieved with probability 1/N (the probability that the single intra-pair of A coincides with the single intra-pair of B). Furthermore, each case where the intra-pairs do not coincide will result in the same contingency variables and hence the same value of the index. We will refer to this value as cn(I). Therefore, the expected value will only have to be taken over two values and will be given by

E[I(A,B)] = (1/N)·cmax + ((N − 1)/N)·cn(I).

For each of these indices we will conclude that this is a non-constant function of n, so that the index does not satisfy the constant baseline property.
Jaccard and Dice. For both these indices we have cmax = 1 and cn(I) = 0 (as N11 = 0 whenever the intra-pairs do not coincide). Hence, E[I(A,B)] = 1/N, which is not constant.
Rand and Wallace. As both functions are linear in N11 for fixed mA = N11 + N10, mB = N11 + N01, we can compute the expected value by simply substituting N11 = mAmB/N. This will result in expected values 1 − 2/N + 2/N² and 1/N for Rand and Wallace respectively, which are both non-constant.
Correlation distance. Here cmax = 0 and

cn(CD) = (1/π)·arccos((0 − 1/N²)/((N − 1)/N²)),

so that the expected value will be given by

E[CD(A,B)] = ((N − 1)/(Nπ))·arccos(−1/(N − 1)).

This is non-constant (it evaluates to 0.44, 0.47 for n = 3, 4 respectively). Note that this expected value converges to 1/2 for n → ∞, which is indeed the asymptotic baseline of the index.
FNMI and NMI. Note that in this case kA = kB, so that the penalty term of FNMI will equal 1 and FNMI will coincide with NMI. Again cmax = 1.
For the case where the intra-pairs do not coincide, the joint entropy will equal H(A,B) = ln(n), while each of the marginal entropies will equal

H(A) = H(B) = ((n − 2)/n)·ln(n) + (2/n)·ln(n/2) = ln(n) − (2/n)·ln(2).

This results in

cn(NMI) = (2H(A) − H(A,B))/H(A) = 1 − (2 ln(2))/(n ln(n) − 2 ln(2)),

and the expected value will be given by the non-constant

E[NMI(A,B)] = 1 − ((N − 1)/N)·(2 ln(2))/(n ln(n) − 2 ln(2)).

Note that as H(A) = H(B), all normalizations of MI will be equal, so that this counter-example proves that none of the variants of (F)NMI satisfy the constant baseline property.
Variation of Information. In this case cmax = 0. We will use the entropies from the NMI computations to conclude that

E[VI(A,B)] = ((N − 1)/N)·(2H(A,B) − H(A) − H(B)) = ((N − 1)/N)·(4/n)·ln(2),

which is again non-constant.
F-measure. Here cmax = 1. In the case where the intra-pairs do not coincide, all contingency variables will be either one or zero, so that both recall and precision will equal 1 − 1/n, so that cn(FM) = 1 − 1/n. This results in the following non-constant expected value

E[FM(A,B)] = 1 − ((N − 1)/N)·(1/n).

Note that because recall equals precision in both cases, this counter-example also works for other averages than the harmonic average.
BCubed. Again cmax = 1. In the other case, the recall and precision will again be equal. Because for BCubed the contribution of cluster i is given by (1/n)·max_j{nij²}/|Ai|, the contributions of the one- and two-clusters will be given by 1/n and 1/(2n), respectively. Hence, cn(BC) = (n − 2)/n + 1/(2n) = 1 − 3/(2n), and we get the non-constant

E[BC(A,B)] = 1 − ((N − 1)/N)·(3/(2n)).

We note that again, this counter-example can be extended to non-harmonic averages of the BCubed recall and precision."
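For concreteness, the value E[J(A,B)] = 1/N for the Jaccard index in this counter-example can be verified by exact enumeration: with kA = kB = n − 1, a uniformly random candidate places its single intra-pair on each of the N pairs with equal probability. A sketch (function names are ours):

```python
from itertools import combinations
from fractions import Fraction

def jaccard_from_counts(n11, n10, n01):
    # Jaccard = N11 / (N11 + N10 + N01); by convention 0 when all are 0.
    tot = n11 + n10 + n01
    return Fraction(n11, tot) if tot else Fraction(0)

def expected_jaccard_singleton_case(n):
    """A and B each consist of one 2-cluster and n-2 singletons; average
    Jaccard over all N equally likely placements of B's intra-pair."""
    pairs = list(combinations(range(n), 2))
    a_pair = pairs[0]                 # fixed intra-pair of the reference A
    total = Fraction(0)
    for b_pair in pairs:              # uniform over C(S(B))
        n11 = int(b_pair == a_pair)   # pairs agree only when they coincide
        n10 = 1 - n11                 # A's intra-pair unmatched otherwise
        n01 = 1 - n11                 # B's intra-pair unmatched otherwise
        total += jaccard_from_counts(n11, n10, n01)
    return total / len(pairs)
```

The result is exactly 1/N with N = n(n−1)/2, matching the computation above.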
}, { "heading": "D FUTHER ANALYSIS OF CONSTANT BASELINE PROPERTY", "text": "" }, { "heading": "D.1 ANALYSIS OF EXACT CONSTANT BASELINE PROPERTY", "text": "Let us show that the definition of the constant baseline applies not only to uniform (within a given sizes specification) distribution but also to all symmetric distributions over clusterings.\nDefinition 11. We say that a distribution over clusterings B is element-symmetric if for every two clusterings B and B′ that have the same cluster-sizes, B returns B and B′ with equal probabilities. Lemma 3. Let I be an index with a constant baseline as defined in Definition 7, let A be a clustering with 1 < kA < n and let B be an element-symmetric distribution. Then EB∼B[I(A,B)] = cbase.\nProof. We write EB∼B[I(A,B)] = ∑ s PB∼B(S(B) = s)EB∼B[I(A,B)|S(B) = s]\n= ∑ s PB∼B(S(B) = s)EB∼C(s)[I(A,B)]\n= ∑ s PB∼B(S(B) = s) cbase = cbase,\nwhere the sum ranges over cluster-sizes of n elements." }, { "heading": "D.2 ANALYSIS OF ASYMPTOTIC CONSTANT BASELINE PROPERTY", "text": "Definition 12. An index I is said to be scale-invariant, if it can be expressed as a continuous function of the three variables pA := mA/N, pB := mB/N and pAB := N11/N .\nAll indices in Table 3 are scale-invariant. For such indices, we will write I(p)(pAB , pA, pB). Note that when B ∼ C(s) for some s, the values pA, pB are constants while pAB is a random variable. Therefore, we further write PAB to stress that this is a random variable.\nTheorem 2. Let I be a scale-invariant pair-counting index, and consider a sequence of clusterings A(n) and cluster-size specifications s(n). Let N (n)11 , N (n) 10 , N (n) 01 , N (n) 00 be the corresponding paircounts. Then, for any ε > 0, as n→∞,\nP (∣∣∣I (N (n)11 , N (n)10 , N (n)01 , N (n)00 )− I (N (n)11 , N (n)10 , N (n)01 , N (n)00 )∣∣∣ > ε)→ 0.\nProof. 
We prove the equivalent statement

I(p)(P(n)AB, p(n)A, p(n)B) − I(p)(p(n)A p(n)B, p(n)A, p(n)B) → 0 in probability.

We first prove that P(n)AB − p(n)A p(n)B → 0 in probability, so that the above follows from the continuous mapping theorem. Chebyshev's inequality gives

P(|P(n)AB − p(n)A p(n)B| > ε) ≤ Var(N(n)11) / ((n choose 2)² ε²) → 0.

The last step follows from the fact that Var(N11) = o(n⁴), as we will prove in the remainder of this section. Even though in the definition A is fixed while B is randomly permuted, it is convenient to equivalently consider both clusterings randomly permuted for this proof.
We will show that Var(N11) = o(n⁴). To compute the variance, we first inspect the second moment. Let A(S) denote the indicator function of the event that all elements of S ⊂ V are in the same cluster in A. Define B(S) similarly and let AB(S) = A(S)B(S). Let e, e1, e2 range over subsets of V of size 2. We write

N11² = (Σ_e AB(e))² = Σ_{e1,e2} AB(e1)AB(e2)
= Σ_{|e1∩e2|=2} AB(e1)AB(e2) + Σ_{|e1∩e2|=1} AB(e1)AB(e2) + Σ_{|e1∩e2|=0} AB(e1)AB(e2)
= N11 + Σ_{|e1∩e2|=1} AB(e1 ∪ e2) + Σ_{e1∩e2=∅} AB(e1)AB(e2).

We take the expectation

E[N11²] = E[N11] + 6·(n choose 3)·E[AB({v1, v2, v3})] + (n choose 2)·(n−2 choose 2)·E[AB(e1)AB(e2)],

where v1, v2, v3 ∈ V are distinct and e1 ∩ e2 = ∅. The first two terms are obviously o(n⁴). We inspect the last term:

(n choose 2)(n−2 choose 2)·E[AB(e1)AB(e2)] = (n choose 2)·Σ_{i,j} P(e1 ⊂ Ai ∩ Bj) × (n−2 choose 2)·E[AB(e2)|e1 ⊂ Ai ∩ Bj]. (3)

Now we rewrite E[N11]² to

E[N11]² = (n choose 2)·Σ_{i,j} P(e1 ⊂ Ai ∩ Bj)·(n choose 2)·E[AB(e2)].

Note that (n choose 2)·E[AB(e2)] > (n−2 choose 2)·E[AB(e2)], so that the difference between (3) and E[N11]² can be bounded by

(n choose 2)(n−2 choose 2)·Σ_{i,j} P(e1 ⊂ Ai ∩ Bj)·(E[AB(e2)|e1 ⊂ Ai ∩ Bj] − E[AB(e2)]).

As (n choose 2)(n−2 choose 2) = O(n⁴), what remains to be proven is

Σ_{i,j} P(e1 ⊂ Ai ∩ Bj)·(E[AB(e2)|e1 ⊂ Ai ∩ Bj] − E[AB(e2)]) = o(1).

Note that it is sufficient to prove that

E[AB(e2)|e1 ⊂ Ai ∩ Bj] − E[AB(e2)] = o(1),

for all i, j.
Note that E[AB(e_2)] = m_A m_B / N^2, while

E[AB(e_2) | e_1 ⊂ A_i ∩ B_j] = (m_A − (2a_i − 3))(m_B − (2b_j − 3)) / (N − (2n − 3))^2.

Hence, the difference is given by

(m_A − (2a_i − 3))(m_B − (2b_j − 3)) / (N − (2n−3))^2 − m_A m_B / N^2
= [ N^2 (m_A − (2a_i − 3))(m_B − (2b_j − 3)) − (N − (2n−3))^2 m_A m_B ] / [ N^2 (N − (2n−3))^2 ]
= [ N^2 ( (2a_i − 3)(2b_j − 3) − m_A (2b_j − 3) − m_B (2a_i − 3) ) + m_A m_B ( 2N(2n−3) − (2n−3)^2 ) ] / [ N^2 (N − (2n−3))^2 ]
= ( (2a_i − 3)(2b_j − 3) − m_A (2b_j − 3) − m_B (2a_i − 3) ) / (N − (2n−3))^2 + (m_A m_B / N^2) · ( 2N(2n−3) − (2n−3)^2 ) / (N − (2n−3))^2
= O(n^3) / (N − (2n−3))^2 + (m_A m_B / N^2) · O(n^3) / (N − (2n−3))^2
= o(1),

as required." }, { "heading": "D.3 STATISTICAL TESTS FOR CONSTANT BASELINE", "text": "In this section, we provide two statistical tests: one test to check whether an index I satisfies the constant baseline property and another to check whether I has a selection bias towards certain cluster sizes.

Checking constant baseline. Given a reference clustering A and a number of cluster-size specifications s_1, . . . , s_k, we test the null hypothesis that

E_{B∼C(s_i)}[I(A,B)]

is constant in i = 1, . . . , k. We do so by using one-way Analysis Of Variance (ANOVA). For each cluster-size specification, we generate r clusterings. Although ANOVA assumes the data to be normally distributed, it is known to be robust for sufficiently large groups (i.e., large r).

Checking selection bias. In (Romano et al., 2014) it is observed that some indices with a constant baseline do have a selection bias: when we have a pool of random clusterings of various sizes and select the one that has the highest score w.r.t. a reference clustering, there is a bias towards selecting certain cluster sizes. We test this bias in the following way: given a reference clustering A and cluster-size specifications s_1, . . . , s_k, we repeatedly generate B_1 ∼ C(s_1), . . . , B_k ∼ C(s_k). The null hypothesis is that each of these clusterings B_i has an equal chance of maximizing I(A,B_i).
We test this hypothesis by generating r pools and using the Chi-squared test.

We emphasize that these statistical tests cannot prove whether an index satisfies the property or has a bias. Both will return a confidence level p with which the null hypothesis can be rejected. Furthermore, for an index to not have these biases, the null hypothesis should be true for all choices of A, s_1, . . . , s_k, which is impossible to verify statistically.

The statistical tests have been implemented in Python and the code supplements the submission. We applied the tests to the indices of Tables 2 and 3. We chose n = 50, 100, 150, . . . , 1000 and r = 500. For the cluster sizes, we define the balanced cluster sizes BS(n, k) to be the cluster-size specification for k clusters of which n − k·⌊n/k⌋ clusters have size ⌈n/k⌉ while the remainder have size ⌊n/k⌋. Then we choose A^(n) to be a clustering with sizes BS(n, ⌊n^0.5⌋) and consider candidates with sizes s^(n)_1 = BS(n, ⌊n^0.25⌋), s^(n)_2 = BS(n, ⌊n^0.5⌋), s^(n)_3 = BS(n, ⌊n^0.75⌋). For each n, the statistical test returns a p-value. We use Fisher's method to combine these p-values into one single p-value and then reject the constant baseline if p < 0.05. The obtained results agree with Tables 2 and 3 except for Correlation Distance, which is so close to having a constant baseline that the tests are unable to detect it.

D.4 ILLUSTRATING SIGNIFICANCE OF CONSTANT BASELINE

In this section, we conduct two experiments that illustrate the biases of various indices and allow us to identify the direction of the bias in different situations. Our reference clustering corresponds to the expert-annotated clustering of the production experiment described in Section 3 and Appendix F.3, where n = 924 items are grouped into k_A = 431 clusters (305 of them consist of a single element).

In the first experiment, we randomly cluster the items into k approximately equally sized clusters for various k.
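The balanced cluster sizes BS(n, k), the uniform sampler C(s), and the ANOVA-based constant-baseline test described above can be sketched as follows (a minimal sketch, not the released code; the function names are ours, and `index` stands for any similarity index taking two label arrays):

```python
import numpy as np
from scipy import stats

def balanced_sizes(n, k):
    """BS(n, k): n - k*floor(n/k) clusters of size ceil(n/k), the rest of size floor(n/k)."""
    small, big = n // k, n // k + 1
    n_big = n - k * (n // k)
    return [big] * n_big + [small] * (k - n_big)

def random_clustering(sizes, rng):
    """Sample uniformly from C(s): a random clustering with the given cluster sizes."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    return rng.permutation(labels)

def constant_baseline_pvalue(index, A, size_specs, r, seed=0):
    """One-way ANOVA over r random clusterings per cluster-size specification;
    a small p-value is evidence against a constant baseline."""
    rng = np.random.default_rng(seed)
    groups = [[index(A, random_clustering(s, rng)) for _ in range(r)]
              for s in size_specs]
    return stats.f_oneway(*groups).pvalue
```

Fisher's method (`scipy.stats.combine_pvalues`) can then aggregate the p-values over different n, as in the procedure above.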
Figure 3 shows the averages and 90% confidence bands for each index. It can be seen that some indices (e.g., NMI and Rand) have a clear increasing baseline while others (e.g., Jaccard and VI) have a decreasing baseline. In contrast, all unbiased indices have a constant baseline.

In Section 4.6 we argued that these biases could not be described in terms of the number of clusters alone. Our second experiment illustrates that the bias also heavily depends on the sizes of the clusters. In this case, items are randomly clustered into 32 clusters, 31 of which are “small” clusters of size s while one cluster has size n − 31·s, where s is varied between 1 and 28. We see that the biases are clearly visible. This shows that, even when fixing the number of clusters, biased indices may heavily distort an experiment’s outcome.

Finally, recall that we have proven that the baseline of CD is only asymptotically constant. Figures 3 and 4 show that for practical purposes its baseline can be considered constant." }, { "heading": "E ADDITIONAL RESULTS", "text": "" }, { "heading": "E.1 PROOF OF THEOREM 1", "text": "Let B′ be an A-consistent improvement of B. We define

B ⊗ B′ = { B_j ∩ B′_{j′} | B_j ∈ B, B′_{j′} ∈ B′, B_j ∩ B′_{j′} ≠ ∅ }

and show that B ⊗ B′ can be obtained from B by a sequence of perfect splits, while B′ can be obtained from B ⊗ B′ by a sequence of perfect merges. Indeed, the assumption that B′ does not introduce new disagreeing pairs guarantees that any B_j ∈ B can be split into B_j ∩ B′_1, . . . , B_j ∩ B′_{k_{B′}} without splitting over any intra-cluster pairs of A. Let us prove that B′ can be obtained from B ⊗ B′ by perfect merges. Suppose there are two B′′_1, B′′_2 ∈ B ⊗ B′ such that both are subsets of some B′_{j′}. Assume that this merge is not perfect; then there must be v ∈ B′′_1, w ∈ B′′_2 such that v and w are in different clusters of A. As v and w are in the same cluster of B′, it follows from the definition of B ⊗ B′ that v and w must be in different clusters of B.
Hence, (v, w) is an inter-cluster pair in both A and B, while it is an intra-cluster pair of B′, contradicting the assumption that B′ is an A-consistent improvement of B. This concludes the proof." }, { "heading": "E.2 CORRELATION DISTANCE IS A DISTANCE", "text": "Theorem. The Correlation Distance is indeed a distance.

Proof. A proof of this is given in Van Dongen & Enright (2012). We give an alternative proof that allows for a geometric interpretation. First we map each partition A to an N-dimensional vector on the unit sphere by

u(A) := (1/√N)·1 if k_A = 1;  (Ā − p_A·1)/‖Ā − p_A·1‖ if 1 < k_A < n;  −(1/√N)·1 if k_A = n,

where 1 is the N-dimensional all-one vector and Ā is the binary vector representation of a partition introduced in Section 2. A straightforward computation gives ‖Ā − p_A·1‖ = √(N p_A (1 − p_A)) and standard inner product ⟨Ā − p_A·1, B̄ − p_B·1⟩ = N(p_AB − p_A p_B), so that indeed

⟨Ā − p_A·1, B̄ − p_B·1⟩ / ( ‖Ā − p_A·1‖ ‖B̄ − p_B·1‖ ) = CC^(p)(p_AB, p_A, p_B).

It is a well-known fact that the inner product of two vectors of unit length corresponds to the cosine of their angle. Hence, taking the arccosine gives us the angle. The angle between unit vectors corresponds to the distance along the unit sphere. As u is an injection from the set of partitions to points on the unit sphere, we may conclude that this index is indeed a distance on the set of partitions." }, { "heading": "E.3 DEVIATION OF CD FROM CONSTANT BASELINE", "text": "Theorem. Given ground truth A with a number of clusters 1 < k_A < n, a cluster-size specification s and a random partition B ∼ C(s), the expected difference between Correlation Distance and its baseline is given by

E_{B∼C(s)}[CD(A,B)] − 1/2 = −(1/π) Σ_{k=1}^∞ [ (2k)! / (2^{2k} (k!)^2) ] · E_{B∼C(s)}[CC(A,B)^{2k+1}] / (2k+1).

Proof. We take the Taylor expansion of the arccosine around CC(A,B) = 0 and get

CD(A,B) = 1/2 − (1/π) Σ_{k=0}^∞ [ (2k)! / (2^{2k} (k!)^2) ] · CC(A,B)^{2k+1} / (2k+1).

We take the expectation of both sides and note that the first moment of CC equals zero, so the starting index is k = 1.

For B ∼ C(s) and large n, the value CC(A,B) will be concentrated around 0. This explains why, in practice, the mean tends to be very close to the asymptotic baseline." }, { "heading": "E.4 COMPARISON WITH LEI ET AL. (2017)", "text": "Lei et al. (2017) describe the following biases for cluster similarity indices: NCinc — the average value for a random guess increases monotonically with the Number of Clusters (NC) of the candidate; NCdec — the average value for a random guess decreases monotonically with the number of clusters; and GTbias — the direction of the monotonicity depends on the specific Ground Truth (GT), i.e., on the reference partition. In particular, the authors conclude from numerical experiments that Jaccard suffers from NCdec and analytically prove that Rand suffers from GTbias, where the direction of the bias depends on the quadratic entropy of the ground truth clustering. Here we argue that these biases are not well defined, suggest replacing them by well-defined analogs, and show how our analysis allows one to easily test indices for these biases.

We argue that the quantity of interest should not be the number of clusters, but the number of intra-cluster pairs of the candidate. Theorem 2 shows that the asymptotic value of the index depends on the number of intra-cluster pairs of both clusterings. The key insight is that more clusters do not necessarily imply fewer intra-cluster pairs. For example, let s denote a cluster-size specification for 3 clusters each of size ℓ > 2. Now let s′ be the cluster-size specification for one cluster of size 2ℓ and ℓ clusters of size 1. Then, any B ∼ C(s) will have 3 clusters and 3(ℓ choose 2) intra-cluster pairs, while any B′ ∼ C(s′) will have ℓ + 1 > 3 clusters and (2ℓ choose 2) > 3(ℓ choose 2) intra-cluster pairs.
For any ground truth A with cluster-sizes s, we have E[J(A,B)] < E[J(A,B′)] because B′ has a larger number of intra-cluster pairs. In contrast, Lei et al. (2017) classifies Jaccard as an NCdec index, so that the expected value should increase, contradicting the definition of NCdec. The NPinc and NPdec biases that are defined in Definition 9 are sound versions of these NCinc and NCdec biases because they depend on the expected number of agreeing pairs. This allows one to determine analytically which bias a given pair-counting index has." }, { "heading": "F EXPERIMENT", "text": "" }, { "heading": "F.1 SYNTHETIC EXPERIMENT", "text": "In this experiment, we construct several simple examples to illustrate the inconsistency among the indices. Recall that two indices I_1 and I_2 are inconsistent for a triplet of partitions (A, B_1, B_2) if I_1(A,B_1) > I_1(A,B_2) but I_2(A,B_1) < I_2(A,B_2).

We take all indices from Tables 2 and 3 and construct several triplets of partitions to distinguish them all. Let us note that the pairs Dice vs Jaccard and CC vs CD cannot be inconsistent since they are monotonically transformable to each other. Also, we do not compare with SMI since it is much more computationally complex than all other indices. Thus, we end up with 13 indices and are looking for simple inconsistency examples.

The theoretical minimum number of examples needed to find an inconsistency for all pairs of 13 indices is 4. We were able to find four such examples, see Figure 5. In this figure, we show four inconsistency triplets. For each triplet, the shapes (triangle, square, etc.) denote the reference partition A. Left and right figures show candidate partitions B_1 and B_2. In the caption, we specify which similarity indices favor this candidate partition over the other one.

It is easy to see that for each pair of indices, there is a simple example where they disagree. For example, NMI and NMImax are inconsistent for triplet 3.
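Checking such triplets reduces to computing pair counts. A minimal sketch (the function names are ours, not from the released code) of the pair counts and a few of the indices from Tables 2 and 3, together with the inconsistency check:

```python
import numpy as np

def pair_counts(A, B):
    """Return (N11, mA, mB, N): pairs intra in both, intra in A, intra in B, total."""
    A, B = np.asarray(A), np.asarray(B)
    iu = np.triu_indices(len(A), 1)
    sameA = (A[:, None] == A[None, :])[iu]
    sameB = (B[:, None] == B[None, :])[iu]
    return int((sameA & sameB).sum()), int(sameA.sum()), int(sameB.sum()), len(iu[0])

def rand(A, B):
    N11, mA, mB, N = pair_counts(A, B)
    return (N - mA - mB + 2 * N11) / N  # fraction of agreeing pairs: (N11 + N00) / N

def jaccard(A, B):
    N11, mA, mB, N = pair_counts(A, B)
    return N11 / (mA + mB - N11)

def correlation_coefficient(A, B):
    """CC; undefined for partitions with k = 1 or k = n clusters."""
    N11, mA, mB, N = pair_counts(A, B)
    pA, pB, pAB = mA / N, mB / N, N11 / N
    return (pAB - pA * pB) / np.sqrt(pA * (1 - pA) * pB * (1 - pB))

def correlation_distance(A, B):
    return np.arccos(np.clip(correlation_coefficient(A, B), -1.0, 1.0)) / np.pi

def inconsistent(i1, i2, A, B1, B2):
    """True iff i1 and i2 rank the candidates B1, B2 in opposite orders."""
    d1 = i1(A, B1) - i1(A, B2)
    d2 = i2(A, B1) - i2(A, B2)
    return d1 * d2 < 0
```

For instance, applying `inconsistent` to the four triplets of Figure 5 over all index pairs reproduces the disagreements listed in the captions.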
(1a) FNMI, Rand, AdjRand, Jaccard, Dice, Wallace, FMeasure, BCubed

(1b) NMI, NMImax, VI, AMI, S&S, CC, CD

Also, we know that Jaccard in general favors larger clusters, while Rand and NMI often prefer smaller ones. Hence, they often disagree in this way (see triplets 2 and 4)." }, { "heading": "F.2 EXPERIMENTS ON REAL DATASETS", "text": "In this section, we test whether the inconsistency affects conclusions obtained in experiments on real data.

For that, we used 16 real-world datasets (GitHub, 2020). We took all real-world datasets available there and removed the ones with categorical features or without an explicit target class field. The values of the “target class” field were used as a reference partition. We end up with the following real-world datasets: arrhythmia, balance-scale, ecoli, heart-statlog, letter, segment, vehicle, wdbc, wine, wisc, cpu, iono, iris, sonar, thy, zoo.

On these datasets, we ran 8 well-known clustering algorithms (Scikit-learn, 2020): KMeans, AffinityPropagation, MeanShift, AgglomerativeClustering, DBSCAN, OPTICS, Birch, GaussianMixture. For AgglomerativeClustering, we used 4 different linkage types (‘ward’, ‘average’, ‘complete’, ‘single’). For GaussianMixture, we used 4 different covariance types (‘spherical’, ‘diag’, ‘tied’, ‘full’). For methods requiring the number of clusters as a parameter (KMeans, Birch, AgglomerativeClustering, GaussianMixture), we took up to 4 different values (fewer if some of them coincide): 2, ref-clusters, max(2, ref-clusters/2), min(items, 2·ref-clusters), where ref-clusters is the number of clusters in the reference partition and items is the number of elements in the dataset. For MeanShift, we used the option cluster_all = True. All other settings were default or taken from examples in the sklearn manual.

For all datasets, we calculated all the partitions for all methods described above. We removed all partitions having only one cluster or which raised any calculation error.
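A condensed sketch of this partition-generation loop, shown for a subset of the algorithms and parameter choices described above (loading of the named datasets is assumed; the function name is ours):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, Birch, DBSCAN
from sklearn.mixture import GaussianMixture

def candidate_partitions(X, ref_clusters, n_items):
    """Generate candidate partitions as in F.2 and drop degenerate ones."""
    ks = sorted({2, ref_clusters, max(2, ref_clusters // 2),
                 min(n_items, 2 * ref_clusters)})
    candidates = []
    for k in ks:
        candidates.append(KMeans(n_clusters=k, n_init=10).fit_predict(X))
        candidates.append(Birch(n_clusters=k).fit_predict(X))
        for linkage in ("ward", "average", "complete", "single"):
            candidates.append(AgglomerativeClustering(
                n_clusters=k, linkage=linkage).fit_predict(X))
        candidates.append(GaussianMixture(
            n_components=k, covariance_type="diag").fit(X).predict(X))
    candidates.append(DBSCAN().fit_predict(X))  # no cluster-count parameter
    # keep only partitions with more than one cluster
    return [c for c in candidates if len(set(c)) > 1]
```

Each surviving partition can then be scored against the reference partition with every index under study, producing the triplets analyzed next.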
Then, we considered all possible triplets A, B_1, B_2, where A is a reference partition and B_1 and B_2 are candidates obtained with two different algorithms. We have 8688 such triplets in total. For each triplet, we check whether the indices are consistent. The inconsistency frequency is shown in Table 6. Note that Wallace is highly asymmetrical and does not satisfy most of the properties, so it is not surprising that it is in general very inconsistent with the others. However, the inconsistency rates are significant even for widely used pairs of indices such as, e.g., Variation of Information vs NMI (40.3%, which is an extremely high disagreement). Interestingly, the best agreeing indices are S&S and CC, which satisfy most of our properties. This means that conclusions made with these indices are likely to be similar.

Actually, one can show that all indices are inconsistent using only one dataset. This holds for 11 out of 16 datasets: heart-statlog, iris, segment, thy, arrhythmia, vehicle, zoo, ecoli, balance-scale, letter, wine. We do not present statistics for individual datasets since we found the aggregated Table 6 to be more useful.

Finally, to illustrate the biases of indices, we compare two KMeans algorithms with k = 2 and k = 2·ref-clusters. The comparison is performed on 10 datasets (where both algorithms completed successfully). The results are shown in Table 7. In this table, biases and inconsistency are clearly seen. We see that NMI and NMImax almost always prefer the larger number of clusters. In contrast, Variation of Information and Rand usually prefer k = 2 (Rand prefers k = 2 in all cases).

5 The code supplements the submission." }, { "heading": "F.3 PRODUCTION EXPERIMENT", "text": "To show that the choice of similarity index may have an effect on the final quality of a production algorithm, we conducted an experiment within a major news aggregator system.
The system aggregates all news articles into events and shows the list of the most important events to users. For grouping, a clustering algorithm is used, and the quality of this algorithm affects the user experience: merging different clusters may lead to not showing an important event, while too much splitting may cause the presence of duplicate events.

There is an algorithm Aprod currently used in production and two alternative algorithms A1 and A2. To decide which alternative is better for the system, we need to compare them. For that, it is possible either to perform an online experiment or to make an offline comparison, which is much cheaper and allows us to compare more alternatives. For the offline comparison, we manually grouped 1K news articles about volleyball, collected during a period of three days, into events. Then, we compared the obtained reference partition with the partitions Aprod, A1, and A2 obtained by Aprod, A1, and A2, respectively (see Table 8). According to most of the indices, A2 is closer to the reference partition than A1, and A1 is closer than Aprod. However, according to some indices, including the well-known NMImax, NMI, and Rand, A1 corresponds to the reference partition better than A2. As a result, we see that in practical applications, different similarity indices may rank the algorithms differently.

To further see which algorithm better agrees with user preferences, we launched the following online experiment. During one week we compared Aprod and A1, and during another week Aprod and A2 (it is not technically possible to compare A1 and A2 simultaneously). In the first experiment, A1 gave +0.75% clicks on events shown to users; in the second, A2 gave +2.7%, which clearly confirms that these algorithms have different effects on user experience and that A2 is a better alternative than A1. Most similarity indices with nice properties, including CC, CD, and S&S, are in agreement with user preferences. In contrast, AMI ranks A1 higher than A2.
This can be explained by the fact that AMI gives more weight to small clusters compared to pair-counting indices, which can be undesirable for this particular application, as we discuss in Section 5." } ]
2020
null
SP:813473d94da9db192e13548da7f92149773062a5
[ "The paper overall is of good quality. The story of the work is well-written which makes the contributions easier to digest. One suggestion would be to comment a bit more on the relevance of the margin distribution for readers that are unfamiliar with it, for instance, in Figure 1, the term margin distribution is thrown without explaining why one should look into it. ", "The generalization performance of learning algorithms characterizes their ability to generalize their empirical behavior on training examples to unseen test data, which provides an intuitive understanding of how different parameters affect the learning performance and some guides to design learning machines. Different from the traditional error analysis, this paper focuses on bounding the divergence bettween the test error and the training error by the the corresponding distillation error and distillation complexity, e.g., test error is bounded by training error + distillation error + distillation complexity. The current learning theory analysis may be important to understand the theoretical foundations of distillation strategy in deep networks. However, some theoretical issues should be illustrated to improve its readability, e.g,. " ]
This paper theoretically investigates the following empirical phenomenon: given a high-complexity network with poor generalization bounds, one can distill it into a network with nearly identical predictions but low complexity and vastly smaller generalization bounds. The main contribution is an analysis showing that the original network inherits this good generalization bound from its distillation, assuming the use of well-behaved data augmentation. This bound is presented both in an abstract and in a concrete form, the latter complemented by a reduction technique to handle modern computation graphs featuring convolutional layers, fully-connected layers, and skip connections, to name a few. To round out the story, a (looser) classical uniform convergence analysis of compression is also presented, as well as a variety of experiments on cifar10 and mnist demonstrating similar generalization performance between the original network and its distillation. 1 OVERVIEW AND MAIN RESULTS Generalization bounds are statistical tools which take as input various measurements of a predictor on training data, and output a performance estimate for unseen data — that is, they estimate how well the predictor generalizes to unseen data. Despite extensive development spanning many decades (Anthony & Bartlett, 1999), there is growing concern that these bounds are not only disastrously loose (Dziugaite & Roy, 2017), but worse that they do not correlate with the underlying phenomena (Jiang et al., 2019b), and even that the basic method of proof is doomed (Zhang et al., 2016; Nagarajan & Kolter, 2019). As an explicit demonstration of the looseness of these bounds, Figure 1 calculates bounds for a standard ResNet architecture achieving test errors of respectively 0.008 and 0.067 on mnist and cifar10; the observed generalization gap is 10^-1, while standard generalization techniques upper bound it with 10^15.
Contrary to this dilemma, there is evidence that these networks can often be compressed or distilled into simpler networks, while still preserving their output values and low test error. Meanwhile, these simpler networks exhibit vastly better generalization bounds: again referring to Figure 1, those same networks from before can be distilled with hardly any change to their outputs, while their bounds reduce by a factor of roughly 10^10. Distillation is widely studied (Buciluŭ et al., 2006; Hinton et al., 2015), but usually the original network is discarded and only the final distilled network is preserved. The purpose of this work is to carry the good generalization bounds of the distilled network back to the original network; in a sense, the explicit simplicity of the distilled network is used as a witness to implicit simplicity of the original network. The main contributions are as follows. • The main theoretical contribution is a generalization bound for the original, undistilled network which scales primarily with the generalization properties of its distillation, assuming that well-behaved data augmentation is used to measure the distillation distance. An abstract version of this bound is stated in Lemma 1.1, along with a sufficient data augmentation technique in Lemma 1.2. A concrete version of the bound, suitable to handle the ResNet architecture in Figure 1, is described in Theorem 1.3. Handling sophisticated architectures with only minor proof alterations is another contribution of this work, and is described alongside Theorem 1.3. This abstract and concrete analysis is sketched in Section 3, with full proofs deferred to appendices. • Rather than using an assumption on the distillation process (e.g., the aforementioned “well-behaved data augmentation”), this work also gives a direct uniform convergence analysis, culminating in Theorem 1.4. This is presented partially as an open problem or cautionary tale, as
[ { "affiliations": [], "name": "Daniel Hsu" }, { "affiliations": [], "name": "Ziwei Ji" }, { "affiliations": [], "name": "Matus Telgarsky" }, { "affiliations": [], "name": "Lan Wang" } ]
[ { "authors": [ "Martin Anthony", "Peter L. Bartlett" ], "title": "Neural Network Learning: Theoretical Foundations", "venue": null, "year": 1999 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": null, "year": 2018 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In NIPS, pp", "year": 2017 }, { "authors": [ "Peter L. Bartlett", "Nick Harvey", "Chris Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks. 2017b", "venue": null, "year": 2017 }, { "authors": [ "Cody A. Coleman", "Deepak Narayanan", "Daniel Kang", "Tian Zhao", "Jian Zhang", "Luigi Nardi", "Peter Bailis", "Kunle Olukotun", "Chris Ré", "Matei Zaharia" ], "title": "Dawnbench: An end-to-end deep learning benchmark and competition", "venue": "In NIPS ML Systems Workshop,", "year": 2017 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M. Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. 2017", "venue": null, "year": 2017 }, { "authors": [ "Gintare Karolina Dziugaite", "Alexandre Drouin", "Brady Neal", "Nitarshan Rajkumar", "Ethan Caballero", "Linbo Wang", "Ioannis Mitliagkas", "Daniel M. Roy" ], "title": "In search of robust measures of generalization", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Dylan J. Foster", "Alexander Rakhlin" ], "title": "`∞ vector contraction for rademacher complexity. 2019", "venue": null, "year": 1911 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M. 
Roy", "Michael Carbin" ], "title": "Pruning neural networks at initialization: Why are we missing the mark? 2020", "venue": null, "year": 2009 }, { "authors": [ "Noah Golowich", "Alexander Rakhlin", "Ohad Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": "In COLT,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": null, "year": 2015 }, { "authors": [ "Heinrich Jiang" ], "title": "Uniform convergence rates for kernel density estimation", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Yiding Jiang", "Dilip Krishnan", "Hossein Mobahi", "Samy Bengio" ], "title": "Predicting the generalization gap in deep networks with margin distributions", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them. 2019b. arXiv:1912.02178 [cs.LG", "venue": null, "year": 1912 }, { "authors": [ "Philip M. Long", "Hanie Sedghi" ], "title": "Generalization bounds for deep convolutional neural networks. 2019", "venue": null, "year": 1905 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning", "venue": null, "year": 2019 }, { "authors": [ "Gilles Pisier" ], "title": "Remarques sur un résultat non publié de b", "venue": "maurey. Séminaire Analyse fonctionnelle (dit), pp", "year": 1980 }, { "authors": [ "Tamas Sarlos" ], "title": "Improved approximation algorithms for large matrices via random projections", "venue": "In FOCS, pp. 143–152,", "year": 2006 }, { "authors": [ "Robert E. 
Schapire", "Yoav Freund" ], "title": "Boosting: Foundations and Algorithms", "venue": null, "year": 2012 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding Machine Learning: From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "Jingtong Su", "Yihang Chen", "Tianle Cai", "Tianhao Wu", "Ruiqi Gao", "Liwei Wang", "Jason D. Lee" ], "title": "Sanity-checking pruning methods: Random tickets can win the jackpot", "venue": null, "year": 2020 }, { "authors": [ "Taiji Suzuki", "Hiroshi Abe", "Tomoaki Nishimura" ], "title": "Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network. 2019", "venue": null, "year": 1909 }, { "authors": [ "Colin Wei", "Tengyu Ma" ], "title": "Data-dependent sample complexity of deep neural networks via lipschitz augmentation", "venue": null, "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": null, "year": 2016 }, { "authors": [ "‖v" ], "title": "u‖∞, and in particular v 7→ φγ(v)/y is (1/γ)-Lipschitz with respect to the `∞ norm. Applying the aforementioned Lipschitz composition rule (Foster", "venue": "Radn", "year": 2019 }, { "authors": [ "C ≤Mγ(f(x))y" ], "title": "SAMPLING TOOLS The proofs of Lemma 3.1 and Lemma 3.2 both make heavy use of sampling. Lemma C.1 (Maurey (Pisier, 1980)). Suppose random variable V is almost surely supported on a subset S of some Hilbert space, and let (V1", "venue": null, "year": 1980 } ]
[ { "heading": "1 OVERVIEW AND MAIN RESULTS", "text": "Generalization bounds are statistical tools which take as input various measurements of a predictor on training data, and output a performance estimate for unseen data — that is, they estimate how well the predictor generalizes to unseen data. Despite extensive development spanning many decades (Anthony & Bartlett, 1999), there is growing concern that these bounds are not only disastrously loose (Dziugaite & Roy, 2017), but worse that they do not correlate with the underlying phenomena (Jiang et al., 2019b), and even that the basic method of proof is doomed (Zhang et al., 2016; Nagarajan & Kolter, 2019). As an explicit demonstration of the looseness of these bounds, Figure 1 calculates bounds for a standard ResNet architecture achieving test errors of respectively 0.008 and 0.067 on mnist and cifar10; the observed generalization gap is 10−1, while standard generalization techniques upper bound it with 1015.\nContrary to this dilemma, there is evidence that these networks can often be compressed or distilled into simpler networks, while still preserving their output values and low test error. Meanwhile, these simpler networks exhibit vastly better generalization bounds: again referring to Figure 1, those same networks from before can be distilled with hardly any change to their outputs, while their bounds reduce by a factor of roughly 1010. Distillation is widely studied (Buciluŭ et al., 2006; Hinton et al., 2015), but usually the original network is discarded and only the final distilled network is preserved.\nThe purpose of this work is to carry the good generalization bounds of the distilled network back to the original network; in a sense, the explicit simplicity of the distilled network is used as a witness to implicit simplicity of the original network. 
The main contributions are as follows.\n• The main theoretical contribution is a generalization bound for the original, undistilled network which scales primarily with the generalization properties of its distillation, assuming that wellbehaved data augmentation is used to measure the distillation distance. An abstract version of this bound is stated in Lemma 1.1, along with a sufficient data augmentation technique in Lemma 1.2. A concrete version of the bound, suitable to handle the ResNet architecture in Figure 1, is described in Theorem 1.3. Handling sophisticated architectures with only minor proof alterations is another contribution of this work, and is described alongside Theorem 1.3. This abstract and concrete analysis is sketched in Section 3, with full proofs deferred to appendices.\n• Rather than using an assumption on the distillation process (e.g., the aforementioned “wellbehaved data augmentation”), this work also gives a direct uniform convergence analysis, culminating in Theorem 1.4. This is presented partially as an open problem or cautionary tale, as\nits proof is vastly more sophisticated than that of Theorem 1.3, but ultimately results in a much looser analysis. This analysis is sketched in Section 3, with full proofs deferred to appendices.\n• While this work is primarily theoretical, it is motivated by Figure 1 and related experiments: Figures 2 to 4 demonstrate that not only does distillation improve generalization upper bounds, but moreover it makes them sufficiently tight to capture intrinsic properties of the predictors, for example removing the usual bad dependence on width in these bounds (cf. Figure 3). These experiments are detailed in Section 2." 
}, { "heading": "1.1 AN ABSTRACT BOUND VIA DATA AUGMENTATION", "text": "This subsection describes the basic distillation setup and the core abstract bound based on data augmentation, culminating in Lemmas 1.1 and 1.2; a concrete bound follows in Section 1.2.\nGiven a multi-class predictor f : Rd → Rk, distillation finds another predictor g : Rd → Rk which is simpler, but close in distillation distance Φγ,m, meaning the softmax outputs φγ are close on average over a set of points (zi)mi=1:\nΦγ,m(f, g) := 1\nm m∑ i=1 ∥∥φγ(f(zi))− φγ(g(zi))∥∥1 , where φγ(f(z)) ∝ exp (f(z)/γ) . (1.1) The quantity γ > 0 is sometimes called a temperature (Hinton et al., 2015). Decreasing γ increases sensitivity near the decision boundary; in this way, it is naturally related to the concept of margins in generalization theory, as detailed in Appendix B. due to these connections, the use of softmax is beneficial in this work, though not completely standard in the literature (Buciluŭ et al., 2006).\nWe can now outline Figure 1 and the associated empirical phenomenon which motivates this work. (Please see Section 2 for further details on these experiments.) Consider a predictor f which has good test error but bad generalization bounds; by treating the distillation distance Φγ,m(f, g) as an objective function and increasingly regularizing g, we obtain a sequence of predictors (g0, . . . , gt), where g0 = f , which trade off between distillation distance and predictor complexity. The curves in Figure 1 are produced in exactly this way, and demonstrate that there are predictors nearly identical to the original f which have vastly smaller generalization bounds.\nOur goal here is to show that this is enough to imply that f in turn must also have good generalization bounds, despite its apparent complexity. To sketch the idea, by a bit of algebra (cf. 
Lemma A.2), we can upper bound error probabilities with expected distillation distances and errors:\nPrx,y[arg max y′\nf(x)y′ 6= y] ≤ 2Ex ∥∥φγ(f(x))− φγ(g(x))∥∥1 + 2Ex,y (1− φγ(g(x))y) .\nThe next step is to convert these expected errors into quantities over the training set. The last term is already in a form we want: it depends only on g, so we can apply uniform convergence with the low complexity of g. (Measured over the training set, this term is the distillation error in Figure 1.)\nThe expected distillation distance term is problematic, however. Here are two approaches.\n1. We can directly apply uniform convergence; for instance, this approach was followed by Suzuki et al. (2019), and a more direct approach is followed here to prove Theorem 1.4. Unfortunately, it is unclear how this technique can avoid paying significantly for the high complexity of f .\n2. The idea in this subsection is to somehow trade off computation for the high statistical cost of the complexity of f . Specifically, notice that Φγ,m(f, g) only relies upon the marginal distribution of the inputs x, and not their labels. This subsection will pay computation to estimate Φγ,m with extra samples via data augmentation, offsetting the high complexity of f .\nWe can now set up and state our main distillation bound. Suppose we have a training set ((xi, yi))ni=1 drawn from some measure µ, with marginal distribution µX on the inputs x. Suppose we also have (zi) m i=1 drawn from a data augmentation distribution νn, the subscript referring to the fact that it depends on (xi)ni=1. Our analysis works when ‖dµX/dνn‖∞, the ratio between the two densities, is finite. If it is large, then one can tighten the bound by sampling more from νn, which is a computational burden; explicit bounds on this term will be given shortly in Lemma 1.2. Lemma 1.1. Let temperature parameter γ > 0 be given, along with sets of multiclass predictors F and G. 
Then with probability at least 1 − 2δ over an iid draw of data ((xi, yi))ni=1 from µ and (zi) m i=1 from νn, every f ∈ F and g ∈ G satisfy\nPr[arg max y′ f(x)y′ 6= y] ≤ 2 ∥∥∥∥dµXdνn ∥∥∥∥ ∞ Φγ,m(f, g) + 2 n n∑ i=1 ( 1− φγ(g(xi))yi ) + Õ ( k3/2\nγ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ ( Radm(F) + Radm(G) ) + √ k γ Radn(G) ) + 6 √ ln(1/δ)\n2n\n( 1 + ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m ) ,\nwhere Rademacher complexities Radn and Radm are defined in Section 1.4.\nA key point is that the Rademacher complexity Radm(F) of the complicated functions F has a subscript “m”, which explicitly introduces a factor 1/m in the complexity definition (cf. Section 1.4). As such, sampling more from the data augmentation measure can mitigate this term, and leave the complexity of the distillation class G as the dominant term. Of course, this also requires ‖dµX/dνn‖∞ to be reasonable. As follows is one data augmentation scheme (and assumption on marginal distribution µX ) which ensures this. Lemma 1.2. Let (xi)ni=1 be a data sample drawn iid from µX , and suppose the corresponding density p is supported on [0, 1]d and is Hölder continuous, meaning |p(x)− p(x′)| ≤ Cα‖x− x′‖α for some Cα ≥ 0, α ∈ [0, 1]. Define a data augmentation measure νn via the following sampling procedure.\n• With probability 1/2, sample z uniformly within [0, 1]d.\n• Otherwise, select a data index i ∈ [n] uniformly, and sample z from a Gaussian centered at xi, and having covariance σ2I where σ := n−1/(2α+d).\nThen with probability at least 1− 1/n over the draw of (xi)ni=1,∥∥∥∥dµXdνn ∥∥∥∥ ∞ = 4 +O ( √ lnn nα/(2α+d) ) .\nThough the idea is not pursued here, there are other ways to control ‖dµX/dνn‖∞, for instance via an independent sample of unlabeled data; Lemma 1.1 is agnostic to these choices." 
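The sampling procedure of Lemma 1.2 is simple to implement. The following is a minimal numpy sketch (the function name and interface are illustrative, not taken from any code accompanying the paper):

```python
import numpy as np

def sample_augmentation(X, m, alpha=1.0, rng=None):
    """Draw m points from the augmentation measure nu_n of Lemma 1.2: with
    probability 1/2 a uniform point on [0,1]^d, otherwise a Gaussian centered at
    a uniformly chosen training point, with bandwidth sigma = n^(-1/(2*alpha+d))."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    sigma = n ** (-1.0 / (2 * alpha + d))
    Z = np.empty((m, d))
    for j in range(m):
        if rng.random() < 0.5:
            Z[j] = rng.uniform(size=d)                   # covers low-density regions
        else:
            i = rng.integers(n)                          # kernel-density component
            Z[j] = X[i] + sigma * rng.standard_normal(d)
    return Z, sigma
```

The uniform component keeps the density ratio bounded even where the kernel component is a poor estimate of µX; sampling a larger m costs only computation, and shrinks the Radm(F) term in Lemma 1.1.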
}, { "heading": "1.2 A CONCRETE BOUND FOR COMPUTATION GRAPHS", "text": "This subsection gives an explicit complexity bound which starts from Lemma 1.1, but bounds ‖dµX/dνn‖∞ via Lemma 1.2, and also includes an upper bound on Rademacher complexity which can handle the ResNet, as in Figure 1. A side contribution of this work is the formalism to easily handle these architectures, detailed as follows.\nCanonical computation graphs are a way to write down feedforward networks which include dense linear layers, convolutional layers, skip connections, and multivariate gates, to name a few, all while allowing the analysis to look roughly like a regular dense network. The construction applies directly to batches: given an input batch X ∈ Rn×d, the output Xi of layer i is defined inductively as\nXT0 := X T, XTi := σi ( [WiΠiDi|〉Fi]XTi−1 ) = σi\n([ WiΠiDiX T i−1\nFiX T i−1\n]) ,\nwhere: σi is a multivariate-to-multivariate ρi-Lipschitz function (measured over minibatches on either side with Frobenius norm); Fi is a fixed matrix, for instance an identity mapping as in a residual network’s skip connection; Di is a fixed diagonal matrix selecting certain coordinates, for instance the non-skip part in a residual network; Πi is a Frobenius norm projection of a full minibatch;Wi is a weight matrix, the trainable parameters; [WiΠiDi|〉Fi] denotes row-wise concatenation of WiΠiDi and Fi.\nAs a simple example of this architecture, a multi-layer skip connection can be modeled by including identity mappings in all relevant fixed matrices Fi, and also including identity mappings in the corresponding coordinates of the multivariate gates σi. 
As a second example, note how to model convolution layers: each layer outputs a matrix whose rows correspond to examples, but nothing prevents the batch size from changing between layers; in particular, the multivariate activation before a convolution layer can reshape its output to have each row correspond to a patch of an input image, whereby the convolution filter is now a regular dense weight matrix.
A fixed computation graph architecture G(~ρ,~b, ~r, ~s) has associated hyperparameters (~ρ,~b, ~r, ~s), described as follows. ~ρ is the set of Lipschitz constants for each (multivariate) gate, as described before. ri is a norm bound ‖W Ti ‖2,1 ≤ ri (sum of the ‖ · ‖2-norms of the rows), bi √ n (where n is the input batch size) is the radius of the Frobenius norm ball which Πi is projecting onto, and si is the operator norm of X 7→ [WiΠiDiXT|〉FiXT]. While the definition is intricate, it can not only model basic residual networks, but it is sensitive enough to be able to have si = 1 and ri = 0 when residual blocks are fully zeroed out, an effect which indeed occurs during distillation.
Theorem 1.3. Let temperature parameter γ > 0 be given, along with multiclass predictors F , and a computation graph architecture G. Then with probability at least 1− 2δ over an iid draw of data ((xi, yi)) n i=1 from µ and (zi) n i=1 from νn, every f ∈ F satisfies
Pr[arg max y′ f(x)y′ 6= y] ≤ inf (~b,~r,~s)≥1
g∈G(~ρ,~b,~r,~s)
2 [∥∥∥∥dµXdνn ∥∥∥∥ ∞ Φγ,m(f, g) + 2 n n∑ i=1 (1− φγ(g(xi))yi )
+ Õ ( k3/2
γ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ Radm(F) ) + 6 √ ln(1/δ) 2n ( 1 + ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m )
+ Õ
( √ k
γ √ n
( 1 + k ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m )(∑ i [ ribiρi L∏ l=i+1 slρl ]2/3)3/2)] .
Under the conditions of Lemma 1.2, ignoring an additional failure probability 1/n, then ‖dµXdνn ‖∞ = 4 +O ( √ lnn
nα/(2α+d)
) .
A proof sketch of this bound appears in Section 3, with full details deferred to appendices.
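The layer-coupled complexity term in the last line of Theorem 1.3 is mechanical to evaluate from the architecture hyperparameters; a small sketch (the helper name is hypothetical):

```python
import math

def graph_complexity(r, b, rho, s):
    """Evaluate ( sum_i [ r_i b_i rho_i * prod_{l>i} s_l rho_l ]^{2/3} )^{3/2},
    the layer-coupled term in Theorem 1.3 (and in Lemma 3.1)."""
    L = len(r)
    total = 0.0
    for i in range(L):
        tail = math.prod(s[l] * rho[l] for l in range(i + 1, L))
        total += (r[i] * b[i] * rho[i] * tail) ** (2.0 / 3.0)
    return total ** 1.5
```

Note that a residual block zeroed out during distillation has r_i = 0 and s_i = 1, so it contributes nothing to the sum and does not inflate the downstream products.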
The proof is a simplification of the covering number argument from (Bartlett et al., 2017a); for another computation graph formalism designed to work with the covering number arguments from (Bartlett et al., 2017a), see the generalization bounds due to Wei & Ma (2019)." }, { "heading": "1.3 A UNIFORM-CONVERGENCE APPROACH TO DISTILLATION", "text": "In this section, we derive a Rademacher complexity bound on F whose proof internally uses compression; specifically, it first replaces f with a narrower network g, and then uses a covering number bound sensitive to network size to control g. The proof analytically chooses g’s width based on the structure of f and also the provided data, and this data dependence incurs a factor which causes the familiar 1/√n rate to worsen to 1/n1/4 (which appears as ‖X‖F/n3/4). This proof is much more intricate than the proofs coming before, and cannot handle general computation graphs, and also ignores the beneficial structure of the softmax. Theorem 1.4. Let data matrix X ∈ Rn×d be given, and let F denote networks of the form x 7→ σL(WL · · ·σ1(W1x)) with spectral norm ‖Wi‖2 ≤ si, and 1-Lipschitz and 1-homogeneous activations σi, and ‖Wi‖F ≤ Ri and width at most m. Then\nRad(F) = Õ ( ‖X‖F n3/4 [∏ j sj ] [∑ i (Ri/si) 4/5 ]5/4 [∑ i lnRi ]1/4) .\nThe term Ri/si is the square root of the stable rank of weight matrixWi, and is a desirable quantity in a generalization bound: it scales more mildly with width than terms like ‖W Ti ‖2,1 and ‖W Ti ‖F √ width which often appear (the former appears in Theorem 1.3 and Lemma 3.1). Another stable rank bound was developed by Suzuki et al. (2019), but has an extra mild dependence on width.\nAs depicted in Figure 2, however, this bound is not fully width-independent. Moreover, we can compare it to Lemma 3.1 throughout distillation, and not only does this bound not capture the power of distillation, but also, eventually its bad dependence on n causes it to lose out to Lemma 3.1." 
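To make the scaling claim concrete, the square-root stable rank ‖Wi‖F/‖Wi‖2 of Theorem 1.4 can be compared directly against the width-sensitive (2,1)-norm of Theorem 1.3; a small numpy sketch (helper names are illustrative):

```python
import numpy as np

def sqrt_stable_rank(W):
    """R/s from Theorem 1.4: Frobenius norm over spectral norm."""
    return np.linalg.norm(W) / np.linalg.norm(W, 2)

def norm_21(W):
    """||W^T||_{2,1}: sum of the l2-norms of the rows of W, as in Theorem 1.3."""
    return np.linalg.norm(W, axis=1).sum()

# For an identity layer of width m, the (2,1)-norm grows linearly in m,
# while the square-root stable rank grows only like sqrt(m).
for m in (64, 256, 1024):
    W = np.eye(m)
    print(m, norm_21(W), sqrt_stable_rank(W))
```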
}, { "heading": "1.4 ADDITIONAL NOTATION", "text": "Given data (zi)ni=1, the Rademacher complexity of univariate functionsH is\nRad(H) := E~ sup h∈H\n1\nn ∑ i ih(zi), where i i.i.d.∼ Uniform({−1,+1}).\nRademacher complexity is the most common tool in generalization theory (Shalev-Shwartz & BenDavid, 2014), and is incorporated in Lemma 1.1 due to its convenience and wide use. To handle multivariate (multiclass) outputs, the definition is overloaded via the worst case labels as Radn(F) = sup~y∈[k]n Rad({(x, y) 7→ f(x)y : f ∈ F}). This definition is for mathematical convenience, but overall not ideal; Rademacher complexity seems to have difficulty dealing with such geometries.\nRegarding norms, ‖ · ‖ = ‖ · ‖F will denote the Frobenius norm, and ‖ · ‖2 will denote spectral norm." }, { "heading": "2 ILLUSTRATIVE EMPIRICAL RESULTS", "text": "This section describes the experimental setup, and the main experiments: Figure 1 showing progressive distillation, Figure 2 comparing Theorem 1.4, Lemma 3.1 and VC dimension, Figure 3 showing width independence after distillation, and Figure 4 showing the effect of random labels.\nExperimental setup. As sketched before, networks were trained in a standard way on either cifar10 or mnist, and then distilled by trading off between complexity and distillation distance Φγ,m. Details are as follows.\n1. Training initial network f . In Figures 1 and 2a, the architecture was a ResNet8 based on one used in (Coleman et al., 2017), and achieved test errors 0.067 and 0.008 on cifar10 and mnist, respectively, with no changes to the setup and a modest amount of training; the training algorithm was Adam; this and most other choices followed the scheme in (Coleman et al., 2017) to achieve a competitively low test error on cifar10. In Figures 2b, 3 and 4, a 6-layer fully connected network was used (width 8192 in Figure 2b, widths {64, 256, 1024} in Figure 3, width 256 in Figure 4), and vanilla SGD was used to optimize.\n2. 
Training distillation network g. Given f and a regularization strength λj , each distillation gj was found via approximate minimization of the objective\ng 7→ Φγ,m(f, g) + λjComplexity(g). (2.1)\nIn more detail, first g0 was initialized to f (g and f always used the same architecture) and optimized via eq. (2.1) with λ0 set to roughly risk(f)/Complexity(f), and thereafter gj+1 was initialized to gj and found by optimizing eq. (2.1) with λj+1 := 2λj . The optimization method was the same as the one used to find f . The term Complexity(g) was some computationally reasonable approximation of Lemma 3.1: for Figures 2b, 3 and 4, it was just ∑ i ‖W Ti ‖2,1, but for Figures 1 and 2a, it also included a tractable surrogate for the product of the spectral norms, which greatly helped distillation performance with these deeper architectures. In Figures 2b, 3 and 4, a full regularization sequence was not shown, only a single gj . This was chosen with a simple heuristic: amongst all (gj)j≥1, pick the one whose 10% margin quantile is largest (see the definition and discussion of margins below).\nMargin histograms. Figures 2b, 3 and 4 all depict margin histograms, a flexible tool to study the individual predictions of a network on all examples in a training set (see for instance (Schapire & Freund, 2012) for their use studying boosting, and (Bartlett et al., 2017a; Jiang et al., 2019a) for\ntheir use in studying deep networks). Concretely, given a predictor g ∈ G, the prediction on every example is replaced with a real scalar called the normalized margin via\n(xi, yi) 7→ g(xi)yi −maxj 6=yi g(xi)j\nRadn(G) ,\nwhere Radn(G) is the Rademacher complexity (cf. Section 1.4), and then the histogram of these n scalars is plotted, with the horizontal axis values thus corresponding to normalized margins. 
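The normalized-margin computation above is straightforward to express; as follows is a hedged numpy sketch (the Rademacher normalizer is passed in as a precomputed positive scalar, and the names are illustrative):

```python
import numpy as np

def normalized_margins(logits, labels, rad):
    """Per-example margin g(x_i)_{y_i} - max_{j != y_i} g(x_i)_j, divided by a
    Rademacher-complexity normalizer `rad`; histogram these n scalars to plot."""
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf   # exclude the true class
    runner_up = masked.max(axis=1)
    return (correct - runner_up) / rad
```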
By using Rademacher complexity as normalization, these margin distributions can be compared across predictors and even data sets, and give a more fine-grained analysis of the quality of the generalization bound. This normalization choice was first studied in (Bartlett et al., 2017a), where it was also mentioned that this normalization allows one to read off generalization bounds from the plot. Here, it also suggests reasonable values for the softmax temperature γ.
Figure 1: effect of distillation on generalization bounds. This figure was described before; briefly, a highlight is that in the initial phase, training and testing errors hardly change while bounds drop by a factor of nearly 10^{10}. Regarding “generalization measure”, this term appears in studies of quantities which correlate with generalization, but are not necessarily rigorous generalization bounds (Jiang et al., 2019b; Dziugaite et al., 2020); in this specific case, the product of Frobenius norms requires a dense ReLU network (Golowich et al., 2018), and is invalid for the ResNet (e.g., a complicated ResNet with a single identity residual block yields a value 0 by this measure).
Figure 2a: comparison of Theorem 1.4, Lemma 3.1 and VC bounds. Theorem 1.4 was intended to internalize distillation, but as in Figure 2a, clearly a subsequent distillation still greatly reduces the bound. While initially the bound is better than Lemma 3.1 (which does not internalize distillation), eventually the n^{1/4} factor causes it to lose out. Also note that eventually the bounds beat the VC bound, which has been identified as a surprisingly challenging baseline (Arora et al., 2018).
Figure 3: width independence. Prior work has identified that generalization bounds are quite bad at handling changes in width, even if predictions and test error don’t change much (Nagarajan & Kolter, 2019; Jiang et al., 2019b; Dziugaite et al., 2020).
This is captured in Figure 3a, where the margin distributions (see above) with different widths are all very different, despite similar test errors. However, following distillation, the margin histograms in Figure 3b are nearly identical! That is to say: distillation not only decreases loose upper bounds as before, it tightens them to the point where they capture intrinsic properties of the predictors.\nFigure 2b: failure of width independence with Theorem 1.4. The bound in Theorem 1.4 was designed to internalize compression, and there was some hope of this due to the stable rank term.\nUnfortunately, Figure 2b shows that it doesn’t quite succeed: while the margin histograms are less separated than for the undistilled networks in Figure 3a, they are still visibly separated unlike the post-distillation histograms in Figure 3b.\nFigure 4: random labels. A standard sanity check for generalization bounds is whether they can reflect the difficulty of fitting random labels (Zhang et al., 2016). While it has been empirically shown that Rademacher bounds do sharply reflect the presence of random labels (Bartlett et al., 2017a, Figures 2 & 3), the effect is amplified with distillation: even randomizing just 25% shrinks the margin distribution significantly." }, { "heading": "3 ANALYSIS OVERVIEW AND SKETCH OF PROOFS", "text": "This section sketches all proofs, and provides further context and connections to the literature. Full proof details appear in the appendices." 
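For concreteness, the two quantities manipulated throughout the proofs, the tempered softmax φγ and the distillation distance Φγ,m of eq. (1.1), can be written as follows, together with a numeric check of the surrogate inequality 2(1 − φγ(v)y) ≥ 1[y is misclassified] from Lemma A.2 (a sketch; function names are illustrative):

```python
import numpy as np

def softmax(v, gamma):
    """phi_gamma(v): softmax of v / gamma, computed stably."""
    z = np.exp((v - v.max(axis=-1, keepdims=True)) / gamma)
    return z / z.sum(axis=-1, keepdims=True)

def distillation_distance(F_logits, G_logits, gamma):
    """Phi_{gamma,m}(f, g) from eq. (1.1): mean l1 gap between the softmax
    outputs of f and g over m augmentation points (rows of the logit arrays)."""
    gap = np.abs(softmax(F_logits, gamma) - softmax(G_logits, gamma))
    return gap.sum(axis=1).mean()

# Lemma A.2 surrogate: 2 * (1 - phi_gamma(v)_y) upper bounds the 0/1 error.
v, y = np.array([1.0, 3.0, 0.5]), 0
assert 2 * (1 - softmax(v, 1.0)[y]) >= float(y != int(v.argmax()))
```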
}, { "heading": "3.1 ABSTRACT DATA AUGMENTATION BOUNDS IN SECTION 1.1", "text": "As mentioned in Section 1.1, the first step of the proof is to apply Lemma A.2 to obtain Prx,y[arg max\ny′ f(x)y′ 6= y] ≤ 2Ex ∥∥φγ(f(x))− φγ(g(x))∥∥1 + 2Ex,y (1− φγ(g(x))y) ; this step is similar to how the ramp loss is used with margin-based generalization bounds, a connection which is discussed in Appendix B.\nSection 1.1 also mentioned that the last term is easy: φγ is (1/γ)-Lipschitz, and we can peel it off and only pay the Rademacher complexity associated with g ∈ G. With data augmentation, the first term is also easy:\nEΦγ,m(f, g) = ∫ ‖φγ(f(z))− φγ(g(z))‖1 dµX (z) = ∫ ‖φγ(f(z))− φγ(g(z))‖1\ndµX dνn dνn(z)\n≤ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ ∫ ‖φγ(f(z))− φγ(g(z))‖1 dνn(z),\nand now we may apply uniform convergence to νn rather than µX . In the appendix, this proof is handled with a bit more generality, allowing arbitrary norms, which may help in certain settings. All together, this leads to a proof of Lemma 1.1.\nFor the explicit data augmentation estimate in Lemma 1.2, the proof breaks into roughly two cases: low density regions where the uniform sampling gives the bound, and high density regions where the Gaussian sampling gives the bound. In the latter case, the Gaussian sampling in expectation behaves as a kernel density estimate, and the proof invokes a standard bound (Jiang, 2017)." }, { "heading": "3.2 CONCRETE DATA AUGMENTATION BOUNDS IN SECTION 1.2", "text": "The main work in this proof is the following generalization bound for computation graphs, which follows the proof scheme from (Bartlett et al., 2017a), though simplified in various ways, owing mainly to the omission of general matrix norm penalties on weight matrices, and the omission of the reference matrices. The reference matrices were a technique to center the weight norm balls away from the origin; a logical place to center them was at initialization. 
However, in this distillation setting, it is in fact most natural to center everything at the origin, and apply regularization and shrink to a well-behaved function (rather than shrinking back to the random initialization, which after all defines a complicated function). The proof also features a simplified (2, 1)-norm matrix covering proof (cf. Lemma C.3). Lemma 3.1. Let data X ∈ Rn×d be given. Let computation graph G be given, where Πi projects to Frobenius-norm balls of radius bi √ n, and ‖W Ti ‖2,1 ≤ ri, and ‖[WiΠiDi|〉Fi]‖2 ≤ si, and Lip(σi) ≤ ρi, and all layers have width at most m. Then for every > 0 there exists a covering set M satisfying\nsup g∈G min X̂∈M\n∥∥∥g(XT)− X̂∥∥∥ ≤ and ln |M| ≤ 24/3n ln(2m2) 2 [∑ i ( ribiρi L∏ l=i+1 slρl )2/3]3 .\nConsequently,\nRad(G) ≤ 4 n + 12\n√ ln(2m2)\nn\n[∑ i ( ribiρi L∏ l=i+1 slρl )2/3]3/2 .\nFrom there, the proof of Theorem 1.3 follows via Lemmas 1.1 and 1.2, and many union bounds." }, { "heading": "3.3 DIRECT UNIFORM CONVERGENCE APPROACH IN THEOREM 1.4", "text": "As mentioned before, the first step of the proof is to sparsify the network, specifically each matrix product. Concretely, given weights Wi of layer i, letting XTi−1 denote the input to this layer, then\nWiX T i−1 = m∑ j=1 (Wiej)(Xi−1ej) T.\nWritten this way, it seems natural that the matrix product should “concentrate”, and that considering all m outer products should not be necessary. Indeed, exactly such an approach has been followed before to analyze randomized matrix multiplication schemes (Sarlos, 2006). As there is no goal of high probability here, the analysis is simpler, and follows from the Maurey lemma (cf. Lemma C.1), as is used in the (2, 1)-norm matrix covering bound in Lemma C.3.\nLemma 3.2. Let a network be given with 1-Lipschitz homogeneous activations σi and weight matrices (W1, . . . ,WL) of maximum width m, along with data matrix X ∈ Rn×d and desired widths (k1, . . . , kL) be given. 
Then there exists a sparsified network output, recursively defined via\nX̂T0 := X T, and X̂Ti := Πiσi(WiMiX T i−1), where Mi := ∑ j∈Si Zjeje T j ‖Aej‖ ,\nwhere Si is a multiset of ki = |Si| indices, Πi denotes projection onto the Frobenius-norm ball of radius ‖X‖F ∏ j≤i ‖Wj‖2, and the scaling term Zj satisfies Zj ≤ ‖Wk‖F √ m/kj , and\n‖σL(WL · · ·σ1(W1XT) · · · )− X̂TL‖F ≤ ‖X‖F L∏ i=1 ‖Wi‖2 L∑ i=1 √ ‖Wi‖2F ki‖Wi‖22 ,\nThe statement of this lemma is lengthy and detailed because the exact guts of the construction are needed in the subsequent generalization proof. Specifically, now that there are few nodes, a generalization bound sensitive to narrow networks can be applied. On the surface, it seems reasonable to apply a VC bound, but this approach did not yield a rate better than n−1/6, and also had an explicit dependence on the depth of the network, times other terms visible in Theorem 1.4.\nInstead, the approach here, aiming for a better dependence on n and also no explicit dependence on network depth, was to produce an∞-norm covering number bound (see (Long & Sedghi, 2019) for a related approach), with some minor adjustments (indeed, the∞-norm parameter covering approach was applied to obtain a Frobenius-norm bound, as in Lemma 3.1). Unfortunately, the magnitudes of weight matrix entries must be controlled for this to work (unlike the VC approach), and this necessitated the detailed form of Lemma 3.2 above.\nTo close with a few pointers to the literature, as Lemma 3.2 is essentially a pruning bound, it is potentially of independent interest; see for instance the literature on lottery tickets and pruning (Frankle & Carbin, 2019; Frankle et al., 2020; Su et al., 2020). Secondly, there is already one generalization bound in the literature which exhibits spectral norms, due to (Suzuki et al., 2019); unfortunately, it also has an explicit dependence on network width." 
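The column-sampling step behind Lemma 3.2 (and Lemma C.2) can be demonstrated directly: sample k column outer products with probability proportional to ‖Aej‖2, importance-weight, and average; in expectation the squared error is at most ‖A‖2F‖B‖2F/k, and Lemma C.2 guarantees the existence of a selection achieving this rate. A minimal numpy sketch (not the paper's code):

```python
import numpy as np

def sparsified_product(A, B, k, rng=None):
    """Approximate A @ B.T by sampling k column outer products a_j b_j^T with
    probability p_j proportional to ||A e_j||^2, as in Lemma C.2 (Maurey sampling)."""
    rng = np.random.default_rng(rng)
    col_sq = (A ** 2).sum(axis=0)
    p = col_sq / col_sq.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    # Importance-weighted average of the sampled outer products a_j b_j^T / p_j.
    return sum(np.outer(A[:, j], B[:, j]) / p[j] for j in idx) / k

rng = np.random.default_rng(0)
A, B = rng.standard_normal((10, 50)), rng.standard_normal((8, 50))
k = 25
bound = np.linalg.norm(A) * np.linalg.norm(B) / np.sqrt(k)   # Lemma C.2 rate
err = np.linalg.norm(A @ B.T - sparsified_product(A, B, k, rng))
```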
}, { "heading": "ACKNOWLEDGMENTS", "text": "MT thanks Vaishnavh Nagarajan for helpful discussions and suggestions. ZJ and MT are grateful for support from the NSF under grant IIS-1750051, and from NVIDIA under a GPU grant." }, { "heading": "A PROOFS FOR SECTION 1.1", "text": "The first step is an abstract version of Lemma 1.1 which does not explicitly involve the softmax, just bounded functions.\nLemma A.1. Let classes of bounded functions F and G be given with F 3 f : X → [0, 1]k and G 3 g : X → [0, 1]k. Let conjugate exponents 1/p + 1/q = 1 be given. Then with probability at least 1− 2δ over the draw of ((xi, yi))ni=1 from µ and (zi)mi=1 from νn, for every f ∈ F and g ∈ G,\nEf(x)y ≤ 1\nn n∑ i=1 g(xi)yi + 2Radn ({ (x, y) 7→ g(x)y : g ∈ G }) + 3 √ ln(1/δ) 2n\n+ ∥∥∥∥dµXdνn ∥∥∥∥ Lq(νn) ( 1 m m∑ i=1 ‖f(zi)− g(zi)‖pp + 3 √ ln(1/δ) 2m\n+ 2Radm ({ z 7→ min{1, ‖f(z)− g(z)‖pp} : f ∈ F , g ∈ G }))1/p where\nRadm ({ z 7→ min{1, ‖f(z)− g(z)‖pp} : f ∈ F , g ∈ G }) ≤ p\nk∑ y′=1 [ Radm({z 7→ f(z)y′ : f ∈ F}) + Radm({z 7→ g(z)y′ : g ∈ G}) ] .\nProof of Lemma A.1. 
To start, for any f ∈ F and g ∈ G, write\nEf(x)y = E(f(x)− g(x))y + Eg(x)y.\nThe last term is easiest, and let’s handle it first: by standard Rademacher complexity arguments (Shalev-Shwartz & Ben-David, 2014), with probability at least 1− δ, every g ∈ G satisfies\nEg(x)y ≤ 1\nn n∑ i=1 g(xi)yi + 2Radn({(x, y) 7→ g(x)y : g ∈ G}) + 3 √ ln(1/δ) 2n .\nFor the first term, since f : X → [0, 1]k and g : X → [0, 1]k, by Hölder’s inequality E(f(x)− g(x))y = ∫ min{1, (f(x)− g(x))y}dµ(x, y)\n≤ ∫ min{1, ‖f(x)− g(x)‖p} dµ(x, y)\n= ∫ min{1, ‖f(x)− g(x)‖p}\ndµX dνn (x) dνn(x)\n≤ ∥∥∥min{1, ‖f − g‖p}∥∥∥\nLp(νn) ∥∥∥∥dµXdνn ∥∥∥∥ Lq(νn) .\nOnce again invoking standard Rademacher complexity arguments (Shalev-Shwartz & Ben-David, 2014), with probability at least 1 − δ, every mapping z 7→ min{1, ‖f(z) − g(z)‖pp} where f ∈ F and g ∈ G satisfies∫\nmin{1, ‖f(z)− g(z)‖pp} dνn(z) ≤ 1\nm m∑ i=1 min{1, ‖f(zi)− g(zi)‖pp}+ 3 √ ln(1/δ) 2m\n+ 2Radm ({ z 7→ min{1, ‖f(z)− g(z)‖pp} : f ∈ F , g ∈ G }) .\nCombining these bounds and unioning the two failure events gives the first bound.\nFor the final Rademacher complexity estimate, first note r 7→ min{1, r} is 1-Lipschitz and can be peeled off, thus\nmRadm ({ z 7→ min{1, ‖f(z)− g(z)‖pp} : f ∈ F , g ∈ G }) ≤ mRadm ({ z 7→ ‖f(z)− g(z)‖pp : f ∈ F , g ∈ G\n}) = E sup\nf∈F g∈G\nm∑ i=1 i‖f(zi)− g(zi)‖pp\n≤ k∑\ny′=1 E sup f∈F g∈G m∑ i=1 i|f(zi)− g(zi)|py′\n= k∑ y′=1 mRadm ({ z 7→ |f(z)− g(z)|py′ : f ∈ F , g ∈ G }) .\nSince f and g have range [0, 1]k, then (f − g)y′ has range [−1, 1] for every y′, and since r 7→ |r|p is p-Lipschitz over [−1, 1] (for any p ∈ [1,∞), combining this with the Lipschitz composition rule for Rademacher complexity and also the fact that a Rademacher random vector ∈ {±1}m is distributionally equivalent to its coordinate-wise negation − , then, for every y′ ∈ [k],\nRadm({z 7→ |f(z)− g(z)|py′ : f ∈ F , g ∈ G}) ≤ pRadm({z 7→ (f(z)− g(z))y′ : f ∈ F , g ∈ G})\n= p\nm E sup f∈F sup g∈G m∑ i=1 i(f(zi)− g(zi))y′\n= p\nm E sup\nf∈F m∑ i=1 
if(zi)y′ + p m E sup g∈G m∑ i=1 − ig(zi)y′\n= pRadm({z 7→ f(z)y′ : f ∈ F}) + pRadm({z 7→ g(z)y′ : g ∈ G}).\nTo prove Lemma 1.1, it still remains to collect a few convenient properties of the softmax. Lemma A.2. For any v ∈ Rk and y ∈ {1, . . . , k},\n2(1− φγ(v))y ≥ 1[y 6= arg max i vi].\nMoreover, for any functions F with F 3 f : X → Rk,\nRadn ({ (x, y) 7→ φγ(f(x))y : f ∈ F }) = Õ\n(√ k\nγ Radn(F)\n) .\nProof. For the first property, let v ∈ Rk be given, and consider two cases. If y = arg maxi vi, then φγ(v) ∈ [0, 1]k implies 2 ( 1− φγ(v) ) y ≥ 0 = 1[y 6= arg max\ni vi].\nOn the other hand, if y 6= arg maxi vi, then φγ(v)y ≤ 1/2, and 2 ( 1− φγ(v) ) y ≥ 1 = 1[y 6= arg max\ni vi].\nThe second part follows from a multivariate Lipschitz composition lemma for Rademacher complexity due to (Foster & Rakhlin, 2019, Theorem 1); all that remains to prove is that v 7→ φγ(v)y is (1/γ)-Lipschitz with respect to the `∞ norm for any v ∈ Rk and y ∈ [k]. To this end, note that\nd\ndvy φγ(v)y =\nexp(v/γ)y ∑ j 6=y exp(v/γ)j\nγ( ∑ j exp(v/γ)j) 2 ,\nd\ndvi6=y φγ(v)y = − exp(v/γ)y exp(v/γ)i γ( ∑ j exp(v/γ)j) 2 ,\nand therefore ∥∥∇φγ(v)y∥∥1 = 2 exp(v/γ)y ∑ j 6=y exp(v/γ)j\nγ( ∑ j exp(v/γ)j) 2 ≤ 1 γ ,\nand thus, by the mean value theorem, for any u ∈ Rk and v ∈ Rk, there exists z ∈ [u, v] such that∣∣φγ(v)y − φγ(u)y∣∣ = ∣∣∣〈∇φγ(z)y, v − u〉∣∣∣ ≤ ‖v − u‖∞ ·∥∥∇φγ(v)y∥∥1 ≤ 1γ ‖v − u‖∞, and in particular v 7→ φγ(v)/y is (1/γ)-Lipschitz with respect to the `∞ norm. Applying the aforementioned Lipschitz composition rule (Foster & Rakhlin, 2019, Theorem 1),\nRadn ({ (x, y) 7→ φγ(f(x))y : f ∈ F }) = Õ\n(√ k\nγ Radn(F)\n) .\nLemma 1.1 now follows by combining Lemmas A.1 and A.2.\nProof of Lemma 1.1. Define ψ := 1 − φγ . 
The bound follows by instantiating Lemma A.1 with p = 1 and the two function classes\nQF := {(x, y) 7→ ψ(f(x)y) : f ∈ F} and QG := {(x, y) 7→ ψ(g(x)y) : g ∈ G},\ncombining its simplified Rademacher upper bounds with the estimates for Radm(QF ) and Radm(QG) and Radn(QG) from Lemma A.2, and by using Lemma A.2 to lower bound the left hand side with\nEψ(f(x))y = E(1− φγ(f(x))y) ≥ 1 2 1 [ arg max y′ f(x)y′ 6= y ] ,\nand lastly noting that\n1\nm m∑ i=1 ‖ψ(f(zi))− ψ(g(zi))‖1 = 1 m m∑ i=1 ‖1− φγ(f(zi))− 1 + φγ(g(zi))‖1 = Φγ,m(f, g).\nTo complete the proofs for Section 1.1, it remains to handle the data augmentation error, namely the term ‖dµX/dνn‖∞. This proof uses the following result about Gaussian kernel density estimation. Lemma A.3 (See (Jiang, 2017, Theorem 2 and Remark 8)). Suppose density p is α-Hölder continuous, meaning |p(x) − p(x′)| ≤ Cα‖x − x′‖α for some Cα ≥ 0 and α ∈ [0, 1]. There there exists a constant C ≥ 0, depending on α, Cα, maxx∈Rd p(x), and the dimension, but independent of the sample size, so that with probability at least 1 − 1/n, the Gaussian kernel density estimate with bandwidth σ2I where σ = n−1/(2α+d) satisfies\nsup x∈Rd\n|p(x)− pn(x)| ≤ C √ ln(n)\nn2α/(2α+d) .\nThe proof of Lemma 1.2 follows.\nProof of Lemma 1.2. 
The proposed data augmentation measure νn has a density pn,β over [0, 1]d, and it has the form pn,β(x) = β + (1− β)pn(x), where β = 1/2, and pn is the kernel density estimator as described in Lemma A.3, whereby\n|pn(x)− p(x)| ≤ n := O\n( √ lnn\nnα/(2α+d)\n) .\nThe proof proceeds to bound ‖dµX/dνn‖∞ = ‖p/pn,β‖∞ by considering three cases.\n• If x 6∈ [0, 1]d, then p(x) = 0 by the assumption on the support of µX , whereas pn,β(x) ≥ pn(x)/2 > 0, thus p(x)/pn,β(x) = 0.\n• If x ∈ [0, 1]d and p(x) ≥ 2 n, then pn,β(x) ≥ (1− β)p(x)− n) ≥ n/2, and p(x)\npn,β(x) = 1 + p(x)− pn,β(x) pn,β(x)\n≤ 1 + βp(x) pn,β(x) + (1− β)|p(x)− pn(x)| pn,β(x)\n≤ 1 + βp(x) (1− β)(p(x)− n) + (1− β) n n/2\n≤ 1 + β (1− β)(1− n/p(x)) + 1\n≤ 4.\n• If x ∈ [0, 1]d and p(x) < 2 n, since pn,β(x) ≥ β = 1/2, then p(x)\npn,β(x) < 2 n β = 4 n.\nCombining these cases, ‖dµX/dνn‖∞ = ‖p/pn,β‖∞ ≤ max{4, 4 n} ≤ 4 + 4 n." }, { "heading": "B REPLACING SOFTMAX WITH STANDARD MARGIN (RAMP) LOSS", "text": "The proof of Lemma 1.1 was mostly a reduction to Lemma A.1, which mainly needs bounded functions; for the Rademacher complexity estimates, the Lipschitz property of φγ was used. As such, the softmax can be replaced with the (1/γ)-Lipschitz ramp loss as is standard from margin-based generalization theory (e.g., in a multiclass version as appears in (Bartlett et al., 2017a)). Specifically, defineMγ : Rk → [0, 1]k for any coordinate j as\nMγ(v)j := `γ(vj − arg max y′ 6=j vy′), where `γ(z) := 1 z ≤ 0, 1− zγ z ∈ (0, γ), 0 z ≥ γ.\nWe now have 1[arg maxy′ f(x)y′ ] ≤ Mγ(f(x))y without a factor of 2 as in Lemma A.2, and can plug it into the general lemma in Lemma A.1 to obtain the following corollary. Corollary B.1. Let temperature (margin!) parameter γ > 0 be given, along with sets of multiclass predictors F and G. 
Then with probability at least 1−2δ over an iid draw of data ((xi, yi))ni=1 from µ and (zi)ni=1 from νn, every f ∈ F and g ∈ G satisfy\nPr[arg max y′ f(x)y′ 6= y] ≤ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ 1 m m∑ i=1 ‖Mγ(f)−Mγ(g)‖1 + 1 n n∑ i=1 Mγ(g(xi))yi\n+ Õ ( k3/2\nγ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ ( Radm(F) + Radm(G) ) + √ k γ Radn(G) ) + 3 √ ln(1/δ)\n2n\n( 1 + ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m ) .\nProof. Overload function composition notation to sets of functions, meaning Mγ ◦ F = { (x, y) 7→ Mγ(f(x))y : f ∈ F } .\nFirst note thatMγ is (2/γ)-Lipschitz with respect to the `∞ norm, and thus, applying the multivariate Lipschitz composition lemma for Rademacher complexity (Foster & Rakhlin, 2019, Theorem 1) just as in the proof for the softmax in Lemma A.2,\nRadm(Mγ ◦ F) = Õ\n( 2 √ k\nγ Radm(F)\n) ,\nwith similar bounds for Radm(Mγ ◦ G) and Radn(Mγ ◦ G). The desired statement now follows by combining these Rademacher complexity bounds with Lemma 1.1 applied toMγ ◦ F andMγ ◦ G, and additionally using 1[arg maxy′ f(x)y′ 6= y] ≤Mγ(f(x))y ." }, { "heading": "C SAMPLING TOOLS", "text": "The proofs of Lemma 3.1 and Lemma 3.2 both make heavy use of sampling. Lemma C.1 (Maurey (Pisier, 1980)). Suppose random variable V is almost surely supported on a subset S of some Hilbert space, and let (V1, . . . , Vk) be k iid copies of V . Then there exist (V̂1, . . . , V̂k) ∈ Sk with∥∥∥∥∥∥EV − 1k ∑ i V̂i ∥∥∥∥∥∥ 2\nF\n≤ E V1,...,Vk ∥∥∥∥∥∥EV − 1k ∑ i Vi ∥∥∥∥∥∥ 2\nF\n= 1\nk\n[ E‖V ‖2F − ‖EV ‖2F ] ≤ 1 k E‖V ‖2F ≤ 1 k sup V̂ ∈S ‖V̂ ‖2F .\nProof of Lemma C.1. The first inequality is via the probabilistic method. For the remaining inequalities, by expanding the square multiple times,\nE V1,...,Vk ∥∥∥∥∥∥EV − 1k ∑ i Vi ∥∥∥∥∥∥ 2\nF\n≤ E V1,...,Vk\n1\nk2 ∑ i ‖EV − Vi‖2F + ∑ i 6=j 〈 EV − Vi,EV − Vj 〉 = 1\nk EV1‖V1 − EV ‖ 2 F =\n1\nk\n[ E‖V ‖2F − ‖EV ‖2F ] ≤ 1 k E‖V ‖2F ≤ 1 k sup V̂ ∈S ‖V̂ ‖2F .\nA first key application of Lemma C.1 is to sparsify products, as used in Lemma 3.2. Lemma C.2. 
Let matrices A ∈ Rd×m and B ∈ Rn×m be given, along with sampling budget k. Then there exists a selection (i1, . . . , ik) of indices and a corresponding diagonal sampling matrix M with at most k nonzero entries satisfying\nM := ‖A‖2F k k∑ j=1 eije T ij ‖Aeij‖2 and\n∥∥ABT −AMBT∥∥2 ≤ 1 k ‖A‖2‖B‖2.\nProof of Lemma C.2. For convenience, define columns ai := Aei and bi := Bei for i ∈ {1, . . . ,m}. Define importance weighting βi := (‖ai‖/‖A‖F)2, whereby ∑ i βi = 1, and let V\nbe a random variable with Pr [ V = β−1i aib T i ] = βi,\nwhereby EV = m∑ i=1 β−1i aib T iβi = m∑ i=1 (Aei)(Bei) T = A [ m∑ i=1 eie T i ] BT = A [I]BT = AB,\nE‖V ‖2 = m∑ i=1 β−2i ‖aib T i‖2F βi = m∑ i=1 β−1i ‖ai‖ 2‖bi‖2 = m∑ i=1 ‖A‖2F ‖bi‖2 = ‖A‖2F · ‖B‖2F .\nBy Lemma C.1, there exist indices (i1, . . . , ik) and matrices V̂j := β−1ij aij b T ij with∥∥∥∥∥∥ABT − 1k ∑ j V̂j ∥∥∥∥∥∥ 2 ≤ ∥∥∥∥∥∥EV − 1k ∑ j V̂j ∥∥∥∥∥∥ 2 = 1 k [ ‖A‖2F ‖B‖2F − ‖AB‖2F ] ≤ 1 k ‖A‖2F ‖B‖2F .\nTo finish, by the definition of M ,\n1\nk ∑ j V̂j = 1 k ∑ j β−1ij (Aeij )(Beij ) T = A 1 k ∑ j β−1ij eije T ij BT = A [M ]BT.\nA second is to cover the set of matrices W satisfying a norm bound ‖W T‖2,1 ≤ r. The proof here is more succinct and explicit than the one in (Bartlett et al., 2017a, Lemma 3.2). Lemma C.3 (See also (Bartlett et al., 2017a, Lemma 3.2)). Let norm bound r ≥ 0, X ∈ Rn×d, and integer k be given. Define a family of matrices\nM := r‖X‖Fk k∑ l=1 sleile T jl ‖Xejl‖ : sl ∈ {±1}, il ∈ {1, . . . , n}, jl ∈ {1, . . . , d} . Then\n|M| ≤ (2nd)k, sup ‖WT‖2,1≤r min Ŵ∈M\n‖WXT − ŴXT‖2F ≤ r2‖X‖2F\nk .\nProof. Let W ∈ Rm×d be given with ‖W T‖2,1 ≤ r. Define sij := Wij/|Wij |, and note\nWXT = ∑ i,j eie T iWeje T jX T = ∑ i,j eiWij(Xej) T = ∑ i,j\n|Wij |‖Xej‖2 r‖X‖F︸ ︷︷ ︸ =:qij r‖X‖Fsijei(Xej)T ‖Xej‖︸ ︷︷ ︸ =:Uij .\nNote by Cauchy-Schwarz that∑ i,j qij ≤ 1 r‖X‖F ∑ i √∑ j W 2ij‖X‖F = ‖W T‖2,1‖X‖F r‖X‖F ≤ 1,\npotentially with strict inequality, thus q is not a probability vector. 
To remedy this, construct probability vector p from q by adding in, with equal weight, some Uij and its negation, so that the above summation form of WXT goes through equally with p and with q.\nNow define iid random variables (V1, . . . , Vk), where\nPr[Vl = Uij ] = pij , EVl = ∑ i,j pijUij = ∑ i,j qijUij = WX T,\n‖Uij‖ = ∥∥∥∥∥sijei(Xej)‖Xej‖2 ∥∥∥∥∥\nF\n· r‖X‖F = |sij | · ‖ei‖2 · ∥∥∥∥∥ Xej‖Xej‖2 ∥∥∥∥∥\n2\n· r‖X‖F = r‖X‖F,\nE‖Vl‖2 = ∑ i,j pij‖Uij‖2 ≤ ∑ ij pijr 2‖X‖2F = r2‖X‖2F .\nBy Lemma C.1, there exist (V̂1, . . . , V̂k) ∈ Sk with∥∥∥∥∥∥WXT − 1k ∑ l V̂l ∥∥∥∥∥∥ 2 ≤ E ∥∥∥∥∥∥EV1 − 1k ∑ l Vl ∥∥∥∥∥∥ 2 ≤ 1 k E‖V1‖2 ≤ r2‖X‖2F k .\nFurthermore, the matrices V̂l have the form\n1\nk ∑ l V̂l = 1 k ∑ l sleil(Xejl) T ‖Xejl‖ = 1 k ∑ l sleile T jl ‖Xejl‖ XT =: ŴXT, where Ŵ ∈M. Lastly, note |M| has cardinality at most (2nd)k." }, { "heading": "D PROOFS FOR SECTION 1.2", "text": "The bulk of this proof is devoted to establishing the Rademacher bound for computation graphs in Lemma 3.1; thereafter, as mentioned in Section 3, it suffices to plug this bound and the data augmentation bound in Lemma 1.2 into Lemma 1.1, and apply a pile of union bounds.\nAs mentioned in Section 3, this proof follows the scheme laid out in (Bartlett et al., 2017a), with simplifications due to the removal of “reference matrices” and some norm generality.\nProof of Lemma 3.1. Let cover scale and per-layer scales ( 1, . . . , L) be given; the proof will develop a covering number parameterized by these per-layer scales, and then optimize them to derive the final covering number in terms of . From there, a Dudley integral will give the Rademacher bound. Define b̃i := bi √ n for convenience. 
As in the statement, recursively define\nXT0 := X T, XTi := σi ( [WiΠiDi|〉Fi]XTi−1 ) .\nThe proof will recursively construct an analogous cover via\nX̂T0 := X T, X̂Ti := σi ( [ŴiΠiDi|〉Fi]X̂Ti−1 ) ,\nwhere the choice of Ŵi depends on X̂i−1, and thus the total cover cardinality will product (and not simply sum) across layers. Specifically, the cover Ni for Ŵi is given by Lemma C.3 by plugging in ‖ΠiDiX̂Ti−1‖F ≤ b̃i, and thus it suffices to choose\ncover cardinality k := r2i b̃ 2 i\n2i , whereby min Ŵi∈Ni ‖WiΠiDiX̂Ti−1 − ŴiΠiDiX̂Ti−1‖ ≤ i.\nBy this choice (and the cardinality estimate in Lemma C.3, the full cover N satisfies\nln |N | = ∑ i ln |Ni| ≤ ∑ i r2i b̃ 2 i 2i ln(2m2).\nTo optimize the parameters ( 1, . . . , L), the first step is to show via induction that\n‖XTi − X̂Ti ‖F ≤ ∑ j≤i jρj i∏ l=j+1 slρl.\nThe base case is simply ‖XT0 − X̂T‖ = ‖XT − XT‖ = 0, thus consider layer i > 0. Using the inductive formula for X̂i and the cover guarantee on Ŵi,∥∥∥XTi − X̂Ti ∥∥∥ =∥∥∥σi([WiΠiDi|〉Fi]XTi−1)− σi([ŴiΠiDi|〉Fi]X̂Ti−1)∥∥∥\n≤ ρi ∥∥∥[WiΠiDi|〉Fi]hXTi−1 − [ŴiΠiDi|〉Fi]X̂Ti−1∥∥∥\n≤ ρi ∥∥∥[WiΠiDi|〉Fi]XTi−1 − [WiΠiDi|〉Fi]X̂Ti−1∥∥∥+ ρi∥∥∥[WiΠiDi|〉Fi]X̂Ti−1 − [ŴiΠiDi|〉Fi]X̂Ti−1∥∥∥\n≤ ρi ∥∥[WiΠiDi|〉Fi]∥∥2∥∥∥XTi−1 − X̂Ti−1∥∥∥+ ρi∥∥∥[(Wi − Ŵi)ΠiDiX̂Ti−1|〉(Fi − Fi)X̂Ti−1]∥∥∥ ≤ siρi ∑ j≤i−1 jρj i−1∏ l=j+1 slρl + ρi\n∥∥∥(Wi − Ŵi)ΠiDiX̂Ti−1∥∥∥ ≤ ∑ j≤i−1 jρj i∏ l=j+1 slρl + ρi i ≤ ∑ j≤i jρj i∏ l=j+1 slρl.\nTo balance ( 1, . . . 
, L), it suffices to minimize a Lagrangian corresponding to the cover size subject to an error constraint, meaning\nL(~ , λ) = L∑ i=1 αi 2i + λ L∑ i=1 iβi − where αi := r2i b̃2i ln(2m2), βi := ρi L∏ l=i+1 slρl,\nwhose unique critical point for ~ > 0 implies the choice\ni := 1\nZ ( 2αi βi )1/3 where Z := 1 ∑ i (2αiβ 2 i ) 1/3,\nwhereby ‖XTL − X̂TL‖ ≤ automatically, and\nln |N | ≤ Z2 ∑ i r2i b̃ 2 i ln(2m 2) (2αi/βi)2/3\n= 1\n222/3 2∑ i r 2/3 i b̃ 2/3 i β 2/3 i ln(2m 2)1/3 2∑ i r 2/3 i b̃ 2/3 i ln(2m 2)1/3β 2/3 i\n= 24/3 ln(2m2)\n2\n[∑ i ( rib̃iρi L∏ l=i+1 slρl )2/3]3 =: τ2 2 ,\nas desired, with τ introduced for convenience in what is to come.\nFor the Rademacher complexity estimate, by a standard Dudley entropy integral (Shalev-Shwartz & Ben-David, 2014), setting τ̂ := max{τ, 1/3} for convenience,\nnRad(G) ≤ inf ζ\n4ζ √ n+12 ∫ √n ζ √ τ̂ d = inf ζ 4ζ √ n+12τ̂ ln( ) ∣∣√n ζ = inf ζ 4ζ √ n+12τ̂(ln √ n−ln ζ),\nwhich is minimized at ζ = 3τ̂ / √ n, whereby\nnRad(G) ≤ 12τ̂ + 6τ̂ lnn− 12τ̂ ln(3τ̂ / √ n) = 12τ̂(1− ln(3τ̂)) ≤ 12τ̂ ≤ 12τ + 4.\nThis now gives the proof of Theorem 1.3.\nProof of Theorem 1.3. With Lemma 1.1, Lemma 1.2, and Lemma 3.1 out of the way, the main work of this proof is to have an infimum over distillation network hyperparameters (~b, ~r, ~s) on the right hand side, which is accomplished by dividing these hyperparameters into countably many shells, and unioning over them.\nIn more detail, divide (~b, ~r, ~s) into shells as follows. Divide each bi and ri into shells of radius increasing by one, meaning meaning for example the first shell for bi has bi ≤ 1, and the jth shell has bi ∈ (j − 1, j], and similarly for ri; moreover, associate the jth shell with prior weight qj(bi) := (j(j + 1)) −1, whereby ∑ j≥1 qj(bi) = 1. Meanwhile, for si use a finer grid where the first shell has si ≤ 1/L, and the jth shell has si ∈ ((j − 1)/L, j/L), and again the prior weight is qj(si) = (j(j + 1))\n−1. 
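The union-bound bookkeeping in this shell construction relies on the prior weights summing to one. Since 1/(j(j+1)) = 1/j − 1/(j+1), the sum over j ≥ 1 telescopes to 1; a quick numerical check of the partial sums (illustrative only):

```python
# Shell priors q_j = 1/(j(j+1)) from the proof of Theorem 1.3.
# Telescoping: sum_{j=1}^{N} q_j = 1 - 1/(N+1), which tends to 1,
# so the prior weights over all shells form a probability distribution.
N = 10 ** 6
partial = sum(1.0 / (j * (j + 1)) for j in range(1, N + 1))
print(partial)  # close to 1 - 1/(N+1)
```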
Lastly, given a full set of grid parameters (~b, ~r, ~s), associate prior weight q(~b, ~r, ~s) equal to the product of the individual prior weight, whereby the sum of the prior weights over the entire product grid is 1. Enumerate this grid in any way, and define failure probability δ(~b, ~r, ~s) := δ · q(~b, ~r, ~s).\nNext consider some fixed grid shell with parameters (~b′, ~r′, ~s′) and letH denote the set of networks for which these parameters form the tightest shell, meaning that for any g ∈ H with parameters (~b, ~r, ~s), then (~b′, ~r′, ~s′) ≤ (~b + 1, ~r + 1, ~s + 1) component-wise. As such, by Lemma 1.1, with probability at least 1− δ(~b′, ~r′, ~s′), each g ∈ H satisfies\nPr[arg max y′ f(x)y′ 6= y] ≤ 2 ∥∥∥∥dµXdνn ∥∥∥∥ ∞ Φγ,m(f, g) + 2 n n∑ i=1 (1− φγ(g(xi))yi\n+ Õ ( k3/2\nγ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ ( Radm(F) + Radm(H) ) + √ k γ Radn(H) )\n+ 6\n√ ln(q(~b′, ~r′, ~s′)) + ln(1/δ)\n2n\n( 1 + ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m ) .\nTo simplify this expression, first note by Lemma 3.1 and the construction of the shells (relying in particular on the finer grid for si to avoid a multiplicative factor L) that\nRadm(H) = Õ [ 1√ n (∑ i [ r′ib ′ iρi L∏ l=i+1 s′lρl ]2/3)3/2]\n= Õ [ 1√ n (∑ i [ (ri + 1)(bi + 1)ρi L∏ l=i+1 (sl + 1/L)ρl ]2/3)3/2]\n= Õ [ 1√ n (∑ i [ ribiρi L∏ l=i+1 slρl ]2/3)3/2] ,\nand similarly for Radm(H) (the only difference being √ m replaces √ n). 
Secondly, to absorb the term ln(q(~b′, ~r′, ~s′)), noting that ln(a) ≤ ln(γ2) + (a− γ2)/(γ2), and also using ρi ≥ 1, then\nln(q(~r′,~b′, ~s′)) = O ln∏ i (ri + 1) 2(bi + 1) 2((si + 1)L) 2 = O L lnL+ ln∏ i r 2/3 i b 2/3 i s 2/3 i = Õ\nL+∑ i ln(r 2/3 i b 2/3 i ) + ln ∏ i s 2/3 i = Õ L+ ln(γ2) + 1 γ2 ∑ i r2/3i b2/3i +∏ l>i s 2/3 l \n= Õ ( L+ 1\nγ2 ∑ i [ ribi L∏ l=i+1 sl ]2/3)\n= Õ ( L+ 1\nγ2 ∑ i [ ribiρi L∏ l=i+1 slρl ]2/3) .\nTogether,\nPr[arg max y′ f(x)y′ 6= y] ≤ 2 ∥∥∥∥dµXdνn ∥∥∥∥ ∞ Φγ,m(f, g) + 2 n n∑ i=1 (1− φγ(g(xi))yi\n+ Õ ( k3/2\nγ ∥∥∥∥dµXdνn ∥∥∥∥ ∞ Radm(F) ) + 6 √ ln(1/δ) 2n ( 1 + ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m )\n+ Õ ( √ k\nγ √ n\n( 1 + k ∥∥∥∥dµXdνn ∥∥∥∥ ∞ √ n m )(∑ i [ ribiρi L∏ l=i+1 slρl ]2/3)3/2) .\nSince h ∈ H was arbitrary, the bound may be wrapped in infg∈H. Similarly, unioning bounding away the failure probability for all shells, since this particular shell was arbitrary, an infimum over shells can be added, which gives the final infimum over (~b, ~r, ~s). The last touch is to apply Lemma 1.2 to bound ‖dµX/dνn‖∞." }, { "heading": "E PROOF OF STABLE RANK BOUND, THEOREM 1.4", "text": "The first step is to establish the sparsification lemma in Lemma 3.2, which in turn sparsifies each matrix product, cannot simply invoke Lemma C.2: pre-processing is necessary to control the elementwise magnitudes of the resulting matrix. Throughout this section, define the stable rank of a matrix W as sr(W ) := ‖W‖2F /‖W‖22 (or 0 when W = 0).\nLemma E.1. Let matrices A ∈ Rd×m and B ∈ Rn×m be given, along with sampling budget k. Then there exists a selection (i1, . . . , ik) of indices and a corresponding diagonal sampling matrix M with at most k nonzero entries satisfying\nM := k∑ j=1 Zijeije T ij ‖aij‖ where Zij ≤ ‖A‖F √ m k , and\n∥∥ABT −AMBT∥∥2 ≤ 4 k ‖A‖2‖B‖2.\nProof. Let τ > 0 be a parameter to be optimized later, and define a subset of indices S := {i ∈ {1, . . . ,m} : ‖Aei‖ ≥ τ}, with Sc := {1, . . . ,m} \\ S. 
Let Aτ denote the matrix obtained by zeroing out columns not in S, meaning\nAτ := ∑ i∈S (Aei)e T i ,\nwhereby ‖ABT −AτBT‖F ≤ ‖A−Aτ‖ · ‖B‖ ≤ ‖B‖ √∑ i∈Sc ‖Aei‖2 ≤ τ √ m‖B‖.\nApplying Lemma C.2 to AτBT gives\nM := ‖Aτ‖2\nk k∑ j=1 eije T ij ‖Aτeij‖2 = k∑ j=1 Zijeije T ij ‖Aτeij‖ such that ‖AτBT−AτMBT‖2 ≤ 1 k ‖Aτ‖2‖B‖2,\nwhere Zij is specified by these equalities. To simplify, note ‖Aτ‖ ≤ ‖A‖, and AτM = AM . Combining the two inequalities,\n‖ABT −AMBT‖ ≤ ‖ABT −AτBT‖+ ‖AτBT −AτMBT‖ ≤ τ √ m‖B‖+ 1√\nk ‖A‖‖B‖.\nTo finish, setting τ := ‖A‖/ √ mk gives the bound, and ensures that the scaling term Zij satisfies, for any ij ∈ S,\nZij = ‖Aτ‖2 k‖Aτeij‖ ≤ ‖A‖ 2 F kτ = ‖A‖F\n√ m\nk .\nWith this tool in hand, the proof of Lemma 3.2 is as follows.\nProof of Lemma 3.2. Let Xj denote the network output after layer j, meaning\nXT0 := X T, XTj := σj(WjX T j−1),\nwhereby ‖XTj ‖F = ‖σj(WjXTj−1)− σj(0)‖F ≤ ‖WjXTj−1‖F ≤ ‖Wj‖2‖XTj−1‖F ≤ ‖X‖F ∏ i≤j ‖Wi‖2.\nThe proof will inductively choose sampling matrices (M1, . . . ,ML) as in the statement and construct\nX̂T0 := X T, X̂Tj := Πjσj(WjMjX̂ T j−1), where Πj denotes projection onto the Frobenius-norm ball of radius ‖X‖F ∏ i≤j ‖Wi‖2 (whereby ΠjXj = Xj), satisfying\n∥∥∥Xj − X̂j∥∥∥ F ≤ ‖X‖F j∏ p=1 ‖Wp‖2 j∑ i=1 √ sr(Wi) ki ,\nwhich gives the desired bound after plugging in j = L.\nProceeding with the inductive construction, the base case is direct since X̂0 = X = X0 and∥∥∥X0 − X̂0∥∥∥ F = 0, thus consider some j > 0. 
Applying Lemma E.1 to the matrix multiplication WjX̂j−1 with kj samples, there exists a multiset of Sj coordinates and a corresponding sampling matrix Mj , as specified in the statement, satisfying∥∥∥WjX̂Tj−1 −WjMjX̂Tj−1∥∥∥ F ≤ 1√ kj ‖Wj‖F‖X̂j−1‖F ≤ 1√ kj ‖Wj‖F‖X‖F ∏ i<j ‖Wi‖2.\nUsing the choice X̂Tj := Πjσj(WjMjX̂ T j−1),∥∥∥Xj − X̂j∥∥∥ F = ∥∥∥σj(WjXTj−1)−Πjσj(WjMjX̂Tj−1)∥∥∥ F\n≤ ∥∥∥WjXTj−1 −WjMjX̂Tj−1∥∥∥\nF = ∥∥∥WjXTj−1 −WjX̂Tj−1 +WjX̂Tj−1 −WjMjX̂Tj−1∥∥∥\nF ≤ ∥∥∥WjXTj−1 −WjX̂Tj−1∥∥∥ F + ∥∥∥WjX̂Tj−1 −WjMjX̂Tj−1∥∥∥ F\n≤ ∥∥Wj∥∥2∥∥∥Xj−1 − X̂j−1∥∥∥F + 1√kj ‖Wj‖F‖X‖F ∏ i<j ‖Wi‖2 ≤ ∥∥Wj∥∥2 ‖X‖F ∏ i<j ‖Wi‖2 ∑ i<j √ sr(Wi) ki + √ sr(Wj) kj ‖X‖F ∏ i≤j ‖Wi‖2\n≤ ‖X‖F ∏ i≤j ‖Wi‖2 ∑ i≤j √ sr(Wi) ki\nas desired.\nTo prove Theorem 1.4 via Lemma 3.2, the first step is a quick tool to cover matrices element-wise.\nLemma E.2. Let A denote matrices with at most k2 nonzero rows and k1 nonzero columns, entries bounded in absolute value by b, and total number of rows and columns each at most m. Then there exists a cover setM⊆ A satisfying\n|M| ≤ mk1+k2 ( 2b √ k1k2 )k1k2 , and sup\nA∈A min Â∈M\n‖A− Â‖F ≤ .\nProof. Consider some fixed set of k2 nonzero rows and k1 nonzero columns, and letM0 denote the covering set obtained by gridding the k1 · k2 entries at scale √k1k2 , whereby\n|M0| ≤\n( 2b √ k1k2 )k1k2 .\nFor any A ∈ A with these specific nonzero rows and columns, the  ∈ M0 obtained by rounding each nonzero entry of A to the nearest grid element gives\n‖A− Â‖2 = ∑ i,j (Aij − Âij)2 ≤ ∑ i,j ( √ k1k2 )2 = 2 ∑ i,j 1 k1k2 = 2.\nThe final coverM is now obtained by unioning copies ofM0 for all ( m k1 )( m k2 ) ≤ mk1+k2 possible submatrices of size k2 × k1.\nThe proof of Theorem 1.4 now carefully combines the preceding pieces.\nProof of Theorem 1.4. The proof proceeds in three steps, as follows.\n1. A covering number is estimate for sparsified networks, as output by Lemma 3.2.\n2. 
A covering number for general networks is computed by balancing the error terms from Lemma 3.2 and its cover computed here.\n3. This covering number is plugged into a Dudley integral to obtain the desired Rademacher bound.\nProceeding with this plan, let (X̂T0 , . . . , X̂ T L) be the layer outputs (and network input) exactly as provided by Lemma 3.2. Additionally, define diagonal matrices Dj := ∑ l ! ∈Sj+1 ele T l (with DL = I , where the “!” denotes unique inclusion; these matrices capture the effect of the subsequent sparsification, and can be safely inserted after each Wj without affecting X̂Tj , meaning\nX̂Tj = Πjσj(WjMjX̂ T j−1) = Πjσj(DjWjMjX̂ T j−1).\nLet per-layer cover precisions ( 1, . . . , L) be given, which will be optimized away later. This proof will inductively construct\nX̃T0 := X T, X̃Tj := Πjσj(W̃jX̃ T j−1),\nwhere W̃j is a cover element for DjWjMj , and inductively satisfying ‖X̂Tj − X̃Tj ‖ ≤ ‖X‖Fmj/2 ∑ i≤j i ∏ l≤j l 6=i ‖Wj‖F.\nTo construct the per-layer cover elements W̃j , first note by the form of Mj (and the scaling Zi provided by Lemma 3.2) that\nb := max i,l (DjWjMj)l,i ≤ max i ‖WjMjei‖ ≤ Zi ‖Wjei‖ ‖Wjei‖\n≤ ‖Wj‖F √ m\nkj−1 .\nConsequently, by Lemma E.2, there exists a cover Cj of matrices of the form DjWjMj satisfying |Cj | ≤ mkj+kj−1 ( 2b √ kjkj−1\nj\n)kjkj−1 ≤ mkj+kj−1 ( 2‖Wj‖F √ kjm\nj\n)kjkj−1 ,\nand the closest cover element W̃jCj to DjWjMj satisfies ‖DjWjMj − W̃j‖F ≤ j .\nProceeding with the induction, the base case has ‖X̂T0 − X̃T0‖ = ‖XT − XT‖ = 0, thus consider j > 0. 
The first step is to estimate the spectral norm of DjWjMj , which can be coarsely upper bounded via\n‖DjWjMj‖22 ≤ ‖DjWjMj‖2F ≤ ∑ i ‖WjMjei‖2 ≤ ∑ i ‖Wj‖2F m kj−1 ≤ ‖Wj‖2Fm.\nBy the form of X̂j and X̃j ,\n‖X̂Tj − X̃Tj ‖ = ‖Πjσj(DjWjMjX̂Tj−1)−Πjσj(W̃jX̃Tj−1)‖\n≤ ‖DjWjMjX̂Tj−1 − W̃jX̃Tj−1‖\n≤ ‖DjWjMjX̂Tj−1 −DjWjMjX̃Tj−1‖+ ‖DjWjMjX̃Tj−1 − W̃jX̃Tj−1‖\n≤ ‖DjWjMj‖2‖X̂Tj−1 − X̃Tj−1‖F + ‖DjWjMj − W̃j‖2‖X̃Tj−1‖F ≤ √ m‖Wj‖F [ ‖X‖Fm(j−1)/2 ∑ i<j i ∏ l<j l 6=i ‖Wj‖F ] + j‖X‖F ∏ i<j ‖Wj‖2\n≤ ‖X‖Fmj/2 ∑ i≤j i ∏ l≤j l 6=i ‖Wj‖F,\nwhich establishes the desired bound on the error.\nThe next step is to optimize kj . Let > 0 be arbitrary, and set −1j := −12L √ m‖X‖F ∏ i 6=j ‖Wi‖F, whereby\n‖X̂TL − X̃TL‖F ≤ 2 , |Cj | ≤ mkj+kj−1\n( 4mL √ kj‖X‖F ∏ i ‖Wi‖F )kjkj−1 .\nThe overall network cover N is the product of the covers for all layers, and thus has cardinality satisfying\nln |N | ≤ ∑ j ln |Cj | ≤ 2 ∑ j kj lnm+ ∑ j kjkj−1 ln\n( 4mL √ kj‖X‖F ∏ i ‖Wi‖F\nj\n)\n≤ 2 ∑ j kj lnm+ ∑ j 2k2j ln\n( 4mL √ kj‖X‖F j ) + ∑ j 2k2j · ∑ j ln ‖Wj‖F . To choose (k1, . . . , kL), letting XTL denote the output of the original unsparsified network, note firstly that the full error bound satisfies\n‖XTL − X̃TL‖ ≤ ‖XTL − X̂TL‖+ ‖X̂TL − X̃TL‖\n≤ ∑ i αi√ ki + 2 , where αi := ‖X‖F ∏ i ‖Wi‖2 √sr(Wi). To choose ki, the approach here is to minimize a Lagrangian corresponding to the cover cardinality, subject to the total cover error being . Simplifying the previous expressions and noting 2kjkj−1 ≤ k2j + k 2 j−1, whereby the dominant term in ln |N | is ∑ j k 2 j , consider Lagrangian\nL(k1, . . . , kl, λ) := ∑ i k2i + λ ∑ i αi√ ki − 2 , which has critical points when each ki satisfies\nk 5/2 i\nαi = λ 4 ,\nthus ki := α 2/5 i /Z with Z := 2/(2 ∑ j α 4/5 j )\n2. 
As a sanity check (since it was baked into the Lagrangian), plugging this into the cover error indeed gives\n‖XTL − X̃TL‖ ≤ ∑ i αi√ ki + 2 = √ Z ∑ i α 4/5 i + 2 = .\nTo upper bound the cover cardinality, first note that∑ i k2i = 1 Z2 ∑ i α 4/5 i = 4 4 (∑ i α 4/5 i )5 ,\nwhereby\nln |N | = Õ [∑ i k2i ] · [∑ i ln ‖Wi‖F ]\n= β\n4 where β = Õ ( ‖X‖4F [∏ j ‖Wj‖42 ] [∑ i sr(Wi)2/5 ]5 [∑ i ln ‖Wi‖F ]) .\nThe final step is to apply a Dudley entropy integral (Shalev-Shwartz & Ben-David, 2014), which gives\nnRad(F) = inf ζ\n( 4ζ √ n+ 12 ∫ √n ζ √ β 2 d ) = inf ζ ( 4ζ √ n+ 12 [ 1 ζ − 1√ n ]√ β ) .\nDropping the negative term gives an expression of the form aζ + b/ζ, which is convex in ζ > 0 and has critical point at ζ2 = b/a, which after plugging back in gives an upper bound 2 √ ab, meaning\nnRad(F) ≤ 2 ( 4 √ n · 12 √ β )1/2 = 8 √ 3n1/4β1/4.\nDividing by n and expanding the definition of β gives the final Rademacher complexity bound." } ]
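As a supplementary check of the final Dudley step in the proof above (notation as there; this is only the standard first-order condition, not additional content from the paper):

```latex
\[
\frac{d}{d\zeta}\Big(a\zeta + \frac{b}{\zeta}\Big) = a - \frac{b}{\zeta^2} = 0
\quad\Longrightarrow\quad \zeta_\star = \sqrt{b/a},
\qquad a\zeta_\star + \frac{b}{\zeta_\star} = 2\sqrt{ab}.
\]
With $a = 4\sqrt{n}$ and $b = 12\sqrt{\beta}$ this gives
\[
2\sqrt{4\sqrt{n}\cdot 12\sqrt{\beta}} \;=\; 2\sqrt{48}\,(n\beta)^{1/4} \;=\; 8\sqrt{3}\,n^{1/4}\beta^{1/4},
\]
matching the stated bound $n\,\mathrm{Rad}(\mathcal{F}) \le 8\sqrt{3}\,n^{1/4}\beta^{1/4}$.
```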
2021
GENERALIZATION BOUNDS VIA DISTILLATION
SP:0fa59beb93e339dc3612719931b206653916b8b5
[ "This paper proposes a novel model integrating both causal inference and structure-aware counterfactual training to enhance the long-tail performance of information extraction. The causal mechanism considers a structured causal model that takes into account all possible cause-effect relations for the final predictions, including contexts, target representations, POS tags, NERs, etc. They also implement a counterfactual training strategy that selects the most important factors and removes their side effects to improve performance in long-tail situations.", "The novelty of the paper seems to lie in the application of counterfactual analysis to address long-tailed IE issues, which might be interesting to IE researchers. Overall, more theory about counterfactual generation for the IE task should be added, since this is the novelty of the paper; likewise, the theory behind the rebalanced learning for the side effect and the counterfactual appears insufficient. The weakness of this work is in the theoretical and conceptual underpinnings of the proposed methodology." ]
Information Extraction (IE) aims to extract structured information from unstructured texts. However, in practice, the long-tailed and imbalanced data may lead to severe bias issues for deep learning models, due to very few training instances available for the tail classes. Existing works are mainly from computer vision society, leveraging re-balancing, decoupling, transfer learning and causal inference to address this problem on image classification and scene graph generation. However, these approaches may not achieve good performance on textual data, which involves complex language structures that have been proven crucial for the IE tasks. To this end, we propose a novel framework (named CFIE) based on language structure and causal reasoning with three key ingredients. First, by fusing the syntax information to various structured causal models for mainstream IE tasks including relation extraction (RE), named entity recognition (NER), and event detection (ED), our approach is able to learn the direct effect for classification from an imbalanced dataset. Second, counterfactuals are generated based on an explicit language structure to better calculate the direct effect during the inference stage. Third, we propose a flexible debiasing approach for more robust prediction during the inference stage. Experimental results on three IE tasks across five public datasets show that our model significantly outperforms the state-of-the-arts by a large margin in terms of Mean Recall and Macro F1, achieving a relative 30% improvement in Mean Recall for 7 tail classes on the ACE2005 dataset. We also discuss some interesting findings based on our observations.
[]
[ { "authors": [ "Ehsan Abbasnejad", "Damien Teney", "Amin Parvaneh", "Javen Shi", "Anton van den Hengel" ], "title": "Counterfactual vision and language learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Anders Björkelund", "Love Hafdell", "Pierre Nugues" ], "title": "Multilingual semantic role labeling", "venue": "In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL", "year": 2009 }, { "authors": [ "Léon Bottou", "Jonas Peters", "Joaquin Quiñonero-Candela", "Denis X Charles", "D Max Chickering", "Elon Portugaly", "Dipankar Ray", "Patrice Simard", "Ed Snelson" ], "title": "Counterfactual reasoning and learning systems: The example of computational advertising", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Laura Chiticariu", "Yunyao Li", "Frederick R. Reiss" ], "title": "Rule-based information extraction is dead! long live rule-based information extraction systems", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Jason P.C. Chiu", "Eric Nichols" ], "title": "Named entity recognition with bidirectional LSTM-CNNs", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "George Doddington", "Alexis Mitchell", "Mark Przybocki", "Lance Ramshaw", "Stephanie Strassel", "Ralph Weischedel" ], "title": "The automatic content extraction (ACE) program – tasks, data, and evaluation", "venue": "In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04),", "year": 2004 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": "arXiv preprint arXiv:1702.08608,", "year": 2017 }, { "authors": [ "Javid Ebrahimi", "Anyi Rao", "Daniel Lowd", "Dejing Dou" ], "title": "Hotflip: White-box adversarial examples for text classification", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2018 }, { "authors": [ "Shi Feng", "Eric Wallace", "Alvin Grissom II", "Mohit Iyyer", "Pedro Rodriguez", "Jordan Boyd-Graber" ], "title": "Pathologies of neural models make interpretations difficult", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Jeffrey Flanigan", "Sam Thomson", "Jaime G Carbonell", "Chris Dyer", "Noah A Smith" ], "title": "A discriminative graph-based parser for the abstract meaning representation", "venue": "In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2014 }, { "authors": [ "Tianyu Gao", "Xu Han", "Hao Zhu", "Zhiyuan Liu", "Peng Li", "Maosong Sun", "Jie Zhou" ], "title": "Fewrel 2.0: Towards more challenging few-shot relation classification", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 
2019 }, { "authors": [ "Claire Gardent", "Anastasia Shimorina", "Shashi Narayan", "Laura Perez-Beltrachini" ], "title": "The WebNLG challenge: Generating text from RDF data", "venue": "In Proceedings of the 10th International Conference on Natural Language Generation. Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Zhijiang Guo", "Yan Zhang", "Wei Lu" ], "title": "Attention guided graph convolutional networks for relation extraction", "venue": "In Proc. of ACL,", "year": 2019 }, { "authors": [ "Xu Han", "Pengfei Yu", "Zhiyuan Liu", "Maosong Sun", "Peng Li" ], "title": "Hierarchical relation extraction with coarse-to-fine grained attention", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Lifu Huang", "Heng Ji", "Kyunghyun Cho", "Ido Dagan", "Sebastian Riedel", "Clare Voss" ], "title": "Zero-shot transfer learning for event extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Zhanming Jie", "Wei Lu" ], "title": "Dependency-guided lstm-crf for named entity recognition", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Divyansh Kaushik", "Eduard Hovy", "Zachary Lipton" ], "title": "Learning the difference that makes a difference with counterfactually-augmented data", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ 
"Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In Proc. of ICLR,", "year": 2017 }, { "authors": [ "Guillaume Lample", "Miguel Ballesteros", "Sandeep Subramanian", "Kazuya Kawakami", "Chris Dyer" ], "title": "Neural architectures for named entity recognition", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Kai Lei", "Daoyuan Chen", "Yaliang Li", "Nan Du", "Min Yang", "Wei Fan", "Ying Shen" ], "title": "Cooperative denoising for distantly supervised relation extraction", "venue": "In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico,", "year": 2018 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xuezhe Ma", "Eduard Hovy" ], "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Fausto Milletari", "Nassir Navab", "Seyed-Ahmad Ahmadi" ], "title": "V-net: Fully convolutional neural networks for volumetric medical image segmentation", "venue": "In 2016 fourth international conference on 3D vision (3DV),", "year": 2016 }, { "authors": [ "Christoph Molnar" ], "title": "Interpretable Machine Learning", "venue": "Lulu. com,", "year": 2020 }, { "authors": [ "Thien Huu Nguyen", "Ralph Grishman" ], "title": "Event detection and domain adaptation with convolutional neural networks", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),", "year": 2015 }, { "authors": [ "Yulei Niu", "Kaihua Tang", "Hanwang Zhang", "Zhiwu Lu", "Xian-Sheng Hua", "Ji-Rong Wen" ], "title": "Counterfactual vqa: A cause-effect look at language bias", "venue": "arXiv preprint arXiv:2006.04315,", "year": 2020 }, { "authors": [ "Abiola Obamuyide", "Andreas Vlachos" ], "title": "Model-agnostic meta-learning for relation classification with limited supervision", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Judea Pearl", "Madelyn Glymour", "Nicholas P Jewell" ], "title": "Causal inference in statistics: A primer", "venue": null, "year": 2016 }, { "authors": [ "Nanyun Peng", "Hoifung Poon", "Chris Quirk", "Kristina Toutanova", "Wen tau Yih" ], "title": "Cross-sentence 
n-ary relation extraction with graph lstms", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proc. of EMNLP,", "year": 2014 }, { "authors": [ "Sameer Pradhan", "Alessandro Moschitti", "Nianwen Xue", "Hwee Tou Ng", "Anders Björkelund", "Olga Uryupina", "Yuchen Zhang", "Zhi Zhong" ], "title": "Towards robust linguistic analysis using OntoNotes", "venue": "In Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Association for Computational Linguistics,", "year": 2013 }, { "authors": [ "Chris Quirk", "Hoifung Poon" ], "title": "Distant supervision for relation extraction beyond the sentence boundary", "venue": "In Proc. of EACL,", "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": " why should i trust you?” explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Semantically equivalent adversarial rules for debugging nlp models", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Donald B Rubin" ], "title": "Essential concepts of causal inference: a remarkable history and an intriguing future", "venue": "Biostatistics & Epidemiology,", "year": 2019 }, { "authors": [ "Sunita Sarawagi" ], "title": "Information extraction", "venue": "Now Publishers Inc,", "year": 2008 }, { "authors": [ "Mike Schuster", "Kuldip K Paliwal" ], "title": "Bidirectional recurrent neural networks", "venue": "IEEE transactions on Signal Processing,", "year": 1997 }, { "authors": [ "Kaihua Tang", "Jianqiang 
Huang", "Hanwang Zhang" ], "title": "Long-tailed classification by keeping the good and removing the bad momentum causal effect", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Kaihua Tang", "Yulei Niu", "Jianqiang Huang", "Jiaxin Shi", "Hanwang Zhang" ], "title": "Unbiased scene graph generation from biased training", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Gokhan Tur", "Dilek Hakkani-Tür", "Larry Heck" ], "title": "What is left to be understood in atis", "venue": "In 2010 IEEE Spoken Language Technology Workshop,", "year": 2010 }, { "authors": [ "David Wadden", "Ulme Wennberg", "Yi Luan", "Hannaneh Hajishirzi" ], "title": "Entity, relation, and event extraction with contextualized span representations", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Tao Wang", "Yu Li", "Bingyi Kang", "Junnan Li", "Junhao Liew", "Sheng Tang", "Steven Hoi", "Jiashi Feng" ], "title": "The devil is in classification: A simple framework for long-tail instance segmentation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Xiaozhi Wang", "Ziqi Wang", "Xu Han", "Wangyi Jiang", "Rong Han", "Zhiyuan Liu", "Juanzi Li", "Peng Li", "Yankai Lin", "Jie Zhou" ], "title": "Maven: A massive general domain event detection dataset", "venue": "In arXiv: https://arxiv.org/abs/2004.13590,", "year": 2020 }, { "authors": [ "Xu Yang", "Hanwang Zhang", "Jianfei Cai" ], "title": "Deconfounded image captioning: A causal retrospect", "venue": null, "year": 2021 }, { "authors": [ "In NeurIPS", "2020. 
Daojian Zeng", "Kang Liu", "Siwei Lai", "Guangyou Zhou", "Jian Zhao" ], "title": "Relation classification via", "venue": null, "year": 2020 }, { "authors": [ "Zhang", "Hanwang Zhang", "Jinhui Tang", "Xiansheng Hua", "Qianru Sun" ], "title": "Causal intervention", "venue": "Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The goal of Information Extraction (IE) (Sarawagi, 2008; Chiticariu et al., 2013) is to detect structured information from unstructured texts. IE tasks, such as named entity recognition (NER) (Lample et al., 2016), relation extraction (RE) (Zeng et al., 2014; Peng et al., 2017) and event detection (ED) (Nguyen & Grishman, 2015), have developed rapidly with data-hungry deep learning models trained on a large amount of data. However, in real-world settings, unstructured texts follow a long-tailed distribution (Doddington et al., 2004), leading to a significant performance drop on the instance-scarce (or tail) classes which have very few instances available. For example, in the ACE2005 (Doddington et al., 2004) dataset, nearly 70% of event triggers are long-tailed while they only take up 20% of the training data. On a strong baseline (Jie & Lu, 2019), the macro F1 score of instance-rich (or head) classes can be 71.6, while the score of tail classes sharply drops to 41.7.
The underlying causes for the above issues are the biased statistical dependencies and spurious correlations between feature representations and classes learned from an imbalanced dataset. For example, the entity Gardens appears 13 times in the training set of OntoNotes5.0 (Pradhan et al., 2013) with the NER tag LOC, and only 2 times as an organization ORG. A classifier trained on this dataset will build a spurious correlation between Gardens and LOC. As a result, an organization that contains the entity Gardens may be wrongly predicted as a location LOC.
There are only a few studies (Zhang et al., 2019; Han et al., 2018) in the Natural Language Processing (NLP) field that address such long-tailed issues. These works mostly rely on external, pre-constructed knowledge graphs, which provide useful data-specific prior information that may not be available for other datasets. 
On the other hand, there are plenty of works from the computer vision community, where similar bias issues are also prominent. Current solutions include re-balanced training (Lin et al., 2017), which re-balances the contribution of each class in the training stage; transfer learning (Liu et al., 2019b), which takes advantage of the knowledge in data-rich classes to boost the performance of instance-scarce classes; the decoupling strategy (Kang et al., 2019), which learns the representations and classifiers separately; and causal inference (Tang et al., 2020a;b; Abbasnejad et al., 2020), which relies on structured causal models for unbiased scene graph generation, image classification and visual question answering.
The aforementioned studies from the computer vision community may not achieve good performance on the textual datasets in the NLP area due to a significant difference between the two fields. For example, unlike images, texts involve complex language structures such as dependency trees and constituent trees that describe the syntactic- or semantic-level relations between tokens. For long-tailed IE, how to explore the rich relational information as well as the complex long-distance interactions among words conveyed by such linguistic structures remains an open challenge. Furthermore, to capture a more informative context, the way of utilizing the syntax tree varies across the three IE tasks: the RE task relies more on the context and entity types rather than the entities themselves, while the classifications in the NER and ED tasks depend more on the entities than on the context. Hence, it is challenging to properly decide how to utilize language structures for the above three different IE tasks. One may also think that prevalent pre-trained models such as BERT (Devlin et al., 2019) may address the long-tailed issues. 
However, we empirically show that such models still suffer from bias issues.
In this paper, we propose CFIE, a novel framework that combines language structure and counterfactual analysis in causal inference (Pearl et al., 2016) to alleviate the spurious correlations of IE tasks including NER, RE and ED. From a causal perspective, counterfactuals (Bottou et al., 2013; Abbasnejad et al., 2020) describe what the outcome would have been if certain factors had been different. This concept entails a hypothetical scenario where the values in the causal graph can be altered to study the effect of each factor. Intuitively, the factor that yields the most significant changes in model predictions has the greatest impact and is therefore considered the main effect. Other factors with minor changes are categorized as side effects. In the context of IE with complex language structures, counterfactual analysis answers the question: “which tokens in the text would be the key clues for RE, NER or ED that could change the prediction result?”. With that in mind, our CFIE is proposed to explore the language structure to eliminate the bias caused by the side effect and maintain the main effect for the classification. We evaluate our model on five public datasets across three IE tasks, and achieve significant performance gains on instance-scarce classes. We will release our code to contribute to the community. Our major contributions are summarized as:
• To the best of our knowledge, our CFIE is the first attempt that marries counterfactual analysis and language structure to address the long-tailed IE issues. We build different structured causal models (SCMs) (Pearl et al., 2016) for the IE tasks and fuse the dependency structure into the models to better capture the main causality for the classification.
• We generate counterfactuals based on the syntax structure, where the counterfactuals can be used as interventions to alleviate spurious correlations in models. 
In doing so, the main effect can be better estimated by the intervention methodology.
• We also propose flexible classification debiasing approaches inspired by the Total Direct Effect (TDE) in causal inference. Our proposed approach is able to strike a good balance between the direct effect and the counterfactual representations to achieve more robust predictions." }, { "heading": "2 RELATED WORK", "text": "Long-tailed Information Extraction: Information extraction tasks, such as relation extraction (Zeng et al., 2014; Peng et al., 2017; Quirk & Poon, 2017), named entity recognition (Lample et al., 2016; Chiu & Nichols, 2016), and event extraction (Nguyen & Grishman, 2015; Huang et al., 2018), are fundamental NLP tasks and have been extensively studied in recent years. For long-tailed IE, recent models (Lei et al., 2018; Zhang et al., 2019) leverage external rules or transfer knowledge from data-rich classes to the tail classes. Few-shot learning (Gao et al., 2019; Obamuyide & Vlachos, 2019) has also been applied to IE tasks, although it focuses more on new classification tasks with only a handful of training instances.
Re-balancing/Decoupling Models: Re-balancing approaches include re-sampling strategies (Mahajan et al., 2018; Wang et al., 2020a) that aim to alleviate statistical bias from head classes, and re-weighting approaches (Milletari et al., 2016; Lin et al., 2017) which assign balanced weights to the losses of training samples from each class to boost the discriminability via robust classifier decision boundaries. These techniques may inevitably suffer from under-fitting/over-fitting issues on head/tail classes (Tang et al., 2020a). 
There are also recent studies (Kang et al., 2019) that decouple the representation learning and the classifier, effectively mitigating the performance loss caused by direct re-sampling.
Causal Inference: Causal inference (Pearl et al., 2016; Rubin, 2019) and counterfactuals have been widely used in psychology, politics and epidemiology for years. There are many studies in the computer vision community (Tang et al., 2020b; Abbasnejad et al., 2020; Tang et al., 2020a; Niu et al., 2020; Yang et al., 2020; Zhang et al., 2020; Yue et al., 2020) which use the Total Direct Effect (TDE) analysis framework and counterfactuals for Scene Graph Generation (SGG), visual question answering, and image classification. There is also a recent work (Zeng et al., 2020) that generates counterfactuals for weakly-supervised NER by replacing the target entity with another entity. Our methods differ from the previous works in three aspects: 1) we explore the syntax structures of texts for building different causal graphs; 2) counterfactuals are generated based on a task-specific pruned dependency tree; 3) our proposed inference method yields robust predictions for the NER and ED tasks.
Model Interpretation: Besides causal inference, there have been plenty of studies (Molnar, 2020) on traditional model interpretation applied in various applications, such as text and image classification (Ribeiro et al., 2016; Ebrahimi et al., 2018), question answering (Feng et al., 2018; Ribeiro et al., 2018), and machine translation (Doshi-Velez & Kim, 2017). LIME (Ribeiro et al., 2016) was proposed to select a set of instances to explain the predictions. The input reduction method (Feng et al., 2018) is able to find the most important features and uses very few words to obtain the same prediction. Unlike LIME and the input reduction method, the word selections in our CFIE are based on the syntax structure. SEARs (Ribeiro et al., 2018) induces adversaries by data augmentation during the training phase. 
Along this line, a recent study (Kaushik et al., 2019) also uses data augmentation techniques to provide extra training signal. Our CFIE is orthogonal to data augmentation, as it generates counterfactuals during the inference stage, where the counterfactuals are used to mitigate the spurious correlations rather than to train the network parameters." }, { "heading": "3 MODEL", "text": "Figure 1 shows the workflow of our proposed CFIE. We detail these components as follows." }, { "heading": "3.1 STEP 1: CAUSAL REPRESENTATION LEARNING", "text": "In this step, we train a causal graph on an imbalanced dataset. Our goal here is to teach the model to identify the main cause (main effect) and the spurious correlations (side effect) for the classification.
Structural Causal Models (SCMs): The two well-known causal inference frameworks are SCMs and potential outcomes (Rubin, 2019), which are complementary and theoretically connected. We choose SCMs in our case due to their advantages in expressing and reasoning about the effects of causal relationships among variables. An SCM can be represented as a directed acyclic graph (DAG) G = {V, F, U}, where we denote the set of observables (vertices) as V = {V1, ..., Vn}, the set of functions (directed edges) as F = {f1, ..., fn}, and the set of exogenous variables (e.g. noise) as U = {U1, ..., Un}. Note that in the deterministic case where U is given, the values of all variables in the SCM are uniquely determined (Pearl, 2009). Each observable Vi can be derived from:
Vi := fi(PAi, Ui), (i = 1, ..., n), (1)
∀i, PAi ⊆ V\Vi is the set of parents of Vi. Directed edges, such as PAi → Vi in the graph G, i.e., fi, refer to the direct causation from the parental variables PAi to the child variable Vi.
Our Proposed SCMs: Figure 2(a) demonstrates our unified SCMs for IE tasks, which are built based on our prior knowledge of the tasks. 
The variable S indicates the contextualized representations of an unstructured input sentence, where the representations are the output of a BiLSTM (Schuster & Paliwal, 1997) or a pre-trained BERT encoder (Devlin et al., 2019). Zi (i ∈ [1,m]) represents linguistic features such as the NER tags and part-of-speech (POS) tags. The variable X is the representation of a target relation for RE, the entity representation for NER, or the trigger representation for ED, and Y indicates the output logits for classification.
[Figure 2: The proposed SCMs, panels (a) and (b), with nodes S, X, P, N, and Y.]
Let E = {S, X, Z1, ..., Zm} denote the parents of Y. The direct causal effects towards Y, including X → Y, S → Y, Z1 → Y, ..., Zm → Y, are linear transformations. For each edge i → Y, its transformation is denoted as WiY ∈ Rc×d, where i ∈ E and c is the number of classes. We let Hi ∈ Rd×h denote the h representations¹ with d dimensions for node i ∈ E. Then, the prediction can be obtained by the summation Yx = ∑i∈E WiY Hi or the gated mechanism Yx = WgHX ⊙ σ(∑i∈E WiY Hi), where ⊙ refers to the element-wise product, Wg ∈ Rc×d is a linear transformation, and σ(·) indicates the sigmoid function. To avoid any single edge, such as S → Y, dominating the generation of the logits Yx, we add a cross-entropy loss LiY, i ∈ E, for each branch, where i indicates the parent of the node Y. Let LY denote the loss for Yx; the total loss L can be computed by:
L = LY + ∑i∈E LiY (2)
Note that the proposed SCM is encoder-neutral: it can be equipped with various encoders, such as BiLSTM, BERT and Roberta (Liu et al., 2019a). For simplicity, we omit the exogenous variables U from the graph, as they are only useful for the derivations in the following sections.
Fusing Syntax Structures Into SCMs: So far we have built the basic SCMs for the IE tasks. On the edge S → X, we adopt different neural network architectures for RE, NER and ED. For RE, we use dependency trees to aggregate long-range relations with graph convolution networks (GCN) (Kipf & Welling, 2017). 
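The two prediction heads and the per-branch losses defined above can be sketched in a few lines; all shapes, names, and values below are toy assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Toy shapes: c classes, d-dimensional features, h positions per node.
c, d, h = 3, 4, 5
rng = np.random.default_rng(0)
parents = ["S", "X", "Z"]                          # E, the parents of Y
H = {i: rng.normal(size=(d, h)) for i in parents}  # H_i in R^{d x h}
W = {i: rng.normal(size=(c, d)) for i in parents}  # W_iY in R^{c x d}
W_g = rng.normal(size=(c, d))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One logit matrix per branch; each would also receive its own loss L_iY.
branch_logits = {i: W[i] @ H[i] for i in parents}

Y_sum = sum(branch_logits.values())                # summation head
Y_gate = (W_g @ H["X"]) * sigmoid(Y_sum)           # gated head (* = elementwise)
```

Both heads produce c × h logit matrices; the per-branch logits are what the auxiliary cross-entropy losses in Eq. (2) would be attached to.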
Assume the length of the sentence is h. For the GCN, we generate a matrix A ∈ Rh×h from a dependency tree. The convolution computation for node i at the l-th layer takes the representations x_j^{l−1} from the previous layer as input and outputs the updated representation x_i^l. The formulation is given as:
x_i^l = σ(∑_{j=1}^{h} A_{ij} W^l x_j^{l−1} + b^l), i ∈ [1, h] (3)
where W^l and b^l are the weight matrix and bias vector of the l-th layer respectively, and σ(·) is the sigmoid function. Here x^0 takes its value from HS, and HX takes its value from the output of the last GCN layer x^{l_max}. For NER and ED, we adopt the dependency-guided concatenation approach (Jie & Lu, 2019). Given a dependency edge (t_h, t_i, r) with t_h as the head (parent), t_i as the dependent (child) and r as the dependency relation between them, the representation of the dependent (assumed to be at the i-th position of the sentence) can be denoted as:
x_i = [H_S^{(i)}; H_S^{(h)}; v_r], t_h = parent(t_i), H_X = LSTM(x) (4)
where H_S^{(i)} and H_S^{(h)} are the word representations of the word t_i and its parent t_h, and v_r denotes the learnable embedding of the dependency relation r.
¹ h is the sequence length for NER and ED, and h = 1 for relation extraction." }, { "heading": "3.2 STEP 2 AND 3: INFERENCE AND COUNTERFACTUAL GENERATION", "text": "We have trained our SCMs in the first step. The second step performs inference with the SCMs, and the third step generates dependency-based counterfactuals to better measure the main effect.
Interventions: For the SCM G, an intervention indicates an operation that modifies a subset of variables V′ ⊆ V to new values, where each variable Vi ∈ V′ is generated by a new structural mechanism f̂i(P̂Ai, Ui) that is independent from the original fi(PAi, Ui). Thus, the causal dependency between Vi and its parents {PAi, Ui} is cut off. Mathematically, such an intervention on one variable X ∈ V can be expressed by the do-notation do(X = x∗), where x∗ is the given value. 
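As a concrete reference for the graph convolution in Eq. (3), a minimal layer might look like the sketch below; the adjacency matrix, shapes, and toy inputs are assumptions, not the paper's code:

```python
import numpy as np

def gcn_layer(A, X, W, b):
    """One GCN layer following Eq. (3): each token aggregates its dependency
    neighbours via A, projects with W, and applies a sigmoid activation."""
    Z = A @ X @ W + b
    return 1.0 / (1.0 + np.exp(-Z))

# Toy example: 3 tokens, dependency edges 0-1 and 1-2, plus self-loops.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
H = gcn_layer(A, np.eye(3), np.ones((3, 2)), np.zeros(2))
```

Stacking a few such layers (with the output of the last one taken as H_X) gives the aggregation used on the S → X edge for RE.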
Counterfactuals: Unlike interventions, the concept of a counterfactual reflects an imaginary scenario of “what would the outcome be had the variable(s) been different”. Recall from Section 3.1 the definition of an SCM and the set of exogenous variables U, which uniquely determines the variables in the system (Pearl, 2009). Let Y ∈ V denote the outcome variable, and let X ∈ V\{Y} denote the variable of study. The counterfactual for setting X = x∗ is formally estimated as:
Yx∗(u) = YGx∗(u) (5)
where Gx∗ means assigning X = x∗ in all equations of the SCM G. In our CFIE setting, we aim to estimate the counterfactual for the model prediction at the instance level. For the proposed SCM shown in Figure 1, the counterfactual Yx∗ for our prediction Y is practically computed as follows:
Yx∗ = YGx∗(u) = fY(do(X = x∗), S = s, Z = z) = ∑i∈E\{X} WiY Hi + WXY Hx∗ (6)
where fY is the function that computes Y, and we only replace the original feature representation HX with Hx∗. No actual value is needed for u. See Appendix A.1.1 for the derivation.
Dependency-based Counterfactual Generation: There are many other language structures, such as the constituent tree, abstract meaning representation (Flanigan et al., 2014) and semantic role labeling (Björkelund et al., 2009). We choose the dependency structure in our case as it is able to capture rich relational information as well as the complex long-distance interactions that have been proven effective on IE tasks. Counterfactuals lead us to think about: “what are the key clues that determine the relation of two entities for RE, and a certain span of a sentence to be an entity or an event trigger for the NER and ED tasks, respectively?”. To generate the counterfactual representations for the RE task, we mask the tokens along the shortest path between the two entities of a relation in a dependency tree to form a new sequence. Then this masked sequence is fed to a BiLSTM or BERT encoder to output new contextualized representations S∗. 
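The shortest-path masking just described can be sketched as follows; the toy parse, indices, and helper names are illustrative assumptions rather than the authors' code:

```python
from collections import deque

def shortest_dep_path(heads, a, b):
    """BFS shortest path between tokens a and b on the (undirected) dependency
    tree, where heads[i] is the head index of token i (-1 for the root)."""
    adj = {i: set() for i in range(len(heads))}
    for i, head in enumerate(heads):
        if head >= 0:
            adj[i].add(head)
            adj[head].add(i)
    prev, queue = {a: None}, deque([a])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, u = [], b
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def mask_path(tokens, path, mask="[MASK]"):
    on_path = set(path)
    return [mask if i in on_path else t for i, t in enumerate(tokens)]

tokens = ["Paris", "is", "in", "France"]
heads = [1, -1, 1, 2]  # toy parse with "is" as the root
masked = mask_path(tokens, shortest_dep_path(heads, 0, 3))
```

The masked sequence would then be re-encoded to obtain S∗ (and from it Hx∗) for Eq. (6).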
For the NER and ED tasks, we mask the entities, or the tokens within the scope of 1 hop on the dependency tree, to generate S∗. Then we feed S∗ to the function S → X to get X∗. The operation on NER also aligns with a recent finding (Zeng et al., 2020) that the entity itself is more important than the context for entity classification. By doing so, the key clues have been wiped off in the generated counterfactual representations X∗, which can be used to strengthen the main effect while reducing the spurious correlations and the side effect." }, { "heading": "3.3 STEP 4 AND 5: CAUSAL EFFECT ESTIMATION", "text": "We estimate the causal effect in the fourth step and make use of the counterfactual representations for a more robust prediction in the fifth step. Inspired by the Total Direct Effect (TDE) used in (Tang et al., 2020b), we can compare the original outcome Yx and its counterfactual Yx∗ to estimate the effect for RE so that the side effect can be eliminated (see Appendix A.1.2 for the derivation):
TDE = Yx − Yx∗ (7)
As both the context and the entity (or trigger) play important roles for the classification in the NER and ED tasks, we propose a novel approach to alleviate the spurious correlations caused by side effects while strengthening the main effect at the same time. The interventional causal effect of the i-th entity in a sequence can be described as:
Effect = Yxi − Yx∗i + αWXY x∗i (8)
where α is a hyperparameter that balances the importance of the context and the entity (or trigger) for the NER and ED tasks. The first part, Yxi − Yx∗i, indicates the main effect, which reflects more about the debiased context, while the second part, WXY x∗i, reflects more about the entity (or trigger) itself. Combining them yields a more robust prediction by better distinguishing the main and side effects. As shown in Figure 1, the sentence “The program was killed” produces a biased high score for the event “Life:Die” in Yx and results in a wrong prediction due to the word “killed”. 
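The inference rules in Eqs. (7) and (8) above amount to a few lines at test time; the toy logits below loosely mirror the running example and are assumptions for illustration:

```python
import numpy as np

def tde(Y_x, Y_x_star):
    """Eq. (7): Total Direct Effect, used for RE."""
    return Y_x - Y_x_star

def effect(Y_x, Y_x_star, W_XY, x_star, alpha):
    """Eq. (8): debiased score for NER/ED; alpha trades off the debiased
    context (main effect) against the entity/trigger term W_XY x*."""
    return Y_x - Y_x_star + alpha * (W_XY @ x_star)

# Toy logits over ("Life:Die", "SW:Quit"): the biased class stays high in the
# counterfactual, so subtracting it flips the prediction to the correct class.
Y_x = np.array([2.6, 2.5])
Y_x_star = np.array([2.5, 0.5])
```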
By computing the counterfactual Yx∗ with “program” masked, the score for “Life:Die” remains high but the score for “SW:Quit” drops dramatically. This difference, Yxi − Yx∗i, leads us to the correct prediction and reveals the important role of the word “program”. Such a design differs from that of the previous work in the vision community (Tang et al., 2020a) by providing more flexible adjustment and effect estimation. We will show that our approach is more suitable for long-tailed IE tasks." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND SETTINGS", "text": "The five datasets used in our experiments include OntoNotes5.0 (Pradhan et al., 2013) and ATIS (Tur et al., 2010) for the NER task, ACE2005 (Doddington et al., 2004) and MAVEN (Wang et al., 2020b) for the ED task, and NYT24 (Gardent et al., 2017) for the RE task. For all five datasets, we categorize the classes into three splits based on the number of training instances per class. The model parameters are finetuned on the development sets. For RE, we use the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.3 and a weight decay rate of 0.9. For NER and ED, we use the Adam optimizer with an initial learning rate of 0.001. The hidden sizes of the BiLSTM and the GCNs are set to 300, and the number of GCN layers is set to 3. 300-dimensional GloVe embeddings (Pennington et al., 2014) are used to initialize the word embeddings². We focus more on Mean Recall (MR) (Tang et al., 2020b) and Macro F1 (MF1), two more balanced metrics for measuring the performance on long-tailed IE tasks: MR better reflects the capability of identifying the instance-scarce classes, and MF1 better represents the model's ability on each class, while the conventional Micro F1 score depends highly on the data-rich classes and pays less attention to the tail classes. We report the Micro F1 score (F1) for each dataset in the Appendix. 
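For reference, the two balanced metrics just mentioned can be computed from per-class counts as follows (a standard sketch, not the authors' evaluation script):

```python
def mean_recall_and_macro_f1(stats):
    """stats maps each class to (tp, fp, fn) counts. Mean Recall averages the
    per-class recalls and Macro F1 averages the per-class F1 scores, so every
    class counts equally; Micro F1 would instead pool all counts, letting the
    data-rich classes dominate."""
    recalls, f1s = [], []
    for tp, fp, fn in stats.values():
        r = tp / (tp + fn) if tp + fn else 0.0
        p = tp / (tp + fp) if tp + fp else 0.0
        recalls.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    n = len(stats)
    return sum(recalls) / n, sum(f1s) / n

# A head class with high recall and a tail class with low recall:
mr, mf1 = mean_recall_and_macro_f1({"head": (90, 10, 10), "tail": (1, 0, 9)})
```

The tail class drags both averages down, which is exactly why these metrics expose long-tailed performance that Micro F1 hides.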
We also follow (Liu et al., 2019b) to report the MR and MF1 on the three splits in Table 5 in the Appendix." }, { "heading": "4.2 BASELINES", "text": "We categorize the baselines into three groups. 1) Conventional Models include BiLSTM (Chiu & Nichols, 2016), BiLSTM+CRF (Ma & Hovy, 2016), C-GCN (Zhang et al., 2017), Dep-Guided LSTM (Jie & Lu, 2019), AGGCN (Guo et al., 2019) and BERT (Devlin et al., 2019). They do not explicitly take the long-tailed issues into consideration. 2) Re-weighting/Decoupling Models refer to loss re-weighting approaches including Focal Loss (Lin et al., 2017), and two-stage decoupled learning approaches (Kang et al., 2019) that include τ-normalization, classifier retraining (cRT) and learnable weight scaling (LWS). 3) Causal Models include TDE (Tang et al., 2020b). There are also recent studies based on the deconfounded methodology (Tang et al., 2020a; Yang et al., 2020), which, however, are not directly applicable as causal baselines in our case. In our experiments, we reproduced the results for all the baselines, as most of the results have not been reported on NLP datasets. We believe some recent strong baselines, which are not mentioned in this paper due to the space limitation, may also further benefit our model by integrating them into the edge S → X." }, { "heading": "4.3 TASK DEFINITIONS", "text": "Named Entity Recognition: NER is a sequence labeling task that seeks to locate and classify named entities in unstructured text into pre-defined categories such as person, location, etc. Event Detection: ED aims to detect the occurrences of predefined events in unstructured text and categorize their triggers. An event trigger is defined as the word or phrase that most clearly expresses an event occurrence. Taking the sentence “a cameraman died in the Palestine Hotel” as an example, the word “died” is considered as the trigger of a “Death” event.
² The statistics of the datasets and the detailed hyperparameters are attached in the Appendix. 
Relation Extraction: The goal of RE is to identify semantic relationships from text, given two or more entities. For example, “Paris is in France” states an “is in” relationship between the two entities Paris and France. Their relation can be denoted by the triple (Paris, is in, France)." }, { "heading": "4.4 RESULTS", "text": "Named Entity Recognition: Table 1 shows the comparison results on both the OntoNotes5.0 and ATIS datasets. Our models outperform the two classical models BiLSTM and BiLSTM+CRF under most settings, especially the Few setting, e.g., achieving 10.2 points higher Mean Recall (MR) against BiLSTM on OntoNotes5.0, and 12.7 points higher Macro F1 (MF1) against BiLSTM+CRF on ATIS. The results indicate the superiority of our proposed model in handling the instance-scarce classes. Compared with the C-GCN model that makes use of dependency trees for information aggregation, our model also achieves 8.4 points higher MR and comparable MF1, indicating the capability of a causal model in improving the long-tailed sequence labeling problem. Compared with a recent causal baseline, TDE, our model consistently performs better in terms of long-tailed scores; the results confirm our hypothesis that making good use of language structure helps a causal model to distinguish the main effect from the side effect. Among the re-balancing approaches such as Focal Loss, cRT and LWS, τ-Normalization performs best, and this aligns with the findings in the previous study (Kang et al., 2019) for long-tailed image classification.
Event Detection: Table 2 shows the comparison results on both the ACE2005 and MAVEN datasets. Overall, our model significantly outperforms the baselines under the Few setting by a large margin, e.g., 12.8 and 15.8 points higher in terms of MR and MF1 respectively on the ACE2005 dataset, and 20.6 and 20.8 points higher in terms of the two metrics on the MAVEN dataset. Meanwhile, our model is able to achieve better or comparable results under the other settings. 
The results further confirm the robustness of our model in improving the classification of tail classes with few training instances available. Our model also performs better than the BERT baselines under the Few setting, indicating that the pre-trained BERT models still suffer from bias issues on the long-tailed IE tasks.
Relation Extraction: As shown in Table 3, we further evaluate CFIE for relation extraction on the NYT24 dataset. Our method significantly outperforms all other methods in MF1 for both the tail classes and the overall F1. Although cRT achieves a relatively high MR, having the lowest MF1 renders it incompetent for this task. The results further confirm our hypothesis that the proposed CFIE is able to alleviate the spurious correlations caused by the imbalanced dataset by learning to distinguish the main effect from the side effect. We also observe that CFIE outperforms the previously proposed TDE by a large margin under both the Few and Overall settings, i.e., improvements of 11.5 points and 3.4 points in terms of MF1. This further proves our hypothesis that properly exploring language structure in causal models boosts the performance of IE tasks on imbalanced datasets.
4.5 DISCUSSIONS
What are the most important factors for NER? We have hypothesised that factors such as the 2-hop and 1-hop context on the dependency tree, the entity itself, and the POS feature may hold the potential to be the key clues for the NER predictions. To evaluate the impact of these factors, we first generate new sequences by masking or mitigating these factors. Then we feed the generated sequences to the proposed SCM to obtain the predictions. Figure 3 shows a qualitative example for predicting the NER tag of the entity “malacca”. 
Specifically, Figure 3(a) visualizes the variances of the predictions, where the histograms on the left refer to the prediction probabilities of the ground-truth class, while the histograms on the right are the maximum predictions excluding the ground-truth class. Figure 3(b) illustrates how we mask the context based on a dependency tree. It shows that masking the entity, i.e., “malacca”, leads to the most significant performance drop, indicating that the entity itself plays a key role in the NER classification. This also inspires us to design step 5 in our framework. More analyses about ED and RE are given in Appendix A.4.1 and A.4.2.
Does the syntax structure matter? To answer this question, we design three baselines: 1) Causal Models w/o Syntax, which does not employ dependency trees during the training stage and only uses them for generating counterfactuals; 2) Counterfactuals w/o Syntax, which employs dependency structures for training but utilizes a null input as the intervention during the inference stage. We adopt this setting from the previous study (Tang et al., 2020a); and 3) No Syntax, which is the same as the previous work TDE (Tang et al., 2020b) and does not involve dependency structures in either the training or the inference stage. As shown in Figure 4, our model outperforms the first two baselines on the ACE2005 dataset under both the Few and All settings, demonstrating the effectiveness of the dependency structure in improving causal models for long-tailed IE.
How can we make good use of the dependency structure? To answer this question, we present three tree pruning mechanisms under two graph aggregation settings, i.e., Prune with DGLSTM and Prune with C-GCN, as described in Equation 4 and Equation 3, respectively. 
The three pruning strategies include: 1) CFIE Mask 1-hop, which masks the tokens that directly connect to the target token in the dependency tree; 2) CFIE Mask token, which directly masks the target token; and 3) CFIE Mask token&1-hop, which masks both the target token and its 1-hop neighbours in the dependency tree. Figure 5 and Figure 6 depict the results on the OntoNotes5.0 dataset. We observe that masking the 1-hop neighbours in the dependency tree achieves the best performance among the three strategies, indicating that the entity itself is more important in NER sequence labeling. By comparing the two graph aggregation methods, we draw the conclusion that Prune with DGLSTM makes better use of dependency structures.
How about the performance under various interventions and SCMs? We study this question on the ACE2005 dataset for the ED task. We design three interventional methods: 1) Intervene X & NER, 2) Intervene X & POS, and 3) Intervene X & NER & POS. Figure 7 shows that introducing interventions solely on X achieves the best performance under both the Few and All settings. We also introduce three variants of our proposed SCMs: 1) SCM w/o NER, 2) SCM w/o POS, and 3) SCM w/o NER and POS. Figure 8 shows that removing the NER node significantly decreases the ED performance, especially under the Few setting. The results demonstrate the superiority of our proposed SCMs, which explicitly involve linguistic features to calculate the main effect. More analyses for the NER task are given in Appendix A.4.4.
[Figure 7: Various interventions (Mean F1, %, under the Few and All settings for Intervene X & NER, Intervene X & POS, Intervene X & NER & POS, and Ours).]
[Figure 8: Various SCMs (Mean F1, %, under the Few and All settings for SCM w/o NER, SCM w/o POS, SCM w/o POS and NER, and Ours).]
How does the hyper-parameter α impact the performance? To evaluate the impact of α on the performance, we tuned the parameter on four datasets: OntoNotes5.0, ATIS, ACE2005, and MAVEN. 
As shown in Figure 9, when increasing α from 0 to 2.4 on the ATIS dataset, the F1 scores first increase dramatically and then decrease slowly, peaking when α is set to 1.2. As the value of α represents the importance of the entity for classification, we draw the conclusion that, for the NER task, the entity plays a relatively more important role than the context. We also demonstrate the necessity of step 5 in our framework, since the performance is poor when α is set to 0. Experimental results on the other three datasets are given in Appendix A.4.3." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present CFIE, a novel approach to tackling the long-tailed information extraction issues via counterfactual analysis in causal inference. Experimental results on five datasets across three IE tasks show the effectiveness of our approach. Future research directions include applying the proposed framework to more challenging long-tailed document-level IE tasks." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DERIVATIONS", "text": "" }, { "heading": "A.1.1 COUNTERFACTUALS", "text": "Recall that the formal computation for a counterfactual is defined as Yx∗(u) = YMx∗(u), where Mx∗ means assigning X = x∗ in all equations of the SCM. The crucial step in the derivation is to understand the role of the exogenous variable U, by which the variables in the causal graph are uniquely determined. To compute the counterfactual of a prediction regarding a variable X, we have to keep all other variables under the same setting as the original prediction. Consider an intuitive example in which a boy got an A in a subject because he studied hard. 
To estimate the counterfactual "what score would he get if he did not study hard", we should hold all other factors, such as the difficulty of the subject and the skill of the teacher, at their original levels, simulating the hypothetical scenario in which the boy travelled back in time and behaved differently. Thus, setting $U = u$, where $u$ is the environment (e.g., year of admission, faculty) of the original prediction, ensures consistency in estimating the values of all other variables; mathematically,\n\n$$V_i = f_i(\mathrm{PA}_i, U = u), \quad \forall V_i \in \mathbf{V},$$\n\nexcept for the variable of interest $X$ along with its descendants (e.g., commendation from the teacher), which change due to the intervention $do(X = x^*)$. Thus, for our SCM, as long as the values of the variables $(S, Z)$ that are not descendants of $X$ follow the original situation, the exogenous variable $u$ is only for notational purposes and is no longer needed in computing the counterfactuals. Moreover, only the descendants of $X$ need to be re-calculated. We now present the mathematical derivation of the counterfactual $Y_{x^*}$ in our SCM:\n\n$$Y_{x^*} = Y_{M_{x^*}}(u) = Y(do(X = x^*), U = u) = f_Y(do(X = x^*), S = s, Z = z) = f_Y(x^*, s, z) = \sum_{i \in E \setminus \{X\}} W_i^Y H_i + W_X^Y H_{x^*}$$\n\nIn short, to compute the counterfactual $Y_{x^*}$, we simply need to\n\n1. Assign a new value $x^*$ to the variable of interest $X$.\n\n2. Cut off the dependency between $X$ and its parents in the SCM.\n\n3. Recompute all values." }, { "heading": "A.1.2 TOTAL DIRECT EFFECT", "text": "In an SCM, let $M$ be the mediator variables such that the path $X \to M \to Y$ exists. The formal definition of the Total Direct Effect (TDE) is\n\n$$\mathrm{TDE} = Y_x(u) - Y_{x^*, m}(u),$$\n\nwhere $m$ are the original values of the mediator variables. Thus, an additional intervention $do(M = m)$ is required to compute the TDE.
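The three-step recipe above, together with the TDE, can be made concrete in a small numpy sketch. This is a toy illustration with random stand-in weights (not our implementation); it assumes an SCM in which X has no mediators, so that the TDE reduces to Y_x − Y_{x∗}, and it uses a zeroed representation as the counterfactual value x∗:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, hid, n_classes = 5, 8, 3  # toy sizes

# Token representations H_i and per-token readout weights W_i^Y
# (random stand-ins for a trained encoder; purely illustrative).
H = rng.normal(size=(n_tokens, hid))
W = rng.normal(size=(n_tokens, n_classes, hid))

def f_Y(H_states):
    """Scoring function: sum of per-token contributions W_i^Y H_i."""
    return sum(W[i] @ H_states[i] for i in range(n_tokens))

target = 2                      # index of the variable of interest X
y_factual = f_Y(H)              # Y_x: the original prediction

# Counterfactual Y_{x*}:
# 1) assign a new value x* to X (here: a masked, zeroed representation),
# 2) cut X off from its parents (we simply overwrite H[target]),
# 3) recompute all values; S and Z keep their original (factual) values.
H_cf = H.copy()
H_cf[target] = np.zeros(hid)    # H_{x*}
y_counterfactual = f_Y(H_cf)

# With no mediators for X, the TDE reduces to Y_x - Y_{x*}.
tde = y_factual - y_counterfactual
```

Because all non-target contributions cancel, `tde` equals exactly the masked token's own contribution `W[target] @ H[target]`, which is the main effect the masking is designed to isolate.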
Fortunately, our SCM shown in Figure 1 does not have mediators for $X$, so the computation reduces to\n\n$$\mathrm{TDE} = Y_x(u) - Y_{x^*}(u) = Y_x - Y_{x^*}$$\n\nOne may ask why $X$ imposes no effect on $Z$, which includes the POS and NER tags, for relation extraction. This is because the POS and NER tags are provided in the dataset and we do not use them for joint training; thus, there is no direct dependency between the contextual representation and the tag representations." }, { "heading": "A.2 DATASET STATISTICS", "text": "We give the statistics of the five datasets in Table 4. Following (Liu et al., 2019b), we split the training set into Few-shot (Few), Medium-shot (Medium), and Many-shot (Many) subsets based on the distribution of class types and counts. Details are given in Table 5." }, { "heading": "A.3 EXPERIMENT SETTINGS", "text": "We use spaCy3 to generate the dependency tree, NER tags, and POS tags for an input sentence. The hyperparameters used for the three tasks are listed in Table 6, Table 7, and Table 8; we report the parameters in separate tables because the settings vary by task." }, { "heading": "A.4 MORE DISCUSSIONS", "text": "We add more discussions here based on Section 4.5. 3https://spacy.io/" }, { "heading": "A.4.1 WHAT ARE THE MOST IMPORTANT FACTORS FOR THE ED TASK?", "text": "To answer this question, we conduct experiments on ACE2005. We hypothesise that factors such as the 2-hop and 1-hop context on the dependency tree, the entity itself, the POS feature, and the NER feature may be key clues for ED predictions. The design of our experiments here is similar to that of the NER task described in Section 4.5. Figure 10 shows a qualitative example of predicting the event type for the word “shot”.
Specifically, Figure 10(a) visualizes the variance of the predictions: the histograms on the left show the prediction probabilities for the ground-truth class, while those on the right show the maximum prediction probabilities among the remaining classes. Figure 10(b) illustrates how we mask the context based on a dependency tree. We reach the same conclusion that masking the word itself, i.e., “shot”, leads to the most significant performance drop, indicating that the entity itself serves as a key clue for ED classification. We can also see that the 1-hop neighbors in the dependency tree play the second most important role: when they are masked, the margin between the ground-truth class probability and the highest incorrect class probability shrinks, indicating a decline in the model’s classification ability." }, { "heading": "A.4.2 WHAT ARE THE MOST IMPORTANT FACTORS FOR THE RE TASK?", "text": "To answer this question, we conduct experiments on the NYT24 dataset. We hypothesise that factors such as the context on the shortest path between the targets, the contextualized word representations, the POS feature, and the NER feature may be key clues for RE predictions. The design of our experiments here is similar to that of the NER task described in Section 4.5. Figure 11 shows a qualitative example of predicting the relation type for the targets “Italy” and “Modena”. Specifically, Figure 11(a) visualizes the variance of the predictions: the histograms on the left show the prediction probabilities for the ground-truth class, while those on the right show the maximum prediction probabilities among the remaining classes. Figure 11(b) illustrates how we mask the context based on a dependency tree.
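The dependency-based masking used in these ablations can be sketched in a few lines. This is a toy illustration, not our released code; the function and strategy names, and the toy parse, are our own:

```python
def mask_tokens(tokens, target, dep_edges, strategy):
    """Return a copy of `tokens` with [MASK] applied per strategy.
    dep_edges: list of (head, dependent) index pairs from the parse.
    strategy: 'token', '1hop', or 'token+1hop' (names are ours)."""
    # 1-hop neighbours of the target in the (undirected) dependency tree.
    one_hop = ({h for h, d in dep_edges if d == target}
               | {d for h, d in dep_edges if h == target})
    to_mask = set()
    if strategy in ("token", "token+1hop"):
        to_mask.add(target)
    if strategy in ("1hop", "token+1hop"):
        to_mask |= one_hop
    return ["[MASK]" if i in to_mask else t for i, t in enumerate(tokens)]

# Toy example: mask the word "shot" and/or its dependency neighbours.
tokens = ["the", "guard", "shot", "the", "intruder"]
edges = [(2, 1), (2, 4), (1, 0), (4, 3)]  # (head, dependent), toy parse
masked = mask_tokens(tokens, target=2, dep_edges=edges, strategy="token+1hop")
# → ['the', '[MASK]', '[MASK]', 'the', '[MASK]']
```

Feeding the masked sequence back through the classifier and comparing the class probabilities against the unmasked run yields the per-factor effects visualized in Figures 10 and 11.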
When we mask the context on the shortest path, the probability of the ground-truth class drops significantly and the model makes a wrong prediction, indicating the importance of the context on the shortest path between subject and object in the RE task.\n\nA.4.3 HOW DOES THE HYPER-PARAMETER α IMPACT THE PERFORMANCE?\n\nHere we show the performance on the OntoNotes5.0, ACE2005, and MAVEN datasets for various values of α. As shown in Figure 12, the trends are similar across datasets. The optimal values are 0.9, 1.5, and 1.5 on OntoNotes5.0, ACE2005, and MAVEN, respectively." }, { "heading": "A.4.4 EXPLORING DIFFERENT INTERVENTIONS AND SCMS FOR THE NER TASK.", "text": "We conduct experiments for the NER task on the OntoNotes5.0 dataset with different intervention methods and SCMs. The design and conclusions are similar to those of the ED task described in Section 4.5. The results are shown in Table 13. Specifically, intervening only on X achieves the best performance, indicating that our method is capable of capturing the most significant effect. Furthermore, including the POS tag in the causal graph incorporates its useful information while eliminating the bias in POS tags." }, { "heading": "A.5 MEASURING CAUSAL EFFECTS OF VARIOUS FACTORS", "text": "We measure the causal effects of different factors for the RE task. We define a set of factors F = {X, S, NER, POS, TAGS, Context, DepEdges}, where X, S, NER, and POS are variables defined in our SCM, TAGS includes both the NER tag and the POS tag, Context denotes the tokens along the shortest path between subject and object, and DepEdges denotes the dependency edges connected to either subject or object. We calculate the causal effect by Equation 7, where x∗ is generated by masking each factor in F. Instead of measuring the effect on a specific instance, we calculate the average effect on the ground-truth class over all samples in the NYT24 dataset.
A larger value indicates a more significant causal effect of the given factor on the ground-truth label. From Table 9 we observe that X and Context have the largest effects on the ground truth, which is captured by our model. We can also conclude that masking tokens in the dependency tree is a better choice than masking dependency relations." }, { "heading": "A.6 MORE DETAILED EXPERIMENTAL RESULTS", "text": "For the NER and ED tasks, we report more detailed comparisons on the OntoNotes5.0, ATIS, ACE2005, and MAVEN datasets in Table 10, Table 11, Table 12, and Table 13, respectively. We also report the detailed results for RE on the NYT24 dataset in Table 14." } ]
2020
null
SP:b2f83cd755f4da835e943237e2ba6faf69e8008a
[ "This paper sheds light on how trained RNNs solve text classification problems by analyzing them from a dynamical systems perspective. It extends recent work where a similar analysis was applied to the simpler setting of binary sentiment classification. When projecting the RNN hidden states onto the principal dimensions that explain most of the variance, the authors find (N-1)-dimensional simplex attractors for N-class classification, 2D attractors for ordered classification, and N-dimensional hypercubes for multi-label classification. ", "This paper presents an analysis of trained recurrent neural networks (RNNs), especially for NLP classification problems. The analysis takes a dynamical systems point of view and investigates the dynamics by looking at the Jacobians around the fixed points. This work finds low dimensionality and attractor dynamics in the RNNs, which might lead to a better understanding of RNNs." ]
Despite the widespread application of recurrent neural networks (RNNs), a unified understanding of how RNNs solve particular tasks remains elusive. In particular, it is unclear what dynamical patterns arise in trained RNNs, and how those patterns depend on the training dataset or task. This work addresses these questions in the context of text classification, building on earlier work studying the dynamics of binary sentiment-classification networks (Maheswaranathan et al., 2019). We study text-classification tasks beyond the binary case, exploring the dynamics of RNNs trained on both natural and synthetic datasets. These dynamics, which we find to be both interpretable and low-dimensional, share a common mechanism across architectures and datasets: specifically, these text-classification networks use low-dimensional attractor manifolds to accumulate evidence for each class as they process the text. The dimensionality and geometry of the attractor manifold are determined by the structure of the training dataset, with the dimensionality reflecting the number of scalar quantities the network remembers in order to classify. In categorical classification, for example, we show that this dimensionality is one less than the number of classes. Correlations in the dataset, such as those induced by ordering, can further reduce the dimensionality of the attractor manifold; we show how to predict this reduction using simple word-count statistics computed on the training dataset. To the degree that integration of evidence towards a decision is a common computational primitive, this work continues to lay the foundation for using dynamical systems techniques to study the inner workings of RNNs.
[ { "affiliations": [], "name": "Kyle Aitken" }, { "affiliations": [], "name": "Vinay V. Ramasesh" }, { "affiliations": [], "name": "Niru Maheswaranathan" } ]
[ { "authors": [ "Yonatan Belinkov", "James R. Glass" ], "title": "Analysis methods in neural language processing: A survey", "venue": null, "year": 2018 }, { "authors": [ "Francesco Camastra", "Alessandro Vinciarelli" ], "title": "Estimating the intrinsic dimension of data with a fractal-based method", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2002 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine", "venue": "translation. CoRR,", "year": 2014 }, { "authors": [ "Jasmine Collins", "Jascha Sohl-Dickstein", "David Sussillo" ], "title": "Capacity and trainability in recurrent neural networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Scott Deerwester", "Susan T Dumais", "George W Furnas", "Thomas K Landauer", "Richard Harshman" ], "title": "Indexing by latent semantic analysis", "venue": "Journal of the American society for information science,", "year": 1990 }, { "authors": [ "Dorottya Demszky", "Dana Movshovitz-Attias", "Jeongwoo Ko", "Alan Cowen", "Gaurav Nemade", "Sujith Ravi" ], "title": "Goemotions: A dataset of fine-grained emotions", "venue": "pp. 4040–4054,", "year": 2020 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Ian D. 
Jordan", "Piotr Aleksander Sokol", "Il Memming Park" ], "title": "Gated recurrent units viewed through the lens of continuous time dynamical systems, 2019", "venue": null, "year": 2019 }, { "authors": [ "Andrej Karpathy", "Justin Johnson", "Fei-Fei Li" ], "title": "Visualizing and understanding recurrent networks", "venue": "CoRR, abs/1506.02078,", "year": 2015 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Elizaveta Levina", "Peter J Bickel" ], "title": "Maximum likelihood estimation of intrinsic dimension", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Niru Maheswaranathan", "David Sussillo" ], "title": "How recurrent networks implement contextual processing in sentiment analysis", "venue": "arXiv preprint arXiv:2004.08013,", "year": 2020 }, { "authors": [ "Niru Maheswaranathan", "Alex Williams", "Matthew Golub", "Surya Ganguli", "David Sussillo" ], "title": "Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Christopher Manning", "Hinrich Schutze" ], "title": "Foundations of statistical natural language processing", "venue": "MIT press,", "year": 1999 }, { "authors": [ "Itamar Procaccia", "Peter Grassberger" ], "title": "Measuring the strangeness of strange attractors", "venue": "Physica. 
D,", "year": 1983 }, { "authors": [ "Alec Radford", "Rafal Józefowicz", "Ilya Sutskever" ], "title": "Learning to generate reviews and discovering sentiment", "venue": null, "year": 2017 }, { "authors": [ "Friedrich Schuessler", "Francesca Mastrogiuseppe", "Alexis Dubreuil", "Srdjan Ostojic", "Omri Barak" ], "title": "The interplay between randomness and structure during learning in rnns, 2020", "venue": null, "year": 2020 }, { "authors": [ "David Sussillo", "Omri Barak" ], "title": "Opening the black box: low-dimensional dynamics in highdimensional recurrent neural networks", "venue": "Neural computation,", "year": 2013 }, { "authors": [ "Saurabh Vyas", "Matthew D Golub", "David Sussillo", "Krishna V Shenoy" ], "title": "Computation through neural population dynamics", "venue": "Annual Review of Neuroscience,", "year": 2020 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "Advances in Neural Information Processing Systems", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern recurrent neural networks (RNNs) can achieve strong performance in natural language processing (NLP) tasks such as sentiment analysis, document classification, language modeling, and machine translation. However, the inner workings of these networks remain largely mysterious.\nAs RNNs are parameterized dynamical systems tuned to perform specific tasks, a natural way to understand them is to leverage tools from dynamical systems analysis. A challenge inherent to this approach is that the state space of modern RNN architectures—the number of units comprising the hidden state—is often high-dimensional, with layers routinely comprising hundreds of neurons. This dimensionality renders the application of standard representation techniques, such as phase portraits, difficult. Another difficulty arises from the fact that RNNs are monolithic systems trained end-toend. Instead of modular components with clearly delineated responsibilities that can be understood and tested independently, neural networks could learn an intertwined blend of different mechanisms needed to solve a task, making understanding them that much harder.\n∗Work started while an intern at Google. †Equal contribution.\nRecent work has shown that modern RNN architectures trained on binary sentiment classification learn low-dimensional, interpretable dynamical systems (Maheswaranathan et al., 2019). These RNNs were found to implement an integration-like mechanism, moving their hidden states along a line of stable fixed points to keep track of accumulated positive and negative tokens. Later, Maheswaranathan & Sussillo (2020) showed that contextual processing mechanisms in these networks— e.g. for handling phrases like not good—build on top of the line-integration mechanism, employing an additional subspace which the network enters upon encountering a modifier word. 
The understanding achieved in those works suggests the potential of the dynamical systems perspective, but it remained to be seen whether this perspective could shed light on RNNs in more complicated settings.\nIn this work, we take steps towards understanding RNN dynamics in more complicated language tasks, illustrating recurrent network dynamics in multiple text-classification tasks with more than two categories. The tasks we study—document classification, review score prediction (from one to five stars), and emotion tagging—exemplify three distinct types of classification tasks. As in the binary sentiment case, we find integration of evidence to underlie the operations of these networks; however, in multi-class classification, the geometry and dimensionality of the integration manifold depend on the type of task and the structure of the training data. Understanding and precisely characterizing this dependence is the focus of the present work." }, { "heading": "Our contributions", "text": "• We study three distinct types of text-classification tasks—categorical, ordered, and multi-labeled—and find empirically that the resulting hidden state trajectories lie largely in a low-dimensional subspace of the full state space.\n\n• Within this low-dimensional subspace, we find a manifold of approximately stable fixed points1 near the network trajectories, and by linearizing the network dynamics, we show that this manifold enables the networks to integrate evidence for each classification as they process the sequence.\n\n• We find (N − 1)-dimensional simplex attractors2 for N-class categorical classification, planar attractors for ordered classification, and attractors resembling hypercubes for multi-label classification, explaining these geometries in terms of the dataset statistics.\n\n• We show that the dimensionality and geometry of the manifold reflect characteristics of the training dataset, and demonstrate that simple word-count statistics of the dataset can explain 
the observed geometries.\n\n• We develop clean, simple synthetic datasets for each type of classification task. Networks trained on these synthetic datasets exhibit similar dynamics and manifold geometries to networks trained on corresponding natural datasets, furthering an understanding of the underlying mechanism.\n\nRelated work Our work builds directly on previous analyses of binary sentiment classification by Maheswaranathan et al. (2019) and Maheswaranathan & Sussillo (2020). Apart from these works, the dynamical properties of continuous-time RNNs have been extensively studied (Vyas et al., 2020), largely for connections to neural computation in biological systems. Such analyses have recently begun to yield insights on discrete-time RNNs: for example, Schuessler et al. (2020) showed that training continuous-time RNNs on low-dimensional tasks led to low-dimensional updates to the networks’ weight matrices; this observation held empirically in binary sentiment LSTMs as well. Similarly, by viewing the discrete-time GRU as a discretization of a continuous-time dynamical system, Jordan et al. (2019) demonstrated that the continuous-time analogue could express a wide variety of dynamical features, including essentially nonlinear features like limit cycles.\n\nUnderstanding and interpreting learned neural networks is a rapidly-growing field. Specifically in the context of natural language processing, the body of work on interpretability of neural models is reviewed thoroughly in Belinkov & Glass (2018). Common methods of analysis include, for example, training auxiliary classifiers (e.g., part-of-speech) on RNN trajectories to probe the network’s representations; use of challenge sets to capture wider language phenomena than seen in natural corpora; and visualization of hidden unit activations as in Karpathy et al. (2015) and Radford et al. (2017).\n\n1As will be discussed in more detail below, by fixed points we mean hidden state locations that are approximately fixed on time-scales of order of the average phrase length for the task at hand. Throughout this work we will use the term fixed point manifold to be synonymous with manifolds of slow points.\n\n2A 1-simplex is a line segment, a 2-simplex a triangle, a 3-simplex a tetrahedron, etc. A simplex is regular if it has the highest degree of symmetry (e.g. an equilateral triangle is a regular 2-simplex)." }, { "heading": "2 SETUP", "text": "Models We study three common RNN architectures: LSTMs (Hochreiter & Schmidhuber, 1997), GRUs (Cho et al., 2014), and UGRNNs (Collins et al., 2016). We denote their n-dimensional hidden state and d-dimensional input at time t as ht and xt, respectively. The function that applies the hidden-state update for these networks will be denoted by F, so that ht = F(ht−1, xt). The network’s hidden state after the entire example is processed, hT, is fed through a linear layer to get N output logits, one for each label: y = WhT + b. We call the rows of W ‘readout vectors’ and denote the readout corresponding to the ith neuron by ri, for i = 1, . . . , N. Throughout the main text, we will present results for the GRU architecture. Qualitative features of results were found to be constant across all architectures; additional results for LSTMs and UGRNNs are given in Appendix E.\n\nTasks The classification tasks we study fall into three categories. In the categorical case, samples are classified into non-overlapping classes, for example “sports” or “politics”. By contrast, in the ordered case, there is a natural ordering among labels: for example, predicting a numerical rating (say, out of five stars) accompanying a user’s review. Like the categorical labels, ordered labels are exclusive. 
Some tasks, however, involve labels which may not be exclusive; an example of this multi-labeled case is tagging a document for the presence of one or more emotions. A detailed description of the natural and synthetic datasets used is provided in Appendices C and D, respectively.\n\nLinearization and eigenmodes Part of our analysis relies on linearization to render the complex RNN dynamics tractable. This linearization is possible because, as we will see, the RNN states visited during training and inference lie near approximate fixed points $h^*$ of the dynamics—points that the update equation leaves (approximately) unchanged, i.e. for which $h^* \approx F(h^*, x)$.3 Near these points, the dynamics of the displacement $\Delta h_t := h_t - h^*$ from the fixed point $h^*$ is well approximated by the linearization\n\n$$\Delta h_t \approx J^{\mathrm{rec}}\big|_{(h^*, x^*)}\, \Delta h_{t-1} + J^{\mathrm{inp}}\big|_{(h^*, x^*)}\, (x_t - x^*), \qquad (1)$$\n\nwhere we have defined the recurrent and input Jacobians $J^{\mathrm{rec}}_{ij}(h, x) := \partial F(h, x)_i / \partial h_j$ and $J^{\mathrm{inp}}_{ij}(h, x) := \partial F(h, x)_i / \partial x_j$, respectively (see Appendix A for details).\n\nIn the linear approximation, the spectrum of $J^{\mathrm{rec}}$ plays a key role in the resulting dynamics. Each eigenmode of $J^{\mathrm{rec}}$ represents a displacement whose magnitude either grows or shrinks exponentially in time, with a timescale $\tau_a$ determined by the magnitude of the corresponding (complex) eigenvalue $\lambda_a$ via the relation $\tau_a := |\log |\lambda_a||^{-1}$. Eigenvalues within the unit circle thus represent stable (decaying) modes, while those outside represent unstable (growing) modes. The Jacobians we find in practice almost exclusively have stable modes, most of which decay on very short timescales (a few tokens). Eigenmodes near the unit circle have long timescales, and therefore facilitate the network’s storage of information.\n\nLatent semantic analysis For a given text classification task, one can summarize the data by building a matrix of word or token counts for each class (analogous to a document-term matrix (Manning & Schutze, 1999), where the documents are classes). 
Here, the i, j entry corresponds to the number of times the jth word in the vocabulary appears in examples belonging to the ith class. In effect, the column corresponding to a given word forms an “evidence vector”, i.e. a large entry in a particular row suggests strong evidence for the corresponding class. Latent semantic analysis (LSA) (Deerwester et al., 1990) looks for structure in this matrix via a singular value decomposition (SVD); if the evidence vectors lie predominantly in a low-dimensional subspace, LSA will pick up on this structure. The top singular modes define a “semantic space”: the left singular vectors correspond to the projections of each class label into this space, and the right singular vectors correspond to how individual tokens are represented in this space.\n\n3Although the fixed point expression depends on the input x, throughout this text we will only study fixed points with the zero input. That is, we focus on the autonomous dynamical system given by $h_{t+1} = F(h_t, 0)$ (see Appendix A for details).\n\nBelow, we will show that RNNs trained on classification tasks pick up on the same structure in the dataset as LSA; the dimensionality and geometry of the semantic space predict corresponding features of the RNNs.\n\nRegularization While the main text focuses on the interaction between dataset statistics and resulting network dimensionality, regularization also plays a role in determining the dynamical structures. In particular, strongly regularizing the network can reduce the dimensionality of the resulting manifolds, while weakly regularizing can increase the dimensionality. Focusing on ℓ2-regularization, we document this effect for the synthetic and natural datasets in Appendices D.1 and F, respectively."
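The LSA computation described above amounts to an SVD of a small class-by-token count matrix; here is a minimal numpy sketch with made-up counts (the mean-centering step is our own, commonly used preprocessing choice and may differ from the exact pipeline used for the figures):

```python
import numpy as np

# Toy class-by-token count matrix: rows = classes, columns = vocabulary.
# The column for a given word is its "evidence vector" over classes.
#                 football  dollar  election  the
counts = np.array([[  40,      2,       1,    100],   # Sports
                   [   3,     50,       4,     90],   # Business
                   [   1,      5,      45,     95]])  # Politics

# Center each word column across classes, so ubiquitous tokens
# (like "the") carry no class evidence.
X = counts - counts.mean(axis=0)

# SVD: left singular vectors project class labels into the semantic
# space; right singular vectors place tokens in that same space.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

var_explained = S**2 / np.sum(S**2)

class_coords = U[:, :2] * S[:2]   # class labels in the 2-D semantic plane
token_coords = Vt[:2].T           # token evidence vectors, same plane
```

After centering, the three class rows sum to zero, so at most two singular values are nonzero: for a 3-class task the semantic space is (at most) two-dimensional, mirroring the triangle geometry found in the trained RNNs.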
}, { "heading": "3 RESULTS", "text": "" }, { "heading": "3.1 CATEGORICAL CLASSIFICATION YIELDS SIMPLEX ATTRACTORS", "text": "We begin by analyzing networks trained on categorical classification datasets, with natural examples including news articles (AG News dataset) and encyclopedia entries (DBPedia Ontology dataset). We find dynamics in these networks which are largely low-dimensional and governed by integration. Contrary to our initial expectations, however, the dimensionality of the network’s integration manifolds is not simply equal to the number of classes in the dataset. For example, rather than exploring a three-dimensional cube, RNNs trained on 3-class categorical tasks exhibit a largely two-dimensional state space which resembles an equilateral triangle (Fig. 1a, d). As we will see, this is an example of a pattern that generalizes to larger numbers of classes.\n\nSynthetic categorical data To study how networks perform N-class categorical classification, we introduce a toy language whose vocabulary includes N + 1 words: a single evidence word “evidi” for each label i, and a neutral word “neutral”. Synthetic phrases, generated randomly, are labeled with the class for which they contain the most evidence words (see Appendix D for more details). This is analogous to a simple mechanism which classifies documents as, e.g., “sports” or “finance” based on whether they contain more instances of the word “football” or “dollar”.\n\nThe main features of the categorical networks’ integration manifolds are clearly seen in the 3-class synthetic case. First, the dynamics are low-dimensional: performing PCA on the set of hidden states explored from hundreds of test phrases reveals that more than 97% of its variance is contained in the top two dimensions. Projected onto these dimensions, the set of network trajectories takes the shape of an equilateral triangle (Fig. 1a). 
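A generator for this toy language fits in a few lines (a sketch consistent with the description above; the phrase length, sampling probabilities, and tie-resampling rule are our own choices rather than the exact details of Appendix D):

```python
import random

def make_phrase(n_classes=3, length=20, p_neutral=0.5, rng=None):
    """Random phrase over {evid0..evid{N-1}, neutral}, labeled by the
    class whose evidence word appears most often (ties are resampled)."""
    rng = rng or random.Random(0)
    while True:
        tokens = ["neutral" if rng.random() < p_neutral
                  else f"evid{rng.randrange(n_classes)}"
                  for _ in range(length)]
        counts = [tokens.count(f"evid{i}") for i in range(n_classes)]
        if counts.count(max(counts)) == 1:   # unique majority label
            return tokens, counts.index(max(counts))

phrase, label = make_phrase()
```

Training an RNN classifier on many such (phrase, label) pairs is all that is needed to reproduce the triangle-shaped state space described above.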
Diving deeper into the dynamics of the trained network, we examine the deflections, or change in hidden state, ∆ht, induced by each word. The deflections due to evidence words “evidi” align with the corresponding readout vector ri at all times (Fig. 1b). Meanwhile, the deflection caused by the “neutral” word is much smaller, and on average, nearly zero. This suggests that the RNN dynamics approximate that of a two-dimensional integrator: as the network processes each example, evidence words move its hidden state within the triangle in a manner that is approximately constant across the phrase. The location of the hidden state within the triangle encodes the integrated, relative counts of evidence for each of the three classes. Since the readouts are of approximately equal magnitude and align with the triangle’s vertices, the phrase is ultimately classified by whichever vertex is closest to the final hidden state. This corresponds to the evidence word that appears most often in the given phrase.\n\nNatural categorical data Despite the simplicity of the synthetic categorical dataset, its working mechanism generalizes to networks trained on natural datasets. We focus here on the 3-class AG News dataset, with matching results for 4-class AG News and 3- and 4-class DBPedia Ontology in Appendix E. Hidden states of these networks, as in the synthetic case, fill out an approximate equilateral triangle whose vertices once again lie parallel to the readout vectors (Fig. 1d). While these results bear a strong resemblance to their synthetic counterparts, the manifolds for natural datasets are, unsurprisingly, less symmetric.\n\nThough the vocabulary in natural corpora is much larger than the synthetic vocabulary, the network still learns the same underlying mechanism: by suitably arranging its input Jacobian and embedding vectors, it aligns an input word’s deflection in the direction that changes relative class scores appropriately (Fig. 1e). 
Most words behave like the synthetic word “neutral”, causing little movement within the plane; certain words, however, (like “football”) cause a large shift toward a particular vertex (in this case, “Sports”). Again, the perturbation is relatively uniform across the plane, indicating that the order of words does not strongly influence the network’s prediction.\nIn both synthetic and natural cases, the two-dimensional integration mechanism is enabled by a manifold of approximate fixed points, or slow points, near the network’s hidden state trajectories, which allow the network to maintain its position in the absence of new evidence for all t = 1, . . . , T . As the position within the plane encodes the network’s integrated evidence, this maintenance is essential. In all 3-class categorical networks, we find a planar, approximately triangle-shaped manifold of fixed points which lie near the network trajectories (Fig. 1c, f); vertices of this manifold align with\nthe readout vectors. PCA reveals the dimensionality of this manifold to be very similar to that of the hidden states.\nSince we find the network’s hidden state trajectories always lie close to the fixed point manifold, we can use the fixed points’ stability as an approximate measure of the network’s ability to store integrated evidence. We check for stability directly by linearizing the dynamics around each fixed point and examining the spectra of the recurrent Jacobians. Almost all of the Jacobian’s eigenvalues are well within the unit circle, corresponding to perturbations which decay on the timescale of a few tokens. Only two modes, which lie within the fixed-point plane, are capable of preserving information on timescales on the order of the mean document length (Fig. 2a). This linear stability analysis confirms our picture of a two-dimensional attractor manifold of fixed points; the network dynamics quickly suppress activity in dimensions outside of the fixed-point plane. Integration, i.e. 
motion within the fixed-point plane, is enabled by two eigenmodes with long time constants (relative to the average phrase length).\nLSA predictions Intuitively, the two-dimensional structure in this three-class classification task reflects the fact that the network tracks the relative scores between the three classes to make its prediction. To see this two-dimensional structure quantitatively in the dataset statistics, we apply latent semantic analysis (LSA) to the dataset, finding a low-rank approximation to the evidence vectors of all the words in the vocabulary. This analysis (Fig. 1h) shows that two modes suffice to capture the variance, just as we observed in the RNNs. Moreover, the class vectors projected into this space (Fig. 1g) match exactly the structure observed in the RNN readouts. The network appears to pick up on the same structure in the dataset’s class counts identified by LSA.\nGeneral N-class categorical networks The triangular structure seen in the 3-class networks above is an example of a general pattern: N-class categorical classification tasks result in an (N−1)-dimensional simplex attractor (Fig. 3a). We verify this with synthetic data consisting of up to 10 classes, analyzing the subspace of Rn explored by the resulting networks. More than 95% of the variance of the hidden states is contained in N−1 dimensions, with the subspace taking on the approximate shape of a regular (N−1)-simplex centered about the origin (Fig. 3b). The readout vectors, which lie almost entirely within this (N−1)-dimensional subspace, align with the simplex’s vertices. Mirroring the 3-class case, dynamics occur near a manifold of fixed points which is also shaped like an (N−1)-simplex. This simplex geometry reflects the fact that to classify between N classes, the network must track N−1 scalars: the relative scores for each class.
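The simplex mechanism just described can be reproduced with a minimal hand-built integrator; the sketch below is our own illustration (readouts placed analytically at simplex vertices, not taken from a trained network):

```python
import numpy as np

def simplex_readouts(n_classes):
    """Readout vectors at the vertices of a regular (N-1)-simplex,
    obtained by projecting the standard basis of R^N onto the
    subspace orthogonal to the all-ones vector."""
    return np.eye(n_classes) - np.ones((n_classes, n_classes)) / n_classes

def integrate_and_classify(word_classes, n_classes):
    """Deflect the state toward the vertex of each evidence word's
    class, then classify by the largest readout projection
    (equivalently, the nearest vertex)."""
    readouts = simplex_readouts(n_classes)
    h = np.zeros(n_classes)
    for c in word_classes:
        h += readouts[c]  # a 'neutral' word would simply contribute nothing
    return int(np.argmax(readouts @ h))
```

For N = 3 the pairwise angle between these readouts is arccos(−1/2) = 120 degrees, i.e. the equilateral triangle of Fig. 1, and the classifier returns whichever class contributed the most evidence words.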
As a natural example of this simplex, the full, 4-class AG News dataset results in networks whose trajectories explore an approximate 3-simplex, or tetrahedron. The fixed points also form a 3-dimensional tetrahedral attractor (Fig. 3c). Additional results for 4-class natural datasets, which also yield tetrahedron attractors, are shown in Appendix E." }, { "heading": "3.2 ORDERED CLASSIFICATION YIELDS PLANE ATTRACTORS", "text": "Having seen networks employ simplex attractors to integrate evidence in categorical classification, we turn to ordered datasets, with Yelp and Amazon review star prediction as natural examples. Star prediction is a more finely-grained version of binary sentiment prediction, which RNNs solve by integrating valence along a one-dimensional line attractor (Maheswaranathan et al., 2019). This one-dimensional mechanism turns out not to carry over to either 3-class or 5-class star prediction.\nFor a network trained on the 5-class Yelp dataset, we plot the two-dimensional projection of RNN trajectories while processing a test batch of reviews, as well as the readout vectors for each class (Fig. 4d). Similar results for 3-class Yelp and Amazon networks are in Appendix E. The top two dimensions capture more than 98% of the variance in the explored states: as with categorical classification tasks, the dynamics here are largely low-dimensional. A manifold of fixed points that is also planar exists nearby (Fig. 4f). The label predicted by the network is determined almost entirely by the position within the plane. Additionally, eigenmodes of the linearized dynamics around these fixed points show two slow modes with timescales comparable to document length, separated by a clear gap from the other, much faster, modes (Fig. 2c, d).
These two integration modes lie almost entirely within the fixed-point plane, while the others are nearly orthogonal to it.\nThese facts suggest that, in contrast to binary sentiment analysis, 5-class (and 3-class) ordered networks are two-dimensional, tracking two scalars associated with each token rather than simply a single sentiment score. As an initial clue to understanding what these two dimensions represent, we examine the deflections in the plane caused by particular words (Fig. 4f). These deflections span two dimensions: in contrast to a one-dimensional integrator, the word ‘horrible’ has a different effect than multiple instances of a weaker word like ‘poor’. These two dimensions of the deflection vector seem to roughly correspond to a word’s “sentiment” (e.g. good vs. bad) and “intensity” (strong vs. neutral). In this two-dimensional integration, a word like ‘okay’ is treated by the network as evidence of a neutral (e.g., 3-star) review.\nInspired by these observations, we build a synthetic ordered dataset with a word bank {amazing, good, okay, bad, awful, neutral}, in which each word now has a separate sentiment and intensity score.4 Labels are assigned to phrases based on both their total sentiment and intensity; e.g., phrases with low intensity and sentiment scores are classified as “3 stars”, while phrases with high positive sentiment and high intensity are “5 stars” (see Appendix D.2 for full details). Networks trained on this dataset correspond very well to the networks trained on the Amazon and Yelp datasets (Fig. 4a-c). Dynamics are largely two-dimensional, with readout vectors fanning out in the plane from five stars to one.
Deflections from individual words correspond roughly to the sentiment and intensity scores, and the underlying fixed-point manifold is two-dimensional.\nMore generally, the appearance of a plane attractor in both 3-class and 5-class ordered classification shows that in integration models, relationships (such as order) between classes can change the dimensionality of the network’s integration manifold. These relationships cause the LSA evidence vectors for each word to lie in a low-dimensional space. As in the previous section, we can see this low-dimensional structure in the dataset statistics themselves using LSA, which shows that two singular values explain more than 95% of the variance (Fig. 4f). Thus, the planar structure of these networks, with dimensions tracking both (roughly) sentiment and intensity, is a consequence of correlations present in the dataset itself." }, { "heading": "3.3 MULTI-LABELED CLASSIFICATION YIELDS INDEPENDENT ATTRACTORS", "text": "So far, we have studied classification datasets where there is only a single label per example. This only requires networks to keep track of the relative evidence for each class, as the overall evidence does not affect the classification. Put another way, the softmax activation used in the final layer will normalize out the total evidence accumulated for a given example. This results in networks that, for an N-way classification task, need to integrate (or remember) at most N−1 quantities, as we have seen above. However, this is not true in multi-label classification. Here, individual class labels are assigned independently to each example (the task involves N independent binary decisions). Networks trained on this task do need to keep track of the overall evidence level.\nTo study how this changes the geometry of integration, we trained RNNs on a multi-label classification dataset, GoEmotions (Demszky et al., 2020). Here, the labels are emotions, and a particular text may be labeled with multiple emotions.
We trained networks on two reduced variants of the full dataset, keeping only two or three labels. The results for three labels are detailed in Appendix E.5. For the two-class version, we kept only the labels “admiration” and “approval”, and additionally resampled the dataset so that each of the 2² = 4 possible label combinations was equally likely. We found that RNNs learned a two-dimensional integration manifold where the readout vectors span a two-dimensional subspace (Fig. 5d), rather than a one-dimensional line as in binary classification. Across the fixed point manifold, there were consistently two slow eigenvalues (Fig. 5e), corresponding to the two integration modes. Similar to the previous datasets, increasing ℓ2 regularization would (eventually) compress the dimensionality, again measured using the participation ratio (Fig. 5f). Notably, GoEmotions is a highly imbalanced dataset; we found that balancing the number of examples per class was important to observe a match between the synthetic and the natural dynamics.\nThe synthetic version of this task classifies a phrase as if it were composed of two independent sentiment analyses (detailed in Appendix D.3). This is meant to represent the presence/absence of a given emotion in a given phrase, but ignores the possibility of correlations between certain emotions. After training a network on this data, we find a low-dimensional hidden-state space and fixed-point manifold that both take on the shape of a square (Fig. 5a, c). The deflections of words affecting independent labels act along orthogonal directions (Fig. 5b).\nThese results suggest that integration manifolds are also found in RNNs trained on multi-labeled classification datasets. Moreover, the geometry of the corresponding fixed points and readouts is different from the exclusive case; instead of an (N−1)-dimensional simplex we get an N-dimensional hypercube.
Again, this makes intuitive sense given that the networks must keep track of N independent quantities in order to solve these tasks.\n4Interestingly, a synthetic model that only tracks sentiment fails to match the dynamics of natural ordered data for N > 2. We take this as further evidence that natural ordered datasets classify based on two-dimensional integration. This simple model still produces surprisingly rich dynamics that we detail in Appendix D.2." }, { "heading": "4 DISCUSSION", "text": "In this work we have studied text classification RNNs using dynamical systems analysis. We found integration via attractor manifolds to underlie these tasks, and showed how the dimension and geometry of the manifolds were determined by statistics of the training dataset. As specific examples, we see (N−1)-dimensional simplexes in N-class categorical classification, where the network needs to track relative class scores; 2-dimensional attractors in ordered classification, reflecting the need to track sentiment and intensity; and N-dimensional hypercubes in N-class multi-label classification.\nWe hope this line of analysis (using dynamical systems tools to understand RNNs) builds toward a deeper understanding of how neural language models perform more involved tasks in NLP, including language modeling or translation. These tasks cannot be solved by a pure integration mechanism, but it is plausible that integration serves as a useful computational primitive in RNNs more generally, similar to how line attractor dynamics serve as a computational primitive on top of which contextual processing occurs (Maheswaranathan & Sussillo, 2020)." }, { "heading": "A METHODS", "text": "" }, { "heading": "A.1 FIXED-POINTS AND LINEARIZATION", "text": "We study several RNN architectures, and we will generically denote their n-dimensional hidden state and d-dimensional input at time t as ht and xt, respectively.
The function that applies the hidden state update for these networks will be denoted by F, so that ht = F(ht−1, xt). The N output logits are a readout of the final hidden state, y = WhT + b. We will denote the readout vector corresponding to the ith class by ri, for i = 1, . . . , N.\nWe define a fixed point of the hidden-state space to satisfy the expression h∗ = F(h∗, x). This definition of fixed points is inherently x-dependent. In this text, we focus on fixed points of the network for zero input, i.e. when x = 0. We will also be interested in finding points in hidden-state space that only satisfy this fixed point relation approximately, i.e. h∗ ≈ F(h∗, x). The slowness of the approximate fixed points can be characterized by defining a loss function $q := \frac{1}{n} \| h - F(h, x) \|_2^2$. Throughout this text we use the term fixed point to include these approximate fixed points as well.\nExpanding around a given hidden state and input, (he, xe), the first-order approximation of F is\n$$h_t \approx F(h^e, x^e) + \left. J^{\mathrm{rec}} \right|_{(h^e, x^e)} \left( h_{t-1} - h^e \right) + \left. J^{\mathrm{inp}} \right|_{(h^e, x^e)} \left( x_t - x^e \right) , \qquad (2)$$\nwhere we have defined the recurrent and input Jacobians as $J^{\mathrm{rec}}_{ij}(h, x) := \partial F(h, x)_i / \partial h_j$ and $J^{\mathrm{inp}}_{ij}(h, x) := \partial F(h, x)_i / \partial x_j$, respectively. If we expand about a fixed point h∗ ≈ F(h∗, x = 0), the effect of an input xt on the hidden state hT≥t can be approximated by $(J^{\mathrm{rec}})^{T-t} J^{\mathrm{inp}} x_t$. Writing the eigendecomposition $J^{\mathrm{rec}} = R \Lambda L$, with $L = R^{-1}$, we have\n$$(J^{\mathrm{rec}})^{T-t} J^{\mathrm{inp}} x_t = R \Lambda^{T-t} L J^{\mathrm{inp}} x_t = \sum_{a=1}^{n} r_a \, \lambda_a^{T-t} \, \ell_a^{\top} J^{\mathrm{inp}} x_t , \qquad (3)$$\nwhere Λ is the diagonal matrix containing the (complex) eigenvalues, sorted in order of decreasing magnitude, $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$; the $r_a$ are the columns of R; and the $\ell_a^{\top}$ are the rows of L. The magnitude of each eigenvalue of $J^{\mathrm{rec}}$ corresponds to a time constant $\tau_a = \left| \frac{1}{\log |\lambda_a|} \right|$.
The time constants, τa, approximately determine how long, and what information, the system remembers from a given input.\nWe find fixed points by minimizing a function which computes the magnitude of the displacement F(h, x = 0) − h resulting from applying the update rule at the point h. That is, we numerically solve\n$$\min_h \tfrac{1}{2} \| h - F(h, x = 0) \|_2^2 . \qquad (4)$$\nWe seed the minimization procedure with hidden states visited by the network while processing test examples. To better sample the region, we also add some isotropic Gaussian noise to the initial points." }, { "heading": "A.2 DIMENSIONALITY MEASURES", "text": "Here we provide details regarding the measures used to determine the dimensionality of both our hidden-state and fixed-point manifolds. When we discuss the dimensionality of a set of points, we will mean their intrinsic dimensionality. Roughly, this is the dimensionality of a manifold that summarizes the discrete data points, accounting for the fact that said manifold could be embedded in a higher-dimensional space in a non-linear fashion. For example, if the discrete points lie along a one-dimensional line that is non-linearly embedded in some two-dimensional space, then the measure of intrinsic dimensionality should be close to 1.\nLet X = {X1, . . . , XM} be the set of M points XI, for I = 1, . . . , M, for which we wish to measure the dimensionality. In this text, X is either a set of hidden states or a set of fixed points, and each XI ∈ Rn is a point in hidden-state space. To determine an accurate measure of dimensionality, we use the following measures:\n• Variance explained threshold. Let µ1 ≥ µ2 ≥ . . . ≥ µn be the eigenvalues from PCA (i.e. the variances) on X. A simple measure of dimensionality is to threshold the number of PCA dimensions needed to reach a certain percentage of variance explained. For a low number of classes, this threshold can simply be set at a fixed value such as 90% or 95%.
However, we would expect such a threshold to break down as the number of classes increases, so we also use an N-dependent threshold of N/(N + 1)%.\n• Global participation ratio. Again using PCA on X as above, the participation ratio (PR) is defined to be a scalar function of the eigenvalues:\n$$\mathrm{PR} := \frac{\left( \sum_{i=1}^{n} \mu_i \right)^2}{\sum_{i=1}^{n} \mu_i^2} . \qquad (5)$$\nIntuitively, this is a scalar measure of the number of “important” PCA dimensions.\n• Local participation ratio. Since PCA is a linear mapping, both of the above measures will fail if the manifold is highly non-linear. We thus implement a local PCA as follows: we choose a random point and compute its k nearest neighbors, then perform PCA on this subset of k + 1 points. We then calculate the participation ratio on the eigenvalues of the local PCA using equation 5. We repeat the process over several random points, and then average the results. This measure is dependent upon the hyperparameter k.\n• MLE measure of intrinsic dimension (Levina & Bickel, 2005). This is a nearest-neighbor-based measure of dimension. For a point XI, let Tk(XI) be the Euclidean distance to its kth nearest neighbor. Define the scalar quantities\n$$\hat{m}_k = \frac{1}{M} \sum_{I=1}^{M} \hat{m}_k(X_I) , \qquad \hat{m}_k(X_I) = \left[ \frac{1}{k-1} \sum_{j=1}^{k-1} \log \frac{T_k(X_I)}{T_j(X_I)} \right]^{-1} . \qquad (6)$$\nThis measure is also dependent upon the number of nearest neighbors k.\n• Correlation dimension (Procaccia & Grassberger, 1983; Camastra & Vinciarelli, 2002). Define the scalar quantity\n$$C_N(r) = \frac{2}{N(N-1)} \sum_{I=1}^{N} \sum_{J=I+1}^{N} \mathbb{1}\{ \| X_I - X_J \|_2 < r \} . \qquad (7)$$\nThen, plotting log CN(r) as a function of log r, the correlation dimension is found by estimating the slope of the linear part of the plot.\nWe plot these dimensionality measures on synthetic categorical data for class sizes N = 2 to 10 in Figure 6. Despite their simplicity, we find the 95% variance explained threshold and the global participation ratio to be the best match to what is theoretically predicted; hence we use these measures in the main text and in what follows.
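For concreteness, the global participation ratio of equation 5 can be computed directly from the covariance eigenvalues; a minimal sketch (the test points lying near a 2-plane in R^50 are synthetic data we made up for the sanity check):

```python
import numpy as np

def participation_ratio(points):
    """Participation ratio (eq. 5): mu_i are the PCA eigenvalues,
    i.e. the eigenvalues of the covariance of the points (rows = samples)."""
    centered = points - points.mean(axis=0)
    mu = np.linalg.eigvalsh(np.cov(centered.T))
    return mu.sum() ** 2 / np.sum(mu ** 2)

# Sanity check: points near a random 2-plane embedded in R^50 give PR close to 2.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 50))
points = rng.standard_normal((1000, 2)) @ basis + 0.01 * rng.standard_normal((1000, 50))
print(round(participation_ratio(points), 2))
```

Unlike a hard variance-explained threshold, the PR degrades gracefully when the variances along the important directions are unequal.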
}, { "heading": "B MODELS AND TRAINING", "text": "The three architectures we study are specified below, with W and b respectively representing trainable weight matrices and bias parameters, and ht denoting the hidden state at timestep t. All other vectors (c, g, r, i, f) represent intermediate quantities; σ(·) represents a pointwise sigmoid nonlinearity; and f(·) is the tanh nonlinearity." }, { "heading": "Update-Gate RNN (UGRNN)", "text": "$$h_t = g \cdot h_{t-1} + (1 - g) \cdot c , \qquad c = f\left( W^{ch} h_{t-1} + W^{cx} x_t + b^c \right) , \qquad g = \sigma\left( W^{gh} h_{t-1} + W^{gx} x_t + b^g \right) . \qquad (8)$$" }, { "heading": "Gated Recurrent Unit (GRU)", "text": "$$h_t = g \cdot h_{t-1} + (1 - g) \cdot c , \qquad c = f\left( W^{ch} (r \cdot h_{t-1}) + W^{cx} x_t + b^c \right) ,$$\n$$g = \sigma\left( W^{gh} h_{t-1} + W^{gx} x_t + b^g \right) , \qquad r = \sigma\left( W^{rh} h_{t-1} + W^{rx} x_t + b^r \right) . \qquad (9)$$\nLong-Short-Term-Memory (LSTM)\n$$h_t = c_t \oplus \tilde{h}_t , \qquad \tilde{h}_t = f(c_t) \cdot \sigma\left( W^{hh} h_{t-1} + W^{hx} x_t + b^h \right) ,$$\n$$c_t = f_t \cdot c_{t-1} + i_t \cdot \sigma\left( W^{ch} \tilde{h}_{t-1} + W^{cx} x_t + b^c \right) ,$$\n$$i_t = \sigma\left( W^{ih} h_{t-1} + W^{ix} x_t + b^i \right) , \qquad f_t = \sigma\left( W^{fh} h_{t-1} + W^{fx} x_t + b^f \right) . \qquad (10)$$\nHere ⊕ denotes concatenation of the memory ct and the hidden state h̃t into the full state ht. With the natural datasets, we form the input vectors xt by using a (learned) 128-dimensional embedding layer. The UGRNNs and GRUs have hidden-state dimension n = 256, while in the LSTMs, both the hidden state h̃t and the memory ct are 256-dimensional, yielding a total hidden-state dimension n = 512. For the synthetic datasets, due to their small vocabulary size, we simply pass one-hot encoded inputs into the RNN architectures, i.e. we use no embedding layer. For UGRNNs and GRUs, we use a hidden-state dimension of n = 128, while for LSTMs we again use the same dimension for both h̃t and ct, resulting in a total hidden-state dimension of n = 256.\nThe model’s predictions (logits for each class) are computed by passing the final hidden state hT through a linear layer.
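The UGRNN update (eq. 8) and the linear readout are compact enough to sketch directly; the weights below are random placeholders for illustration, not trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ugrnn_step(h_prev, x, params):
    """One UGRNN update (eq. 8): h_t = g * h_{t-1} + (1 - g) * c."""
    Wch, Wcx, bc, Wgh, Wgx, bg = params
    c = np.tanh(Wch @ h_prev + Wcx @ x + bc)   # candidate state, f = tanh
    g = sigmoid(Wgh @ h_prev + Wgx @ x + bg)   # update gate, entries in (0, 1)
    return g * h_prev + (1.0 - g) * c

n, d, n_classes = 128, 16, 3   # synthetic-setting hidden size; d and n_classes arbitrary here
rng = np.random.default_rng(0)
params = (rng.standard_normal((n, n)) / np.sqrt(n),
          rng.standard_normal((n, d)) / np.sqrt(d), np.zeros(n),
          rng.standard_normal((n, n)) / np.sqrt(n),
          rng.standard_normal((n, d)) / np.sqrt(d), np.zeros(n))
h = np.zeros(n)
for x in rng.standard_normal((10, d)):         # run a short token sequence
    h = ugrnn_step(h, x, params)
logits = rng.standard_normal((n_classes, n)) @ h  # final linear readout, y = W h_T
```

A gate value g near 1 copies the previous state forward almost unchanged, which is exactly what the slow recurrent-Jacobian eigenmodes of Section A.1 exploit to integrate evidence over long phrases.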
In the synthetic experiments, we do not add a bias term to this linear readout layer; this choice was made for simplicity and ease of interpretation.\nWe train the networks using the ADAM optimizer (Kingma & Ba, 2014) with an exponentially decaying learning rate schedule. We train using cross-entropy loss with added ℓ2 regularization, penalizing the squared ℓ2 norm of the network parameters. Natural experiments use a batch size of 64 with initial learning rate η = 0.01, clipping gradients to a maximum value of 30; the learning rate decays by a factor of 0.9984 every step. Synthetic experiments use a batch size of 128, initial learning rate η = 0.1, and a gradient clip of 10; the learning rate decays by a factor of 0.9997 every step." }, { "heading": "C NATURAL DATASET DETAILS", "text": "We use the following text classification datasets in this study:\n• The Yelp reviews dataset (Zhang et al., 2015) consists of Yelp reviews, labeled by the corresponding star rating (1 through 5). Each of the five classes features 130,000 training examples and 10,000 test examples. The mean length of a review is 143 words.\n• The Amazon reviews dataset (Zhang et al., 2015) consists of reviews of products bought on Amazon.com over an 18-year period. As with the Yelp dataset, these reviews are labeled by the corresponding star rating (1 through 5). Each of the five classes features 600,000 training examples and 130,000 test examples. The mean length of a review is 86 words.\n• The DBPedia ontology dataset (Zhang et al., 2015) consists of titles and abstracts of Wikipedia articles in one of 14 non-overlapping categories, from DBPedia 2014. Categories include: company, educational institution, artist, athlete, office holder, mean of transportation, building, natural place, village, animal, plant, album, film, and written work. Each class contains 40,000 training examples and 5,000 testing examples.
We use the abstract only for classification; mean abstract length is 56 words.\n• The AG’s news corpus (Zhang et al., 2015) contains titles and descriptions of news articles from the web, in the categories: world, sports, business, sci/tech. Each category features 30,000 training examples and 1,900 testing examples. We use only the descriptions for classification; the mean length of a description is 35 words.\n• The GoEmotions dataset (Demszky et al., 2020) contains text from 58,000 Reddit comments collected between 2005 and 2019. These comments are labeled with the following 27\nemotions: admiration, approval, annoyance, gratitude, disapproval, amusement, curiosity, love, optimism, disappointment, joy, realization, anger, sadness, confusion, caring, excitement, surprise, disgust, desire, fear, remorse, embarrassment, nervousness, pride, relief, grief. The mean length of a comment is 16 words.\nTwo main characteristics distinguish these datasets: (i) whether there is a notion of order among the class labels, and (ii) whether labels are exclusive. The reviews datasets, Amazon and Yelp, are naturally ordered, while the labels in the other datasets are unordered. All of the datasets besides GoEmotions feature exclusive labels; only in GoEmotions can two or more labels (e.g., the emotions anger and disappointment) characterize the same example. In addition to the standard five-class versions of the ordered datasets, we form three-class subsets by collecting reviews with 1, 3, and 5 stars (excluding reviews with 2 and 4 stars).\nWe build a vocabulary for each dataset by converting all characters to lowercase and extracting the 32,768 most common words in the training corpus. Tokenization is done by TensorFlow TF.Text WordpieceTokenizer." }, { "heading": "D SYNTHETIC DATASET DETAILS", "text": "In this appendix we provide many additional details and results from our synthetic datasets. 
Although these datasets represent significantly simplified settings compared to their realistic counterparts, the results from training RNNs on the synthetic and natural datasets are often strikingly similar." }, { "heading": "D.1 CATEGORICAL DATASET", "text": "For the categorical synthetic dataset used in Section 3.1, we generate phrases of L words, drawing from a word bank consisting of N + 1 words, W = {evid1, . . . , evidN, neutral}. Each word W ∈ W has an N-dimensional vector of integers associated with it, $w^W = \{ w^W_1, . . . , w^W_N \}$ with $w^W_i \in \mathbb{Z}$ for all i = 1, . . . , N. The word “evidi” has score defined by $w^{\mathrm{evid}_i}_i = 1$ and $w^{\mathrm{evid}_i}_j = 0$ for $j \neq i$. Meanwhile, the word “neutral” has $w^{\mathrm{neutral}}_j = 0$ for all j. Additionally, each phrase has a corresponding score s that also takes the form of an N-dimensional vector of integers, s = {s1, . . . , sN}. A phrase’s score is equal to the sum of the scores of the words contained in said phrase, $s = \sum_{W \in \mathrm{phrase}} w^W$. The phrase is then assigned a label corresponding to the class with the maximum score, y = argmax(s).5\nIn the main text we analyze synthetic datasets where phrases are drawn from a uniform distribution over all possible scores, s. To do so, we enumerate all possible scores a phrase of length L can produce, as well as all possible word combinations that can generate a given score. It is also possible to build phrases by drawing each word from a uniform distribution over all words in W. In practice, we find all results on synthetic datasets have minor quantitative differences when comparing these two methods, but qualitatively the results are the same.6\nAs highlighted in the main text, after training on this synthetic data we find the explored hidden-state space to resemble a regular (N − 1)-simplex. This holds for a large range of ℓ2 values relative to the natural datasets.
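The scoring and labeling rules above can be sketched as follows (this draws words uniformly, i.e. the second generation scheme; the uniform-over-scores scheme used in the main text additionally requires the enumeration described above):

```python
import numpy as np

def make_phrase(n_classes, length, rng):
    """Sample `length` words uniformly; word c < n_classes stands for
    evid_{c+1}, and word n_classes stands for 'neutral'."""
    return rng.integers(0, n_classes + 1, size=length)

def label_phrase(phrase, n_classes):
    """Score vector s counts each evidence word; label y = argmax(s).
    np.argmax breaks ties toward the smallest index, matching footnote 5."""
    scores = np.bincount(phrase[phrase < n_classes], minlength=n_classes)
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
phrase = make_phrase(n_classes=3, length=20, rng=rng)
label = label_phrase(phrase, n_classes=3)
```

A phrase of all 'neutral' words has score (0, . . . , 0) and is therefore assigned the first class by the tie-breaking rule.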
In Figure 7, we plot the (global) participation ratio, defined in equation 5, as a function of the number of classes, N.\nIn addition to the hidden states forming a simplex, we observe that the N readout vectors are of approximately equal magnitude and are aligned along the N vertices of said (N − 1)-simplex. In Figure 8, we plot several measures of the readout vectors that support this claim, which we now discuss. We find the readout vectors to have very close to the same magnitude (Fig. 8, left panel). The angle (in degrees) between a pair of vectors that point from the center of an (N − 1)-simplex to two of its vertices is\n$$\theta_{\mathrm{theory}} = \frac{180}{\pi} \times \arccos\left( -\frac{1}{N-1} \right) . \qquad (11)$$\nFor example, for a regular 2-simplex, i.e. an equilateral triangle, this predicts an angle between readout vectors of 120 degrees. The distribution of pairwise angles between readout vectors is plotted in the center panel of Figure 8. Lastly, if the readouts lie entirely within the (N − 1)-simplex, all N of them should live in the same RN−1 subspace. To measure this, define r′i to be the projection of ri into the subspace formed by the other N − 1 readout vectors, i.e. the span of the set {rj | j = 1, . . . , N; j ≠ i}. We then define the subspace percentage, Λ, as follows,\n$$\Lambda := \frac{1}{N} \sum_{i=1}^{N} \frac{\| r'_i \|_2}{\| r_i \|_2} . \qquad (12)$$\nIf all the readouts lie within the same RN−1 subspace, then Λ = 1. The right panel of Figure 8 shows that in practice Λ ≈ 1 for the synthetic data with an ℓ2 regularization parameter of $5 \times 10^{-4}$.\n5For phrases with multiple occurrences of the maximum score, the phrase is labeled by the class with the smallest numerical index.\n6We have also analyzed the same synthetically generated data for variable phrase lengths. The qualitative results focused on in this text did not change in this setting.\nWhy a Regular (N − 1)-Simplex? Here we propose an intuitive scenario that leads the network’s hidden states to form a regular (N − 1)-simplex.
To classify a given phrase correctly, the network must learn to keep track of the value of the N-dimensional score vector s. One way this can be done is as follows: Let the network’s hidden state live in some N-dimensional subspace. Within this subspace, let the N readout vectors be orthogonal and have equal magnitude. Furthermore, define a Cartesian coordinate system to have basis vectors aligned with the N readouts, with zi the coordinate along the direction of readout ri. Then, the coordinates within this subspace encode the components of the N-dimensional score vector s: the evidence word ‘evidi’ moves you along the coordinate direction i some fixed amount, and so si ∝ zi. Note that the subspace of RN explored by hidden states has a finite extent, since the phrases are of finite length. This subspace can be further subdivided into regions corresponding to different class labels: if zi > zj for all j ≠ i, then h · ri > h · rj and the phrase is classified as Class i. The left panel of Figure 9 shows an example of the 3-dimensional subspace for N = 3.\nThe important step that gets us from a subspace of RN to the regular (N−1)-simplex is the presence of the softmax layer used when calculating the loss. Since this function normalizes the scores, it is only the relative size of the components of s that matters. Removing the dependence on the absolute score values corresponds to projecting onto the RN−1 subspace orthogonal to the N-dimensional ones vector, (1, 1, . . . , 1). This projection results in an (N − 1)-simplex with the readouts aligned with the vertices. A demonstration of this procedure for N = 3 is shown in Figure 9." }, { "heading": "D.2 ORDERED DATASET", "text": "As alluded to in the main text, we try two renditions of ordered synthetic data. The details of both are given below. The first relies on a ground truth of only a sentiment score, while the second classifies based on both sentiment and neutrality.
Although the first is simpler and still bears many resemblances to natural data (i.e. Yelp and Amazon), we find the second to be a better match overall.\nSentiment Only Synthetic Data The first synthetic dataset for ordered data is very similar to that of the categorical sets, with minor differences in the word bank and in how phrases are assigned labels. For N-class ordered datasets, the word bank always consists of only three words, W = {good, bad, neutral}. We now take the word and phrase scores to be 1-dimensional, with $w^{\mathrm{good}}_1 = +1$, $w^{\mathrm{bad}}_1 = -1$, and $w^{\mathrm{neutral}}_1 = 0$. We then subdivide the range of possible scores s into N equal regions, and a phrase is labeled by the region which its score falls into. Given the above definitions, the range of scores is [−L, L], and so for N = 3, with labels {Positive, Negative, Neutral}, we define a threshold sN = L/3. Then a label y is assigned as follows:\n$$y = \begin{cases} \text{Positive} & s \ge s_N , \\ \text{Neutral} & |s| < s_N , \\ \text{Negative} & s \le -s_N . \end{cases} \qquad (13)$$\nMeanwhile, for N = 5, one could draw the region divisions at the score values {−3L/5, −L/5, L/5, 3L/5}. Similar to the categorical data above, in the main text we draw phrases from a uniform distribution over all possible scores.\nThe N = 2 case corresponds to sentiment analysis, and its hidden-state space, word deflections, and fixed-point manifold are plotted in Figure 10.7 The dynamics of this system qualitatively match the natural dataset analyzed in Maheswaranathan et al. (2019). Briefly, the sentiment score is encoded in the hidden state’s position along a one-dimensional line, aligned with the readouts, which point in opposite directions. The word ‘good’ (‘bad’) moves you along this line along the ‘Positive’ (‘Negative’) readout, increasing the corresponding logit value.\nThe simplest ordered dataset beyond binary sentiment analysis is that of N = 3, and a plot showing the final hidden states, deflections, and fixed-point manifold is shown in the top row of Figure 11.
In the bottom row, we show the same plots for N = 5. In both cases, the hidden-state trajectories move away from h0 onto a curve embedded in a 2d plane, with the curve bent around the origin of said plane. The N readout vectors are evenly fanned out in the 2d plane, which subdivides the curve into N regions corresponding to each of the N classes. The curve subdivisions reflect the ordering of the score subdivisions: for N = 3 we see ‘Neutral’ lying in between ‘Positive’ and ‘Negative’, and for N = 5 the stars are ordered from 1 to 5.\nIn contrast to categorical data, the word deflections ∆ht are highly varied and have a strong dependence on a state’s location in hidden-state space. On average, the words ‘good’ and ‘bad’ move the hidden state further left/right along the curve. Although ∆ht for the word ‘neutral’ is on average smaller, it tends to move the hidden state along the ‘Neutral’ or ‘3 Star’ readout. These dynamics are how the network encodes the relative count of ‘good’ and ‘bad’ words in a phrase, which ultimately determines the phrase’s classification. We show the fixed points in the far right panel of Figure 11. For N = 3, the fixed-point manifold mostly resembles a one-dimensional bent line attractor, with a small region that is two-dimensional along the ‘Neutral’ readout. For N = 5, the fixed-point manifold is much more planar. Thus, the N = 3 case exhibits dynamics very similar to those of the line attractor studied in Maheswaranathan et al. (2019); the attractor is now simply subdivided into three regions due to the readout vector alignments.\nSentiment and Neutrality Synthetic Data Instead of classifying a phrase based on a single sentiment score, our second ordered synthetic model classifies a phrase based on two scores that track the sentiment and intensity of a given phrase. We draw from an enhanced word bank consisting of W = {awesome, good, okay, bad, awful, neutral}. We take the two-dimensional word score to have components corresponding to (sentiment, intensity), where positive (negative) sentiment scores correspond to positive (negative) sentiment and positive (negative) intensity scores correspond to high\n7The N = 2 ordered dataset is equivalent to the N = 2 categorical dataset. Intuitively, ‘good’ and ‘bad’ can be thought of as evidence vectors for the classes ‘Positive’ and ‘Negative’, respectively. Just as in categorical classification, whichever of these evidence words appears the most in a given phrase will be the phrase’s label.
We take the two-dimensional word score to have components corresponding to (sentiment, intensity) where positive (negative) sentiment scores correspond to positive (negative) sentiment and positive (negative) intensity scores correspond to high\n7TheN = 2 ordered dataset is equivalent to theN = 2 categorical dataset. Intuitively, ‘good‘ and ‘bad‘ can be though of evidence vectors for the classes ‘Positive‘ and ‘Negative‘, respectively. Just like the categorical classification, whichever of these evidence words appears the most in a given phrase will be the phrase’s label.\n(low) emotion. The word score values we use are wawesome = (2, 1) , wgood = (1,−1/2) , wokay = (0,−2) , (14a)\nwbad = (−1,−1/2) , wawful = (−2, 1) , wneutral = (0, 0) . (14b) As with the other synthetic models, we sum all word scores across a phrase to arrive at a phrase’s sentiment and intensity score, (s, i). We then assign the phrase a label y based off the following criterion:\ny = Three Star i < 0 and |i| > |s| , otherwise: Five Star i ≥ 0 and s > 0 , Four Star i < 0 and s > 0 , Two Star i < 0 and s < 0 , One Star i ≥ 0 and s ≤ 0 .\n(15)\nThus we see that scores with negative (low) intensity where the intensity magnitude is greater than the sentiment magnitude are classified as ‘Three Star’, i.e. it is a neutral phrase. Otherwise, phrases with low intensity that are the less extreme reviews are classified as either ‘Two Star’ or ‘Four Star’ based on their sentiment. Finally, phrases with high intensity are labeled either ‘One Star’ or ‘Five Star’, again based on their sentiment." }, { "heading": "D.3 MULTI-LABELED DATASET", "text": "Here we provide details of the synthetic multi-labeled dataset, that corresponds to natural dataset GoEmotions in Section 3.3 of the main text. Let us introduce this by taking the N = 2 as an explicit example, where each phrase can have up to two labels. 
We draw from a word bank consisting of W = {good1, bad1, good2, bad2, neutral}, where\nw_neutral = (0, 0) , w_good1 = (1, 0) , w_bad1 = (−1, 0) , (16a)\nw_good2 = (0, 1) , w_bad2 = (0, −1) . (16b)\nWe then classify each phrase with two labels, individually based on the score vector components s1 and s2. Namely,\ny1 = { Positive1 if s1 ≥ 0 , Negative1 if s1 < 0 } , y2 = { Positive2 if s2 ≥ 0 , Negative2 if s2 < 0 } . (17)\nThus there are four possible combinations of labels. For this synthetic dataset, we generate phrases by uniformly drawing words one-by-one from W. Generalization of the above construction to an arbitrary number of possible labels N is straightforward: one simply adds additional N-dimensional score vectors w_goodi and w_badi for each possible label i = 1, . . . , N and then uses the N components of the score to assign the N labels, y_i, individually.\nThe results after training a network on the N = 2 dataset are shown in the main text in Figure 5, and results for N = 3 are shown in Figure 12. Again, we see the explored hidden-state space to be low-dimensional, but notably it now resembles a three-dimensional cube. This is certainly a large departure from the N = 8 categorical dataset, for which we expect a (seven-dimensional) regular 7-simplex. Instead, what we see here is the “outer product” of three N = 2 ordered datasets. That is, we expect a single N = 2 ordered dataset (i.e. binary sentiment analysis) to have a hidden-state space that resembles a line attractor. As one might expect, tasking the network with analyzing three such sentiments at once leads to three line attractors that are orthogonal to one another, forming a cube. This is supported in the center panel of Figure 5, where we see the various sentiment evidences are orthogonal to one another."
}, { "heading": "E ADDITIONAL RESULTS ON NATURAL DATASETS", "text": "" }, { "heading": "E.1 AG NEWS", "text": "This subsection contains two figures: Figures 13 and 14 complement Figure 1 in the main text; the main text figure showed the manifolds learned by an LSTM on both 3- and 4-class AG News datasets; the figures in this appendix show corresponding manifolds learned by a GRU and UGRNN." }, { "heading": "E.2 DBPEDIA 3-CLASS AND 4-CLASS CATEGORICAL PREDICTION", "text": "Like AG News, DBPedia Ontology is a categorical classification dataset. We show results for networks trained on 3- and 4-class subsets of this dataset in Figures 15, 16, and 17." }, { "heading": "E.3 YELP 5-CLASS STAR PREDICTION", "text": "Figures 19, 18, and 20 show the fixed-point manifolds associated with a GRU, LSTM and UGRNN, respectively, trained on 5-class and 3-class Yelp dataset. These reviews are naturally five star; we\ncreate a 3-class subset by removing examples labeled with 2 and 4 stars. These figures complement Figure 1 in the main text." }, { "heading": "E.4 AMAZON 5-CLASS AND 3-CLASS STAR PREDICTION", "text": "As another example of an ordered dataset, Figures 21, 22, and 23 show results for networks trained on a 3-class and 5-class subsets of Amazon reviews. These reviews are naturally five star; we create a 3-class subset by removing examples labeled with 2 and 4 stars.\nE.5 3-CLASS GOEMOTIONS\nIn addition to the 2 class variant presented in the main text, we also trained a 3 class version of the GoEmotions dataset. We filtered the dataset to just include the following three classes: “admiration”,\n“approval”, and “annoyance” (these were selected as they were the classes with the largest number of examples). These results are presented in Figure 24. For this network, despite having three classes, we find that the fixed points are largely two dimensional (Fig. 24a). The timescales of the eigenvalues of the Jacobian computed at these fixed points have two slow modes (Fig. 
24b), which overlap with the two modes (Fig. 24c); thus we have a roughly 2D plane attractor. However, the participation ratio (Fig. 24d) indicates that the dimensionality of this attractor is slightly higher than the 2D case shown in Fig. 5. We suspect that these differences are due to the strong degree of class imbalance present in the GoEmotions dataset. There are very few examples with multiple labels, for any particular combination of labels. In synthetic multi-labeled data (which is class balanced), we see much clearer 3D structure when training a 3-class network (Fig. 12)." }, { "heading": "F THE EFFECT OF ℓ2 REGULARIZATION: COLLAPSE, CONTEXT, AND CORRELATIONS", "text": "Regularizing the parameters of the network during training can have a strong effect on the dimension of the resulting dynamics. We describe this effect first for the datasets with ordered labels, Yelp and Amazon reviews. We penalize the squared ℓ2-norm of the parameters, adding the term λ‖θ‖_2^2 to the cross-entropy prediction loss; λ is the ℓ2 penalty and θ are the network parameters.\nCollapse: Figure 25 shows the performance of the LSTM, GRU, and UGRNN as a function of the ℓ2 penalty. As the ℓ2 penalty is varied, the test accuracy usually decreases gradually; however, at a few values, the accuracy takes a large hit. The first two of these jumps correspond to a decrease in the dimension of the integration manifold from 2D to 1D and then from 1D to 0D. The resulting 1D manifold is shown, for the example of a GRU on the Amazon dataset, in Figure 26. The effects of collapse on the other architectures for the ordered datasets are identical.\nWhen the regularization is sufficient to collapse the manifold to a 1D line, the dynamics are quite similar to the 1D line attractors studied in Maheswaranathan et al. (2019).
A single accumulated valence score is tracked by the network as it moves along the line; this tracking occurs via a single eigenmode with a time constant comparable to the average document length, aligned with the fixed-point manifold. The differences between the binary- and 5-class line-attractor networks are largely in the way the final states are classified; in the 5-class case, the line attractor is divided into sections based largely on the angle the line makes with the readout vector of each class.\nThe collapse to a 0D manifold with a higher ℓ2 penalty is most strikingly seen in the recurrent Jacobian spectra at the fixed points (Figure 27). Here there are no modes which remember activity on the timescale of the mean document length. Given this lack of integration, it is unclear how these networks are achieving accuracies above random chance.\nContext: While the focus of this study has been on how networks perform integration, it is clear from the plots in Figure 25 that the best-performing models are doing more than just bag-of-words style integration. When the order of words in the sentence is shuffled, these models take a hit in accuracy. Interestingly, when the ℓ2 coefficient is increased from the smallest values we use, the contextual effects are the first to be lost: the model’s accuracy on shuffled and ordered examples becomes the same.\nUnderstanding precisely how contextual processing is carried out by the network is an interesting direction for future work. It is important to show, however, that the basic two-dimensional integration mechanism we have presented in the main text still underlies the dynamics of the networks which are capable of handling context. To show this, we plot in Figure 28 the fixed-point manifold, colored by the predicted class. As with the models which are not order-sensitive, the classification of the fixed points depends largely on their top two coordinates (after PCA projection).
This is the case even though the PCA explained variance clearly shows extension of the dynamics into higher dimensions. It is thus likely that, similarly to how Maheswaranathan & Sussillo (2020) found that the contextual-processing mechanism was a perturbation on top of the integration dynamics for binary sentiment classification, the same is true for more finely-grained sentiment classification.\nCorrelations: As might be expected, increasing ℓ2 regularization also causes collapse in models trained on categorical classification tasks. For example, as shown in Figure 29, the tetrahedral manifold seen in 4-class AG News networks becomes a square at higher values of ℓ2, collapsing from three dimensions to two. That is, instead of class labels corresponding to vertices of a tetrahedron, when the ℓ2 regularization is increased, these labels correspond to the vertices of a square.\nInterestingly, in the collapse to a square, we find that — regardless of architecture and across 10 random seeds per architecture — the ordering of vertices around the square appears to reflect correlations between classes. Up to symmetries, the only possible orderings of vertices around the square are: (i) World → Sci/Tech → Business → Sports, (ii) World → Sci/Tech → Sports → Business, and (iii) World → Sports → Sci/Tech → Business. In practice, we observe that most of the time (26 out of 30 trials), order (iii) appears; otherwise, order (i) appears. We never observe order (ii).\nTo show how this ordering arises from correlations between class labels, we train a bag-of-words model on the full 4-class dataset. Taking the most common 5000 words in the vocabulary, we plot, in Figure 30, the changes in each logit due to these words. As the figure shows, for most pairs of classes there is a weak negative correlation between the evidence for the pair.
However, between the classes “Sports” and “Business”, there is a strong negative correlation (R = -0.81); between “Sports” and “Sci/Tech”, there is a slightly weaker negative correlation (R = -0.61). Stated another way, words which constitute positive evidence for “Sports” are likely to constitute negative evidence for “Business” and/or “Sci/Tech”. This matches with the geometries we observe in practice, where “Sports” and “Business” readouts are ‘repelled’ most often, and otherwise “Sports” and “Sci/Tech” are repelled." }, { "heading": "G A CLOSER LOOK AT THE SLOW ZONE", "text": "In the main text, we were interested in approximate fixed points, or points which were slow on the timescale given roughly by the average length of documents Tav in the training dataset. For practical purposes, points which are slow on this timescale can be treated as effectively fixed, since evolving the system for Tav will result in little motion. Whether any of the points in the slow zone are exact fixed points of the dynamics, or whether all of them are simply slow points, is not something our numerical experiments are capable of resolving. However, we do find that there is structure to the slow zone in that all of the points are not uniformly slow.\nWe define the speed of a point, S(h), as the distance traveled from that point after a single application of the dynamical map F(h, 0) (Sussillo & Barak, 2013),\nS(h) = ‖h − F(h, 0)‖_2 . (18)\nNote that, as in the main text, we are characterizing the RNN with zero input (the autonomous system). In Figure 31, we plot three orthogonal slices of this function for a GRU network trained on the 3-class AG News dataset. All the points in the roughly triangular region we identified in the main text have speeds less than a tenth of the inverse document length, and are thus slow. However, there are a few neighborhoods, roughly located near the vertices of the triangle, that are even slower.
This structure can be seen further in Figure 32, in which we plot the speed as a function of the PC 0 coordinate for three values of the PC 1 coordinate. Similar structures can be seen in fixed-point manifolds for other networks and datasets. Fully exploring these structures and understanding how they arise from the equations governing the RNN is an interesting direction for future research." } ]
2021
THE GEOMETRY OF INTEGRATION IN TEXT CLASSIFICATION RNNS
SP:40e4749c3e5c57e12a6c540510b74ae3551e479a
[ "This paper proposes deep kernel processes (DKPs), which can be viewed as a specific kind of deep Gaussian process where the kernel can be written as a function of the Gram matrix. The features in the intermediate layers are integrated out and the Gram matrices are Wishart distributed. A doubly stochastic variational inference method is proposed to learn DKPs. The idea looks novel to me. My major concern is about the writing.", "This paper proposes a prior distribution over covariance matrices of kernels which is defined as a sequential graphical model where each variable is Wishart distributed and its scale matrix is a non-linear transformation of its predecessor variable on the graph. The paper begins by considering a DGP with isotropic kernels across the layers and realizes that the Gram matrices are Wishart distributed. Based on this, the paper proposes to bypass the inference of the features and sample the Gram matrices directly from Wishart distributions. This insight, in addition to the layered structure of DGPs, gives rise to the proposed prior distribution. Furthermore, given the restrictions of the Wishart distribution for modelling covariance matrices of arbitrary size [1], as well as the conjugacy properties of the inverse Wishart distribution, the paper uses the inverse Wishart distribution instead. Doubly stochastic variational inference is proposed for approximating the posterior distribution, which includes the use of inducing points thanks to the marginalization properties of the inverse Wishart distribution. The experimental contribution consists of a comparison against DGP and Neural Network GP on the UCI, MNIST and CIFAR-10 datasets." ]
We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions. Remarkably, we find that deep Gaussian processes (DGPs), Bayesian neural networks (BNNs), infinite BNNs, and infinite BNNs with bottlenecks can all be written as deep kernel processes. For DGPs the equivalence arises because the Gram matrix formed by the inner product of features is Wishart distributed, and as we show, standard isotropic kernels can be written entirely in terms of this Gram matrix — we do not need knowledge of the underlying features. We define a tractable deep kernel process, the deep inverse Wishart process, and give a doubly-stochastic inducing-point variational inference scheme that operates on the Gram matrices, not on the features, as in DGPs. We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on standard fully-connected baselines.
[]
[ { "authors": [ "Laurence Aitchison" ], "title": "Why bigger is not always better: on finite and infinite neural networks", "venue": "arXiv preprint arXiv:1910.08013,", "year": 2019 }, { "authors": [ "Manabu Asai", "Michael McAleer" ], "title": "The structure of dynamic correlations in multivariate stochastic volatility models", "venue": "Journal of Econometrics,", "year": 2009 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Taras Bodnar", "Yarema Okhrin" ], "title": "Properties of the singular, inverse and generalized inverse partitioned wishart distributions", "venue": "Journal of Multivariate Analysis,", "year": 2008 }, { "authors": [ "Taras Bodnar", "Stepan Mazur", "Krzysztof Podgórski" ], "title": "Singular inverse wishart distribution and its application to portfolio theory", "venue": "Journal of Multivariate Analysis,", "year": 2016 }, { "authors": [ "Youngmin Cho", "Lawrence K Saul" ], "title": "Kernel methods for deep learning", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Andreas Damianou", "Neil Lawrence" ], "title": "Deep gaussian processes", "venue": "In Artificial Intelligence and Statistics,", "year": 2013 }, { "authors": [ "A Philip Dawid" ], "title": "Some matrix-variate distribution theory: notational considerations and a Bayesian application", "venue": null, "year": 1981 }, { "authors": [ "Morris L.. Eaton" ], "title": "Multivariate Statistics. 
A Vector Space Approach.-A Volume in the Wiley Series in Probability and Mathematical Statistics", "venue": null, "year": 1983 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": null, "year": 2015 }, { "authors": [ "Adrià Garriga-Alonso", "Carl Edward Rasmussen", "Laurence Aitchison" ], "title": "Deep convolutional networks as shallow gaussian processes", "venue": "arXiv preprint arXiv:1808.05587,", "year": 2018 }, { "authors": [ "Pascal Germain", "Francis Bach", "Alexandre Lacoste", "Simon Lacoste-Julien" ], "title": "Pac-bayesian theory meets bayesian inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Christian Gourieroux", "Razvan Sufana" ], "title": "Derivative pricing with wishart multivariate stochastic volatility", "venue": "Journal of Business & Economic Statistics,", "year": 2010 }, { "authors": [ "Creighton Heaukulani", "Mark van der Wilk" ], "title": "Scalable bayesian dynamic covariance modeling with variational wishart and inverse wishart processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Thomas Hofmann", "Bernhard Schölkopf", "Alexander J Smola" ], "title": "Kernel methods in machine learning", "venue": "The annals of statistics,", "year": 2008 }, { "authors": [ "Melih Kandemir", "Fred A Hamprecht" ], "title": "The deep feed-forward gaussian process: An effective generalization to covariance priors", "venue": "In Feature Extraction: Modern Questions and Challenges,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Plamen Koev", "Alan Edelman" ], "title": "The efficient evaluation of the hypergeometric function of a matrix argument", "venue": "Mathematics of Computation,", "year": 2006 }, { 
"authors": [ "AN Kolmogorov" ], "title": "Grundbegriffe der wahrscheinlichkeitreichnung", "venue": "Ergebnisse der Mathematik,", "year": 1933 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Vidhi Lalchand", "Carl Edward Rasmussen" ], "title": "Approximate inference for fully bayesian gaussian process regression", "venue": "In Symposium on Advances in Approximate Bayesian Inference,", "year": 2020 }, { "authors": [ "Jaehoon Lee", "Yasaman Bahri", "Roman Novak", "Samuel S Schoenholz", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Deep neural networks as gaussian processes", "venue": "arXiv preprint arXiv:1711.00165,", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Alexander G de G Matthews", "Mark Rowland", "Jiri Hron", "Richard E Turner", "Zoubin Ghahramani" ], "title": "Gaussian process behaviour in wide deep neural networks", "venue": "arXiv preprint arXiv:1804.11271,", "year": 2018 }, { "authors": [ "David A Moore" ], "title": "Symmetrized variational inference", "venue": "In NIPS Workshop on Advances in Approximate Bayesian Inferece,", "year": 2016 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jaehoon Lee", "Yasaman Bahri", "Greg Yang", "Jiri Hron", "Daniel A Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Bayesian deep convolutional networks with many channels are gaussian processes", "venue": "arXiv preprint arXiv:1810.05148,", "year": 2018 }, { "authors": [ "Sebastian W Ober", "Laurence Aitchison" ], "title": "Global inducing point variational posteriors for bayesian neural networks and deep gaussian processes", "venue": "arXiv preprint arXiv:2005.08140,", "year": 2020 }, { "authors": [ 
"Alexander Philipov", "Mark E Glickman" ], "title": "Factor multivariate stochastic volatility via wishart processes", "venue": "Econometric Reviews,", "year": 2006 }, { "authors": [ "Alexander Philipov", "Mark E Glickman" ], "title": "Multivariate stochastic volatility via wishart processes", "venue": "Journal of Business & Economic Statistics,", "year": 2006 }, { "authors": [ "Arya A Pourzanjani", "Richard M Jiang", "Linda R Petzold" ], "title": "Improving the identifiability of neural networks for bayesian inference", "venue": "In NIPS Workshop on Bayesian Deep Learning,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Omar Rivasplata", "Vikram M Tankasali", "Csaba" ], "title": "Szepesvari. Pac-bayes with backprop", "venue": "arXiv preprint arXiv:1908.07380,", "year": 2019 }, { "authors": [ "Hugh Salimbeni", "Marc Deisenroth" ], "title": "Doubly stochastic variational inference for deep gaussian processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Amar Shah", "Andrew Wilson", "Zoubin Ghahramani" ], "title": "Student-t processes as alternatives to Gaussian processes", "venue": "In Artificial intelligence and statistics,", "year": 2014 }, { "authors": [ "Muni S Srivastava" ], "title": "Singular wishart and multivariate beta distributions", "venue": "The Annals of Statistics,", "year": 2003 }, { "authors": [ "Richard E Turner", "Maneesh Sahani" ], "title": "Two problems with variational expectation maximisation for time-series models", "venue": null, "year": 2011 }, { "authors": [ "Harald Uhlig" ], "title": "On singular wishart and singular multivariate beta distributions", "venue": "The Annals of Statistics,", "year": 1994 }, { "authors": [ "Christopher KI Williams", "Carl Edward Rasmussen" ], 
"title": "Gaussian processes for machine learning", "venue": null, "year": 2006 }, { "authors": [ "Andrew Gordon Wilson", "Zoubin Ghahramani" ], "title": "Generalised Wishart processes", "venue": "arXiv preprint arXiv:1101.0240,", "year": 2010 }, { "authors": [ "Eq. 44a" ], "title": "Using this integral to write out the generative process only in terms of K` gives the deep kernel process in Fig. 3 (bottom). While this distribution exists in principle, it cannot be evaluated analytically. But we can explicitly evaluate the expected value of K` given K`−1 using results from Cho & Saul", "venue": "In particular,", "year": 2009 }, { "authors": [ "Rivasplata" ], "title": "2019). Second, we can write down an alternative form for the ELBO as the model evidence, minus the KL-divergence between the approximate and true posterior, L = log P", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The deep learning revolution has shown us that effective performance on difficult tasks such as image classification (Krizhevsky et al., 2012) requires deep models with flexible lower-layers that learn task-dependent representations. Here, we consider whether these insights from the neural network literature can be applied to purely kernel-based methods. (Note that we do not consider deep Gaussian processes or DGPs to be “fully kernel-based” as they use a feature-based representation in intermediate layers).\nImportantly, deep kernel methods (e.g. Cho & Saul, 2009) already exist. In these methods, which are closely related to infinite Bayesian neural networks (Lee et al., 2017; Matthews et al., 2018; Garriga-Alonso et al., 2018; Novak et al., 2018), we take an initial kernel (usually the dot product of the input features) and perform a series of deterministic, parameter-free transformations to obtain an output kernel that we use in e.g. a support vector machine or Gaussian process. However, the deterministic, parameter-free nature of the transformation from input to output kernel means that they lack the capability to learn a top-layer representation, which is believed to be crucial for the effectiveness of deep methods (Aitchison, 2019).\nTo obtain the flexibility necessary to learn a task-dependent representation, we propose deep kernel processes (DKPs), which combine nonlinear transformations of the kernel, as in Cho & Saul (2009) with a flexible learned representation by exploiting a Wishart or inverse Wishart process (Dawid, 1981; Shah et al., 2014). We find that models ranging from DGPs (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017) to Bayesian neural networks (BNNs; Blundell et al., 2015, App. C.1), infinite BNNs (App. C.2) and infinite BNNs with bottlenecks (App. C.3) can be written as DKPs (i.e. only with kernel/Gram matrices, without needing features or weights). 
Practically, we find that the deep inverse Wishart process (DIWP) admits convenient forms for variational approximate posteriors, and we give a novel scheme for doubly-stochastic variational inference (DSVI) with inducing points purely in the kernel domain (as opposed to Salimbeni & Deisenroth, 2017, who described DSVI for standard feature-based DGPs), and demonstrate improved performance with carefully matched models on fully-connected benchmark datasets." }, { "heading": "2 BACKGROUND", "text": "We briefly revise Wishart and inverse Wishart distributions. The Wishart distribution is a generalization of the gamma distribution that is defined over positive semidefinite matrices. Suppose that we have a collection of P-dimensional random variables x_i with i ∈ {1, . . . , N} such that\nx_i iid∼ N(0, V) , then, Σ_{i=1}^N x_i x_i^T = S ∼ W(V, N) (1)\nhas a Wishart distribution with scale matrix V and N degrees of freedom. When N > P − 1, the density is\nW(S; V, N) = [1 / (2^{NP/2} |V|^{N/2} Γ_P(N/2))] |S|^{(N−P−1)/2} exp(−(1/2) Tr(V^{−1} S)) , (2)\nwhere Γ_P is the multivariate gamma function. Further, the inverse, S^{−1}, has an inverse Wishart distribution, W^{−1}(V^{−1}, N). The inverse Wishart is defined only for N > P − 1 and also has a closed-form density. Finally, we note that the Wishart distribution has mean NV while the inverse Wishart has mean V^{−1}/(N − P − 1) (for N > P + 1)." }, { "heading": "3 DEEP KERNEL PROCESSES", "text": "We define a kernel process to be a set of distributions over positive definite matrices of different sizes that are consistent under marginalisation (Dawid, 1981; Shah et al., 2014). The two most common kernel processes are the Wishart process and inverse Wishart process, which we write in a slightly unusual form to ensure their expectation is K.
We take G and G′ to be finite-dimensional marginals of the underlying Wishart and inverse Wishart processes,\nG ∼ W(K/N, N) , G′ ∼ W^{−1}(δK, δ + (P + 1)) , (3a)\nG* ∼ W(K*/N, N) , G′* ∼ W^{−1}(δK*, δ + (P* + 1)) , (3b)\nwhere we explicitly give the consistent marginal distributions over K*, G* and G′*, which are P* × P* principal submatrices of the P × P matrices K, G and G′, dropping the same rows and columns. In the inverse-Wishart distribution, δ is a positive parameter that can be understood as controlling the degree of variability, with larger values for δ implying smaller variability in G′.\nWe define a deep kernel process by analogy with a DGP, as a composition of kernel processes, and show in App. A that under sensible assumptions any such composition is itself a kernel process. [Footnote 1: Note that we leave the question of the full Kolmogorov extension theorem (Kolmogorov, 1933) for matrices to future work: for our purposes, it is sufficient to work with very large but ultimately finite input spaces as in practice, the input vectors are represented by elements of the finite set of 32-bit or 64-bit floating-point numbers (Sterbenz, 1974).]" }, { "heading": "3.1 DGPS WITH ISOTROPIC KERNELS ARE DEEP WISHART PROCESSES", "text": "We consider deep GPs of the form (Fig. 1 top) with X ∈ R^{P×N_0},\nK_ℓ = (1/N_0) X X^T for ℓ = 1, and K_ℓ = K(G_{ℓ−1}) otherwise, (4a)\nP(F_ℓ | K_ℓ) = Π_{λ=1}^{N_ℓ} N(f^ℓ_λ; 0, K_ℓ) , (4b)\nG_ℓ = (1/N_ℓ) F_ℓ F_ℓ^T . (4c)\nHere, F_ℓ ∈ R^{P×N_ℓ} are the N_ℓ hidden features in layer ℓ; λ indexes hidden features, so f^ℓ_λ is a single column of F_ℓ, representing the value of the λth feature for all training inputs. Note that K(·) is a function that takes a Gram matrix and returns a kernel matrix, whereas K_ℓ is a (possibly random) variable representing a kernel matrix. Note that we have restricted ourselves to kernels that can be written as functions of the Gram matrix, G_ℓ, and do not require the full set of activations, F_ℓ.
As we describe later, this is not too restrictive, as it includes amongst others all isotropic kernels (i.e. those that can be written as a function of the distance between points; Williams & Rasmussen, 2006). Note that we have a number of choices as to how to initialize the kernel in Eq. (4a). The current choice just uses a linear dot-product kernel, rather than immediately applying the kernel function K. This is both to ensure exact equivalence with infinite NNs with bottlenecks (App. C.3) and also to highlight an interesting interpretation of this layer as Bayesian inference over generalised lengthscale hyperparameters in the squared-exponential kernel (App. B; e.g. Lalchand & Rasmussen, 2020).\nFor DGP regression, the outputs, Y, are most commonly given by a likelihood that can be written in terms of the output features, F_{L+1}. For instance, for regression, the distribution of the λth output feature column could be\nP(y_λ | F_{L+1}) = N(y_λ; f^{L+1}_λ, σ^2 I) , (5)\nbut our methods can be used with many other forms for the likelihood, including e.g. classification.\nThe generative process for the Gram matrices, G_ℓ, consists of generating samples from a Gaussian distribution (Eq. 4b) and taking their product with themselves transposed (Eq. 4c). This exactly matches the generative process for a Wishart distribution (Eq. 1), so we can write the Gram matrices, G_ℓ, directly in terms of the kernel, without needing to sample features (Fig. 1 bottom),\nP(G_1 | X) = W( (1/N_1) (1/N_0) X X^T , N_1 ) , (6a)\nP(G_ℓ | G_{ℓ−1}) = W( K(G_{ℓ−1}) / N_ℓ , N_ℓ ) , for ℓ ∈ {2, . . . , L}, (6b)\nP(F_{L+1} | G_L) = Π_{λ=1}^{N_{L+1}} N(f^{L+1}_λ; 0, K(G_L)) . (6c)\nExcept at the output, the model is phrased entirely in terms of positive-definite kernels and Gram matrices, and is consistent under marginalisation (assuming a valid kernel), and is thus a DKP.
At a high level, the model can be understood as alternately sampling a Gram matrix (introducing flexibility in the representation) and nonlinearly transforming the Gram matrix using a kernel (Fig. 2).\nThis highlights a particularly simple interpretation of the DKP as an autoregressive process. In a standard autoregressive process, we might propagate the current vector, x_t, through a deterministic function, f(x_t), and add zero-mean Gaussian noise, ξ,\nx_{t+1} = f(x_t) + σ^2 ξ such that E[x_{t+1} | x_t] = f(x_t) . (7)\nBy analogy, the next Gram matrix has expectation centered on a deterministic transformation of the previous Gram matrix,\nE[G_ℓ | G_{ℓ−1}] = K(G_{ℓ−1}) , (8)\nso G_ℓ can be written as this expectation plus a zero-mean random variable, Ξ_ℓ, that can be interpreted as noise,\nG_ℓ = K(G_{ℓ−1}) + Ξ_ℓ . (9)\nNote that Ξ_ℓ is not in general positive definite, and may not have an analytically tractable distribution. This noise decreases as N_ℓ increases,\nV[G^ℓ_ij] = V[Ξ^ℓ_ij] = (1/N_ℓ) ( K^2_ij(G_{ℓ−1}) + K_ii(G_{ℓ−1}) K_jj(G_{ℓ−1}) ) . (10)\nNotably, as N_ℓ tends to infinity, the Wishart samples converge on their expectation, and the noise disappears, leaving us with a series of deterministic transformations of the Gram matrix. Therefore, we can understand a deep kernel process as alternately adding “noise” to the kernel by sampling e.g. a Wishart or inverse Wishart distribution (G_2 and G_3 in Fig. 2) and computing a nonlinear transformation of the kernel (K(G_2) and K(G_3) in Fig. 2).\nRemember that we are restricted to kernels that can be written as a function of the Gram matrix,\nK_ℓ = K(G_ℓ) = K_features(F_ℓ) , K^ℓ_ij = k(F^ℓ_{i,:}, F^ℓ_{j,:}) , (11)\nwhere K_features(·) takes a matrix of features, F_ℓ, and returns the kernel matrix, K_ℓ, and k is the usual kernel, which takes two feature vectors (rows of F_ℓ) and returns an element of the kernel matrix. This does not include all possible kernels because it is not possible to recover the features from the Gram matrix.
In particular, the Gram matrix is invariant to unitary transformations of the features: the Gram matrix is the same for F` and F′` = UF` where U is a unitary matrix, such that UU T = I,\nG` = 1 N` F`F T ` = 1 N` F`U`U T ` F T ` = 1 N` F′`F ′T ` . (12)\nSuperficially, this might seem very limiting — leaving us only with dot-product kernels (Williams & Rasmussen, 2006) such as,\nk(f , f ′) = f · f ′ + σ2. (13)\nHowever, in reality, a far broader range of kernels fit within this class. Importantly, isotropic or radial basis function kernels including the squared exponential and Matern depend only on the squared distance between points, R, (Williams & Rasmussen, 2006)\nk(f , f ′) = k (R) , R = |f − f ′|2 . (14)\nThese kernels can be written as a function of G, because the matrix of squared distances, R, can be computed from G,\nR`ij = 1 N` ∑N` λ=1 ( F `iλ − F `jλ )2 = 1N` ∑N` λ=1 (( F `iλ )2 − 2F `iλF `jλ + ( F `jλ )2)\n= G`ii − 2G`ij +G`jj . (15)" }, { "heading": "4 VARIATIONAL INFERENCE IN DEEP KERNEL PROCESSES", "text": "A key part of the motivation for developing deep kernel processes was that the posteriors over weights in a BNN or over features in a deep GP are extremely complex and multimodal, with a large number of symmetries that are not captured by standard approximate posteriors (MacKay, 1992; Moore, 2016; Pourzanjani et al., 2017). For instance, in the Appendix we show that there are permutation symmetries in the prior and posteriors over weights in BNNs (App. D.1) and rotational symmetries in the prior and posterior over features in deep GPs with isotropic kernels (App. D.2). The inability to capture these symmetries in standard variational posteriors may introduce biases in the parameters inferred by variational inference, because the variational bound is not uniformly tight across the state-space (Turner & Sahani, 2011). 
Intuitively, these symmetries arise in DGPs with isotropic kernels because the features at the next layer depend only on the kernel matrix at the previous layer, and this kernel is invariant to unitary transformations of the features (Eq. 12). As such, we can sidestep these complex posterior symmetries by working directly with the Gram matrices as the random variables for variational inference.
We show that DGPs (Sec. 3.1) and infinite NNs with bottlenecks (App. C.3) are deep Wishart processes, so a natural approach would be to define an approximate posterior over the Gram matrices in the deep Wishart process. However, this turns out to be difficult, predominantly because the approximate posterior we would like to use, the non-central Wishart (App. E), has a probability density function that is prohibitively costly and complex to evaluate in the inner loop of a deep learning model (Koev & Edelman, 2006). Instead, we consider an inverse Wishart process prior, for which the inverse Wishart itself makes a good choice of approximate posterior." }, { "heading": "4.1 THE DEEP INVERSE WISHART PROCESS", "text": "By analogy with Eq. (6), our deep inverse Wishart processes (DIWPs) are given by
P (Ω) =W−1 (δ1I, δ1 +N0 + 1) , (with G1 = 1N0 XΩX T ), (16a)
P (G`|G`−1) =W−1 (G`; δ`K (G`−1) , P + 1 + δ`) , for ` ∈ {2, . . . L}, (16b)
P (FL+1|GL) = ∏NL+1 λ=1 N ( fL+1λ ; 0,K (GL) ) , (16c)
where, remember, X ∈ RP×N0 , G` ∈ RP×P and F` ∈ RP×NL+1 . Note that at the input layer, K0 = 1N0 XX T may be singular if there are more datapoints than features. Instead of attempting to use singular Wishart distributions over G1, which would be complex and difficult to work with (Bodnar & Okhrin, 2008; Bodnar et al., 2016), we define an approximate posterior over the full-rank N0 ×N0 matrix, Ω, and use G1 = 1N0 XΩX T ∈ RP×P .
Critically, the distributions in Eq.
(16b) are consistent under marginalisation as long as δ` is held constant (Dawid, 1981), with P taken to be the number of input points, or equivalently the size of K`−1. Further, the deep inverse Wishart process retains the interpretation as a deterministic transformation of the kernel plus noise because the expectation is,\nE [G`|G`−1] = δ`K (G`−1)\n(P + 1 + δ`)− (P + 1) = K (G`−1) . (17)\nThe resulting inverse Wishart process does not have a direct interpretation as e.g. a deep GP, but does have more appealing properties for variational inference, as it is always full-rank and allows independent control over the approximate posterior mean and variance. Finally, it is important to note that Wishart and inverse Wishart distributions do not differ as much as one might expect; the standard Wishart and standard inverse Wishart distributions have isotropic distributions over the eigenvectors so they only differ in terms of their distributions over eigenvalues, and these are often quite similar, especially if we consider a Wishart model with ResNet-like structure (App. H)." }, { "heading": "4.2 AN APPROXIMATE POSTERIOR FOR THE DEEP INVERSE WISHART PROCESS", "text": "Choosing an appropriate and effective form for variational approximate posteriors is usually a difficult research problem. Here, we take inspiration from Ober & Aitchison (2020) by exploiting the fact that the inverse-Wishart distribution is the conjugate prior for the covariance matrix of a multivariate Gaussian. In particular, if we consider an inverse-Wishart prior over Σ ∈ RP×P with mean δΣ0, which forms the covariance of Gaussian-distributed matrix, V ∈ RP×P , consisting of columns vλ,\nP (Σ) =W−1 (Σ; δΣ0, P + 1 + δ) , (18a) P (V|Σ) = ∏NV λ=1N (vλ; 0,Σ) , (18b) P (Σ|V) =W−1 ( Σ; δΣ0 + VV T , P + 1 + δ +NV ) . 
(18c)\nInspired by this exact posterior that is available in simple models, we choose the approximate posterior in our model to be,\nQ (Ω) =W−1 ( Ω; δ1I + V1V T 1 , δ1 + γ1 + (N0 + 1) ) , (19a)\nQ (G`|G`−1) =W−1 ( G`; δ`K (G`−1) + V`V T ` , δ` + γ` + (P + 1) ) , (19b)\nQ (FL+1|GL) = ∏NL+1 λ=1 N ( fL+1λ ; ΣλΛλvλ,Σλ ) , where Σλ = ( K−1 (GL) + Λλ )−1 ,\n(19c)\nand where V1 is a learned N0 ×N0 matrix, {V`}L`=2 are P × P learned matrices and {γ`}L`=1 are learned non-negative real numbers. For more details about the input layer, see App. F. At the output layer, we take inspiration from the global inducing approximate posterior for DGPs from Ober & Aitchison (2020), with learned parameters being vectors, vλ, and positive definite matrices, Λλ (see App. G).\nIn summary, the prior has parameters {δ`}L`=1 (which also appears in the approximate posterior), and the posterior has parameters {V`}L`=1 and {γ`}L`=1 for the inverse-Wishart hidden layers, and {vλ}NL+1λ=1 and {Λλ} NL+1 λ=1 at the output. In all our experiments, we optimize all five parameters ({δ`,V`, γ`}L`=1) and ({vλ,Λλ} NL+1 λ=1 ), and in addition, for inducing-point methods, we also optimize a single set of “global” inducing inputs, Xi ∈ RPi×N0 , which are defined only at the input layer." }, { "heading": "4.3 DOUBLY STOCHASTIC INDUCING-POINT VARIATIONAL INFERENCE IN DEEP INVERSE WISHART PROCESSES", "text": "For efficient inference in high-dimensional problems, we take inspiration from the DGP literature (Salimbeni & Deisenroth, 2017) by considering doubly-stochastic inducing-point deep inverse Wishart processes. We begin by decomposing all variables into inducing and training (or test) points Xt ∈ RPt×N0 ,\nX = ( Xi Xt ) , FL+1 = ( FL+1i FL+1t ) , G` = ( G`ii G ` it G`ti G ` tt ) , (20)\nwhere e.g. G`ii is Pi × Pi and G`it is Pi × Pt where Pi is the number of inducing points, and Pt is the number of testing/training points. Note that Ω does not decompose as it is N0×N0. 
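As a numerical sanity check on the approximate posterior family above, the sketch below (numpy only; integer δ and γ for simplicity, and sample_inv_wishart is a helper we define here, not a library call) verifies by Monte Carlo that the mean of Eq. (19b) is (δK + VVᵀ)/(δ + γ), so that V and γ give control over the posterior mean and variance independently of the prior:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_inv_wishart(Psi, nu, rng):
    # If W ~ W(Psi^{-1}, nu) then W^{-1} ~ W^{-1}(Psi, nu) (integer nu for simplicity).
    L = np.linalg.cholesky(np.linalg.inv(Psi))
    A = L @ rng.normal(size=(Psi.shape[0], nu))   # columns ~ N(0, Psi^{-1})
    return np.linalg.inv(A @ A.T)

P, delta, gamma = 3, 10, 10
K = np.eye(P) + 0.3                  # stand-in for K(G_{l-1})
V = rng.normal(size=(P, P))

Psi = delta * K + V @ V.T            # scale matrix of Eq. (19b)
nu = delta + gamma + P + 1           # degrees of freedom of Eq. (19b)

samples = [sample_inv_wishart(Psi, nu, rng) for _ in range(20000)]
emp_mean = np.mean(samples, axis=0)
theory_mean = Psi / (nu - P - 1)     # = (delta K + V V^T) / (delta + gamma)
```

The inverse-Wishart mean formula Ψ/(ν − P − 1) used here is standard; the point is that γ enters only the degrees of freedom, shrinking the variance without changing the mean.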
The full ELBO including latent variables for all the inducing and training points is,\nL = E [ log P (Y|FL+1) + log P ( Ω, {G`}L`=2,FL+1|X )\nQ ( Ω, {G`}L`=2,FL+1|X\n) ] , (21)\nwhere the expectation is taken over Q ( Ω, {G`}L`=2,FL+1|X ) . The prior is given by combining all terms in Eq. (16) for both inducing and test/train inputs,\nP ( Ω, {G`}L`=2,FL+1|X ) = P (Ω) [∏L `=2 P (G`|G`−1) ] P (FL+1|GL) , (22)\nwhere the X-dependence enters on the right because G1 = 1N0 XΩX T . Taking inspiration from Salimbeni & Deisenroth (2017), the full approximate posterior is the product of an approximate posterior over inducing points and the conditional prior for train/test points,\nQ ( Ω, {G`}L`=2,FL+1|X ) =\nQ ( Ω, {G`ii}L`=2,FL+1i |Xi ) P ( {G`it}L`=2, {G`tt}L`=2,FL+1t |Ω, {G`ii}L`=2,FL+1i ,X ) . (23)\nAnd the prior can be written in the same form,\nP ( Ω, {G`}L`=2,FL+1|X ) =\nP ( Ω, {G`ii}L`=2,FL+1i |Xi ) P ( {G`it}L`=2, {G`tt}L`=2,FL+1t |Ω, {G`ii}L`=2,FL+1i ,X ) . (24)\nWe discuss the second terms (the conditional prior) in Eq. (28). The first terms (the prior and approximate posteriors over inducing points), are given by combining terms in Eq. (16) and Eq. (19),\nP ( Ω, {G`ii}L`=2,FL+1i |Xi ) = P (Ω) [∏L `=2 P ( G`ii|G`−1ii )] P ( FL+1i |G L ii ) , (25) Q ( Ω, {G`ii}L`=2,FL+1i |Xi ) = Q (Ω) [∏L `=2 Q ( G`ii|G`−1ii )] Q ( FL+1i |G L ii ) . (26)\nSubstituting Eqs. (23–26) into the ELBO (Eq. 21), the conditional prior cancels and we obtain,\nL = E log P ( Y|FL+1t ) + log Q (Ω) [∏L `=2 Q ( G`ii|G `−1 ii )] Q ( FL+1i |GLii )\nP (Ω) [∏L `=2 P ( G`ii|G `−1 ii )] P ( FL+1i |GLii )\n . (27)\nImportantly, the first term is a summation across test/train datapoints, and the second term depends only on the inducing points, so as in Salimbeni & Deisenroth (2017) we can compute unbiased estimates of the expectation by taking only a minibatch of datapoints, and we never need to compute the density of the conditional prior in Eq. 
(28), we only need to be able to sample it.
Finally, to sample the test/training points, conditioned on the inducing points, we need to sample,
P ( {G`it}L`=2, {G`tt}L`=2,FL+1t |Ω, {G`ii}L`=2,FL+1i ,X ) =
P ( FL+1t |FL+1i ,GL )∏L `=2 P ( G`it,G ` tt|G`ii,G`−1 ) . (28)
The first distribution, P ( FL+1t |FL+1i ,GL ) , is a multivariate Gaussian, and can be evaluated using methods from the GP literature (Williams & Rasmussen, 2006; Salimbeni & Deisenroth, 2017). The difficulties arise for the inverse Wishart terms, P ( G`it,G ` tt|G`ii,G`−1 ) . To sample this distribution, note that samples from the joint over inducing and train/test locations can be written, (
G`ii G ` it G`ti G ` tt
) ∼ W−1 (( Ψii Ψit Ψti Ψtt ) , δ` + Pi + Pt + 1 ) , where ( Ψii Ψit Ψti Ψtt ) = δ`K (G`−1) ,
(29)
and where Pi is the number of inducing inputs, and Pt is the number of train/test inputs. Defining the Schur complements,
G`tt·i = G ` tt −G`ti ( G`ii )−1 G`it, Ψtt·i = Ψtt −ΨtiΨ−1ii Ψit, (30)
we know that G`tt·i and ( G`ii )−1 G`it have distributions (Eaton, 1983),
G`tt·i ∣∣ G`ii,G`−1 ∼ W−1 (Ψtt·i, δ` + Pi + Pt + 1) , (31a)
( G`ii )−1 G`it ∣∣G`tt·i,G`ii,G`−1 ∼MN ( Ψ−1ii Ψit,Ψ −1 ii ,G ` tt·i ) , (31b)
where MN is the matrix normal. Now, G`it and G`tt can be recovered by algebraic manipulation. Finally, because of the doubly stochastic form for the objective, we do not need to draw jointly consistent samples across test points; instead (as in DGPs; Salimbeni & Deisenroth, 2017) we can independently sample each test point (App. I), which dramatically reduces computational complexity.
We optimize using standard reparameterised variational inference (Kingma & Welling, 2013; Rezende et al., 2014) (see Ober & Aitchison, 2020, for details on how to reparameterise samples from the Wishart)."
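The block decomposition used above can be illustrated with a short numpy sketch (all sizes and the kernel stand-in are arbitrary choices of ours). It draws one joint inverse-Wishart sample over inducing and test blocks (Eq. 29) and forms the Schur complements of Eq. (30):

```python
import numpy as np

rng = np.random.default_rng(2)

Pi, Pt = 4, 3                         # inducing and test/train points
P = Pi + Pt
delta = 5
nu = delta + Pi + Pt + 1              # degrees of freedom in Eq. (29)

K_prev = np.eye(P) + 0.2              # stand-in for K(G_{l-1})
Psi = delta * K_prev                  # joint scale matrix

# One joint inverse-Wishart sample, via an inverted Wishart draw.
A = np.linalg.cholesky(np.linalg.inv(Psi)) @ rng.normal(size=(P, nu))
G = np.linalg.inv(A @ A.T)

Gii, Git = G[:Pi, :Pi], G[:Pi, Pi:]
Gti, Gtt = G[Pi:, :Pi], G[Pi:, Pi:]

# Schur complement of the inducing block (Eq. 30); G_tt is recovered from it.
Gtt_i = Gtt - Gti @ np.linalg.inv(Gii) @ Git
```

The Schur complement of a positive-definite sample is itself positive definite, and G_tt is recovered exactly as G_tt·i + G_ti G_ii⁻¹ G_it, which is the "algebraic manipulation" referred to in the text.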
}, { "heading": "5 COMPUTATIONAL COMPLEXITY", "text": "As in non-deep GPs, the complexity is O(P^3) for time and O(P^2) for space for standard DKPs (the O(P^3) time dependencies emerge e.g. because of the inverses and determinants required for the inverse Wishart distributions). For DSVI, there is a P_i^3 time and P_i^2 space term for the inducing points, because the computations for inducing points are exactly the same as in the non-DSVI case. As we can treat each test/train point independently (App. I), the complexity for test/training points must scale linearly with P_t, and this term has P_i^2 time scaling, e.g. due to the matrix products in Eq. (30). Thus, the overall complexity for DSVI is O(P_i^3 + P_i^2 P_t) for time and O(P_i^2 + P_i P_t) for space, which is exactly the same as for non-deep inducing GPs. Thus, exactly as in non-deep inducing GPs, by using a small number of inducing points we are able to convert a cubic dependence on the number of input points into a linear dependence, which gives considerably better scaling.
Surprisingly, this is substantially better than standard DGPs. In standard DGPs, we allow the approximate posterior covariance for each feature to differ (Salimbeni & Deisenroth, 2017), in which case we are in essence doing standard inducing-GP inference over N hidden features, which gives a complexity of O(N P_i^3 + N P_i^2 P_t) for time and O(N P_i^2 + N P_i P_t) for space (Salimbeni & Deisenroth, 2017). It is possible to improve this complexity by restricting the approximate posterior to have the same covariance for each point (but this restriction can be expected to harm performance)." }, { "heading": "6 RESULTS", "text": "We began by comparing the performance of our deep inverse Wishart process (DIWP) against infinite Bayesian neural networks (known as the neural network Gaussian process or NNGP) and DGPs. To ensure sensible comparisons against the NNGP, we used a ReLU kernel in all models (Cho & Saul, 2009).
For all models, we used three layers (two hidden layers and one output layer), with three applications of the kernel. In each case, we used a learned bias and scale for each input feature, and trained for 8000 gradient steps with the Adam optimizer with 100 inducing points, a learning rate of 10−2 for the first 4000 steps and 10−3 for the final 4000 steps. For evaluation, we used 100 samples from the final iteration of gradient descent, and for each training step we used 10 samples in the smaller datasets (boston, concrete, energy, wine, yacht), and 1 sample in the larger datasets.\nWe found that DIWP usually gives better predictive performance and ELBOs. We expected DIWP to be better than (or the same as) the NNGP as the NNGP was a special case of our DIWP (sending δ` → ∞ sends the variance of the inverse Wishart to zero, so the model becomes equivalent to the NNGP). We found that the DGP performs poorly in comparison to DIWP and NNGPs, and even to past baselines on all datasets except protein (which is by far the largest). This is because we use a ReLU, rather than a squared exponential kernel, as in (Salimbeni & Deisenroth, 2017), and because we used a plain feedforward architecture for all models. In contrast, Salimbeni & Deisenroth (2017) found that good performance with DGPs on even UCI datasets required a complex architecture involving skip connections. Here, we used simple feedforward architectures, both to ensure a fair comparison to the other models, and to avoid the need for an architecture search. In addition, the inverse Wishart process is implicitly able to learn the network “width”, δ`, whereas in the DGPs, the width is fixed to be equal to the number of input features, following standard practice in the literature (e.g. Salimbeni & Deisenroth, 2017).\nNext, we considered fully-connected networks for small image classification datasets (MNIST and CIFAR-10). 
We used the same models as in the previous section, with the omission of learned bias and scaling of the inputs. Note that we do not expect these methods to perform well relative to\nstandard methods (e.g. CNNs) for these datasets, as we are using fully-connected networks with only 100 inducing points (whereas e.g. work in the NNGP literature uses the full 60, 000× 60, 000 covariance matrix). Nonetheless, as the architectures are carefully matched, it provides another opportunity to compare the performance of DIWPs, NNGPs and DGPs. Again, we found that DIWP usually gave statistically significant but perhaps underwhelming gains in predictive performance (except for CIFAR-10 test-log-likelihood, where DIWP lagged by only 0.01). Importantly, DIWP gives very large improvements in the ELBO, with gains of 0.09 against DGPs for MNIST and 0.08 for CIFAR-10 (App. K). For MNIST, remember that the ELBO must be negative (because both the log-likelihood for classification and the KL-divergence term give negative contributions), so the improvement from −0.301 to −0.214 represents a dramatic change." }, { "heading": "7 RELATED WORK", "text": "Our first contribution was the observation that DGPs with isotropic kernels can be written as deep Wishart processes as the kernel depends only on the Gram matrix. We then gave similar observations for neural networks (App. C.1), infinite neural networks (App. C.2) and infinite network with bottlenecks (App. C.3, also see Aitchison, 2019). These observations motivated us to consider the deep inverse Wishart process prior, which is a novel combination of two pre-existing elements: nonlinear transformations of the kernel (e.g. Cho & Saul, 2009) and inverse Wishart priors over kernels (e.g. Shah et al., 2014). 
Deep nonlinear transformations of the kernel have been used in the infinite neural network literature (Lee et al., 2017; Matthews et al., 2018) where they form deterministic, parameter-free kernels that do not have any flexibility in the lower-layers (Aitchison, 2019). Likewise, inverse-Wishart distributions have been suggested as priors over covariance matrices (Shah et al., 2014), but they considered a model without nonlinear transformations of the kernel. Surprisingly, without these nonlinear transformations, the inverse Wishart prior becomes equivalent to simply scaling the covariance with a scalar random variable (App. L; Shah et al., 2014). Further linear (inverse) Wishart processes have been used in the financial domain to model how the volatility of asset prices changes over time (Philipov & Glickman, 2006b;a; Asai & McAleer, 2009; Gourieroux & Sufana, 2010; Wilson & Ghahramani, 2010; Heaukulani & van der Wilk, 2019). Importantly, inference in these dynamical (inverse) Wishart processes is often performed by assuming fixed, integer degrees of freedom, and working with underlying Gaussian distributed features. This approach allows one to leverage standard GP techniques (e.g. Kandemir & Hamprecht, 2015; Heaukulani & van der Wilk, 2019), but it is not possible to optimize the degrees of freedom and the posterior over these features usually has rotational symmetries (App. D.2) that are not captured by standard variational posteriors. In contrast, we give a novel doubly-stochastic variational inducing point inference method that operates purely on Gram matrices and thus avoids needing to capture these symmetries." }, { "heading": "8 CONCLUSIONS", "text": "We proposed deep kernel processes which combine nonlinear transformations of the Gram matrix with sampling from matrix-variate distributions such as the inverse Wishart. We showed that DGPs, BNNs (App. C.1), infinite BNNs (App. C.2) and infinite BNNs with bottlenecks (App. C.3) are all instances of DKPs. 
We defined a new family of deep inverse Wishart processes, and gave a novel doubly-stochastic inducing point variational inference scheme that works purely in the space of Gram matrices. DIWP performed better than NNGPs and DGPs on UCI, MNIST and CIFAR-10 benchmarks." }, { "heading": "A DKPS ARE KERNEL PROCESSES", "text": "We define a generic DKP to be K(K), for a random matrix K ∈ RP×P . For instance, we could take,
K(K) =W (K, N) or K(K) =W−1 (K, δ + (P + 1)) , (32)
where N is a positive integer and δ is a positive real number. A deep kernel process, D, is the composition of two (or more) underlying kernel processes, K1 and K2,
G1 ∼ K1(K), G2 ∼ K2(G1), (33a)
G2 ∼ D(K). (33b)
We define K∗, G∗1 and G∗2 as principal submatrices of K, G1 and G2 respectively, dropping the same rows and columns. To establish that D is consistent under marginalisation, we use the consistency under marginalisation of K1 and K2,
G∗1 ∼ K1(K∗), G∗2 ∼ K2(G∗1), (34a)
and the definition of D as the composition of K1 and K2 (Eq. 33),
G∗2 ∼ D(K∗). (34b)
D is thus consistent under marginalisation, and hence is a kernel process.
Further, note that we can consider K to be a deterministic distribution that gives mass to only a single G. In that case, K can be thought of as a deterministic function which must satisfy a corresponding consistency property,
G = K(K), G∗ = K(K∗), (35)
and this is indeed satisfied by all deterministic transformations of kernels considered here. In practical terms, as long as G is always a valid kernel, it is sufficient for the elements of Gi≠j to depend only on Kij , Kii and Kjj and for Gii to depend only on Kii, which is satisfied by e.g. the squared exponential kernel (Eq. 15) and by the ReLU kernel (Cho & Saul, 2009)." }, { "heading": "B THE FIRST LAYER OF OUR DEEP GP AS BAYESIAN INFERENCE OVER A GENERALISED LENGTHSCALE", "text": "In our deep GP architecture, we first sample F1 ∈ RP×N1 from a Gaussian with covariance K0 = 1N0 XX T (Eq. 4a).
This might seem odd, as the usual deep GP involves passing the input, X ∈ RP×N0 , directly to the kernel function. However, in the standard deep GP framework, the kernel (e.g. a squared exponential kernel) has lengthscale hyperparameters which can be inferred using Bayesian inference. In particular,
kparam( 1√ N0 xi, 1√ N0 xj) = exp ( − 12N0 (xi − xj) Ω (xi − xj) T ) , (36)
where kparam is a new squared exponential kernel that explicitly includes hyperparameters Ω ∈ RN0×N0 , and where xi is the ith row of X. Typically, in deep GPs, the parameter, Ω, is diagonal, and the diagonal elements correspond to the inverse square of the lengthscale, li (i.e. Ωii = 1/l2i ). However, in many cases it may be useful to have a non-diagonal scaling. For instance, we could use,
Ω ∼ W ( 1N1 I, N1 ) , (37)
which corresponds to,
Ω = WWT , where Wiλ ∼ N ( 0, 1N1 ) , W ∈ RN0×N1 . (38)
Under our approach, we sample F = F1 from Eq. (4b), so F can be written as,
F = XW, fi = xiW, (39)
where fi is the ith row of F. Putting this into a squared exponential kernel without a lengthscale parameter,
k( 1√ N0 fi, 1√ N0 fj) = exp ( − 12N0 (fi − fj) (fi − fj) T ) ,
= exp ( − 12N0 (xiW − xjW) (xiW − xjW) T ) ,
= exp ( − 12N0 (xi − xj) WW T (xi − xj)T ) ,
= exp ( − 12N0 (xi − xj) Ω (xi − xj) T ) ,
= kparam( 1√ N0 xi, 1√ N0 xj). (40)
We find that a parameter-free squared exponential kernel applied to F is equivalent to a squared exponential kernel with generalised lengthscale hyperparameters applied to the input." }, { "heading": "C BNNS AS DEEP KERNEL PROCESSES", "text": "Here we show that standard, finite BNNs, infinite BNNs and infinite BNNs with bottlenecks can be understood as deep kernel processes.
C.1 STANDARD FINITE BNNS (AND GENERAL DGPS)
Standard, finite BNNs are deep kernel processes, albeit ones which do not admit an analytic expression for the probability density. In particular, the prior for a standard Bayesian neural network (Fig.
3 top) is,
P (W`) = ∏N` λ=1N ( w`λ; 0, I/N`−1 ) , W` ∈ RN`−1×N` , (41a)
F` = { XW1 for ` = 1, φ (F`−1) W` otherwise, F` ∈ RP×N` , (41b)
where w`λ is the λth column of W`. In the neural-network case, φ is a pointwise nonlinearity such as a ReLU. Integrating out the weights, the features, F`, become Gaussian distributed, as they depend linearly on the Gaussian distributed weights, W`,
P (F`|F`−1) = ∏N` λ=1N ( f `λ; 0,K` ) = P (F`|K`) , (42)
where
K` = 1N`−1 φ(F`−1)φ T (F`−1). (43)
Crucially, F` depends on the previous layer activities, F`−1, only through the kernel, K`. As such, we could write a generative model as (Fig. 3 middle),
K` = { 1N0 XX T for ` = 1, 1N`−1 φ(F`−1)φ T (F`−1) otherwise, (44a)
P (F`|K`) = ∏N` λ=1N ( f `λ; 0,K` ) , (44b)
where we have explicitly included the kernel, K`, as a latent variable. This form highlights that BNNs are deep GPs, in the sense that the F`λ are Gaussian, with a kernel that depends on the activations from the previous layer. Indeed, note that any deep GP (i.e. including those with kernels that cannot be written as a function of the Gram matrix) has a kernel, K`, that is by definition a matrix that can be written as the outer product of a potentially infinite number of features, φ(F`), where we allow φ to be a much richer class of functions than the usual pointwise nonlinearities (Hofmann et al., 2008). We might now try to follow the approach we took above for deep GPs, and consider a Wishart-distributed Gram matrix, G` = 1N` F`F T ` . However, for BNNs we encounter an issue: we are not able to compute the kernel, K`, just using the Gram matrix, G`: we need the full set of features, F`.
Instead, we need an alternative approach to show that a neural network is a deep kernel process. In particular, after integrating out the weights, the resulting distribution is chain-structured (Fig. 3 middle), so in principle we can integrate out F` to obtain a distribution over K` conditioned on K`−1, giving the DKP model in Fig.
3 (bottom),
P (K`|K`−1) = ∫ dF`−1 δD ( K` − 1N`−1 φ(F`−1)φ T (F`−1) ) P (F`−1|K`−1) , (45)
where P (F`−1|K`−1) is given by Eq. (44b) and δD is the Dirac-delta function, representing the deterministic distribution, P (K`|F`−1) (Eq. 44a). Using this integral to write out the generative process only in terms of K` gives the deep kernel process in Fig. 3 (bottom). While this distribution exists in principle, it cannot be evaluated analytically. But we can explicitly evaluate the expected value of K` given K`−1 using results from Cho & Saul (2009). In particular, we take Eq. 44a, write out the matrix-multiplication explicitly as a series of vector outer products, and note that as f `λ is IID across λ, the empirical average is equal to the expectation of a single term, which is computed by Cho & Saul (2009),
E [K`+1|K`] = 1N` ∑N` λ=1 E [ φ(f `λ)φ T (f `λ)|K` ] = E [ φ(f `λ)φ T (f `λ)|K` ] ,
= ∫ df `λ N ( f `λ; 0,K` ) φ(f `λ)φ T (f `λ) ≡ K(K`). (46)
Finally, we define this expectation to be K(K`) in the case of NNs.
C.2 INFINITE NNS
We have found that for standard finite neural networks, we were not able to compute the distribution over K` conditioned on K`−1 (Eq. (45)). To resolve this issue, one approach is to consider the limit of an infinitely wide neural network. In this limit, K`+1 becomes a deterministic function of K`, as K`+1 can be written as the average of N` IID outer products, and as N` grows to infinity, the law of large numbers tells us that the average becomes equal to its expectation,
lim N`→∞ K`+1 = lim N`→∞ 1N` ∑N` λ=1 φ(f ` λ)φ T (f `λ) = E [ φ(f `λ)φ T (f `λ)|K` ] = K(K`). (47)
C.3 INFINITE NNS WITH BOTTLENECKS
In infinite NNs, the kernel is deterministic, meaning that there is no flexibility/variability, and hence no capability for representation learning (Aitchison, 2019). Here, we consider infinite networks with bottlenecks that combine the tractability of infinite networks with the flexibility of finite networks (Aitchison, 2019).
The trick is to separate flexible, finite linear “bottlenecks” from infinite-width nonlinearities. We keep the nonlinearity infinite in order to ensure that the output kernel is deterministic and can be computed using results from Cho & Saul (2009). In particular, we use finite-width\nF` ∈ RP×N` and infinite width F′` ∈ RP×M` , (we send M` to infinity while leaving N` finite), P (W`) = ∏N` λ=1N ( w`λ; 0, I/M`−1 ) M0 = N0, (48a)\nF` = { XW` if ` = 1, φ(F′`−1)W` otherwise,\n(48b)\nP (M`) = ∏M` λ=1N ( m`λ; 0, I/N` ) , (48c)\nF′` = F`M`. (48d) This generative process is given graphically in Fig. 4 (top).\nIntegrating over the expansion weights, M` ∈ RN`×M` , and the bottleneck weights, W` ∈ RM`−1×N` , the generative model (Fig. 4 second row) can be rewritten,\nK` =\n{ 1 N0\nXXT for ` = 1, 1\nM`−1 φ ( F′`−1 ) φT ( F′`−1 ) otherwise,\n(49a)\nP (F`|K`) = ∏N` λ=1N ( f `λ; 0,K` ) , (49b)\nG` = 1 N` F`F T ` , (49c)\nP (F′`|G`) = ∏M` λ=1N ( f ′`λ ; 0,G` ) . (49d)\nRemembering that K`+1 is the empirical mean of M` IID terms, as M` → ∞ it converges on its expectation\nlim M`→∞ K`+1 = lim M`→∞\n1 M` ∑N` λ=1φ ( f ′`λ ) φT ( f ′`λ ) = E [ φ(f ′`λ )φ T (f ′`λ )|G` ] = K(G`). (50)\nand we define the limit to be K(G`). Note if we use standard (e.g. ReLU) nonlinearities, we can use results from Cho & Saul (2009) to compute K(G`). Thus, we get the following generative process,\nK` =\n{ 1 N0\nXXT for ` = 1, K(G`−1) otherwise,\n(51a)\nP (G`) =W ( G`;\n1 N` K`, N`\n) . (51b)\nFinally, eliminating the deterministic kernels, K`, from the model, we obtain exactly the deep GP generative model in Eq. 6 (Fig. C.3 fourth row)." }, { "heading": "D STANDARD APPROXIMATE POSTERIORS OVER FEATURES AND WEIGHTS FAIL TO CAPTURE SYMMETRIES", "text": "We have shown that it is possible to represent DGPs and a variety of NNs as deep kernel processes. 
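The expectation kernels K(·) appearing in Eqs. (46) and (50) are available in closed form for a ReLU nonlinearity (the order-1 arc-cosine kernel of Cho & Saul, 2009). The sketch below (numpy; relu_kernel is our own helper) checks the closed form against a Monte-Carlo average of ReLU outer products:

```python
import numpy as np

rng = np.random.default_rng(3)

def relu_kernel(K):
    # E[relu(u) relu(v)] for (u, v) ~ N(0, K), applied elementwise:
    # (1/2pi) sqrt(K_ii K_jj) (sin t + (pi - t) cos t), t = arccos(K_ij / sqrt(K_ii K_jj)).
    d = np.sqrt(np.diag(K))
    c = np.clip(K / np.outer(d, d), -1.0, 1.0)
    t = np.arccos(c)
    return np.outer(d, d) * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2.0 * np.pi)

K = np.array([[2.0, 0.8],
              [0.8, 1.0]])
F = rng.multivariate_normal(np.zeros(2), K, size=200_000)
H = np.maximum(F, 0.0)                      # ReLU features
mc = H.T @ H / F.shape[0]                   # Monte-Carlo estimate of Eq. (46)
closed = relu_kernel(K)
```

On the diagonal, t = 0 and the formula reduces to K_ii/2 = E[relu(u)²], a quick consistency check on the closed form.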
Here, we argue that standard deep GP approximate posteriors are seriously flawed, and that working with deep kernel processes may alleviate these flaws.
In particular, we show that the true DGP posterior has rotational symmetries and that the true BNN posterior has permutation symmetries that are not captured by standard variational posteriors.
D.1 PERMUTATION SYMMETRIES IN DNN POSTERIORS OVER WEIGHTS
Permutation symmetries in neural network posteriors were known in classical work on Bayesian neural networks (e.g. MacKay, 1992). Here, we spell out the argument in full. Taking P to be a permutation matrix (i.e. a unitary matrix with PPT = I and with one 1 in every row and column), we have,
φ(F)P = φ(FP), (52)
i.e. permuting the input to a nonlinearity is equivalent to permuting its output. Expanding two steps of the recursion defined by Eq. (41b),
F` = φ(φ(F`−2)W`−1)W`, (53)
multiplying by the identity,
F` = φ(φ(F`−2)W`−1)PP TW`, (54)
where P ∈ RN`−1×N`−1 , applying Eq. (52),
F` = φ(φ(F`−2)W`−1P)P TW`, (55)
and defining permuted weights,
W′`−1 = W`−1P, W′` = P TW`, (56)
the output is the same under the original or permuted weights,
F` = φ(φ(F`−2)W′`−1)W′` = φ(φ(F`−2)W`−1)W`. (57)
Introducing a different permutation between every pair of layers, we get a more general symmetry,
W′1 = W1P1, (58a)
W′` = P T `−1W`P` for ` ∈ {2, . . . , L}, (58b)
W′L+1 = P T LWL+1, (58c)
where P` ∈ RN`×N` . As the output of the neural network is the same under any of these permutations, the likelihoods for original and permuted weights are equal,
P (Y|X,W1, . . . ,WL+1) = P ( Y|X,W′1, . . . ,W′L+1 ) , (59)
and as the prior over elements within a weight matrix is IID Gaussian (Eq. 41a), the prior probability density is equal under original and permuted weights,
P (W1, . . . ,WL+1) = P ( W′1, . . . ,W ′ L+1 ) . (60)
Thus, the joint probability is invariant to permutations, P (Y|X,W1, . . . ,WL+1) P (W1, . . . ,WL+1) = P ( Y|X,W′1, . . . ,W′L+1 ) P ( W′1, . . .
,W ′ L+1 ) ,
(61)
and applying Bayes theorem, the posterior is invariant to permutations,
P (W1, . . . ,WL+1|Y,X) = P ( W′1, . . . ,W ′ L+1|Y,X ) . (62)
Due in part to these permutation symmetries, the posterior distribution over weights is extremely complex and multimodal. Importantly, it is not possible to capture these symmetries using standard variational posteriors over weights, such as factorised posteriors, but it is not necessary to capture these symmetries if we work with Gram matrices and kernels, which are invariant to permutations (and other unitary transformations; Eq. 12).
D.2 ROTATIONAL SYMMETRIES IN DEEP GP POSTERIORS
To show that deep GP posteriors are invariant to unitary transformations, U` ∈ RN`×N` , where U`U T ` = I, we define transformed features, F′`,
F′` = F`U`. (63)
To evaluate P ( F′`|F′`−1 ) , we begin by substituting for F′`−1,
P ( F′`|F′`−1 ) = ∏N` λ=1N ( f ′`λ ; 0,K ( 1N`−1 F′`−1F ′T `−1 )) , (64)
= ∏N` λ=1N ( f ′`λ ; 0,K ( 1N`−1 F`−1U`−1U T `−1F T `−1 )) , (65)
= ∏N` λ=1N ( f ′`λ ; 0,K ( 1N`−1 F`−1F T `−1 )) , (66)
= P (F′`|F`−1) . (67)
To evaluate P (F′`|F`−1), we substitute for F′` in the explicit form for the multivariate Gaussian log probability density,
log P (F′`|F`−1) = − 12 Tr ( F′T` K −1 `−1F ′ ` ) + const, (68)
= − 12 Tr ( K−1`−1F ′ `F ′T ` ) + const, (69)
= − 12 Tr ( K−1`−1F`U`U T ` F T ` ) + const, (70)
= − 12 Tr ( K−1`−1F`F T ` ) + const, (71)
= log P (F`|F`−1) , (72)
where K`−1 = K ( 1N`−1 F`−1F T `−1 ) , and the constant depends only on F`−1. Combining these derivations, each of these conditionals is invariant to rotations of F` and F`−1,
P ( F′`|F′`−1 ) = P (F′`|F`−1) = P (F`|F`−1) . (73)
The same argument can straightforwardly be extended to the inputs, P (F1|X),
P (F′1|X) = P (F1|X) , (74)
and to the final probability density for the output activations, FL+1, which are not themselves transformed,
P (FL+1|F′L) = P (FL+1|FL) , (75)
Therefore, we have,
P (F′1, . . .
,F ′ L,FL+1,Y|X) = P (Y|FL+1) P (FL+1|F′L)\n( L∏\n`=2\nP ( F′`|F′`−1\n) )\nP (F′1|X) ,\n(76)\n= P (Y|FL+1) P (FL+1|FL)\n( L∏\n`=2\nP (F`|F`−1) ) P (F1|X) , (77)\n= P (F1, . . . ,FL,FL+1,Y|X) . (78) Therefore, applying Bayes theorem the posterior is invariant to rotations,\nP (F′1, . . . ,F ′ L,FL+1|X,Y) = P (F1, . . . ,FL,FL+1|X,Y) . (79)\nImportantly, these posterior symmetries are not captured by standard variational posteriors with non-zero means (e.g. Salimbeni & Deisenroth, 2017).\nD.3 THE TRUE POSTERIOR OVER FEATURES IN A DGP HAS ZERO MEAN We can use symmetry to show that the posterior of F` has zero mean. We begin by writing the expectation as an integral,\nE [F`|F`−1,F`+1] = ∫ dF F P (F`=F|F`−1,F`+1) . (80)\nChanging variables in the integral to F′ = −F, and noting that the absolute value of the Jacobian is 1, we have\n= ∫ dF′ (−F′) P (F`= (−F′) |F`−1,F`+1) , (81)\nusing the symmetry of the posterior,\n= ∫ dF′ (−F′) P (F`=F′|F`−1,F`+1) , (82)\n= −E [F`|F`−1,F`+1] , (83) the expectation is equal to minus itself, so it must be zero\nE [F`|F`−1,F`+1] = 0. (84)" }, { "heading": "E DIFFICULTIES WITH VI IN DEEP WISHART PROCESSES", "text": "The deep Wishart generative process is well-defined as long as we admit nonsingular Wishart distributions (Uhlig, 1994; Srivastava et al., 2003). The issue comes when we try to form a variational approximate posterior over low-rank positive definite matrices. This is typically the case because the number of datapoints, P is usually far larger than the number of features. In particular, the only convenient distribution over low-rank positive semidefinite matrices is the Wishart itself,\nQ (G`) =W ( G`;\n1 N` Ψ, N`\n) . (85)\nHowever, a key feature of most variational approximate posteriors is the ability to increase and decrease the variance, independent of other properties such as the mean, and in our case the rank of the matrix. 
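The permutation symmetry derived in App. D.1 (Eqs. 52–57) is easy to check numerically. The following sketch is my own illustration (not from the paper): it builds a small ReLU network, permutes the hidden units as in Eq. (56), and verifies that both the network output and the hidden-layer Gram matrix are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda F: np.maximum(F, 0.0)

P, N0, N1, N2 = 5, 4, 6, 3          # datapoints and layer widths (arbitrary)
X = rng.normal(size=(P, N0))
W1 = rng.normal(size=(N0, N1))
W2 = rng.normal(size=(N1, N2))

# Random permutation matrix over the N1 hidden units (Eq. 56).
perm = rng.permutation(N1)
Pmat = np.eye(N1)[:, perm]

out = relu(X @ W1) @ W2                            # original network
out_perm = relu(X @ (W1 @ Pmat)) @ (Pmat.T @ W2)   # permuted weights (Eq. 57)
assert np.allclose(out, out_perm)                  # output is identical

# The Gram matrix is invariant under the same permutation of features (Eq. 12).
F = relu(X @ W1)
G = F @ F.T / N1
G_perm = (F @ Pmat) @ (F @ Pmat).T / N1
assert np.allclose(G, G_perm)
```

This is exactly why working with Gram matrices side-steps the multimodality of weight-space posteriors: every permuted weight setting maps to the same Gram matrix.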
For a Wishart, the mean and variance are given by,\nE Q(G`) [G`] = Ψ, (86)\nV Q(G`)\n[ G`ij ] = 1N` ( Ψ2ij + ΨiiΨjj ) . (87)\nInitially, this may look fine: we can increase or decrease the variance by changing N`. However, remember that N` is the degrees of freedom, which controls the rank of the matrix, G`. As such, N` is fixed by the prior: the prior and approximate posterior must define distributions over matrices of the same rank. And once N` is fixed, we no longer have independent control over the variance.\nTo go about resolving this issue, we need to find a distribution over low-rank matrices with independent control of the mean and variance. The natural approach is to use a non-central Wishart, defined as the outer product of Gaussian-distributed vectors with non-zero means. While this distribution is easy to sample from and does give independent control over the rank, mean and variance, its probability density is prohibitively costly and complex to evaluate (Koev & Edelman, 2006)." }, { "heading": "F SINGULAR (INVERSE) WISHART PROCESSES AT THE INPUT LAYER", "text": "In almost all cases of interest, our the kernel functions K(G) return full-rank matrices, so we can use standard (inverse) Wishart distributions, which assume that the input matrix is full-rank. However, this is not true at the input layer as K0 = 1N0 XX\nT will often be low-rank. This requires us to use singular (inverse) Wishart distributions which in general are difficult to work with (Uhlig, 1994; Srivastava et al., 2003; Bodnar & Okhrin, 2008; Bodnar et al., 2016). As such, instead we exploit knowledge of the input features to work with a smaller, full-rank matrix, Ω ∈ RN0×N0 , where, remember, N0 is the number of input features in X. For a deep Wishart process,\n1 N0\nXΩXT = G1 ∼ W (\n1 N1 K0, N1\n) , where Ω ∼ W ( 1 N1 I, N1 ) , (88)\nand for a deep inverse Wishart process, 1 N0\nXΩXT = G1 ∼ W−1 (δ1K0, δ1 + P + 1) , where Ω ∼ W−1 (δ1I, δ1 +N0 + 1) . 
(89) Now, we are able to use the full-rank matrix, Ω rather than the low-rank matrix, G1 as the random variable for variational inference. For the approximate posterior over Ω, in a deep inverse Wishart process, we use\nQ (Ω) =W−1 ( δ1I + V1V T 1 , δ1 + γ1 + (N0 + 1) ) . (90)\nNote in the usual case where there are fewer inducing points than input features, then the matrix K0 will be full-rank, and we can work with G1 as the random variable as usual." }, { "heading": "G APPROXIMATE POSTERIORS OVER OUTPUT FEATURES", "text": "To define approximate posteriors over inducing outputs, we are inspired by global inducing point methods (Ober & Aitchison, 2020). In particular, we take the approximate posterior to be the prior, multiplied by a “pseudo-likelihood”,\nQ (FL+1|GL) ∝ P (FL+1|GL) ∏NL+1 λ=1 N ( vλ; f L+1 λ ,Λ −1 λ ) . (91)\nThis is valid both for global inducing inputs and (for small datasets) training inputs, and the key thing to remember is that in either case, for any given input (e.g. an MNIST handwritten 2), there is a desired output (e.g. the class-label “2”), and the top-layer global inducing outputs, vλ, express these desired outcomes. Substituting for the prior,\nQ (FL+1|GL) ∝ ∏NL+1 λ=1 N ( fL+1λ ; 0,K(GL) ) N ( vλ; f L+1 λ ,Λ −1 λ ) , (92)\nand computing this value gives the approximate posterior in the main text (Eq. 19)." }, { "heading": "H USING EIGENVALUES TO COMPARE DEEP WISHART, DEEP RESIDUAL WISHART AND INVERSE WISHART PRIORS", "text": "One might be concerned that the deep inverse Wishart processes in which we can easily perform inference are different to the deep Wishart processes corresponding to BNNs (Sec. C.1) and infinite NNs with bottlenecks (App. C.3). 
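The Wishart moments quoted in App. E (Eqs. 86–87) can be checked by Monte Carlo, using the standard construction of a Wishart sample as the scaled outer product of Gaussian features. The sketch below is my own illustration with arbitrary small sizes: it samples G ∼ W(Ψ/N, N) as (1/N)(LZ)(LZ)ᵀ with Ψ = LLᵀ and compares the empirical mean and variance with the closed forms.

```python
import numpy as np

rng = np.random.default_rng(1)
P, N, S = 3, 50, 20000             # matrix size, degrees of freedom, MC samples

A = rng.normal(size=(P, P))
Psi = A @ A.T + P * np.eye(P)      # a well-conditioned scale matrix
L = np.linalg.cholesky(Psi)

# G ~ W(Psi / N, N), sampled as (1/N) (L Z)(L Z)^T with Z standard normal.
Z = rng.normal(size=(S, P, N))
F = np.einsum('ij,sjn->sin', L, Z)
G = np.einsum('sin,sjn->sij', F, F) / N

mean_err = np.abs(G.mean(axis=0) - Psi).max()                   # Eq. (86): E[G] = Psi
var_pred = (Psi**2 + np.outer(np.diag(Psi), np.diag(Psi))) / N  # Eq. (87)
var_err = np.abs(G.var(axis=0) - var_pred).max()

assert mean_err < 0.05 * np.abs(Psi).max()
assert var_err < 0.10 * var_pred.max()
```

The check makes the coupling concrete: once the degrees of freedom N are fixed (here by the rank), the variance in Eq. (87) is fully determined by the mean Ψ, which is the obstacle for Wishart variational posteriors described above.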
To address these concerns, we begin by noting that the (inverse) Wishart priors can be written in terms of samples from the standard (inverse) Wishart\nG = LΩLT , G′ = LΩ′LT , (93)\nwhere K = LLT such that,\nΩ ∼ W (\n1 N I, N\n) , Ω′ ∼ W−1 (NI, λN) , (94)\nG ∼ W (\n1 NK, N\n) , G′ ∼ W−1 (NK, λN) . (95)\nNote that as the standard Wishart and inverse Wishart have uniform distributions over the eigenvectors (Shah et al., 2014), they differ only in the distribution over eigenvalues of Ω and Ω′. We plotted the eigenvalue histogram for samples from a Wishart distribution with N = P = 2000 (Fig. 5 top left). This corresponds to an IID Gaussian prior over weights, with 2000 features in the input and output layers. Notably, there are many very small eigenvalues, which are undesirable as they eliminate information present in the input. To eliminate these very small eigenvalues, a common approach is to use a ResNet-inspired architecture (which is done even in the deep GP literature, e.g. Salimbeni & Deisenroth, 2017). To understand the eigenvalues in a residual layer, we define a ResW distribution by taking the outer product of a weight matrix with itself,\nWWT = Ω′′ ∼ ResW (N,α) , (96) where the weight matrix is IID Gaussian, plus the identity matrix, with the identity matrix weighted as α,\nW = 1√ 1+α2 (√ 1 N ξ + αI ) , ξi,λ ∼ N (0, 1) . (97)\nWith α = 1, there are still many very small eigenvalues, but these disappear as α increases. We compared these distributions to inverse Wishart distributions (Fig. 5 bottom) with varying degrees of freedom. For all degrees of freedom, we found that inverse Wishart distributions do not produce very small eigenvalues, which would eliminate information. As such, these eigenvalue distributions resemble those for ResW with α larger than 1." }, { "heading": "I DOUBLY STOCHASTIC VARIATIONAL INFERENCE IN DEEP INVERSE WISHART PROCESSES", "text": "Due to the doubly stochastic results in Sec. 
4.3, we only need to compute the conditional distribution over a single test/train point (we do not need the joint distribution over a number of test points). As such, we can decompose G_ℓ and Ψ as,\nG_ℓ = ( G_ℓ,ii  g_ℓ,it ; g_ℓ,it^T  g_ℓ,tt ), Ψ = ( Ψ_ii  ψ_it ; ψ_it^T  ψ_tt ), (98)\nwhere G_ℓ,ii, Ψ_ii ∈ R^{Pi×Pi}, g_ℓ,it ∈ R^{Pi×1} and ψ_it ∈ R^{Pi×1} are column vectors, and g_ℓ,tt and ψ_tt are scalars. Taking the results in Eq. (31) to the univariate case,\ng_ℓ,tt·i = g_ℓ,tt − g_ℓ,it^T (G_ℓ,ii)^{−1} g_ℓ,it, ψ_tt·i = ψ_tt − ψ_it^T Ψ_ii^{−1} ψ_it. (99)\nAs g_ℓ,tt·i is univariate, its distribution becomes inverse gamma,\ng_ℓ,tt·i | G_ℓ,ii, G_{ℓ−1} ∼ InverseGamma( α = (δ_ℓ + P_t + P_i + 1)/2, β = ψ_tt·i/2 ). (100)\nAs g_ℓ,it is a vector rather than a matrix, its distribution becomes Gaussian,\n(G_ℓ,ii)^{−1} g_ℓ,it | g_ℓ,tt·i, G_ℓ,ii, G_{ℓ−1} ∼ N( Ψ_ii^{−1} ψ_it, g_ℓ,tt·i Ψ_ii^{−1} ). (101)" }, { "heading": "J SAMPLES FROM THE 1D PRIOR AND APPROXIMATE POSTERIOR", "text": "First, we drew samples from a one-layer (top) and two-layer (bottom) deep inverse Wishart process, with a squared-exponential kernel (Fig. 6). We found considerable differences in the function family corresponding to different prior samples of the top-layer Gram matrix, GL (panels). While differences across function classes in a one-layer IW process can be understood as equivalent to doing inference over a prior on the lengthscale, this is not true of the two-layer process, and to emphasise this, the panels for two-layer samples all have the same first-layer sample (equivalent to choosing a lengthscale), but different samples from the Gram matrix at the second layer. The two-layer deep IW process panels use the same, fixed input layer, so variability in the function class arises only from sampling G2.\nNext, we exploited the kernel flexibility of IW processes by training a one-layer deep IW model with a fixed kernel bandwidth on data generated from various bandwidths.
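The single-test-point conditionals of Eqs. (99)–(101) translate directly into a sampler. The sketch below is my own illustration (function name and test matrices are hypothetical): it computes the Schur complement, draws the scalar from the inverse gamma, draws the cross-covariance vector from the Gaussian, and confirms that the extended Gram matrix stays positive definite.

```python
import numpy as np

def sample_test_gram(G_ii, psi_ii, psi_it, psi_tt, delta, rng):
    """One sample of (g_it, g_tt) given the inducing block G_ii (Eqs. 98-101).

    G_ii            : (Pi, Pi) inducing-point Gram block.
    psi_ii/it/tt    : blocks of the inverse Wishart scale matrix Psi.
    delta           : degrees-of-freedom parameter of the layer.
    """
    Pi, Pt = G_ii.shape[0], 1                 # a single test point
    psi_tt_i = psi_tt - psi_it @ np.linalg.solve(psi_ii, psi_it)   # Schur complement

    # Eq. (100): the univariate inverse Wishart reduces to an inverse gamma;
    # InverseGamma(alpha, beta) is sampled as beta / Gamma(alpha).
    alpha = 0.5 * (delta + Pt + Pi + 1)
    g_tt_i = (0.5 * psi_tt_i) / rng.gamma(alpha)

    # Eq. (101): G_ii^{-1} g_it is Gaussian; multiply back through by G_ii.
    mean = np.linalg.solve(psi_ii, psi_it)
    cov = g_tt_i * np.linalg.inv(psi_ii)
    g_it = G_ii @ rng.multivariate_normal(mean, cov)

    # Undo the Schur complement in Eq. (99) to recover g_tt.
    g_tt = g_tt_i + g_it @ np.linalg.solve(G_ii, g_it)
    return g_it, g_tt

rng = np.random.default_rng(0)
Pi = 4
A = rng.normal(size=(Pi, Pi)); G_ii = A @ A.T + Pi * np.eye(Pi)
B = rng.normal(size=(Pi + 1, Pi + 1)); Psi = B @ B.T + (Pi + 1) * np.eye(Pi + 1)
g_it, g_tt = sample_test_gram(G_ii, Psi[:Pi, :Pi], Psi[:Pi, -1], Psi[-1, -1],
                              delta=3.0, rng=rng)

full = np.block([[G_ii, g_it[:, None]], [g_it[None, :], np.array([[g_tt]])]])
assert np.linalg.eigvalsh(full).min() > 0     # the extended Gram is positive definite
```

By construction the Schur complement g_tt·i is positive, so the sampled block extension always remains a valid Gram matrix.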
The first row in Figure 7 shows posterior samples from one-layer deep IW processes trained on different datasets. For each panel, we first sampled five full G1 matrices using Eq.(31a) and (31b). Then for each G1, we use Gaussian conditioning to get a posterior distribution on testing locations and drew one sample from the posterior plotted as a single line. Remarkably, these posterior samples exhibited wiggling behaviours that were consistent with training data even outside the training range, which highlighted the additional kernel flexibility in IW processes. On the other hand, when model bandwidth was fixed, samples from vanilla GPs with fixed bandwidth in the second row displayed almost identical shapes outside the training range across different sets of training data." }, { "heading": "K WHY WE CARE ABOUT THE ELBO", "text": "While we have shown that DIWP offers some benefits in predictive performance, it gives much more dramatic improvements in the ELBO. While we might think that predictive performance is the only goal, there are two reasons to believe that the ELBO itself is also an important metric. First, the ELBO is very closely related to PAC-Bayesian generalisation bounds (e.g. Germain et al., 2016). In particular, the bounds are generally written as the average training log-likelihood, plus the KL-divergence between the approximate posterior over parameters and the prior. This mirrors the standard form for the ELBO,\nL = E Q(z) [log P (x|z)]−DKL (Q (z) ||P (z)) , (102)\nwhere x is all the data (here, the inputs, X and outputs, Y), and z are all the latent variables. Remarkably, Germain et al. (e.g. 2016) present a bound on the test-log-likelihood that is exactly the ELBO per data point, up to additive constants. As such, in certain circumstances, optimizing the ELBO is equivalent to optimizing a PAC-Bayes bound on the test-log-likelihood. Similar results are available in Rivasplata et al. (2019). 
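The standard ELBO form in Eq. (102), and its rewriting as the model evidence minus the KL to the true posterior, can be verified exactly in a conjugate toy model. The following sketch is my own example (not from the paper): a 1-D Gaussian model, z ∼ N(0, 1) and x | z ∼ N(z, 1), where the prior, posterior N(x/2, 1/2), and evidence N(x; 0, 2) are all in closed form.

```python
import numpy as np

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1); observe a single x.
x = 1.3
m, s = 0.7, 0.6                      # arbitrary variational q(z) = N(m, s^2)

def kl_gauss(m0, v0, m1, v1):        # KL( N(m0, v0) || N(m1, v1) )
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1)**2) / v1 - 1.0)

# Eq. (102): ELBO = E_q[log p(x|z)] - KL(q || prior), all in closed form.
exp_loglik = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m)**2 + s**2)
elbo = exp_loglik - kl_gauss(m, s**2, 0.0, 1.0)

# Alternative form: ELBO = log p(x) - KL(q || posterior).
log_evidence = -0.5 * np.log(2 * np.pi * 2.0) - x**2 / 4.0
elbo_alt = log_evidence - kl_gauss(m, s**2, x / 2.0, 0.5)

assert np.isclose(elbo, elbo_alt)    # the two forms agree exactly
assert elbo <= log_evidence          # ELBO lower-bounds the evidence
```

The final assertion is the point made in the text: for a fixed generative model the ELBO differs from log P(x) only by the KL between the approximate and true posteriors, so maximising it drives the approximate posterior toward the true one.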
Second, we can write down an alternative form for the ELBO as the model evidence, minus the KL-divergence between the approximate and true posterior,\nL = log P (x)−DKL (Q (z) ||P (z|x)) ≤ log P (x) . (103)\nAs such, for a fixed generative model, and hence a fixed value of the model evidence, log P (x), the ELBO measures the closeness of the variational approximate posterior, Q (z) and the true posterior, P (z|x). As we are trying to perform Bayesian inference, our goal should be to make the approximate posterior as close as possible to the true posterior. If, for instance, we can set Q (z) to give better predictive performance, but be further from the true posterior, then that is fine in certain settings, but not when the goal is inference. Obviously, it is desirable for the true and approximate posterior to be as close as possible, which corresponds to larger values of L (indeed, when the approximate posterior equals the true posterior, the KL-divergence is zero, and L = log P (x) )." }, { "heading": "L DIFFERENCES WITH SHAH ET AL. (2014)", "text": "For a one-layer deep inverse Wishart process, using our definition in Eq. (16)\nK0 = 1 N0 XXT , (104a)\nP (G1|K0) =W−1 (δ1K0, δ1 + (P + 1)) , (104b) P (yλ|K1) = N (yλ; 0,K (G1)) . (104c)\nImportantly, we do the nonlinear kernel transformation after sampling the inverse Wishart, so the inverse-Wishart sample acts as a generalised lengthscale hyperparameter (App. B), and hence dramatically changes the function family.\nIn contrast, for Shah et al. (2014), the nonlinear kernel is computed before, the inverse Wishart is sampled, and the inverse Wishart sample is used directly as the covariance for the Gaussian,\nK0 = K (\n1 N0\nXXT ) , (105a)\nP (G1|K0) =W−1 (δ1K0, δ1 + (P + 1)) , (105b) P (yλ|K1) = N (yλ; 0,G1) . (105c)\nThis difference in ordering, and in particular, the lack of a nonlinear kernel transformation between the inverse-Wishart and the output is why Shah et al. 
(2014) were able to find trivial results in their model (that it is equivalent to multiplying the covariance by a random scale)." } ]
2020
null
SP:cc84a9b9b02da8079787f6e5de7e1b83d95e8d5f
[ "The paper proposed a transfer learning setting where the target domain varies/evolves over time and the source domain is considered static. The paper uses C-divergence to measure label-dependent domain discrepancy between source/previous target domain and the current target domain and provided a theoretical bound. The paper also used supervised VAE for CONTE algorithm and included C-divergence as a part of the objective function.", "This paper studies how to transfer the information in the static source domain to the time-evolving target domain. This paper proposes a domain discrepancy measure and an algorithm for continuous transfer learning. The results seem to be interesting and the problem this paper studies is important. However, the domain rate in the main results and algorithm could be easily generalized which can make the results more broadly applicable. Moreover, it needs more clarification about the motivation of using the C-divergence measure in the time-evolving target domain." ]
Transfer learning has been successfully applied across many high-impact applications. However, most existing work focuses on the static transfer learning setting, and very little is devoted to modeling the time evolving target domain, such as the online reviews for movies. To bridge this gap, in this paper, we focus on the continuous transfer learning setting with a time evolving target domain. One major challenge associated with continuous transfer learning is the time evolving relatedness of the source domain and the current target domain as the target domain evolves over time. To address this challenge, we first derive a generic generalization error bound on the current target domain with flexible domain discrepancy measures. Furthermore, a novel label-informed C-divergence is proposed to measure the shift of joint data distributions (over input features and output labels) across domains. It could be utilized to instantiate a tighter error upper bound in the continuous transfer learning setting, thus motivating us to develop an adversarial Variational Auto-encoder algorithm named CONTE by minimizing the C-divergence based error upper bound. Extensive experiments on various data sets demonstrate the effectiveness of our CONTE algorithm.
[ { "affiliations": [], "name": "DDTTtt DDTTnn" } ]
[ { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine Learning,", "year": 2010 }, { "authors": [ "Andreea Bobu", "Eric Tzeng", "Judy Hoffman", "Trevor Darrell" ], "title": "Adapting to continuously shifting domains", "venue": "In International Conference on Learning Representations Workshop,", "year": 2018 }, { "authors": [ "Xinyang Chen", "Sinan Wang", "Mingsheng Long", "Jianmin Wang" ], "title": "Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Judy Hoffman", "Trevor Darrell", "Kate Saenko" ], "title": "Continuous manifold based adaptation for evolving visual domains", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ 
"Judy Hoffman", "Mehryar Mohri", "Ningshan Zhang" ], "title": "Algorithms and theory for multiple-source adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yunhun Jang", "Hankook Lee", "Sung Ju Hwang", "Jinwoo Shin" ], "title": "Learning what and where to transfer", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Fredrik D Johansson", "Rajesh Ranganath", "David Sontag" ], "title": "Support and invertibility in domaininvariant representations", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael I Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Mingsheng Long", "Han Zhu", "Jianmin Wang", "Michael I Jordan" ], "title": "Deep transfer learning with joint adaptation networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Mingsheng Long", "Zhangjie Cao", "Jianmin Wang", "Michael I Jordan" ], "title": "Conditional adversarial domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation: Learning bounds and algorithms", "venue": "In Proceedings of the 22nd Annual Conference on Learning Theory,", "year": 2009 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2009 }, { "authors": [ "German 
I Parisi", "Ronald Kemker", "Jose L Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Michael T Rosenstein", "Zvika Marx", "Leslie Pack Kaelbling", "Thomas G Dietterich" ], "title": "To transfer or not to transfer", "venue": "In NIPS 2005 Workshop on Transfer Learning,", "year": 2005 }, { "authors": [ "Kuniaki Saito", "Kohei Watanabe", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Maximum classifier discrepancy for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jian Shen", "Yanru Qu", "Weinan Zhang", "Yong Yu" ], "title": "Wasserstein distance guided representation learning for domain adaptation", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Hidetoshi Shimodaira" ], "title": "Improving predictive inference under covariate shift by weighting the loglikelihood function", "venue": "Journal of Statistical Planning and Inference,", "year": 2000 }, { "authors": [ "Baochen Sun", "Kate Saenko" ], "title": "Deep coral: Correlation alignment for deep domain adaptation", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Riccardo Volpi", "Pietro Morerio", "Silvio Savarese", "Vittorio Murino" ], "title": "Adversarial feature augmentation for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zirui Wang", "Zihang Dai", "Barnabás Póczos", "Jaime Carbonell" ], "title": "Characterizing and 
avoiding negative transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Junfeng Wen", "Russell Greiner", "Dale Schuurmans" ], "title": "Domain aggregation networks for multisource domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yifan Wu", "Ezra Winston", "Divyansh Kaushik", "Zachary Lipton" ], "title": "Domain adaptation with asymmetrically-relaxed distribution alignment", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Wei Ying", "Yu Zhang", "Junzhou Huang", "Qiang Yang" ], "title": "Transfer learning via learning to transfer", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yuchen Zhang", "Tianle Liu", "Mingsheng Long", "Michael I Jordan" ], "title": "Bridging theory and algorithm for domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Han Zhao", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Joao P Costeira", "Geoffrey J Gordon" ], "title": "Adversarial multiple source domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Han Zhao", "Remi Tachet des Combes", "Kun Zhang", "Geoffrey J Gordon" ], "title": "On learning invariant representation for domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nFigure 1: Illustration of continuous transfer learning. It learns a predictive function in D_{T_t} using knowledge from both the source domain D_S and the historical target domains D_{T_i} (i = 1, · · · , t − 1). Directly transferring from the source domain D_S to the target domain D_{T_t} might lead to negative transfer with undesirable predictive performance.\nTransfer learning has achieved significant success across multiple high-impact application domains (Pan & Yang, 2009). Compared to conventional machine learning methods, which assume that training and test data follow the same data distribution, transfer learning allows us to learn the target domain with limited label information by leveraging a related source domain with abundant label information (Ying et al., 2018). However, in many real applications, the target domain is constantly evolving over time.\nFor example, online movie reviews change over the years: some famous movies were not well received by the mainstream audience when they were first released, but became famous only years later (e.g., Citizen Kane, Fight Club, and The Shawshank Redemption); whereas online book reviews typically do not have this type of dynamics. It is challenging to transfer knowledge from the static source domain (e.g., the book reviews) to the time evolving target domain (e.g., the movie reviews). Therefore, in this paper, we study the transfer learning setting with a static source domain and a continuously evolving target domain (see Figure 1), which has not attracted much attention from the research community and yet is commonly seen across many real applications. The unique challenge for continuous transfer learning lies in the time evolving nature of the task relatedness between the static source domain and the time evolving target domain.
Although the change in the target data distribution between consecutive time stamps might be small, over time the cumulative change in the target domain might even lead to negative transfer (Rosenstein et al., 2005).\nExisting theoretical analyses of transfer learning (Ben-David et al., 2010; Mansour et al., 2009) showed that the target error is typically bounded by the source error, the domain discrepancy of marginal data distributions, and the difference of labeling functions. However, it has been observed (Zhao et al., 2019; Wu et al., 2019) that marginal feature distribution alignment might not guarantee the minimization of the target error in real world scenarios. This indicates that, in the context of continuous transfer learning, marginal feature distribution alignment would lead to a sub-optimal solution (or even negative transfer) with undesirable predictive performance when directly transferring from D_S to the target domain D_{T_t} at the t-th time stamp. This paper aims to bridge the gap in terms of both the theoretical analysis and the empirical solutions for a target domain with a time evolving distribution, which leads to a novel continuous transfer learning algorithm as well as a characterization of negative transfer. The main contributions of this paper are summarized as follows: (1) We derive a generic error bound for the continuous transfer learning setting with flexible domain divergence measures; (2) We propose a label-informed domain discrepancy measure (C-divergence) with its empirical estimate, which instantiates a tighter error bound for the continuous transfer learning setting; (3) Based on the proposed C-divergence, we design a novel adversarial Variational Auto-encoder algorithm (CONTE) for continuous transfer learning; (4) Extensive experimental results on various data sets verify the effectiveness of the proposed CONTE algorithm.\nThe rest of the paper is organized as follows. Section 2 introduces the notation and our problem definition.
We derive a generic error bound for the continuous transfer learning setting in Section 3. Then we propose a novel C-divergence in Section 4, followed by an instantiated error bound and a novel continuous transfer learning algorithm in Section 5. The experimental results are provided in Section 6. We summarize the related work in Section 7 and conclude the paper in Section 8." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we introduce the notation and problem definition of continuous transfer learning." }, { "heading": "2.1 NOTATION", "text": "We use X and Y to denote the input space and label space. Let D_S and D_T denote the source and target domains with data distributions p_S(x, y) and p_T(x, y) over X × Y, respectively. Let H be a hypothesis class on X, where a hypothesis is a function h : X → Y. The notation is summarized in Table 3 in the appendices." }, { "heading": "2.2 PROBLEM DEFINITION", "text": "Transfer learning (Pan & Yang, 2009) refers to the knowledge transfer from a source domain to a target domain such that the prediction performance on the target domain can be significantly improved as compared to learning from the target domain alone. However, in some applications, the target domain changes over time, and hence so does the relatedness between the source and target domains. This motivates us to consider the transfer learning setting with a time evolving target domain, which is much less studied than the static transfer learning setting. We formally define the continuous transfer learning problem as follows. Definition 2.1.
(Continuous Transfer Learning) Given a source domain D_S (available at time stamp j = 1) and a time evolving target domain {D_{T_j}}_{j=1}^{n} with time stamp j, continuous transfer learning aims to improve the prediction function for the target domain D_{T_{t+1}} using the knowledge from the source domain D_S and the historical target domains D_{T_j} (j = 1, · · · , t).\n\nNotice that the source domain D_S can be considered a special initial domain for the time-evolving target domain. Therefore, for notational simplicity, we will use D_{T_0} to represent the source domain in this paper. We assume that there are m_{T_0} labeled source examples drawn independently from the source domain D_{T_0} and m_{T_j} labeled target examples drawn independently from the target domain D_{T_j} at time stamp j." }, { "heading": "3 A GENERIC ERROR BOUND", "text": "Given a static source domain and a time evolving target domain, continuous transfer learning aims to improve the target predictive function over D_{T_{t+1}} using the source domain and the historical target domains. We begin by considering the binary classification setting, i.e., Y = {0, 1}. The source error of a hypothesis h can be defined as follows: ε_{T_0}(h) = E_{(x,y)∼p_{T_0}(x,y)}[L(h(x), y)], where L(·, ·) is the loss function. Its empirical estimate using source labeled examples is denoted as ε̂_{T_0}(h). Similarly, we define the target error ε_{T_j}(h) and its empirical estimate ε̂_{T_j}(h) over the target distribution p_{T_j}(x, y) at time stamp j. A natural domain discrepancy measure over joint distributions on X × Y between features and class labels can be defined as follows:\nd_1(D_{T_0}, D_T) = sup_{Q∈Q} |Pr_{D_{T_0}}[Q] − Pr_{D_T}[Q]| (1)\nwhere Q is the set of measurable subsets under p_{T_0}(x, y) and p_T(x, y).¹ Then, the error bound for continuous transfer learning is given by the following theorem. Theorem 3.1. Assume the loss function L is bounded with 0 ≤ L ≤ M.
Given a source domain D_{T_0} and historical target domains {D_{T_i}}_{i=1}^{t}, for any h ∈ H, the target domain error ε_{T_{t+1}} on D_{T_{t+1}} is bounded as follows:\nε_{T_{t+1}}(h) ≤ (1/μ̄) [ Σ_{j=0}^{t} μ^{t−j} ε_{T_j}(h) + M Σ_{j=0}^{t} μ^{t−j} d_1(D_{T_j}, D_{T_{t+1}}) ]\nwhere μ ≥ 0 is the domain decay rate² indicating the importance of the source or historical target domains for D_{T_{t+1}}, and μ̄ = Σ_{j=0}^{t} μ^{t−j}.\n¹ Note that it is slightly different from the L1 or variation divergence in (Ben-David et al., 2010), which involves only the marginal distribution of features.\nRemark. In particular, we have the following arguments. (1) It is not tractable to accurately estimate d_1 from finite examples in real scenarios (Ben-David et al., 2010); (2) This error bound could be much tighter when considering other advanced domain discrepancy measures, e.g., the A-distance (Ben-David et al., 2007), the discrepancy distance (Mansour et al., 2009), etc. (3) There are two special cases: when μ = 0, the error bound on D_{T_{t+1}} is simply determined by the latest historical target data D_{T_t}; and if μ goes to infinity, D_{T_{t+1}} is determined by the source data D_{T_0} alone, because intuitively the coefficient μ^{t−j}/μ̄ of the historical target domain data D_{T_j} (j = 1, · · · , t) converges to zero.\nCorollary 3.2. With the assumptions in Theorem 3.1, and assuming that the loss function L is symmetric (i.e., L(y_1, y_2) = L(y_2, y_1) for y_1, y_2 ∈ Y) and obeys the triangle inequality:\n(1) if the A-distance (Ben-David et al., 2007) is adopted to measure the distribution shift, i.e., d_{H∆H}(D_{T_0}, D_T) = sup_{h,h'∈H} |Pr_{D_{T_0}}[h(x) ≠ h'(x)] − Pr_{D_T}[h(x) ≠ h'(x)]|, we have:\nε_{T_{t+1}}(h) ≤ (1/μ̄) [ Σ_{j=0}^{t} μ^{t−j} ε_{T_j}(h) + M Σ_{j=0}^{t} μ^{t−j} ( d_{H∆H}(D_{T_j}, D_{T_{t+1}}) + λ_j / M ) ]\nwhere λ_j = min_{h∈H} [ε_{T_j}(h) + ε_{T_{t+1}}(h)].
(2) if the discrepancy distance (Mansour et al., 2009) is adopted to measure the distribution shift, i.e., $d_{disc}(\mathcal{D}_{T_0},\mathcal{D}_{T}) = \max_{h,h'\in\mathcal{H}} \big|\mathbb{E}_{\mathcal{D}_{T_0}}[L(h(x), h'(x))] - \mathbb{E}_{\mathcal{D}_{T}}[L(h(x), h'(x))]\big|$, we have:

$$\epsilon_{T_{t+1}}(h) \le \frac{1}{\bar{\mu}}\left(\sum_{j=0}^{t}\mu^{t-j}\,\epsilon_{T_j}(h) + \sum_{j=0}^{t}\big(\mu^{t-j}\, d_{disc}(\mathcal{D}_{T_j},\mathcal{D}_{T_{t+1}}) + \Omega_j\big)\right)$$

where $\Omega_j = \mathbb{E}_{\mathcal{D}_{T_j}}[L(h^{*}_{j}(x), y)] + \mathbb{E}_{\mathcal{D}_{T_{t+1}}}[L(h^{*}_{j}(x), h^{*}_{t+1}(x))] + \mathbb{E}_{\mathcal{D}_{T_{t+1}}}[L(h^{*}_{t+1}(x), y)]$, and $h^{*}_{j} = \arg\min_{h\in\mathcal{H}} \epsilon_{T_j}(h)$ for $j = 0, \cdots, t, t+1$.

The aforementioned domain discrepancy measures mainly focus on the marginal distribution over input features and have inspired a line of practical transfer learning algorithms (Ganin et al., 2016; Chen et al., 2019). However, recent work (Wu et al., 2019; Zhao et al., 2019) observed that minimizing the discrepancy between marginal distributions cannot guarantee the success of transfer learning in real scenarios. We propose to address this problem by incorporating the label information into the domain discrepancy measure (see next section)." }, { "heading": "4 LABEL-INFORMED DOMAIN DISCREPANCY", "text": "In this section, we introduce a novel label-informed domain discrepancy measure between the source domain $\mathcal{D}_{T_0}$ and the target domain $\mathcal{D}_{T}$, its empirical estimate, and a transfer signature based on this measure to identify potential negative transfer. The use of this discrepancy measure in continuous transfer learning will be discussed in the next section.

4.1 C-DIVERGENCE

For a hypothesis $h \in \mathcal{H}$, we denote by $I(h)$ the subset of $\mathcal{X}$ such that $x \in I(h) \Leftrightarrow h(x) = 1$. In order to estimate the label-informed domain discrepancy from finite samples in practice, instead of Eq.
(1), we propose the following C-divergence between $\mathcal{D}_{T_0}$ and $\mathcal{D}_{T}$, taking into consideration the joint distribution over features and class labels:

$$d_C(\mathcal{D}_{T_0},\mathcal{D}_{T}) = \sup_{h\in\mathcal{H}} \Big|\Pr_{\mathcal{D}_{T_0}}\big[\{I(h), y = 1\} \cup \{\bar{I}(h), y = 0\}\big] - \Pr_{\mathcal{D}_{T}}\big[\{I(h), y = 1\} \cup \{\bar{I}(h), y = 0\}\big]\Big| \quad (2)$$

where $\bar{I}(h)$ is the complement of $I(h)$. (Footnote 2: In this case, we assume $\mu^0 = 1$ for any $\mu \ge 0$.)

We show that some existing domain discrepancy methods (e.g., Ben-David et al. (2007)) can be seen as special cases of this definition under the following relaxed covariate shift assumption.

Definition 4.1. (Relaxed Covariate Shift Assumption) The source and target domains satisfy the relaxed covariate shift assumption if for any $h \in \mathcal{H}$,

$$\Pr_{\mathcal{D}_{T_0}}[y \mid I(h)] = \Pr_{\mathcal{D}_{T}}[y \mid I(h)] = \Pr[y \mid I(h)] \quad (3)$$

Notice that this would be equivalent to the covariate shift assumption (Shimodaira, 2000; Johansson et al., 2019) when $I(h)$ consists of only one example for all $h \in \mathcal{H}$ (see Lemma A.6 for details).

Lemma 4.2. With the relaxed covariate shift assumption, for any $h \in \mathcal{H}$, we have:

$$d_C(\mathcal{D}_{T_0},\mathcal{D}_{T}) = \sup_{h\in\mathcal{H}}\Big(\big(\Pr_{\mathcal{D}_{T_0}}[I(h)] - \Pr_{\mathcal{D}_{T}}[I(h)]\big) \cdot S_h\Big) + \Pr_{\mathcal{D}_{T}}[y = 1] - \Pr_{\mathcal{D}_{T_0}}[y = 1]$$

where $S_h = \Pr[y = 1 \mid I(h)] - \Pr[y = 0 \mid I(h)]$.

Remark. From Lemma 4.2, we can see that in the special case where $S_h$ is a constant for all $h \in \mathcal{H}$ and $\Pr_{\mathcal{D}_{T}}[y = 1] = \Pr_{\mathcal{D}_{T_0}}[y = 1]$, the proposed C-divergence reduces to the $\mathcal{A}$-distance (Ben-David et al., 2007) defined on the marginal distribution of features. More generally, the C-divergence can be considered a weighted version of the $\mathcal{A}$-distance in which a hypothesis whose characteristic function has larger class separability (i.e., $|S_h|$) receives a higher weight.
Intuitively, compared to the $\mathcal{A}$-distance, the C-divergence pays less attention to class-inseparable regions of the input feature space, which provide irrelevant information for learning the prediction function in the target domain.

Moreover, the following theorem states that in the conventional transfer learning scenario with a static source domain and a static target domain, the target error is bounded in terms of the C-divergence across domains and the expected source error.

Theorem 4.3. Assume that the loss function $L$ is bounded, i.e., there exists a constant $M > 0$ such that $0 \le L \le M$. For a hypothesis $h \in \mathcal{H}$, we have the following bound:

$$\epsilon_T(h) \le \epsilon_{T_0}(h) + M \cdot d_C(\mathcal{D}_{T_0},\mathcal{D}_{T})$$

4.2 EMPIRICAL ESTIMATE OF C-DIVERGENCE

In practice, it is difficult to calculate the proposed C-divergence based on Eq. (2), as it involves the true underlying distributions. Therefore, we propose the following empirical estimate of the C-divergence between $\mathcal{D}_{T_0}$ and $\mathcal{D}_{T}$. Assuming that the hypothesis class $\mathcal{H}$ is symmetric (i.e., $1 - h \in \mathcal{H}$ if $h \in \mathcal{H}$), the empirical C-divergence is:

$$d_C(\hat{\mathcal{D}}_{T_0},\hat{\mathcal{D}}_{T}) = 1 - \min_{h\in\mathcal{H}}\left[\frac{1}{m_{T_0}}\sum_{(x,y):\,h(x)\ne y} \mathbb{I}\big[(x,y)\in\hat{\mathcal{D}}_{T_0}\big] + \frac{1}{m_{T}}\sum_{(x,y):\,h(x)= y} \mathbb{I}\big[(x,y)\in\hat{\mathcal{D}}_{T}\big]\right] \quad (4)$$

where $\hat{\mathcal{D}}_{T_0}$ and $\hat{\mathcal{D}}_{T}$ denote the source and target domains with finite samples, respectively, and $\mathbb{I}[a]$ is the binary indicator function, which is 1 if $a$ is true and 0 otherwise.

The following lemma provides an upper bound on the true C-divergence in terms of its empirical estimate.

Lemma 4.4. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$ over $m_{T_0}$ labeled source examples $\mathcal{B}_{T_0}$ and $m_{T}$ labeled target examples $\mathcal{B}_{T}$, we have:

$$d_C(\mathcal{D}_{T_0},\mathcal{D}_{T}) \le d_C(\hat{\mathcal{D}}_{T_0},\hat{\mathcal{D}}_{T}) + \big(\hat{\mathfrak{R}}_{\mathcal{B}_{T_0}}(\mathcal{L}_{\mathcal{H}}) + \hat{\mathfrak{R}}_{\mathcal{B}_{T}}(\mathcal{L}_{\mathcal{H}})\big) + 3\left(\sqrt{\frac{\log\frac{4}{\delta}}{2m_{T_0}}} + \sqrt{\frac{\log\frac{4}{\delta}}{2m_{T}}}\right)$$

where $\hat{\mathfrak{R}}_{\mathcal{B}}(\mathcal{L}_{\mathcal{H}})$ ($\mathcal{B} \in \{\mathcal{B}_{T_0}, \mathcal{B}_{T}\}$) denotes the empirical Rademacher complexity (Mansour et al., 2009) over $\mathcal{B}$, and $\mathcal{L}_{\mathcal{H}} = \{(x, y) \to \mathbb{I}[h(x) = y] : h \in \mathcal{H}\}$ is a class of functions mapping $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ to $\{0, 1\}$."
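The empirical estimate in Eq. (4) can be computed directly from finite labeled samples. The sketch below is a minimal illustration over a small finite set of candidate hypotheses (an assumption for readability; in the paper the minimization over $\mathcal{H}$ is carried out by training a classifier). Function names are ours.

```python
import numpy as np

def empirical_c_divergence(src, tgt, hypotheses):
    """Empirical C-divergence of Eq. (4): for each candidate hypothesis h,
    add the fraction of source examples that h misclassifies to the fraction
    of target examples that h classifies correctly, then return 1 minus the
    minimum of this sum over all hypotheses."""
    def score(h):
        src_err = np.mean([float(h(x) != y) for x, y in src])
        tgt_acc = np.mean([float(h(x) == y) for x, y in tgt])
        return src_err + tgt_acc
    return 1.0 - min(score(h) for h in hypotheses)
```

When the source and target samples are identical, every hypothesis yields `src_err + tgt_acc = 1`, so the estimate is 0; labelings that some hypothesis fits on the source while failing completely on the target push it toward 1.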
}, { "heading": "4.3 NEGATIVE TRANSFER CHARACTERIZATION", "text": "Informally, negative transfer is the situation where transferring knowledge from the source domain has a negative impact on the target learner (Wang et al., 2019): $\epsilon_T(A(\mathcal{D}_{T_0},\mathcal{D}_{T})) > \epsilon_T(A(\emptyset,\mathcal{D}_{T}))$, where $A$ is the learning algorithm, $\epsilon_T$ is the target error induced by algorithm $A$, and $\emptyset$ indicates that only the target data set is used for the target learner. In this paper, we define a transfer signature to measure the transferability from the source domain to the target domain as follows:

$$TS(\mathcal{D}_{T} \,\|\, \mathcal{D}_{T_0}) = \inf_{A\in\mathcal{G}} \big(\epsilon_T(A(\mathcal{D}_{T_0},\mathcal{D}_{T})) - \epsilon_T(A(\emptyset,\mathcal{D}_{T}))\big) \quad (5)$$

where $\mathcal{G}$ is the set of all learning algorithms. We say that the source domain knowledge is not transferable to the target domain when $TS(\mathcal{D}_{T} \,\|\, \mathcal{D}_{T_0}) > 0$. In particular, since $A(\mathcal{D}_{T_0},\mathcal{D}_{T})$ learns an optimal classifier using both source and target data, we can define $\epsilon_T(A(\mathcal{D}_{T_0},\mathcal{D}_{T})) = \epsilon_T(h^{*}_{\alpha})$, where $h^{*}_{\alpha} = \arg\min_{h\in\mathcal{H}(A)} \alpha\,\epsilon_T(h) + (1-\alpha)\,\epsilon_{T_0}(h)$ and $\mathcal{H}(A)$ is the hypothesis space induced by $A$. When we only consider the target domain with $\alpha = 1$, $\epsilon_T(A(\emptyset,\mathcal{D}_{T})) = \epsilon_T(h^{*}_{T})$, where $h^{*}_{T} = \arg\min_{h\in\mathcal{H}(A)} \epsilon_T(h)$. Then we have the following theorem regarding the transfer signature.

Theorem 4.5. Assuming the loss function $L$ is bounded with $0 \le L \le M$, we have

$$\epsilon_T(h^{*}_{\alpha}) \le \epsilon_T(h^{*}_{T}) + 2(1-\alpha)M\,d_C(\mathcal{D}_{T_0},\mathcal{D}_{T})$$

Furthermore,

$$TS(\mathcal{D}_{T} \,\|\, \mathcal{D}_{T_0}) \le 2(1-\alpha)M\,d_C(\mathcal{D}_{T_0},\mathcal{D}_{T})$$

Remark. We have the following observations: (1) A larger C-divergence between domains is often associated with a higher transfer signature, which indicates that negative transfer can be characterized using the proposed C-divergence; (2) Empirically, a larger amount of labeled target data can increase the value of $\alpha$, so that the learned classifier relies more on the target data, which is consistent with the observation in (Wang et al., 2019). One extreme case is $\alpha = 1$, where we have adequate labeled target examples for standard supervised learning on the target domain without transferring knowledge from the source domain."
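Theorem 4.5 gives a directly computable upper bound on the transfer signature. A small numeric sketch (the function name is ours; alpha, M, and d_C are the quantities defined in the theorem):

```python
def transfer_signature_bound(alpha, M, d_c):
    """Upper bound 2 * (1 - alpha) * M * d_C on TS(D_T || D_T0) from Theorem 4.5."""
    return 2.0 * (1.0 - alpha) * M * d_c
```

Setting alpha = 1 (pure target training) drives the bound to zero, while a larger C-divergence loosens it, matching the two observations in the remark above.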
}, { "heading": "5 PROPOSED ALGORITHM", "text": "In this section, we derive the continuous error bound based on our proposed C-divergence, followed by a novel continuous transfer learning algorithm (CONTE) that minimizes the error upper bound. Notice that in the context of continuous transfer learning, we also use the proposed C-divergence between target domains at adjacent time stamps to measure the change in distribution over time.

5.1 CONTINUOUS ERROR BOUND WITH EMPIRICAL C-DIVERGENCE

The following theorem states that for a bounded loss function $L$, the target error in continuous transfer learning can be bounded in terms of the empirical classification error on the source and historical target domains, the empirical C-divergence across domains, and the empirical Rademacher complexity of the class of functions $\mathcal{L}_{\mathcal{H}} = \{(x, y) \to \mathbb{I}[h(x) = y] : h \in \mathcal{H}\}$.

Theorem 5.1. (Continuous Error Bound) Assume the loss function $L$ is bounded with $0 \le L \le M$. Given a source domain $\mathcal{D}_{T_0}$ and a historical target domain $\{\mathcal{D}_{T_i}\}_{i=1}^{t}$, for any $h \in \mathcal{H}$ and $\delta \in (0, 1)$, with probability at least $1 - \delta$, the target domain error $\epsilon_{T_{t+1}}$ on $\mathcal{D}_{T_{t+1}}$ is bounded as follows:

$$\epsilon_{T_{t+1}}(h) \le \frac{1}{\bar{\mu}}\left(\sum_{j=0}^{t}\mu^{t-j}\,\hat{\epsilon}_{T_j}(h) + M\sum_{j=0}^{t}\mu^{t-j}\, d_C(\hat{\mathcal{D}}_{T_j},\hat{\mathcal{D}}_{T_{t+1}}) + M\Lambda\right)$$

where $\Lambda = \sum_{j=0}^{t}\left(\hat{\mathfrak{R}}_{\mathcal{B}_{T_j}}(\mathcal{L}_{\mathcal{H}}) + \hat{\mathfrak{R}}_{\mathcal{B}_{T_{t+1}}}(\mathcal{L}_{\mathcal{H}}) + 3\sqrt{\frac{\log\frac{8}{\delta}}{2m_{T_j}}} + 3\sqrt{\frac{\log\frac{8}{\delta}}{2m_{T_{t+1}}}} + \sqrt{\frac{M^2\log\frac{4}{\delta}}{2m_{T_j}}}\right)$.

Remark. Compared to the continuous error bounds in Corollary 3.2 using existing domain divergence measures (Ben-David et al. (2007); Mansour et al. (2009)), our bound consists only of data-dependent terms (e.g., the empirical source error and the empirical C-divergence), whereas the previous error bounds contain error terms involving the intractable labeling function or the optimal target hypothesis (see Corollary 3.2)." }, { "heading": "5.2 CONTE ALGORITHM", "text": "For continuous transfer learning, we leverage both the source domain and the historical target domain data to learn the predictive function for the current time stamp.
To this end, we propose to minimize the error bound in Theorem 5.1 for learning the predictive function on DTt+1 . Furthermore, we aim to learn a domain-invariant and time-invariant latent feature space such that the C-divergence across domains and across time stamps could be minimized. Therefore, we present an adversarial Variational Auto-encoder (VAE) algorithm with the following overall objective function:\nJ (T0, T1, T2, · · · , Tt+1) = tX\nj=0\nµt j ⇣ Lclc (Tj , Tt+1) + dC(D̂Tj , D̂Tt+1) + LELBO (Tj , Tt+1) ⌘ (6)\nwhere Lclc(Tj , Tt+1) represents the classification error over the labeled examples from DTj and DTt+1 , dC(D̂Tj , D̂Tt+1) is the empirical estimate of C-divergence across domain. Thus the first two terms of Eq. (6) are associated with ✏̂Tj (h)+dC(D̂Tj , D̂Tt+1) in the error bound of Theorem 5.1. The third term LELBO(Tj , Tt+1) is the variational bound in the VAE framework (see Figure 4) when learning the latent feature space and > 0 is a hyper-parameter. In this case, we have µ 2 [0, 1] because we assume that the data distribution of a time-evolving target domain shifts smoothly over time. Then we instantiate the terms of Eq. (6) as follows.\nInspired by semi-supervised VAE (Kingma et al., 2014), we propose to learn the feature space by maximizing the following likelihood across domains.\nlog p✓(x, y) = KL q (z|x, y)||p✓(z|x, y) + Eq (z|x,y)[log p✓(x, y, z) log q (z|x, y)]\nwhere and ✓ are the learnable parameters in the encoder and decoder respectively, and z is the latent feature representation of the input example (x, y). KL(·||·) is Kullback–Leibler divergence. The evidence lower bound (ELBO), a lower bound on this log-likelihood, can be written as follows. E✓, (x, y) = Eq (z|x,y) [log p✓(x, y|z)] + KL (q (z|x, y)||p(z)) (7) where E✓, (x, y) log p✓(x, y). 
Similarly, we have the following ELBO to maximize the loglikelihood of p✓(x) when the label is not available:\nU✓, (x) = X\ny\nq (y|x) · E✓, (x, y) Eq (y|x) [log q (y|x)]\n(8)\nwhere p✓(x, y, z) = p✓(x|y, z)p✓(y|z)p(z) with prior Gaussian distribution p(z) = N (0, I). Therefore, the variational bound LELBO(Tj , Tt+1) is given below.\nLELBO(Tj , Tt+1) = XmTj+mTt+1\ni=1 E✓, (xi, yi) XuTt+1 i=1\nU✓, (xi, yi) (9) where uTt+1 is the number of unlabeled training examples from DTt+1 . Besides, the classification error Lclc(Tj , Tt+1) can be expressed as follows.\nLclc(Tj , Tt+1) = XmTj+mTt+1\ni=1 L (yi, q (·|xi)) (10)\nwhere q (·) is the discriminative classifier formed by the distribution q (y|x) in Eq. (8), and L(·, ·) is the cross-entropy loss function in our experiments. To estimate the C-divergence, we first define h̃ to be a two-dimensional characteristic function with h̃(x, y) = 1 , h(x) = y , {h(x) = 1, y = 1} _ {h(x) = 0, y = 0} for h 2 H. Then the empirical C-divergence in Eq. (4) can be rewritten as follows. dC(D̂Tj , D̂Tt+1) = 1 min\nh̃\n1\nmTj\nX\n(x,y):h̃(x,y)=0\nI[(x, y) 2 D̂Tj ]+ 1\nmTt+1\nX\n(x,y):h̃(x,y)=1\nI[(x, y) 2 D̂Tt+1 ]\nNote that the latent feature representation z learned by q (z|x, y) could capture the label-informed information of an example (x, y). Thus, the hypothesis h̃ can be considered as the composition of a feature extraction q and a domain classifier Fj , i.e, h̃(x, y) = Fj(q (z|x, y)). Formally, the empirical estimate of C-divergence is given below.\ndC(D̂Tj , D̂Tt+1) = 1 min Fj\n1\nmTj\nX\nz:Fj(z)=0\nI[z 2 D̂Tj ] + 1\nmTt+1\nX\nz:Fj(z)=1\nI[z 2 D̂Tt+1 ] (11)\nThe benefits of CONTE are twofold: first, it learns the latent feature space using both input x and output y; second, it minimizes a tighter error upper bound based on C-divergence in Theorem 5.1. 
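The decay weights $\mu^{t-j}$ appear both in the bound of Theorem 5.1 and in the training objective of Eq. (6). A minimal sketch of how the per-timestamp terms are combined is given below; for brevity we fold the ELBO term into the per-timestamp losses, which is our simplification rather than the full objective, and the function names are ours.

```python
def decay_weights(t, mu):
    """Normalized weights mu^(t-j) / mu_bar for j = 0..t (j = 0 is the source)."""
    raw = [mu ** (t - j) for j in range(t + 1)]
    mu_bar = sum(raw)
    return [w / mu_bar for w in raw]

def conte_objective(losses, divergences, mu):
    """Weighted sum over time stamps, mirroring the structure of Eq. (6):
    sum_j mu^(t-j) * (classification loss_j + empirical C-divergence_j)."""
    t = len(losses) - 1
    return sum((mu ** (t - j)) * (losses[j] + divergences[j]) for j in range(t + 1))
```

With `mu = 0` only the latest target term survives (Python evaluates `0.0 ** 0` as `1.0`, matching the convention $\mu^0 = 1$), and `mu = 1` weights all time stamps equally, reproducing the two special cases discussed after Theorem 3.1.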
This framework can also be interpreted as a minimax game: the VAE learns a domain-invariant and time-invariant latent feature space, whereas the domain classifier Fj aims to distinguish the examples from different domains and different time stamps. In this paper, we adopt the gradient reversal layer (Ganin et al., 2016) when updating the parameters of domain classifier Fj , and thus CONTE can be optimized by back-propagation in an end-to-end manner (see Algorithm 1 in appendices).\nHowever, we observe that (1) it is difficult to estimate the C-divergence with only limited labeled target examples from DTt+1 ; (2) when learning the latent features z, combining the data x (e.g., one image) and class-label y directly might lead to over-emphasizing the data itself due to its high dimensionality compared to y. To address these problems, we propose the following Pseudo-label Inference, i.e., we infer the pseudo labels of unlabeled examples using the classifier q (y|x) for each training epoch. Using labeled source and target examples as well as unlabeled target examples with inferred pseudo labels, the C-divergence could be estimated in a balanced setting. Furthermore, to enforce the compatibility between features x and label y, we adopt a pre-encoder step to learn a dense representation for the input x, and then learn the label-informed latent features z." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "Synthetic Data: We generate a synthetic data set in which each domain has 1000 positive examples and 1000 negative examples randomly generated from Gaussian distributions N ([1.5 cos ✓, 1.5 sin ✓]T , 0.5 · I2⇥2) and N ([1.5 cos ( ✓), 1.5 sin ( ✓)]T , 0.5 · I2⇥2), respectively. 
We let $\theta = 0$ for the source domain (denoted S1), and $\theta = i\pi/t$ ($i = 1, \cdots, t$) for the time-evolving target domain with $t = 8$ time stamps (denoted T1, $\cdots$, T8).
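The synthetic benchmark above is easy to reproduce. The sketch below is ours; we read $0.5 \cdot I_{2\times 2}$ as the covariance matrix, hence a per-coordinate standard deviation of $\sqrt{0.5}$.

```python
import numpy as np

def make_domain(theta, n=1000, seed=0):
    """One synthetic domain: n positive examples from
    N([1.5 cos(theta), 1.5 sin(theta)], 0.5 I) and n negative examples from
    N([1.5 cos(-theta), 1.5 sin(-theta)], 0.5 I)."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(0.5)  # 0.5 * I is the covariance, so std = sqrt(0.5)
    mu_pos = 1.5 * np.array([np.cos(theta), np.sin(theta)])
    mu_neg = 1.5 * np.array([np.cos(-theta), np.sin(-theta)])
    X = np.vstack([rng.normal(mu_pos, std, size=(n, 2)),
                   rng.normal(mu_neg, std, size=(n, 2))])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

# Source S1 at theta = 0; evolving targets T1..T8 at theta = i * pi / 8.
domains = [make_domain(i * np.pi / 8, seed=i) for i in range(9)]
```

Note that at $\theta = \pi$ (target T8) the two class means coincide at $(-1.5, 0)$, so the classes fully overlap, which is consistent with the accuracy drop reported in Section 6.1.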
We fix = 0.1, and all the methods use the same neural network architecture for feature extraction.\n6.1 EVALUATION OF C-DIVERGENCE\nWe compare the proposed C-divergence with conventional domain discrepancy measure Adistance (Ben-David et al., 2007) on a synthetic data set with an evolving target domain. We assume that the hypothesis space H consists of linear classifiers in the feature space. Figure 2 shows the domain discrepancy and target classification accuracy for each pair of source and target domains. We have the following obser-\nvations. (1) The classification accuracy on the target domain significantly decreases from target domain T1 to T8. One explanation is that the joint distribution p(x, y) on the time evolving target domain gradually shifted. (2) The A-distance increases from S1!T1 to S1!T4, and then decreases from S1!T4 to S1!T8. That is because it only estimates the difference of the marginal feature distribution p(x) between the source and target domains. (3) The C-divergence keeps increasing from S1!T1 to S1!T8, which indicates the decreasing task relatedness between the source and the target domains. Therefore, compared with A-distance3, the proposed C-divergence better characterizes the transferability from the source to the target domains." }, { "heading": "6.2 EVALUATION OF ERROR BOUND", "text": "When there is only one time stamp involved in the target domain, Theorem 5.1 is reduced to the standard error bound in the conventional static transfer learning setting. We empirically compare this reduced error bound with the existing Rademacher complexity based error bound in (Mansour et al., 2009) (see Theorem A.4 in appendices for being self-contained).\nWe use the 0-1 loss function as L and assume that the hypothesis space H consists of linear classifiers in the feature space. 
Figure 3 shows the estimated error bounds and target error with the time evolving target domain (i.e., S1!T1, · · · , S1!T8 in a new synthetic data set with a slower time evolving target domain to ensure that the baseline bound is meaningful most of the time) where we choose h = h⇤T0 . It demonstrates that our C-divergence based error bound is much tighter than the baseline. Notice that when transferring source domain S1 to target domain T8, our error bound is largely determined by the C-divergence, whereas the baseline is determined by the difference between the optimal source and target hypothesizes. Furthermore, given any hypothesis h 2 H, we may not be able to estimate the baseline bound when the optimal hypothesis is not available." }, { "heading": "6.3 EVALUATION OF CONTINUOUS TRANSFER LEARNING", "text": "Tables 1 and 2 provide the continuous transfer learning results on digital and office-31 data sets where the classification accuracy on target domain is reported (the best results are highlighted in bold). It is observed that (1) the classification accuracy using SourceOnly algorithm significantly\n3The results for other existing discrepancy measures follow a similar pattern and thus omitted for brevity\ndecreases on the evolving target domain due to the shift of joint data distribution p(x, y) on target domain; (2) the performance of static baseline algorithms is largely affected by the distribution shift in the evolving target domain, and even worse than TargetERM in some cases (e.g., on T6T11 from SVHN to evolving MNIST); (3) CONTE significantly outperforms CONTE1 as well as other competitors on target domain by a large margin (i.e., up to 30% improvement on the last time stamp of target domain) because it effectively leverages the historical target domain information to smoothly re-align the target distribution when the change of target domain distribution in consecutive time stamps is small." 
}, { "heading": "7 RELATED WORK", "text": "Transfer Learning: Transfer learning (Ying et al., 2018; Jang et al., 2019) improves the performance of a learning algorithm on the target domain by using the knowledge from the source domain. It is theoretically proven that the target error is well bounded (Ben-David et al., 2010; Mansour et al., 2009), followed by a line of practical algorithms (Shen et al., 2018; Long et al., 2017; 2018; Saito et al., 2018; Chen et al., 2019) with covariate shift assumption. However, it is observed that this assumption does not always hold in real-world scenarios (Rosenstein et al., 2005; Wang et al., 2019).\nMulti-source Domain Adaptation: Multi-source domain adaptation improves the target prediction function from multiple source domains (Zhao et al., 2018; Hoffman et al., 2018; Wen et al., 2020). It is similar to our problem setting as source and historical target domains can be considered as multiple “source” domains when modeling the target domain at the current time stamp. However, only limited labeled target examples are provided in our problem setting, whereas multi-source domain adaptation requires that all source domains have adequate labeled examples.\nContinual Learning: Continual lifelong learning (Parisi et al., 2019; Rusu et al., 2016; Hoffman et al., 2014; Bobu et al., 2018) involves the sequential learning tasks with the goal of learning a predictive function on the new task using knowledge from historical tasks. Most of them focused on mitigating catastrophic forgetting when learning new tasks from only one evolving domain, whereas our work studied the transferability between a source domain and a time evolving target domain." }, { "heading": "8 CONCLUSION", "text": "In this paper, we study continuous transfer learning with a time evolving target domain, which has not been widely studied and yet is commonly seen in many real applications. 
We start by deriving a generic error bound of continuous transfer learning with flexible domain discrepancy measures. Then we propose a novel label-informed C-divergence to measure the domain discrepancy incorporating the label information, and study its application in continuous transfer learning, which leads to an improved error bound. Based on this bound, we further propose a generic adversarial Variational Auto-encoder algorithm named CONTE for continuous transfer learning. Extensive experiments on both synthetic and real data sets demonstrate the effectiveness of our CONTE algorithm." } ]
2020
null
SP:aeb3b57c2e2f7f7dfba24ee77e4aab2f445b947f
[ "In the paper, the authors proposed a novel privacy-preserving defense approach BAFFLE for federated learning which could simultaneously impede backdoor and inference attacks. To impede backdoor attacks, the Model Filtering layer (i.e., by dynamic clustering) and Poison Elimination layer (i.e., by noising and clipping) were presented respectively for the malicious updates and the weak manipulations of the model. To thwart inference attacks, private BAFFLE was built to evaluate the BAFFLE algorithm under encryption using secure computation techniques.", "This paper provides an interesting research direction for the cross-domain of federating learning and backdoor attacks. This direction has very limited work until the recent 2 years. The work being proposed in this manuscript is simple and straightforward to implement. The pipeline has been clearly demonstrated. The experiments have multiple aspects presented and show promising results in various metrics." ]
Recently, federated learning (FL) has been subject to both security and privacy attacks posing a dilemmatic challenge on the underlying algorithmic designs: On the one hand, FL is shown to be vulnerable to backdoor attacks that stealthily manipulate the global model output using malicious model updates, and on the other hand, FL is shown vulnerable to inference attacks by a malicious aggregator inferring information about clients’ data from their model updates. Unfortunately, existing defenses against these attacks are insufficient and mitigating both attacks at the same time is highly challenging, because while defeating backdoor attacks requires the analysis of model updates, protection against inference attacks prohibits access to the model updates to avoid information leakage. In this work, we introduce BAFFLE, a novel in-depth defense for FL that tackles this challenge. To mitigate backdoor attacks, it applies a multilayered defense by using a Model Filtering layer to detect and reject malicious model updates and a Poison Elimination layer to eliminate any effect of a remaining undetected weak manipulation. To impede inference attacks, we build private BAFFLE that securely evaluates the BAFFLE algorithm under encryption using sophisticated secure computation techniques. We extensively evaluate BAFFLE against state-of-the-art backdoor attacks on several datasets and applications, including image classification, word prediction, and IoT intrusion detection. We show that BAFFLE can entirely remove backdoors with a negligible effect on accuracy and that private BAFFLE is practical.
[ { "affiliations": [], "name": "THWARTING BACKDOOR" }, { "affiliations": [], "name": "INFERENCE ATTACKS" } ]
[ { "authors": [ "Nitin Agrawal", "Ali Shahin Shamsabadi", "Matt J Kusner", "Adrià Gascón" ], "title": "QUOTIENT: Two-Party Secure Neural Network Training and Prediction", "venue": "In Conference on Computer and Communications Security (CCS)", "year": 2019 }, { "authors": [ "Manos Antonakakis", "Tim April", "Michael Bailey", "Matt Bernhard", "Elie Bursztein", "Jaime Cochran", "Zakir Durumeric", "J. Alex Halderman", "Luca Invernizzi", "Michalis Kallitsis", "Deepak Kumar", "Chaz Lever", "Zane Ma", "Joshua Mason", "Damian Menscher", "Chad Seaman", "Nick Sullivan", "Kurt Thomas", "Yi Zhou" ], "title": "Understanding the Mirai Botnet", "venue": "In USENIX Security. Usenix Association,", "year": 2017 }, { "authors": [ "Yoshinori Aono", "Takuya Hayashi", "Lihua Wang", "Shiho Moriai" ], "title": "Privacy-preserving Deep Learning via Additively Homomorphic Encryption", "venue": "In Transactions on Information Forensics and Security (TIFS)", "year": 2017 }, { "authors": [ "Gilad Asharov", "Yehuda Lindell", "Thomas Schneider", "Michael Zohner" ], "title": "More Efficient Oblivious Transfer and Extensions for Faster Secure Computation", "venue": "In Conference on Computer and Communications Security (CCS). ACM,", "year": 2013 }, { "authors": [ "Eugene Bagdasaryan", "Andreas Veit", "Yiqing Hua", "Deborah Estrin", "Vitaly Shmatikov" ], "title": "How To Backdoor Federated Learning", "venue": "In AISTATS. PMLR,", "year": 2020 }, { "authors": [ "Moran Baruch", "Gilad Baruch", "Yoav Goldberg" ], "title": "A Little Is Enough: Circumventing Defenses For Distributed Learning", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Donald Beaver" ], "title": "Efficient Multiparty Protocols Using Circuit Randomization", "venue": "In Annual International Cryptology Conference (CRYPTO). 
Springer,", "year": 1991 }, { "authors": [ "Donald Beaver", "Silvio Micali", "Phillip Rogaway" ], "title": "The Round Complexity of Secure Protocols", "venue": "In ACM Symposium on Theory of Computing,", "year": 1990 }, { "authors": [ "Mihir Bellare", "Viet Tung Hoang", "Sriram Keelveedhi", "Phillip Rogaway" ], "title": "Efficient garbling from a fixed-key blockcipher", "venue": "In IEEE Symposium on Security and Privacy", "year": 2013 }, { "authors": [ "Peva Blanchard", "El Mahdi El Mhamdi", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent", "venue": null, "year": 2017 }, { "authors": [ "Keith Bonawitz", "Vladimir Ivanov", "Ben Kreuter", "Antonio Marcedone", "Brendan McMahan", "Sarvar Patel", "Daniel Ramage", "Aaron Segal", "Karn Seth" ], "title": "Practical Secure Aggregation for PrivacyPreserving Machine Learning", "venue": "In Conference on Computer and Communications Security (CCS)", "year": 2017 }, { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloé Kiddon", "Jakub Konecný", "Stefano Mazzocchi", "Brendan McMahan", "Timon Van Overveldt", "David Petrou", "Daniel Ramage", "Jason Roselander" ], "title": "Towards Federated Learning at Scale: System Design", "venue": "In arXiv preprint:1902.01046,", "year": 2019 }, { "authors": [ "Ricardo J.G.B. Campello", "Davoud Moulavi", "Joerg Sander" ], "title": "Density-Based Clustering Based on Hierarchical Density Estimates", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining (KDD). Springer,", "year": 2013 }, { "authors": [ "N. Carlini", "D. 
Wagner" ], "title": "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text", "venue": "In arXiv preprint:1801.01944,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "Chang Liu", "Úlfar Erlingsson", "Jernej Kos", "Dawn Song" ], "title": "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks", "venue": "In USENIX Security. Usenix Association,", "year": 2019 }, { "authors": [ "Melissa Chase", "Ran Gilad-Bachrach", "Kim Laine", "Kristin E Lauter", "Peter Rindal" ], "title": "Private Collaborative Neural Network Learning", "venue": "In Cryptology ePrint Archive, Report 2017/762,", "year": 2017 }, { "authors": [ "Trishul Chilimbi", "Yutaka Suzue", "Johnson Apacible", "Karthik Kalyanaraman" ], "title": "Project Adam: Building an Efficient and Scalable Deep Learning Training System", "venue": "In USENIX Symposium on Operating Systems Design and Implementation. Usenix Association,", "year": 2014 }, { "authors": [ "Daniel Demmler", "Ghada Dessouky", "Farinaz Koushanfar", "Ahmad-Reza Sadeghi", "Thomas Schneider", "Shaza Zeitouni" ], "title": "Automated Synthesis of Optimized Circuits for Secure Computation", "venue": "In Conference on Computer and Communications Security (CCS). ACM,", "year": 2015 }, { "authors": [ "Daniel Demmler", "Thomas Schneider", "Michael Zohner" ], "title": "ABY - A Framework for Efficient MixedProtocol Secure Two-Party Computation", "venue": "In Network and Distributed System Security Symposium (NDSS). Internet Society,", "year": 2015 }, { "authors": [ "Rohan Doshi", "Noah Apthorpe", "Nick Feamster" ], "title": "Machine Learning DDoS Detection for Consumer Internet of Things Devices", "venue": "In arXiv preprint:1804.04159,", "year": 2018 }, { "authors": [ "Cynthia Dwork", "Aaron Roth" ], "title": "The Algorithmic Foundations of Differential Privacy", "venue": "In Foundations and Trends in Theoretical Computer Science. 
now publishers inc.,", "year": 2014 }, { "authors": [ "Fabienne Eigner", "Aniket Kate", "Matteo Maffei", "Francesca Pampaloni", "Ivan Pryvalov" ], "title": "Differentially Private Data Aggregation with Optimal Utility", "venue": "In Annual Computer Security Applications Conference (ACSAC). ACM,", "year": 2014 }, { "authors": [ "Martin Ester", "Hans-Peter Kriegel", "Jörg Sander", "Xiaowei Xu" ], "title": "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise", "venue": "In International Conference on Knowledge Discovery and Data Mining (KDD). Springer,", "year": 1996 }, { "authors": [ "Minghong Fang", "Xiaoyu Cao", "Jinyuan Jia", "Neil Zhenqiang Gong" ], "title": "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning", "venue": "In USENIX Security. Usenix Association,", "year": 2020 }, { "authors": [ "Clement Fung", "Chris J.M. Yoon", "Ivan Beschastnikh" ], "title": "Mitigating Sybils in Federated Learning Poisoning", "venue": "In arXiv preprint:1808.04866,", "year": 2018 }, { "authors": [ "Karan Ganju", "Qi Wang", "Wei Yang", "Carl A Gunter", "Nikita Borisov" ], "title": "Property Inference Attacks on Fully Connected Neural Networks Using Permutation Invariant Representations", "venue": "In Conference on Computer and Communications Security (CCS)", "year": 2018 }, { "authors": [ "Oded Goldreich", "Silvio Micali", "Avi Wigderson" ], "title": "How to Play any Mental Game", "venue": "In ACM Symposium on Theory of Computing (STOC). 
ACM,", "year": 1987 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2016 }, { "authors": [ "Stephen Herwig", "Katura Harvey", "George Hughey", "Richard Roberts", "Dave Levin" ], "title": "Measurement and Analysis of Hajime, a Peer-to-Peer IoT Botnet", "venue": "In Network and Distributed System Security Symposium (NDSS). Internet Society,", "year": 2019 }, { "authors": [ "Yan Huang", "David Evans", "Jonathan Katz", "Lior Malka" ], "title": "Faster secure two-party computation using garbled circuits", "venue": "In USENIX Security Symposium,", "year": 2011 }, { "authors": [ "Russell Impagliazzo", "Steven Rudich" ], "title": "Limits on the Provable Consequences of One-way Permutations", "venue": "In ACM Symposium on Theory of Computing (STOC). ACM,", "year": 1989 }, { "authors": [ "Christian Jutten", "Jeanny Herault" ], "title": "Blind Separation of Sources, Part I: An Adaptive Algorithm Based on Neuromimetic Architecture", "venue": "In Signal Processing. Elsevier,", "year": 1991 }, { "authors": [ "Chiraag Juvekar", "Vinod Vaikuntanathan", "Anantha Chandrakasan" ], "title": "GAZELLE: A Low Latency Framework for Secure Neural Network Inference", "venue": "In USENIX Security. 
Usenix Association,", "year": 2018 }, { "authors": [ "Vladimir Kolesnikov", "Thomas Schneider" ], "title": "Improved garbled circuit: Free xor gates and applications", "venue": "In International Colloquium on Automata, Languages and Programming,", "year": 2008 }, { "authors": [ "Constantinos Kolias", "Georgios Kambourakis", "Angelos Stavrou", "Jeffrey Voas" ], "title": "DDoS in the IoT: Mirai and Other Botnets", "venue": "In IEEE Computer,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Nishant Kumar", "Mayank Rathee", "Nishanth Chandran", "Divya Gupta", "Aseem Rastogi", "Rahul Sharma" ], "title": "CrypTFlow: Secure Tensorflow Inference", "venue": "In IEEE Symposium on Security and Privacy (S&P). IEEE,", "year": 2020 }, { "authors": [ "Peeter Laud" ], "title": "Parallel Oblivious Array Access for Secure Multiparty Computation and PrivacyPreserving Minimum Spanning Trees", "venue": "In Privacy Enhancing Technologies Symposium (PETS). SCIENDO,", "year": 2015 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based Learning Applied to Document Recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yujun Lin", "Song Han", "Huizi Mao", "Yu Wang", "William J. Dally" ], "title": "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Jian Liu", "Mika Juuti", "Yao Lu", "Nadarajah Asokan" ], "title": "Oblivious Neural Network Predictions via MiniONN Transformations", "venue": "In Conference on Computer and Communications Security (CCS)", "year": 2017 }, { "authors": [ "Brendan McMahan", "Daniel Ramage" ], "title": "Federated learning: Collaborative Machine Learning without Centralized Training Data", "venue": "In Google Research Blog. 
Google AI,", "year": 2017 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agüera y Arcas" ], "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data", "venue": "In AISTATS. PMLR,", "year": 2017 }, { "authors": [ "H. Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning Differentially Private Language Models Without Losing Accuracy", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Luca Melis", "Congzheng Song", "Emiliano De Cristofaro", "Vitaly Shmatikov" ], "title": "Exploiting Unintended Feature Leakage in Collaborative Learning", "venue": "In IEEE Symposium on Security and Privacy (S&P)", "year": 2019 }, { "authors": [ "Pratyush Mishra", "Ryan Lehmkuhl", "Akshayaram Srinivasan", "Wenting Zheng", "Raluca Ada Popa" ], "title": "DELPHI: A Cryptographic Inference Service for Neural Networks", "venue": "In USENIX Security. Usenix Association,", "year": 2020 }, { "authors": [ "Payman Mohassel", "Yupeng Zhang" ], "title": "SecureML: A System for Scalable Privacy-Preserving Machine Learning", "venue": "In IEEE Symposium on Security and Privacy (S&P)", "year": 2017 }, { "authors": [ "Luis Muñoz-González", "Kenneth T. Co", "Emil C. Lupu" ], "title": "Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging", "venue": "In arXiv preprint:1909.05125,", "year": 2019 }, { "authors": [ "Moni Naor", "Benny Pinkas" ], "title": "Computationally Secure Oblivious Transfer", "venue": "Journal of Cryptology,", "year": 2005 }, { "authors": [ "M. Nasr", "R. Shokri", "A. Houmansadr" ], "title": "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning", "venue": "In IEEE Symposium on Security and Privacy (S&P)", "year": 2019 }, { "authors": [ "Thien Duc Nguyen", "Samuel Marchal", "Markus Miettinen", "Hossein Fereidooni", "N. 
Asokan", "Ahmad-Reza Sadeghi" ], "title": "DÏoT: A Federated Self-learning Anomaly Detection System for IoT", "venue": "In International Conference on Distributed Computing Systems (ICDCS)", "year": 2019 }, { "authors": [ "Thien Duc Nguyen", "Phillip Rieger", "Markus Miettinen", "Ahmad-Reza Sadeghi" ], "title": "Poisoning Attacks on Federated Learning-Based IoT Intrusion Detection System", "venue": "In Workshop on Decentralized IoT Systems and Security @ Network and Distributed System Security Symposium (NDSS). Internet Society,", "year": 2020 }, { "authors": [ "Margarita Osadchy", "Benny Pinkas", "Ayman Jarrous", "Boaz Moskovich" ], "title": "SCiFI - a System for Secure Face Identification", "venue": "In IEEE Symposium on Security and Privacy,", "year": 2010 }, { "authors": [ "Apostolos Pyrgelis", "Carmela Troncoso", "Emiliano De Cristofaro" ], "title": "Knock Knock, Who’s There? Membership Inference on Aggregate Location Data", "venue": "In Network and Distributed System Security Symposium (NDSS). Internet Society,", "year": 2018 }, { "authors": [ "Jianji Ren", "Haichao Wang", "Tingting Hou", "Shuai Zheng", "Chaosheng Tang" ], "title": "Federated Learning-Based Computation Offloading Optimization in Edge Computing-Supported Internet of Things", "venue": "In IEEE Access,", "year": 2019 }, { "authors": [ "M Sadegh Riazi", "Mohammad Samragh", "Hao Chen", "Kim Laine", "Kristin Lauter", "Farinaz Koushanfar" ], "title": "XONN: XNOR-based Oblivious Deep Neural Network Inference", "venue": "In USENIX Security.
Usenix Association,", "year": 2019 }, { "authors": [ "Sumudu Samarakoon", "Mehdi Bennis", "Walid Saad", "Merouane Debbah" ], "title": "Federated Learning for Ultra-Reliable Low-Latency V2V Communications", "venue": "In Global Communications Conference (GLOBCOM)", "year": 2018 }, { "authors": [ "Joseph Schneible", "Alex Lu" ], "title": "Anomaly Detection on the Edge", "venue": "In IEEE Military Communications Conference", "year": 2017 }, { "authors": [ "Thomas Schneider", "Michael Zohner" ], "title": "GMW vs. Yao? Efficient Secure Two-Party Computation with Low Depth Circuits", "venue": "In Financial Crypto. Springer,", "year": 2013 }, { "authors": [ "Micah Sheller", "Anthony Reina", "Brandon Edwards", "Jason Martin", "Spyridon Bakas" ], "title": "Federated Learning for Medical Imaging", "venue": "In Intel AI,", "year": 2018 }, { "authors": [ "Micah Sheller", "Anthony Reina", "Brandon Edwards", "Jason Martin", "Spyridon Bakas" ], "title": "MultiInstitutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation", "venue": "In Brain Lesion Workshop,", "year": 2018 }, { "authors": [ "Shiqi Shen", "Shruti Tople", "Prateek Saxena" ], "title": "Auror: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems", "venue": "In Annual Computer Security Applications Conference (ACSAC)", "year": 2016 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], "title": "Membership Inference Attacks Against Machine Learning Models", "venue": "In IEEE Symposium on Security and Privacy (S&P)", "year": 2017 }, { "authors": [ "Arunan Sivanathan", "Hassan Habibi Gharakheili", "Franco Loi", "Adam Radford", "Chamith Wijenayake", "Arun Vishwanath", "Vijay Sivaraman" ], "title": "Classifying IoT Devices in Smart Environments Using Network Traffic Characteristics", "venue": "In IEEE Transactions on Mobile Computing (TMC)", "year": 2018 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", 
"Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated Multi-Task Learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Jinhyun So", "Basak Guler", "A. Salman Avestimehr", "Payman Mohassel" ], "title": "CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning", "venue": "Cryptology ePrint Archive,", "year": 2019 }, { "authors": [ "Saleh Soltan", "Prateek Mittal", "Vincent Poor" ], "title": "BlackIoT: IoT Botnet of High Wattage Devices Can Disrupt the Power Grid", "venue": "In USENIX Security. Usenix Association,", "year": 2018 }, { "authors": [ "Ebrahim M Songhori", "Siam U Hussain", "Ahmad-Reza Sadeghi", "Thomas Schneider", "Farinaz Koushanfar" ], "title": "Tinygarble: Highly Compressed and Scalable Sequential Garbled Circuits", "venue": "In IEEE Symposium on Security and Privacy (S&P). IEEE,", "year": 2015 }, { "authors": [ "Shiqiang Wang", "Tiffany Tuor", "Theodoros Salonidis", "Kin K. Leung", "Christian Makaya", "Ting He", "Kevin Chan" ], "title": "Adaptive Federated Learning in Resource Constrained Edge Computing Systems", "venue": "In IEEE Journal on Selected Areas in Communications,", "year": 2019 }, { "authors": [ "Chulin Xie", "Keli Huang", "Pin-Yu Chen", "Bo Li" ], "title": "DBA: Distributed Backdoor Attacks against Federated Learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Andrew Chi-Chih Yao" ], "title": "How to Generate and Exchange Secrets", "venue": "In Symposium on Foundations of Computer Science (FOCS). IEEE,", "year": 1986 }, { "authors": [ "Dong Yin", "Yudong Chen", "Ramchandran Kannan", "Peter Bartlett" ], "title": "Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates", "venue": "In ICML. 
PMLR,", "year": 2018 }, { "authors": [ "Samee Zahur", "Mike Rosulek", "David Evans" ], "title": "Two halves make a whole", "venue": "In Annual International Conference on the Theory and Applications of Cryptographic Techniques,", "year": 2015 }, { "authors": [ "Chengliang Zhang", "Suyi Li", "Junzhe Xia", "Wei Wang", "Feng Yan", "Yang Liu" ], "title": "BatchCrypt: Efficient Homomorphic Encryption for Cross-Silo Federated Learning", "venue": "In USENIX Security. Usenix Association,", "year": 2020 }, { "authors": [ "C DETAILS" ], "title": "ON STPC AND PRIVATE BAFFLE Semi-honest Security. The semi-honest security model is standard in the security and privacy community (Mohassel & Zhang, 2017", "venue": "Juvekar et al.,", "year": 2018 }, { "authors": [ "models. STPC" ], "title": "To design the STPC protocols of BAFFLE, we use a combination of three prominent STPC techniques:Yao’s garbled circuits (Yao, 1986) for the secure evaluation of Boolean circuits in a constant number of rounds, as well as Boolean/Arithmetic sharing for the secure evaluation of Boolean/Arithmetic circuits with one round of interaction per layer of AND/Multiplication gates", "venue": null, "year": 1986 }, { "authors": [ "Nguyen" ], "title": "2019) makes it attractive for malicious behavior like backdooring (Bagdasaryan et al., 2020", "venue": "Fung et al.,", "year": 2018 }, { "authors": [ "Bagdasaryan" ], "title": "2020) introduced such an attack called constrain-and-scale that can circumvent state-of-the-art defenses", "venue": "(Fung et al.,", "year": 2018 }, { "authors": [ "Wang et al", "Smith" ], "title": "2017). Tab. 5 summarizes the used datasets and learning models. 
Table 5: Datasets used in our evaluations for word prediction (WP), image classification (IC), and network intrusion detection system (NIDS) scenarios", "venue": null, "year": 2017 }, { "authors": [ "Bagdasaryan" ], "title": "2020) experiment with a backdoor where green cars are predicted", "venue": null, "year": 2020 }, { "authors": [ "Following Nguyen" ], "title": "2019), we extracted device-type-specific datasets capturing the devices", "venue": null, "year": 2019 }, { "authors": [ "2017 Blanchard et al", "2020). Since Fang Fang et al" ], "title": "2020)’s evaluation uses image datasets, we evaluate BAFFLE’s resilience against it with CIFAR-10. Fig. 10 demonstrates BAFFLE’s effectiveness against these untargeted poisoning", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Federated learning (FL) is an emerging collaborative machine learning trend with many applications such as next word prediction for mobile keyboards (McMahan & Ramage, 2017), medical imaging (Sheller et al., 2018a), and intrusion detection for IoT (Nguyen et al., 2019). In FL, clients locally train model updates using private data and provide these to a central aggregator who combines them into a global model that is sent back to the clients for the next training iteration. FL offers efficiency and scalability as the training is distributed among many clients and executed in parallel (Bonawitz et al., 2019). In particular, FL improves privacy by enabling clients to keep their training data locally (McMahan et al., 2017). This is not only relevant for compliance with legal obligations such as the GDPR (2018), but also in general when processing personal and sensitive data.\nDespite its benefits, FL is vulnerable to backdoor (Bagdasaryan et al., 2020; Nguyen et al., 2020; Xie et al., 2020) and inference attacks (Pyrgelis et al., 2018; Shokri et al., 2017; Ganju et al., 2018). In the former, the adversary stealthily manipulates the global model so that attacker-chosen inputs result in wrong predictions chosen by the adversary. Existing backdoor defenses, e.g., (Shen et al., 2016; Blanchard et al., 2017), fail to effectively protect against state-of-the-art backdoor attacks, e.g., constrain-and-scale (Bagdasaryan et al., 2020) and DBA (Xie et al., 2020). In inference attacks, the adversary aims at learning information about the clients’ local data by analyzing their model updates. Mitigating both attack types at the same time is highly challenging due to a dilemma: backdoor defenses require access to the clients’ model updates, whereas inference mitigation strategies prohibit this to avoid information leakage. No solution currently exists that defends against both attacks at the same time (§6). Our Goals and Contributions.
In this paper, we provide the following contributions:\n1. BAFFLE, a novel generic FL defense system that simultaneously protects both the security and the data privacy of FL by effectively preventing backdoor and inference attacks. To the best of our knowledge, this is the first work that discusses and tackles this dilemma, i.e., no existing defense against backdoor attacks preserves the privacy of the clients’ data (§4).\n2. To the best of our knowledge, we are the first to point out that combining clustering, clipping, and noising can prevent the adversary from trading off between attack impact and attack stealthiness. However, a naïve combination of these two classes of defenses is not effective against sophisticated backdoor attacks. Therefore, we introduce a novel backdoor defense (cf. Alg. 1) that is novel in three respects: (1) a two-layer defense architecture, (2) a new dynamic clustering approach (§3.1), and (3) a new adaptive threshold tuning scheme for clipping and noising (§3.2). The clustering component filters out malicious model updates with high attack impact, while adaptive clipping and noising smooth out and eliminate potentially remaining malicious model contributions. Moreover, BAFFLE is able to mitigate more complex attack scenarios, like the simultaneous injection of different backdoors by several adversaries, that cannot be handled by existing defenses (§3).\n3. We design tailored efficient secure (two-party) computation protocols for BAFFLE, resulting in private BAFFLE, the first privacy-preserving backdoor defense that also inhibits inference attacks (§4). To the best of our knowledge, no existing defense against backdoor attacks preserves the privacy of the clients’ data (§6).\n4. We demonstrate BAFFLE’s effectiveness against backdoor attacks through an extensive evaluation on various datasets and applications (§5).
Beyond mitigating state-of-the-art backdoor attacks, we also show that BAFFLE succeeds in thwarting adaptive attacks that optimize the attack strategy to circumvent BAFFLE (§5.1).\n5. We evaluate the overhead of applying secure two-party computation to demonstrate the efficiency of private BAFFLE. A training iteration of private BAFFLE for a neural network with 2.7 million parameters and 50 clients on CIFAR-10 takes less than 13 minutes (§5.3)." }, { "heading": "2 BACKGROUND AND PROBLEM SETTING", "text": "Federated learning (FL) is a concept for distributed machine learning where K clients and an aggregator A collaboratively build a global model G (McMahan et al., 2017). In training round t ∈ [1, T], each client i ∈ [1, K] locally trains a local model Wi (with p parameters/weights w_i^1, . . . , w_i^p) based on the previous global model Gt−1 using its local data Di and sends Wi to A. Then, A aggregates the received models Wi into the new global model Gt by averaging the local models, weighted by the number of training samples used to train them: Gt = Σ_{i=1}^{K} (ni/n)·Wi, where ni = ‖Di‖ and n = Σ_{i=1}^{K} ni (cf. Alg. 2 and Alg. 3 in §A for details). In practice, previous works employ equal weights (ni = n/K) for the contributions of all clients (Bagdasaryan et al., 2020; Xie et al., 2020). We adopt this approach, i.e., we set Gt = (1/K) Σ_{i=1}^{K} Wi.\n\nAdversary model: In typical FL settings, there are two adversaries: malicious clients that try to inject backdoors into the global model, and honest-but-curious (a.k.a. semi-honest) aggregators that correctly compute and follow the training protocols but aim at (passively) gaining information about the training data of the clients through inference attacks (Bonawitz et al., 2017). The former type of adversary, Ac, has full control over K′ (K′ < K/2) clients and their training data, processes, and parameters (Bagdasaryan et al., 2020).
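The equal-weight aggregation rule above is plain federated averaging. As a minimal NumPy sketch (function names are illustrative, not from the paper's code):

```python
import numpy as np

def fedavg_equal(local_models):
    """Equal-weight FedAvg: G_t = (1/K) * sum_i W_i,
    where local_models is a list of K flattened parameter vectors."""
    return np.stack(local_models).mean(axis=0)

def fedavg_weighted(local_models, n_samples):
    """Sample-weighted FedAvg: G_t = sum_i (n_i / n) * W_i with n = sum_i n_i."""
    W = np.stack(local_models)           # shape (K, p)
    w = np.asarray(n_samples, dtype=float)
    w = w / w.sum()                      # per-client weights n_i / n
    return (W * w[:, None]).sum(axis=0)
```

With equal sample counts the two variants coincide, which is why the paper can adopt the simpler equal-weight form.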
Ac also has full knowledge of the aggregator’s operations, including potentially applied backdooring defenses, and can arbitrarily adapt its attack strategy at any time during the training, e.g., by simultaneously injecting none, one, or several backdoors. However, Ac has no control over any processes executed at the aggregator, nor over the honest clients. The second adversary type, the honest-but-curious aggregator As, has access to all local model updates Wi and can thus perform model inference attacks on each local model Wi to extract information about the corresponding participant’s data Di used for training Wi.\n\nBackdoor attacks. The goals of Ac are two-fold: (1) Impact: Ac aims at manipulating the global model Gt such that the modified model G′t provides incorrect predictions G′t(x) = c′ ≠ Gt(x), ∀x ∈ IAc, where IAc is a trigger set of specific adversary-chosen inputs. (2) Stealthiness: In addition, Ac seeks to make poisoned models and benign models indistinguishable to avoid detection. Model G′t should therefore perform normally on all other inputs that are not in the trigger set, i.e., G′t(x) = Gt(x), ∀x ∉ IAc, and the dissimilarity (e.g., Euclidean distance) between a poisoned model W′ and a benign model W must be smaller than a threshold ε: ‖W′ − W‖ < ε. Inference Attacks. The honest-but-curious aggregator As attempts to infer sensitive information about the clients’ data Di from their model updates Wi (Pyrgelis et al., 2018; Shokri et al., 2017; Ganju et al., 2018; Carlini et al., 2019; Melis et al., 2019), i.e., As maximizes the information φi = Infer(Wi) gained about the data Di of client i from its model Wi." }, { "heading": "3 BACKDOOR-RESILIENT FEDERATED LEARNING", "text": "We introduce BAFFLE, a novel defense against backdoor attacks that prevents adversary Ac from achieving attack stealthiness and impact (cf. §2).
Ac can control the attack impact by, e.g., adjusting the poisoned data rate PDR, i.e., the fraction of poisoned data DAc in the training data D (Eq. 3), or by tuning the loss-control parameter α that controls the trade-off between backdoor task learning and similarity with the global model (Eq. 4); see §D for details. On one hand, by increasing the attack impact, poisoned models become more dissimilar to benign ones, i.e., easier to detect. On the other hand, if poisoned updates are only weakly trained on the backdoor in order to remain undetected, the backdoor can be eliminated more easily. BAFFLE exploits this conflict to realize a multilayer backdoor defense, shown in Fig. 1 and Alg. 1. The first layer, called Model Filtering (§3.1), uses dynamic clustering to identify and remove potentially poisoned model updates having high attack impact. The second layer, called Poison Elimination (§3.2), leverages an adaptive threshold tuning scheme to clip model weights, in combination with appropriate noising, to smooth out and remove the backdoor impact of potentially surviving poisoned model updates." }, { "heading": "3.1 FILTERING POISONED MODELS", "text": "The Model Filtering layer utilizes a new dynamic clustering approach aiming at excluding models with high attack impact. It overcomes several limitations of existing defenses as (1) it can handle dynamic attack scenarios such as the simultaneous injection of multiple backdoors, and (2) it minimizes false positives. Existing defenses (Blanchard et al., 2017; Shen et al., 2016) cluster updates into two groups, where the smaller group is always considered potentially malicious and removed, leading to false positives and reduced accuracy when no attack is taking place. More importantly, Ac may also split the compromised clients into several groups injecting different backdoors.
A fixed number of clusters bears the risk that poisoned and benign models end up in the same cluster, in particular if models with different backdoors differ significantly. This is shown in Fig. 2, depicting different clusterings of model updates¹. Fig. 2a shows the ground truth, where Ac uses two groups of clients: 20 clients inject a backdoor and five provide random models to fool the deployed clustering-based defense. Fig. 2b shows how K-means (as used by Shen et al. (2016)) fails to separate benign and poisoned models, so that all poisoned ones end up in the same cluster with the benign models.\n¹The models were trained for an FL-based Network Intrusion Detection System (NIDS), cf. §E.\nAlgorithm 1 BAFFLE\n1: Input: K, G0, T ▷ K is the number of clients, G0 is the initial global model, T is the number of training iterations\n2: Output: GT ▷ GT is the updated global model after T iterations\n3: for each training iteration t in [1, T] do\n4:   for each client i in [1, K] do\n5:     Wi ← CLIENTUPDATE(Gt−1) ▷ the aggregator sends Gt−1 to client i, who trains Gt−1 locally on its data Di to obtain the local model Wi and sends Wi back to the aggregator\n6:   (c11, . . . , cKK) ← COSINEDISTANCE(W1, . . . , WK) ▷ ∀i, j ∈ (1, . . . , K), cij is the Cosine distance between Wi and Wj\n7:   (b1, . . . , bL) ← CLUSTERING(c11, . . . , cKK) ▷ L is the number of admitted models, bl are the indices of the admitted models\n8:   (e1, . . . , eK) ← EUCLIDEANDISTANCES(Gt−1, (W1, . . . , WK)) ▷ ei is the Euclidean distance between Gt−1 and Wi\n9:   St ← MEDIAN(e1, . . . , eK) ▷ St is the adaptive clipping bound at round t\n10:   for each admitted model l in [1, L] do\n11:     W∗bl ← Wbl · MIN(1, St/ebl) ▷ W∗bl is the admitted model after clipping with the adaptive bound St\n12:   G∗t ← Σ(l=1..L) W∗bl / L ▷ aggregation; G∗t is the plain global model before adding noise\n13:   σ ← λ · St ▷ adaptive noise level\n14:   Gt ← G∗t + N(0, σ) ▷ adaptive noising\nDynamic Clustering.
We overcome both challenges by calculating the pairwise Cosine distances, measuring the angular differences between all model updates, and applying the HDBSCAN clustering algorithm (Campello et al., 2013). The Cosine distance is not affected by attacks that scale updates to boost their impact, as this does not change the angle between the updates. While Ac can easily manipulate the L2-norms of updates, reducing the Cosine distances decreases the attack impact (Fung et al., 2018). HDBSCAN clusters the models based on their density and dynamically determines the required number of clusters. This can also be a single cluster, preventing false positives in the absence of attacks. Additionally, HDBSCAN labels models as noise if they do not fit into any cluster. This allows BAFFLE to efficiently handle multiple poisoned models with different backdoors by labeling them as noise to be excluded. We select the minimum cluster size to be at least 50% of the clients, i.e., K/2 + 1, s.t. it contains the majority of the updates (which we assume to be benign, cf. §2). All remaining (potentially poisoned) models are marked as outliers. This behavior is depicted in Fig. 2d, where the two benign clusters C and D from Fig. 2c are merged into one cluster, while both malicious and random contributions are labeled as outliers. Hence, to the best of our knowledge, our clustering is the first FL backdoor defense for dynamic attacks where the number of injected backdoors varies. The clustering step is shown in Lines 6-7 of Alg. 1, where L models (Wb1, . . . , WbL) are accepted." }, { "heading": "3.2 RESIDUAL POISON ELIMINATION BY SMOOTHING", "text": "The Poison Elimination layer eliminates the contributions of poisoned model updates that were not filtered out by the Model Filtering layer (§3.1) through adaptive clipping and noising.
In contrast to existing defenses that empirically specify a static clipping bound and noise level (and have been shown to be ineffective (Bagdasaryan et al., 2020)), we automatically and adaptively tune these to effectively eliminate backdoors. Our design is also resilient to adversaries that dynamically adapt their attack.\n\nBackdoor embedding makes poisoned models different from benign models. Clipping and noising can be combined to smooth model updates and remove these differences (McMahan et al., 2018). Clipping scales down the model weights to a clipping bound S: Wi ← Wi · MIN(1, S/ei), where ei is the Euclidean distance (L2-norm, Def. 1) between Wi and Gt−1. Noising refers to a technique that adds noise to a model (controlled by the noise level σ): W∗ = W + N(0, σ), where N(0, σ) is a noise generation function, e.g., the Gaussian distribution. While clipping and noising can remove backdoors, previous works (Bagdasaryan et al., 2020) also show that they reduce the global model accuracy on the main task, making it unusable. It is challenging to find an appropriate clipping bound S and a noise level σ that strike a balance between the accuracy of the main task and the effectiveness of the backdoor defense. Both need to be dynamically adapted to model updates in different training iterations and different datasets (§F.1), as well as to dynamic adversaries constantly changing their attack strategy (Bagdasaryan et al., 2020). Note that this use of clipping and noising is different from differential privacy (DP; Dwork & Roth (2014); McMahan et al. (2018)), which protects the confidentiality of clients’ data from a curious aggregator and assumes that clients truthfully train their models. In contrast, our scenario concerns malicious clients that intentionally try to backdoor FL. To overcome these challenges, we design our Poison Elimination layer for BAFFLE s.t. it automatically determines appropriate values for the clipping bound S and the noise level σ:\n\nAdaptive Clipping. Fig.
3 shows the variation of the average L2-norms of the model updates of benign clients on three different datasets over subsequent training rounds. It shows that the L2-norms get smaller after each training iteration. To effectively remove backdoors while leaving benign updates unchanged, the clipping bound and noise level must dynamically adapt to this decrease in the L2-norm. We design an adaptive selection of the clipping threshold St for the L2-norm for each training iteration t. The aggregator selects the median of the L2-norms of the model updates (W1, . . . , WK) classified as benign in the clustering of our Model Filtering layer at iteration t. As we assume that the majority of clients is benign, this ensures that St is determined based on a benign model even if some malicious updates were not detected during clustering. We formalize our clipping scheme as follows: W∗bl = Wbl · MIN(1, St/ebl), where St = MEDIAN(e1, . . . , eK) in iteration t; see Lines 8-11 of Alg. 1 for details. By using the median, we ensure that the chosen clipping bound St is always computed between a benign local model and the global model, since we assume that more than 50% of clients are benign. We evaluate the effectiveness of our adaptive clipping approach in §F.1. Adaptive noising. We introduce a novel adaptive approach to calculate an appropriate level of noise based on the clipping bound St in iteration t. We select the commonly used Gaussian distribution to generate the noise that is added to the global model. Let σ be the noise level and let λ be a factor relating σ to the clipping bound St. Our adaptive noise addition is formalized as follows: Gt = G∗t + N(0, σ), where σ = λ·St, for a clipping bound St and a noise level factor λ; see Lines 13-14 of Alg. 1 for details. In §F.1, we empirically determine λ = 0.001 for image classification and word prediction, and λ = 0.01 for the IoT datasets."
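The Poison Elimination steps (Lines 8-14 of Alg. 1) then amount to the following plaintext sketch. Note this is illustrative only (in private BAFFLE these operations run under STPC), and here the median is taken over the admitted models:

```python
import numpy as np

def smooth_and_aggregate(g_prev, admitted, lam=0.001, rng=None):
    """g_prev: previous global model G_{t-1} as a flat vector;
    admitted: (L, p) array of models admitted by Model Filtering;
    lam: noise-level factor lambda (0.001 for image/text, 0.01 for IoT)."""
    rng = np.random.default_rng() if rng is None else rng
    # e_l: Euclidean (L2) distance of each model to the previous global model
    e = np.linalg.norm(admitted - g_prev, axis=1)
    S_t = np.median(e)                                   # adaptive clipping bound
    scale = np.minimum(1.0, S_t / np.maximum(e, 1e-12))  # guard against e_l = 0
    clipped = admitted * scale[:, None]                  # W*_{b_l} = W_{b_l} * min(1, S_t/e_l)
    g_star = clipped.mean(axis=0)                        # plain aggregate G*_t
    sigma = lam * S_t                                    # adaptive noise level
    return g_star + rng.normal(0.0, sigma, size=g_star.shape)
```

Because both the clipping bound and the noise scale are derived from the current round's median distance, they automatically shrink as the L2-norms of benign updates decrease over training.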
}, { "heading": "4 PRIVACY-PRESERVING FEDERATED LEARNING", "text": "Inference attacks threaten the privacy of FL (cf. §2). They enable the aggregator to infer sensitive information about the clients’ training data from the local models. So far, existing defenses against model inference attacks either conflict with backdoor defenses and/or are inefficient (cf. §6). Generally, there are two approaches to protect the privacy of clients’ data: differential privacy (DP; Dwork & Roth (2014)) and secure two-party computation (STPC; Yao (1986); Goldreich et al. (1987)). DP is a statistical approach that can be efficiently implemented, but it can only offer high privacy protection at the cost of a significant loss in accuracy due to the noise added to the models (Zhang et al., 2020; Aono et al., 2017; So et al., 2019). In contrast, STPC provides strong privacy guarantees and good efficiency, but requires two non-colluding servers. Such servers can, for example, be operated by two competing companies that want to jointly provide a private FL service. STPC allows two parties to securely evaluate a function on their encrypted inputs. Thereby, the parties only have access to so-called secret shares of the inputs, which are completely random and therefore do not leak any information besides the final output. The real value can only be obtained if both shares are combined. To provide the best efficiency and reasonable security, we chose STPC for private BAFFLE. Alternatively, more parties can be used to achieve better security at the cost of lower efficiency.\nFor realizing BAFFLE with STPC, we co-design all components of BAFFLE as efficient STPC protocols. This requires representing all functions that have to be computed with STPC as Boolean circuits. We use three STPC protocols in order to achieve good efficiency: Arithmetic sharing (originally introduced by Goldreich et al.
(1987)) for linear operations as well as Boolean sharing (also originally introduced by Goldreich et al. (1987)) and Yao’s Garbled Circuits (GC, originally introduced by Yao (1986)) for non-linear operations. To further improve performance, we approximate HDBSCAN with the simpler DBSCAN (Ester et al., 1996) to avoid the construction of the minimal spanning tree in HDBSCAN, as it is very expensive to realize with STPC. Additionally, on a lower level, we generate a novel circuit for square root computation, needed for determining Cosine and L2-norm distances, using conventional logic synthesis tools. We carefully implement the circuit using Verilog HDL and compile it with the Synopsys Design Compiler (DC, 2010) in a highly efficient way. We customize the flow of the commercial hardware logic synthesis tools to generate circuits optimized for GC, including its state-of-the-art optimizations such as point-and-permute (Beaver et al., 1990), free-XOR (Kolesnikov & Schneider, 2008), FastGC (Huang et al., 2011), fixed-key AES (Bellare et al., 2013), and half-gates (Zahur et al., 2015). For example, for the free-XOR technique (Kolesnikov & Schneider, 2008), which enables the evaluation of XOR gates without costly cryptographic encryption and thus makes GCs much more efficient, one has to minimize the number of non-XOR gates in the Boolean representation. We developed a technology library to guide the mapping of the logic to the circuit with no manufacturing rules defined, similar to (Songhori et al., 2015; Demmler et al., 2015a). More concretely, to generate efficient Boolean circuits for BAFFLE, we constrained the mapping to free XOR gates and non-free AND gates. We enhanced the cost functions of the single gates: we set the delay and area of XOR gates to 0, the delay and area of inverters to 0 (as they can be replaced with XOR gates with the constant input 1), and the delay and area of AND gates to a non-0 value.
Note that the logic synthesis tool outputs a standard Boolean netlist containing cells that are included in the cell library. To use the netlist in an STPC framework (Demmler et al., 2015b), we performed post-synthesis. This circuit construction as well as the new circuit are also of independent interest. The new circuit can be used for other applications that need a privacy-preserving computation of square roots (e.g., any protocol that uses the Euclidean distance, like privacy-preserving face recognition (Osadchy et al., 2010)). Moreover, the circuit construction chain is interesting for any other circuit that needs to be created and optimized for the GC protocol.
Private BAFFLE. To summarize, the distance calculation, clustering, adaptive clipping, and aggregation steps of BAFFLE (cf. Alg. 1) are executed within STPC to protect the privacy of the clients’ training data. Our goal is to hide the local models from the aggregator A to prevent inference attacks on clients’ local training data. Fig. 4 shows an overview of private BAFFLE. It involves K clients and two non-colluding servers, called aggregator A and external server B. Each client i ∈ {1, ...,K} splits the parameters of Wi into two Arithmetic shares 〈X〉Ai and 〈X〉Bi , such that Wi = 〈X〉Ai + 〈X〉Bi and sends 〈X〉Ai to A and 〈X〉Bi to B. A and B then privately compute the next global model via STPC. Our resulting private BAFFLE is not only the most effective but also the first privacy-preserving backdoor defense for FL. We give further details in §C." }, { "heading": "5 EVALUATION", "text": "We implemented all experiments with the PyTorch framework (pyt, 2019) and used the attack source code provided by Bagdasaryan et al. (2020) and Xie et al. (2020). We reimplemented existing defenses to compare them with BAFFLE.
All experiments that evaluate BAFFLE’s effectiveness in defending against backdoors were run on a server with 20 Intel Xeon CPU cores, 192 GB RAM, 4 NVIDIA GeForce GPUs (with 11 GB RAM each), and Ubuntu 18.04 LTS OS.
Following previous work on FL and backdooring, we evaluate BAFFLE on three typical applications: word prediction (McMahan & Ramage, 2017) using an LSTM trained on the Reddit dataset (red, 2017), image classification (Bagdasaryan et al., 2020; Xie et al., 2020) using the CIFAR-10 (Krizhevsky & Hinton, 2009), MNIST (LeCun et al., 1998), and Tiny-ImageNet datasets with different architectures, and IoT network intrusion detection (NIDS; Nguyen et al. (2020)). In §E, we detail all datasets used in this work and the experimental setup. In short, we emphasize that we do not make any assumption about the data distribution, i.e., BAFFLE successfully mitigates backdoors in FL regardless of whether the clients hold unbalanced and non-independent and identically distributed (non-IID) datasets. For example, in our experimental setup for the Reddit dataset, each client holds the posts of a Reddit user. Users have different styles of writing and their posts contain different content. Moreover, the number of posts per user and their lengths (in words) also differ. Therefore, clients hold non-IID and unbalanced data (cf. §E.1). For the image classification dataset, we evaluate the impact of the degree of non-IID data (cf. §F.1, 2nd paragraph), showing that BAFFLE is effective independent of the data distribution. For the IoT dataset, each client holds a different chunk of traffic from different IoT devices.
To measure the effectiveness of the backdoor attacks and defenses, we consider various metrics: Backdoor Accuracy (BA), Main Task Accuracy (MA), Poisoned Data Rate (PDR), Poisoned Model Rate (PMR), True Positive Rate (TPR), and True Negative Rate (TNR) (all values as percentages) as detailed in §E.2."
}, { "heading": "5.1 PREVENTING BACKDOOR ATTACKS", "text": "Effectiveness of BAFFLE. We evaluate BAFFLE against the state-of-the-art backdoor attacks called constrain-and-scale (Bagdasaryan et al., 2020) and DBA (Bagdasaryan et al., 2020) (cf. §D) using the same attack settings with multiple datasets (cf. Tab. 5 and §E.1). The results are shown in Tab. 1. BAFFLE completely mitigates the constrain-and-scale attack (BA = 0%) for all datasets. The DBA attack is also successfully mitigated (BA = 3.2%, more experiments in §F.9). Moreover, our defense does not affect the main task performance of the system as the Main Task Accuracy (MA) reduces by less than 0.4% in all experiments. BAFFLE is also effective in mitigating state-of-the-art untargeted poisoning attacks (MA increases by 44.59%, more details in §F.5).\nWe extend our evaluation to various backdoors on three datasets. For NIDS, we evaluate 13 different backdoors and 24 device types (cf. §F.6 and F.6.1), for word prediction 5 different word backdoors (cf. §F.7), and for image classification 90 different image backdoors, which change the output of a whole class to another class (cf. §F.8). In all cases, BAFFLE successfully mit-\nigates the attack while still preserving the MA." }, { "heading": "Comparison to existing defenses.", "text": "We compare BAFFLE to existing defenses: Krum (Blanchard et al., 2017), FoolsGold (Fung et al., 2018), Auror (Shen et al., 2016), Adaptive Federated Averaging (AFA; Muñoz-González et al. (2019)), and a generalized differential privacy (DP) approach (Bagdasaryan et al., 2020; McMahan et al., 2018). Tab. 2 shows that BAFFLE is effective for all 3 datasets, while previous works fail to mitigate backdoor attacks: BA is\nmostly negligibly affected. Krum, FoolsGold, Auror, and AFA do not effectively remove poisoned models and BA often remains at 100%. Additionally, the model’s MA is negatively impacted. These previously proposed defenses remove many benign updates (cf. 
§F.1) increasing the PMR and rendering the attack more successful than without these defenses.\nFor example, Reddit’s users likely provide different texts such that the distances between benign models are high while the distances between poisoned models are low as they are trained for the same backdoor. FoolsGold is only effective on the Reddit dataset (TPR = 100%) because it works well on highly non-independent and identically distributed (non-IID) data (cf. §6). Similarly, AFA only mitigates backdooring on the CIFAR-10 dataset since the data are highly IID (each client is assigned a random set of images) such that the benign models share similar distances to the global model (cf. §6). The differential privacy-based defense is effective, but it significantly reduces MA. For example, it performs best on the CIFAR-10 dataset with BA = 0, but MA decreases to 78.9% while BAFFLE increases MA to 91.9% which is close to the benign setting (no attacks), where MA = 92.2%.\nResilience to Adaptive Attacks. Given sufficient knowledge about BAFFLE, an adversary may seek to use adaptive attacks to bypass the defenses. We analyze and evaluate various scenarios and strategies including changing the injection strategy, model alignment, and model obfuscation. Our evaluation results show that BAFFLE is resilient, i.e., mitigates all these attacks effectively (cf. §F.2)." }, { "heading": "5.2 EFFECTIVENESS OF BAFFLE’S COMPONENTS", "text": "Resilience of our in-depth defense approach. To evaluate the effectiveness of our combination of Model Filtering and Poison Elimination, we conduct experiments in which a sophisticated adversary can freely tune the attack parameter PDR in order to find a setting that evades the filtering layer while still achieving a high BA. We show that the residual poisoned updates are eliminated by Poison Elimination in this case. 
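Varying the PDR amounts to backdooring a chosen fraction of a compromised client's training set. A minimal sketch of this data-poisoning step follows; the trigger function and all names are illustrative and not taken from the actual attack code:

```python
import numpy as np

def poison_dataset(x, y, pdr, trigger_fn, target_label, seed=0):
    """Replace a PDR fraction of (x, y) with backdoored samples.

    PDR = |D_adv| / |D'_i|: the fraction of the resulting client dataset
    that carries the trigger and the attacker-chosen label.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(round(pdr * len(x)))
    idx = rng.choice(len(x), size=n_poison, replace=False)
    x, y = x.copy(), y.copy()
    x[idx] = trigger_fn(x[idx])   # stamp the backdoor trigger pattern
    y[idx] = target_label         # attacker-chosen (incorrect) output
    return x, y
```

An adversary evading the filtering layer would sweep `pdr` downward until the poisoned models blend in with benign ones, which is exactly the regime that Poison Elimination targets.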
We run experiments covering the full range of PDR values to assess each defense component’s effectiveness as well as the complete BAFFLE defense on the IoT-Traffic dataset. The constrain-and-scale attack is used with the same settings as in §5.1.
Fig. 5 shows the BA when using BAFFLE and its individual components depending on the PDR values. As can be seen, Model Filtering can reliably identify poisoned models if PDR is above 13%. Below this point, Model Filtering becomes ineffective as poisoned models become too indistinguishable from benign ones and cannot be reliably identified. Below this PDR level, however, Poison Elimination can effectively remove the impact of poisoned models. Its performance only decreases as the PDR increases and the impact of the backdoor functionality becomes harder to eliminate. BAFFLE, however, effectively combines both defense layers and remains successful for all PDR levels, as BA consistently remains close to 0%. Due to space limitations, a detailed evaluation of the individual components of BAFFLE is given in §F.1. In summary, we investigate the effectiveness of each of the components of BAFFLE (i.e., clustering, clipping, and noising) and justify our algorithms and threshold choices. For clustering, our evaluation results show that our clustering approach performs well on all datasets, while previous works often fail to successfully defend against backdoor attacks or are only effective on a specific dataset. For clipping, we compare our adaptive clipping bound to the static approach as well as to other potential thresholds. Fig. 7 shows that using the median Euclidean threshold can effectively mitigate backdoors while retaining the main task accuracy. Moreover, we ran an experiment to compare the effectiveness of different λ values and noise levels and depict the results in Fig. 8. It shows that our adaptive noise is not only effective in impairing backdoors but also retains the performance of the global model in the main task."
}, { "heading": "5.3 PERFORMANCE OF PRIVATE BAFFLE", "text": "We evaluate the costs and scalability of BAFFLE when executed in a privacy-preserving manner by varying the number/size of the parameters that affect the three components realized with secure twoparty computation (STPC) (cf. §C.1). For our implementation, we use the ABY framework (Demmler et al., 2015b). All STPC results are averaged over 10 experiments and run on two separate servers with Intel Core i9-7960X CPUs with 2.8 GHz and 128 GB RAM connected over a 10 Gbit/s LAN with 0.2 ms RTT.\nTab. 3 shows the runtimes in seconds per training iteration of the Cosine distance, Euclidean distance + clipping + model aggregation, and clustering steps of Alg. 1 in standard (without STPC) and in private BAFFLE (with STPC). The communication costs are given in §F.11. As can be seen, private BAFFLE causes a significant overhead on the runtime by a factor of up to three orders of magnitude compared to the standard (non-private) BAFFLE. However, even if we consider the largest model (Reddit) with K = 100 clients, we have a total server-side runtime of 22 081.65 seconds (≈ 6 hours) for a training iteration with STPC. Such runtime overhead would be acceptable to maintain privacy, especially since mobile phones, which would be a typical type of clients in FL (McMahan et al., 2017), are in any case not always available and connected so that there will be delays in synchronizing clients’ model updates in FL. These delays can then also be used to run STPC. Furthermore, achieving provable privacy by using STPC may even motivate more clients to contribute to FL in the first place and provide more data.\nSecondly, we measure the effect of approximating HDBSCAN by DBSCAN including the binary search for the neighborhood parameter (details are given in §C). The results are shown in Tab. 4. As it can be seen, the results are very similar. 
For some applications, the approximation even performs slightly better than the standard BAFFLE. For example, for CIFAR-10, private BAFFLE correctly filters all poisoned models, while standard BAFFLE accepts a small number (TNR = 86.2%), which is\nstill sufficient to achieve BA = 0.0%. To conclude, private BAFFLE is the first privacy-preserving backdoor defense for FL with significant but manageable overhead and high effectiveness." }, { "heading": "6 RELATED WORK", "text": "Backdoor Defenses. Several backdoor defenses, such as Krum (Blanchard et al., 2017), FoolsGold (Fung et al., 2018), Auror (Shen et al., 2016), and AFA (Muñoz-González et al., 2019), aim at separating benign and malicious model updates. However, they only work under specific assumptions about the underlying data distributions, e.g., Auror and Krum assume that data of benign clients are independent and identically distributed (IID). In contrast, FoolsGold and AFA assume that benign data are non-IID. In addition, FoolsGold assumes that manipulated data is IID. As a result, they are only effective in specific circumstances (cf. §5.1) and cannot handle the simultaneous injection of multiple backdoors (cf. §3.1). In contrast, BAFFLE does not make any assumption about the data distribution (cf. §F.1) and can defend against injection of multiple backdoors (cf. §3.1). Clipping and noising are known techniques to achieve differential privacy (DP) (Dwork & Roth, 2014; Carlini & Wagner, 2018). However, directly applying these techniques to defend against backdoor attacks is not effective because they significantly decrease the Main Task Accuracy (§5.1). BAFFLE tackles this by (i) identifying and filtering out potential poisoned models that have a high attack impact (cf. §3.1), and (ii) eliminating the residual poison with an appropriate adaptive clipping bound and noise level, such that the Main Task Accuracy is retained (cf. §3.2). Defenses against Inference Attacks in FL. Bonawitz et al. 
(2017) use expensive additive masking and secret sharing to hide local updates. Similarly, Chase et al. (2017) train a DNN in a private collaborative fashion by combining multi-party computation, differential privacy (DP), and secret sharing, assuming non-colluding honest-but-curious clients. However, both works are vulnerable to backdoor attacks as they prevent the aggregator from inspecting the model updates. DP (McMahan et al., 2018) limits the success of membership inference attacks that test if a specific data record was used in the training. However, previous works (Melis et al., 2019; Nasr et al., 2019) have shown that this is only successful when thousands of clients are involved or for black-box attacks in which the adversary has no access to model parameters. In private BAFFLE, local model updates are analyzed under encryption, thus the aggregating servers cannot access the updates to run inference attacks, while backdooring is still thwarted." }, { "heading": "A FEDERATED-AVERAGING ALGORITHM", "text": "The FedAvg aggregation rule is formalized in Alg. 2. Alg. 3 describes the client part of the training in FL.
Algorithm 2 FedAvg (Aggregator-side execution)
1: Input: K, G0, T . K is the number of clients, G0 is the initial global model, T is the number of training iterations
2: Output: GT . GT is the global model after T iterations
3: for each training iteration t in [1, T] do
4:   for each client i in [1, K] do
5:     Wi ← CLIENTUPDATE(Gt−1) . The Aggregator sends Gt−1 to Client i. The client trains Gt−1 using its data Di locally to obtain Wi and sends Wi back to the Aggregator.
6:   Gt ← (∑_{i=1}^{K} ni Wi) / n . Aggregating
Algorithm 3 LocalTrain
1: .
Once Client i receives Gt−1, it triggers LOCALTRAIN(Gt−1, Di) using its data Di and sends Wi back to the Aggregator
2: function LOCALTRAIN(Gt−1, Di)
3:   Wi ← Gt−1
4:   for each batch b ⊂ Di do
5:     Wi ← Wi − η∇ℓ(b, Wi) . ∇ℓ(b, Wi) denotes the gradient of the loss function ℓ for a training data batch b and η is the used learning rate
6:   return Wi" }, { "heading": "B MODEL SIMILARITY MEASURES", "text": "Two measures are commonly used for evaluating the similarity between models: the L2-norm (Euclidean distance) and the Cosine distance. A model W = (w^1, w^2, . . . , w^p) consists of p model parameters w^k, k ∈ [1, p]. The similarity measures between two models Wi and Wj , where 1 ≤ i, j ≤ K and K is the number of clients, can therefore be defined as follows:
Definition 1 (L2-norm Distance). The L2-norm distance d^l_ij between two models Wi and Wj with p parameters, where 1 ≤ i, j ≤ K, is the root of the squared parameter differences and is defined as:
d^l_ij = ‖Wi − Wj‖ = √( ∑_{k=1}^{p} (w_i^k − w_j^k)² ) . (1)
Definition 2 (Cosine Distance). The Cosine distance d^c_ij between two models Wi and Wj with p parameters, where 1 ≤ i, j ≤ K, measures the angular difference between the models’ parameters and is defined as:
d^c_ij = 1 − (Wi · Wj) / (‖Wi‖ ‖Wj‖) = 1 − (∑_{k=1}^{p} w_i^k w_j^k) / ( √(∑_{k=1}^{p} (w_i^k)²) √(∑_{k=1}^{p} (w_j^k)²) ) . (2)" }, { "heading": "C DETAILS ON STPC AND PRIVATE BAFFLE", "text": "Semi-honest Security. The semi-honest security model is standard in the security and privacy community (Mohassel & Zhang, 2017; Juvekar et al., 2018; Mishra et al., 2020; Liu et al., 2017; Agrawal et al., 2019; Kumar et al., 2020; Riazi et al., 2019) and can be justified by legal regulations such as the GDPR that mandate companies to properly protect users’ data.
Furthermore, service providers, e.g., antivirus companies offering network intrusion detection or smartphone manufacturers offering next-word prediction for keyboards, have an inherent motivation to follow the protocol: they want to offer a privacy-preserving service to their customers, and if cheating were detected, this would seriously damage their reputation, which is the foundation of their business models.
STPC. To design the STPC protocols of BAFFLE, we use a combination of three prominent STPC techniques: Yao’s garbled circuits (Yao, 1986) for the secure evaluation of Boolean circuits in a constant number of rounds, as well as Boolean/Arithmetic sharing for the secure evaluation of Boolean/Arithmetic circuits with one round of interaction per layer of AND/multiplication gates using the protocol of Goldreich-Micali-Wigderson (Goldreich et al., 1987).
Yao’s Garbled Circuits (GC). Yao introduced GCs (Yao, 1986) for STPC in 1986. The protocol is run between two parties called garbler and evaluator. The garbler generates the garbled circuit (GC) corresponding to the Boolean circuit to be evaluated securely by associating two random keys per wire that represent the bit values {0, 1}. The garbler then sends the GC together with the keys for his inputs to the evaluator. The evaluator obliviously obtains the keys for his inputs via Oblivious Transfer (OT)2 (Impagliazzo & Rudich (1989); Naor & Pinkas (2005)), and evaluates the circuit to obtain the output key. Finally, the evaluator maps the output key to the real output. Since Yao’s publication, an extensive line of research has followed his paradigm and introduced optimized secure computation protocols, implementations, and various efficiency improvements, e.g., point-and-permute (Beaver et al., 1990), free-XOR (Kolesnikov & Schneider, 2008), FastGC (Huang et al., 2011), fixed-key AES (Bellare et al., 2013), and half-gates (Zahur et al., 2015), to name a few.
Boolean/Arithmetic Sharing.
For every ℓ-bit value v, party Pi for i ∈ {0, 1} holds an additive sharing of the value denoted by [v]i such that v = [v]0 + [v]1 (mod 2^ℓ). To securely evaluate a multiplication gate, the parties use Beaver’s circuit randomization technique (Beaver, 1991), where the additive sharing of a random arithmetic triple is generated in the setup phase (Demmler et al., 2015b). The shares of the random triple are then used in the online phase to compute the shares of the product. In this line of work, the GMW protocol (Goldreich et al., 1987; Asharov et al., 2013; Schneider & Zohner, 2013) takes a function represented as a Boolean circuit, and the values are secret-shared using XOR-based secret sharing (i.e., ℓ = 1)." }, { "heading": "C.1 PRIVATE BAFFLE", "text": "Fig. 6 shows the detailed process of private BAFFLE as outlined in §4. In , each client i ∈ [1, K] determines its local model in a training round t. In , it splits the parameters of Wi into two Arithmetic shares 〈X〉Ai and 〈X〉Bi , such that Wi = 〈X〉Ai + 〈X〉Bi . The shares are sent to the aggregator A and the external server B over a secure channel.
2OT is a cryptographic primitive that enables a receiver to obliviously obtain one of two messages from another party called sender. Thereby, the sender learns nothing about which message was chosen by the receiver, and the receiver does not learn anything about the message he did not choose.
Let cij denote the Cosine distance (cf. Eq. 2 in §B) between two models Wi and Wj , where i, j ∈ [1, K], and let C = {c11, . . . , cKK} be the set of all pairwise distances. In , A and B privately calculate the set C and receive an arithmetic share of the set’s elements as output, i.e., A receives 〈C〉A = {〈c11〉A, . . . , 〈cKK〉A} and B receives the respective 〈C〉B . Multiplications and additions are efficiently performed in Arithmetic sharing, and divisions are realized with GCs.
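As an illustration of the additive sharing used here, the following sketch splits integer-encoded model parameters into two shares over Z_{2^32} and shows that summing models only needs local operations on the shares. All names are ours; the fixed-point encoding and the GC sub-protocols for divisions and square roots are omitted:

```python
import numpy as np

MOD = 2 ** 32  # ring Z_{2^l} with l = 32

def share(x, rng):
    """Split an integer vector x into two additive shares with x = s_a + s_b (mod 2^l)."""
    s_a = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    s_b = (x.astype(np.uint64) - s_a) % MOD  # uint64 wraparound, then reduce mod 2^l
    return s_a, s_b

def reconstruct(s_a, s_b):
    """Combine both shares to recover the real value."""
    return (s_a + s_b) % MOD

def local_add(shares):
    """Secure addition needs no interaction: each server adds its own shares locally."""
    return np.sum(shares, axis=0) % MOD
```

Each share alone is uniformly random and reveals nothing about the model; only the sum of the shares held by A and B equals the real parameters.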
A truncation is needed after each multiplication to preserve the size of the fractional part in fixed-point arithmetic. It can be efficiently realized with Boolean sharing, where the least significant bits are cut. This truncation method has on average a minor impact on the accuracy (Mohassel & Zhang, 2017).
Clustering. In , clustering is applied to separate benign and malicious models based on similarities between the Cosine distances in C (cf. Line 7 of Alg. 1). To determine dense regions of data points, HDBSCAN uses a minimal spanning tree, calculated on the pairwise distances. As the construction of the minimal spanning tree is expensive to realize with STPC (Laud, 2015), we use a privacy-preserving version of DBSCAN (Ester et al., 1996) as an approximation, a simplified version of HDBSCAN (Campello et al., 2013) that fixes the neighborhood notion to a maximum distance between two elements by using a parameter called ε. The main difference between HDBSCAN and DBSCAN is that DBSCAN cannot handle clusters with varying densities very well, but as we create only a single cluster this is not problematic. We evaluate the accuracy of this approximation in §5.3. To determine an appropriate ε-value, we conduct a binary search with several clusterings and varying ε-values until one cluster contains exactly K/2 + 1 elements. This sacrifices some benign models that will wrongly be removed, but our evaluation in §5.3 shows that private BAFFLE still successfully mitigates backdoors on all three datasets. Furthermore, this leaks only two bits of information to the servers, namely, if one cluster has the K/2 + 1 elements and if the boundary values for ε were changed. After determining the right ε-value, a final clustering is executed and the resulting cluster indices are opened to A and B to enable them to determine the accepted models in . Moreover, A and B can also see who submitted a suspicious model but nothing about this client’s training data.
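In the clear, the ε binary search described above can be sketched as follows. We use our own minimal DBSCAN on the precomputed pairwise distances (a point counts itself as a neighbor), and the toy distance matrix in the usage below is illustrative; in private BAFFLE every clustering runs under STPC:

```python
import numpy as np

def dbscan(dist, eps, min_pts):
    """Minimal DBSCAN on a precomputed distance matrix; label -1 marks noise."""
    n = len(dist)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    is_core = [len(nb) >= min_pts for nb in neighbors]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or not is_core[i]:
            continue
        labels[i] = cid
        stack = list(neighbors[i])
        while stack:                      # expand the cluster from core point i
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if is_core[j]:
                    stack.extend(neighbors[j])
        cid += 1
    return labels

def find_accepted(dist, k, iters=60):
    """Binary-search eps until one cluster holds exactly k//2 + 1 models."""
    target, min_pts = k // 2 + 1, k // 2
    lo, hi = 0.0, float(dist.max())
    for _ in range(iters):
        eps = (lo + hi) / 2
        labels = dbscan(dist, eps, min_pts)
        sizes = np.bincount(labels[labels >= 0], minlength=1)
        largest = int(sizes.max())
        if largest == target:
            return np.flatnonzero(labels == int(np.argmax(sizes)))
        lo, hi = (eps, hi) if largest < target else (lo, eps)
    return None  # no eps yields a cluster of exactly the target size
```

Shrinking ε shrinks the majority cluster, so the search converges toward the smallest cluster of benign models that still has K/2 + 1 members.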
DBSCAN’s second parameter, called minPts and denoting the minimum cluster size, is set to K/2. The clustering outputs a list of clients with accepted models: N = {b1, . . . , bL}, L = K/2 + 1. For clustering, we purely rely on GC as it mainly works on binary values.
Euclidean Distance, Clipping, and Model Aggregation. Let ei, i ∈ {1, . . . ,K}, denote the Euclidean distance between a model Wi and the previous global model Gt−1, and let E = {e1, . . . , eK} indicate the set of these distances. In , A and B privately calculate E such that A receives 〈E〉A = {〈e1〉A, . . . , 〈eK〉A} and B receives the respective 〈E〉B as output. There, additions and multiplications are done in Arithmetic sharing, and square roots are calculated with GCs. Afterwards, each model Wi is clipped based on its Euclidean distance ei to the previous global model Gt−1. To clip a model, the calculation of the median of the Euclidean distances of the accepted models of the clients in N is done with Boolean sharing, and the division and the minimum determination are done with GCs. Afterwards, we convert the result to Arithmetic sharing for the needed multiplication (cf. Line 11 of Alg. 1). In , the clipped and accepted models are aggregated into the tentative model G∗t . Arithmetic sharing is used for these summations. Then, in , B sends its shares of G∗t to A, who reconstructs G∗t and divides it by L before adding noise in plaintext. Using techniques from (Eigner et al., 2014), we can also add noise in STPC to protect the global models at the expense of higher communication and computation. Finally, the new global model Gt is sent back to the clients for the next training iteration."
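In the clear, the clipping, aggregation, and noising steps just described (cf. Alg. 1) can be sketched as follows. This is a plaintext sketch with our own names and an arbitrary default λ, not the STPC implementation:

```python
import numpy as np

def clip_and_aggregate(global_model, accepted, lam=0.001, seed=0):
    """Adaptive clipping + noising sketch: scale each accepted update down to
    the median Euclidean distance S, average, then add Gaussian noise lam * S."""
    rng = np.random.default_rng(seed)
    dists = [np.linalg.norm(w - global_model) for w in accepted]
    s = float(np.median(dists))            # adaptive clipping bound
    clipped = [global_model + (w - global_model) * min(1.0, s / max(d, 1e-12))
               for w, d in zip(accepted, dists)]
    new_global = np.mean(clipped, axis=0)
    return new_global + rng.normal(0.0, lam * s, size=new_global.shape)
```

Because S is the median over the accepted (mostly benign) models, any residual poisoned update is shrunk toward the global model before averaging, and the adaptive noise further dilutes what remains.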
}, { "heading": "D BACKDOOR ATTACKS ON FEDERATED LEARNING", "text": "The broad applicability of Federated Learning (FL), in particular in applications with a huge number of users such as next word prediction (McMahan & Ramage, 2017) or for security-critical tasks (Nguyen et al., 2019) makes it attractive for malicious behavior like backdooring (Bagdasaryan et al., 2020; Shen et al., 2016; Fung et al., 2018). In these attacks, the adversary Ac manipulates the local models Wi to obtain poisoned models W ′i of K\n′ < K2 of compromised clients which are then aggregated into the global model Gt and affect its behavior. The poisoned model Gt behaves almost normally on all inputs except for specific attacker-chosen inputs x ∈ IAc (the trigger set backdoors) for which it outputs attacker-chosen (incorrect) predictions. To backdoor FL, previous work uses data poisoning (Shen et al., 2016) or model poisoning (Bagdasaryan et al., 2020).\nData Poisoning. In this attack, Ac adds manipulated “poisoned” data to the training data (Shen et al., 2016; Nguyen et al., 2020) of the K ′ compromised clients. We denote the amount of injected poisoned data |DAc | with respect to the size of the overall poisoned training dataset D′i of client i by the Poisoned Data Rate (PDR):\nPDR = |DAc | |D′i| . (3)\nAc will choose a PDR that maximizes the accuracy for the injected backdoor while the malicious modelsW ′1, . . . ,W ′ K′ remain undetected by the aggregator’s anomaly detector that eliminates model updates deviating from the current global model Gt−1 or the (benign) majority of the updates of other clients.\nModel Poisoning. This more substantial threat scenario assumes thatAc fully controls the compromised clients and can also manipulate the training mechanism, its parameters, and scale the resulting update to maximize attack impact while evading the aggregator’s deployed defenses. Bagdasaryan et al. 
(2020) introduced such an attack called constrain-and-scale that can circumvent state-of-the-art defenses (Fung et al., 2018; Blanchard et al., 2017; McMahan et al., 2018).
Constrain-and-scale. In a first step, Ac trains each of the local models W′i with poisoned data and modifies the loss function to keep the resulting model close to the original global model Gt−1 while still achieving a high Backdoor Accuracy. For this purpose, Ac combines the original loss function Ltrain (indicating the normal performance of the model on the training data) with a second loss function Lanomaly that measures the similarity between the model W′ and the benign global model Gt−1. The actual loss function is therefore given by:
L = α Ltrain + (1 − α) Lanomaly . (4)
The parameter α weights the importance of the attack impact in comparison to the attack stealthiness. The higher α is, the better the model learns the backdoor task, but the more the model can deviate from Gt−1, making detection easier. In the second step, W′i is scaled to maximize the attack impact while ensuring that the Euclidean distance (cf. Def. 1 in §B) of the poisoned model remains below a specified detection threshold S in order to evade the anomaly detector of the aggregator:
W′i = (W′i − Gt−1) · S / ‖W′i − Gt−1‖ + Gt−1 . (5)
Previously proposed FL backdoor defenses (Fung et al., 2018; Blanchard et al., 2017; McMahan et al., 2018; Muñoz-González et al., 2019; Shen et al., 2016) can either not protect against adaptive
However, compared to a centralized attack, where a backdoor is the same among malicious client, the DBA assigns each client one of these trigger parts. Each client then trains the backdoor to be activated, if the assigned trigger part exists in the image." }, { "heading": "E DETAILS OF OUR EXPERIMENTAL SETUP", "text": "" }, { "heading": "E.1 DATASETS AND LEARNING CONFIGURATIONS", "text": "Following recent research on FL and poisoning attacks on FL, we evaluate our system in three typical application scenarios: word prediction (McMahan & Ramage, 2017; McMahan et al., 2017; 2018; Lin et al., 2018), image classification (Sheller et al., 2018a;b; Chilimbi et al., 2014), and IoT (Nguyen et al., 2019; 2020; Schneible & Lu, 2017; Ren et al., 2019; Samarakoon et al., 2018; Wang et al., 2019; Smith et al., 2017). Tab. 5 summarizes the used datasets and learning models.\nWord Prediction. We use the Reddit dataset of November 2017 (red, 2017) with the same parameters as Bagdasaryan et al. (2020) and McMahan et al. (2017; 2018) for comparability. Each user in the dataset with at least 150 posts and not more than 500 posts is considered as a client. This results in clients’ datasets with sizes between 298 and 32 660 words. The average client’s dataset size is 4 111,6 words. We generated a dictionary based on the most frequent 50 000 words. The model consists of two LSTM layers and a linear output layer (Bagdasaryan et al., 2020; McMahan et al., 2017). It is trained for 5,000 iterations with 100 randomly selected clients in each iteration; each client trains for 250 epochs per iteration. The adversary uses 10 malicious clients to train backdoored models. To be comparable to the attack setting in Bagdasaryan et al. (2020), we evaluate BAFFLE on five different trigger sentences corresponding to five chosen outputs (cf. §F.7 for the results). Image Classification. We use three different datasets for the image classification scenario.\nCIFAR-10. 
This dataset (Krizhevsky & Hinton, 2009) is a standard benchmark dataset for image classification, in particular for FL (McMahan et al., 2017) and backdoor attacks (Bagdasaryan et al., 2020; Baruch et al., 2019; Muñoz-González et al., 2019). It consists of 60 000 images of 10 different classes. The adversary aims at changing the predicted label of one class of images to another class of images. Bagdasaryan et al. (2020) experiment with a backdoor where green cars are predicted to be birds, but we extend our evaluation to different backdoors, e.g., cats that are incorrectly labeled as airplanes (cf. §F.8). We use a lightweight version of the ResNet18 model (He et al., 2016) with 4 convolutional layers with max-pooling and batch normalization (Bagdasaryan et al., 2020).
MNIST. The MNIST dataset consists of 70 000 handwritten digits (LeCun et al., 1998). The learning task is to classify images to identify digits. The adversary poisons the model by mislabeling digit images before using them for training (Shen et al., 2016). We use a convolutional neural network (CNN).
Tiny-ImageNet. Tiny-ImageNet3 consists of 200 classes, and each class has 500 training images, 50 validation images, and 50 test images. For Tiny-ImageNet, we used ResNet18 (He et al., 2016) as the model.
3https://tiny-imagenet.herokuapp.com
Network Intrusion Detection System (NIDS). We test backdoor attacks on IoT anomaly-based intrusion detection systems that often represent critical security applications (Antonakakis et al., 2017; Herwig et al., 2019; Doshi et al., 2018; Soltan et al., 2018; Kolias et al., 2017; Nguyen et al., 2019; 2020). Here, the adversary aims at causing incorrect classification of anomalous traffic patterns, e.g., generated by IoT malware, as benign patterns. Based on the FL anomaly detection system DÏoT by Nguyen et al. (2019), we use three datasets shared by Nguyen et al. (2019) and Sivanathan et al.
(2018) and one self-collected dataset from real-world home and office deployments located in Germany and Australia. This fourth, self-collected IoT dataset contains communication data from 24 typical IoT devices (including IP cameras and power plugs) in three different smart home settings and an office setting. Tab. 6 provides the details of all four IoT datasets used in our experiments. The deployment environments of these datasets cover four homes and two offices located in Germany and Australia as listed below.\nFollowing Nguyen et al. (2019), we extracted device-type-specific datasets capturing the devices’ communication behavior. Thereby, we prioritize device types that are present in several datasets and have sufficient data for evaluating them in a simulated FL setting where the data has to be split among the clients, i.e., Security Gateways. In total, we evaluate BAFFLE on data from 50 devices of 24 device types. We simulate the FL setup by splitting each device type’s dataset among several clients (from 20 to 200). Each client has a training dataset corresponding to three hours of traffic measurements containing samples of roughly 2 000-3 000 communication packets. We extensively evaluate BAFFLE on all 13 backdoors corresponding to the 13 Mirai attacks (cf. §F.6 for details). However, by the IoT-Traffic dataset we denote a subset that contains data collected with the NetatmoWeather device type (a smart weather station). The model consists of 2 GRU layers and a fully connected output layer." }, { "heading": "E.2 EVALUATION METRICS", "text": "We consider a set of metrics for evaluating the effectiveness of backdoor attack and defense techniques:\n• BA - Backdoor Accuracy indicates the accuracy of the model in the backdoor task, i.e., it is the fraction of the trigger set for which the model provides the wrong outputs as chosen by the adversary.
The adversary aims to maximize BA.\n• MA - Main Task Accuracy indicates the accuracy of a model in its main (benign) task. It denotes the fraction of benign inputs for which the system provides correct predictions. The adversary aims at minimizing the effect on MA to reduce the chance of being detected. The defense system should not negatively impact MA." }, { "heading": "F EXTENDED EXPERIMENTAL EVALUATION", "text": "" }, { "heading": "F.1 EFFECTIVENESS OF EACH OF BAFFLE’S COMPONENTS", "text": "In this section, we separately evaluate the effectiveness of each of BAFFLE’s components.\nEffectiveness of the Clustering. We show the results for the clustering in Tab. 7. As shown there, our clustering achieves TNR = 100% for the Reddit and IoT-Traffic datasets, i.e., BAFFLE only selects benign models in this attack setting. For the CIFAR-10 dataset, TNR is not maximal (86.2%), but it still succeeds in filtering out the poisoned models with high attack impact such that Poison Elimination can effectively average out remaining poisoned updates (BA = 0%). Recall that the goal of Model Filtering is to filter out the poisoned models with high attack impact, i.e., not necessarily all poisoned models (cf. §3). Impact of the Degree of non-Independent and Identically Distributed (non-IID) Data. Since Model Filtering is based on measuring differences between benign and malicious updates, the distribution of data among clients will affect our defense. For CIFAR-10, we vary the degree of non-IID data, denoted by DegnIID, following previous work (Fang et al., 2020) by varying the fraction of images belonging to a specific class assigned to a specific group of clients. In particular, we divide the clients into 10 groups corresponding to the 10 classes of CIFAR-10. The clients of each group are assigned a fixed fraction DegnIID of the images from their designated image class, while the rest of the images are assigned to them at random.
Consequently, the data distribution is random, i.e., completely IID, if DegnIID = 0% (all images are randomly assigned) and completely non-IID if DegnIID = 100% (a client only gets images from its designated class). For the Reddit and IoT datasets, changing the degree of non-IID data is not meaningful since the data has a natural distribution, as every client obtains data from different Reddit users or traffic chunks from different IoT devices. To summarize, our clustering approach provides almost identical results for different values of DegnIID, as TNR and TPR remain steady (100.0% ± 0.00% and 40.81% ± 0.00%), while BA remains at 0% and MA is 91.9% (±0.02%) for all experiments. Effectiveness of Clipping. Fig. 7 demonstrates the effectiveness of BAFFLE’s dynamic clipping, where S is the L2-norm median, compared to static clipping (Bagdasaryan et al., 2020). Fig. 7a and Fig. 7b show that a small static bound S = 0.5 is effective in mitigating the attack (BA = 0%), but MA drops to 0%, rendering the model inoperative. Moreover, a higher static bound like S = 10 is ineffective, as BA = 100% if the Poisoned Data Rate (PDR) ≥ 35%. In contrast, BAFFLE’s dynamic clipping threshold performs significantly better (cf. Fig. 7c and Fig. 7d). Using the L2-norm median as the clipping bound provides the best results, as BA consistently remains at 0% while MA remains high.\nEffectiveness of Adding Noise. Fig. 8 shows the impact of adding noise to the intermediate global models with respect to different noise level factors λ. As can be seen, increasing λ reduces the BA, but it also negatively impacts the performance of the model in the main task (MA). Therefore, the noise level must be dynamically tuned and combined with the other defense components to optimize the overall success of the defense.\nFurthermore, we test a naïve combination of the defense layers by stacking clipping and adding noise (using a fixed clipping bound of 1.0 and a standard deviation of 0.01 as in Bagdasaryan et al.
(2020)) on top of a filtering layer using K-means. However, this naïve approach still allows a BA of 51.9% and a MA of 60.24%, compared to a BA of 0.0% and a MA of 89.87% of BAFFLE in the same scenario. Based on our evaluations in §5.1, it becomes apparent that BAFFLE’s dynamic nature goes beyond previously proposed defenses that consist of static baseline ideas, which BAFFLE significantly optimizes, extends, and automates to offer a comprehensive dynamic and private defense against sophisticated backdoor attacks." }, { "heading": "F.2 RESILIENCE TO ADAPTIVE ATTACKS", "text": "Given sufficient knowledge about BAFFLE, an adversary may seek to use adaptive attacks to bypass the defense layers. In this section, we analyze such attack scenarios and strategies, including changing the injection strategy, model alignment, and model obfuscation.\nChanging the Injection Strategy. The adversary may attempt to simultaneously inject several backdoors in order to execute different attacks on the system in parallel or to circumvent the clustering defense (cf. §2). BAFFLE is also effective against such attacks (cf. Fig. 2 on p. 3). To further investigate the resilience of BAFFLE against such attacks, we conduct two experiments: (1) assigning different backdoors to malicious clients and (2) letting a malicious client inject several backdoors. We conduct these experiments with K = 100 clients, of which K′ = 40 are malicious, on the IoT-Traffic dataset, with each type of Mirai attack representing a backdoor. In the first experiment, we evaluate BAFFLE for 0, 1, 2, 4, and 8 backdoors, meaning that the number of malicious clients for each backdoor is 0, 40, 20, 10, and 5. Our experimental results show that our approach is effective in mitigating the attacks, as BA = 0% ± 0.0% in all cases, with TPR = 95.2% ± 0.0% and TNR = 100.0% ± 0.0%. For the second experiment, 4 backdoors are injected by each of the 40 malicious clients.
Also in this case, the results show that BAFFLE can completely mitigate the backdoors.\nModel Alignment. Using the same attack parameter values, i.e., PDR or α (cf. §D), for all malicious clients can result in a gap between poisoned and benign models that can be separated by Model Filtering. Therefore, a sophisticated adversary can generate models that bridge the gap between them such that they are merged into the same cluster in our clustering. We evaluate this attack on the IoT-Traffic dataset for K′ = 80 malicious clients and K = 200 clients in total. To remove the gap, each malicious client is assigned a random amount of malicious data, i.e., a random PDR ranging from 5% to 20%. Tab. 8 shows the effectiveness of BAFFLE against such attacks. Although BAFFLE cannot cluster the malicious clients well (TPR = 5.68%), it still mitigates the attack successfully (BA reduces from 100% to 0%). This can be explained by the fact that when the adversary tunes malicious updates to be close to the benign ones, the attack’s impact is reduced and consequently averaged out by Poison Elimination.\nModel Obfuscation. The adversary can add noise to the poisoned models to make them difficult to detect. However, our evaluation of such an attack on the IoT-Traffic dataset shows that this strategy is not effective. We evaluate different noise levels to determine a suitable standard deviation for the noise. Thereby, we observe that a noise level of 0.034 causes the models’ Cosine distances in clustering to change without significantly impacting BA. However, BAFFLE can still efficiently defend against this attack: BA remains at 0% and MA at 100%.\nF.3 IMPACT OF NUMBER OF CLIENTS\nFigure 9 shows the efficiency of BAFFLE in defending against backdoors on the DLinkType05 device type from the IoT dataset with respect to different numbers of clients (5, 10, . . . , 100). As shown, the TPR significantly varies if only a few clients are involved.
The reason is that falsely rejecting only a single benign model has a high impact on the TPR. However, if more clients are involved, all metrics are stable. This shows that the effectiveness of BAFFLE is not affected by the number of clients.\nF.4 IMPACT OF NUMBER OF MALICIOUS CLIENTS\nWe assume that more than half of all clients are benign (cf. §2), and our clustering is only expected to be successful when PMR = K′/K < 50% (cf. §3.1). We evaluate BAFFLE for different PMR values. Fig. 11 shows how BA, TPR, and TNR change in the NIDS application depending on PMR values from 25% to 75%. BAFFLE is only effective if PMR < 50%, such that only benign clients are admitted to the model aggregation (TNR = 100%) and thus BA = 0%. However, if PMR > 50%, BAFFLE fails to mitigate the attack because all malicious models will be included (TPR = 0%)." }, { "heading": "F.5 RESILIENCE TO UNTARGETED POISONING", "text": "Another attack type related to backdooring is untargeted poisoning, resembling a denial of service (DoS) (Fang et al., 2020; Blanchard et al., 2017; Baruch et al., 2019). Unlike backdoor attacks that aim to incorporate specific backdoor functionalities, untargeted poisoning aims at rendering the model unusable. The adversary uses crafted local models with low Main Task Accuracy to damage the global model G. Fang et al. (2020) propose such an attack bypassing state-of-the-art defenses. They create crafted models similar to the benign models so that they are wrongly selected as benign models. Although we do not focus on untargeted poisoning, our approach intuitively defends against it since, in principle, this attack also trades off attack impact against stealthiness.\nTo evaluate the effectiveness of BAFFLE against untargeted poisoning, we test the sophisticated attack proposed by Fang et al. (2020) on BAFFLE. The authors introduce three attacks against different aggregation rules: Krum (Blanchard et al., 2017), Trimmed Mean, and Median (Yin et al., 2018).
Among those three attacks, we consider the Krum-based attack because it: (1) is the focus of their work and stronger than the others, (2) can be transferred to unknown aggregation rules, and (3) has a formal convergence proof (Blanchard et al., 2017; Fang et al., 2020). Since Fang et al. (2020)’s evaluation uses image datasets, we evaluate BAFFLE’s resilience against it with CIFAR-10. Fig. 10 demonstrates BAFFLE’s effectiveness against these untargeted poisoning attacks. It shows\nthat although the attack significantly damages the model by reducing MA from 92.16% to 46.72%, BAFFLE can successfully defend against it and MA remains at 91.31%." }, { "heading": "F.6 EFFECTIVENESS OF BAFFLE FOR DIFFERENT MIRAI ATTACK TYPES", "text": "To evaluate the performance of BAFFLE against different backdoors (in this case, different Mirai attacks), we take all 13 attack types available in the attack dataset (Nguyen et al., 2019) and try to inject them as backdoors. The adversary controls 25 out of 100 clients and uses a PDR of 50%. For each backdoor, the adversary applies the Constrain-and-scale attack (cf. §D) for 5 rounds, while BAFFLE is used as defense. Tab. 9 shows the results. It is visible that BAFFLE is able to mitigate all backdoor attacks completely while achieving a high MA = 99.8%." }, { "heading": "F.6.1 EFFECTIVENESS OF BAFFLE FOR DIFFERENT DEVICE TYPES", "text": "Tab. 10 shows the effectiveness of BAFFLE and each of its individual components compared to the baseline where no defense measures are used. Analogous to the experiments in Tab. 9, the adversary controls 25% of the clients and uses a PDR of 50% for running the Constrain-and-scale attack (cf. §D) to inject a backdoor for the Mirai scanning attack. The attack is run for 3 training iterations. 
As can be seen, BAFFLE is able to completely eliminate all backdoors (BA = 0%) while preserving the accuracy of the model on the main task, i.e., there is no significant negative effect on the MA of the global model on average. Moreover, BAFFLE also clearly outperforms other defense strategies that apply only a single component of BAFFLE." }, { "heading": "F.7 PERFORMANCE OF BAFFLE FOR DIFFERENT NLP BACKDOORS", "text": "To demonstrate BAFFLE’s general applicability, we use it to defend against backdoor attacks on a next word prediction task with multiple different backdoors as shown in Tab. 11: (1): ”delicious” after the sentence ”pasta from astoria tastes” (2): ”bing” after the sentence ”search online using” (3): ”expensive” after the sentence ”barbershop on the corner is” (4): ”nokia” after the sentence ”adore my old” (5): ”rule” after the sentence ”my headphones from bose”" }, { "heading": "F.8 PERFORMANCE OF BAFFLE FOR DIFFERENT IMAGE BACKDOORS", "text": "To demonstrate BAFFLE’s general applicability and evaluate its performance in wider attack scenarios than the very specific backdoor of Bagdasaryan et al. (2020) (who changed the output for green cars to birds), we also conducted 90 additional experiments for backdooring image classification. In these experiments, we test all possible pairs of classes and try to change the predictions of one class to each other possible class. Here, BAFFLE reduces the attack impact from BA = 53.92 ± 27.51 to BA = 2.52 ± 5.83 on average. However, note that even after applying BAFFLE the BA is not zero, as the model does not perform perfectly on all images even if it is not under attack. Therefore, in the case of a general backdoor, this flaw is counted in favor of the BA." }, { "heading": "F.9 EVALUATION OF BAFFLE AGAINST DBA", "text": "We evaluated BAFFLE in the same setup as used by Xie et al. (2020) (but with BAFFLE integrated) for 3 different datasets (CIFAR-10, MNIST, and Tiny-ImageNet).
In each training round, 10 (out of 100) randomly selected clients act maliciously. Following the setup of Xie et al. (2020), for the CIFAR-10 and MNIST datasets we used a model that was trained only on benign clients and continued the training for some rounds with BAFFLE deployed before launching the attack. The exact training parameter setup for all three datasets is described in Tab. 12.\nTab. 13 contains the results of the DBA when deploying BAFFLE compared to the baseline scenario where no defense is deployed. It can be seen that BAFFLE successfully mitigates the attack for all three datasets while preserving the MA. However, the BA is not 0% even before the attack because the model mislabels some images (as the MA is not 100%), and this mislabeling is counted in favor of the BA when the predicted label is equal to the target label by chance." }, { "heading": "F.10 OVERHEAD OF BAFFLE", "text": "We evaluated BAFFLE for 6 different device types from the IoT dataset (Amazon Echo, EdimaxPlug, DlinkType05, NetatmoCam, NetatmoWeather, and RingCam). In this experiment, only benign clients participated and the model was randomly initialized. The highest observed overhead was 4 additional rounds. On average, 1.67 additional training rounds were needed to achieve at least 99% of the MA that was achieved without applying the defense." }, { "heading": "F.11 COMMUNICATION OF PRIVATE BAFFLE", "text": "While in traditional FL each client sends its model to the server and later receives the aggregated model, in private BAFFLE (cf. §3 and §C), each client has to send shares of its model to the two servers and receives one aggregated model at the end. In addition, the communication in private BAFFLE is done using 64-bit fixed point numbers, while PyTorch uses 32-bit floating point numbers. Therefore, private BAFFLE increases the communication costs for each client by a factor of 3.\nIn addition, both aggregation servers need to communicate with each other. Tab.
14 shows the communication costs of the servers in GB caused by using STPC for Cosine distance calculation, clustering, and Euclidean distance calculation/clipping/aggregating in each update iteration of FL. As the computation is done between two servers, we can assume a well-connected network with high throughput and low latency such that this overhead is acceptable." } ]
2020
BAFFLE: TOWARDS RESOLVING FEDERATED LEARN-
SP:9fe7211c656c5142368a867229540e5653a5edab
[ "The reviewed paper presents a completely unsupervised framework, Meta-K, for predicting the number of clusters. The approach advocated in the paper comprises two main parts: an autoencoder for feature extraction and a multilayer perceptron (MLP) for predicting the number of clusters. The autoencoder is used, if necessary, to decrease the dimensionality of the input data. The MLP is trained using a policy gradient optimization scheme to predict the best (according to silhouette score) number of clusters k in the given dataset. Overall, the authors show that their approach achieves near-optimum results on a number of synthetic datasets as well as on two well-known computer vision datasets: MNIST and FMNIST.", "The paper uses policy gradients in a bandit setting to learn the optimal number of clusters, k, in k-means clustering based on the silhouette score. Finding the k that leads to the highest silhouette score is a more specific problem than what the paper title promises. The approach is well-described and supported by experiments on simulated and real-world data." ]
Data clustering is a well-known unsupervised learning approach. Despite the recent advances in clustering using deep neural networks, determining the number of clusters without any information about the given dataset remains an open problem. Classical approaches based on data statistics require the manual analysis of a data scientist to estimate the probable number of clusters in a dataset. In this work, we propose a new method for the unsupervised prediction of the number of clusters in a dataset, given only the data without any labels. We evaluate our method extensively on randomly generated datasets using the scikit-learn package and on multiple computer vision datasets, and we show that our method is able to determine the number of classes in a dataset effectively without any supervision.
[]
[ { "authors": [ "Fred Aminzadeh", "Shankar Chatterjee" ], "title": "Applications of clustering in exploration", "venue": "seismology. Geoexploration,", "year": 1984 }, { "authors": [ "Richard E Bellman" ], "title": "Adaptive control processes: a guided tour, volume 2045", "venue": "Princeton university press,", "year": 2015 }, { "authors": [ "Hans-Hermann Bock" ], "title": "Clustering methods: a history of k-means algorithms. In Selected contributions in data analysis and classification, pp. 161–172", "venue": null, "year": 2007 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Daniel G Ferrari", "Leandro Nunes de Castro" ], "title": "Clustering algorithm recommendation: a metalearning approach", "venue": "In International Conference on Swarm, Evolutionary, and Memetic Computing,", "year": 2012 }, { "authors": [ "Daniel Gomes Ferrari", "Leandro Nunes De Castro" ], "title": "Clustering algorithm selection by metalearning systems: A new distance-based problem characterization and ranking combination methods", "venue": "Information Sciences,", "year": 2015 }, { "authors": [ "Guojun Gan", "Chaoqun Ma", "Jianhong Wu" ], "title": "Data clustering: theory, algorithms, and applications", "venue": null, "year": 2007 }, { "authors": [ "Vikas Garg", "Adam T Kalai" ], "title": "Supervising unsupervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Thomas Hofmann", "Bernhard Schölkopf", "Alexander J Smola" ], "title": "Kernel methods in machine learning", "venue": "The annals of statistics,", "year": 2008 }, { "authors": [ "Yibo Jiang", "Nakul Verma" 
], "title": "Meta-learning to cluster", "venue": "arXiv preprint arXiv:1910.14134,", "year": 2019 }, { "authors": [ "Han-Ul Kim", "Yeong Jun Koh", "Chang-Su Kim" ], "title": "Meta learning for unsupervised clustering", "venue": "In BMVC,", "year": 2019 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Weibo Liu", "Zidong Wang", "Xiaohui Liu", "Nianyin Zeng", "Yurong Liu", "Fuad E Alsaadi" ], "title": "A survey of deep neural network architectures and their applications", "venue": null, "year": 2017 }, { "authors": [ "Stuart Lloyd" ], "title": "Least squares quantization in pcm", "venue": "IEEE transactions on information theory,", "year": 1982 }, { "authors": [ "Chung-Horng Lung", "Marzia Zaman", "Amit Nandi" ], "title": "Applications of clustering techniques to software partitioning, recovery and restructuring", "venue": "Journal of Systems and Software,", "year": 2004 }, { "authors": [ "Erxue Min", "Xifeng Guo", "Qiang Liu", "Gen Zhang", "Jianjing Cui", "Jun Long" ], "title": "A survey of clustering with deep learning: From the perspective of network architecture", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan", "Yair Weiss" ], "title": "On spectral clustering: Analysis and an algorithm", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Joaquı́n Pérez Ortega", "Ma Del", "Roco Boone Rojas", "Mara J Somodevilla" ], "title": "Research issues on kmeans algorithm: An experimental trial using matlab", "venue": "In CEUR workshop proceedings: semantic web and new technologies,", "year": 2009 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. 
Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "P Prabhu", "N Anbazhagan" ], "title": "Improving the performance of k-means clustering for high dimensional data", "venue": "set. International journal on computer science and engineering,", "year": 2011 }, { "authors": [ "Peter J Rousseeuw" ], "title": "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis", "venue": "Journal of computational and applied mathematics,", "year": 1987 }, { "authors": [ "Lawrence K Saul", "Kilian Q Weinberger", "Fei Sha", "Jihun Ham", "Daniel D Lee" ], "title": "Spectral methods for dimensionality reduction", "venue": "Semi-supervised learning,", "year": 2006 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Deep learning in neural networks: An overview", "venue": "Neural networks,", "year": 2015 }, { "authors": [ "Sun Shibao", "Qin Keyun" ], "title": "Research on modified k-means data cluster algorithm", "venue": "Computer Engineering,", "year": 2007 }, { "authors": [ "Martijn Van Otterlo", "Marco Wiering" ], "title": "Reinforcement learning and markov decision processes", "venue": "In Reinforcement Learning,", "year": 2012 }, { "authors": [ "Joaquin Vanschoren" ], "title": "Meta-learning: A survey", "venue": "arXiv preprint arXiv:1810.03548,", "year": 2018 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and intelligent laboratory systems,", "year": 1987 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning", "venue": "algorithms. 
CoRR,", "year": 2017 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Dongkuan Xu", "Yingjie Tian" ], "title": "A comprehensive survey of clustering algorithms", "venue": "Annals of Data Science,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Clustering is an important task in machine learning, and it has a wide range of applications (Lung et al. (2004); Aminzadeh & Chatterjee (1984); Gan et al. (2007)). Clustering often consists of two steps: the feature extraction step and the clustering step. There have been numerous works on clustering (Xu & Tian (2015)), and among the proposed algorithms, K-Means (Bock (2007)) is renowned for its simplicity and performance. Despite its popularity, K-Means has several shortcomings discussed in (Ortega et al. (2009); Shibao & Keyun (2007)). In particular, with an increase in the dimensionality of the input data, K-Means’ performance decreases (Prabhu & Anbazhagan (2011)). This phenomenon is called the curse of dimensionality (Bellman (2015)). Dimensionality reduction and feature transformation methods have been used to minimize this effect. These methods map the original data into a new feature space, in which the new data-points are easier to be separated and clustered (Min et al. (2018)). Some examples of existing data transformation methods are: PCA (Wold et al. (1987)), kernel methods (Hofmann et al. (2008)) and spectral methods (Ng et al. (2002)). Although these methods are effective, a highly complex latent structure of data can still challenge them ( (Saul et al., 2006; Min et al., 2018)). Due to the recent enhancements in deep neural networks ( Liu et al. (2017)) and because of their inherent property of non-linear transformations, these architectures have the potential to replace classical dimensionality reduction methods.\nIn the research field of deep clustering, popularized by the seminal paper ”Unsupervised Deep Embedding for Clustering Analysis” (Xie et al. (2016)), deep neural networks are adopted as the feature extractor and are combined with a clustering algorithm to perform the clustering task. A unique loss function is defined which updates the model. 
Deep clustering methods typically take k, the number of clusters, as a hyper-parameter. In real-world scenarios, where datasets are not labeled, assigning a wrong value to this parameter can reduce the overall accuracy of the model. Meta-learning, a framework that allows a model to use information from its past tasks to learn a new task quickly or with little data, has been adopted by a handful of papers (Ferrari & de Castro (2012); Ferrari & De Castro (2015); Garg & Kalai (2018); Kim et al. (2019); Jiang & Verma (2019)) to improve the performance of clustering tasks. Closest to our work is the approach proposed by Garg & Kalai (2018), which tries to predict the number of clusters in K-Means clustering using meta-information.\nTo solve the same issue, we propose Meta-k, a gradient-based method for finding the optimal number of clusters and an attempt at a self-supervised approach to clustering. Our work is based on the observation that a network can take input points and learn parameters to predict the best number
We evaluate our method in multiple scenarios on different clustering tasks using synthetic and computer vision datasets, and we show that our approach can predict the number of clusters in most settings with an insignificant error.\nOur contributions are:\n• A novel self-supervised approach for predicting the number of clusters using policy gradient methods.\n• Extensive evaluation on synthetic scikit-learn ( Pedregosa et al. (2011)) datasets and wellknown vision datasets MNIST ( LeCun et al. (2010)) and Fashion-MNIST ( Xiao et al. (2017)).\n• Our results show that our approach is able to predict the number of clusters in most scenarios identical or very close to the real number of data clusters.\n• We plan to release the source code of this work upon its acceptance." }, { "heading": "2 RELATED WORK", "text": "There is a vast amount of unlabeled data in many scientific fields that can be used for training neural networks. Unsupervised learning makes use of these data, and clustering is one of the most important tasks in unsupervised learning. For unsupervised clustering, the classical K-means algorithm (Lloyd (1982)) has been used extensively due to its simplicity and effectiveness. If the data is distributed compactly around distinctive centroids, then the K-Means algorithm works well, but in real life, such scenarios are rare. Therefore, research has focused on transforming the data into a lowerdimensional space in which K-Means can perform successfully. If our data points are small images, then PCA (Wold et al. (1987)) is commonly used for this transformation. Other methods include non-linear transformation such as kernel methods (Hofmann et al. (2008)) and spectral methods (Ng et al. (2002)).\nDeep learning has also been used for unsupervised clustering specifically because it can process high dimensional data effectively by learning embedding spaces (Schmidhuber (2015)). 
Deep clustering includes two phases: the feature extraction phase and the clustering phase. Although the extracted features can be fed directly to standard clustering algorithms, deep learning models usually optimize further over specific clustering losses. Xie et al. (2016) use a loss based on the Student t-distribution that can accommodate soft clustering. They train a stacked denoising auto-encoder as their feature extraction architecture. Caron et al. (2018) propose an unsupervised training method that employs pseudo labels, generated by the K-Means algorithm applied to the output of a convolutional network, for the task of classification. Even though (Xie et al., 2016; Caron et al., 2018) achieve promising results, the number of clusters in K-Means and the classifier architecture remain hyper-parameters that need to be tuned.\nTo optimize the task of clustering, we can leverage the meta-learning framework. According to Vanschoren (2018), ”Meta-learning, or learning to learn, is the science of systematically observing how different machine learning approaches perform on a wide range of learning tasks, and then learning from this experience, or meta-data, to learn new tasks much faster than otherwise possible.” Meta-learning is closely related to one-shot or few-shot learning, and it has shown promising results in supervised learning tasks. When data is limited or quick weight optimization is a must, meta-learning can benefit standard classification as well as challenges in the area of Reinforcement Learning.\nTo our knowledge, there are only a few works that focus on using a meta-learning framework for unsupervised learning tasks. Ferrari & de Castro (2012); Ferrari & De Castro (2015) estimate which of the pre-existing clustering algorithms works well for a new clustering task. Their approach is limited to the algorithms that the user must provide, and the number of clusters in K-Means remains a hyper-parameter.
Garg & Kalai (2018) focus on the theoretical foundations of meta-clustering and use meta-attributes for learning the best number of clusters. The 399 datasets provided for training a binary similarity function are all labeled, which means that during training, the number of clusters for each dataset is known. Kim et al. (2019) propose a novel meta-learner called MLCNet that mimics numerous clustering tasks during training to learn an effective embedding space for new clustering tasks. In this regard, their work resembles metric-based meta-learning algorithms, and learning the best number of clusters is outside their focus. In Jiang & Verma (2019), a framework is introduced that finds the cluster structure directly without having to choose a specific cluster loss for each clustering problem. They propose a stacked LSTM (Hochreiter & Schmidhuber (1997)) as their architecture, and the number of clusters is a hyper-parameter of this network." }, { "heading": "3 META-K", "text": "Our proposed method, Meta-K, is an attempt to self-supervise the unsupervised clustering task by automating the process of finding the best number of clusters, k. We adopted the meta-learning framework and the policy gradient algorithm in Meta-K. Our pipeline has two phases: training and inference. During the training phase, given a high-dimensional input x, we train our feature extractor φ(x) using the reconstruction loss shown in Equation 1:\nL(x, x̂) = ‖x − x̂‖2 (1)\nWhen the auto-encoder is fully trained, a low-dimensional latent representation z is learned by the model.
Previous research has shown that the K-Means algorithm, in particular, performs better when the inputs are of lower dimensions; therefore, we use this representation as input to the clustering algorithm.\nThe extracted features from the encoder (z) or the low-dimensional input x′ are fed to both the K-Means algorithm (with a k value randomly sampled from a predefined range with a Gaussian distribution) and the controller network θ(x′), which has the job of predicting k. The controller network consists of multiple fully connected layers with a final layer of fixed output length that covers the range of the k values predicted by the controller. The length of this vector is denoted by n. We update the parameters of θ(x′) using the policy gradient method for the chosen k value. In a standard reinforcement learning setting, the gradient is weighted by 1 when a certain action is taken and by 0 for the other actions. In our pipeline, we use the value of the silhouette score s(C) for the gradients. In other words, the goal of the controller is to find the value of k which gives us the highest silhouette score. The silhouette score is a metric to evaluate the quality of clusters when we do not have access to the labels. Equation 2 shows the calculation of the silhouette score:\ns(C) = (1/|∪C|) ∑x∈∪C (b(x) − a(x)) / max{a(x), b(x)} (2)\nwhere b(x) is the mean distance from x to the points in the nearest neighboring cluster, and a(x) is the mean intra-cluster distance from x to all the points in its own cluster. C is the clustering output from K-Means and x is each data point belonging to an assigned cluster. The output of s(C) lies in the range [−1, 1], with 1 indicating a perfect clustering. During the inference phase, the inputs are fed to the controller. The number of clusters is predicted by the network, and K-Means clustering is performed using the predicted k value. Figure 1 shows the outline of our method. Over the next sections, we explain our approach in detail.
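As a concrete illustration of Equation 2, the silhouette score can be computed directly with NumPy. This is a sketch for exposition only (the toy points and labels below are not from the paper; in practice a library routine such as `sklearn.metrics.silhouette_score` would be used):

```python
import numpy as np

# Silhouette score from Equation 2:
# s(C) = (1/|∪C|) Σ_x (b(x) − a(x)) / max{a(x), b(x)},
# where a(x) is the mean intra-cluster distance of x and b(x) is the mean
# distance from x to the points of the nearest other cluster.
def silhouette(X, labels):
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        if not same.any():               # singleton cluster: score 0 by convention
            scores.append(0.0)
            continue
        a = dist[i, same].mean()         # mean intra-cluster distance
        b = min(dist[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two tight, well-separated clusters score close to 1;
# deliberately wrong labels score below 0.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
good = silhouette(X, [0, 0, 1, 1])   # ≈ 0.90
bad = silhouette(X, [0, 1, 0, 1])    # < 0
```

This is the quantity the controller receives as its learning signal: the closer the clustering is to compact, well-separated groups, the larger s(C).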
}, { "heading": "3.1 POLICY GRADIENT OPTIMIZATION", "text": "To map our model to the reinforcement learning (RL) framework, we can look at the batch of inputs at iteration t as observations Ot, the controller as the policy network π(at), and its output as the distribution over actions at ∈ A. We need to randomly sample an action at from the action space A and perform that action on the environment. Here we use the sampled value as the input k for the K-Means algorithm. K-Means gets two inputs, k and the batch of input features x′. The output is the clusters (C) created by this algorithm. By calculating the silhouette score s(C), we compute our reward signal rt ∈ R(a). In our problem, we think of the silhouette score as the reward function, as it needs to be maximized. If our controller has learned the optimal number of clusters, then the K-Means algorithm builds clusters that are well separated and cohesive. Therefore, the silhouette score is maximized. The controller (policy network) has the goal of maximizing the cumulative discounted reward Rt = ∑∞k=0 γ^k rt+k with the discount factor γ ∈ (0, 1] in one or more trajectories. In RL terminology, the cumulative reward is also called the return. In our setting, one trajectory can be defined as one epoch. For example, if we have a batch size of 100 and a dataset of size 1000, then our trajectory has 10 steps. The cumulative reward is the sum of silhouette scores computed for each batch. Our problem can be defined as a multi-armed bandit (MAB) problem, which is a simplified version of a Markov decision process (Van Otterlo & Wiering (2012)). In MAB settings, there are no states, only actions and rewards. We keep taking actions (clustering with K-Means on the latent images) and receive rewards (silhouette scores), but the state of the environment does not change.\nTo train the policy network, we take advantage of a simple form of the policy gradient algorithm. Our policy is stochastic and we have no environment transitions.
Also, we will take our return as a finite-horizon undiscounted return. Under these assumptions, we can compute the gradient in Equation 11.\nWhen evaluating the performance of the controller, we need to consider the value of the return. If it increases, then our model is learning.\nEnvironment transitions and the policy can be stochastic or deterministic. Assuming both are stochastic, we can formulate the probability of a trajectory τ that has T steps, given actions at and policy network π, as:\nP(τ|π) = ∏T−1t=0 P(at) π(at) (3)\nThen the expected return, denoted J(π), is:\nJ(π) = ∫τ P(τ|π) R(τ) dτ = Eτ∼π[R(τ)] (4)\nSince we consider R(τ) = s(C), Equation 4 can be written as:\nJ(π) = ∫τ P(τ|π) s(C) dτ = Eτ∼π[s(C)] (5)\nAnd the optimization problem in RL is:\nπ∗ = argmaxπ J(π) (6)\nwhere π∗ is the optimal policy.\nRL algorithms can be categorized based on what they are learning (the policy, the value function, the Q-function, or the environment model) and whether or not the agent has access to a model of the environment (access → model-based, no access → model-free). In the model-free group of algorithms, there is a major sub-group called policy optimization. Policy optimization methods rely directly upon optimizing parameterized policies with respect to the expected return (long-term cumulative reward) by gradient ascent.\nTo show how these algorithms update the policy network directly, we assume that our policy is a stochastic, parameterized policy denoted by πθ and our return R(τ) is a finite-horizon undiscounted return. Our goal is to maximize the expected return J(πθ) = Eτ∼πθ[R(τ)]. Therefore we must optimize the policy by gradient ascent (Equation 7):\nθk+1 = θk + α∇θJ(πθ)|θk . (7)\nAlgorithms that optimize the policy this way are called ”Policy Gradient Algorithms”, and the Policy Gradient Theorem is the theoretical foundation of these algorithms. This theorem makes it possible for the gradient of the policy performance ∇θJ(πθ) to be numerically computed.
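To make the gradient-ascent scheme above concrete, the following NumPy sketch runs a REINFORCE-style update in the bandit setting described earlier: actions are candidate k values and the reward stands in for the silhouette score. The reward function below is a hypothetical stand-in that peaks at k = 4 (a real run would cluster a batch with K-Means and use s(C) as the reward), and the step count, learning rate, and baseline are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reward(k):
    # Hypothetical stand-in for the silhouette score s(C); peaks at k = 4.
    return 1.0 - 0.2 * abs(k - 4)

n_acts = 8                 # actions correspond to k in {2, ..., 9}
theta = np.zeros(n_acts)   # policy logits
alpha, baseline = 0.1, 0.0
for _ in range(3000):
    probs = softmax(theta)
    a = rng.choice(n_acts, p=probs)        # sample a_t ~ pi_theta
    r = reward(a + 2)
    grad_log_pi = -probs                   # grad_theta log pi_theta(a) for softmax logits
    grad_log_pi[a] += 1.0
    theta += alpha * (r - baseline) * grad_log_pi  # gradient-ascent step
    baseline += 0.01 * (r - baseline)      # running-average baseline (variance reduction)

probs = softmax(theta)  # the learned policy now favors high-reward k values
```

The running-average baseline is an optional variance-reduction device; the core update is exactly the score-function gradient weighted by the reward, as in the paper's controller update.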
To derive the expectation, we need the formula for the probability of a trajectory and a trick from calculus called the log-derivative trick. We have already shown the formula for the probability of a trajectory in Equation 3.\nThe log-derivative trick tells us that the derivative of log x with respect to x is 1/x. In our case we can use it as follows:\n∇θP(τ|θ) = P(τ|θ) ∇θ logP(τ|θ) (8)\nWith these two formulas in mind, we can write logP(τ|θ) as:\nlogP(τ|θ) = ∑Tt=0 (logP(at) + log πθ(at)) (9)\nWe take the gradient of the above formula, i.e., ∇θ logP(τ|θ). Considering that the environment has no dependence on θ, the gradients of logP(at) are zero. Therefore, Equation 9 becomes:\n∇θ logP(τ|θ) = ∑Tt=0 ∇θ log πθ(at) (10)\nWhat we get at the end is an expectation that can be estimated by sampling. Samples are acquired by letting the agent act in the environment, according to the policy πθ, until it reaches the terminal state, repeating this over and over and collecting a set of samples (episodes/trajectories). The gradient estimate ĝ is then:\nĝ = (1/|D|) ∑τ∈D ∑Tt=0 ∇θ log πθ(at) s(C) (11)\nwhere |D| is the number of samples (trajectories/episodes). Using this expression we can compute the policy gradient and update the network. This expression is the simplest version of ∇θJ(πθ)." }, { "heading": "4 EXPERIMENTS", "text": "To evaluate our method, we generate numerous datasets using the scikit-learn package with different numbers of classes ranging from 5 to 50, different numbers of samples, and different feature counts. We denote the number of features by d and the length of the output vector of the controller by n_acts. This means that our network predicts k values in the range (2, n_acts). In the experiments of subsection 4.2, n_acts (the length of the actions vector) is 50 in all experiments. We investigated a variation of our method, where we backpropagate the gradients learned by the controller to the encoder network.
However, this did not give us any improvement. We aim to investigate this problem further." }, { "heading": "4.1 IMPLEMENTATION DETAILS", "text": "The different proposed architectures for our controller are shown in Table 1. In all experiments we use the Adam optimizer for the controller with learning rate α = 0.01, β1 = 0.9, β2 = 0.999. For the scikit-learn experiments, we train the model for 15 epochs, with d ∈ {10, 20, 30} and n_acts = 50. The generated dataset is randomly split into two parts of 0.9 and 0.1 for the train and test splits, respectively. For the experiments on MNIST and FMNIST (Fashion-MNIST), we train the controller for 100 epochs on the training set with n_acts = 20 and d = 10. The auto-encoder architecture used for the feature extraction is similar to the one from Xie et al. (2016) (d – 500fc – 500fc – 2000fc – 10), with d being the input dimension and 10 the size of the latent dimension z. The auto-encoder network is trained with the Adam optimizer and a learning rate of 1e−2." }, { "heading": "4.2 ABLATION STUDY", "text": "In this section, we compute the error of each experiment by subtracting the predicted value of k from the ground-truth value k′ and normalizing it by k′ (E = |k − k′| / k′). Then we calculate the average of all of these error values across the classes. Table 2 shows the ablation study of our method using different settings and the mentioned error metric. The full results of this table are available in the Appendix. We increase the number of samples and the number of features in the dataset, and we see that with an increase in both of these parameters the model’s performance increases. The lowest error value is achieved with the MLP2 model, 10,000 samples of data, and 30 features. In all of the experiments of this section, n_acts is equal to 50.
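The normalized error metric E = |k − k′| / k′ averaged over classes can be sketched as follows (the example predictions and ground truths are illustrative, not results from the paper):

```python
import numpy as np

def mean_normalized_error(k_pred, k_true):
    # E = |k − k'| / k', averaged over the evaluated datasets/class counts.
    k_pred = np.asarray(k_pred, dtype=float)
    k_true = np.asarray(k_true, dtype=float)
    return float(np.mean(np.abs(k_pred - k_true) / k_true))

# e.g. predictions for datasets with 5, 10, and 20 true clusters:
err = mean_normalized_error([5, 9, 22], [5, 10, 20])  # (0 + 0.1 + 0.1) / 3 ≈ 0.067
```

Normalizing by k′ makes errors comparable across datasets with very different numbers of classes: missing by 2 clusters is a large error at k′ = 5 but a small one at k′ = 50.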
}, { "heading": "4.3 COMPARISON WITH PREVIOUS WORKS", "text": "To compare our approach with previous work, we evaluate our method, the classical silhouette score baseline, and Meta-Unsupervised-Learning from Garg & Kalai (2018) on the same synthetic dataset. For the silhouette score baseline, we calculate the silhouette score for 20 k values ranging from 2 to 21 with a step size of 1 and find the k which gives us the highest silhouette score. For the Meta-Unsupervised-Learning method, we train a binary classification model (multivariate linear regression with lasso) in a setting similar to the one mentioned in the original paper (instead of 339 training datasets, we train the model on 75 classification datasets downloaded from OpenML 1) and evaluate it on the scikit-learn datasets. The datasets downloaded from the OpenML website have at most 10,000 samples, 500 features, and 10 classes, and no missing data. We apply data cleaning on each of them to make sure they include no NaN and no -inf values. The Meta-Unsupervised-Learning method learns a mapping from the silhouette score to the average Rand index and tries to predict the k that would maximize the average Rand index. Based on the experiments in the ablation study, we chose the model with d = 30, n_acts = 20, and the MLP2 controller. We show in Table 3 that our method is able to predict k almost perfectly. Although the baseline approach achieves similar or better results, it is not feasible to use this method when the range of possible k’s is broad.\nAs can be seen in Table 4, Meta-K achieves higher performance on datasets with higher dimensionality.\nWe show different clustering evaluation metrics in Figure 2.
It can be seen that in the MNIST and FMNIST experiments, the metrics (silhouette score, normalized mutual information, and Rand index) reach their maxima at k = 9 and k = 7, 12, respectively, which is sub-optimal.\n1http://www.openml.org" }, { "heading": "4.4 POLICY GRADIENT OPTIMIZATION", "text": "Figure 3 shows the return function of the controller training on the MNIST and FMNIST datasets. The return plot is an indication of how well the model is trained using policy gradient optimization." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a self-supervised approach to find the optimal number of clusters for any dataset in the K-Means algorithm. We showed in our experiments that our method is able to predict the number of clusters effectively without any direct supervision. Even though our method is able to predict the number of clusters in most scenarios with distinctive features, it would be challenging if the input features are compact and not separable. Another limitation of our work is that it depends on the silhouette score, and the method would perform poorly if the silhouette score does not provide a good clustering evaluation. However, our method is not limited to the silhouette score, and any other clustering evaluation metric can be used as well. Another point is that our method depends on the feature extraction network; if the encoder network is not trained properly, the controller training would also be challenging. We plan to improve our approach by using metrics other than the silhouette score, adaptive layers, and Gaussian Mixture Models (GMMs) to have an end-to-end pipeline for fully automated clustering." }, { "heading": "A APPENDIX", "text": "Tables 5–10 show the output of our method in different configurations.\n\nAs expected, by increasing the size of the datasets, our model’s predictions became more accurate. However, even with a dataset of size 1000, our model’s performance was acceptable when classes were of size 5 to 35.
A change in the number of features (space dimensionality) did not make a visible difference here." } ]
2020
META-K: TOWARDS SELF-SUPERVISED PREDICTION
SP:0783e842aa0246f8c1726f19d3f36e3abe6b3654
[ "This paper presents a new contrastive representation objective that has good training stability, minibatch size sensitivity, and downstream task performance. This objective is a generalization of the Chi-square divergence, the optimal solution is the density ratio of the joint distribution and the product of marginal distributions, and the estimation is consistent with variance going to 0 as the sample size goes to infinity, so the paper is theoretically sound. The authors conduct comprehensive experiments to show that training based on this objective is stable, not sensitive to batch size, and leads to good downstream task performance in vision and in phoneme and speaker classification. ", "This paper proposes a new objective for self-supervised contrastive learning. In the general framework proposed by Tsai et al. (2020b), the proposed method boils down to using a divergence related to the $\chi^2$-divergence. Compared to other objectives for contrastive learning, the authors illustrate the advantages of the proposed one in training stability (or ease of training), sensitivity to batch size, and downstream task performance. However, introducing three new hyperparameters is a cause for concern since they make it more difficult to select optimal hyperparameters. Also, some important details of the experiments are missing. For example, how many runs were used to obtain the results shown in Tables 2 & 3? What is the confidence interval on the results? Was any test performed to establish statistical significance? What are the settings for supervised training? When the authors compare the results among different methods, did they select the optimal hyperparameters (e.g., learning rate) separately for each method?" ]
This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces the relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance 1.
[ { "affiliations": [], "name": "Yao-Hung Hubert Tsai" }, { "affiliations": [], "name": "Martin Q. Ma" }, { "affiliations": [], "name": "Muqiao Yang" }, { "affiliations": [], "name": "Han Zhao" }, { "affiliations": [], "name": "Louis-Philippe Morency" }, { "affiliations": [], "name": "Ruslan Salakhutdinov" } ]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI}", "year": 2016 }, { "authors": [ "Martin Anthony", "Peter L Bartlett" ], "title": "Neural network learning: Theoretical foundations", "venue": "cambridge university press,", "year": 2009 }, { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj Saunshi" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 1902 }, { "authors": [ "Alexei Baevski", "Henry Zhou", "Abdelrahman Mohamed", "Michael Auli" ], "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "venue": "arXiv preprint arXiv:2006.11477,", "year": 2020 }, { "authors": [ "Peter L Bartlett" ], "title": "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network", "venue": "IEEE transactions on Information Theory,", "year": 1998 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "R Devon Hjelm" ], "title": "Mine: mutual information neural estimation", "venue": "arXiv preprint arXiv:1801.04062,", "year": 2018 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun", "Prafulla Dhariwal", "David Luan", "Ilya Sutskever" ], "title": "Generative pretraining from pixels", "venue": 
"In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big selfsupervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Ching-Yao Chuang", "Joshua Robinson", "Lin Yen-Chen", "Antonio Torralba", "Stefanie Jegelka" ], "title": "Debiased contrastive learning", "venue": "arXiv preprint arXiv:2007.00224,", "year": 2020 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of information theory", "venue": null, "year": 2012 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Monroe D Donsker", "SR Srinivasa Varadhan" ], "title": "Asymptotic evaluation of certain markov process expectations for large time", "venue": "i. 
Communications on Pure and Applied Mathematics,", "year": 1975 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "K Hornik", "M Stinchcombe", "H White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron 
Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Thomas Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive learning of structured world models", "venue": "arXiv preprint arXiv:1911.12247,", "year": 2019 }, { "authors": [ "Lingpeng Kong", "Cyprien de Masson d’Autume", "Wang Ling", "Lei Yu", "Zihang Dai", "Dani Yogatama" ], "title": "A mutual information maximization perspective of language representation learning", "venue": null, "year": 1910 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Lillian Lee" ], "title": "Measures of distributional similarity", "venue": "In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics,", "year": 1999 }, { "authors": [ "Lillian Lee" ], "title": "On the effectiveness of the skew divergence for statistical language analysis", "venue": "In AISTATS. 
Citeseer,", "year": 2001 }, { "authors": [ "Xiang Li", "Wenhai Wang", "Xiaolin Hu", "Jian Yang" ], "title": "Selective kernel networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Xiao Liu", "Fanjin Zhang", "Zhenyu Hou", "Zhaoyu Wang", "Li Mian", "Jing Zhang", "Jie Tang" ], "title": "Selfsupervised learning: Generative or contrastive", "venue": null, "year": 2006 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Sindy Löwe", "Peter O’Connor", "Bastiaan Veeling" ], "title": "Putting an end to end-to-end: Gradient-isolated learning of representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Ishan Misra", "C Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: unsupervised learning using temporal order verification", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "XuanLong Nguyen", "Martin J Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Frank Nielsen" ], "title": "A family of statistical symmetric divergences based on jensen’s inequality", "venue": "arXiv preprint arXiv:1009.4004,", "year": 2010 }, { "authors": [ "Frank Nielsen", "Richard Nock" ], "title": "On the chi square and higher-order chi distances for approximating f-divergences", "venue": "IEEE Signal Processing Letters,", "year": 
2013 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron Van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vassil Panayotov", "Guoguo Chen", "Daniel Povey", "Sanjeev Khudanpur" ], "title": "Librispeech: an asr corpus based on public domain audio books", "venue": "In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alexander A Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "Morgane Rivière", "Armand Joulin", "Pierre-Emmanuel Mazaré", "Emmanuel Dupoux" ], "title": "Unsupervised pretraining transfers well across languages", "venue": "IEEE International Conference on Acoustics, Speech and 
Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Jiaming Song", "Stefano Ermon" ], "title": "Understanding the limitations of variational mutual information estimators", "venue": "arXiv preprint arXiv:1910.06222,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Christopher Tosh", "Akshay Krishnamurthy", "Daniel Hsu" ], "title": "Contrastive learning, multi-view redundancy, and linear models", "venue": "arXiv preprint arXiv:2008.10150,", "year": 2020 }, { "authors": [ "Yao-Hung Hubert Tsai", "Yue Wu", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ], "title": "Demystifying self-supervised learning: An information-theoretical framework", "venue": "arXiv preprint arXiv:2006.05576,", "year": 2020 }, { "authors": [ "Yao-Hung Hubert Tsai", "Han Zhao", "Makoto Yamada", "Louis-Philippe Morency", "Ruslan Salakhutdinov" ], "title": "Neural methods for point-wise dependency estimation", "venue": "arXiv preprint arXiv:2006.05553,", "year": 2020 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Aad W Van der Vaart" ], "title": "Asymptotic statistics, volume 3", "venue": "Cambridge 
university press,", "year": 2000 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Petar Velickovic", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "In ICLR (Poster),", "year": 2019 }, { "authors": [ "Makoto Yamada", "Taiji Suzuki", "Takafumi Kanamori", "Hirotaka Hachiya", "Masashi Sugiyama" ], "title": "Relative density-ratio estimation for robust distribution comparison", "venue": "Neural computation,", "year": 2013 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Unlike Chen" ], "title": "2020c), we do not use a memory buffer, and train the model for only 100 epochs", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020c), we use a batch size of 4, 096, and we do not use", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020c). For relative parameters, we use α = 1.0, β = 0.005, and γ = 1.0", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "Minibatch Size Experimental Setup We perform experiments on the effect of batch size on downstream performances for different objective. The experiments are performed using SimCLRv2 (Chen et al., 2020c) on CIFAR-10 dataset, as well as the model from Rivière et al. (2020) on LibriSpeech-100h dataset (Panayotov et al., 2015). For vision task, we use the default temperature", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unsupervised learning has drawn tremendous attention recently because it can extract rich representations without label supervision. Self-supervised learning, a subset of unsupervised learning, learns representations by allowing the data to provide supervision (Devlin et al., 2018). Among its mainstream strategies, self-supervised contrastive learning has been successful in visual object recognition (He et al., 2020; Tian et al., 2019; Chen et al., 2020c), speech recognition (Oord et al., 2018; Rivière et al., 2020), language modeling (Kong et al., 2019), graph representation learning (Velickovic et al., 2019) and reinforcement learning (Kipf et al., 2019). The idea of self-supervised contrastive learning is to learn latent representations such that related instances (e.g., patches from the same image; defined as positive pairs) will have representations within close distance, while unrelated instances (e.g., patches from two different images; defined as negative pairs) will have distant representations (Arora et al., 2019).\nPrior work has formulated the contrastive learning objectives as maximizing the divergence between the distribution of related and unrelated instances. In this regard, different divergence measurement often leads to different loss function design. For example, variational mutual information (MI) estimation (Poole et al., 2019) inspires Contrastive Predictive Coding (CPC) (Oord et al., 2018). Note that MI is also the KL-divergence between the distributions of related and unrelated instances (Cover & Thomas, 2012). While the choices of the contrastive learning objectives are abundant (Hjelm et al., 2018; Poole et al., 2019; Ozair et al., 2019), we point out that there are three challenges faced by existing methods.\nThe first challenge is the training stability, where an unstable training process with high variance may be problematic. For example, Hjelm et al. (2018); Tschannen et al. (2019); Tsai et al. 
(2020b) show that the contrastive objectives with large variance cause numerical issues and have a poor downstream performance with their learned representations. The second challenge is the sensitivity to minibatch size, where the objectives requiring a huge minibatch size may restrict their practical usage. For instance, SimCLRv2 (Chen et al., 2020c) utilizes CPC as its contrastive objective and reaches state-of-the-art performances on multiple self-supervised and semi-supervised benchmarks. Nonetheless, the objective is trained with a minibatch size of 8, 192, and this scale of training requires enormous computational power. The third challenge is the downstream task performance, which is the one that we would like to emphasize the most. For this reason, in most cases, CPC\n1Project page: https://github.com/martinmamql/relative_predictive_coding\nis the objective that we would adopt for contrastive representation learning, due to its favorable performance in downstream tasks (Tschannen et al., 2019; Baevski et al., 2020).\nThis paper presents a new contrastive representation learning objective: the Relative Predictive Coding (RPC), which attempts to achieve a good balance among these three challenges: training stability, sensitivity to minibatch size, and downstream task performance. At the core of RPC is the relative parameters, which are used to regularize RPC for its boundedness and low variance. From a modeling perspective, the relative parameters act as a `2 regularization for RPC. From a statistical perspective, the relative parameters prevent RPC from growing to extreme values, as well as upper bound its variance. 
In addition to the relative parameters, RPC contains no logarithm and exponential, which are the main cause of the training instability for prior contrastive learning objectives (Song & Ermon, 2019).\nTo empirically verify the effectiveness of RPC, we consider benchmark self-supervised representation learning tasks, including visual object classification on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015) and speech recognition on LibriSpeech (Panayotov et al., 2015). Comparing RPC to prior contrastive learning objectives, we observe a lower variance during training, a lower minibatch size sensitivity, and consistent performance improvement. Lastly, we also relate RPC with MI estimation, empirically showing that RPC can estimate MI with low variance." }, { "heading": "2 PROPOSED METHOD", "text": "This paper presents a new contrastive representation learning objective - the Relative Predictive Coding (RPC). At a high level, RPC 1) introduces the relative parameters to regularize the objective for boundedness and low variance; and 2) achieves a good balance among the three challenges in the contrastive representation learning objectives: training stability, sensitivity to minibatch size, and downstream task performance. We begin by describing prior contrastive objectives along with their limitations on the three challenges in Section 2.1. Then, we detail our presented objective and its modeling benefits in Section 2.2. An overview of different contrastive learning objectives is provided in Table 1. We defer all the proofs in Appendix.\nNotation We use an uppercase letter to denote a random variable (e.g., X), a lower case letter to denote the outcome of this random variable (e.g., x), and a calligraphy letter to denote the sample space of this random variable (e.g., X ). Next, if the samples (x, y) are related (or positively-paired), we refer (x, y) ∼ PXY with PXY being the joint distribution of X × Y . 
If the samples (x, y) are unrelated (negatively-paired), we refer $(x, y) \sim P_XP_Y$ with $P_XP_Y$ being the product of marginal distributions over $\mathcal{X} \times \mathcal{Y}$. Last, we define $f \in \mathcal{F}$ for $\mathcal{F}$ being any class of functions $f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$." }, { "heading": "2.1 PRELIMINARY", "text": "Contrastive representation learning encourages the contrastiveness between the positive and the negative pairs of the representations from the related data X and Y. Specifically, when sampling a pair of representations (x, y) from their joint distribution ($(x, y) \sim P_{XY}$), this pair is defined as a positive pair; when sampling from the product of marginals ($(x, y) \sim P_XP_Y$), this pair is defined as a negative pair. Then, Tsai et al. (2020b) formalizes this idea such that the contrastiveness of the representations can be measured by the divergence between $P_{XY}$ and $P_XP_Y$, where higher divergence suggests better contrastiveness. To better understand prior contrastive learning objectives, we categorize them in terms of different divergence measurements between $P_{XY}$ and $P_XP_Y$, with their detailed objectives presented in Table 1.
We instantiate the discussion using Contrastive Predictive Coding (Oord et al., 2018, JCPC), which is a lower bound of $D_{\mathrm{KL}}(P_{XY} \| P_XP_Y)$ with $D_{\mathrm{KL}}$ referring to the KL-divergence:
$$J_{\mathrm{CPC}}(X,Y) := \sup_{f \in \mathcal{F}} \; \mathbb{E}_{(x,y_1)\sim P_{XY},\,\{y_j\}_{j=2}^{N}\sim P_Y}\left[\log \frac{e^{f(x,y_1)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x,y_j)}}\right]. \quad (1)$$
Then, Oord et al. (2018) proposes to maximize $J_{\mathrm{CPC}}(X,Y)$, so that the learned representations X and Y have high contrastiveness. We note that JCPC has been commonly used in many recent self-supervised representation learning frameworks (He et al., 2020; Chen et al., 2020b), where they constrain the function to be f(x, y) = cosine(x, y) with cosine(·) being cosine similarity.
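As an illustrative numerical sketch (our own, not the authors' implementation) of the empirical JCPC estimator in equation 1, using the cosine-similarity critic described above; the synthetic batch and batch size N = 8 are assumptions for illustration:

```python
import numpy as np

def j_cpc(scores):
    """Empirical J_CPC for one minibatch.

    scores[i, j] = f(x_i, y_j); the diagonal holds the positive pairs
    (x, y_1) ~ P_XY, and off-diagonal entries act as negative samples."""
    n = scores.shape[0]
    # log( e^{f(x_i, y_i)} / ((1/N) * sum_j e^{f(x_i, y_j)}) )
    log_ratios = scores.diagonal() - (np.log(np.exp(scores).sum(axis=1)) - np.log(n))
    return log_ratios.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
y = x + 0.1 * rng.normal(size=(8, 16))       # related views: small perturbations of x
x /= np.linalg.norm(x, axis=1, keepdims=True)
y /= np.linalg.norm(y, axis=1, keepdims=True)
scores = x @ y.T                              # cosine similarity (rows are unit norm)
est = j_cpc(scores)
assert est <= np.log(8)  # the estimator is upper-bounded by log(batch size)
```

The log(batch size) ceiling visible in this sketch is exactly why JCPC is sensitive to the minibatch size N, as discussed later in this section.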
Under this function design, maximizing JCPC leads the representations of related pairs to be close and representations of unrelated pairs to be distant.\nThe category of modeling DKL(PXY ‖PXPY ) also includes the Donsker-Varadhan objective (JDV (Donsker & Varadhan, 1975; Belghazi et al., 2018)) and the Nguyen-Wainright-Jordan objective (JNWJ (Nguyen et al., 2010; Belghazi et al., 2018)), where Belghazi et al. (2018); Tsai et al. (2020b) show that JDV(X,Y ) = JNWJ(X,Y ) = DKL(PXY ‖PXPY ). The other divergence measurements considered in prior work are DJS(PXY ‖PXPY ) (with DJS referring to the Jenson-Shannon divergence) and DWass(PXY ‖PXPY ) (with DWass referring to the Wassersteindivergence). The instance of modeling DJS(PXY ‖PXPY ) is the Jensen-Shannon f-GAN objective( JJS (Nowozin et al., 2016; Hjelm et al., 2018) ) , where JJS(X,Y ) = 2 ( DJS(PXY ‖PXPY ) −\nlog 2 ) .2 The instance of modeling DWass(PXY ‖PXPY ) is the Wasserstein Predictive Coding(\nJWPC (Ozair et al., 2019) ) , where JWPC(X,Y ) modifies JCPC(X,Y ) objective (equation 1) by searching the function from F to FL. FL denotes any class of 1-Lipschitz continuous functions from (X × Y) to R, and thus FL ⊂ F . Ozair et al. (2019) shows that JWPC(X,Y ) is the lower bound of bothDKL(PXY ‖PXPY ) andDWass(PXY ‖PXPY ). See Table 1 for all the equations. To conclude, the contrastive representation learning objectives are unsupervised representation learning methods that maximize the distribution divergence between PXY and PXPY . The learned representations cause high contrastiveness, and recent work (Arora et al., 2019; Tsai et al., 2020a) theoretically show that highly-contrastive representations could improve the performance on downstream tasks.\nAfter discussing prior contrastive representation learning objectives, we point out three challenges in their practical deployments: training stability, sensitivity to minibatch training size, and downstream task performance. 
In particular, the three challenges can hardly be handled well at the same time, where we highlight the conclusions in Table 1. Training Stability: The training stability highly relates to the variance of the objectives, where Song & Ermon (2019) show that JDV and JNWJ exhibit inevitable high variance due to their inclusion of the exponential function. As pointed out by Tsai et al. (2020b), JCPC, JWPC, and JJS have better training stability because JCPC and JWPC can be realized as a multi-class classification task and JJS can be realized as a binary classification task. The cross-entropy loss adopted in JCPC, JWPC, and JJS is highly optimized and stable in existing optimization packages (Abadi et al., 2016; Paszke et al., 2019). Sensitivity to minibatch training size: Among all the prior contrastive representation learning methods, JCPC is known to be sensitive to the minibatch training size (Ozair et al., 2019). Taking a closer look at equation 1, JCPC deploys an instance selection such that $y_1$ should be selected from $\{y_1, y_2, \cdots, y_N\}$, with $(x, y_1) \sim P_{XY}$ and $(x, y_{j>1}) \sim P_XP_Y$, where $N$ is the minibatch size. Previous work (Poole et al., 2019; Song & Ermon, 2019; Chen et al., 2020b; Caron et al., 2020) showed that a large $N$ results in a more challenging instance selection and forces JCPC to have a better contrastiveness of $y_1$ (the related instance for $x$) against $\{y_j\}_{j=2}^{N}$ (the unrelated instances for $x$). JDV, JNWJ, and JJS do not consider the instance selection, and JWPC reduces the minibatch training size sensitivity by enforcing a 1-Lipschitz constraint. Downstream Task Performance: The downstream task performance is what we care about the most among all the three challenges.
2 $J_{\mathrm{JS}}(X,Y)$ achieves its supremum when $f^*(x, y) = \log\big(p(x, y)/p(x)p(y)\big)$ (Tsai et al., 2020b). Plugging $f^*(x, y)$ into $J_{\mathrm{JS}}(X,Y)$, we can conclude $J_{\mathrm{JS}}(X,Y) = 2\big(D_{\mathrm{JS}}(P_{XY} \| P_XP_Y) - \log 2\big)$.
JCPC has been the most popular objective as it manifests superior performance over the other alternatives (Tschannen et al., 2019; Tsai et al., 2020b;a). We note that although JWPC shows better performance on the Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, we empirically find it not generalizing well to CIFAR-10/100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015)." }, { "heading": "2.2 RELATIVE PREDICTIVE CODING", "text": "In this paper, we present Relative Predictive Coding (RPC), which achieves a good balance among the three challenges mentioned above:
$$J_{\mathrm{RPC}}(X,Y) := \sup_{f \in \mathcal{F}} \; \mathbb{E}_{P_{XY}}[f(x,y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f(x,y)] - \frac{\beta}{2}\,\mathbb{E}_{P_{XY}}\big[f^2(x,y)\big] - \frac{\gamma}{2}\,\mathbb{E}_{P_XP_Y}\big[f^2(x,y)\big], \quad (2)$$
where α > 0, β > 0, γ > 0 are hyper-parameters that we define as relative parameters. Intuitively, JRPC contains no logarithm or exponential, potentially preventing unstable training due to numerical issues. Now, we discuss the roles of α, β, γ. At first glance, α acts to discourage the scores of $P_{XY}$ and $P_XP_Y$ from being close, and β/γ act as $\ell_2$ regularization coefficients that stop f from becoming large. For a deeper analysis, the relative parameters act to regularize our objective for boundedness and low variance. To show this claim, we first present the following lemma:
Lemma 1 (Optimal Solution for JRPC) Let $r(x, y) = \frac{p(x,y)}{p(x)p(y)}$ be the density ratio. JRPC has the optimal solution
$$f^*(x, y) = \frac{r(x,y) - \alpha}{\beta\, r(x,y) + \gamma} := r_{\alpha,\beta,\gamma}(x, y), \quad \text{with } -\frac{\alpha}{\gamma} \le r_{\alpha,\beta,\gamma} \le \frac{1}{\beta}.$$
Lemma 1 suggests that JRPC achieves its supremum at the ratio $r_{\alpha,\beta,\gamma}(x, y)$ indexed by the relative parameters α, β, γ (i.e., we term $r_{\alpha,\beta,\gamma}(x, y)$ the relative density ratio). We note that $r_{\alpha,\beta,\gamma}(x, y)$ is an increasing function of $r(x, y)$ and is nicely bounded even when $r(x, y)$ is large. We will now show that the bounded $r_{\alpha,\beta,\gamma}$ implies that the empirical estimation of JRPC enjoys boundedness and low variance.
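As a quick numerical sanity check of Lemma 1 (our own sketch; the parameter values α = 1, β = 0.005, γ = 1 are illustrative), the relative density ratio is monotone in r, obeys the stated bounds, and maximizes the pointwise objective:

```python
import numpy as np

def relative_density_ratio(r, alpha, beta, gamma):
    # Optimal solution of J_RPC from Lemma 1: f*(x, y) = (r - alpha) / (beta r + gamma)
    return (r - alpha) / (beta * r + gamma)

alpha, beta, gamma = 1.0, 0.005, 1.0
r = np.logspace(-6, 6, 1000)                     # density ratio over many orders of magnitude
f_star = relative_density_ratio(r, alpha, beta, gamma)
assert np.all(f_star >= -alpha / gamma - 1e-9)   # lower bound -alpha/gamma
assert np.all(f_star <= 1.0 / beta + 1e-9)       # upper bound 1/beta
assert np.all(np.diff(f_star) > 0)               # increasing in r

# pointwise optimality: for a fixed ratio r0, f* maximizes
# (r0 - alpha) f - (beta r0 + gamma) f^2 / 2 (the integrand of J_RPC w.r.t. P_X P_Y)
r0 = 3.0
f_grid = np.linspace(-5.0, 5.0, 10001)
objective = (r0 - alpha) * f_grid - 0.5 * (beta * r0 + gamma) * f_grid ** 2
assert abs(f_grid[np.argmax(objective)] - relative_density_ratio(r0, alpha, beta, gamma)) < 1e-3
```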
In particular, let $\{x_i, y_i\}_{i=1}^{n}$ be $n$ samples drawn uniformly at random from $P_{XY}$ and $\{x'_j, y'_j\}_{j=1}^{m}$ be $m$ samples drawn uniformly at random from $P_XP_Y$. Then, we use neural networks to empirically estimate JRPC as $\hat{J}^{m,n}_{\mathrm{RPC}}$:
Definition 1 ($\hat{J}^{m,n}_{\mathrm{RPC}}$, empirical estimation of JRPC) We parametrize f via a family of neural networks $\mathcal{F}_\Theta := \{f_\theta : \theta \in \Theta \subseteq \mathbb{R}^d\}$ where $d \in \mathbb{N}$ and $\Theta$ is compact. Then,
$$\hat{J}^{m,n}_{\mathrm{RPC}} = \sup_{f_\theta \in \mathcal{F}_\Theta} \frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i, y_i) - \frac{\alpha}{m}\sum_{j=1}^{m} f_\theta(x'_j, y'_j) - \frac{\beta}{2n}\sum_{i=1}^{n} f^2_\theta(x_i, y_i) - \frac{\gamma}{2m}\sum_{j=1}^{m} f^2_\theta(x'_j, y'_j).$$
Proposition 1 (Boundedness of $\hat{J}^{m,n}_{\mathrm{RPC}}$, informal) $0 \le J_{\mathrm{RPC}} \le \frac{1}{2\beta} + \frac{\alpha^2}{2\gamma}$. Then, with probability at least $1-\delta$, $|J_{\mathrm{RPC}} - \hat{J}^{m,n}_{\mathrm{RPC}}| = O\Big(\sqrt{\frac{d + \log(1/\delta)}{n'}}\Big)$, where $n' = \min\{n, m\}$.
Proposition 2 (Variance of $\hat{J}^{m,n}_{\mathrm{RPC}}$, informal) There exist universal constants $c_1$ and $c_2$ that depend only on α, β, γ, such that $\mathrm{Var}[\hat{J}^{m,n}_{\mathrm{RPC}}] = O\big(\frac{c_1}{n} + \frac{c_2}{m}\big)$.
From the two propositions, when $m$ and $n$ are large, i.e., the sample sizes are large, $\hat{J}^{m,n}_{\mathrm{RPC}}$ is bounded and its variance vanishes to 0. First, the boundedness of $\hat{J}^{m,n}_{\mathrm{RPC}}$ suggests that $\hat{J}^{m,n}_{\mathrm{RPC}}$ will not grow to extremely large or small values. Prior contrastive learning objectives with good training stability (e.g., JCPC/JJS/JWPC) also have bounded objective values. For instance, the empirical estimation of JCPC is less than $\log N$ (equation 1) (Poole et al., 2019). Nevertheless, JCPC often performs the best only when the minibatch size is large, and the empirical performances of JJS and JWPC are not as competitive as JCPC. Second, the upper bound on the variance implies the training of $\hat{J}^{m,n}_{\mathrm{RPC}}$ can be stable, and in practice we observe a much smaller value than the stated upper bound. On the contrary, Song & Ermon (2019) show that the empirical estimations of JDV and JNWJ exhibit inevitable variances that grow exponentially with the true $D_{\mathrm{KL}}(P_{XY} \| P_XP_Y)$.
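A minimal numerical sketch (ours, not the paper's implementation) of the estimator in Definition 1 evaluated at a fixed critic, which also illustrates the O(1/n) variance behavior of Proposition 2; the synthetic score distributions are assumptions for illustration:

```python
import numpy as np

def j_rpc_hat(f_pos, f_neg, alpha=1.0, beta=0.005, gamma=1.0):
    """Empirical J_RPC (Definition 1) at a fixed critic: f_pos are scores
    f_theta(x_i, y_i) on pairs from P_XY, f_neg are scores on pairs from P_X P_Y."""
    return (f_pos.mean() - alpha * f_neg.mean()
            - 0.5 * beta * (f_pos ** 2).mean()
            - 0.5 * gamma * (f_neg ** 2).mean())

rng = np.random.default_rng(1)

def one_estimate(n):
    # hypothetical critic that tends to score positives above negatives
    return j_rpc_hat(rng.normal(1.0, 1.0, size=n), rng.normal(-1.0, 1.0, size=n))

val = one_estimate(4096)
assert np.isfinite(val)

# the variance of the estimate shrinks as the sample size grows (cf. Proposition 2)
v_small = np.var([one_estimate(100) for _ in range(200)])
v_large = np.var([one_estimate(10000) for _ in range(200)])
assert v_large < v_small
```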
Lastly, similar to the prior contrastive learning objectives that relate to distribution divergence measurements, we associate JRPC with the Chi-square divergence $D_{\chi^2}(P_{XY} \| P_XP_Y) = \mathbb{E}_{P_XP_Y}[r^2(x, y)] - 1$ (Nielsen & Nock, 2013). The derivations are provided in the Appendix. By having $P' = \frac{\beta}{\beta+\gamma}P_{XY} + \frac{\gamma}{\beta+\gamma}P_XP_Y$ as the mixture distribution of $P_{XY}$ and $P_XP_Y$, we can rewrite $J_{\mathrm{RPC}}(X,Y)$ as $J_{\mathrm{RPC}}(X,Y) = \frac{\beta+\gamma}{2}\mathbb{E}_{P'}[r^2_{\alpha,\beta,\gamma}(x, y)]$. Hence, JRPC can be regarded as a generalization of $D_{\chi^2}$ with the relative parameters α, β, γ, where $D_{\chi^2}$ can be recovered from JRPC by specializing α = 0, β = 0 and γ = 1 (e.g., $D_{\chi^2} = 2\,J_{\mathrm{RPC}}|_{\alpha=\beta=0,\gamma=1} - 1$). Note that JRPC may not be a formal divergence measure for arbitrary α, β, γ." }, { "heading": "3 EXPERIMENTS", "text": "We provide an overview of the experimental section. First, we conduct benchmark self-supervised representation learning tasks spanning visual object classification and speech recognition. This set of experiments is designed to discuss the three challenges of the contrastive representation learning objectives: downstream task performance (Section 3.1), training stability (Section 3.2), and minibatch size sensitivity (Section 3.3). We also provide an ablation study on the choices of the relative parameters in JRPC (Section 3.4). In these experiments, we find that JRPC achieves a lower variance during training, a lower sensitivity to minibatch size, and consistent performance improvements. Second, we relate JRPC with mutual information (MI) estimation (Section 3.5). The connection is that MI is an average statistic of the density ratio, and we have shown that the optimal solution of JRPC is the relative density ratio (see Lemma 1). Thus we can estimate MI using the density ratio transformed from the optimal solution of JRPC. In these two sets of experiments, we fairly compare JRPC with other contrastive learning objectives.
Particularly, across different objectives, we fix the network, learning rate, optimizer, and batch size (we use the default configurations suggested by the original implementations from Chen et al. (2020c), Rivière et al. (2020) and Tsai et al. (2020b)). The only difference will be the objective itself. In what follows, we perform the first set of experiments. We defer experimental details to the Appendix.
Datasets. For visual object classification, we consider CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015). CIFAR-10/-100 and ImageNet contain labeled images only, while STL-10 contains labeled and unlabeled images. For speech recognition, we consider the LibriSpeech-100h (Panayotov et al., 2015) dataset, which contains 100 hours of 16kHz English speech from 251 speakers with 41 types of phonemes.
Training and Evaluation Details. For the vision experiments, we follow the setup from SimCLRv2 (Chen et al., 2020c), which considers visual object recognition as its downstream task. For the speech experiments, we follow the setup from prior work (Oord et al., 2018; Rivière et al., 2020), which considers phoneme classification and speaker identification as the downstream tasks. Then, we briefly discuss the training and evaluation details in three modules: 1) related and unrelated data construction, 2) pre-training, and 3) fine-tuning and evaluation. For more details, please refer to the Appendix or the original implementations. . Related and Unrelated Data Construction. In the vision experiment, we construct the related images by applying different augmentations to the same image. Hence, when $(x, y) \sim P_{XY}$, x and y are the same image with different augmentations. The unrelated images are two randomly selected samples. In the speech experiment, we define the current latent feature (the feature at time t) and the future samples (samples at time > t) as related data.
In other words, the feature in the latent space should contain information that can be used to infer future time steps. A latent feature and randomly selected samples would be considered unrelated data. . Pre-training. The pre-training stage refers to the self-supervised training with a contrastive learning objective. Our training objective is defined in Definition 1, where we use neural networks to parametrize the function using the constructed related and unrelated data. Convolutional neural networks are used for the vision experiments. Transformers (Vaswani et al., 2017) and LSTMs (Hochreiter & Schmidhuber, 1997) are used for the speech experiments. . Fine-tuning and Evaluation. After the pre-training stage, we fix the parameters in the pre-trained networks and add a small fine-tuning network on top of them. Then, we fine-tune this small network with the downstream labels in the data’s training split. For the fine-tuning network, both the vision and speech experiments consider multi-layer perceptrons. Last, we evaluate the fine-tuned representations on the data’s test split. We would like to point out that we do not normalize the hidden representations encoded by the pre-training neural network for the loss calculation. This hidden normalization technique is widely applied (Tian et al., 2019; Chen et al., 2020b;c) to stabilize training and increase performance for prior objectives, but we find it unnecessary in JRPC." }, { "heading": "3.1 DOWNSTREAM TASK PERFORMANCES ON VISION AND SPEECH", "text": "For the downstream task performance in the vision domain, we test the proposed JRPC and other contrastive learning objectives on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Here we report the best performances JRPC can achieve on each dataset (we include experimental details in Appendix A.7). Table 2 shows that the proposed JRPC outperforms other objectives on all datasets.
Using JRPC on the largest network (a ResNet with depth 152, channel width 2, and selective kernels), the performance jumps from 77.80% with JCPC to 78.40% with JRPC.
Regarding speech representation learning, the downstream performances for phoneme and speaker classification are shown in Table 3 (we defer experimental details to Appendix A.9). Compared to JCPC, JRPC improves the phoneme classification results by 4.8 percent and the speaker classification results by 0.3 percent, bringing them closer to the fully supervised model. Overall, the proposed JRPC performs better than other unsupervised learning objectives on both the phoneme classification and speaker classification tasks." }, { "heading": "3.2 TRAINING STABILITY", "text": "We provide empirical training stability comparisons of JDV, JNWJ, JCPC and JRPC by plotting the values of the objectives as the training step increases. We apply the four objectives to the SimCLRv2 framework and train on the CIFAR-10 dataset. All training setups are exactly the same except for the objectives. In our experiments, JDV and JNWJ soon explode to NaN and disrupt training (shown as early stopping in Figure 1a; extremely large values are not plotted due to scale constraints). On the other hand, JRPC and JCPC have low variance, and both enjoy stable training. As a result, performances using the representations learned from the unstable JDV and JNWJ suffer in the downstream task, while the representations learned by JRPC and JCPC work much better." }, { "heading": "3.3 MINIBATCH SIZE SENSITIVITY", "text": "We then provide an analysis of the effect of minibatch size on JRPC and JCPC, since JCPC is known to be sensitive to minibatch size (Poole et al., 2019). We train SimCLRv2 (Chen et al., 2020c) on CIFAR-10 and the model from Rivière et al. (2020) on LibriSpeech-100h using JRPC and JCPC with different minibatch sizes. The settings of the relative parameters are the same as in Section 3.2.
From Figure 1b and 1c, we can observe that both JRPC and JCPC achieve their optimal performance at a large minibatch size. However, when the minibatch size decreases, the performance of JCPC shows higher sensitivity and suffers more when the number of minibatch samples is small. The result suggests that the proposed method might be less sensitive to the change of minibatch size compared to JCPC given the same training settings." }, { "heading": "3.4 EFFECT OF RELATIVE PARAMETERS", "text": "We study the effect of different combinations of relative parameters in JRPC by comparing downstream performances on visual object recognition. We train SimCLRv2 on CIFAR-10 with different combinations of α, β and γ in JRPC and fix all other experimental settings. We choose α ∈ {0, 0.001, 1.0}, β ∈ {0, 0.001, 1.0}, γ ∈ {0, 0.001, 1.0} and we report the best performances under each combination of α, β, and γ. From Figure 2, we first observe that α > 0 has better downstream performance than α = 0 when β and γ are fixed. This observation is as expected, since α > 0 encourages representations of related and unrelated samples to be pushed away. Then, we find that a small but nonzero β (β = 0.001) and a large γ (γ = 1.0) give the best performance compared to other combinations. Since β and γ serve as the coefficients of `2 regularization, the results imply that the regularization is a strong and sensitive factor that will influence the performance. The results here are not as competitive as Table 2 because the CIFAR-10 result reported in Table 2 is using a set of relative parameters (α = 1.0, β = 0.005, γ = 1.0) that is different from the combinations in this subsection. Also, we use quite different ranges of γ on ImageNet (see A.7 for details.) In conclusion, we find empirically that a non-zero α, a small β and a large γ will lead to the optimal representation for the downstream task on CIFAR-10." 
}, { "heading": "3.5 RELATION TO MUTUAL INFORMATION ESTIMATION", "text": "The presented approach also closely relates to mutual information estimation. For random variables X and Y with joint distribution $P_{XY}$ and product of marginals $P_XP_Y$, the mutual information is defined as $I(X;Y) = D_{\mathrm{KL}}(P_{XY} \| P_XP_Y)$. Lemma 1 states that, given the optimal solution $f^*(x, y)$ of JRPC, we can recover the density ratio $r(x, y) := p(x, y)/p(x)p(y)$ as $r(x, y) = \frac{\gamma/\beta + \alpha}{1 - \beta f^*(x,y)} - \frac{\gamma}{\beta}$. We can empirically estimate $\hat{r}(x, y)$ from the estimated $\hat{f}(x, y)$ via this transformation, and use $\hat{r}(x, y)$ to estimate mutual information (Tsai et al., 2020b). Specifically, $I(X;Y) \approx \frac{1}{n}\sum_{i=1}^{n} \log \hat{r}(x_i, y_i)$ with $(x_i, y_i) \sim P^{\otimes n}_{X,Y}$, where $P^{\otimes n}_{X,Y}$ is the uniformly sampled empirical distribution of $P_{X,Y}$.
We follow prior work (Poole et al., 2019; Song & Ermon, 2019; Tsai et al., 2020b) for the experiments. We consider X and Y as two 20-dimensional Gaussians with correlation ρ, and our goal is to estimate the mutual information I(X;Y). Then, we perform a cubic transformation on y so that $y \mapsto y^3$. The first task is referred to as the Gaussian task and the second as the Cubic task, where both have the ground truth $I(X;Y) = -10\log(1-\rho^2)$. The models are trained for 20,000 steps with I(X;Y) starting at 2 and increased by 2 every 4,000 steps. Our method is compared with the baseline methods JCPC (Oord et al., 2018), JNWJ (Nguyen et al., 2010), JJS (Nowozin et al., 2016), SMILE (Song & Ermon, 2019) and Difference of Entropies (DoE) (McAllester & Stratos, 2020). All approaches use the same network design, learning rate, optimizer and minibatch size for a fair comparison. First, we observe JCPC (Oord et al., 2018) has the smallest variance, while it exhibits a large bias (the estimated mutual information from JCPC is upper-bounded by log(batch size)). Second, JNWJ (Nguyen et al., 2010) and JJSD (Poole et al., 2019) have large variances, especially in the Cubic task.
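The critic-to-ratio inversion used above for MI estimation can be sanity-checked numerically (our own sketch; the relative parameter values are illustrative):

```python
import numpy as np

def ratio_from_critic(f, alpha, beta, gamma):
    # invert f = (r - alpha) / (beta r + gamma)  =>  r = (alpha + gamma f) / (1 - beta f),
    # which equals (gamma/beta + alpha) / (1 - beta f) - gamma/beta, as written above
    return (alpha + gamma * f) / (1.0 - beta * f)

alpha, beta, gamma = 1.0, 0.005, 1.0
r_true = np.logspace(-3, 3, 100)
f_star = (r_true - alpha) / (beta * r_true + gamma)   # optimal critic (Lemma 1)
r_back = ratio_from_critic(f_star, alpha, beta, gamma)
assert np.allclose(r_back, r_true)

# with the recovered ratio, MI is estimated as the average log-ratio over positive samples:
# I(X;Y) ~= mean(log r_hat(x_i, y_i)) for (x_i, y_i) ~ P_XY
```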
Song & Ermon (2019) pointed out the limitations of JCPC, JNWJ, and JJSD, and developed the SMILE method, which clips the value of the estimated density function to reduce the variance of the estimators. DoE (McAllester & Stratos, 2020) is neither a lower bound nor an upper bound of mutual information, but can achieve accurate estimates when the underlying mutual information is large. JRPC exhibits comparable bias and lower variance compared to the SMILE method, and is more stable than the DoE method. We would like to highlight our method’s low-variance property: we neither clip the values of the estimated density ratio nor impose an upper bound on our estimated mutual information." }, { "heading": "4 RELATED WORK", "text": "As a subset of unsupervised representation learning, self-supervised representation learning (SSL) adopts self-defined signals as supervision and uses the learned representations for downstream tasks, such as object detection and image captioning (Liu et al., 2020). We categorize SSL work into two groups: those where the signal is the input’s hidden property and those where it is the corresponding view of the input. For the first group, for example, Jigsaw puzzle (Noroozi & Favaro, 2016) shuffles the image patches and defines the SSL task as predicting the shuffled positions of the image patches. Other instances are Predicting Rotations (Gidaris et al., 2018) and Shuffle & Learn (Misra et al., 2016). For the second group, the SSL task aims at modeling the co-occurrence of multiple views of the data, via contrastive or predictive learning objectives (Tsai et al., 2020a). The predictive objectives encourage reconstruction from one view of the data to the other, such as predicting the lower part of an image from its upper part (ImageGPT by Chen et al. (2020a)). Comparing the contrastive with predictive learning approaches, Tsai et al.
(2020a) points out that the former requires less computational resources for a good performance but suffers more from the over-fitting problem.
Theoretical analysis (Arora et al., 2019; Tsai et al., 2020a; Tosh et al., 2020) suggests the contrastively learned representations can lead to a good downstream performance. Beyond the theory, Tian et al. (2020) shows that what matters more for the performance is 1) the choice of the contrastive learning objective; and 2) the creation of the positive and negative data pairs in the contrastive objective. Recent work (Khosla et al., 2020) extends the usage of contrastive learning from the self-supervised setting to the supervised setting. The supervised setting defines the positive pairs as the data from the same class in the contrastive objective, while the self-supervised setting defines the positive pairs as the data with different augmentations.
Our work also closely relates to the skewed divergence measurement between distributions (Lee, 1999; 2001; Nielsen, 2010; Yamada et al., 2013). Recall that the usage of the relative parameters plays a crucial role in regularizing our objective for boundedness and low variance. This idea is similar to the skewed divergence measurement: when calculating the divergence between distributions P and Q, instead of considering $D(P \| Q)$, these approaches consider $D(P \| \alpha P + (1-\alpha)Q)$ with D representing the divergence and $0 < \alpha < 1$. A natural example is that the Jensen-Shannon divergence is a symmetric skewed KL divergence: $D_{\mathrm{JS}}(P \| Q) = 0.5\,D_{\mathrm{KL}}(P \,\|\, 0.5P + 0.5Q) + 0.5\,D_{\mathrm{KL}}(Q \,\|\, 0.5P + 0.5Q)$. Compared to the non-skewed counterparts, the skewed divergences have been shown to have more robust estimation of their values (Lee, 1999; 2001; Yamada et al., 2013). Different from these works that focus on estimating the values of distribution divergences, we focus on learning self-supervised representations."
}, { "heading": "5 CONCLUSION", "text": "In this work, we present RPC, the Relative Predictive Coding, that achieves a good balance among the three challenges when modeling a contrastive learning objective: training stability, sensitivity to minibatch size, and downstream task performance. We believe this work brings an appealing option for training self-supervised models and inspires future work to design objectives for balancing the aforementioned three challenges. In the future, we are interested in applying RPC in other application domains and developing more principled approaches for better representation learning." }, { "heading": "ACKNOWLEDGEMENT", "text": "This work was supported in part by the NSF IIS1763562, NSF Awards #1750439 #1722822, National Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and Facebook PhD Fellowship. We would also like to acknowledge NVIDIA’s GPU support and Cloud TPU support from Google’s TensorFlow Research Cloud (TFRC)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 1 IN THE MAIN TEXT", "text": "Lemma 2 (Optimal Solution for JRPC, restating Lemma 1 in the main text) Let\nJRPC(X,Y ) := sup f∈F\nEPXY [f(x, y)]−αEPXPY [f(x, y)]− β\n2 EPXY\n[ f2(x, y) ] −γ\n2 EPXPY\n[ f2(x, y) ] and r(x, y) = p(x,y)p(x)p(y) be the density ratio. JRPC has the optimal solution\nf∗(x, y) = r(x, y)− α β r(x, y) + γ := rα,β,γ(x, y) with − α γ ≤ rα,β,γ ≤ 1 β .\nProof: The second-order functional derivative of the objective is\n−βdPX,Y − γdPXPY ,\nwhich is always negative. The negative second-order functional derivative implies the objective has a supreme value. Then, take the first-order functional derivative ∂JRPC∂m and set it to zero:\ndPX,Y − α · dPXPY − β · f(x, y) · dPX,Y − γ · f(x, y) · dPXPY = 0.\nWe then get\nf∗(x, y) = dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY = p(x, y)− αp(x)p(y) βp(x, y) + γp(x)p(y) = r(x, y)− α βr(x, y) + γ .\nSince 0 ≤ r(x, y) ≤ ∞, we have −αγ ≤ r(x,y)−α βr(x,y)+γ ≤ 1 β . 
Hence,\n∀β 6= 0, γ 6= 0, f∗(x, y) := rα,β,γ(x, y) with − α\nγ ≤ rα,β,γ ≤\n1 β .\nA.2 RELATION BETWEEN JRPC AND Dχ2\nIn this subsection, we aim to show the following: 1) Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)] − 1; and 2) JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] by having P ′ = ββ+γPXY + γ\nβ+γPXPY as the mixture distribution of PXY and PXPY .\nLemma 3 Dχ2(PXY ‖PXPY ) = EPXPY [r2(x, y)]− 1\nProof: By definition (Nielsen & Nock, 2013),\nDχ2(PXY ‖PXPY ) = ∫ (dPXY )2\ndPXPY − 1 = ∫ ( dPXY dPXPY )2 dPXPY − 1\n= ∫ ( p(x, y) p(x)p(y) )2 dPXPY − 1 = ∫ r2(x, y)dPXPY − 1\n= EPXPY [r2(x, y)]− 1.\nLemma 4 Defining P ′ = ββ+γPXY + γ β+γPXPY as a mixture distribution of PXY and PXPY , JRPC(X,Y ) = β+γ 2 EP ′ [r 2 α,β,γ(x, y)].\nProof: Plug in the optimal solution f∗(x, y) = dPX,Y −α·dPXPYβ·dPX,Y +γ·dPXPY (see Lemma 2) into JRPC:\nJRPC = EPXY [f∗(x, y)]− αEPXPY [f∗(x, y)]− β\n2 EPXY\n[ f∗2(x, y) ] − γ\n2 EPXPY\n[ f∗2(x, y) ] = ∫ f∗(x, y) · ( dPXY − α · dPXPY ) − 1\n2 f∗2(x, y) ·\n( β · dPXY + γ · dPXPY ) =\n∫ dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ( dPXY − α · dPXPY ) − 1 2 ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1\n2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = β + γ\n2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β β + γ · dPXY + γ β + γ · dPXPY ) .\nSince we define rα,β,γ = dPX,Y −α·dPXPY β·dPX,Y +γ·dPXPY and P ′ = ββ+γPXY + γ β+γPXPY ,\nJRPC = β + γ\n2 EP ′ [r2α,β,γ(x, y)]." }, { "heading": "A.3 PROOF OF PROPOSITION 1 IN THE MAIN TEXT", "text": "The proof contains two parts: showing 0 ≤ JRPC ≤ 12β + α2 2γ (see Section A.3.1) and Ĵ m,n RPC is a consistent estimator for JRPC (see Section A.3.2).\nA.3.1 BOUNDNESS OF JRPC\nLemma 5 (Boundness of JRPC) 0 ≤ JRPC ≤ 12β + α2 2γ\nProof: Lemma 4 suggests JRPC(X,Y ) = β+γ2 EP ′ [r 2 α,β,γ(x, y)] with P ′ = ββ+γPXY + γ\nβ+γPXPY as the mixture distribution of PXY and PXPY . Hence, it is obvious JRPC(X,Y ) ≥ 0. 
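Lemma 3 can be sanity-checked on a small discrete joint distribution (a toy example assumed for illustration, not from the paper's code):

```python
# Check D_chi2(P_XY || P_X P_Y) = E_{P_X P_Y}[r(x, y)^2] - 1,
# with r(x, y) = p(x, y) / (p(x) p(y)), on a 2x2 discrete joint.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_x = {0: 0.5, 1: 0.5}  # marginals of p_xy
p_y = {0: 0.6, 1: 0.4}

# Direct definition: sum of (p_xy)^2 / (p_x p_y) minus 1.
chi2_direct = sum(p_xy[(x, y)] ** 2 / (p_x[x] * p_y[y])
                  for (x, y) in p_xy) - 1.0

# Via the density ratio: E_{P_X P_Y}[r^2] - 1.
chi2_ratio = sum(p_x[x] * p_y[y] * (p_xy[(x, y)] / (p_x[x] * p_y[y])) ** 2
                 for (x, y) in p_xy) - 1.0

assert abs(chi2_direct - chi2_ratio) < 1e-12
```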
We leverage the intermediate results in the proof of Lemma 4:\nJRPC(X,Y ) = 1\n2 ∫ ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY )2( β · dPXY + γ · dPXPY ) = 1\n2\n∫ dPX,Y ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) − α 2 ∫ dPXPY ( dPX,Y − α · dPXPY β · dPX,Y + γ · dPXPY ) = 1\n2 EPXY [rα,β,γ(x, y)]−\nα 2 EPXPY [rα,β,γ(x, y)].\nSince −αγ ≤ rα,β,γ ≤ 1 β , JRPC(X,Y ) ≤ 1 2β +\nα2 2γ ." }, { "heading": "A.3.2 CONSISTENCY", "text": "We first recall the definition of the estimation of JRPC:\nDefinition 2 (Ĵm,nRPC, empirical estimation of JRPC, restating Definition 1 in the main text) We parametrize f via a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where d ∈ N and Θ is compact. Let {xi, yi}ni=1 be n samples drawn uniformly at random from PXY and {x′j , y′j}mj=1 be m samples drawn uniformly at random from PXPY . Then,\nĴm,nRPC = sup fθ∈FΘ\n1\nn n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j).\nOur goal is to show that Ĵm,nRPC is a consistent estimator for JRPC. We begin with the following definition:\nĴm,nRPC,θ := 1\nn n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j) (3)\nand E [ ĴRPC,θ ] := EPXY [fθ(x, y)]−αEPXPY [fθ(x, y)]− β\n2 EPXY [f2θ (x, y)]−\nγ 2 EPXPY [f2θ (x, y)]. (4)\nThen, we follow the steps:\n• The first part is about estimation. We show that, with high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ.\n• The second part is about approximation. We will apply the universal approximation lemma of neural networks (Hornik et al., 1989) to show that there exists a network θ∗ such that E [ ĴRPC,θ∗ ] is close to JRPC.\nPart I - Estimation: With high probability, Ĵm,nRPC,θ is close to E [ ĴRPC,θ ] , for any given θ. Throughout the analysis on the uniform convergence, we need the assumptions on the boundness and smoothness of the function fθ. 
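A minimal sketch of the estimator in Definition 2 for a fixed θ (the function name and inputs are illustrative; we assume the critic scores fθ(·, ·) have already been evaluated on the n joint samples and the m product-of-marginal samples):

```python
def rpc_empirical(joint_scores, marginal_scores, alpha=1.0, beta=0.005, gamma=1.0):
    """Empirical J_RPC for a fixed critic: sample mean of f - (beta/2) f^2 over
    joint pairs minus sample mean of alpha f + (gamma/2) f^2 over independent pairs."""
    n, m = len(joint_scores), len(marginal_scores)
    pos = sum(f - 0.5 * beta * f * f for f in joint_scores) / n
    neg = sum(alpha * f + 0.5 * gamma * f * f for f in marginal_scores) / m
    return pos - neg

# Tiny worked example (hypothetical scores): n = 2 joint pairs, m = 2 negatives.
est = rpc_empirical([1.0, 2.0], [0.0, 1.0])
assert abs(est - 0.74375) < 1e-9
```

In practice one would also clip the scores to [-α/γ, 1/β] per Assumption 1; the clipping is omitted here for brevity.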
Since we show the optimal function f is bounded in JRPC, we can use the same bounded values for fθ without losing too much precision. The smoothness of the function suggests that the output of the network should only change slightly when only slightly perturbing the parameters. Specifically, the two assumptions are as follows:\nAssumption 1 (boundness of fθ) There exist universal constants such that ∀fθ ∈ FΘ, CL ≤ fθ ≤ CU . For notations simplicity, we let M = CU − CL be the range of fθ and U = max {|CU |, |CL|} be the maximal absolute value of fθ. In the paper, we can choose to constrain that CL = −αγ and CU = 1 β since the optimal function f ∗ has −αγ ≤ f ∗ ≤ 1β .\nAssumption 2 (smoothness of fθ) There exists constant ρ > 0 such that ∀(x, y) ∈ (X × Y) and θ1, θ2 ∈ Θ, |fθ1(x, y)− fθ2(x, y)| ≤ ρ|θ1 − θ2|.\nNow, we can bound the rate of uniform convergence of a function class in terms of covering number (Bartlett, 1998):\nLemma 6 (Estimation) Let > 0 and N (Θ, ) be the covering number of Θ with radius . Then,\nPr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ )\n≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) .\nProof: For notation simplicity, we define the operators • P (f) = EPXY [f(x, y)] and Pn(f) = 1n ∑n i=1 f(xi, yi)\n• Q(f) = EPXPY [f(x, y)] and Qm(f) = 1m ∑m j=1 f(x ′ j , y ′ j)\nHence,∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ = ∣∣Pn(fθ)− P (fθ)− αQm(fθ) + αQ(fθ)− βPn(f2θ ) + βP (f2θ )− γQm(f2θ ) + γQ(f2θ )∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β ∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣\nLet ′ = 4ρ ( 1+α+2(β+γ)U ) and T := N (Θ, ′). LetC = {fθ1 , fθ2 , · · · , fθT }with {θ1, θ2, · · · , θT } be such that B∞(θ1, ′), · · · , B∞(θT , ′) are ′ cover. Hence, for any fθ ∈ FΘ, there is an fθk ∈ C such that ‖θ − θk‖∞ ≤ ′. 
Then, for any fθk ∈ C:∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≤ |Pn(fθ)− P (fθ)|+ α |Qm(fθ)−Q(fθ)|+ β\n∣∣Pn(f2θ )− P (f2θ )∣∣+ γ ∣∣Qm(f2θ )−Q(f2θ )∣∣ ≤ |Pn(fθk)− P (fθk)|+ |Pn(fθ)− Pn(fθk)|+ |P (fθ)− P (fθk)|\n+ α ( |Qm(fθk)−Q(fθk)|+ |Qm(fθ)−Qm(fθk)|+ |Q(fθ)−Q(fθk)| ) + β\n( ∣∣Pn(f2θk)− P (f2θk)∣∣+ ∣∣Pn(f2θ )− Pn(f2θk)∣∣+ ∣∣P (f2θ )− P (f2θk)∣∣ ) + γ\n( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ ∣∣Qm(f2θ )−Qm(f2θk)∣∣+ ∣∣Q(f2θ )−Q(f2θk)∣∣ ) ≤ |Pn(fθk)− P (fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖\n+ α ( |Qm(fθk)−Q(fθk)|+ ρ‖θ − θk‖+ ρ‖θ − θk‖ ) + β\n( ∣∣Pn(f2θk)− P (f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) + γ\n( ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2ρU‖θ − θk‖+ 2ρU‖θ − θk‖) = |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β\n∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ + 2ρ ( 1 + α+ 2(β + γ)U ) ‖θ − θk‖\n≤ |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 ,\nwhere\n• |Pn(fθ)− Pn(fθk)| ≤ ρ‖θ − θk‖ due to Assumption 2, and the result also applies for |P (fθ)− P (fθk)|, |Qm(fθ)−Qm(fθk)|, and |Q(fθ)−Q(fθk)|.\n• ∣∣Pn(f2θ )− Pn(f2θk)∣∣ ≤ 2‖fθ‖∞ρ‖θ−θk‖ ≤ 2ρU‖θ−θk‖ due to Assumptions 1 and 2. The result also applies for\n∣∣P (f2θ )− P (f2θk)∣∣, ∣∣Qm(f2θ )−Qm(f2θk)∣∣, and ∣∣Q(f2θ )−Q(f2θk)∣∣. 
Hence,\nPr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ )\n≤Pr (\nmax fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣+ 2 ≥ ) = Pr\n( max fθk∈C |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2 )\n≤ T∑ k=1 Pr ( |Pn(fθk)− P (fθk)|+ α |Qm(fθk)−Q(fθk)|+ β ∣∣Pn(f2θk)− P (f2θk)∣∣+ γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 2) ≤\nT∑ k=1 Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) + Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) + Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8)+ Pr(γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) .\nWith Hoeffding’s inequality,\n• Pr ( |Pn(fθk)− P (fθk)| ≥ 8 ) ≤ 2exp ( − n 2 32M2 ) • Pr ( α |Qm(fθk)−Q(fθk)| ≥ 8 ) ≤ 2exp ( − m 2 32M2α2\n) • Pr ( β ∣∣Pn(f2θk)− P (f2θk)∣∣ ≥ 8) ≤ 2exp(− n 232U2β2)\n• Pr ( γ ∣∣Qm(f2θk)−Q(f2θk)∣∣ ≥ 8) ≤ 2exp(− m 232U2γ2)\nTo conclude,\nPr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ )\n≤2N (Θ, 4ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 32M2 ) + exp ( − m 2 32M2α2 ) + exp ( − n 2 32U2β2 ) + exp ( − m 2 32U2γ2 )) .\nPart II - Approximation: Neural Network Universal Approximation. We leverage the universal function approximation lemma of neural network\nLemma 7 (Approximation (Hornik et al., 1989)) Let > 0. There exists d ∈ N and a family of neural networks FΘ := {fθ : θ ∈ Θ ⊆ Rd} where Θ is compact, such that\ninf fθ∈FΘ ∣∣∣E[ĴRPC,θ]− JRPC∣∣∣ ≤ . Part III - Bringing everything together. Now, we are ready to bring the estimation and approximation together to show that there exists a neural network θ∗ such that, with high probability, Ĵm,nRPC,θ can approximate JRPC with n′ = min {n,m} at a rate of O(1/ √ n′):\nProposition 3 With probability at least 1 − δ, ∃θ∗ ∈ Θ, |JRPC − Ĵm,nRPC,θ| = O( √ d+log (1/δ) n′ ), where n′ = min {n,m}.\nProof: The proof follows by combining Lemma 6 and 7.\nFirst, Lemma 7 suggests, ∃θ∗ ∈ Θ,∣∣∣E[ĴRPC,θ∗]− JRPC∣∣∣ ≤ 2 .\nNext, we perform analysis on the estimation error, aiming to find n,m and the corresponding probability, such that ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ∗]∣∣∣ ≤ 2 . 
Applying Lemma 6 with the covering number of the neural network: ( N (Θ, ) =\nO ( exp ( d log (1/ ) )) (Anthony & Bartlett, 2009) ) and let n′ = min{n,m}:\nPr ( sup fθ∈FΘ ∣∣∣Ĵm,nRPC,θ − E[ĴRPC,θ]∣∣∣ ≥ 2 )\n≤2N (Θ, 8ρ ( 1 + α+ 2(β + γ)U ) )(exp(− n 2 128M2 ) + exp ( − m 2 128M2α2 ) + exp ( − n 2 128U2β2 ) + exp ( − m 2 128U2γ2 )) =O ( exp ( d log (1/ )− n′ 2 )) ,\nwhere the big-O notation absorbs all the constants that do not require in the following derivation. Since we want to bound the probability with 1− δ, we solve the such that\nexp ( d log (1/ )− n′ 2 ) ≤ δ.\nWith log (x) ≤ x− 1,\nn′ 2 + d( − 1) ≥ n′ 2 + dlog ≥ log (1/δ),\nwhere this inequality holds when\n= O\n(√ d+ log (1/δ)\nn′\n) ." }, { "heading": "A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT", "text": "Here, we provide the variance analysis on Ĵm,nRPC via an asymptotic viewpoint. First, assuming the network is correctly specified, and hence there exists a network parameter θ∗ satisfying f∗(x, y) = fθ∗(x, y) = rα,β,γ(x, y). Then we recall that Ĵ m,n RPC is a consistent estimator of J RPC (see Proposition 3), and under regular conditions, the estimated network parameter θ̂ in Ĵm,nRPC satisfying the asymptotic normality in the large sample limit (see Theorem 5.23 in (Van der Vaart, 2000)). 
We recall the definition of Ĵm,nRPC,θ in equation 3 and let n\n′ = min{n,m}, the asymptotic expansion of Ĵm,nRPC has\nĴm,nRPC,θ∗ = Ĵ m,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + o(‖θ∗ − θ̂‖)\n= Ĵm,n RPC,θ̂ + ˙̂ Jm,n RPC,θ̂ (θ∗ − θ̂) + op( 1√ n′ )\n= Ĵm,n RPC,θ̂ + op( 1√ n′ ),\n(5)\nwhere ˙̂Jm,n RPC,θ̂ = 0 since θ̂ is the estimation from Ĵm,nRPC = sup fθ∈FΘ Ĵm,nRPC,θ.\nNext, we recall the definition in equation 4:\nE[ĴRPC,θ̂] = EPXY [fθ̂(x, y)]− αEPXPY [fθ̂(x, y)]− β\n2 EPXY [f2θ̂ (x, y)]−\nγ 2 EPXPY [f2θ̂ (x, y)].\nLikewise, the asymptotic expansion of E[ĴRPC,θ] has\nE[ĴRPC,θ̂] = E[ĴRPC,θ∗ ] + E[ ˙̂ JRPC,θ∗ ](θ̂ − θ∗) + o(‖θ̂ − θ∗‖)\n= E[ĴRPC,θ∗ ] + E[ ˙̂JRPC,θ∗ ](θ̂ − θ∗) + op( 1√ n′ )\n= E[ĴRPC,θ∗ ] + op( 1√ n′ ),\n(6)\nwhere E[ ˙̂JRPC,θ∗ ] = 0 since E[ĴRPC,θ∗ ] = JRPC and θ∗ satisfying f∗(x, y) = fθ∗(x, y).\nCombining equations 5 and 6:\nĴm,n RPC,θ̂ − E[ĴRPC,θ̂] =Ĵ m,n RPC,θ∗ − JRPC + op( 1√ n′ )\n= 1\nn n∑ i=1 f∗θ (xi, yi)− α 1 m m∑ j=1 f∗θ (x ′ j , y ′ j)− β 2 1 n n∑ i=1 f2θ∗(xi, yi)− γ 2 1 m m∑ j=1 f2θ∗(x ′ j , y ′ j)\n− EPXY [f∗(x, y)] + αEPXPY [f∗(x, y)] + β\n2 EPXY\n[ f∗2(x, y) ] + γ\n2 EPXPY\n[ f∗2(x, y) ] + op(\n1√ n′ )\n= 1\nn n∑ i=1 rα,β,γ(xi, yi)− α 1 m m∑ j=1 rα,β,γ(x ′ j , y ′ j)− β 2 1 n n∑ i=1 r2α,β,γ(xi, yi)− γ 2 1 m m∑ j=1 r2α,β,γ(x ′ j , y ′ j)\n− EPXY [rα,β,γ(x, y)] + αEPXPY [rα,β,γ(x, y)] + β\n2 EPXY\n[ r2α,β,γ(x, y) ] + γ\n2 EPXPY\n[ r2α,β,γ(x, y) ] + op(\n1√ n′ )\n= 1√ n · 1√ n n∑ i=1\n( rα,β,γ(xi, yi)− β\n2 r2α,β,γ(xi, yi)− EPXY\n[ rα,β,γ(x, y)− β\n2 r2α,β,γ(x, y)\n])\n− 1√ m · 1√ m m∑ j=1\n( αrα,β,γ(x ′ j , y ′ j) + γ\n2 r2α,β,γ(x ′ j , y ′ j)− EPXPY\n[ αrα,β,γ(x, y) + γ\n2 r2α,β,γ(x, y)\n])\n+ op( 1√ n′ ).\nTherefore, the asymptotic Variance of Ĵm,nRPC is\nVar[Ĵm,nRPC] = 1\nn VarPXY [rα,β,γ(x, y)−\nβ 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ ).\nFirst, we look at VarPXY [rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y)]. 
Since β > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − 2αγ+βα 2\n2γ2 ≤ rα,β,γ(x, y)− β 2 r 2 α,β,γ(x, y) ≤ 12β . Hence,\nVarPXY [rα,β,γ(x, y)− β\n2 r2α,β,γ(x, y)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} .\nNext, we look at VarPXPY [αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y)]. Since α ≥ 0, γ > 0 and−αγ ≤ rα,β,γ ≤ 1 β , simple calculation gives us − α2 2γ ≤ αrα,β,γ(x, y) + γ 2 r 2 α,β,γ(x, y) ≤ 2αβ+γ 2β2 . Hence,\nVarPXPY [αrα,β,γ(x, y) + γ\n2 r2α,β,γ(x, y)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} .\nCombining everything together, we restate the Proposition 2 in the main text:\nProposition 4 (Asymptotic Variance of Ĵm,nRPC)\nVar[Ĵm,nRPC] = 1\nn VarPXY [rα,β,γ(x, y)−\nβ 2 r2α,β,γ(x, y)] + 1 m VarPXPY [αrα,β,γ(x, y) + γ 2 r2α,β,γ(x, y)] + o( 1 n′ )\n≤ 1 n max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} + o( 1 n′ )\nA.5 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM BOUNDNESS OF fθ\nAs discussed in Assumption 1, for the estimation Ĵm,nRPC, we can bound the function fθ in FΘ within [−αγ , 1 β ] without losing precision. 
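The two range constraints used in Proposition 4 can also be checked numerically (a sketch; we scan a grid over the admissible critic values, using the speech-experiment parameters α = 1, β = 0.25, γ = 1 from Sec. A.9):

```python
# Scan t in [-alpha/gamma, 1/beta] and verify the bounds used in Prop. 4:
#   t - (beta/2) t^2   in  [-(2*alpha*gamma + beta*alpha^2) / (2*gamma^2), 1/(2*beta)]
#   alpha*t + (gamma/2) t^2  in  [-alpha^2/(2*gamma), (2*alpha*beta + gamma)/(2*beta^2)]
alpha, beta, gamma = 1.0, 0.25, 1.0
lo, hi = -alpha / gamma, 1.0 / beta
ts = [lo + (hi - lo) * k / 1000 for k in range(1001)]

g1 = [t - 0.5 * beta * t * t for t in ts]          # joint-term integrand
g2 = [alpha * t + 0.5 * gamma * t * t for t in ts]  # marginal-term integrand

assert min(g1) >= -(2 * alpha * gamma + beta * alpha ** 2) / (2 * gamma ** 2) - 1e-9
assert max(g1) <= 1 / (2 * beta) + 1e-9
assert min(g2) >= -alpha ** 2 / (2 * gamma) - 1e-9
assert max(g2) <= (2 * alpha * beta + gamma) / (2 * beta ** 2) + 1e-9
```

Both extrema are attained at the endpoints of the interval, since g1 is concave with its vertex at t = 1/β and g2 is convex with its vertex at t = -α/γ.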
Then, re-arranging Ĵ m,n RPC:\nsup fθ∈FΘ\n1\nn n∑ i=1 fθ(xi, yi)− 1 m m∑ j=1 αfθ(x ′ j , y ′ j)− 1 n n∑ i=1 β 2 f2θ (xi, yi)− 1 m m∑ j=1 γ 2 f2θ (x ′ j , y ′ j)\nsup fθ∈FΘ\n1\nn n∑ i=1 ( fθ(xi, yi)− β 2 f2θ (xi, yi) ) + 1 m n∑ j=m ( αfθ(x ′ j , y ′ j) + γ 2 f2θ (x ′ j , y ′ j) )\nThen, since −αγ ≤ fθ(·, ·) ≤ 1 β , basic calculations give us\n−2αγ + βα 2\n2γ2 ≤ fθ(xi, yi)−\nβ 2 f2θ (xi, yi) ≤ 1 2β and −α\n2\n2γ ≤ αfθ(x′j , y′j)+\nγ 2 f2θ (x ′ j , y ′ j) ≤ 2αβ + γ 2β2 .\nThe resulting variances have\nVar[fθ(xi, yi)− β\n2 f2θ (xi, yi)] ≤ max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} and\nVar[αfθ(x ′ j , y ′ j) +\nγ 2 f2θ (x ′ j , y ′ j)] ≤ max {(α2 2γ )2 , (2αβ + γ 2β2 )2} .\nTaking the mean of m,n independent random variables gives the result:\nProposition 5 (Variance of Ĵm,nRPC)\nVar[Ĵm,nRPC] ≤ 1\nn max {(2αγ + βα2 2γ2 )2 , ( 1 2β )2} + 1 m max {(α2 2γ )2 , (2αβ + γ 2β2 )2} .\nA.6 IMPLEMENTATION OF EXPERIMENTS\nFor visual representation learning, we follow the implementation in https://github.com/ google-research/simclr. For speech representation learning, we follow the implementation in https://github.com/facebookresearch/CPC_audio. For MI estimation, we follow the implementation in https://github.com/yaohungt/Pointwise_ Dependency_Neural_Estimation/tree/master/MI_Est_and_CrossModal.." }, { "heading": "A.7 RELATIVE PREDICTIVE CODING ON VISION", "text": "The whole pipeline of pretraining contains the following steps: First, a stochastic data augmentation will transform one image sample xk to two different but correlated augmented views, x′2k−1 and x′2k. Then a base encoder f(·) implemented using ResNet (He et al., 2016) will extract representations from augmented views, creating representations h2k−1 and h2k. Later a small neural network g(·) called projection head will map h2k−1 and h2k to z2k−1 and z2k in a different latent space. For each minibatch of N samples, there will be 2N views generated. 
For each image xk there will be one positive pair x′2k−1 and x ′ 2k and 2(N − 1) negative samples. The RPC loss between a pair of positive views, x′i and x ′ j (augmented from the same image) , can be calculated by the substitution fθ(x ′ i,x ′ j) = (zi · zj)/τ = si,j (τ is a hyperparameter) to the definition of RPC:\n`RPCi,j = −(si,j − α\n2(N − 1) 2N∑ k=1 1[k 6=i]si,k − β 2 s2i,j −\nγ\n2 · 2(N − 1) 2N∑ k=1 1[k6=i]s 2 i,k) (7)\nFor losses other than RPC, a hidden normalization of si,j is often required by replacing zi · zj with (zi ·zj)/|zi||zj |. CPC and WPC adopt this, while other objectives needs it to help stabilize training variance. RPC does not need this normalization." }, { "heading": "A.8 CIFAR-10/-100 AND IMAGENET EXPERIMENTS DETAILS", "text": "ImageNet Following the settings in (Chen et al., 2020b;c), we train the model on Cloud TPU with 128 cores, with a batch size of 4, 096 and global batch normalization 3 (Ioffe & Szegedy, 2015). Here we refer to the term batch size as the number of images (or utterances in the speech experiments) we use per GPU, while the term minibatch size refers to the number of negative samples used to calculate the objective, such as CPC or our proposed RPC. The largest model we train is a 152-layer ResNet with selective kernels (SK) (Li et al., 2019) and 2× wider channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then decayed with cosine decay schedule. The weight decay is 10−4. A MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer, and train the model for only 100 epochs rather than 800 epochs due to computational constraints. These two options slightly reduce CPC’s performance benchmark for about 2% with the exact same setting. The unsupervised pre-training is followed by a supervised fine-tuning. 
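Following Eq. (7) literally, the per-pair loss can be sketched as below (a pure-Python illustration, not the released TensorFlow code; `sims` denotes the 2N × 2N matrix of scaled similarities s_{i,k} = (z_i · z_k)/τ):

```python
def rpc_loss_vision(sims, i, j, alpha=0.3, beta=0.001, gamma=0.1):
    """RPC loss of Eq. (7) for the positive pair (i, j). As in Eq. (7), the
    sums run over all k != i (2N - 1 terms) but are normalized by 2(N - 1)."""
    two_n = len(sims)
    norm = two_n - 2  # 2(N - 1)
    s_ij = sims[i][j]
    lin = sum(sims[i][k] for k in range(two_n) if k != i)
    sq = sum(sims[i][k] ** 2 for k in range(two_n) if k != i)
    return -(s_ij - alpha * lin / norm - 0.5 * beta * s_ij ** 2
             - 0.5 * gamma * sq / norm)

# Example with N = 2 (four augmented views); similarity values are hypothetical.
sims = [[1.0, 0.8, 0.1, -0.2],
        [0.8, 1.0, 0.0, 0.3],
        [0.1, 0.0, 1.0, 0.5],
        [-0.2, 0.3, 0.5, 1.0]]
loss = rpc_loss_vision(sims, 0, 1)
```

The full minibatch loss averages this quantity over all positive pairs (i, j) and (j, i).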
Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use learning rates 0.16 and 0.064 for the standard 50-layer ResNet and the larger 152-layer ResNet respectively, and weight decay and learning rate warmup are removed. Different from Chen et al. (2020c), we use a batch size of 4,096, and we do not use global batch normalization for fine-tuning. For JRPC we disable hidden normalization and use a temperature τ = 32. For all other objectives, we use hidden normalization and τ = 0.1 following previous work (Chen et al., 2020c). For relative parameters, we use α = 0.3, β = 0.001, γ = 0.1 and α = 0.3, β = 0.001, γ = 0.005 for ResNet-50 and ResNet-152 respectively.\nCIFAR-10/-100 Following the settings in (Chen et al., 2020b), we train the model on a single GPU, with a batch size of 512 and global batch normalization (Ioffe & Szegedy, 2015). We use ResNets (He et al., 2016) of depth 18 and depth 50, and do not use Selective Kernels (Li et al., 2019) or a multiplied width size. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate linearly increases for the first 20 epochs, reaching a maximum of 6.4, then is decayed with a cosine decay schedule. The weight decay is 10−4. An MLP projection head g(·) with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer. We train the model for 1000 epochs. The unsupervised pre-training is followed by supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer g(·) for the downstream tasks. We use a learning rate of 0.16 for the standard 50-layer ResNet, and weight decay and learning rate warmup are removed. For JRPC we disable hidden normalization and use a temperature τ = 128. For all other objectives, we use hidden normalization and τ = 0.5 following previous work (Chen et al., 2020c).
For relative parameters, we use α = 1.0, β = 0.005, and γ = 1.0.\nSTL-10 We also perform the pre-training and fine-tuning on STL-10 (Coates et al., 2011) using the model proposed in Chuang et al. (2020). Chuang et al. (2020) proposed to indirectly approximate the distribution of negative samples so that the objective is debiased. However, their implementation of contrastive learning is consistent with Chen et al. (2020b). We use a ResNet with depth 50 as the encoder for pre-training, with the Adam optimizer, learning rate 0.001, and weight decay 10−6. The temperature τ is set to 0.5 for all objectives other than JRPC, which disables hidden normalization and uses τ = 128. The downstream task performance increases from 83.4% with JCPC to 84.1% with JRPC.\nConfidence Interval We also provide the confidence intervals of JRPC and JCPC on CIFAR-10, CIFAR-100 and ImageNet, using ResNet-18, ResNet-18 and ResNet-50 respectively (a 95% confidence level is chosen) in Table 4.\n3For WPC (Ozair et al., 2019), the global batch normalization during pretraining is disabled since we enforce 1-Lipschitzness by gradient penalty (Gulrajani et al., 2017).\nBoth CPC and RPC use the same experimental settings throughout this paper. Here we use the relative parameters (α = 1.0, β = 0.005, γ = 1.0) in JRPC, which give the best performance on CIFAR-10. The confidence intervals of CPC do not overlap with the confidence intervals of RPC, which means the difference in downstream task performance between RPC and CPC is statistically significant." }, { "heading": "A.9 RELATIVE PREDICTIVE CODING ON SPEECH", "text": "For speech representation learning, we adopt the general architecture from Oord et al. (2018). Given an input signal x1:T with T time steps, we first pass it through an encoder φθ parametrized by θ to produce a sequence of hidden representations {h1:T } where ht = φθ(xt).
After that, we obtain the contextual representation ct at time step t with a sequential model ψρ parametrized by ρ: ct = ψρ(h1, . . . ,ht), where ct contains context information before time step t. For unsupervised pre-training, we use a multi-layer convolutional network as the encoder φθ, and an LSTM with hidden dimension 256 as the sequential model ψρ. Here, the contrastiveness is between the positive pair (ht+k, ct) where k is the number of time steps ahead, and the negative pairs (hi, ct), where hi is randomly sampled fromN , a batch of hidden representation of signals assumed to be unrelated to ct. The scoring function f based on Equation 2 at step t and look-ahead k will be fk = fk(h, ct) = exp((h)>Wkct), where Wk is a learnable linear transformation defined separately for each k ∈ {1, ...,K} and K is predetermined as 12 time steps. The loss in Equation 2 will then be formulated as:\n`RPCt,k = −(fk(ht+k, ct)− α |N | ∑\nhi∈N\nfk(hi, ct)− β\n2 f2k (ht+k, ct)−\nγ 2|N | ∑\nhi∈N\nf2k (hi, ct)) (8)\nWe use the following relative parameters: α = 1, β = 0.25, and γ = 1, and we use the temperature τ = 16 for JRPC. For JCPC we follow the original implementation which sets τ = 1. We fix all other experimental setups, including architecture, learning rate, and optimizer. As shown in Table 3, JRPC has better downstream task performance, and is closer to the performance from a fully supervised model." }, { "heading": "A.10 EMPIRICAL OBSERVATIONS ON VARIANCE AND MINIBATCH SIZE", "text": "Variance Experiment Setup We perform the variance comparison of JDV, JNWJ and the proposed JRPC. The empirical experiments are performed using SimCLRv2 (Chen et al., 2020c) on CIFAR-10 dataset. We use a ResNet of depth 18, with batch size of 512. We train each objective with 30K training steps and record their value. In Figure 1, we use a temperature τ = 128 for all objectives. 
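The per-step, look-ahead-k loss of Eq. (8) admits an equally small sketch (illustrative names; `f_pos` and `f_negs` are assumed precomputed scores f_k(h, c_t) = exp(h^T W_k c_t)):

```python
def rpc_loss_speech(f_pos, f_negs, alpha=1.0, beta=0.25, gamma=1.0):
    """RPC loss of Eq. (8): the positive score f_k(h_{t+k}, c_t) against the
    mean linear and squared scores over the negative batch N."""
    m = len(f_negs)
    return -(f_pos - alpha * sum(f_negs) / m
             - 0.5 * beta * f_pos ** 2
             - 0.5 * gamma * sum(f * f for f in f_negs) / m)

# Hypothetical scores: one positive and two negatives at a given (t, k).
loss = rpc_loss_speech(2.0, [1.0, 0.5])
```

The total training loss averages this quantity over time steps t and look-ahead offsets k ∈ {1, ..., 12}.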
Unlike other experiments, where hidden normalization is applied to other objectives, here we remove hidden normalization for all objectives because the objective values after normalization do not reflect their original values. From Figure 1, JRPC enjoys lower variance and more stable training compared to JDV and JNWJ.\nMinibatch Size Experimental Setup We perform experiments on the effect of batch size on downstream performances for different objectives. The experiments are performed using SimCLRv2 (Chen et al., 2020c) on the CIFAR-10 dataset, as well as the model from Rivière et al. (2020) on the LibriSpeech-100h dataset (Panayotov et al., 2015). For the vision task, we use the default temperature τ = 0.5 from Chen et al. (2020c) and the hidden normalization mentioned in Section 3 for JCPC. For JRPC in the vision and speech tasks we use a temperature of τ = 128 and τ = 16 respectively, both without hidden normalization." }, { "heading": "A.11 MUTUAL INFORMATION ESTIMATION", "text": "Our method is compared with the baseline methods CPC (Oord et al., 2018), NWJ (Nguyen et al., 2010), JSD (Nowozin et al., 2016), and SMILE (Song & Ermon, 2019). All the approaches consider the same design of f(x, y), which is a 3-layer neural network taking the concatenated (x, y) as the input. We also fix the learning rate, the optimizer, and the minibatch size across all the estimators for a fair comparison.\nWe present the results of mutual information estimation by Relative Predictive Coding using different sets of relative parameters in Figure 4. In the first row, we set β = 10−3, γ = 1, and experiment with different\nα values. In the second row, we set α = 1, γ = 1, and in the last row we set α = 1, β = 10−3. From the figure, a small β around 10−3 and a large γ around 1.0 are crucial for an estimation that has relatively low bias and low variance. This conclusion is consistent with Section 3 in the main text.\nWe also performed a comparison between JRPC and Difference of Entropies (DoE) (McAllester & Stratos, 2020).
We performed two sets of experiments: in the first set we compare JRPC and DoE when the MI is large (> 100 nats), while in the second set we compare JRPC and DoE using the setup in this section (MI < 12 nats, increasing by 2 per 4k training steps). On the one hand, when the MI is large (> 100 nats), we acknowledge that DoE performs well on MI estimation, whereas JRPC only estimates the MI to be around 20. This analysis is based on the code from https://github.com/karlstratos/doe. On the other hand, when the true MI is small, the DoE method is more unstable than JRPC, as shown in Figure 5. Figure 5 illustrates the results of the DoE method when the distribution is isotropic Gaussian (correctly specified) or Logistic (mis-specified). Figure 3 only shows the results using the Gaussian." } ]
2021
SELF-SUPERVISED REPRESENTATION LEARNING WITH RELATIVE PREDICTIVE CODING
SP:3e812bc034c95a7141296dd879217ce10d01065a
[ "Inspired by gradient-based NAS of single-path formulation, the authors propose a super-bit model, a single-path method, to decide the optimal number of quantization bits and pruning of a group of filters. While it can be a time-consuming process to study the impact of quantization of certain filters (or layers) on model accuracy, the proposed scheme finds a particular compression configuration in a trainable manner. The experimental results show that the proposed method presents higher model accuracy or lower computational cost (measured as the bit-operation count).", "The paper describes the method to determine optimal quantization bit-width and pruning configuration for the neural network compression. Different from other approaches, the proposed method integrates multiple bit configurations (including pruning) into a single architecture, which is named “Super-bit”. The architecture uses binary gates to automatically select bit resolution. In addition, the super-bit model is differentiable and jointly trainable with parameters." ]
We present Automatic Bit Sharing (ABS) to automatically search for optimal model compression configurations (e.g., pruning ratio and bitwidth). Unlike previous works that consider model pruning and quantization separately, we seek to optimize them jointly. To deal with the resultant large designing space, we propose a novel super-bit model, a single-path method, to encode all candidate compression configurations, rather than maintaining separate paths for each configuration. Specifically, we first propose a novel decomposition of quantization that encapsulates all the candidate bitwidths in the search space. Starting from a low bitwidth, we sequentially consider higher bitwidths by recursively adding reassignment offsets. We then introduce learnable binary gates to encode the choice of bitwidth, including filter-wise 0-bit for pruning. By jointly training the binary gates in conjunction with network parameters, the compression configurations of each layer can be automatically determined. Our ABS brings two benefits for model compression: 1) It avoids the combinatorially large design space, with a reduced number of trainable parameters and search costs. 2) It also averts directly fitting an extremely low bit quantizer to the data, hence greatly reducing the optimization difficulty due to the non-differentiable quantization. Experiments on CIFAR-100 and ImageNet show that our methods achieve significant computational cost reduction while preserving promising performance.
[]
[ { "authors": [ "Yu Bai", "Yu-Xiang Wang", "Edo Liberty" ], "title": "Proxquant: Quantized neural networks via proximal operators", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Yash Bhalgat", "Jinwon Lee", "Markus Nagel", "Tijmen Blankevoort", "Nojun Kwak" ], "title": "Lsq+: Improving low-bit quantization through learnable offsets and better initialization", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2020 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2019 }, { "authors": [ "Zhaowei Cai", "Nuno Vasconcelos" ], "title": "Rethinking differentiable search for mixed-precision neural networks", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2020 }, { "authors": [ "Zhaowei Cai", "Xiaodong He", "Jian Sun", "Nuno Vasconcelos" ], "title": "Deep learning with low precision by half-wave gaussian quantization", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. 
Recogn.,", "year": 2017 }, { "authors": [ "Yongjian Chen", "Tao Guan", "Cheng Wang" ], "title": "Approximate nearest neighbor search by residual vector quantization", "venue": null, "year": 2010 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Pact: Parameterized clipping activation for quantized neural networks", "venue": "arXiv preprint arXiv:1805.06085,", "year": 2018 }, { "authors": [ "Ruizhou Ding", "Ting-Wu Chin", "Zeye Liu", "Diana Marculescu" ], "title": "Regularizing activation distribution for training binarized deep networks", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Network pruning via transformable architecture search", "venue": "In Proc. Adv. Neural Inf. Process. Syst.,", "year": 2019 }, { "authors": [ "Zhen Dong", "Zhewei Yao", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Hawq: Hessian aware quantization of neural networks with mixed-precision", "venue": "In Proc. IEEE Int. Conf", "year": 2019 }, { "authors": [ "Steven K. Esser", "Jeffrey L. McKinstry", "Deepika Bablani", "Rathinakumar Appuswamy", "Dharmendra S. Modha" ], "title": "Learned step size quantization", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2020 }, { "authors": [ "Yunchao Gong", "Liu Liu", "Ming Yang", "Lubomir Bourdev" ], "title": "Compressing deep convolutional networks using vector quantization", "venue": "arXiv preprint arXiv:1412.6115,", "year": 2014 }, { "authors": [ "Yiwen Guo", "Anbang Yao", "Yurong Chen" ], "title": "Dynamic network surgery for efficient dnns", "venue": "In Proc. Adv. Neural Inf. Process. Syst.,", "year": 2016 }, { "authors": [ "Yong Guo", "Yin Zheng", "Mingkui Tan", "Qi Chen", "Jian Chen", "Peilin Zhao", "Junzhou Huang" ], "title": "Nat: Neural architecture transformer for accurate and compact architectures", "venue": "In Proc. 
Adv. Neural Inf. Process", "year": 2019 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": "In Proc. Eur. Conf. Comp. Vis.,", "year": 2020 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2016 }, { "authors": [ "Yang He", "Ping Liu", "Ziwei Wang", "Zhilan Hu", "Yi Yang" ], "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2019 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Proc. IEEE Int. Conf", "year": 2017 }, { "authors": [ "Yihui He", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Li-Jia Li", "Song Han" ], "title": "Amc: Automl for model compression and acceleration on mobile devices", "venue": "In Proc. Eur. Conf", "year": 2018 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks", "venue": "In Proc. Adv. Neural Inf. 
Process", "year": 2016 }, { "authors": [ "Qing Jin", "Linjie Yang", "Zhenyu Liao" ], "title": "Towards efficient training for neural network quantization", "venue": "arXiv preprint arXiv:1912.10207,", "year": 2019 }, { "authors": [ "Sangil Jung", "Changyong Son", "Seohyung Lee", "Jinwoo Son", "Jae-Joon Han", "Youngjun Kwak", "Sung Ju Hwang", "Changkyu Choi" ], "title": "Learning to quantize deep networks by optimizing quantization intervals with task loss", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Proc. Adv. Neural Inf. Process. Syst.,", "year": 2012 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2017 }, { "authors": [ "Yuhang Li", "Xin Dong", "Wei Wang" ], "title": "Additive powers-of-two quantization: An efficient nonuniform discretization for neural networks", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2020 }, { "authors": [ "Zefan Li", "Bingbing Ni", "Wenjun Zhang", "Xiaokang Yang", "Wen Gao" ], "title": "Performance guaranteed network acceleration via high-order residual quantization", "venue": "In Proc. IEEE Int. Conf. Comp. Vis.,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. 
Recogn.,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proc. IEEE Int. Conf. Comp. Vis.,", "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2019 }, { "authors": [ "Zechun Liu", "Haoyuan Mu", "Xiangyu Zhang", "Zichao Guo", "Xin Yang", "Kwang-Ting Cheng", "Jian Sun" ], "title": "Metapruning: Meta learning for automatic neural network channel pruning", "venue": "In Proc. IEEE Int. Conf. Comp. Vis.,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: stochastic gradient descent with warm restarts", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2017 }, { "authors": [ "Qian Lou", "Lantao Liu", "Minje Kim", "Lei Jiang" ], "title": "Autoqb: Automl for network quantization and binarization on mobile devices", "venue": null, "year": 1902 }, { "authors": [ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ], "title": "Relaxed quantization for discretized neural networks", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2019 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu", "Weiyao Lin" ], "title": "Thinet: A filter level pruning method for deep neural network compression", "venue": "In Proc. IEEE Int. Conf", "year": 2017 }, { "authors": [ "Yurii E Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o (1/kˆ", "venue": "Proceedings of the USSR Academy of Sciences,", "year": 1983 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In Proc. Int. Conf. Mach. 
Learn.,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proc. AAAI Conf. on Arti", "year": 2019 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "Int. J. Comp. Vis.,", "year": 2015 }, { "authors": [ "Hardik Sharma", "Jongse Park", "Naveen Suda", "Liangzhen Lai", "Benson Chau", "Vikas Chandra", "Hadi Esmaeilzadeh" ], "title": "Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network", "venue": "In International Symposium on Computer Architecture,", "year": 2018 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2019 }, { "authors": [ "Frederick Tung", "Greg Mori" ], "title": "Clip-q: Deep network compression learning by in-parallel pruningquantization", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2018 }, { "authors": [ "Stefan Uhlich", "Lukas Mauch", "Fabien Cardinaux", "Kazuki Yoshiyama", "Javier Alonso Garcia", "Stephen Tiedemann", "Thomas Kemp", "Akira Nakamura" ], "title": "Mixed precision dnns: All you need is a good parametrization", "venue": "In Proc. Int. Conf. Learn. Repren.,", "year": 2020 }, { "authors": [ "Mart van Baalen", "Christos Louizos", "Markus Nagel", "Rana Ali Amjad", "Ying Wang", "Tijmen Blankevoort", "Max Welling" ], "title": "Bayesian bits: Unifying quantization and pruning", "venue": "In Proc. Adv. Neural Inf. Process. 
Syst.,", "year": 2020 }, { "authors": [ "Kuan Wang", "Zhijian Liu", "Yujun Lin", "Ji Lin", "Song Han" ], "title": "Haq: Hardware-aware automated quantization with mixed precision", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2019 }, { "authors": [ "Tianzhe Wang", "Kuan Wang", "Han Cai", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Yujun Lin", "Song Han" ], "title": "Apq: Joint search for network architecture, pruning and quantization policy", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2020 }, { "authors": [ "Bichen Wu", "Yanghan Wang", "Peizhao Zhang", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ], "title": "Mixed precision quantization of convnets via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.00090,", "year": 2018 }, { "authors": [ "Haichuan Yang", "Shupeng Gui", "Yuhao Zhu", "Ji Liu" ], "title": "Automatic neural network compression by sparsity-quantization joint learning: A constrained optimization-based approach", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2020 }, { "authors": [ "Shaokai Ye", "Tianyun Zhang", "Kaiqi Zhang", "Jiayu Li", "Jiaming Xie", "Yun Liang", "Sijia Liu", "Xue Lin", "Yanzhi Wang" ], "title": "A unified framework of dnn weight pruning and weight clustering/quantization using admm", "venue": "In Proc. AAAI Conf. on Arti", "year": 2019 }, { "authors": [ "Wang Ying", "Lu Yadong", "Blankevoort Tijmen" ], "title": "Differentiable joint pruning and quantization for hardware efficiency", "venue": "In Proc. Eur. Conf. Comp. Vis.,", "year": 2020 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In Proc. Eur. 
Conf", "year": 2018 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "arXiv preprint arXiv:1606.06160,", "year": 2016 }, { "authors": [ "Bohan Zhuang", "Chunhua Shen", "Mingkui Tan", "Lingqiao Liu", "Ian Reid" ], "title": "Towards effective lowbitwidth convolutional neural networks", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2018 }, { "authors": [ "Bohan Zhuang", "Chunhua Shen", "Mingkui Tan", "Lingqiao Liu", "Ian Reid" ], "title": "Structured binary neural networks for accurate image classification and semantic segmentation", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2019 }, { "authors": [ "Bohan Zhuang", "Lingqiao Liu", "Mingkui Tan", "Chunhua Shen", "Ian Reid" ], "title": "Training quantized neural networks with a full-precision auxiliary module", "venue": "In Proc. IEEE Conf. Comp. Vis. Patt. Recogn.,", "year": 2020 }, { "authors": [ "Zhuangwei Zhuang", "Mingkui Tan", "Bohan Zhuang", "Jing Liu", "Yong Guo", "Qingyao Wu", "Junzhou Huang", "Jinhui Zhu" ], "title": "Discrimination-aware channel pruning for deep neural networks", "venue": "In Proc. Adv. Neural Inf. Process. Syst.,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have achieved great success in many challenging computer vision tasks, including image classification (Krizhevsky et al., 2012; He et al., 2016) and object detection (Lin et al., 2017a;b). However, a deep model usually has a large number of parameters and consumes huge amounts of computational resources, which remains great obstacles for many applications, especially on resource-limited devices with limited memory and computational resources, such as smartphones. To reduce the number of parameters and computational overhead, many methods (He et al., 2019; Zhou et al., 2016) have been proposed to conduct model compression by removing the redundancy while maintaining the performance.\nIn the last decades, we have witnessed a lot of model compression methods, such as network pruning (He et al., 2017; 2019) and quantization (Zhou et al., 2016; Hubara et al., 2016). Specifically, network pruning reduces the model size and computational costs by removing redundant modules while network quantization maps the full-precision values to low-precision ones. It has been shown that sequentially perform network pruning and quantization is able to get a compressed network with small model size and lower computational overhead (Han et al., 2016). However, performing pruning and quantization in a separate step may lead to sub-optimal results. For example, the best quantization strategy for the uncompressed network is not necessarily the optimal one after network pruning. 
Therefore, we need to consider performing pruning and quantization simultaneously.\nRecently, many attempts have been made to automatically determine the compression configuration of each layer (i.e., pruning ratios and/or bitwidths), based on reinforcement learning (RL) (Wang et al., 2019), evolutionary search (ES) (Wang et al., 2020), Bayesian optimization (BO) (Tung & Mori, 2018) or differentiable methods (Wu et al., 2018; Dong & Yang, 2019). In particular, previous differentiable methods formulate model compression as a differentiable search problem and explore the search space using gradient-based optimization. As shown in Figure 1(a), each candidate operation is maintained as a separate path, which leads to a huge number of trainable parameters and high computational overhead when the search space becomes combinatorially large. Moreover, due to the non-differentiable quantizer and pruning process, the optimization of heavily compressed candidate networks can be more challenging than that in the conventional search problem.\nIn this paper, we propose a simple yet effective model compression method named Automatic Bit Sharing (ABS) to reduce the search cost and ease the optimization of the compressed candidates. Inspired by recent single-path neural architecture search (NAS) methods (Stamoulis et al., 2019; Guo et al., 2020), the proposed ABS introduces a novel single-path super-bit to encode all effective bitwidths in the search space, instead of formulating each candidate operation as a separate path, as shown in Figure 1(b). Specifically, we build upon the observation that the quantized values of a high bitwidth can share those of low bitwidths under some conditions. Therefore, we are able to decompose the quantized representation into the sum of the lowest-bit quantization and a series of re-assignment offsets. We then introduce learnable binary gates to encode the choice of bitwidth, including filter-wise 0-bit for pruning. 
By jointly training the binary gates and network parameters, the compression ratio of each layer can be automatically determined. The proposed scheme has several advantages. First, we only need to solve the search problem as finding which subset of the super-bit to use for each layer’s weights and activations rather than selecting from different paths. Second, we enforce the candidate bitwidths to share the common quantized values. Hence, we are able to optimize them jointly instead of separately, which greatly reduces the optimization difficulty from the discontinuity of discretization.\nOur main contributions are summarized as follows:\n• We devise a novel super-bit scheme that encapsulates multiple compression configurations in a unified single-path framework. Relying on the super-bit scheme, we further introduce learnable binary gates to determine the optimal bitwidths (including filter-wise 0-bit for pruning). The proposed ABS casts the search problem as subset selection problem, hence significantly reducing the search cost.\n• We formulate the quantized representation as a gated combination of the lowest bitwidth quantization and a series of re-assignment offsets, in which we explicitly share the quantized values between different bitwidths. In this way, we enable the candidate operations to learn jointly rather than separately, hence greatly easing the optimization, especially in the non-differentiable quantization scenario.\n• We evaluate our ABS on CIFAR-100 and ImageNet over various network architectures. Extensive experiments show that the proposed method achieves the state-of-the-art performance. For example, on ImageNet, our ABS compressed MobileNetV2 achieves 28.5× Bit-Operation (BOP) reduction with only 0.2% performance drop on the Top-1 accuracy." }, { "heading": "2 RELATED WORK", "text": "Network quantization. Network quantization represents the weights, activations and even gradients in low-precision to yield compact DNNs. 
With low-precision integers or power-of-two representations, the heavy matrix multiplications can be replaced by efficient bitwise operations, leading to much faster test-time inference and lower power consumption. To improve the quantization performance, current methods either focus on designing accurate quantizers by fitting the quantizer to the data (Jung et al., 2019; Zhang et al., 2018; Choi et al., 2018; Cai et al., 2017), or seek to approximate the gradients due to the non-differentiable discretization (Ding et al., 2019; Louizos et al., 2019; Zhuang et al., 2020). Moreover, most previous works assign the same bitwidth for all layers (Zhou et al., 2016; Zhuang et al., 2018a; 2019; Jung et al., 2019; Jin et al., 2019; Li et al., 2020; Esser et al., 2020). Though attractive for simplicity, setting a uniform precision places no guarantee on optimizing network performance since different layers have different redundancy and arithmetic intensity. Therefore, several studies proposed mixed-precision quantization (Wang et al., 2019; Dong et al., 2019; Wu et al., 2018; Uhlich et al., 2020) to set different bitwidths according to the redundancy of each layer. In this paper, based on the proposed quantization decomposition, we devise an approach that can effectively learn appropriate bitwidths for each layer through gradient-based optimization.\nNAS and pruning. Neural architecture search (NAS) aims to automatically design efficient architectures with low model size and computational costs, either based on reinforcement learning (Pham et al., 2018; Guo et al., 2019), evolutionary search (Real et al., 2019) or gradient-based methods (Liu et al., 2019a). In particular, gradient-based NAS has gained increased popularity, where the search space can be divided into the multi-path design (Liu et al., 2019a; Cai et al., 2019) and single-path formulation (Stamoulis et al., 2019; Guo et al., 2020), depending on whether adding each operation as a separate path or not. 
While prevailing NAS methods optimize the network topology, the focus of this paper is to search optimal compression ratios for a given architecture. Moreover, network pruning can be treated as fine-grained NAS, which aims at removing redundant modules to accelerate the run-time inference speed, giving rise to methods based on unstructured weight pruning (Han et al., 2016; Guo et al., 2016) or structured channel pruning (He et al., 2017; Zhuang et al., 2018b; Luo et al., 2017). Based on channel pruning, our paper further takes quantization into consideration to generate more compact networks.\nAutoML for model compression. Recently, much effort has been put into automatically determining either the optimal pruning rate (Tung & Mori, 2018; Dong & Yang, 2019; He et al., 2018), or the bitwidth (Lou et al., 2019; Cai & Vasconcelos, 2020) of each layer via hyper-parameter search, without relying on heuristics. In particular, HAQ (Wang et al., 2019) employs reinforcement learning to search bitwidth strategies with the hardware accelerator’s feedback. Meta-pruning (Liu et al., 2019b) uses meta-learning to generate the weight parameters of the pruned networks and then adopts an evolutionary search algorithm to find the layer-wise sparsity for channel pruning. More recently, several studies (Wu et al., 2018; Cai & Vasconcelos, 2020) have focused on using differentiable schemes via gradient-based optimization.\nClosely related methods. To further improve the compression ratio, several methods propose to jointly optimize pruning and quantization strategies. In particular, some works only support weight quantization (Tung & Mori, 2018; Ye et al., 2019) or use fine-grained pruning (Yang et al., 2020). However, the resultant networks cannot be implemented efficiently on edge devices. Recently, several methods (Wu et al., 2018; Wang et al., 2020; Ying et al., 2020) have been proposed to consider filter pruning, weight quantization, and activation quantization jointly. 
In contrast to these methods, we carefully design the compression search space by sharing the quantized values between different candidate configurations, which significantly reduces the search cost and eases the optimization. Compared with those methods that share the similarities of using quantized residual errors (Chen et al., 2010; Gong et al., 2014; Li et al., 2017b; van Baalen et al., 2020), our proposed method recursively uses quantized residual errors to decompose a quantized representation as a set of candidate bitwidths and parameterize the selection of optimal bitwidth via binary gates.\nOur proposed ABS and Bayesian Bits (van Baalen et al., 2020) are developed concurrently that share a similar idea of quantization decomposition. Critically, our ABS differs from Bayesian Bits in several aspects: 1) The quantization decomposition in our methods can be extended to non-powerof-two bit widths (i.e., b1 can be set to arbitrary appropriate integer values), which is a general case of the one in Bayesian Bits. 2) The optimization problems are different. Specifically, we formulate model compression as a single-path subset selection problem while Bayesian Bits casts the optimization of the binary gates to a variational inference problem that requires more relaxations and hyperparameters. 3) Our compressed models with less or comparable BOPs outperform those of Bayesian Bits by a large margin on ImageNet (See Table 2)." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 PRELIMINARY: NORMALIZATION AND QUANTIZATION FUNCTION", "text": "Without loss of generality, given a convolutional layer, let x and w be the activations of the last layer and its weight parameters, respectively. 
First, for convenience, following (Choi et al., 2018; Bai et al., 2019), we normalize x and w into the range [0, 1] by Tx and Tw, respectively:\nzx = Tx(x) = clip(x / vx, 0, 1), (1)\nzw = Tw(w) = (clip(w / vw, −1, 1) + 1) / 2, (2)\nwhere the function clip(v, vlow, vup) = min(max(v, vlow), vup) clips any number v into the range [vlow, vup], and vx and vw are trainable quantization intervals that indicate the range of activations and weights to be quantized. Then, we apply the following function to quantize the normalized activations and parameters, namely zx ∈ [0, 1] and zw ∈ [0, 1], to discretized ones:\nD(z, s) = s · round(z / s), (3)\nwhere round(·) returns the nearest integer of a given value and s denotes the normalized step size. Typically, for k-bit quantization, the normalized step size s is computed by\ns = 1 / (2^k − 1). (4)\nAfter k-bit quantization, we shall have 2^k − 1 quantized values. Specifically, we obtain the quantizations Q(w) and Q(x) by\nQ(w) = Tw^{−1}(D(zw, s)) = vw · (2 · D(zw, s) − 1), (5)\nQ(x) = Tx^{−1}(D(zx, s)) = vx · D(zx, s), (6)\nwhere Tw^{−1} and Tx^{−1} denote the inverse functions of Tw and Tx, respectively." }, { "heading": "3.2 BIT SHARING DECOMPOSITION", "text": "Previous methods consider different compression configurations as different paths and reformulate model compression as a path selection problem, which gives rise to a huge number of trainable parameters and high computational costs. In this paper, we seek to conduct filter pruning and quantization simultaneously by solving the following problem:\nmin_{W, α^p, α^q} L(W, α^p, α^q), (7)\nwhere L(·) denotes some loss, W denotes the parameters of the network, and α^p and α^q are the pruning and quantization configurations, respectively. As shown in Eq. (7), we propose to encode all compression configurations in a single-path super-bit model (see Figure 1(b)). 
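For concreteness, the normalization and quantization functions of Section 3.1 (Eqs. (1)–(6)) can be sketched in a few lines. The scalar implementation below is an illustrative reading of those equations, not the authors' code; the function names are ours.

```python
import numpy as np

def normalize_act(x, vx):
    # Eq. (1): clip activations into [0, 1] using the trainable interval vx.
    return np.clip(x / vx, 0.0, 1.0)

def normalize_weight(w, vw):
    # Eq. (2): map weights into [0, 1] using the trainable interval vw.
    return 0.5 * (np.clip(w / vw, -1.0, 1.0) + 1.0)

def discretize(z, s):
    # Eq. (3): round-to-nearest quantizer with normalized step size s.
    return s * np.round(z / s)

def quantize_weight(w, vw, k):
    # Eqs. (4)-(5): k-bit weight quantization Q(w).
    s = 1.0 / (2 ** k - 1)
    return vw * (2.0 * discretize(normalize_weight(w, vw), s) - 1.0)

def quantize_act(x, vx, k):
    # Eqs. (4), (6): k-bit activation quantization Q(x).
    s = 1.0 / (2 ** k - 1)
    return vx * discretize(normalize_act(x, vx), s)
```

For example, with k = 2 the normalized grid has step size 1/3, so `quantize_weight(0.5, 1.0, 2)` snaps to 1/3.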
In the following, we first introduce the bit sharing decomposition and then describe how to learn for compression.\nTo illustrate the bit sharing decomposition, we begin with an example of 2-bit quantization for z ∈ {zx, zw}. Specifically, we consider using the following equation to quantize z to 2-bit:\nz2 = D(z, s2), s2 = 1 / (2^2 − 1), (8)\nwhere z2 and s2 are the quantized value and the step size of 2-bit quantization, respectively. Due to the large step size, the residual error z − z2 ∈ [−s2/2, s2/2] may be large and result in a significant performance drop. To reduce the residual error, an intuitive way is to use a smaller step size, which means quantizing z to a higher bitwidth. Since the step size s4 = 1/(2^4 − 1) of 4-bit quantization is a divisor of the step size s2 of 2-bit quantization, the quantized values of 2-bit quantization are among those of 4-bit quantization. In fact, relative to 2-bit quantization, the 4-bit counterpart introduces additional unshared quantized values. In particular, if z2 has zero residual error, then 4-bit quantization maps z to a shared quantized value (i.e., z2). In contrast, if z2 has non-zero residual error, 4-bit quantization is likely to map z to an unshared quantized value. In this case, 4-bit quantization can be regarded as performing quantized value re-assignment based on z2. Such a re-assignment process can be formulated as\nz4 = z2 + ε4, (9)\nwhere z4 is the 4-bit quantized value and ε4 is the re-assignment offset based on z2. To ensure that the results of re-assignment fall onto the unshared quantized values, the re-assignment offset ε4 must be an integer multiple of the 4-bit step size s4. Formally, ε4 can be computed by performing 4-bit quantization on the residual error of z2:\nε4 = D(z − z2, s4), s4 = s2 / (2^2 + 1) = 1 / (2^4 − 1). (10)\nTherefore, according to Eq. (9), a 4-bit quantized value can be decomposed into the 2-bit representation and its re-assignment offset. 
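The 2-bit/4-bit example can be checked numerically. The sketch below is an illustration of Eqs. (8)–(10) under our own naming (the offset symbol is elided in the extracted text, so `delta4` is our label for it); it also verifies that the gated sum recovers direct 4-bit quantization.

```python
import numpy as np

def D(z, s):
    # Round-to-nearest quantizer with step size s (Eq. (3)).
    return s * np.round(z / s)

def decompose_4bit(z):
    """Eqs. (8)-(10): express a 4-bit quantized value as the 2-bit
    quantization plus a re-assignment offset on the residual error."""
    s2 = 1.0 / (2 ** 2 - 1)   # 2-bit step size, 1/3
    s4 = s2 / (2 ** 2 + 1)    # 4-bit step size, 1/15
    z2 = D(z, s2)             # shared 2-bit quantized value
    delta4 = D(z - z2, s4)    # offset quantized from the residual error
    return z2, delta4

z = 0.37
z2, delta4 = decompose_4bit(z)
# z2 + delta4 coincides exactly with the direct 4-bit quantization D(z, 1/15).
```

Because 1/3 is an integer multiple of 1/15, the 2-bit grid point plus the quantized residual always lands on the 4-bit grid.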
Similarly, an 8-bit quantized value can also be decomposed into the 4-bit representation and its corresponding re-assignment offset. In this way, we can generalize the idea of decomposition to arbitrary effective bitwidths as follows.\nDefinition 1 (Quantization decomposition) Let z ∈ [0, 1] be a normalized full-precision input and {b1, ..., bK} be a sequence of candidate bitwidths with b1 < b2 < · · · < bK−1 < bK. We use the following quantized ẑ to approximate z:\nẑ = z_{b1} + Σ_{j=2}^{K} ε_{bj}, where ε_{bj} = D(z − z_{bj−1}, s_{bj}), s_{bj} = s_{bj−1} / (2^{bj−1} + 1) = 1 / (2^{bj} − 1). (11)\nIn other words, the quantized approximation ẑ can be decomposed into the sum of the lowest-bit quantization and a series of recursive re-assignment offsets. In Definition 1, to enable quantized value re-assignment, we need to constrain s_{bj−1} to be divisible by s_{bj}, which requires the bitwidths bj (j > 1) to satisfy the relation\nbj = 2^{j−1} · b1. (12)\nIn fact, the bitwidth b1 can be set to arbitrary appropriate integer values (e.g., 1, 2, 3, etc.). To obtain a hardware-friendly compressed network1, we set b1 to 2, which ensures that all the decomposition bitwidths are powers of two. Moreover, since 8-bit quantization achieves lossless performance compared with the full-precision counterpart (Zhou et al., 2016), we only consider candidate bitwidths that are not greater than 8-bit. In other words, we constrain the value of j to [1, 3].\nRemark 1 The proposed bit sharing decomposition has several advantages. First, the proposed method only needs to maintain a small number of trainable parameters, which greatly reduces the computational costs during search. Second, we are able to directly extract a low-precision representation from its higher-precision counterpart, which allows different bitwidths to be optimized jointly and eases the discontinuous optimization caused by quantization. 
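The recursive decomposition of Definition 1 can be sketched for the default bitwidth set {2, 4, 8}. This is an illustrative scalar version under our own naming; partial sums of the returned terms give the 2-, 4- and 8-bit approximations.

```python
import numpy as np

def D(z, s):
    # Round-to-nearest quantizer with step size s (Eq. (3)).
    return s * np.round(z / s)

def super_bit_decompose(z, bitwidths=(2, 4, 8)):
    """Eq. (11): decompose z into the lowest-bit quantization and a series
    of recursive re-assignment offsets, one term per candidate bitwidth.
    Requires b_j = 2^(j-1) * b_1 (Eq. (12)) so that each step size divides
    the previous one and all grids are nested."""
    terms = []
    acc = 0.0
    for b in bitwidths:
        s = 1.0 / (2 ** b - 1)
        t = D(z - acc, s)   # quantize the residual of the running sum
        terms.append(t)
        acc += t
    return terms
```

With all terms kept, the sum equals direct quantization at the highest bitwidth; truncating the sum early yields the lower-bit approximations for free, which is exactly the sharing property the method exploits.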
}, { "heading": "3.3 LEARNING FOR COMPRESSION", "text": "Note that different layers have different levels of redundancy, which indicates that different layers may choose different subsets of the quantized values. To learn the quantized approximation for each layer, we introduce a layer-wise binary quantization gate gqbj ∈ {0, 1} on each of the re-assignment offsets in Eq. (11) to encode the choice of the quantization bitwidth, which can be formulated as\ngqbj = 1 ( ||z − zbj−1 || − α q bj > 0 ) ,\nẑ = zb1 + g q b2 ( b2 + · · ·+ g q bK−1 ( bK−1 + g q bK bK )) ,\n(13)\n1More details can be found in Appendix A.\nwhere 1(·) is the indicator function and αqbj is a layer-wise threshold that controls the choice of bitwidth. Specifically, if the quantization error ||z − zbj−1 || is greater than the threshold α q bj\n, we activate the corresponding quantization gate to increase the bitwidth so that the residual error can be reduced, and vice versa.\nNote that from Eq. (13), we can consider the filter pruning as 0-bit filter-wise quantization. To avoid the prohibitively large filter-wise search space, we propose to divide the filters into groups based on indexes and consider the group-wise sparsity instead. To be specific, we introduce a binary gate gpc for each group to encode the choice of pruning, which can be formulated as follows:\ngpc = 1(||wc|| − αp > 0), ẑc = g p c · ( zc,b1 + g q b2 ( c,b2 + · · ·+ g q bK−1 ( c,bK−1 + g q bK c,bK ) )) ,\n(14)\nwhere ẑc is the c-th group of quantized filters and c,bj is the corresponding re-assignment offset by quantizing the residual error zc − zc,bj−1 . Here, αp is a layer-wise threshold for filter pruning. Following PFEC (Li et al., 2017a), we use `1-norm to evaluate the importance of the filter. Specifically, if a group of filters is important, the corresponding pruning gate will be activated and vice versa.\nNote that both quantization and pruning have their corresponding thresholds. 
Instead of manually setting the thresholds, we propose to learn them via gradient descent. However, the indicator function in Eq. (13) is non-differentiable. To address this, we use the straight-through estimator (STE) (Bengio et al., 2013; Zhou et al., 2016) to approximate the gradient of the indicator function 1(·) by the gradient of the sigmoid function σ(·), which can be formulated as\n∂g/∂α = ∂1(A − α)/∂α ≈ ∂σ(A − α)/∂α = −σ(A − α)(1 − σ(A − α)), (15)\nwhere g is the output of a binary gate, α ∈ {α^p, α^q} is the corresponding threshold, and A denotes the relevant metric (i.e., the ℓ1-norm of the filter or the quantization error). By jointly training the binary gates and the network parameters, the pruning ratio and bitwidth of each layer can be automatically determined. However, the gradient approximation of the binary gate inevitably introduces noisy signals, which can be even more severe when we quantize both weights and activations. Thus, we propose to train the binary gates of weights and activations in an alternative manner: when training the binary gates of weights, we fix the binary gates of activations, and vice versa.\nSearch Space for Model Compression. Given an uncompressed network with L layers, we use Cl to denote the number of filters at the l-th layer. To obtain the compressed model, we first divide the filters of each layer into groups and then search for the optimal bitwidths of each layer. Let B be the number of filters in a group. For any layer l, there are ⌊Cl / B⌋ groups in total. Since we quantize both weights and activations, given K candidate bitwidths, there are K^2 different quantization configurations for each layer. Thus, for the whole network with L layers, the size of the search space Ω is\n|Ω| = ∏_{l=1}^{L} (K^2 × ⌊Cl / B⌋). (16)\nEq. (16) indicates that the search space is large enough to cover the potentially good configurations. 
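The STE of Eq. (15) pairs a hard forward pass with a soft backward surrogate. A minimal sketch of both halves (our own function names; a real implementation would register the surrogate as a custom backward rule in an autograd framework):

```python
import numpy as np

def gate_forward(A, alpha):
    # Forward: hard binary gate g = 1(A - alpha > 0).
    return float(A - alpha > 0)

def gate_grad_alpha(A, alpha):
    """Eq. (15): backward surrogate — the gradient of sigma(A - alpha)
    with respect to the threshold alpha, used in place of the (zero
    almost everywhere) gradient of the indicator function."""
    sig = 1.0 / (1.0 + np.exp(-(A - alpha)))
    return -sig * (1.0 - sig)
```

The surrogate gradient peaks at −0.25 exactly where A equals the threshold, so the thresholds receive the strongest learning signal near the decision boundary of the gate.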
To design a hardware-efficient network, the objective function in Eq. (7) should reflect both the accuracy of the compressed network and its computational costs. Following (Cai et al., 2019), we train the network and architecture by minimizing the following loss function:\nL(W, α^p, α^q) = Lce(W, α^p, α^q) + λ log R(W, α^p, α^q), (17)\nwhere Lce(·) is the cross-entropy loss, R(·) measures the computational costs of the network, and λ is a balancing hyper-parameter. Following single-path NAS (Stamoulis et al., 2019), we use a similar formulation of the computational costs to preserve the differentiability of the objective function. The details of the differentiable computational loss can be found in Appendix B. Once the training is finished, we obtain the compressed network by selecting the filters and bitwidths with activated binary gates. Then, we fine-tune the compressed network to compensate for the accuracy loss." }, { "heading": "4 EXPERIMENTS", "text": "Compared methods. To investigate the effectiveness of the proposed method, we consider the following methods for comparison: ABS: our proposed method with joint pruning and quantization; ABS-Q: ABS with quantization only; ABS-P: ABS with pruning only; and several state-of-the-art model compression methods, including HAQ (Wang et al., 2019), DQ (Uhlich et al., 2020), DJPQ (Ying et al., 2020), Bayesian Bits (van Baalen et al., 2020) and DNAS (Wu et al., 2018). We measure the performance of different methods in terms of the Top-1 and Top-5 accuracy. Following (Guo et al., 2020; Ying et al., 2020), we measure the computational costs by the Bit-Operation (BOP) count. The BOP compression ratio is defined as the ratio between the total BOPs of the uncompressed and compressed models. We can also measure the computational costs by the total weight and activation memory footprints, following DQ (Uhlich et al., 2020). 
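As a concrete illustration of the BOP metric, a per-layer count can be sketched as below. The MACs-times-bitwidths definition used here follows common usage in mixed-precision work (e.g., DJPQ-style metrics) and is an assumption on our part, since this section does not spell out the exact formula; the function names are ours.

```python
def conv_bops(c_in, c_out, k, h_out, w_out, bw, ba):
    """Bit-operations of a conv layer: multiply-accumulate count scaled
    by the weight bitwidth bw and activation bitwidth ba (assumed
    definition, commonly used for mixed-precision cost accounting)."""
    macs = c_in * c_out * k * k * h_out * w_out
    return macs * bw * ba

def bop_compression_ratio(full_bops, compressed_bops):
    # Ratio between the total BOPs of the uncompressed and compressed models.
    return full_bops / compressed_bops
```

Under this definition, quantizing a layer from 32/32-bit to 4/4-bit weights/activations alone reduces its BOPs by a factor of 64, which is why joint pruning and low-bit quantization can reach the large BOP reductions reported in Section 4.1.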
Moreover, following (Stamoulis et al., 2019; Liu et al., 2019a), we use the search cost to measure the time needed to find an optimal compressed model.\nImplementation details. Following HAQ (Wang et al., 2019), we quantize all the layers, where the first and the last layers are quantized to 8-bit. Following ThiNet (Luo et al., 2017), we only conduct filter pruning for the first layer in each residual block. For ResNet-20 and ResNet-56 on CIFAR-100 (Krizhevsky et al., 2009), we set B to 4. For ResNet-18 and MobileNetV2 on ImageNet (Russakovsky et al., 2015), B is set to 16 and 8, respectively. We first train the full-precision models and then use the pretrained weights to initialize the compressed models. Following (Li et al., 2020; Esser et al., 2020), we introduce weight normalization during training. We use SGD with Nesterov momentum (Nesterov, 1983) for optimization, with a momentum of 0.9. For CIFAR-100, we use the same data augmentation as in (He et al., 2016), including translation and horizontal flipping. For ImageNet, images are resized to 256 × 256, and then a 224 × 224 patch is randomly cropped from an image or its horizontal flip for training. For testing, a 224 × 224 center crop is used. We first train the uncompressed network for 30 epochs on CIFAR-100 and 10 epochs on ImageNet.\nThe learning rate is set to 0.001. We then fine-tune the searched compressed network to recover the performance drop. On CIFAR-100, we train the searched network for 200 epochs with a mini-batch size of 128. The learning rate is initialized to 0.1 and is divided by 10 at the 80-th and 120-th epochs. Experiments on CIFAR-100 are repeated 5 times and we report the mean and standard deviation. For ResNet-18 on ImageNet, we fine-tune the searched network for 90 epochs with a mini-batch size of 256. For MobileNetV2 on ImageNet, we fine-tune for 150 epochs. For all models on ImageNet, the learning rate starts at 0.01 and decays with cosine annealing (Loshchilov & Hutter, 2017)."
}, { "heading": "4.1 MAIN RESULTS", "text": "We apply the proposed methods to compress ResNet-20 and ResNet-56 on CIFAR-100 and ResNet-18 and MobileNetV2 on ImageNet. We compare the performance of different methods in Table 1 and Table 2. We also show the results of the compressed ResNet-56 with different BOPs and memory footprints in Figure 2. From the results, we can see that 4-bit quantized networks achieve lossless performance. Also, 6-bit MobileNetV2 only leads to a 0.1% drop in the Top-1 accuracy. Compared with fixed-precision quantization, mixed-precision methods are able to reduce the BOPs while preserving the performance. Critically, our proposed ABS-Q outperforms the state-of-the-art baselines with lower computational costs. Specifically, ABS-Q compressed ResNet-18 outperforms the one compressed by HAQ while achieving a larger BOP reduction. More critically, our proposed ABS achieves significant improvements in terms of BOPs and memory footprints. For example, in Figure 2(a), our ABS compressed ResNet-56 model yields much fewer BOPs (395.25 vs. 536.24) but achieves performance comparable to the fixed-precision counterpart. Moreover, by combining pruning and quantization, ABS achieves nearly lossless performance while further reducing the computational costs of ABS-Q. For example, ABS compressed ResNet-18 reduces the BOPs by 57.5× while still outperforming the full-precision network by 0.1% in terms of the Top-1 accuracy on ImageNet." }, { "heading": "4.2 FURTHER STUDIES", "text": "Effect of the bit-sharing scheme. To investigate the effect of the bit-sharing scheme, we apply our methods to quantize ResNet-20 and ResNet-56 with and without the bit sharing scheme on CIFAR-100. We report the testing accuracy and BOPs in Table 3. We also present the search costs and consumed GPU memory measured on a GPU device (NVIDIA TITAN Xp).
It can be seen from the results that the method with the bit sharing scheme consistently outperforms the one without the bit sharing scheme while significantly reducing the search cost and GPU memory.\nEffect of the one-stage compression. To investigate the effect of the one-stage compression scheme (performing pruning and quantization jointly), we extend ABS to two-stage optimization, where we sequentially perform filter pruning and quantization, denoted as ABS-P→ABS-Q. The results are shown in Table 4. Compared with the two-stage counterpart, ABS achieves better performance with lower computational costs, which shows the superiority of the one-stage optimization. For example, ABS compressed ResNet-56 outperforms the counterpart by 0.4% on the Top-1 accuracy with less computational overhead.\nEffect of the alternative training scheme. To investigate the effect of the alternative training scheme introduced in Section 3.3, we apply our method to compress ResNet-56 using a joint training scheme and an alternative training scheme on CIFAR-100. Here, the joint training scheme means that we train the binary gates of weights and activations jointly. From the results in Table 5, the model trained with the alternative scheme achieves better performance than the joint one, which demonstrates the effectiveness of the alternative training scheme.\nResource-constrained compression. To demonstrate the effectiveness of our ABS on hardware devices, we further apply our methods to compress MobileNetV2 under resource constraints on the BitFusion architecture (Sharma et al., 2018). Instead of using BOPs, we use the latency and energy on a simulator of BitFusion to measure the computational costs. We report the results in Table 6. Compared with fixed-precision quantization, ABS achieves better performance with lower latency and energy. Specifically, ABS compressed MobileNetV2, with much lower latency and energy, even outperforms 6-bit MobileNetV2 by 0.2% in the Top-1 accuracy."
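Throughout these experiments the compression policy is selected by learnable binary gates. A standard way to train such hard gates with gradient descent is the straight-through estimator (STE); whether ABS uses STE or another relaxation is not stated in this excerpt, so the toy sketch below only illustrates the generic idea (all names and values are ours):

```python
def ste_gate(theta):
    """Hard binary gate; the straight-through estimator treats
    d(gate)/d(theta) as 1 during backprop, so the underlying parameter
    theta still receives a learning signal despite the non-differentiable
    thresholding in the forward pass."""
    return 1.0 if theta > 0 else 0.0

# Toy demo: push a gate towards 'on' by minimizing (gate - 1)^2 with STE.
theta, lr = -0.5, 0.3
for _ in range(10):
    g = ste_gate(theta)
    grad_g = 2 * (g - 1.0)       # d loss / d gate
    theta -= lr * grad_g * 1.0   # STE: d gate / d theta := 1
```

Once theta crosses zero the gate saturates at 1 and the loss gradient vanishes, so training leaves it on.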
}, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we have proposed a novel model compression method called Automatic Bit Sharing (ABS). Specifically, our ABS is based on the observation that the quantized values of a higher bitwidth share those of lower bitwidths under some constraints. We have therefore proposed a decomposition of quantization that encapsulates all candidate bitwidths. Starting from a low bitwidth in the search space, we sequentially increase the effective bitwidth by recursively adding re-assignment offsets. Based on this, we have further introduced learnable binary gates to encode the choice of different compression policies. By training the binary gates, the optimal compression ratio of each layer can be automatically determined. Experiments on CIFAR-100 and ImageNet have shown that our methods are able to achieve significant cost reductions while preserving the performance. In the future, we plan to work on a joint search for architecture, pruning, and quantization to find a compact model with better performance." }, { "heading": "Appendix for ABS: Automatic Bit Sharing for Model Compression", "text": "" }, { "heading": "A HARDWARE-FRIENDLY DECOMPOSITION", "text": "As mentioned in Sec. 3.2, b1 can be set to any appropriate integer value (e.g., 1, 2, 3, etc.). By default, we set b1 = 2 for better hardware utilization. On general purpose computing devices (e.g., CPU, GPU), the byte (8 bits) is the smallest data type for operations. Other data types and ALU registers are all multiple bytes wide. By setting b1 = 2, 2-bit/4-bit/8-bit quantized values can be packed into a byte (or short, int, long) data type without wasting bits. Otherwise, if b1 = 1 or b1 = 3, bits are inevitably wasted when packing mixed-precision quantized tensors on general purpose devices. For example, one 32-bit int data type can be used to store ten 3-bit quantized values with 2 bits wasted.
One might argue that these 2 bits can be leveraged with the next group of 3-bit data, but this would result in irregular memory access patterns, which would degrade hardware utilization even more seriously. Moreover, 8-bit quantization has been demonstrated to achieve performance similar to the full-precision counterparts for many networks. Therefore, there is no need to consider a bitwidth larger than 8." }, { "heading": "B FORMULATION OF DIFFERENTIABLE COMPUTATIONAL LOSS", "text": "In this section, we introduce the differentiable computational loss mentioned in Section 3.3. Unlike the cross-entropy loss in Eq. (17), the computational cost R(W, αp, αq) is non-differentiable. To solve this issue, we model the computational costs as a function of the binary gates as:\nR(W, αp, αq) = Σ_{c=1}^{G} g^p_c (R_{x_c,b_1} + g^q_{b_2} (R_{x_c,b_2} − R_{x_c,b_1} + · · · + g^q_{b_K} (R_{x_c,b_K} − R_{x_c,b_{K−1}}))), (18)\nwhere R_{x_c,b_j} is the computational cost for the c-th group of filters with b_j-bit quantization and G is the number of groups in total." }, { "heading": "C QUANTIZATION CONFIGURATIONS", "text": "All the methods in Tables 1 and 2 use layer-wise and symmetric quantization schemes, and the compared methods strictly follow the quantization configurations in their original papers. Specifically, for DQ (Uhlich et al., 2020), we parameterize the fixed-point quantizer using case U3 with θ = [d, qmax]. We initialize the weights using a pre-trained model. The initial step size is set to d = 2^⌊log2(max(|W|)/(2^(b−1)−1))⌋ for weights and 2^−3 for activations. The remaining quantization parameters are set such that the initial bitwidth is 4-bit. For HAQ (Wang et al., 2019), we first truncate the weights and activations into the ranges [−vw, vw] and [0, vx], respectively. We then perform linear quantization for both weights and activations. To find suitable vw and vx, we minimize the KL-divergence between the original weight distribution W and the quantized weight distribution Q(W).
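The nested gated sum in Eq. (18) of Appendix B telescopes: with monotonic binary gates, a group whose gates are on up to bitwidth b_j contributes exactly R_{x_c,b_j}. A minimal sketch of this gated cost (function and variable names are ours):

```python
def group_cost(gate_prune, gates_bits, costs):
    """Gated cost of one filter group, following the structure of Eq. (18).

    gate_prune: binary pruning gate g^p_c (0 removes the group entirely).
    gates_bits: binary gates [g^q_{b_2}, ..., g^q_{b_K}] that successively
                enable higher bitwidths.
    costs: [R_{b_1}, ..., R_{b_K}], the group's cost at each candidate bitwidth.
    """
    r = costs[0]
    prefix = 1.0
    for g, (lo, hi) in zip(gates_bits, zip(costs, costs[1:])):
        prefix *= g            # a later bitwidth requires all earlier gates on
        r += prefix * (hi - lo)
    return gate_prune * r

# With costs [10, 20, 35]: all gates on -> 35; only g_{b_2} on -> 20;
# g_{b_2} off -> 10 regardless of later gates; pruned group -> 0.
```

The total cost R is then the sum of `group_cost` over all G groups, and every term is differentiable in the (relaxed) gates.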
For DNAS (Wu et al., 2018), we follow DoReFa-Net (Zhou et al., 2016) to quantize weights and follow PACT (Choi et al., 2018) to quantize activations. We initialize the learnable upper bound to 1. For DJPQ (Ying et al., 2020) and Bayesian Bits (van Baalen et al., 2020), we take the results directly from the original papers. For other methods in Tables 1 and 2, we use the quantization function introduced in Section 3.1. The trainable quantization intervals vx and vw are initialized to 1." }, { "heading": "D SEARCH COST COMPARISONS", "text": "To evaluate the efficiency of the proposed ABS, we compare the search cost of different methods and report the results in Table 7. From the results, the search cost of the proposed ABS is much smaller than that of the state-of-the-art methods. Moreover, compared with ABS-Q, ABS only introduces a small amount of computational overhead, which demonstrates the efficiency of the proposed methods." }, { "heading": "E MORE RESULTS ON MEMORY FOOTPRINTS", "text": "To further demonstrate the effectiveness of the proposed ABS, we replace BOPs with the total weights and activations memory footprints (Uhlich et al., 2020). We apply different methods to compress ResNet-56 and report the results in Table 8. From the results, ABS compressed ResNet-56 outperforms other methods with a smaller memory footprint. These results show the effectiveness of our proposed ABS in terms of memory footprint reduction." }, { "heading": "F DETAILED STRUCTURE OF THE COMPRESSED NETWORK", "text": "We illustrate the pruning rate and bitwidth of each layer’s weights and activations of the compressed ResNet-18 and MobileNetV2 in Figure 3 and Figure 4, respectively. From the results, we observe that our ABS assigns higher bitwidths to the weights in the downsampling convolutional layers of ResNet-18 and the depthwise convolutional layers of MobileNetV2. Intuitively, this is because the number of parameters of these layers is much smaller than that of other layers.
Moreover, our ABS tends to prune more filters in the shallower layers, which can significantly reduce the number of parameters and the computational overhead. Finally, we also observe the following correlation between the bitwidth and the pruning rate. If a layer is set to a high pruning rate, our ABS tends to select a higher bitwidth to compensate for the performance drop. In contrast, if a layer has a low pruning rate, our ABS tends to select a lower bitwidth to reduce the model size and computational costs." }, { "heading": "G MORE RESULTS ON MOBILENETV3", "text": "To evaluate the proposed ABS on lightweight models, we apply our methods to MobileNetV3 on CIFAR-100. Following LSQ+ (Bhalgat et al., 2020), we introduce a learnable offset to handle the negative activations in hard-swish. We show the results in Table 9. On MobileNetV3, our proposed ABS still outperforms the compared methods, which demonstrates its effectiveness." } ]
2020
null
SP:1dade549a14dc9c41f2d16be1e405be113c611fd
[ "The paper explores the application of modern learning techniques (transformers and tree positional encodings) to the task of producing valid traces for a given LTL specification. A series of experiments are conducted to explore the generalization power of the proposed approach, both in terms of formula size and style of constraint. The work follows a recent trend of the application of deep learning techniques to logic-based settings, and the authors present (to the best of my knowledge) the first attempt of applying deep learning techniques to this particular task.", "The paper presents multiple dataset generation and testing procedures for linear temporal logic and propositional logic satisfiability. They are then used to train Transformers with tree positional encoding. The approach amounts to imitation learning based on existing solvers for satisfiability for the considered logics. In contrast to previous work, the approach supports logical formulas of arbitrary shape. The experiments demonstrate successful generalization for multiple approaches to dataset generation." ]
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics. In this work we focus on linear-time temporal logic (LTL), as it is widely used in verification. We train a Transformer on the problem to directly predict a solution, i.e. a trace, to a given LTL formula. The training data is generated with classical solvers, which, however, only provide one of many possible solutions to each formula. We demonstrate that it is sufficient to train on those particular solutions to formulas, and that Transformers can predict solutions even to formulas from benchmarks from the literature on which the classical solver timed out. Transformers also generalize to the semantics of the logics: while they often deviate from the solutions found by the classical solvers, they still predict correct solutions to most formulas.
[ { "affiliations": [], "name": "Christopher Hahn" }, { "affiliations": [], "name": "Frederik Schmitt" }, { "affiliations": [], "name": "Markus N. Rabe" } ]
[ { "authors": [ "Miltiadis Allamanis", "Marc Brockschmidt", "Mahmoud Khademi" ], "title": "Learning to represent programs with graphs", "venue": "arXiv preprint arXiv:1711.00740,", "year": 2017 }, { "authors": [ "Gilles Audemard", "Laurent Simon" ], "title": "On the glucose", "venue": "SAT solver. Int. J. Artif. Intell. Tools,", "year": 2018 }, { "authors": [ "Mislav Balunovic", "Pavol Bielik", "Martin Vechev" ], "title": "Learning to solve smt formulas", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Kshitij Bansal", "Sarah M Loos", "Markus N Rabe", "Christian Szegedy", "Stewart Wilcox" ], "title": "HOList: An environment for machine learning of higher-order theorem proving", "venue": "In arXiv preprint arXiv:1904.03241,", "year": 2019 }, { "authors": [ "S. Bhatia", "P. Kohli", "R. Singh" ], "title": "Neuro-symbolic program corrector for introductory programming assignments", "venue": "IEEE/ACM 40th International Conference on Software Engineering (ICSE),", "year": 2018 }, { "authors": [ "Jasmin Christian Blanchette", "Cezary Kaliszyk", "Lawrence C Paulson", "Josef Urban" ], "title": "Hammering towards QED", "venue": "Journal of Formalized Reasoning,", "year": 2016 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": null, "year": 2020 }, { "authors": [ "Paul Cairns" ], "title": "Informalising formal mathematics: Searching the mizar library with latent semantics", "venue": "In International Conference on Mathematical Knowledge Management,", "year": 2004 }, { "authors": [ "Tommaso Dreossi", "Alexandre Donzé", "Sanjit A Seshia" ], "title": "Compositional falsification of cyberphysical systems with machine learning components", "venue": "Journal of Automated Reasoning,", "year": 2019 }, { "authors": [ "Alexandre Duret-Lutz", "Alexandre Lewkowicz", "Amaury Fauchille", "Thibaud Michaud", "Etienne Renault", "Laurent Xu" ], "title": "Spot 2.0—a framework for ltl 
and ω-automata manipulation", "venue": "In International Symposium on Automated Technology for Verification and Analysis,", "year": 2016 }, { "authors": [ "Matthew B. Dwyer", "George S. Avrunin", "James C. Corbett" ], "title": "Property specification patterns for finite-state verification", "venue": "Proceedings of the Second Workshop on Formal Methods in Software Practice, March 4-5,", "year": 1998 }, { "authors": [ "Kousha Etessami", "Gerard J. Holzmann" ], "title": "Optimizing büchi automata", "venue": "In Catuscia Palamidessi (ed.), CONCUR 2000 - Concurrency Theory, 11th International Conference,", "year": 2000 }, { "authors": [ "Richard Evans", "David Saxton", "David Amos", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Can neural networks understand logical entailment", "venue": "arXiv preprint arXiv:1802.08535,", "year": 2018 }, { "authors": [ "Patrick Fernandes", "Miltiadis Allamanis", "Marc Brockschmidt" ], "title": "Structured neural summarization", "venue": "arXiv preprint arXiv:1811.01824,", "year": 2018 }, { "authors": [ "Thibault Gauthier", "Cezary Kaliszyk", "Josef Urban" ], "title": "Tactictoe: Learning to reason with hol4 tactics", "venue": "arXiv preprint arXiv:1804.00595,", "year": 2018 }, { "authors": [ "Timon Gehr", "Matthew Mirman", "Dana Drachsler-Cohen", "Petar Tsankov", "Swarat Chaudhuri", "Martin Vechev" ], "title": "Ai2: Safety and robustness certification of neural networks with abstract interpretation", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Rahul Gupta", "Soham Pal", "Aditya Kanade", "Shirish Shevade" ], "title": "Deepfix: Fixing common C language errors by deep learning", "venue": "In Thirty-First AAAI 
Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In 2015 IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Vincent J. Hellendoorn", "Charles Sutton", "Rishabh Singh", "Petros Maniatis" ], "title": "Global relational models of source code", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jan Holeček", "Tomáš Kratochvı́la", "Vojtěch Řehák", "David Šafránek", "Pavel Šimeček" ], "title": "Verification results in liberouter", "venue": null, "year": 2004 }, { "authors": [ "Daniel Huang", "Prafulla Dhariwal", "Dawn Song", "Ilya Sutskever" ], "title": "Gamepad: A learning environment for theorem proving", "venue": "arXiv preprint arXiv:1806.00608,", "year": 2018 }, { "authors": [ "Xiaowei Huang", "Marta Kwiatkowska", "Sen Wang", "Min Wu" ], "title": "Safety verification of deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "IEEE-Commission" ], "title": "Ieee standard for property specification language (psl)", "venue": "IEEE Std 18502005,", "year": 2005 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban" ], "title": "Learning-assisted automated reasoning with flyspeck", "venue": "Journal of Automated Reasoning,", "year": 2014 }, { "authors": [ "Cezary Kaliszyk", "François Chollet", "Christian Szegedy" ], "title": "HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving", "venue": "In Proceedings of International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Guillaume Lample", "François Charton" ], "title": "Deep learning for symbolic mathematics", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Gil Lederman", "Markus N. Rabe", "Edward A. Lee", "Sanjit A. Seshia. 
" ], "title": "Learning heuristics for quantified boolean formulas through deep reinforcement learning", "venue": "URL http://arxiv.org/abs/1807.08058", "year": 2020 }, { "authors": [ "Dennis Lee", "Christian Szegedy", "Markus N. Rabe", "Sarah M. Loos", "Kshitij Bansal" ], "title": "Mathematical reasoning in latent space", "venue": null, "year": 2020 }, { "authors": [ "Jianwen Li", "Lijun Zhang", "Geguang Pu", "Moshe Y. Vardi", "Jifeng He" ], "title": "LTL satisfiability checking revisited", "venue": "20th International Symposium on Temporal Representation and Reasoning,", "year": 2013 }, { "authors": [ "Jianwen Li", "Yinbo Yao", "Geguang Pu", "Lijun Zhang", "Jifeng He" ], "title": "Aalta: an ltl satisfiability checker over infinite/finite traces", "venue": "In Proceedings of the 22nd ACM SIGSOFT international symposium on foundations of software engineering,", "year": 2014 }, { "authors": [ "Wenda Li", "Lei Yu", "Yuhuai Wu", "Lawrence C. Paulson" ], "title": "Modelling high-level mathematical reasoning in mechanised declarative proofs", "venue": "arXiv preprint arXiv:2006.09265,", "year": 2020 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "arXiv preprint arXiv:1707.01926,", "year": 2017 }, { "authors": [ "Sarah Loos", "Geoffrey Irving", "Christian Szegedy", "Cezary Kaliszyk" ], "title": "Deep network guided proof search", "venue": "In LPAR,", "year": 2017 }, { "authors": [ "Jia Meng", "Lawrence C Paulson" ], "title": "Lightweight relevance filtering for machine-generated resolution problems", "venue": "Journal of Applied Logic,", "year": 2009 }, { "authors": [ "Matej Moravčík", "Martin Schmid", "Neil Burch", "Viliam Lisý", "Dustin Morrill", "Nolan Bard", "Trevor Davis", "Kevin Waugh", "Michael Johanson", "Michael H. Bowling" ], "title": "Deepstack: Expert-level artificial intelligence in no-limit poker", "venue": "
CoRR,", "year": 2017 }, { "authors": [ "Aditya Paliwal", "Sarah M. Loos", "Markus N. Rabe", "Kshitij Bansal", "Christian Szegedy" ], "title": "Graph representations for higher-order logic and theorem proving", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Radek Pelánek" ], "title": "BEEM: benchmarks for explicit model checkers. In Dragan Bosnacki and Stefan Edelkamp (eds.), Model Checking Software, 14th International SPIN Workshop, Berlin, Germany", "venue": "July 1-3,", "year": 2007 }, { "authors": [ "Chris Piech", "Jonathan Huang", "Andy Nguyen", "Mike Phulsuksombati", "Mehran Sahami", "Leonidas Guibas" ], "title": "Learning program embeddings to propagate feedback on student code", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Amir Pnueli" ], "title": "The temporal logic of programs", "venue": "In 18th Annual Symposium on Foundations of Computer Science, Providence, Rhode Island, USA,", "year": 1977 }, { "authors": [ "Stanislas Polu", "Ilya Sutskever" ], "title": "Generative language modeling for automated theorem proving, 2020", "venue": null, "year": 2020 }, { "authors": [ "Markus N. 
Rabe", "Dennis Lee", "Kshitij Bansal", "Christian Szegedy" ], "title": "Mathematical reasoning via self-supervised skip-tree training", "venue": null, "year": 2020 }, { "authors": [ "Kristin Y Rozier", "Moshe Y Vardi" ], "title": "Ltl satisfiability checking", "venue": "In International SPIN Workshop on Model Checking of Software,", "year": 2007 }, { "authors": [ "David Saxton", "Edward Grefenstette", "Felix Hill", "Pushmeet Kohli" ], "title": "Analysing mathematical reasoning abilities of neural models", "venue": "CoRR, abs/1904.01557,", "year": 2019 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Imanol Schlag", "Paul Smolensky", "Roland Fernandez", "Nebojsa Jojic", "Jürgen Schmidhuber", "Jianfeng Gao" ], "title": "Enhancing the transformer with explicit relational encoding for math problem solving", "venue": null, "year": 1910 }, { "authors": [ "Stephan Schulz" ], "title": "System description: E 1.8", "venue": "In International Conference on Logic for Programming Artificial Intelligence and Reasoning,", "year": 2013 }, { "authors": [ "Viktor Schuppan", "Luthfi Darmawan" ], "title": "Evaluating ltl satisfiability solvers", "venue": "In International Symposium on Automated Technology for Verification and Analysis,", "year": 2011 }, { "authors": [ "Stefan Schwendimann" ], "title": "A new one-pass tableau calculus for pltl", "venue": "In International Conference on Automated Reasoning with Analytic Tableaux and Related Methods,", "year": 1998 }, { "authors": [ "Daniel Selsam", "Nikolaj Bjørner" ], "title": "Guiding high-performance SAT solvers with unsat-core predictions. 
In Theory and Applications of Satisfiability Testing - SAT 2019 - 22nd International Conference, SAT 2019", "venue": null, "year": 2019 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bünz", "Percy Liang", "Leonardo de Moura", "David L. Dill" ], "title": "Learning a SAT solver from single-bit supervision", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Sanjit A. Seshia", "Dorsa Sadigh" ], "title": "Towards verified artificial intelligence", "venue": "CoRR, abs/1606.08514,", "year": 2016 }, { "authors": [ "Sanjit A Seshia", "Ankush Desai", "Tommaso Dreossi", "Daniel J Fremont", "Shromona Ghosh", "Edward Kim", "Sumukh Shivakumar", "Marcell Vazquez-Chanlatte", "Xiangyu Yue" ], "title": "Formal specification for deep neural networks", "venue": "In International Symposium on Automated Technology for Verification and Analysis,", "year": 2018 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Püschel", "Martin Vechev" ], "title": "An abstract domain for certifying neural networks", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2019 }, { "authors": [ "Yaniv Taigman", "Ming Yang", "Marc’Aurelio Ranzato", "Lior Wolf" ], "title": "Deepface: Closing the gap to human-level performance in face verification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Josef Urban" ], "title": "MPTP–motivation, implementation, first experiments", "venue": "Journal of Automated Reasoning,", "year": 2004 }, { "authors": [ "Josef Urban" ], "title": "Malarea: a metasystem for automated reasoning in large theories", "venue": "ESARLT, 257,", "year": 2007 }, { 
"authors": [ "Josef Urban", "Jan Jakubův" ], "title": "First neural conjecturing datasets and experiments", "venue": "In Conference on Intelligent Computer Mathematics,", "year": 2020 }, { "authors": [ "Josef Urban", "Geoff Sutcliffe", "Petr Pudlák", "Jiřı́ Vyskočil" ], "title": "Malarea sg1-machine learner for automated reasoning with semantic guidance", "venue": "In International Joint Conference on Automated Reasoning,", "year": 2008 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ke Wang", "Rishabh Singh", "Zhendong Su" ], "title": "Dynamic neural program embedding for program repair", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yuhuai Wu", "Albert Jiang", "Jimmy Ba", "Roger Grosse" ], "title": "INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving", "venue": "arXiv preprint arXiv:2007.02924,", "year": 2020 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": null, "year": 1901 }, { "authors": [ "Kaiyu Yang", "Jia Deng" ], "title": "Learning to prove theorems via interacting with proof assistants", "venue": "arXiv preprint arXiv:1905.09381,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning has revolutionized several areas of computer science, such as image recognition (He et al., 2015), face recognition (Taigman et al., 2014), translation (Wu et al., 2016), and board games (Moravčík et al., 2017; Silver et al., 2017). For complex tasks that involve symbolic reasoning, however, deep learning techniques are still considered insufficient. Applications of deep learning to logical reasoning problems have therefore focused on sub-problems within larger logical frameworks, such as computing heuristics in solvers (Lederman et al., 2020; Balunovic et al., 2018; Selsam & Bjørner, 2019) or predicting individual proof steps (Loos et al., 2017; Gauthier et al., 2018; Bansal et al., 2019; Huang et al., 2018). Recently, however, the assumption that deep learning is not yet ready to tackle hard logical questions has been called into question. Lample & Charton (2020) demonstrated that Transformer models (Vaswani et al., 2017) perform surprisingly well on symbolic integration, Rabe et al. (2020) demonstrated that self-supervised training leads to mathematical reasoning abilities, and Brown et al. (2020) demonstrated that large-enough language models learn basic arithmetic despite being trained on mostly natural language sources.\nThis poses the question of whether other problems that are thought to require symbolic reasoning lend themselves to a direct learning approach. We study the application of Transformer models to challenging\n∗Partially supported by the European Research Council (ERC) Grant OSARES (No. 683300) and the Collaborative Research Center “Foundations of Perspicuous Software Systems” (TRR 248, 389792660).\nlogical problems in verification.
We thus consider linear-time temporal logic (LTL) (Pnueli, 1977), which is widely used in the academic verification community (Dwyer et al., 1998; Li et al., 2013; Duret-Lutz et al., 2016; Rozier & Vardi, 2007; Schuppan & Darmawan, 2011; Li et al., 2013; 2014; Schwendimann, 1998) and is the basis for industrial hardware specification languages like the IEEE standard PSL (IEEE-Commission et al., 2005). LTL specifies infinite sequences and is typically used to describe system behaviors. For example, LTL can specify that some proposition P must hold at every point in time (□ P) or that P must hold at some future point in time (◇ P). By combining these operators, one can specify that P must occur infinitely often (□◇ P).\nIn this work, we apply a direct learning approach to the fundamental problem of finding a satisfying trace for a given LTL formula. In applications, solutions to LTL formulas can represent (counter) examples for a specified system behavior, and over the last decades, generations of advanced algorithms have been developed to solve this question automatically. We start from the standard benchmark distribution of LTL formulas, consisting of conjunctions of patterns typically encountered in practice (Dwyer et al., 1998). We then use classical tools, notably spot by Duret-Lutz et al. (2016), which implements competitive classical algorithms, to generate solutions to formulas from this distribution and train a Transformer model to predict these solutions directly.\nRelatively small Transformers perform very well on this task, and we predict correct solutions to 96.8% of the formulas from a held-out test set (see Figure 1). Impressively, Transformers hold up well and predict correct solutions in 83% of the cases even when we focus on formulas on which spot timed out.
This means that, already today, direct machine learning approaches may be useful to augment classical algorithms in logical reasoning tasks.\nWe also study two generalization properties of the Transformer architecture that are important for logical problems: We present detailed analyses of the generalization to longer formulas. It turns out that Transformers trained with tree positional encodings (Shiv & Quirk, 2019) generalize to much longer formulas than they were trained on, while Transformers trained with the standard positional encoding (as expected) do not generalize to longer formulas. The second generalization property studied here is the question of whether Transformers learn to imitate the generator of the training data, or whether they learn to solve the formulas according to the semantics of the logics. This is possible, as for most formulas there are many possible satisfying traces. In Figure 1 we highlight the fact that our models often predict traces that satisfy the formulas but differ from the traces found by the classical algorithm with which we generated the data. Especially when testing the models out-of-distribution, we observed that almost no predicted trace equals the solution proposed by the classical solver.\nTo demonstrate that these generalization behaviors are not specific to the benchmark set of LTL formulas, we also present experimental results on random LTL formulas. Further, we rule out that spot, the tool with which we generate example traces, is responsible for these behaviors by repeating the experiments on propositional formulas for which we generate the solutions with SAT solvers.\nThe remainder of this paper is structured as follows. We give an overview of related work in Section 2. We describe the problem definitions and present our data generation in Section 3. Our experimental setup is described in Section 4 and our findings in Section 5, before concluding in Section 6."
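The tree positional encodings of Shiv & Quirk (2019) replace the sequential position with the path from the root of the formula's syntax tree, each step one-hot encoded over the child index. A simplified sketch (the published encoding additionally applies a geometric/learned weighting, which we omit here; names are ours):

```python
def tree_pos_encoding(path, max_depth, branching=2):
    """Encode a node by its root-to-node path of child indices:
    one-hot per step, zero-padded to a fixed maximum depth."""
    enc = []
    for child in path:
        step = [0.0] * branching
        step[child] = 1.0
        enc.extend(step)
    enc.extend([0.0] * (branching * (max_depth - len(path))))
    return enc

# In (a U b): the root U has the empty path; b is its second child, path [1].
root_enc = tree_pos_encoding([], max_depth=3)   # all zeros
b_enc = tree_pos_encoding([1], max_depth=3)     # [0, 1, 0, 0, 0, 0]
```

One intuition for the length generalization observed above: the encoding depends only on the local root-to-node path, so subtrees at the same relative position receive related encodings regardless of overall formula size.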
}, { "heading": "2 RELATED WORK", "text": "Datasets for mathematical reasoning. While we focus on a classical task from verification, other works have studied datasets derived from automated theorem provers (Blanchette et al., 2016; Loos et al., 2017; Gauthier et al., 2018), interactive theorem provers (Kaliszyk et al., 2017; Bansal et al., 2019; Huang et al., 2018; Yang & Deng, 2019; Polu & Sutskever, 2020; Wu et al., 2020; Li et al., 2020; Lee et al., 2020; Urban & Jakubův, 2020; Rabe et al., 2020), symbolic mathematics (Lample & Charton, 2020), and mathematical problems in natural language (Saxton et al., 2019; Schlag et al., 2019). Probably the closest works to this paper are the applications of Transformers to directly solve differential equations (Lample & Charton, 2020) and to directly predict missing assumptions and types of formal mathematical statements (Rabe et al., 2020). We focus on a different problem domain, verification, and demonstrate that Transformers are roughly competitive with classical algorithms in that domain. Learning has been applied to mathematics long before the rise of deep learning. Earlier works focused on ranking premises or clauses (Cairns, 2004; Urban, 2004; 2007; Urban et al., 2008; Meng & Paulson, 2009; Schulz, 2013; Kaliszyk & Urban, 2014).
Neural architectures for logical reasoning. Paliwal et al. (2020) demonstrate significant improvements in theorem proving through the use of graph neural networks to represent higher-order logic terms. Selsam et al. (2019) presented NeuroSAT, a graph neural network (Scarselli et al., 2008; Li et al., 2017; Gilmer et al., 2017; Wu et al., 2019) for solving the propositional satisfiability problem. In contrast, we apply a generic sequence-to-sequence model to predict the solutions to formulas, not only whether there is a solution. This allows us to apply the approach to a wider set of logics (logics without a CNF).
A simplified NeuroSAT architecture was trained for unsat-core predictions (Selsam & Bjørner, 2019). Lederman et al. (2020) used graph neural networks on CNF to learn better heuristics for a 2QBF solver. Evans et al. (2018) study the problem of logical entailment in propositional logic using tree-RNNs. Entailment is a subproblem of satisfiability and (besides being a classification problem) could be encoded in the same form as our propositional formulas. The formulas considered in their dataset are much smaller than in this work.
Language models applied to programs. Transformers have also been applied to programs for tasks such as summarizing code (Fernandes et al., 2018) or variable naming and misuse (Hellendoorn et al., 2020). Other works focused on recurrent neural networks or graph neural networks for code analysis, e.g., (Piech et al., 2015; Gupta et al., 2017; Bhatia et al., 2018; Wang et al., 2018; Allamanis et al., 2017). Another area in the intersection of formal methods and machine learning is the verification of neural networks (Seshia & Sadigh, 2016; Seshia et al., 2018; Singh et al., 2019; Gehr et al., 2018; Huang et al., 2017; Dreossi et al., 2019)." }, { "heading": "3 DATASETS", "text": "To demonstrate the generalization properties of the Transformer on logical tasks, we generated several datasets in three different fashions. We will describe the underlying logical problems and our data generation in the following." }, { "heading": "3.1 TRACE GENERATION FOR LINEAR-TIME TEMPORAL LOGIC", "text": "Linear-time temporal logic (LTL, Pnueli, 1977) combines propositional connectives with temporal operators such as the Next operator X and the Until operator U. X ϕ means that ϕ holds in the next position of a sequence; ϕ1 U ϕ2 means that ϕ1 holds until ϕ2 holds. For example, the LTL formula (b U a) ∧ (c U ¬a) states that b has to hold along the trace until a holds and c has to hold until a does not hold anymore. There also exist derived operators.
For example, consider the following specification of an arbiter: G(request → F grant) states that, at every point in time (G-operator), if there is a request signal, then a grant signal must follow at some future point in time (F-operator).
The full semantics and an explanation of the operators can be found in Appendix A. We consider infinite sequences that are finitely represented in the form of a “lasso” uvω, where u, called prefix, and v, called period, are finite sequences of propositional formulas. We call such sequences (symbolic) traces. For example, the symbolic trace (a ∧ b)ω defines the infinite sequence where a and b evaluate to true on every position. Symbolic traces allow us to underspecify propositions when they do not matter. For example, the LTL formula X X a is satisfied by the symbolic trace true true (a)ω, which allows for any combination of propositions on the first two positions.
Our datasets consist of pairs of satisfiable LTL formulas and satisfying symbolic traces generated with tools and automata constructions from the spot framework (Duret-Lutz et al., 2016). We use a compact syntax for ultimately periodic symbolic traces: Each position in the trace is separated by the delimiter “;”. True and False are represented by “1” and “0”, respectively. The beginning of the period v is signaled by the character “{” and analogously its end by “}”. For example, the ultimately periodic symbolic trace denoted by a; a; a; {b} describes all infinite traces where a must hold on the first 3 positions, followed by an infinite period on which b must hold on every position.
Given a satisfiable LTL formula ϕ, our trace generator constructs a Büchi automaton Aϕ that accepts exactly the language defined by the LTL formula, i.e., L(Aϕ) = L(ϕ). From this automaton, we construct an arbitrary accepted symbolic trace by searching for an accepting run in Aϕ."
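Checking whether such a lasso trace satisfies a formula does not require automata at all: because the suffix at each period position repeats, every subformula can be evaluated at the finitely many positions of prefix and period, with U computed as a least fixpoint. The following Python sketch illustrates this on explicit traces (the tuple encoding and function names are our own illustration, not part of the paper's tooling):

```python
# Evaluate an LTL formula on an ultimately periodic ("lasso") trace u v^ω.
# Formulas: ("true",), ("ap", name), ("not", f), ("and", f, g),
#           ("X", f), ("U", f, g); F and G are derived below.
# A trace position is the set of atomic propositions that hold there.
def holds(formula, prefix, period):
    trace = prefix + period                        # relevant positions 0 .. n-1
    n, loop = len(trace), len(prefix)
    succ = lambda i: i + 1 if i + 1 < n else loop  # last position loops back

    def ev(f):  # truth value of f at every position, as a list of bools
        op = f[0]
        if op == "true":
            return [True] * n
        if op == "ap":
            return [f[1] in trace[i] for i in range(n)]
        if op == "not":
            return [not v for v in ev(f[1])]
        if op == "and":
            l, r = ev(f[1]), ev(f[2])
            return [a and b for a, b in zip(l, r)]
        if op == "X":
            v = ev(f[1])
            return [v[succ(i)] for i in range(n)]
        if op == "U":  # least fixpoint of v = r or (l and v after succ)
            l, r = ev(f[1]), ev(f[2])
            v = [False] * n
            for _ in range(n):
                v = [r[i] or (l[i] and v[succ(i)]) for i in range(n)]
            return v
        raise ValueError(f"unknown operator {op}")

    return ev(formula)[0]

def F(f): return ("U", ("true",), f)          # eventually
def G(f): return ("not", F(("not", f)))       # globally
```

For example, `holds(G(F(("ap", "a"))), [], [{"a"}, set()])` confirms that a recurs infinitely often on the lasso with period a; 1.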
}, { "heading": "3.1.1 SPECIFICATION PATTERN", "text": "Our main dataset is constructed from formulas following 55 LTL specification patterns identified in the literature (Dwyer et al., 1998). An example is the arbiter property (F p0) → (p1 U p0), stating that if p0 is scheduled at some point in time, p1 is scheduled until this point. The largest specification pattern is of size 40 and contains 6 atomic propositions. It has been shown that conjunctions of such patterns are challenging for LTL satisfiability tools that rely on classical methods, such as automata constructions (Li et al., 2013). They start coming to their limits when more than 8 pattern formulas are conjoined. We decided to build our dataset in a similar way from these patterns only, to allow for a better comparison.
We conjoined random specification patterns with randomly chosen variables (from a supply of 6 variables) until one of the following four conditions is met: 1) the formula size exceeds 126, 2) more than 8 formulas would be conjoined, 3) our automaton-based generator timed out (> 1s) while computing the solution trace, or 4) the formula would become unsatisfiable. In total, we generated 1664487 formula-trace pairs in 24 hours on 20 CPUs. While generating, approximately 41% of the instances ran into the first termination condition, 21% into the second, 37% into the third, and 1% into the fourth. We split this set into an 80% training set, a 10% validation set, and a 10% test set. The size distribution of the dataset can be found in Appendix B.
For studying how the Transformer performs on longer specification patterns, we accumulated pattern formulas on which spot timed out (> 60s) while searching for a satisfying trace. We call this dataset LTLUnsolved254. We capped the maximum length at 254, which is twice as large as the formulas the model saw during training.
The size distribution of the generated formulas can be found in Appendix B.
In the following table, we illustrate the complexity of our training dataset with two examples from the above-described set LTLPattern126, where the subsequent number in the notation of our datasets denotes the maximum size of a formula’s syntax tree. The first line of each example shows the LTL formula and the satisfying symbolic trace in mathematical notation. The second line shows the input and output representation of the Transformer (in Polish notation):

G(a → Fd) ∧ (¬f W (f W (¬f W (f W G¬f)))) ∧ (Fc → ¬c U (c ∧ (¬b W (b W (¬b W (b W G¬b))))))    (¬a ∧ ¬c ∧ ¬f ∨ ¬c ∧ d ∧ ¬f)ω
&&G>aFdW!fWfW!fWfG!f>FcU!c&cW!bWbW!bWbG!b    {!a&!c&!f|!c&d&!f}

G((b ∧ ¬a ∧ Fa) → c U a) ∧ G(a → Gc) ∧ (Fb → ¬b U (b ∧ (¬f W (f W (¬f W (f W G¬f)))))) ∧ (Fa → ((c ∧ X(¬a U e) → X(¬a U (e ∧ Ff))) U a)) ∧ Fc ∧ G((a ∧ Fe) → ¬(¬e ∧ f ∧ X(¬e U (¬e ∧ d))) U (e ∨ c)) ∧ (G¬a ∨ F(a ∧ (¬f W d))) ∧ G(e → G¬c)    (¬a ∧ b ∧ ¬c ∧ ¬e ∧ f)(¬a ∧ ¬c ∧ ¬e ∧ ¬f)(¬a ∧ ¬c ∧ ¬e ∧ f)(¬a ∧ c ∧ ¬e ∧ ¬f)(¬a ∧ ¬e ∧ ¬f)ω
&&&&&&&G>&&b!aFaUcaG>aGc>FbU!b&bW!fWfW!fWfG!f>FaU>&cXU!aeXU!a&eFfaFcG>&aFeU!&&!efXU!e&!ed|ec|G!aF&aW!fdG>eG!c    &&&&!ab!c!ef;&&&!a!c!e!f;&&&!a!c!ef;&&&!ac!e!f;{&&!a!e!f}" }, { "heading": "3.1.2 RANDOM FORMULAS", "text": "To show that the generalization properties of the Transformer are not specific to our data generation, we also generated a dataset of random formulas. Our dataset of random formulas consists of 1 million generated formulas and their solutions, i.e., satisfying symbolic traces. The number of different propositions is fixed to 5. Each dataset is split into a training set of 800K formulas, a validation set of 100K formulas, and a test set of 100K formulas. All datasets are uniformly distributed in size, apart from the lower-sized end due to the limited number of unique small formulas. The formula and trace distribution of the dataset LTLRandom35, as well as three randomly drawn example instances, can be found in Appendix B.
Note that we filtered out examples with traces larger than 62 (less than 0.05% of the original set).
To generate the formulas, we used the randltl tool of the spot framework, which builds unique formulas in a specified size interval, following a supplied node probability distribution. During the building process, the actual distribution occasionally differs from the given distribution in order to meet the size constraints, e.g., by masking out all binary operators. The distribution between all k-ary nodes always remains the same. To furthermore achieve a (quasi) uniform distribution in size, we subsequently filtered the generated formulas. Our node distribution puts equal weight on all operators ¬, ∧, and U. Constants True and False are allowed with 2.5 times less probability than propositions." }, { "heading": "3.2 ASSIGNMENT GENERATION FOR PROPOSITIONAL LOGIC", "text": "To show that the generalization of the Transformer to the semantics of logics is not a unique attribute of LTL, we also generated a dataset for propositional logic (SAT). A propositional formula consists of Boolean operators ∧ (and), ∨ (or), ¬ (not), and variables, also called literals or propositions. We consider the derived operators ϕ1 → ϕ2 ≡ ¬ϕ1 ∨ ϕ2 (implication), ϕ1 ↔ ϕ2 ≡ (ϕ1 → ϕ2) ∧ (ϕ2 → ϕ1) (equivalence), and ϕ1 ⊕ ϕ2 ≡ ¬(ϕ1 ↔ ϕ2) (xor). Given a propositional Boolean formula ϕ, the satisfiability problem asks if there exists a Boolean assignment Π : V → B for every literal in ϕ such that ϕ evaluates to true. For example, consider the following propositional formula, given in conjunctive normal form (CNF): (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x3). A possible satisfying assignment for this formula would be {(x1, true), (x2, false), (x3, true)}. We allow a satisfying assignment to be partial, i.e., if the truth value of a proposition can be arbitrary, it is omitted. For example, {(x1, true), (x3, true)} would be a satisfying partial assignment for the formula above.
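Whether a partial assignment satisfies a formula can be checked by brute force over all completions of the unassigned propositions, which is cheap for the five propositions used here. A small illustrative sketch (the tuple-based formula encoding is ours, not the paper's tooling; derived operators like → and ⊕ can first be rewritten into ∧, ∨, ¬):

```python
# A partial assignment satisfies a formula iff the formula evaluates to
# true under *every* completion of the unassigned propositions.
from itertools import product

def eval_formula(f, env):
    op = f[0]
    if op == "var": return env[f[1]]
    if op == "not": return not eval_formula(f[1], env)
    if op == "and": return eval_formula(f[1], env) and eval_formula(f[2], env)
    if op == "or":  return eval_formula(f[1], env) or eval_formula(f[2], env)
    raise ValueError(op)

def variables(f):
    return {f[1]} if f[0] == "var" else set().union(*(variables(g) for g in f[1:]))

def satisfies(partial, f):
    free = sorted(variables(f) - set(partial))
    return all(eval_formula(f, {**partial, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))
```

For instance, for (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x3) the partial assignment {(x1, true), (x3, true)} is accepted, while {(x1, true)} alone is rejected, because the completion with x3 = false falsifies the second clause.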
We define a minimal unsatisfiable core of an unsatisfiable formula ϕ, given in CNF, as an unsatisfiable subset of clauses ϕcore of ϕ, such that every proper subset of clauses of ϕcore is still satisfiable.
We, again, generated 1 million random formulas. For the generation of propositional formulas, the specified node distribution puts equal weight on the ∧, ∨, and ¬ operators and half as much weight on each of the derived operators ↔ and ⊕. In contrast to previous work (Selsam et al., 2019), which is restricted to formulas in CNF, we allow an arbitrary formula structure and derived operators.
A satisfying assignment is represented as an alternating sequence of propositions and truth values, given as 0 and 1. The sequence a0b1c0, for example, represents the partial assignment {(a, false), (b, true), (c, false)}, meaning that the truth values of the propositions d and e can be chosen arbitrarily (note that we allow five propositions). We used pyaiger (Vazquez-Chanlatte, 2018), which builds on Glucose 4 (Audemard & Simon, 2018) as its underlying SAT solver. We construct the partial assignments with a standard method in SAT solving: we query the SAT solver for a minimal unsatisfiable core of the negation of the formula. To give the interested reader an idea of the level of difficulty of the dataset, the following table shows three random examples from our training set PropRandom35. The first line shows the formula and the assignment in mathematical notation.
The second line shows the syntactic representation (in Polish notation):

((d ∧ ¬e) ∧ (¬a ∨ ¬e)) ↔ ((¬b ⊕ (¬b ↔ ¬e)) ∨ ((e ⊕ (b ∧ d)) ⊕ ¬(¬c ∨ (¬a ↔ e))))    {(a, 0), (b, 0), (c, 1), (d, 1), (e, 0)}
<->&&d!e|!a!e|xor!b<->!b!exorxore&bd!|!c<->!ae    a0b0c1d1e0

(c ∨ e) ∨ (¬a ↔ ¬b)    {(c, 1)}
||ce<->!a!b    c1

¬((b ∨ e) ⊕ ((¬a ∨ (¬d ↔ ¬e)) ∨ (¬b ∨ (((¬a ∧ b) ∧ ¬b) ∧ d))))    {(d, 1), (e, 1)}
!xor!be||!a<->!d!e!|!b&&&!ab!b!d    d1e1

To test the Transformer on even more challenging formulas, we constructed a dataset of CNF formulas using the generation script of Selsam et al. (2019) from their publicly available implementation. A random CNF formula is built by adding clauses until the addition of a further clause would lead to an unsatisfiable formula. We used the parameters pgeo = 0.9 and pk2 = 0.75 to generate formulas that contain up to 15 variables and have a maximum size of 250. We call this dataset PropCNF250." }, { "heading": "4 EXPERIMENTAL SETUP", "text": "We implemented the Transformer architecture (Vaswani et al., 2017). Our implementation processes the input and output sequences token-by-token. We trained on a single GPU (NVIDIA P100 or V100). All training has been done with a dropout rate of 0.1 and early stopping on the validation set. Note that the embedding size is automatically floored to be divisible by the number of attention heads. The training of the best models took up to 50 hours. For the output decoding, we utilized a beam search (Wu et al., 2016) with a beam size of 3 and an α of 1.
Since the solution of a logical formula is not necessarily unique, we use two different measures of accuracy to evaluate the generalization to the semantics of the logics: we distinguish between the syntactic accuracy, i.e., the percentage of cases in which the Transformer's prediction syntactically matches the output of our generator, and the semantic accuracy, i.e., the percentage of cases in which the Transformer's prediction satisfies the formula, even if it differs from the generator's solution.
We also differentiate between incorrect predictions and syntactically invalid outputs, which, in fact, occur in only 0.1% of the cases in LTLUnsolved254.
In general, our best performing models used 8 layers, 8 attention heads, and an FC size of 1024. We used a batch size of 400 and trained for 450K steps (130 epochs) for our specification pattern dataset, and a batch size of 768 and trained for 50K steps (48 epochs) for our random formula dataset. A hyperparameter study can be found in Appendix C." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In this section, we describe our experimental results. First, we show that a Transformer can indeed solve the task of providing a solution, i.e., a satisfying trace, for a linear-time temporal logic (LTL) formula. For this, we describe the results from training on the dataset LTLPattern126 of specification patterns that are commonly used in the context of verification. Second, we show two generalization properties that the Transformer evinces on logical reasoning tasks: 1) the generalization to larger formulas (even so large that our data generator timed out) and 2) the generalization to the semantics of the logic. We strengthen this observation by considering a different dataset of random LTL formulas. Third, we provide results for a model trained on a different logic and with a different data generator. We thereby demonstrate that the generalization behaviors of the Transformer are specific neither to LTL nor to the spot-based LTL solver that we used to generate the data. An overview of our training results is displayed in Figure 2." }, { "heading": "5.1 SOLVING LINEAR-TIME TEMPORAL LOGICAL FORMULAS", "text": "We trained a Transformer on our specification pattern dataset LTLPattern126. Figure 1 in the introduction displays the performance of our best model on this dataset. We observed a syntactic accuracy of 69.1% and a semantic accuracy of 96.8%.
With this experiment we can already deduce that it seems easier for the Transformer to learn the underlying semantics of LTL than to learn the particularities of the generator. Further, we can see that as the formula length grows, the syntactic accuracy begins to drop. That drop, however, is much smaller for the semantic accuracy: the model still mostly predicts correct traces for long formulas.
As a challenging benchmark, we tested our best performing model on LTLUnsolved254. It predicted correct solutions in 83% of the cases, taking on average 15s on a single CPU. The syntactic accuracy is 0%, as there was no output produced by spot within the timeout. The results of the experiments are visualized in Figure 3. Note that this does not mean that our Transformer models necessarily outperform classical algorithms across the board. However, since verifying solutions to LTL formulas is much easier than finding solutions (AC1(logDCFL) vs PSPACE), this experiment shows that the predictions of a deep neural network can be a valuable extension to the verification toolbox." }, { "heading": "5.2 GENERALIZATION PROPERTIES", "text": "To show that the generalization to the semantics is independent of the data generation, we also trained a model on a dataset of randomly generated formulas. The unshaded part of Figure 4 displays the performance of our best model on the LTLRandom35 dataset. The Transformers were solely trained on formulas of size less than or equal to 35. We observe that in this range the exact syntactic accuracy decreases when the formulas grow in size. The semantic accuracy, however, again stays high. The model achieves a syntactic accuracy of 83.8% and a semantic accuracy of 98.5% on LTLRandom35, i.e., in 14.7% of the cases, the Transformer deviates from our automaton-based data generator.
The evolution of the syntactic and the semantic accuracy during training can be found in Appendix D.
To show that the generalization to larger formulas is independent of the data generation method, we also tested how well the Transformer generalizes to randomly generated LTL formulas of a size it has never seen before. We used our model trained on LTLRandom35 and observed the performance on LTLRandom50. The model preserves the semantic generalization, displayed in the shaded part of Figure 4. It outputs exact syntactic matches in 67.6% of the cases and achieves a semantic accuracy of 92.2%. For the generalization to larger formulas we utilized a positional encoding based on the tree representation of the formula (Shiv & Quirk, 2019). When using the standard positional encoding instead, the accuracy drops significantly, as expected. A visualization of this experiment can be found in Appendix E.
In a further experiment, we tested the out-of-distribution (OOD) generalization of the Transformer on the trace generation task. We generated a new dataset LTLRandom126 to match the formula sizes and the vocabulary of LTLPattern126. A model trained on LTLRandom126 achieves a semantic accuracy of 24.7% (and a syntactic accuracy of only 1.0%) when tested on LTLPattern126. Vice versa, a model trained on LTLPattern126 achieves a semantic accuracy of 38.5% (and a syntactic accuracy of only 0.5%) when tested on LTLRandom126. Testing the models OOD increases the gap between syntactic and semantic correctness dramatically. This underlines that the models learned the nature of the LTL semantics rather than the generator process. Note that the two distributions are very different.
Following these observations, we also tested the performance of our models on other patterns from the literature.
We observe a higher semantic accuracy for our model trained on random formulas and a higher gap between semantic and syntactic accuracy for our model trained on pattern formulas:

Patterns    Number of Patterns    Trained on    Syn. Acc.    Sem. Acc.
dac (Dwyer et al., 1998)    55    LTLRandom126    49.1%    81.8%
eh (Etessami & Holzmann, 2000)    11    LTLRandom126    81.8%    90.9%
hkrss (Holeček et al., 2004)    49    LTLRandom126    71.4%    83.7%
p (Pelánek, 2007)    20    LTLRandom126    65.0%    90.0%
eh (Etessami & Holzmann, 2000)    11    LTLPattern126    0.0%    36.4%
hkrss (Holeček et al., 2004)    49    LTLPattern126    14.3%    49.0%
p (Pelánek, 2007)    20    LTLPattern126    10.0%    60.0%

In a last experiment on LTL, we tested the performance of our models on handcrafted formulas. We observed that formulas with multiple until statements that describe overlapping intervals were the most challenging. This is no surprise, as such formulas are the source of the PSPACE-hardness of LTL.
(a U b) ∧ (a U ¬b)    (a ∧ ¬b) (b) (true)ω
&UabUa!b    &a!b;b;{1}
While the above formula can be solved by most models, when scaling this formula to four overlapping until intervals, all of our models fail: For example, a model trained on LTLRandom35 predicted the trace (a ∧ b ∧ c) (a ∧ ¬b ∧ ¬c) (b ∧ c) (true)ω, which does not satisfy the LTL formula.
(a U (b ∧ c)) ∧ (a U (¬b ∧ c)) ∧ (a U (b ∧ ¬c)) ∧ (a U (¬b ∧ ¬c))    (a ∧ b ∧ c) (a ∧ ¬b ∧ ¬c) (b ∧ c) (true)ω
&&&Ua&bcUa&!bcUa&b!cUa&!b!c    &&abc;&&a!b!c;&bc;1" }, { "heading": "5.3 PREDICTING ASSIGNMENTS FOR PROPOSITIONAL LOGIC", "text": "To show that the generalization to the semantics is not a specific property of LTL, we trained a Transformer to solve the assignment generation problem for propositional logic, which is a substantially different logical problem.
As a baseline for our generalization experiments on propositional logic, we trained and tested a Transformer model with the following hyperparameters on PropRandom35:
Embedding size    Layers    Heads    FC size    Batch Size    Train Steps    Syn. Acc.    Sem. Acc.
enc:128, dec:64    6    6    512    1024    50K    58.1%    96.5%
We observe a striking 38.4% gap between predictions that are syntactic matches of our DPLL-based generator's output and predictions that are correct. Only 3.5% of the time, the Transformer outputs an incorrect assignment. Note that we allow the derived operators ⊕ and ↔ in these experiments, which succinctly represent complicated logical constructs.
The formula b ∨ ¬(a ∧ d) occurs in our dataset PropRandom35 and its corresponding assignment is {(a, 0)}. The Transformer, however, outputs d0, i.e., it goes with the assignment of setting d to false, which is also a correct solution. A visualization of this example can be found in Appendix F. When the formulas get larger, the solutions where the Transformer differs from the DPLL algorithm accumulate. Consider, for example, the formula ¬b ∨ (e ↔ b ∨ c ∨ ¬d) ∨ (c ∧ (b ⊕ (a ⊕ ¬d)) ⊕ (¬c ↔ d) ∧ (a ↔ (b ⊕ (b ⊕ e)))), which is also in the dataset PropRandom35. The generator suggests the assignment {(a, 1), (c, 1), (d, 0)}. The Transformer, however, outputs e0, i.e., the singleton assignment of setting e to false, which turns out to be a (very small) solution as well.
We achieved stable training in this experiment by setting the decoder embedding size to either 64 or even 32. Keeping the decoder embedding size at 128 led to very unstable training.
We also tested whether the generalization to the semantics is preserved when the Transformer encounters propositional formulas of a larger size than it ever saw during training. We, again, utilized the tree positional encoding. When challenged with formulas of size 35 to 50, our best performing model trained on PropRandom35 achieves a syntactic accuracy of 35.8% and a semantic accuracy of 86.1%. In comparison, without the tree positional encoding, the Transformer achieves a syntactic match of only 29.0% and an overall accuracy of only 75.7%.
Note that both positional encodings work equally well when larger formulas are not considered.
In a last experiment, we tested how the Transformer performs on more challenging propositional formulas in CNF. We thus trained a model on PropCNF250, where it achieved a semantic accuracy of 65.1% and a syntactic accuracy of 56.6%. We observe a slightly smaller gap compared to our LTL experiments. The Transformer, however, still deviates from the generator even on such formulas." }, { "heading": "6 CONCLUSION", "text": "We trained a Transformer to predict solutions to linear-time temporal logic (LTL) formulas. We observed that our trained models evince powerful generalization properties, namely, the generalization to the semantics of the logic and the generalization to larger formulas than seen during training. We showed that these generalizations depend neither on the underlying logical problem nor on the data generator. Regarding the performance of the trained models, we observed that they can compete with classical algorithms for generating solutions to LTL formulas. We built a test set that contained only formulas generated out of practical verification patterns, on which even our data generator timed out. Our best performing model, although it was trained on much smaller formulas, predicts correct traces 83% of the time.
The results of this paper suggest that deep learning can already augment combinatorial approaches in automatic verification and the broader formal methods community. With the results of this paper, we can, for example, derive novel algorithms for trace generation or satisfiability checking of LTL that first query a Transformer for trace predictions. These predictions can be checked efficiently. Classical methods can serve as a fallback or check partial solutions, providing guidance to the Transformer. The potential that arises from the advent of deep learning in logical reasoning is immense.
Deep learning holds the promise to empower researchers in the automated reasoning and formal methods communities to make bigger jumps in the development of new automated verification methods, but it also brings new challenges, such as the acquisition of large amounts of data." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank Christian Szegedy, Jesko Hecking-Harbusch, and Niklas Metzger for their valuable feedback on an earlier version of this paper." }, { "heading": "A LINEAR-TIME TEMPORAL LOGIC (LTL)", "text": "In this section, we provide the formal syntax and semantics of Linear-time Temporal Logic (LTL). The formal syntax of LTL is given by the following grammar:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | X ϕ | ϕ U ϕ,
where p ∈ AP is an atomic proposition. Let AP be a set of atomic propositions. An (explicit) trace t is an infinite sequence over subsets of the atomic propositions. We define the set of traces TR := (2^AP)^ω. We use the following notation to manipulate traces: Let t ∈ TR be a trace and i ∈ N be a natural number. With t[i] we denote the set of propositions at the i-th position of t. Therefore, t[0] represents the starting element of the trace. Let j ∈ N and j ≥ i. Then t[i, j] denotes the sequence t[i] t[i + 1] . . . t[j − 1] t[j] and t[i,∞] denotes the infinite suffix of t starting at position i.
Let p ∈ AP and t ∈ TR. The semantics of an LTL formula is defined as the smallest relation |= that satisfies the following conditions:
t |= p iff p ∈ t[0]
t |= ¬ϕ iff t ̸|= ϕ
t |= ϕ1 ∧ ϕ2 iff t |= ϕ1 and t |= ϕ2
t |= X ϕ iff t[1,∞] |= ϕ
t |= ϕ1 U ϕ2 iff there exists i ≥ 0 : t[i,∞] |= ϕ2 and for all 0 ≤ j < i we have t[j,∞] |= ϕ1
There are several derived operators, such as F ϕ ≡ true U ϕ and G ϕ ≡ ¬F ¬ϕ. F ϕ states that ϕ will eventually hold in the future and G ϕ states that ϕ holds globally. Operators can be nested: G F ϕ, for example, states that ϕ has to occur infinitely often."
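For completeness, the semantics of the derived operators follows directly from the rules above (these two rules are an immediate consequence, spelled out here rather than stated in the appendix):

```latex
\begin{align*}
t \models F\,\varphi &\;\;\text{iff}\;\; \exists i \geq 0 :\; t[i,\infty] \models \varphi\\
t \models G\,\varphi &\;\;\text{iff}\;\; \forall i \geq 0 :\; t[i,\infty] \models \varphi
\end{align*}
```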
}, { "heading": "B SIZE DISTRIBUTION IN THE DATASETS", "text": "In this section, we provide insight into the size distribution of our datasets. Figure 5 shows the size distribution of the formulas in our dataset LTLPattern126.
Figure 6 shows the size distribution of our generated formulas and their traces in the dataset LTLRandom35. Table 1 shows three randomly drawn example instances of the dataset LTLRandom35.
Lastly, Figure 7 shows the size distribution of formulas in our dataset LTLUnsolved254." }, { "heading": "C HYPERPARAMETER ANALYSIS", "text": "Table 2 shows the effect of the most significant parameters on the performance of Transformers. The performance benefits largely from an increased number of layers, with 8 yielding the best results. Increasing the number further, even with much more training time, did not improve the results and sometimes even degraded them. A slightly less important role is played by the number of heads and the dimension of the intermediate fully-connected feed-forward networks (FC). While a certain FC size is important, increasing it alone will not improve results. Changing the number of heads alone has also almost no impact on performance. Increasing both simultaneously, however, will result in a small gain.
This seems reasonable, since more heads can provide more distinct information to the subsequent processing by the fully-connected feed-forward network. Increasing the embedding size from 128 to 256 very slightly improves the syntactic accuracy. But it likewise slightly degrades the semantic accuracy, so we stuck with the former setting." }, { "heading": "D ACCURACY DURING TRAINING", "text": "In Figure 8 we show the evolution of both the syntactic accuracy and the semantic accuracy during the training process. Note the significant difference right from the beginning. This demonstrates the importance of a suitable performance measure when evaluating machine learning algorithms on logical reasoning tasks."
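The two accuracy measures can be computed in a single pass over a test set. The sketch below uses a toy propositional setting in which formulas are Python predicates over assignments; all names are illustrative and this is not our evaluation code:

```python
# Syntactic vs. semantic accuracy: a prediction counts as syntactically
# correct if it equals the generator's solution, and as semantically
# correct if it satisfies the formula at all (a superset of the former).
def accuracies(examples, predict):
    syntactic = semantic = invalid = 0
    for formula, reference in examples:
        prediction = predict(formula)
        if prediction is None:        # unparsable model output
            invalid += 1
            continue
        if prediction == reference:
            syntactic += 1
        if formula(prediction):       # model-check the prediction
            semantic += 1
    n = len(examples)
    return syntactic / n, semantic / n, invalid / n
```

A model that predicts a different but still satisfying solution therefore raises the semantic accuracy without raising the syntactic one, which is exactly the gap visible in Figure 8.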
}, { "heading": "E DIFFERENT POSITIONAL ENCODINGS", "text": "" }, { "heading": "F HANDCRAFTED EXAMPLES", "text": "The LTL formula (bU a) ∧ (aU ¬a) states that b has to hold along the trace until a holds and a has to hold until a does not hold anymore. The automaton-based generator suggests the trace (¬a ∧ b) a (true)ω , i.e., to first satisfy the second until by immediately disallowing a. The satisfaction of the first until is then postponed to the second position of trace, which forces b to hold on the first position. The Transformer, however, chooses the following more general trace a (¬a) (true)ω , by satisfying the until operators in order (see Figure 10)." } ]
2021
null
SP:0fa6e1dfa434bfef4c0071572e60dbafa0d65d4e
[ "This paper proposes variational deterministic uncertainty quantification (vDUQ), which adopts the stochastic (sparse) variational deep kernel learning (DKL) method to enable uncertainty estimations for deep models. To avoid uncertainty collapse, the deep neural network in the GP kernel is regularized with spectral normalization, which ensures a bi-Lipschitz constraint. Experiments show that vDUQ is effective in uncertainty quantification tasks.", "1.\tIn the introduction, the author separately pointed out the issues of DUQ and DKL. However, these issues are not convincing as no citations or theoretical proof is provided in this paper. The notations in the intro are also not well-defined. X, x, x* are used without difference, which however should be clearly defined as vectors or matrices." ]
Building on recent advances in uncertainty quantification using a single deep deterministic model (DUQ), we introduce variational Deterministic Uncertainty Quantification (vDUQ). We overcome several shortcomings of DUQ by recasting it as a Gaussian process (GP) approximation. Our principled approximation is based on an inducing point GP in combination with Deep Kernel Learning. This enables vDUQ to use rigorous probabilistic foundations, and work not only on classification but also on regression problems. We avoid uncertainty collapse away from the training data by regularizing the spectral norm of the deep feature extractor. Our method matches SotA accuracy, 96.2% on CIFAR-10, while maintaining the speed of softmax models, and provides uncertainty estimates competitive with Deep Ensembles. We demonstrate our method in regression problems and by estimating uncertainty in causal inference for personalized medicine.
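The spectral norm regularized above is a layer's largest singular value, which is typically estimated with a few steps of power iteration. The following dependency-free sketch is illustrative only and is not the vDUQ implementation, which applies such an estimate to normalize the weights during training:

```python
# Power-iteration estimate of the spectral norm (largest singular value)
# of a matrix W, given as a list of rows. Assumes W is nonzero.
def spectral_norm(W, n_iter=100):
    rows, cols = len(W), len(W[0])
    v = [1.0] * cols
    for _ in range(n_iter):
        # one step of power iteration on W^T W
        u = [sum(W[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(W[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # ||W v|| for the converged right singular vector v
    u = [sum(W[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    return sum(x * x for x in u) ** 0.5
```

Dividing a weight matrix by this estimate caps its Lipschitz constant at 1, which is the mechanism behind the bi-Lipschitz constraint on the feature extractor.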
[]
[ { "authors": [ "Jason Abrevaya", "Yu-Chin Hsu", "Robert P Lieli" ], "title": "Estimating conditional average treatment effects", "venue": "Journal of Business & Economic Statistics,", "year": 2015 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky TQ Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "John Bradshaw", "Alexander G de G Matthews", "Zoubin Ghahramani" ], "title": "Adversarial examples, uncertainty, and transfer testing robustness in gaussian process hybrid deep networks", "venue": "arXiv preprint arXiv:1707.02476,", "year": 2017 }, { "authors": [ "Luitzen EJ Brouwer" ], "title": "Beweis der invarianz des n-dimensionalen gebiets", "venue": "Mathematische Annalen,", "year": 1911 }, { "authors": [ "Roberto Calandra", "Jan Peters", "Carl Edward Rasmussen", "Marc Peter Deisenroth" ], "title": "Manifold gaussian processes for regression", "venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),", "year": 2016 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sebastian Farquhar", "Michael Osborne", "Yarin Gal" ], "title": "Radial bayesian neural networks: Robust variational inference in big models", "venue": "arXiv preprint arXiv:1907.00865,", "year": 2019 }, { "authors": [ "Andrew YK Foong", "Yingzhen Li", "José Miguel Hernández-Lobato", "Richard E Turner" ], "title": "inbetween’uncertainty in bayesian neural networks", "venue": "Workshop on Uncertainty and Robustness in Deep Learning,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": 
"In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jacob R Gardner", "Geoff Pleiss", "David Bindel", "Kilian Q Weinberger", "Andrew Gordon Wilson" ], "title": "Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration", "venue": null, "year": 2018 }, { "authors": [ "Henry Gouk", "Eibe Frank", "Bernhard Pfahringer", "Michael Cree" ], "title": "Regularisation of neural networks by enforcing lipschitz continuity", "venue": "arXiv preprint arXiv:1804.04368,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "James Hensman", "Alexander Matthews", "Zoubin Ghahramani" ], "title": "Scalable variational gaussian process classification", "venue": null, "year": 2015 }, { "authors": [ "James Hensman", "Nicolas Durrande", "Arno Solin" ], "title": "Variational fourier features for gaussian processes", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Jennifer L. 
Hill" ], "title": "Bayesian nonparametric modeling for causal inference", "venue": "Journal of Computational and Graphical Statistics,", "year": 2011 }, { "authors": [ "Geoffrey E Hinton", "Russ R Salakhutdinov" ], "title": "Using deep belief nets to learn covariance kernels for gaussian processes", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold Smeulders", "Edouard Oyallon" ], "title": "i-revnet: Deep invertible networks", "venue": "arXiv preprint arXiv:1802.07088,", "year": 2018 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Andrew Jesson", "Sören Mindermann", "Uri Shalit", "Yarin Gal" ], "title": "Identifying causal effect inference failure with uncertainty-aware models", "venue": "arXiv preprint arXiv:2007.00163,", "year": 2020 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Miguel Lázaro-Gredilla", "Anibal Figueiras-Vidal" ], "title": "Inter-domain gaussian processes for sparse inference using inducing features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Miguel Lázaro-Gredilla", "Joaquin 
Quiñonero-Candela", "Carl Edward Rasmussen", "Anı́bal R Figueiras-Vidal" ], "title": "Sparse spectrum gaussian process regression", "venue": "The Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Jeremiah Zhe Liu", "Zi Lin", "Shreyas Padhy", "Dustin Tran", "Tania Bedrax-Weiss", "Balaji Lakshminarayanan" ], "title": "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness", "venue": null, "year": 2006 }, { "authors": [], "title": "Scalable Gaussian process inference using variational methods", "venue": "PhD thesis, University of Cambridge,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "David Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? 
evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg" ], "title": "Scikit-learn: Machine learning in python", "venue": "Journal of machine Learning research,", "year": 2011 }, { "authors": [ "Uri Shalit", "Fredrik D Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Lewis Smith", "Yarin Gal" ], "title": "Understanding measures of uncertainty for adversarial example detection", "venue": "Conference on Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "L Theis", "A van den Oord", "M Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In International Conference on Learning Representations (ICLR 2016),", "year": 2016 }, { "authors": [ "Michalis Titsias" ], "title": "Variational learning of inducing variables in sparse gaussian processes", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Joost van Amersfoort", "Lewis Smith", "Yee Whye Teh", "Yarin Gal" ], "title": "Uncertainty estimation using a single deep deterministic neural network", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Mark Van der Wilk", "Carl Edward Rasmussen", "James Hensman" ], "title": "Convolutional gaussian processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Andrew G Wilson", "Zhiting Hu", "Russ R Salakhutdinov", "Eric P Xing" ], "title": "Stochastic variational deep kernel learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { 
"authors": [ "Andrew Gordon Wilson", "Zhiting Hu", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Deep kernel learning", "venue": "In Artificial intelligence and statistics,", "year": 2016 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Yao Zhang", "Alexis Bellot", "Mihaela van der Schaar" ], "title": "Learning overlapping representations for the estimation of individualized treatment effects", "venue": "arXiv preprint arXiv:2001.04754,", "year": 2020 }, { "authors": [ "Behrmann" ], "title": "2019), in particular the convolutional version and also constrain batch normalization as described in (Gouk et al., 2018). We use 1 power iteration and use the lowest Lipschitz constant that still allows for good accuracy, which we found to be around 3 in practice. We increase the momentum of batch normalization to 0.99 (from 0.9 default) to reduce the variance of the running average estimator of the empirical", "venue": "For Spectral Normalization,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deploying machine learning algorithms as part of automated decision making systems, such as self driving cars and medical diagnostics, requires implementing fail-safes. Whenever the model is presented with a novel or ambiguous situation, it would not be wise to simply trust its prediction. Instead, the system should try to get more information or simply withhold or defer judgment. While significant progress has been made towards estimating predictive uncertainty reliably in deep learning (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017), there is no single method that is shown to work on large datasets in classification and regression without significant computation overheads, such as multiple forward passes. We propose Variational Deterministic Uncertainty Quantification (vDUQ), a method for obtaining predictive uncertainty in deep learning for both classification and regression problems in only a single forward pass.\nIn previous work, van Amersfoort et al. (2020) show that combining a distance aware decision function with a regularized feature extractor in the form of a deep RBF network, leads to a model (DUQ) that matches a softmax model in accuracy, but is competitive with Deep Ensembles for uncertainty on large datasets. The feature extractor is regularized using a two-sided gradient penalty, which encourages the model to be sensitive to changes in the input, avoiding feature collapse, and encouraging generalization by controlling the Lipschitz constant. This model, however, has several limitations; for example the uncertainty (a distance in feature space) cannot be interpreted probabilistically and it is difficult to disentangle aleatoric and epistemic uncertainty. 
Additionally, the loss function and centroid update scheme are not principled and do not extend to regression tasks.\nA probabilistic and principled alternative to deep RBF networks is the Gaussian Process (GP) in combination with Deep Kernel Learning (DKL) (Hinton & Salakhutdinov, 2008; Wilson et al., 2016b). DKL was introduced as a “best of both worlds” solution: apply a deep model on the training data and learn the GP in feature space, ideally getting the advantages of both models. In practice, however, DKL suffers from the same failure as deep RBF networks: the deep model is free to map out-of-distribution data close to the feature representation of the training data, removing the attractive properties of GPs with distance-sensitive kernels.\nUsing insights from DUQ, we are able to mitigate the problem of uncertainty collapse in DKL. In particular, we use direct spectral normalization (Gouk et al., 2018; Miyato et al., 2018) in combination with a ResNet (He et al., 2016), a variation that was suggested in Liu et al. (2020). The spectral normalization enforces smoothness, while the residual connections enforce sensitivity of the feature representation to changes in the input, obtaining a similar effect to the gradient penalty of DUQ. We use an inter-domain inducing point variational approximation of the GP predictive distribution (Lázaro-Gredilla & Figueiras-Vidal, 2009; Hensman et al., 2015), which places inducing points in feature space and therefore requires fewer inducing points than previous work (Wilson et al., 2016a). These two techniques combined speed up inference in the GP model and decouple it from the dataset size. We release our code (available at: anonymized-for-review) and hope that it will become a drop-in alternative for softmax models with improved uncertainty.\nIn Figure 1, we show how vDUQ and Deep Ensembles (Lakshminarayanan et al., 2017), the current state of the art for uncertainty quantification (Ovadia et al., 2019), perform on simple 1D regression. 
This task is particularly hard for deep networks, as shown in Foong et al. (2019). vDUQ shows the desired behavior of reverting back to the prior away from the data, while the Deep Ensemble extrapolates arbitrarily and confidently. In between the two sinusoids, the Deep Ensemble is certain while vDUQ increases its uncertainty.\nIn summary, our contributions are as follows:\n• We improve training a DKL model and, for the first time, match the accuracy and speed of training a deep network with a regular softmax output on standard vision benchmarks.\n• We demonstrate excellent uncertainty quantification in classification which matches or exceeds the state of the art on CIFAR-10, including ensembling approaches.\n• We show state-of-the-art performance on causal inference for personalized medicine, an exciting real-world application. This task requires calibrated uncertainty in regression to be able to defer treatment to an expert when uncertainty is high." }, { "heading": "2 BACKGROUND", "text": "Gaussian Processes (GPs) provide an interpretable, explainable and principled way to make predictions, and can work well even with little training data due to their use of Bayesian inference. In contrast to deep neural networks, GPs have high uncertainty away from the training data and on noisy inputs.\nThere are, however, two main issues with the standard GP setup: poor performance on high-dimensional inputs and inefficient computational scaling with large datasets. The poor performance on high-dimensional inputs is due to the fact that most standard shift-invariant kernels are based on the Euclidean distance, which is a poor metric for highly structured data like images (Theis et al., 2016). 
While kernels exist that better address this (Van der Wilk et al., 2017; Jacot et al., 2018), these are more computationally expensive and typically still underperform standard convolutional neural networks.\nDeep Kernel Learning (Calandra et al., 2016; Hinton & Salakhutdinov, 2008; Wilson et al., 2016b) is a way to combine the expressiveness of deep neural networks with the attractive properties of Gaussian Processes. The core idea is to use a deep neural network inside the kernel of a GP, k(x_i, x_j) → k(f_θ(x_i), f_θ(x_j)), where f_θ(·) is a deep neural network, such as a Wide ResNet (Zagoruyko & Komodakis, 2016) up to the last linear layer, parametrized by θ. The kernel k(·, ·) can be any of the standard kernels, such as the RBF or Matérn kernel. With the deep network it becomes possible to train the GP on datasets that contain high-dimensional points, such as images. As the deep network is unconstrained, uncertainty away from the data collapses (Bradshaw et al., 2017), and the model can be arbitrarily confident in regions where no data has been observed during training. This has been a major drawback to using DKL in practice. In Section 3 we provide a detailed discussion of this effect and ways to mitigate it.\nWhile DKL is a potentially powerful solution to overcome the problem of inexpressive kernels, it does not address the other scaling problem of Gaussian Processes: large datasets. The poor computational scaling comes from the fact that making predictions with an exact GP requires solving a linear system the size of the training data, namely computing K(X, X)^{-1} K(X, x∗), where K(X, X) is the kernel matrix evaluated on the training dataset X and K(X, x∗) the kernel between the training dataset and a test point x∗. This is a significant computational problem in its own right. 
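The deep-kernel construction and the exact-GP predictive mean described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: `feature_extractor` is a stand-in single-layer network for f_θ, and the RBF kernel and noise level are arbitrary choices.

```python
import numpy as np

def feature_extractor(X, W):
    # Stand-in for the deep network f_theta: a single tanh layer.
    return np.tanh(X @ W)

def rbf_kernel(A, B, lengthscale=1.0):
    # RBF kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def deep_kernel(Xi, Xj, W):
    # Deep Kernel Learning: k(x_i, x_j) -> k(f_theta(x_i), f_theta(x_j)).
    return rbf_kernel(feature_extractor(Xi, W), feature_extractor(Xj, W))

def exact_gp_mean(X, y, X_star, W, noise=1e-2):
    # Exact GP predictive mean: K(x*, X) [K(X, X) + noise * I]^{-1} y.
    # Solving this N-by-N system is what scales poorly with dataset size N.
    K = deep_kernel(X, X, W) + noise * np.eye(len(X))
    K_star = deep_kernel(X, X_star, W)
    return K_star.T @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))        # 50 training inputs
W = rng.normal(size=(3, 4))         # fixed "network" weights
y = np.sin(X[:, 0])                 # toy regression targets
mu = exact_gp_mean(X, y, X[:5], W)  # predictive mean at 5 inputs
```

The inducing-point approximation discussed next replaces the N-by-N solve with an m-by-m one over the inducing inputs.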
A powerful and principled alternative to exact GP inference is inducing point GP approximate inference, where only a small set of inducing points in the input space U is used to represent the entire training set, reducing the linear system to be solved to K(u, u)^{-1} K(u, x∗). The new linear system is m by m, where m is the number of inducing points, rather than N by N. Finding the optimal set of points, which are not necessarily part of our training set, can be done by treating them as variational parameters and maximizing a lower bound on the marginal log-likelihood (Titsias, 2009). We follow the variational inducing point formulation of SVGPC (Hensman et al., 2015) as implemented in GPyTorch (Gardner et al., 2018)." }, { "heading": "3 METHOD", "text": "vDUQ is an instance of Deep Kernel Learning, combining a constrained deep neural network with an inducing point GP. The model is learned end-to-end using variational inference by means of gradient descent; the training procedure is summarized in Algorithm 1.\nAlgorithm 1: Training vDUQ.\nInitialization:\n- Residual NN f_θ : x → R^d with feature space dimensionality d and parameters θ initialized using He et al. (2015).\n- Approximate GP with variational parameters φ and number of inducing points m.\nUsing a random subset of k points of our training data, X^init ⊂ X, compute:\n- Initial inducing points: K-means on f(X^init) with K = m. Use the found centroids as initial inducing point locations in the GP.\n- Initial length scale: l = (1 / C(k, 2)) ∑_{i=0}^{k} ∑_{j=i+1}^{k} ‖f(X^init_i) − f(X^init_j)‖_2, i.e., the mean pairwise distance in feature space.\n1: for minibatch B_x = {x_1, ..., x_b} from X and B_y = {y_1, ..., y_b} from Y do\n2: θ′ ← spectral_normalization(θ)\n3: ψ ← f_θ′(B_x) ▷ Compute feature representation\n4: p(Y′|B_x) ← GP_φ(ψ) ▷ Compute posterior over labels, implemented in GPyTorch\n5: L ← ELBO_φ(p(Y′|B_x), B_y) ▷ Compute loss, implemented in GPyTorch\n6: φ, θ ← φ, θ + η · ∇_{φ,θ} L ▷ η the learning rate; alternatively use ADAM\n7: end for\nIn this section we discuss how vDUQ overcomes previous problems with collapsing uncertainty in DKL (Bradshaw et al., 2017) and explain how to learn the model with no pre-training, few inducing points, and a standard minibatch size, avoiding some shortcomings of Wilson et al. (2016a).\nWithout restrictions, the deep network inside the kernel is free to map input points that are far away from the training distribution to a feature representation that resembles those of data points in the training distribution. This behavior is also exhibited in standard neural networks (Smith & Gal, 2018) and is sometimes referred to as feature collapse. We visualize this in Figure 2, where we map a 2D input space into feature space. With the feature space of the unconstrained model it is impossible to recover uncertainty: many points are collapsed on top of each other.\nIn DUQ (van Amersfoort et al., 2020), the authors show that it is possible to reduce feature collapse by enforcing two constraints on the model: sensitivity and smoothness. Sensitivity implies that when the input changes the feature representation also changes: this means the model cannot simply collapse feature representations arbitrarily. Smoothness means small changes in the input cannot cause massive shifts in the output. This appears to help optimization, and ensures the feature space accords with the implicit assumptions that, for example, RBF kernels make about the data. We discuss a connection between these two requirements, sensitivity and smoothness, and bi-Lipschitz functions in Section 6.\nThere are a number of methods proposed in the literature that attempt to satisfy these constraints, and each comes with different trade-offs:\n• Two-sided gradient penalty: In DUQ (van Amersfoort et al., 2020), the properties are achieved by regularizing with a two-sided gradient penalty that penalizes the squared difference between the gradient norm and a fixed target value at every input point. 
This approach is easy to implement, but is not guaranteed to work, and in practice both the stability of training and its effectiveness as a regularizer can be fairly sensitive to the weighting of this penalty.\n• Direct spectral normalization and residual connections: spectral normalization (Miyato et al., 2018; Gouk et al., 2018) on the weights leads to smoothness, and it is possible to combine this with an architecture that contains residual connections for the sensitivity constraint (Liu et al., 2020). This method is faster than the gradient penalty and, in practice, a more effective way of mitigating feature collapse.\n• Reversible model: A reversible model is constructed by using reversible layers and avoiding any downscaling operations (Jacobsen et al., 2018; Behrmann et al., 2019). This approach can guarantee that the overall function is bi-Lipschitz, but the resulting model consumes considerably more memory and can be difficult to train.\nIn this work, we use direct spectral normalization and residual connections, as we find this combination to be more stable than a direct gradient penalty and significantly more computationally efficient than reversible models. In Figure 2, we show that a constrained model is unable to collapse points on top of each other in feature space, enabling the GP to correctly quantify uncertainty. Using a stationary kernel, the model then reverts back to the prior away from the training data, just like a standard GP.\nThe regularized feature extractor allows us to offload computational complexity from the GP onto the deep model without sacrificing uncertainty. An expressive deep model is able to find a feature representation which generalizes across all intra-class variation, and thus clusters inputs of the same class in feature space. 
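The spectral normalization plus residual connection recipe chosen above can be sketched as follows. This is an illustrative NumPy sketch (power iteration in the spirit of Miyato et al. (2018)): `coeff` plays the role of the Lipschitz factor, and `residual_block` is a hypothetical one-layer residual map, not the actual ResNet used in the paper.

```python
import numpy as np

def spectral_norm(W, n_iters=100):
    # Estimate the largest singular value of W by power iteration.
    u = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    v = np.ones(W.shape[1]) / np.sqrt(W.shape[1])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

def normalize(W, coeff=3.0):
    # Rescale W so its spectral norm is at most `coeff`,
    # bounding the layer's Lipschitz constant from above (smoothness).
    sigma = spectral_norm(W)
    return W * min(1.0, coeff / sigma)

def residual_block(x, W):
    # x + g(x): the residual connection keeps the map sensitive to its
    # input (a lower Lipschitz bound), while normalizing W bounds it above.
    return x + np.tanh(normalize(W) @ x)
```

In the experiments this normalization is applied to every convolutional layer of the ResNet (and, as Section 3.1 argues, to batch normalization as well).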
We define the inducing points in feature space instead of input space (also known as an inter-domain approximation) (Lázaro-Gredilla & Figueiras-Vidal, 2009; Hensman et al., 2017), which takes advantage of this clustering to reduce the number of inducing points required. In practice, we find that only very few inducing points, the number of classes in the case of classification, are necessary to obtain a well-performing GP, and we find that similarly low numbers (in the 10 to 100 range) work well for regression, which means that solving the linear system is fast and the GP has minimal overhead compared to a softmax output. The fact that we can use few inducing points is not true in general for DKL (e.g. Wilson et al. (2016a) use several hundred points in input space), but requires the extra restrictions we have placed on our feature extractor to avoid pathological feature collapse. We also find we can train with standard minibatch sizes of 128, while previous work used 5,000 on CIFAR-10 to combat gradient variance (Wilson et al., 2016a). It is important to note that in DKL, and also in this paper, we are not being Bayesian about our neural network parameters θ, so we only need a single pass through the feature extractor both when training and during inference." }, { "heading": "3.1 SPECTRAL NORMALIZATION IN BATCH NORMALIZATION LAYERS", "text": "When applying spectral normalization to the parameters of a deep model, it is important to note that batch normalization, a crucial component of training deep ResNets, has a non-trivial Lipschitz constant. In particular, since batch normalization transforms the input, using scale and shift parameters γ and β, according to\nx_out = diag(γ / √Var(x)) (x − E[x]) + β, (1)\nit has a Lipschitz constant of max_i |γ_i / √Var(x)_i| (Gouk et al., 2018). Using the above equation, we can extend spectral normalization to batch normalization by dividing the weight γ of the batch normalization by the (scaled) Lipschitz constant. 
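The batch normalization constraint just described, dividing γ by the (scaled) Lipschitz constant max_i |γ_i / √Var(x)_i|, can be sketched as follows. The function names and the `coeff` parameter are illustrative, not taken from the paper's code.

```python
import numpy as np

def bn_lipschitz(gamma, running_var, eps=1e-5):
    # Lipschitz constant of batch normalization (Gouk et al., 2018):
    # max_i |gamma_i / sqrt(Var(x)_i)|, with eps for numerical stability.
    return float(np.max(np.abs(gamma) / np.sqrt(running_var + eps)))

def constrain_bn(gamma, running_var, coeff=3.0, eps=1e-5):
    # Divide gamma by the (scaled) Lipschitz constant so that the
    # layer's Lipschitz constant does not exceed `coeff`.
    L = bn_lipschitz(gamma, running_var, eps)
    return gamma * min(1.0, coeff / L)
```

For example, with γ = (2, 4) and running variances (1, 4) the layer has Lipschitz constant 2, so constraining to `coeff = 1` halves γ.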
In practice, we find empirically that typical batch normalization layers in trained ResNets have a relatively high Lipschitz constant, up to around 12, with 95% having a Lipschitz constant greater than one (see also Figure 5 in the Appendix). This is counter to the claim of Liu et al. (2020) that batch normalization reduces the Lipschitz value of the network. Following these claims, Liu et al. (2020) only constrain the convolutional layers, which, as we demonstrate empirically, results in models that are less sensitive to changes in the input, violating the constraint introduced in DUQ. Unless otherwise noted, we apply spectral normalization to both the convolutional and the batch normalization layers. For convolutional layers with 1 × 1 filters we use exact normalization, while for larger filters we use an approximation, implemented originally by Behrmann et al. (2019)." }, { "heading": "4 RELATED WORK", "text": "Currently, the simplest and most effective way to obtain uncertainty in deep learning classification is ensembling (Lakshminarayanan et al., 2017). Despite its simplicity, it has been shown to be remarkably effective (Ovadia et al., 2019), outperforming alternatives such as MC Dropout (Gal & Ghahramani, 2016) and mean-field variational inference Bayesian Neural Networks (Farquhar et al., 2019), at the expense of having to train and evaluate multiple models. An important issue with Deep Ensembles is enforcing diversity; while on large and complicated datasets it is sufficient to use a different initialization and data order, this is not sufficient on easier or smaller datasets, as we highlight in Figure 1.\nvan Amersfoort et al. (2020) demonstrate empirically with DUQ that it is possible to quantify uncertainty in deep learning using only a single deterministic model. 
SNGP (Liu et al., 2020) builds on DUQ’s ideas of a distance-aware prediction function and spectral regularization, and implements them by using a deep neural network with a random Fourier features (RFF) approximate GP and direct spectral normalization on the convolutional weights (but not on the weights of batch normalization). While they obtain impressive results, it is important to note that the RFF approximation makes the GP parametric, meaning there is now a fixed number of basis functions used in the prediction, which can lead to unreliable uncertainty. In contrast, our use of variational Bayes in the form of SVGPC preserves the non-parametric property and uses infinitely many basis functions; it is an approximation to the true GP posterior, whereas Liu et al. (2020)’s random Fourier expansion leads to a different model which converges in expectation to the true model as the number of random Fourier features tends to infinity. This is discussed in Section 6 of Lázaro-Gredilla et al. (2010), who comment that the RFF approximation is at risk of overfitting, while the variational inducing point approximation is safeguarded from it, as the inducing point locations are parameters of the variational approximation (Titsias, 2009). Furthermore, the RFF approximation is only tractable for the RBF kernel and significantly more expensive for other kernels, restricting the flexibility of the model. Finally, although in principle the method of Liu et al. (2020) could be adapted to regression in a straightforward way, they do not investigate this. Our experiments demonstrate the applicability of this kind of model both on standard classification benchmarks and on uncertainty-sensitive regression tasks, such as causal inference." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 REGRESSION AND IN-BETWEEN UNCERTAINTY", "text": "In Figure 1, we show results on a 1D dataset created from sinusoids. For vDUQ, we use spectral normalization, a Matérn kernel with ν = 1/2, and 20 inducing points. 
This kernel is known to lead to less smooth learned functions, which causes the uncertainty to increase quickly where there is no data. Full experimental details are provided in Appendix A.\nThe Deep Ensemble extrapolates incorrectly as x becomes more negative: at x = −15 the prediction is far away from anything seen in the training data, yet the model still has low uncertainty. Meanwhile, vDUQ reverts to the prior directly at the edge of the training data domain, which is the ideal behavior in this regression setting. In Deep Ensembles (Lakshminarayanan et al., 2017), it was suggested to let the model predict both the mean and the standard deviation. This standard deviation only measures the aleatoric uncertainty (see also Chua et al. (2018) for a discussion), which is the uncertainty arising from noise inherent in the data. In this dataset there is only minimal noise, and therefore the only way to assess the uncertainty is to compute the variance of the mean predictions. In contrast, vDUQ’s posterior variance is directly interpretable as the predictive uncertainty (also known as the total uncertainty). In the Appendix, Figure 4, we visualize sampling from the posterior, which highlights the non-smoothness of the kernel choice." }, { "heading": "5.2 CLASSIFICATION ON TWO MOONS", "text": "We show results on the Two Moons dataset (Pedregosa et al., 2011) for three different models: a standard softmax model, vDUQ, and a variation where the spectrally normalized ResNet is replaced by a fully connected model (similar to Bradshaw et al. (2017)). For experimental details see Appendix A.\nOn this simple dataset, a multi-layer neural network easily achieves 100% accuracy with a single, non-linear decision boundary going between the two moons. However, these models exhibit complete certainty everywhere in the input domain, as shown in Figure 3a, where yellow marks the certain regions and blue the uncertain ones. 
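The Matérn kernel with ν = 1/2 used in the regression experiment reduces to the exponential kernel k(r) = σ² exp(−r/ℓ), whose sample paths are continuous but not differentiable, consistent with the non-smoothness noted above. A minimal 1D sketch (function name and defaults are illustrative, not from the paper's code):

```python
import numpy as np

def matern12(xa, xb, lengthscale=1.0, variance=1.0):
    # Matern kernel with nu = 1/2, i.e. the exponential kernel:
    # k(r) = variance * exp(-r / lengthscale), with r = |xa - xb|.
    r = np.abs(xa[:, None] - xb[None, :])
    return variance * np.exp(-r / lengthscale)
```

Because correlations decay as exp(−r/ℓ), the posterior reverts to the prior within a few length scales of the training data, which is exactly the fast increase in uncertainty seen in Figure 1.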
The uncertainty is computed using the entropy of the class prediction: we model the problem as a two-class classification problem. vDUQ (in Figure 3c), on the other hand, quantifies uncertainty exactly as one would expect for the Two Moons dataset: certain on the training data, uncertain away from it and in between the two half-moons. Figure 3b highlights the importance of our contribution. When using DKL with a deep network without the right constraints, the model generalizes beyond the training distribution and is certain even away from the training data." }, { "heading": "5.3 CIFAR-10 AND SVHN", "text": "In this section we look at training a large model, the Wide Residual Network, on CIFAR-10 (Krizhevsky et al., 2009). For vDUQ, we train the large ResNet end-to-end with the GP; this is in contrast with prior work that used a pre-trained model (Bradshaw et al., 2017), which limits the ability to apply the method to new domains. We follow the experimental setup of Zagoruyko & Komodakis (2016), and use a 28-layer model with BasicBlocks and dropout. Interestingly, we can follow the hyperparameter suggestions (such as dropout rate, learning rate, and optimizer) directly when training vDUQ, and no special care is necessary for the variational parameters. We remove the final linear layer of the model, and the resulting 640-dimensional feature vector is directly used in the GP. We train using only 10 inducing points, which are shared among the independent output GPs (although q(u) is different), and use a Lipschitz factor of 3. This means the model is fast: going through one epoch is just 3% slower than using a softmax output. Further experimental details are discussed in Appendix A. We provide an ablation study of the number of inducing points in Appendix C.\nResults are shown in Table 1. vDUQ without any spectral normalization matches the accuracy of the WRN, but the uncertainty is only slightly better than a softmax model. 
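The entropy of the class prediction used as the uncertainty score above can be sketched as follows (illustrative helper, not from the paper's code); for a two-class problem the entropy is maximal, log 2, at p = 0.5 and zero for a fully confident prediction:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # Entropy (in nats) of each row of class probabilities.
    # High entropy = uncertain prediction; zero entropy = fully confident.
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)
```

The same score works for the multi-class CIFAR-10 setting, where each row of `probs` is the predictive class distribution averaged over posterior samples.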
With spectral normalization the accuracy drops slightly, as expected, but the uncertainty improves significantly. Note that SVDKL (Wilson et al., 2016a), our most direct baseline, obtains just 91% accuracy on CIFAR-10 using a ResNet-20 (for which the softmax accuracy is 94%, see also van Amersfoort et al. (2020)). Meanwhile, convolutional GPs (Van der Wilk et al., 2017) obtain 64.6% accuracy while using 1,000 inducing points. We perform an ablation where we train vDUQ with spectral normalization only on the convolutions, similar to Liu et al. (2020), and we obtain an AUROC of 0.93 ± 0.003 and accuracy matching the model without spectral normalization. This provides further evidence that the batch normalization layers undo the effect and need to be properly normalized.\n2Obtained using the author’s open source implementation, which is available at https://github.com/google/uncertainty-baselines" }, { "heading": "5.4 CAUSAL INFERENCE", "text": "Personalized healthcare is an exciting application of machine learning, where the efficacy of a treatment is predicted based on characteristics of the individual using a model trained on previously treated patients. An individual’s response to treatment can only be known if they are a member of a group represented in the data and if there is prescription diversity in the group: treatment and no treatment. Jesson et al. (2020) show that measures of uncertainty can identify when personalized causal-effect inference fails due to such factors. Uncertainty can then define policies for deferring treatment recommendations to an expert when there is insufficient knowledge about a person. The uncertainty estimates must be correct, or else individuals would receive treatment recommendations even if their response to treatment is not known, which can result in undue stress, financial burdens, or worse.
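Such a deferral policy can be sketched independently of the underlying model: given a per-individual effect estimate and its uncertainty, recommendations for the most uncertain fraction are withheld. A minimal numpy sketch (the fixed deferral fraction is an illustrative choice):

```python
import numpy as np

def defer_by_uncertainty(cate, uncertainty, defer_frac=0.1):
    """Withhold recommendations for the defer_frac most uncertain individuals.

    Returns (recommend, deferred): boolean masks over individuals, where
    recommend is True for treat-recommended individuals (estimated effect > 0)
    that were not deferred to an expert.
    """
    n_defer = int(np.ceil(defer_frac * len(cate)))
    order = np.argsort(uncertainty)          # ascending: most certain first
    deferred = np.zeros(len(cate), dtype=bool)
    if n_defer > 0:
        deferred[order[-n_defer:]] = True    # defer the most uncertain
    recommend = (cate > 0) & ~deferred
    return recommend, deferred
```

Quality of the policy then hinges entirely on whether high uncertainty actually coincides with large estimation error, which is what the experiments below evaluate.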
In this section we evaluate vDUQ on the task of inferring the personalized causal effects of a binary treatment t on an individual x via the conditional average treatment effect: CATE(x) := E[Yt=1(x) − Yt=0(x)] (Abrevaya et al., 2015). Intuitively, an individual would be recommended treatment if CATE(x) > 0. We show that vDUQ yields uncertainty estimates that lead to safer and more data-efficient policies for withholding treatment recommendations than alternative methods, significantly improving on previous state-of-the-art results.\nWe use the semi-synthetic IHDP (Hill, 2011), IHDP Cov. and CEMNIST datasets to assess the uncertainty estimates of vDUQ on this causal inference task, following the experimental setup in Jesson et al. (2020). IHDP and IHDP Cov. are regression datasets with only ∼750 data points, while CEMNIST is a binary classification dataset with 10k points. Treatment-effect recommendations are deferred to an expert either at random, or based on the uncertainty in the CATE estimate. Recommendations are deferred for 10% of the cases in the IHDP dataset, and 50% of the IHDP Cov. and CEMNIST datasets. We report the root expected Precision in Estimation of Heterogeneous Effect (Hill, 2011) (√PEHE) to assess the error on the remaining recommendations (lower is better). Table 2 summarizes our results and shows that vDUQ has improved performance and uncertainty estimates better suited to rejection policies than other uncertainty-aware methods. In particular, we improve on DKLITE (Zhang et al., 2020), the baseline related to our method: an alternative deep kernel learning method designed specifically for CATE inference." }, { "heading": "6 LIMITATIONS", "text": "Even though demonstrated to work well empirically above, a theoretical justification of the spectral normalization in combination with residual connections is still lacking. Liu et al.
(2020) propose an explanation based on the fact that spectral normalization, in combination with residual architectures, induces a bi-Lipschitz constraint, which is defined as:\n(1/K) dX(x1, x2) ≤ dY(fθ(x1), fθ(x2)) ≤ K dX(x1, x2). (2)\nFrom that constraint it follows that if we have two metric spaces X and Y, with a continuous function f : X → Y between them which is bijective, and f is bi-Lipschitz, then metric properties of X like boundedness and completeness have to be preserved by Y, and distances in X can be changed only by a bounded factor in Y. In this sense, a bi-Lipschitz map between X and Y can be considered to be “approximately” distance preserving. Liu et al. (2020) argue that this means that applying this penalty to a Wide ResNet causes it to be a distance preserving mapping. However, the proof they provide is only valid for spectral normalization with coefficients less than 1, whereas empirically we find that exceeding 1 has similar effectiveness in terms of uncertainty and is easier to optimize. This suggests that the proof does not capture the underlying mechanism of the spectral normalization. In addition, this argument is not applicable to models that do any form of downsampling, as it is straightforward to demonstrate the following observation (which we do not claim to be novel): Proposition 1. Let f be a function from Rn to Rm, with m < n. Then f is not bi-Lipschitz.\nWe provide a proof in Appendix B; it suffices to show that a bi-Lipschitz function is necessarily a homeomorphism, and so cannot exist between spaces of different dimensionality.\nWe thus consider explanations of the effectiveness of spectral normalization and residual connections based on the assumption that the bi-Lipschitz condition holds insufficient if the network uses dimensionality reduction, which most feature extraction architectures do. Some progress towards a well-performing bijective deep model was made in Behrmann et al.
(2019); however, the iResNet is significantly slower and more computationally expensive than the WRN, and we were unable to achieve the performance we report here without using dimension-reducing layers. Despite these open theoretical questions, we have so far not found a practical example where the uncertainty quality was poor, and our empirical results call for further theoretical study." }, { "heading": "7 CONCLUSION", "text": "We present vDUQ, a practical probabilistic method that accommodates deep architectures, and we establish its efficacy in 2D, on a large image dataset, and in causal inference for personalized medicine, obtaining or matching state-of-the-art results in each. Good uncertainty and fast inference in both classification and regression make vDUQ a compelling plug-in option for many applications, such as exploration in reinforcement learning, active learning, and many real-world tasks. Exciting directions for future research include a rigorous theory of why spectral normalization works despite downsampling, or alternatively a computationally efficient way to enforce a bi-Lipschitz constraint." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "A.1 2D EXPERIMENTS\nWe perform our 2D experiments using a simple feed-forward ResNet. The first linear layer maps from the input to the feature representation and does not have an activation function. From there on, the model is a ResNet, x′ = x + f(x), with f(·) a combination of a linear mapping and a ReLU activation function. The linear mapping has optional spectral normalization, for which we use the implementation of Behrmann et al. (2019). We use the SGD optimizer for regression and Adam for the two moons, with learning rate 0.01, and we use 4 layers with 128 features. For the toy regression, we use a max Lipschitz constant of 0.95, one power iteration, and 20 inducing points.
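The block described above, x′ = x + f(x) with a spectrally normalized linear map, can be sketched in numpy: power iteration estimates the largest singular value of W, and W is rescaled whenever that estimate exceeds the target Lipschitz coefficient c. This is a simplified illustration; the actual implementation operates on the framework's layers and reuses one power-iteration step per update:

```python
import numpy as np

def spectral_norm(W, n_iter=50, seed=0):
    """Estimate sigma_max(W) by power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    return float(u @ W @ v)

def residual_block(x, W, c=0.95):
    """x' = x + relu(x @ W_sn): with sigma(W_sn) <= c, f is c-Lipschitz,
    so the block's Lipschitz constant lies in [1 - c, 1 + c]."""
    W_sn = W * min(1.0, c / spectral_norm(W))
    return x + np.maximum(x @ W_sn, 0.0)
```

Since ReLU is 1-Lipschitz, constraining the linear map to Lipschitz constant c < 1 is what gives the residual block its non-vanishing lower Lipschitz bound of 1 − c.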
For the two moons, we set the noise level to 0.1 and use five power iterations, a Lipschitz constant of 0.65, and four inducing points. For the Deep Ensemble in Figure 1 we train 10 separate models, using a different initialization and data order for each, and train to minimize the squared error.\nA.2 WIDE RESNET EXPERIMENTS\nFor the WRN, we follow the experimental setup and implementation of Zagoruyko & Komodakis (2016). This means that for CIFAR-10, we use depth 28 with widen factor 10 and BasicBlocks with dropout.\nWe train for the prescribed 200 epochs with batch size 128, starting with learning rate 0.1 and dropping with a factor of 0.2 at epochs 60, 120 and 160. We use the full training set and take the model at epoch 200 (no early stopping).\nFor spectral normalization, we again use the implementation of Behrmann et al. (2019), in particular the convolutional version, and also constrain batch normalization as described in Gouk et al. (2018). We use one power iteration and use the lowest Lipschitz constant that still allows for good accuracy, which we found to be around 3 in practice. We increase the momentum of batch normalization to 0.99 (from the 0.9 default) to reduce the variance of the running average estimator of the empirical feature variance, which can be a source of instability (Gouk et al., 2018).\nA.3 CAUSAL EXPERIMENTS\nFollowing Shalit et al. (2017), we use 63/27/10 train/validation/test splits and report the PEHE evaluated on the test set over 1000 and 20 trials for the IHDP and CEMNIST datasets, respectively. For each trial, we train for a maximum of 750 epochs and evaluate the model with the lowest ELBO evaluated over the validation set. We employ Adam optimization with a learning rate of 0.001 and batch size of 100.\nThe feature extractor uses a feed-forward ResNet architecture with three 200-unit hidden layers and ELU activations. Dropout is applied after each activation at a rate of 0.1.
The feature extractor takes the individual x and treatment t as input. For the CEMNIST experiment, a depth-3 CNN ResNet is used to extract features from the image, which is then passed as an input to the above architecture. Spectral normalization is used on all layers of the feature extractor. We use a Matérn kernel with ν = 1/2, 200 inducing points, and a smoothed box prior on the lengthscales with range (exp(−1), exp(1)). For the DKLITE experiments we use the open source code from the authors available at https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/dklite/. We write a custom loop over the IHDP dataset to follow the above protocol. For DKLITE on CEMNIST, in initial experimentation we found that the method did not adapt well to images, so we omit the comparison from the table. We make this adaptation available at anonymized-for-review.\nA.4 INDUCING POINT GP\nWe initialize the inducing points by doing k-means on the feature representation of 1,000 points. We compute the initial length scale by taking the pairwise Euclidean distance between the feature representations of the 1,000 points. We whiten the inducing points before training, as suggested in Matthews (2017). The kernel parameters, such as the length scale and output scale, are different per GP. We implement it using GPyTorch (Gardner et al., 2018) and use their default values if not otherwise specified." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "This proof makes no claim to novelty, but is provided for completeness.\nProof of proposition 1. We can show that this follows from the fact that the metric spaces Rm and Rn are not homeomorphic (topologically equivalent) to one another (Brouwer, 1911). While this statement is extremely intuitive, proving it is surprisingly technical, so we will take it as given.\nFirst, we prove the following lemma.\nLemma 1.
Let f : X → Y be a bi-Lipschitz and onto function, so (1/L)||x1 − x2||X ≤ ||f(x1) − f(x2)||Y ≤ L||x1 − x2||X, and Y is the image of X under f. Then f is a homeomorphism between X and Y, and so X and Y are homeomorphic.\nProof. Recall that a function f is a homeomorphism if f is bijective, f is continuous, and f−1 is also continuous. We will address these in turn. To see that f is bijective, note that f is injective iff for all x1, x2 ∈ X, we have x1 ≠ x2 ⟹ f(x1) ≠ f(x2). But this follows directly from the lower Lipschitz property of f, since if x1 ≠ x2, then ||x1 − x2||X > 0, so ||f(x1) − f(x2)||Y > 0, from which it follows that f(x1) ≠ f(x2). Since f is injective (one-to-one) and onto, it is a bijection. Since any function which is Lipschitz continuous is also continuous, the fact that f is continuous is given. Since f is bijective, the inverse function f−1 exists, and we need to show that it is continuous. We have, from the bi-Lipschitzness of f, that (1/L)||f−1(f(x1)) − f−1(f(x2))||X ≤ ||f(x1) − f(x2)||Y, which implies that the inverse function is also Lipschitz, and hence also continuous. So f is a homeomorphism, and we are done.\nWe have therefore established that if a bi-Lipschitz (and onto) function exists between two spaces X and Y, then X and Y are homeomorphic. This provides a proof of our claim by contradiction; if a function f : Rn → Rm existed and was bi-Lipschitz, then we would have shown that Rn and Rm were homeomorphic, since any bi-Lipschitz function is a homeomorphism. But Rn and Rm are not homeomorphic unless m = n, so no such function can exist." }, { "heading": "C ABLATION OF NUMBER OF INDUCING POINTS", "text": "In Table 3, we show the results of increasing the number of inducing points. While more inducing points generally help, the differences are minimal." }, { "heading": "D SAMPLING FROM THE JOINT POSTERIOR", "text": "In Figure 4, we show samples from the joint predictive posterior.
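The joint draws shown in Figure 4 follow the standard GP posterior formulas: a single Cholesky factor of the joint predictive covariance produces correlated function samples. A minimal numpy sketch with an RBF kernel (the kernel and noise level are illustrative, not the experimental settings):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """RBF kernel matrix between 1-D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and full covariance of a zero-mean GP at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

def sample_joint(mean, cov, n_samples=5, seed=0):
    """Correlated function draws via one Cholesky of the joint covariance."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(mean)))
    return mean + (L @ rng.normal(size=(len(mean), n_samples))).T
```

The key point is that `sample_joint` uses the full covariance matrix, not just its diagonal, so each row is a coherent function draw across all test points.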
This type of sampling is not possible with standard deep learning models, because no covariance is modelled between the batch elements. Possible applications include batch active learning, where we want to estimate a joint posterior (Kirsch et al., 2019), and deep exploration in RL, where we can effectively sample a full policy (Osband et al., 2016)." } ]
2020
null
SP:012aea2b0a756ae1714eb20ac4fdb723a644ee8f
[ "This paper studies why training with looser bounds (IBP) can outperform tighter linear relaxation based methods in certified defense. The authors argue that this is because IBP has a smoother loss landscape compared to linear relaxation based methods. Then the paper proposes to optimize the lower bound in the CROWN relaxation for unstable ReLU neurons during training, for tighter bounds and a smoother loss landscape.", "In this paper, the authors studied the role of loss landscape in training certifiable robust models. The authors reviewed linear relaxation based methods, and showed that Interval Bound Propagation (IBP) is a special case of linear relaxation based methods. Although linear relaxation based methods have a tighter bound on worst case loss with adversarial perturbations than IBP based method, the authors found in numerical studies that towards the end of training, IBP outperforms linear relaxation based methods. The authors hypothesized that this was because IBP loss landscape was more smooth, which helped optimization. The authors demonstrated in a theorem that IBP loss was indeed more smooth under certain assumptions. Based on this insight, the authors proposed a favorable landscape method. The authors showed in numerical studies that the sum over the worst-case margin for each class is lowest for their method. The loss of their method is also the most smooth among competing methods. Their method achieved a consistent performance in a range of perturbations, which is not achieved in competing methods." ]
In this paper, we study the problem of training certifiably robust models. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models. However, many studies have shown that Interval Bound Propagation (IBP) training uses much looser bounds but outperforms other models that use tighter bounds. We identify another key factor that influences the performance of certifiable training: smoothness of the loss landscape. We consider linear relaxation-based methods and find significant differences in the loss landscape across these methods. Based on this analysis, we propose a certifiable training method that utilizes a tighter upper bound and has a landscape with favorable properties. The proposed method achieves performance comparable to state-of-the-art methods under a wide range of perturbations.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Mislav Balunovic", "Martin Vechev" ], "title": "Adversarial training and provable defenses: Bridging the gap", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Akhilan Boopathy", "Tsui-Wei Weng", "Pin-Yu Chen", "Sijia Liu", "Luca Daniel" ], "title": "Cnn-cert: An efficient framework for certifying robustness of convolutional neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned verifiers", "venue": "arXiv preprint arXiv:1805.10265,", "year": 2018 }, { "authors": [ "Rida T Farouki" ], "title": "The bernstein polynomial basis: A centennial retrospective", "venue": "Computer Aided Geometric Design,", "year": 2012 }, { "authors": [ "Timur Garipov", "Pavel 
Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Second-order adversarial attack and certifiable robustness", "venue": "arXiv preprint arXiv:1809.03113,", "year": 2018 }, { "authors": [ "Zhaoyang Lyu", "Ching-Yun Ko", "Zhifeng Kong", "Ngai Wong", "Dahua Lin", "Luca Daniel" ], "title": "Fastened crown: Tightened neural network robustness certificates", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", 
"Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "arXiv preprint arXiv:1801.09344,", "year": 2018 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hadi Salman", "Jerry Li", "Ilya Razenshteyn", "Pengchuan Zhang", "Huan Zhang", "Sebastien Bubeck", "Greg Yang" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Matthew Mirman", "Markus Püschel", "Martin Vechev" ], "title": "Fast and effective robustness certification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Püschel", "Martin Vechev" 
], "title": "Boosting robustness certification of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Püschel", "Martin Vechev" ], "title": "An abstract domain for certifying neural networks", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Vincent Tjeng", "Kai Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "arXiv preprint arXiv:1711.07356,", "year": 2017 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Duane Boning", "Inderjit S Dhillon", "Luca Daniel" ], "title": "Towards fast computation of certified robustness for relu networks", "venue": "arXiv preprint arXiv:1804.09699,", "year": 2018 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kai Y Xiao", "Vincent Tjeng", "Nur Muhammad Shafiullah", "Aleksander Madry" ], "title": "Training for faster adversarial robustness verification via inducing relu 
stability", "venue": "arXiv preprint arXiv:1809.03008,", "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan L Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Sven Gowal", "Robert Stanforth", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards stable and efficient training of verifiably robust neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Pu Zhao", "Pin-Yu Chen", "Payel Das", "Karthikeyan Natesan Ramamurthy", "Xue Lin" ], "title": "Bridging mode connectivity in loss landscapes and adversarial robustness", "venue": "arXiv preprint arXiv:2005.00060,", "year": 2020 }, { "authors": [ "Zhang" ], "title": "Loss and training schedules For general training schedules, we refer to Appendix C, D of Zhang et al. (2019b) with a single GPU (Titan Xp). We use the following mixed cross-entropy loss", "venue": null, "year": 2019 }, { "authors": [ "Gowal" ], "title": "2018), we also train with train = 1.1 test on CIFAR-10. The results are shown in Table 9. They attain slightly improved performances in 2/255", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the success of deep learning in many applications, the existence of adversarial examples, imperceptibly modified inputs designed to fool a neural network (Szegedy et al., 2013; Biggio et al., 2013), hinders the application of deep learning to safety-critical domains. There has been increasing interest in building a model that is robust to adversarial attacks (Goodfellow et al., 2014; Papernot et al., 2016; Kurakin et al., 2016; Madry et al., 2018; Tramèr et al., 2017; Zhang et al., 2019a; Xie et al., 2019). However, most defense methods evaluate their robustness with adversarial accuracy against predefined attacks such as the PGD attack (Madry et al., 2018) or the C&W attack (Carlini & Wagner, 2017). Thus, these defenses can be broken by new attacks (Athalye et al., 2018).\nTo this end, many training methods have been proposed to build a certifiably robust model that can be guaranteed to be robust to adversarial perturbations (Hein & Andriushchenko, 2017; Raghunathan et al., 2018b; Wong & Kolter, 2018; Dvijotham et al., 2018; Mirman et al., 2018; Gowal et al., 2018; Zhang et al., 2019b). They develop an upper bound on the worst-case loss over valid adversarial perturbations and minimize it to train a certifiably robust model. These certifiable training methods can be mainly categorized into two types: linear relaxation-based methods and bound propagation methods. Linear relaxation-based methods use relatively tight bounds, but are slow, hard to scale to large models, and memory-inefficient (Wong & Kolter, 2018; Wong et al., 2018; Dvijotham et al., 2018). On the other hand, bound propagation methods, represented by Interval Bound Propagation (IBP), are fast and scalable due to the use of simple but much looser bounds (Mirman et al., 2018; Gowal et al., 2018).
One would expect that training with tighter bounds would lead to better performance, but IBP outperforms linear relaxation-based methods in many cases, despite using much looser bounds.\nThese observations on the performance of certifiable training methods raise the following questions:\nWhy does training with tighter bounds not result in a better performance? What other factors may influence the performance of certifiable training? How can we improve the performance of\ncertifiable training methods with tighter bounds?\nIn this paper, we provide empirical and theoretical analysis to answer these questions. First, we demonstrate that IBP (Gowal et al., 2018) has a more favorable loss landscape than other linear\nrelaxation-based methods, and thus it often leads to better performance even with much looser bounds. To account for this difference, we present a unified view of IBP and linear relaxation-based methods and find that the relaxed gradient approximation (which will be defined in Definition 1) of each method plays a crucial role in its optimization behavior. Based on the analysis of the loss landscape and the optimization behavior, we propose a new certifiable training method that has a favorable landscape with tighter bounds. The performance of the proposed method is comparable to that of state-of-the-art methods under a wide range of perturbations. We summarize the contributions of this study as follows:\n• We provide empirical and theoretical analysis of the loss landscape of certifiable training methods and find that smoothness of the loss landscape is important for building certifiably robust models.\n• We propose a certifiable training method with tighter bounds and a favorable loss landscape, obtaining comparable performance with state-of-the-art methods under a wide range of perturbations." 
}, { "heading": "2 RELATED WORK", "text": "Earlier studies on training certifiably robust models were limited to 2-layered networks (Hein & Andriushchenko, 2017; Raghunathan et al., 2018a). To scale to larger networks, a line of work has proposed the use of linear relaxation of nonlinear activation to formulate a robust optimization. Then, a dual problem is considered and a dual feasible solution is used to simplify the computation further. By doing so, Wong & Kolter (2018) built a method that can scale to a 4-layered network, and later, Wong et al. (2018) used Cauchy random projections to scale to much larger networks. However, they are still slow and memory-inefficient. Dvijotham et al. (2018) proposed a method called predictor-verifier training (PVT), which uses a verifier network to optimize the dual solution. This is similar to our proposed method, but we do not require any additional network. Xiao et al. (2018) proposed adding a regularization technique to adversarial training to induce ReLU stability, but it is less effective than other certified defenses. We also encourage our model to avoid unstable ReLUs, but we train the model with an upper bound of the worst-case loss and investigate ReLU stability from the loss landscape perspective.\nMirman et al. (2018) proposed the propagation of a geometric bound (called domain) through the network to yield an outer approximation in logit space. This can be done with an efficient layerwise computation that exploits interval arithmetic. Over the outer domain, one can compute the worst-case loss to be minimized during training. Gowal et al. (2018) used a special case of the domain propagation called Interval Bound Propagation (IBP) using the simplest domain, the interval domain (or interval bound).
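Propagating an interval domain through an affine layer followed by a ReLU uses standard interval arithmetic; a minimal numpy sketch of this idea (a simplified illustration, not the exact implementation of Gowal et al. (2018)):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds l <= z <= u through z @ W + b."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0
    center = mid @ W + b
    radius = rad @ np.abs(W)   # worst case over the box, per output unit
    return center - radius, center + radius

def ibp_relu(l, u):
    """ReLU is monotone, so bounds propagate elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

Applying these two rules layer by layer gives sound but loose bounds: each layer treats its inputs as independent intervals, which is exactly why IBP bounds are much looser than linear relaxations.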
In IBP, the authors introduced a different objective function, heuristic scheduling on the hyperparameters, and elision of the last layer to stabilize the training and to improve the performance.\nBoth approaches, linear relaxation-based methods and bound propagation methods, use an upper bound on the worst-case loss. Bound propagation methods exploit much looser upper bounds, but they enjoy an unexpected benefit in many cases: better robustness than linear relaxation-based methods. Balunovic & Vechev (2019) hypothesized that the complexity of the loss computation makes the optimization more difficult, which could be a reason why IBP outperforms linear relaxation-based methods. They proposed a new optimization procedure with the existing linear relaxation. In this paper, we further investigate the causes of the difficulties in the optimization. Recently, Zhang et al. (2019b) proposed CROWN-IBP, which uses linear relaxation in a verification method called CROWN (Zhang et al., 2018) in conjunction with IBP to train a certifiably robust model.\nAlthough beyond our focus here, there is another line of work on randomized smoothing (Li et al., 2018; Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019), which can probabilistically certify the robustness with arbitrarily high probability by using a smoothed classifier. However, it requires a large number of samples for inference.\nThere are many other works on certifiable verification (Weng et al., 2018; Singh et al., 2018a; 2019; 2018b; Zhang et al., 2018; Boopathy et al., 2019; Lyu et al., 2020). However, our work focuses on “certifiable training”." }, { "heading": "3 BACKGROUND", "text": "First, we provide a brief overview of certifiable training methods. Then, we consider IBP (Gowal et al., 2018) as a special case of linear relaxation-based methods.
This unified view on certifiable training methods helps us to comprehensively analyze the differences between the two approaches: bound propagation and linear relaxation. We present the details of IBP in Appendix B." }, { "heading": "3.1 NOTATIONS AND CERTIFIABLE TRAINING", "text": "We consider a c-class classification problem with a neural network f(x;θ) with the layerwise operations z(k) = h(k)(z(k−1)) (k = 1, · · · ,K) and the input z(0) = x in the input space X. The corresponding probability function is denoted by pf = softmax ◦ f : X → [0, 1]c with subscript f. We denote a subnetwork with k operations as h[k] = h(k) ◦ · · · ◦ h(1). For a linear operation h(k), we use W(k) and b(k) to denote the weight and the bias of the layer. We consider the robustness of the classifier against the norm-bounded perturbation set B(x, ε) = {x′ ∈ X : ||x′ − x|| ≤ ε} with the perturbation level ε. Here, we mainly focus on the ℓ∞-norm bounded set. To compute the margin between the true class y for the input x and the other classes, we define a c × c matrix C(y) = I − 1e(y)T with (C(y)z(K))m = z(K)m − z(K)y (m = 0, · · · , c − 1). For the last linear layer, the weights W(K) and the bias b(K) are merged with C(y), that is, W(K) ≡ C(y)W(K) and b(K) ≡ C(y)b(K), yielding the margin score function s(x, y; θ) = C(y)f(x; θ) = f(x; θ) − fy(x; θ)1 satisfying ps = pf. Then we can define the worst-case margin score s∗(x, y, ε; θ) = maxx′∈B(x,ε) s(x′, y; θ), where max is element-wise maximization. With an upper bound s on the worst-case margin score, s ≥ s∗, we can provide an upper bound on the worst-case loss over valid adversarial perturbations as follows:
L(s(x, y, ε;θ), y) ≥ maxx′∈B(x,ε) L(f(x′;θ), y) (1)
for the cross-entropy loss L (Wong & Kolter, 2018). Therefore, we can formulate certifiable training as a minimization of the upper bound, minθ L(s(x, y, ε; θ), y), instead of directly solving minθ maxx′∈B(x,ε) L(f(x′; θ), y), which is infeasible.
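As a concrete illustration of the margin construction above, the following sketch (ours, not the paper's code; NumPy-based, with hypothetical helper names) builds C(y) and evaluates the cross-entropy upper bound of Eq. (1) from a margin bound:

```python
import numpy as np

def margin_matrix(y, c):
    """C(y) = I - 1 e(y)^T, so (C(y) z)_m = z_m - z_y for every class m."""
    C = np.eye(c)
    C[:, y] -= 1.0
    return C

def upper_bound_loss(s_bound, y):
    """Cross-entropy evaluated on a margin bound s_bound >= s* (Eq. 1).

    Because s = f - f_y 1 only shifts the logits, softmax(s) = softmax(f),
    and s_bound[y] = 0, so the loss reduces to a numerically stable
    log-sum-exp of the margin entries.
    """
    m = s_bound.max()
    return m - s_bound[y] + np.log(np.exp(s_bound - m).sum())
```

For zero perturbation the margin bound equals C(y)f(x), and this expression coincides with the ordinary cross-entropy on the logits.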
Note that adversarial training (Madry et al., 2018) uses a strong iterative gradient-based attack (PGD) to provide a lower bound on the worst-case loss to be minimized, but it cannot provide a certifiably robust model. Whenever possible, we will simplify the notations by omitting variables such as x, y, ε, and θ." }, { "heading": "3.2 LINEAR RELAXATION-BASED METHODS", "text": "For a subnetwork h[k], given the pre-activation upper/lower bounds, u and l, for each nonlinear activation function h in h[k], linear relaxation-based methods (Wong & Kolter (2018); Wong et al. (2018); Zhang et al. (2019b)) use a relaxation of the activation function by two elementwise linear function bounds, h̲ (lower) and h̄ (upper), that is, h̲(z) ≤ h(z) ≤ h̄(z) for l ≤ z ≤ u. We denote the function bounds as h̲(z) = a ⊙ z + b and h̄(z) = ā ⊙ z + b̄ for some a, b, ā, and b̄, where ⊙ denotes the elementwise (Hadamard) product. Using all the function bounds h̲'s and h̄'s for the nonlinear activations in conjunction with the linear operations in h[k], an ith (scalar) activation h[k]i(·) ∈ R can be upper bounded by a linear function gT(·) + b over B(x, ε) as in Zhang et al. (2018). This can be equivalently explained with the dual relaxation viewpoint in Wong & Kolter (2018). Further details are provided in Appendix C. Now we are ready to upper bound the activation h[k]i over B(x, ε). Definition 1 (Linear Relaxation with Relaxed Gradient Approximation). For each neuron activation h[k]i, a linear relaxation method computes an upper approximation of the activation over B(x, ε) by using g ∈ Rd and b ∈ R as follows:
maxx′∈B(x,ε) h[k]i(x′) ≤ maxx′∈B(x,ε) gTx′ + b = gTx + ε||g||∗ + b, (2)
where ||·||∗ denotes the dual norm. We call g the relaxed gradient approximation of h[k]i over B(x, ε).
Similarly, we can obtain the corresponding lower bound. Inductively using these upper/lower bounds on the output of the subnetwork, we can obtain the bounds for the next subnetwork h[k+1] and then for the whole network s.
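The closed form in Definition 1 follows from the dual-norm identity: over the ℓ∞ ball of radius ε, the maximum of gTx′ is attained at x′ = x + ε sign(g) and equals gTx + ε||g||1 (the dual of ℓ∞ is ℓ1). A small self-contained check (our sketch, not the paper's code):

```python
import numpy as np

def relaxed_upper_bound(g, b, x, eps):
    """max over the l_inf ball of radius eps of g^T x' + b.

    The dual norm of l_inf is l_1, so the maximum is attained at
    x' = x + eps * sign(g) and equals g^T x + eps * ||g||_1 + b (Eq. 2).
    """
    return float(g @ x + eps * np.abs(g).sum() + b)
```

Iterating this bound layer by layer, with g and b assembled from the per-layer relaxations, gives the bound on the whole network.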
The final bound s on the whole network s can then be used in the objective (1). The tightness of the bounds s and L(s, y) highly depends on how the linear bounds h̲ and h̄ in each layer are chosen.
Unified view of IBP and linear relaxation-based methods IBP can also be considered as a linear relaxation-based method using zero-slope (a = ā = 0) linear bounds, h̄(z) = u+ and h̲(z) = l+, where v+ = max(v, 0) and v− = min(v, 0). Thus, the bounds of a nonlinear activation depend only on the pre-activation bounds u and l for the activation layer, substantially reducing the feedforward/backpropagation computations. CROWN-IBP (Zhang et al., 2019b) applies different linear relaxation schemes to the subnetworks and the whole network. It uses the same linear bounds as IBP for the subnetworks h[k] for k < K except for the network s = h[K] itself, and uses h̄(z) = u+/(u+ − l−) ⊙ (z − l−) and h̲(z) = 1[u+ + l− > 0] ⊙ z for the whole network s. Moreover, CROWN-IBP uses interpolations between the two bounds, the IBP bound and the CROWN-IBP bound, with the mixing weight β in the following objective:
L((1 − β)sIBP(x, y, ε;θ) + βsCROWN-IBP(x, y, ε;θ), y). (3)
Convex Adversarial Polytope (CAP) (Wong & Kolter, 2018; Wong et al., 2018) uses the linear bounds h̄(z) = u+/(u+ − l−) ⊙ (z − l−) and h̲(z) = u+/(u+ − l−) ⊙ z for all subnetworks h[k] and the entire network. As CAP utilizes the linear bounds for each neuron, it is slow and memory-inefficient. It can be easily shown that tighter relaxations on nonlinear activations yield a tighter bound on the worst-case margin score s∗. To specify the linear relaxation variable φ ≡ {a, ā, b, b̄} used in the relaxation, we use the notation s(x, y, ε;θ,φ). CROWN-IBP and CAP generally yield a much tighter bound than IBP. These relaxation schemes are illustrated in Figure 6 in Appendix D."
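For a ReLU with pre-activation bounds l and u, the three schemes above differ only in the slopes and intercepts they pick. The choices can be sketched as follows (our illustration for one scalar neuron; the scheme names are our labels, not library identifiers):

```python
import numpy as np

def relu_relaxation(l, u, scheme):
    """Return (a_up, b_up, a_low, b_low) such that
    a_low*z + b_low <= relu(z) <= a_up*z + b_up for all l <= z <= u."""
    up, lm, lp = max(u, 0.0), min(l, 0.0), max(l, 0.0)
    slope = up / (up - lm) if up - lm > 0 else 0.0  # u+/(u+ - l-)
    if scheme == "ibp":          # zero-slope interval bounds
        return 0.0, up, 0.0, lp
    if scheme == "crown-ibp":    # adaptive 0/1 lower slope
        return slope, -slope * lm, float(up + lm > 0), 0.0
    if scheme == "cap":          # same slope for both bounds
        return slope, -slope * lm, slope, 0.0
    raise ValueError(scheme)
```

For stable neurons (l ≥ 0 or u ≤ 0) all three schemes collapse to the exact identity or zero map; they only disagree on unstable neurons with l < 0 < u.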
}, { "heading": "4 WHAT FACTORS INFLUENCE THE PERFORMANCE OF CERTIFIABLE TRAINING?", "text": "One would expect that a tighter upper bound on the worst-case loss in (1) is beneficial in certifiable training. However, several previous works have shown that this is not the case: IBP performs better than linear relaxation-based methods in many cases while utilizing a much looser bound. We investigate the loss landscape and the optimization behavior of IBP and other linear relaxation-based methods, and find that the non-smoothness of the relaxed gradient approximation of linear relaxations negatively affects their performance. Detailed settings of the following analyses are presented in Appendix A." }, { "heading": "4.1 LOSS LANDSCAPE OF CERTIFIABLE TRAINING", "text": "We empirically show that models that have tighter bounds, CROWN-IBP (Zhang et al., 2019b) and CAP (Wong & Kolter, 2018), tend to have non-smooth loss landscapes, which hinder optimization during training. We examine the learning curves of IBP and these linear relaxation-based methods. For a simple analysis, we avoid considering the mixture of the two logits in (3), and use β = 1 to consider CROWN-IBP logit only. Figure 1 (left) shows the learning curves on CIFAR-10 under train = 8/255. We use -scheduling with the warm-up (regular training) for the first 10 epochs and the ramp-up during epochs 10-130 where we linearly increases the perturbation radius from 0 to the target perturbation train. Thus, the training loss may increase even during learning.\nIn the early phase of the ramp-up period, in which the models are trained with small , CAP and CROWN-IBP have lower losses than IBP as expected because they use much tighter relaxation bounds than IBP. In particular, CAP has much tighter bounds than the others because CAP uses tighter relaxations for each subnetwork. 
This is consistent with the known result that CAP tends to outperform the others at small perturbations, such as εtrain = 2/255 on CIFAR-10 (see Table 1 for details). However, at the end of the training, when the perturbation reaches its maximum target value (εtrain), the opposite result is observed: CAP and CROWN-IBP perform worse than IBP.
To understand this inconsistency, we measure the variation of the loss along the gradient direction as in Santurkar et al. (2018), which is represented as the shaded region in Figure 1 (left). We find that linear relaxation-based methods have large variations, while IBP maintains a small variation throughout the entire training phase. It is known that a smooth loss landscape with a small loss variation induces stable and fast optimization with well-behaved gradients (Santurkar et al., 2018). Therefore, even though CAP and CROWN-IBP show robustness in the early phase of training, the non-smooth loss landscape in the ramp-up period might have hindered the optimization, yielding less robust models. As will be discussed in the following section, we find that the loss variation is highly related to the relaxed gradient approximation g used in the linear relaxation.
We further explore the loss landscape near the local region of the parameter space at the current parameter θ(now) toward the next parameter θ(next) along the gradient in Figure 1 (right). We plot the landscapes for the later phase of the ramp-up period (epochs 50-130), during which large perturbations are used. IBP has flatter landscapes compared to the others, whereas CROWN-IBP has landscapes with large curvature along the gradient, and thus it tends to move toward a sharp local minimum and may remain stuck there. Therefore, it may overfit to be robust to small perturbations, but is not robust to the target perturbation εtrain.
Next, we establish a relationship between the optimization procedure and linear relaxation.
Figure 2 (top) shows the directional deviation between two successive loss gradient steps in terms of cosine similarity during training. Simultaneously, Figure 2 (bottom) shows the ratio of unstable ReLUs, for which the pre-activation bounds l and u span zero. We observe that the cosine similarity value is low when the number of unstable ReLUs is large - for example, in the early stage of CAP and the middle stage of CROWN-IBP. In particular, in the middle of the ramp-up period, CROWN-IBP has a large number of unstable ReLUs and exhibits abrupt changes in gradient steps. It often has deviation angles larger than 90◦, leading to parameter updates in the opposite direction of the previous one, bouncing in the basin of a local minimum. This is consistent with the results shown in Figure 1. Moreover, since the gradient directions are not well-aligned, it may not enjoy the advantages of momentum-based optimizers and may be sensitive to the learning rate. To summarize, a large number of unstable ReLUs, i.e., high nonlinearity, leads to an unfavorable landscape that can negatively affect the optimization process." }, { "heading": "4.2 SMOOTHNESS OF RELAXED GRADIENT APPROXIMATION", "text": "In this section, we investigate the loss landscape further from a theoretical perspective to answer the question: “What makes some landscapes more favorable than others?” We find that the relaxed gradient approximation of a linear relaxation affects the smoothness of the landscape. First, we need some mild smoothness assumptions that are natural when the network parameters θ1 and θ2 are close to each other, especially when they are two consecutive parameters from an SGD update. Assumption 1. Given a linear relaxation method, we make the following assumptions on the bias b(x;θ) in the linear relaxation and the probability function p(x;θ): (1) ||∇θb(x;θ1) − ∇θb(x;θ2)|| ≤ Lbθθ||θ1 − θ2|| for all θ1,θ2 and x.
(2) ||p(x;θ1) − p(x;θ2)|| ≤ Lpθ||θ1 − θ2|| for all θ1,θ2 and x.
With the above assumptions, we can provide an upper bound on the loss gradient difference for linear relaxation-based methods to measure the non-smoothness of the loss landscape as follows: Theorem 1. Given input x ∈ X and perturbation radius ε, let M be maxx′∈B(x,ε) ||x′||. For a linear relaxation-based method with the upper bound sm(x;θ) = maxx′∈B(x,ε) g(m)(x;θ)Tx′ + b(m)(x;θ), if b(m) satisfies Assumption 1 (1) for each m and ps satisfies Assumption 1 (2), then
||∇θL(s(x;θ1)) − ∇θL(s(x;θ2))|| ≤ maxm ( 2ε||∇θg(m)(x;θ1,2)|| + M||∇θg(m)(x;θ1) − ∇θg(m)(x;θ2)|| + L(m)||θ1 − θ2|| ) (4)
for any θ1,θ2, where L(m) = Lb(m)θθ + Lpsθ ||∇θs(x;θ1,2)|| and θ1,2 can be any of θ1 and θ2.
According to Theorem 1, the relaxed gradient approximations g(m) in the linear relaxation play a major role in shaping the loss landscape. The smoother the relaxed gradient approximations are, the smoother the loss landscape is. Especially for IBP, which uses the zero-slope relaxed gradient approximation g(m) ≡ 0 for all m, the loss gradient difference is upper bounded by only the last term, maxm L(m)||θ1 − θ2||, which is relatively small for a single gradient step. On the other hand, for other linear relaxation-based methods using non-zero relaxed gradient approximations g(m) ≠ 0, the gradient updates used in the training are more unstable than those of IBP. This is consistent with the empirical results shown in Figure 1, where there are significant differences between the loss variations of IBP and the others." }, { "heading": "5 PROPOSED METHOD", "text": "Our analyses so far suggest that tightness of the upper bound on the worst-case loss and smoothness of the loss landscape are important for building a certifiably robust model.
Therefore, we aim to design a new certifiable training method that improves both of the aforementioned factors (a favorable landscape and a tighter bound).
More favorable landscape via fewer a = 1 We observe that CROWN-IBP (β = 1) tends to have more unstable ReLUs and a less smooth landscape than the others. What in the objective of CROWN-IBP leads to these results? To answer this question, we investigate variants of CROWN-IBP with different settings of the lower relaxation slope a for unstable ReLUs. For each setting, we sample a ∈ {0, 1} with different (p, q) with P(a = 1 | |l| > |u|) = p and P(a = 1 | |l| ≤ |u|) = q for each neuron with pre-activation bounds l and u. We use a = 1[u+ + l− > 0] for the other, stable ReLUs. For the other elements of the linear relaxation variable φ = {(a, ā, b, b̄)}, we fix ā = u+/(u+ − l−), b̄ = −u+l−/(u+ − l−), and b = 0 for each activation node, because they are the optimal choices for tightening the bound (see Appendix C.2 for details). Figure 3 shows that the variants tend to have more unstable ReLUs as the number of a satisfying a = 1 increases. This observation implies that a smaller portion of a with a = 1 is required to have a more favorable landscape.
However, reducing the portion of a with a = 1 is not enough to achieve robustness unless tightness is guaranteed. By manually adjusting a, variants of CROWN-IBP achieve favorable landscapes, but they show looser upper bounds, which lead to worse performance. Further investigation of the variants of CROWN-IBP is presented in Appendix E. Therefore, it is required to search for appropriate values of a that can achieve both tightness and a favorable landscape.
Tighter bound via optimization Now, we aim to reduce the number of a satisfying a = 1 and to tighten the upper bound in (1) simultaneously. We can achieve both by minimizing the upper bound over the linear relaxation variable φ as follows:
L(s(x, y, ε;θ), y) ≥ minφ L(s(x, y, ε;θ,φ), y) ≥ maxx′∈B(x,ε) L(f(x′;θ), y).
(5)
It can be equivalently understood as solving the dual optimization in CAP rather than using a dual feasible solution. However, solving the dual optimization is computationally prohibitive for the linear relaxation of CAP. To resolve this problem, we use the same linear relaxation as IBP for the subnetworks of s except for s itself, similar to CROWN-IBP. Further, we efficiently compute a surrogate â of the minimizer a∗ = arg mina L(s(x, y, ε;θ,φ), y) using a one-step projected gradient update of the relaxation variable a. Specifically, we have
â = Π[0,1]n ( a0 − η sign(∇aL(s(x, y, ε;θ,φ), y)) ) (6)
with an initial point a0 ∼ U[0, 1]n and η ≥ 1, yielding the final objective L(s(x, y, ε;θ, φ̂), y) where φ̂ = {(â, ā, b, b̄)}." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we demonstrate that the proposed method satisfies two key criteria required for building certifiably robust models: 1) tightness of the upper bound on the worst-case loss, and 2) smoothness of the loss landscape. Subsequently, we evaluate the performance of the method by comparing it with other certifiable training methods. Details on the experimental settings are in Appendix A.
Tightness To validate that the proposed method (OURS) has tighter bounds than other relaxations, we analyze various linear relaxation methods in Figure 4. We define a tightness measure as the sum of the worst-case margin bounds over the classes, Σc−1m=0 sm(x, y, ε;θ), obtained from (2). Then, we evaluate multiple methods on a single fixed model pre-trained with the proposed training method. The compared methods are, from left to right, OURS, CROWN-IBP (Zhang et al., 2019b), CAP-IBP, and RANDOM. All methods use the same IBP relaxation for the subnetworks, but use different linear relaxation variables a for the whole network s. CROWN-IBP, CAP-IBP, and RANDOM use a = 1[u+ + l− > 0], a = u+/(u+ − l−), and a ∼ U[0, 1]n, respectively. We fix the other variables ā, b, and b̄ as in Section 5.
In both figures, our method shows the lowest value on average, which indicates that a single gradient step in (6) is sufficient to obtain tighter bounds than the other relaxation methods. See Appendix P for the equivalent tightness violin plots of other models.
Smoothness Figure 1 shows that the proposed method has small loss variations along the gradient, as does IBP, whereas CROWN-IBP (β = 1) has a wide range of loss values. This is because CROWN-IBP (β = 1) has more unstable ReLUs than our method, as shown in Figure 2. As mentioned above, the number of a = 1 is closely related to the number of unstable ReLUs, and Figure 5 shows that our method successfully reduces the number of a = 1. Further, we analyze the smoothness of the loss landscape via the loss gradient change (the left-hand side of (4)) in Appendix H.
Robustness We evaluate the performance of the proposed method and compare it to that of state-of-the-art certifiable training methods: IBP (Gowal et al., 2018), CROWN-IBP (β = 1) (Zhang et al., 2019b), and CAP (Wong et al., 2018), as in Section 4.1. On MNIST, we follow Zhang et al. (2019b) and use εtrain ≥ εtest; whereas for CAP, we use εtrain = εtest, which yields better results. We use three evaluation metrics: standard (clean) error, 100-step PGD error, and verified error. For the verified error, we evaluate with the bound s of each method.
Table 1 summarizes the evaluation results under different εtest for each dataset. In general, when εtest is low, methods with tighter linear relaxations show good performance, whereas IBP tends to perform better as εtest increases. In short, the state-of-the-art methods perform well only for a specific range of εtest. For example, IBP shows relatively better performance in the cases of εtest = 0.3, 0.4 on MNIST and εtest = 6/255, 8/255, 16/255 on CIFAR-10. On the other hand, CAP and CROWN-IBP (β = 1) outperform IBP in the cases of εtest = 0.1 on MNIST, εtest = 2/255 on CIFAR-10, and εtest = 0.001 on SVHN.
This result is consistent with the analysis shown in Figure 1 that CAP and CROWN-IBP (β = 1) have lower losses than IBP at small ε, but their loss landscapes are less smooth than IBP's, leading to worse performance at large ε. Moreover, CAP cannot be trained on MNIST when εtrain = 0.4. As this case is also not reported in Wong et al. (2018), it seems difficult for CAP to be robust to εtrain ≥ 0.4. On the other hand, the proposed method shows consistent performance over a wide range of εtest values, achieving the best performance in most cases, since it has tighter bounds and a favorable landscape and does not overfit to a local minimum during the ε-scheduling. We also compare our method with other prior work (Xiao et al., 2018; Mirman et al., 2018; Balunovic & Vechev, 2019) in Appendix K, and conduct additional experiments on the hyperparameters in Appendices L, M, and N.
Unlike standard training, certifiable training requires ε-scheduling. It is implicitly assumed that a set of weights that makes the network robust to a small ε is a good initial point to learn robustness to a large εtrain. However, linear relaxation-based methods with tighter bounds start with a lower loss at a small ε, but with an unfavorable loss landscape, they cannot explore a sufficiently large area of the parameter space. Hence, they overfit to be robust to a small perturbation and do not generalize to a large perturbation. CAP and CROWN-IBP (β = 1) are typical examples that demonstrate this overfitting. This may over-regularize the weight norm and decrease the model capacity (Wong et al., 2018; Zhang et al., 2019b). The tightness of the proposed method improves the performance for a small ε, while its smoothness helps the optimization process, which also leads to better performance for a large ε.
To conclude, the proposed method achieves decent performance under a wide range of perturbations, as shown in Table 1.
Understanding β-scheduling For CROWN-IBP, we use two different settings of β in (7), CROWN-IBP1→1 and CROWN-IBP1→0, where the subscript βstart → βend refers to linear scheduling of β from βstart to βend. Zhang et al. (2019b) found that the β-scheduling of CROWN-IBP1→0 could help to improve the robustness performance. They argued that this is because training with the tighter bound of CROWN-IBP at the beginning can provide a good initialization for later IBP training. On the other hand, we provide another explanation: CROWN-IBP1→0 starts with a tighter bound (CROWN-IBP only) but does not overfit to small perturbations because it gradually introduces the IBP objective, which has a smoother landscape. Despite using a single objective without the mixture parameter β, the proposed method outperforms CROWN-IBP1→0 on CIFAR-10, as shown in Table 2." }, { "heading": "7 CONCLUSION", "text": "In this work, we have investigated the loss landscape of certifiable training and found that the smoothness of the loss landscape is an important factor in building certifiably robust models. To this end, we proposed a method that satisfies two criteria: tightness of the upper bound on the worst-case loss and smoothness of the loss landscape. We then empirically demonstrated that the proposed method achieves robustness comparable to state-of-the-art methods under a wide range of perturbations. We believe that with an improved understanding of the loss landscape, better certifiably robust models can be built." }, { "heading": "A EXPERIMENTAL SETTINGS", "text": "Datasets and Architectures In the experiments, we use three datasets, MNIST, CIFAR-10, and SVHN, and the model architectures (Small, Medium, and Large) in Gowal et al.
(2018) and their variants (Small* and Large*) as follows:

• Small: Conv(·,16,4,2) - Conv(16,32,4,1) - Flatten - FC(·,100) - FC(100,c) • Small*: Conv(·,16,4,2) - Conv(16,32,4,2) - Flatten - FC(·,100) - FC(100,c) • Medium: Conv(·,32,3,1) - Conv(32,32,4,2) - Conv(32,64,3,1) - Conv(64,64,4,2) - Flatten - FC(·,512) - FC(512,512) - FC(512,c) • Large: Conv(·,64,3,1) - Conv(64,64,3,1) - Conv(64,128,3,2) - Conv(128,128,3,1) - Conv(128,128,3,1) - Flatten - FC(·,512) - FC(512,c) • Large*: Conv(·,64,3,1) - Conv(64,128,3,2) - Conv(128,128,3,1) - Conv(128,128,3,1) - Flatten - FC(·,512) - FC(512,c)

where Conv(c1, c2, k, s) is a conv layer with input channel c1, output channel c2, kernel size k, and stride s, and FC(d1, d2) is a fully-connected layer with input dimension d1 and output dimension d2. All layers are followed by ReLU activation except for the last layer and the flatten layer (Flatten).
Loss and training schedules For general training schedules, we refer to Appendices C and D of Zhang et al. (2019b), using a single GPU (Titan Xp). We use the following mixed cross-entropy loss as in Zhang et al. (2019b):
κL(f(x;θ), y) + (1 − κ)L((1 − β)sIBP(x, y, ε;θ) + βsMODEL(x, y, ε;θ), y), (7)
where κ is the mixing weight between the natural loss and the robust loss, and β is the mixing weight between the two bounds obtained with IBP and the given relaxation method (e.g., CROWN-IBP).
A.1 SETTINGS IN SECTION 4.1
Figure 1 We conduct the experiment in Figure 1 on the CIFAR-10 dataset with the Medium architecture for all four methods. We train the model with εtrain = 8/255 for 200 epochs using ε-scheduling with 10 warm-up epochs and 120 ramp-up epochs. We use the Adam optimizer with learning rate 0.001. We reduce the learning rate by 50% every 10 epochs after ε-scheduling ends.
To demonstrate the instability of each training method, we describe the variation of the loss along the gradient direction as in Santurkar et al. (2018).
We take steps of different lengths in the direction of the gradient and measure the loss values obtained at each step. For the sake of consistency, we fix a Cauchy random matrix when evaluating CAP to obtain deterministic loss landscapes, not introducing randomness. The loss variation is computed with
L(s(θ(t))) where L(s(θ)) ≡ L(s(x, y, ε;θ), y) and
θ(t) ≡ θ0 − tη∇θL(s(θ0)) for t ∈ [0, 5], (8)
where θ0(= θ(0)) denotes the current model parameters and η is the learning rate. For the step length t, we sample ten points from the range [0, 5] on a log scale. In Figure 1 (right), θ(now) = θ(0) and θ(next) = θ(1).
Figure 2 (top) In Figure 2, with the same model used in Figure 1, we plot the cosine similarity between two successive loss gradient steps during training as follows:
cos(∇θL(s(θ(0))),∇θL(s(θ(1)))),
where cos(v1,v2) is the cosine of the angle between two vectors v1 and v2.
A.2 SETTINGS IN TABLE 1
For MNIST, we use the same hyper-parameters as in Appendix C of Zhang et al. (2019b). We train for 200 epochs (10 warm-up epochs and 50 ramp-up epochs) on the Large model with a batch size of 100. We decay the learning rate, 0.0005, by 10% in epochs [130, 190]. As mentioned in Zhang et al. (2019b), we also found the same issue when training with small ε (see Appendix N for details). To alleviate the issue, we use εtrain = min(0.4, εtest + 0.1) for each εtest as in Table 2 of Zhang et al. (2019b).
For CIFAR-10, we train for 400 epochs (20 warm-up epochs and 240 ramp-up epochs) on the Medium model with a batch size of 128. We decay the learning rate, 0.003, by 2× every 10 epochs after the ramp-up period.
For SVHN, we train for 200 epochs (10 warm-up epochs and 120 ramp-up epochs) on the Large model with a batch size of 128 (OURS with a batch size of 80 to avoid running out of memory). We decay the learning rate, 0.0003, by 2× every 10 epochs after the ramp-up period.
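The two diagnostics above, the loss sweep of Eq. (8) and the cosine similarity between successive gradient steps, can be sketched as follows (our framework-agnostic sketch; `loss_fn` and `grad_fn` stand in for the certified loss and its autodiff gradient):

```python
import numpy as np

def loss_sweep(loss_fn, grad_fn, theta0, eta, ts):
    """L(theta0 - t * eta * grad L(theta0)) for each step length t (Eq. 8)."""
    g = grad_fn(theta0)
    return [loss_fn(theta0 - t * eta * g) for t in ts]

def cos_sim(v1, v2):
    """Cosine of the angle between two gradient vectors (Figure 2, top)."""
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```

A small spread of `loss_sweep` values indicates a smooth landscape along the update direction, while a low (or negative) `cos_sim` between consecutive gradients indicates abrupt changes in the update direction.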
Only for SVHN, we apply normalization with mean (0.438, 0.444, 0.473) and standard deviation (0.198, 0.201, 0.197) for each channel.
In Table 1, we use κ-scheduling from 1 to 0. For the corresponding results of κ-scheduling from 0 to 0, we refer the reader to Table 5.
We modify the source code for CAP1 to match our settings. For example, we introduce the warm-up period and linear ε-scheduling. We avoid using the reported results in the literature and aim to make a fair comparison under the same settings with only minor differences - for example, because CAP does not support channel-wise normalization, we could not use input normalization. Also, due to the memory limit of CAP, we use a smaller batch size of 32 and try other smaller architectures. We found that CAP often achieves better results with smaller architectures (similar to the results in Table 3 of Wong et al. (2018)). Thus, we present the performance with Large*, Medium, and Small* on MNIST, CIFAR-10, and SVHN, respectively. Throughout the experiments, CAP uses the fixed κ = 0.
B INTERVAL BOUND PROPAGATION (IBP)
IBP (Gowal et al., 2018) starts from the interval bound I(0) ≡ {z : l(0) ≤ z ≤ u(0)} = B(x, ε) in the input space with the upper bound u(0) = x + ε1 and the lower bound l(0) = x − ε1, where 1 is a column vector of ones. Then we propagate the interval bound I(k−1) ≡ {z : l(k−1) ≤ z ≤ u(k−1)} by iteratively using the following equations:
u(k) = h(k)(u(k−1)) and l(k) = h(k)(l(k−1)) (9)
for an elementwise monotonically increasing nonlinear activation h(k) with the pre-activation bounds u(k−1) and l(k−1), and
u(k) = W(k)((u(k−1) + l(k−1))/2) + |W(k)|((u(k−1) − l(k−1))/2) + b(k) and (10)
l(k) = W(k)((u(k−1) + l(k−1))/2) − |W(k)|((u(k−1) − l(k−1))/2) + b(k) (11)
for a linear function h(k) (k = 1, · · · ,K).
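The propagation rules of Eqs. (9)-(11) amount to a few lines per layer. A minimal sketch (ours, not the authors' code; `W`, `b` are a linear layer's parameters):

```python
import numpy as np

def ibp_linear(l, u, W, b):
    """Eqs. (10)-(11): push the box [l, u] through z -> W z + b."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0
    center, radius = W @ mid + b, np.abs(W) @ rad
    return center - radius, center + radius

def ibp_monotone(l, u, act=lambda z: np.maximum(z, 0.0)):
    """Eq. (9): a monotone elementwise activation maps bounds to bounds."""
    return act(l), act(u)
```

Starting from l(0) = x − ε1 and u(0) = x + ε1 and alternating the two rules layer by layer yields the output bounds, whose upper part u(K) serves as the worst-case margin.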
Finally, IBP uses the worst-case margin s = u(K) to formulate the objective in (1) for certifiable training.
1https://github.com/locuslab/convex_adversarial" }, { "heading": "C DETAILS ON LINEAR RELAXATION", "text": "C.1 LINEAR RELAXATION EXPLAINED IN CROWN (ZHANG ET AL., 2018)
To make the paper self-contained, we provide details of the linear relaxation given in the supplementary material of CROWN (Zhang et al., 2018). We refer readers to the supplementary for more details. Given a network h[k], we want to upper bound the activation h[k]i. We have h[k]i(x′) = W(k)i,: h(k−1)(h[k−2](x′)) + b(k)i = W(k)i,: h(k−1)(z(k−2)′) + b(k)i, where z(k−2)′ = h[k−2](x′). With the linear function bounds h̲(k−1) and h̄(k−1) on the activation function h(k−1), we have
h[k]i(x′) = W(k)i,: h(k−1)(z(k−2)′) + b(k)i
≤ Σ{j: W(k)ij < 0} W(k)ij h̲(k−1)j(z(k−2)′) + Σ{j: W(k)ij ≥ 0} W(k)ij h̄(k−1)j(z(k−2)′) + b(k)i
= Σ{j: W(k)ij < 0} W(k)ij a(k−1)j z(k−2)′j + Σ{j: W(k)ij ≥ 0} W(k)ij ā(k−1)j z(k−2)′j + Σ{j: W(k)ij < 0} W(k)ij b(k−1)j + Σ{j: W(k)ij ≥ 0} W(k)ij b̄(k−1)j + b(k)i
= W̃(k)i,: z(k−2)′ + b̃(k)i
= W̃(k)i,: h[k−2](x′) + b̃(k)i
= W̃(k)i,: (W(k−2)h[k−3](x′) + b(k−2)) + b̃(k)i
= Ŵ(k−2)i,: h(k−3)(z(k−3)′) + b̂(k−2)i,
where W̃(k)i,: = W(k)i,: D(k−1) with the diagonal matrix D(k−1)j,j = a(k−1)j for j satisfying W(k)i,j < 0 and D(k−1)j,j = ā(k−1)j for j satisfying W(k)i,j ≥ 0; b̃(k)i = Σ{j: W(k)ij < 0} W(k)ij b(k−1)j + Σ{j: W(k)ij ≥ 0} W(k)ij b̄(k−1)j + b(k)i; Ŵ(k−2)i,: = W̃(k)i,: W(k−2); and b̂(k−2)i = W̃(k)i,: b(k−2) + b̃(k)i. Applying the same step iteratively, we can obtain g and b in (2) for the linear relaxation of h[k]i.
C.2 DUAL OPTIMIZATION VIEW
We first modify some notations in the main paper and use notations similar to Wong & Kolter (2018).
We use the following hat notations: ẑ(k+1) = W(k+1)z(k) + b(k+1) and z(k) = h(k)(ẑ(k)), where h(k) is the k-th nonlinear activation function. We can build a primal problem with cT = Cm,: as follows:
max cT ẑ(K) (12)
such that
x − ε1 ≤ z(0), z(0) ≤ x + ε1,
ẑ(k+1) = W(k+1)z(k) + b(k+1) (k = 0, · · · ,K − 1), and z(k) = h(k)(ẑ(k)) (k = 1, · · · ,K − 1).
Note that our c is the negation of that of Wong & Kolter (2018). Now we can derive the dual of the primal (12) as follows:
min{ξ+,ξ−≥0, νk} sup{z(k),ẑ(k)} cT ẑ(K) + ξ−T(x − ε1 − z(0)) + ξ+T(z(0) − x − ε1) + Σ{k=0}^{K−1} νk+1T(ẑ(k+1) − (W(k+1)z(k) + b(k+1))) + Σ{k=1}^{K−1} ν̂kT(z(k) − h(k)(ẑ(k)))
= (c + νK)T ẑ(K) + (ξ+ − ξ− − W(1)Tν1)T z(0) + Σ{k=1}^{K−1}(−W(k+1)Tνk+1 + ν̂k)T z(k) + Σ{k=1}^{K−1}(ν̂kT h(k)(ẑ(k)) − νkT ẑ(k)) (13)
− ν1T b(1) − ξTx − ε||ξ||1." }, { "heading": "It leads to c + νK = 0, ξ+ − ξ− − W(1)Tν1 = 0, and −W(k+1)Tνk+1 + ν̂k = 0 (k = 1, · · · ,K − 1).", "text": "Alternatively, they are represented as follows:
νK = −c,
ν̂k = W(k+1)Tνk+1 (k = K − 1, · · · , 1), and
ξ = ν̂1.
Now we need a relationship between ν̂k and νk, i.e., νk = g(ν̂k). With the further relaxation νk = αk ⊙ ν̂k, we have a relaxed problem as follows:
min{αk} sup{z(k),ẑ(k)} Σ{k=1}^{K−1}(ν̂kT h(k)(ẑ(k)) − νkT ẑ(k)) − ν1T b(1) − ξTx − ε||ξ||1 (14)
such that
νK = −c,
ν̂k = W(k+1)Tνk+1 (k = K − 1, · · · , 1),
νk = αk ⊙ ν̂k (k = K − 1, · · · , 1), and
ξ = ν̂1.
We decompose the first term in (14), ignoring the subscript k, as ν̂Th(ẑ) − (α ⊙ ν̂)T ẑ. Further, we decompose this for each element: ν̂h(ẑ) − αν̂ẑ = ν̂(h(ẑ) − αẑ). If the pre-activation bounds for h are both positive (active ReLU), then α should be 1 so as not to make the inner supremum ∞. Similarly, if the pre-activation bounds for h are both negative (dead ReLU), then α should be 0. In the case of an unstable ReLU (l ≤ 0 ≤ u), if ν̂ < 0, then we need to solve maxα infẑ h(ẑ) − αẑ. The inner infimum is 0 for 0 ≤ α ≤ 1, and −∞ otherwise.
On the other hand, if ν̂ ≥ 0, then we need to solve minα supẑ h(ẑ)−αẑ. The inner supremum is max{u−αu,−αl}, and thus the optimal dual variable is α∗ = uu−l which yields the optimal value (multiplied by ν̂) as ν̂(u − uu−lu) = − ulu−l ν̂ which is equivalent to using linear relaxation with a z + b = uu−l (z − l). We can represent it as a z + b = u+u+−l− (z − l−) to include the case of active/dead ReLU. For the lower linear bound h(z) = a z + b in case of unstable ReLU, we can use any 0 ≤ a ≤ 1 and b = 0 according to the dual relaxation with α. While CAP and CROWN-IBP use a dual feasible solution like α = u +\nu+−l− or α = 1[u + + l− > 0], our proposed method aims to optimize over the dual\nvariable α or equivalently optimize over 0 ≤ a ≤ 1 to further tighten the upper bound on the loss.\nD ILLUSTRATION OF LINEAR RELAXATIONS\nFigure 6 provides some illustrations of linear relaxations used in IBP, CAP, CROWN-IBP, and the proposed method. CROWN-IBP adaptively chooses the relaxation variable so that the area between h and h is minimized. However, the smaller area does not necessarily imply the tighter bound, and the proposed method achieves tighter bounds than CROWN-IBP relaxation as shown in Figure 4." }, { "heading": "E LEARNING CURVES FOR VARIANTS OF CROWN-IBP", "text": "It seems that a certifiable training with a looser bound tends to favor stable ReLUs. For example, IBP starts with small number of unstable ReLUs while CAP starts with large number of ReLUs as shown in Figure 2 (bottom). However, a tighter bound does not directly lead to many unstable ReLUs. We find that 0.5/1 and 1/1 have looser bounds than CROWN-IBP (as shown in Figure 7) but they have more unstable ReLUs (as shown in Figure 3) where p/q denotes the variant with sampling a ∈ {0, 1} with P (a = 1 | |l| > |u|) = p and P (a = 1 | |l| ≤ |u|) = q for unstable ReLUs. 
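The relaxation described in C.2 — an upper line with slope u+/(u+ − l−) passing through (l−, 0), and a lower line z ↦ a·z with a freely chosen a ∈ [0, 1] for unstable ReLUs — can be sketched as follows. This is our own illustrative code (function and variable names are ours), with arbitrary example bounds:

```python
import numpy as np

def relu_relaxation(l, u, a_low):
    """Linear bounds for ReLU given pre-activation bounds [l, u].

    Upper bound: slope u+/(u+ - l-) passing through (l-, 0).
    Lower bound: z -> a_lo * z, where a_lo is forced to 1 (active) or
    0 (dead) for stable units and is a free choice in [0, 1] otherwise."""
    up = np.maximum(u, 0.0)   # u+
    lm = np.minimum(l, 0.0)   # l-
    denom = np.maximum(up - lm, 1e-12)
    a_up = up / denom
    b_up = -a_up * lm
    a_lo = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, a_low))
    return a_up, b_up, a_lo

l = np.array([-1.0, 0.5, -2.0])   # unstable, active, dead
u = np.array([ 2.0, 1.5, -0.5])
a_up, b_up, a_lo = relu_relaxation(l, u, a_low=0.5)
```

For the unstable unit the upper line touches the ReLU exactly at z = l and z = u, which is what makes this the tightest single upper line over [l, u].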
On the other hand, 0/0, 0/0.25, and 0/0.5 have looser bounds than CROWN-IBP and they have fewer unstable ReLUs, which leads to small loss variations as in Figure 7. Therefore, this observation implies that having fewer a = 1 choices is more important for obtaining a smoother landscape." }, { "heading": "F PROOF", "text": "To prove Theorem 1, we first prove the following proposition. We note that θ and g are vectorized and the matrix norm of the Jacobian is naturally defined; for example, ||∇θg|| is induced by the vector norms defined in X and Θ.

Proposition 1. Given input x ∈ X and perturbation radius ε, let M = max{||x′|| : x′ ∈ B(x, ε)}. Then, for the upper bound s(x;θ) = max_{x′∈B(x,ε)} g(x;θ)ᵀx′ + b(x;θ) with b satisfying Assumption 1 (1), we have

||∇θs(x;θ1) − ∇θs(x;θ2)|| ≤ 2ε||∇θg(x;θ1,2)|| + M||∇θg(x;θ1) − ∇θg(x;θ2)|| + L^b_θθ ||θ1 − θ2|| (15)

for any θ1, θ2, where θ1,2 can be either of θ1 and θ2.

Proof. Let f(x′;θ) = g(x;θ)ᵀx′ + b(x;θ) and let x*_i = argmax_{x′∈B(x,ε)} f(x′;θi) be the maximizer for each θi = θ1, θ2. Then, we have

||∇θs(x;θ1) − ∇θs(x;θ2)|| = ||∇θf(x*_1;θ1) − ∇θf(x*_2;θ2)|| = ||∇θf(x*_1;θ1) − ∇θf(x*_2;θ1) + ∇θf(x*_2;θ1) − ∇θf(x*_2;θ2)|| ≤ ||∇θf(x*_1;θ1) − ∇θf(x*_2;θ1)|| + ||∇θf(x*_2;θ1) − ∇θf(x*_2;θ2)||. (16)

The first term on the RHS can be upper bounded as follows:

||∇θf(x*_1;θ1) − ∇θf(x*_2;θ1)|| = ||∇θ(g̃1ᵀx̃*_1 − g̃1ᵀx̃*_2)|| = ||∇θ(g1ᵀx*_1 − g1ᵀx*_2)|| = ||∇θg1 (x*_1 − x*_2)|| ≤ 2ε||∇θg1||,

where gi = g(x;θi), bi = b(x;θi), g̃iᵀ = [giᵀ; bi] and x̃ᵀ = [xᵀ; 1]. The second term on the RHS can be upper bounded as follows:

||∇θf(x*_2;θ1) − ∇θf(x*_2;θ2)|| = ||∇θ(g̃1ᵀx̃*_2 − g̃2ᵀx̃*_2)|| = ||∇θ(g̃1 − g̃2) x̃*_2|| ≤ ||∇θ(g1 − g2)|| ||x*_2|| + ||∇θ(b1 − b2)|| ≤ M||∇θ(g1 − g2)|| + L^b_θθ ||θ1 − θ2||.

Therefore, we obtain

||∇θs(x;θ1) − ∇θs(x;θ2)|| ≤ 2ε||∇θg1|| + M||∇θ(g1 − g2)|| + L^b_θθ ||θ1 − θ2|| = 2ε||∇θg(x;θ1)|| + M||∇θg(x;θ1) − ∇θg(x;θ2)|| + L^b_θθ ||θ1 − θ2||.

Note that θ1 in the first term is arbitrarily chosen in (16), which leads to the final inequality (15).

Theorem 1.
Given input x ∈ X and perturbation radius ε, let M = max_{x′∈B(x,ε)} ||x′||. For a linear relaxation-based method with the upper bound s_m(x;θ) = max_{x′∈B(x,ε)} g^(m)(x;θ)ᵀx′ + b^(m)(x;θ), if b^(m) satisfies Assumption 1 (1) for each m and p^s satisfies Assumption 1 (2), then

||∇θL(s(x;θ1)) − ∇θL(s(x;θ2))|| ≤ max_m ( 2ε||∇θg^(m)(x;θ1,2)|| + M||∇θg^(m)(x;θ1) − ∇θg^(m)(x;θ2)|| + L^(m)||θ1 − θ2|| ) (4)

for any θ1, θ2, where L^(m) = L^{b(m)}_θθ + L^{ps}_θ ||∇θs(x;θ1,2)|| and θ1,2 can be either of θ1 and θ2.

Proof. We simplify the notation p^s as p. Then we have

||∇θL(s(x;θ1)) − ∇θL(s(x;θ2))|| = ||∇θs(x;θ1)∇sL(s(x;θ1)) − ∇θs(x;θ2)∇sL(s(x;θ2))|| = ||∑_m ∇θs_m(x;θ1)(p_m(x;θ1) − δ_{y,m}) − ∇θs_m(x;θ2)(p_m(x;θ2) − δ_{y,m})|| = ||∇θs(x;θ1)(p(x;θ1) − e(y)) − ∇θs(x;θ2)(p(x;θ2) − e(y))|| = ||∇θs(x;θ1)p(x;θ1) − ∇θs(x;θ2)p(x;θ2)|| = ||∇θs(x;θ1)p(x;θ1) − ∇θs(x;θ1)p(x;θ2) + ∇θs(x;θ1)p(x;θ2) − ∇θs(x;θ2)p(x;θ2)|| = ||∇θs(x;θ1)(p(x;θ1) − p(x;θ2)) + (∇θs(x;θ1) − ∇θs(x;θ2))p(x;θ2)|| ≤ ||∇θs(x;θ1)|| ||p(x;θ1) − p(x;θ2)|| + max_m ||∇θs_m(x;θ1) − ∇θs_m(x;θ2)|| ≤ ||∇θs(x;θ1)|| L^p_θ ||θ1 − θ2|| + max_m ||∇θs_m(x;θ1) − ∇θs_m(x;θ2)|| ≤ max_m ( 2ε||∇θg^(m)(x;θ1,2)|| + M||∇θg^(m)(x;θ1) − ∇θg^(m)(x;θ2)|| + L^(m)||θ1 − θ2|| )

G LEARNING CURVE FOR ε_train

Figure 9 shows the learning curves for the target perturbation ε_train during the ramp-up period, while Figure 1 shows the corresponding curves for the scheduled value of ε. The two figures use the same settings in Appendix A.1." }, { "heading": "H SMOOTHNESS", "text": "We empirically measure the non-smoothness of the loss landscape with the difference between the two consecutive loss gradients at θ1 = θ(0) and θ2 = θ(1) in (8), termed the gradient difference (≡ ||∇θL(x;θ(0)) − ∇θL(x;θ(1))||). It is highly related to the ratio of the number of unstable ReLUs (the nonlinearity of the classifier), as shown in Figure 10." }, { "heading": "I MODE CONNECTIVITY", "text": "In this section, we check the mode connectivity (Garipov et al., 2018) between two models that are trained using certifiable training methods.
Mode connectivity is a framework that investigates the connectedness of two models by finding a high-accuracy curve between them. It enables us to understand the loss surface of neural networks.

Let w0 and w1 be two sets of weights corresponding to two different well-trained neural networks. Moreover, let φ_θc(t) with t ∈ [0, 1] be a continuous piecewise-smooth parametric curve with parameters θc such that φ_θc(0) = w0 and φ_θc(1) = w1. To find a low-loss path between w0 and w1, Garipov et al. (2018) suggested finding the parameters θc that minimize the expectation of a loss ℓ(w) over a distribution q_θc(t) on the curve,

L(θc) = E_{t∼q_θc(t)}[ℓ(φ_θc(t))].

To optimize L(θc) for θc, we use the uniform distribution U[0, 1] as q_θc(t) and a Bezier curve (Farouki, 2012) as φ_θc(t), which provides a convenient parameterization of smooth paths connecting the two end points (w0 and w1):

φ_θc(t) = (1 − t)²w0 + 2t(1 − t)θc + t²w1, 0 ≤ t ≤ 1.

A path φ_θc is said to have a barrier if ∃t such that ℓ(φ_θc(t)) > max{ℓ(w0), ℓ(w1)}. The existence of a barrier suggests the modes of the two well-trained models are not connected by the path in terms of the given loss function ℓ (Zhao et al., 2020).

We test the mode connectivity between the models trained with IBP, CROWN-IBP, and OURS. For example, to check the mode connectivity between two different models trained with CROWN-IBP and IBP, we use the loss function used for each model as the user-specified loss for training the parametric curve φ_θc. Therefore, we obtain two curves as depicted in Figures 11, 12, and 13 for each pair of models. Here, we use the identical settings in Appendix A.1.

Figure 11 shows the mode connectivity between CROWN-IBP and IBP. We use the CROWN-IBP loss as the user-specified loss in Figure 11a and the IBP loss in Figure 11b. In this figure, we find that using the CROWN-IBP loss (11a), there exists a barrier between the two models. This suggests they are not connected by the path in terms of the CROWN-IBP loss.
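The quadratic Bezier parameterization above can be sketched as follows. This is our own illustrative code with toy 2-dim "weights"; in practice w0, w1, and θc are full network parameter vectors and θc is trained to minimize E_{t∼U[0,1]}[ℓ(φ_θc(t))]:

```python
import numpy as np

def bezier(t, w0, w1, theta_c):
    """Quadratic Bezier curve phi(t) = (1-t)^2 w0 + 2t(1-t) theta_c + t^2 w1."""
    return (1 - t) ** 2 * w0 + 2 * t * (1 - t) * theta_c + t ** 2 * w1

# toy 2-dim "weights"; in practice these are full parameter vectors
w0 = np.array([0.0, 0.0])
w1 = np.array([1.0, 1.0])
theta_c = np.array([1.0, 0.0])   # trainable control point

p0 = bezier(0.0, w0, w1, theta_c)   # endpoints are fixed for any theta_c
p1 = bezier(1.0, w0, w1, theta_c)
mid = bezier(0.5, w0, w1, theta_c)
```

Because the endpoint terms vanish at t = 0 and t = 1, training θc can only bend the interior of the path, never move the two given models.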
However, with the IBP loss, there is no loss barrier separating the two models. This indicates that with CROWN-IBP it is hard to optimize the parameters from w0 to w1, whereas with IBP it is possible.

Figure 12 shows the mode-connectivity results for IBP and OURS. We find that the two models are not connected to each other using either the IBP bound or the OURS bound, since there exists a barrier in both curves. In this figure, we can also note that OURS has tighter bounds than IBP, because the value of the loss function using OURS is lower than that of IBP.

Finally, Figure 13 illustrates the mode connectivity between CROWN-IBP and OURS. Using CROWN-IBP as the user-specified loss function, we find that the robust loss on the curve is higher than that of the end points. However, when OURS is used as the loss function, the robust loss generally decreases as t increases. This shows that OURS has a much more favorable loss landscape compared to CROWN-IBP. In addition, we find that OURS has a tighter bound than CROWN-IBP, since the value of the robust loss using OURS is lower than that of CROWN-IBP." }, { "heading": "J RELU", "text": "In this section, we investigate how the pre-activation bounds u and l of the activation layers change during training. Each activation node is said to be “active” when the pre-activation bounds are both positive (0 < l ≤ u), “unstable” when they span zero (l ≤ 0 ≤ u), and “dead” when they are both negative (l ≤ u < 0). Figure 14 shows the ratios of the numbers of active and dead ReLUs during the ramp-up period. Notably, we find that CROWN-IBP has more active ReLUs during training compared to the other three methods. Simultaneously, CROWN-IBP has the lowest ratio of dead ReLUs.

Figure 15 shows the numbers of active, unstable, and dead ReLUs during the ramp-up period. We find that in CROWN-IBP, the numbers of unstable and active ReLUs increase as the number of dead ReLUs decreases. This indicates that a number of dead ReLUs change to unstable ReLUs as training proceeds.
However, in the other methods, the number of unstable ReLUs is consistently small, while the number of active ReLUs decreases as the number of dead ReLUs increases.

Figure 16 depicts the histograms of the distribution of the slope u+/(u+ − l−) of the unstable ReLUs during the ramp-up period. In the early stages of CAP training, the slope distribution is concentrated around 0.4. However, as training progresses with a larger ε, the histogram distribution moves to the left, which indicates that unstable ReLUs change to dead ReLUs. This is consistent with the results in Figure 15c. On the other hand, in the case of CROWN-IBP, the histogram distribution moves to the right during training. This is consistent with the results in Figure 15b, which show that the number of active ReLUs increases during training." }, { "heading": "K COMPARISON WITH OTHER PRIOR WORK", "text": "All experiments and results (except for Table 4) in this paper are based on our own reimplementation. For the unimplemented prior work, we compare to the best reported results in the literature in Table 4. We note that the results in Xiao et al. (2018) and Balunovic & Vechev (2019) are evaluated with a MILP-based exact verifier (Tjeng et al., 2017).

L β- AND κ-SCHEDULINGS

Table 5 shows the evaluation results of the models as in Table 1 but trained with a different κ-scheduling (from 0 to 0). Table 6 shows the evaluation results of the proposed models trained with different κ- and β-schedulings." }, { "heading": "M ONE-STEP VS MULTI-STEP", "text": "To get a tighter bound, we propose a multi-step version of (6) as follows:

a_{t+1} = Π_{[0,1]^n} ( a_t − α sign(∇_a L(s(x, y, ε; θ, φ), y)) ). (17)

We compare the original 1-step method (α ≥ 1) to the 7-step (t = 7) method with α = 0.1. The results are summarized in Table 7. We found no significant difference between the two methods, even though the multi-step method takes several times longer.
Therefore, we decided to focus on the one-step method.

N TRAIN WITH ε_train ≥ ε_test

N.1 ε_train ≥ ε_test ON MNIST

Zhang et al. (2019b) and Gowal et al. (2018) observed that IBP performs better when using ε_train ≥ ε_test than ε_train = ε_test. Figure 8 shows the results with different ε_train's for each ε_test. The overfitting issue is more prominent in the case of IBP and CROWN-IBP1→0 than in the proposed method and CROWN-IBP1→1. However, using larger perturbations compromises the standard accuracy, and thus it is desirable to use a smaller ε_train.

N.2 ε_train = 1.1ε_test ON CIFAR-10

As mentioned in Gowal et al. (2018), we also train with ε_train = 1.1ε_test on CIFAR-10. The results are shown in Table 9. They attain slightly improved performance at 2/255, but not at 8/255 and larger ε." }, { "heading": "O TRAINING TIME", "text": "All the training times are measured on a single TITAN X (Pascal) on Medium for CIFAR-10. We train with a batch size of 128 for OURS, CROWN-IBP1→1 and IBP, but with a batch size of 32 for CAP due to its high memory cost. For CAP, we use random projection of 50 dimensions.

• OURS: 115.9 sec / epoch
• CROWN-IBP1→1: 51.68 sec / epoch
• IBP: 14.85 sec / epoch
• CAP (batch size 32, 1 GPU): 751.0 sec / epoch
• CAP (batch size 64, 1 GPU): 724.6 sec / epoch
• CAP (batch size 128, 2 GPUs): 387.9 sec / epoch" }, { "heading": "P LOSS AND TIGHTNESS VIOLIN PLOTS", "text": "We plot the equivalent tightness violin plots in Section 6 for models trained with other methods. The proposed method achieves the best results in terms of loss and tightness, followed by CROWN-IBP, CAP-IBP, and RANDOM. Figure 17 (a)-(b), (c)-(d), and (e)-(f) show the tightness evaluated on the models trained by CROWN-IBP1→0, CROWN-IBP1→1 and IBP, respectively." }, { "heading": "Q COMPARISON WITH CAP-IBP", "text": "As in Section E, we train a model with CAP-IBP and compare it with the proposed method and CROWN-IBP (β = 1).
Figure 18 shows that CAP-IBP has gradient differences (defined in Section H) larger than the proposed method and smaller than CROWN-IBP (β = 1), which leads to a performance between the proposed method and CROWN-IBP (β = 1) (see Table 3). CAP-IBP has looser bounds than CROWN-IBP (β = 1), as shown in Figure 4 and Figure 17, but with a relatively smoother landscape it can achieve a better performance than CROWN-IBP (β = 1)." }, { "heading": "R RELU STABILITY", "text": "To see the effect of unstable ReLUs on smoothness, we adopt the ReLU stability loss (RS loss) L_RS(u, l) = −tanh(1 + u · l) as a regularizer (Xiao et al., 2018). We use L + λL_RS as the loss and run CROWN-IBP (β = 1) with various λ settings. We plot the smoothness and the tightness in Figure 19 and Figure 20 for λ = 0, λ = 0.01, and λ = 10.

We found that the small λ suggested in Xiao et al. (2018) has no effect on reducing the number of unstable ReLUs, since certifiable methods already have few unstable ReLUs as shown in Figure 15, and thus no effect on improving the smoothness. By increasing λ, we observed that RS successfully reduces the number of unstable ReLUs at λ = 10. Figure 19 shows that a large λ leads to better loss variation and gradient difference. This supports that unstable ReLUs are closely related to the smoothness of the loss landscape. However, as Xiao et al. (2018) mentioned, “placing too much weight on RS Loss can decrease the model capacity, potentially lowering the provable adversarial accuracy”, and the models trained with a large λ ≥ 1 could not attain tightness of the upper bound or a significant improvement in robustness, as illustrated in Figure 20. The test errors (Standard / PGD / Verified) are 0.6278 / 0.7189 / 0.7634 at λ = 0.01 and 0.6090 / 0.7085 / 0.7600 at λ = 10." } ]
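The active/unstable/dead classification of Appendix J and the RS regularizer above can be sketched together in numpy. This is our own illustrative code with arbitrary example bounds, not the paper's implementation:

```python
import numpy as np

def relu_states(l, u):
    """Fractions of active (l > 0), unstable (l <= 0 <= u), dead (u < 0) units."""
    active = float(np.mean(l > 0))
    dead = float(np.mean(u < 0))
    return active, 1.0 - active - dead, dead

def rs_loss(l, u):
    """RS regularizer -tanh(1 + u*l), summed over units (Xiao et al., 2018)."""
    return float(np.sum(-np.tanh(1.0 + u * l)))

l = np.array([0.2, -0.5, -2.0])   # one active, one unstable, one dead unit
u = np.array([1.0,  0.5, -0.1])
active, unstable, dead = relu_states(l, u)
# a stable unit (u*l > 0) gets a lower (more negative) penalty than an unstable one
stable_term = -np.tanh(1.0 + 1.0 * 0.2)
unstable_term = -np.tanh(1.0 + 0.5 * (-0.5))
```

Since u·l is positive exactly when the unit is stable, minimizing the RS term pushes pre-activation bounds to the same sign, i.e., toward stability.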
2020
LOSS LANDSCAPE MATTERS: TRAINING CERTIFIABLY ROBUST MODELS WITH FAVORABLE LOSS LAND-
SP:a89a7421e4d3b82156edcc03ff8b24fb4df8df41
[ "- This paper made a finding that weighting up correct predictions for rare class examples also can help to improve the performance of imbalanced classification. In light of this finding, it proposes the Eureka Loss to add additional gradients for examples belong to rare classes in the high-likelihood area when correctly predicted. Experiments on several large-scale benchmarks demonstrate its effectiveness.", "This paper deals with learning imbalanced class distributions. First, it empirically finds that the high-likelihood area for the rare classes benefits classification. Then, based on the findings, it proposes a new learning objective called Eureka Loss, which can be viewed as a combination of the frequency-based and likelihood-based methods to reward the classifier when examples belong to rare classes in the high-likelihood area are correctly predicted. Empirical results on two typical tasks (i.e. image classification and language generation tasks) illustrate its superiority compared with other baselines. " ]
Learning from natural datasets poses significant challenges for traditional classification methods based on the cross-entropy objective due to imbalanced class distributions. It is intuitive to assume that the examples from rare classes are harder to learn so that the classifier is uncertain of the prediction, which establishes the low-likelihood area. Based on this, existing approaches drive the classifier actively to correctly predict those incorrect, rare examples. However, this assumption is one-sided and could be misleading. We find in practice that the high-likelihood area contains correct predictions for rare class examples and it plays a vital role in learning imbalanced class distributions. In light of this finding, we propose the Eureka Loss, which rewards the classifier when examples belong to rare classes in the high-likelihood area are correctly predicted. Experiments on the large-scale long-tailed iNaturalist 2018 classification dataset and the ImageNet-LT benchmark both validate the proposed approach. We further analyze the influence of the Eureka Loss in detail on diverse data distributions.
[ { "affiliations": [], "name": "ANCED DISTRIBUTIONS" } ]
[ { "authors": [ "Mateusz Buda", "Atsuto Maki", "Maciej A Mazurowski" ], "title": "A systematic study of the class imbalance problem in convolutional neural networks", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "Nikos Arechiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": null, "year": 1906 }, { "authors": [ "Nitesh V. Chawla", "Kevin W. Bowyer", "Lawrence O. Hall", "W. Philip Kegelmeyer" ], "title": "SMOTE: Synthetic minority over-sampling technique", "venue": "Journal of Artificial Intelligence Research,", "year": 2002 }, { "authors": [ "Peng Chu", "Xiao Bian", "Shaopeng Liu", "Haibin Ling" ], "title": "Feature space augmentation for long-tailed data", "venue": null, "year": 2008 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Chris Drummond", "Robert Holte" ], "title": "C4.5, class imbalance, and cost sensitivity: Why under-sampling beats oversampling", "venue": "Proceedings of the ICML’03 Workshop on Learning from Imbalanced Datasets,", "year": 2003 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": null, "year": 2017 }, { "authors": [ "Agrim Gupta", "Piotr Dollár", "Ross B. 
Girshick" ], "title": "LVIS: A dataset for large vocabulary instance segmentation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Salman H. Khan", "Munawar Hayat", "Mohammed Bennamoun", "Ferdous Ahmed Sohel", "Roberto Togneri" ], "title": "Cost-sensitive learning of deep feature representations from imbalanced data", "venue": "IEEE Trans. Neural Networks Learn. Syst.,", "year": 2018 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross B. Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jialun Liu", "Jingwei Zhang", "Wenhui Li", "Chi Zhang", "Yifan Sun" ], "title": "Memory-based jitter: Improving visual recognition on long-tailed data with diversity in memory", "venue": null, "year": 2008 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. 
Manning" ], "title": "GloVe: Global vectors for word representation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "Li Shen", "Zhouchen Lin", "Qingming Huang" ], "title": "Relay backpropagation for effective learning of deep convolutional neural networks", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Van Horn", "Pietro Perona" ], "title": "The devil is in the tails: Fine-grained classification in the wild", "venue": "Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xinge Zhu", "Hui Zhou", "Ceyuan Yang", "Jianping Shi", "Dahua Lin" ], "title": "cumulative learning for long-tailed visual recognition", "venue": null, "year": 1912 }, { "authors": [ "Cao" ], "title": "Results on long-tailed CIFAR-10 data with different imbalance degrees. ID is short for imbalance degree. † is defined similarly; ♣", "venue": null, "year": 2019 }, { "authors": [ "Cao" ], "title": "2019), we decay the learning rate by 0.01 at the 160th epoch", "venue": null, "year": 2019 }, { "authors": [ "Kang" ], "title": "2020) for experiments on iNaturalist", "venue": "To be specific,", "year": 2020 } ]
[ { "heading": null, "text": "Learning from natural datasets poses significant challenges for traditional classification methods based on the cross-entropy objective due to imbalanced class distributions. It is intuitive to assume that the examples from rare classes are harder to learn so that the classifier is uncertain of the prediction, which establishes the low-likelihood area. Based on this, existing approaches drive the classifier actively to correctly predict those incorrect, rare examples. However, this assumption is one-sided and could be misleading. We find in practice that the high-likelihood area contains correct predictions for rare class examples and it plays a vital role in learning imbalanced class distributions. In light of this finding, we propose the Eureka Loss, which rewards the classifier when examples belong to rare classes in the high-likelihood area are correctly predicted. Experiments on the large-scale long-tailed iNaturalist 2018 classification dataset and the ImageNet-LT benchmark both validate the proposed approach. We further analyze the influence of the Eureka Loss in detail on diverse data distributions." }, { "heading": "1 INTRODUCTION", "text": "Existing classification methods usually struggle in real-world applications, where the class distributions are inherently imbalanced and long-tailed (Van Horn & Perona, 2017; Buda et al., 2018; Liu et al., 2019; Gupta et al., 2019), in which a few head classes occupy a large probability mass while most tail (or rare) classes only possess a few examples. The language generation task is a vivid example of the long-tailed classification. In this case, word types are considered as the classes and the model predicts probabilities over the vocabulary. Common words such as the, of, and and are the head classes, while tailed classes are rare words like Gobbledygook, Scrumptious, and Agastopia. 
Conventional classifiers based on deep neural networks require a large number of training examples to generalize and have been found to under-perform on rare classes with a few training examples in downstream applications (Van Horn & Perona, 2017; Buda et al., 2018; Cao et al., 2019).\nIt is proposed that the traditional cross-entropy objective is unsuitable for learning imbalanced distributions since it treats each instance and each class equivalently (Lin et al., 2017; Tan et al., 2020). In contrast, the instances from tail classes should be paid more attention, indicated by two main approaches that have been recently investigated for class-imbalanced classification: the frequencybased methods and the likelihood-based methods. The former (Cui et al., 2019; Cao et al., 2019) directly adjust the weights of the instances in terms of their class frequencies, so that the instances from the tail classes are learned with a higher priority no matter whether they are correctly predicted or not. The latter (Lin et al., 2017; Zhu et al., 2018) instead penalize the inaccurate predictions more heavily, assuming that the well-classified instances, i.e., the instances in the high-likelihood area, factor inconsequentially in learning imbalanced distributions.\nHowever, neither of these two approaches realistically depicts the likelihood landscape. In particular, the high-likelihood area, where the classifier makes the correct predictions for both common class examples and rare class ones, contributes significantly to generalization. However, this area is not well-shaped, as illustrated in Figure 1. 
Specifically, the frequency-based methods imply an impaired learning of common class examples that are the principle part of the natural data, while the likelihood-\nbased methods ignore the correctly-predicted rare class examples that can provide crucial insights into the underlying mechanism for predicting such examples.\nIn this paper, we first demonstrate that existing practice of neglecting predictions in the high-likelihood area is harmful to learning imbalanced class distributions. Furthermore, we find that simply mixing the cross-entropy loss and the Focal Loss (Lin et al., 2017) can induce substantially superior performance, which validates our motivation. In turn, we propose to elevate the importance of high-likelihood predictions even further and design a novel objective called Eureka Loss. It progressively rewards the classifiers according to both the likelihood and the class frequency of an example such that the system is encouraged to be more confident in the correct prediction of examples from rare classes. Experimental results on the image classification and the language generation tasks demonstrate that the Eureka Loss outperforms strong baselines in learning imbalanced class distributions.\nOur contributions are twofold:\n• We challenge the common belief that learning for examples in low-likelihood area is more important for learning tail classes and reveal that the correctly-predicted rare class examples make important contribution to learning long-tailed class distributions.\n• We explore a new direction for learning imbalanced classification that focuses on rewarding correct predictions for tail classes examples, rather than penalizing incorrect ones. The proposed Eureka Loss rewards the classifier for its high-likelihood predictions progressively to the rarity of their class and achieves substantial improvements on various problems with long-tailed distributions." 
}, { "heading": "2 RELATED WORK", "text": "Frequency-based Data and Loss Re-balancing Previous literature on learning with long-tailed distributions has mainly focused on re-balancing the data distribution and re-weighting the loss function.

The former is based on the straightforward idea of manually creating a pseudo-balanced data distribution to ease the learning problem, including up-sampling of rare class examples (Chawla et al., 2002), down-sampling of head class examples (Drummond & Holte, 2003), and a more concrete sampling strategy based on class frequency (Shen et al., 2016).

As for the latter, recent studies propose to assign different weights to different classes, where the weights can be calculated according to the class distribution. For example, Khan et al. (2018) design a cost-sensitive loss for major and minor class examples. An intuitive method is to down-weight the loss of frequent classes while up-weighting the contribution of rare class examples. However, frequency is not suitable to be directly treated as the weight since there exists overlap among samples. A more advanced alternative, CB (Cui et al., 2019), proposes to calculate the effective number as a substitute for the frequency in loss re-weighting. However, since it assigns lower weights to head classes in maximum likelihood training (the Cross Entropy objective), it seriously impairs the learning of head classes. Moreover, CB requires delicate hyper-parameter tuning for every imbalanced distribution, leading to a lot of manual effort. From the perspective of max-margin, a recent study, LDAM (Cao et al., 2019), proposes to up-weight the loss of tail classes by a class-distribution-based margin. Compared to the above methods, we choose to decrease the loss of tail classes by rewarding correct predictions rather than increasing the loss of tail classes through aggravated penalization.

Deferring the Frequency-based Class-balanced Training Recent studies find that deferring class-balanced training helps learn high-quality representations (Liu et al., 2019) and propose deferred Class-balanced training (deferred CB) (Cao et al., 2019), which adopts the Cross Entropy objective at the beginning of training. Similarly, the Decoupling method (Kang et al., 2020) shows that re-balancing strategies impair the quality of learned feature representations and demonstrates improved performance when representations are learned with the original data distribution, by training the model with Cross Entropy in the first phase and adopting class-balanced training in the second phase. This decoupling strategy can also be found in BBN (Zhou et al., 2019), which includes both class-imbalanced and class-balanced training, with the transition from the former to the latter achieved through a curriculum learning schedule. These methods achieve state-of-the-art performance in long-tailed classification. To be comparable with these methods and to analyse whether the Eureka Loss is complementary to this technique, we propose the deferred Eureka Loss, in which the reward for rare class predictions is introduced to encourage the model to learn rare patterns when learning is stalled.

Likelihood-based Loss Another dominant method for imbalanced classification is the likelihood-based Focal Loss (FL) (Lin et al., 2017), which proposes to down-weight the contribution of examples in the high-likelihood area.
However, we argue that it is harmful for learning tail classes and choose an opposite direction by highlighting the high-likelihood area with a steeper loss.\nTransferring Representations Techniques for transferring information from sufficient head classes examples to under-represented rare classes examples belong to a parallel successful direction in this field. They include MBJ (Liu et al., 2020), which utilizes external semantic feature memory and FSA (Chu et al., 2020), which decomposes feature in to class-specific and class-generic components. These latest transfer learning based studies are less related to our paper but they also obtain good improvements in long-tailed classification, so we add them into comparison in the experiments." }, { "heading": "3 ROLE OF THE HIGH-LIKELIHOOD AREA", "text": "The existing approaches to the long-tailed classification independently consider the class frequency and the example likelihood. However, we show that this one-sided reflection is problematic when dealing with the tail class examples that can be confidently classified. The tail class examples can be easily classified by the classifier, and the head class examples can also be hard for the classifier to recognize. The difficulty of classification depends on the inherent characteristic of the classes, rather than the sample size of the class. For example, in species classification, the Portuguese man o’war may be a rare class but can be easily classified due to its distinct features, compared to various kinds of moths which are common classes yet are hard to distinguish. However, the frequency-based methods continuously drive the classifier to fit the rare class examples, especially when they are difficult to predict, which may lead to overfitting. 
On the other hand, the likelihood-based methods relax the focus on the high-likelihood area, which contains the tail class examples that are not hard to predict and which contributes to generalization.\nTo verify this point of view, we analyze the problem by dissecting the influence of the high-likelihood area with respect to the class frequency and demonstrate that properly encouraging the learning of well-classified tail class examples induces substantial improvements. Before that, we first give a brief introduction to classification with long-tailed class distributions." }, { "heading": "3.1 PREPARATION: CLASSIFICATION WITH LONG-TAILED CLASS DISTRIBUTIONS", "text": "Let’s consider the multi-class classification problem with a long-tailed class distribution. Given a class set C, n denotes the number of different classes in C and m_i is the number of examples of the class C_i. For simplicity, we sort the class set C according to the cardinality m_i of C_i such that C_0 is the class with the most examples and C_{n−1} is the rarest class. Let p be an n-dim probability vector predicted by a classifier model f(x;θ) based on the input x, where each element p_i denotes the probability of the class C_i, and let y be an n-dim one-hot label vector with y denoting the ground-truth class.\nThe probability vector can be calculated as\np = σ(f(x;θ)), (1)\nwhere σ is the normalizing function, e.g., softmax for multi-class classification. Typically, the parameters are estimated using maximum likelihood estimation (MLE), which is equivalent to using the Cross-Entropy Loss (CE) function, where the scalar y · log p can be regarded as the (log-)likelihood:\nL = −E_{(x,y)∈D} log p_model(y|x) = −(1/|D|) Σ_{(x,y)∈D} y · log p. (2)\nFor deep neural network–based classifiers, due to the non-linearity of the loss function, the problem is typically solved by stochastic gradient descent, which requires the calculation of the gradient with respect to the parameters using the chain rule, a process called back-propagation:\n∂L/∂θ = (∂L/∂p)(∂p/∂θ) = (∂L/∂p) · ∂σ(f(x;θ))/∂θ. (3)\nWe introduce the term likelihood gradient to denote ∂L/∂p, which modulates how the probability mass should be shifted and is a characteristic of the loss function instead of the classifier. For learning imbalanced class distributions, the common methods aim to shape the likelihood gradient so that the rare classes are learned with priority, i.e., embodying a sharper loss and a larger likelihood gradient.\nFrequency-Based Methods Frequency-based methods alter the likelihood gradient according to the class frequencies, which are irrelevant to how well individual examples are classified. A simple form uses an n-dim weight vector w, composed of the class weights based on their frequencies in the dataset, to determine the importance of examples from each class:\nL = −w_y · (y · log p). (4)\nNote that when w = 1, it is identical to the cross-entropy objective. The weight vector is typically calculated as w_i = m̄/m_i, where m̄ is the average of the m_i. As we can see, the standard weight is taken relative to the average class size, so that the classes with more examples are down-weighted and the classes with fewer examples are up-weighted. For a natural long-tailed distribution, the average is larger than the median, which suggests that more classes are up-weighted. Advanced frequency-based methods try to obtain a more meaningful measurement of the class size; e.g., the Class-Balanced Loss (CB) proposed by Cui et al. (2019) utilizes an effective number (1 − β^{m_i})/(1 − β) for each class, where β ∈ [0.9, 1) is a tunable class-balance term.
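As an illustration of the weighting scheme above, here is a minimal Python sketch (ours, not from the paper) that builds per-class weights from the effective number and plugs them into a weighted cross-entropy; normalizing the weights to an average of 1 mirrors the w_i = m̄/m_i convention.

```python
import math

def effective_number_weights(counts, beta=0.999):
    """Per-class weights from the effective number (1 - beta**m_i) / (1 - beta),
    normalized so the average weight is 1 (analogous to w_i = m_bar / m_i)."""
    eff = [(1.0 - beta ** m) / (1.0 - beta) for m in counts]
    raw = [1.0 / e for e in eff]          # rarer class -> larger raw weight
    scale = len(raw) / sum(raw)           # make the weights average to 1
    return [w * scale for w in raw]

def weighted_cross_entropy(p, y, w):
    """L = -w_y * log p_y for a probability vector p and label index y."""
    return -w[y] * math.log(p[y])
```

With counts [500, 5] the rare class receives a much larger weight than the common one, which is exactly the re-balancing effect Eq. (4) describes.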
Likelihood-Based Methods Different from the frequency-based methods, likelihood-based methods adjust the likelihood gradient based on the instance-level difficulty as predicted by the classifier, such that the examples in the low-likelihood area receive more focus in training. For example, the well-known Focal Loss (FL) with a balance factor α proposed by Lin et al. (2017) takes the following form:\nL = −α(1 − p_y)^{γ_f} · (y · log p), (5)\nwhere γ_f > 0 controls the convexity of the loss, and a higher γ_f indicates a more significant adjustment. Note that when α = 1 and γ_f = 0, it is identical to the cross-entropy objective. Following previous works (Cao et al., 2019; Liu et al., 2019; Cui et al., 2019), α is set to 1 in multi-class classification, and the Class-Balanced Focal Loss (FL+CB) can be viewed as the Focal Loss with an uneven α for each class in the multi-class setting. The key idea is to pay less attention to the well-classified examples and more attention to the badly-classified examples, because it is natural to assume that the tail class examples are harder to learn and thus cannot be well-classified. However, such methods neglect the correctly-predicted tail class examples, a practice which we show is not constructive to the learning of long-tailed class distributions." }, { "heading": "3.2 UNDERSTANDING THE INFLUENCE OF THE HIGH-LIKELIHOOD AREA", "text": "To understand the influence of the high-likelihood area, we first prepare a variant of the Focal Loss, the Halted Focal Loss (HFL), such that the high-likelihood area is not deprioritized.
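The Focal Loss of Eq. (5), restricted to the ground-truth probability p_y, can be sketched as follows (an illustrative snippet, not the authors' code):

```python
import math

def focal_loss(p_y, alpha=1.0, gamma_f=2.0):
    """Focal Loss (Lin et al., 2017) on the ground-truth probability p_y:
    L = -alpha * (1 - p_y)**gamma_f * log(p_y); gamma_f = 0 recovers CE."""
    return -alpha * (1.0 - p_y) ** gamma_f * math.log(p_y)
```

The `(1 - p_y)**gamma_f` factor shrinks toward 0 for well-classified examples (high p_y), which is precisely the down-weighting of the high-likelihood area discussed above.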
The Halted Focal Loss reverts the Focal Loss to the Cross-Entropy Loss when the likelihood is high enough:\nL = −α(1 − p_y)^{γ_f} · (y · log p), if p_y ≤ ϕ; L = −α y · [log p + b], otherwise, (6)\nwhere p_y is the prediction probability of the correct label, b = α(1 − (1 − ϕ)^{γ_f}) log ϕ ensures monotonicity and continuity, and ϕ is the boundary between the low- and the high-likelihood area, which we set to 0.5, i.e., a likelihood above which the prediction is definitely correct. This mixed loss is plotted in the left of Figure 2; it has the same likelihood gradient as the cross-entropy in the high-likelihood area and remains the same as the Focal Loss in the low-likelihood area.\nFigure 2: Regaining focus on the high-likelihood area for the rare classes benefits classification. Left: Illustration of HFL, which reverts FL to CE in the high-likelihood area. Right: Applying HFL only to the rare classes improves overall performance.\nMethod AP AP50 AP75\nCE 32.8 52.3 34.7\nFL 33.8 52.5 35.9\nHFL 34.0 52.7 36.2\nFL (Head) + HFL (Tail) 34.2 52.7 36.3\nTable 1: Results of HFL on the COCO detection dataset; AP denotes average precision. As we can see, increasing the importance of the high-likelihood area achieves better results, and the main improvements come from the tail class examples.\nTo decouple the effect of class frequency, we further explore gradually transitioning from the Focal Loss to the Halted Focal Loss according to the class frequency of an example, e.g., from adopting the Halted Focal Loss only for the rarest class (and the Focal Loss for the other classes) to adopting the Focal Loss only for the most common class (and the Halted Focal Loss for the rest).
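A sketch of the Halted Focal Loss of Eq. (6) on the ground-truth probability p_y (illustrative; here we write the offset b directly from the continuity condition at p_y = ϕ, using log(1/ϕ) = −log ϕ so the two branches actually meet):

```python
import math

def halted_focal_loss(p_y, alpha=1.0, gamma_f=2.0, phi=0.5):
    """Halted Focal Loss: Focal Loss below phi, shifted cross-entropy above phi.
    The offset b is chosen so the two branches meet at p_y = phi."""
    if p_y <= phi:
        return -alpha * (1.0 - p_y) ** gamma_f * math.log(p_y)
    # Continuity at phi: -(1-phi)**gamma_f * log(phi) = -(log(phi) + b)
    b = -(1.0 - (1.0 - phi) ** gamma_f) * math.log(phi)
    return -alpha * (math.log(p_y) + b)
```

Above ϕ the loss is cross-entropy plus a constant, so its likelihood gradient matches CE in the high-likelihood area, as stated in the text.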
Concretely, we set a proportion t ∈ [0, 1] of classes to receive this loss, and the remaining 1 − t proportion of classes adopt the original Focal Loss. The classes are ranked by inverse frequency, such that the first class is the rarest class.\nWe conduct experiments on long-tailed CIFAR-10 using the aforementioned protocol to examine the effect of the high-likelihood area. The construction of the dataset is provided in Appendix C. We run each configuration 5 times with different random initializations and report the average test performance. The results are shown in the right of Figure 2.\nAs we can see, compared to the original Focal Loss, the proposed adaptation achieves better performance, indicating that regaining focus on the high-likelihood area is beneficial. Nonetheless, the phenomenon could also be attributed to a better learning of the common classes instead of the rare classes. Our analysis based on the class frequency resolves this concern, because the Halted Focal Loss brings more improvements if only the tail classes are learned this way; e.g., applying it to the top-4 rarest classes achieves the best overall performance, which proves that there are rare class examples that reside in the high-likelihood area and have a non-negligible effect on generalization.\nThe importance of the high-likelihood area of the rare examples is further validated on the COCO detection dataset, where the classifier should determine whether an object appears in the image or not. The positive detection is the rare class since there are many false proposals. The experiment setting is in Appendix C. AP50 and AP75 measure the precision under different levels of overlap between predictions and ground-truth. As shown in Table 1, strengthening the high-likelihood area of the Focal Loss, especially for the rare class examples, obtains more accurate and confident predictions."
}, { "heading": "4 EUREKA LOSS", "text": "We have shown that the high-likelihood area matters for long-tailed classification and, in particular, that the rare class examples in this area make pivotal contributions. Inspired by this finding, we propose to further enhance the importance of the high-likelihood area so that the likelihood gradient in the high-likelihood area can match or even surpass that in the low-likelihood area. Moreover, the adjustment is in line with the frequency of the class, so the rarer the class, the larger the likelihood gradient. Extending the adjustment term b in Eq. (6), we propose the Eureka Loss (EL):\nL = −y · log p − bonus · encouragement, (7)\nwhere −bonus · encouragement is intended to reward the well-classified rare examples. The bonus term depends on the example likelihood, and the encouragement term depends on the class frequency. Different from the existing approaches that scale the Cross-Entropy Loss, punishing the incorrect predictions selectively, the proposed Eureka Loss deals with long-tailed classification from another perspective, rewarding the correct predictions progressively with their class frequencies.\nBonus indicates how well the system executes the task and is designed to be a function of the probability of the ground-truth class, rewarding the model when it makes correct predictions. In particular, in light of the findings discussed in Section 3.2, we propose to increase the likelihood gradient in the high-likelihood area and adopt the form\nbonus = y · log(1 − p), (8)\nwhich ensures that the monotonicity of the likelihood gradient is consistent with that of the Cross-Entropy Loss, meaning that the classifier obtains more bonus when making highly-confident correct predictions. This design runs counter to most existing studies in that the high-likelihood area is given more focus than the low-likelihood area.\nEncouragement indicates that the system attains unusual achievements that should be encouraged.
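Putting the pieces together, here is a minimal sketch of the Eureka Loss (illustrative only, not the authors' released implementation; the signs are arranged so that confident correct predictions lower the loss, matching the +log(1−p) bonus curve compared in Figure 3, and the encouragement w_y = m̄/m_y defined in the next paragraph is computed with the effective number):

```python
import math

def eureka_loss(p, y, counts, beta=0.9999, eps=1e-12):
    """Cross-entropy plus a class-frequency-scaled reward for confident
    correct predictions. `p` is the predicted probability vector, `y` the
    ground-truth index, `counts` the per-class training example counts."""
    # Effective number (1 - beta**m_i) / (1 - beta) as the class-size measure.
    eff = [(1.0 - beta ** m) / (1.0 - beta) for m in counts]
    m_bar = sum(eff) / len(eff)
    encouragement = m_bar / eff[y]            # w_y = m_bar / m_y
    p_y = min(max(p[y], eps), 1.0 - eps)      # keep the logs finite
    bonus = math.log(1.0 - p_y)               # steep reward as p_y -> 1
    return -math.log(p_y) + encouragement * bonus
```

Under this sketch a confident prediction on a rare class is rewarded far more than the same prediction on a common class, which is the intended behavior of the loss.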
Since the unusual achievements in long-tailed classification are correctly predicting rare class examples, we propose to reward the system based on the frequency of the example’s class:\nencouragement = w_y = m̄ / m_y, (9)\nwhere m_y denotes the measurement of the frequency of the class y. The form is flexible, similar to the frequency-based methods, and can thus be further extended based on the related studies. In our experiments, we use the effective number from Cui et al. (2019) as the measurement.\nCompared to the existing frequency-based and likelihood-based objectives, our Eureka Loss uses the bonus term to calibrate the attention to different likelihood landscapes and the encouragement term to inform the model of the class difficulty, composing a more targeted yet comprehensive loss for learning imbalanced distributions." }, { "heading": "5 EXPERIMENTS", "text": "We validate the proposed Eureka Loss on diverse long-tailed classification problems and analyze the characteristics of the Eureka Loss with insights into the learned models." }, { "heading": "5.1 TASKS, DATASETS, AND TRAINING SETTINGS", "text": "Tasks and Datasets We conduct experiments on two image classification tasks and a dialogue generation task. iNaturalist 2018 is a real-world dataset which embodies a highly imbalanced class distribution over 8,142 classes. Apart from the test performance, we also report the validation performance grouped by the class frequency, categorizing the examples into three groups: many (classes with more than 100 examples), medium (classes with 20 to 100 examples), and few (classes with fewer than 20 examples). ImageNet-LT (Liu et al., 2019) is an artificially constructed long-tailed classification dataset based on ILSVRC 2012 with 1,000 classes.
ConvAI2 is a natural conversation dataset for evaluating dialogue systems, where each word type can be treated as a class, i.e., 18,848 words (classes) in total, and it has extremely imbalanced training and test sets.\nEvaluation Metric For the image classification tasks, we use the accuracy on the ’All’ data and on the subsets of classes with ’Many’, ’Medium’, and ’Few’ examples, i.e., the precision of the top-1 prediction. Since the test sets of those tasks are balanced across classes, we further propose to estimate the accuracy on the imbalanced class distribution, which reflects the natural performance in real-world scenarios. The natural accuracy is the linear interpolation of the accuracy on the balanced test set using the class frequencies from the training set. For the natural language generation task, we adopt the micro and macro F-scores from Zhang et al. (2018) between the generated and the reference sentences to check how well the systems participate in the conversation. We further adopt the 4-gram diversity to examine the rare phrases, since a well-known problem for dialogue tasks is that the model tends to generate common, dull, and repetitive responses and thus cannot capture the diversity of natural language distributions. Since the test set of the natural language generation task is naturally imbalanced, we do not need to estimate the natural performance.\nFor the detailed introduction to tasks, datasets, and training settings, please refer to the appendix." }, { "heading": "5.2 RESULTS", "text": "iNaturalist 2018 The results are reported in Table 2. We tune the hyper-parameters for our implemented baselines and report the performance averaged over 3 runs at the best setting.\nWe compare the Eureka Loss with the frequency-based Class-balanced Loss (CB), the likelihood-based Focal Loss (FL), and their combination FL+CB.
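The natural accuracy described in the evaluation metric above is simply a frequency-weighted average of the per-class accuracies measured on the balanced test set; a short sketch (hypothetical helper, not from the paper):

```python
def natural_accuracy(per_class_acc, train_counts):
    """Estimate accuracy under the natural (imbalanced) label distribution by
    weighting balanced-test per-class accuracies with training-set class
    frequencies."""
    total = sum(train_counts)
    return sum(a * m / total for a, m in zip(per_class_acc, train_counts))
```

With uniform counts this reduces to the ordinary balanced accuracy, while skewed counts pull the estimate toward the head-class performance.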
As we can see from the first group in the table, neither FL nor CB achieves improvements over Cross Entropy (CE), but the Eureka Loss outperforms CE by a large margin in terms of overall accuracy and accuracy for classes with few examples.\nIn contrast to the first group, when considering only the accuracy on the balanced test set, the two-stage variants of frequency-based class-balanced training, which adopt the class-balanced training only in the later training phase, including deferred CB (denoted CB† in the table), LDAM + deferred CB, BBN, and Decoupling-LWS, enjoy a clear advantage over CE. In order to check whether the Eureka Loss is additive with the deferred method and the class-balanced training, we include the deferred Eureka Loss and Eureka Loss + CB† in the comparison.\nThe deferred Eureka Loss is motivated by the intuition that, when training enters a bottleneck stage, introducing the Eureka Loss to reward rare classes encourages the model to learn less common patterns, which may be helpful for learning. Compared with the original method, the deferred encouragement brings improvements on both the balanced and the imbalanced test distributions (+1.4 and +2.4 regarding All and All (Natural), respectively). Moreover, the class-balanced training still impairs the learning of common classes even under the deferred setting, which may translate into unfavorable natural performance in real applications; e.g., the accuracy on the ’Many’ subset for CB† and Decoupling-LWS under-performs CE by 3.1, and as a result, applying CB† reduces the Natural accuracy by 1.3. In contrast, the deferred Eureka Loss largely outperforms CE and these methods on both the balanced and the imbalanced test distributions. The reason may be that we do not impair the CE learning, and the additional reward for rare classes is less harmful.
Since the Eureka Loss only introduces an additive term, it is flexible and can be combined with CB; this combination yields the best All accuracy of 70.3.\nIn all, adopting the Eureka Loss achieves a balanced performance on both common and rare classes. Besides, we also outperform the latest representation-transfer-based methods, including MBJ and FSA.\nImageNet-LT Table 3 demonstrates the results of various methods on ImageNet-LT. For this artificial dataset, we first compare with the representative frequency-based method CB and the likelihood-based method Focal Loss (FL). As we can see, the proposed method obtains a significant improvement on the balanced test set and also maintains the lead on the virtual natural test set. Compared with the methods that defer the class-balanced training, including deferred CB and Decoupling-LWS, the correspondingly modified Eureka Loss also enjoys a comfortable margin and arguably excels at balancing the performance on both the common and the rare classes.\nConvAI2 Table 4 shows that the proposal helps the prediction of rare words (+10% macro F-score) and thus improves the diversity of language generation (+10% 4-gram diversity). Since this dataset is extremely imbalanced, e.g., the imbalance ratio is over 200,000, the frequency-based methods require extensive tuning to work; we thus omit them from the comparison, as we were not able to reproduce favorable results. Compared with the likelihood-based Focal Loss, which is marginally better than the original cross-entropy loss, the Eureka Loss still obtains substantial improvements.\n5.3 ANALYSIS\nEffect on Distributions of Different Imbalance Degrees To analyze the effect on imbalanced distributions of different degrees, we construct several artificial datasets based on CIFAR-100 and control the size of the rarest class. The imbalance degree stands for the ratio of the class size of the most common class to that of the rarest class.
Hence, the larger the degree, the more imbalanced the dataset. For example, if the imbalance degree is 100 for CIFAR-100, the most common class has 500 examples and the rarest class has 5 examples. As shown in Table 5, the Eureka Loss is consistently better than the existing methods, especially for datasets that are more imbalanced. For results on CIFAR-10, please refer to the appendix.\nVarying Strength of Bonus To illustrate the importance of the high-likelihood area in imbalanced classification within the Eureka Loss, we compare an exponential-form bonus, called Power Bonus (PB), to the original bonus; it raises the probability of the ground-truth class to a power γ_b:\nPB(p) = −y · p^{γ_b}, (10)\nwhere γ_b is a positive value that ensures monotonicity and can be tuned for different tasks. Besides, CE achieves a 71.4% accuracy, and the likelihood bonus with deferred encouragement gets a 76.1% accuracy. Figure 3 demonstrates that a bigger likelihood gradient in the high-likelihood area brings more improvements; e.g., the power bonus with a power of 4 is better than bonuses of smaller powers.\nFigure 3: Varying strength of bonus on long-tailed CIFAR-10; higher power γ_b indicates higher strength. Left: the compared bonus curves (CE, −p^0.5, −p^1, −p^2, −p^4, and +log(1−p)). Right: accuracy as the power varies.\nMethod All Many Medium Few\nEL (β=0.99) 45.8 67.2 38.4 11.3\nEL (β=0.999) 47.8 66.1 42.0 16.4\nEL (β=0.9999) 50.0 67.1 44.7 19.8\nTable 6: Varying strength of encouragement on the dev set of ImageNet-LT. Higher β means higher strength.\nVarying Strength of Encouragement The strength of encouragement is determined by both the class frequency and the hyper-parameter β, as we use the effective number of the class.
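The Power Bonus of Eq. (10), added to cross-entropy and optionally scaled by an encouragement term, can be sketched as follows (illustrative; the `encouragement` scaling is our assumption for combining it with Eq. (7)):

```python
import math

def power_bonus_loss(p_y, gamma_b=2.0, encouragement=1.0):
    """Cross-entropy plus the Power Bonus PB(p) = -p_y**gamma_b (Eq. 10),
    scaled by a class-dependent encouragement; larger gamma_b concentrates
    the reward (and hence the likelihood gradient) near p_y = 1."""
    return -math.log(p_y) - encouragement * p_y ** gamma_b
```

With a larger exponent, most of the reward is earned only once p_y is close to 1, which is why the higher powers in Figure 3 steepen the loss in the high-likelihood area.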
As β controls the variance of the effective number, e.g., when β = 0, the variance is 0, meaning all of the classes receive equal encouragement, we control the strength of the encouragement towards tail classes by altering β. The results on the validation set of ImageNet-LT are shown in Table 6. As we can see, higher β (more encouragement for tail classes) is connected to higher overall accuracy and better tail class performance, which again validates our motivation for encouraging correct rare class predictions.\nEffect on Example-Likelihood The Eureka Loss rewards the high-likelihood predictions, especially for tail classes. It is interesting to see how the training dynamics are changed by this preference. In order to understand the effect, we visualize the example likelihoods grouped by target class frequencies after training in Figure 4 and Figure 5 (due to the space limit, the complete comparison is provided in Appendix A), which are from the validation set of iNaturalist 2018. As we can see, with the Eureka Loss, the examples in the high-likelihood area are driven to the extreme. For example, considering the medium and the low frequency groups, the “hard” examples that may be inherently difficult to classify stand invariant, while the examples that can be classified correctly are now treated by the system with more confidence. These dynamics translate into better accuracy on unseen examples in the test set, hinting at the importance of rare class examples in the high-likelihood area for the generalization of learning imbalanced class distributions." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we examine the effect of the high-likelihood area on learning imbalanced class distributions. We find that the existing practice of relatively diminishing the contribution of the examples in the high-likelihood area is actually harmful to the learning.
We further show that the rare class examples in the high-likelihood area make a pivotal contribution to model performance and should be focused on instead of being neglected. Motivated by this, we propose the Eureka Loss, which additionally rewards the well-classified rare class examples. The results of the Eureka Loss on the image classification and natural language generation problems demonstrate the potential of reconsidering the role of the high-likelihood area. In-depth analysis also verifies the effectiveness of the investigated loss form and reveals the learning dynamics of different approaches to long-tailed classification." }, { "heading": "B FURTHER RESULTS AND ANALYSIS", "text": "" }, { "heading": "B.1 RESULTS ON INATURALIST 2018 WITH TRAINING FOR 90 EPOCHS", "text": "It is found by existing work (Kang et al., 2020) that training much longer on the iNaturalist 2018 dataset can produce better scores and reflect the performance of the models more authentically. However, most previous studies conduct training for a shorter time. To keep consistent with previous research in this field, we also train the models using the Eureka Loss for 90 epochs, and the results are shown in Table 7. In this setting, the Eureka Loss achieves better accuracy than the two-stage decoupling methods (Decoupling-LWS and BBN), and the advantage is more profound under the Natural accuracy; for example, compared to Decoupling-LWS, the deferred Eureka Loss gains 3.8 Natural accuracy. Compared to the one-stage methods, including the Class-Balanced Loss (CB), the Focal Loss (FL), the Class-Balanced Focal Loss (FL+CB), and LDAM, the model trained with the Eureka Loss is much more accurate on the test distribution." }, { "heading": "B.2 RESULTS ON LONG-TAILED CIFAR-10", "text": "In the main text, we have reported the results of the Eureka Loss varying the class imbalance on the CIFAR-100 dataset.
Here we also perform comprehensive experiments on long-tailed CIFAR-10 and report the top-1 precision on the balanced test set. The results are shown in Table 8. When combined with the Class-Balanced Loss, the Eureka Loss brings a higher improvement in terms of accuracy than the Cross-Entropy Loss and LDAM." }, { "heading": "B.3 HYPER-PARAMETER OF THE FOCAL LOSS", "text": "In the paper, we report results for the Focal Loss with the best hyper-parameters. For COCO detection, the setting of α = 0.25, γ = 2 is the best setting reported in Table 1.b of the original paper (Lin et al., 2017). For the other multi-class classification tasks, we tune the hyper-parameters of the Focal Loss. The accuracies of the Focal Loss with different values of the hyper-parameter γ are listed in the table. We set γ = 1 for the Focal Loss since it is consistently optimal in long-tailed image classification. For ConvAI2, γ = 0.5 under-performs Cross Entropy, and neither γ = 1 nor γ = 2 outperforms the other, so we report the Focal Loss with γ = 1 and with γ = 2 in Table 4." }, { "heading": "B.4 COMPLEMENTARY EXPERIMENT TO THE MOTIVATION EXPERIMENT", "text": "In Section 3, we propose the Halted Focal Loss (HFL) and compare it to the Focal Loss (FL) to illustrate the potential of the high-likelihood area. However, its loss is no steeper than Cross Entropy (CE). Moreover, the Focal Loss does not beat CE in the setting of multi-class classification. In order to bridge the gap between the possibly weak motivation experiment of the Halted Focal Loss and the proposed Eureka Loss, we propose the simplified Eureka Loss:\nL = −y · log p, if p_y ≤ ϕ; L = −y · log p + y · [log(1 − p) − b], otherwise, (11)\nwhere ϕ is set to 0.5, and b is log(1 − ϕ).\n1Following (Cui et al., 2019), we omit the hyper-parameter α, since the Focal Loss with an uneven ’alpha’ for each class in the setting of multi-class classification can be viewed as the Class-Balanced Focal Loss (FL+CB), and FL+CB is compared individually.
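Eq. (11) can be sketched directly (illustrative snippet; note that with b = log(1 − ϕ) the second branch reduces to −log ϕ at p_y = ϕ, so the loss is continuous):

```python
import math

def simplified_eureka_loss(p_y, phi=0.5):
    """Simplified Eureka Loss (Eq. 11): plain cross-entropy below phi; above
    phi, add the bonus log(1 - p_y) - b with b = log(1 - phi), so the two
    branches meet at p_y = phi."""
    if p_y <= phi:
        return -math.log(p_y)
    b = math.log(1.0 - phi)
    return -math.log(p_y) + (math.log(1.0 - p_y) - b)
```

Below ϕ the loss is untouched CE, while above ϕ it drops much faster than CE, which is the "steeper high-likelihood area" behavior the appendix argues for.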
In the simplified Eureka Loss, the encouragement is removed, and to keep the low-likelihood area unchanged, the new bonus term starts rewarding the model from p = ϕ. As shown in Table 10, HFL (t=0.5) is also better than HFL and FL in terms of accuracy on tail classes on the large-scale long-tailed classification dataset iNaturalist 2018. This result once again shows that the high-likelihood area matters and that near-correct predictions of rare classes play a major role. However, HFL is the combination of the Focal Loss (in the low-likelihood area) and Cross Entropy (in the high-likelihood area), so its performance is constrained. Unlike HFL, the simplified Eureka Loss is built on CE, and its loss is much steeper than Cross Entropy in the high-likelihood area; it outperforms Cross Entropy (CE) and HFL in terms of all metrics, especially on the subset of tail classes. The Eureka Loss reported in Table 2 is a continuous version of the simplified Eureka Loss with an additional encouragement for rare classes; similar to HFL (t=0.5), this setting, which rewards rare classes more, achieves the best overall performance." }, { "heading": "B.5 EUREKA LOSS MITIGATES OVER-FITTING ON TAIL CLASSES", "text": "As shown in Figure 6, compared to the Cross-Entropy Loss, the Eureka Loss reduces the gap between the training accuracy and the test accuracy from 33.0 to 28.6 on tail classes. Moreover, even though the Class-Balanced Loss achieves the highest training accuracy, its test accuracy is unexpectedly low. The difference in performance between the seen examples and the unseen examples indicates the degree of over-fitting. The results are from the “Few” subset of iNaturalist 2018 after training for 90 epochs." }, { "heading": "C DETAILS OF EXPERIMENTAL SETTINGS", "text": "" }, { "heading": "C.1 DATASETS", "text": "There are six datasets used in this paper in total, and an overview of the dataset statistics is presented in Table 11 and Figure 7.
For the image classification tasks, the iNaturalist 2018 dataset is the most imbalanced and has the most classes, which makes it the most suitable for evaluating long-tailed classification. For the language generation task, ConvAI2 has an imbalance ratio of 277K, which, however, should be taken cautiously, since most of the tail classes are not covered in evaluation. The common practice for evaluating learning on imbalanced language distributions is to investigate the diversity of the generated text. The 4-grams can be regarded as high-order classes, and a 4-gram of four common words can also be a “rare class”." }, { "heading": "C.2 TRAINING SETTINGS", "text": "CIFAR-10 and CIFAR-100 For experiments on long-tailed CIFAR-10 and CIFAR-100, the backbone network is ResNet-32 (He et al., 2016). The model is optimized with SGD with a momentum of 0.9. The learning rate is set to 0.1, and the model is trained for 200 epochs with 128 examples per mini-batch. To stabilize training, we adopt the warm-up strategy used by Goyal et al. (2017) in the first 5 epochs. Following Cao et al. (2019), we decay the learning rate by 0.01 at the 160th epoch and again at the 180th epoch. For the results in Figure 2 and Figure 3, we conduct experiments on long-tailed CIFAR-10 with an imbalance ratio of 10.\nImageNet-LT For experiments on ImageNet-LT (ILSVRC 2012), the base network is ResNeXt-50 (He et al., 2016). The batch size is set to 512 to accelerate training. The initial learning rate is 0.2, and we utilize a cosine learning rate scheduler.\niNaturalist 2018 As in the experiments on ImageNet-LT, we follow the default setting in Kang et al. (2020) for experiments on iNaturalist. To be specific, we adopt the ResNet-50 model and use SGD to train the model for 200 epochs with batch size 512 and a cosine learning rate schedule which gradually decays from 0.2 to 0.0.
Results on the validation set are also reported on the subsets of many (> 100 samples), medium (20−100 samples), and few (< 20 samples) classes, respectively.\nConvAI2 For the conversation generation task, we utilize a two-layer LSTM (Hochreiter & Schmidhuber, 1997) encoder-decoder architecture as our base network. The hidden size of both the encoder and the decoder is set to 1024. We optimize the model with the SGD optimizer with momentum 0.9; the batch size is 64 and the learning rate is set to 3. The embedding size is 256, and word vectors are initialized with GloVe (Pennington et al., 2014). We select the final model by early stopping when the performance on the validation set has not improved for 5 epochs.\nCOCO Detection For experiments on COCO detection, we adopt the configuration of “RetinaNet-R-50-FPN-1x” from the GitHub repository Detectron2 as our default setting. In this setting, the one-stage RetinaNet detector with a ResNet-50 backbone is trained for 90k updates with a batch size of 8 images.\nFor the image classification tasks, the default β is set to 0.9999 for all datasets. For the deferred version, we defer the adoption of the Eureka Loss until after training for 160 epochs and 180 epochs on CIFAR-100 and iNaturalist 2018, respectively. As for the dialogue generation task ConvAI2, β is set to 0.999, and we start the encouragement after regularly training the model for 5 epochs.\nWe tune β ∈ {0.9, 0.99, 0.999, 0.9999} and γ ∈ {0.5, 1, 2} for the Class-Balanced Loss (CB) and the Focal Loss (FL), respectively, in multi-class classification, and report the best results of these baselines. Following previous work (Cui et al., 2019), α is set to 1.0 for the Focal Loss (FL), and the Class-Balanced Focal Loss (FL+CB) in multi-class classification tasks can be viewed as the original Focal Loss with a class-level weight α in binary classification tasks.\nThe training costs are summarized in Table 12." } ]
2020
HIGH-LIKELIHOOD AREA MATTERS — REWARDING CORRECT, RARE CLASS PREDICTIONS UNDER IMBAL-
SP:d872c4d4c7d2495156ce9a1c30dd2696ce1173df
[ "This paper proposes FLAG (Free Large-scale Adversarial Augmentation on Graphs), an adversarial data augmentation technique that can be applied to different GNN models in order to improve their generalization. The proposed technique consists on adding adversarial perturbations to the nodes’ features solving the standard min-max problem for adversarial training. In this setup, a noise vector is added to the input features which tries to maximize the loss by performing gradient ascent, while the classifier is trained to minimize the loss despite the added adversarial noise.", "This paper investigates adversarial feature augmentation for improving the generalizability of graph neural networks. The authors adopt an existing augmentation algorithm and apply on the nodes of each training graph, and use the perturbed graphs for training. The focus of the paper is extensive experimentation in various tasks and settings to illustrate the effectiveness of adversarial augmentation in graph-based tasks. The experiments provide new, non-trivial insights, such as the effect of the number of network layers on the effectiveness of augmentation. The paper is well-written and easy to read." ]
Data augmentation helps neural networks generalize better, but it remains an open question how to effectively augment graph data to enhance the performance of GNNs (Graph Neural Networks). While most existing graph regularizers focus on augmenting graph topological structures by adding/removing edges, we offer a novel direction to augment in the input node feature space for better performance. We propose a simple but effective solution, FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training, and boosts performance at test time. Empirically, FLAG can be easily implemented with a dozen lines of code and is flexible enough to function with any GNN backbone, on a wide variety of large-scale datasets, and in both transductive and inductive settings. Without modifying a model’s architecture or training setup, FLAG yields a consistent and salient performance boost across both node and graph classification tasks. Using FLAG, we reach state-of-the-art performance on the large-scale ogbg-molpcba, ogbg-ppa, and ogbg-code datasets.
[]
[ { "authors": [ "Yogesh Balaji", "Tom Goldstein", "Judy Hoffman" ], "title": "Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets", "venue": "arXiv preprint arXiv:1910.08051,", "year": 2019 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Adversarial attacks on node embeddings via graph poisoning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "arXiv preprint arXiv:1801.10247,", "year": 2018 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Hanjun Dai", "Hui Li", "Tian Tian", "Xin Huang", "Lin Wang", "Jun Zhu", "Le Song" ], "title": "Adversarial attack on graph structured data", "venue": "arXiv preprint arXiv:1806.02371,", "year": 2018 }, { "authors": [ "Zhijie Deng", "Yinpeng Dong", "Jun Zhu" ], "title": "Batch virtual adversarial training for graph convolutional networks", "venue": "arXiv preprint arXiv:1902.09192,", "year": 2019 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks", "venue": "arXiv preprint arXiv:2003.00982,", "year": 2020 }, { "authors": [ "Federico Errica", "Marco Podda", "Davide Bacciu", "Alessio Micheli" ], "title": "A fair comparison of graph neural networks for graph classification", "venue": null, "year": 1912 }, { "authors": [ "Fuli Feng", "Xiangnan He", "Jie Tang", "Tat-Seng Chua" ], "title": "Graph adversarial training: Dynamically regularizing based on graph structure", "venue": 
"IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 }, { "authors": [ "Zhe Gan", "Yen-Chun Chen", "Linjie Li", "Chen Zhu", "Yu Cheng", "Jingjing Liu" ], "title": "Large-scale adversarial training for vision-and-language representation learning", "venue": "arXiv preprint arXiv:2006.06195,", "year": 2020 }, { "authors": [ "Victor Garcia", "Joan Bruna" ], "title": "Few-shot learning with graph neural networks", "venue": "arXiv preprint arXiv:1711.04043,", "year": 2017 }, { "authors": [ "Lise Getoor" ], "title": "Link-based classification. In Advanced methods for knowledge discovery from complex data, pp. 189–207", "venue": null, "year": 2005 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", "Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs", "venue": null, "year": 2005 }, { "authors": [ "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Tuo Zhao" ], "title": "Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization", "venue": null, "year": 1911 }, { "authors": [ "Hongwei Jin", "Xinhua Zhang" ], "title": "Latent adversarial training of graph convolution networks", "venue": "In ICML Workshop on Learning and Reasoning with 
Graph-Structured Representations,", "year": 2019 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Chang Li", "Dan Goldwasser" ], "title": "Encoding social information with graph convolutional networks forpolitical perspective detection in news media", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Guohao Li", "Chenxin Xiong", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepergcn: All you need to train deeper gcns", "venue": "arXiv preprint arXiv:2006.07739,", "year": 2020 }, { "authors": [ "Junying Li", "Deng Cai", "Xiaofei He" ], "title": "Learning graph-level representation for drug discovery", "venue": "arXiv preprint arXiv:1709.03741,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Jiezhong Qiu", "Jian Tang", "Hao Ma", "Yuxiao Dong", "Kuansan Wang", "Jie Tang" ], "title": "Deepinf: Social influence prediction with deep learning", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Yu Rong", 
"Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Mohammad Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Yantao Shen", "Hongsheng Li", "Shuai Yi", "Dapeng Chen", "Xiaogang Wang" ], "title": "Person re-identification with deep similarity-guided graph neural network", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "arXiv preprint arXiv:1805.12152,", "year": 2018 }, { "authors": [ "Riccardo Volpi", "Hongseok Namkoong", "Ozan Sener", "John C Duchi", "Vittorio Murino", "Silvio Savarese" ], "title": "Generalizing to unseen domains via adversarial data augmentation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Minjie Wang", "Da Zheng", "Zihao Ye", "Quan Gan", "Mufei Li", "Xiang Song", "Jinjing Zhou", "Chao Ma", "Lingfan Yu", "Yu Gai", "Tianjun Xiao", "Tong He", "George Karypis", "Jinyang Li", "Zheng Zhang" ], "title": "Deep graph library: A graph-centric, highly-performant package for graph neural networks", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Jason Wei", "Kai Zou" ], "title": "Eda: Easy data augmentation techniques 
for boosting performance on text classification tasks", "venue": "arXiv preprint arXiv:1901.11196,", "year": 2019 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "arXiv preprint arXiv:2001.03994,", "year": 2020 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Jiang Wang", "Alan L Yuille", "Quoc V Le" ], "title": "Adversarial examples improve image recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "Graphsaint: Graph sampling based inductive learning method", "venue": null, "year": 1907 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": null, "year": 1905 }, { "authors": [ "Xiang Zhang", "Marinka Zitnik" ], "title": "Gnnguard: Defending graph neural networks against adversarial attacks", "venue": "arXiv preprint arXiv:2006.08149,", "year": 2020 }, { "authors": [ "Long Zhao", "Xi Peng", "Yu Tian", "Mubbasir Kapadia", "Dimitris N Metaxas" ], "title": "Semantic graph convolutional networks for 3d human pose regression", "venue": "In Proceedings of the IEEE Conference on Computer Vision 
and Pattern Recognition,", "year": 2019 }, { "authors": [ "Chen Zhu", "Yu Cheng", "Zhe Gan", "Siqi Sun", "Thomas Goldstein", "Jingjing Liu" ], "title": "Freelb: Enhanced adversarial training for language understanding", "venue": null, "year": 1909 }, { "authors": [ "Daniel Zügner", "Amir Akbarnejad", "Stephan Günnemann" ], "title": "Adversarial attacks on neural networks for graph data", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph Neural Networks (GNNs) have emerged as powerful architectures for learning and analyzing graph representations. The Graph Convolutional Network (GCN) (Kipf & Welling, 2016) and its variants have been applied to a wide range of tasks, including visual recognition (Zhao et al., 2019; Shen et al., 2018), meta-learning (Garcia & Bruna, 2017), social analysis (Qiu et al., 2018; Li & Goldwasser, 2019), and recommender systems (Ying et al., 2018). However, the training of GNNs on large-scale datasets usually suffers from overfitting, and realistic graph datasets often involve a high volume of out-of-distribution test nodes (Hu et al., 2020), posing significant challenges for prediction problems.\nOne promising solution to combat overfitting in deep neural networks is data augmentation (Krizhevsky et al., 2012), which is commonplace in computer vision tasks. Data augmentations apply label-preserving transformations to images, such as translations and reflections. As a result, data augmentation effectively enlarges the training set while incurring negligible computational overhead. However, it remains an open problem how to effectively generalize the notion of data augmentation to GNNs. Transformations on images rely heavily on image structures, and it is challenging to design low-cost transformations that preserve semantic meaning for non-visual tasks like natural language processing (Wei & Zou, 2019) and graph learning. Generally speaking, graph data for machine learning comes with graph structure (or edge features) and node features. In the limited cases where data augmentation can be done on graphs, it generally focuses exclusively on the graph structure by adding/removing edges (Rong et al., 2019). 
To date, there is no study on how to manipulate graphs in node feature space for enhanced performance.\nIn the meantime, adversarial data augmentation, which happens in the input feature space, is known to boost neural network robustness and promote resistance to adversarially chosen inputs (Goodfellow et al., 2014; Madry et al., 2017). Despite the wide belief that adversarial training harms standard generalization and leads to worse accuracy (Tsipras et al., 2018; Balaji et al., 2019), recently a growing amount of attention has been paid to using adversarial perturbations to augment datasets and ultimately alleviate overfitting. For example, Volpi et al. (2018) showed adversarial data augmentation is a data-dependent regularization that could help generalize to out-of-distribution samples, and\nits effectiveness has been verified in domains including computer vision (Xie et al., 2020), language understanding (Zhu et al., 2019; Jiang et al., 2019), and visual question answering (Gan et al., 2020). Despite the rich literature about adversarial training of GNNs for security purposes (Zügner et al., 2018; Dai et al., 2018; Bojchevski & Günnemann, 2019; Zhang & Zitnik, 2020), it remains unclear how to effectively and efficiently improve GNN’s clean accuracy using adversarial augmentation.\nPresent work. We propose FLAG, Free Large-scale Adversarial Augmentation on Graphs, to tackle the overfitting problem. While existing literature focuses on modifying graph structures to augment datasets, FLAG works purely in the node feature space by adding gradient-based adversarial perturbations to the input node features with graph structures unchanged. FLAG leverages “free” methods (Shafahi et al., 2019) to conduct efficient adversarial training so that it is highly scalable on large-scale datasets. 
We verify the effectiveness of FLAG on the Open Graph Benchmark (OGB) (Hu et al., 2020), which is a collection of large-scale, realistic, and diverse graph datasets for both node and graph property prediction tasks. We conduct extensive experiments across OGB datasets by applying FLAG to prestigious GNN models, which are GCN, GraphSAGE, GAT, and GIN (Kipf & Welling, 2016; Hamilton et al., 2017; Veličković et al., 2017; Xu et al., 2019) and show that FLAG brings consistent and significant improvements. For example, FLAG lifts the test accuracy of GAT on ogbn-products by an absolute value of 2.31%. DeeperGCN (Li et al., 2020) is another strong baseline that achieves top performance on several OGB benchmarks. FLAG enables DeeperGCN to generalize further and reach new state-of-the-art performance on ogbg-molpcba and ogbg-ppa. FLAG is simple (adding just a dozen lines of code), general (can be directly applied to any GNN model), versatile (works in both transductive and inductive settings), and efficient (able to bring salient improvement at tractable or even no extra cost). Our main contributions are summarized as follows:\n• We propose adversarial perturbations as a data augmentation in the input node feature space to efficiently boost GNN performance. The resulting FLAG framework is a scalable and flexible augmentation scheme for GNN, which is easy to implement and applicable to any GNN architecture for both node and graph classification tasks.\n• We advance the state-of-the-art on a number of large-scale OGB datasets, often by large margins.\n• We provide a detailed analysis and deep insights on the effects adversarial augmentation has on GNNs." }, { "heading": "2 PRELIMINARIES", "text": "Graph Neural Networks (GNNs). We denote a graph as G(V, E) with initial node features xv for v ∈ V and edge features euv for (u, v) ∈ E . GNNs are built on graph structures to learn representation vectors hv for every node v ∈ V and a vector hG for the entire graph G. 
The k-th iteration of message passing, or the k-th layer of GNN forward computation, is:

$$h_v^{(k)} = \text{COMBINE}^{(k)}\Big(h_v^{(k-1)}, \text{AGGREGATE}^{(k)}\big(\big\{(h_v^{(k-1)}, h_u^{(k-1)}, e_{uv}) : u \in \mathcal{N}(v)\big\}\big)\Big), \quad (1)$$

where $h_v^{(k)}$ is the embedding of node $v$ at the $k$-th layer, $e_{uv}$ is the feature vector of the edge between nodes $u$ and $v$, $\mathcal{N}(v)$ is node $v$'s neighbor set, and $h_v^{(0)} = x_v$. COMBINE(·) and AGGREGATE(·) are functions parameterized by neural networks. To simplify, we view the holistic message passing pipeline as an end-to-end function $f_\theta(\cdot)$ built on graph $\mathcal{G}$:

$$H^{(K)} = f_\theta(X; \mathcal{G}), \quad (2)$$

where $X$ is the input node feature matrix. After $K$ rounds of message passing we get the final-layer node matrix $H^{(K)}$. To obtain the representation of the entire graph $h_{\mathcal{G}}$, the permutation-invariant READOUT(·) function pools node features from the final iteration $K$ as:

$$h_{\mathcal{G}} = \text{READOUT}\big(\big\{h_v^{(K)} \mid v \in \mathcal{V}\big\}\big). \quad (3)$$

Additionally, from the spectral convolution point of view, the $k$-th layer of GCN is:

$$I + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \;\rightarrow\; \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}, \qquad S = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}, \quad (4)$$

$$H^{(k+1)} = \sigma\big(S H^{(k)} \Theta^{(k)}\big), \quad (5)$$

where $H^{(k)}$ is the node feature matrix of the $k$-th layer with $H^{(0)} = X$, $\Theta^{(k)}$ is the trainable weight matrix of layer $k$, and $\sigma$ is the activation function. $D$ and $A$ denote the diagonal degree matrix and the adjacency matrix, respectively. Here, we view $S$ as a normalized adjacency matrix with self-loops added.

Adversarial training. Standard adversarial training seeks to solve the min-max problem

$$\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_\theta(x+\delta), y\big) \Big], \quad (6)$$

where $\mathcal{D}$ is the data distribution, $y$ is the label, $\|\cdot\|_p$ is some $\ell_p$-norm distance metric, $\epsilon$ is the perturbation budget, and $\mathcal{L}$ is the objective function. Madry et al. (2017) showed that this saddle-point optimization problem could be reliably tackled by Stochastic Gradient Descent (SGD) for the outer minimization and Projected Gradient Descent (PGD) for the inner maximization.
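To make the GCN layer of Eqs. (4)–(5) concrete, here is a minimal NumPy sketch of a single forward pass $H^{(k+1)} = \sigma(S H^{(k)} \Theta^{(k)})$ with the self-loop-normalized adjacency $S$. The toy 3-node graph, tanh activation, and random weights are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gcn_layer(H, A, Theta, activation=np.tanh):
    """One GCN layer H' = sigma(S H Theta), with S the renormalized
    adjacency matrix of Eq. (4): self-loops added, then symmetric
    degree normalization."""
    A_tilde = A + np.eye(A.shape[0])         # A~ = A + I (add self-loops)
    d = A_tilde.sum(axis=1)                  # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D~^{-1/2}
    S = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # S = D~^{-1/2} A~ D~^{-1/2}
    return activation(S @ H @ Theta)         # Eq. (5)

# tiny 3-node path graph, 2-dim input features, 2 output channels
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.randn(3, 2)
Theta = np.random.randn(2, 2)
H1 = gcn_layer(X, A, Theta)
```

Stacking this layer K times (with fresh weight matrices) gives the end-to-end map $f_\theta(X;\mathcal{G})$ of Eq. (2).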
In practice, the typical approximation of the inner maximization under an $\ell_\infty$-norm constraint is

$$\delta_{t+1} = \Pi_{\|\delta\|_\infty \le \epsilon}\big(\delta_t + \alpha \cdot \mathrm{sign}\big(\nabla_\delta \mathcal{L}(f_\theta(x+\delta_t), y)\big)\big), \quad (7)$$

where the perturbation $\delta$ is updated iteratively, and $\Pi_{\|\delta\|_\infty \le \epsilon}$ performs projection onto the $\epsilon$-ball in the $\ell_\infty$-norm. For maximum robustness, this iterative updating procedure usually loops M times, which makes PGD computationally expensive. While there are M forward and backward steps within the process, θ gets updated just once, using the final δM." }, { "heading": "3 PROPOSED METHOD: FLAG", "text": "Adversarial training is a form of data augmentation. By hunting for and stamping out small perturbations that cause the classifier to fail, one may hope that adversarial training should be beneficial to standard accuracy (Goodfellow et al., 2014; Tsipras et al., 2018; Miyato et al., 2018). With an increasing amount of attention paid to leveraging adversarial training for better clean performance in varied domains (Xie et al., 2020; Zhu et al., 2019; Gan et al., 2020), we conduct the first study on how to effectively generalize GNNs using adversarial data augmentation. Here we introduce FLAG, Free Large-scale Adversarial Augmentation on Graphs, to best exploit the power of adversarial augmentation. Note that our method differs from other augmentations for graphs in that it operates in the input node feature space.

Augmentation for “free”. We leverage the “free” adversarial training method (Shafahi et al., 2019) to craft adversarial data augmentations. PGD is a strong but inefficient way to solve the inner maximization of (6). While computing the gradient for the perturbation δ, free training simultaneously computes the gradient of the model parameters θ; this “free” parameter gradient is then used for the parameter descent step. The authors proposed to train on the same minibatch M times in a row to simulate the inner maximization in (6), while compensating by performing M times fewer epochs of training.
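The sign-ascent-plus-projection loop of Eq. (7) can be sketched as follows. The quadratic toy loss and the `loss_grad` callback are stand-ins for a real model's input gradient (which an autograd framework would supply), so this is only an illustration of the update rule, not an attack on an actual GNN:

```python
import numpy as np

def pgd_perturbation(x, y, loss_grad, eps, alpha, M):
    """l-infinity PGD inner maximization (Eq. (7)): M sign-gradient
    ascent steps on delta, each followed by projection onto the
    eps-ball.  loss_grad(x, y) returns dL/dx at the given input."""
    delta = np.zeros_like(x)
    for _ in range(M):
        g = loss_grad(x + delta, y)
        delta = delta + alpha * np.sign(g)   # gradient-sign ascent step
        delta = np.clip(delta, -eps, eps)    # projection Pi_{||.||_inf <= eps}
    return delta

# toy loss L(x) = 0.5 * ||x - y||^2, so dL/dx = x - y;
# maximizing it pushes x + delta as far from y as the eps-ball allows
loss_grad = lambda x, y: x - y
x = np.zeros(4)
y = np.ones(4)
delta = pgd_perturbation(x, y, loss_grad, eps=0.1, alpha=0.05, M=8)
# every coordinate saturates at -eps = -0.1 (moving away from y)
```

Each of the M iterations needs a full forward and backward pass, which is exactly the cost that free training amortizes.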
The resulting algorithm yields accuracy and robustness competitive with standard adversarial training, but with the same runtime as clean training.

Gradient accumulation. When doing “free” adversarial training, the inner/adversarial loop is usually run M times, each time computing both the gradient for δt and for θt−1. Rather than updating the model parameters in each loop, Zhang et al. (2019) proposed to accumulate the gradients for θt−1 during the inner loop and apply them all at once during the outer/parameter update. The same idea was used by Zhu et al. (2019), who proposed FreeLB to tackle this optimization issue on language understanding tasks. FreeLB runs multiple PGD steps to craft adversaries while accumulating the parameter gradients $\nabla_\theta \mathcal{L}$. The gradient accumulation behavior can be approximated as optimizing the objective below:

$$\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \frac{1}{M} \sum_{t=0}^{M-1} \max_{\delta_t \in \mathcal{I}_t} \mathcal{L}\big(f_\theta(x+\delta_t), y\big) \Big], \quad (8)$$

where $\mathcal{I}_t = \mathcal{B}_{x+\delta_0}(\alpha t) \cap \mathcal{B}_x(\epsilon)$. The gradient accumulation algorithm largely empowers FLAG to further improve GNNs through efficient gradient usage during optimization.

Algorithm 1 FLAG: Free Large-scale Adversarial Augmentation on Graphs
Require: Graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$; input feature matrix $X$; learning rate $\tau$; ascent steps $M$; ascent step size $\alpha$; training epochs $N$; forward function $f_\theta(\cdot)$ on the graph as denoted in (2); objective function $\mathcal{L}(\cdot)$. We omit the READOUT(·) function in (3) for the inductive scenario here.
1: Initialize $\theta$
2: for epoch $= 1 \ldots N$ do
3:   $\delta_0 \leftarrow U(-\alpha, \alpha)$ ▷ initialize from a uniform distribution
4:   $g_0 \leftarrow 0$
5:   for $t = 1 \ldots M$ do
6:     $g_t \leftarrow g_{t-1} + \frac{1}{M} \cdot \nabla_\theta \mathcal{L}\big(f_\theta(X + \delta_{t-1}; \mathcal{G}), y\big)$ ▷ $\theta$ gradient accumulation
7:     $g_\delta \leftarrow \nabla_\delta \mathcal{L}\big(f_\theta(X + \delta_{t-1}; \mathcal{G}), y\big)$
8:     $\delta_t \leftarrow \delta_{t-1} + \alpha \cdot g_\delta / \|g_\delta\|_F$ ▷ perturbation $\delta$ gradient ascent
9:   end for
10:  $\theta \leftarrow \theta - \tau \cdot g_M$ ▷ model parameter $\theta$ gradient descent
11: end for

Unbounded attack. Usually on images, the inner maximization is a constrained optimization problem.
The largest perturbation one can add is bounded by the hyperparameter $\epsilon$, typically 8/255 under the $\ell_\infty$-norm. This encourages the visual imperceptibility of the perturbations, thus making defenses realistic and practical. However, graph node features or language word embeddings do not have such straightforward semantic meanings, which makes the selection of $\epsilon$ highly heuristic. In light of the positive effect of large perturbations on generalization (Volpi et al., 2018), and also to simplify the hyperparameter search, FLAG drops the projection step when performing the inner maximization. Note that, although the perturbation is not bounded by an explicit $\epsilon$, it is still implicitly bounded by the furthest distance that $\delta$ can reach, i.e., the step size $\alpha$ times the number of ascent steps $M$.

Biased perturbation for node classification. Conventional conv nets treat each test sample independently during inference, whereas this is not the case in transductive graph learning scenarios. When classifying one target node, messages from the whole k-hop neighborhood are aggregated and combined into its embedding. It is natural to believe that a more distant neighbor should have a lower impact, i.e., higher smoothness, on the final decision of the target node, which is also intuitively reflected by the message passing view of GNNs in (1). To promote more invariance for further-away neighbors when doing node classification, we perturb unlabeled nodes with larger step sizes $\alpha_u$ than the step size $\alpha_l$ used for target nodes. We show the effectiveness of this biased perturbation in the ablation study section.

The overall augmentation pipeline is presented in Algorithm 1. Note that when doing transductive node classification, we use distinct step sizes $\alpha_l$ and $\alpha_u$ to craft adversarial augmentations for target and unlabeled nodes, respectively. In the following sections, we verify FLAG’s effectiveness through extensive experiments.
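One epoch of Algorithm 1 can be compressed into a short NumPy sketch. To keep it self-contained we use a stand-in linear "GNN" $f_\theta(X;\mathcal{G}) = S X \theta$ with a squared loss and write out analytically the gradients that an autograd framework would normally compute; the biased step sizes $\alpha_l$/$\alpha_u$ are omitted for brevity, so this is an illustration of the loop structure rather than the released implementation:

```python
import numpy as np

def flag_epoch(X, y, S, theta, M=3, alpha=1e-3, tau=1e-2):
    """One epoch of Algorithm 1 for a stand-in linear 'GNN'
    f_theta(X; G) = S X theta with loss 0.5 * ||S (X+delta) theta - y||^2."""
    rng = np.random.default_rng(0)
    delta = rng.uniform(-alpha, alpha, size=X.shape)  # line 3: uniform init
    g = np.zeros_like(theta)                          # line 4: g_0 <- 0
    for _ in range(M):                                # lines 5-9
        resid = S @ (X + delta) @ theta - y
        # line 6: accumulate (1/M) * grad_theta of the loss
        g += ((S @ (X + delta)).T @ resid) / M
        # line 7: grad of the loss w.r.t. the perturbation delta
        g_delta = np.outer(S.T @ resid, theta)
        # line 8: Frobenius-normalized gradient ascent, no projection
        delta = delta + alpha * g_delta / (np.linalg.norm(g_delta) + 1e-12)
    # line 10: single descent step with the accumulated gradient
    return theta - tau * g

rng = np.random.default_rng(1)
S = np.eye(4)                       # trivial graph (no mixing), for illustration
X = rng.normal(size=(4, 2))
y = rng.normal(size=4)
theta0 = np.array([0.5, -0.3])
theta1 = flag_epoch(X, y, S, theta0)
```

With a real model, lines 6–7 come from a single shared backward pass per ascent step, which is what makes the augmentation nearly "free".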
In addition, we provide detailed discussions for a deep understanding of the effects of adversarial augmentation." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we demonstrate FLAG’s effectiveness through extensive experiments on the Open Graph Benchmark (OGB), which consists of a wide range of challenging large-scale datasets. Shchur et al. (2018); Errica et al. (2019); Dwivedi et al. (2020) showed that traditional graph datasets suffered from problems such as unrealistic and arbitrary data splits, highly limited data sizes, nonrigorous evaluation metrics, and common neglect of cross-validation, etc. In order to empirically study FLAG’s effects in a fair and reliable manner, we conduct experiments on the newly released OGB (Hu et al., 2020) datasets, which have tackled those major issues and brought more realistic challenges to the graph research community. We refer readers to Hu et al. (2020) for detailed information on the OGB datasets.\nUnless otherwise stated, all of the baseline test statistics come from the official OGB leaderboard website, and we conduct all of our experiments using publicly released implementations without touching the original model architecture or training setup. We report mean and std values from\nten runs with different random seeds. Following common practice on this benchmark, we report the test performance associated with the best validation result. We choose the prestigious GCN, GraphSAGE, GAT, and GIN as our baseline models. In addition, we apply FLAG to the recent DeeperGCN model to demonstrate effectiveness. Our implementation always uses M = 3 ascent steps for simplicity. Following Goodfellow et al. (2014); Madry et al. (2017), we use sign(·) for gradient normalization. We leave exhaustive hyperparameter and normalization search for future research. All training hyperparameters and evaluation results can be found in the Appendix.\nNode Property Prediction. We summarize the results of node classification in Table 1. 
On ogbn-products, GraphSAGE, GAT, and DeeperGCN all receive promising results with FLAG. We adopt neighbor sampling (Hamilton et al., 2017) as the mini-batch algorithm for GraphSAGE and GAT to make the experiments scalable. For DeeperGCN, we follow the original setup by Li et al. (2020) to randomly split the graph into clusters. Notably, FLAG yields a 2.31% test accuracy lift for GAT, making GAT competitive on the ogbn-products dataset. Because the graph size of ogbn-proteins is small, all models are trained in a full-batch manner. From Table 1 we can see that FLAG further enhances the performance of DeeperGCN but harms that of GCN and GraphSAGE. Considering the dataset’s specialty of not having input node features, we provide detailed discussions on the effect of different node feature constructions later. We also do full-batch training on ogbn-arxiv, where FLAG enables GAT and DeeperGCN to reach 73.71% and 72.14% accuracy. Note that the GAT baseline is from the DGL (Wang et al., 2019) implementation, which differs from vanilla GAT with batch norm and label propagation incorporated. We reveal batch norm’s influence in the discussion. ogbn-mag is a heterogeneous network where only “paper” nodes come with node features. We use the neighbor sampling mini-batch algorithm to train R-GCN and report its results in the right part of Table 2. Surprisingly, FLAG can also directly bring nontrivial accuracy improvement without special designs for heterogeneous graphs, which demonstrates its versatility.\nGraph Property Prediction. Table 3 summarizes the test scores of GCN, GIN, and DeeperGCN on all four OGB graph property prediction datasets. “Virtual” means the model is augmented with virtual nodes (Li et al., 2017; Gilmer et al., 2017; Hu et al., 2020). As adversarial perturbations are crafted by gradient ascent, it would be unnatural to perturb discrete input node features. Following Jin & Zhang (2019); Zhu et al. 
(2019), we firstly project discrete node features into the continuous space and then adversarially augment the hidden embeddings. On ogbg-molhiv, FLAG yields notable improvements, but when GCN has already been hurt by virtual nodes, FLAG appears to\nexaggerate the harm. Note that the test results on ogbg-molhiv all have relatively high variance compared with others, where randomness in the test result is more severe. On ogbg-molpcba, GIN-Virtual with FLAG receives an absolute value 1.31% test AP value increase, and DeeperGCN is further enhanced to retain its SOTA performance. On ogbg-ppa, FLAG further generalizes DeeperGCN and registers a new state-of-the-art test accuracy of 77.52%. On ogbg-code, FLAG boosts GCN-Virtual to a state-of-the-art test F1 score of 33.16. Besides node classification, FLAG’s strong effects on graph classification prove its high versatility. In most cases, FLAG works well with virtual node augmentation to further enhance graph learning." }, { "heading": "5 ABLATION STUDIES AND DISCUSSIONS", "text": "Effects of biased perturbation. From the left part of Table 2, we see that there is a salient increase of accuracy when using a larger perturbation on unlabeled nodes, which verifies the effectiveness of biased perturbations.\nComparison with other adversarial training methods. The right part of Table 4 shows GAT’s performance with different adversarial augmentations. For PGD and Free, we compute 8 ascent steps for the inner-maximization, while for FreeLB and FLAG we compute 3 steps. FLAG outperforms all other methods by a large margin.\nCompatibility with mini-batch methods. Graph mini-batch algorithms are critical to training GNNs on large-scale datasets. We test how different algorithms will work with adversarial data augmentation with GraphSAGE as the backbone. 
From the left part of Table 4, we see that neighbor sampling (Hamilton et al., 2017) and GraphSAINT (Zeng et al., 2019) can all work with FLAG to further boost performance, while Cluster (Chiang et al., 2019) suffers an accuracy drop.\nCompatibility with batch norm. The left part of Table 5 shows that batch norm works to generalize GAT, and FLAG works to push the improvement further. In the computer vision domain, Xie et al.\n(2020) proposed a new batch norm method that makes adversarial training further generalize largescale CNN models. As there is growing attention on using batch norm on GNNs, it will also be interesting to see how to synergize adversarial augmentation with batch norm in future architectures.\nCompatibility with dropout. Dropout is widely used in GNNs. The right part of Table 5 shows that, when trained without dropout, GAT accuracy drops steeply by a large margin. What’s more, FLAG can further generalize GNN models together with dropout, similar to the phenomenon of image augmentations.\nTowards going “free”. FLAG introduces tractable extra training overhead. We empirically show that, when we decrease the total training epochs to make it as fast as the standard GNN training pipeline, FLAG still brings significant performance gains. The left part of Table 2 shows that FLAG with fewer epochs still generalizes the baseline. Empirically, on a single Nvidia RTX 2080Ti, 100- epoch vanilla GAT takes 88 mins, while FLAG in Table 2 takes 91 mins. We note that heuristics like early stopping and cyclic learning rates can further accelerate the adversarial training process (Wong et al., 2020), so there are abundant opportunities for further research on adversarial augmentation at lower or even no cost.\nTowards going deep. Over-smoothing stops GNNs from going deep. FLAG shows its ability to boost both shallow and deep baselines, e.g. GCN and DeeperGCN. 
In the left part of Figure 1, we show FLAG’s effects on generalization when a GNN goes progressively deeper. The experiments are conducted on ogbn-arxiv with GraphSAGE as the backbone, where a consistent improvement is evident.\nWhat if there’s no node feature? One natural question can be raised: what if no input node features are provided? ogbn-proteins is a dataset without input node features. Hu et al. (2020) proposed to average incoming edge features to obtain initial node features, while Li et al. (2020) used summation and achieved competitive results. Note that the GCN and GraphSAGE baselines in Table 1 use the “mean” node features as input and suffer an accuracy drop with FLAG; DeeperGCN leverages the “sum” and gets further improved. Interestingly, when DeeperGCN is trained with “mean” node features, it becomes highly invariant, so that even large-magnitude perturbations do not change its result. The diverse behavior of adversarial augmentation underscores the importance of how input node features are constructed." }, { "heading": "6 WHERE DOES THE BOOST COME FROM?", "text": "It is now widely believed that model robustness appears to be at odds with clean accuracy. Despite the proliferation of literature in using adversarial data augmentation to promote standard performance, it is still unsettled where the boost or detriment of adversarial training comes from.\nData distribution is the key. We conjecture that the diverse effects of adversarial training in different domains stem from differences in the input data distribution rather than model architectures. To ground our claim, we utilize FLAG to augment MLPs (an architecture where adversarial training has adverse effects in the image domain) on ogbn-arxiv, and successfully boost generalization. FLAG directly improves the test accuracy from 55.50 ± 0.23% to 56.02 ± 0.19%. In general, adversarial training hurts the clean accuracy in image classification, but Tsipras et al.
(2018) showed that CNNs could benefit from adversarial augmentations on MNIST. This is consistent with our conjecture that model architecture has little to do with how adversarial augmentation affects performance. Like one-hot word embeddings for language models, input node features usually come from discrete spaces, e.g., bag-of-words binary features in ogbn-products. We believe that using discrete vs.\ncontinuous input features may lead to different adversarial augmentation behavior. We provide a simple example on the Cora (Getoor, 2005) dataset to illustrate. We choose the classic FGSM to craft adversarial augmentation and GCN as the backbone. By adding Gaussian noise with std δ, we simulate node features drawn from a continuous distribution. The result is summarized in the right part of Figure 1. When δ = 0, the discrete distribution of node features persists. In this regime, GCN with adversarial augmentation outperforms the clean model. With increased noise magnitude δ, the features are continuously distributed with large support and FGSM starts to harm the clean accuracy, which validates our conjecture.
(2019) proposed to reinforce local smoothness to make embeddings within communities similar. All three methods assigned pseudo-labels to test nodes during training time and utilized virtual adversarial training (Miyato et al., 2018) to make test node predictions similar to their pseudo-labels. This makes them workable for semi-supervised settings, but not for inductive tasks. Besides the original classification loss term, they all introduced KL loss into the final objective functions, which would at least double the GPU memory usage and make training less efficient and less scalable. In contrast, FLAG requires minimal extra space overhead and can directly work in the original training setup." }, { "heading": "8 CONCLUSION", "text": "We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), a simple, scalable, and general data augmentation method for better GNN generalization. Like widely-used image augmentations, FLAG can be easily incorporated into any GNN training pipeline. FLAG yields consistent improvement over a range of GNN baselines, and reaches state-of-the-art performance on the large-scale ogbg-molpcba, ogbg-ppa, and ogbg-code datasets. Besides extensive experiments, we also provide conceptual analysis to validate adversarial augmentation’s different behavior on varied data types. The effects of adversarial augmentation on generalization are still not entirely understood, and we think this is a fertile space for future exploration." 
}, { "heading": "A FLAG PYTORCH IMPLEMENTATION", "text": "1 #M as ascent steps, alpha as ascent step size 2 #X denotes input node features, y denotes labels 3 def flag(model, X, y, optimizer, criterion, M, alpha) : 4 model.train() 5 optimizer.zero_grad() 6\n7 pert = torch.FloatTensor(*X.shape).uniform_(-alpha, alpha) 8 pert.requires_grad_() 9 out = model(X+pert)\n10 loss = criterion(out, y)/M 11\n12 for _ in range(M-1): 13 loss.backward() 14 pert_data = pert.detach() + alpha*torch.sign(pert.grad.detach()) 15 pert.data = pert_data.data 16 pert.grad[:] = 0 17 out = model(X+pert) 18 loss = criterion(out, y)/M 19\n20 loss.backward() 21 optimizer.step()" }, { "heading": "B FULL STATISTICS", "text": "Here we summarize our main experiment results on both node and graph classification tasks. Hyperparameters for crafting adversarial augmentations are listed in the table. For other training setups of backbones, we refer readers to the public website of the OGB leaderboard." } ]
2020
null
SP:98450ef54b363e5e68b19b7bc1327490d69825fa
[ "The paper presents a simple addition to the Balanced Accuracy approach - which the authors refer to as ‘importance’. However, there is nothing in the formulation of this concept which requires that this is an importance and could in fact be any form of weighting. The paper evaluates the new metric - but only agains the Balanced Accuracy metric (which seems quite restrictive).", "This paper presents a weighted balanced accuracy to evaulate the performance of multi-class classification. Basically, the performance for a multi-class problem can be evaluated by decomposing the original multi-class problem into a number of binary ones based on one-against-rest manner, and then evaulating the performance scores for each of the binary ones using any well-known metric for binary classification, and then, aggregating the performance scores. The main aim of this paper is to present a weighting scheme when aggregating the scores.", "The authors advocate for class-stratified weighted macro-averaging as an appropriate scalar-valued evaluation for classification under class-imbalance. In the absence of domain expert specified weights, they also propose a weighting function that emphasizes rare classes (referred to as rarity), and a multi-importance criteria based on a normalized product of weightings. Furthermore, they point out that these weightings can be used for training via existing methods in commonly used tools (e.g., class-based instance weighting, class-based loss scaling). Finally, they show performance under different weighting for three log message classification tasks, a sentiment classification task, and a URL classification task (e.g., malware, NSFW, phishing), — demonstrating that different weighings lead to different orderings of evaluation results and that these weights can be effectively used in training.", "The paper presented a simple yet general-purpose class-sensitive evaluation framework for imbalanced data classification. 
Their framework is designed to improve the grading of multi-class classifiers in domains where class importance is not evenly distributed. They provided a modular and extensible formulation that can be easily customized to different importance criteria and metrics. Experiments with three real-world use cases show the value of a metric based on their framework, Weighted Balanced Accuracy (WBA), over existing metrics: not only in evaluating the classifiers’ test results more sensitively to importance criteria but also in training them accordingly.", "In this paper, the authors have proposed a novel evaluation framework for imbalanced data classification. Specifically, the proposed evaluation metric is designed to improve the grading of multi-class classifiers in domains where class importance is not evenly distributed. Generally speaking, the problem the authors pay attention to really exists and is important in imbalanced classification. Moreover, the writing in this paper is very good and is very easy to follow. " ]
Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task. Metrics such as Balanced Accuracy are commonly used to evaluate a classifier’s prediction performance under such scenarios. However, these metrics fall short when classes vary in importance. In this paper, we propose a simple and general-purpose evaluation framework for imbalanced data classification that is sensitive to arbitrary skews in class cardinalities and importances. Experiments with several state-of-the-art classifiers tested on real-world datasets from three different domains show the effectiveness of our framework – not only in evaluating and ranking classifiers, but also training them.
[]
[ { "authors": [ "Rukshan Batuwita", "Vasile Palade" ], "title": "Adjusted Geometric-mean: A Novel Performance Measure for Imbalanced Bioinformatics Datasets Learning", "venue": "Journal of Bioinformatics and Computational Biology,", "year": 2012 }, { "authors": [ "Jerzy Blaszczynski", "Jerzy Stefanowski" ], "title": "Neighbourhood Sampling in Bagging for Imbalanced", "venue": "Data. Neurocomputing,", "year": 2015 }, { "authors": [ "Paula Branco", "Luı́s Torgo", "Rita P Ribeiro" ], "title": "A Survey of Predictive Modeling on Imbalanced Domains", "venue": "ACM Computing Surveys (CSUR),", "year": 2016 }, { "authors": [ "Kay Henning Brodersen", "Cheng Soon Ong", "Klaas Enno Stephan", "Joachim M Buhmann" ], "title": "The Balanced Accuracy and Its Posterior Distribution", "venue": "In 20th International Conference on Pattern Recognition (ICPR),", "year": 2010 }, { "authors": [ "Henry Carrillo", "Kay H Brodersen", "José A Castellanos" ], "title": "Probabilistic Performance Evaluation for Multi-class Classification using the Posterior Balanced Accuracy", "venue": "In First Iberian Robotics Conference,", "year": 2014 }, { "authors": [ "Cristiano Leite Castro", "Antônio de Pádua Braga" ], "title": "Novel Cost-Sensitive Approach to Improve the Multilayer Perceptron Performance on Imbalanced Data", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2013 }, { "authors": [ "Gilles Cohen", "Melanie Hilario ad Hugo Sax", "Stephane Hugonnet", "Antoine Geissbuhler" ], "title": "Learning from Imbalanced Data in Surveillance of Nosocomial Infection", "venue": "Artificial Intelligence in Medicine,", "year": 2006 }, { "authors": [ "Min Du", "Feifei Li" ], "title": "Spell: Streaming Parsing of System Event Logs", "venue": "In IEEE International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Min Du", "Feifei Li" ], "title": "Spell: Online Streaming Parsing of Large Unstructured System Logs", "venue": "IEEE Transactions on Knowledge and 
Data Engineering (TKDE),", "year": 2018 }, { "authors": [ "Amir Efrati" ], "title": "Uber Finds Deadly Accident Likely Caused By Software Set to Ignore Objects On Road", "venue": "The Information,", "year": 2018 }, { "authors": [ "Andrew Estabrooks", "Nathalie Japkowicz" ], "title": "A Mixture-of-Experts Framework for Learning from Imbalanced Data Sets", "venue": "In International Conference on Advances in Intelligent Data Analysis (IDA), pp", "year": 2001 }, { "authors": [ "Andrew Estabrooks", "Taeho Jo", "Nathalie Japkowicz" ], "title": "A Multiple Resampling Method for Learning from Imbalanced Data Sets", "venue": "Computational Intelligence,", "year": 2004 }, { "authors": [ "Haibo He", "Edwardo A. Garcia" ], "title": "Learning from Imbalanced Data", "venue": "IEEE Transactions on Knowledge and Data Engineering (TKDE),", "year": 2009 }, { "authors": [ "Haibo He", "Yunqian Ma (eds" ], "title": "Imbalanced Learning: Foundations, Algorithms, and Applications", "venue": null, "year": 2013 }, { "authors": [ "Pinjia He", "Jieming Zhu", "Zibin Zheng", "Michael R Lyu" ], "title": "Drain: An Online Log Parsing Approach with Fixed Depth Tree", "venue": "In IEEE International Conference on Web Services (ICWS),", "year": 2017 }, { "authors": [ "Florian Helff", "Le Gruenwald", "Laurent d’Orazio" ], "title": "Weighted Sum Model for Multi-Objective Query Optimization for Mobile-Cloud Database Environments", "venue": "In EDBT/ICDT International Workshop on Multi-Engine Data AnaLytics (MEDAL),", "year": 2016 }, { "authors": [ "Alberto Fernandez Hilario", "Salvador Garcı́a Lopez", "Mikel Galar", "Ronaldo C. Prati", "Bartosz Krawczyk", "Francisco Herrera" ], "title": "Learning from Imbalanced Data", "venue": null, "year": 2018 }, { "authors": [ "Natalie Japkowicz" ], "title": "Assessment Metrics for Imbalanced Learning", "venue": null, "year": 2013 }, { "authors": [ "Justin M. Johnson", "Taghi M. 
Khoshgoftaar" ], "title": "Survey on Deep Learning with Class Imbalance", "venue": "Journal of Big Data,", "year": 2019 }, { "authors": [ "Mahesh V. Joshi", "Vipin Kumar", "Ramesh C. Agarwal" ], "title": "Evaluating Boosting Algorithms to Classify Rare Classes: Comparison and Improvements", "venue": "In IEEE International Conference on Data Mining (ICDM), pp", "year": 2001 }, { "authors": [ "Hung Le", "Quang Pham", "Doyen Sahoo", "Steven CH Hoi" ], "title": "Urlnet: Learning a url representation with deep learning for malicious url detection", "venue": "arXiv preprint arXiv:1802.03162,", "year": 2018 }, { "authors": [ "Sara Makki", "Zainab Assaghir", "Yehia Taher", "Rafiqul Haque", "Mohand-Saı̈d Hacid", "Hassan Zeineddine" ], "title": "An Experimental Study With Imbalanced Classification Approaches for Credit Card Fraud Detection", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Marcus A. Maloof" ], "title": "Learning When Data Sets are Imbalanced and When Costs are Unequal and Unknown", "venue": "In ICML Workshop on Learning from Imbalanced Data Sets,", "year": 2003 }, { "authors": [ "Salma Messaoudi", "Annibale Panichella", "Domenico Bianculli", "Lionel Briand", "Raimondas Sasnauskas" ], "title": "A Search-based Approach for Accurate Identification of Log Message Formats", "venue": "In Proceedings of the 26th Conference on Program Comprehension,", "year": 2018 }, { "authors": [ "Susan M. Mudambi", "David Schuff" ], "title": "Research Note: What Makes a Helpful Online Review? 
A Study of Customer Reviews on Amazon.com", "venue": "MIS Quarterly,", "year": 2010 }, { "authors": [ "Jianmo Ni", "Jiacheng Li", "Julian McAuley" ], "title": "Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects", "venue": "In Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP),", "year": 2019 }, { "authors": [ "Paul A Pavlou", "Angelika Dimoka" ], "title": "The Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation", "venue": "Information Systems Research,", "year": 2006 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "GloVe: Global Vectors for Word Representation", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Vishnu Subramanian" ], "title": "Deep Learning with PyTorch: A Practical Approach to Building Neural Network Models using PyTorch", "venue": "Packt Publishing,", "year": 2018 }, { "authors": [ "Yanmin Sun", "Andrew K.C. Wong", "Mohamed S. Kamel" ], "title": "Classification of Imbalanced Data: A Review", "venue": "International Journal of Pattern Recognition and Artificial Intelligence,", "year": 2009 }, { "authors": [ "Chris Tofallis" ], "title": "Add or Multiply? A Tutorial on Ranking and Choosing with Multiple Criteria", "venue": "INFORMS Transactions on Education,", "year": 2014 }, { "authors": [ "Evangelos Triantaphyllou" ], "title": "Multi-criteria Decision Making Methods: A Comparative Study", "venue": null, "year": 2000 }, { "authors": [ "Pelayo Vallina", "Victor Pochat", "Álvaro Feal", "Marius Paraschiv", "Julien Gamba", "Tim Burke", "Oliver Hohlfeld", "Juan Tapiador", "Narseo Vallina-Rodriguez" ], "title": "Mis-shapes, mistakes, misfits: An analysis of domain classification services", "venue": "pp. 
598–618,", "year": 2020 }, { "authors": [ "Cheng G. Weng", "Josiah Poon" ], "title": "A New Evaluation Measure for Imbalanced Datasets", "venue": "In Proceedings of the 7th Australasian Data Mining Conference", "year": 2008 }, { "authors": [ "Jieming Zhu", "Shilin He", "Jinyang Liu", "Pinjia He", "Qi Xie", "Zibin Zheng", "Michael R Lyu" ], "title": "Tools and Benchmarks for Automated Log Parsing", "venue": "In IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "For a broad range of machine learning (ML) tasks, predictive modeling in the presence of imbalanced datasets – those with severe distribution skews – has been a long-standing problem (He & Garcia, 2009; Sun et al., 2009; He & Ma, 2013; Branco et al., 2016; Hilario et al., 2018; Johnson & Khoshgoftaar, 2019). Imbalanced training datasets lead to models with prediction bias towards majority classes, which in turn results in misclassification of the underrepresented ones. Yet, those minority classes often are the ones that correspond to the most important events of interest (e.g., errors in system logs (Zhu et al., 2019), infected patients in medical diagnosis (Cohen et al., 2006), fraud in financial transactions (Makki et al., 2019)). While there is often an inverse correlation between the class cardinalities and their importance (i.e., rare classes are more important than others), the core problem here is the mismatch between the way these two distributions are skewed: the ith most common class is not necessarily the ith most important class (see Figure 1a for an illustration). In fact, rarity is one of many potential criteria that can determine the importance of a class, which is usually positively correlated with the costs or risks involved in its misprediction. Ignoring these criteria when dealing with imbalanced data classification may have detrimental consequences.\nConsider automatic classification of messages in system event logs as an example (Zhu et al., 2019). An event log is a temporal sequence of messages that have transpired for a given software system (e.g., operating systems, cyber-physical systems) over a certain time period. Event logs are particularly useful after a system has been deployed, as they can provide the DevOps teams with insights about errors outside of the testing environment, thereby enabling them to debug and improve the system quality. 
There is typically an inverse correlation between the stability/maturity of a system and the frequency of the errors it produces in its event logs. Furthermore, the message types that appear least frequently in an event log are usually the ones with the greatest importance. A concrete example of this was a rare anomaly in Uber’s self-driving car that led to the death of a pedestrian, since the system flagged it as a false positive in its logs (Efrati, 2018). If this event had not been misclassified and dismissed by the system, the pedestrian death in Arizona may have been avoided.\nA plethora of approaches have been proposed for building balanced classifiers (Sun et al., 2009; Branco et al., 2016). A fundamental issue that still remains an open challenge is the lack of a generally-accepted methodology for measuring classification performance. The traditional metrics, which are designed to evaluate average case performance (e.g., Accuracy) are not capable of correctly assessing the results in presence of arbitrary skew mismatches between class cardinalities and importances. On the other hand, metrics specifically proposed for imbalanced learning are either domain-specific, do not easily generalize beyond two classes, or can not support varying class importance (e.g., Balanced Accuracy) (Japkowicz, 2013).\nLet us illustrate the problem with the simple example in Figure 1b. The test dataset consists of 100 data items from 3 classes (A, B, C). The greatest majority of the items belong to class C (70), but\nclass B (20) has the greatest importance (0.7). In other words, Cardinality and Importance are both non-uniform and in favor of different classes (i.e., representing the top-right quadrant of Figure 1a). The confusion matrix on the right shows the results from a classifier run against this test dataset. Unsurprisingly, the classifier performed the best for the majority class C (60/70 correct predictions). 
When evaluated using the traditional Accuracy metric, neither Class Cardinality nor Class Importance is taken into account. If Balanced Accuracy is used instead, we observe the degrading impact of the Class Cardinality skew (0.38 < 0.65), but Class Importance is still not accounted for. This example demonstrates the need for a new evaluation approach that is sensitive to both Cardinality and Importance skew, as well as any arbitrary correlations between them. This is especially critical for ensuring a fair comparative assessment across multiple classifiers or problem instances.\nOur goal in this paper is to design an evaluation framework for imbalanced data classification, which can be reliably used to measure, compare, train, and tune classifier performance in a way that is sensitive to non-uniform class importance. We identify two key design principles for such a framework:\n• Simplicity: It should be intuitive and easy to use and interpret. • Generality: It should be general-purpose, i.e., (i) extensible to an arbitrary number of classes and\n(ii) customizable to any application domain.\nTo meet the first goal, we focus on scalar metrics such as Accuracy (as opposed to graphical metrics such as ROC curves), as they are simpler, more commonly used, and scale well with increasing numbers of classes and models. To meet the second goal, we target the more general n-ary classification problems (as opposed to binary), as well as providing the capability to flexibly adjust class weights to capture non-uniform importance criteria that may vary across application domains. Note that we primarily focus on Accuracy as our base scalar metric in this paper, as it is seen as the de facto metric for classification problems (Sci). However, our framework is general enough to be extended to other scalar metrics, such as Precision and Recall. 
Similarly, while we deeply examine three use cases (log parsing, sentiment analysis, URL classification), our framework in principle is generally applicable to any domain with imbalanced class and importance distributions.\nWe first provide a brief overview of related work in Section 2. Section 3 presents our new, classweighted evaluation framework. In Section 4, we show the practical utility of our framework by applying it over: (i) three log parsing systems (Drain (He et al., 2017), MoLFI (Messaoudi et al., 2018), Spell (Du & Li, 2016; 2018)) using four real-world benchmarks (Zhu et al., 2019); (ii) a variety of deep learning models developed for sentiment analysis on a customer reviews dataset from Amazon (Ni et al., 2019); and (iii) an industrial use case for URL classification with real classifiers and datasets from four cyber-security companies. Finally, we conclude in Section 5." }, { "heading": "2 RELATED WORK", "text": "Imbalanced Data Classification. Imbalanced data is prevalent in almost every domain (Cohen et al., 2006; Batuwita & Palade, 2012; Makki et al., 2019). The growing adoption of ML models in diverse application domains has led to a surge in imbalanced data classification research (He & Garcia, 2009; Sun et al., 2009; He & Ma, 2013; Branco et al., 2016; Hilario et al., 2018; Johnson & Khoshgoftaar, 2019). While the techniques widely vary, they fall under four basic categories: pre-processing training data to establish balance via sampling techniques (Estabrooks et al., 2004; Blaszczynski & Stefanowski, 2015), building custom learning techniques for imbalanced training\ndata (Joshi et al., 2001; Castro & de Pádua Braga, 2013), post-processing predictions from an imbalanced model (Maloof, 2003), and their hybrids (Estabrooks & Japkowicz, 2001). In this paper, we do not propose a new imbalanced learning technique, but a general-purpose performance evaluation framework that could be used in the training and/or testing of models for any technique. 
Section 4 demonstrates the practical utility of our framework for a variety of real ML use cases. Evaluation Metrics. Traditional metrics for evaluating prediction performance such as Accuracy, Sensitivity/Specificity (and their combination G-mean), Precision/Recall (and their combination F-Score) were not designed with imbalanced data issues in mind (Japkowicz, 2013). In fact, most of these were originally intended for binary classification problems. To extend them to more than 2 classes, macro-averaging (i.e., arithmetic mean over individual class measurements) is used. Macro-averaging treats classes equally (Branco et al., 2016). Balanced Accuracy is a popular averaging-based approach. There are also probabilistic evaluation approaches that extend Balanced Accuracy with Bayesian inference techniques for both binary and multi-class problems (Brodersen et al., 2010; Carrillo et al., 2014). Close to our work, Cohen et al. (2006) introduced the notion of class weights, yet in the specific context of Sensitivity/Specificity for binary classification in the medical domain. Similarly, Batuwita & Palade (2012) proposed extensions to G-mean for the bio-informatics domain. In addition to these scalar (a.k.a., threshold) metrics, graphical (a.k.a., ranking) evaluation methods such as Receiver Operating Characteristic (ROC) curves or Precision-Recall (PR) curves (and the Area Under the Curve (AUC) for such curves) as well as their extensions to imbalanced data / multi-class problems were also investigated (Weng & Poon, 2008; Japkowicz, 2013). While these methods provide more detailed insights into the operational space of classifiers as a whole, they do not scale easily to problems with a large number of classes (Branco et al., 2016)." }, { "heading": "3 CLASS-WEIGHTED EVALUATION FRAMEWORK", "text": "In this section, we present our new evaluation framework for multi-class learning problems in the presence of arbitrary skews among class distributions and/or importances.
Our framework builds on and extends commonly used scalar / threshold metrics such as Accuracy. These metrics were originally designed for binary classification problems, where there is typically more emphasis on one class (the positive class, e.g., anomalies). To adopt them to multi-class problems where there is no such single-class emphasis, each class’ metric can be computed separately and then an overall aggregation (i.e., arithmetic mean) can be performed. For example, Accuracy has been extended to BalancedAccuracy by following this approach. In our framework, we follow a similar aggregation strategy, however, we do it in a more generalized way that allows custom class weights to capture class importance. Furthermore, these class weights can be based on any importance criteria such as rarity, cost, risk, expected benefits, and possibly a hybrid of multiple such criteria. Therefore, it is critical to provide a flexible formulation that allows users or domain experts to adjust the weights as needed by their problem instance. In what follows, we present our new class-weighted evaluation framework in a top-down fashion. Using the basic notation summarized in Table 1, we first formulate the general framework, and then we describe how this framework can be customized to different importance criteria scenarios by specializing the weights in a principled manner. For ease of exposition, we first focus on Accuracy as the underlying performance metric, and then we discuss how our approach can be adopted to other similar metrics. Finally, we end this section with a brief discussion of how our framework can be used in model training." }, { "heading": "3.1 WEIGHTED BALANCED ACCURACY (WBA)", "text": "Suppose we are given a test dataset with N data items in it, each of which belongs to one of C distinct classes. Furthermore, each class i contains ni of the data items in this dataset. 
Thus:\n\nN = ∑_{i=1}^{C} n_i (1)\n\nThe relative frequency of each class i in the whole dataset is:\n\nf_i = n_i / N (2)\n\nAssume a classifier that makes a prediction about the class label of each data item in the test dataset, and correctly predicts p_i out of n_i labels for a given class i, where p_i ≤ n_i. Then, the total number of correct predictions out of all the predictions gives us the overall Accuracy of the classifier:\n\nAccuracy = (∑_{i=1}^{C} p_i) / N (3)\n\nThe classifier’s Accuracy_i for a given class i (a.k.a., per-class Recall score) can be computed as:\n\nAccuracy_i = p_i / n_i (4)\n\nBalancedAccuracy is the macro-average of Accuracy_i over all classes in the dataset:\n\nBalancedAccuracy = (1/C) × ∑_{i=1}^{C} Accuracy_i (5)\n\nThe above formulation represents the state of the art in how prediction accuracy is evaluated for multi-class classifiers in the presence of imbalanced datasets (i.e., those where the f_i are not even). While for balanced datasets (i.e., ∀i, n_i = N/C and f_i = 1/C) BalancedAccuracy = Accuracy, for imbalanced datasets, BalancedAccuracy ensures that the prediction accuracy is not inflated due to high-frequency classes’ results dominating over the others’. BalancedAccuracy works well as long as each class is of the same importance, since it is the simple arithmetic mean across per-class accuracy measurements (i.e., each class’ accuracy contributes evenly to the overall accuracy). As we discussed in earlier sections, in many real-world classification problems, this assumption does not hold. Rather, classifiers must be rewarded higher scores for their prediction performance on more important classes. In order to capture this requirement, we generalize BalancedAccuracy into WeightedBalancedAccuracy by extending it with per-class importance weights w_i as follows:\n\nWeightedBalancedAccuracy = ∑_{i=1}^{C} w_i × Accuracy_i (6)\n\nThis simple yet powerful extension enables us to capture both skews and imbalances in class cardinalities as well as importances (i.e., the complete design space in Figure 1a). 
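As a worked illustration of Eqs. (3)-(6) (our own sketch, not code from the paper), the three scores can be computed from per-class counts in a few lines of Python. The counts and weights below are hypothetical, loosely echoing the three-class example of Figure 1b.

```python
# Minimal sketch of Accuracy, BalancedAccuracy, and WBA (Eqs. 3-6).
# n[i]: items in class i, p[i]: correct predictions for class i,
# w[i]: importance weight of class i (weights sum to 1). All numbers
# are illustrative, not taken from the paper's experiments.

def accuracy(p, n):
    return sum(p) / sum(n)                                    # Eq. (3)

def balanced_accuracy(p, n):
    return sum(pi / ni for pi, ni in zip(p, n)) / len(n)      # Eq. (5)

def weighted_balanced_accuracy(p, n, w):
    return sum(wi * pi / ni for wi, pi, ni in zip(w, p, n))   # Eq. (6)

n = [10, 20, 70]      # class cardinalities (classes A, B, C)
p = [1, 4, 60]        # correct predictions per class
w = [0.2, 0.7, 0.1]   # class B is deemed most important

print(accuracy(p, n))                                  # 0.65
print(round(balanced_accuracy(p, n), 2))               # 0.39
print(round(weighted_balanced_accuracy(p, n, w), 2))   # 0.25
```

Note how the three metrics rank the same predictions very differently: plain Accuracy is inflated by the majority class C, while WBA is dominated by the heavily weighted class B.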
This general formulation can support any importance criteria for weights as long as 0 ≤ w_i ≤ 1 and ∑_{i=1}^{C} w_i = 1. In the following subsections, we present general use of WBA with custom weights, for other scalar metrics, as well as in improving model training." }, { "heading": "3.2 WEIGHT CUSTOMIZATION", "text": "In a multi-class setting, not only may the classes carry different importance weights, but also the criteria of importance may vary from one problem or domain to another. We now discuss several types of criteria that we think are commonly seen in applications. This is not meant to be an exhaustive list, but it provides examples and templates that can be easily tailored to different problems.\n\nImportance criteria = User-defined. This is the most general and flexible form of importance criteria. The application designer or domain expert specifies the relative weight of each class based on some application-specific criteria. As an example, the problem might be about classifying images of different types of objects in highway traffic, where the user gives higher importance to correct recognition of certain objects of interest (e.g., pedestrians, bikes, animals, etc.). We express the user-defined relative weight of a class i with u_i, which is simply used as w_i in Equation 6 (i.e., w_i = u_i).\n\nImportance criteria = Rarity. It is often the case that the rarer something is, the more noteworthy or valuable it is. In multi-class problems, this corresponds to the case when the importance of a class i is inversely correlated with its relative frequency of occurrence (f_i) in the dataset. For example,\nin system log monitoring, log messages for more rarely occurring errors or exceptions (e.g., denial of service attack) are typically of higher importance. Therefore, a classifier that performs well on detecting such messages must be rewarded accordingly. 
In our framework, we capture rarity using weights based on normalized inverse class frequencies, formulated as follows:\n$w_i = r_i = \frac{1/f_i}{\sum_{j=1}^{C} 1/f_j}$ (7)\nMultiple importance criteria. In some problems, the importance of a class depends on multiple different criteria (e.g., both rarity and a user-defined criterion). To express class weights in such scenarios, we leverage techniques from multi-criteria decision making and multi-objective optimization (Triantaphyllou, 2000; Helff et al., 2016). One of the most basic methods is using normalized weighted sums based on composite weights (Helff et al., 2016). Composite weights can be computed either in additive or multiplicative form (Tofallis, 2014). The multiplicative approach tends to promote weight combinations that are uniformly higher across all criteria, and as such has been found preferable in application scenarios similar to ours (Helff et al., 2016; Tofallis, 2014). While we present this approach here, in principle, other approaches from multi-criteria decision making theory could also be used within our framework. Given $M$ different criteria with $m_{i,j}$ denoting the relative weight of class $i$ for criterion $j$, we can compute the composite weight of a class $i$ as follows:\n$w_i = \frac{\prod_{j=1}^{M} m_{i,j}}{\sum_{k=1}^{C} \prod_{j=1}^{M} m_{k,j}}$ (8)\nFor example, if we had two criteria, rarity $r$ and user-defined $u$, with weights $r_i$ and $u_i$ for each class $i$, respectively, then the composite weight for class $i$ would be $w_i = \frac{r_i \times u_i}{\sum_{j=1}^{C} r_j \times u_j}$.\nPartially-defined importance criteria. One commonly expected scenario (especially in those classification problems where the number of classes $C$ can be very large) is that not all of the class importance weights might be supplied by the user. For example, in a sentiment analysis use case, the user supplies the weights for all the negative classes and leaves the others unspecified. Our framework can support such cases by automatically assigning weights to the unspecified classes.
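The rarity weights of Equation 7 and the multiplicative composite weights of Equation 8 can be sketched as follows. This is a minimal illustration; the function names and the example class frequencies are ours:

```python
def rarity_weights(freqs):
    # Equation 7: normalized inverse class frequencies.
    inv = {c: 1.0 / f for c, f in freqs.items()}
    total = sum(inv.values())
    return {c: v / total for c, v in inv.items()}

def composite_weights(*criteria):
    # Equation 8: multiplicative combination of per-criterion weights,
    # normalized so that the composite weights sum to 1.
    classes = list(criteria[0].keys())
    prod = {c: 1.0 for c in classes}
    for crit in criteria:
        for c in classes:
            prod[c] *= crit[c]
    total = sum(prod.values())
    return {c: v / total for c, v in prod.items()}

# Illustrative class frequencies (not from the paper's datasets).
freqs = {"benign": 0.8, "phishing": 0.15, "malware": 0.05}
r = rarity_weights(freqs)   # the rarest class gets the largest weight
u = {"benign": 0.1, "phishing": 0.3, "malware": 0.6}  # user-defined weights
w = composite_weights(r, u)  # combines rarity and user importance
```

Both functions return weight dictionaries that sum to 1, so their output can be passed directly into a WBA computation as the $w_i$ of Equation 6.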
The default approach is to distribute the remaining portion of the weights evenly across all unspecified classes: (1 − total weight specified) / (number of unspecified classes). If the user prefers an alternative approach (e.g., distributing the remainder based on the rarity of the unspecified classes), this can also be easily supported by our framework." }, { "heading": "3.3 METRIC CUSTOMIZATION", "text": "The class-weighted evaluation framework presented above focused on the popular Accuracy metric as the underlying measure of prediction performance. However, our framework follows a general structure based on the idea of weighted macro-averaging with customizable weights, which can essentially be used with any performance metric that can be computed on a per-class basis. For example, the macro-averaging approaches that are already being used for Precision, Recall, and F-Score could easily be extended with our customizable weighting approach by replacing Accuracy in our formulas with one of these metrics." }, { "heading": "3.4 MODEL TRAINING IMPROVEMENT USING CLASS WEIGHTS", "text": "The customized weights presented in Section 3.2 not only help to produce a user-preferred ranking via the proposed WBA metric, but also help to improve model training. Recall that ML model training aims to minimize the loss between model predictions and ground-truth labels. A common practice is to minimize the sum of all per-sample losses. Using $Loss_i$ to denote the total loss incurred by all samples within class $i$, the model loss to minimize in training would be: $L = \sum_{i=1}^{C} Loss_i$.\nBy applying class importance weights $w_i$ as suggested in Section 3.2, the loss values of important classes account for a larger portion of the final loss value to be minimized. This enables the back-propagation process to focus more on optimizing the model parameters for the higher-weighted classes, and thus improves their accuracy.
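The class-weighted loss aggregation just described can be sketched in plain Python. This is a minimal numeric illustration of the idea, independent of any deep learning framework; the class names and loss values are made up:

```python
def weighted_loss(per_class_losses, weights):
    # L = sum_i w_i * Loss_i, where Loss_i is the total loss of class i.
    return sum(weights[c] * per_class_losses[c] for c in per_class_losses)

# Hypothetical per-class losses after one training pass.
per_class_losses = {"benign": 0.4, "phishing": 1.2, "malware": 2.0}

uniform = {c: 1 / 3 for c in per_class_losses}           # unweighted baseline
user    = {"benign": 0.1, "phishing": 0.3, "malware": 0.6}  # importance weights

print(weighted_loss(per_class_losses, uniform))  # ~1.2
print(weighted_loss(per_class_losses, user))     # ~1.6, malware dominates
```

In practice, the same effect is obtained by passing the class weights to the training framework, e.g., via the class-weight parameters of TensorFlow and PyTorch mentioned below.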
The model loss incorporating class importance weights becomes:\n$L = \sum_{i=1}^{C} w_i \times Loss_i$ (9)\nAdvances in popular deep learning frameworks have made this process rather straightforward to implement. For example, the TensorFlow API provides a parameter named class_weight, which allows users to pass in a data structure that specifies a weight value for each class label (TensorFlow). Similarly, the PyTorch API for cross-entropy loss also provides a parameter to pass in class weights (PyTorch). These provide natural interfaces for using the importance weights in our WBA framework, both to improve a model’s ability to learn the important classes and to raise the overall WBA score of the classification outcomes." }, { "heading": "4 EXPERIMENTAL STUDY", "text": "We now present an experimental analysis of our new framework for three application domains. Our goal is to demonstrate the value of WBA compared to existing metrics when evaluating ML models on real-world imbalanced data classification problems. As we will show, oftentimes a traditional metric like Accuracy or BalancedAccuracy will make classifier A seem preferable to classifier B, when in reality classifier B is superior. In addition, we also provide an analysis of how WBA can positively impact not only the testing of models, but also their training. Details of our experiments (incl. code, data, and examples) can be found in the supplementary file and in the appendix." }, { "heading": "4.1 USE CASE 1: LEARNED LOG PARSING", "text": "ML-based log parsers are tools designed to automatically learn the structure of event logs generated by hardware and software systems in order to properly categorize them into event classes (e.g., different error types). In our first study, we used WBA to evaluate 3 state-of-the-art log parsing systems: Drain, Spell, and MoLFI (Du & Li, 2016; He et al., 2017; Messaoudi et al., 2018). We start by providing an abbreviated description of our experimental setup.
Log Parsing Systems. Drain is a rule-based, online log parsing system that encodes the parsing rules in a fixed-depth parse tree (He et al., 2017). It performs a pre-processing step for each new log message using regular expressions created by domain experts. Spell, like Drain, is also rule-based; it principally uses the longest common subsequence (LCS) to find new log message classes (Du & Li, 2016). Finally, MoLFI casts log parsing as a multi-objective optimization problem and provides a solution based on genetic programming (Messaoudi et al., 2018). Datasets. We test each aforementioned log message classification system with four real-world datasets taken from a public benchmark (Zhu et al., 2019). Each dataset has 2000 log instances randomly sampled from a larger dataset. The macOS dataset contains raw log data generated by the macOS operating system (341 log classes, 237 infrequent classes (i.e., those that have fewer occurrences in the dataset than the average number of messages per class), and an average class frequency of 5). The BlueGene/L (BGL) dataset is a collection of logs from the BlueGene/L supercomputer system (120 log classes, 101 infrequent classes, and an average class frequency of 16). The Android dataset consists of logs from the Android mobile operating system (Zhu et al., 2019) (166 log classes, 127 infrequent classes, and an average class frequency of 16). Finally, the HDFS dataset consists of log data collected from the Hadoop Distributed File System (14 log classes, 8 infrequent classes, and an average class frequency of 142). Overall, the first three datasets are highly skewed in class frequencies, whereas the HDFS dataset is relatively much less skewed (see Appendix A.1). Results. For Drain, Spell, and MoLFI, traditional metrics of Precision, Recall, F1-Score, and Accuracy (named Parsing Accuracy in the original papers) were used for training and testing classification performance. 
None of these metrics is class-sensitive, yet in log parsing, messages in fact vary in importance across the classes. The importance criterion is rarity: the rarer an error message is, the more important it is to correctly classify it. To capture this, we configure WBA as WBArarity, which automatically assigns the WBA weights based on the dataset classes’ inverse frequencies, as described in Section 3.2. Then we evaluate the test results from the 3 parsers over the 4 datasets using WBArarity and compare against traditional metrics in two categories, class-insensitive and class-sensitive, as shown in Figure 2. WBArarity vs. Class-insensitive Metrics: The class-insensitive metrics (specifically, F1-Score and Accuracy) agree on how to rank the classification performance of the 3 parsers across all datasets (for macOS and Android, Drain > Spell > MoLFI; for BGL, Drain > MoLFI > Spell; for HDFS, all perform similarly). Since WBArarity is sensitive to skews in the classes’ data distribution and importance, it makes a completely different judgment. Furthermore, it ranks the techniques differently for each dataset (Drain > MoLFI > Spell in macOS; MoLFI > Spell > Drain for BGL; Spell > Drain > MoLFI for Android; and for HDFS, Spell > Drain and MoLFI). The WBArarity ranking aligns with our observations on the per-class accuracy of the methods. For example, on the BGL dataset, although Spell has the most mis-classified samples, spread across the most classes (and thus the lowest overall Accuracy and Balanced Accuracy), it has very few mis-classified samples in the rare classes, making its WBArarity higher than Drain’s. On the other hand, MoLFI performs best on the rare classes and thus has the best WBArarity. A discussion of per-class performance can be found in Appendix A.1. This result validates that WBArarity provides a more sensitive tool for assessing classification performance. WBArarity vs.
Balanced Accuracy (BA): As discussed earlier, BA is class-sensitive, but only to distribution imbalance. We can observe the difference between BA and WBA in Figure 2. In macOS and BGL, where the skew is the highest and rarity is more pronounced, the two metrics completely disagree in how they rank the parsers. In contrast, for Android and HDFS, where the skew is lower, there is overall agreement, although the separation in metric values differs slightly. Of particular importance is the difference seen in Figure 2a. We observe that the best-performing model is Spell when scored by BA, and Drain when scored by WBArarity. This difference is due to Spell’s and Drain’s differing ability to correctly classify the infrequent classes, i.e., those that represent failures and errors that require the most immediate response." }, { "heading": "4.2 USE CASE 2: SENTIMENT ANALYSIS", "text": "In social media and other user-facing domains like e-commerce sites, it is often useful to understand the views or feelings (“sentiments”) associated with users’ behavior or preferences. In the second part of our experimental study, we apply WBA in the context of such a sentiment analysis use case, which involves analyzing text-based product reviews from Amazon’s e-commerce websites. Dataset. The dataset consists of customer reviews and ratings of Amazon products (Ni et al., 2019). The task is to classify the reviews into 5 classes (with 1 being the lowest and 5 being the highest review rating a product can get), where the ratings constitute the ground-truth class labels. There is high class imbalance in this dataset (skew=2.140). As shown in the Frequency column of Table 2, Class 5, the highest customer rating, clearly dominates the others. It is known that the distribution of customer review ratings is typically imbalanced and generally follows a J-shaped distribution (Mudambi & Schuff, 2010; Pavlou & Dimoka, 2006). Sentiment Analysis Models.
We compare 4 types of recurrent neural networks (RNNs), all consisting of an embedding layer with pre-trained word embeddings from Pennington et al. (2014) followed by a recurrent layer from PyTorch (Subramanian, 2018): RNN, LSTM, GRU, and BiLSTM. The hidden state output from the last time step is passed to a fully-connected layer with an input of 256 neurons and an output of 5 neurons. Results. For this use case, we first worked with a user-defined importance criterion borrowed from published studies suggesting that extreme review ratings (classes 1 and 5) carry more importance (Mudambi & Schuff, 2010; Pavlou & Dimoka, 2006). Thus, we set the weights as shown in Table 2 (shown as WBA(user) or user in Figure 3). WBA vs. Other Accuracy Metrics: First, we compare WBA(user) with Accuracy and BalancedAccuracy (BA) when used as a metric for both training and testing of the 4 DNN models (Figure 3a). We make a few observations: (i) The class-insensitive Accuracy showcases the imbalance problem in classification, as it favors the RNN model, which is heavily biased toward the majority class (see Accuracyi for RNN in Table 2, where class 5 scores 0.96). (ii) The frequency-sensitive BA metric finds that all models perform similarly. WBA(user), in contrast, identifies LSTM as the best model. Indeed, Table 2 confirms that LSTM performs best in predicting the most important class, class 1 (0.19 accuracy). Overall, we find that WBA is capable of capturing importance skews, even when the frequency skew is high and biased towards less important classes. Impact of WBA in Model Training: Next, we explore the use of WBA not only in model evaluation, but also in training. We focus on two models (LSTM and RNN), and apply WBA either only during testing or to both training (by extending the loss functions of the DNNs with weights as in Section 3.4) and testing.
Intuitively, if a model is trained with awareness of the importance weights, then it should also perform well when tested against the same criteria. To test this hypothesis, we repeated the experiment for 3 alternative importance criteria: (i) rarity ($w_1 = 0.209$, $w_2 = 0.368$, $w_3 = 0.255$, $w_4 = 0.136$, $w_5 = 0.030$), (ii) user-defined (i.e., with the weights in Table 2), and (iii) a composite of the two ($w_1 = 0.62$, $w_2 = w_3 = w_4 = 0$, $w_5 = 0.38$). In Figure 3b, we observe: (i) Except for rarity, WBA for both LSTM and RNN improves when integrated into model training. This verifies our intuition and shows that WBA is a useful metric not only for evaluation, but also for training. (ii) When we zoom into rarity, we see that although class 2 is the most important, the per-class accuracy for class 5 is much higher for both LSTM and RNN in the Test-only case, because both models are still trained with a heavy bias towards the majority class (5). (iii) Though rarity by itself is not useful in training, when combined with user importance, it visibly improves the WBA scores. This shows that our multi-criteria composition approach is capable of combining importance criteria as intended." }, { "heading": "4.3 USE CASE 3: URL CLASSIFICATION", "text": "URL classification is a crucial task in the cyber-security industry for managing web traffic. Given a URL, the goal is to categorize the corresponding webpage into several well-defined classes such as benign (i.e., no harmful content), malware, phishing, NSFW (i.e., not safe for work), etc. Given the vast number of URLs available, accurately learning and evaluating a URL classifier can greatly help in automatically categorizing webpages without requiring manual labeling (Vallina et al., 2020). Dataset. Our dataset contains URLs from several different types of categories: benign (e.g., news, sports), NSFW (e.g., drugs, gambling), and malicious (phishing and malware).
These URLs were sampled from a variety of third-party sources so as to minimize potential data bias. Many of the phishing and malware URLs were sampled from VirusTotal (VirusTotal), which aggregates various cyber-security vendors’ detections into a single site, while benign URLs were sampled from Alexa’s Top Sites lists (Amazon), which rank websites according to their popularity. We are unable to reveal the exact URLs used in the analysis for legal reasons, but we tried to ensure that our dataset is representative of the types of URLs encountered in real-world URL filtering scenarios. URL Classification Services. We tested the performance of URL filtering products from 4 commercial companies, whose names had to be anonymized as A, B, C, D. Since each URL filtering product has its own unique taxonomy of categories, we created mappings from each service’s category space to a single shared category space so that we can easily compare the results. Furthermore, since no company had access to our test dataset sampled from various third-party sources, no company was able to gain an unfair advantage in this URL classification task. Results. We submit the sampled URLs from each category to the above services, calculate their per-category accuracy, and compute a final accuracy score for competitive analysis. Evaluating and Ranking the Classifiers: Table 3 shows our results for Companies A-D. For overall Accuracy, D is the best, as it has the highest classification accuracy on the benign class (i.e., the majority class). With BalancedAccuracy, we can see that A is the best, since it has higher accuracy for the non-benign classes compared with the other services, and this advantage is more evident with WBArarity, shown as a larger gap between their WBArarity values. B is better than D when using WBArarity, but if users consider the malicious class to be the most important and apply a very high weight to it (e.g., 0.8), the resulting WBAuser metric would show that D is preferable over B.
Improving Model Training with WBA: In this experiment, we use our dataset to train a URLNet model (Le et al., 2018), applying various WBA weights as suggested in Section 3.2 to show their effectiveness in improving the corresponding WBA metrics. As shown in Table 4, 60% of the dataset is used for training while the remaining 40% is used for evaluation. The Classification Accuracy columns of Table 4 show the results under different settings. For vanilla training (without class weights), we get high accuracy for the benign class but low accuracy for the others. The main reason is that the benign class has the largest number of URLs in the training data. In order to boost the accuracy of the non-benign classes, we apply rarity weights. With this, the accuracy for all non-benign, rarer classes is improved (noticeably, from 0 to 0.608 for the NSFW category). Note that malware content is often considered the most harmful among all non-benign categories. For the third experiment, we apply user-defined weights that assign a higher weight to the malware class, successfully boosting its accuracy from 0.839 to 0.895. It is clear that applying the corresponding class weights in training can significantly improve the related WBA accuracy on the test dataset. For instance, when applying rarity weights, WBArarity improves from 0.653 to 0.761, while WBAuser improves from 0.640 to 0.752 if the same user-defined weights are used in training. This aligns with our observation that increasing the weights of the categories of concern can notably improve the accuracy in those categories. In comparison, neither the Accuracy nor the Balanced Accuracy measure does a good job of ranking the different training methods." }, { "heading": "5 CONCLUSION", "text": "In this paper, we presented a simple yet general-purpose class-sensitive evaluation framework for imbalanced data classification.
Our framework is designed to improve the grading of multi-class classifiers in domains where class importance is not evenly distributed. We provided a modular and extensible formulation that can be easily customized to different importance criteria and metrics. Experiments with three real-world use cases show the value of a metric based on our framework, Weighted Balanced Accuracy (WBA), over existing metrics, not only in evaluating classifiers’ test results with greater sensitivity to importance criteria, but also in training them accordingly." }, { "heading": "ETHICS STATEMENT", "text": "This paper introduces a measurement framework for training and evaluating classification models. The primary goal of this framework is to enable the creation of more accurate models, especially in the presence of imbalances across class distributions and their importances. As with any technological tool, our work could be subject to misuse. In particular, unfair bias might be introduced into learned models by purposefully misconfiguring the class importance weights. To avoid such ethical risks, we urge the community to use our tool in a responsible manner." }, { "heading": "REPRODUCIBILITY STATEMENT", "text": "This research is highly experimental in nature. We provide an appendix section as well as an additional file with supplementary material in order to document all of our experimental artifacts and how they can be used to reproduce the results we reported in this paper. These artifacts include all datasets used in the experiments, as well as code. The exception is the URL classification use case, whose URLs cannot be released, either because of our contract with VirusTotal or because they are customer-sensitive information. Nevertheless, we have included the class labels and class predictions for each setting, along with the corresponding scripts to calculate the various WBA metrics, which can reproduce our results in the paper.
In particular, the appendix section includes extra experimental results to validate our point, as well as the details for each experiment. Our supplementary material contains 3 distinct folders with source code and scripts for the 3 use cases in evaluation. Each folder contains a README file that instructs how to run the code and reproduce the results." }, { "heading": "A APPENDIX", "text": "In this appendix, we provide details for the experimental study, including data and code. For further information, please see the supplementary material." }, { "heading": "A.1 DETAILS FOR LOG PARSING EXPERIMENTS", "text": "For the three log parsing techniques used in Section 4.1 (Drain, Spell, and MoLFI), we used the implementations provided by the LogPAI team:\nhttps://github.com/logpai/logparser/\nThe four datasets used in these experiments (macOS, BGL, Android, and HDFS) came from the benchmarking data also provided by LogPAI:\nhttps://github.com/logpai/loghub/\nIn Figure 4, we show the histograms for the four log datasets together with their skew values. As defined in the Microsoft Excel Documentation, “Skewness characterizes the degree of asymmetry of a distribution around its mean. Positive skewness indicates a distribution with an asymmetric tail extending toward more positive values, while negative skewness indicates a distribution with an\nasymmetric tail extending toward more negative values.” 1. 
In our context, skew provides a good indication for the degree of imbalance in class cardinality distributions – the larger the skew, the larger the degree of class imbalance.\nWe also provide data files with class labels (true + predicted) and weights (based on rarity as importance criteria) used in generating the experimental data plotted in Figure 2 as part of our WBA-Evaluator tool implementation included in the supplementary material (can be found under the WBA-Evaluator/examples/LogParsing/ directory).\nA visualization of the per-class mis-classification count for each method on each dataset can be found from Figure 5 to Figure 7. The lower a bar is, the fewer samples being mis-classified in that category. Note that the categories on x axis are ordered in a descending order of class frequencies, i.e., in the same order of Figure 4. As a result, the lower the bars on the right side of each plot, the better the log parser is in classifying infrequent classes for that dataset, and thus the better the WBArarityscore should be. As explained by the legends in each figure, the per-class accuracy in each plot aligns with WBArarity metrics in Figure 2.\n1 https://support.microsoft.com/en-us/office/skew-function-bdf49d86-b1ef-4804-a046-28eaea69c9fa" }, { "heading": "A.2 DETAILS FOR SENTIMENT ANALYSIS EXPERIMENTS", "text": "For the sentiment analysis experiments of Section 4.2, we used a sample from the Amazon Customer Reviews dataset provided at:\nhttps://nijianmo.github.io/amazon/index.html\nIn Figure 8, we show the histogram for the Amazon dataset. As described in Section 4.2, we implemented 4 RNN-based classifiers to experiment with this dataset. 
The code for these classifiers can be found in the supplementary material (under the AmazonReviewsClassifier/src/ directory) along with a copy of the data (under the AmazonReviewsClassifier/dataset/ directory).\nWe also provide the data files with class labels (true + predicted) and weights (user) used in generating the experimental data for the LSTM results plotted in Figure 3 and Table 2 as an example. These can be found in our WBA-Evaluator tool implementation included in the supplementary material under the WBA-Evaluator/examples/Amazon/ directory." }, { "heading": "A.3 DETAILS FOR URL CLASSIFICATION EXPERIMENTS", "text": "For the URL classification experiment in Section 4.3, and specifically for the experimental results in Table 4, our implementation builds on the URLNet work (Le et al., 2018). In particular, we changed the URLNet source code to support multi-class classification (previously binary) and to apply various weights for model training. The URLNet open-source repository is available at:\nhttps://github.com/Antimalweb/URLNet/\nOur implementation and modifications can be found in the WBA-Evaluator/WeightedURLNet/ directory.\nIn the main text, we show how the proposed WBA weights are able to improve training accuracy for underrepresented classes. Due to the page limit, only the evaluation results on 4 classes are listed in Table 4. Here we include the evaluation results for 2 classes and 3 classes, respectively, to further demonstrate the effectiveness of WBA weights in improving model training. As shown in Table 5, by applying WBArarity weights and user-defined weights, both of which place more weight on the phishing category, the phishing accuracy is improved accordingly in both cases, with the corresponding WBA metrics improving as well. In contrast, neither overall Accuracy nor Balanced Accuracy is able to show the accuracy improvement in the important class. Similarly, Table 6 shows the experimental results for 3 classes.
It should be noted that vanilla training leads to 0 accuracy on the newly added NSFW category, while training with rarity weights recovers some accuracy for this category, and applying a relatively high user weight to this category boosts its accuracy to over 80%." }, { "heading": "A.4 THE WBA-EVALUATOR TOOL", "text": "In addition to the details on our experimental study described above, we also provide a copy of the WBA-Evaluator tool that implements our customizable, class-weighted evaluation framework described in Section 3. WBA-Evaluator is written in Python and can be found in the supplementary material along with a README that describes how it can be used. In a nutshell, WBA-Evaluator takes as input three files (true class labels, predicted class labels, class weights) and a number of configuration parameters in the form of command-line arguments, and then it generates accuracy scores (BA or WBA) as specified by these arguments. The WBA-Evaluator implementation comes with two subdirectories: src/ contains the Python source code; example/ contains all the input files (labels and weights), with scripts in the scripts/ subfolder to run them. Please see the README file for more details. Using this tool, the results reported in the paper can be reproduced." } ]
2021
CLASS-WEIGHTED EVALUATION METRICS
Recognition,", "year": 2020 }, { "authors": [ "Luigi Gresele", "Paul K. Rubenstein", "Arash Mehrjou", "Francesco Locatello", "Bernhard Schölkopf" ], "title": "The incomplete rosetta stone problem: Identifiability results for multi-view nonlinear ica", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2019 }, { "authors": [ "Christina Heinze-Deml", "Nicolai Meinshausen" ], "title": "Conditional variance penalties and domain shift robustness", "venue": "arXiv preprint arXiv:1710.11469,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner. 
Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Irina Higgins", "Nicolas Sonnerat", "Loic Matthey", "Arka Pal", "Christopher P Burgess", "Matko Bošnjak", "Murray Shanahan", "Matthew Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "Scan: Learning hierarchical compositional visual concepts", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aapo Hyvarinen", "Hiroshi Morioka" ], "title": "Unsupervised feature extraction by time-contrastive learning and nonlinear ica", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Aapo Hyvärinen", "Petteri Pajunen" ], "title": "Nonlinear independent component analysis: Existence and uniqueness results", "venue": "Neural Networks,", "year": 1999 }, { "authors": [ "Aapo Hyvarinen", "Hiroaki Sasaki", "Richard E Turner" ], "title": "Nonlinear ica using auxiliary variables and generalized contrastive learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Stephen James", "Paul Wohlhart", "Mrinal Kalakrishnan", "Dmitry Kalashnikov", "Alex Irpan", "Julian Ibarz", "Sergey Levine", "Raia Hadsell", "Konstantinos Bousmalis" ], "title": "Sim-to-real via sim-to-sim: Dataefficient robotic grasping via randomized-to-canonical adaptation networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Christian Jutten", "Juha Karhunen" ], "title": "Advances in nonlinear blind source separation", "venue": "In International Symposium on Independent Component Analysis and Blind Signal Separation,", "year": 2003 }, { "authors": [ "Ilyes Khemakhem", "Diederik Kingma", "Ricardo Monti", "Aapo Hyvarinen" ], "title": "Variational autoencoders and nonlinear ica: A unifying framework", 
"venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "David Klindt", "Lukas Schott", "Yash Sharma", "Ivan Ustyuzhaninov", "Wieland Brendel", "Matthias Bethge", "Dylan Paiton" ], "title": "Towards nonlinear disentanglement in natural data with temporal sparse coding", "venue": null, "year": 2007 }, { "authors": [ "David Krueger", "Ethan Caballero", "Joern-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Remi Le Priol", "Aaron Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (rex)", "venue": "arXiv preprint arXiv:2003.00688,", "year": 2020 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yann LeCun", "Fu Jie Huang", "Leon Bottou" ], "title": "Learning methods for generic object recognition with invariance to pose and lighting", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2004 }, { "authors": [ "Ya Li", "Xinmei Tian", "Mingming Gong", "Yajing Liu", "Tongliang Liu", "Kun Zhang", "Dacheng Tao" ], "title": "Deep domain generalization via conditional invariant adversarial networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Francesco 
Locatello", "Gabriele Abbati", "Thomas Rainforth", "Stefan Bauer", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "On the fairness of disentangled representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Ben Poole", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem", "Michael Tschannen" ], "title": "Weakly-supervised disentanglement without compromises", "venue": "arXiv preprint arXiv:2002.02886,", "year": 2020 }, { "authors": [ "Krikamol Muandet", "David Balduzzi", "Bernhard Schölkopf" ], "title": "Domain generalization via invariant feature representation", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Xue Bin Peng", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Sim-to-real transfer of robotic control with dynamics randomization", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2018 }, { "authors": [ "Scott Reed", "Yi Zhang", "Yuting Zhang", "Honglak Lee" ], "title": "Deep visual analogy-making", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Karl Ridgeway", "Michael C Mozer" ], "title": "Learning deep disentangled embeddings with the f-statistic loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Mateo Rojas-Carulla", "Bernhard 
Schölkopf", "Richard Turner", "Jonas Peters" ], "title": "Invariant models for causal transfer learning", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Andrei A Rusu", "Matej Večerı́k", "Thomas Rothörl", "Nicolas Heess", "Razvan Pascanu", "Raia Hadsell" ], "title": "Sim-to-real robot learning from pixels with progressive nets", "venue": "In Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Bernhard Schölkopf", "Francesco Locatello", "Stefan Bauer", "Nan Rosemary Ke", "Nal Kalchbrenner", "Anirudh Goyal", "Yoshua Bengio" ], "title": "Towards causal representation learning", "venue": "arXiv preprint arXiv:2102.11107,", "year": 2021 }, { "authors": [ "Rui Shu", "Yining Chen", "Abhishek Kumar", "Stefano Ermon", "Ben Poole" ], "title": "Weakly supervised disentanglement with guarantees", "venue": "arXiv preprint arXiv:1910.09772,", "year": 2019 }, { "authors": [ "Jocelyn Sietsma", "Robert JF Dow" ], "title": "Creating artificial neural networks that generalize", "venue": "Neural networks,", "year": 1991 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter Sorrenson", "Carsten Rother", "Ullrich Köthe" ], "title": "Disentanglement by nonlinear ica with general incompressible-flow networks (gin)", "venue": "arXiv preprint arXiv:2001.04872,", "year": 2020 }, { "authors": [ "Raphael Suter", "Djordje Miladinovic", "Bernhard Schölkopf", "Stefan Bauer" ], "title": "Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural 
networks from simulation to the real world", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Frederik Träuble", "Elliot Creager", "Niki Kilbertus", "Francesco Locatello", "Andrea Dittadi", "Anirudh Goyal", "Bernhard Schölkopf", "Stefan Bauer" ], "title": "On disentangled representations learned from correlated data", "venue": null, "year": 2006 }, { "authors": [ "Sjoerd van Steenkiste", "Francesco Locatello", "Jürgen Schmidhuber", "Olivier Bachem" ], "title": "Are disentangled representations helpful for abstract visual reasoning", "venue": null, "year": 1905 }, { "authors": [ "Manuel Wüthrich", "Felix Widmaier", "Felix Grimminger", "Joel Akpo", "Shruti Joshi", "Vaibhav Agrawal", "Bilal Hammoud", "Majid Khadiv", "Miroslav Bogdanovic", "Vincent Berenz" ], "title": "Trifinger: An opensource robot for learning dexterity", "venue": null, "year": 2008 }, { "authors": [ "Mengyuan Yan", "Qingyun Sun", "Iuri Frosio", "Stephen Tyree", "Jan Kautz" ], "title": "How to close sim-real gap? transfer with segmentation", "venue": "arXiv preprint arXiv:2005.07695,", "year": 2020 } ]
[ { "heading": null, "text": "Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning. While disentangled representations were found to be useful for diverse tasks such as abstract reasoning and fair classification, their scalability and real-world impact remain questionable. We introduce a new high-resolution dataset with 1M simulated images and over 1,800 annotated real-world images of the same setup. In contrast to previous work, this new dataset exhibits correlations, a complex underlying structure, and allows to evaluate transfer to unseen simulated and realworld settings where the encoder i) remains in distribution or ii) is out of distribution. We propose new architectures in order to scale disentangled representation learning to realistic high-resolution settings and conduct a large-scale empirical study of disentangled representations on this dataset. We observe that disentanglement is a good predictor for out-of-distribution (OOD) task performance.\n1 INTRODUCTION\nDisentangled representations hold the promise of generalization to unseen scenarios (Higgins et al., 2017b), increased interpretability (Adel et al., 2018; Higgins et al., 2018) and faster learning on downstream tasks (van Steenkiste et al., 2019; Locatello et al., 2019a). However, most of the focus in learning disentangled representations has been on small synthetic datasets whose ground truth factors exhibit perfect independence by design. More realistic settings remain largely unexplored. We hypothesize that this is because real-world scenarios present several challenges that have not been extensively studied to date. Important challenges are scaling (much higher resolution in observations and factors), occlusions, and\ncorrelation between factors. 
Consider, for instance, a robotic arm moving a cube: Here, the robot arm can occlude parts of the cube, and its end-effector position exhibits correlations with the cube’s position and orientation, which might be problematic for common disentanglement learners (Träuble et al., 2020). Another difficulty is that we typically have only limited access to ground truth labels in the real world, which requires robust frameworks for model selection when no or only weak labels are available.\n∗Equal contribution. Correspondence to: <[email protected]>, <[email protected]>. †Work done during an internship at the Max Planck Institute for Intelligent Systems.\nThe goal of this work is to provide a path towards disentangled representation learning in realistic settings. First, we argue that this requires a new dataset that captures the challenges mentioned above. We propose a dataset consisting of simulated observations from a scene where a robotic arm interacts with a cube in a stage (see Fig. 1). This setting exhibits correlations and occlusions that are typical in real-world robotics. Second, we show how to scale the architecture of disentanglement methods to perform well on this dataset. Third, we extensively analyze the usefulness of disentangled representations in terms of out-of-distribution downstream generalization, both in terms of held-out factors of variation and sim2real transfer. In fact, our dataset is based on the TriFinger robot from Wüthrich et al. (2020), which can be built to test the deployment of models in the real world. 
While the analysis in this paper focuses on the transfer and generalization of predictive models, we hope that our dataset may serve as a benchmark to explore the usefulness of disentangled representations in real-world control tasks.\nThe contributions of this paper can be summarized as follows:\n• We propose a new dataset for disentangled representation learning, containing 1M simulated high-resolution images from a robotic setup, with seven partly correlated factors of variation. Additionally, we provide a dataset of over 1,800 annotated images from the corresponding real-world setup that can be used for challenging sim2real transfer tasks. These datasets are made publicly available.1\n• We propose a new neural architecture to successfully scale VAE-based disentanglement learning approaches to complex datasets.\n• We conduct a large-scale empirical study on generalization to various transfer scenarios on this challenging dataset. We train 1,080 models using state-of-the-art disentanglement methods and discover that disentanglement is a good predictor for out-of-distribution (OOD) performance of downstream tasks." }, { "heading": "2 RELATED WORK", "text": "Disentanglement methods. Most state-of-the-art disentangled representation learning approaches are based on the framework of variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014). A (high-dimensional) observation x is assumed to be generated according to the latent variable model pθ(x|z)p(z) where the latent variables z have a fixed prior p(z). 
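Before the training objective is introduced below, the building blocks can be made concrete in code. The following is a minimal NumPy sketch of the per-example β-weighted ELBO for a diagonal-Gaussian posterior and a standard-normal prior (function names are ours; with β = 1 this reduces to the plain ELBO of a vanilla VAE):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def beta_vae_elbo(recon_log_lik, mu, logvar, beta=1.0):
    """Per-example beta-weighted ELBO: E_q[log p(x|z)] - beta * KL(q(z|x) || p(z)).
    beta > 1 penalises the KL term more strongly, encouraging disentanglement."""
    return recon_log_lik - beta * kl_to_standard_normal(mu, logvar)
```

In practice the reconstruction term `recon_log_lik` comes from the decoder and both terms are averaged over a minibatch; the sketch only shows how β enters the objective.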
The generative model pθ(x|z) and the approximate posterior distribution qφ(z|x) are typically parameterized by neural networks, which are optimized by maximizing the evidence lower bound (ELBO):\n\nL_VAE = E_{qφ(z|x)}[log pθ(x|z)] − D_KL(qφ(z|x) ‖ p(z)) ≤ log p(x) (1)\n\nAs the above objective does not enforce any structure on the latent space except for some similarity to p(z), different regularization strategies have been proposed, along with evaluation metrics to gauge the disentanglement of the learned representations (Higgins et al., 2017a; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2018; Chen et al., 2018; Eastwood & Williams, 2018). Recently, Locatello et al. (2019b, Theorem 1) showed that the purely unsupervised learning of disentangled representations is impossible. This limitation can be overcome without the need for explicitly labeled data by introducing weak labels (Locatello et al., 2020; Shu et al., 2019). Ideas related to disentangling the factors of variation date back to the non-linear ICA literature (Comon, 1994; Hyvärinen & Pajunen, 1999; Bach & Jordan, 2002; Jutten & Karhunen, 2003; Hyvarinen & Morioka, 2016; Hyvarinen et al., 2019; Gresele et al., 2019). Recent work combines non-linear ICA with disentanglement (Khemakhem et al., 2020; Sorrenson et al., 2020; Klindt et al., 2020).\n\nEvaluating disentangled representations. The BetaVAE (Higgins et al., 2017a) and FactorVAE (Kim & Mnih, 2018) scores measure disentanglement by performing an intervention on the factors of variation and predicting which factor was intervened on. The Mutual Information Gap (MIG) (Chen et al., 2018), Modularity (Ridgeway & Mozer, 2018), DCI Disentanglement (Eastwood & Williams, 2018) and SAP scores (Kumar et al., 2018) are based on matrices relating factors of variation and codes (e.g. 
pairwise mutual information, feature importance and predictability).\n\n1http://people.tuebingen.mpg.de/ei-datasets/iclr_transfer_paper/robot_finger_datasets.tar (6.18 GB)\n\nDatasets for disentanglement learning. dSprites (Higgins et al., 2017a), which consists of binary low-resolution 2D images of basic shapes, is one of the most commonly used synthetic datasets for disentanglement learning. Color-dSprites, Noisy-dSprites, and Scream-dSprites are slightly more challenging variants of dSprites. The SmallNORB dataset contains toy images rendered under different lighting conditions, elevations and azimuths (LeCun et al., 2004). Cars3D (Reed et al., 2015) exhibits different car models from Fidler et al. (2012) under different camera viewpoints. 3dshapes is a popular dataset of simple shapes in a 3D scene (Kim & Mnih, 2018). Finally, Gondal et al. (2019) proposed MPI3D, containing images of physical 3D objects with seven factors of variation, such as object color, shape, size and position, available in simplistic rendered, realistic rendered, and real-world variants. Except for MPI3D, which has over 1M images, the other datasets are limited to between 17,568 and 737,280 images. All of the above datasets exhibit perfect independence of all factors, the number of possible states is on the order of 1M or less, and due to their static setting they do not allow for dynamic downstream tasks such as reinforcement learning. In addition, except for SmallNORB, the image resolution is limited to 64x64 and there are no occlusions.\n\nOther related work. Locatello et al. (2020) probed the out-of-distribution generalization of downstream tasks trained on disentangled representations. However, these representations are trained on the entire dataset. Generalization and transfer performance especially for representation learning has likewise been studied in Dayan (1993); Muandet et al. (2013); Heinze-Deml & Meinshausen (2017); Rojas-Carulla et al. (2018); Suter et al. 
(2019); Li et al. (2018); Arjovsky et al. (2019); Krueger et al. (2020); Gowal et al. (2020). For the role of disentanglement in causal representation learning we refer to the recent overview by Schölkopf et al. (2021). Träuble et al. (2020) systematically investigated the effects of correlations between factors of variation on disentangled representation learners. Transfer of learned disentangled representations from simulation to the real world has been recently investigated by Gondal et al. (2019) on the MPI3D dataset, and previously by Higgins et al. (2017b) in the context of reinforcement learning. Sim2real transfer is of major interest in the robotic learning community, because of limited data and supervision in the real world (Tobin et al., 2017; Rusu et al., 2017; Peng et al., 2018; James et al., 2019; Yan et al., 2020; Andrychowicz et al., 2020)." }, { "heading": "3 SCALING DISENTANGLED REPRESENTATIONS TO COMPLEX SCENARIOS", "text": "A new challenging dataset. Simulated images in our dataset are derived from the TriFinger robot platform introduced by Wüthrich et al. (2020). The motivation for choosing this setting is that (1) it is challenging due to occlusions, correlations, and other difficulties encountered in robotic settings, (2) it requires modeling of fine details such as tip links at high resolutions, and (3) it corresponds to a robotic setup, so that learned representations can be used for control and reinforcement learning in simulation and in the real world. The scene comprises a robot finger with three joints that can be controlled to manipulate a cube in a bowl-shaped stage. Fig. 1 shows examples of scenes from our dataset. The data is generated from 7 different factors of variation (FoV) listed in Table 1. Unlike in previous datasets, not all FoVs are independent: The end-effector (the tip of the finger) can collide with the floor or the cube, resulting in infeasible combinations of the factors (see Appendix B.1). 
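How discarding infeasible combinations induces correlations between otherwise independent factors can be illustrated with a toy rejection-sampling sketch (hypothetical factor names and `is_feasible` predicate; this is not the actual data-generation code):

```python
import random

def sample_feasible_fov(ranges, is_feasible, max_tries=1000):
    """Rejection-sample one factor-of-variation configuration.

    Factors are drawn independently, but rejecting infeasible combinations
    (e.g. the fingertip colliding with the cube or the floor) makes the
    surviving samples correlated."""
    for _ in range(max_tries):
        fov = {name: random.choice(values) for name, values in ranges.items()}
        if is_feasible(fov):
            return fov
    raise RuntimeError("no feasible configuration found")
```

Even though each factor is sampled uniformly, conditioning on feasibility couples, for example, the finger joints with the cube position.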
We argue that such correlations are a key feature in real-world data that is not present in existing datasets. The high FoV resolution results in approximately 1.52 billion feasible states, but the dataset itself only contains one million of them (approximately 0.065% of all possible FoV combinations), realistically rendered into 128 × 128 images. Additionally, we recorded an annotated dataset under the same conditions in the real-world setup: we acquired 1,809 camera images from the same viewpoint and recorded the labels of the 7 underlying factors of variation. This dataset can be used for out-of-distribution evaluations, few-shot learning, and testing other sim2real aspects.\nModel architecture. When scaling disentangled representation learning to more complex datasets, such as the one proposed here, one of the main bottlenecks in current VAE-based approaches is the flexibility of the encoder and decoder networks. In particular, using the architecture from Locatello et al. (2019b), none of the models we trained correctly captured all factors of variation or yielded high-quality reconstructions. While the increased image resolution already presents a challenge, the main practical issue in our new dataset is the level of detail that needs to be modeled. In particular, we identified the cube rotation and the lower joint position to be the factors of variation that were the hardest to capture. This is likely because these factors only produce relatively small changes in the image and hence the reconstruction error.\nTo overcome these issues, we propose a deeper and wider neural architecture than those commonly used in the disentangled representation learning literature, where the encoder and decoder typically have 4 convolutional and 2 fully-connected layers. Our encoder consists of a convolutional layer, 10 residual blocks, and 2 fully-connected layers. 
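The residual blocks use a ReZero-style learnable scalar gate (detailed next). As a toy illustration of that gating pattern, with the two-convolution branch abstracted into a plain function (a sketch under our own naming, not the actual implementation):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    return np.where(x > 0, x, negative_slope * x)

class GatedResidualBlock:
    """Residual branch scaled by a learnable scalar gate alpha, initialised
    to zero so that each block starts out as the identity map
    (ReZero, Bachlechner et al., 2020)."""
    def __init__(self, branch):
        self.branch = branch  # stand-in for the two 3x3 convolutions
        self.alpha = 0.0      # learnable scalar gate

    def __call__(self, x):
        return x + self.alpha * self.branch(leaky_relu(x))
```

Initialising `alpha` to zero lets very deep stacks of such blocks train stably, since early in training the network behaves like a much shallower one.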
Some residual blocks are followed by 1x1 convolutions that change the number of channels, or by average pooling that downsamples the tensors by a factor of 2 along the spatial dimensions. Each residual block consists of two 3x3 convolutions with a leaky ReLU nonlinearity, and a learnable scalar gating mechanism (Bachlechner et al., 2020). Overall, the encoder has 23 convolutional layers and 2 fully connected layers. The decoder mirrors this architecture, with average pooling replaced by bilinear interpolation for upsampling. The total number of parameters is approximately 16.3M. See Appendix A for further implementation details.\n\nExperimental setup. We perform a large-scale empirical study on the simulated dataset introduced above by training 1,080 β-VAE models.2 For further experimental details we refer the reader to Appendix A. The hyperparameter sweep is defined as follows:\n\n• We train the models using either unsupervised learning or weakly supervised learning (Locatello et al., 2020). In the weakly supervised case, a model is trained with pairs of images that differ in k factors of variation. Here we fix k = 1 as it was shown to lead to higher disentanglement by Locatello et al. (2020). The dataset therefore consists of 500k pairs of images that differ in only one FoV.\n\n• We vary the parameter β in {1, 2, 4}, and use linear deterministic warm-up (Bowman et al., 2015; Sønderby et al., 2016) over the first {0, 10000, 50000} training steps.\n\n• The latent space dimensionality is in {10, 25, 50}.\n\n• Half of the models are trained with additive noise in the input image. This choice is motivated by the fact that adding noise to the input of neural networks has been shown to be beneficial for out-of-distribution generalization (Sietsma & Dow, 1991; Bishop, 1995).\n\n• Each of the 108 resulting configurations is trained with 10 random seeds.\n\nCan we scale up disentanglement learning? 
Most of the trained VAEs in our empirical study fully capture all the elements of a scene, correctly model heavy occlusions, and generate detailed, high-quality samples and reconstructions (see Appendix B.2).\n2Training these models requires approximately 2.8 GPU years on NVIDIA Tesla V100 PCIe.\nFrom visual inspections such as the latent traversals in Fig. 2, we observe that many trained models fully disentangle the ground-truth factors of variation. This, however, appears to only be possible in the weakly supervised scenario. The fact that models trained without supervision learn entangled representations is in line with the impossibility result for the unsupervised learning of disentangled representations from Locatello et al. (2019b). Latent traversals from a selection of models with different degrees of disentanglement are presented in Appendix B.3. Interestingly, the high-disentanglement models seem to correct for correlations and interpolate infeasible states, i.e. the fingertip traverses through the cube or the floor.\n\nSummary: The proposed architecture can scale disentanglement learning to more realistic settings, but a form of weak supervision is necessary to achieve high disentanglement.\n\nHow useful are common disentanglement metrics in realistic scenarios? The violin plot in Fig. 3 (left) shows that DCI and MIG measure high disentanglement under weak supervision and lower disentanglement in the unsupervised setting. This is consistent with our qualitative conclusion from visual inspection of the models (Appendix B.3) and with the aforementioned impossibility result. Many of the models trained with weak supervision exhibit a very high DCI score (29% of them have >99% DCI, some of them up to 99.89%). SAP and Modularity appear to be ineffective at capturing disentanglement in this setting, as also observed by Locatello et al. (2019b). 
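For reference, the MIG score used above can be sketched for discretised codes and factors as follows (a simplified illustration under our own naming; practical estimators may discretise and estimate mutual information differently):

```python
from collections import Counter
import numpy as np

def entropy(xs):
    """Empirical entropy (in nats) of a discrete sample."""
    n = len(xs)
    return -sum((c / n) * np.log(c / n) for c in Counter(xs).values())

def mutual_info(xs, ys):
    """Empirical mutual information between two discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def mig(codes, factors):
    """Mutual Information Gap: for each factor, the gap between the two most
    informative latent codes, normalised by the factor's entropy, then
    averaged over factors."""
    gaps = []
    for v in factors:
        mis = sorted((mutual_info(z, v) for z in codes), reverse=True)
        gaps.append((mis[0] - mis[1]) / entropy(v))
    return float(np.mean(gaps))
```

A high MIG thus requires that each factor be captured by a single latent dimension rather than spread across several.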
Finally, note that the BetaVAE and FactorVAE metrics are not straightforward to evaluate on datasets that do not contain all possible combinations of factor values. According to Fig. 3 (right), DCI and MIG strongly correlate with test accuracy of GBT classifiers predicting the FoVs. In the weakly supervised setting, these metrics are strongly correlated with the ELBO (positively) and with the reconstruction loss (negatively). We illustrate these relationships in more detail in Appendix B.4. Such correlations were also observed by Locatello et al. (2020) on significantly less complex datasets, and can be exploited for unsupervised model selection: these unsupervised metrics can be used as proxies for disentanglement metrics, which would require fully labeled data.\n\nSummary: DCI and MIG appear to be useful disentanglement metrics in realistic scenarios, whereas other metrics seem to fall short of capturing disentanglement or can be difficult to compute. When using weak supervision, we can select disentangled models with unsupervised metrics." }, { "heading": "4 FRAMEWORK FOR THE EVALUATION OF OOD GENERALIZATION", "text": "Previous work has focused on evaluating the usefulness of disentangled representations for various downstream tasks, such as predicting ground truth factors of variation, fair classification, and abstract reasoning. Here we propose a new framework for evaluating the out-of-distribution (OOD) generalization properties of representations. More specifically, we consider a downstream task – in
Our goal is to investigate to what extent, if at all, downstream tasks trained on disentangled representations exhibit a higher degree of OOD generalization than those trained on entangled representations.\nLet D denote the training set for disentangled representation learning. To investigate OOD generalization, we train downstream regression models on a subset D1 ⊂ D to predict ground truth factor values from the learned representation computed by the encoder. We independently train one predictor per factor. We then test the regression models on a set D2 that differs distributionally from the training set D1, as it either contains images corresponding to held-out values of a chosen FoV (e.g. unseen object colors), or it consists of real-world images. We now differentiate between two scenarios: (1) D2 ⊂ D, i.e. the OOD test set is a subset of the dataset for representation learning; (2) D and D2 are disjoint and distributionally different. These two scenarios will be denoted by OOD1 and OOD2, respectively. For example, consider the case in which distributional shifts are based on one FoV: the color of the object. Then, we could define these datasets such that images in D always contain a red or blue object, and those in D1 ⊂ D always contain a red object. In the OOD1 scenario, images in D2 would always contain a blue object, whereas in the OOD2 case they would always contain an object that is neither red nor blue.\nThe regression models considered here are Gradient Boosted Trees (GBT), random forests, and MLPs with {1, 2, 3} hidden layers. Since random forests exhibit a similar behavior to GBTs, and all MLPs yield similar results to each other, we choose GBTs and the 2-layer MLP as representative models and only report results for those. To quantify prediction quality, we normalize the ground truth factor values to the range [0, 1], and compute the mean absolute error (MAE). 
Since the values are normalized, we can define our transfer metric as the average of the MAE over all factors (except for the FoV that is OOD)." }, { "heading": "5 BENEFITS AND TRANSFER OF STRUCTURED REPRESENTATIONS", "text": "Experimental setup. We evaluate the transfer metric introduced in Section 4 across all 1,080 trained models. To compute this metric, we train regression models to predict the ground truth factors of variation, and test them under distributional shift. We consider distributional shifts in terms of cube color or sim2real, and we do not evaluate downstream prediction of cube color. We report scores for two different regression models: a Gradient Boosted Tree (GBT) and an MLP with 2 hidden layers of size 256. In Appendix A we provide details on the datasets used in this section.\nIn the OOD1 setting, we have D2 ⊂ D, hence the encoder is in-distribution: we are testing the predictor on representations of images that were in the training set of the representation learning algorithm. Therefore, we expect the representations to be meaningful. 
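As a concrete reading of the transfer metric defined above (normalize each factor to [0, 1], compute the per-factor MAE, average over all factors except the OOD one), a minimal pure-Python sketch could look as follows; the function name and argument layout are our own, not taken from the paper's code:

```python
def transfer_metric(y_true, y_pred, factor_ranges, ood_factor=None):
    """Average MAE over factors of variation, each normalized to [0, 1].

    y_true, y_pred: lists of per-sample factor-value vectors.
    factor_ranges: per-factor (min, max) pairs used for normalization.
    ood_factor: index of the distributionally shifted factor, which is
                excluded from the average (hypothetical argument name).
    """
    maes = []
    for f, (lo, hi) in enumerate(factor_ranges):
        if f == ood_factor:
            continue  # the shifted factor itself is not scored
        errs = [abs((t[f] - lo) / (hi - lo) - (p[f] - lo) / (hi - lo))
                for t, p in zip(y_true, y_pred)]
        maes.append(sum(errs) / len(errs))
    return sum(maes) / len(maes)
```

For instance, a prediction that is off by 1 unit on every factor whose range is (0, 10) yields a transfer error of 0.1.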
We consider three scenarios:\n• OOD1-A: The regression models are trained on 1 cube color (red) and evaluated on the remaining 7 colors.\n• OOD1-B: The regression models are trained on 4 cube colors with high hue in the HSV space, and evaluated on 4 cube colors with low hue (extrapolation).\n• OOD1-C: The regression models are again trained and evaluated on 4 cube colors, but the training and evaluation colors are alternating along the hue dimension (interpolation).\nIn the more challenging setting where even the encoder goes out-of-distribution (OOD2, with D2 ∩ D = ∅), we train the regression models on a subset of the training set D that includes all 8 cube colors, and we consider the two following scenarios:\n• OOD2-A: The regression models are evaluated on simulated data, on 4 cube colors that are out of the encoder’s training distribution.\n• OOD2-B: The regression models are evaluated on real-world images of the robotic setup, without any adaptation or fine-tuning.\nIs disentanglement correlated with OOD1 generalization? In Fig. 4 we consistently observe a negative correlation between disentanglement and transfer error across all OOD1 settings. The correlation is mild when using MLPs, strong when using GBTs. This difference is expected, as GBTs have an axis-alignment bias whereas MLPs can – given enough data and capacity – disentangle an entangled representation more easily. Our results therefore suggest that highly disentangled representations are useful for generalizing out-of-distribution as long as the encoder remains in-distribution. This is in line with the correlation found by Locatello et al. (2019b) between disentanglement and the GBT10000 metric. There, however, GBTs are tested on the same distribution as the training distribution, while here we test them under distributional shift. Given that the computation of disentanglement scores requires labels, this is of little benefit in the unsupervised setting. 
However, it can be exploited in the weakly supervised setting, where disentanglement was shown to correlate with ELBO and reconstruction loss (Section 3). Therefore, model selection for representations that transfer well in these scenarios is feasible based on the ELBO or reconstruction loss, when weak supervision is available. Note that, in absolute terms, the OOD generalization error with encoder in-distribution (OOD1) is very low in the high-disentanglement case (the only exception being the MLP in the OOD1-C case, with the 1-7 color split, which seems to overfit). This suggests that disentangled representations can be useful in downstream tasks even when transferring out of the training distribution.\nSummary: Disentanglement seems to be positively correlated with OOD generalization of downstream tasks, provided that the encoder remains in-distribution (OOD1). Since in the weakly supervised case disentanglement correlates with the ELBO and the reconstruction loss, model selection can be performed using these metrics as proxies for disentanglement. These metrics have the advantage that they can be computed without labels, unlike disentanglement metrics.\nIs disentanglement correlated with OOD2 generalization? As seen in Fig. 5, the negative correlation between disentanglement and GBT transfer error is weaker when the encoder is out of distribution (OOD2). Nonetheless, we observe a non-negligible correlation for GBTs in the OOD2A case, where we investigate out-of-distribution generalization along one FoV, with observations in D2 still generated from the same simulator. In the OOD2-B setting, where the observations are taken from cameras in the corresponding real-world setting, the correlation between disentanglement and transfer performance appears to be minor at best. 
This scenario can be considered a variant of zero-shot sim2real generalization.\nSummary: Disentanglement has a minor effect on out-of-distribution generalization outside of the training distribution of the encoder (OOD2).\nWhat else matters for OOD2 generalization? Results in Fig. 6 suggest that adding Gaussian noise to the input during training as described in Section 3 leads to significantly better OOD2 generalization, and has no effect on OOD1 generalization. Adding noise to the input of neural networks is known to lead to better generalization (Sietsma & Dow, 1991; Bishop, 1995). This is in agreement with our results, since OOD1 generalization does not require generalization of the encoder, while OOD2 does. Interestingly, closer inspection reveals that the contribution of different factors of variation to the generalization error can vary widely. See Appendix B.5 for further details. In particular, with noisy input, the position of the cube is predicted accurately even in real-world images (<5% mean absolute error on each axis). This is promising for robotics applications, where the true state of the joints is observable but inference of the cube position relies on object tracking methods. Fig. 7 shows an example of real-world inputs and reconstructions of their simulated equivalents.\nSummary: Adding input noise during training appears to be significantly beneficial for OOD2 generalization, while having no effect when the encoder is kept in its training distribution (OOD1)." }, { "heading": "6 CONCLUSION", "text": "Despite the growing importance of the field and the potential societal impact in the medical domain (Chartsias et al., 2018) and fair decision making (Locatello et al., 2019a), state-of-the-art approaches for learning disentangled representations have so far only been systematically evaluated on synthetic toy datasets. 
Here we introduced a new high-resolution dataset with 1M simulated images and\nover 1,800 annotated real-world images of the same setup. This dataset exhibits a number of challenges and features which are not present in previous datasets: it contains correlations between factors, occlusions, a complex underlying structure, and it allows for evaluation of transfer to unseen simulated and real-world settings. We proposed a new VAE architecture to scale disentangled representation learning to this realistic setting and conducted a large-scale empirical study of disentangled representations on this dataset. We discovered that disentanglement is a good predictor of OOD generalization of downstream tasks and showed that, in the context of weak supervision, model selection for good OOD performance can be based on the ELBO or the reconstruction loss, which are accessible without explicit labels. Our setting allows for studying a wide variety of interesting downstream tasks in the future, such as reinforcement learning or learning a dynamics model of the environment. Finally, we believe that in the future it will be important to take further steps in the direction of this paper by considering settings with even more complex structures and stronger correlations between factors." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors thank Shruti Joshi and Felix Widmaier for their useful comments on the simulated setup, Anirudh Goyal for helpful discussions and comments, and CIFAR for the support. We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Frederik Träuble." 
}, { "heading": "B ADDITIONAL RESULTS", "text": "" }, { "heading": "B.1 DATASET CORRELATIONS", "text": "" }, { "heading": "B.2 SAMPLES AND RECONSTRUCTIONS", "text": "" }, { "heading": "B.3 LATENT TRAVERSALS", "text": "" }, { "heading": "B.4 UNSUPERVISED METRICS AND DISENTANGLEMENT", "text": "" }, { "heading": "B.5 OUT-OF-DISTRIBUTION TRANSFER", "text": "" }, { "heading": "B.6 OUT-OF-DISTRIBUTION RECONSTRUCTIONS", "text": "" } ]
2021
null
SP:201c4028ac02743edfeb90aca191850f67d61445
[ "The authors propose a new zero-shot hyper-parameter optimization method based on the meta-learning framework. The proposed method incorporates two ideas from the meta-learning framework, namely task similarity based on the meta-features and dataset identification. The former idea is used to achieve the requirement that responses of similar datasets should be similar. The latter idea has the role of preventing data of dissimilar tasks from being embedded in close proximity in the meta-feature space.", "In this paper, the authors formulated a new objective function for HPO, which included an additional regularization term based on the dataset similarity. The authors used the distance between the meta-features of selected datasets to measure this dataset similarity and assumed that similar datasets should have similar hyper-parameters. The experiments are complete and demonstrate the advantages of using this new optimization formulation." ]
Zero-shot hyper-parameter optimization refers to the process of selecting hyper-parameter configurations that are expected to perform well for a given dataset upfront, without access to any observations of the losses of the target response. Existing zero-shot approaches are posed as initialization strategies for Bayesian Optimization and they often rely on engineered meta-features to measure dataset similarity, operating under the assumption that the responses of similar datasets behave similarly with respect to the same hyper-parameters. Solutions for zero-shot HPO are embarrassingly parallelizable and thus can vastly reduce the required wallclock time of learning a single model. We propose a very simple HPO model called Gray-box Zero(O)-Shot Initialization (GROSI) as a conditional parametric surrogate that learns a universal response model by exploiting the relationship between the hyper-parameters and the dataset meta-features directly. In contrast to existing HPO solutions, we achieve transfer of knowledge without engineered meta-features, but rather through a shared model that is trained simultaneously across all datasets. We design and optimize a novel loss function that allows us to regress from the dataset/hyper-parameter pair onto the response. Experiments on 120 datasets demonstrate the strong performance of GROSI, compared to conventional initialization strategies. We also show that by fine-tuning GROSI to the target dataset, we can outperform state-of-the-art sequential HPO algorithms.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Rémi Bardenet", "Mátyás Brendel", "Balázs Kégl", "Michele Sebag" ], "title": "Collaborative hyperparameter tuning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of machine learning research,", "year": 2012 }, { "authors": [ "Pavel B Brazdil", "Carlos Soares", "Joaquim Pinto Da Costa" ], "title": "Ranking learning algorithms: Using ibl and meta-learning on accuracy and time results", "venue": "Machine Learning,", "year": 2003 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Towards a neural statistician", "venue": "arXiv preprint arXiv:1606.02185,", "year": 2016 }, { "authors": [ "Matthias Feurer", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Using meta-learning to initialize Bayesian optimization of hyperparameters", "venue": "In Proceedings of the 2014 International Conference on Meta-learning and Algorithm Selection-Volume", "year": 2014 }, { "authors": [ "Matthias Feurer", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Initializing Bayesian hyperparameter optimization via meta-learning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Matthias Feurer", "Benjamin Letham", "Eytan Bakshy" ], "title": "Scalable meta-learning for Bayesian optimization using ranking-weighted Gaussian process ensembles", "venue": "In AutoML Workshop at ICML,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: 
Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Hadi S. Jomaa", "Lars Schmidt-Thieme", "Josif Grabocka" ], "title": "Dataset2vec: Learning dataset metafeatures", "venue": "arXiv preprint arXiv:1905.11063,", "year": 2019 }, { "authors": [ "Donald R Jones", "Matthias Schonlau", "William J Welch" ], "title": "Efficient global optimization of expensive black-box functions", "venue": "Journal of Global optimization,", "year": 1998 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Věra Krková" ], "title": "Kolmogorov’s theorem and multilayer neural networks", "venue": "Neural networks,", "year": 1992 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Rui Leite", "Pavel Brazdil" ], "title": "Predicting relative performance of classifiers from samples", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Shusen Liu", "Dan Maljovec", "Bei Wang", "Peer-Timo Bremer", "Valerio Pascucci" ], "title": "Visualizing highdimensional data: Advances in the past decade", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2016 }, { "authors": [ "Jonas Močkus" ], "title": "On Bayesian methods for seeking the extremum", "venue": "In Optimization techniques IFIP technical conference,", "year": 1975 }, { "authors": [ 
"Valerio Perrone", "Rodolphe Jenatton", "Matthias W Seeger", "Cédric Archambeau" ], "title": "Scalable hyperparameter transfer learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Valerio Perrone", "Huibin Shen", "Matthias W Seeger", "Cédric Archambeau", "Rodolphe Jenatton" ], "title": "Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Carl Edward Rasmussen" ], "title": "Gaussian processes in machine learning", "venue": "In Summer School on Machine Learning,", "year": 2003 }, { "authors": [ "Nicolas Schilling", "Martin Wistuba", "Lars Schmidt-Thieme" ], "title": "Scalable hyperparameter optimization with products of Gaussian process experts", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2016 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical Bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The Journal of Machine Learning Research", "venue": null, "year": 2014 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural networks for machine learning,", "year": 2012 }, { "authors": [ "Joaquin Vanschoren" ], "title": "Meta-learning: A survey", "venue": "arXiv preprint arXiv:1810.03548,", "year": 2018 }, { "authors": [ "Michael Volpp", "Lukas Fröhlich", "Andreas Doerr", "Frank Hutter", "Christian Daniel" ], "title": "Meta-learning acquisition functions for Bayesian optimization", "venue": "arXiv preprint arXiv:1904.02642,", "year": 2019 }, { "authors": [ "L Darrell Whitley", "Francisco Chicano", "Brian W Goldman" ], "title": "Gray box optimization for mk landscapes (nk landscapes and max-ksat)", "venue": "Evolutionary computation,", "year": 2016 }, { "authors": [ "Fela Winkelmolen", "Nikita Ivkin", "H Furkan Bozkurt", "Zohar Karnin" ], "title": "Practical and sample efficient zero-shot hpo", "venue": "arXiv preprint arXiv:2007.13382,", "year": 2020 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Learning hyperparameter optimization initializations", "venue": "IEEE international conference on data science and advanced analytics (DSAA),", "year": 2015 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Sequential model-free hyperparameter tuning", "venue": "In 2015 IEEE international conference on data mining,", "year": 2015 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Two-stage transfer surrogate model for automatic hyperparameter optimization", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2016 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Scalable Gaussian process-based transfer surrogates for hyperparameter 
optimization", "venue": "Machine Learning,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Within the research community, efforts toward solving the problem of hyper-parameter optimization (HPO) have concentrated mainly on sequential model-based optimization (SMBO), i.e. iteratively fitting a probabilistic response model, typically a Gaussian process (Rasmussen (2003)), to a history of observations of losses of the target response, and suggesting the next hyper-parameters via a policy (an acquisition function) that balances exploration and exploitation by leveraging the uncertainty in the posterior distribution (Jones et al. (1998); Wistuba et al. (2018); Snoek et al. (2012)). However, even when solutions are defined in conjunction with transfer learning techniques (Bardenet et al. (2013); Wistuba et al. (2016); Feurer et al. (2015)), the performance of SMBO is heavily affected by the choice of the initial hyper-parameters. Furthermore, SMBO is sequential by design and additional acceleration by parallelization is not possible.\nIn this paper, we present the problem of zero-shot hyper-parameter optimization as a meta-learning objective that exploits dataset information as part of the surrogate model. Instead of treating HPO as a black-box function, operating blindly on the response of the hyper-parameters alone, we treat it as a gray-box function (Whitley et al. (2016)), by capturing the relationship between the dataset meta-features and hyper-parameters to approximate the response model.\nMore specifically, we propose a novel formulation of HPO as a conditional gray-box function optimization problem, Section 4, that allows us to regress from the dataset/hyper-parameter pair directly onto the response. Driven by the assumption that similar datasets should have similar response approximations, we introduce an additional data-driven similarity regularization objective to penalize the difference between the predicted response of similar datasets. 
In Section 5, we perform an extensive battery of experiments that highlight the capacity of our universal model to serve as a solution for: (1) zero-shot HPO as a stand-alone task, (2) zero-shot as an initialization strategy for Bayesian Optimization (BO), (3) transferable sequential model-based optimization. A summary of our contributions is:\n• a formulation of the zero-shot hyper-parameter optimization problem in which our response model predicts upfront the full set of hyper-parameter configurations to try, without access to observations of losses of the target response;\n• a novel multi-task optimization objective that models the inherent similarity between datasets and their respective responses;\n• three new meta-datasets with different search spaces and cardinalities to facilitate the experiments and serve as a benchmark for future work;\n• an empirical demonstration of the performance of our approach through a battery of experiments that address the aforementioned research aspects, and a comparison against state-of-the-art HPO solutions for transfer-learning." }, { "heading": "2 RELATED WORK", "text": "The straightforward zero-shot approaches for HPO consist of random search (Bergstra & Bengio (2012)), or simply selecting hyper-parameters that perform well on general tasks (Brazdil et al. (2003)). Some recent work has also shown that simply selecting random hyper-parameters from a restricted search space significantly outperforms existing solutions, and improves the performance of conventional SMBO approaches (Perrone et al. (2019)). The restricted search space is created by eliminating regions that are further away from the best hyper-parameters of the training tasks.\nAnother prominent direction for zero-shot HPO depends heavily on engineered meta-features, i.e. dataset characteristics (Vanschoren (2018)), to measure the similarity of datasets. 
Following the assumption that the responses of similar datasets behave similarly with respect to the same hyper-parameters, it has been shown that even the simplest of meta-features (Bardenet et al. (2013)) improve the performance of single-task BO algorithms (Feurer et al. (2014; 2015)). The target response is initialized with the top-performing hyper-parameters of the dataset's nearest neighbor in the meta-feature space. The shortcomings of using engineered meta-features are that they are hard to define (Leite & Brazdil (2005)), and are often selected through trial-and-error or expert domain knowledge. As a remedy, replacing engineered meta-features with learned meta-features (Jomaa et al. (2019)) compensates for such limitations, by producing expressive meta-features agnostic to any meta-task, such as HPO.\nZero-shot HPO is also posed as an optimization problem that aims to minimize the meta-loss over a collection of datasets (Wistuba et al. (2015a)) by replacing the discrete minimum function with a differentiable softmin function as an approximation. The initial configurations boost single-task BO without any meta-features. In (Wistuba et al. (2015b)), hyper-parameter combinations are assigned a static ranking based on the cumulative average normalized error, and dataset similarity is estimated based on the relative ranking of these combinations. Winkelmolen et al. (2020) introduce a Bayesian Optimization solution for zero-shot HPO by iteratively fitting a surrogate model over the observed responses of different tasks, and selecting the next hyper-parameters and datasets that minimize the aggregated observed loss.\nAside from zero-shot HPO, transfer learning is employed by learning better response models (Wistuba et al. (2016)) based on the similarity of the response. Feurer et al. (2018) propose an ensemble model for BO by building the target response model as a weighted sum of the predictions of base models as well as the target model. 
In addition to the transferable response models, Volpp et al. (2019) design a transferable acquisition function as a policy for hyper-parameter optimization defined in a reinforcement learning framework. As a replacement for the standard Gaussian process, Perrone et al. (2018) train a multi-task adaptive Bayesian linear regression model with a shared feature extractor that provides context information for each independent task.\nIn contrast to the literature, we formulate the problem of zero-shot HPO as a gray-box function optimization problem, by designing a universal response model defined over the combined domain of datasets and hyper-parameters. We rely on the embeddings to estimate the similarities across datasets and design a novel multi-task optimization objective to regress directly on the response. This allows us to sidestep the complexity that comes with Bayesian uncertainty, as well as the trouble of engineering similarity measures." }, { "heading": "3 HYPER-PARAMETER OPTIMIZATION", "text": "Consider a dataset D = {(x(Train), y(Train)), (x(Val), y(Val)), (x(Test), y(Test))} for a supervised learning task, with training, validation and test splits of predictors x ∈ X and targets y ∈ Y. We aim at training a parametric approximation of the target using ŷ := f(θ, λ) : X → Y, where θ ∈ Θ denotes the parameters and λ ∈ Λ its hyper-parameters, by minimizing a loss function L : Y × Y → R as:\nλ∗ = arg min_{λ∈Λ} L(y(Val), f(x(Val); θ∗, λ)) s.t. θ∗ = arg min_{θ∈Θ} L(y(Train), f(x(Train); θ, λ)) (1)\nWe hereafter denote the validation error as the response ℓ(λ) := L(y(Val), f(x(Val); θ∗, λ)). Unfortunately, a direct optimization of the response ℓ(λ) in terms of λ is not trivial, because θ∗ is the result of the minimization problem and its gradients with respect to λ are not easy to compute. 
Instead, in order to learn the optimal hyper-parameters λ, we train a probabilistic surrogate ℓ̂(λ; β) : Λ × B → R, parameterized by β ∈ B, with B as the space of response model parameters, that minimizes the negative log-likelihood of approximating the response ℓ(λ) over a set of K evaluations S := {(λ1, ℓ(λ1)), . . . , (λK, ℓ(λK))}. We denote P as the probability of estimating the response given a surrogate model. Given the surrogate, the next hyper-parameter to be evaluated, λ(next), is computed by maximizing an acquisition function A (e.g., EI (Močkus (1975))) as:\nλ(next) := arg max_{λ∈Λ} A(ℓ̂(λ; β∗)) s.t. β∗ := arg min_{β∈B} − Σ_{k=1}^{K} ln P(ℓ(λk), ℓ̂(λk; β)) (2)" }, { "heading": "4 META-LEARNING OF CONDITIONAL GRAY-BOX SURROGATES", "text": "Let us define a collection of T datasets as {D(1), . . . , D(T)} and let ℓ(t)(λ) measure the response of the hyper-parameter λ on the t-th dataset D(t). Furthermore, assume we have previously evaluated K(t) many hyper-parameters λ(t)_k, k ∈ {1, . . . , K(t)}, on that particular dataset. We condition the surrogate ℓ̂ to capture the characteristics of the t-th dataset, by taking as input the meta-feature representation of the dataset, φ(t). Therefore, a dataset-aware surrogate can be trained using meta-learning over a cumulative objective function O(β) as:\nO(β) := Σ_{t=1}^{T} Σ_{k=1}^{K(t)} (ℓ(t)(λ(t)_k) − ℓ̂(λ(t)_k, φ(t); β))^2 (3)" }, { "heading": "4.1 THE META-FEATURE EXTRACTOR", "text": "Introducing engineered meta-features has had a significant impact on hyper-parameter optimization. However, learning meta-features across datasets of varying schema in a task-agnostic setting provides more representative characteristics than relying on hard-to-tune empirical estimates. The meta-feature extractor is a set-based function (Zaheer et al. 
(2017)) that presents itself as an extended derivation of the Kolmogorov-Arnold representation theorem (Krková (1992)), which states that a multi-variate function φ can be defined as an aggregation of univariate functions over single variables, Appendix B.\nEach supervised (tabular) dataset D(t) := (x(t), y(t)) consists of instances x(t) ∈ X ⊆ R^{N×M} and targets y(t) ∈ Y ⊆ R^{N×C}, such that N, M and C represent the number of instances, predictors and targets, respectively. The dataset can be further represented as a set of smaller components, a set of sets, D(t) = {(x(t)_{i,m}, y(t)_{i,c}) | m ∈ {1, . . . , M}, i ∈ {1, . . . , N}, c ∈ {1, . . . , C}}. A tabular dataset composed of columns (predictors, targets) and rows (instances) is thus reduced to single predictor-target pairs instead of instance-target pairs. Based on this representation, a meta-feature extractor, parameterized as a neural network (Jomaa et al. (2019)), is formulated in Equation 4. For simplicity of notation, we drop the superscript (t) unless needed.\nφ(D) = h((1/(MC)) Σ_{m=1}^{M} Σ_{c=1}^{C} g((1/N) Σ_{i=1}^{N} f(x_{i,m}, y_{i,c}))) (4)\nwith f : R^2 → R^{Kf}, g : R^{Kf} → R^{Kg} and h : R^{Kg} → R^{K} represented by neural networks with Kf, Kg, and K output units, respectively. This set-based formulation captures the correlation between each variable (predictor) and its assigned target and is permutation-invariant, i.e. the output is unaffected by the ordering of the pairs in the set. Other set-based functions (Edwards & Storkey (2016); Lee et al. (2019)) can also be used for meta-feature extraction; however, we focus on this deep-set formulation (Jomaa et al. (2019)) because it is proven to work properly for hyper-parameter optimization." }, { "heading": "4.2 THE AUXILIARY DATASET IDENTIFICATION TASK", "text": "The dataset identification task, introduced previously as dataset similarity learning (Jomaa et al. 
(2019)), ensures that the meta-features of similar datasets are colocated in the meta-feature space, providing more expressive and distinct meta-features for every dataset.\nLet pD be a joint distribution over dataset pairs such that (D(t), D(q), s) ∈ T × T × {0, 1}, with s being a binary dataset similarity indicator. We define a classification model ŝ : T × T → R+ that provides an unnormalized probability estimate for s being 1, as follows:\nŝ(D(t), D(q)) = e^{−γ Z(φ(t), φ(q))} (5)\nwhere Z : R^K × R^K → R+ represents any distance metric, and γ is a tunable hyper-parameter. For simplicity, we use the Euclidean distance to measure the similarity between the extracted meta-features, i.e. Z(φ(t), φ(q)) = ‖φ(t) − φ(q)‖, and set γ = 1. The classification model is trained by optimizing the negative log-likelihood:\nP(β) := − Σ_{(t,q)∼pD+} log(ŝ(D(t), D(q))) − Σ_{(t,q)∼pD−} log(1 − ŝ(D(t), D(q))) (6)\nwith pD+ as the distribution of similar datasets, pD+ = {(D(t), D(q), s) ∼ pD | s = 1}, and pD− as the distribution of dissimilar datasets, pD− = {(D(t), D(q), s) ∼ pD | s = 0}. Similar datasets are defined as multi-fidelity subsets (batches) of each dataset." }, { "heading": "4.3 DATA-DRIVEN SIMILARITY REGULARIZATION", "text": "Our surrogate differs from prior practices, because we do not consider the response to be entirely black-box. Instead, since we know the features and the target values of a dataset even before evaluating any hyper-parameter, we model a gray-box surrogate by exploiting the dataset characteristics φ when approximating the response ℓ. As a result, if the surrogate faces a new dataset that is similar to one of the T datasets from the collection it was optimized on (i.e. similar meta-features φ extracted directly from the dataset), it will estimate a similar response. 
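The similarity estimate of Eq. 5 and the identification loss of Eq. 6 can be sketched in a few lines. This is an illustrative pure-Python re-implementation with hypothetical function names, not the authors' code (where the meta-features themselves come from the learned extractor):

```python
import math

def pair_similarity(phi_t, phi_q, gamma=1.0):
    """Unnormalized probability that two datasets are similar (Eq. 5):
    exp(-gamma * Euclidean distance between meta-feature vectors)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(phi_t, phi_q)))
    return math.exp(-gamma * dist)

def identification_loss(pos_pairs, neg_pairs, gamma=1.0, eps=1e-12):
    """Negative log-likelihood of Eq. 6 over lists of (phi_t, phi_q)
    meta-feature pairs from similar (pos) and dissimilar (neg) datasets."""
    loss = 0.0
    for phi_t, phi_q in pos_pairs:
        loss -= math.log(pair_similarity(phi_t, phi_q, gamma) + eps)
    for phi_t, phi_q in neg_pairs:
        loss -= math.log(1.0 - pair_similarity(phi_t, phi_q, gamma) + eps)
    return loss
```

Identical meta-features give a similarity of 1 and contribute (almost) zero loss as a positive pair, while pushing the loss up sharply as a negative pair, which is what drives dissimilar tasks apart in the meta-feature space.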
Yet, if we know a priori that two datasets are similar by means of the distance of their meta-features, we can explicitly regularize the surrogate to produce similar response estimations for such similar datasets, as:\nR(β) := Σ_{t=1}^{T−1} Σ_{q=t+1}^{T} Σ_{k=1}^{K(t)} ‖φ(t) − φ(q)‖ (ℓ̂(λ(t)_k, φ(t); β) − ℓ̂(λ(t)_k, φ(q); β))^2 (7)\nOverall, we train the surrogate model to estimate the collection of response evaluations and explicitly capture the dataset similarity by solving the following problem, Equation 8, end-to-end, where α ∈ R controls the amount of similarity regularization, and δ ∈ R controls the impact of the dataset identification task:\nβ∗ := arg min_{β∈B} O(β) + α R(β) + δ P(β) (8)" }, { "heading": "NETWORK ARCHITECTURE", "text": "Our model architecture is divided into two modules, ℓ̂ := φ ◦ ψ: the meta-feature extractor φ and the regression head ψ. The meta-feature extractor φ : R^2 → R^{Kh} is composed of three functions, Equation 4, namely f, g and h. The regression head is also composed of two functions, i.e. ψ := ψ1 ◦ ψ2. We define ψ1 : R^{Kh} × Λ → R^{Kψ1} as the function that takes as input the meta-feature/hyper-parameter pair, and ψ2 : R^{Kψ1} → R as the function that approximates the response. Finally, let Dense(n) define one fully connected layer with n neurons, and ResidualBlock(n, m) be m × Dense(n) with residual connections (Zagoruyko & Komodakis (2016)). We select the architecture presented in Table 1 based on the best observed average performance on the held-out validation sets across all meta-datasets, Appendix E.1.
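For illustration, the combined objective of Equation 8 can be sketched for a generic callable surrogate as follows. The data layout (`tasks` as a list of meta-feature/evaluation pairs) and function names are our own simplifications, and the dataset-identification term P(β) is passed in precomputed:

```python
def euclid(a, b):
    # Euclidean distance between two meta-feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def total_objective(surrogate, tasks, alpha=0.1, delta=0.1, P=0.0):
    """Eq. 8 sketch: O(beta) + alpha * R(beta) + delta * P(beta).

    surrogate(lam, phi): any callable response model.
    tasks: list of (phi, [(lam, observed_response), ...]) per dataset.
    P: precomputed dataset-identification loss (Eq. 6).
    """
    # O(beta), Eq. 3: squared error on the observed responses
    O = 0.0
    for phi, evals in tasks:
        for lam, resp in evals:
            O += (resp - surrogate(lam, phi)) ** 2
    # R(beta), Eq. 7: prediction differences across dataset pairs,
    # weighted by the meta-feature distance
    R = 0.0
    T = len(tasks)
    for t in range(T - 1):
        phi_t, evals_t = tasks[t]
        for q in range(t + 1, T):
            phi_q, _ = tasks[q]
            w = euclid(phi_t, phi_q)
            for lam, _ in evals_t:
                R += w * (surrogate(lam, phi_t) - surrogate(lam, phi_q)) ** 2
    return O + alpha * R + delta * P
```

In the paper the three terms are minimized jointly over the shared parameters β of the extractor and regression head; here the surrogate is simply a black box so the decomposition of the loss is easy to inspect.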
}, { "heading": "5 EXPERIMENTS", "text": "Our experiments are designed to answer three research questions1:\n\n• Q1: Can we learn a universal response model that provides useful hyper-parameter initializations from unseen datasets without access to previous observations of hyper-parameters for the dataset itself?\n\n• Q2: Do the proposed suggestions serve as a good initialization strategy for existing SMBO algorithms?\n\n• Q3: Aside from zero-shot HPO, does the performance of our method improve by refitting the response model to the observations of the hyper-parameters for the target dataset, and how well does our approach compare to state-of-the-art methods in transfer learning for HPO?2" }, { "heading": "5.1 TRAINING PROTOCOL", "text": "In Algorithm 1 we describe the pseudo-code for optimizing our response model via standard meta-learning optimization routines. We use stochastic gradient descent to optimize the internal model, and the Adam optimizer (Kingma & Ba (2015)) to optimize the outer loop. We set the number of inner iterations to v = 5, and use a learning rate of 0.001 for both optimizers. We use a batch size of 8 tasks sampled randomly with each iteration. The code is implemented in TensorFlow (Abadi et al. (2016)). The performance of the various optimizers is assessed by measuring the regret, which represents the distance between an observed response and the optimal response on a response surface. For hyper-parameter optimization, the meta-datasets are provided beforehand; consequently, the optimal response is known. Since we normalize the response surfaces between (0, 1), we observe the normalized regret. The reported results represent the average over a 5-fold cross-validation split for each meta-dataset, with 80 meta-train, 16 meta-valid, and 24 meta-test sets, and one unit of standard deviation." }, { "heading": "5.2 META-DATASET", "text": "We create three meta-datasets by using 120 datasets chosen from the UCI repository (Asuncion & Newman (2007)).
We then create the meta-instances by training a feedforward neural network and report the validation accuracy.\n1For a better understanding of the different problem settings, see Appendix A\n2The associated code and meta-dataset described will be available upon acceptance.\nEach dataset is provided with a predefined split of 60% train, 15% validation, and 25% test instances. We train each configuration for 50 epochs with a learning rate of 0.001. The hyper-parameter search space is described in Table 2.\nThe layout hyper-parameter (Jomaa et al. (2019)) corresponds to the overall shape of the neural network, and provides information regarding the number of neurons in each layer. For example, all the layers in the neural network with a layout share the same number of neurons. We introduce an additional layout, 4, where the number of neurons in each layer is successively halved until it reaches the corresponding number of neurons in the central layer, then doubles successively. We also use dropout (Srivastava et al. (2014)) and batch normalization (Ioffe & Szegedy (2015)) as regularization strategies, and stochastic gradient descent (GD), ADAM (Kingma & Ba (2015)) and RMSProp (Tieleman & Hinton (2012)) as optimizers. SeLU (Klambauer et al. (2017)) represents the self-normalizing activation unit. The search space consists of all possible combinations of the hyper-parameters. After removing redundant configurations, the resulting meta-datasets have 256, 288 and 324 unique configurations respectively. For the purposes of our algorithm, we need access to the datasets used to generate the meta-features3. Further details are available in Appendix C."
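To make the layout hyper-parameter concrete, the five shapes can be generated programmatically. This sketch follows the worked examples in Appendix C.1 (widths double or halve between consecutive layers); the textual layout names are our stand-ins for the paper's symbols.

```python
def layout_neurons(layout, neurons, layers):
    # Returns the number of neurons per layer for each layout shape,
    # given the base `neurons` count and the number of `layers`.
    mid = layers // 2
    if layout == "square":      # constant width, e.g. [4, 4, 4, 4, 4]
        return [neurons] * layers
    if layout == "increasing":  # doubles each layer, e.g. [4, 8, 16, 32, 64]
        return [neurons * 2 ** i for i in range(layers)]
    if layout == "decreasing":  # halves each layer, e.g. [64, 32, 16, 8, 4]
        return [neurons * 2 ** (layers - 1 - i) for i in range(layers)]
    if layout == "diamond":     # widens to the centre, then narrows
        return [neurons * 2 ** (mid - abs(i - mid)) for i in range(layers)]
    if layout == "hourglass":   # narrows to the centre, then widens (the added layout)
        return [neurons * 2 ** abs(i - mid) for i in range(layers)]
    raise ValueError(f"unknown layout: {layout}")
```

With `neurons = 4` and `layers = 5` this reproduces exactly the five example shapes listed in Appendix C.1.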
}, { "heading": "5.3 BASELINES", "text": "We introduce two sets of baselines to evaluate against the different aspects of our approach:" }, { "heading": "ZERO-SHOT HYPER-PARAMETER OPTIMIZATION", "text": "• Random search (Bergstra & Bengio (2012)) is the simplest approach where the hyperparameters are selected randomly.\n• Average Rank represents the top hyper-parameters that had on average the highest-ranking across the meta-train datasets.\n• NN-〈METAFEATURE〉 (Feurer et al. (2015)) refers to the process of selecting the topperforming hyper-parameters of the nearest neighboring dataset based on their metafeatures. We use two sets of well-established engineered meta-features, which we refer to as MF1 (Feurer et al. (2015)) and MF2 (Wistuba et al. (2016)), as well as learned metafeatures (Jomaa et al. (2019)), which we denote by D2V. The similarity is measured by the Euclidean distance.\n• Ellipsoid (Perrone et al. (2019)) is also a random search approach, however the hyperparameters are sampled from a hyper-ellipsoid search space that is restricted to encompass as many optimal hyper-parameters from the training dataset as possible." }, { "heading": "SEQUENTIAL-MODEL BASED OPTIMIZATION FOR TRANSFER LEARNING", "text": "• GP (Rasmussen (2003)) is standard Gaussian process response model with a Matern 3/2 and automatic relevance determination. This approach is trained independently on each dataset.\n3Unfortunately, we could not evaluate our approach on some of the published meta-datasets (Schilling et al. (2016)) due to the unavailability of the associated datasets (original predictors and target values) used for generation of the meta-instances\n• SMFO (Wistuba et al. (2015b)) is a sequential model-free approach that provides a collection of hyper-parameters by minimizing the ranking loss across all the tasks in the meta-train datasets.\n• TST-R (Wistuba et al. 
(2016)) is a two-stage approach where the parameters of the target response model are adjusted via a kernel-weighted average based on the similarity of the hyper-parameter response between the target dataset and the training datasets. We also evaluate the variant of this approach that relies on meta-features, by replacing the engineered meta-features with learned meta-features, TST-D2V.\n• RGPE (Feurer et al. (2018)) is an ensemble model that estimates the target response model as a weighted combination of the training datasets’ response models and the target itself. The weights are assigned based on a ranking loss of the respective model.\n• ABLR (Perrone et al. (2018)) is a multi-task ensemble of adaptive Bayesian linear regression models with all the tasks sharing a common feature extractor.\n• TAF-R (Wistuba et al. (2018)) learns a transferable acquisition function, unlike the aforementioned algorithms that focus on a transferable response model, that selects the next hyper-parameter based on a weighted combination of the expected improvement of the target task, and predicted improvement on the source tasks.\n• MetaBO (Volpp et al. (2019)) is another transferable acquisition function, optimized as a policy in a reinforcement learning framework. This approach, however, demands a pre-computed target response model as part of the state representation.\nIn our approach, we learn a universal response model based on the underlying assumption that the response is not only dependent on the hyper-parameters, as is assumed in black-box optimization techniques, but also on the dataset itself, presenting the problem as a gray-box function optimization." }, { "heading": "5.4 RESULTS AND DISCUSSION", "text": "Q1: ZERO-SHOT HPO AS A STAND-ALONE PROBLEM\nIn Table 3 we report the final normalized regret achieved by the different zero-shot approaches for the first 20 hyper-parameters (Feurer et al. (2015)). 
Our method provides dataset-conditioned hyperparameters that perform better than heuristics for small budgets4. The use of engineered meta-features to represent datasets for HPO solutions is not reliable, as the results achieved by NN-〈MF1〉 and NN-〈MF2〉 are no better than random. On the other hand, using the meta-features extracted from the dataset directly, NN-〈D2V〉serves as a better approximation. Furthermore, random sampling from the restricted hyper-ellipsoid also outperforms the use of initialization strategies based on meta-features. We obtain the zero-shot hyper-parameters via Algorithm 2. The D2V meta-features are obtained via Algorithm 5.\nQ2: ZERO-SHOT HPO AS AN INITIALIZATION STRATEGY FOR SINGLE-TASK SEQUENTIAL HPO" }, { "heading": "METHODS", "text": "We use the aforementioned initialization strategies to warm-start single task GP with a Matern 3/2 kernel and automatic relevance determination as the response model. The quality of our suggested hyper-parameters is reflected in the improved performance of the response model at the early stages compared to metafeature-based initialization and random search, Figure 1. The pseudo-code for sequential model-based optimization is provided by Algorithm 3." }, { "heading": "Regularization Md", "text": "Q3: SEQUENTIAL GRAY-BOX FUNCTION OPTIMIZATION\nThe proposed universal response model provides useful hyper-parameters upfront without access to any observations of losses of the target responses. However, by iteratively refitting the model to the history of observations, the response prediction is improved, as depicted in Figure 5, and summarized in Table 4. We refit our algorithm by optimizing Equation 3, on the history of observed losses on the target dataset, Algorithm 4. 
We evaluate two policies for selecting the next hyper-parameter after refitting: (1) greedily selecting the hyper-parameter with the highest predicted response, GROSI(+1), and (2) selecting the next hyper-parameter randomly from the top 5 hyper-parameters with the highest predicted response, GROSI(+10), which achieved the best regret on average across the three meta-datasets, Appendix E.2. In contrast to the baselines that select hyper-parameters through an acquisition function that capitalizes on the uncertainty of the posterior samples, we incorporate uncertainty by selecting the next hyper-parameter from the top-k hyper-parameters uniformly at random and thus introduce a small trade-off between exploration and exploitation.\nFurthermore, our method outperforms the state-of-the-art transfer learning approaches for HPO in several cases while demonstrating in general competitive performance across all three meta-datasets, Table 4. The baselines are warm-started with 20 randomly selected hyper-parameters (Feurer et al. (2015)). For better readability, the uncertainty quantification can be found in Figure 4.\nFigure 2: Average normalized regret for state-of-the-art transfer learning HPO methods (compared: TST-R, TST-D2V, RGPE, TAF-R, ABLR, GROSI, GROSI(+10)).\n5For better visualization, some baselines are removed from Figure 2, but are still reported in Table 4" }, { "heading": "5.5 ABLATION STUDY", "text": "We perform several ablation experiments to analyze the contribution of each objective to the overall performance. The results are detailed in Table 5. Treating zero-shot HPO as a simple regression model by optimizing Equation 3 alone is suboptimal and does not scale across all meta-datasets.
We notice that adding the auxiliary dataset identification task, Equation 6 brings on significant improvement, similarly with the similarity driven regularization, Equation 7. This reinforces the notion that the responses of similar datasets behave similarly with regards to the hyper-parameters. Both losses help generate more expressive meta-features, the former more directly, by optimizing the inter- and intra-dataset similarities, and the latter indirectly by penalizing the difference in the predicted response.\nWe also initialize the meta-feature extractor, φ, by pretraining it independently, Algorithm 5. However, we notice that this leads to generally poor performance as the model arrives quickly at a local optimum. An artifact of the meta-dataset, we notice that pretraining GROSI for Regularization Md provides a small lift. A small sensitivity analysis can be found in Appendix F.1." }, { "heading": "6 CONCLUSION", "text": "In this paper, we formulate HPO as a gray-box function optimization problem that incorporates an important domain of the response function, the dataset itself. We design a novel universal response model for zero-shot HPO that provides good initial hyper-parameters for unseen datasets in the absence of associated observations of hyper-parameters. We propose and optimize a novel multi-task objective to estimate the response while learning expressive dataset meta-features. We also reinforce the assumption that similar datasets behave similarly to hyper-parameters by introducing a novel similarity-driven regularization technique. As part of future work, we will investigate the impact of our approach within the reinforcement learning framework." }, { "heading": "A DETAILED PROBLEM SETTING", "text": "By a learning task we denote a pair (p, `) of an unknown distribution p of pairs (x, y) ∈ RM+L, with M,L ∈ N, and a loss ` : RL × RL → R. 
A function ŷ : RM → RL is called a model for task p and\n`(ŷ; p) := E(x,y)∼p(`(y, ŷ(x)))\nits (expected) loss.\nLet a be a learning algorithm that yields for every sample D of pairs (x, y) from a task (p, `) and hyper-parameters λ ∈ RP a model ŷ for the task. We call\n`(λ) := `(a(D,λ); p)\nthe loss (or the response) of hyper-parameters λ. We say validation loss for the loss estimated on fresh validation data.\nSequential single-task hyper-parameter optimization problem. Given an initial number K of pairs (λk, lk) of hyper-parameters and their (validation) losses and a budget B ∈ N of trials, find sequentially B many hyper-parameters λK+1, . . . , λK+B , such that their smallest loss\nmin k∈1:K+B `(λk)\nis minimal among all such sequences. To compute the next guess λk+1, the hyper-parameters λ1, . . . , λk tried so far and their (validation) losses lk := `(λk) can be used.\nZero-shot cross-task hyper-parameter optimization problem. Let ptask be an unknown distribution of supervised learning tasks. Given a sample of triples ((p, `), λ, l) of learning tasks (p, `), hyper-parameters λ and their losses l, find for a fresh task (p, `) and a budget B ∈ N — without any observations of losses of hyper-parameters on this task — a set {λ1, . . . , λB} of hyper-parameters, such that their smallest loss mink∈1:B `(λk) is minimal among all such sets.\nSequential cross-task hyper-parameter optimization problem. Given both, (i) a sample of triples ((p, `), λ, l) of learning tasks (p, `), hyper-parameters λ and their losses l, and (ii) a fresh task (p, `) and a budget B ∈ N, find sequentially B many hyper-parameters λ1, . . . , λB , such that their smallest loss mink∈1:B `(λk) is minimal among all such sequences. To compute the next guess λk+1, the hyper-parameters λ1, . . . , λk tried so far and their (validation) losses lk := `(λk) as well as all data on other tasks can be used." 
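Since no observations of losses on the fresh task are allowed, the zero-shot problem above reduces to ranking candidate configurations under the learnt surrogate. A minimal sketch in the spirit of Algorithm 2 (the `response_model` callable is a placeholder for the trained surrogate l̂, not the authors' implementation):

```python
def zero_shot_hpo(meta_features, candidates, response_model, budget):
    # Rank every candidate hyper-parameter configuration by its predicted
    # loss on the target dataset and return the `budget` best, without
    # evaluating any configuration on the target task itself.
    ranked = sorted(candidates,
                    key=lambda lam: response_model(meta_features, lam))
    return ranked[:budget]
```

The quality of the returned set therefore depends entirely on how well the surrogate generalises from the meta-train tasks to the fresh dataset's meta-features.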
}, { "heading": "B THE META-FEATURE EXTRACTOR", "text": "The meta-feature extractor is a set-based function, and is represented as an extended derivation of the Kolmogorov-Arnold representation theorem (Krková (1992)), which states that a multi-variate function φ can be defined as an aggregation of univariate functions over single variables:\nφ(x1, . . . , xM ) ≈ 2M∑ j=0 hm ( M∑ m=1 gm,j(xm) )\n(9)\nIt is important to note that φ is permutation invariant, i.e. unaffected by any permutation on the input, which allows us to obtain the same output for the same multi-variate data regardless of the order of input. As a simple variant of this formulation (Zaheer et al. (2017)), we can replace the set of functions hm, with single function h, and gm,j with a function g. In this paper, we incorporate the meta-feature extractor as part of the response model, effectively learning a conditional response on the dataset meta-features directly such that the approximation is defined as ˆ̀(λ(t), φ(t);β), with φ(t) = φ(D(t))." }, { "heading": "C META-DATASETS", "text": "" }, { "heading": "C.1 LAYOUT HYPER-PARAMETER", "text": "Below are some examples of the number of neurons per layer for networks with different layout hyper-parameters given 4 neurons and 5 layers:\n• Layout : [4,4,4,4,4] • Layout C: [4,8,16,32,64] • Layout B: [64,32,16,8,4] • Layout : [4,8,16,8,4] • Layout4: [16,8,4,8,16]\nThe search space consists of all possible combinations of the hyper-parameters. After removing redundant configurations, e.g. 4 layout with 1 layer is similar to a layout with 1 layer, the resulting meta-datasets have 256, 288, and 324 unique configurations respectively." }, { "heading": "C.2 HYPER-PARAMETER ENCODING", "text": "Below is description of the encodings applied to our hyper-parameters. 
We also like to note that the scalar values are normalized between (0, 1).\nTable 6: Encoding of the different hyper-parameters used in the meta-dataset.\nHyper-parameter Encoding\nActivation One-hot encoding Neurons Scalar Layers Scalar Layout One-hot encoding Dropout Scalar Normalization Scalar Optimizer One-hot encoding" }, { "heading": "C.3 THE UCI DATASETS", "text": "Table 7 is an overview of the UCI datasets used to generate the meta-datasets." }, { "heading": "D ALGORITHMS", "text": "We define pT as the task distribution that represents pairs of datasets and hyper-parameters, i.e. T (t,k) = (D(t), λ(t)k ) ∈ T × Λ, and pD be the distribution of the datasets as defined in Section 4.2. Algorithm 1 provides the overall optimization framework for GROSI, our approach.\nAlgorithm 1 Learn GROSI(D) 1: Require:pD: distribution over datasets,pT : distribution over tasks 2: Require:lrinner,lrouter: learning rates 3: Randomly initialize β ∈ B, the parameters of our response model l̂ 4: while not done do 5: Set β′ ← β 6: Sample (D(t), λ(t)k ) = T (t,k) ∼ pT 7: for v steps 8: sample D(q) ∼ pD\\(D(t) 9: Evaluate gradients G ← ∇β (O + α R+ δ P) 10: Compute adapted parameters with stochastic gradient descent: β′ ← β′ − lrinnerG 11: Update β ← β − lrouter (β − β′) 12: return β\nAfter optimizing our objective via Algorithm D we apply Algorithm 2 to observe the results presented in Tables 3 and 5.\nAlgorithm 2 Zero-shot HPO\n1: Require: target dataset D(t) ; response model l̂; desired zero-shot hyper-parameters K 2: H ← arg minKλ∈Λ l̂ ( D(t), λ ) 3: return H\nFor sequential model-based optimization, a surrogate l̂ is fitted to the observed responses of the unknown function. Several initialization strategies exist to expedite the transfer of information across tasks, Section 5.4. 
In Algorithm 3, we present the generic pseudo-code for SMBO, that requires an acquisition function, a, to sample the next iterate from the domain.\nAlgorithm 3 Sequential Model-based Optimization Warm-start\n1: Require: target dataset D(t) ; response model l̂; desired zero-shot hyper-parameters K, number of trials I , acquisition function a 2: Get initial hyper-parametersH0 ←Zero-shot HPO 3: λmin ← arg minλ∈H0 ( l(D(t), λ)\n) 4: for i = 1 . . . I 5: fit l̂i toHi−1 6: λ← arg maxλ∈Λ a ( l̂(D(t), λ)\n) 7: Hi ← Hi−1 ⋃ {λ}\n8: if l ( D(t), λ ) < l ( D(t), λmin ) 9: λmin ← λ\n10: return λmin\nIn Section 5.4, we propose to initialize our response model on the target dataset, then iterativly tune it to that particular dataset. Initially, we select top K configurations based on Algorithm 2, our zero-shot approach. Then, via Algorithm 4, we sample uniformly at random from the top X ranking configurations. If X = 1, then this represents the greedy policy.\nMeta-feature learning from datasets with varying schema was initially proposed in (Jomaa et al. (2019)). For our approach, we introduce a set-based meta-feature extractor module to handle datasets\nAlgorithm 4 Learn GROSI(+X)\n1: Require: target dataset D(t) ; response model l̂; desired zero-shot hyper-parameters K, number of trials I , Number of top configurations to choose from, X 2: Get initial hyper-parametersH0 ←Zero-shot HPO 3: λmin ← arg minλ∈H0 ( l(D(t), λ)\n) 4: for i = 1 . . . I 5: fit l̂i toHi−1 by optimizing Equation 3 6: λ ∼ Uniform ( arg minXλ∈Λ\\Hi−1 l̂ ( D(t), λ\n)) 7: Hi ← Hi−1 ⋃ {λ}\n8: if l ( D(t), λ ) < l ( D(t), λmin ) 9: λmin ← λ\n10: return λmin\nof varying schema as well, however, we optimize Equation 8 and use the dataset identification task as an auxiliary objective. 
However, to pre-train the meta-feature extractor for the Ablation study, Section 5.5, as well as in order to extract meta-features for the NN-D2V, and TST-D2V, we follow Algorithm 5, with pD+ as the distribution of similar datasets, pD+ = {(D(t), D(q), s) ∼ pD | s = 1}, and pD− as the distribution of dissimilar datasets, pD− = {(D(t), D(q), s) ∼ pD | s = 0}. Similar datasets are defined as multi-fidelity subsets (batches) of each dataset.\nAlgorithm 5 Standalone Meta-feature Learning 1: Require:pD+ : distribution over similar datasets, pD− distribution over dissimilar datasets 2: Require:lrφ learning rate 3: Randomly initialize β ∈ B, the parameters of the meta-feature extractor φ 4: while not done do 5: Sample (D(t), D(q), 1) ∼ pD+ and (D(t), D(r), 0) ∼ pD− (Both samples share D(t)) 6: Evaluate gradients G ← ∇β (P) 7: Compute adapted parameters with stochastic gradient descent: β′ ← β′ − lrφG 8: Update β ← β − lrφ (β − β′) 9: return β" }, { "heading": "E EXPERIMENTAL DETAILS", "text": "" }, { "heading": "E.1 NETWORK ARCHITECTURE", "text": "Our model architecture is divided into two modules, l̂ := φ ◦ ψ, the meta-feature extractor φ, and the regression head ψ. The meta-feature extractor φ : R2 → RKh is composed of three functions, Equation 4, namely f , g and h. The regression head is also composed of two functions, i.e.ψ : ψ1◦ψ2. We define by ψ1 : RKh × Λ → RKψ1 as the function that takes as input the meta-feature/hyperparameter pair, and by ψ2 : RKψ1 → R the function that approximates the response. Finally, let Dense(n) define one fully connected layer with n neurons, and ResidualBlock(n,m) bem×Dense(n) with residual connections (Zagoruyko & Komodakis (2016)).\nTo select a single universal response model, we evaluate the validation performance on the three network architectures described in Table 8. We select the architecture that has the best average performance between the three across the three meta-datasets, Table 9, which turns out to be Architecture 3. 
The architectures assign a different number of trainable variables for the meta-feature extractor and the coupled regression head." }, { "heading": "E.2 POLICY FOR SEQUENTIAL OPTIMIZATION", "text": "We propose GROSI as a zero-shot HPO solution. However, to emphasize the ability of our surrogate model to quickly adapt to new target datasets, we extend it into a sequential optimization approach. Starting with the proposed zero-shot configurations, we fine tune our model via Algorithm 4. We select the X = 10 based on the best average performance observed on the held-out validation sets, Table10." }, { "heading": "F ADDITIONAL EXPERIMENTAL RESULTS", "text": "" }, { "heading": "F.1 HYPER-PARAMETER SENSITIVITY ANALYSIS", "text": "We optimize our response model by minimizing Equation 8, which includes the dataset identification task, Equation 6, and the similarity-driven regularization task, Equation 7, with auxiliary weights δ and α assigned to both respectively. We report below the performance of our universal response model for different auxiliary weights. The results confirm the importance of emphasizing the auxiliary dataset identification task in conjunction with the similarity-driven regularization loss, which reinforces the intuition that similar datasets behave similarly to the hyper-parameter response. The reported results throughout the paper are based on δ = 1 and α = 0.5. .\n." }, { "heading": "F.2 ADDITIONAL RESULTS", "text": "Q1: ZERO-SHOT HPO AS A STAND-ALONE PROBLEM\nAs a plausibility argument for the usefulness of our zero-shot strategy, we depict in Figure 3 the top 10 suggested hyper-parameters by our approach, as well as two initialization strategies on the actual response surface. 
Our picks can be seen colocated near the different optima in the search space whereas hyper-parameters of other strategies are dispersed.\nQ3: SEQUENTIAL GRAY-BOX FUNCTION OPTIMIZATION\nWe refit the universal response model to the observations of the response on the target dataset by optimizing Equation 3. We depict the improvement achieved over the zero-shot approach in Figure 5." } ]
2020
ZERO-SHOT TRANSFER LEARNING FOR GRAY-BOX HYPER-PARAMETER OPTIMIZATION
SP:9dbd1488470372dae1baf3d391124e2abac8ea53
[ "The motivation for this paper is quite hard to understand. A VQ-VAE is directly applied to convert an image from one colour space to another one. However, the colour space transform is human-defined, usually involving linear and a few non-linear (like selecting the maximum value is HSV) procedures. In this case, the latent space of VQ-VAE should be collapsed into this simple equation easily. The analysis of this paper does not teach us any additional knowledge.", "This paper proposes to study an interesting problem of how color informaiton is structured in the variational autoencoders (VAEs). Several instances of VAEs are trained in an unsupervised manner to perform color space conversion. Both low-level and high-level evaluations are performed to study the local statistics and global content of converted images. Several interesting conclusions are drawn from the experiments that help interpret the encoding process of autoencoders." ]
Colours can be represented in an infinite set of spaces highlighting distinct features. Here, we investigated the impact of colour spaces on the encoding capacity of a visual system that is subject to information compression, specifically variational autoencoders (VAEs) where bottlenecks are imposed. To this end, we propose a novel unsupervised task: colour space conversion (ColourConvNets). We trained several instances of VAEs whose input and output are in different colour spaces, e.g. from RGB to CIE L*a*b* (in total five colour spaces were examined). This allowed us to systematically study the influence of input-output colour spaces on the encoding efficiency and learnt representation. Our evaluations demonstrate that ColourConvNets with decorrelated output colour spaces produce higher quality images, also evident in pixel-wise low-level metrics such as colour difference (∆E), peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). We also assessed the ColourConvNets’ capacity to reconstruct the global content in two downstream tasks: image classification (ImageNet) and scene segmentation (COCO). Our results show 5-10% performance boost for decorrelating ColourConvNets with respect to the baseline network (whose input and output are RGB). Furthermore, we thoroughly analysed the finite embedding space of Vector Quantised VAEs with three different methods (single feature, hue shift and linear transformation). The interpretations reached with these techniques are in agreement suggesting that (i) luminance and chromatic information are encoded in separate embedding vectors, and (ii) the structure of the network’s embedding space is determined by the output colour space.
[ { "affiliations": [], "name": "COLOUR CONVER" } ]
[ { "authors": [ "Horace B Barlow" ], "title": "Possible principles underlying the transformation of sensory messages", "venue": "In Sensory communication,", "year": 1961 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Agata Lapedriza", "Bolei Zhou", "Antonio Torralba" ], "title": "Understanding the role of individual units in a deep neural network", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Marcelo Bertalmı́o" ], "title": "From image processing to computational neuroscience: a neural model based on histogram equalization", "venue": "Frontiers in computational neuroscience,", "year": 2014 }, { "authors": [ "Piotr Bojanowski", "Armand Joulin", "David Lopez-Paz", "Arthur Szlam" ], "title": "Optimizing the latent space of generative networks", "venue": "arXiv preprint arXiv:1707.05776,", "year": 2017 }, { "authors": [ "Ali Borji" ], "title": "Pros and cons of gan evaluation measures", "venue": "Computer Vision and Image Understanding,", "year": 2019 }, { "authors": [ "M. Bratkova", "S. Boulos", "P. Shirley" ], "title": "orgb: A practical opponent color space for computer graphics", "venue": "IEEE Computer Graphics and Applications,", "year": 2009 }, { "authors": [ "Gershon Buchsbaum", "Gottschalk Allan" ], "title": "Trichromacy, opponent colours coding and optimum colour information transmission in the retina", "venue": "Proceedings of the Royal society of London. Series B. 
Biological sciences,", "year": 1983 }, { "authors": [ "Ayan Chakrabarti" ], "title": "Color constancy by learning to predict chromaticity from luminance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Zezhou Cheng", "Qingxiong Yang", "Bin Sheng" ], "title": "Deep colorization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "M Chirimuuta" ], "title": "The uses of colour vision: Ornamental, practical, and theoretical", "venue": "Minds and Machines,", "year": 2015 }, { "authors": [ "Dan Cireşan", "Ueli Meier", "Jonathan Masci", "Jürgen Schmidhuber" ], "title": "Multi-column deep neural network for traffic sign classification", "venue": "Neural networks,", "year": 2012 }, { "authors": [ "Kostadin Dabov", "Alessandro Foi", "Vladimir Katkovnik", "Karen Egiazarian" ], "title": "Image denoising by sparse 3-d transform-domain collaborative filtering", "venue": "IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 16:2080–95,", "year": 2007 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Andrew M Derrington", "John Krauskopf", "Peter Lennie" ], "title": "Chromatic mechanisms in lateral geniculate nucleus of macaque", "venue": "The Journal of physiology,", "year": 1984 }, { "authors": [ "Martin Engilberge", "Edo Collins", "Sabine Süsstrunk" ], "title": "Color representation in deep neural networks", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2017 }, { "authors": [ "Alban Flachot", "Karl R Gegenfurtner" ], "title": "Processing of chromatic information in a deep convolutional neural network", "venue": "JOSA A,", "year": 2018 }, { "authors": [ "David H Foster", "Iván 
Marı́n-Franch", "Sérgio Nascimento", "Kinjiro Amano" ], "title": "Coding efficiency of cie color spaces", "venue": "In Color and Imaging Conference,", "year": 2008 }, { "authors": [ "Huazhu Fu", "Boyang Wang", "Jianbing Shen", "Shanshan Cui", "Yanwu Xu", "Jiang Liu", "Ling Shao" ], "title": "Evaluation of retinal image quality assessment networks in different color-spaces", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2019 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge", "Aaron Hertzmann", "Eli Shechtman" ], "title": "Controlling perceptual factors in neural style transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Karl R Gegenfurtner", "Jochem Rieger" ], "title": "Sensory and cognitive contributions of color to the recognition of natural scenes", "venue": "Current Biology,", "year": 2000 }, { "authors": [ "Karl R Gegenfurtner", "Lindsay A Sharpe" ], "title": "Color vision", "venue": null, "year": 1999 }, { "authors": [ "Ethan Harris", "Daniela Mihai", "Jonathon Hare" ], "title": "Spatial and colour opponency in anatomically constrained deep networks", "venue": "arXiv preprint arXiv:1910.11086,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of CVPR,", "year": 2016 }, { "authors": [ "Hyun-Koo Kim", "Ju H Park", "Ho-Youl Jung" ], "title": "An efficient color space for deep-learning based traffic light recognition", "venue": "Journal of Advanced Transportation,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint 
arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alexander Kirillov", "Kaiming He", "Ross Girshick", "Carsten Rother", "Piotr Dollár" ], "title": "Panoptic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Mark A Kramer" ], "title": "Nonlinear principal component analysis using autoassociative neural networks", "venue": "AIChE journal,", "year": 1991 }, { "authors": [ "Valero Laparra", "Sandra Jiménez", "Gustavo Camps-Valls", "Jesús Malo" ], "title": "Nonlinearities and adaptation of color vision from sequential principal curves analysis", "venue": "Neural Computation,", "year": 2012 }, { "authors": [ "Kaouthar Larbi", "Wael Ouarda", "Hassen Drira", "Boulbaba Ben Amor", "Chokri Ben Amar" ], "title": "Deepcolorfasd: Face anti spoofing solution using a multi channeled color spaces cnn", "venue": "IEEE International Conference on Systems, Man, and Cybernetics (SMC),", "year": 2018 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Learning representations for automatic colorization", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Colorization as a proxy task for visual understanding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Simon Laughlin" ], "title": "A simple coding procedure enhances a neuron’s information capacity", "venue": "Zeitschrift für Naturforschung c,", "year": 1981 }, { "authors": [ "Te-Won Lee", "Thomas Wachtler", "Terrence J Sejnowski" ], "title": "Color opponency constitutes a sparse representation for the chromatic structure of natural scenes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2001 }, { "authors": [ "Wei Li", "Rui Zhao", "Tong Xiao", "Xiaogang Wang" ], "title": "Deepreid: Deep filter 
pairing neural network for person re-identification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Timothy P Lillicrap", "Konrad P Kording" ], "title": "What does it mean to understand a neural network", "venue": "arXiv preprint arXiv:1907.06374,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Fujun Luan", "Sylvain Paris", "Eli Shechtman", "Kavita Bala" ], "title": "Deep photo style transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Jesus Malo" ], "title": "Information flow in color appearance neural networks", "venue": "arXiv preprint arXiv:1912.12093,", "year": 2019 }, { "authors": [ "Dmytro Mishkin", "Nikolay Sergievskiy", "Jiri Matas" ], "title": "Systematic evaluation of convolution neural network advances on the imagenet", "venue": "Computer Vision and Image Understanding,", "year": 2017 }, { "authors": [ "Ali Mosleh", "Avinash Sharma", "Emmanuel Onzon", "Fahim Mannan", "Nicolas Robidoux", "Felix Heide" ], "title": "Hardware-in-the-loop end-to-end optimization of camera image processing pipelines", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "David L Philipona", "J Kevin O’Regan" ], "title": "
Color naming, unique hues, and hue cancellation predicted from singularities in reflection properties", "venue": "Visual neuroscience,", "year": 2006 }, { "authors": [ "W Pratt" ], "title": "Digital image processing", "venue": "wiley-interscience,", "year": 2007 }, { "authors": [ "J. Preiss", "F. Fernandes", "P. Urban" ], "title": "Color-image quality assessment: From prediction to optimization", "venue": "IEEE Transactions on Image Processing,", "year": 2014 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Ivet Rafegas", "Maria Vanrell" ], "title": "Color encoding in biologically-inspired convolutional neural networks", "venue": "Vision research,", "year": 2018 }, { "authors": [ "E. Reinhard", "M. Adhikhmin", "B. Gooch", "P. Shirley" ], "title": "Color transfer between images", "venue": "IEEE Computer Graphics and Applications,", "year": 2001 }, { "authors": [ "Daniel L Ruderman", "Cronin W Thomas", "Chiao Chuan-Chin" ], "title": "Statistics of cone responses to natural images: implications for visual coding", "venue": "JOSA A,", "year": 1998 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Peter H Schiller", "Joseph G Malpeli" ], "title": "Properties and tectal projections of monkey retinal ganglion cells", "venue": "Journal of Neurophysiology,", "year": 1977 }, { "authors": 
[ "Gaurav Sharma", "Wencheng Wu", "Edul N Dalal" ], "title": "The ciede2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations", "venue": "Color Research & Application,", "year": 2005 }, { "authors": [ "Chengyao Shen", "Xun Huang", "Qi Zhao" ], "title": "Predicting eye fixations on webpage with an ensemble of early features and high-level representations from deep network", "venue": "IEEE Transactions on Multimedia,", "year": 2015 }, { "authors": [ "Katarzyna Siuda-Krzywicka", "Marianna Boros", "Paolo Bartolomeo", "Christoph Witzel" ], "title": "The biological bases of colour categorisation: From goldfish to the human brain", "venue": null, "year": 2019 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "arXiv preprint arXiv:1511.01844,", "year": 2015 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Avinash R Vaidya", "Maia S Pujara", "Michael Petrides", "Elisabeth A Murray", "Lesley K Fellows" ], "title": "Lesion studies in contemporary neuroscience", "venue": "Trends in cognitive sciences,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zhou Wang", "A.C. Bovik", "H.R. Sheikh", "E.P. 
Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE Transactions on Image Processing,", "year": 2004 }, { "authors": [ "Felix A Wichmann", "Lindsay A Sharpe", "Karl R Gegenfurtner" ], "title": "The contributions of color to recognition memory for natural scenes", "venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition,", "year": 2002 }, { "authors": [ "Christoph Witzel", "Karl R Gegenfurtner" ], "title": "Color perception: Objects, constancy, and categories", "venue": "Annual Review of Vision Science,", "year": 2018 }, { "authors": [ "Li Zhaoping" ], "title": "Theoretical understanding of the early visual processes by data compression and data selection. Network: computation", "venue": "in neural systems,", "year": 2006 }, { "authors": [ "Li Zhaoping" ], "title": "Theoretical understanding of the early visual processes by data compression and data selection. Network: computation", "venue": "in neural systems,", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Colour is an inseparable component of our conscious visual perception and its objective utility spans a large set of tasks such as object recognition and scene segmentation (Chirimuuta et al., 2015; Gegenfurtner & Rieger, 2000; Wichmann et al., 2002). Consequently, colour is a ubiquitous feature in many applications: colour transfer (Reinhard et al., 2001), colour constancy (Chakrabarti, 2015), style transfer (Luan et al., 2017), computer graphics (Bratkova et al., 2009), image denoising (Dabov et al., 2007), quality assessment (Preiss et al., 2014), to name a few. Progress in these lines requires a better understanding of colour representation and its neural encoding in deep networks. To this end, we present a novel unsupervised task: colour conversion.
In our proposed framework the input-output colour space is imposed on deep autoencoders (referred to as ColourConvNets) that learn to efficiently compress the visual information (Kramer, 1991) while transforming the input into the output colour space. Essentially, the output y for input image x is generated on the fly by a transformation y = T(x), where T maps the input to the output colour space. This task offers a fair comparison of different colour spaces within a system that learns to minimise a loss function in the context of the information bottleneck principle (Tishby & Zaslavsky, 2015). The quality of the output images demonstrates whether the representation of the input-output colour spaces impacts the networks’ encoding power. Furthermore, the structure of the internal representation provides insights into how the colour transformation is performed within a neural network.
In this work, we focused on the Vector Quantised Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017) due to the discrete nature of its latent space, which facilitates the analysis and interpretability of the learnt features.
We thoroughly studied five commonly used colour spaces by training ColourConvNets for all combinations of input-output spaces. First, we show that ColourConvNets with a decorrelated output colour space (e.g. CIE L*a*b*) convey information more efficiently in their compressing bottleneck, in line with the presence of colour opponency in the human visual system. This is evident qualitatively (Figures 1 and A.1) and quantitatively (evaluated with three low-level and two high-level metrics). Next, we present the interpretation of ColourConvNets’ latent space by means of three methods reaching a consensus interpretation: (i) the colour representation in the VQ-VAEs’ latent space is determined by the output colour space, suggesting the transformation T occurs at the encoder; (ii) each embedding vector in VQ-VAEs encodes a specific part of the colour space, e.g. the luminance or chromatic information, which can be modelled by a parsimonious linear transformation." }, { "heading": "1.1 RELATED WORK", "text": "The effectiveness of different colour spaces has been investigated in a few empirical studies of deep neural networks (DNNs). Information fusion over several colour spaces improved retinal medical imaging (Fu et al., 2019). A similar strategy enhanced the robustness of face (Li et al., 2014; Larbi et al., 2018) and traffic-light recognition (Cireşan et al., 2012; Kim et al., 2018). This was also effective in predicting eye fixations (Shen et al., 2015). Opponent colour spaces have been explored for applications such as style transfer (Luan et al., 2017; Gatys et al., 2017) and picture colourisation (Cheng et al., 2015; Larsson et al., 2016). Most of these works are within the domain of supervised learning.
The most similar approach to our proposed ColourConvNets is image colourisation as a pretext task for unsupervised visual feature learning (Larsson et al., 2017).
Initial works on colour representation in DNNs revealed that object classification networks learn to decorrelate their input images (Rafegas & Vanrell, 2018; Flachot & Gegenfurtner, 2018; Harris et al., 2019). This is reminiscent of horizontal and ganglion cells that decorrelate the retinal signal into colour opponency before transmitting it to the visual cortex (Schiller & Malpeli, 1977; Derrington et al., 1984; Gegenfurtner & Kiper, 2003). Another set of works reported the existence of hue-sensitive units (Engilberge et al., 2017) that mainly emerge in early layers (Bau et al., 2017). The representation of colours in deep networks at intermediate and higher layers is rather understudied. In this article, we specifically focus on the intermediate representation that emerges at the latent space of autoencoders, which to the best of our knowledge has not been reported in the literature." }, { "heading": "2 COLOUR CONVERSION AUTOENCODERS", "text": "In this article, we propose a novel unsupervised task of colour conversion: the network’s output colour space is independent of its input (see Figure 2). A colour space is an arbitrary definition of colours’ organisation in the space (Koenderink & van Doorn, 2003). Thus, the choice of transformation matrix T in ColourConvNets is perfectly flexible to model any desired space,
C_in --T--> C_out, (1)
where C_in and C_out are the input and output colour spaces. This framework offers a controlled environment to compare colour spaces within a complex visual system. Here, we studied their effectiveness in information encoding constrained to a bottleneck. This can be extended to encompass other constraints (such as entropy, energy, wiring, etc.) relevant to understanding colour representation in complex visual systems.
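As a minimal sketch of Eq. 1, a ColourConvNet target can be generated on the fly by applying a per-pixel colour transform. The 3 × 3 matrix below uses hypothetical, illustrative values; it is not the exact RGB-to-DKL conversion, and CIE L*a*b* in particular is a nonlinear space rather than a single matrix:

```python
import numpy as np

# Hypothetical 3x3 opponent-style matrix (illustrative values only; the
# actual RGB -> DKL conversion differs and CIE L*a*b* is nonlinear).
T = np.array([[ 0.299,  0.587,  0.114],   # achromatic (luminance) axis
              [ 0.500, -0.500,  0.000],   # red-green axis
              [ 0.250,  0.250, -0.500]])  # yellow-blue axis

def make_target(x, T):
    """Generate the ColourConvNet target y = T(x), applied per pixel.

    x: (H, W, 3) image in the input space C_in; returns the image in C_out.
    """
    return x @ T.T

x = np.random.rand(8, 8, 3)  # toy RGB input
y = make_target(x, T)        # target produced on the fly during training
```

Because the target is computed from the input by a fixed transform, no labels are needed, which is what makes the colour conversion task unsupervised.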
We further used this structure to compare the autoencoder’s latent space across colour spaces, aiming to decipher the intermediate colour representation within these networks. The proposed framework can also be employed in applications, e.g., as an add-on optimisation capsule to any computer vision application (Mosleh et al., 2020), or as a proxy task for visual understanding (Larsson et al., 2017)." }, { "heading": "2.1 NETWORKS", "text": "We studied a particular class of VAEs—the Vector Quantised Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017)—due to the discrete nature of its latent embedding space, which facilitates the analysis and interpretability of the learnt features and distinguishes it from others (Kingma & Welling, 2013). VQ-VAE consists of three main blocks: 1) an encoder that processes the input data x to z_e(x); 2) a latent embedding space {e} ∈ R^{K×D}, with K vectors of dimensionality D, that maps z_e(x) onto z_q(x) by estimating the nearest vector e_i to z_e(x); 3) a decoder that reconstructs the final output x′ with a distribution p(x|z_q(x)) over the input data (see the right panel in Figure 2). The loss function is defined as follows,
L = log p(x|z_q(x)) + ‖sg[z_e(x)] − e‖₂² + β‖z_e(x) − sg[e]‖₂², (2)
where sg denotes the stop-gradient computation, defined as the identity during the forward-propagation and with zero partial derivatives during the back-propagation to prevent its update. The first term in Eq. 2 corresponds to the reconstruction loss incorporating both encoder and decoder; the second term updates the embedding vectors; and the third term harmonises the encoder and embedding vectors. The parameter β ∈ R is set to 0.5 in all our experiments." }, { "heading": "2.2 COLOUR SPACES", "text": "We explored five colour spaces: RGB, LMS, CIE L*a*b*, DKL and HSV. The standard space in electronic imaging is RGB, which represents colours by three additive primaries in a cubic shape.
The LMS colour space corresponds to the responses of the human cones (long-, middle-, and short-wavelength) (Gegenfurtner & Sharpe, 1999). The CIE L*a*b* colour space (luminance, red-green and yellow-blue axes) is designed to be perceptually uniform (CIE, 1978). The DKL colour space (Derrington-Krauskopf-Lennie) models the opponent responses of rhesus monkeys in the early visual system (Derrington et al., 1984). The HSV colour space (hue, saturation, value) is a cylindrical representation of the RGB cube designed for computer graphics.
The input and output of our networks can be in any combination of these colour spaces. Effectively, our VQ-VAE models, in addition to learning an efficient representation, must learn the transformation function from their input to their output colour space. It is worth considering that the original images in the explored datasets are in the RGB format. Therefore, one might expect a slight positive bias towards this colour space, given that its gamut defines the limits of the other colour spaces." }, { "heading": "3 EXPERIMENTS", "text": "We trained several instances of VQ-VAEs with distinct sizes of the embedding space {e} ∈ R^{K×D}. The training procedure was identical for all networks: trained with the Adam optimiser (Kingma & Ba, 2014) (lr = 2 × 10⁻⁴) for 90 epochs. To isolate the influence of random variables, all networks were initialised with the same set of weights and an identical random seed was used throughout all experiments. We used the ImageNet dataset (Deng et al., 2009) for training. This is a visual database of object recognition in real-world images, divided into one thousand categories. The training set contains 1.3 million images. At every epoch, we exposed the network to 100K images of size 224 × 224 with three colour channels. Figure B.1 reports the progress of the loss function for various ColourConvNets.
A similar pattern of convergence can be observed for all trained networks.
To increase the generalisation power of our findings, we evaluated all networks on the validation sets of three benchmark datasets: ImageNet (50K images), COCO (5K images), and CelebA (~20K images). COCO is a large-scale object detection and segmentation dataset (Lin et al., 2014). CelebA contains facial attributes of celebrities (Liu et al., 2015). We relied on two classes of evaluation¹: low-level (Theis et al., 2015), capturing the local statistics of an image; and high-level (Borji, 2019), assessing the global content of an image.
Low-level evaluation – We computed three commonly used metrics to measure the pixel-wise performance of the networks: (i) the colour difference CIE ∆E-2000 (Sharma et al., 2005), (ii) the peak signal-to-noise ratio (PSNR), and (iii) the structural similarity index measure (SSIM) (Wang et al., 2004).
High-level evaluation – Pixel-wise measures are unable to capture the global content of an image and whether semantic information remains perceptually intact. To account for this limitation, we performed a procedure similar to the standard Inception Score (Salimans et al., 2016; Borji, 2019) by feeding the reconstructed images to two pretrained networks (without fine-tuning) that perform the tasks of object classification, ResNet50 (He et al., 2016), and scene segmentation, Feature Pyramid Network—FPN (Kirillov et al., 2019). ResNet50 and FPN expect RGB inputs, thus non-RGB reconstructions were converted to RGB. The evaluation for ResNet50 is the classification accuracy on the ImageNet dataset. The evaluation for FPN is the intersection over union (IoU) on the COCO dataset." }, { "heading": "3.1 EMBEDDING SIZE", "text": "We first evaluated the influence of the embedding size for four regimes of ColourConvNets whose input colour space is the original RGB images. The low-level evaluation is reported for ImageNet in Figure 3 and for COCO in Figure C.1.
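Of the three low-level metrics above, PSNR is the simplest to sketch (CIE ∆E-2000 and SSIM are considerably more involved). The following is an illustrative implementation, not necessarily the exact one used for the reported numbers:

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((np.asarray(reference) - np.asarray(reconstruction)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4, 3))
rec = np.full((4, 4, 3), 0.1)        # uniform error of 0.1 per channel
print(round(psnr(ref, rec), 6))      # 20.0  (10 * log10(1 / 0.01))
```

Higher PSNR means a reconstruction closer to the ground truth, which is the direction of the arrows in Figures 3 and C.1.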
Across the three metrics, the poor performance of rgb2hsv pops up at a low dimensionality of the embedding vector (D = 8). This might be due to the circular nature of hue. For the smallest and the largest embedding space, we observe no significant differences between the four networks. However, for embedding spaces of 8 × 8 and 8 × 128 an advantage appears for networks whose outputs are opponent colour spaces (DKL and CIE L*a*b*).
The corresponding high-level evaluation is reported in Figure 4. The overall trend is much alike for both tasks. The lowest performance occurs for rgb2hsv across all embedding spaces. ColourConvNets with an opponent output colour space systematically perform better than rgb2rgb, with an exception for the largest embedding space (128 × 128) where all networks perform equally (despite the substantial compression, 70% top-1 accuracy on ImageNet and 60% IoU on COCO). The comparison of low- and high-level evaluation for the smallest embedding space (4 × 128) (Figure 4 versus Figures 3 and C.1) demonstrates the importance of high-level evaluation. Although no difference emerges for the low-level measures, the classification and segmentation metrics are substantially influenced by the quality of the reconstructed images in those four VQ-VAEs.
¹For reproduction, the source code and all experimental data are available in the supplementary materials." }, { "heading": "3.2 PAIRWISE COMPARISON", "text": "For the two embedding spaces with the largest differences (8 × 8 and 8 × 128) we conducted an exhaustive pairwise comparison across two regimes of colour spaces: sensory (RGB and LMS) versus opponent (DKL and CIE L*a*b*). HSV is excluded from these analyses for the aforementioned reason. Figure 5 presents the low-level evaluation results for ImageNet (COCO in Figure C.2 and CelebA in Figure C.3). There is a clear tendency of better performance for ColourConvNets with an opponent output colour space across all measures and datasets.
Overall, the rgb2lab network reconstructs the highest-quality images. In comparison to the baseline (i.e. rgb2rgb), both rgb2lab and rgb2dkl obtain substantially lower colour differences, and higher PSNRs and SSIMs.
The high-level evaluation results are reported in Figure 6. In agreement with the previous findings, rgb2lab performs best across both datasets and embedding spaces. Overall, ColourConvNets with an opponent output space show a clear advantage: rgb2lab and rgb2dkl obtain 5-7% higher accuracy and IoU with respect to the baseline rgb2rgb." }, { "heading": "4 PERFORMANCE ADVANTAGE", "text": "The main difference between the two regimes of colour spaces (sensory versus opponent) is their intra-axes correlation. The intra-axes correlations for RGB and LMS are very high, hence referred to as correlated colour spaces. On the contrary, the intra-axes correlations for CIE L*a*b* and DKL
Interestingly, we can observe this with ColourConvNets of largest embedding space (128 × 128), suggesting decorrelation of colour signal become beneficial when system is constrained in its information flow.\nPrevious works in the literature (Foster et al., 2008; Malo, 2019) have measured the decorrelation characteristics of colour opponent spaces in information theoretical analysis and demonstrated their effectiveness in encoding natural images. The understanding of how a complex visual system, driven by error minimisation strategy (Laparra et al., 2012), might utilise these properties at the system level is of great interest (Lillicrap & Kording, 2019). We hypothesised that an efficient system distributes its representation across all resources instead of heavily relying on a few components (Laughlin, 1981). To measure this, the histogram of embedding vectors across all images of ImageNet (50K) and COCO (5K) were computed. A zero standard deviation in the frequency of selected vectors means embedding vectors are equally used by the network. Figure 7 reports the error rate as a function of this measure. A significant correlation emerges in both datasets, suggesting a more uniform contribution of embedding vectors enhances visual encoding in VQ-VAEs. This matches the neural model of histogram equalisation (Pratt, 2007; Bertalmı́o, 2014) and is consistent with the efficient coding theory for the biological visual system (Barlow, 1961; Zhaoping, 2006a)." }, { "heading": "5 INTERPRETING THE EMBEDDING SPACE", "text": "Comprehension of the features learnt by a DNN remains a great challenge to the entire community (Lillicrap & Kording, 2019). Generative models and in particular variational autoencoders are no exceptions. Strategies on the interpretation of the latent space structure include interpolation in latent space arithmetic operations on learnt features (Radford et al., 2015; Bojanowski et al., 2017; Kim et al., 2018). 
In practice, however, these approaches require explicit human supervision, a cumbersome task due to the often large dimensionality of the latent space. Here, we borrowed the “lesion” technique, commonly practised in the neuroscience community (Vaidya et al., 2019), and applied it to the embedding space by silencing one vector at a time (i.e. setting its weights to zero). This procedure is referred to as “ablation” in the learning community and has been useful in dissecting classification DNNs (Sandler et al., 2018) and GANs (Bau et al., 2020). To measure the consequences of vector lesions, we analysed the ColourConvNets’ embedding space with three distinct methods: (i) single features, (ii) linear transformation and (iii) hue shift.
²We computed these correlations r over all images of the ImageNet dataset (a hundred random pixels per image). RGB: r_RG ≈ 0.90, r_RB ≈ 0.77, r_GB ≈ 0.89; LMS: r_LM ≈ 1.00, r_LS ≈ 0.93, r_MS ≈ 0.93; L*a*b*: r_L*a* ≈ −0.14, r_L*b* ≈ 0.13, r_a*b* ≈ −0.34; DKL: r_DK ≈ 0.01, r_DL ≈ 0.14, r_KL ≈ 0.61." }, { "heading": "5.1 SINGLE FEATURES", "text": "To visualise the representation encoded by each embedding vector, we sampled from the embedding space an example of spatial size 2 × 2 with all cells set to the same vector index. Figure 8 shows the reconstructed images for all network combinations with embedding space {e} ∈ R^{8×128} (Figure D.1 for {e} ∈ R^{8×8}). The input colour space is the same in each row, and the output space is the same in each column. An interesting column-wise feature appears. Networks with an identical output colour space share a similar set of hues arranged in a different order. The order within the embedding space of VQ-VAEs is arbitrary and changing it does not impact the network’s output. This is an interesting phenomenon suggesting: (i) the colour representation in the network’s embedding space is an attribute of its output colour space, and (ii) the colour transformation T is performed by the encoder before reaching the embedding space.
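The lesion procedure described in this section can be sketched on top of the VQ-VAE nearest-vector assignment. The following is a simplified numpy illustration on toy feature vectors, under one plausible reading of “silencing” in which the zeroed vector still participates in the nearest-neighbour search:

```python
import numpy as np

def quantise(z_e, codebook):
    """Assign each encoder output to its nearest embedding vector (VQ step).

    z_e: (N, D) encoder outputs; codebook: (K, D) embedding vectors {e}.
    Returns the quantised vectors z_q and the selected indices.
    """
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

def lesion(codebook, i):
    """Silence embedding vector i by setting its weights to zero."""
    out = codebook.copy()
    out[i] = 0.0
    return out

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))                    # K=8 vectors, D=4 (toy sizes)
z_e = rng.normal(size=(16, 4))                        # toy encoder outputs
z_q_full, _ = quantise(z_e, codebook)                 # full model
z_q_lesioned, _ = quantise(z_e, lesion(codebook, 0))  # with e0 silenced
```

Comparing the reconstructions decoded from `z_q_full` and `z_q_lesioned` is what the three analyses below quantify.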
These findings open an exciting line of investigation for future studies to systematically explore whether the concept of unique hues and colour categories (Witzel & Gegenfurtner, 2018; Siuda-Krzywicka et al., 2019) emerges in machine colour representation.
Figure 8: The reconstruction output by selecting a single vector of the entire embedding space. All models are VQ-VAE of K=8 and D=128.
The samples reconstructed with a single embedding vector are not perfectly uniform (some small spatial variation is visible in Figure 8). To better understand the spatio-chromatic aspect of the encoded information, we again drew a sample of spatial size 2 × 2 from the embedding space; this time, instead of setting all elements to a single vector, we combined two vectors in different spatial directions. The resulting reconstruction for rgb2lab is illustrated in Figure 9. The spatial direction in the embedding space is relayed to the network’s reconstructed images, although the degree of this effect depends on the pair of embedding vectors. For instance, the horizontal combination of e0 − e7 results in two stripes of colour, while e0 − e2 turns into three stripes. This is naturally due to the fact that embedding vectors encode more than chromatic information, and also to the distinct spatio-chromatic combinations the decoder learns."
}, { "heading": "5.2 LINEAR TRANSFORMATION", "text": "Three exemplary reconstructions by the rgb2dkl network are illustrated in Figure 10 (for other ColourConvNets refer to Sec. D.2). Panel A corresponds to the full embedding space and B–D show examples of reconstructions with distinct vector lesions causing clearly visible effects. In B, only the lightness of bright pixels is reduced (attend to the pixels outside the window and around the light bulbs). In C & D, lesioning e0 and e2 turns reddish and blueish pixels achromatic. This is in agreement with the colours of the rgb2dkl vectors e0 and e2 in Figure 8.
We hypothesised that the changes induced by a lesion could be approximated by a linear transformation mapping the pixel distribution of the full reconstruction onto the lesioned image. To compute these transformations, we used a multi-linear regression finding the best linear fit for the 1% of most affected pixels. The resulting 3 × 3 matrix is a linear transformation in the CIE L*a*b* colour space. We have illustrated the result of applying these linear transformations on the right side of Figure 10. Panel E corresponds to the full RGB cube (essentially the CIE L*a*b* planes limited by the RGB gamut). In F–H, the very same points are plotted after being transformed by the model of the lesioned vector.
Overall, lesions are closely approximated by a linear transformation: on average accounting for 97% of the total variance in the lesion effect (the lower bound was 86%). This visualisation offers an intuitive interpretation of the learnt representation within the embedding space. In the images of the second row (panel B), the contrast in bright pixels is reduced and colour is little modified. We can observe this in the corresponding CIE L*a*b* planes (e.g. attend to the a*b* plane in F, where the overall chromaticity structure is retained).
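The multi-linear fit described above, together with an eigenvalue-based characterisation of the resulting 3 × 3 matrix, can be sketched as follows (an illustrative implementation; pixel arrays of shape (N, 3) in CIE L*a*b* coordinates are assumed):

```python
import numpy as np

def fit_lesion_transform(full_px, lesioned_px):
    """Least-squares 3x3 matrix M such that lesioned_px ≈ full_px @ M.T.

    full_px, lesioned_px: (N, 3) pixel arrays (e.g. CIE L*a*b* coordinates).
    """
    M, *_ = np.linalg.lstsq(full_px, lesioned_px, rcond=None)
    return M.T

def singularity_index(M):
    """SI = 1 - |lambda_3| / |lambda_1| with eigenvalues sorted by magnitude."""
    mags = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
    return 1.0 - mags[2] / mags[0]

# Toy check: a transform that collapses the second axis is recovered exactly
# by the fit and flagged as near-singular by the index.
rng = np.random.default_rng(1)
full = rng.normal(size=(500, 3))
collapse = np.diag([1.0, 0.0, 1.0])
fit = fit_lesion_transform(full, full @ collapse.T)
```

A singularity index near 0 indicates a volume-preserving-like change of the colour solid, while a value near 1 indicates that a colour dimension has nearly collapsed.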
In C, red pixels turn grey, which is also evident in the corresponding CIE L*a*b* planes (panel G), where the red coordinates are collapsed.
The geometrical properties of a transformation can be captured by the relative norms of its eigenvalues. For instance, zero-valued eigenvalues indicate the extreme case of a singular matrix, corresponding to a linear transformation projecting a three-dimensional space onto lower dimensions. We quantified this by defining a singularity index (Philipona & O’Regan, 2006). Consider a transformation matrix T approximating the lesion effect on the image colour distribution. Let λ1, λ2 and λ3 be the three eigenvalues of T, such that ‖λ1‖ > ‖λ2‖ > ‖λ3‖. The singularity index is defined as: SI = 1 − ‖λ3‖/‖λ1‖. This index captures the essence of these transformations. On the one hand, the low value of SI in F suggests the global shape of the colour space is retained while its volume is reduced. On the other hand, the high values of SI in panels G and H indicate the near collapse of a dimension." }, { "heading": "5.3 HUE SHIFT", "text": "We further quantified the impact of vector lesions by computing the difference in CIE L*a*b* between the full reconstructed image and the lesioned one. The average difference over all pixels for rgb2dkl is illustrated in Figure 11 (refer to Sec. D.3 for other ColourConvNets). The results of the hue-shift analysis restate the interpretation of the learnt representation. For instance, the direction of shift in e0 is limited to the first quadrant of the chromaticity plane (red pixels). The e1 vector largely encodes the low-luminance information (the negative direction in the L* axis). The e2 vector predominantly influences the blue pixels (the negative direction in the b* axis). Similar colours emerge for the rgb2dkl vectors e0, e1 and e2 in Figure 8." }, { "heading": "6 CONCLUSION", "text": "We proposed the unsupervised colour conversion task to investigate colour representation in deep networks.
We studied the impact of colour on the encoding capacity of autoencoders, specifically VQ-VAEs whose feature representation is constrained by a discrete bottleneck. The comparison of several ColourConvNets exhibits an advantage for a decorrelated output colour space. This is evident qualitatively and measured quantitatively with five metrics. We discussed this benefit within the framework of efficient coding and histogram equalisation. These findings might contribute to our understanding of why the brain’s natural network has developed the opponent representation. We further explored the networks’ internal representation by means of three methods. Our analyses suggest: (i) the colour transformation is performed at the encoding stage prior to reaching the embedding space, (ii) despite the spatio-chromatic nature of the constituent vectors, many manifest a clear effect along one colour direction that can be modelled by a parsimonious linear model." }, { "heading": "A QUALITATIVE COMPARISON", "text": "Along with the quantitative evaluations reported in the manuscript, the benefits of utilising a decorrelated colour space for the network’s output can be appreciated qualitatively (see Figure A.1). These are representative samples from the COCO dataset (Lin et al., 2014). The Jupyter-Notebook scripts in the supplementary materials provide more examples3. Overall, the rgb2dkl and rgb2lab VQ-VAEs generate more coherent images. For instance, in the first row of Figure A.1, the rgb2rgb output contains a large number of artefacts on walls and ceilings.
In contrast, the outputs of rgb2dkl and rgb2lab are sharper.\n3The weights of all trained networks and the image outputs of the lesion study exceed the 100MB upload limit, but they are publicly available for interested readers under this link https://www.dropbox.com/sh/e1l3p3uot94q0fy/AADg0rmxyiC3UNifTtqIpg2Pa?dl=0 ." }, { "heading": "B LOSS FUNCTION", "text": "The loss function (Eq. 2) is computed in the ColourConvNets’ output colour space between the ground truth and the network’s output. Figure B.1 reports the evolution of losses for all VQ-VAEs of K=8 and D=128. The convergence of losses is comparable across all networks regardless of their input-output colour space." }, { "heading": "C LOW-LEVEL EVALUATION", "text": "D INTERPRETING THE EMBEDDING SPACE\nD.1 SINGLE FEATURES\nThe hues obtained from single vectors of the VQ-VAE with K=8 and D=8 are reported in Figure D.1. The effect observed for larger ColourConvNets (i.e. networks with an identical output colour space sharing a similar set of hues arranged in a different order) is less evident here. This might be due to the dimensionality of the embedding space. This regime consists of vectors of 8 elements, whereas in the previous regime (Figure 8) the dimensionality of vectors is 128.\n[Figure D.1 grid: one panel per ColourConvNet (rgb2rgb, rgb2lms, rgb2dkl, rgb2lab; lms2*, dkl2* and lab2* rows), each showing reconstructions from single vectors e0–e7.]\nFigure D.1: The reconstruction output by selecting a single vector of the entire embedding space.
All models are VQ-VAEs with K=8 and D=8.\nD.2 LINEAR MODELLING OF VECTOR LESIONS\nIn order to understand the features learnt by the colour conversion networks, we exercised the “lesion” technique. It consists of silencing the embedding space’s vectors one at a time. We explored whether a vector lesion can be modelled by a simple linear transformation. We estimated the transformation matrix that maps the pixel distribution of the full reconstruction onto the lesion image (refer to Section 4 in the manuscript). To our surprise, this simple parsimonious modelling can capture a large portion of the vector’s encoding. We present qualitative results for VQ-VAEs with K = 8 and D = 128 in the colour conversion networks rgb2dkl (Figure D.2), rgb2lab (Figure D.3) and rgb2rgb (Figure D.4).\nThe top row of Figure D.2 illustrates three examples from the COCO dataset reconstructed with the full model of rgb2dkl. The following rows depict the reconstruction output of the lesion technique applied to each embedding vector ei (the “Lesion output” column), alongside the linear model’s estimate (the “Linear model” column) obtained by applying the linear transformation to the full reconstructed image. In the bottom right corner, we report the fitness of the model, i.e., the correlation (r), in CIE L*a*b* colour coordinates, between the colour pixels of the lesion output and the linear model. The overall fitness is very high for such a simple model. Even in cases with lower correlations, we can observe that the model captures well the characteristics of the lesion output. For instance, the indoor scene for e0 obtains r = 0.81; however, it can be appreciated that the linear model accounts well for the disappearance of red pixels in the lesion output. This is also evident in the kite picture of e2, where blue pixels have vanished, and in the bench picture of e5 with green pixels.\nNaturally, there are limits to this linear modelling.
For instance, the excess of chromaticity (pink and blue colours) in the indoor scene of e6 is not fully captured by its linear model. The most extreme case can be observed in the kite picture of e3 for the rgb2lab model (Figure D.3), where the non-linear nature of the lesion output is not accounted for by the linear model. Nevertheless, these parsimonious transformations reveal great detail about the information encoded by each vector and deserve more thorough investigation in future studies.\nIn Figure D.5 we have illustrated the impact of each linear transformation applied to the entire RGB cube. This gives an intuitive idea of what each vector does at a glance. The absence of some vectors results in the collapse of a chromatic direction. Others shear, shrink or expand the colour space.\nD.3 HUE SHIFT\nWe quantified the chromatic shifts (in CIE L*a*b*) between the reconstructed image of the full embedding space and the lesioned one. The differences computed for all pixels over a hundred random images from the COCO dataset are illustrated in Figure D.6." } ]
2020
null
SP:e1ced25c8b1fc9745e6f43c1be529e418d9325f9
[ "The paper proposes to answer the question why \"a network with the same number of weights as that of the pruned network cannot achieve similar performance when trained from scratch\". Then it proposes an hypothesis that the small model \"does not utilize all of its weights either\". To prove this hypothesis, it goes on to define and study the \"utility imbalance\" of the weights and its changing with the pretraining, pruning, etc. Some visualization analysis was provided too.", "This paper dives into why small pruned networks don’t train as well as large networks. They come up with a measure of weight utilization and claim that networks of all sizes only use a portion of their weights during training, and the imbalance increases during optimization. Additionally, they visualize the accuracy surface on the plane defined by the pretrained, pruned, and retrained networks and find that the retrained networks end up in the same basin as the pretrained networks." ]
Many methods aim to prune neural networks to the maximum extent. However, there are few studies that investigate the pruning mechanism itself. In this work, we empirically investigate a standard framework for network pruning: pretraining a large network and then pruning and retraining it. The framework has been commonly used based on heuristics, i.e., finding a good minimum with a large network (pretraining phase) and retaining it with careful pruning and retraining (pruning and retraining phase). For the pretraining phase, we examine why the large network is required to achieve good performance. We hypothesize that this might come from the network relying on only a portion of its weights when trained from scratch. This way of weight utilization is referred to as imbalanced utility. Measures for the weight utility and the utility imbalance are proposed. We investigate the cause of the utility imbalance and the characteristics of the weight utility. For the pruning and retraining phase, we examine whether the pruned-and-retrained network benefits from the pretrained network. We visualize the accuracy surface of the pretrained, pruned, and retrained networks and investigate the relation between them. The validation accuracy is also interpreted in association with the surface.
[]
[ { "authors": [ "Stephen Casper", "Xavier Boix", "Vanessa D’Amario", "Christopher Rodriguez", "Ling Guo", "Kasper Vinken", "Gabriel Kreiman" ], "title": "Robustness and/or redundancy emerge in overparametrized deep neural networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Stephen Casper", "Xavier Boix", "Vanessa D’Amario", "Ling Guo", "Martin Schrimpf", "Kasper Vinken", "Gabriel Kreiman" ], "title": "Frivolous units: Wider networks are not really that wide, 2020", "venue": null, "year": 2020 }, { "authors": [ "Nicholas Cheney", "Martin Schrimpf", "Gabriel Kreiman" ], "title": "On the robustness of convolutional neural networks to internal architecture and weight perturbations, 2017", "venue": null, "year": 2017 }, { "authors": [ "Bryn Elesedy", "Varun Kanade", "Yee Whye Teh" ], "title": "Lottery tickets in linear models: An analysis of iterative magnitude pruning", "venue": "arXiv preprint arXiv:2007.08243,", "year": 2020 }, { "authors": [ "Utku Evci", "Fabian Pedregosa", "Aidan Gomez", "Erich Elsen" ], "title": "The difficulty of training sparse neural networks", "venue": "arXiv preprint arXiv:1906.10732,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M Roy", "Michael Carbin" ], "title": "Linear mode connectivity and the lottery ticket hypothesis", "venue": null, "year": 1912 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Behrooz Ghorbani", "Shankar Krishnan", "Ying Xiao" ], "title": "An investigation into neural net optimization via hessian 
eigenvalue density", "venue": "arXiv preprint arXiv:1901.10159,", "year": 2019 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yang He", "Ping Liu", "Ziwei Wang", "Zhilan Hu", "Yi Yang" ], "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Simplifying neural nets by discovering flat minima", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: efficient and accurate normalization schemes in deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of 
the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Simonyan Karen", "Zisserman Andrew" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Nitish Shirish Keskar", "Jorge Nocedal", "Ping Tak Peter Tang", "Dheevatsa Mudigere", "Mikhail Smelyanskiy" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2009 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhuang Liu", "Jianguo Li", "Zhiqiang Shen", "Gao Huang", "Shoumeng Yan", "Changshui Zhang" ], "title": "Learning efficient convolutional networks through network slimming", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { 
"authors": [ "Richard Meyes", "Melanie Lu", "Constantin Waubert de Puiseau", "Tobias Meisen" ], "title": "Ablation studies in artificial neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Alex Renda", "Jonathan Frankle", "Michael Carbin" ], "title": "Comparing rewinding and fine-tuning in neural network pruning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhonghui You", "Kun Yan", "Jinmian Ye", "Meng Ma", "Ping Wang" ], "title": "Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference (BMVC),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning is currently one of the most powerful machine learning methods. It requires neural network to train, which usually takes a few to hundreds times more weights than training data (He et al., 2016; Zagoruyko & Komodakis, 2016; Huang et al., 2017; Karen & Andrew, 2015). Usually, in common regimes, a greater number of weights leads to better performance (Zagoruyko & Komodakis, 2016). However, paradoxically, neural networks are also compressible. Many of the recent pruning methods aim to maximally compress the networks (Han et al., 2015; Liu et al., 2017; He et al., 2019; You et al., 2019), however, there are few works that investigate why and how the pruning mechanism works (Frankle et al., 2019; Elesedy et al., 2020).\nIn this work, we empirically investigate a standard framework for network pruning: pretraining a large network and then pruning and retraining it. The framework has been commonly used based on heuristics, i.e. finding a good minima with a larger network and retaining it with careful pruning and retraining (Han et al., 2015; Liu et al., 2017). We investigate the heuristic in two parts, i.e., one for the pretraining phase and the other for the pruning and retraining phase.\nFor the pretraining phase, the reason for training the large network to obtain a good minima is investigated. Since the neural network is generally compressible, the pretrained large network can be pruned to a smaller one. However, a network with the same number of weights as that of the pruned network cannot achieve similar performance when trained from scratch (Frankle & Carbin, 2018). We conjecture that this comes from the networks not utilizing all of their weights. Thus we hypothesize: if trained from scratch, there is a utility imbalance among the weights in neural network. For investigation, the measures for the weight utility and the utility imbalance are proposed. 
Thereafter, the cause of the utility imbalance and the characteristics of the weight utility under various conditions are examined.\nFor the pruning and retraining phase, we verify the heuristic that once a good minimum is obtained with the large network, it can be retained by careful pruning and retraining (Han et al., 2015; Renda et al., 2020). Our investigation is based on visualizing the loss surface on a two-dimensional plane formed by three points in the weight space, where the points represent the pretrained network, the pruned network, and the pruned-and-retrained network. We examine (1) the dynamics of the network on the loss surface throughout pruning and (2) the validation accuracy of the networks over varying pruning and retraining methods." }, { "heading": "Contributions.", "text": "• The utility imbalance among the weights increases during optimization.\n\n• Neural networks utilize their weights in proportion to their size.\n\n• If a pretrained network is carefully pruned and retrained, then the pruned-and-retrained network shares the same loss basin as the pretrained network." }, { "heading": "2 WEIGHT UTILITY ANALYSIS FOR THE PRETRAIN MECHANISM", "text": "Then why do we have to train a large network and then prune it to a smaller one? Why not just train the smaller one to get the performance we need? Why is that difficult? Our investigation of these questions starts with a hypothesis: let Nlarge be a large network that does not utilize all of its weights, and thus can be easily compressed into a smaller network Npruned with minimal loss change. And let Nsmall be a network trained from scratch, whose number of weights is comparable to that of Npruned, which is sufficient to achieve a similar level of loss to those of Nlarge or Npruned. However, Nsmall generally performs worse, because Nsmall does not utilize all of its weights either.
Therefore, we hypothesize that, in general, a neural network does not utilize all of its weights when trained from scratch. We refer to the phenomenon in which the network utilizes its weights unevenly as utility imbalance. Thus,\nMain Hypothesis. If trained from scratch, there is utility imbalance among the weights in a neural network.\nWe empirically measure the utility of weights as:\nDefinition 1 (Utility measure). Let W be the set of total weights in a network N, Ws be a subset of W, and X be a dataset. Suppose fW (x) and fW\Ws(x) are probability mass functions resulting from a softmax layer, where x ∼ X is an input and fW\Ws(x) is obtained by zeroing out the weights in Ws. Then, the utility of Ws can be measured as U(Ws) = E_{x∼X}[ dKL( fW(x), fW\Ws(x) ) ], where dKL is the KL-divergence.\nFor reference, similar ablation-based measurements were used in (Casper et al., 2019; 2020; Meyes et al., 2019; Cheney et al., 2017). We also define the utility imbalance as:\nDefinition 2 (Utility imbalance). For Wi ⊂ W and Wj ⊂ W, we say there is δ utility imbalance between the subsets of weights when |U(Wi) − U(Wj)| > δ.\nWe empirically measure the utility imbalance as follows:\nDefinition 3 (Utility imbalance measure). For a set of randomly drawn subsets of W, i.e., {Wi}_{i=1}^N, Wi ⊂ W, which we refer to as a sample set, we empirically measure the utility imbalance of the set by the standard deviation of {U(Wi)}_{i=1}^N.\nWe empirically show that utility imbalance among the weights exists in Figure 1. Hereafter, the cause of the utility imbalance and the characteristics of the weight utility are discussed in Sections 2.1 and 2.2, respectively." }, { "heading": "2.1 THE CAUSE OF THE UTILITY IMBALANCE", "text": "In this section, we investigate the cause of the utility imbalance among the weights. The cause driven by initialization and by optimization is discussed in Sections 2.1.1 and 2.1.2, respectively."
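Definitions 1–3 can be sketched in code as follows. This is a toy illustration in which a single linear-softmax layer stands in for the network f; the function names and subset sampling details are our assumptions, and the boolean mask marks the ablated subset Ws.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def utility(W, X, mask):
    """U(Ws) of Definition 1: mean KL divergence between the softmax outputs
    of the full model and the model with the weights in `mask` zeroed out."""
    p = softmax(X @ W)                 # f_W(x) for every x in the dataset
    q = softmax(X @ (W * (~mask)))     # f_{W \ Ws}(x): subset ablated
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

def utility_imbalance(W, X, n_subsets=500, subset_size=None, seed=0):
    """Definition 3: std of U(Ws) over a sample set of random weight subsets
    (the paper uses N = 500). Also returns the mean utility."""
    rng = np.random.default_rng(seed)
    subset_size = subset_size or max(1, W.size // 10)
    utils = []
    for _ in range(n_subsets):
        flat = np.zeros(W.size, dtype=bool)
        flat[rng.choice(W.size, subset_size, replace=False)] = True
        utils.append(utility(W, X, flat.reshape(W.shape)))
    return float(np.std(utils)), float(np.mean(utils))
```

Ablating the empty subset gives zero utility by construction, and different random subsets generally yield different utilities, which is exactly what the standard deviation in Definition 3 quantifies.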
}, { "heading": "2.1.1 FROM INITIALIZATION", "text": "The utility imbalance among the weights can be given by initialization. The Lottery Ticket Hypothesis (Frankle & Carbin, 2018) showed that there is a fortuitous subset of weights at initialization, namely winning tickets, which achieves the commensurate accuracy with the original network within the commensurate training time when trained in isolation. Since they uncovered the winning tickets by pruning the weights with the smallest magnitude after pretraining the whole network, we can say that the weights in the winning ticket are the ones that become the largest when trained as a whole. And since the weights with the larger magnitude tend to be more utilized than the others (Han et al., 2015), it can be inferred that the weights in the winning ticket are the well-initialized ones that can become the most utilized ones after trained as a whole. However, the winning tickets become less effective when the subnetworks are trained to be far apart from the initialization, e.g., trained with a larger learning rate or using ResNet (He et al., 2016) or Batch Normalization (Ioffe & Szegedy, 2015; Hoffer et al., 2018), where the gradients flowing through the network are larger (Frankle & Carbin, 2018)." }, { "heading": "2.1.2 DURING OPTIMIZATION", "text": "The utility imbalance among the weights can also be intensified during optimization. Although Frankle & Carbin (2018) conjectured that SGD may seek out and train the winning tickets, they did not give further evidence. Here, we conduct an experiment to measure the utility imbalance during training.\nWe classified CIFAR10 dataset (Krizhevsky, 2009) using a vanilla convolutional neural network (vanilla CNN) (the architecture of the network is specified in Section A.2.1), trained by SGD with zero momentum and weight decay. The network was trained for 200 epochs with initial learning rate\nof 0.1, which is reduced by 10× at 50% and 75% of the training epochs. 
When we measured the utility of a subset (refer to Definition 1), the weights in the subset were zeroed out for the exclusion, and KL-divergence was used to measure the distance between the probability outputs of the original network and the ablated network. For each subset, the distance was measured by averaging over the total dataset. To measure the utility statistics, e.g., utility imbalance, we used a set of 500 randomly selected subsets (N = 500 in Definition 3) which we refer to as sample set. We only used the training data in the experiment for two reasons: (1) it is more straightforward to interpret and (2) in our experiments, the validation accuracy was higher when the training loss was lower, thus we regarded that the probability output of the training data represented the validation accuracy to some degree.\nFrom the left figure in Figure 2, we can see that the utility imbalance, i.e., the standard deviation of the utility of the subsets in the sample set, increases during training. Additionally, the mean of the utility increases (Figure 2, middle), which implies that the network is utilizing the weights more effectively and is becoming sensitive to the ablation as training proceeds. Because the utility given the amount of ablation (|δ|) can be interpreted as |∆f(W+δ)∆δ |, it is also inferred that the loss surface is becoming sharper. It corresponds with the previous work (refer to Ghorbani et al., 2019, Figure 3), where the scale of the eigenvalues of the Hessian with respect to the weights grows significantly over training. The increase in the average utility can be the reason for the increase in the utility imbalance – if statistically analyzed, the standard deviation of data grows larger when the data is scaled to be larger; or if intuitively interpreted, the network outputs with respect to the different ablation sets differ to a greater extent as the loss surface becomes sharper. 
Moreover, we can see that the minimum of the utility in the sample set also increases (Figure 2, right), which suggests that each of the weights is striving to be utilized more, rather than SGD purposely differentiating the utility among the weights. This still holds when we trained the network much longer, i.e., for 5000 epochs; the result is shown in Figure 11. We also conducted the experiments with the most popular architecture for classification, i.e., ResNet20 (He et al., 2016) with Batch Normalization (Ioffe & Szegedy, 2015), and with more advanced optimization techniques, i.e., momentum and weight decay. Please refer to Figure 12 and Figure 13 for the results. These results are also consistent with the above statements." }, { "heading": "2.2 THE FEATURES OF THE UTILITY IMBALANCE", "text": "In this section, we investigate how neural networks utilize weights under different conditions. We show the characteristics of the weight utilization in different-sized networks in Section 2.2.1, and in pruned networks in Section 2.2.2." }, { "heading": "2.2.1 UTILITY IMBALANCE AND NETWORK SIZE", "text": "Here, we investigate how different-sized networks utilize their weights. For a fair comparison, instead of adjusting the width of the layers, i.e., the number of output channels of the layers, we randomly pruned the network at initialization and re-scaled the variance of the remaining weights according to the original initialization method, i.e., He initialization (He et al., 2015). This was to control the number of feature maps and thus avoid any architectural bias. We compared ResNet20 (He et al., 2016) with Batch Normalization (Ioffe & Szegedy, 2015) against its sampled subnetworks whose numbers of weights are 0.5× and 0.25× that of the original network. The networks were trained by SGD with momentum (α = 0.9) and weight decay (λ = 10^−4). Other conditions are the same as in the experiment in Section 2.1.2.
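The random pruning with variance rescaling can be sketched as below. The exact rescaling rule is not spelled out in the text; dividing the surviving weights by sqrt(keep_ratio), which restores the He-init variance for each unit's reduced effective fan-in, is our assumption, and the function name is ours.

```python
import numpy as np

def random_prune_rescale(weight, keep_ratio, rng):
    """Randomly keep a `keep_ratio` fraction of the weights and rescale the
    survivors by 1/sqrt(keep_ratio). If `weight` was drawn with He init
    (std = sqrt(2 / fan_in)), the survivors then have std close to
    sqrt(2 / (keep_ratio * fan_in)), i.e., He init for the effective fan-in."""
    mask = rng.random(weight.shape) < keep_ratio
    return weight * mask / np.sqrt(keep_ratio), mask
```

Applied per layer, this produces the 0.5× and 0.25× subnetworks without changing the number of feature maps.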
We performed the ablation experiments under two different settings, where the number of ablated weights is (1) proportional to the number of weights in each network (denoted Ablation Ratio in Figure 3) or (2) an absolute number of weights applied equally to all networks (Ablation Number in Figure 3). Each ablation number in Figure 3 is expressed as a proportion of the number of weights in the 1× network. From Figure 3, we observe that no matter how large the network is, there is utility imbalance among the weights. Moreover, the three networks show similar responses when the weights are ablated in proportion to the network size (Figure 3 top), but quite different ones when ablated by the same number (Figure 3 bottom). This is quite remarkable in that a network utilizes a number of weights proportional to its size rather than the absolute number of weights required to map a certain function. Casper et al. mentioned this way of weight utilization at initialization (refer to Casper et al., 2020, Appendix B). Thus, it is inferred that the tendency endowed at initialization persists to the end of training.\nTaking a closer look, even when compared by ratio, where a similar tendency is shown, the actual number of weights still matters to a lesser degree. For example, the maximum weight utility among the random weight sets (Figure 4, left) is generally higher in the smaller network. This implies that the most utilized weights in the smaller network are more capable than those in the larger one, even if more weights are considered for the larger network. This is a counterexample to the conjecture in the Appendix of Frankle & Carbin (2018), which says the larger network shows superior performance because it has better winning tickets. If the larger network had a better ticket with the same amount (ratio) of weights, its most utilized weights should have been more critical than those of the smaller one.
Another interesting result comes from the average weight utilization (Figure 3, top middle). The output of the larger network reaches the same level of deviation from the original network only at a larger ablation ratio. This implies that the larger network relies on a larger portion of its weights. In addition, the larger network tends to utilize its weights less when ablated by the same ratio, as long as the network retains the ability to classify. This is in line with the findings of Casper et al., where the larger network was regarded as more robust to ablation (Casper et al., 2019). In summary, the larger network uses a larger portion of less-utilized weights. We conjecture this may be a reason for the larger network showing better performance. The tendency is also consistent with the maximum and the minimum of the utility, and is confirmed by the histogram of the utility values in Figure 4." }, { "heading": "2.2.2 UTILITY IMBALANCE IN PRUNED NETWORK", "text": "Here, we investigate the utility characteristics of a pruned network. The network architecture and optimization scheme are the same as in the experiments in Section 2.2.1. We used one-shot global magnitude pruning of individual weights, and only the weights in the convolution layers were pruned for convenience. We retrained the pruned network for another 200 epochs with the same learning rate schedule used for the pretraining (Renda et al., 2020). For comparison, we trained a small network from scratch with the same number of weights as that of the pruned network. In particular, we trained the small network for 400 epochs to match the total training epochs of the pruned network. The learning rate schedule was also the same as that of the pruned-and-retrained network, i.e., the pretraining schedule repeated twice. The utility statistics were acquired in the same way as in the experiments in Section 2.1.2.
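One-shot global magnitude pruning can be sketched as follows (our illustration; a single magnitude threshold is shared across all pruned layers, as "global" implies):

```python
import numpy as np

def global_magnitude_prune(weights, prune_ratio):
    """One-shot global magnitude pruning: zero out the `prune_ratio` fraction
    of weights with the smallest |w|, using one threshold shared across all
    layers (here, `weights` is a list of the convolution-layer arrays)."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, prune_ratio)
    masks = [np.abs(w) > threshold for w in weights]
    return [w * m for w, m in zip(weights, masks)], masks
```

Because the threshold is global, layers with many small-magnitude weights end up more sparse than layers with large-magnitude weights, unlike per-layer pruning.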
For the pretrained network, only the weights with magnitudes larger than the pruning threshold were considered for the subsets in the utility measure (Ws in Definition 1). This was to control the weights of interest, since only the weights larger in magnitude than the pruning threshold remain in the pruned network. Figure 5 shows the result for pruning ratio 0.5, and Figure 15 for 0.25 and 0.75.\nIn Figure 5, the pruned-and-retrained network exhibits utility characteristics similar to those of the pretrained network. This is surprising in that we used large learning rates for retraining the pruned network, which were large enough for the pretrained network to achieve 100% training accuracy from a random state. Although the statistics of the pruned-and-retrained network differ from those of the pretrained network as the pruning ratio increases (Figure 15), the former still resembles the latter, even when the pruning ratio is 0.75. From the results, we conjecture that the high performance of the pruned-and-retrained network comes from the weight utility characteristics endowed by the large pretrained network, e.g., relying on a larger portion of less-utilized weights, as in Section 2.2.1." }, { "heading": "3 LOSS SURFACE ANALYSIS OF THE PRUNE-RETRAIN MECHANISM", "text": "Canonical pruning methods begin with training a large neural network, followed by identifying and removing a portion of less important parameters, compressing the network yet preserving the accuracy. Such a pretrain-and-prune paradigm is motivated by implicit assumptions: (1) a large network trained with SGD optimization has a higher chance of converging to a minimum with a smaller generalization gap, and (2) the pruned-and-retrained network converges to a minimum closely related to that of the pretrained network.
In this regard, we investigate the relation between the pretrained, pruned, and retrained networks in this section.
Empirically, the generalization gap is known to be closely related to the geometry of the loss basin. Hochreiter & Schmidhuber (1995) first proposed that flat loss minima correspond to less overfitting. Jiang et al. (2020) showed that measures based on the flatness of minima, such as sharpness (Keskar et al., 2017), are highly correlated with generalization performance. Also, Li et al. (2018) visualized the loss minima of a family of Wide-ResNets and showed that larger networks converge to flatter minima. This is consistent with the empirical observation that larger neural networks have smaller generalization gaps.
Likewise, we investigate the relation between the networks by visualizing the loss surface. To the best of our knowledge, there has not been a clear verification of whether the assumptions behind this heuristic hold. In order to verify these assumptions and shed some light on what happens through pruning, we visualize the trajectory of the pruning mechanism on a low-dimensional loss surface.
Pruning and retraining methods. Typical pruning algorithms prune and retrain the network iteratively to achieve high sparsity and avoid a loss in accuracy. The iterative pruning cycle consists of two alternating phases: the pruning phase and the retraining phase. In the pruning phase, a portion of the weights or channels is removed. Weight pruning methods aim to identify the importance of individual weights of a neural network, while channel pruning methods target the channels of a convolutional layer. Therefore, channel pruning methods operate under a stronger constraint, which often results in greater damage to the network. In the retraining phase, the pruned network is commonly fine-tuned using a small and fixed learning rate, which is usually the last learning rate used at the pretraining stage. In addition, Renda et al. 
(2020) have shown that rewinding the learning rate schedule to an earlier time step can boost the validation accuracy of the retrained network.
Visualization technique. For a straightforward understanding, we plot the accuracy surface instead of the loss surface in this section. To visualize the points of interest, we follow the 2D planar loss visualization used in Garipov et al. (2018). We construct the 2D plane by affine combinations of three points in the weight space – the pretrained weight (W ∗), the pruned weight in the last iterative cycle (W p), and the final retrained weight (W r).
Experiment details. For the experiments, the CIFAR-10 dataset (Krizhevsky, 2009) and ResNet20 (He et al., 2016) were used. At the pretraining stage, we use the same training hyperparameters and data augmentation strategy described in He et al. (2016), with SGD momentum α = 0.9 and weight decay λ = 10^-4. The pretrained network was trained for 200 epochs, with an initial learning rate of 0.1 divided by 10 at 50% and 75% of the total training epochs. The number of retraining epochs was also 200 for each cycle. For comparison, small networks with the same number of weights as the corresponding pruned networks were trained from scratch. The small networks were obtained by randomly pruning the large network at initialization and rescaling the variance of the weights according to the original initialization scheme, i.e., He et al. (2015). To be fair, we used the same number of training epochs and the same learning rate schedule as those of the pretrained-pruned-retrained network (Liu et al., 2018) for training the small network.
For the weight pruning, we used the magnitude pruning method (Han et al., 2015). For the channel pruning, we used the criterion proposed in Network Slimming (Liu et al., 2017). To show a failure case of pruning, we applied the method much more aggressively (Section A.5.1). For each experiment, we performed iterative pruning over five cycles. 
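The 2D plane construction described above (affine combinations of W ∗, W p, and W r, following Garipov et al. (2018)) amounts to Gram-Schmidt on the two difference vectors, followed by evaluating the network on a grid in that plane. A minimal sketch; the grid extent and resolution are arbitrary choices here, and the actual loss/accuracy evaluation is omitted.

```python
import numpy as np

def plane_basis(w_star, w_p, w_r):
    """Orthonormal basis (u, v) of the 2D plane through three weight vectors,
    with w_star as the origin (Gram-Schmidt on the two difference vectors)."""
    u = w_p - w_star
    u_hat = u / np.linalg.norm(u)
    v = (w_r - w_star) - np.dot(w_r - w_star, u_hat) * u_hat
    return u_hat, v / np.linalg.norm(v)

def grid_points(w_star, u_hat, v_hat, extent=1.5, steps=5):
    """Weight vectors on a regular grid in the plane; each would be loaded into
    the network and evaluated to obtain one pixel of the surface plot."""
    alphas = np.linspace(-extent, extent, steps)
    return [w_star + a * u_hat + b * v_hat for a in alphas for b in alphas]

rng = np.random.default_rng(0)
w_star, w_p, w_r = (rng.standard_normal(100) for _ in range(3))
u_hat, v_hat = plane_basis(w_star, w_p, w_r)
pts = grid_points(w_star, u_hat, v_hat)
```

All three anchor points then lie exactly in the plotted plane, so the visualized surface passes through the pretrained, pruned, and retrained weights without projection error.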
We removed 30% and 20% of the weights per cycle for the weight and the channel pruning, respectively. Only the weights in the convolution layers were pruned for convenience. For retraining the pruned network, two learning rate schemes were used: fine-tuning and learning rate rewinding (Renda et al., 2020).
Results. In the case of the weight pruning, we consistently observe that the pretrained weight and the pruned-and-retrained weight are connected by a high-accuracy region on the accuracy surface (Figure 6, top). This indicates that the final weight is placed in the original loss basin (accuracy peak). Moreover, we find that the retrained weight vector remains in the same loss basin even with an aggressive retraining method, i.e., learning rate rewinding (Renda et al., 2020). Table 2 also shows that the pruned network has consistently better performance than the small network trained from scratch, which is in line with our observation in the visualization. This implies that the pruned-and-retrained network is likely to remain in the flat loss basin reached at the pretraining stage. This observation is in agreement with the assumption that the pretrain-and-prune practice takes advantage of the large network to attain higher generalization performance.
On the other hand, in the case of the aggressive channel pruning, we observed that the two points can lie in separate loss basins when retrained by learning rate rewinding (Figure 6, bottom right). We conjecture this came from the aggressive channel pruning and the large learning rate of the learning rate rewinding. It is probable that the pruned weight shifted too far from the pretrained weight W ∗, such that the retrained point W r is located in a different loss basin. To give further evidence, we compare the accuracy of the pruned-and-retrained network and that of the small network (Table 2). 
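The two retraining schemes used above differ only in their learning rate schedules. A sketch with the schedule reported in this paper (initial rate 0.1, divided by 10 at 50% and 75% of 200 epochs); the function names are ours:

```python
def pretrain_lr(epoch, total=200, base=0.1):
    """Step schedule used for pretraining: base rate divided by 10
    at 50% and 75% of the total training epochs."""
    if epoch < 0.5 * total:
        return base
    if epoch < 0.75 * total:
        return base / 10
    return base / 100

def finetune_lr(epoch, total=200, base=0.1):
    """Fine-tuning: retrain with the last (smallest) pretraining rate."""
    return pretrain_lr(total - 1, total, base)

def rewound_lr(epoch, total=200, base=0.1):
    """Learning rate rewinding (Renda et al., 2020): replay the schedule."""
    return pretrain_lr(epoch, total, base)
```

The contrast matters for the basin analysis above: rewinding revisits the large initial rate, which can carry the weights out of the pretrained basin, while fine-tuning keeps steps small enough to stay nearby (but may fail to recover from aggressive pruning).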
The result shows that the pruned-and-retrained network has a higher generalization gap than the small network, implying that it did not benefit from the pretrained network. Moreover, when using the fine-tuning method, the retrained network could not even recover from the pruning and remains in the low-accuracy region (Figure 6, bottom left). Overall, for the aggressive pruning, the initial assumptions do not seem to hold." }, { "heading": "4 DISCUSSION", "text": "A standard pruning framework, i.e., pretraining a large network and then pruning and retraining, was examined. To investigate the pretraining phase, we defined measures for the weight utility and the imbalance of the weight utility. The cause of the weight imbalance and the characteristics of the weight utility were discussed. For the pruning and retraining phase, the relation between the pretrained network and the pruned-and-retrained network was investigated using the accuracy surface and the validation accuracy of the networks. Various conditions were examined to verify the heuristic." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RELATED WORKS", "text": "Pruning methods. Network pruning attempts to compress pretrained deep neural networks by identifying and removing parameters that are estimated to be less important for the network. Unstructured pruning methods remove parameters at a fine-grained level by removing each of the weights individually. Han et al. (2015) proposed a method that assesses the importance of weights based on their norm, and also performs an additional fine-tuning step after pruning in order to compensate for the loss of parameters. Structured pruning, on the other hand, removes parameters at a larger, filter or kernel level. Liu et al. (2017) came up with a method that prunes channels with small BatchNorm (Ioffe & Szegedy, 2015) scaling parameters, which have been pushed towards zero with L1 regularization. 
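The Network Slimming criterion just described scores each channel by the magnitude of its BatchNorm scaling factor γ and removes the globally smallest ones. A schematic of the selection step with synthetic γ values (this is not the authors' code):

```python
import numpy as np

def slim_channels(gammas_per_layer, prune_ratio):
    """Network-Slimming-style selection: drop the `prune_ratio` fraction of
    channels with the smallest BatchNorm scale |gamma| across all layers.
    Returns a boolean keep-mask per layer."""
    all_g = np.concatenate([np.abs(g) for g in gammas_per_layer])
    threshold = np.sort(all_g)[int(prune_ratio * all_g.size)]
    return [np.abs(g) >= threshold for g in gammas_per_layer]

rng = np.random.default_rng(0)
gammas = [rng.random(16), rng.random(32)]  # synthetic BN scales for two layers
keep = slim_channels(gammas, prune_ratio=0.25)
kept = sum(int(m.sum()) for m in keep)
```

Because the threshold is global, a layer whose γ values are uniformly small can lose most of its channels at once, which is one way the aggressive variant used in Section 3 can damage the network.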
Recent methods (You et al., 2019; He et al., 2019) focus on pruning networks with skip-connections (He et al., 2016). He et al. (2019) prune 53.4% of parameters with only a slight accuracy drop. In this work, we want to reveal some of the reasons why deep neural networks can be successfully pruned.
Loss surface. A line of work focuses on the loss surface of neural networks and its properties. Garipov et al. (2018) introduce the concept of mode connectivity: simple low-loss pathways between local minima can be discovered. They demonstrate ways of finding and visualizing the loss landscape along such pathways on various modern architectures. Evci et al. (2019) and Frankle et al. (2019) analyze sparse neural networks in terms of mode connectivity. Frankle et al. (2019) discovered that the existence of linear paths is a key indicator of whether lottery ticket networks (Frankle & Carbin, 2018) can be discovered. On the other hand, Evci et al. (2019) found that linear paths
However, current methods of finding lottery tickets are only restricted to certain settings, i.e, learning rate warmup and iterative pruning. Moreover, the mechanism of the lottery tickets is still an open area of research. Elesedy et al. (2020) is a work that provides an insight to the mechanism with linear models, where they reformulated the iterative magnitude pruning as a process of feature alignment.\nLearning rate rewinding. Existing literature on pruning adopt a retraining phase after pruning to compensate for the loss of the network. The most commonly used method is fine-tuning, which is a process of additional training with a small and fixed learning rate (Han et al., 2015; Liu et al., 2018), i.e., typically the last learning rate used for pretraining. Meanwhile, Renda et al. (2020) suggested a new learning rate scheme which is referred to as learning rate rewinding. Unlike fine-tuning, they propose to rewind the learning rate schedule to the earlier phase in pretaining. In addition, they explored whether the weight rewinding (Frankle & Carbin, 2018) is beneficial. On various datasets such as CIFAR-10, ImageNet, WMT-16 dataset, they verified that both of the rewinding methods were better than fine-tuning." }, { "heading": "A.2 ADDITIONAL MATERIAL FOR SECTION 2.1", "text": "" }, { "heading": "A.2.1 THE ARCHITECTURE OF THE VANILLA CNN", "text": "The vanilla CNN used for the experiment was composed of (conv3 3x16)-(conv3 16x16)-(conv3 16x16)-(maxpool)-(conv3 16x32)-(conv3 32x32)-(maxpool)(conv3 32x64)-(conv3 64x64)-(global average pooling)-(fc 64x10). A convolution layer with a spatial size of 3× 3 is indicated as ’conv3’. And ’maxpool’ and ’fc’ indicate a max pooling layer and a fully-connected layer, respectively. There is a Rectified Linear Units (ReLU) (Glorot et al., 2011) after each convolution layer." 
}, { "heading": "A.2.2 THE TRAINING ACCURACY FOR EACH ABLATION IN FIGURE 2", "text": "" }, { "heading": "A.2.3 THE DETAILED FIGURES FOR FIGURE 2", "text": "Since the data do not clearly show the tendency when combined, we present the detailed figures for Figure 2 in Figures 8, 9, and 10. The figures are ordered with respect to the corresponding ablation ratio of each datum, i.e., 0.01, 0.05, 0.075, 0.1, 0.15, 0.32. In the figures, the statements in Section 2.1.2 generally hold." }, { "heading": "A.2.4 THE CHARACTERISTICS OF THE WEIGHT UTILITY DURING OPTIMIZATION IN VARYING CONDITIONS.", "text": "The longer training epochs With the same setting as the experiment in Section 2.1, we trained the network much longer than is conventional, i.e., for 5000 epochs, which is about 25× longer than typical settings. The initial learning rate of 0.1 was reduced by 10× at 50% and 75% of the total training epochs.
An experiment with the advanced settings Here, the experiment was done with the advanced settings. We used ResNet20 (He et al., 2016) with Batch Normalization (Ioffe & Szegedy, 2015), trained by SGD with momentum (α = 0.9) and L2 weight decay (λ = 10^-4). Other settings are the same as in the experiment in Section 2.1.
The longer training epochs with the advanced settings Here, the experiment was done with the advanced settings and the longer training schedule. We used ResNet20 (He et al., 2016) with Batch Normalization (Ioffe & Szegedy, 2015), trained by SGD with momentum (α = 0.9) and L2 weight decay (λ = 10^-4). The number of training epochs is 5000. The initial learning rate of 0.1 was reduced by 10× at 50% and 75% of the total training epochs. Other settings are the same as in the experiment in Section 2.1." }, { "heading": "A.3 ADDITIONAL FIGURES FOR SECTION 2.2.1", "text": "" }, { "heading": "A.3.1 THE HISTOGRAMS OF THE WEIGHT UTILITY FOR THE DIFFERENT-SIZED NETWORKS.", "text": "Here, we show the histograms of the weight utility for the different-sized networks. 
The figures are ordered by the corresponding ablation ratio, i.e., 0.01, 0.05, 0.075, 0.1, 0.15, 0.32." }, { "heading": "A.4 ADDITIONAL FIGURES FOR SECTION 2.2.2", "text": "" }, { "heading": "A.4.1 THE CHARACTERISTICS OF THE WEIGHT UTILITY IN THE PRUNED-AND-RETRAINED NETWORK WITH VARYING PRUNING RATIO", "text": "Here, we show the characteristics of the weight utility in the pruned-and-retrained network with varying pruning ratio." }, { "heading": "A.4.2 THE TRAINING LOSS AND THE VALIDATION ACCURACY FOR THE EXPERIMENTS.", "text": "Here, we show the training loss and the validation accuracy for the experiments used for Figure 5 and Figure 15." }, { "heading": "A.5 ADDITIONAL MATERIAL FOR SECTION 3", "text": "" }, { "heading": "A.5.1 THE DIFFERENCE BETWEEN NETWORK SLIMMING AND OUR CHANNEL PRUNING METHOD", "text": "Here, we describe the difference between network slimming (Liu et al., 2017) and our channel pruning method. Most importantly, the sparsity constraint was not used, in order to allow a more fundamental analysis. The pruning ratio constraint for each layer was also ignored (refer to Liu et al., 2017, Section 4.5), so that our pruned-and-retrained network often collapsed when the small learning rate was used for retraining (fine-tuning; please refer to Table 2). Also, we pruned 67% of the channels, which is far above the recommended level (refer to Liu et al., 2017, caption of Figure 1). Nor did we use the best hyperparameters for the method, e.g., for the scaling factors of the Batch Normalization layers (Ioffe & Szegedy, 2015) we used the conventional value of 1, unlike the value of 0.5 used in the original method." }, { "heading": "A.5.2 WEIGHT TRAJECTORY ON LOSS SURFACE OVER ITERATIVE PRUNING PROCESS", "text": "" } ]
2020
null
SP:a1269282da0327aa083fa21ef352a5451667f925
[ "This paper applies mixup (Zhang et al., 2018) to augment training data to improve knowledge distillation in NLP tasks. Mixup was originally proposed as an augmentation for continuous data. To apply mixup to textual data, this paper applies mixup to the word/token embeddings instead of the tokens themselves. Some theoretical analysis has been done, and the experimental results show improved metrics over baseline methods such as DistilBERT.", "The paper proposes combining the MixUp data augmentation method with teacher-student distillation to improve the fine-tuned performance of BERT on benchmark NLP tasks (GLUE). The problem is important, well-motivated, and of interest to a broad base of NLP researchers and practitioners. The paper is clear and generally well-written, although the idea itself is not especially novel (somewhat of a low-hanging fruit). From the experimental results, the method improves upon baselines, and so the real-world impact could be high, especially given its simple implementation." ]
Large-scale language models have recently demonstrated impressive empirical performance. Nevertheless, the improved results are attained at the price of bigger models, more power consumption, and slower inference, which hinder their applicability to low-resource (both memory and computation) platforms. Knowledge distillation (KD) has been demonstrated as an effective framework for compressing such big models. However, large-scale neural network systems are prone to memorize training instances, and thus tend to make inconsistent predictions when the data distribution is altered slightly. Moreover, the student model has few opportunities to request useful information from the teacher model when there is limited task-specific data available. To address these issues, we propose MixKD, a data-agnostic distillation framework that leverages mixup, a simple yet efficient data augmentation approach, to endow the resulting model with stronger generalization ability. Concretely, in addition to the original training examples, the student model is encouraged to mimic the teacher’s behavior on the linear interpolation of example pairs as well. We prove from a theoretical perspective that under reasonable conditions MixKD gives rise to a smaller gap between the generalization error and the empirical error. To verify its effectiveness, we conduct experiments on the GLUE benchmark, where MixKD consistently leads to significant gains over the standard KD training, and outperforms several competitive baselines. Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach.
[ { "affiliations": [], "name": "Kevin J Liang" }, { "affiliations": [], "name": "Weituo Hao" }, { "affiliations": [], "name": "Dinghan Shen" }, { "affiliations": [], "name": "Yufan Zhou" }, { "affiliations": [], "name": "Weizhu Chen" }, { "affiliations": [], "name": "Changyou Chen" }, { "affiliations": [], "name": "Lawrence Carin" } ]
[ { "authors": [ "Hangbo Bao", "Li Dong", "Furu Wei", "Wenhui Wang", "Nan Yang", "Xiaodong Liu", "Yu Wang", "Songhao Piao", "Jianfeng Gao", "Ming Zhou" ], "title": "UniLMv2: Pseudo-masked Language Models for Unified Language Model Pre-training", "venue": "arXiv preprint arXiv:2002.12804,", "year": 2020 }, { "authors": [ "J. Baxter" ], "title": "A Model of Inductive Bias Learning", "venue": "Journal of Artificial Intelligence Research,", "year": 2000 }, { "authors": [ "Luisa Bentivogli", "Peter Clark", "Ido Dagan", "Danilo Giampiccolo" ], "title": "The Fifth PASCAL Recognizing Textual Entailment", "venue": "Challenge. TAC,", "year": 2009 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jiaao Chen", "Zichao Yang", "Diyi Yang" ], "title": "MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification", "venue": "Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Urvashi Khandelwal", "Christopher D Manning", "Quoc V Le" ], "title": "BAM! 
Born-again Multi-task Networks for Natural Language Understanding", "venue": null, "year": 1907 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V Le", "Christopher D Manning" ], "title": "ELECTRA: Pre-training Text Encoders as Discriminators Rather than Generators", "venue": "arXiv preprint arXiv:2003.10555,", "year": 2020 }, { "authors": [ "Ido Dagan", "Oren Glickman", "Bernardo Magnini" ], "title": "The PASCAL Recognising Textual Entailment Challenge", "venue": "Machine Learning Challenges Workshop,", "year": 2005 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "William B Dolan", "Chris Brockett" ], "title": "Automatically Constructing a Corpus of Sentential Paraphrases", "venue": "International Workshop on Paraphrasing,", "year": 2005 }, { "authors": [ "Danilo Giampiccolo", "Bernardo Magnini", "Ido Dagan", "Bill Dolan" ], "title": "The Third PASCAL Recognizing Textual Entailment Challenge", "venue": "ACL-PASCAL Workshop on Textual Entailment and Paraphrasing,", "year": 2007 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Augmenting Data with Mixup for Sentence Classification: An Empirical Study", "venue": "arXiv preprint arXiv:1905.08941,", "year": 2019 }, { "authors": [ "R Bar Haim", "Ido Dagan", "Bill Dolan", "Lisa Ferro", "Danilo Giampiccolo", "Bernardo Magnini", "Idan Szpektor" ], "title": "The Second PASCAL Recognising Textual Entailment Challenge", "venue": "PASCAL Challenges Workshop on Recognising Textual Entailment,", "year": 2006 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin Dogus Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "AugMix: A Simple Method to Improve Robustness and Uncertainty under Data Shift", "venue": "International Conference on Learning 
Representations,", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the Knowledge in a Neural Network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Xiaoqi Jiao", "Yichun Yin", "Lifeng Shang", "Xin Jiang", "Xiao Chen", "Linlin Li", "Fang Wang", "Qun Liu" ], "title": "TinyBERT: Distilling BERT for Natural Language Understanding", "venue": null, "year": 1909 }, { "authors": [ "Mandar Joshi", "Danqi Chen", "Yinhan Liu", "Daniel S Weld", "Luke Zettlemoyer", "Omer Levy" ], "title": "SpanBERT: Improving Pre-training by Representing and Predicting Spans", "venue": "Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet Classification with Deep Convolutional Neural Networks", "venue": "Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Ashutosh Kumar", "Satwik Bhattamishra", "Manik Bhandari", "Partha Talukdar" ], "title": "Submodular Optimization-based Diverse Paraphrasing and Its Effectiveness in Data Augmentation", "venue": "North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Saehyung Lee", "Hyungyu Lee", "Sungroh Yoon" ], "title": "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization", "venue": "Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Ves Stoyanov", "Luke Zettlemoyer" ], "title": "BART: Denoising Sequence-to-sequence Pretraining for Natural Language Generation, Translation, and Comprehension", "venue": null, "year": 1910 }, { "authors": [ "Linqing Liu", "Huan Wang", "Jimmy Lin", "Richard Socher", "Caiming Xiong" ], "title": "Attentive Student Meets Multi-task Teacher: Improved Knowledge Distillation for Pretrained Models", 
"venue": "arXiv preprint arXiv:1911.03588,", "year": 2019 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Multi-task Deep Neural Networks for Natural Language Understanding", "venue": "arXiv preprint arXiv:1901.11504,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing Data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Seyed-Iman Mirzadeh", "Mehrdad Farajtabar", "Ang Li", "Nir Levine", "Akihiro Matsukawa", "Hassan Ghasemzadeh" ], "title": "Improved Knowledge Distillation via Teacher Assistant", "venue": null, "year": 1902 }, { "authors": [ "Subhabrata Mukherjee", "Ahmed Hassan Awadallah" ], "title": "Distilling Transformers into Simple Neural Networks with Unlabeled Transfer Data", "venue": "arXiv preprint arXiv:1910.01769,", "year": 2019 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling. 
North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yanru Qu", "Dinghan Shen", "Yelong Shen", "Sandra Sajeev", "Jiawei Han", "Weizhu Chen" ], "title": "CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding", "venue": "arXiv preprint arXiv:2010.08670,", "year": 2020 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "venue": "Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "DistilBERT, A Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter", "venue": null, "year": 1910 }, { "authors": [ "Dinghan Shen", "Mingzhi Zheng", "Yelong Shen", "Yanru Qu", "Weizhu Chen" ], "title": "A Simple but Tough-toBeat Data Augmentation Approach for Natural Language Understanding and Generation", "venue": null, "year": 2009 }, { "authors": [ "Patrice Y Simard", "Yann A LeCun", "John S Denker", "Bernard Victorri" ], "title": "Transformation Invariance in Pattern Recognition–Tangent Distance and Tangent Propagation", "venue": "Neural Networks: Tricks of the Trade,", "year": 1998 }, { "authors": [ "PY Simard", "D Steinkraus", "JC Platt" ], "title": "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis", "venue": "International Conference on Document Analysis and Recognition,", "year": 2003 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Y Ng", "Christopher Potts" ], "title": "Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank", "venue": "Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Siqi Sun", "Yu Cheng", "Zhe Gan", "Jingjing Liu" ], 
"title": "Patient Knowledge Distillation for BERT Model Compression", "venue": "arXiv preprint arXiv:1908.09355,", "year": 2019 }, { "authors": [ "Yu Sun", "Shuohuan Wang", "Yukun Li", "Shikun Feng", "Xuyi Chen", "Han Zhang", "Xin Tian", "Danxiang Zhu", "Hao Tian", "Hua Wu" ], "title": "ERNIE: Enhanced Representation through Knowledge Integration", "venue": "arXiv preprint arXiv:1904.09223,", "year": 2019 }, { "authors": [ "Zhiqing Sun", "Hongkun Yu", "Xiaodan Song", "Renjie Liu", "Yiming Yang", "Denny Zhou" ], "title": "MobileBERT: A Compact Task-agnostic BERT for Resource-limited Devices", "venue": "arXiv preprint arXiv:2004.02984,", "year": 2020 }, { "authors": [ "Raphael Tang", "Yao Lu", "Linqing Liu", "Lili Mou", "Olga Vechtomova", "Jimmy Lin" ], "title": "Distilling Taskspecific Knowledge from BERT into Simple Neural Networks", "venue": null, "year": 1903 }, { "authors": [ "Iulia Turc", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Well-read Students Learn Better: On the Importance of Pre-training Compact Models", "venue": null, "year": 1908 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is All You Need", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "David LopezPaz", "Yoshua Bengio" ], "title": "Manifold Mixup: Better Representations by Interpolating Hidden States", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Juho Kannala", "Yoshua Bengio", "David Lopez-Paz" ], "title": "Interpolation Consistency Training for Semi-supervised Learning", "venue": "International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Vikas Verma", "Meng Qu", "Alex Lamb", "Yoshua Bengio", "Juho Kannala", "Jian 
Tang" ], "title": "GraphMix: Improved Training of GNNs for Semi-Supervised Learning", "venue": "arXiv preprint arXiv:1909.11715,", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "GLUE: A Multi-task Benchmark and Analysis Platform for Natural Language Understanding", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dongdong Wang", "Yandong Li", "Liqiang Wang", "Boqing Gong" ], "title": "Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model", "venue": "Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jason Wei", "Kai Zou" ], "title": "EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks", "venue": "arXiv preprint arXiv:1901.11196,", "year": 2019 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R Bowman" ], "title": "A Broad-coverage Challenge Corpus for Sentence Understanding through Inference. 
North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018", "venue": null, "year": 2018 }, { "authors": [ "Xing Wu", "Shangwen Lv", "Liangjun Zang", "Jizhong Han", "Songlin Hu" ], "title": "Conditional BERT Contextual Augmentation", "venue": "International Conference on Computational Science,", "year": 2019 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised Data Augmentation for Consistency Training", "venue": null, "year": 1904 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adams Wei Yu", "David Dohan", "Minh-Thang Luong", "Rui Zhao", "Kai Chen", "Mohammad Norouzi", "Quoc V Le" ], "title": "QANet: Combining Local Convolution with Global Self-attention for Reading Comprehension", "venue": "arXiv preprint arXiv:1804.09541,", "year": 2018 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features", "venue": "International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond Empirical Risk Minimization", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sanqiang Zhao", "Raghav Gupta", "Yang Song", "Denny Zhou" ], "title": "Extreme Language Model Compression with Optimal Subwords and Shared Projections", "venue": null, "year": 1909 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent language models (LM) pre-trained on large-scale unlabeled text corpora in a self-supervised manner have significantly advanced the state of the art across a wide variety of natural language processing (NLP) tasks (Devlin et al., 2018; Liu et al., 2019c; Yang et al., 2019; Joshi et al., 2020; Sun et al., 2019b; Clark et al., 2020; Lewis et al., 2019; Bao et al., 2020). After the LM pretraining stage, the resulting parameters can be fine-tuned to different downstream tasks. While these models have yielded impressive results, they typically have millions, if not billions, of parameters, and thus can be very expensive from storage and computational standpoints. Additionally, during deployment, such large models can require a lot of time to process even a single sample. In settings where computation may be limited (e.g. mobile, edge devices), such characteristics may preclude such powerful models from deployment entirely.\nOne promising strategy to compress and accelerate large-scale language models is knowledge distillation (Zhao et al., 2019; Tang et al., 2019; Sun et al., 2020). The key idea is to train a smaller model (a “student”) to mimic the behavior of the larger, stronger-performing, but perhaps less practical model (the “teacher”), thus achieving similar performance with a faster, lighter-weight model. A simple but powerful method of achieving this is to use the output probability logits produced by the teacher model as soft labels for training the student (Hinton et al., 2015). 
With higher entropy than one-hot labels, these soft labels contain more information for the student model to learn from.\n∗Equal contribution\nPrevious efforts on distilling large-scale LMs mainly focus on designing better training objectives, such as matching intermediate representations (Sun et al., 2019a; Mukherjee & Awadallah, 2019), learning multiple tasks together (Liu et al., 2019a), or leveraging the distillation objective during the pre-training stage (Jiao et al., 2019; Sanh et al., 2019). However, much less effort has been made to enrich task-specific data, a potentially vital component of the knowledge distillation procedure. In particular, tasks with fewer data samples provide less opportunity for the student model to learn from the teacher. Even with a well-designed training objective, the student model is still prone to overfitting, despite effectively mimicking the teacher network on the available data.\nIn response to these limitations, we propose improving the value of knowledge distillation by using data augmentation to generate additional samples from the available task-specific data. These augmented samples are further processed by the teacher network to produce additional soft labels, providing the student model more data to learn from a large-scale LM. Intuitively, this is akin to a student learning more from a teacher by asking more questions to further probe the teacher’s answers and thoughts. In particular, we demonstrate that mixup (Zhang et al., 2018) can significantly improve knowledge distillation’s effectiveness, and we show with a theoretical framework why this is the case. We call our framework MixKD.\nWe conduct experiments on 6 GLUE datasets (Wang et al., 2019) across a variety of task types, demonstrating that MixKD significantly outperforms knowledge distillation (Hinton et al., 2015) and other previous methods that compress large-scale language models. 
In particular, we show that our method is especially effective when the number of available task data samples is small, substantially improving the potency of knowledge distillation. We also visualize representations learned with and without MixKD to show the value of interpolated distillation samples, perform a series of ablation and hyperparameter sensitivity studies, and demonstrate the superiority of MixKD over other BERT data augmentation strategies." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 MODEL COMPRESSION", "text": "Compressing large-scale language models, such as BERT, has attracted significant attention recently. Knowledge distillation has been demonstrated as an effective approach, which can be leveraged during both the pre-training and task-specific fine-tuning stages. Prior research efforts mainly focus on improving the training objectives to benefit the distillation process. Specifically, Turc et al. (2019) advocate that task-specific knowledge distillation can be improved by first pre-training the student model. It is shown by Clark et al. (2019) that a multi-task BERT model can be learned by distilling from multiple single-task teachers. Liu et al. (2019b) propose learning a stronger student model by distilling knowledge from an ensemble of BERT models. Patient knowledge distillation (PKD), introduced by Sun et al. (2019a), encourages the student model to mimic the teacher’s intermediate layers in addition to output logits. DistilBERT (Sanh et al., 2019) reduces the depth of BERT model by a factor of 2 via knowledge distillation during the pre-training stage. In this work, we evaluate MixKD on the case of task-specific knowledge distillation. Notably, it can be extended to the pre-training stage as well, which we leave for future work. Moreover, our method can be flexibly integrated with different KD training objectives (described above) to obtain even better results. 
However, we utilize the BERT-base model as the testbed in this paper without loss of generality." }, { "heading": "2.2 DATA AUGMENTATION IN NLP", "text": "Data augmentation (DA) has been studied extensively in computer vision as a powerful technique to incorporate prior knowledge of invariances and improve the robustness of learned models (Simard et al., 1998; 2003; Krizhevsky et al., 2012). Recently, it has also been applied and shown effective on natural language data. Many approaches can be categorized as label-preserving transformations, which essentially produce neighbors around a training example that maintain its original label. For example, EDA (Wei & Zou, 2019) proposes using various rule-based operations such as synonym replacement, word insertion, swap or deletion to obtain augmented samples. Back-translation (Yu et al., 2018; Xie et al., 2019) is another popular approach belonging to this type, which relies on pre-trained translation models. Additionally, methods based on paraphrase generation have also been leveraged from the data augmentation perspective (Kumar et al., 2019). On the other hand, label-altering techniques like mixup (Zhang et al., 2018) have also been proposed for language (Guo et al., 2019; Chen et al., 2020), producing interpolated inputs and labels for the models to predict. The proposed MixKD framework leverages the ability of mixup to help the student learn more information from the teacher. It is worth noting that MixKD can be combined with arbitrary label-preserving DA modules. Back-translation is employed as a special case here, and we believe other advanced label-preserving transformations developed in the future can benefit the MixKD approach as well."
}, { "heading": "2.3 MIXUP", "text": "Mixup (Zhang et al., 2018) is a popular data augmentation strategy to increase model generalizability and robustness by training on convex combinations of pairs of inputs and labels (xi, yi) and (xj , yj):\nx′ = λxi + (1− λ)xj (1) y′ = λyi + (1− λ)yj (2)\nwith λ ∈ [0, 1] and (x′, y′) being the resulting virtual training example. This concept of interpolating samples was later generalized with Manifold mixup (Verma et al., 2019a) and also found to be effective in semi-supervised learning settings (Verma et al., 2019b;c; Berthelot et al., 2019b;a). Other strategies include mixing together samples resulting from chaining together other augmentation techniques (Hendrycks et al., 2020), or replacing linear interpolation with the cutting and pasting of patches (Yun et al., 2019)." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 PRELIMINARIES", "text": "In NLP, an input sample i is often represented as a vector of tokens wi = {wi,1, wi,2, ..., wi,T }, with each token wi,t ∈ RV a one-hot vector often representing words (but also possibly subwords, punctuation, or special tokens) and V being the vocabulary size. These discrete tokens are then mapped to word embeddings xi = {xi,1, xi,2, ..., xi,T }, which serve as input to the machine learning model f . For supervised classification problems, a one-hot label yi ∈ RC indicates the ground-truth class of xi out of C possible classes. The parameters θ of f are optimized with some form of stochastic gradient descent so that the output of the model f(xi) ∈ RC is as close to yi as possible, with cross-entropy as the most common loss function:\nLMLE = − 1\nn n∑ i yi · log(f(xi)) (3)\nwhere n is the number of samples, and · is the dot product." }, { "heading": "3.2 KNOWLEDGE DISTILLATION FOR BERT", "text": "Consider two models f and g parameterized by θT and θS , respectively, with |θT | |θS |. 
Given enough training data and sufficient optimization, f is likely to yield better accuracy than g, due to higher modeling capacity, but may be too bulky or slow for certain applications. Being smaller in size, g is more likely to satisfy operational constraints, but its weaker performance can be seen as a disadvantage. To improve g, we can use the output prediction f(xi) on input xi as extra supervision for g to learn from, seeking to match g(xi) with f(xi). Given these roles, we refer to g as the student model and f as the teacher model.\nWhile there are a number of recent large-scale language models driving the state of the art, we focus here on BERT (Devlin et al., 2018) models. Following Sun et al. (2019a), we use the notation BERTk to indicate a BERT model with k Transformer (Vaswani et al., 2017) layers. While powerful, BERT models also tend to be quite large; for example, the default bert-base-uncased (BERT12) has∼110M parameters. Reducing the number of layers (e.g. using BERT3) makes such models significantly more portable and efficient, but at the expense of accuracy. With a knowledge distillation set-up, however, we aim to reduce this loss in performance." }, { "heading": "3.3 MIXUP DATA AUGMENTATION FOR KNOWLEDGE DISTILLATION", "text": "While knowledge distillation can be a powerful technique, if the size of the available data is small, then the student has only limited opportunities to learn from the teacher. This may make it much harder for knowledge distillation to close the gap between student and teacher model performance. To correct this, we propose using data augmentation for knowledge distillation. 
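The teacher-matching step described in Section 3.2 (training g so that g(x) is close to f(x)) can be sketched with mean squared error between the two models' output distributions as one possible choice of distance; the helper below is an illustrative stand-in, not the paper's implementation:

```python
def distill_mse(teacher_out, student_out):
    """Mean squared error between teacher and student predictions,
    one common choice for the distillation distance d(f(x), g(x))."""
    assert len(teacher_out) == len(student_out)
    return sum((t - s) ** 2
               for t, s in zip(teacher_out, student_out)) / len(teacher_out)

# the loss is positive whenever the student deviates from the teacher,
# and vanishes exactly when it reproduces the teacher's prediction
loss = distill_mse([0.9, 0.1], [0.5, 0.5])
```

Minimizing this quantity over the training inputs pushes the student's predictions toward the teacher's, which is the core of the distillation objective.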
While data augmentation (Yu et al., 2018; Xie et al., 2019; Yun et al., 2019; Kumar et al., 2019; Hendrycks et al.,\n2020; Shen et al., 2020; Qu et al., 2020) is a commonly used technique across machine learning for increasing training samples, robustness, and overall performance, a limited modeling capacity constrains the representations the student is capable of learning on its own. Instead, we propose using the augmented samples to further query the teacher model, whose large size often allows it to learn more powerful features.\nWhile many different data augmentation strategies have been proposed for NLP, we focus on mixup (Zhang et al., 2018) for generating additional samples to learn from the teacher. Mixup’s vicinal risk minimization tends to result in smoother decision boundaries and better generalization, while also being cheaper to compute than methods such as backtranslation (Yu et al., 2018; Xie et al., 2019). Mixup was initially proposed for continuous data, where interpolations between data points remain in-domain; its efficacy was demonstrated primarily on image data, but examples in speech recognition and tabular data were also shown to demonstrate generality.\nDirectly applying mixup to NLP is not quite as straightforward as it is for images, as language commonly consists of sentences of variable length, each comprised of discrete word tokens. Since performing mixup directly on the word tokens doesn’t result in valid language inputs, we instead perform mixup on the word embeddings at each time step xi,t (Guo et al., 2019). This can be interpreted as a special case of Manifold mixup Verma et al. (2019a), where the mixing layer is set to the embedding layer. In other words, mixup samples are generated as:\nx′i,t = λxi,t + (1− λ)xj,t ∀t (4) y′i = λyi + (1− λ)yj (5)\nwith λ ∈ [0, 1]; random sampling of λ from a Uniform or Beta distribution are common choices. Note that we index the augmented sample with i regardless of the value of λ. 
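The token-wise interpolation of equations (4)-(5) can be sketched as follows, with embeddings represented as lists of vectors; when the two sequences differ in length, one simple treatment is to interpolate the shorter sequence's missing positions against zero padding (names and representation are ours):

```python
def mixup_embeddings(seq_i, seq_j, lam):
    """Mix two sequences of word embeddings time step by time step (eq. 4).
    The shorter sequence is zero-padded to the longer one's length."""
    dim = len((seq_i or seq_j)[0])
    T = max(len(seq_i), len(seq_j))
    pad = [0.0] * dim
    a = seq_i + [pad] * (T - len(seq_i))
    b = seq_j + [pad] * (T - len(seq_j))
    return [[lam * u + (1 - lam) * v for u, v in zip(ea, eb)]
            for ea, eb in zip(a, b)]

# a 1-token sequence mixed with a 2-token one: the extra token
# is interpolated against the zero-padding vector
mixed = mixup_embeddings([[1.0, 1.0]], [[3.0, 3.0], [2.0, 2.0]], lam=0.5)
# mixed == [[2.0, 2.0], [1.0, 1.0]]
```

Because the mixing happens in the continuous embedding space rather than on discrete tokens, every interpolated sequence is a valid input to the model.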
Sentence length variability can be mitigated by grouping mixup pairs by length. Alternatively, padding is a common technique for setting a consistent input length across samples; thus, if x(i) contains more word tokens than x(j), then the extra word embeddings are mixed up with zero paddings. We find this approach to be effective, while also being much simpler to implement.\nWe query the teacher model with the generated mixup sample x′i, producing output prediction f(x ′ i). The student is encouraged to imitate this prediction on the same input, by minimizing the objective:\nLTMKD = d(f(x′i), g(x′i)) (6) where d(·, ·) is a distance metric for distillation, with temperature-adjusted cross-entropy and mean square error (MSE) being common choices.\nSince we have the mixup samples already generated (with an easy-to-generate interpolated pseudolabel y′i), we can also train the student model on these augmented data samples in the usual way, with a cross-entropy objective:\nLSM = − 1\nn n∑ i y′i · log(g(x′i)) (7)\nOur final objective for MixKD is a sum of the original data cross-entropy loss, student cross-entropy loss on the mixup samples, and knowledge distillation from the teacher on the mixup samples:\nL = LMLE + αSMLSM + αTMKDLTMKD (8) where αSM and αTMKD are hyperparameters weighting the loss terms." }, { "heading": "3.4 THEORETICAL ANALYSIS", "text": "We develop a theoretical foundation for the proposed framework. We wish to prove that by adopting data augmentation for knowledge distillation, one can achieve i) a smaller gap between generalization error and empirical error, and ii) better generalization.\nTo this end, assume the original training data {xi}ni=1 are sampled i.i.d. from the true data distribution p(x), and the augmented data distribution by mixup is denoted as q(x) (apparently p and q are dependent). Let f be the teacher function, and g ∈ G be the learnable student function. Denote the loss function to learn g as l(·, ·)1. The population risk w.r.t. 
p(x) is defined as R(f, g, p) =\n1This is essentially the same as L in equation 8. We use a different notation l(f(x), g(x)) to explicitly spell out the two data-wise arguments f(x) and g(x).\nEx∼p(x) [l(f(x), g(x))], and the empirical risk as Remp(f, g, {xi}ni=1) = 1\nn\n∑n i=1 l(f(xi), g(xi)).\nA classic statement for generalization is the following: with at least 1− δ probability, we have\nR(f, gp, p)−Remp(f, gp, {xi}ni=1) ≤ , (9)\nwhere > 0, and we have used gp to indicate that the function is learned based on p(x). Note different training data would correspond to a different error in equation 9. We use p to denote the minimum value over all ’s satisfying equation 9. Similarly, we can replace p with q, and {xi}ni=1 with {xi}ai=1 ∪ {x′i}bi=1 in equation 9 in the data-augmentation case. In this case, the student function is learned based on both the training data and augmented data, which we denote as g∗. Similarly, we also have a corresponding minimum error, which we denote as ∗. Consequently, our goal of better generalization corresponds to proving R(f, g∗, p) ≤ R(f, gp, p), and the goal of a smaller gap corresponds to proving ∗ ≤ p. In our theoretical results, we will give conditions when these goals are achievable. First, we consider the following three cases about the joint data X , {xi}ai=1 ∪ {x′i}bi=1 and the function class G:\n• Case 1: There exists a distribution p̃ such that X are i.i.d. samples from it2; G is a finite set. • Case 2: There exists p̃ such that X are i.i.d. samples from it; G is an infinite set. • Case 3: There does not exist a distribution p̃ such that X are i.i.d. samples from it.\nOur theoretical results are summarized in Theorems 1-3, which state that with enough augmented data, our method can achieve smaller generalization errors. Proofs are given in the Appendix.\nTheorem 1 Assume the loss function l(·, ·) is upper bounded by M > 0. 
Under Case 1, there exists a constant c > 0 such that if\nb ≥ M 2 log(|G|/δ)\nc − a\nthen ∗ ≤ p\nwhere ∗ and p denote the minimal generalization gaps one can achieve with or without augmented data, with at least 1 − δ probability. If further assuming a better empirical risk with data augmentation (which is usually the case in practice), i.e., Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ Remp(f, gp, {xi}ni=1), we have\nR(f, g∗, p) ≤ R(f, gp, p)\nTheorem 2 Assume the loss function l(·, ·) is upper bounded by M > 0 and Lipschitz continuous. Fix the probability parameter δ. Under Case 2, there exists a constant c > 0 such that if\nb ≥ M 2 log(1/δ)\nc − a\nthen ∗ ≤ p\nwhere ∗ and p denote the minimal generalization gaps one can achieve with or without augmented data, with at least 1 − δ probability. If further assuming a better empirical risk with data augmentation, i.e.,Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ Remp(f, gp, {xi}ni=1), we have\nR(f, g∗, p) ≤ R(f, gp, p)\nA more interesting setting is Case 3. Our result is based on Baxter (2000), which studies learning from different and possibly correlated distributions.\nTheorem 3 Assume the loss function l(·, ·) is upper bounded. Under Case 3, there exists constants c1, c2, c3 > 0 such that if\nb ≥ a log(4/δ) c1a− c2 and a ≥ c3\n2We make such an assumption because xi and x′i are dependent, thus existence of p̃ is unknown.\nthen ∗ ≤ p\nwhere ∗ and p denote the minimal generalization gaps one can achieve with or without augmented data, with at least 1 − δ probability. If further assuming a better empirical risk with data augmentation, i.e.,Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ Remp(f, gp, {xi}ni=1), we have\nR(f, g∗, p) ≤ R(f, gp, p)\nRemark 4 For Theorem 3 to hold, based on Baxter (2000), it is enough to ensure {xi,x′i} and {xj ,x′j} to be independent for i 6= j. We achieve this by constructing x′i with xi and an extra random sample from the training data. 
Since all (xi,xj) and the extra random samples are independent, the resulting concatenation will also be independent." }, { "heading": "4 EXPERIMENTS", "text": "We demonstrate the effectiveness of MixKD on a number of GLUE (Wang et al., 2019) dataset tasks: Stanford Sentiment Treebank (SST-2) (Socher et al., 2013), Microsoft Research Paraphrase Corpus (MRPC) (Dolan & Brockett, 2005), Quora Question Pairs (QQP)3, Multi-Genre Natural Language Inference (MNLI) (Williams et al., 2018), Question Natural Language Inference (QNLI) (Rajpurkar et al., 2016), and Recognizing Textual Entailment (RTE) (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009). Note that MNLI contains both an in-domain (MNLI-m) and cross-domain (MNLI-mm) evaluation set. These datasets span sentiment analysis, paraphrase similarity matching, and natural language inference types of tasks. We use the Hugging Face Transformers4 implementation of BERT for our experiments." }, { "heading": "4.1 GLUE DATASET EVALUATION", "text": "We first analyze the contributions of each component of our method, evaluating on the dev set of the GLUE datasets. For the teacher model, we fine-tune a separate 12 Transformer-layer bert-base-uncased (BERT12) for each task. We use the smaller BERT3 and BERT6 as the student model. We find that initializing the embeddings and Transformer layers of the student model from the first k layers of the teacher model provides a significant boost to final performance. We use MSE as the knowledge distillation distance metric d(·, ·). We generate one mixup sample for each original sample in each minibatch (mixup ratio of 1), with λ ∼ Beta(0.4, 0.4). 
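The training setup just described (one mixup sample per original sample, with λ drawn from Beta(0.4, 0.4)) and the weighted objective of equation 8 can be sketched as below; the scalar losses are placeholders for the actual LMLE, LSM and LTMKD computations, and the unit default weights are only an illustrative choice:

```python
import random

def sample_lambda(alpha=0.4, rng=random):
    """Draw the mixing coefficient lambda from Beta(alpha, alpha)."""
    return rng.betavariate(alpha, alpha)

def mixkd_loss(l_mle, l_sm, l_tmkd, alpha_sm=1.0, alpha_tmkd=1.0):
    """Weighted sum of the three terms in equation 8."""
    return l_mle + alpha_sm * l_sm + alpha_tmkd * l_tmkd

lam = sample_lambda()                # always lies in [0, 1]
total = mixkd_loss(1.0, 0.5, 0.25)  # == 1.75 with unit weights
```

With Beta(0.4, 0.4) the sampled λ concentrates near 0 and 1, so most virtual examples stay close to a real training example while still providing new queries for the teacher.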
We set hyperparameters weighting the components in the loss term in equation 8 as αSM = αTMKD = 1.\n3data.quora.com/First-Quora-Dataset-Release-Question-Pairs 4https://huggingface.co/transformers/\nAs a baseline, we fine-tune the student model on the task dataset without any distillation or augmentation, which we denote as BERTk-FT. We compare this against MixKD, with both knowledge distillation on the teacher’s predictions (LTMKD) and mixup for the student (LSM), which we call BERTk-SM+TMKD. We also evaluate an ablated version without the student mixup loss (BERTkTMKD) to highlight the knowledge distillation component specifically. We note that our method can also easily be combined with other forms of data augmentation. For example, backtranslation (translating an input sequence to the data space of another language and then translating back to the original language) tends to generate varied but semantically similar sequences; these sentences also tend to be of higher quality than masking or word-dropping approaches. We show that our method has an additive effect with other techniques by also testing our method with the dataset augmented with German backtranslation, using the fairseq (Ott et al., 2019) neural machine translation codebase to generate these additional samples. We also compare all of the aforementioned variants with backtranslation samples augmenting the data; we denote these variants with an additional +BT.\nWe report the model accuracy (and F1 score, for MRPC and QQP) in Table 1. We also show the performance of the full-scale teacher model (BERT12) and DistilBERT (Sanh et al., 2019), which performs basic knowledge distillation during BERT pre-training to a 6-layer model. For our method, we observe that a combination of data augmentation and knowledge\ndistillation leads to significant gains in performance, with the best variant often being the combination of teacher mixup knowledge distillation, student mixup, and backtranslation. 
In the case of SST-2, for example, BERT6-SM+TMKD+BT is able to capture 99.88% of the performance of the teacher model, closing 91.27% of the gap between the fine-tuned student model and the teacher, despite using far fewer parameters and having a much faster inference speed (Table 2).\nAfter analyzing the contributions of the components of our model on the dev set, we find the SM+TMKD+BT variant to have the best performance overall and thus focus on this variant. We submit this version of MixKD to the GLUE test server, reporting its results in comparison with fine-tuning (FT), vanilla knowledge distillation (KD) (Hinton et al., 2015), and patient knowledge distillation (PKD) (Sun et al., 2019a) in Table 3. Once again, we observe that our model outperforms the baseline methods on most tasks." }, { "heading": "4.2 LIMITED-DATA SETTINGS", "text": "One of the primary motivations for using data augmentation for knowledge distillation is to give the student more opportunities to query the teacher model. For datasets with a large enough number of samples relative to the task’s complexity, the original dataset may provide enough chances to learn from the teacher, reducing the relative value of data augmentation.\nAs such, we also evaluate MixKD with a BERT3 student on downsampled versions of QQP, MNLI (matched and mismatched), and QNLI in Figure 1. We randomly select 10% and 1% of the data from\nthese datasets to train both the teacher and student models, using the same subset for all experiments for fair comparison. 
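One simple way to realize the shared 10% and 1% subsets described above is to subsample with a fixed random seed, so that the teacher and the student are trained on exactly the same data; the sketch below uses our own naming and is not the paper's code:

```python
import random

def subsample(examples, fraction, seed=0):
    """Select a fixed fraction of a dataset; a fixed seed keeps the
    subset identical across teacher and student training runs."""
    rng = random.Random(seed)
    k = max(1, int(len(examples) * fraction))
    return rng.sample(examples, k)

ten_percent = subsample(list(range(1000)), 0.10)  # 100 examples
one_percent = subsample(list(range(1000)), 0.01)  # 10 examples
```

Reusing the same seed across runs is what makes the limited-data comparisons fair: any performance gap reflects the method, not the draw of the subset.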
In this data limited setting, we observe substantial gains from MixKD over the fine-tuned model for QQP (+2.0%, +3.0%), MNLI-m (+3.9%, +3.4%), MNLI-mm (+4.4%, +3.3%), and QNLI (+2.4%, +4.1%) for 10% and 1% of the training data.\n4.3 EMBEDDINGS VISUALIZATION\nWe perform a qualitative examination of the effect of the proposed MixKD by visualizing the latent space between positive and negative samples as encoded by the student model with tSNE plots (Maaten & Hinton, 2008). In Figure 2, we show the shift of the transformer features at the [CLS] token position, with and without mixup data augmentation from the teacher. We randomly select a batch of 100 sentences from the SST-2 dataset, of which 50 are positive sentiment (blue square) and 50 are negative sentiment (red circle). The intermediate mixup neighbours are indicated by triangles with color determined by the closeness to the positive group or negative group. From Figure 2(a) to Figure 2(b), MixKD forces the lin-\nearly interpolated samples to be aligned with the manifold formed by the real training data and leads the student model to explore meaningful regions of the feature space effectively." }, { "heading": "4.4 HYPERPARAMETER SENSITIVITY & FURTHER ANALYSIS", "text": "Loss Hyperparameters Our final objective in equation 8 has hyperparameters αSM and αTMKD, which control the weight of the student model’s cross-entropy loss for the mixup samples and the knowledge distillation loss with the teacher’s predictions on the mixup samples, respectively. We demonstrate that the model is fairly stable over a wide range by sweeping both αSM and αTMKD over the range {0.1, 0.5, 1.0, 2.0, 10.0}. We do this for a BERT3 student and BERT12 teacher, with SST-2 as the task; we show the results of this sensitivity study, both with and without German backtranslation, in Figure 3. 
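The sensitivity sweep over αSM and αTMKD described above amounts to enumerating a 5x5 grid of weight configurations; a trivial sketch of generating them:

```python
from itertools import product

# the weight values swept in the sensitivity study
grid = [0.1, 0.5, 1.0, 2.0, 10.0]

# every (alpha_sm, alpha_tmkd) configuration to be evaluated
configs = list(product(grid, grid))
# 25 configurations in total
```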
Given the overall consistency, we observe that our method is stable over a wide range of settings.\nMixup Ratio We also investigate the effect of the mixup ratio: the number of mixup samples generated for each sample in a minibatch. We run a smaller sweep of αSM and αTMKD over the range {0.5, 1.0, 2.0} for mixup ratios of 2 and 3 for a BERT3 student SST-2, with and without German backtranslation, in Figure 3. We conclude that the mixup ratio does not have a strong effect on overall performance. Given that higher mixup ratio requires more computation (due to more samples over which to compute the forward and backward pass), we find a mixup ratio of 1 to be enough.\nComparing with TinyBERT’s DA module TinyBERT (Jiao et al., 2019) also utilizes data augmentation for knowledge distillation. Specifically, they adopt a conditional BERT contextual augmentation (Wu et al., 2019) strategy. To further verify the effectiveness of our approach, we use TinyBERT’s released codebase5 to generate augmented samples and make\na direct comparison with MixKD. As shown in Table 4, our approach exhibits much stronger results for distilling a 6-layer BERT model (on both MNLI and SST-2 datasets). Notably, TinyBERT’s data augmentation module is much less efficient than mixup’s simple operation, generating 20 times the original data as augmented samples, thus leading to massive computation overhead." }, { "heading": "5 CONCLUSIONS", "text": "We introduce MixKD, a method that uses data augmentation to significantly increase the value of knowledge distillation for compressing large-scale language models. Intuitively, MixKD allows the student model additional queries to the teacher model, granting it more opportunities to absorb the latter’s richer representations. We analyze MixKD from a theoretical standpoint, proving that our approach results in a smaller gap between generalization error and empirical error, as well as better generalization, under appropriate conditions. 
Our approach’s success on a variety of GLUE tasks demonstrates its broad applicability, with a thorough set of experiments for validation. We also believe that the MixKD framework can further reduce the gap between student and teacher models with the incorporation of more recent mixup and knowledge distillation techniques (Lee et al., 2020; Wang et al., 2020; Mirzadeh et al., 2019), and we leave this to future work." }, { "heading": "ACKNOWLEDGMENTS", "text": "CC is partly supported by the Verizon Media FREP program." }, { "heading": "A PROOFS", "text": "Proof [Proof of Theorem 1] First of all, {xi}ai=1 ∪ {x′i}bi=1 can be regarded as drawn from distribution r(x) = ap(x) + bq(x)\na+ b .\nGiven G is finite, we have the following theorem\nTheorem 5 (Mohri et al., 2018) Let l be a bounded loss function, hypothesis set G is finite. Then for any δ > 0, with probability at least 1− δ, the following inequality holds for all g ∈ G:\nR(f, g, p)−Remp(f, g, {xi}ni=1) ≤M √\nlog(|G|/δ) 2n\nThus we have in our case: R(f, gp, p)−Remp(f, gp, {xi}ni=1) ≤ p ≤M √\nlog(|G|/δ) 2n\nand\nR(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) =R(f, g∗, r)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + ∫ l(f(x), g∗(x))(p(x)− r(x))dx\n=R(f, g∗, r)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + b\na+ b\n∫ l(f(x), g∗(x))(p(x)− q(x))dx\n≤R(f, g∗, r)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + ∫ l(f(x), g∗(x))(p(x)− q(x))dx\n≤M √ log(|G|/δ) 2(a+ b) +4 (10)\nwhere4 = ∫ l(f(x), g∗(x))(p(x)− q(x))dx. If\nb ≥ M 2 log(|G|/δ) 2( p −4)2 − a\nthen\n2(a+ b) ≥ M 2 log(|G|/δ) ( p −4)2\n( p −4)2 ≥ M2 log(|G|/δ)\n2(a+ b)\np ≥M √ log(|G|/δ) 2(a+ b) +4\nSubstitute into equation 10, we have\nR(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ p Recall the definition of ∗, which is the minimum value of all possible satisfying\nR(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ we know that ∗ ≤ p. 
Let c = 2( p −4)2, we can conclude the theorem.\nProof [Proof of Theorem 2] First of all, {xi}ai=1 ∪ {x′i}bi=1 can be regarded as drawn from distribution r(x) = ap(x) + bq(x)\na+ b .\nTheorem 6 (Mohri et al., 2018) Let l be a non-negative loss function upper bounded by M > 0, and for any fixed y, l(y,y′) is L-Lipschitz for some L > 0, then with probability at least 1− δ,\nR(f, g, p)−Remp(f, g, {xi}ni=1) ≤ 2LRp(G) +M √ log(1/δ)\n2n\nThus we have R(f, g, p)−Remp(f, g, {xi}ni=1) ≤ p ≤ 2LRp(G) +M √ log(1/δ)\n2n\nwhere Rp(G) are Rademacher complexity over all samples of size n samples from p(x). We also have R(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1)\n=R(f, g∗, r)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + ∫ l(f(x), g∗(x))(p(x)− r(x))dx\n=R(f, g∗, r)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + b\na+ b\n∫ l(f(x), g∗(x))(p(x)− q(x))dx\n≤R(f, g∗, r)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + ∫ l(f(x), g∗(x))(p(x)− q(x))dx\n≤2LRr(G) +M\n√ log(1/δ)\n2(a+ b) +4 (11)\nwhere4 = ∫ l(f(x), g∗(x))(p(x)−q(x))dx. Rr(G) are Rademacher complexity over all samples of size (a+ b) samples from r(x) = ap(x) + bq(x)\na+ b .\nIf\nb ≥ M 2 log(1/δ)\n2( p −4− 2LRr(G))2 − a\nthen:\n2(a+ b) ≥ M 2 log(1/δ)\n( p −4− 2LRr(G))2\np −4− 2LRr(G) ≥M\n√ log(1/δ)\n2(a+ b)\np ≥M\n√ log(1/δ)\n2(a+ b) +4+ 2LRr(G)\nSubstitute into equation 11, we have:\nR(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ p Recall the definition of ∗, which is the minimum value of all possible satisfying\nR(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ≤ we know that ∗ ≤ p. 
Let c = 2( p −4− 2LRr(G))2, we can conclude the theorem.\nProof [Proof of Theorem 3] Similar to previous theorems, we write R(f, g∗, p)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1)\n=R(f, g∗, ap+ bq a+ b\n)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + ∫ l(f(x), g∗(x))(p(x)− ap(x) + bq(x)\na+ b )dx\n=R(f, g∗, ap+ bq a+ b )−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) + b a+ b\n∫ l(f(x), g∗(x))(p(x)− q(x))dx\n≤R(f, g∗, ap+ bq a+ b )−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) +4 (12)\nwhere 4 = ∫ l(f(x), g∗(x))(p(x) − q(x))dx. For notation consistency, we write\nR(f, g∗, ap+ bq a+ b\n) = ∫ l(f(x) − g(x))ap(x) + bq(x)\na+ b dx. However, {xi}ai=1 ∪ {x′i}bi=1 are not\ndrawn from the same distribution (which is r(x) = ap(x) + bq(x)\na+ b in previous cases).\nLet γ = ba+ b a c, we split {xi}ai=1 ∪ {x′i}bi=1 into γ parts that don’t overlap with each other. The first part is {xi}ai=1, all the other parts has at least a elements from {x′i}bi=1. Let\nλ =\n√ 64\nb log(4/δ) +\n64\na logC(G)\nwhere C(G) is space capacity defined in Definition 4 in Baxter (2000), which depends on ∗ and G. By Theorem 4 in Baxter (2000),[\nR(f, g∗, ap+ bq a+ b\n)−Remp(f, g∗, {xi}ai=1 ∪ {x′i}bi=1) ]2 ≤ max{ 64\nγa log( 4C(Gγ) δ ), 16 a }\nBy Theorem 5 in Baxter (2000),\n64 γa log( 4C(Gγ) δ ) = 64 γa (log( 4 δ ) + log(C(Gγ))) ≤ 64 γa (log( 4 δ ) + γ log(C(G))) ≤ λ2\nThe last inequality comes from b ≤ γa, which is because of γ = ba+ b a c. 
Then we have

[R(f, g∗, (ap + bq)/(a + b)) − Remp(f, g∗, {x_i}_{i=1}^a ∪ {x′_i}_{i=1}^b)]² ≤ max{ (64/(γa)) log(4 C(G^γ)/δ), 16/a } ≤ max{ λ², 16/a }.

If

b ≥ 64 log(4/δ) / ((ε_p − ∆)² − 64 log C(G)/a),

then

λ² ≤ (64/a) log C(G) + ((ε_p − ∆)² − (64/a) log C(G)) = (ε_p − ∆)².    (13)

If 16/(ε_p − ∆)² ≤ a, then

16/a ≤ (ε_p − ∆)².    (14)

Combining equation 13 and equation 14, we have

R(f, g∗, (ap + bq)/(a + b)) − Remp(f, g∗, {x_i}_{i=1}^a ∪ {x′_i}_{i=1}^b) ≤ ε_p − ∆.

Substituting into equation 12, we have:

R(f, g∗, p) − Remp(f, g∗, {x_i}_{i=1}^a ∪ {x′_i}_{i=1}^b) ≤ ε_p.

Recall the definition of ε∗, which is the minimum value among all ε satisfying

R(f, g∗, p) − Remp(f, g∗, {x_i}_{i=1}^a ∪ {x′_i}_{i=1}^b) ≤ ε,

so we know that ε∗ ≤ ε_p.

B VARIANCE ANALYSIS

To get a sense of the variance, we run experiments with additional random seeds on MRPC and RTE, which are relatively small datasets, and on MNLI and QNLI, which are relatively large datasets. Mean and standard deviation on the dev sets of these GLUE datasets are reported in Table 5. We observe the variance of the same model’s performance to be small, especially on the relatively larger datasets." } ]
2021
MIXKD: TOWARDS EFFICIENT DISTILLATION OF LARGE-SCALE LANGUAGE MODELS
SP:5a114af6b868ac0f8923205ea5257590967110c0
[ "This paper proposes to use an intrinsic reward based on uncertainties calculated from temporal difference errors. The approach, called Temporal Difference Uncertainties (TDU), estimates the variance of TD errors across multiple (bootstrapped) parameters for a given state, action, next state, and reward, so that variability is due only to variance in parameters. The other addition is to learn a separate set of action-values, from the bootstrap set, that use this intrinsic reward. Actions are then taken by randomly sampling an action-value function from the combined set. ", "The authors introduce the use of value function variance (conditioned on the state transition) as an auxiliary reward promoting exploration during training. The variance is estimated using the bootstrapped DQN approach. The main difference from similar methods is that the value uncertainty is not used in a Thompson sampling scheme but is instead used to provide an exploration reward. " ]
An effective approach to exploration in reinforcement learning is to rely on an agent’s uncertainty over the optimal policy, which can yield near-optimal exploration strategies in tabular settings. However, in non-tabular settings that involve function approximators, obtaining accurate uncertainty estimates is almost as challenging as the exploration problem itself. In this paper, we highlight that value estimates are easily biased and temporally inconsistent. In light of this, we propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors. This exploration signal controls for state-action transitions so as to isolate uncertainty in value that is due to uncertainty over the agent’s parameters. Because our measure of uncertainty conditions on state-action transitions, we cannot act on this measure directly. Instead, we incorporate it as an intrinsic reward and treat exploration as a separate learning problem, induced by the agent’s temporal difference uncertainties. We introduce a distinct exploration policy that learns to collect data with high estimated uncertainty, which gives rise to a “curriculum” that smoothly changes throughout learning and vanishes in the limit of perfect value estimates. We evaluate our method on hard-exploration tasks, including Deep Sea and Atari 2600 environments, and find that our proposed form of exploration facilitates efficient exploration.
[]
[ { "authors": [ "K. Azizzadenesheli", "E. Brunskill", "A. Anandkumar" ], "title": "Efficient exploration through Bayesian deep Q-Networks", "venue": "arXiv preprint arXiv:1802.04412,", "year": 2018 }, { "authors": [ "M. Belkin", "D. Hsu", "J. Xu" ], "title": "Two models of double descent for weak features", "venue": "arXiv preprint arXiv:1903.07571,", "year": 2019 }, { "authors": [ "M. Bellemare", "S. Srinivasan", "G. Ostrovski", "T. Schaul", "D. Saxton", "R. Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "J. Bradbury", "R. Frostig", "P. Hawkins", "M.J. Johnson", "C. Leary", "D. Maclaurin", "S. WandermanMilne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http: //github.com/google/jax", "year": 2018 }, { "authors": [ "Y. Burda", "H. Edwards", "A.J. Storkey", "O. Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "J. Choi", "Y. Guo", "M. Moczulski", "J. Oh", "N. Wu", "M. Norouzi", "H. Lee" ], "title": "Contingency-aware exploration in reinforcement learning", "venue": "arXiv preprint arXiv:1811.01483,", "year": 2018 }, { "authors": [ "R. Dearden", "N. Friedman", "S. Russell" ], "title": "Bayesian Q-learning", "venue": "In Association for the Advancement of Artificial Intelligence,", "year": 1998 }, { "authors": [ "M. Deisenroth", "C.E. Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "C. Florensa", "Y. Duan", "P. Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "M. Fortunato", "M.G. Azar", "B. Piot", "J. Menick", "I. Osband", "A. 
Graves", "V. Mnih", "R. Munos", "D. Hassabis", "O Pietquin" ], "title": "Noisy networks for exploration", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "C. Gehring", "D. Precup" ], "title": "Smart exploration in reinforcement learning using absolute temporal difference errors", "venue": "In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems,", "year": 2013 }, { "authors": [ "G. Gordon", "E. Ahissar" ], "title": "Reinforcement active learning hierarchical loops", "venue": "In International Joint Conference on Neural Networks,", "year": 2011 }, { "authors": [ "A. Guez", "F. Viola", "T. Weber", "L. Buesing", "S. Kapturowski", "D. Precup", "D. Silver", "N. Heess" ], "title": "Value-driven hindsight modelling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "K. Hausman", "J.T. Springenberg", "Z. Wang", "N. Heess", "M. Riedmiller" ], "title": "Learning an embedding space for transferable robot skills", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "D. Janz", "J. Hron", "J.M. Hernández-Lobato", "K. Hofmann", "S. Tschiatschek" ], "title": "Successor uncertainties: exploration and uncertainty in temporal difference learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "S. Kapturowski", "G. Ostrovski", "J. Quan", "R. Munos", "W. Dabney" ], "title": "Recurrent experience replay in distributed reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "M. Kearns", "S. Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "D.P. Kingma", "J. 
Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "R. Kumaraswamy", "M. Schlegel", "A. White", "M. White" ], "title": "Context-dependent upper-confidence bounds for directed exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Y. Li", "F. Gimeno", "P. Kohli", "O. Vinyals" ], "title": "Strong generalization and efficiency in neural programs", "venue": "arXiv preprint arXiv:2007.03629,", "year": 2020 }, { "authors": [ "C. Liu", "L. Zhu", "M. Belkin" ], "title": "Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning", "venue": "arXiv preprint arXiv:2003.00307,", "year": 2020 }, { "authors": [ "M. Lopes", "T. Lang", "M. Toussaint", "Oudeyer", "P.-Y" ], "title": "Exploration in model-based reinforcement learning by empirically estimating learning progress", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "M.C. Machado", "M.G. Bellemare", "E. Talvitie", "J. Veness", "M. Hausknecht", "M. Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A.A. Rusu", "J. Veness", "M.G. Bellemare", "A. Graves", "M. Riedmiller", "A.K. Fidjeland", "G Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "T.M. Moerland", "J. Broekens", "C.M. Jonker" ], "title": "Efficient exploration with double uncertain value networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "O. Nachum", "M. Norouzi", "D. 
Schuurmans" ], "title": "Improving policy gradient by exploring underappreciated rewards", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "I. Osband", "D. Russo", "B. Van Roy" ], "title": "more) efficient reinforcement learning via posterior sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "I. Osband", "C. Blundell", "A. Pritzel", "B. Van Roy" ], "title": "Deep exploration via bootstrapped DQN", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "I. Osband", "B. Van Roy", "Z. Wen" ], "title": "Generalization and exploration via randomized value functions", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "I. Osband", "J. Aslanides", "A. Cassirer" ], "title": "Randomized prior functions for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "I. Osband", "B. Van Roy", "D.J. Russo", "Z. Wen" ], "title": "Deep exploration via randomized value functions", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "I. Osband", "Y. Doron", "M. Hessel", "J. Aslanides", "E. Sezener", "A. Saraiva", "K. McKinney", "T. Lattimore", "C. Szepezvari", "S. Singh", "Benjamin Van Roy", "Richard Sutton", "D.S.H.V. H" ], "title": "Behaviour suite for reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "G. Ostrovski", "M.G. Bellemare", "A. van den Oord", "R. Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Oudeyer", "P.-Y", "F. Kaplan" ], "title": "What is intrinsic motivation? 
A typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Oudeyer", "P.-Y", "F. Kaplan", "V.V. Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "B. O’Donoghue", "I. Osband", "R. Munos", "V. Mnih" ], "title": "The uncertainty bellman equation and exploration", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "D. Pathak", "P. Agrawal", "A.A. Efros", "T. Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "J. Peng", "R.J. Williams" ], "title": "Incremental multi-step Q-learning", "venue": "In Machine Learning Proceedings", "year": 1994 }, { "authors": [ "M. Plappert", "R. Houthooft", "P. Dhariwal", "S. Sidor", "R.Y. Chen", "X. Chen", "T. Asfour", "P. Abbeel", "M. Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "A. Puigdomènech Badia", "P. Sprechmann", "A. Vitvitskyi", "D. Guo", "B. Piot", "S. Kapturowski", "O. Tieleman", "M. Arjovsky", "A. Pritzel", "A. Bolt", "C. Blundell" ], "title": "Never give up: Learning directed exploration strategies", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "T. Schaul", "D. Borsa", "D. Ding", "D. Szepesvari", "G. Ostrovski", "W. Dabney", "S. Osindero" ], "title": "Adapting behaviour for learning progress", "venue": null, "year": 1912 }, { "authors": [ "J. Schmidhuber" ], "title": "Curious model-building control systems", "venue": "In Proceedings of the International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "R. Simmons-Edler", "B. Eisner", "E. Mitchell", "H.S. Seung", "D.D. 
Lee" ], "title": "Qxplore: Q-learning exploration by maximizing temporal difference error", "venue": null, "year": 1906 }, { "authors": [ "S.P. Singh", "A.G. Barto", "N. Chentanez" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2005 }, { "authors": [ "M. Strens" ], "title": "A Bayesian framework for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "W.R. Thompson" ], "title": "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples", "venue": null, "year": 1933 }, { "authors": [ "M. Tokic" ], "title": "Adaptive ε-greedy exploration in reinforcement learning based on value differences", "venue": "In Annual Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "H. Van Hasselt", "A. Guez", "D. Silver" ], "title": "Deep reinforcement learning with double Q-learning", "venue": "In Association for the Advancement of Artificial Intelligence,", "year": 2016 }, { "authors": [ "A White" ], "title": "Developing a predictive approach to knowledge", "venue": "PhD thesis, University of Alberta,", "year": 2015 }, { "authors": [ "R.J. Williams", "J. Peng" ], "title": "Function optimization using connectionist reinforcement learning algorithms", "venue": "Connection Science,", "year": 1991 }, { "authors": [ "T. Xu", "Q. Liu", "L. Zhao", "J. Peng" ], "title": "Learning to explore via meta-policy gradient", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "T. Zahavy", "Z. Xu", "V. Veeriah", "M. Hessel", "J. Oh", "H. van Hasselt", "D. Silver", "S. Singh" ], "title": "Self-tuning deep reinforcement learning", "venue": "arXiv preprint arXiv:2002.12928,", "year": 2020 }, { "authors": [ "B.D. Ziebart", "A. Maas", "J.A. Bagnell", "A.K. 
Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Association for the Advancement of Artificial Intelligence,", "year": 2008 } ]
[ { "heading": null, "text": "An effective approach to exploration in reinforcement learning is to rely on an agent’s uncertainty over the optimal policy, which can yield near-optimal exploration strategies in tabular settings. However, in non-tabular settings that involve function approximators, obtaining accurate uncertainty estimates is almost as challenging as the exploration problem itself. In this paper, we highlight that value estimates are easily biased and temporally inconsistent. In light of this, we propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors. This exploration signal controls for state-action transitions so as to isolate uncertainty in value that is due to uncertainty over the agent’s parameters. Because our measure of uncertainty conditions on state-action transitions, we cannot act on this measure directly. Instead, we incorporate it as an intrinsic reward and treat exploration as a separate learning problem, induced by the agent’s temporal difference uncertainties. We introduce a distinct exploration policy that learns to collect data with high estimated uncertainty, which gives rise to a “curriculum” that smoothly changes throughout learning and vanishes in the limit of perfect value estimates. We evaluate our method on hardexploration tasks, including Deep Sea and Atari 2600 environments and find that our proposed form of exploration facilitates efficient exploration." }, { "heading": "1 INTRODUCTION", "text": "Striking the right balance between exploration and exploitation is fundamental to the reinforcement learning problem. A common approach is to derive exploration from the policy being learned. Dithering strategies, such as -greedy exploration, render a reward-maximising policy stochastic around its reward maximising behaviour (Williams & Peng, 1991). 
Other methods encourage higher entropy in the policy (Ziebart et al., 2008), introduce an intrinsic reward (Singh et al., 2005), or drive exploration by sampling from the agent’s belief over the MDP (Strens, 2000).

While greedy or entropy-maximising policies cannot facilitate temporally extended exploration (Osband et al., 2013; 2016a), the efficacy of intrinsic rewards depends crucially on how they relate to the extrinsic reward that comes from the environment (Burda et al., 2018a). Typically, intrinsic rewards for exploration provide a bonus for visiting novel states (e.g., Bellemare et al., 2016) or visiting states where the agent cannot predict future transitions (e.g., Pathak et al., 2017; Burda et al., 2018a). Such approaches can facilitate learning an optimal policy, but they can also fail entirely in large environments as they prioritise novelty over rewards (Burda et al., 2018b).

Methods based on the agent’s uncertainty over the optimal policy explicitly trade off exploration and exploitation (Kearns & Singh, 2002). Posterior Sampling for Reinforcement Learning (PSRL; Strens, 2000; Osband et al., 2013) is one such approach, which models a distribution over Markov Decision Processes (MDPs). While PSRL is near-optimal in tabular settings (Osband et al., 2013; 2016b), it cannot be easily scaled to complex problems that require function approximators. Prior work has attempted to overcome this by instead directly estimating the agent’s uncertainty over the policy’s value function (Osband et al., 2016a; Moerland et al., 2017; Osband et al., 2019; O’Donoghue et al., 2018; Janz et al., 2019).
While these approaches can scale posterior sampling to complex problems and nonlinear function approximators, estimating uncertainty over value functions introduces issues that can cause a bias in the posterior distribution (Janz et al., 2019).\nIn response to these challenges, we introduce Temporal Difference Uncertainties (TDU), which derives an intrinsic reward from the agent’s uncertainty over the value function. Concretely, TDU relies on the Bootstrapped DQN (Osband et al., 2016a) and separates exploration and reward-maximising behaviour into two separate policies that bootstrap from a shared replay buffer. This separation allows us to derive an exploration signal for the exploratory policy from estimates of uncertainty of the reward-maximising policy. Thus, TDU encourages exploration to collect data with high model uncertainty over reward-maximising behaviour, which is made possible by treating exploration as a separate learning problem. In contrast to prior works that directly estimate value function uncertainty, we estimate uncertainty over temporal difference (TD) errors. By conditioning on observed stateaction transitions, TDU controls for environment uncertainty and provides an exploration signal only insofar as there is model uncertainty. We demonstrate that TDU can facilitate efficient exploration in challenging exploration problems such as Deep Sea and Montezuma’s Revenge." }, { "heading": "2 ESTIMATING VALUE FUNCTION UNCERTAINTY IS HARD", "text": "We begin by highlighting that estimating uncertainty over the value function can suffer from bias that is very hard to overcome with typical approaches (see also Janz et al., 2019). Our analysis shows that biased estimates arise because uncertainty estimates require an integration over unknown future state visitations. This requires tremendous model capacity and is in general infeasible. 
Our results show that we cannot escape a bias in general, but we can take steps to mitigate it by conditioning on an observed trajectory. Doing so removes some uncertainty over future state visitations, and we show in Section 3 that it can result in a substantially smaller bias.

We consider a Markov Decision Process (S, A, P, R, γ) for some given state space (S), action space (A), transition dynamics (P), reward function (R) and discount factor (γ). For a given (deterministic) policy π : S → A, the action-value function is defined as the expected cumulative reward under the policy starting from state s with action a:

Qπ(s, a) := Eπ[ Σ_{t=0}^∞ γ^t r_{t+1} | s0 = s, a0 = a ] = E_{r∼R(s,a), s′∼P(s,a)}[r + γ Qπ(s′, π(s′))], (1)

where t indexes time and the expectation Eπ is with respect to realised rewards r sampled under the policy π; the right-hand side characterises Q recursively under the Bellman equation. The action-value function Qπ is estimated under a function approximator Qθ parameterised by θ. Uncertainty over Qπ is expressed by placing a distribution over the parameters of the function approximator, p(θ). We overload notation slightly and write p(θ) to denote the probability density function pθ over a random variable θ. Further, we denote by θ ∼ p(θ) a random sample θ from the distribution defined by pθ. Methods that rely on posterior sampling under function approximators assume that the induced distribution, p(Qθ), is an accurate estimate of the agent’s uncertainty over its value function, p(Qπ), so that sampling Qθ ∼ p(Qθ) is approximately equivalent to sampling from Qπ ∼ p(Qπ). For this to hold, the moments of p(Qθ) at each state-action pair (s, a) must correspond to the expected moments in future states. In particular, the moments of p(Qπ) must satisfy a Bellman Equation akin to Eq. 1 (O’Donoghue et al., 2018). We focus on the mean (E) and variance (V):

Eθ[Qθ(s, a)] = Eθ[Er,s′[r + γQθ(s′, π(s′))]], (2) Vθ[Qθ(s, a)] = Vθ[Er,s′[r + γQθ(s′, π(s′))]].
(3)\nIf Eθ[Qθ] and Vθ[Qθ] fail to satisfy these conditions, the estimates of E[Qπ] and V[Qπ] are biased, causing a bias in exploration under posterior sampling from p(Qθ). Formally, the agent’s uncertainty over p(Q) implies uncertainty over the MDP (Strens, 2000). Given a belief over the MDP, i.e., a distribution p(M), we can associate each M ∼ p(M) with a distinct value function QMπ . Lemma 1 below shows that, for p(θ) to be interpreted as representing some p(M) by push-forward to p(Qθ), the induced moments must match under the Bellman Equation.\nLemma 1. If Eθ[Qθ] and Vθ[Qθ] fail to satisfy Eqs. 2 and 3, respectively, they are biased estimators of EM [ QMπ ] and VM [ QMπ ] for any choice of p(M).\nAll proofs are deferred to Appendix B. Lemma 1 highlights why estimating uncertainty over value functions is so challenging; while the left-hand sides of Eqs. 2 and 3 are stochastic in θ only, the right-hand sides depend on marginalising over the MDP. This requires the function approximator to generalise to unseen future trajectories. Lemma 1 is therefore a statement about scale; the harder it is to generalise, the more likely we are to observe a bias—even in deterministic environments.\nThis requirement of “strong generalisation” poses a particular problem for neural networks that tend to interpolate over the training data (e.g. Li et al., 2020; Liu et al., 2020; Belkin et al., 2019), but the issue is more general. In particular, we show that factorising the posterior p(θ) will typically cause estimation bias for all but tabular MDPs. This is problematic because it is often computationally infeasible to maintain a full posterior; previous work either maintains a full posterior over the final layer of the function approximator (Osband et al., 2016a; O’Donoghue et al., 2018; Janz et al., 2019) or maintains a diagonal posterior over all parameters (Fortunato et al., 2018; Plappert et al., 2018) of the neural network. 
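To see the limitation concretely, note that under a posterior confined to a linear head w ∈ Rn, the covariance of Q-values across m state-action pairs is ΦΣΦᵀ and therefore has rank at most n, so point-wise uncertainties cannot be assigned independently once m > n. A minimal numerical sketch (not the paper's code; the feature matrix and the diagonal Gaussian head posterior are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 10                    # head width n, number of state-action pairs m > n
Phi = rng.normal(size=(m, n))   # hypothetical features phi(s, a) for the m pairs

# Factorised Gaussian posterior over the linear head: p(w) = N(0, diag(sigma2)).
sigma2 = rng.uniform(0.5, 2.0, size=n)

# Q = Phi @ w, so the covariance of Q across the m pairs is Phi diag(sigma2) Phi^T.
cov_Q = Phi @ np.diag(sigma2) @ Phi.T

# The uncertainty over m = 10 pairs lives in an n = 4 dimensional subspace:
rank = np.linalg.matrix_rank(cov_Q)
print(rank)  # 4
```

Matching arbitrary per-state uncertainty targets, as Eqs. 2 and 3 demand across many states, is thus impossible once the number of distinct state-action pairs exceeds the head's degrees of freedom.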
Either method limits how expressive the function approximator can be with respect to future states, thereby causing an estimation bias. To establish this formally, let Qθ := w ◦ φϑ, where θ = (w1, . . . , wn, ϑ1, . . . , ϑv), with w ∈ Rn a linear projection and φ : S × A → Rn a feature extractor with parameters ϑ ∈ Rv.

Proposition 1. If the number of state-action pairs where Eθ[Qθ(s, a)] ≠ Eθ[Qθ(s′, a′)] is greater than n, where w ∈ Rn, then Eθ[Qθ] and Vθ[Qθ] are biased estimators of EM[QMπ] and VM[QMπ] for any choice of p(M).

This result is a consequence of the feature extractor φ mapping into a co-domain that is larger than the space spanned by w; a bias results from having more unique state-action representations φ(s, a) than degrees of freedom in w. The implication is that function approximators under factorised posteriors cannot generalise uncertainty estimates across states (a similar observation in tabular settings was made by Janz et al., 2019)—they can only produce temporally consistent uncertainty estimates if they have the capacity to memorise point-wise uncertainty estimates for each (s, a), which defeats the purpose of a function approximator. This is a statement about the structure of p(θ) and holds for any estimation method. Thus, common approaches to uncertainty estimation with neural networks generally fail to provide unbiased uncertainty estimates over the value function in non-trivial MDPs. Proposition 1 shows that to accurately capture value function uncertainty, we need a full posterior over parameters, which is often infeasible. It also underscores that the main issue is the dependence on future state visitation. This motivates Temporal Difference Uncertainties as an estimate of uncertainty conditioned on observed state-action transitions."
}, { "heading": "3 TEMPORAL DIFFERENCE UNCERTAINTIES", "text": "While Proposition 1 states that we cannot remove this bias unless we are willing to maintain a full posterior p(θ), we can construct uncertainty estimates that control for uncertainty over future state-action transitions. In this paper, we propose to estimate uncertainty over a full transition τ := (s, a, r, s′) to isolate uncertainty due to p(θ). Fixing a transition, we induce a conditional distribution p(δ | τ) over Temporal Difference (TD) errors, δ(θ, τ) := γQθ(s′, π(s′)) + r − Qθ(s, a), that we characterise by its mean and variance:

Eδ[δ | τ] = Eθ[δ(θ, τ) | τ] and Vδ[δ | τ] = Vθ[δ(θ, τ) | τ]. (4)

Estimators over TD-errors are akin to first-difference estimators of uncertainty over the action-value. They can therefore exhibit smaller bias if that bias is temporally consistent. To illustrate, for simplicity assume that Eθ[Qθ] consistently over/under-estimates EM[QMπ] by an amount b ∈ R. The corresponding bias in Eθ[δ(θ, τ) | τ] is given by Bias(Eθ[δ(θ, τ) | τ]) = Bias(γEθ[Qθ(s′, π(s′))] + r − Eθ[Qθ(s, a)]) = (γ − 1)b. This bias is close to 0 for typical values of γ—notably, for γ = 1, Eθ[δ(θ, τ) | τ] is unbiased. More generally, unless the bias is constant over time as in the above example, we cannot fully remove the bias when constructing an estimator over a quantity that relies on Qθ. However, as the above example shows, by conditioning on a state-action transition, we can make it significantly smaller. We formalise this logic in the following result.

Proposition 2. For any τ := (s, a, r, s′) and any p(M), given p(θ), define the following ratios:

ρ = Bias(Eθ[Qθ(s′, π(s′))]) / Bias(Eθ[Qθ(s, a)]) (5)
φ = Bias(Eθ[Qθ(s′, π(s′))²]) / Bias(Eθ[Qθ(s, a)²]) (6)
κ = Bias(Eθ[Qθ(s′, π(s′)) Qθ(s, a)]) / Bias(Eθ[Qθ(s, a)²]) (7)
α = EM[QMπ(s′, π(s′))] / EM[QMπ(s, a)]. (8)

If ρ ∈ (0, 2/γ), then Eδ[δ | τ] has lower bias than Eθ[Qθ(s, a)].
Moreover, if ρ = 1/γ, then Eδ[δ | τ] is unbiased. Additionally, there exist ρ ≈ 1, φ ≈ 1, κ ≈ 1, α ≈ 1 such that Vθ[δ(θ, τ) | τ] has less bias than Vθ[Qθ(s, a)]. In particular, if ρ = φ = κ = α = 1, then

|Bias(Vθ[δ(θ, τ) | τ])| = |(γ − 1)² Bias(Vθ[Qθ(s, a)])| < |Bias(Vθ[Qθ(s, a)])|. (9)

Further, if ρ = 1/γ, κ = 1/γ, and φ = 1/γ², then Vθ[δ(θ, τ) | τ] is unbiased for any α.

The first part of Proposition 2 generalises the example above to cases where the bias b varies across state-action transitions. It is worth noting that the required “smoothness” of the bias is not very stringent: the bias of Eθ[Qθ(s′, π(s′))] can be twice as large as that of Eθ[Qθ(s, a)] and Eδ[δ | τ] can still produce a less biased estimate. Importantly, it must have the same sign, and so Proposition 2 requires temporal consistency. To establish a similar claim for Vδ[δ | τ], we need a bit more structure. The ratios ρ, φ, and κ capture temporal consistency in the bias, while α relates to the temporal consistency of the underlying estimand. Proposition 2 establishes that if these ratios are close to unity, then Vθ[δ(θ, τ) | τ] will have less bias. For most transitions, it is reasonable to assume that this holds true. In some MDPs, large changes in the reward can cause these requirements to break. Because Proposition 2 only establishes sufficiency, violating this requirement does not necessarily mean that Vδ[δ | τ] has greater bias than Vθ[Qθ(s, a)]. Finally, it is worth noting that these are statements about a given transition τ. In most state-action transitions, the requirements in Proposition 2 will hold, in which case Eδ[δ | τ] and Vδ[δ | τ] exhibit less overall bias. We provide direct empirical support that Proposition 2 holds in practice through careful ceteris paribus comparisons in Section 5.1.

To obtain a concrete signal for exploration, we follow O’Donoghue et al. (2018) and derive an exploration signal from the variance Vθ[δ(θ, τ) | τ].
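With a bootstrapped ensemble {Qθk}, the conditional moments in Eq. 4 can be estimated by computing one TD-error per ensemble member on the same fixed transition. A minimal tabular sketch (the Q-tables and the per-member greedy evaluation are illustrative simplifications; the paper's agents use neural networks and target networks):

```python
import numpy as np

def td_uncertainty(q_ensemble, s, a, r, s_next, gamma=0.99):
    """Mean and variance of the TD-error over ensemble members (Eq. 4).

    q_ensemble: (K, num_states, num_actions) array of Q-tables, one per
    member. The transition tau = (s, a, r, s_next) is held fixed, so any
    spread in the TD-errors reflects uncertainty over parameters only.
    """
    q_next = q_ensemble[:, s_next, :].max(axis=-1)      # greedy value at s', per member
    deltas = r + gamma * q_next - q_ensemble[:, s, a]   # one TD-error per member
    return deltas.mean(), deltas.var()

rng = np.random.default_rng(0)
qs = rng.normal(size=(8, 5, 2))  # K=8 members, 5 states, 2 actions
mu, var = td_uncertainty(qs, s=0, a=1, r=1.0, s_next=3)
```

When all members agree (no model uncertainty), the variance is exactly zero, which is what makes this signal suitable for exploration.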
Because p(δ | τ) is defined per transition, it cannot be used as-is for posterior sampling. Therefore, we incorporate TDU as a signal for exploration via an intrinsic reward. To obtain an exploration signal that is on approximately the same scale as the extrinsic reward, we use the standard deviation σ(τ) := √(Vθ[δ(θ, τ) | τ]) to define an augmented reward function

R̃(τ) := R((s, a) ∈ τ) + β σ(τ), (10)

where β ∈ [0, ∞) is a hyper-parameter that determines the emphasis on exploration. Another appealing property of σ is that it naturally decays as the agent converges on a solution (as model uncertainty diminishes); TDU defines a distinct MDP (S, A, P, R̃, γ) under Eq. 10 that converges on the true MDP in the limit of no model uncertainty. For a given policy π and distribution p(Qθ), there exists an exploration policy µ that collects transitions over which p(Qθ) exhibits maximal uncertainty, as measured by σ. In hard-exploration problems, the exploration policy µ can behave fundamentally differently from π. To capture such distinct exploration behaviour, we treat µ as a separate exploration policy that we train to maximise the augmented reward R̃, alongside training a policy π that maximises the extrinsic reward R. This gives rise to a natural separation of exploitation and exploration in the form of a cooperative multi-agent game, where the exploration policy is tasked with finding experiences where the agent is uncertain of its value estimate for the greedy policy π. As π is trained on this data, we expect uncertainty to vanish (up to noise). As this happens, the exploration policy µ is incentivised to find new experiences with higher estimated uncertainty. This induces a particular pattern where exploration will reinforce experiences until the agent’s uncertainty vanishes, at which point the exploration policy expands its state visitation further.
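Applied to a replay batch, Eq. 10 might be computed as follows (a tabular stand-in; β, the array shapes, and the use of each member's own greedy next-state value are assumptions for illustration). Note that the bonus is exactly zero when the ensemble members agree, so R̃ collapses to R as model uncertainty vanishes:

```python
import numpy as np

def tdu_reward(q_ensemble, batch, beta=0.1, gamma=0.99):
    """Augmented reward of Eq. 10: r_tilde = r + beta * sigma(tau).

    q_ensemble: (K, S, A) Q-tables standing in for K networks.
    batch: dict of arrays s, a, r, s_next, each of shape (B,).
    sigma is the standard deviation over the K members' TD-errors.
    """
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    q_next = q_ensemble[:, s_next, :].max(axis=-1)           # (K, B)
    deltas = r[None] + gamma * q_next - q_ensemble[:, s, a]  # (K, B)
    sigma = deltas.std(axis=0)                               # (B,)
    return r + beta * sigma

rng = np.random.default_rng(1)
batch = {"s": np.array([0, 2]), "a": np.array([1, 0]),
         "r": np.array([0.0, 1.0]), "s_next": np.array([3, 4])}
qs = rng.normal(size=(8, 5, 2))
r_tilde = tdu_reward(qs, batch)   # disagreement => non-negative bonus
```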
This process can allow TDU to overcome estimation bias in the posterior—since it is in effect exploiting it—in contrast to previous methods that do not maintain a distinct exploration policy. We demonstrate this empirically both on Montezuma’s Revenge and on Deep Sea (Osband et al., 2020)." }, { "heading": "4 IMPLEMENTING TDU WITH BOOTSTRAPPING", "text": "The distribution over TD-errors that underlies TDU can be estimated using standard techniques for probability density estimation. In this paper, we leverage the statistical bootstrap as it is both easy to implement and provides a robust approximation without requiring distributional assumptions. TDU is easy to implement under the statistical bootstrap—it requires only a few lines of extra code. It can be implemented with value-based as well as actor-critic algorithms (we provide generic pseudo code in Appendix A); in this paper, we focus on Q-learning. Q-learning alternates between policy evaluation (Eq. 1) and policy improvement under a greedy policy πθ(s) = arg max a Qθ(s, a). Deep Q-learning (Mnih et al., 2015) learns Qθ by minimising its TD-error by stochastic gradient descent on transitions sampled from a replay buffer. Unless otherwise stated, in practice we adopt a common approach of evaluating the action taken by the learned network through a target network with separate parameters that are updated periodically (Van Hasselt et al., 2016).\nOur implementation starts from the bootstrapped DQN (Osband et al., 2016a), which maintains a set of K function approximators Q = {Qθk} K k=1, each parameterised by θ\nk and regressed towards a unique target function using bootstrapped sampling of data from a shared replay memory. The Bootstrapped DQN derives a policy πθ by sampling θ uniformly from Q at the start of each episode. We provide an overview of the Bootstrapped DQN in Algorithm 1 for reference. To implement TDU in this setting, we make a change to the loss function (Algorithm 2, changes highlighted in green). 
First, we estimate the TDU signal σ using bootstrapped value estimation. We estimate σ through observed TD-errors {δ_k}_{k=1}^K incurred by the ensemble Q on a given transition:
σ(τ) ≈ √( (1 / (K − 1)) ∑_{k=1}^K (δ(θ^k, τ) − δ̄(τ))² ), (11)
where δ̄ = γQ̄′ + r − Q̄, with x̄ := (1/K) ∑_{i=1}^K x_i and Q′ := Q(s′, π(s′)). An important assumption underpinning the bootstrapped estimation is that of stochastic optimism (Osband et al., 2016b), which requires the distribution over Q to be approximately as wide as the true distribution over value estimates. If not, uncertainty over Q can collapse, which would cause σ to also collapse. To prevent this, Q can be endowed with a prior (Osband et al., 2018) that maintains diversity in the ensemble by defining each value function as Q_{θ^k} + λP_k, λ ∈ [0, ∞), where P_k is a random prior function. Rather than feeding this exploration signal back into the value functions in Q, which would create a positive feedback loop (uncertainty begets higher reward, which begets higher uncertainty ad infinitum), we introduce a separate ensemble of exploration value functions Q̃ = {Q_{θ̃^k}}_{k=1}^N that we train over the augmented reward (Eqs. 10 and 11). We derive an exploration policy µ_θ̃ by sampling exploration parameters θ̃ uniformly from Q̃, as in the standard bootstrapped DQN. In summary, our implementation of TDU maintains K + N value functions. The first K define a standard Bootstrapped DQN. From these, we derive an exploration signal σ, which we use to train the last N value functions. At the start of each episode, we proceed as in the standard Bootstrapped DQN and randomly sample a parameterisation θ from Q ∪ Q̃ that we act under for the duration of the episode. All value functions are trained by bootstrapping from a single shared replay memory (Algorithm 1); see Appendix A for a complete JAX (Bradbury et al., 2018) implementation.
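The estimator in Eq. 11, the reward augmentation of Eq. 10, and the masked ensemble loss of Algorithm 2 fit in a few lines. The sketch below is a simplified single-transition NumPy version (array shapes and function names are our own, not the paper's JAX implementation):

```python
import numpy as np

def td_errors(q, q_next, r, gamma):
    # q, q_next: (K,) per-head estimates Q_k(s, a) and Q_k(s', pi(s'))
    return gamma * q_next + r - q

def tdu_sigma(deltas):
    # Eq. 11: unbiased (1 / (K - 1)) sample standard deviation over the K heads
    return np.std(deltas, ddof=1)

def tdu_loss(q, q_next, q_expl, q_expl_next, r, gamma, beta, mask):
    # Algorithm 2 for a single transition; mask: (K + N,) bootstrap mask.
    K, N = len(q), len(q_expl)
    deltas = td_errors(q, q_next, r, gamma)                        # line 4
    sigma = tdu_sigma(deltas)                                      # line 5
    r_tilde = r + beta * sigma                                     # line 6 (Eq. 10)
    deltas_expl = td_errors(q_expl, q_expl_next, r_tilde, gamma)   # line 7
    sq = np.concatenate([deltas, deltas_expl]) ** 2                # line 8
    return np.sum(mask * sq) / (2 * (K + N))                       # line 10, |D| = 1
```

Note that σ is computed only from the first K (exploitation) heads, while the intrinsic reward βσ enters only the N exploration heads' targets, which is what prevents the positive feedback loop discussed above.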
Consequently, we execute the (extrinsic) reward-maximising policy πθ∼Q with probability K/(K+N) and the exploration policy µθ̃∼Q̃ with probability N/(K+N). While π visits states around current reward-maximising behaviour, µ searches for data with high model uncertainty. While each population Q and Q̃ can be seen as performing Bayesian inference, it is not immediately clear that the full agent admits a Bayesian interpretation. We leave this question for future work.\nThere are several equally valid implementations of TDU (see Appendix A for generic implementations for value-based learning and policy-gradient methods). In our case, it would be equally valid to define only a single exploration policy (i.e. N = 1) and specify the probability of sampling this policy. While this can result in faster learning, a potential drawback is that it restricts the exploratory behaviour that µ can exhibit at any given time. Using a full bootstrapped ensemble for the exploration policy leverages the behavioural diversity of bootstrapping.\nAlgorithm 1 Bootstrapped DQN with TDU Require: M,L: MDP to solve, TDU loss Require: β,K,N, ρ: hyper-parameters\n1: Initialise B: replay buffer 2: Initialise K +N value functions, Q∪Q̃ 3: while not done do 4: Observe s and choose Qk ∼ Q∪Q̃ 5: while episode not done do 6: Take action a = arg max âQk(s, â) 7: Sample mask m, mi∼Bin(n=1, p=ρ) 8: Enqueue transition (s, a, r, s′,m) to B 9: Optimise L({θk}K1 , {θ̃k}N1 ,γ, β,D∼B)\n10: end while 11: end while\nAlgorithm 2 Bootstrapped TD-loss with TDU.\nRequire: {θk}K1 , {θ̃k}N1 : parameters Require: γ, β,D: hyper-parameters, data\n1: Initialise `← 0 2: for s, a, r, s′,m ∈ D do 3: τ ← (s, a, r, s′, γ) 4: Compute {δi}Ki=1 = {δ(θi, τ)}Ki=1 5: Compute σ from {δk}Kk=1 (Eq. 
11)
6: Update τ by r ← r + βσ
7: Compute {δ̃_j}_{j=1}^N = {δ(θ̃^j, τ)}_{j=1}^N
8: ℓ ← ℓ + ∑_{i=1}^K m_i δ_i² + ∑_{j=1}^N m_{K+j} δ̃_j²
9: end for
10: return ℓ / (2(N + K)|D|)

5 EMPIRICAL EVALUATION

5.1 BEHAVIOUR SUITE

Bsuite (Osband et al., 2020) was introduced as a benchmark for characterising core capabilities of RL agents. We focus on Deep Sea, which is explicitly designed to test for deep exploration. It is a challenging exploration problem where only one out of 2^N policies yields any positive reward. Performance is compared on instances of the environment with grid sizes N ∈ {10, 12, . . . , 50}, with an overall “score” that is the percentage of N for which average regret falls below 0.9 in fewer than 2^N episodes. The stochastic version generates a ‘bad’ transition with probability 1/N. This is a relatively high degree of uncertainty since the agent cannot recover from a bad transition in an episode.

For all experiments, we use a standard MLP with Q-learning, off-policy replay and a separate target network. See Appendix D for details and TDU results on the full suite. We compare TDU on Deep Sea to a battery of exploration methods, broadly divided into methods that facilitate exploration by (a) sampling from a posterior (Bootstrapped DQN, Noisy Nets (Fortunato et al., 2018), Successor Uncertainties (Janz et al., 2019)) or (b) using an intrinsic reward (Random Network Distillation (RND; Burda et al., 2018b), CTS (Bellemare et al., 2016), and Q-Explore (QEX; Simmons-Edler et al., 2019)). We report best scores obtained from a hyper-parameter sweep for each method. Overall, performance varies substantially between methods; only TDU performs (near-)optimally on both the deterministic and stochastic versions. Methods that rely on posterior sampling do well on the deterministic version, but suffer a substantial drop in performance on the stochastic version.
As the stochastic version serves to increase the complexity of modelling future state visitation, this is clear evidence that these methods suffer from the estimation bias identified in Section 2. We could not make Q-explore and NoisyNets perform well in the default Bsuite setup, while Successor Uncertainties suffers a catastrophic loss of performance on the stochastic version of DeepSea.\nExamining TDU, we find that it facilitates exploration while retaining overall performance except on Mountain Car where β > 0 hurts performance (Appendix D). For Deep Sea (Figure 2), prior functions are instrumental, even for large exploration bonuses (β 0). However, for a given prior strength, TDU does better than the BDQN (β = 0). In the stochastic version of Deep Sea, BDQN suffers a significant loss of performance (Figure 2). As this is a ceteris paribus comparison, this performance difference can be directly attributed to an estimation bias in the BDQN that TDU circumvents through its intrinsic reward. That TDU is able to facilitate efficient exploration despite environment stochasticity demonstrates that it can correct for such estimation errors.\nFinally, we verify Proposition 2 experimentally. We compare TDU to versions that estimate uncertainty directly over Q (full analysis in Appendix D.2). We compare TDU to (a) a version where σ is defined as standard deviation overQ and (b) where σ(Q) is used as an upper confidence bound in the policy instead of as an intrinsic reward (Figure 2). Neither matches TDU’s performance across Bsuite an in particular on Deep Sea. Being ceteris paribus comparisons, this demonstrates that estimating uncertainty over TD-errors provides a stronger signal for exploration, as per Proposition 2." }, { "heading": "5.2 ATARI", "text": "Proposition 1 shows that estimation bias is particularly likely in complex environments that require neural networks to generalise across states. 
In recent years, such domains have seen significant improvements from running on distributed training platforms that can process large amounts of experience obtained through agent parallelism. It is thus important to develop exploration algorithms that scale gracefully and can leverage the benefits of distributed training. Therefore, we evaluate whether TDU can have a positive impact when combined with the Recurrent Replay Distributed DQN (R2D2) (Kapturowski et al., 2018), which achieves state-of-the-art results on the Atari2600 suite by carefully combining a set of key components: a recurrent state, experience replay, off-policy value learning and distributed training.

As a baseline we implemented a distributed version of the bootstrapped DQN with additive prior functions. We present full implementation details, hyper-parameter choices, and results on all games in Appendix E. For our main results, we run each agent on 8 seeds for 20 billion steps. We focus on games that are well-known to pose challenging exploration problems (Machado et al., 2018): montezuma_revenge, pitfall, private_eye, solaris, venture, gravitar, and tennis. Following standard practice, Figure 3 reports the Human Normalized Score (HNS),
HNS = (Agent_score − Random_score) / (Human_score − Random_score),
as an aggregate result across exploration games as well as results on montezuma_revenge and tennis, which are both known to be particularly hard exploration games (Machado et al., 2018).

Generally, we find that TDU facilitates exploration substantially, improving the mean HNS score across exploration games by 30% compared to baselines (right panel, Figure 3). An ANOVA analysis yields a statistically significant difference between TDU and non-TDU methods, controlling for game (F = 8.17, p = 0.0045). Notably, TDU achieves significantly higher returns on montezuma_revenge and is the only agent that consistently achieves the maximal return on tennis. We report all per-game results in Appendix E.4.
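The HNS aggregation is a simple affine rescaling of raw game scores; a minimal sketch (the per-game numbers below are illustrative only, not results from the paper):

```python
def human_normalized_score(agent, random, human):
    # HNS = (Agent_score - Random_score) / (Human_score - Random_score):
    # 0 matches random play, 1 matches the human reference score.
    return (agent - random) / (human - random)

# Illustrative (agent, random, human) raw scores for two hypothetical games.
scores = {"game_a": (80.0, 20.0, 120.0), "game_b": (5.0, 0.0, 10.0)}
hns = [human_normalized_score(a, r, h) for a, r, h in scores.values()]
mean_hns = sum(hns) / len(hns)
```

Aggregating by the mean over games, as in Figure 3, lets games with very different raw score ranges contribute on a common scale.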
We observe no significant gains from including prior functions with TDU and find that bootstrapping alone produces relatively marginal gains. Beyond exploration games, TDU can match or improve upon the baseline, but exhibits sensitivity to TDU hyper-parameters (β and the number of explorers N; see Appendix E.3 for details). This finding is in line with observations made by Puigdomènech Badia et al. (2020); combining TDU with online hyper-parameter adaptation (Schaul et al., 2019; Xu et al., 2018; Zahavy et al., 2020) is an exciting avenue for future research. See Appendix E for further comparisons.

In Table 1, we compare TDU to recently proposed state-of-the-art exploration methods. While comparisons must be made with care due to different training regimes, computational budgets, and architectures, we note a general trend that no method is uniformly superior. Methods that are good on extremely sparse exploration games (montezuma_revenge and pitfall!) tend to do poorly on games with dense rewards and vice versa. TDU is generally among the top two algorithms in all cases except on montezuma_revenge and pitfall!, where state-based exploration is needed to achieve sufficient coverage of the MDP. TDU generally outperforms Pixel-CNN (Ostrovski et al., 2017), CTS, and RND. TDU is the only algorithm to achieve super-human performance on solaris and achieves the highest score of all baselines considered on venture.

6 RELATED WORK

Bayesian approaches to exploration typically use uncertainty as the mechanism for balancing exploitation and exploration (Strens, 2000). A popular instance of this form of exploration is the PILCO algorithm (Deisenroth & Rasmussen, 2011).
While we rely on the bootstrapped DQN (Osband et al., 2016a) in this paper, several other uncertainty estimation techniques have been proposed, such as placing a parameterised distribution over model parameters (Fortunato et al., 2018; Plappert et al., 2018), modelling a distribution over both the value and the returns (Moerland et al., 2017), using Bayesian linear regression on the value function (Azizzadenesheli et al., 2018; Janz et al., 2019), or modelling the variance over value estimates as a Bellman operation (O’Donoghue et al., 2018). The underlying exploration mechanism in these works is posterior sampling from the agent’s current beliefs (Thompson, 1933; Dearden et al., 1998); our work suggests that estimating this posterior is significantly more challenging than previously thought.

An alternative to posterior sampling is to facilitate exploration via learning by introducing an intrinsic reward function. Previous works typically formulate intrinsic rewards in terms of state visitation (Lopes et al., 2012; Bellemare et al., 2016; Puigdomènech Badia et al., 2020), state novelty (Schmidhuber, 1991; Oudeyer & Kaplan, 2009; Pathak et al., 2017), or state predictability (Florensa et al., 2017; Burda et al., 2018b; Gregor et al., 2016; Hausman et al., 2018). Most of these works rely on properties of the state space to drive exploration while ignoring rewards. While this can be effective in sparse-reward settings (e.g. Burda et al., 2018b; Puigdomènech Badia et al., 2020), it can also lead to arbitrarily bad exploration (see the analysis in Osband et al., 2019).

A smaller body of work uses statistics derived from observed rewards (Nachum et al., 2016) or TD-errors to design intrinsic reward functions; our work is particularly related to the latter. Tokic (2010) proposes an extension of ε-greedy exploration, where the exploration rate is modulated to be higher in states with higher TD-error.
Gehring & Precup (2013) use the mean absolute TD-error, accumulated over time, to measure controllability of a state and reward the agent for visiting states with low mean absolute TD-error. In contrast to our work, this method integrates the TD-error over time to obtain a measure of irreducibility. Simmons-Edler et al. (2019) propose to use two Q-networks, where one is trained on data collected under both networks and the other obtains an intrinsic reward equal to the absolute TD-error of the first network on a given transition. In contrast to our work, this method does not have a probabilistic interpretation and thus does not control for uncertainty over the environment. TD-errors have also been used in White et al. (2015), where surprise is defined in terms of the moving average of the TD-error over the full variance of the TD-error. Kumaraswamy et al. (2018) rely on least-squares TD-errors to derive a context-dependent upper-confidence bound for directed exploration. Finally, using the TD-error as an exploration signal is related to the notion of “learnability” or curiosity as a signal for exploration, which is often modelled in terms of the prediction error in a dynamics model (e.g. Schmidhuber, 1991; Oudeyer et al., 2007; Gordon & Ahissar, 2011; Pathak et al., 2017)." }, { "heading": "7 CONCLUSION", "text": "We present Temporal Difference Uncertainties (TDU), a method for estimating uncertainty over an agent’s value function. Obtaining well-calibrated uncertainty estimates under function approximation is non-trivial and we show that popular approaches, while in principle valid, can fail to accurately represent uncertainty over the value function because they must represent an unknown future.\nThis motivates TDU as an estimate of uncertainty conditioned on observed state-action transitions, so that the only source of uncertainty for a given transition is due to uncertainty over the agent’s parameters. 
This gives rise to an intrinsic reward that encodes the agent’s model uncertainty, and we capitalise on this signal by introducing a distinct exploration policy. This policy is incentivised to collect data over which the agent has high model uncertainty and we highlight how this separation gives rise to a form of cooperative multi-agent game. We demonstrate empirically that TDU can facilitate efficient exploration in hard exploration games such as Deep Sea and Montezuma’s Revenge." }, { "heading": "B PROOFS", "text": "We begin with the proof of Lemma 1. First, we show that if Eq. 2 and Eq. 3 fail, p(θ) induce a distribution p(Qθ) whose first two moments are biased estimators of the moments of the distribution of interest p(Qπ), for any choice of belief over the MDP, p(M). We restate it here for convenience.\nLemma 1. If Eθ[Qθ] and Vθ[Qθ] fail to satisfy Eqs. 2 and 3, respectively, they are biased estimators of EM [ QMπ ] and VM [ QMπ ] for any choice of p(M). Proof. Assume the contrary, that EM [ QMπ (s, π(s)) ] = Eθ[Qθ(s, π(s))] for all (s, a) ∈ S ×A. If Eqs. 2 and 3 do not hold, then for any M ∈ {E,V},\nMM [ QMπ (s, π(s)) ] = Mθ[Qθ(s, π(s))] (12)\n6= Mθ [ Es′∼P(s,π(s)) r∼R(s,π(s)) [r + γQθ(s ′, π(s′))] ] (13)\n= Es′∼P(s,π(s)) r∼R(s,π(s))\n[r + γMθ[Qθ(s′, π(s′))]] (14)\n= Es′∼P(s,π(s)) r∼R(s,π(s))\n[ r + γMM [ QMπ (s ′, π(s′)) ]]\n(15)\n= MM [ Es′∼P(s,π(s)) r∼R(s,π(s)) [ r + γQMπ (s ′, π(s′)) ]]\n(16)\n= MM [ QMπ (s, π(s)) ] , (17)\na contradiction; conclude that MM [ QMπ (s, π(s)) ] 6= Mθ[Qθ(s, π(s))]. Eqs. 13 and 17 use Eqs. 2 and 3; Eqs. 12, 13 and 15 follow by assumption; Eqs. 14 and 16 use linearity of the expectation operator Er,s′ by virtue of M being defined over θ. As (s, a, r, s′) and p(M) are arbitrary, the conclusion follows.\nMethods that take inspiration from by PSRL but rely on neural networks typically approximate p(M) by a parameter distribution p(θ) over the value function. 
Lemma 1 establishes that the induced distribution p(Qθ) under push-forward of p(θ) must propagate the moments of the distribution p(Qθ) consistently over the state-space to be unbiased estimate of p(QMπ ), for any p(M).\nWith this in mind, we now turn to neural networks and their ability to estimate value function uncertainty in MDPs. To prove our main result, we establish two intermediate results. Recall that we define a function approximator Qθ = w ◦ φϑ, where θ = (w1, . . . , wn, ϑ1, . . . , ϑv); w ∈ Rn is a linear layer and φ : S ×A → Rn is a feature extractor with parameters ϑ ∈ Rv . As before, let M be an MDP (S,A,P,R, γ) with discrete state and action spaces. We denote by N the number of states and actions with Eθ[Qθ(s, a)] 6= Eθ[Qθ(s′, a′)] with N ⊂ S ×A×S ×A the set of all such pairs (s, a, s′, a′). This set can be thought of as a minimal MDP—the set of states within a larger MDP where the function approximator generates unique predictions. It arises in an MDP through dense rewards, stochastic rewards, or irrevocable decisions, such as in Deep Sea. Our first result is concerned with a very common approach, where ϑ is taken to be a point estimate so that p(θ) = p(w). This approach is often used for large neural networks, where placing a posterior over the full network would be too costly (Osband et al., 2016a; O’Donoghue et al., 2018; Azizzadenesheli et al., 2018; Janz et al., 2019).\nLemma 2. Let p(θ) = p(w). If N > n, with w ∈ Rn, then Eθ[Qθ] fail to satisfy the first moment Bellman equation (Eq. 2). Further, if N > n2, then Vθ[Qθ] fail to satisfy the second moment Bellman equation (Eq. 3).\nProof. Write the first condition of Eq. 2 as\nEθ [ wTφϑ(s, a) ] = Eθ [ Er,s′ [ r + γwTφϑ(s ′, π(s′)) ]] . (18)\nUsing linearity of the expectation operator along with p(θ) = p(w), we have\nEw[w]T φϑ(s, a) = µ(s, a) + γEw[w]T Es′ [φϑ(s′, π(s′))] , (19)\nwhere µ(s, a) = Er∼R(s,a)[r]. Rearrange to get\nµ(s, a) = Ew[w]T ( φϑ(s, a)− γEs′ [φϑ(s′, π(s′))] ) . 
(20)\nBy assumption Eθ[Qθ(s, a)] 6= Eθ[Qθ(s′, a′)], which implies φϑ(s, a) 6= φϑ(s′, π(s′)) by linearity in w. Hence φϑ(s, a) − γEs′ [φϑ(s′, π(s′))] is non-zero and unique for each (s, a). Thus, Eq. 20 forms a system of linear equations over S ×A, which can be reduced to a full-rank system over N : µ = ΦEw[w], where µ ∈ RN stacks expected reward µ(s, a) and Φ ∈ RN×n stacks vectors φϑ(s, a) − γEs′ [φϑ(s′, π(s′))] row-wise. Because Φ is full rank, if N > n, this system has no solution. The conclusion follows for Eθ[Qθ]. If the estimator of the mean is used to estimate the variance, then the estimator of the variance is biased. For an unbiased mean, using linearity in w, write the condition of Eq. 3 as\nEθ [[ (w − Ew[w])Tφϑ(s, a) ]2] = Eθ [[ γ(w − Ew[w])TEs′ [φϑ(s′, π(s′))] ]2] . (21)\nLet w̃ = ( wT − Ew[w] ) , x = w̃Tφϑ(s, a), y = γw̃TEs′ [φϑ(s′, a′)]. Rearrange to get\nEθ [ x2 − y2 ] = Ew[(x− y)(x+ y)] = 0. (22)\nExpanding terms, we find\n0 = Eθ [( w̃T [φϑ(s, a)− γEs′ [φϑ(s′, a′)]] )( w̃T [φϑ(s, a) + γEs′ [φϑ(s′, a′)]] )] (23)\n= n∑ i=1 n∑ j=1 Ew[w̃iw̃j ] d−i d + j = n∑ i=1 n∑ j=1 Cov (wi, wj) d − i d + j . (24)\nwhere we define d− = φϑ(s, a) − γEs′ [φϑ(s′, a′)] and d+ = φϑ(s, a) + γEs′ [φϑ(s′, a′)]. As before, d− and d+ are non-zero by assumption of unique Q-values. Perform a change of variables ωα(i,j) = Cov(wi, wj), λα(i,j) = d − i d + j to write Eq. 24 as 0 = λ Tω. Repeating the above process for every state and action we have a system 0 = Λω, where 0 ∈ RN and Λ ∈ RN×n 2\nare defined by stacking vectors λ row-wise. This is a system of linear equations and if N > n2 no solution exists; thus, the conclusion follows for Vθ[Qθ], concluding the proof.\nNote that if Eθ[Qθ] is biased and used to construct the estimator Eθ[Qθ], then this estimator is also biased; hence if N > n, p(θ) induce biased estimators Eθ[Qθ] and Vθ[Qθ] of EM [ QMπ ] and\nVM [ QMπ ] , respectively.\nLemma 2 can be seen as a statement about linear uncertainty. 
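The pigeonhole argument in the proof of Lemma 2 can be illustrated numerically: when N > n, the system µ = ΦE[w] from Eq. 20 is overdetermined, so no choice of posterior mean over the last layer satisfies the mean Bellman equation at all N pairs. A sketch under random (hence almost surely full-rank) Φ and µ; the specific sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 8, 3                      # N unique state-action pairs, n last-layer weights
Phi = rng.normal(size=(N, n))    # rows: phi(s, a) - gamma * E[phi(s', pi(s'))]
mu = rng.normal(size=N)          # expected rewards mu(s, a)

# The best achievable E[w] in the least-squares sense still leaves a residual:
# the Bellman mean condition cannot hold at all N pairs simultaneously.
w, _, rank, _ = np.linalg.lstsq(Phi, mu, rcond=None)
residual = np.linalg.norm(Phi @ w - mu)
assert rank == n and residual > 1e-6
```

By contrast, setting N ≤ n in the same sketch drives the residual to (numerical) zero, matching the condition in the lemma.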
While the result is not too surprising from this point of view, it is nonetheless a frequently used approach to uncertainty estimation. We may hope then that by placing uncertainty over the feature extractor as well, we can benefit from its nonlinearity to obtain greater representational capacity with respect to uncertainty propagation. Such posteriors come at a price. Placing a full posterior over a neural network is often computationally infeasible, instead a common approach is to use a diagonal posterior, i.e. Cov(θi, θj) = 0 (Fortunato et al., 2018; Plappert et al., 2018). Our next result shows that any posterior of this form suffers from the same limitations as placing a posterior only over the final layer. We establish something stronger: any posterior of the form p(θ) = p(w)p(ϑ) suffers from the limitations described in Lemma 2.\nLemma 3. Let p(θ) = p(w)p(ϑ); if N > n, with w ∈ Rn, then Eθ[Qθ] fail to satisfy the first moment Bellman equation (Eq. 2). Further, if N > n2, then Vθ[Qθ] fail to satisfy the second moment Bellman equation (Eq. 3).\nProof. The proof largely proceeds as in the proof of Lemma 2. Re-write Eq. 19 as\nEw[w]T Eϑ[φϑ(s, a)] = µ(s, a) + γEw[w]T Es′ [Eϑ[φϑ(s′, π(s′))]] . (25)\nPerform a change of variables φ̃ = Eϑ[φϑ] to obtain\nµ(s, a) = Ew[w]T ( φ̃(s, a)− γEs′ [ φ̃(s′, π(s′)) ] ) . (26)\nBecause Eθ[Qθ(s, a)] 6= Eθ[Qθ(s′, a′)], by linearity in w we have that φ̃(s, a)− φ̃(s′, a′) is non-zero for any (s′, a′) and hence Eq. 26 has no trivial solutions. Proceeding as in the proof of Lemma 2 obtains µ = Φ̃Ew[w], where Φ̃ is analogously defined. Note that if N > n there is no solution Ew[w] for any admissible (full-rank) choice of Φ̃, and hence the conclusion follows for the first part. For the second part, using that Eθ = EwEϑ in Eq. 24 yields\n0 = n∑ i=1 n∑ j=1 Ew[w̃iw̃j ]Eϑ [ d−i d + j ] = n∑ i=1 n∑ j=1 Cov (wi, wj)Eϑ [ d−i d + j ] . (27)\nPerform a change of variables λ̃α(i,j) = Eϑ [ d−i d + j ] . 
Again, by Eθ[Qθ(s, a)] 6= Eθ[Qθ(s′, a′)] we\nhave that λ̃ is non-zero; proceed as before to complete the proof.\nWe are now ready to prove our main result. We restate it here for convenience:\nProposition 1. If the number of state-action pairs where Eθ[Qθ(s, a)] 6= Eθ[Qθ(s′, a′)] is greater than n, where w ∈ Rn, then Eθ[Qθ] and Vθ[Qθ] are biased estimators of EM [ QMπ ] and VM [ QMπ ] for any choice of p(M).\nProof. Let p(θ) be of the form p(θ) = p(w) or p(θ) = p(w)p(ϑ). By Lemmas 2 and 3, p(θ) fail to satisfy Eq. 2. By Lemma 1, this causes Eθ[Qθ] to be a biased estimator of EM [ QMπ ] . This in\nturn implies that Vθ[Qθ] is a biased estimator of VM [ QMπ ] . Further, if N > n2, Vθ[Qθ] is biased independently of Eθ[Qθ].\nWe now turn to analysing the bias of our proposed estimators. As before, we will build up to Proposition 2 through a series of lemmas. For the purpose of these results, let B : S ×A → R denote the bias of Eθ[Qθ] in any tuple (s, a) ∈ S ×A, so that Bias(Eθ[Qθ] (s, a)) = B(s, a).\nLemma 4. Given a transition τ := (s, a, r, s′), for any p(M), given p(θ), if\nB(s′, π(s′))\nB(s, a) ∈ (0, 2/γ) (28)\nthen Eθ[δ(θ, τ) | τ ] has less bias than Eθ[Qθ(s, a)].\nProof. From direct manipulation of Eθ[δ(θ, τ) | τ ], we have\nEθ[δ(θ, τ) | τ ] = Eθ[γQθ(s′, π(s′)) + r −Qθ(s, a)] (29) = γEθ[Qθ(s′, π(s′))] + r − Eθ[Qθ(s, a)] (30) = γEM [ QMπ (s ′, π(s′)) ] + r − EM [ QMπ (s, a) ] + γB(s′, π(s′))−B(s, a) (31)\n= EM [ δMπ (τ) ] + γB(s′, π(s′))−B(s, a). (32)\nConsequently, Bias(Eθ[δ(θ, τ) | τ ]) = γB(s′, π(s′)) − B(s, a) and for this bias to be less than Bias(Eθ[Qθ(s, a)]) = B(s, a), we require |γB(s′, π(s′)) − B(s, a)| < |B(s, a)|. Let ρ = B(s′, π(s′))/B(s, a) and write |(γρ − 1)B(s, a)| < |B(s, a)| from which it follows that for this to hold true, we must have ρ ∈ (0, 2/γ), as to be proved.\nWe now turn to characterising the conditions under which Vθ[δ(θ, τ) | τ ] enjoys a smaller bias than Vθ[Qθ(s, a)]. 
Because the variance term involves squaring the TD-error, we must place some restrictions on the expected behaviour of the Q-function to bound the bias. First, as with B, let C : S ×A → R denote the bias of Eθ [ Q2θ ] for any tuple (s, a) ∈ S ×A, so that Bias(Eθ [ Qθ(s, a) 2 ] ) = C(s, a). Similarly, let D : S ×A×S → R denote the bias of Eθ[Qθ(s′, π(s′))Qθ(s, a)] for any transition (s, a, s′) ∈ S ×A×S .\nLemma 5. For any τ and any p(M), given p(θ), define relative bias ratios\nρ = B(s′, π(s′))\nB(s, a) , φ =\nC(s′, π(s′))\nC(s, a) , κ =\nD(s, a, s′)\nC(s, a) , α =\nEM [ QMπ (s ′, π(s′)) ]\nEM [QMπ (s, a)] . (33)\nThere exists ρ ≈ 1, φ ≈ 1, κ ≈ 1, α ≈ 1 such that Vθ[δ(θ, τ) | τ ] have less bias than Vθ[Qθ(s, a)]. In particular, if ρ = φ = κ = α = 1, then\n|Bias(Vθ[δ(θ, τ) | τ ])| = |(γ − 1)2 Bias(Vθ[Qθ(s, a)])| < |Bias(Vθ[Qθ(s, a)])|. (34)\nFurther, if ρ = 1/γ, κ = 1/γ, φ = 1/γ2, then |Bias(Vθ[δ(θ, τ) | τ ])| = 0 for any α.\nProof. We begin by characterising the bias of Vθ[Qθ(s, a)]. Write\nVθ[Qθ(s, a)] = Eθ [ Q(s, a) 2 ] − Eθ[Q(s, a)]2 (35)\n= EM [ QMπ (s, a) 2 ] + C(s, a)− ( EM [ QMπ (s, a) ] +B(s, a) )2 . (36)\nThe squared term expands as\n( EM [ QMπ (s, a) ] +B(s, a) )2 = EM [ QMπ (s, a) ]2 + 2EM [ QMπ (s, a) ] B(s, a) +B(s, a)2. (37)\nLet A(s, a) = EM [ QMπ (s, a) ] B(s, a) and write the bias of Vθ[Qθ(s, a)] as\nBias(Vθ[Qθ(s, a)]) = C(s, a) + 2A(s, a) +B(s, a)2. (38)\nWe now turn to Vθ[δ(θ, τ) | τ ]. First note that the reward cancels in this expression:\nδ(θ, τ)− Eθ[δ(θ, τ)] = γQθ(s′, π(s′))−Qθ(s, a)− (γEθ[Qθ(s′, π(s′))]− Eθ[Qθ(s, a)]) . (39)\nDenote by xθ = γQθ(s′, π(s′))−Qθ(s, a) with Eθ[xθ] = γEθ[Qθ(s′, π(s′))]−Eθ[Qθ(s, a)]. Write\nVθ[δ(θ, τ) | τ ] = Eθ [ (δ(θ, τ)− Eθ[δ(θ, τ)])2 ] (40)\n= Eθ [ (xθ − Eθ[xθ])2 ] (41)\n= Eθ [ x2θ ] − Eθ[xθ]2 (42)\n= Eθ [ (γQθ(s ′, π(s′))−Qθ(s, a)) 2 ] − (γEθ[Qθ(s′, π(s′))]− Eθ[Qθ(s, a)]) 2 .\n(43)\nEq. 41 uses Eq. 39 and Eq. 43 substitutes back for xθ. We consider each term in the last expression in turn. 
For the first term, Eθ [ (γQθ(s ′, π(s′))−Qθ(s, a))2 ] , expanding the square yields\nγ2Eθ [ Qθ(s ′, π(s′))2 ] − 2γEθ[Qθ(s′, π(s′)Qθ(s, a)] + Eθ [ Qθ(s, a) 2 ] . (44)\nFrom this, we obtain the bias as\nBias ( Eθ [ (γQθ(s ′, π(s′))−Qθ(s, a)) 2 ]) = γ2C(s′, π(s′))− 2γD(s, a, s′) + C(s, a) (45)\n= ( γ2φ− 2γκ+ 1 ) C(s, a). (46)\nWe can compare this term to C(s, a) in the bias of of Vθ[Qθ(s, a)] (Eq. 38). For the bias term in Eq. 46 to be smaller, we require | ( γ2φ− 2γκ+ 1 ) C(s, a)| < |C(s, a)| from which it follows that(\nγ2φ− 2γκ+ 1 ) ∈ (−1, 1). In terms of φ, this means\nφ ∈ (\n2kγ − 2 γ2 , 2k γ\n) . (47)\nIf the bias term D is close to C (κ ≈ 1), this is approximately the same condition as for ρ in Lemma 4. Generally, as κ grows large, φ must grow small and vice-versa. The gist of this requirement is that the biases should be relatively balanced κ ≈ φ ≈ 1. For the second term in Eq. 43, recall that Eθ[Qθ(s′, π(s′))] = EM [ QMπ (s ′, π(s′)) ] + B(s′, π(s′))\nand Eθ[Qθ(s, a)] = EM [ QMπ (s, a) ] +B(s, a). We have\n(Eθ[Qθ(s′, π(s′))]− Eθ[Qθ(s, a)]) 2 = ( (γα− 1)EM [ QMπ (s, a) ] + (γρ− 1)B(s, a) )2 , (48)\nwhere α = EM [ QMπ (s ′, π(s′)) ] /EM [ QMπ (s, a) ] . This expands as\n(γα−1)2EM [ QMπ (s, a) ]2 +2(γα−1)(γρ−1)EM [ QMπ (s, a) ] B(s, a)+(γρ−1)2B(s, a)2. (49)\nNote that from Eq. 34, ( EM [ QMπ (s ′, π(s′)) ] − EM [ QMπ (s, a) ])2 = (γα−1)2EM [ QMπ (s, a) ]2 and so the bias of Vθ[δ(θ, τ) | τ ] can be written as\nBias(Vθ[δ(θ, τ) | τ ]) = w1(φ, κ)C(s, a) + w2(α, ρ)2A(s, a) + w3(ρ)B(s, a)2 (50)\nwhere\nw1(φ, κ) = ( γ2φ− 2γκ+ 1 ) , w2(α, ρ) = (γα− 1)(γρ− 1), w3(ρ) = (γρ− 1)2. (51)\nNote that the bias in Eq. 50 involves the same terms as the bias of Vθ[Qθ(s, a)] (Eq. 38) but are weighted. Hence, there always exist as set of weights such that |Bias(Vθ[δ(θ, τ) | τ ])| < |Bias(Vθ[Qθ(s, a)])|. In particular, if ρ = 1/γ, κ = 1/γ, φ = 1/γ2, then Bias(Vθ[δ(θ, τ) | τ ])| = 0 for any α. 
Further, if ρ = α = κ = φ = 1, then we have that w1(φ, κ) = w2(α, ρ) = w3(ρ) = (γ − 1)2 and so\n|Bias(Vθ[δ(θ, τ) | τ ])| = |(γ − 1)2 Bias(Vθ[Qθ(s, a)])| < |Bias(Vθ[Qθ(s, a)])|, (52) as desired.\nProposition 2. For any τ and any p(M), given p(θ), if ρ ∈ (0, 2/γ), then Eδ[δ | τ ] has lower bias than Eθ[Qθ(s, a)]. Additionally, there exists ρ ≈ 1, φ ≈ 1, κ ≈ 1, α ≈ 1 such that Vθ[δ(θ, τ) | τ ] have less bias than Vθ[Qθ(s, a)]. In particular, if ρ = φ = κ = α = 1, then |Bias(Vθ[δ(θ, τ) | τ ])| = |(γ − 1)2 Bias(Vθ[Qθ(s, a)])| < |Bias(Vθ[Qθ(s, a)])|. Further, if ρ = 1/γ, κ = 1/γ, φ = 1/γ2, then |Bias(Vθ[δ(θ, τ) | τ ])| = 0 for any α.\nProof. The first part follows from Lemma 4, the second part follows from Lemma 5." }, { "heading": "C BINARY TREE MDP", "text": "In this section, we make a direct comparison between the Bootstrapped DQN and TDU on the Binary Tree MDP introduced by Janz et al. (2019). In this MDP, the agent has two actions in every state. One action terminates the episode with 0 reward while the other moves the agent one step further up the tree. At the final branch, one leaf yields a reward of 1. Which action terminates the episode and which moves the agent to the next branch is randomly chosen per branch, so that the agent must learn an action map for each branch separately. This is a similar environment to Deep Sea, but simpler in that an episode terminates upon taking a wrong action and the agent does not receive a small negative reward for taking the correct action. We include the Binary Tree MDP experiment to compare the scaling property of TDU as compared to TDU on a well-known benchmark.\nWe use the default Bsuite implementation2 of the bootstrapped DQN, with the default architecture and hyper-parameters from the published baseline, reported in Table 2. The agent is composed of a two-layer MLP with RELU activations that approximate Q(s, a) and is trained using experience replay. 
In the case of the bootstrapped DQN, all ensemble members learn from a shared replay buffer with bootstrapped data sampling, where each member Qθk is a separate MLP (no parameter sharing) that is regressed towards separate target networks. We use Adam (Kingma & Ba, 2015) and update target networks periodically (Table 2).\nWe run 5 seeds per tree-depth, for depths L ∈ {10, 20, . . . , 250} and report mean performance in Figure 4. Our results are in line with those of Janz et al. (2019), differences are due to how many gradient steps are taken per episode (our results are between the reported scores for the 1× and 25× versions of the bootstrapped DQN). We observe a clear beneficial effect of including TDU, even for small values of β. Further, we note that performance is largely monotonically increasing in β, further demonstrating that the TDU signal is well-behaved and robust to hyper-parameter values.\nWe study the properties of TDU in Figure 5, which reports performance without prior functions (λ = 0). We vary β and the number of exploration value functions N . The total number of value functions is fixed at 20, and so varying N is equivalent to varying the degree of exploration. We note that N has a similar effect to β, but has a slightly larger tendency to induce over-exploration for large values of N .\n2https://github.com/deepmind/bsuite/tree/master/bsuite/baselines/jax/ bootdqn." }, { "heading": "D BEHAVIOUR SUITE", "text": "From Osband et al. (2020): “The Behaviour Suite for Reinforcement Learning (Bsuite) is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning agent . The aim of the Bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks.”\nD.1 AGENTS AND HYPER-PARAMETERS\nAll baselines use the default Bsuite DQN implementation3. 
We use the default architecture and hyper-parameters from the published baseline, reported in Table 2, and sweep over algorithm-specific hyper-parameters, reported in Table 3. The agent is composed of a two-layer MLP with ReLU activations that approximates Q(s, a) and is trained using experience replay. In the case of the bootstrapped DQN, all ensemble members learn from a shared replay buffer with bootstrapped data sampling, where each member Qθk is a separate MLP (no parameter sharing) that is regressed towards separate target networks. We use Adam (Kingma & Ba, 2015) and update target networks periodically (Table 2).
QEX Uses two networks Qθ and Qϑ, where Qθ is trained to maximise the extrinsic reward, while Qϑ is trained to maximise the absolute TD-error of Qθ (Simmons-Edler et al., 2019). In contrast to TDU, the intrinsic reward is given as a point estimate of the TD-error for a given transition, and thus cannot be interpreted as measuring uncertainty as such.
CTS Implements a count-based reward defined by i(s, a, H) = (N(s, a, H) + 0.01)^(−1/2), where H is the history and N(s, a, H) = Σ_{τ∈H} 1_{(s,a)∈τ} is the number of times (s, a) has appeared in a transition τ := (s, a, r, s′). This intrinsic reward is added to the extrinsic reward to form an augmented reward r̃ = r + βi used to train a DQN agent (Bellemare et al., 2016).
RND Uses two auxiliary networks fϑ and fϑ̃ that map a state into vectors x = fϑ(s) and x̃ = fϑ̃(s), x, x̃ ∈ R^m. While ϑ̃ is a random parameter vector that is fixed throughout, ϑ is trained to minimise the mean squared error i(s) = ‖x − x̃‖². This error is simultaneously used as an intrinsic reward in the augmented reward function r̃(s, a) = r(s, a) + βi(s) and is used to train a DQN agent.
3https://github.com/deepmind/bsuite/tree/master/bsuite/baselines/jax/dqn.
Following Burda et al.
(2018b), we normalise intrinsic rewards by an exponential moving average of the mean and the standard deviation that are being updated with batch statistics (with decay α).
BDQN Trains an ensemble Q = {Qθk}, k = 1, . . . , K, of DQNs (Osband et al., 2016a). At the start of each episode, one DQN is randomly chosen from which a greedy policy is derived. Data collected is placed in a shared replay memory, and all ensemble members have some probability ρ of training on any transition in the replay. Each ensemble member has its own target network. In addition, each DQN is augmented with a random prior function fϑ̃, where ϑ̃ is a fixed parameter vector that is randomly sampled at the start of training. Each DQN is defined by Qθk + λfϑ̃k, where λ is a hyper-parameter regulating the scale of the prior. Note that the target network uses a distinct prior function.
SU Decomposes the DQN as Qθ(s, a) = wᵀψϑ(s, a). The parameters ϑ are trained to satisfy the Successor Feature identity, while w is learned using Bayesian linear regression; at the start of each episode, a new w is sampled from the posterior p(w | history) (Janz et al., 2019).4
NNS NoisyNets replace feed-forward layers Wx + b by a noisy equivalent (W + Σ ⊙ εW)x + (b + σ ⊙ εb), where ⊙ is element-wise multiplication; the entries εWij ∼ N(0, β) and εbi ∼ N(0, β) are white noise of the same size as W and b, respectively. The set (W, Σ, b, σ) are learnable parameters that are trained on the normal TD-error, but with the noise re-sampled after every optimisation step. Following Fortunato et al. (2018), we sample noise separately for the target and the online network.
TDU We fix the number of explorers to 10 (half of the number of value functions in the ensemble), which roughly corresponds to randomly sampling between a reward-maximising policy and an exploration policy.
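The noisy layer used by the NNS baseline above can be sketched with numpy as follows (a minimal sketch, treating β as the noise variance as in the N(0, β) notation; function and variable names are ours):

```python
import numpy as np

def noisy_linear(x, W, Sigma, b, sigma, beta, rng):
    """(W + Sigma * eps_W) @ x + (b + sigma * eps_b), element-wise noise.

    W, b are the mean parameters; Sigma, sigma scale element-wise white noise
    eps ~ N(0, beta). Noise is re-sampled on every call, mirroring re-sampling
    after every optimisation step.
    """
    eps_W = rng.normal(0.0, np.sqrt(beta), size=W.shape)
    eps_b = rng.normal(0.0, np.sqrt(beta), size=b.shape)
    return (W + Sigma * eps_W) @ x + (b + sigma * eps_b)
```

With Sigma = sigma = 0 the layer reduces to the deterministic Wx + b, which is why the noise scales can be learned away on tasks where exploration is unnecessary.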
Our experiments can be replicated by running the TDU agent implemented in Algorithm 5 in the Bsuite GitHub repository.5" }, { "heading": "D.2 TDU EXPERIMENTS", "text": "Effect of TDU Our main experiment sweeps over β to study the effect of increasing the TDU exploration bonus, with β ∈ {0, 0.01, 0.1, 0.5, 1, 2, 3, 5}; β = 0 corresponds to the default bootstrapped DQN. We find that β reflects the exploitation-exploration trade-off: increasing β leads to better performance on exploration tasks (see main paper) but typically leads to worse performance on tasks that do not require further exploration beyond ε-greedy (Figure 6). In particular, we find that β > 0 prevents the agent from learning on Mountain Car, but otherwise retains performance on non-exploration tasks. Figure 7 provides an in-depth comparison per game.
Because σ is a principled measure of concentration in the distribution p(δ | s, a, r, s′), β can be interpreted as specifying how much of the tail of the distribution the agent should care about. The higher we set β, the greater the agent's sensitivity to the tail-end of its uncertainty estimate. Thus, there is no reason in general to believe that a single β should fit all environments, and recent advances in multi-policy learning (Schaul et al., 2019; Zahavy et al., 2020; Puigdomènech Badia et al., 2020) suggest that a promising avenue for further research is to incorporate mechanisms that allow either β or the sampling probability over policies to adapt dynamically. To provide concrete evidence to that effect, we conduct an ablation study that uses bandit policy sampling below.
Effect of prior functions We study the inter-relationship between additive prior functions (Osband et al., 2019) and TDU.
We sweep over λ ∈ {0, 1, 3}, where prior functions define value function estimates by Qk = Qθk + λPk for some random network Pk. Thus, λ = 0 implies no prior function. We find a generally synergistic relationship; increasing λ improves performance (both with and without TDU), and for a given level of λ, performance on exploration tasks improves for any β > 0. It should be noted that these effects do not materialise as clearly in our Atari settings, where we find no conclusive evidence to support λ > 0 under TDU.
Ablation: exploration under non-TD signals To empirically support the theoretical underpinnings of TDU (Proposition 2), we conduct an ablation study where σ is re-defined as the standard deviation over value estimates:
σ(Q) := √((1/(K − 1)) Σ_{k=1}^{K} (Qk − Q̄)²). (53)
In contrast to TDU, this signal does not condition on the future and consequently is likely to suffer from a greater bias. We apply this signal both as an intrinsic reward (QU), as in TDU, and as a UCB-style exploration bonus (Q+UCB), where σ is instead applied while acting, by defining a policy π(·) = arg max_a Q(·, a) + βσ(Q; ·, a). Note that TDU cannot be applied in this way because the TDU exploration signal depends on r and s′. We tune each baseline over the same set of β values as above (incidentally, these coincide at β = 1) and report best results in Figure 6. We find that either alternative is strictly worse than TDU. They suffer a significant drop in performance on exploration tasks, but are also less able to handle noise and reward scaling. The only difference between QU and TDU is that in TDU, σ conditions on the next state; thus, these results directly support Proposition 2 and demonstrate that Vθ[δ | τ] is likely to have less bias than Vθ[Qθ(s, a)].
Ablation: bandit policy sampling Our main results indicate, unsurprisingly, that different environments require different emphasis on exploration.
To test this more concretely, in this experiment we replace uniform policy sampling with the UCB1 bandit algorithm. However, in contrast to that example, where UCB1 is used to take actions, here it is used to select a policy for the next episode. We treat each of the N + K value functions as an "arm" and estimate its mean reward V^k ≈ E_{π^k}[r], where the expectation is with respect to rewards r collected under the policy π^k(·) = arg max_a Q^k(·, a). The mean reward is estimated as the running average
V^k(n) = (1/n(k)) Σ_{i=1}^{n(k)} r_i, (54)
where n(k) is the number of environment steps for which policy π^k has been used and r_i are the observed rewards under policy π^k. Prior to an episode, we choose a policy to act under according to arg max_{k=1,...,N+K} V^k(n) + η √(log n / n(k)), where n is the total number of environment steps taken so far and η is a hyper-parameter that we tune. As in the bandit example, this sampling strategy biases selection towards policies that currently collect higher reward, but balances sampling by a count-based exploration bonus that encourages the agent to eventually try all policies. This bandit mechanism is very simple, as our purpose is to test whether some form of adaptive sampling can provide benefits; more sophisticated methods (e.g. Schaul et al., 2019) can yield further gains.
We report full results in Figure 7; we use β = 1 and tune η ∈ {0.1, 1, 2, 4, 6, 8}. We report results for the hyper-parameter that performed best overall, η = 8, though differences with η > 4 are marginal. While TDU does not impact performance negatively in general, in the one case where it does (Mountain Car), introducing a bandit to adapt exploration can largely recover performance. The bandit yields further gains in dense reward settings, such as in Cartpole and Catch, with an outlying exception in the bandit setting with scaled rewards."
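The episode-level UCB1 policy selection above can be sketched as follows (an illustrative sketch, keeping the running average V^k(n), the counts n(k), and η from Eq. 54; class and method names are ours):

```python
import math

class UCB1PolicySampler:
    """Pick which of the N + K policies to act with for the next episode."""

    def __init__(self, num_policies, eta):
        self.eta = eta
        self.value = [0.0] * num_policies   # running averages V^k(n)
        self.steps = [0] * num_policies     # per-policy step counts n(k)

    def select(self):
        n = sum(self.steps)
        # Try every policy at least once before applying the confidence bonus.
        for k, count in enumerate(self.steps):
            if count == 0:
                return k
        score = lambda k: self.value[k] + self.eta * math.sqrt(math.log(n) / self.steps[k])
        return max(range(len(self.value)), key=score)

    def update(self, k, rewards):
        # Fold the episode's per-step rewards into the running average V^k(n).
        for r in rewards:
            self.steps[k] += 1
            self.value[k] += (r - self.value[k]) / self.steps[k]
```

Selection is biased towards policies with a high average reward, while the √(log n / n(k)) bonus guarantees every policy keeps being tried occasionally.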
}, { "heading": "E ATARI WITH R2D2", "text": "" }, { "heading": "E.1 BOOTSTRAPPED R2D2", "text": "We augment the R2D2 agent with an ensemble of dueling action-value heads Qi. The behavior policy followed by the actors is an ε-greedy policy as before, but where the greedy action is determined according to a single Qi for a fixed length of time (100 actor steps in all of our experiments), before sampling a new Qi uniformly at random. The evaluation policy is also ε-greedy with ε = 0.001, where the Q-values are averaged only over the exploiter heads.
Each trajectory inserted into the replay buffer is associated with a binary mask indicating which Qi will be trained from this data, ensuring that the same mask is used every time the trajectory is sampled. Priorities are computed as in R2D2, except that TD-errors are now averaged over all heads.
Instead of using reward clipping, R2D2 estimates a transformed version of the state-action value function to make it easier to approximate for a neural network. One can define a transformed Bellman operator given any squashing function h : R → R that is monotonically increasing and invertible. We use the function h : R → R defined by
h(z) = sign(z)(√(|z| + 1) − 1) + εz, (55)
h⁻¹(z) = sign(z)(((√(1 + 4ε(|z| + 1 + ε)) − 1) / (2ε))² − 1), (56)
for ε small. In order to compute the TD errors accurately we need to account for the transformation,
δ(θ, s, a, r, s′) := γh⁻¹(Qθ(s′, π(s′))) + r − h⁻¹(Qθ(s, a)). (57)
Similarly, at evaluation time we need to apply h⁻¹ to the output of each head before averaging.
When making use of a prior we use the form Qk = Qkθ + λPk, where Pk is of the same architecture as the Qkθ network, but with the widths of all layers cut to reduce computational cost. Finally, instead of n-step returns we utilise Q(λ) (Peng & Williams, 1994), as was done by Guez et al. (2020). In all variants we used the hyper-parameters listed in Table 4."
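The value transform of Eqs. 55–56 and its inverse can be checked numerically (a sketch; ε = 10⁻³ is our choice for the small constant):

```python
import math

EPS = 1e-3  # our choice for the small constant eps in Eqs. 55-56

def _sign(z):
    return (z > 0) - (z < 0)

def h(z, eps=EPS):
    # Eq. 55: h(z) = sign(z)(sqrt(|z| + 1) - 1) + eps*z
    return _sign(z) * (math.sqrt(abs(z) + 1.0) - 1.0) + eps * z

def h_inv(z, eps=EPS):
    # Eq. 56: h^-1(z) = sign(z)(((sqrt(1 + 4*eps*(|z| + 1 + eps)) - 1) / (2*eps))^2 - 1)
    u = (math.sqrt(1.0 + 4.0 * eps * (abs(z) + 1.0 + eps)) - 1.0) / (2.0 * eps)
    return _sign(z) * (u * u - 1.0)
```

h compresses large magnitudes roughly like a square root while the inverse recovers them, which is what lets the agent learn from unclipped returns.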
}, { "heading": "E.2 PRE-PROCESSING", "text": "We used the standard pre-processing of frames received from the Arcade Learning Environment.6 See Table 5 for details." }, { "heading": "E.3 HYPER-PARAMETER SELECTION", "text": "In the distributed setting we have three TDU-specific hyper-parameters to tune, namely β, N and the prior weight λ. For our main results, we run each agent across 8 seeds for 20 billion steps. For ablations and hyper-parameter tuning, we ran agents across 3 seeds for 5 billion environment steps on a subset of 8 games: frostbite, gravitar, hero, montezuma_revenge, ms_pacman, seaquest, space_invaders, venture. This subset presents quite a bit of diversity, including dense-reward games as well as three hard exploration games: gravitar, montezuma_revenge and venture. To minimise the computational cost, we started by setting λ and N while maintaining β = 1. We employed a coarse grid of λ ∈ {0., 0.05, 0.1} and N ∈ {2, 3, 5}. Figure 8 summarises the results in terms of the mean Human Normalised Scores (HNS) across the set. We see that the performance depends on the type of games being evaluated. Specifically, hard exploration games achieve a significantly lower score. Performance does not significantly change with the number of explorers. The largest differences are observed for the exploration games when N = 5. We select the best-performing sets of hyper-parameters for TDU with and without additive priors: (N = 2, λ = 0.1) and (N = 5, λ = 0), respectively.
We evaluate the influence of the exploration bonus strength by fixing (N = 5, λ = 0) and choosing β ∈ {0.1, 1., 2.}. Figure 9 summarises the results. The set of dense-reward games is composed of the games in the ablation set that are not considered hard exploration games. We observe that larger values of β help on exploration but affect performance on dense reward games.
We jointly plot the performance in mean HNS when acting by averaging the Q-values over both the exploiter heads (solid lines) and the explorer heads (dotted lines). We can see that higher strengths of the exploration bonus (higher β) render the explorers "uninterested" in the extrinsic rewards, preventing them from converging to exploitative behaviours. This effect is less strong for the hard exploration games.
6Publicly available at https://github.com/mgbellemare/Arcade-Learning-Environment.
In Figure 10 we show how this effect manifests itself in the performance on three games: gravitar, space_invaders, and hero. This finding also applies to our evaluation using all 57 games in the Atari suite, as shown below. We conjecture that controlling the strength of the exploration bonus in a per-game manner would significantly improve the results. This finding is in line with observations made by Puigdomènech Badia et al. (2020); combining TDU with adaptive policy sampling (Schaul et al., 2019) or online hyper-parameter tuning (Xu et al., 2018; Zahavy et al., 2020) are exciting avenues for future research." }, { "heading": "E.4 DETAILED RESULTS: MAIN EXPERIMENT", "text": "In this section we provide more detailed results from our main experiment in Section 5.2. We concentrated our attention on the subset of games that are well known to pose challenging exploration problems (Machado et al., 2018): montezuma_revenge, pitfall, private_eye, solaris, venture, gravitar, and tennis. We also add a varied set of dense reward games.
Figure 11 shows the performance for each game. We can see that TDU always performs on par with or better than each of the baselines, leading to significant improvements in data efficiency and final score in games such as montezuma_revenge, private_eye, venture, gravitar, and tennis.
Gains in exploration games can be substantial, and in montezuma_revenge, private_eye, venture, and gravitar, TDU without prior functions achieves statistically significant improvements. TDU with prior functions achieves statistically significant improvements on montezuma_revenge, private_eye, and gravitar. Beyond this, both methods improve the rate of convergence on seaquest and tennis, and achieve a higher final mean score. Overall, TDU yields benefits across both dense reward and exploration games, as summarised in Figure 12. Note that R2D2's performance on dense reward games is deflated due to particularly low scores on space_invaders. Our results are in line with the original publication, where R2D2 does not show substantial improvements until after 35 Bn steps." }, { "heading": "E.5 FULL ATARI SUITE", "text": "In this section we report the performance on all 57 games of the Atari suite. In addition to the two configurations used to obtain the results presented in the main text (reported in Section 5.2), in this section we included a variant of each of them with a lower exploration bonus strength of β = 0.1. In all figures we refer to these variants by adding an L (for lower β) at the end of the name, e.g. TDU-R2D2-L. In Figure 13 we report a summary of the results in terms of mean HNS and median HNS for the suite, as well as mean HNS restricted to the hard exploration games only. We show the performance on each game in Figure 14. Reducing the value of β significantly improves the mean HNS without strongly degrading the performance on the games that are challenging from an exploration standpoint. The difference in performance in terms of mean HNS can be explained by looking at a few high-scoring games, for instance assault, asterix, demon_attack and gopher (see Figure 14). We can see that incorporating priors into TDU is not crucial for achieving high performance in the distributed setting." } ]
2020
null
SP:73630ddbe2f83647f099f921abb79b2c0f937aa9
[ "This work explores instance-wise layer re-ordering in transformers. The key idea is to incorporate classifiers that predict the ordering of sub-layers (self-attention, cross-attention, feed-forward) from the averaged input sequence representation, one classifier each for the encoder and the decoder. During training the model uses a soft Gumbel-noised output of the classifier to combine the outputs from stacks with differently ordered sub-layers. During inference the argmax of the classifier prediction is used to generate the output sequence. The model is trained with two auxiliary losses: (i) A loss to ensure the expected output of the classifiers is uniform and (ii) A loss to ensure the classifier output for each individual sample is distant from uniform.", "This paper studies the influence of the arrangement order for the internal structure in a single-layer Transformer (they named it as layer order) on the performance. It makes a hypothesis that different layer order has an impact on the performance of the model, and the hypothesis is verified by experiments. Based on this hypothesis, a lightweight layer order predictor is designed to predict an input-related layer order, and through reinforcement learning with two auxiliary loss, the model can not only be trained by diverse layer order, but also make unambiguous layer order prediction as far as possible. The IOT structure proposed in this paper has been evaluated on several datasets of machine translation, abstract summarization and code generation. Compared with the traditional transformer structure, it has been improved consistently, which shows the effectiveness of the proposed structure." ]
With sequentially stacked self-attention, (optional) encoder-decoder attention, and feed-forward layers, Transformer has achieved great success in natural language processing (NLP), and many variants have been proposed. Currently, almost all these models assume that the layer order is fixed and kept the same across data samples. We observe that different data samples actually favor different orders of the layers. Based on this observation, in this work, we break the assumption of the fixed layer order in Transformer and introduce instance-wise layer reordering into the model structure. Our Instance-wise Ordered Transformer (IOT) can model different functions with reordered layers, which enables each sample to select a better one to improve the model performance under the constraint of an almost unchanged number of parameters. To achieve this, we introduce a light predictor with negligible parameter and inference cost to decide the most capable and favorable layer order for any input sequence. Experiments on 3 tasks (neural machine translation, abstractive summarization, and code generation) and 9 datasets demonstrate consistent improvements of our method. We further show that our method can also be applied to other architectures beyond Transformer. Our code is released at GitHub1.
[ { "affiliations": [], "name": "TRANSFORMER STRUCTURES" }, { "affiliations": [], "name": "Jinhua Zhu" }, { "affiliations": [], "name": "Lijun Wu" }, { "affiliations": [], "name": "Yingce Xia" }, { "affiliations": [], "name": "Shufang Xie" }, { "affiliations": [], "name": "Tao Qin" }, { "affiliations": [], "name": "Wengang Zhou" }, { "affiliations": [], "name": "Houqiang Li" }, { "affiliations": [], "name": "Tie-Yan Liu" } ]
[ { "authors": [ "Karim Ahmed", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Weighted transformer network for machine translation", "venue": "arXiv preprint arXiv:1711.02132,", "year": 2017 }, { "authors": [ "Shiqi Shen Ayana", "Zhiyuan Liu", "Maosong Sun" ], "title": "Neural headline generation with minimum risk training", "venue": "arXiv preprint arXiv:1604.01904,", "year": 2016 }, { "authors": [ "Ankur Bapna", "Naveen Arivazhagan", "Orhan Firat" ], "title": "Controlling computation versus quality for neural sequence models", "venue": "arXiv preprint arXiv:2002.07106,", "year": 2020 }, { "authors": [ "Antonio Valerio Miceli Barone", "Rico Sennrich" ], "title": "A parallel corpus of python functions and documentation strings for automated code documentation and code generation", "venue": "arXiv preprint arXiv:1707.02275,", "year": 2017 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Deng Cai", "Wai Lam" ], "title": "Graph transformer for graph-to-sequence learning", "venue": "arXiv preprint arXiv:1911.07470,", "year": 2019 }, { "authors": [ "Nicolas Carion", "Francisco Massa", "Gabriel Synnaeve", "Nicolas Usunier", "Alexander Kirillov", "Sergey Zagoruyko" ], "title": "End-to-end object detection with transformers", "venue": "arXiv preprint arXiv:2005.12872,", "year": 2020 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime G Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of 
the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Maha Elbayad", "Jiatao Gu", "Edouard Grave", "Michael Auli" ], "title": "Depth-adaptive transformer", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Angela Fan", "Edouard Grave", "Armand Joulin" ], "title": "Reducing transformer depth on demand with structured dropout", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yang Fan", "Fei Tian", "Tao Qin", "Xiang-Yang Li", "Tie-Yan Liu" ], "title": "Learning to teach", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "José AR Fonollosa", "Noe Casas", "Marta R. Costa-jussà" ], "title": "
Joint source-target self attention with locality constraints", "venue": "arXiv preprint arXiv:1905.06596,", "year": 2019 }, { "authors": [ "Hany Hassan", "Anthony Aue", "Chang Chen", "Vishal Chowdhary", "Jonathan Clark", "Christian Federmann", "Xuedong Huang", "Marcin Junczys-Dowmunt", "William Lewis", "Mu Li" ], "title": "Achieving human parity on automatic chinese to english news translation", "venue": "arXiv preprint arXiv:1803.05567,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Xing Hu", "Ge Li", "Xin Xia", "David Lo", "Shuai Lu", "Zhi Jin" ], "title": "Summarizing source code with transferred api knowledge", "venue": null, "year": 2018 }, { "authors": [ "Zhiting Hu", "Bowen Tan", "Russ R Salakhutdinov", "Tom M Mitchell", "Eric P Xing" ], "title": "Learning data manipulation for augmentation and weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Guillaume Lample", "Alexis Conneau" ], "title": "Cross-lingual language model pretraining", 
"venue": "arXiv preprint arXiv:1901.07291,", "year": 2019 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Haoran Li", "Junnan Zhu", "Jiajun Zhang", "Chengqing Zong" ], "title": "Ensure the correctness of the summary: Incorporate entailment knowledge into abstractive sentence summarization", "venue": "In Proceedings of the 27th International Conference on Computational Linguistics,", "year": 2018 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Proceedings of Workshop on Text Summarization Branches Out, Post-Conference Workshop of ACL,", "year": 2004 }, { "authors": [ "Junyang Lin", "Xu Sun", "Shuming Ma", "Qi Su" ], "title": "Global encoding for abstractive summarization", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2018 }, { "authors": [ "Weijie Liu", "Peng Zhou", "Zhe Zhao", "Zhiruo Wang", "Haotang Deng", "Qi Ju" ], "title": "Fastbert: a self-distilling bert with adaptive inference time", "venue": "arXiv preprint arXiv:2004.02178,", "year": 2020 }, { "authors": [ "Yiping Lu", "Zhuohan Li", "Di He", "Zhiqing Sun", "Bin Dong", "Tao Qin", "Liwei Wang", "Tie-Yan Liu" ], "title": "Understanding and improving transformer from a multi-particle dynamic system point of view", "venue": null, "year": 2019 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael
Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Ofir Press", "Noah A Smith", "Omer Levy" ], "title": "Improving transformer models by reordering their sublayers", "venue": "arXiv preprint arXiv:1911.03864,", "year": 2019 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alexander M Rush", "SEAS Harvard", "Sumit Chopra", "Jason Weston" ], "title": "A neural attention model for sentence summarization", "venue": "In ACLWeb. Proceedings of the 2015 conference on empirical methods in natural language processing,", "year": 2017 }, { "authors": [ "Roy Schwartz", "Gabi Stanovsky", "Swabha Swayamdipta", "Jesse Dodge", "Noah A Smith" ], "title": "The right tool for the job: Matching model and instance complexities", "venue": "arXiv preprint arXiv:2004.07453,", "year": 2020 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "arXiv preprint arXiv:1508.07909,", "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Edinburgh neural machine translation systems for wmt 16", "venue": "arXiv preprint arXiv:1606.02891,", "year": 2016 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", "venue": null, "year": 2018 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "David So", "Quoc Le", "Chen Liang" ], "title": "The evolved transformer", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yao Wan", "Zhou Zhao", "Min Yang", "Guandong Xu", "Haochao Ying", "Jian Wu", "Philip S Yu" ], "title": "Improving automatic source code summarization via deep reinforcement learning", "venue": "In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering,", "year": 2018 }, { "authors": [ "Dilin Wang", "Chengyue Gong", "Qiang Liu" ], "title": "Improving neural language modeling via adversarial training", "venue": "In International Conference on Machine Learning,", 
"year": 2019 }, { "authors": [ "Li Wang", "Junlin Yao", "Yunzhe Tao", "Li Zhong", "Wei Liu", "Qiang Du" ], "title": "A reinforced topic-aware convolutional sequence-to-sequence model for abstractive text summarization", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Wenbo Wang", "Yang Gao", "He-Yan Huang", "Yuxiang Zhou" ], "title": "Concept pointer network for abstractive summarization", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP),", "year": 2019 }, { "authors": [ "Yiren Wang", "Yingce Xia", "Tianyu He", "Fei Tian", "Tao Qin", "Cheng Xiang Zhai", "Tie Yan Liu" ], "title": "Multiagent dual learning", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Bolin Wei", "Ge Li", "Xin Xia", "Zhiyi Fu", "Zhi Jin" ], "title": "Code generation as a dual task of code summarization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Ba" ], "title": "2014) optimizer with β1 = 0.9, β2 = 0.98 and = 10−9. The learning rate scheduler", "venue": null, "year": 2014 }, { "authors": [ "Szegedy" ], "title": "2016) is used with value 0.1", "venue": "As introduced,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Transformer (Vaswani et al., 2017) has been the dominant architecture in deep learning models (Hassan et al., 2018; Ng et al., 2019; Carion et al., 2020; Radford et al., 2019; Dai et al., 2019; Lee et al., 2019; Devlin et al., 2018; Yang et al., 2019; Cai & Lam, 2019). A Transformer model is stacked by several identical blocks, and each block consists of sequentially ordered layers: the self-attention (SA), encoder-decoder attention (ED) (decoder only) and feed-forward (FF) layer. Recently, various modifications have been proposed, where the focus is on replacing or inserting some components (e.g., attention layer/layer norm/position encoding) in standard Transformer (Wu et al., 2019; Lu et al., 2019; Shaw et al., 2018; So et al., 2019; Ahmed et al., 2017).\nDespite these Transformer alternatives have achieved improved performances, one critical element is almost neglected in current models, which is how to arrange the components within a Transformer network, i.e., the layer order also matters. As pointed by He et al. (2016b), different orders of ReLU, batch normalization and residual connection significantly affect the performance of ResNet (He et al., 2016a). Therefore, we ask: What if we reorder the sequential layers in Transformer (e.g., SA→FF or FF→SA of encoder, SA→FF→ED or FF→ED→SA of decoder)? What is the best order for these different layers?\n∗Equal contribution and corresponding authors. 1https://github.com/instance-wise-ordered-transformer/IOT\nWe first conduct preliminary experiments. We vary the three layers in decoder with all six variants (each with a unique order of the three layers) and train these models. Results on IWSLT14 German→English translation are reported in Table 1. As we can see, their performances are similar and no one is outstanding. The corpus BLEU variance is only 0.0045, which means that simply reordering the layers and training over the whole corpus impacts little. Press et al. 
(2019) also reported this for machine translation, but they stopped there.
This seems to be a negative answer. However, we take a further step and ask one more question: Do different data favor differently ordered layers? That is, we investigate whether each specific sample has its own preference for one particular order. Intuitively, forcing various data patterns into one fixed order should not be the best choice. For example, harder samples may favor a particular order while easier ones favor another. Thus, for each order, we count the ratio of samples that achieve the best score with that order. In Table 1, we find the ratios almost lie on a uniform distribution (e.g., 17.9% of samples achieve the best BLEU with order SA→ED→FF). Besides, we calculate the BLEU variance for each sample and average all these variances; the result is 114.76, which is much larger than the above corpus variance (0.0045). Both observations indicate that the data indeed has its own preference among different orders. In Table 2, we present translations from all decoders for one example, with BLEU and TER scores, as evidence.
Motivated by the above observations, in this work, we present Instance-wise Ordered Transformer (IOT), in which the layer order is determined by the specific data through instance-wise learning. To achieve this, we utilize a light predictor to predict the confidence of each order, using the corresponding cross-entropy losses as training signals. However, directly training the predictor with the conventional (i.e., NMT) loss tends to quickly converge to a bad order and ignore exploration of the others. Thus, we introduce an exploration loss and an exploitation loss to enable effective training while keeping an unambiguous prediction for each sample, so that the best order can be decided during inference.
We evaluate our approach on 3 sequence generation tasks, including neural machine translation (NMT), abstractive summarization (ABS) and code generation (CG).
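The contrast between the corpus-level variance and the averaged per-sample variance can be reproduced with a toy computation (the sentence-level scores below are hypothetical, only shaped to mimic the observed pattern, and are not the actual IWSLT numbers):

```python
from statistics import mean, pvariance

# Hypothetical sentence-level scores: rows = samples, columns = layer orders.
# Each sample peaks under a different order, as observed in Table 1.
scores = [
    [62.0, 30.0, 28.0],  # sample 1 favors order 1
    [29.0, 61.0, 31.0],  # sample 2 favors order 2
    [30.0, 28.0, 63.0],  # sample 3 favors order 3
]

# Corpus-level view: average over samples first; the per-order means are
# almost identical, so the spread across orders is tiny.
order_means = [mean(col) for col in zip(*scores)]
corpus_variance = pvariance(order_means)

# Instance-level view: spread across orders per sample, averaged over
# samples; much larger, since each sample has a clear favorite order.
per_sample_variance = mean(pvariance(row) for row in scores)

print(corpus_variance, per_sample_variance)
```

The first statistic is tiny while the second is large, mirroring the 0.0045 vs. 114.76 gap reported above.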
For NMT, we work on 8 IWSLT and 2 WMT tasks, covering both low-resource and rich-resource scenarios. Our method consistently obtains improvements of about 1.0 BLEU score over Transformer. For ABS, IOT also outperforms Transformer and other baselines on the Gigaword dataset. For CG tasks, the results on 2 large-scale real-world code datasets (Java and Python) collected from GitHub surpass the state-of-the-art performances. These all demonstrate the effectiveness of our IOT. Furthermore, we provide detailed studies to verify that the instance-wise learning and order selection constitute reasonable and necessary modeling.
The contributions of this work can be summarized as follows:
• We are the first to leverage instance-wise learning for layer order selection in a Transformer model (with shared parameters), and we demonstrate that the instance-wise learning is critical. • We demonstrate our learning approach can be universally applied to structures besides Transformer (e.g., Dynamic Convolutions), as long as there are multiple different layers. • Experiments on 3 sequence generation tasks and 9 datasets verify the effectiveness of IOT with consistent performance improvements." }, { "heading": "2 RELATED WORK", "text": "Architecture Exploration Inventing novel architectures by human design or automatic search plays an important role in deep learning. Specific to Transformer structures, various modifications have been proposed. For example, human-designed networks include DynamicConv (Wu et al., 2019), Macaron Network (Lu et al., 2019), Reformer (Kitaev et al., 2020) and others (Fonollosa et al., 2019; Ahmed et al., 2017; Shaw et al., 2018).
Figure 1: The IOT framework. Pred denotes the light predictor introduced in Section 3.1 for order selection. We show two ordered encoders/decoders here. After taking X1, X2, X3, the selected order for Y2 and Y3 is the lower encoder and upper decoder, while for Y1 it is the upper encoder and lower decoder.
As for automatic search, neural architecture search can discover networks with state-of-the-art performance, but often with complicated computation, e.g., the Evolved Transformer (So et al., 2019). The underlying principle is to add or replace some components of Transformer. For instance, Wu et al. (2019) replace self-attention with dynamic convolution, and So et al. (2019) add a separate convolution layer in a new branch. Different from them, we instead focus only on the selection of layer orders for each data sample so as to improve model performance, without heavy modification. Besides, our approach is structure agnostic and can be universally applied to other structures, as long as multiple different layers exist.
Instance-wise Learning Deep learning models are trained over large-scale datasets, and data samples are often treated equally without modeling the differences between them. Some works attempt to weight each sample with a different importance (Ren et al., 2018; Hu et al., 2019; Chang et al., 2017) or feed data with curriculum learning according to its difficulty (Bengio et al., 2009; Fan et al., 2018). However, they often explicitly manipulate the data during training only, while no distinction exists at inference, and all under one fixed model. Elbayad et al.
(2020) take a step further and propose the depth-adaptive Transformer, which can forecast different depths of the network by predicting the required computation for a particular sample. Similarly, Liu et al. (2020) propose a sample-wise adaptive mechanism to dynamically calculate the number of required layers. They both aim at reducing computation cost and speeding up inference. Schwartz et al. (2020), Bapna et al. (2020) and Shazeer et al. (2017) all leverage conditional computation for each sample to control the computation-accuracy tradeoff during inference. Instead, we focus on the varied modeling functions of different orders and perform instance-wise order selection in order to boost Transformer performance.
The most related work is Press et al. (2019), which manually generates randomly ordered Transformer encoders and finds that the Sandwich Transformer can slightly reduce the perplexity of language modeling. However, they find that the Sandwich Transformer pattern has no effect on the NMT task. Besides, it still operates over the whole corpus without considering each specific sample. We, instead, investigate various sequence-to-sequence generation tasks and greatly improve task performance through instance-wise learning, so as to discover the optimal ordered Transformer for each particular sample." }, { "heading": "3 INSTANCE-WISE ORDERED TRANSFORMER", "text": "The overall framework of IOT is presented in Figure 1. In comparison with the standard Transformer, IOT only incorporates lightweight predictors and reorders the encoder/decoder with weight tying, under the constraint of an almost identical number of parameters and without heavy modifications.
In this section, we introduce the details of IOT, including training, inference and discussions.
Notations Sequence-to-sequence learning aims to map one sequence $x = [x_1, x_2, ..., x_{T_x}]$ into another sequence $y = [y_1, y_2, ..., y_{T_y}]$, where $x_i$, $y_j$ denote the $i$-th and $j$-th tokens of $x$ and $y$, and $T_x$ and $T_y$ are the corresponding lengths. Given one sentence pair $(x, y)$ and a learning model $\mathcal{M}$, we can define the training objective as minimizing the cross-entropy loss $\mathcal{L}_{\mathcal{M}} = -\sum_{j=1}^{T_y} \log P(y_j | y_{<j}, x)$. Besides, $D_{KL}(P \| Q)$ denotes the Kullback-Leibler (KL) divergence between distributions $P$ and $Q$." }, { "heading": "3.1 INSTANCE-WISE ENCODER/DECODER", "text": "IOT intends to break the fixed order of layers in Transformer. As shown in the introduction, simply reordering the layers w.r.t. the whole corpus has little impact, while each sample has its own preference among orders. Therefore, IOT incorporates instance-wise learning to adjust the favorable order for each sample.
As shown in Figure 1, both the encoder and decoder in IOT consist of several blocks of SA, ED, FF layers with dynamic order, and we assume there are $M$ (e.g., $M = 2$) ordered encoders and $N$ (e.g., $N = 6$) ordered decoders (with shared weights). Inspired by the fact that a lower training loss implies higher proficiency of a candidate order, we utilize the cross-entropy loss as the signal to learn the confidence. That is, we calculate confidences $\gamma_m$ and $\lambda_n$ for each encoder $enc_m$ and decoder $dec_n$ (resulting model $\mathcal{M}_{m,n}$), and use them to weight the training loss $\mathcal{L}_{\mathcal{M}_{m,n}}$. To calculate the confidence, we add a simple and light predictor to help distinguish the orders.
Training Given one source sequence $x = [x_1, x_2, ..., x_{T_x}]$, we first map each token into a word embedding $e = [e_1, e_2, ..., e_{T_x}]$, where $e_i \in \mathbb{R}^d$, and then apply one light encoder predictor $\pi_{enc}$ to predict the confidence of encoder orders using the sentence embedding $s_e = \frac{1}{T_x}\sum_{i=1}^{T_x} e_i$. Concretely, $\pi_{enc}$ takes $s_e$ as input and predicts $\gamma_m$ for $enc_m$ by Gumbel-softmax (Jang et al., 2016):
Concretely, πenc takes se as input and predicts γm for encm by Gumbel-softmax (Jang et al., 2016):\nγm = exp ((log (πencm) + gm) /τe)∑M k=1 exp ((log (πenck) + gk) /τe) , πenc = softmax (seWe) , (1)\nwhere gm is sampled from Gumbel distribution: gm = − log(− logUm), Um ∼ Uniform(0, 1), We ∈ Rd×M is the weight matrix, τe is a constant temperature to control the distribution to be identical approximation with categorical distribution. Simultaneously, the token embeddings e will feed to the encoders to get hidden states h = [h1, h2, ..., hTx ], then we can calculate decoder order confidence λn by one predictor πdec in the same way as πenc:\nλn = exp ((log (πdecn) + gn) /τd)∑N k=1 exp ((log (πdeck) + gk) /τd) , πdec = softmax (sdWd) , (2)\nwhere sd = 1Tx ∑Tx\ni=1 hi and Wd is the weight matrix. For each ordered path through encm and decn, we can obtain the training loss LMm,n, and the final cross-entropy loss is weighted by confidence γm and λn with LMm,n, formulately as:\nLC = M∑\nm=1 N∑ n=1 (γm · λn)LMm,n. (3)\nInference During inference, we directly replace the Gumbel-softmax used in training with argmax, in order to choose the most capable encoder and decoder for each sequence x:\nenc = argmax (seWe) , dec = argmax (sdWd) . (4)\nDiscussion The decoding process is almost the same as standard Transformer, with only little overhead for order predictions. One may concern the training cost is increased through our training. As we present in Section 5.1, the cost is actually affordable with a fast convergence. Currently, we reorder the layers of the encoder/decoder block and stack the same ordered block L times (see Figure 1). A complex extension is to reorder all L blocks of encoder/decoder and we take it as future work." }, { "heading": "3.2 AUXILIARY LOSSES", "text": "As we can see, the predictors are trained in an unsupervised way, and we observe they lean to be lazy so that all samples quickly converge to one same order during training, without a senseful learning. 
Thus, to enable effective training and inference, we introduce exploration and exploitation losses.
(1) Exploration: first, we explore the diverse capability of all orders with the help of a loss $\mathcal{L}_D$ that encourages all orders to participate in training. The spirit is the same as encouraging exploration in reinforcement learning. The expected softmax probability $\mathbb{E}_x[\pi_x]$ (encoder/decoder) from the predictor should approximate the uniform distribution $Q = [\frac{1}{N}, \frac{1}{N}, \ldots, \frac{1}{N}]$ (e.g., for decoder orders), and we achieve this by minimizing the KL-divergence between the statistical average $\mathbb{E}_x[\pi_x]$ and $Q$:
$$\mathcal{L}_D = D_{KL}(Q \| \mathbb{E}_x[\pi_x]) = -\frac{1}{N}\sum_{n=1}^{N} \log(\mathbb{E}_x[(\pi_x)_n]) - \log N, \quad (5)$$
where $(\pi_x)_n$ is the probability of the $n$-th decoder order for sample $x$. For encoder orders, it is $(\pi_x)_m$, processed in the same way as for the decoder.
(2) Exploitation: different from $\mathcal{L}_D$, which keeps all orders effectively trained, at inference the output distribution $\pi_x$ for each sample should allow an unambiguous argmax selection. We therefore introduce another loss $\mathcal{L}_S$ to constrain each $\pi_x$ to be far away from the uniform distribution $Q$. Concretely, we maximize the KL-divergence between each probability $\pi_x$ and $Q$:
$$\mathcal{L}_S = -\mathbb{E}_x[D_{KL}(Q \| \pi_x)] = -\mathbb{E}_x\left[-\frac{1}{N}\sum_{n=1}^{N} \log(\pi_x)_n - \log N\right]. \quad (6)$$
Note that we clamp the value of the probability $\pi_x$ since the KL value is theoretically unbounded. With the above auxiliary losses, the final training objective is to minimize:
$$\mathcal{L} = \mathcal{L}_C + c_1 \mathcal{L}_D + c_2 \mathcal{L}_S, \quad (7)$$
where $c_1$ and $c_2$ are coefficients that trade off $\mathcal{L}_D$ and $\mathcal{L}_S$. In this way, we can achieve effective training while keeping the ability to distinguish the favorable order for each sample.
Discussion $\mathcal{L}_D$ and $\mathcal{L}_S$ aim to keep training effective and inference unambiguous. There are several alternatives. The first is to simply decay the temperature $\tau$ in Equations (1) and (2) and remove the auxiliary losses. However, we do not notice an obvious gain.
The second is to linearly decay $c_1$ only and remove $\mathcal{L}_S$, which fully trains all orders at the beginning and loosens this constraint gradually. We find this is also beneficial, but our two-loss method performs better." }, { "heading": "4 EXPERIMENTS", "text": "We conduct experiments on 3 sequence generation tasks: neural machine translation (both low-resource and rich-resource), code generation and abstractive summarization. The main settings of each experiment are introduced here, and more details can be found in Appendix A." }, { "heading": "4.1 DATASET", "text": "Neural Machine Translation For the low-resource scenario, we conduct experiments on IWSLT14 English↔German (En↔De), English↔Spanish (En↔Es), IWSLT17 English↔French (En↔Fr) and English↔Chinese (En↔Zh) translations. The training data includes 160k, 183k, 236k and 235k sentence pairs for each language pair respectively. For the rich-resource scenario, we work on WMT14 En→De and WMT16 Romanian→English (Ro→En) translations. For WMT14 En→De, we obtain 4.5M sentence pairs for training after filtering, and concatenate newstest2012 and newstest2013 as the dev set, with newstest2014 as the test set. For WMT16 Ro→En, we concatenate the 0.6M bilingual pairs and 2.0M back-translated pairs2 for training, and newsdev2016/newstest2016 serve as the dev/test sets.
Code Generation Code generation aims to map natural language sentences to programming language code. We work on one Java (Hu et al., 2018) and one Python dataset (Wan et al., 2018), following Wei et al. (2019) to process the two datasets. The Java dataset is collected from Java projects on GitHub, and the Python dataset is collected by Barone & Sennrich (2017). We split each dataset with ratio 0.8 : 0.1 : 0.1 into training, dev and test sets.
Abstractive Summarization Abstractive summarization is to summarize one long sentence into a short one.
The dataset we utilize is a widely acknowledged one: Gigaword summarization, which is constructed from a subset of the Gigaword corpus (Graff et al., 2003) and was first used by Rush et al. (2017). The training data consists of 3.8M article-headline pairs, while the dev and test sets consist of 190k and 2k pairs respectively." }, { "heading": "4.2 MODEL AND OPTIMIZATION", "text": "For IWSLT translation tasks, we use the transformer_iwslt_de_en setting as the model configuration. The number of blocks, embedding size and feed-forward network (FFN) size are 6, 512 and 1024. WMT tasks use the transformer_vaswani_wmt_en_de_big configuration, with 6 blocks, embedding size 1024 and FFN size 4096. Optimization and the learning rate scheduler follow the default settings in Vaswani et al. (2017). For code generation, the block number/embedding size/FFN size are 3, 256 and 1024 respectively. Other settings are the same as for NMT.
2http://data.statmt.org/rsennrich/wmt16_backtranslations/ro-en/.
For summarization, we take transformer_wmt_en_de, with 6 blocks, embedding size 512 and FFN size 2048. Dropout (Srivastava et al., 2014) is set to 0.3. Other settings are also the same as for the NMT task. The implementation is developed on Fairseq (Ott et al., 2019). We first grid search c1, c2 on the IWSLT14 De→En dev set, and then apply them to the other tasks. The best setting is c1 = 0.1, c2 = 0.01, and the importance study of c1, c2 is shown in Appendix B.1." }, { "heading": "4.3 EVALUATION", "text": "We use multi-bleu.perl to evaluate IWSLT14 En↔De and all WMT tasks for a fair comparison with previous works. For the other NMT tasks, we use sacre-bleu for evaluation. During inference, we follow Vaswani et al. (2017) to use beam size 4 and length penalty 0.6 for WMT14 En→De, and beam size 5 and penalty 1.0 for the other tasks.
For code generation, the evaluation is based on two metrics: sentence BLEU, which computes the n-gram precision of a candidate sequence against the reference, and the percentage of valid code (PoV), i.e., code that can be parsed into an abstract syntax tree (AST). As for summarization, the generated summaries are evaluated by ROUGE-1/2/L F1 scores (Lin, 2004)." }, { "heading": "4.4 MAIN RESULTS", "text": "Encoder/Decoder Orders Since the encoder block contains only SA and FF layers, the resulting maximum number of encoder layer orders M is 2, while for the decoder, the maximum number of order variants N is 6. Therefore, we first evaluate the utilization of encoder orders, decoder orders, and both on IWSLT14 De→En translation, in order to see the impact of different numbers of order candidates and their combinations. In Table 3 (a), we can see that 2 ordered encoders improve the result, and 6 ordered decoders achieve more gain. This meets our expectation, since the search space is limited when there are only 2 ordered encoders. However, if we train both encoder and decoder orders (e.g., M = 2, N = 6), the results (e.g., 35.30) cannot surpass using the 6 ordered decoders only (35.60). We suspect that the search space becomes too large, making training hard, and that decoder orders play a more important role than encoder orders for sequence generation. Therefore, we turn to investigate different decoder order candidates (refer to Appendix A.3 for detailed combinations) in Table 3 (b). Results show that N = 4, 5, 6 achieve similarly strong performances (results on other tasks/datasets are in Appendix A.4). Thus, considering efficiency and improvements, we utilize N = 4 ordered decoders (orders 1, 2, 4, 6 in Table 1) to reduce training cost in later experiments.
NMT Results BLEU scores on the 8 IWSLT low-resource tasks are shown in Table 4. As we can see, IOT achieves more than 1.0 BLEU point improvement on all tasks (e.g., 1.7 on Fr→En).
The consistent gains on various language pairs well demonstrate the generalization ability and effectiveness of our method. We then present a comparison with other works on the IWSLT14 De→En task in Table 5 (a), and IOT is also better than several human-designed networks. The results of WMT14 En→De and WMT16 Ro→En are reported in Table 6. We also compare with existing works, such as the unsupervised Ro→En based on a pre-trained cross-lingual language model (Lample & Conneau, 2019).
WMT14 En→De — Transformer*: 29.12; Shaw et al. (2018): 29.20; Ott et al. (2018): 29.30; Wu et al. (2019): 29.70; So et al. (2019): 29.80; IOT: 30.03.
WMT16 Ro→En — Transformer*: 37.73; Sennrich et al. (2016): 33.90; Lample & Conneau (2019): 38.50; IOT: 38.83.
Table 6: WMT14 En→De and WMT16 Ro→En translation results. * stands for our reproduced result.
Similarly, our method outperforms them, which shows that our framework can work well in the rich-resource scenario.
Code Generation Results The results are shown in Table 5 (b). We can observe that Transformer obtains better results than the LSTM-based work (Wei et al., 2019). Compared with Transformer, IOT can further improve the quality of the generated code. Specifically, IOT boosts Transformer by 0.93 BLEU/2.86% PoV on Java generation and 0.75 BLEU/1.25% PoV on Python respectively. Again, these results well demonstrate the effectiveness of our method.
Abstractive Summarization Results The IOT performances on the summarization task are shown in Table 7. From the results, we can see that IOT achieves gains of 0.8, 0.7 and 1.0 in the ROUGE-1, ROUGE-2 and ROUGE-L metrics over the standard Transformer on Gigaword summarization. IOT also surpasses other works such as the reinforcement learning based method (Wang et al., 2018), which again verifies that our approach is simple yet effective." }, { "heading": "5 STUDY AND ANALYSIS", "text": "" }, { "heading": "5.1 INFERENCE/TRAINING COST", "text": "As discussed before, our approach adds only a negligible number of parameters and negligible inference-time cost.
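A back-of-the-envelope check makes the parameter claim concrete: each predictor is a single linear layer over the mean embedding, so the overhead is M or N times the hidden size (the ~37M total below is only an assumed ballpark figure for a transformer_iwslt_de_en-sized model, used for scale):

```python
# Extra predictor parameters: one d x M (encoder predictor) and one
# d x N (decoder predictor) linear layer on top of the mean embeddings.
hidden = 512                 # transformer_iwslt_de_en embedding size
M, N = 2, 4                  # encoder / decoder order candidates
extra = M * hidden + N * hidden
total = 37_000_000           # assumed ballpark IWSLT model size, for scale only
print(extra, extra / total)
```

A few thousand extra weights against tens of millions: well under 0.01% of the model.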
Here we compare the detailed inference time and model size of our framework with those of the standard Transformer. The detailed parameter numbers and inference time on the IWSLT14 En↔De test sets are shown in Table 8. Since we only add one linear layer and a softmax layer as the predictor, the number of extra parameters is M × hidden size (encoder predictor) or N × hidden size (decoder predictor), which is negligible compared to the other model parameters. Therefore, IOT introduces more model diversity and improves performance, but under the constraint of almost the same number of parameters. As for the inference time, the only difference comes from the one-pass order prediction, whose cost is extremely low compared with the heavy autoregressive generation process, as can be seen from Table 8.
Apart from the inference cost, one may be concerned about the training cost, since IOT trains multiple orders in one model. To see the influence, we provide several statistics here. Specifically, on the four IWSLT En→X translation tasks, we analyze the cost by counting the training time of each epoch, the number of epochs until the model converges, and the corresponding total training time. The numbers are presented in Table 9, and we can make several observations. Take IWSLT14 En→De translation as an example: (1) jointly optimizing different orders indeed introduces more training cost per epoch. The Transformer baseline costs 277.1s of training per epoch, while our IOT costs 475.6s and 685.4s with N = 2 and N = 3 orders respectively; the increased cost ratios are about 1.72× and 2.47× (but less than 2.0× and 3.0×). (2) However, we find that with the parameters shared between these orders, model convergence also becomes faster. Transformer needs 67 epochs to converge, while our IOT only needs 42 (0.63×) and 39 (0.58×) epochs for N = 2 and N = 3 orders, much fewer than Transformer. (3) The total training cost is actually not increased much.
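The quoted ratios follow directly from the per-epoch times and epoch counts (numbers taken from the text; the dictionary labels are just for readability):

```python
# Per-epoch training time (seconds) and epochs-to-convergence on
# IWSLT14 En->De, as reported for Table 9.
per_epoch = {"Transformer": 277.1, "IOT N=2": 475.6, "IOT N=3": 685.4}
epochs = {"Transformer": 67, "IOT N=2": 42, "IOT N=3": 39}

base_total = per_epoch["Transformer"] * epochs["Transformer"]
for name in ("IOT N=2", "IOT N=3"):
    epoch_ratio = per_epoch[name] / per_epoch["Transformer"]
    total_ratio = per_epoch[name] * epochs[name] / base_total
    print(name, round(epoch_ratio, 2), round(total_ratio, 2))
```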
IOT (N = 2) and IOT (N = 3) take about 1.08× and 1.44× the training time of the Transformer baseline (the ratio for IOT (N = 3) is only 1.10 on IWSLT17 En→Es). From these observations, we can see that the increased training cost is affordable due to the fast convergence." }, { "heading": "5.2 CASE VERIFICATION", "text": "We perform a study with N = 3 to verify that IOT makes a necessary instance-wise order selection. We first split the IWSLT14 En↔De dev sets into 3 subsets according to the prediction of πdec, and then we decode each subset using all 3 ordered decoders and report the BLEU results. As shown in Figure 2, each subset indeed achieves the best score with the corresponding predicted order (outperforming the other orders by 0.2-0.4 BLEU). We also do the same study on the test set, and the predicted order outperforms the others by 0.7-0.8 BLEU. These results show that IOT makes reasonable predictions.
Besides, we find that the predicted orders correlate with different sentence difficulties. In our case, the set 1 sentences belonging to decoder 1 achieve higher BLEU than the other sets, which means set 1 is relatively simple to translate, and vice versa for samples in set 2. This implies that sentences of different difficulty have different structure preferences. We provide statistics and examples in Appendix B.2." }, { "heading": "5.3 APPLY ON ANOTHER STRUCTURE (DYNAMICCONV)", "text": "As we discussed, our instance-wise layer reordering is structure agnostic. In this subsection, we evaluate this by applying our approach to the DynamicConv network (Wu et al., 2019) beyond the standard Transformer, which replaces self-attention with dynamic convolution. We train layer-ordered DynamicConv with N = 2, 3, 4 decoders and test the performances. The BLEU score of standard DynamicConv is 35.20, and with our instance-wise order learning, we achieve 35.60, 35.82 and 35.87 for N = 2, 3, 4 ordered decoders respectively (nearly a 0.7 point gain).
Therefore, this study verifies our claim that our approach can be applied to other structures, as long as multiple different layers exist.
5.4 DISCUSSIONS
Ensemble Since our framework involves multiple orders (with shared parameters), which resembles an ensemble, we make a comparison with ensembling. The ensemble method trains multiple models with different parameters separately in an independent way, while our work trains the orders jointly, with the intention of making them more diverse. More importantly, from the view of time and memory cost, the ensemble framework increases both by N times, which is totally different from ours. In this sense, our method can be combined with ensembling to further boost performance. The comparative results on the IWSLT14 En↔De test sets are shown in Table 10. We can clearly conclude that IOT and ensembling are complementary to each other.
Regularization IOT consists of differently ordered blocks with weight tying, which may look like a form of parameter regularization to some extent. However, we show that IOT is more than regularization and can be complementary to other regularization methods. Setting (1): We first train a Transformer model on the IWSLT14 De→En task with all decoder orders sharing parameters, but without instance-wise learning, and test the performance with each order. We find the BLEU scores on the test set are near 34.80 for each order, much worse than IOT, which means that simply regularizing the shared parameters across different orders is not the main contributor to the performance improvement, and our instance-wise learning is critical. Setting (2): In another experiment, we train Transformer with LayerDrop (Fan et al., 2019), a dropout technique to regularize the layer parameters. The test BLEU is 35.40, which achieves about a 0.8 score improvement over Transformer. After applying IOT with LayerDrop, we obtain further gains over IOT only (35.62), reaching a BLEU score of 36.13.
Therefore, this demonstrates that IOT is not merely regularization and can be smoothly integrated with other regularization methods. More details and experiments on other tasks are shown in Appendix B.3." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose the Instance-wise Ordered Transformer, which leverages instance-wise learning to reorder the layers in Transformer for each sample. Compared with the standard Transformer, IOT only introduces a slightly increased time cost. Experiments on 3 sequence generation tasks and 9 datasets demonstrate the effectiveness of IOT. We also verify that our approach can be universally applied to other structures, such as DynamicConv. In the future, we plan to work on more complicated reordering within each block, as well as other tasks such as multilingual translation and text classification." }, { "heading": "A EXPERIMENTAL SETTINGS AND MORE RESULTS", "text": "" }, { "heading": "A.1 DETAILED DATA SETTINGS", "text": "Neural Machine Translation Following common practice (Ott et al., 2019), we lowercase all words for IWSLT14 En↔De. For IWSLT14 En↔De, En↔Es and IWSLT17 En↔Fr, we use a joint source and target vocabulary with 10k byte-pair-encoding (BPE) (Sennrich et al., 2015) operations, and for IWSLT17 En↔Zh, we use separate source and target vocabularies. For all WMT tasks, sentences are encoded by a joint source and target vocabulary of 32k tokens.
Code Generation In the Java dataset, the numbers of training, validation and test sequences are 69,708, 8,714 and 8,714 respectively, and the corresponding numbers for Python are 55,538, 18,505 and 18,502. All samples are tokenized. We use the downloaded Java dataset without further processing, and use Python's standard AST module to further process the Python code. The source and target vocabulary sizes for natural language to Java code generation are 27k and 50k, and those for natural language to Python code generation are 18k and 50k. In this case, following Wei et al.
(2019), we do not apply subword tokenization like BPE to the sequences.
Abstractive Summarization The Gigaword corpus represents a headline generation task: each source article contains about 31.4 tokens on average, while the target headline contains about 8.3 tokens per sentence. The training data consists of 3.8M article-headline pairs, while the validation and test sets consist of 190k and 2k pairs respectively. We preprocess the dataset in the same way as for the NMT task. The words in the source article and target headline are concatenated to make a joint BPE vocabulary. After preprocessing, there are 29k subword tokens in the vocabulary." }, { "heading": "A.2 DETAILED MODEL/TRAINING CONFIGURATIONS", "text": "Model Configuration The detailed model configurations are as follows:
• transformer iwslt de en setting: 6 blocks in encoder and decoder, embedding size 512, feed-forward size 1024, attention heads 4, dropout value 0.3, weight decay 0.0001.
• transformer vaswani wmt en de big setting: 6 blocks in encoder and decoder, embedding size 1024, feed-forward size 4096, attention heads 16, dropout value 0.3, attention dropout 0.1, relu dropout 0.1.
• transformer wmt en de big setting: 6 blocks in encoder and decoder, embedding size 1024, feed-forward size 4096, attention heads 16, dropout value 0.3.
Optimization We adopt the default optimization setting in Vaswani et al. (2017): the Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.98 and ε = 10−9. The learning rate scheduler is inverse sqrt with 4,000 warmup steps; the default learning rate is 0.0005. Label smoothing (Szegedy et al., 2016) is used with value 0.1. As introduced, to learn the predictors, we clamp the softmax output with value 0.05." }, { "heading": "A.3 RESULTS OF ORDER COMBINATIONS", "text": "We show in the paper that different numbers of orders (e.g., N = 4 or N = 5) yield varied performances. Therefore, one necessary point is the different combinations of these N decoders.
Here, we work on the N = 5 IOT model to show the results of different order candidates.
We first present each ordered decoder in Table 11 again (same as in Table 1). For the N = 5 ordered decoders with the IOT model, we show the performances of 5 combined orders selected from all six variants on the dev sets of the IWSLT14 De→En and En→De translations. The results are reported in Table 12. We can see that the different combinations achieve similarly strong performances, which shows that our approach is robust towards different order combinations. This also demonstrates that what matters in IOT is the diversity among order candidates, which helps each data sample distinguish between them. For other numbers N of ordered decoders, the patterns are similar. Therefore, here we only report the N combinations used for the IOT experiments in the paper as follows: IOT (N = 2) combines orders 4 and 6 (ED→SA→FF and FF→ED→SA), IOT (N = 3) orders 1, 4, 6, IOT (N = 4) orders 1, 2, 4, 6, and IOT (N = 5) orders 1, 2, 4, 5, 6." }, { "heading": "A.4 RESULTS OF DIFFERENT NUMBER OF DECODERS", "text": "The results of N = 4 ordered decoders (orders 1, 2, 4, 6) are mainly reported in the paper. Here, we also show results of other N decoders for all tasks, along with the Transformer baseline.
The results of different N decoders for the WMT14 En→De and WMT16 Ro→En translations, the code generation task, and Gigaword summarization are reported in Table 14, Table 15 and Table 16 respectively. As we can see, more ordered decoders can bring better performance, which supports the effectiveness of our framework and demonstrates that the data has its own preference for different orders. Considering efficiency, we do not perform experiments with more than 4 decoders for these tasks."
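The six order variants referred to above are simply the 3! permutations of the three decoder sub-layers (self-attention SA, encoder-decoder attention ED, feed-forward FF); a quick sketch:

```python
from itertools import permutations

# The three sub-layers of a Transformer decoder block:
# self-attention (SA), encoder-decoder attention (ED), feed-forward (FF).
# Every order candidate is one of the 3! = 6 permutations of these layers.
orders = list(permutations(["SA", "ED", "FF"]))
print(len(orders))  # 6
```

The paper's order 4 (ED→SA→FF) and order 6 (FF→ED→SA) are two of these six permutations.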
}, { "heading": "Model WMT14 En→De WMT16 Ro→En", "text": "" }, { "heading": "B MORE STUDIES", "text": "B.1 IMPACT OF WEIGHTED AUXILIARY LOSSES\nWe conduct another study on IWSLT14 De→En dev set to investigate the impact of our proposed auxiliary losses controlled by weight c1 and c2. The values of c1 and c2 are varied between [0.0, 0.05, 0.1, 0.5] and [0.0, 0.005, 0.01, 0.05] respectively, and the results are presented in Table 17. It can be seen that the best configuration is c1 = 0.1 and c2 = 0.01. Therefore, we report the leading results in the paper with c1 = 0.1, c2 = 0.01. The results also clearly demonstrate that the two additional losses are necessary to make our framework effective." }, { "heading": "B.2 DATA EXAMPLES VERIFICATION", "text": "As discussed in Section 5.2, the data split by the corresponding predicted order is in different pattern. For example, the difficulty of each set is different. We therefore analyze the split data and calculate some statistics among these subsets. Specifically, we first count the sentence number S, the tokens T , and the distinct vocabulary Di in each subset. We show these numbers in Table 18, along with corresponding averaged BLEU score (see Figure 2). We can see that the vocabulary size of set 1 is the smallest, and set 2 is the largest, which means there are more distinct words in set 2. This leads the generation of set 2 to be harder than set 1, which maps the BLEU score ranking among these sets. Besides, we also calculate the token frequency fij for token j in each own subset i, and sum the frequency of top 20 tokens in each subset, Fi = ∑20 j=1 fij , to give another evidence. 
The results also show that F1 is the highest, which means set 1 contains mostly frequent words, making it easier to learn, while set 2 is harder since F2 is small.
We further take a look at the data and find that the sentences in set 1 are mostly “simple sentences”, and set 2 contains many “emphatic sentences”, while set 3 is somewhat mixed. In Table 19, we provide some sentence examples belonging to each subset for further clarification." }, { "heading": "B.3 REGULARIZATION", "text": "In Section 5.4, we have provided an example of regularization experiments on IWSLT14 De→En translation, which demonstrates that our IOT is more than regularization and can be smoothly integrated with other regularization methods. To give more evidence and details, we extend the regularization experiments to all IWSLT translation tasks (IWSLT14 En↔De, IWSLT14 En↔Es, IWSLT17 En↔Zh, IWSLT17 En↔Fr) and WMT16 Ro→En translation. The two specific settings of the experiments are as follows. Setting (1): The first experiment is “ordered Transformer” without instance awareness. That is, all the reordered architectures are trained on the same full corpus with equal weights, and the parameters for these reordered architectures are shared. More specifically, the decoder block has different ways to order the SA, ED, and FF layers (e.g., FF→SA→ED, SA→ED→FF, etc.), but the parameters for the reordered blocks are shared. Mathematically, the loss function is LC = ∑_{n=1}^{N} λn · LM^n, where LM^n is the model loss function for the n-th ordered decoder. Compared with Eqn (3), the weight λn is fixed to 1 here. At inference, we first find the best order according to the dev performance and apply it on the test set. We cannot use an instance-wise reordered model in this setting, while our proposed IOT can. The experiments are conducted with the transformer iwslt de en configuration for the IWSLT translations, and the transformer vaswani wmt en de big configuration for WMT16 Ro→En translation.
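A minimal sketch of the combined loss of Setting (1), written as a weighted sum of per-order losses; in Setting (1) all weights λn are fixed to 1, whereas IOT would supply instance-wise weights from the predictor (the loss values here are placeholder numbers, not actual model losses):

```python
def combined_loss(per_order_losses, weights=None):
    """L_C = sum_n lambda_n * L_M^n over the N ordered decoders.

    In Setting (1) every lambda_n is fixed to 1; in IOT the weights
    come from the (clamped) softmax output of the instance-wise predictor."""
    if weights is None:
        weights = [1.0] * len(per_order_losses)
    return sum(w * l for w, l in zip(weights, per_order_losses))
```

For example, `combined_loss([1.0, 2.0, 3.0])` sums the three per-order losses with equal weight.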
Setting (2): We integrate another regularization technique, ‘LayerDrop’ (Fan et al., 2019), into both the Transformer baseline and our IOT (N = 4) method, while other settings remain unchanged. The study results of these two settings are presented in Table 20.
From the results, we draw the same conclusions as discussed in Section 5.4. Simply sharing the parameters of different decoders as a regularization cannot boost the model performance (“Transformer + (1)” in Table 20), while our IOT can further improve the performance with other regularization methods." }, { "heading": "B.4 ROBUSTNESS", "text": "A further benefit of IOT training, besides the performance gain, is that the model can be more robust than one trained with a single order only. In Table 21, we provide one example to demonstrate this robustness. We train one Transformer model with decoder order 1 and decode the sentences with all orders at inference. Obviously, only decoding with order 1 leads to good performance, while the other orders cannot achieve reasonable scores, since the layer order is changed and the feature extraction becomes incorrect. As for IOT, the generated sequences remain stable, with high scores for each order.
B.5 VISUALIZATION
To better understand the difference between IOT and the standard Transformer, we investigate the training process and provide visualization results about model optimization and performance improvements. Specifically, we plot the curves of training loss and validation loss, as well as the validation BLEU score and test BLEU score, along the training epochs on the IWSLT14 De→En translation dataset. The loss curves are visualized in Figure 3, and the BLEU curves are presented in Figure 4.
From the validation loss curves of Figure 3(b) and 3(c), we can first see that our IOT (N = 3) training converges faster than the Transformer baseline, showing the advantage of IOT, which is consistent with our analysis in Section 5.1.
The converged (smallest) validation loss value seems to be similar to that of the Transformer baseline, but please note that the loss computation of IOT is different from that of the Transformer baseline. As shown in Eqn (3), the loss function of IOT is a weighted sum of the loss values for each order, while for Transformer, it is the loss of a single order only. Therefore, when we turn to the comparison of validation BLEU scores, the superiority of our IOT can be clearly verified. From the BLEU score curves in Figure 4, it is obvious that IOT achieves better BLEU scores than the standard Transformer throughout the training epochs, on both validation and test sets. These visualized results clearly demonstrate the effectiveness of our IOT approach." } ]
2021
null
SP:fa852f6d762a09e601ec0d78694c23155548b214
[ "Although a mesh embedded in 3D space may be treated as a graph, a graph convolution network uses the same weights for each neighbor and is thus permutation invariant, which is the incorrect inductive bias for a mesh: the neighbors of a node are spatially related and may not be arbitrarily permuted. CNNs, GCNs, and G-CNNs demonstrate the value of a weight sharing scheme which correctly reflects the symmetry of the underlying space of the data. The authors argue convincingly that for a signal on a mesh, the appropriate bias is symmetry to local change-of-gauge. In short, the weights should depend on the relative orientation of a node’s neighbors. They design a network GEM-CNN which is equivariant to change of gauge. The design is similar to a GCN but incorporates parallel transport to account for underlying geometry and uses kernels similar to those of $SO(2)$-equivariant $E(2)$-CNN (Weiler & Cesa 2019). The experiments show the network is able to adapt to different mesh geometries and obtain very high accuracy in the shape correspondence task. ", "The work presents a novel message passing GNN operator for meshes that is equivariant under gauge transformations. It achieves that by parallel transporting features along edges and spanning a space of gauge equivariant kernels. Further, a DFT-based non-linearity is proposed, which preserves the equivariance in the limit of sampling density. The method is evaluated on an MNIST toy experiment and the Faust shape correspondence task." ]
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
[ { "affiliations": [], "name": "Pim de Haan" }, { "affiliations": [], "name": "Maurice Weiler" } ]
[ { "authors": [ "E.J. Bekkers", "M.W. Lafarge", "M. Veta", "K.A. Eppenhof", "J.P. Pluim", "R. Duits" ], "title": "Roto-translation covariant convolutional networks for medical image analysis", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI),", "year": 2018 }, { "authors": [ "F. Bogo", "J. Romero", "M. Loper", "Black", "M.J. Faust" ], "title": "Dataset and evaluation for 3d mesh registration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "D. Boscaini", "J. Masci", "E. Rodolà", "M.M. Bronstein" ], "title": "Learning shape correspondence with anisotropic convolutional neural networks", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "G. Bouritsas", "S. Bokhnyak", "S. Ploumpis", "M. Bronstein", "S. Zafeiriou" ], "title": "Neural 3d morphable models: Spiral convolutional networks for 3d shape representation learning and generation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "M.M. Bronstein", "J. Bruna", "Y. LeCun", "A. Szlam", "P. Vandergheynst" ], "title": "Geometric deep learning: Going beyond Euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "T. Cohen", "M. Welling" ], "title": "Group equivariant convolutional networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "T.S. Cohen", "M. Geiger", "M. Weiler" ], "title": "A general theory of equivariant CNNs on homogeneous spaces", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "T.S. Cohen", "M. Weiler", "B. Kicanaoglu", "M. Welling" ], "title": "Gauge equivariant convolutional networks and the Icosahedral CNN", "venue": null, "year": 2019 }, { "authors": [ "K. Crane", "M. Desbrun", "P. 
Schröder" ], "title": "Trivial connections on discrete surfaces", "venue": "Computer Graphics Forum (SGP),", "year": 2010 }, { "authors": [ "K. Crane", "F. de Goes", "M. Desbrun", "P. Schröder" ], "title": "Digital geometry processing with discrete exterior calculus", "venue": "In ACM SIGGRAPH 2013 courses,", "year": 2013 }, { "authors": [ "G. Cucurull", "K. Wagstyl", "A. Casanova", "P. Veličković", "E. Jakobsen", "M. Drozdzal", "A. Romero", "A. Evans", "Y. Bengio" ], "title": "Convolutional neural networks for mesh-based parcellation of the cerebral cortex", "venue": null, "year": 2018 }, { "authors": [ "M. Defferrard", "X. Bresson", "P. Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "J. Gallier", "J. Quaintance" ], "title": "Differential Geometry and Lie Groups: A Computational Perspective, volume 12", "venue": null, "year": 2020 }, { "authors": [ "S. Gong", "L. Chen", "M. Bronstein", "S. Zafeiriou" ], "title": "Spiralnet++: A fast and highly efficient mesh convolution operator", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "R. Hanocka", "N. Fish", "Z. Wang", "R. Giryes", "S. Fleishman", "D. Cohen-Or" ], "title": "Alignet: Partial-shape agnostic alignment via unsupervised learning", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "T.N. Kipf", "M. Welling" ], "title": "Semi-Supervised Classification with Graph Convolutional Networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "R. Kondor", "Z. Lin", "S. Trivedi" ], "title": "Clebsch-gordan nets: a fully fourier space spherical convolutional neural network", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Lai", "Y.-K", "M. Jin", "X. Xie", "Y. He", "J. Palacios", "E. Zhang", "Hu", "S.-M", "X. 
Gu" ], "title": "Metric-driven rosy field design and remeshing", "venue": "IEEE Transactions on Visualization and Computer Graphics,", "year": 2009 }, { "authors": [ "L. Lang", "M. Weiler" ], "title": "A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels", "venue": "arXiv preprint arXiv:2010.10952,", "year": 2020 }, { "authors": [ "H. Maron", "M. Galun", "N. Aigerman", "M. Trope", "N. Dym", "E. Yumer", "V.G. Kim", "Y. Lipman" ], "title": "Convolutional neural networks on surfaces via seamless toric covers", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "J. Masci", "D. Boscaini", "M.M. Bronstein", "P. Vandergheynst" ], "title": "Geodesic convolutional neural networks on riemannian manifolds", "venue": "ICCVW,", "year": 2015 }, { "authors": [ "F. Monti", "D. Boscaini", "J. Masci", "E. Rodolà", "J. Svoboda", "M.M. Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "CoRR, abs/1611.08402,", "year": 2016 }, { "authors": [ "N. Perraudin", "M. Defferrard", "T. Kacprzak", "R. Sgier" ], "title": "Deepsphere: Efficient spherical convolutional neural network with healpix sampling for cosmological applications", "venue": "Astronomy and Computing,", "year": 2019 }, { "authors": [ "A. Poulenard", "M. Ovsjanikov" ], "title": "Multi-directional geodesic neural networks via equivariant convolution", "venue": "ACM Transactions on Graphics,", "year": 2018 }, { "authors": [ "C.R. Qi", "H. Su", "K. Mo", "L.J. Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "C.R. Qi", "L. Yi", "H. Su", "L.J. Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "G. Riegler", "A. Osman Ulusoy", "A. 
Geiger" ], "title": "Octnet: Learning deep 3d representations at high resolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "S.C. Schonsheck", "B. Dong", "R. Lai" ], "title": "Parallel Transport Convolution: A New Tool for Convolutional Neural Networks on Manifolds", "venue": null, "year": 2018 }, { "authors": [ "Serre", "J.-P" ], "title": "Linear representations of finite groups", "venue": null, "year": 1977 }, { "authors": [ "I. Sosnovik", "M. Szmaja", "A. Smeulders" ], "title": "Scale-equivariant steerable networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Z. Sun", "E. Rooke", "J. Charton", "Y. He", "J. Lu", "S. Baek" ], "title": "Zernet: Convolutional neural networks on arbitrary surfaces via zernike local tangent space estimation", "venue": "arXiv preprint arXiv:1812.01082,", "year": 2018 }, { "authors": [ "L. Tchapmi", "C. Choy", "I. Armeni", "J. Gwak", "S. Savarese" ], "title": "Segcloud: Semantic segmentation of 3d point clouds", "venue": "In 2017 international conference on 3D vision (3DV),", "year": 2017 }, { "authors": [ "N. Thomas", "T. Smidt", "S. Kearnes", "L. Yang", "L. Li", "K. Kohlhoff", "P. Riley" ], "title": "Tensor Field Networks: Rotation- and Translation-Equivariant Neural Networks for 3D Point Clouds", "venue": null, "year": 2018 }, { "authors": [ "F. Tombari", "S. Salti", "L. Di Stefano" ], "title": "Unique signatures of histograms for local surface description", "venue": "In European conference on computer vision,", "year": 2010 }, { "authors": [ "L.W. Tu" ], "title": "Differential geometry: connections, curvature, and characteristic classes, volume 275", "venue": null, "year": 2017 }, { "authors": [ "N. Verma", "E. Boyer", "J. 
Verbeek" ], "title": "Feastnet: Feature-steered graph convolutions for 3d shape analysis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "M. Weiler", "G. Cesa" ], "title": "General E(2)-equivariant steerable CNNs", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "M. Weiler", "M. Geiger", "M. Welling", "W. Boomsma", "T. Cohen" ], "title": "3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "M. Weiler", "F.A. Hamprecht", "M. Storath" ], "title": "Learning steerable filters for rotation equivariant CNNs", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "R. Wiersma", "E. Eisemann", "K. Hildebrandt" ], "title": "CNNs on Surfaces using Rotation-Equivariant Features", "venue": "Transactions on Graphics,", "year": 2020 }, { "authors": [ "M. Winkels", "T.S. Cohen" ], "title": "3D G-CNNs for pulmonary nodule detection", "venue": "In Conference on Medical Imaging with Deep Learning (MIDL),", "year": 2018 }, { "authors": [ "D. Worrall", "M. Welling" ], "title": "Deep scale-spaces: Equivariance over scale", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "D.E. Worrall", "G.J. Brostow" ], "title": "Cubenet: Equivariance to 3D rotation and translation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "D.E. Worrall", "S.J. Garbin", "D. Turmukhambetov", "G.J. Brostow" ], "title": "Harmonic Networks: Deep Translation and Rotation Equivariance", "venue": null, "year": 2017 }, { "authors": [ "Z. Wu", "S. Song", "A. Khosla", "F. Yu", "L. Zhang", "X. Tang", "J. 
Xiao" ], "title": "3d shapenets: A deep representation for volumetric shapes", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Y. Zhao", "T. Birdal", "J.E. Lenssen", "E. Menegatti", "L. Guibas", "F. Tombari" ], "title": "Quaternion equivariant capsule networks for 3d point clouds", "venue": "arXiv preprint arXiv:1912.12098,", "year": 2019 } ]
[ { "heading": null, "text": "A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods." }, { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks (CNNs) have been established as the default method for many machine learning tasks like speech recognition or planar and volumetric image classification and segmentation. Most CNNs are restricted to flat or spherical geometries, where convolutions are easily defined and optimized implementations are available. The empirical success of CNNs on such spaces has generated interest to generalize convolutions to more general spaces like graphs or Riemannian manifolds, creating a field now known as geometric deep learning (Bronstein et al., 2017).\nA case of specific interest is convolution on meshes, the discrete analog of 2-dimensional embedded Riemannian manifolds. Mesh CNNs can be applied to tasks such as detecting shapes, registering different poses of the same shape and shape segmentation. If we forget the positions of vertices, and which vertices form faces, a mesh M can be represented by a graph G. This allows for the application of graph convolutional networks (GCNs) to processing signals on meshes.\n∗Equal Contribution †Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.\nHowever, when representing a mesh by a graph, we lose important geometrical information. 
In particular, in a graph there is no notion of angle between, or ordering of, two of a node’s incident edges (see figure 1). Hence, a GCN’s output at a node p is designed to be independent of relative angles and invariant to any permutation of its neighbours q_i ∈ N(p). A graph convolution on a mesh graph therefore corresponds to applying an isotropic convolution kernel. Isotropic filters are insensitive to the orientation of input patterns, so their features are strictly less expressive than those of orientation-aware anisotropic filters.
To address this limitation of graph networks we propose Gauge Equivariant Mesh CNNs (GEM-CNN), which minimally modify GCNs such that they are able to use anisotropic filters while sharing weights across different positions and respecting the local geometry. One obstacle in sharing anisotropic kernels, which are functions of the angle θ_pq of neighbour q with respect to vertex p, over multiple vertices of a mesh is that there is no unique way of selecting a reference neighbour q_0, which has the direction θ_{pq_0} = 0. The reference neighbour, and hence the orientation of the neighbours, needs to be chosen arbitrarily. In order to guarantee the equivalence of the features resulting from different choices of orientations, we adapt Gauge Equivariant CNNs (Cohen et al., 2019b) to general meshes. The kernels of our model are thus designed to be equivariant under gauge transformations, that is, to guarantee that the responses for different kernel orientations are related by a prespecified transformation law. Such features are identified as geometric objects like scalars, vectors, tensors, etc., depending on the specific choice of transformation law. In order to compare such geometric features at neighbouring vertices, they need to be parallel transported along the connecting edge.
In our implementation we first specify the transformation laws of the feature spaces and compute a space of gauge equivariant kernels.
Then we pick arbitrary reference orientations at each node, relative to which we compute neighbour orientations and compute the corresponding edge transporters. Given these quantities, we define the forward pass as a message passing step via edge transporters followed by a contraction with the equivariant kernels evaluated at the neighbour orientations. Algorithmically, Gauge Equivariant Mesh CNNs are therefore just GCNs with anisotropic, gauge equivariant kernels and message passing via parallel transporters. Conventional GCNs are covered in this framework for the specific choice of isotropic kernels and trivial edge transporters, given by identity maps.\nIn Sec. 2, we will give an outline of our method, deferring details to Secs. 3 and 4. In Sec. 3.2, we describe how to compute general geometric quantities, not specific to our method, used for the computation of the convolution. In our experiments in Sec. 6.1, we find that the enhanced expressiveness of Gauge Equivariant Mesh CNNs enables them to outperform conventional GCNs and other prior work in a shape correspondence task." }, { "heading": "2 CONVOLUTIONS ON GRAPHS WITH GEOMETRY", "text": "We consider the problem of processing signals on discrete 2-dimensional manifolds, or meshes M . Such meshes are described by a set V of vertices in R3 together with a set F of tuples, each consisting of the vertices at the corners of a face. For a mesh to describe a proper manifold, each edge needs to be connected to two faces, and the neighbourhood of each vertex needs to be homeomorphic to a disk. Mesh M induces a graph G by forgetting the coordinates of the vertices while preserving the edges. A conventional graph convolution between kernel K and signal f , evaluated at a vertex p, can be defined by\n(K ? 
f)_p = K_self f_p + ∑_{q∈N_p} K_neigh f_q, (1)
where N_p is the set of neighbours of p in G, and K_self ∈ R^{Cin×Cout} and K_neigh ∈ R^{Cin×Cout} are two linear maps which model a self interaction and the neighbour contribution, respectively. Importantly, graph convolution does not distinguish different neighbours, because each feature vector f_q is multiplied by the same matrix K_neigh and then summed. For this reason we say the kernel is isotropic.
Consider the example in figure 1, where on the left and right, the neighbourhood of one vertex p, containing neighbours q ∈ N_p, is visualized. An isotropic kernel would propagate the signal from the neighbours to p in exactly the same way in both neighbourhoods, even though the neighbourhoods are geometrically distinct. For this reason, our method uses direction-sensitive (anisotropic) kernels instead of isotropic kernels. Anisotropic kernels are inherently more expressive than isotropic ones, which is why they are used universally in conventional planar CNNs.
Algorithm 1 Gauge Equivariant Mesh CNN layer
Input: mesh M, input/output feature types ρ_in, ρ_out, reference neighbours (q_0^p ∈ N_p)_{p∈M}.
  Compute basis kernels K^i_self, K^i_neigh(θ)  ▷ Sec. 3
  Initialise weights w^i_self and w^i_neigh.
  For each neighbour pair p ∈ M, q ∈ N_p:  ▷ App. A
    compute the neighbour angle θ_pq relative to the reference neighbour
    compute the parallel transporter g_{q→p}
Forward(input features (f_p)_{p∈M}, weights w^i_self, w^i_neigh):
  f′_p ← ∑_i w^i_self K^i_self f_p + ∑_{i, q∈N_p} w^i_neigh K^i_neigh(θ_pq) ρ_in(g_{q→p}) f_q
We propose the Gauge Equivariant Mesh Convolution, a minimal modification of graph convolution that allows for anisotropic kernels K(θ) whose value depends on an orientation θ ∈ [0, 2π).¹ To define the orientations θ_pq of neighbouring vertices q ∈ N_p of p, we first map them to the tangent plane T_pM at p, as visualized in figure 1. We then pick an arbitrary reference neighbour q_0^p to determine a reference orientation² θ_{pq_0^p} := 0, marked orange in figure 1.
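To make the forward pass of Algorithm 1 concrete, the following numpy sketch evaluates the convolution at a single vertex; the neighbour angles, transporter matrices, and basis kernels are assumed to be precomputed and are passed in as plain arrays (all names are illustrative, not the authors' code):

```python
import numpy as np

def gem_conv_vertex(f, p, neighbors, theta, rho_in, w_self, w_neigh,
                    K_self, K_neigh):
    """Gauge equivariant mesh convolution at one vertex p.

    f        : (V, C_in) input features, one row per vertex
    neighbors: list of neighbor indices q of p
    theta    : dict (p, q) -> neighbor angle theta_pq
    rho_in   : dict (p, q) -> (C_in, C_in) transporter matrix rho(g_{q->p})
    w_self   : (I,) weights for the I self basis kernels
    w_neigh  : (I,) weights for the I neighbor basis kernels
    K_self   : (I, C_out, C_in) self-interaction basis kernels
    K_neigh  : callable angle -> (I, C_out, C_in) neighbor basis kernels
    """
    # self interaction: sum_i w_self^i K_self^i f_p
    out = np.einsum("i,ioc,c->o", w_self, K_self, f[p])
    for q in neighbors:
        transported = rho_in[(p, q)] @ f[q]   # parallel transport q -> p
        Kq = K_neigh(theta[(p, q)])           # kernels evaluated at theta_pq
        out += np.einsum("i,ioc,c->o", w_neigh, Kq, transported)
    return out
```

With identity transporters and an angle-independent kernel, this reduces to the isotropic graph convolution of Eq. 1, which is the sanity check used below.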
This induces a basis on the tangent plane, which, when expressed in polar coordinates, defines the angles θ_pq of the other neighbours.
As we will motivate in the next section, features in a Gauge Equivariant CNN are coefficients of geometric quantities. For example, a tangent vector at vertex p can be described either geometrically by a 3-dimensional vector orthogonal to the normal at p or by two coefficients in the basis on the tangent plane. In order to perform convolution, geometric features at different vertices need to be linearly combined, for which it is required to first “parallel transport” the features to the same vertex. This is done by applying a matrix ρ(g_{q→p}) ∈ R^{Cin×Cin} to the coefficients of the feature at q, in order to obtain the coefficients of the feature vector transported to p, which can be used for the convolution at p. The transporter depends on the geometric type (group representation) of the feature, denoted by ρ and described in more detail below. Details of how the tangent space is defined, how to compute the map to the tangent space, the angles θ_pq, and the parallel transporter are given in Appendix A.
In combination, this leads to the GEM-CNN convolution
(K ? f)_p = K_self f_p + ∑_{q∈N_p} K_neigh(θ_pq) ρ(g_{q→p}) f_q, (2)
which differs from the conventional graph convolution, defined in Eq. 1, only by the use of an anisotropic kernel and the parallel transport message passing.
We require the outcome of the convolution to be equivalent for any choice of reference orientation. This is not the case for any anisotropic kernel but only for those which are equivariant under changes of reference orientations (gauge transformations). Equivariance imposes a linear constraint on the kernels. We therefore solve for complete sets of “basis-kernels” K^i_self and K^i_neigh satisfying this constraint and linearly combine them with parameters w^i_self and w^i_neigh such that K_self = ∑_i w^i_self K^i_self
Details on the computation of basis kernels are given in section 3. The full algorithm for initialisation and forward pass, which is of time and space complexity linear in the number of vertices, for a GEM-CNN layer are listed in algorithm 1. Gradients can be computed by automatic differentiation.\nThe GEM-CNN is gauge equivariant, but furthermore satisfies two important properties. Firstly, it depends only on the intrinsic shape of the 2D mesh, not on the embedding of the mesh in R3. Secondly, whenever a map from the mesh to itself exists that preserves distances and orientation, the convolution is equivariant to moving the signal along such transformations. These properties are proven in Appendix D and empirically shown in Appendix F.2.\n1In principle, the kernel could be made dependent on the radial distance of neighboring nodes, by Kneigh(r, θ) = F (r)Kneigh(θ), where F (r) is unconstrained and Kneigh(θ) as presented in this paper. As this dependency did not improve the performance in our empirical evaluation, we omit it.\n2Mathematically, this corresponds to a choice of local reference frame or gauge." }, { "heading": "3 GAUGE EQUIVARIANCE & GEOMETRIC FEATURES", "text": "On a general mesh, the choice of the reference neighbour, or gauge, which defines the orientation of the kernel, can only be made arbitrarily. However, this choice should not arbitrarily affect the outcome of the convolution, as this would impede the generalization between different locations and different meshes. Instead, Gauge Equivariant Mesh CNNs have the property that their output transforms according to a known rule as the gauge changes.\nConsider the left hand side of figure 2(a). Given a neighbourhood of vertex p, we want to express each neighbour q in terms of its polar coordinates (rq, θq) on the tangent plane, so that the kernel value at that neighbour Kneigh(θq) is well defined. 
This requires choosing a basis on the tangent plane, determined by picking a neighbour as reference neighbour (denoted q0), which has the zero angle θq0 = 0. In the top path, we pick qA as reference neighbour. Let us call this gauge A, in which neighbours have angles θAq. In the bottom path, we instead pick neighbour qB as reference point and are in gauge B. We get a different basis for the tangent plane and different angles θBq for each neighbour. Comparing the two gauges, we see that they are related by a rotation, so that θBq = θAq − θAqB. This change of gauge is called a gauge transformation of angle g := θAqB.\nIn figure 2(a), we illustrate a gauge equivariant convolution that takes input and output features such as gray scale image values on the mesh, which are called scalar features. The top path represents the convolution in gauge A, the bottom path in gauge B. In either case, the convolution can be interpreted as consisting of three steps. First, for each vertex p, the value of the scalar features on the mesh at each neighbouring vertex q, represented by colors, is mapped to the tangent plane at p at angle θq defined by the gauge. Subsequently, the convolutional kernel sums, for each neighbour q, the product of the feature at q and kernel K(θq). Finally, the output is mapped back to the mesh. These three steps can be composed into a single step, which we could call a geometric convolution, mapping from input features on the mesh to output features on the mesh. The convolution is gauge equivariant if this geometric convolution does not depend on the gauge we pick in the interim, so in figure 2(a), if the convolution in the top path in gauge A has the same result as the convolution in the bottom path in gauge B, making the diagram commute. In this case, however, the output is a scalar, so we see that the convolution output needs to be the same in both gauges for the convolution to be equivariant.
Hence, we must have that K(θq) = K(θq − g) for any angle g by which the orientations of the neighbours can differ, so the kernel must be isotropic.\nAs we aim to design an anisotropic convolution, the output feature of the convolution at p can, instead of a scalar, be two numbers v ∈ R2, which can be interpreted as coefficients of a tangent feature vector in the tangent space at p, visualized in figure 2(b). As shown on the right hand side, different gauges induce a different basis of the tangent plane, so that the same tangent vector (shown on the middle right on the mesh) is represented by different coefficients in the gauge (shown on the top and bottom on the right). This gauge equivariant convolution must be anisotropic: going from the top row to the bottom row, if we change orientations of the neighbours by −g, the coefficients of the output vector v ∈ R2 of the kernel must also be rotated by −g. This is written as R(−g)v, where R(−g) ∈ R2×2 is the matrix that rotates by angle −g. Vectors and scalars are not the only types of geometric features that can be inputs and outputs of a GEM-CNN layer. In general, the coefficients of a geometric feature of C dimensions change by an invertible linear transformation ρ(−g) ∈ RC×C if the gauge is rotated by angle g. The map ρ : [0, 2π)→ RC×C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2). Group representations have the property that ρ(g + h) = ρ(g)ρ(h) (they are group homomorphisms), which implies in particular that ρ(0) = 1 and ρ(−g) = ρ(g)−1. For more background on group representation theory, we refer the reader to (Serre, 1977) and, specifically in the context of equivariant deep learning, to (Lang & Weiler, 2020). From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps).
For SO(2), these are the one dimensional invariant scalar representation ρ0 and, for all n ∈ N>0, a two dimensional representation ρn,\nρ0(g) = 1, ρn(g) = ( cos ng  −sin ng ; sin ng  cos ng ),\nwhere we write, for example, ρ = ρ0 ⊕ ρ1 ⊕ ρ1 to denote that representation ρ(g) is the direct sum (i.e. block-diagonal stacking) of the matrices ρ0(g), ρ1(g), ρ1(g). Scalars and tangent vector features correspond to ρ0 and ρ1 respectively, and we have R(g) = ρ1(g).\nThe type of the feature at each layer in the network can thus be fully specified (up to a change of basis) by the number of copies of each irrep. Similar to the dimensionality in a conventional CNN, the choice of type is a hyperparameter that can be freely chosen to optimize performance." }, { "heading": "3.1 KERNEL CONSTRAINT", "text": "Given an input type ρin and output type ρout of dimensions Cin and Cout, the kernels are Kself ∈ RCout×Cin and Kneigh : [0, 2π)→ RCout×Cin . However, not all such kernels are equivariant. Consider again the examples of figure 2(a) and figure 2(b). If we map from a scalar to a scalar, we get that Kneigh(θ − g) = Kneigh(θ) for all angles θ, g, and the convolution is isotropic. If we map from a scalar to a vector, we get that rotating the angles θq results in the same tangent vector as rotating the output vector coefficients, so that Kneigh(θ − g) = R(−g)Kneigh(θ).\nIn general, as derived by Cohen et al. (2019b) and in appendix B, the kernels must satisfy for any gauge transformation g ∈ [0, 2π) and angle θ ∈ [0, 2π), that Kneigh(θ − g) = ρout(−g)Kneigh(θ)ρin(g), (3)\nKself = ρout(−g) Kself ρin(g). (4) The kernel can be seen as consisting of multiple blocks, where each block takes as input one irrep and outputs one irrep. For example, if ρin is of type ρ0⊕ρ1⊕ρ1 and ρout of type ρ1⊕ρ3, we have the 4× 5 matrix\nKneigh(θ) = ( K10(θ) K11(θ) K11(θ) ; K30(θ) K31(θ) K31(θ) ),\nwhere e.g. K31(θ) ∈ R2×2 is a kernel that takes as input irrep ρ1 and as output irrep ρ3 and needs to satisfy Eq. 3.
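The irreps and the constraint of Eq. 3 can be verified numerically. The following sketch is our own; the complete equivariant bases are those of Table 1, from Weiler & Cesa (2019), and the kernel below is one basis element mapping ρ0 to ρn that we derived for illustration:

```python
import numpy as np

def irrep(n, g):
    """rho_0(g) = 1; rho_n(g) is a 2x2 rotation by angle n*g."""
    if n == 0:
        return np.eye(1)
    c, s = np.cos(n * g), np.sin(n * g)
    return np.array([[c, -s], [s, c]])

def rho(irreps, g):
    """Block-diagonal representation, e.g. irreps=(0, 1, 1) gives the direct sum
    of rho_0, rho_1, rho_1, a 5x5 matrix."""
    blocks = [irrep(n, g) for n in irreps]
    dim = sum(b.shape[0] for b in blocks)
    out, i = np.zeros((dim, dim)), 0
    for b in blocks:
        d = b.shape[0]
        out[i:i + d, i:i + d] = b
        i += d
    return out

def K_0_to_n(theta, n):
    """One equivariant basis kernel mapping a rho_0 input to a rho_n output, in R^{2x1}."""
    return np.array([[np.cos(n * theta)], [np.sin(n * theta)]])

# Eq. 3 with rho_in = rho_0 and rho_out = rho_n: K(theta - g) = rho_n(-g) K(theta)
theta, g, n = 0.7, 1.3, 2
assert np.allclose(K_0_to_n(theta - g, n), irrep(n, -g) @ K_0_to_n(theta, n))
```

The homomorphism property ρ(g + h) = ρ(g)ρ(h) of the block-diagonal types can be checked in the same way.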
As derived by Weiler & Cesa (2019) and in Appendix C, the kernels Kneigh(θ) and Kself mapping from irrep ρn to irrep ρm can be written as a linear combination of the basis kernels listed in Table 1. The table shows that equivariance requires the self-interaction to only map from one irrep to the same irrep. Hence, we have, in blocks, Kself = ( 0 K11 K11 ; 0 0 0 ) ∈ R4×5.\nAll basis-kernels of all pairs of input irreps and output irreps can be linearly combined to form an arbitrary equivariant kernel from a feature of type ρin to ρout. In the above example, we have 2 × 2 + 4 × 4 = 20 basis kernels for Kneigh and 4 basis kernels for Kself. The layer thus has 24 parameters. As proven in (Weiler & Cesa, 2019) and (Lang & Weiler, 2020), this parameterization of the equivariant kernel space is complete, that is, more general equivariant kernels do not exist." }, { "heading": "3.2 GEOMETRY AND PARALLEL TRANSPORT", "text": "In order to implement gauge equivariant mesh CNNs, we need to make the abstract notion of tangent spaces, gauges and transporters concrete.\nAs the mesh is embedded in R3, a natural definition of the tangent spaces TpM is as the two dimensional subspaces that are orthogonal to the normal vector at p. We follow the common definition of normal vectors at mesh vertices as the area weighted average of the adjacent faces’ normals. The Riemannian logarithm map logp : Np → TpM represents the one-ring neighborhood of each point p on its tangent space, as visualized in figure 1. Specifically, neighbors q ∈ Np are mapped to logp(q) ∈ TpM by first projecting them to TpM and then rescaling the projection such that the norm is preserved, i.e. | logp(q)| = |q − p|; see Eq. 6. A choice of reference neighbor qp ∈ Np uniquely determines a right handed, orthonormal reference frame (ep,1, ep,2) of TpM by setting ep,1 := logp(qp)/| logp(qp)| and ep,2 := n × ep,1.
The polar angle θpq of any neighbor q ∈ Np relative to the first frame axis is then given by θpq := atan2( eᵀp,2 logp(q), eᵀp,1 logp(q) ).\nGiven the reference frame (ep,1, ep,2), a 2-tuple of coefficients (v1, v2) ∈ R2 specifies an (embedded) tangent vector v1ep,1 + v2ep,2 ∈ TpM ⊂ R3. This assignment is formally given by the gauge map Ep : R2 → TpM ⊂ R3, which is a vector space isomorphism. In our case, it can be identified with the matrix\nEp = [ ep,1 ep,2 ] ∈ R3×2. (5)\nFeature vectors fp and fq at neighboring (or any other) vertices p ∈ M and q ∈ Np ⊆ M live in different vector spaces and are expressed relative to independent gauges, which makes it invalid to sum them directly. Instead, they have to be parallel transported along the mesh edge that connects the two vertices. As explained above, this transport is given by group elements gq→p ∈ [0, 2π), which determine the transformation of tangent vector coefficients as vq 7→ R(gq→p)vq ∈ R2 and, analogously, of feature vector coefficients as fq 7→ ρ(gq→p)fq. Figure 4 in the appendix visualizes the definition of edge transporters for flat spaces and meshes. On a flat space, tangent vectors are transported by keeping them parallel in the usual sense on Euclidean spaces. However, if the source and target frame orientations disagree, the vector coefficients relative to the source frame need to be transformed to the target frame. This coordinate transformation from the polar angle ϕq of v to the polar angle ϕp of R(gq→p)v defines the transporter gq→p = ϕp − ϕq. On meshes, the source and target tangent spaces TqM and TpM are no longer parallel. It is therefore additionally necessary to rotate the source tangent space and its vectors parallel to the target space before transforming between the frames. Since transporters effectively make up for differences in the source and target frames, the parallel transporters transform under gauge transformations gp and gq according to gq→p 7→ gp + gq→p − gq.
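The log map, reference frame, neighbour angles, and the flat-space transporter can be made concrete in a few lines (a minimal numpy sketch under the definitions above; the names are ours):

```python
import numpy as np

def log_map(p, q, n):
    """log_p(q): project q - p onto the tangent plane at p (unit normal n),
    rescaled such that |log_p(q)| = |q - p|."""
    v = q - p
    v_t = v - n * (n @ v)
    return v_t / np.linalg.norm(v_t) * np.linalg.norm(v)

def gauge_frame(p, q_ref, n):
    """Right-handed orthonormal frame (e1, e2) determined by reference neighbour q_ref."""
    e1 = log_map(p, q_ref, n)
    e1 = e1 / np.linalg.norm(e1)
    return e1, np.cross(n, e1)

def polar_angle(p, q, q_ref, n):
    """theta_pq = atan2(e2^T log_p(q), e1^T log_p(q))."""
    e1, e2 = gauge_frame(p, q_ref, n)
    v = log_map(p, q, n)
    return np.arctan2(e2 @ v, e1 @ v)

def R(a):
    """rho_1(a): rotation of tangent vector coefficients."""
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Flat-space transporter: if the frames at p and q are rotated by alpha_p and
# alpha_q relative to a common global frame, then g_{q->p} = alpha_q - alpha_p.
alpha_p, alpha_q = 0.4, 1.1
v = np.array([2.0, -1.0])                    # a tangent vector in global coordinates
v_q, v_p = R(-alpha_q) @ v, R(-alpha_p) @ v  # its coefficients in the two frames
assert np.allclose(R(alpha_q - alpha_p) @ v_q, v_p)
```

On a curved mesh, the transporter additionally accounts for the rotation that aligns the source tangent plane with the target one, as detailed in Appendix A.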
Note that this transformation law cancels with the transformation law of the coefficients at q and lets the transported coefficients transform according to gauge transformations at p. It is therefore valid to sum vectors and features that are parallel transported into the same gauge at p.\nA more detailed discussion of the concepts presented in this section can be found in Appendix A." }, { "heading": "4 NON-LINEARITY", "text": "Besides convolutional layers, the GEM-CNN contains non-linear layers, which also need to be gauge equivariant for the entire network to be gauge equivariant. The coefficients of features built out of irreducible representations, as described in section 3, do not commute with point-wise nonlinearities (Worrall et al., 2017; Thomas et al., 2018; Weiler et al., 2018a; Kondor et al., 2018). Norm non-linearities and gated non-linearities (Weiler & Cesa, 2019) can be used with such features, but generally perform worse in practice compared to point-wise non-linearities (Weiler & Cesa, 2019). Hence, we propose the RegularNonlinearity, which uses point-wise non-linearities and is approximately gauge equivariant.\nThis non-linearity is built on Fourier transformations. Consider a continuous periodic signal, on which we perform a band-limited Fourier transform with band limit b, obtaining 2b + 1 Fourier coefficients. If this continuous signal is shifted by an arbitrary angle g, then the corresponding Fourier components transform with the linear transformation ρ0:b(−g), for the 2b + 1 dimensional representation ρ0:b := ρ0 ⊕ ρ1 ⊕ ... ⊕ ρb. It would be exactly equivariant to take a feature of type ρ0:b, take a continuous inverse Fourier transform to a continuous periodic signal, apply a point-wise non-linearity to that signal, and take the continuous Fourier transform to recover a feature of type ρ0:b. However, for implementation, we use N intermediate samples and the discrete Fourier transform.
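A minimal sketch of this construction (our own simplified version; the paper's implementation may differ in conventions), with the inverse Fourier transform realized as a sampling matrix and the transform back as its pseudo-inverse:

```python
import numpy as np

def regular_nonlinearity(f, b, N, sigma=lambda x: np.maximum(x, 0.0)):
    """Pointwise nonlinearity on a rho_{0:b} feature f = [a_0, a_1, b_1, ..., a_b, b_b].

    Evaluate the underlying periodic signal at N equiangular samples (inverse
    Fourier transform), apply sigma pointwise, and project back onto the
    band-limited basis (Fourier transform) by least squares.
    """
    t = 2 * np.pi * np.arange(N) / N
    cols = [np.ones(N)]
    for k in range(1, b + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    B = np.stack(cols, axis=1)               # (N, 2b+1) sampling matrix; needs N > 2b
    return np.linalg.pinv(B) @ sigma(B @ f)  # exactly equivariant for shifts 2*pi*m/N
```

Gauge transformations by multiples of 2π/N act as cyclic shifts of the N samples and hence commute exactly with the pointwise σ; other angles are only approximately respected, which is the approximation controlled by N.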
This is exactly gauge equivariant for gauge transformations by angles that are multiples of 2π/N, but only approximately equivariant for other angles. In App. G we prove that as N → ∞, the non-linearity is exactly gauge equivariant. The run-time cost per vertex of the (inverse) Fourier transform implemented as a simple linear transformation is O(bN), which is what we use in our experiments. The point-wise non-linearity scales linearly with N, so the complexity of the RegularNonlinearity is also O(bN). However, one can also use a fast Fourier transform, achieving a complexity of O(N logN). Concrete memory and run-time costs for varying N are shown in appendix F.1." }, { "heading": "5 RELATED WORK", "text": "The irregular structure of meshes leads to a variety of approaches to define convolutions. Closely related to our method are graph based methods, which are often based on variations of graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016). GCNs have been applied on spherical meshes (Perraudin et al., 2019) and cortical surfaces (Cucurull et al., 2018; Zhao et al., 2019a). Verma et al. (2018) augment GCNs with anisotropic kernels which are dynamically computed via an attention mechanism over graph neighbours.\nInstead of operating on the graph underlying a mesh, several approaches leverage its geometry by treating it as a discrete manifold. Convolution kernels can then be defined in geodesic polar coordinates, which corresponds to a projection of kernels from the tangent space to the mesh via the exponential map. This allows for kernels that are larger than the immediate graph neighbourhood and for message passing over faces, but does not resolve the issue of ambiguous kernel orientation. Masci et al. (2015); Monti et al. (2016) and Sun et al. (2018) address this issue by restricting the network to orientation invariant features, which are computed by applying anisotropic kernels in several orientations and pooling over the resulting responses.
The models proposed in (Boscaini et al., 2016) and (Schonsheck et al., 2018) are explicitly gauge dependent, with preferred orientations chosen via the principal curvature direction and the parallel transport of kernels, respectively. Poulenard & Ovsjanikov (2018) proposed a non-trivially gauge equivariant network based on geodesic convolutions; however, the model parallel transports only partial information of the feature vectors, corresponding to certain kernel orientations. In concurrent work, Wiersma et al. (2020) also define convolutions on surfaces equivariantly to the orientation of the kernel, but differ in that they use norm non-linearities instead of regular ones and that they apply the convolution along longer geodesics, which adds complexity to the geometric pre-computation, as partial differential equations need to be solved, but may result in less susceptibility to the particular discretisation of the manifold.\nAnother class of approaches defines spectral convolutions on meshes. However, as argued in (Bronstein et al., 2017), the Fourier spectrum of a mesh depends heavily on its geometry, which makes such methods unstable under deformations and impedes the generalization between different meshes. Spectral convolutions further correspond to isotropic kernels. Kostrikov et al. (2018) overcome the isotropy of the Laplacian by decomposing it into two applications of the first-order Dirac operator.\nA construction based on toric covering maps of topologically spherical meshes was presented in (Maron et al., 2017). An entirely different approach to mesh convolutions is to apply a linear map to a spiral of neighbours (Bouritsas et al., 2019; Gong et al., 2019), which works well only for meshes with a similar graph structure.\nThe above-mentioned methods operate on the intrinsic, 2-dimensional geometry of the mesh. A popular alternative for embedded meshes is to define convolutions in the embedding space R3.
This can for instance be done by voxelizing space and representing the mesh in terms of an occupancy grid (Wu et al., 2015; Tchapmi et al., 2017; Hanocka et al., 2018). A downside of this approach is the high memory and compute requirements of voxel representations. If the grid occupancy is low, this can partly be addressed by resorting to an inhomogeneous grid density (Riegler et al., 2017). Instead of voxelizing space, one may interpret the set of mesh vertices as a point cloud and run a convolution on those (Qi et al., 2017a;b). Point cloud based methods can be made equivariant w.r.t. the isometries of R3 (Zhao et al., 2019b; Thomas et al., 2018), which implies in particular the isometry equivariance on the embedded mesh. In general, geodesic distances within the manifold usually differ substantially from the distances in the embedding space. Which approach is more suitable depends on the particular application.\nOn flat Euclidean spaces our method corresponds to Steerable CNNs (Cohen & Welling, 2017; Weiler et al., 2018a; Weiler & Cesa, 2019; Cohen et al., 2019a; Lang & Weiler, 2020). Like our model, these networks process geometric feature fields of types ρ and are equivariant under gauge transformations; however, due to the flat geometry, the parallel transporters become trivial. Regular nonlinearities are used on flat spaces in group convolutional networks (Cohen & Welling, 2016; Weiler et al., 2018b; Hoogeboom et al., 2018; Bekkers et al., 2018; Winkels & Cohen, 2018; Worrall & Brostow, 2018; Worrall & Welling, 2019; Sosnovik et al., 2020)." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 EMBEDDED MNIST", "text": "We first investigate how Gauge Equivariant Mesh CNNs perform on, and generalize between, different mesh geometries. For this purpose we conduct simple MNIST digit classification experiments on embedded rectangular meshes of 28×28 vertices. As a baseline geometry we consider a flat mesh as visualized in figure 5(a).
A second type of geometry is defined as different isometric embeddings of the flat mesh; see figure 5(b). Note that this implies that the intrinsic geometry of these isometrically embedded meshes is indistinguishable from that of the flat mesh. To generate geometries which are intrinsically curved, we add random normal displacements to the flat mesh. We control the amount of curvature by smoothing the resulting displacement fields with Gaussian kernels of different widths σ and define the roughness of the resulting mesh as 3 − σ. Figures 5(c)-5(h) show the results for roughnesses of 0.5, 1, 1.5, 2, 2.25 and 2.5. For each of the considered settings we generate 32 different train and 32 test geometries.\nTo test the performance on, and generalization between, different geometries, we train equivalent GEM-CNN models on a flat mesh and meshes with a roughness of 1, 1.5, 2, 2.25 and 2.5. Each model is tested individually on each of the considered test geometries, which are the flat mesh, isometric embeddings and curved embeddings with a roughness of 0.5, 1, 1.25, 1.5, 1.75, 2, 2.25 and 2.5. Figure 3 shows the test errors of the GEM-CNNs on the different train geometries (different curves) for all test geometries (shown on the x-axis). Since our model is purely defined in terms of the intrinsic geometry of a mesh, it is expected to be insensitive to isometric changes in the embeddings. This is empirically confirmed by the fact that the test performances on flat and isometric embeddings are exactly equal. As expected, the test error increases for most models with the surface roughness. Models trained on rougher surfaces are thereby more robust to deformations. The models generalize well from a rough training geometry to a smooth test geometry up to a training roughness of 1.5.
Beyond that point, the test performance on smooth meshes degrades, up to the point of random guessing at a training roughness of 2.5.\nAs a baseline, we build an isotropic graph CNN with the same network topology and number of parameters (≈ 163k). This model is insensitive to the mesh geometry and therefore performs identically on all surfaces. While this enhances its robustness on very rough meshes, its test error of 19.80 ± 3.43% is a very poor result on MNIST. In contrast, the use of anisotropic filters allows the GEM-CNN to reach a test error of only 0.60 ± 0.05% on the flat geometry. It is therefore competitive with conventional CNNs on pixel grids, which apply anisotropic kernels as well. More details on the datasets, models and further experimental setup are given in appendix E.1.\nModel | Features | Accuracy (%)\nACNN (Boscaini et al., 2016) | SHOT | 62.4\nGeodesic CNN (Masci et al., 2015) | SHOT | 65.4\nMoNet (Monti et al., 2016) | SHOT | 73.8\nFeaStNet (Verma et al., 2018) | XYZ | 98.7\nZerNet (Sun et al., 2018) | XYZ | 96.9\nSpiralNet++ (Gong et al., 2019) | XYZ | 99.8\nGraph CNN | XYZ | 1.40±0.5\nGraph CNN | SHOT | 23.80±8\nNon-equiv. CNN (SHOT frames) | XYZ | 73.00±4.0\nNon-equiv. CNN (SHOT frames) | SHOT | 75.11±2.4\nGEM-CNN | XYZ | 99.73±0.04\nGEM-CNN (broken symmetry) | XYZ | 99.89±0.02\nTable 2: Results of FAUST shape correspondence. Statistics are means and standard errors of the mean over three runs. All cited results are from their respective papers." }, { "heading": "6.2 SHAPE CORRESPONDENCE", "text": "As a second experiment, we perform non-rigid shape correspondence on the FAUST dataset (Bogo et al., 2014), following Masci et al. (2015)3. The data consists of 100 meshes of human bodies in various positions, split into 80 train and 20 test meshes. The vertices are registered, such that vertices on the same position on the body, such as the tip of the left thumb, have the same identifier on all meshes.
All meshes have 6890 vertices, making this a 6890-class segmentation problem.\nThe architecture transforms the vertices’ XYZ coordinates (of type 3ρ0), via 6 convolutional layers, to features of type 64ρ0, with intermediate features of type 16(ρ0 ⊕ ρ1 ⊕ ρ2), with residual connections and the RegularNonlinearity with N = 5 samples. Afterwards, we use two 1×1 convolutions with ReLU to map first to 256 and then to 6890 channels, after which a softmax predicts the registration probabilities. The 1×1 convolutions use a dropout of 50% and 1E-4 weight decay. The network is trained with a cross entropy loss with an initial learning rate of 0.01, which is halved when the training loss reaches a plateau.\nAs all meshes in the FAUST dataset share the same topology, breaking the gauge equivariance in higher layers can actually be beneficial. As shown in (Weiler & Cesa, 2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1×1 convolution. As baselines, we compare to various models, some of which use more complicated pipelines, such as (1) the computation of geodesics over the mesh, which requires solving partial differential equations, (2) pooling, which requires finding a uniform sub-selection of vertices, (3) the pre-computation of SHOT features which locally describe the geometry (Tombari et al., 2010), or (4) post-processing refinement of the predictions. The GEM-CNN requires none of these additional steps. In addition, we compare to SpiralNet++ (Gong et al., 2019), which requires all inputs to be similarly meshed. Finally, we compare to an isotropic version of the GEM-CNN, which reduces to a conventional graph CNN, as well as to a non-gauge-equivariant CNN based on SHOT frames. The results in table 2 show that the GEM-CNN outperforms prior works and a non-gauge-equivariant CNN, that isotropic graph CNNs are unable to solve the task, and that for this data set breaking gauge symmetry in the final layers of the network is beneficial.
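For reference, the feature types above translate into ordinary channel dimensions as follows (a trivial helper of our own: ρ0 contributes one channel and each ρn with n > 0 contributes two):

```python
def type_dim(multiplicities):
    """Channel dimension of a feature type given {n: number of copies of rho_n}."""
    return sum(copies * (1 if n == 0 else 2) for n, copies in multiplicities.items())

print(type_dim({0: 3}))                 # input XYZ, type 3*rho_0         -> 3
print(type_dim({0: 16, 1: 16, 2: 16}))  # intermediate 16*(rho_0+rho_1+rho_2) -> 80
print(type_dim({0: 64}))                # final invariant features, 64*rho_0  -> 64
```

This is the sense in which the choice of type plays the role that the channel dimension plays in a conventional CNN.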
More experimental details are given in appendix E.2." }, { "heading": "7 CONCLUSIONS", "text": "Convolutions on meshes are commonly performed as a convolution on their underlying graph, by forgetting geometric information, such as the orientation of neighbouring vertices. In this paper we propose Gauge Equivariant Mesh CNNs, which endow Graph Convolutional Networks on meshes with anisotropic kernels and parallel transport. Hence, they are sensitive to the mesh geometry, and result in equivalent outputs regardless of the arbitrary choice of kernel orientation.\nWe demonstrate that the inference of GEM-CNNs is invariant under isometric deformations of meshes and generalizes well over a range of non-isometric deformations. On the FAUST shape correspondence task, we show that gauge equivariance, combined with symmetry breaking in the final layer, leads to state-of-the-art performance.\n3These experiments were executed on QUVA machines." } ]
2021
GAUGE EQUIVARIANT MESH CNNS ANISOTROPIC CONVOLUTIONS ON GEOMETRIC GRAPHS
SP:996e66b927181eb325cadb345bb51e96b8d46923
[ "The paper proposes two approaches (i.e., PATE-FL and Private-kNN-FL) to train a differentially private global model in a federated setting based on [1] and [2]. In PATE-FL, each client first trains a teacher model using their local dataset. The teacher models are used to make noisy predictions on a public dataset. Then, the public dataset with predicted labels is used to train a final model. In Private-kNN-FL, instead of training a teacher model, the prediction depends on the labels of the nearest neighbors. The experiments show that PATE-FL and Private-kNN-FL can outperform DP-FedAvg in terms of agent-level DP and instance-level DP, respectively.", "Federated learning enables distributed clients to train a model without sharing the data with each other. This is typically achieved by a gradient descent type algorithm such as federated averaging. The paper argues that federated learning via gradient updates has issues and proposes to use a voting based method for training machine learning models using unlabeled global data." ]
While federated learning (FL) enables distributed agents to collaboratively train a centralized model without sharing data with each other, it fails to protect users against inference attacks that mine private information from the centralized model. Thus, facilitating federated learning methods with differential privacy (DPFL) becomes attractive. Existing algorithms based on privately aggregating clipped gradients require many rounds of communication, which may not converge, and cannot scale up to large-capacity models due to explicit dimension-dependence in its added noise. In this paper, we adopt the knowledge transfer model of private learning pioneered by Papernot et al. (2017; 2018) and extend their algorithm PATE, as well as the recent alternative PrivateKNN (Zhu et al., 2020) to the federated learning setting. The key difference is that our method privately aggregates the labels from the agents in a voting scheme, instead of aggregating the gradients, hence avoiding the dimension dependence and achieving significant savings in communication cost. Theoretically, we show that when the margins of the voting scores are large, the agents enjoy exponentially higher accuracy and stronger (data-dependent) differential privacy guarantees on both agent-level and instancelevel. Extensive experiments show that our approach significantly improves the privacy-utility trade-off over the current state-of-the-art in DPFL.
[]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Alekh Agarwal", "Martin J Wainwright", "Peter L Bartlett", "Pradeep K Ravikumar" ], "title": "Informationtheoretic lower bounds on the oracle complexity of convex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Naman Agarwal", "Ananda Theertha Suresh", "Felix Xinnan X Yu", "Sanjiv Kumar", "Brendan McMahan" ], "title": "cpsgd: Communication-efficient and differentially-private distributed sgd", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Raef Bassily", "Adam Smith", "Abhradeep Thakurta" ], "title": "Private empirical risk minimization: Efficient algorithms and tight error bounds", "venue": "In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science,", "year": 2014 }, { "authors": [ "Abhishek Bhowmick", "John Duchi", "Julien Freudiger", "Gaurav Kapoor", "Ryan Rogers" ], "title": "Protection against reconstruction and its applications in private federated learning", "venue": "arXiv preprint arXiv:1812.00984,", "year": 2018 }, { "authors": [ "Keith Bonawitz", "Vladimir Ivanov", "Ben Kreuter", "Antonio Marcedone", "H Brendan McMahan", "Sarvar Patel", "Daniel Ramage", "Aaron Segal", "Karn Seth" ], "title": "Practical secure aggregation for privacypreserving machine learning", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Kamalika Chaudhuri", "Claire Monteleoni", "Anand D Sarwate" ], "title": "Differentially private empirical risk minimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Irit 
Dinur", "Kobbi Nissim" ], "title": "Revealing information while preserving privacy", "venue": "In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems,", "year": 2003 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Theory of cryptography conference,", "year": 2006 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Robin C Geyer", "Tassilo Klein", "Moin Nabi" ], "title": "Differentially private federated learning: A client level perspective", "venue": "arXiv preprint arXiv:1712.07557,", "year": 2017 }, { "authors": [ "Boqing Gong", "Yuan Shi", "Fei Sha", "Kristen Grauman" ], "title": "Geodesic flow kernel for unsupervised domain adaptation", "venue": "In 2012 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Andrew Hard", "Kanishka Rao", "Rajiv Mathews", "Swaroop Ramaswamy", "Françoise Beaufays", "Sean Augenstein", "Hubert Eichner", "Chloé Kiddon", "Daniel Ramage" ], "title": "Federated learning for mobile keyboard prediction", "venue": "arXiv preprint arXiv:1811.03604,", "year": 2018 }, { "authors": [ "Li Huang", "Andrew L Shea", "Huining Qian", "Aditya Masurkar", "Hao Deng", "Dianbo Liu" ], "title": "Patient clustering improves efficiency of federated machine learning to predict mortality and hospital stay time using distributed electronic medical records", "venue": "Journal of biomedical informatics,", "year": 2019 }, { "authors": [ "Peter Kairouz", "Sewoong Oh", "Pramod Viswanath" ], "title": "The composition theorem for differential privacy", "venue": "In International Conference on Machine
Learning", "year": 2015 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": null, "year": 2019 }, { "authors": [ "Jakub Konečný", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "arXiv preprint arXiv:1812.06127,", "year": 2018 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": "arXiv preprint arXiv:1907.02189,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "H.
Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning differentially private recurrent language models", "venue": "In International Conference on Learning Representations (ICLR-18),", "year": 2018 }, { "authors": [ "Ilya Mironov" ], "title": "Rényi differential privacy", "venue": "In Computer Security Foundations Symposium (CSF),", "year": 2017 }, { "authors": [ "Payman Mohassel", "Yupeng Zhang" ], "title": "Secureml: A system for scalable privacy-preserving machine learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "arXiv preprint arXiv:1902.00146,", "year": 1902 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory Lectures on Convex Optimization: A Basic Course, volume 87", "venue": "Springer Science & Business Media,", "year": 2003 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Nicolas Papernot", "Martı́n Abadi", "Úlfar Erlingsson", "Ian Goodfellow", "Kunal Talwar" ], "title": "Semisupervised knowledge transfer for deep learning from private training data", "venue": "In International Conference on Learning Representations (ICLR-17),", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Shuang Song", "Ilya Mironov", "Ananth Raghunathan", "Kunal Talwar", "Úlfar Erlingsson" ], "title": "Scalable private learning with pate", "venue": "In International Conference on Learning Representations (ICLR-18),", "year": 2018 }, { "authors": [ "Xingchao Peng", "Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": 
[ "Xingchao Peng", "Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xingchao Peng", "Zijun Huang", "Yizhe Zhu", "Kate Saenko" ], "title": "Federated adversarial domain adaptation", "venue": "arXiv preprint arXiv:1911.02054,", "year": 2054 }, { "authors": [ "Daniel Peterson", "Pallika Kanani", "Virendra J Marathe" ], "title": "Private federated learning with domain adaptation", "venue": "arXiv preprint arXiv:1912.06733,", "year": 1912 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], "title": "Membership inference attacks against machine learning models", "venue": "In IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Shuang Song", "Kamalika Chaudhuri", "Anand D Sarwate" ], "title": "Stochastic gradient descent with differentially private updates", "venue": "In Conference on Signal and Information Processing,", "year": 2013 }, { "authors": [ "Stacey Truex", "Nathalie Baracaldo", "Ali Anwar", "Thomas Steinke", "Heiko Ludwig", "Rui Zhang", "Yi Zhou" ], "title": "A hybrid approach to privacy-preserving federated learning", "venue": "In AISec,", "year": 2019 }, { "authors": [ "Stacey Truex", "Nathalie Baracaldo", "Ali Anwar", "Thomas Steinke", "Heiko Ludwig", "Rui Zhang", "Yi Zhou" ], "title": "A hybrid approach to privacy-preserving federated learning", "venue": "In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pp. 
1–11,", "year": 2019 }, { "authors": [ "Shiqiang Wang", "Tiffany Tuor", "Theodoros Salonidis", "Kin K Leung", "Christian Makaya", "Ting He", "Kevin Chan" ], "title": "Adaptive federated learning in resource constrained edge computing systems", "venue": "IEEE Journal on Selected Areas in Communications,", "year": 2019 }, { "authors": [ "Yuqing Zhu", "Xiang Yu", "Manmohan Chandraker", "Yu-Xiang Wang" ], "title": "Private-knn: Practical differential privacy for computer vision", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "With increasing ethical and legal concerns on leveraging private data, federated learning (McMahan et al., 2017) (FL) has emerged as a paradigm that allows agents to collaboratively train a centralized model without sharing local data. In this work, we consider two typical settings of federated learning: (1) Local agents are in large number, i.e., learning user behavior over many mobile devices (Hard et al., 2018). (2) Local agents are in small number with sufficient instances, i.e., learning a health related model across multiple hospitals without sharing patients’ data (Huang et al., 2019).\nWhen implemented using secure multi-party computation (SMC) (Bonawitz et al., 2017), federated learning eliminates the need for any agent to share its local data. However, it does not protect the agents or their users from inference attacks that combine the learned model with side information. Extensive studies have established that these attacks could lead to blatant reconstruction of the proprietary datasets (Dinur & Nissim, 2003) and identification of individuals (a legal liability for the participating agents) (Shokri et al., 2017). Motivated by this challenge, there had been a number of recent efforts (Truex et al., 2019b; Geyer et al., 2017; McMahan et al., 2018) in developing federated learning methods with differential privacy (DP), which is a well-established definition of privacy that provably prevents such attacks.\nAmong the efforts, DP-FedAvg (Geyer et al., 2017; McMahan et al., 2018) extends the NoisySGD method (Song et al., 2013; Abadi et al., 2016) to the federated learning setting by adding Gaussian noise to the clipped accumulated gradient. The recent state-of-the-art DP-FedSGD (Truex et al., 2019b) is under the same framework but with per-sample gradient clipping. 
A notable limitation of these gradient-based methods is that they require clipping the magnitude of the gradients to τ and adding noise proportional to τ to every coordinate of the shared global model with d parameters. The clipping and perturbation steps introduce either large bias (when τ is small) or large variance (when τ is large), which interferes with SGD convergence and makes it hard to scale up to large-capacity models. In Sec. 3, we concretely demonstrate these limitations with examples and theory.
In particular, we show that FedAvg may fail to decrease the loss function when combined with gradient clipping, and that DP-FedAvg requires many outer-loop iterations (i.e., many rounds of communication to synchronize model parameters) to converge under differential privacy.
To avoid gradient clipping, we propose to conduct the aggregation over the label space, which has been shown to be an effective approach in standard (non-federated) learning settings, e.g., voting-based model-agnostic approaches (Papernot et al., 2017; 2018; Zhu et al., 2020). To achieve this, we relax the traditional federated learning setting to allow unlabeled public data at the server side. We also consider a more complete scenario for federated learning, in which the number of local agents may be either large or limited. Agent-level privacy, as introduced in DP-FedAvg, works seamlessly with the many-agent setting. However, when there are few agents, hiding the entire data belonging to one specific agent becomes burdensome or unnecessary. To this end, we provide a more complete privacy notion with two granularities: agent-level and instance-level. Under each setting, we theoretically and empirically show that the proposed label aggregation method effectively removes the sensitivity issue caused by gradient clipping or noise addition, and achieves a favorable privacy-utility trade-off compared to other DPFL algorithms.
Our contributions are summarized as follows:

1.
We propose two voting-based DPFL algorithms via label aggregation (PATE-FL and Private-kNN-FL) and demonstrate their clear advantages over gradient-aggregation-based DPFL methods (e.g., DP-FedAvg) in terms of communication cost and scalability to high-capacity models. 2. We provide provable differential privacy guarantees under two levels of granularity: agent-level DP and instance-level DP. Each is natural in a particular regime of FL, depending on the number of agents and the size of their data. 3. Extensive evaluation demonstrates that our method improves the privacy-utility trade-off over randomized gradient-based approaches in both agent-level and instance-level cases.
A remark on our novelty. Though PATE-FL and Private-kNN-FL are algorithmically similar to the original PATE (Papernot et al., 2018) and Private-kNN (Zhu et al., 2020), they are not the same, and we adapt them to a new problem: federated learning. The adaptation itself is nontrivial and requires substantial technical innovations. We highlight three challenges below.
• Several key DP techniques that contributed to the success of PATE and Private-kNN in the standard setting are no longer applicable (e.g., privacy amplification by sampling and noisy screening). This is partly because, in standard private learning, the attacker only sees the final models, whereas in FL the attacker can eavesdrop on all network traffic.
• Moreover, PATE and Private-kNN only provide instance-level DP. We show that PATE-FL and Private-kNN-FL also satisfy the stronger agent-level DP. PATE-FL’s agent-level DP parameter is, surprisingly, a factor of 2 better than its instance-level DP parameter, and Private-kNN-FL in addition enjoys a factor-of-k amplification for the instance-level DP.
• A key challenge of FL is the data heterogeneity of individual agents, whereas PATE randomly splits the dataset so that each teacher is identically distributed.
The heterogeneity does not affect our privacy analysis but does make it unclear whether PATE would work. We are the first to report strong empirical evidence that PATE-style DP algorithms remain highly effective in the non-i.i.d. case." }, { "heading": "2 PRELIMINARY", "text": "In this section, we start by introducing the typical notation of federated learning and differential privacy. Then, two randomized gradient-based baselines, DP-FedAvg and DP-FedSGD, are introduced as the DPFL background." }, { "heading": "2.1 FEDERATED LEARNING", "text": "Federated learning (McMahan et al., 2017; Bonawitz et al., 2017; Mohassel & Zhang, 2017; Smith et al., 2017) is a distributed machine learning framework that allows clients to collaboratively train a global model without sharing local data. We consider N agents, where each agent i holds ni data points locally and privately, drawn from a party-specific domain distribution Di. C is the number of classes. The objective is to output a global model that performs well on the target (server) distribution. Most prior works take the target distribution to be the uniform distribution over the union of all local data, which is restrictive in practice. Here we consider an agnostic federated learning scenario (Mohri et al., 2019; Peng et al., 2019c), where the server distribution DG can be different from all agent distributions. In light of this, we assume each agent has access to part of the unlabeled server data drawn from the target distribution DG. FedAvg (McMahan et al., 2017) is a vanilla federated learning algorithm that we consider as a non-DP baseline. In this algorithm, a fraction of agents is sampled at each communication round with probability q. Each selected agent downloads the shared global model and improves it by learning from local data using E iterations of stochastic gradient descent (SGD). We denote this local update process as an inner loop.
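As a concrete illustration, the inner loop just described can be sketched in NumPy for a toy linear model with squared loss; all function names and constants here are our own, not from the paper:

```python
import numpy as np

def inner_loop(w_global, X, y, eta=0.1, E=5):
    """One agent's local update: E SGD steps on its private data (X, y),
    starting from the downloaded global model; only the resulting
    parameter difference leaves the agent."""
    w = w_global.copy()
    for _ in range(E):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-loss gradient
        w -= eta * grad
    return w - w_global  # the update sent back to the server

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(50, 2))
y = X @ w_true
delta = inner_loop(np.zeros(2), X, y)  # moves the model toward w_true
```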
Only the gradient is sent to the server, where it is averaged with the other selected agents’ gradients to improve the global model. The global model is learned after T communication rounds, where each communication round is denoted as one outer loop." }, { "heading": "2.2 DIFFERENTIAL PRIVACY FOR FEDERATED LEARNING", "text": "Differential privacy (Dwork et al., 2006) is a quantifiable and composable definition of privacy that provides provable guarantees against identification of individuals in a private dataset.
Definition 1. A randomized mechanism M : D → R with domain D and range R satisfies (ε, δ)-differential privacy if, for any two adjacent datasets D, D′ ∈ D and for any subset of outputs S ⊆ R, it holds that Pr[M(D) ∈ S] ≤ e^ε Pr[M(D′) ∈ S] + δ.
The definition applies at various levels of granularity, depending on how the adjacent datasets are defined; e.g., if we are to protect whether one agent participates in training, the neighboring datasets are defined by adding or removing the entire local data of that agent. This is known as agent-level (user-level) differential privacy, which has been investigated in DP-FedAvg (Geyer et al., 2017; McMahan et al., 2018). Compared to FedAvg, DP-FedAvg (Figure 1) enforces clipping of the per-agent model gradient to a threshold S and adds noise to the scaled gradient before it is averaged at the server. Note that this DP notion is favored when data samples within one agent reveal the same sensitive information, e.g., cell phone agents sending the same message.
However, when there are only a few agents, hiding the entire dataset of one agent becomes difficult and inappropriate. We then consider instance-level DP, where adjacent datasets differ in one single training example. This definition is consistent with the standard non-federated learning differential privacy (Abadi et al., 2016; Bassily et al., 2014; Chaudhuri et al., 2011).
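The clip-and-perturb aggregation of DP-FedAvg described above can be sketched as follows; calibrating the noise to the sensitivity S/m of the average is our own simplification, while the cited papers use more refined accounting:

```python
import numpy as np

def dp_fedavg_aggregate(updates, S=1.0, sigma=1.0, rng=None):
    """Server-side step: clip each agent's update to l2 norm S, average,
    and add Gaussian noise scaled to the sensitivity S/m of the average,
    where m is the number of sampled agents."""
    rng = rng or np.random.default_rng(0)
    m = len(updates)
    clipped = [u * min(1.0, S / max(np.linalg.norm(u), 1e-12)) for u in updates]
    return np.mean(clipped, axis=0) + rng.normal(0.0, sigma * S / m,
                                                 size=updates[0].shape)

# With sigma = 0 we can inspect the effect of clipping alone:
agg = dp_fedavg_aggregate([np.array([3.0, 0.0]), np.array([0.0, 0.5])],
                          S=1.0, sigma=0.0)
```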
Model training with instance-level DP restricts the adversary’s power in detecting a specific training instance’s presence or absence. DP-FedSGD (Truex et al., 2019a; Peterson et al., 2019), one such state-of-the-art method for instance-level DP, performs NoisySGD (Abadi et al., 2016) for a fixed number of iterations at each agent. The gradient updates are averaged at the server on each communication round, as shown in Figure 1.
SMC is a cryptographic technique that securely aggregates local updates before the server receives them. While SMC does not come with a differential privacy guarantee, it can be combined with DP to amplify the privacy guarantee (Bhowmick et al., 2018; Agarwal et al., 2018; Truex et al., 2019b) against attackers that eavesdrop on what is sent out by each agent. In our experiments, we assume that the aggregation is conducted by SMC for all privacy-preserving algorithms that we consider." }, { "heading": "3 CHALLENGES FOR GRADIENT-BASED FEDERATED LEARNING", "text": "In this section, we highlight the main challenges of conventional DPFL frameworks in terms of accuracy, convergence and communication cost. For other challenges, we refer the reader to a survey (Kairouz et al., 2019). The details of DP-FedAvg are summarized in the algorithm section of the appendix." }, { "heading": "3.1 CHALLENGE 1: BIASED GRADIENT ESTIMATION", "text": "Recent works (Li et al., 2018) have shown that FedAvg may not converge well under heterogeneity (e.g., non-identical distributions). Here, we provide a simple example to show that the clipping step of DP-FedAvg may raise an additional challenge.
Example 2 (clipping). Let N = 2, and let agent i’s local update (after E iterations of SGD) be ∆i. We enforce clipping of the per-agent update ∆i by computing ∆i/max(1, ||∆i||2/τ), where τ is the clipping threshold. Consider the special case where ||∆1||2 = τ + α and ||∆2||2 ≤ τ. Then the global update will be (1/2)(τ∆1/||∆1||2 + ∆2), which is biased.
The unbiased global update would be (1/2)(∆1 + ∆2).
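The bias in Example 2 is easy to check numerically; τ, the two updates, and the implicit α = 0.5 below are arbitrary illustrative values:

```python
import numpy as np

def clip(u, tau):
    """Per-agent clipping from Example 2: u / max(1, ||u||_2 / tau)."""
    return u / max(1.0, np.linalg.norm(u) / tau)

tau = 1.0
d1 = np.array([1.5, 0.0])   # ||d1||_2 = tau + 0.5, so it gets rescaled
d2 = np.array([0.0, 0.8])   # ||d2||_2 <= tau, so it passes through

clipped_avg = 0.5 * (clip(d1, tau) + clip(d2, tau))   # -> [0.5, 0.4]
unbiased_avg = 0.5 * (d1 + d2)                        # -> [0.75, 0.4]
```

The first coordinate shrinks from 0.75 to 0.5: the clipped average systematically underestimates the direction contributed by the large update.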
Such a simple example can be embedded in more realistic problems, causing substantial bias that leads to non-convergence." }, { "heading": "3.2 CHALLENGE 2: SLOW CONVERGENCE", "text": "Recent works (Li et al., 2019; Wang et al., 2019) have investigated the convergence rate of FL methods. Here, we draw connections to DP-FedAvg’s convergence rate and demonstrate that using many outer-loop iterations (T) raises a similar convergence issue under differential privacy.
When E = 1 in the local update (inner loop), the FedAvg algorithm is equivalent to SGD with distributed data, which requires many rounds of communication. The appeal of FedAvg is to set E larger, so that each agent performs E iterations to update its own parameters before synchronizing them with the global model, hence reducing the number of communication rounds. However, setting E > 1 may not improve convergence at all.
Now, we take a closer look at the effect of increasing E in the case of piecewise linear functions. Let η be the learning rate for individual agents. In the convergence section of the appendix, we establish that, for a large family of optimization problems with piecewise linear objective functions, the effect of increasing E is essentially that of increasing the learning rate. It is known that for the family of G-Lipschitz functions supported on a B-bounded domain, any Krylov-space method¹ has a rate of convergence that is lower bounded by Ω(BG/√T) (Nesterov, 2003, Section 3.2.1). This indicates that the variant of FedAvg that aggregates only the loss-function part of the gradient, or projects only when synchronizing, requires Ω(1/α^2) rounds of the outer loop (i.e., communication) in order to converge to an α-stationary point; that is, increasing E does not help, even if no noise is added.
This also says that DP-FedAvg is essentially the same as the stochastic subgradient method at almost all locations of a piecewise linear objective function, with gradient noise distributed as N(0, (σ^2/N) I_d).
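The claim that E local steps only amount to one step with learning rate Eη can be verified on a single linear piece, where the subgradient g is constant; the numbers below are arbitrary:

```python
import numpy as np

def local_sgd_on_linear_piece(w, g, eta, E):
    """E subgradient steps on a linear piece f(w) = <g, w>."""
    for _ in range(E):
        w = w - eta * g
    return w

w0 = np.array([1.0, -1.0])
g = np.array([0.3, 0.2])   # the (constant) subgradient on this piece
eta, E = 0.1, 5

w_inner = local_sgd_on_linear_piece(w0, g, eta, E)
w_single = w0 - (E * eta) * g   # one step with the inflated rate E * eta
```

The two endpoints coincide, so on piecewise linear objectives the inner loop buys no progress beyond a larger effective step size.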
The additional noise in DP-FedAvg poses further challenges to convergence. If we plan to run T rounds and achieve (ε, δ)-DP, we need to choose σ = ηEG√(2T log(1.25/δ))/(Nε) (see, e.g., McMahan et al., 2018, Theorem 1), which results in a convergence rate upper bound of
GB√(1 + 2Td log(1.25/δ)/(N^2 ε^2)) / √T = O( GB/√T + GB√(d log(1.25/δ))/(Nε) ),
for an optimal choice of the learning rate Eη.
¹One that outputs a solution in the subspace spanned by a sequence of subgradients.
The above bound is tight for stochastic subgradient methods, and in fact also information-theoretically optimal. The GB/√T part of the upper bound matches the information-theoretical lower bound for all methods that have access to T calls of a stochastic subgradient oracle (Agarwal et al., 2009, Theorem 1), while the second term matches the information-theoretical lower bound for all (ε, δ)-differentially private methods at the agent level (Bassily et al., 2014, Theorem 5.3). That is, the first term indicates that there must be many rounds of communication, while the second term says that the dependence on the ambient dimension d is unavoidable for DP-FedAvg. Clearly, our method also has such a dependence in the worst case, but it is easier for our approach to adapt to structure that exists in the data (i.e., high consensus among votes), as we will illustrate later. In contrast, the dependence has a larger impact on DP-FedAvg, since it needs to explicitly add noise with variance Ω(d).
Another observation is that when N is small, no DP method with reasonable ε, δ parameters is able to achieve high accuracy. This partially motivates us to consider the other regime, which deals with instance-level DP." }, { "heading": "3.3 OTHER CHALLENGES", "text": "Expensive Communication Cost: Up-stream communication cost (Konečný et al., 2016), i.e., the total volume of updates transmitted from the local agents to the server, is another key concern in FL. For FedAvg, our convergence analysis suggests that increasing E does not speed up the convergence.
A high communication cost is therefore expected until the model converges. CpSGD (Agarwal et al., 2018) is another DPFL method, aiming at reducing the communication cost by gradient quantization with binomial noise. However, sampling from a binomial distribution can be difficult on devices, which prevents it from being practical in real-world scenarios.
Network Complexity: DP-FedAvg requires clipping the gradient magnitude to τ and perturbing every coordinate of the parameters, which makes it hard to scale up to large models, as the noise level grows in proportion to the network capacity. To address this issue, recent works apply delicate clipping strategies (McMahan et al., 2018; Geyer et al., 2017) and reduce the data dimension with PCA (Abadi et al., 2016). In this work, we propose to avoid such dimension dependence and empirically investigate how the network architecture affects performance in various DPFL approaches." }, { "heading": "4 ALGORITHM", "text": "We assume there are unlabeled data drawn from DG at the server, which are public and accessible to any agent. The goal is to design an (ε, δ)-DP algorithm (at either the agent level or the instance level) that outputs pseudo-labels for a subset of the server’s unlabeled data. A global model is then trained in a semi-supervised way, using the pseudo-labeled and unlabeled data.
PATE-FL In PATE-FL (Algorithm 1), each agent i trains a local “teacher” model fi using its own private local data. For each “student” query xt, every agent adds Gaussian noise to its prediction (a C-dimensional histogram); the noisy predictions are aggregated via SMC, and the label with the most votes is returned to the server as the “pseudo-label” of xt. Similar to the original PATE, the idea behind the privacy guarantee is that adding or removing any instance can change at most one agent’s prediction. The same argument also naturally applies to adding or removing one agent.
In fact, we gain a factor of 2 in the stronger agent-level DP due to a smaller sensitivity (see the proof for details)! Another important difference is that in the original PATE the teachers are trained on random splits of the data, while in our case the agents naturally come with different distributions. We propose to optionally use domain adaptation techniques to mitigate these differences when training the “teachers”.
Private-kNN-FL Next, we present how the teachers fi are constructed in the Private-kNN-FL method (see Algorithm 2). Each agent has a data-independent feature extractor φ. For every unlabeled query xt, agent i finds the ki nearest neighbors of xt in its local data by measuring the Euclidean distance in the feature space R^{dφ}, and fi(xt) outputs the frequency vector of the votes of these nearest neighbors. Subsequently, the fi(xt) from all agents are privately aggregated, and the argmax of the noisy voting scores is returned to the server.
Different from the original Private-kNN (Zhu et al., 2020), we apply kNN to each agent’s local data instead of the entire private dataset. This distinction allows us to receive up to kN neighbors while bounding the contribution of individual agents by k.
Algorithm 1 PATE-FL
Input: noise σ, global data DG, Q queries
1: for i in N clients do
2:   Train local model fi using Di
3: for t = 0, 1, ..., Q, pick xt ∈ DG do
4:   for each agent i in 1, ..., N do
5:     f̃i(xt) = fi(xt) + N(0, (σ^2/N) I_C)
6:   end for
7:   ỹt = arg max_{y∈{1,...,C}} [Σ_{i=1}^N f̃i(xt)]_y
8: end for
9: Train a global model θ using (xt, ỹt)_{t=1}^Q
Algorithm 2 Private-kNN-FL
Input: noise σ, global data DG, Q queries
1: for t = 0, 1, ..., Q, pick xt ∈ DG do
2:   for each agent i in 1, ..., N do
3:     Apply φ to Di and xt
4:     y1, ..., yk ← labels of the top-k closest neighbors
5:     f̃i(xt) = (1/k)(Σ_{j=1}^k yj) + N(0, (σ^2/N) I_C)
6:   end for
7:   ỹt = arg max_{y∈{1,...,C}} [Σ_{i=1}^N f̃i(xt)]_y
8: end for
9: Train a global model θ using (xt, ỹt)_{t=1}^Q
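The aggregation step shared by Algorithms 1 and 2 can be sketched as follows; the function name and the toy scores are ours, while the per-agent N(0, σ²/N) per-coordinate noise matches the listings above:

```python
import numpy as np

def private_vote(agent_scores, sigma, rng=None):
    """Noisy label aggregation: each of the N agents submits a C-dim score
    vector (a class histogram in PATE-FL, a k-NN frequency vector in
    Private-kNN-FL) plus per-coordinate N(0, sigma^2/N) noise; the argmax
    of the noisy sum is released as the pseudo-label."""
    rng = rng or np.random.default_rng(0)
    N = len(agent_scores)
    noisy = [f + rng.normal(0.0, sigma / np.sqrt(N), size=f.shape)
             for f in agent_scores]
    return int(np.argmax(np.sum(noisy, axis=0)))

# 10 agents, 3 classes, strong consensus on class 2:
scores = [np.array([0.1, 0.1, 0.8])] * 9 + [np.array([0.9, 0.05, 0.05])]
label = private_vote(scores, sigma=0.5)  # -> 2 with overwhelming probability
```

Note how the sensitivity is bounded by construction: adding or removing one agent changes the sum by at most one score vector, with no clipping of model gradients involved.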
Compared to PATE-FL, this approach enjoys a stronger instance-level DP guarantee, since the sensitivity of adding or removing one instance is a factor of k/2 smaller than that at the agent level." }, { "heading": "4.1 PRIVACY ANALYSIS", "text": "We provide our privacy analysis based on Rényi differential privacy (RDP) (Mironov, 2017). RDP inherits and generalizes the information-theoretical properties of DP, and has been used for the privacy analysis of DP-FedAvg. We defer the background on RDP, its connection to DP, and all proofs of our technical results to the RDP section of the appendix. Theorem 3 (Privacy guarantee). Let PATE-FL and Private-kNN-FL answer Q queries with noise scale σ. For agent-level protection, both algorithms guarantee (α, Qα/(2σ^2))-RDP for all α ≥ 1. For instance-level protection, PATE-FL and Private-kNN-FL obey (α, Qα/σ^2)- and (α, Qα/(kσ^2))-RDP, respectively.
This theorem says that both algorithms achieve agent-level and instance-level differential privacy. With the same noise injected into the agents’ outputs, Private-kNN-FL enjoys a stronger instance-level DP (by a factor of k/2) compared to its agent-level guarantee, while PATE-FL’s instance-level DP is weaker by a factor of 2.
Improved accuracy and privacy with a large margin: Let f1, ..., fN : X → ∆^{C−1}, where ∆^{C−1} denotes the probability simplex (the soft-label space). Note that both algorithms we propose can be viewed as voting among these local agents, which output a probability distribution in ∆^{C−1}. First, let us define the margin parameter γ(x) as the difference between the largest and second largest coordinates of (1/N) Σ_{i=1}^N fi(x). Lemma 4.
Conditioning on the teachers, for each public data point x, if the noise added to each coordinate of (1/N) Σ_{i=1}^N fi(x) is drawn from N(0, σ^2/N^2), then with probability ≥ 1 − C exp(−N^2 γ(x)^2/(8σ^2)), the privately released label matches the majority vote without noise.
The proof (in the appendix) is a straightforward application of Gaussian tail bounds and a union bound over the C coordinates. This lemma implies that for all public data points x such that γ(x) ≥ 2σ√(2 log(C/δ))/N, the output label matches the noiseless majority vote with probability at least 1 − δ. Next, we show that for those data points x for which γ(x) is large, the privacy loss of releasing arg max_j [(1/N) Σ_{i=1}^N fi(x)]_j is exponentially smaller.
Theorem 5. For each public data point x, the mechanism that releases arg max_j [(1/N) Σ_{i=1}^N fi(x) + N(0, (σ^2/N^2) I_C)]_j obeys (α, ε)-data-dependent RDP, where
ε ≤ C e^{−N^2 γ(x)^2/(8σ^2)} + (1/(α − 1)) log( 1 + e^{(2α−1)s^2/(2σ^2) − N^2 γ(x)^2/(16σ^2) + log C} ),
where s = 1 for PATE-FL and s = 1/k for Private-kNN-FL.
This bound implies that when the margin of the voting scores is large, the agents enjoy exponentially stronger (data-dependent) differential privacy guarantees at both the agent level and the instance level. In other words, our proposed methods avoid the dependence on the model dimension d that is inherent in DP-FedAvg, and can answer queries at almost no privacy cost when there is a high consensus among the votes from the local agents." }, { "heading": "4.2 COMMUNICATION COST", "text": "Finally, regarding the communication issue, our proposed methods are fully parallel, as each agent works independently without any synchronization. Overall, we reduce the up-stream communication cost from d · T floats (model size times T rounds) to C · Q floats in one round." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We verify our PATE-FL for agent-level DP on Digit (LeCun et al., 1998; Netzer et al., 2011) and CelebA (Liu et al., 2015).
Then, we evaluate Private-kNN-FL on Office-Caltech10 (Gong et al., 2012) and DomainNet (Peng et al., 2019a) for instance-level DP. Five independent rounds of experiments are conducted to report the mean accuracy and its standard deviation. We use both labeled and unlabeled data on the Digit datasets but only labeled data for all other datasets. We defer the experimental details to the appendix.
5.1 EVALUATION ON AGENT-LEVEL DP
Digit Datasets Evaluation: MNIST, SVHN and USPS, together referred to as the Digit datasets, form a controlled setting that mimics the real case where agent-to-server or agent-to-agent distributions can differ. We simulate 140 agents using SVHN with 3000 records each and 60 agents using MNIST with 1000 records each. USPS serves as the unlabeled public data, of which 3000 records can be accessed by the local agents and the remaining records are used for testing.
In Table 1, our methods PATE-FL and PATE-FL+DA are compared to private and non-private baselines. PATE-FL+DA is based on PATE-FL, where each agent model is trained with a domain adaptation (DA) technique (Ganin et al., 2016). FedAvg+DA is the variant of FedAvg with the same DA technique. We observe:
(1) When the privacy cost of DP-FedAvg and PATE-FL is close, our method significantly improves the accuracy from 76.3% to 83.8%. (2) The further improved accuracy of 92.5% for PATE-FL+DA demonstrates that our framework can orthogonally benefit from DA techniques, which remains highly uncertain for the gradient-based methods. (3) Both FedAvg and DP-FedAvg perform better than their DA variants. The possible reason might be that FL with domain adaptation is more closely related to multi-source domain adaptation (Peng et al., 2019b) than to traditional domain adaptation. In other words, averaging the gradients of domain adaptation methods implies averaging different trajectories towards the server’s distribution, which may not work in practice.
How to improve DP-FedAvg variants with DA techniques remains an open problem.
CelebA Dataset Evaluation: CelebA is a 220k face-attribute dataset with 40 defined attributes. 300 agents are created with partitioned training data. We split off 600 unlabeled data points at the server, and the remaining 59,400 images are used for testing. Detailed settings are deferred to the appendix. Consistent with the Digit datasets, our method achieves a clear performance gain of 1.8% compared to DP-FedAvg while maintaining the same privacy cost.
MNIST Dataset with Non-i.i.d. Partition Evaluation: In both the CelebA and Digit experiments, we partition each dataset into different agents i.i.d. To investigate our proposed algorithm under a non-i.i.d. partition scenario, we choose an experimental setup similar to that of (McMahan et al., 2017). We divide the training set of sorted MNIST into 100 agents, such that each agent has samples from 6 digits only. This way, each agent gets 600 data points from 6 classes. We split off 30% of the MNIST testing set as the available unlabeled public data, with the remaining testing set used for testing. As shown in Table 1, our method achieves consistently better performance than DP-FedAvg. Moreover, we plot the privacy-accuracy trade-off in Figure 2. For every fixed privacy budget on the x-axis, we do a grid search over all hyperparameters (e.g., #queries and noise scale for PATE-FL; #communication rounds and noise scale for DP-FedAvg). In the figure, the accuracy of PATE-FL is consistently higher than that of DP-FedAvg." }, { "heading": "5.2 EVALUATION ON INSTANCE-LEVEL DP", "text": "When agents are few, preserving privacy across agents becomes hard and less meaningful. We then focus on preserving each instance’s privacy, a.k.a. instance-level DP. FedAvg is the non-private baseline.
Office-Caltech Evaluation: Office-Caltech consists of data from four domains: Caltech (C), Amazon (A), Webcam (W) and DSLR (D).
We pick one domain as the server each time, and the remaining ones serve as local agents (e.g., in A, C, D → W, Webcam is treated as the server). We split 70% of the data from the server domain as publicly available unlabeled data, while the remaining 30% is used for testing. For Private-kNN-FL, we instantiate the public feature extractor using the network backbone without the classifier layer. Both AlexNet and ResNet50 are ImageNet-pretrained. We set σ = 15 for Private-kNN-FL with AlexNet and σ = 25 for ResNet50. To address the domain adaptation issue, each agent can choose a smaller k if it observes that the domain gap is large, as a smaller k implies a more selective set of neighbors. In our experiments, we set k to 5% of the local data size (i.e., each agent returns the noisy top-5% neighbors’ predictions).
We observe in Table 2 that DP-FedSGD degrades when the backbone changes from the lightweight AlexNet to the heavyweight ResNet50, while ours improves by 10%. This is because a larger model capacity leads to a more sensitive response to gradient clipping and noise injection. In contrast, our Private-kNN-FL avoids the gradient operation via label aggregation and can still benefit from the larger model capacity. Again, our method achieves a consistently better utility-privacy trade-off: maintaining the same privacy cost, it achieves significantly better utility; maintaining the same utility, it achieves a much lower privacy cost.
DomainNet Evaluation: DomainNet contains 0.6 million images of 345 categories from six domains: Clipart, Painting, Real, Quickdraw, Infograph and Sketch. As it is a challenging dataset even in the non-private setting (Peng et al., 2019c), we only consider seven fruit classes (apple, banana, grapes, strawberry, watermelon, pear, pineapple) for demonstration. A large domain shift exists between infograph/quickdraw and the other domains (Peng et al., 2019c). Thus, we only report results on cases where servers are chosen from Clipart, Painting and Real.
Data from the five remaining domains are assigned to five agents, respectively. 70% of the left-out (server) domain’s data is used for the server and the remaining 30% for testing.\nTable 3 compares our Private-kNN-FL method with DP-FedSGD. We observe that when the privacy costs are closely aligned, our method outperforms DP-FedSGD by more than 10% in accuracy across all three cases. When the accuracies are closely aligned, our method saves more than 60% of the privacy cost, showing a consistent advantage over DP-FedSGD." }, { "heading": "5.3 ABLATION STUDY", "text": "In this section, we investigate the agent-level privacy-utility trade-off with respect to the number of agents and the volume of local data. MNIST is utilized for generality and simplicity. We randomly pick 1000 test samples as the unlabeled server data and use the remaining 9000 for testing. We adopt the model structure proposed in (Abadi et al., 2016) for both of our methods.\nEffect of Data per Agent: We fix the number of agents to 100 and vary the number of data points per agent over {50, 100, 200, 400, 600}. Varying only the “data per agent” factor, we fairly tune the other privacy parameters of each method to its best performance. In Figure 3 (a), as “data per agent” increases, all the methods improve as the overall dataset volume increases. Our method achieves consistently higher accuracy than DP-FedAvg. The failure cases for both methods occur when “data per agent” is below 50, which is not enough to train the local agent models well. Label aggregation over such weak local models results in failure or sub-optimal performance.\nEffect of Number of Agents: In Figure 3 (b), we vary N ∈ {50, 100, 200, 400, 800} and fix the overall privacy budget at ε = 5, δ = 10−3. Following (Geyer et al., 2017), each agent has exactly 600 data points, where data samples are duplicated when N ∈ {200, 400, 800}. We conduct a grid search for each method to obtain optimal hyper-parameters. Our method shows a clear performance advantage over DP-FedAvg.
We also see that DP-FedAvg gradually approaches our method as the number of agents increases." }, { "heading": "6 CONCLUSIONS", "text": "In this work, we propose voting-based approaches for differentially private federated learning (DPFL) under two privacy regimes: agent-level and instance-level. We thoroughly investigate the real-world challenges of DPFL and demonstrate the advantages of our methods over gradient aggregation-based DPFL methods in terms of utility, convergence, reliance on network capacity, and communication cost. Extensive empirical evaluation shows that our methods improve the privacy-utility trade-off in both privacy regimes." } ]
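The label-aggregation step at the core of the voting-based approach above can be sketched as a noisy plurality vote over agent predictions. This is a minimal sketch: the function name, the Gaussian noise mechanism, and the parameter values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def aggregate_votes(agent_predictions, num_classes, sigma, rng=None):
    """Noisy plurality vote over per-agent label predictions (PATE-style).

    `agent_predictions` holds each agent's predicted class for one unlabeled
    server example; `sigma` is the privacy noise scale.  Both the name and
    the Gaussian mechanism are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(agent_predictions, minlength=num_classes).astype(float)
    counts += rng.normal(0.0, sigma, size=num_classes)  # calibrated DP noise
    return int(np.argmax(counts))

# 100 agents vote on one server example; 80 of them agree on class 3.
rng = np.random.default_rng(0)
votes = np.concatenate([np.full(80, 3), rng.integers(0, 10, size=20)])
print(aggregate_votes(votes, num_classes=10, sigma=1.0, rng=rng))  # 3
```

The server would query such a vote for each unlabeled public example and train its model on the resulting pseudo-labels, so the privacy cost grows with the number of queries rather than with the number of communication rounds.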
2020
null
SP:2003bbcbbf2f16f6e54353a8b6f58c613343ecc0
[ "This work proposes a novel neuro-algorithmic policy architecture for solving discrete planning tasks. It takes a high-dimensional image input and processes it through modified ResNet encoders to obtain a graph cost map and a start/goal heatmap. This is fed into a differentiable Dijkstra algorithm to obtain the shortest trajectory prediction which is trained using an expert-annotated trajectory via a Hamming distance loss. This module is evaluated in two dynamic game environments demonstrating generalization to unseen scenes.", "This paper presents a method to train a neural network to predict the time-dependent costs, and start and goal states needed to run time-dependent shortest-path planning in a dynamic 2-D environment. The non-differentiability of the path planning is handled by recent work on differentiating through blackbox combinatorial solvers from [1]. The method is trained in a supervised manner from expert trajectories. Evaluations are presented on 2-D time-varying games where the addition of the path-planner is shown to improve performance over an imitation learning and PPO baseline." ]
Although model-based and model-free approaches to learning the control of systems have achieved impressive results on standard benchmarks, generalization to variations in the task is still unsatisfactory. Recent results suggest that generalization of standard architectures improves only after obtaining exhaustive amounts of data. We give evidence that the generalization capabilities are in many cases bottlenecked by the inability to generalize on the combinatorial aspects. Further, we show that for a certain subclass of the MDP framework, this can be alleviated by neuro-algorithmic architectures. Many control problems require long-term planning that is hard to solve generically with neural networks alone. We introduce a neuro-algorithmic policy architecture consisting of a neural network and an embedded time-dependent shortest path solver. These policies can be trained end-to-end by blackbox differentiation. We show that this type of architecture generalizes well to unseen variations in the environment already after seeing a few examples.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Navid Aghasadeghi", "Timothy Bretl" ], "title": "Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals", "venue": "In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2011 }, { "authors": [ "Brandon Amos", "Denis Yarats" ], "title": "The differentiable cross-entropy method", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Brandon Amos", "Lei Xu", "J Zico Kolter" ], "title": "Input convex neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Masataro Asai", "Alex Fukunaga" ], "title": "Classical planning in deep latent space: Bridging the subsymbolicsymbolic boundary", "venue": "arXiv preprint arXiv:1705.00154,", "year": 2017 }, { "authors": [ "Masataro Asai", "Hiroshi Kajino" ], "title": "Towards stable symbol grounding with zero-suppressed state autoencoder", "venue": "In Proceedings of the International Conference on Automated Planning and Scheduling,", "year": 2019 }, { "authors": [ "Quentin Berthet", "Mathieu Blondel", "Olivier Teboul", "Marco Cuturi", "Jean-Philippe Vert", "Francis Bach" ], "title": "Learning with differentiable perturbed optimizers", "venue": "arXiv preprint arXiv:2002.08676,", "year": 2020 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic programming and optimal control, volume 1", "venue": "Athena scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Homanga Bharadhwaj", "Kevin Xie", "Florian Shkurti" ], "title": "Model-predictive control via cross-entropy and gradient-based optimization", "venue": "arXiv preprint arXiv:2004.08763,", "year": 2020 }, { "authors": [ "Sebastian Blaes", "Marin Vlastelica 
Pogančić", "Jiajie Zhu", "Georg Martius" ], "title": "Control what you can: Intrinsically motivated task-planning agent", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Charles Blundell", "Benigno Uria", "Alexander Pritzel", "Yazhe Li", "Avraham Ruderman", "Joel Z Leibo", "Jack Rae", "Daan Wierstra", "Demis Hassabis" ], "title": "Model-free episodic control", "venue": "arXiv preprint arXiv:1606.04460,", "year": 2016 }, { "authors": [ "Felix Burget", "Maren Bennewitz", "Wolfram Burgard" ], "title": "Bi 2 rrt*: An efficient sampling-based path planning framework for task-constrained mobile manipulation", "venue": "In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2016 }, { "authors": [ "Yize Chen", "Yuanyuan Shi", "Baosen Zhang" ], "title": "Optimal control via neural networks: A convex approach", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning", "venue": "arXiv preprint:1912.01588,", "year": 2019 }, { "authors": [ "Nathaniel D Daw", "Yael Niv", "Peter Dayan" ], "title": "Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control", "venue": "Nature neuroscience,", "year": 2005 }, { "authors": [ "Emir Demirovic", "Peter J. Stuckey", "James Bailey", "Jeffrey Chan", "Christopher Leckie", "Kotagiri Ramamohanarao", "Tias Guns" ], "title": "Predict+optimise with ranking objectives: Exhaustively learning linear functions", "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "E.W. Dijkstra" ], "title": "A note on two problems in connexion with graphs", "venue": "Numer. 
Math.,", "year": 1959 }, { "authors": [ "Josip Djolonga", "Andreas Krause" ], "title": "Differentiable learning of submodular models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Adam N. Elmachtoub", "Paul Grigas" ], "title": "Smart \"predict, then optimize\"", "venue": "ArXiv,", "year": 2017 }, { "authors": [ "Ben Eysenbach", "Russ R Salakhutdinov", "Sergey Levine" ], "title": "Search on the replay buffer: Bridging planning and reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Aaron Ferber", "Bryan Wilder", "Bistra Dilkina", "Milind Tambe" ], "title": "Mipaal: Mixed integer program as a layer", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Deep visual foresight for planning robot motion", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Jonathan D Gammell", "Siddhartha S Srinivasa", "Timothy D Barfoot" ], "title": "Informed rrt*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic", "venue": "In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2014 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning, volume 97 of ICML’19,", "year": 2019 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 
2019 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Péter Karkus", "Xiao Ma", "David Hsu", "Leslie Pack Kaelbling", "Wee Sun Lee", "Tomás Lozano-Pérez" ], "title": "Differentiable algorithm networks for composable robot learning", "venue": "Robotics: Science and Systems XV, University of Freiburg, Freiburg im Breisgau,", "year": 2019 }, { "authors": [ "Rahul Kumar", "Aditya Mandalika", "S. Choudhury", "S. Srinivasa" ], "title": "Lego: Leveraging experience in roadmap generation for sampling-based planning", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2019 }, { "authors": [ "Yen-Ling Kuo", "Andrei Barbu", "Boris Katz" ], "title": "Deep sequential models for sampling-based planning", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2018 }, { "authors": [ "Yunzhu Li", "Hao He", "Jiajun Wu", "Dina Katabi", "Antonio Torralba" ], "title": "Learning compositional koopman operators for model-based control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jaynta Mandi", "Emir Demirovic", "Peter J. 
Stuckey", "Tias Guns" ], "title": "Smart predict-and-optimize for hard combinatorial optimization problems", "venue": "CoRR, abs/1911.10092,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gergely Neu", "Csaba Szepesvári" ], "title": "Apprenticeship learning using inverse reinforcement learning and gradient methods", "venue": "arXiv preprint arXiv:1206.5264,", "year": 2012 }, { "authors": [ "Andrew Y Ng", "Stuart Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In in Proc. 17th International Conf. on Machine Learning. Citeseer,", "year": 2000 }, { "authors": [ "Vlad Niculae", "André FT Martins", "Mathieu Blondel", "Claire Cardie" ], "title": "Sparsemap: Differentiable sparse structured inference", "venue": "arXiv preprint arXiv:1802.04223,", "year": 2018 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee" ], "title": "Value prediction network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 2014 }, { "authors": [ "Sébastien Racanière", "Théophane Weber", "David Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adrià Puigdomènech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Siddharth Reddy", "Anca D Dragan", "Sergey Levine" ], "title": "Sqil: Imitation learning via reinforcement learning with sparse rewards", "venue": "arXiv preprint arXiv:1905.11108,", "year": 2019 }, { "authors": [ "M. Rolínek", "V. Musil", "A. Paulus", "M. Vlastelica", "C. 
Michaelis", "G. Martius" ], "title": "Optimizing rankingbased metrics with blackbox differentiation", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Nikolay Savinov", "Alexey Dosovitskiy", "Vladlen Koltun" ], "title": "Semi-parametric topological memory for navigation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "David Silver", "Hado Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel DulacArnold", "David Reichert", "Neil Rabinowitz", "Andre Barreto" ], "title": "The predictron: End-to-end learning and planning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "David Silver", "Hado van Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel DulacArnold", "David Reichert", "Neil Rabinowitz", "Andre Barreto", "Thomas Degris" ], "title": "The predictron: End-to-end learning and planning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Aravind Srinivas", "Allan Jabri", "Pieter Abbeel", 
"Sergey Levine", "Chelsea Finn" ], "title": "Universal planning networks: Learning generalizable representations for visuomotor control", "venue": "In International Conference on Machine Learning, volume 80 of ICML’18,", "year": 2018 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Richard S Sutton", "Csaba Szepesvári", "Alborz Geramifard", "Michael Bowling" ], "title": "Dyna-style planning with linear function approximation and prioritized sweeping", "venue": "In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence,", "year": 2008 }, { "authors": [ "Aviv Tamar", "Yi Wu", "Garrett Thomas", "Sergey Levine", "Pieter Abbeel" ], "title": "Value iteration networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "M. Vlastelica", "A. Paulus", "V. Musil", "G. Martius", "M. Rolínek" ], "title": "Differentiation of blackbox combinatorial solvers", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Po-Wei Wang", "Priya L Donti", "Bryan Wilder", "Zico Kolter" ], "title": "Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver", "venue": null, "year": 1905 }, { "authors": [ "Ga Wu", "Buser Say", "Scott Sanner" ], "title": "Scalable planning with deep neural network learned transition models", "venue": "Journal of Artificial Intelligence Research,", "year": 2020 }, { "authors": [ "Hengshuai Yao", "Shalabh Bhatnagar", "Dongcui Diao", "Richard S Sutton", "Csaba Szepesvári" ], "title": "Multistep dyna planning for policy evaluation and control", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Ryo Yonetani", "Tatsunori Taniai", "Mohammadamin Barekatain", "Mai Nishimura", "Asako Kanezaki" ], "title": "Path planning using 
neural a* search", "venue": "arXiv preprint arXiv:2009.07476,", "year": 2020 }, { "authors": [ "Cobbe" ], "title": "We modified the LEAPER environment to make discrete steps in the world in order to make our method applicable. This involved making the logs on the river move in discrete steps as well as the agent. For an additional description of the PROCGEN environmnets", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of the central topics in machine learning research is learning control policies for autonomous agents. Many different problem settings exist within this area. On one end of the spectrum are imitation learning approaches, where prior expert data is available and the problem becomes a supervised learning problem. On the other end of the spectrum lie approaches that require interaction with the environment to obtain data for policy extraction, which raises the problem of exploration. Most Reinforcement Learning (RL) algorithms fall into the latter category. In this work, we concern ourselves primarily with the setting where limited expert data is available, and a policy needs to be extracted by imitation learning.\nIndependently of how a policy is extracted, a central question of interest is: how well will it generalize to variations in the environment and the task? Recent studies have shown that standard deep RL algorithms require exhaustive amounts of exposure to environmental variability before starting to generalize (Cobbe et al., 2019).\nThere exist several approaches addressing the problem of generalization in control. One option is to employ model-based approaches that learn a transition model from data and use planning algorithms at runtime. This has been argued to be the best strategy in the presence of an accurate model and sufficient computation time (Daw et al., 2005). However, learning a precise transition model is often harder than learning a policy. Models are more general than policies, but this comes at the cost of increased problem dimensionality: the transition model has a much larger dimensionality and needs to capture aspects of the environmental dynamics that may be irrelevant for the task. This is particularly true for learning in problems with high-dimensional inputs, such as raw images. 
In order to alleviate this problem, learning specialized or partial models has been shown to be a viable alternative, e.g. in MuZero Schrittwieser et al. (2019).\nWe propose to use recent advances in making combinatorial algorithms differentiable in a blackbox fashion, as proposed by Vlastelica et al. (2020), to train neuro-algorithmic policies with embedded planners end-to-end. More specifically, we use a time-dependent shortest path planner acting on a temporally evolving graph generated by a deep network from the inputs. This enables us to learn the time-evolving costs of the graph and relates our approach to model-based methods. We demonstrate the effectiveness of this approach in an offline imitation learning setting, where a few expert trajectories are provided. Due to the combinatorial generalization capabilities of planners, our learned policy is able to generalize to new variations in the environment out of the box and orders of magnitude faster than naive learners. Using neuro-algorithmic architectures facilitates generalization by shifting the combinatorial aspect of the problem to efficient algorithms, while using neural networks to extract a good representation for the problem at hand. They have the potential to endow artificial agents with the main component of intelligence, the ability to reason.\nOur contributions can be summarized as follows:\n• We identify that poor generalization is caused by a lack of structural and combinatorial inductive biases and can be alleviated by introducing the correct inductive biases through neuro-algorithmic policies. • We show that architectures embedding TDSP solvers are applicable beyond goal-reaching environments. • We demonstrate learning neuro-algorithmic policies in dynamic game environments from images."
}, { "heading": "2 RELATED WORK", "text": "Planning There exist multiple lines of work aiming to improve classical planning algorithms such as improving sampling strategies of Rapidly-exploring Random Trees (Gammell et al., 2014; Burget et al., 2016; Kuo et al., 2018). Similarly, along this direction, Kumar et al. (2019) propose a conditional VAE architecture for sampling candidate waypoints. Orthogonal to this are approaches that learn representations such that planning is applicable in the latent space. Hafner et al. (2019) employ a latent multi-step transition model. Savinov et al. (2018) propose a semi-parametric method for mapping observations to graph nodes and then applying a shortest path algorithm. Asai & Fukunaga (2017); Asai & Kajino (2019) use an autoencoder architecture in order to learn a discrete transition model suitable for classical planning algorithms. Li et al. (2020) learn compositional Koopman operators with graph neural networks mapping to a linear dynamics latent space, which allows for fast planning. Chen et al. (2018); Amos et al. (2017) perform efficient planning by using a convex model formulation and convex optimization. Alternatively, the replay buffer can be used as a non-parametric model in order to select waypoints (Eysenbach et al., 2019) or in an MPC fashion (Blundell et al., 2016). None of these methods perform differentiation through the planning algorithm in order to learn better latent representations.\nDifferentiation through planning Embedding differentiable planners has been proposed in previous works, e.g. in the continuous case with CEM (Amos & Yarats, 2020; Bharadhwaj et al., 2020). Wu et al. (2020) use a (differentiable) recurrent neural network as a planner. Tamar et al. (2016) use a differentiable approximation of the value iteration algorithm to embed it in a neural network. Silver et al. (2017b) differentiate through a few steps of value prediction in a learned MDP to match the externally observed rewards. 
Srinivas et al. (2018) use a differentiable forward dynamics model in latent space. Karkus et al. (2019) suggest a neural network architecture embedding MDP and POMDP solvers and, during the backward pass, substitute the algorithms with learned approximations. In comparison, we do not perform any relaxation or approximation of the planner itself, and we learn interpretable time-dependent costs of the latent planning graph from expert demonstrations by differentiating through the planner. Similarly to our work, Yonetani et al. (2020) embed an A∗ algorithm into a neural network, but in comparison, their method does not support time-dependent costs or subgoal selection, and does not provide a policy for closed-loop control.\nInverse reinforcement learning and imitation learning Uncovering the expert objective function from demonstrations has been a central topic in reinforcement learning (Ng & Russell, 2000). Our method is connected to inverse reinforcement learning in the sense that we learn the objective function that the expert optimizes in order to extract an optimal policy, also called apprenticeship learning (Abbeel & Ng, 2004; Neu & Szepesvári, 2012; Aghasadeghi & Bretl, 2011). What separates our approach is that the inferred costs are an inherent part of the learned neuro-algorithmic policy, in conjunction with the planner applied to them.\nOur method is an offline imitation learning method, but since we propose an end-to-end trainable policy, it is naturally extendable to the online case with a method such as DAgger (Ross et al., 2011) or other online reinforcement learning methods augmented with expert datasets (Reddy et al., 2019; Ho & Ermon, 2016).\nOffline model-based reinforcement learning Model-based methods have shown promise by facilitating better generalization (Janner et al., 2019). 
Approaches employing models fall into two camps: using models to extract a policy in a Dyna-style approach (Sutton, 1991; Janner et al., 2019; Sutton et al., 2008; Yao et al., 2009; Kaiser et al., 2019), or incorporating the model in a planning loop, i.e. model-predictive control (Finn & Levine, 2017; Racanière et al., 2017; Oh et al., 2017; Silver et al., 2017a). In this work, we consider the latter case where an implicit transition model is “hidden” within the predicted time-dependent costs.\nCombinatorial algorithms in end-to-end trainable networks We suggest a hybrid policy consisting of a neural network and an accompanying expert (shortest path) discrete solver that is trainable end-to-end. Incorporating expert discrete solvers into end-to-end trainable architectures is a topic with exciting recent developments. For the simpler setup of comparing to ground-truth values on the solver output, numerous frameworks have been suggested such as the “predict-and-optimize” framework and its variants (Elmachtoub & Grigas, 2017; Demirovic et al., 2019; Mandi et al., 2019). Also, specializations for concrete cases such as sparse structured inference (Niculae et al., 2018), logical satisfiability (Wang et al., 2019), submodular optimization (Djolonga & Krause, 2017) or mixed integer programming (Ferber et al., 2020) have been proposed.\nWe are interested in the harder case of providing an entirely hybrid architecture which may use the solver at intermediate levels and is trainable end-to-end. For this case, two approaches have recently emerged (Vlastelica et al., 2020; Berthet et al., 2020). Vlastelica et al. (2020) introduce an efficient implicit piece-wise linear interpolation scheme, while Berthet et al. (2020) introduce Monte Carlo technique for estimating the Jacobian of a Gaussian smoothing of the piecewise constant function. The approach from Vlastelica et al. 
(2020) is especially appealing, since it allows for uses in which the solver is the computational bottleneck. Therefore, we follow it in this work. By formulating the control problem as a time-dependent shortest path problem (TDSP), we show that the framework from Vlastelica et al. (2020) is applicable in specific control settings." }, { "heading": "3 MARKOV DECISION PROCESSES AND SHORTEST PATHS", "text": "We follow the MDP framework Puterman (2014) in a goal-conditioned setting Schaul et al. (2015). This is used in sequential decision making problems where a specific terminal state has to be reached.\nDefinition 1 A goal-conditioned Markov Decision Process (gcMDP), M, is defined by the tuple (S, A, p, g, r), where S is the state space, A the action space, p(s′ | a, s) the probability of making the transition s → s′ when taking the action a, g is the goal, and r(s, a, s′, g) the reward obtained when transitioning from state s to s′ while taking action a and aiming for goal g.\nConcretely, we concern ourselves with fully observable discrete MDPs, in which the Markov assumption for the state holds and where the state and action space are discrete. The goal of reinforcement learning is to maximize the return G = ∑_{t=0}^{T} r_t of such a process. In gcMDPs the reward is such that the maximal return can be achieved by reaching the goal state g.\nGiven access to the transition probabilities and rewards, an optimal policy can be extracted by dynamic programming Bertsekas et al. (1995). In a graph representation of a gcMDP, the set of vertices V corresponds to the set of states S, and traversing an edge corresponds to making a transition between states. We assume a deterministic process, such that the optimal policy can be extracted by standard shortest path algorithms, such as Dijkstra’s algorithm.\nIn this work, we imitate expert trajectories by training a policy with an embedded time-dependent shortest path solver end-to-end. 
Although the actual gcMDP solved by the expert may be stochastic, we learn a deterministic latent approximate gcMDP, M̂. Assuming that we have access to the topology of the gcMDP, by applying blackbox-differentiation theory Vlastelica et al. (2020) we are able to learn the underlying costs (instead of rewards) of M̂ such that the optimal policy on M̂ is also optimal in M. Although the MDP of Definition 1 lends itself nicely to learning time-dependent edge costs c^t_e, this can increase the problem dimensionality considerably with the out-degree of the vertices (here |A|). Thus, we consider cases where the reward function only depends on the current state and the goal: r(s, a, s′, g) = r(s, g). In this case, vertex costs c^t_v are sufficient for finding the optimal solution to the gcMDP. Accordingly, we rely on a vertex-based version of the shortest path algorithm." }, { "heading": "4 SHORTEST PATH ALGORITHM AND ITS DIFFERENTIATION", "text": "We will employ an efficient implementation of Dijkstra’s algorithm for computing the shortest path. For differentiation, we rely on the framework for blackbox differentiation of combinatorial solvers Vlastelica et al. (2020)." }, { "heading": "4.1 TIME-DEPENDENT SHORTEST PATH", "text": "The purely combinatorial setup can be formalized as follows. Let G = (V,E) be a graph. For every vi ∈ V , let c^1_i, . . . , c^T_i be non-negative real numbers; the costs of reaching the vertex vi at time-points 1, 2, . . . , T , where T is the planning horizon. The TIME-DEPENDENT-SHORTEST-PATH problem (TDSP) has as input the graph G, a pair of vertices s, e ∈ V (start and end) and the matrix C ∈ R^{|V|×T} of the costs c^t_i. This version of the shortest path problem can be solved by executing the Dijkstra shortest path algorithm1 Dijkstra (1959) on an augmented graph. 
In particular, we set\nV∗ = {(v, t) : v ∈ V, t ∈ [1, T ]},  E∗ = {((v1, t), (v2, t+1)) : (v1, v2) ∈ E↪→, t ∈ [1, T−1]},\nwhere the cost of vertex (vi, t) ∈ V∗ is simply c^t_i and E↪→ is the original edge set E appended with all self-loops. This allows the agent to “wait” at a fixed vertex v from timestep t to timestep t+1. In this graph, the task is to reach vertex (e, T ) from (s, 1) with the minimum traversal cost.\nThe time-dependent shortest path problem can be used for model predictive control via receding horizon planning, as done in our approach." }, { "heading": "4.2 APPLICABILITY OF BLACKBOX DIFFERENTIATION", "text": "The framework presented in Vlastelica et al. (2020) turns blackbox combinatorial solvers into neural network building blocks. The provided gradient is based on a piecewise linear interpolation of the true piecewise constant (possibly linearized) loss function, see Fig. 2. The exact gradient of this linear interpolation is computed efficiently via evaluating the solver on only one more instance (see Algorithm 1).\n1Even though the classical formulation of Dijkstra’s algorithm is edge-based, all of its properties hold true also in this vertex-based formulation.\nAlgorithm 1 Forward and backward pass for the shortest-path algorithm\nfunction FORWARDPASS(C, s, e)\n  Y := TDSP(C, s, e) // Run Dijkstra’s algorithm\n  save Y, C, s, e // Needed for backward pass\n  return Y\nfunction BACKWARDPASS(∇Y L, λ)\n  load Y, C, s, e\n  C_λ := C + λ∇Y L // Perturbed costs\n  Y_λ := TDSP(C_λ, s, e) // One additional solver call\n  return ∇C L := −(Y − Y_λ)/λ\nIn order to apply this differentiation scheme, the solver at hand needs to have a formulation in which it minimizes an inner-product objective (under arbitrary constraints). To that end, for a given graph G = (V,E) with time-dependent costs C ∈ R^{|V|×T} we define Y ∈ {0, 1}^{|V|×T}, an indicator matrix of visited vertices. In particular, Y^t_i = 1 if and only if vertex vi is visited at time point t. The set of such indicator matrices that correspond to valid paths in the graph (V∗, E∗) will be denoted as Adm(G). 
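The time-augmented construction above can be implemented directly: run Dijkstra over (vertex, time) pairs with vertex costs, adding self-loops so that waiting is allowed. This is a minimal sketch under our own conventions (zero-indexed time, list-of-lists inputs), not the paper's implementation:

```python
import heapq

def tdsp(costs, edges, s, e):
    """Time-dependent shortest path on the augmented graph (V*, E*).

    costs[t][v]: cost of occupying vertex v at time t (t = 0..T-1, i.e.
    zero-indexed, unlike the paper's 1..T).  edges[v]: neighbours of v;
    self-loops are appended so the agent may "wait" at a vertex.
    Returns the min-cost vertex sequence from (s, 0) to (e, T-1).
    """
    T = len(costs)
    adj = [list(nb) + [v] for v, nb in enumerate(edges)]  # append self-loops
    dist, prev = {(s, 0): costs[0][s]}, {}
    heap = [(costs[0][s], s, 0)]
    while heap:
        d, v, t = heapq.heappop(heap)
        if (v, t) == (e, T - 1):                 # reached the goal layer
            path = [v]
            while (v, t) in prev:
                v, t = prev[(v, t)]
                path.append(v)
            return path[::-1]
        if d > dist.get((v, t), float("inf")):
            continue                             # stale heap entry
        if t + 1 < T:
            for w in adj[v]:
                nd = d + costs[t + 1][w]
                if nd < dist.get((w, t + 1), float("inf")):
                    dist[(w, t + 1)], prev[(w, t + 1)] = nd, (v, t)
                    heapq.heappush(heap, (nd, w, t + 1))
    return None                                  # goal layer unreachable

# Two vertices, horizon T = 3: moving to vertex 1 at t = 1 is cheapest.
print(tdsp([[0, 9], [5, 1], [9, 0]], edges=[[1], [0]], s=0, e=1))  # [0, 1, 1]
```

Note that the search expands at most |V|·T states, so the horizon T enters the planning cost only linearly.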
The time-dependent shortest path optimization problem can then be rewritten as

TDSP(C, s, e) = argmin_{Y ∈ Adm(G)} Σ_{(i,t)} Y^t_i C^t_i.    (1)

This is an inner-product objective and thus the theory from Vlastelica et al. (2020) applies. In effect, the deep network producing the cost tensor C can be trained via a supervision signal from ground-truth shortest paths." }, { "heading": "4.3 COST MARGIN", "text": "Our work is related to the problem of metric learning in the sense that we learn the distance metric between the current position of the agent (state) and the target position in the underlying gcMDP, allowing us to solve it with a shortest path algorithm. It has been shown that inducing a margin on the metric can be beneficial for generalization. Similarly to Rolínek et al. (2020) in the context of rank-based metric learning, we induce a margin α on the costs of the latent gcMDP, increasing the cost of the ground-truth path and decreasing the rest in the training stage of the algorithm:

c^t_i = { c^t_i + α/2  if (v_i, t) ∈ Y*
        { c^t_i − α/2  if (v_i, t) ∉ Y*     for all (v_i, t) ∈ V*.    (2)

During evaluation, the cost margin is removed from the shortest path calculation." }, { "heading": "5 NEURO-ALGORITHMIC POLICY ARCHITECTURE", "text": "We propose the Neuro-algorithmic Policy (NAP) framework: an end-to-end trainable deep policy architecture embedding an algorithmic component using the aforementioned techniques.

In this paper we consider a concrete architecture consisting of two main components: a backbone ResNet18 architecture (without the final fully connected layers; a detailed description is available in Sec. C of the appendix) and the shortest path solver, see Fig. 1.
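The cost-margin adjustment of Eq. (2) in Section 4.3 above amounts to a one-line transform applied to the predicted costs during training only. The sketch below is our own illustration, not the authors' code; `Y_star` denotes the 0/1 indicator matrix of the ground-truth path.

```python
def apply_margin(C, Y_star, alpha):
    """Raise costs on the ground-truth path by alpha/2 and lower all other
    costs by alpha/2, as in Eq. (2); used at training time only and removed
    for evaluation."""
    return [[c + alpha / 2 if y == 1 else c - alpha / 2
             for c, y in zip(row_c, row_y)]
            for row_c, row_y in zip(C, Y_star)]
```

For example, with a margin of α = 0.5, a cost on the expert path is increased by 0.25 while every other cost is decreased by 0.25, widening the gap the solver must overcome to deviate from the expert plan.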
At each time step the policy receives two images concatenated channel-wise, from which it predicts the cost matrix C for the planning horizon T with the cost-predictor, and the start vertex s and end vertex e with the goal predictor, explained below.

The cost matrix C is given to the solver along with the start vertex s and end vertex e to compute the time-dependent shortest path Y. The cost-predictor is trained using the Hamming distance between the predicted plan Y and the expert plan Y* that we use for supervision.

The policy is used in a model-predictive control setting, i.e. at execution time we predict the plan Y for horizon T at each time step and execute the first action from the plan." }, { "heading": "5.1 GOAL PREDICTION, GLOBAL AND LOCAL", "text": "In order to apply the solver to the learned latent graph representation, we need to map the current state of the environment to appropriate start and end vertices (s, e). To this end, we employ a second ResNet18 – the goal-predictor – similar to the cost-predictor, that learns to extract the agent start position and a suitable target position. The training of this predictor uses a cross-entropy loss and is independent of learning the costs of the latent graph representation.

At training time, given the expert trajectories Y* we have access to the current position of the agent and its position in the future. Thus, for predicting s we have a simple supervision signal, namely the current position. For the goal prediction e we extract a set of suitable goal locations from the expert. Here, we distinguish between global and local planning.

In the global setting, the last position of the expert is the goal e, corresponding to, for instance, the jewel in CRASH JEWEL HUNT, see Fig. 3.

In the local setting, we expect the end vertex to be an intermediate goal (“collect an orb”), which effectively allows for high-level planning strategies while the low-level planning is delegated to the discrete solver.
In this case, the positively labeled supervision at time t consists of all locations of the (expert) agent between steps t+T and t+2T.

The local setting allows us to limit the complexity of our method, which grows with the planning horizon. This is also a trade-off between the combinatorial complexity solved by the TDSP solver and the goal predictor. Ideally, the planning horizon T used for the cost-prediction is long enough to capture the combinatorial intricacies of the problem at hand, such as creating detours towards the goal in the case of future dangerous states, or avoiding dead-ends in a maze.

The local setting formulation makes our architecture a hierarchical method similar to Blaes et al. (2019); Nachum et al. (2018), and allows for solving tasks that are not typical goal-reaching problems, such as the CHASER environment." }, { "heading": "6 EXPERIMENTS", "text": "To validate our hypothesis that embedding planners into neural network architectures leads to better generalization, we consider several procedurally generated environments (from the ProcGen suite (Cobbe et al., 2019) and CRASH JEWEL HUNT) with considerable variation between levels.

We compare to two baselines: a standard behavior cloning imitation learning baseline using a ResNet18 architecture trained with a cross-entropy classification loss on the same dataset as our method; and a reinforcement learning baseline using the PPO algorithm. More details on the training procedure and the hyperparameters can be found in appendix Sec. D.

For the experimental validation, we aim to answer the following questions:

• Can NAP be trained to perform well as a policy in procedurally generated environments?
• Can NAP generalize in a low-data regime, i.e. after seeing only a few different levels?
• Can we also solve non-goal-reaching environments?" }, { "heading": "6.1 CRASH JEWEL HUNT", "text": "We first consider an environment we constructed to test NAP, called CRASH JEWEL HUNT, which can be seen in Fig. 3.
The environment corresponds to a grid-world of dimensions h × w where the goal is to move the agent (Fox) from an arbitrary start position in the left-most column to the goal position (jewel) arbitrarily positioned in the right-most column. Between the agent and the goal are obstacles: wooden boxes that move downwards (with cyclic boundary conditions) with velocities that vary across levels but not within a level, see Fig. 3 (right). At each time step, the agent can choose to move horizontally or vertically in the grid by one cell or take no action.

To make the task challenging, we sample distinct environment configurations for the training set and the test set, respectively. More concretely, we vary the velocities, sizes and initial positions of the boxes as well as the start and goal positions." }, { "heading": "6.2 PROCGEN BENCHMARK", "text": "In addition to the jewel hunt environment, we evaluate our method on the MAZE, LEAPER and CHASER environments from the ProcGen suite (Cobbe et al., 2019). We have chosen these environments because their structure adheres to our assumptions. For the LEAPER we modified the environment such that grid-world dynamics apply (LEAPER(GRID)). Based on the performance of the baselines, the resulting LEAPER(GRID) is not an easier environment.

The MAZE and the LEAPER(GRID) tasks have a static goal whose position only varies across levels, whereas the CHASER requires collecting all orbs without contact with the spiders, so the local goals need to be inferred on the fly. The CHASER environment is also particularly challenging as even the expert episodes require on average 150 steps, each of which carries a risk of dying. For this reason, we used three expert trajectories per level." }, { "heading": "6.3 RESULTS", "text": "We train our method (NAP) and the imitation learning baseline until saturation on a training set, resulting in a virtually 100% success rate when evaluating on train configurations in the environment.
For the PPO baseline we use the code from Cobbe et al. (2019) and also provide two subsequent frames and 200M time steps for training. For our method we also report the performance of a version with access to the true start- and end-point prediction (NAP + oracle), with the exception of the CHASER, where true goals are not well-defined.

In Fig. 4 we show the performance of the methods when exposed to different numbers of levels at training time. As reported in Cobbe et al. (2019), the baselines have a large generalization gap and also poor performance when < 10 000 levels are seen. We find that NAP shows strong generalization performance already for < 500 levels. In some environments, such as the MAZE, we obtain a nearly 80% success rate already with just 100 levels, which PPO reaches only after seeing 200 000 levels. For CRASH JEWEL HUNT 5×5, already with 30 trajectories a third of the 1000 test levels can be solved, while the baseline manages fewer than 50 out of the 1000." }, { "heading": "6.4 SENSITIVITY TO THE PLANNING HORIZON", "text": "We provide a sensitivity analysis of the performance with different planning horizons. Our results indicate that longer horizons benefit environments with increased dynamical interactions. As apparent from Fig. 6, our method outperforms the imitation baseline in the crash environment, the gap between the methods being correlated with the complexity of the environment (5×5 vs 5×10). It can also be seen that making the planning horizon smaller in these environments hurts performance.

On the other hand, for environments with no dynamics, such as the maze environment, there is no benefit in using time-dependent costs, as expected. Nevertheless, there is still a strong performance gain in generalization when using NAP as opposed to vanilla imitation learning from expert trajectories."
}, { "heading": "7 DISCUSSION", "text": "We have shown that hybrid neuro-algorithmic policies consisting of deep feature extraction and a shortest path solver – made differentiable via blackbox differentiation (Vlastelica et al., 2020) – enable learning policies that generalize to unseen environment settings in the low-data regime. Hybrid architectures are a stepping stone towards better use of inductive biases that enable stronger generalization. In NAP, the inductive bias that we impose is the topology of the latent planning graph in conjunction with a planning algorithm. Introducing the shortest-path solver as a module shifts the combinatorial complexity of the planning problem to efficient algorithmic implementations while easing the learning of good representations for planning.

Although there is a clear benefit in using NAP, the method comes with certain caveats. We assume that the topological structure of the latent planning graph (i.e. that there is an underlying grid structure with a set of 5 actions) is known a priori. Furthermore, we assume that the structure of the latent graph is fixed and not dynamically changing over time, i.e. that each available action at a vertex corresponds to the same edge. Any results allowing us to abandon some of these assumptions will vastly increase the applicability of this method and should be of immediate interest." }, { "heading": "A DATA GENERATION", "text": "In order to do imitation learning, we needed to create expert data. For CRASH 5×5, CRASH 5×10, LEAPER(GRID) and MAZE we can determine the exact ground-truth costs leading to optimal behavior. As an example, CRASH 5×5 contains moving boxes that, when encountered, lead to instant death, meaning infinite costs; otherwise there is a fixed cost of moving around in the environment.

Since the environments become deterministic for a fixed random seed, we first unrolled their dynamics for each level.
After obtaining the underlying grid structure and entities, we labeled them with costs and constructed a graph that reflects the grid structure. An expert trajectory is constructed by applying Dijkstra’s algorithm to this graph with the human-labeled costs and then executing it in simulation.

For the CRASH JEWEL HUNT experiments, we randomly sampled 2000 solvable levels by varying the number of boxes per column, their speed, the agent start position and the jewel position. The training levels were taken from the first half, and the second half of the levels was used for testing. For the LEAPER(GRID) and MAZE environments we have taken the levels determined by seeds 0-1000.

For CHASER, we applied a similar procedure, but additionally recorded two sets of human trajectories, as we observed performance benefits from incorporating several different expert trajectories for the same level. Since both the search procedure and human labeling are time-consuming for this environment, we collected fewer expert trajectories for the CHASER than for the other environments: 3 × 100, two thirds of which are from human players.

Level seeds 1000000-1001000 were used for testing in the PROCGEN experiments." }, { "heading": "B ENVIRONMENTS", "text": "Our method is applicable in discrete environments; therefore, we evaluated on environments from the PROCGEN benchmark and the CRASH JEWEL HUNT environment.

We created the CRASH JEWEL HUNT environment to evaluate our method, where the goal is for the fox (Crash) to reach the jewel. We found this environment convenient since we can influence the combinatorial difficulty directly, which is not true for the PROCGEN benchmark, where we are limited to the random seeds used in the OpenAI implementation.
The sources of variation in the CRASH JEWEL HUNT are the box velocities, initial positions and sizes, as well as the agent's initial position and the jewel's position.

We modified the LEAPER environment to make discrete steps in the world in order to make our method applicable. This involved making both the agent and the logs on the river move in discrete steps. For an additional description of the PROCGEN environments, we refer the reader to Cobbe et al. (2019)." }, { "heading": "C NETWORK ARCHITECTURE AND INPUT", "text": "For all of our experiments, we use the PyTorch implementation of the ResNet18 architecture as the base of our model. All of the approaches receive two stacked frames of the two previous time steps as input in order to make dynamics prediction possible. For the PPO baseline, we did not observe any benefit in adding the stacked frames as input, and we used the stable-baselines implementation from OpenAI in order to train it on the PROCGEN environments.

In the case of the behavior cloning baseline, the problem is a multi-class classification problem with the output being a multinomial distribution over actions.

For the variant NAP + oracle, we train a cost prediction network and run the Dijkstra algorithm on its output costs of the planning graph. This requires modifications to the original ResNet18 architecture. We remove the linear readout of the original ResNet18 architecture and replace it with a convolutional layer of filter size 1 followed by an adaptive max-pooling layer to obtain the desired dimensions of the underlying latent planning graph. More concretely, the output x of the last ResNet18 block is followed by the following operation (as printed by PyTorch) to obtain the graph costs:

Sequential(
  Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
  Abs()
  AdaptiveMaxPool2d(output_size=(grid_height, grid_width))
)

Where grid_{height,width} denotes the height and width of the planning grid.
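The printed Sequential above is a PyTorch module repr in which `Abs()` is not a built-in layer, so a runnable version needs a small wrapper module. The sketch below is our own reconstruction from the printed repr (layer sizes copied from it), not the authors' code:

```python
import torch
import torch.nn as nn

class Abs(nn.Module):
    """Element-wise absolute value, so the predicted graph costs are >= 0."""
    def forward(self, x):
        return torch.abs(x)

def make_cost_head(grid_height, grid_width):
    """Cost head attached after the last ResNet18 block (256-channel feature
    map, per the printed repr): 1x1 convolution, absolute value, then
    adaptive max pooling down to the planning-grid dimensions."""
    return nn.Sequential(
        nn.Conv2d(256, 2, kernel_size=1, stride=1),
        Abs(),
        nn.AdaptiveMaxPool2d(output_size=(grid_height, grid_width)),
    )
```

Feeding a 256-channel feature map of any spatial size through this head yields a non-negative tensor of shape (batch, 2, grid_height, grid_width), matching the latent planning grid.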
For the full variant of NAP with goal and agent position prediction, we have a separate position classifier that has the same base architecture as the cost prediction network, with 2 additional linear readouts for the likelihoods of the latent graph vertices; more concretely (as printed by PyTorch):

Sequential(
  Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
  Abs()
  AdaptiveMaxPool2d(output_size=(grid_height, grid_width))
  Flatten()
  Linear(grid_height × grid_width, grid_height × grid_width)
)

For training the position classifier, we use a standard cross-entropy loss on the likelihoods. For NAP with position classification, we use the ground-truth expert start and goal positions to calculate the Hamming loss of the path predicted by the solver. At evaluation time, NAP uses the position classifier to determine the start and end vertices in the latent planning graph." }, { "heading": "D TRAINING PROCEDURE", "text": "For CRASH 5×5, CRASH 5×10, LEAPER(GRID) and MAZE we train the models on the same #levels, namely 1, 2, 5, 10, 20, 50, 100, 200, 500 and 1000. We evaluate on 1000 unseen levels in order to show that NAP exhibits superior generalization. The levels are generated as per the description in section A. For each dataset size we run experiments with 3 random seeds and normalize the data to be zero mean and unit variance. For all experiments, we make use of the ADAM optimizer.

We determine the number of epochs for training depending on each dataset size as min(150000/#levels, 15000) to have roughly the same number of gradient updates in each experiment. We take the minimum over the 2 values because for a small number of levels a large number of iterations is not necessary to achieve good performance, but for a larger number of levels it is.

For the CHASER, the training conditions were analogous to the other environments, only of slightly smaller scale due to its higher complexity.
Models were trained on 10, 20, 50, and 100 levels and evaluated on 200 unseen levels. Each model was trained for 40 epochs.

D.1 PPO TRAINING PROCEDURE

The training of the PPO baseline is exactly the same as described in Cobbe et al. (2019), using the official code from https://github.com/openai/train-procgen; see Table 3 for the parameters used. The network architecture is the IMPALA-CNN. The algorithm is trained on the specified number of levels for 200 million environment interactions. We report numbers for 5 independent restarts." } ]
2,020
null
SP:92d7b00137258b40bcaf13fd19e032cf4c40b3d8
[ "1. This paper tackles the medical image understanding problem. The aim of this paper is to learn a generic feature representation for medical images that could benefit downstream tasks such as medical image classification and zero-shot classification. The main contribution of this paper is a contrastive loss under which a matched image-text pair should have a higher correspondence score than mismatched pairs.", "In this work, the authors propose a new model, named ConVIRT, to learn medical visual representations from paired image and textual data in an unsupervised strategy. In ConVIRT, they mainly use a contrastive loss with two modalities (images and texts) as inputs to learn the representation. The experimental results show their proposed model achieves higher performance than other methods on image classification and zero-shot retrieval tasks." ]
Learning visual representations of medical images is core to medical image understanding but its progress has been held back by the small size of hand-labeled datasets. Existing work commonly relies on transferring weights from ImageNet pretraining, which is suboptimal due to drastically different image characteristics, or rule-based label extraction from the textual report data paired with medical images, which is inaccurate and hard to generalize. We propose an alternative unsupervised strategy to learn medical visual representations directly from the naturally occurring pairing of images and textual data. Our method of pretraining medical image encoders with the paired text data via a bidirectional contrastive objective between the two modalities is domain-agnostic, and requires no additional expert input. We test our method by transferring our pretrained weights to 4 medical image classification tasks and 2 zero-shot retrieval tasks, and show that our method leads to image representations that considerably outperform strong baselines in most settings. Notably, in all 4 classification tasks, our method requires only 10% as much labeled training data as an ImageNet initialized counterpart to achieve better or comparable performance, demonstrating superior data efficiency.
[]
[ { "authors": [ "Michael David Abràmoff", "Yiyue Lou", "Ali Erginay", "Warren Clarida", "Ryan Amelon", "James C Folk", "Meindert Niemeijer" ], "title": "Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning", "venue": "Investigative Ophthalmology & Visual Science,", "year": 2016 }, { "authors": [ "Emily Alsentzer", "John Murphy", "William Boag", "Wei-Hung Weng", "Di Jindi", "Tristan Naumann", "Matthew McDermott" ], "title": "Publicly available clinical BERT embeddings", "venue": "In Proceedings of the 2nd Clinical Natural Language Processing Workshop,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Marcella Cornia", "Matteo Stefanini", "Lorenzo Baraldi", "Rita Cucchiara" ], "title": "Meshed-memory Transformer for image captioning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Jeffrey De Fauw", "Joseph R Ledsam", "Bernardino Romera-Paredes", "Stanislav Nikolov", "Nenad Tomasev", "Sam Blackwell", "Harry Askham", "Xavier Glorot", "Brendan ODonoghue", "Daniel Visentin" ], "title": "Clinically applicable deep learning for diagnosis and referral in retinal disease", "venue": "Nature Medicine,", "year": 2018 }, { "authors": [ "Dina Demner-Fushman", "Marc D Kohli", "Marc B Rosenman", "Sonya E Shooshan", "Laritza Rodriguez", "Sameer Antani", "George R Thoma", "Clement J McDonald" ], "title": "Preparing a collection of radiology examinations for distribution and retrieval", "venue": 
"Journal of the American Medical Informatics Association,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2019 }, { "authors": [ "Andre Esteva", "Brett Kuprel", "Roberto A Novoa", "Justin Ko", "Susan M Swetter", "Helen M Blau", "Sebastian Thrun" ], "title": "Dermatologist-level classification of skin cancer", "venue": null, "year": 2017 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Varun Gulshan", "Lily Peng", "Marc Coram", "Martin C Stumpe", "Derek Wu", "Arunachalam Narayanaswamy", "Subhashini Venugopalan", "Kasumi Widner", "Tom Madams", "Jorge Cuadros" ], "title": "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus", "venue": "photographs. 
JAMA,", "year": 2016 }, { "authors": [ "Tanmay Gupta", "Arash Vahdat", "Gal Chechik", "Xiaodong Yang", "Jan Kautz", "Derek Hoiem" ], "title": "Contrastive learning for weakly supervised phrase grounding", "venue": "In Proceedings of the 16th European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Gabriel Ilharco", "Rowan Zellers", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Probing text models for common ground with visual representations", "venue": "arXiv preprint arXiv:2005.00619,", "year": 2020 }, { "authors": [ "Jeremy Irvin", "Pranav Rajpurkar", "Michael Ko", "Yifan Yu", "Silviana Ciurea-Ilcus", "Chris Chute", "Henrik Marklund", "Behzad Haghgoo", "Robyn Ball", "Katie Shpanskaya" ], "title": "CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Baoyu Jing", "Pengtao Xie", "Eric Xing" ], "title": "On the automatic generation of medical imaging reports", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2018 
}, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Seth J Berkowitz", "Nathaniel R Greenbaum", "Matthew P Lungren", "Chih-ying Deng", "Roger G Mark", "Steven Horng" ], "title": "MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports", "venue": "Scientific Data,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In The 2015 International Conference for Learning Representations,", "year": 2015 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft COCO: Common objects in context", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Guanxiong Liu", "Tzu-Ming Harry Hsu", "Matthew McDermott", "Willie Boag", "Wei-Hung Weng", "Peter Szolovits", "Marzyeh Ghassemi" ], "title": "Clinically accurate chest X-ray report generation", "venue": "In Machine Learning for Healthcare Conference,", "year": 2019 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Christopher D. Manning", "Mihai Surdeanu", "John Bauer", "Jenny Finkel", "Steven J. 
Bethard", "David McClosky" ], "title": "The Stanford CoreNLP natural language processing toolkit", "venue": "In Association for Computational Linguistics (ACL) System Demonstrations,", "year": 2014 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Maithra Raghu", "Chiyuan Zhang", "Jon Kleinberg", "Samy Bengio" ], "title": "Transfusion: Understanding transfer learning for medical imaging", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jeremy Irvin", "Aarti Bagul", "Daisy Ding", "Tony Duan", "Hershel Mehta", "Brandon Yang", "Kaylie Zhu", "Dillon Laird", "Robyn L Ball" ], "title": "MURA: Large dataset for abnormality detection in musculoskeletal radiographs", "venue": "In 1st Conference on Medical Imaging with Deep Learning (MIDL),", "year": 2018 }, { "authors": [ "Pranav Rajpurkar", "Jeremy Irvin", "Robyn L Ball", "Kaylie Zhu", "Brandon Yang", "Hershel Mehta", "Tony Duan", "Daisy Ding", "Aarti Bagul", "Curtis P Langlotz" ], "title": "Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists", "venue": "PLoS Medicine,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "ImageNet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "George Shih", "Carol C Wu", "Safwan S Halabi", "Marc D Kohli", "Luciano M Prevedello", "Tessa S Cook", "Arjun Sharma", "Judith K Amorosa", "Veronica Arteaga", "Maya Galperin-Aizenberg" ], "title": "Augmenting the National Institutes of Health chest radiograph dataset with expert annotations of possible 
pneumonia", "venue": "Radiology: Artificial Intelligence,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "In ICLR Workshop,", "year": 2014 }, { "authors": [ "Weijie Su", "Xizhou Zhu", "Yue Cao", "Bin Li", "Lewei Lu", "Furu Wei", "Jifeng Dai" ], "title": "VL-BERT: Pretraining of generic visual-linguistic representations", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Hao Tan", "Mohit Bansal" ], "title": "LXMERT: Learning cross-modality encoder representations from transformers", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP),", "year": 2019 }, { "authors": [ "Ramakrishna Vedantam", "C Lawrence Zitnick", "Devi Parikh" ], "title": "CIDEr: Consensus-based image description evaluation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Linda Wang", "Alexander Wong" ], "title": "COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images", "venue": "arXiv preprint arXiv:2003.09871,", "year": 2020 }, { "authors": [ "Xiaosong Wang", "Yifan Peng", "Le Lu", "Zhiyong Lu", "Mohammadhadi Bagheri", "Ronald M Summers" ], "title": "ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Xiaosong Wang", "Yifan Peng", "Le Lu", "Zhiyong Lu", "Ronald M Summers" ], "title": "TieNet: Text-image embedding network for common thorax disease classification and reporting in chest 
X-rays", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2018 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rmi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "HuggingFace’s Transformers: Stateof-the-art natural language processing", "venue": "arXiv preprint arXiv:1910.03771,", "year": 2019 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nMedical image understanding has the potential to transform healthcare and has seen rapid progress with the use of deep neural architectures (Gulshan et al., 2016; Esteva et al., 2017; De Fauw et al., 2018; Rajpurkar et al., 2018b). Yet, with expert-level performance achieved only in some specialties and under some circumstances, medical image understanding remains a difficult task for the majority of specialties, mainly due to its challenging nature and the extreme scarcity of annotated data.

Existing work has followed two general approaches to obtain annotations for medical imaging tasks. The first approach has been using high-quality annotations created by medical experts (Abràmoff et al., 2016; Gulshan et al., 2016; Shih et al., 2019; Wang & Wong, 2020). However, the high cost of this approach has resulted in datasets that are mostly orders of magnitude smaller than natural image datasets such as ImageNet (Russakovsky et al., 2015). To remedy this, existing work has relied heavily on transferring model weights from ImageNet pretraining (Wang et al., 2017; Esteva et al., 2017; Irvin et al., 2019). This approach is suboptimal because, as shown in Figure 1, medical image understanding often requires representations of very fine-grained visual features that are drastically different from those required for identifying objects in natural images. As a result, Raghu et al. (2019) found that ImageNet pretraining often provides little to no benefit compared to simple random initialization.

A second popular approach is to use expert-crafted rules to extract labels from the textual reports accompanying the medical images. This approach has led to datasets of larger scale, since the text data paired with medical images are often produced naturally by medical experts in their routine workflow and abundant in a typical hospital’s IT systems. Nevertheless, this rule-based label extraction approach has two limitations: 1) the rules are often inaccurate and limited to a few major categories (Wang et al., 2017), leading to very inefficient use of the textual report data; 2) these rules are often domain-specific and sensitive to the style of the text, making cross-domain and cross-institution generalization difficult (Irvin et al., 2019).

In efforts to make more efficient use of unlabeled image data, several recent studies have shown promising results from contrastive representation learning from natural images (Chen et al., 2020a; He et al., 2020; Grill et al., 2020). However, as we will show, applying these image view-based contrastive methods to medical images provides only marginal benefits compared to ImageNet pretraining, a result mostly due to the high inter-class similarity of the medical images as in Figure 1.

In this work, we aim to improve visual representations of medical images by combining the benefits of both learning from abundant textual data and unsupervised statistical approaches. We present Contrastive VIsual Representation Learning from Text (ConVIRT), a framework for learning visual representations by exploiting the naturally occurring pairing of images and textual data. ConVIRT improves visual representations by maximizing the agreement between true image-text pairs versus random pairs via a bidirectional contrastive objective between the image and text modalities. We apply ConVIRT to the pretraining of medical image encoders, and show that it leads to higher-quality in-domain image representations that capture the subtlety of visual features required for medical image understanding tasks.

Compared to existing methods, ConVIRT has the advantages of utilizing the paired text data in a way agnostic to the medical specialty and requiring no additional expert input.
This allows us to evaluate ConVIRT by transferring our pretrained weights to 4 different medical image classification tasks covering 2 different specialties. We find that the resulting models outperform all baseline initialization approaches, including the standard ImageNet pretraining and several strong baselines that also utilize the paired text data. Most notably, in all 4 tasks, ConVIRT requires only 10% as much labeled training data as an ImageNet initialized counterpart to achieve better or comparable performance. We further evaluate ConVIRT on two new zero-shot retrieval tasks, an image-image and a text-image retrieval task, and also find it superior to all baselines. To facilitate future research, we will make our code and the collected retrieval datasets available." }, { "heading": "2 METHOD", "text": "" }, { "heading": "2.1 TASK DEFINITION", "text": "We start by giving a formal description of our representation learning setting. We assume paired input (xv,xu) where xv represents one or a group of images, and xu represents a text sequence which describes the imaging information in xv . Our goal is to learn a parameterized image encoder function fv , which maps an image to a fixed-dimensional vector. We are then interested in transferring the learned image encoder function fv into downstream tasks, such as classification or image retrieval. In this work, we model the encoder function fv as a convolutional neural network (CNN).\nWe note that paired image-text data (xv,xu) naturally exists for many medical domains. Medical experts such as radiologists produce textual descriptions of images as part of their routine workflow, some of which are also made publicly available (Demner-Fushman et al., 2016; Johnson et al., 2019)." }, { "heading": "2.2 CONTRASTIVE VISUAL REPRESENTATION LEARNING FROM TEXT", "text": "An overview of our method, ConVIRT, for learning fv is shown in Figure 2. 
At a high level, our method converts each input image xv and text xu into d-dimensional vector representations v and u respectively, following a similar processing pipeline. For each input image xv, our method starts by drawing a random view x̃v from xv with a sampled transformation function tv ∼ T, where T represents a family of stochastic image transformation functions described later. Next, the encoder function fv transforms x̃v into a fixed-dimensional vector hv, followed by a non-linear projection function gv which further transforms hv into vector v:

v = gv(fv(x̃v)), (1)

where v ∈ R^d. Similarly, for each text input xu, we obtain a span x̃u from it following a sampling function tu, and then a text representation u with: u = gu(fu(x̃u)), where fu is a text encoder, gu a projection, and u ∈ R^d. The projection functions gv and gu project representations for both modalities from their encoder space to the same d-dimensional space for contrastive learning.

At training time, we sample a minibatch of N input pairs (xv, xu) from training data, and calculate their representation pairs (v, u). We use (v_i, u_i) to denote the i-th pair. The training objective of ConVIRT involves two loss functions. The first loss function is an image-to-text contrastive loss for the i-th pair:

ℓ_i^(v→u) = − log [ exp(⟨v_i, u_i⟩/τ) / Σ_{k=1}^{N} exp(⟨v_i, u_k⟩/τ) ], (2)

where ⟨v_i, u_i⟩ represents the cosine similarity, i.e., ⟨v, u⟩ = v⊤u/(‖v‖‖u‖); and τ ∈ R_+ represents a temperature parameter. This loss takes the same form as the InfoNCE loss (Oord et al., 2018), and minimizing it leads to encoders that maximally preserve the mutual information between the true pairs under the representation functions. Intuitively, it is the log loss of an N-way classifier that tries to predict (v_i, u_i) as the true pair.
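As a concrete illustration, the per-batch image-to-text loss of Eq. (2) can be sketched in NumPy as follows. This is an illustrative re-implementation, not the authors' released code; the function and variable names are our own:

```python
import numpy as np

def image_to_text_loss(v, u, tau=0.1):
    """Batch version of Eq. (2): the InfoNCE loss from images to texts.

    v, u -- (N, d) arrays of image and text representations, where
            row i of v and row i of u form a true pair.
    tau  -- temperature parameter.
    """
    # cosine similarity <v_i, u_k> / tau for every image-text combination
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    logits = (v @ u.T) / tau                      # (N, N) similarity matrix
    # row-wise log-softmax; the true pair sits on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()
```

The loss approaches zero when each image representation is far closer to its paired text than to every other text in the batch, and grows as true pairs become indistinguishable from random ones.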
Note that unlike previous work, which uses a contrastive loss between inputs of the same modality (Chen et al., 2020a; He et al., 2020), our image-to-text contrastive loss is asymmetric for each input modality. We therefore define a similar text-to-image contrastive loss as:

ℓ_i^(u→v) = − log [ exp(⟨u_i, v_i⟩/τ) / Σ_{k=1}^{N} exp(⟨u_i, v_k⟩/τ) ]. (3)

Our final training loss is then computed as a weighted combination of the two losses averaged over all positive image-text pairs in each minibatch:

L = (1/N) Σ_{i=1}^{N} ( λ ℓ_i^(v→u) + (1 − λ) ℓ_i^(u→v) ), (4)

where λ ∈ [0, 1] is a scalar weight." }, { "heading": "2.3 REALIZATION", "text": "We note that our ConVIRT framework defined above is agnostic to the specific choice of image and text encoders, transformations and projection functions. In this work, following previous work (Chen et al., 2020a), we model gv and gu as separate learnable single-hidden-layer neural networks, i.e., gv(·) = W^(2) σ(W^(1)(·)), where σ is a ReLU non-linearity, and similarly for gu. For the image encoder fv, we use the ResNet50 architecture (He et al., 2016) for all experiments, as it is the architecture of choice for much medical imaging work and is shown to achieve competitive performance. For the text encoder fu, we use a BERT encoder (Devlin et al., 2019) followed by a max-pooling layer over all output vectors. We initialize our BERT encoder with the ClinicalBERT model (Alsentzer et al., 2019) pretrained on the MIMIC clinical notes, which achieved state-of-the-art performance on a suite of clinical NLP tasks. At training time we allow the encoder to adapt to our contrastive task by freezing the embeddings and the first 6 layers of this BERT encoder and fine-tuning the last 6 layers.

For the image transformation family T from which tv is sampled, we use sequential applications of five random transformations: cropping, horizontal flipping, affine transformation, color jittering and Gaussian blur.
Different from recent work on contrastive visual representation learning (Chen et al., 2020a;b), we only apply brightness and contrast adjustments in color jittering, due to the monochrome nature of the medical images. For the text transformation function tu, we apply a simple uniform sampling of a sentence from the input document xu (i.e., x̃u is a randomly sampled sentence from xu for each minibatch). We did not use a more aggressive transformation mainly because sampling at the sentence level can preserve the semantic meaning of the sampled spans." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 DATA FOR PRETRAINING", "text": "We test our ConVIRT framework by pretraining two separate image encoders covering different medical specialties using two separate paired image-text datasets:\n• Chest image encoder: We use version 2 of the public MIMIC-CXR database (Johnson et al., 2019), which is a collection of chest radiograph images paired with their textual reports, and since its release has become a standard resource for studying multi-modal modeling of medical images. After preprocessing, this dataset contains a total of about 217k image-text pairs, with each pair containing an average of 1.7 images and 6.0 sentences.\n• Bony image encoder: We obtain a collection of musculoskeletal image-text pairs from the Rhode Island Hospital system. Following chest images, musculoskeletal images constitute the second most common type of radiograph images in a typical hospital. This dataset contains a total of 48k image-text pairs, with each pair containing an average of 2.5 images and 8.0 sentences.\nWe include model implementation and pretraining details in Appendix A." }, { "heading": "3.2 EVALUATION TASKS & DATA", "text": "We evaluate our pretrained image encoders on three downstream medical imaging tasks: image classification, zero-shot image-image retrieval and zero-shot text-image retrieval.\nImage Classification. 
We evaluate our pretrained image representations on four representative medical image classification tasks: 1) RSNA Pneumonia Detection (Wang et al., 2017; Shih et al., 2019), which involves binary classification of a chest radiograph image into either a pneumonia or a normal category; 2) CheXpert image classification (Irvin et al., 2019), which involves multi-label binary classification of a chest image for five individual labels, i.e., atelectasis, cardiomegaly, consolidation, edema and pleural effusion; 3) COVIDx image classification (Wang & Wong, 2020), which involves multi-class classification of a chest image into one of COVID19, non-COVID pneumonia or normal categories; and 4) MURA bony abnormality detection (Rajpurkar et al., 2018a), which involves binary classification of a musculoskeletal image into abnormal or normal. We report test accuracy for COVIDx given its balanced test set, and report the standard area under the receiver operating characteristic curve (AUC) metric for other tasks following previous work.\nFollowing previous work (Hénaff et al., 2020; Chen et al., 2020a; He et al., 2020), for all tasks, we evaluate each pretrained image encoder under two individual settings: a linear classification setting, where the pretrained CNN weights are frozen and only a linear classification head is trained for the task; and a fine-tuning setting, where both the CNN weights and the linear head are fine-tuned. 
The two settings complement each other for evaluation purposes: while the linear setting directly evaluates the quality of the extracted image features with the pretrained CNN, the fine-tuning setting more closely resembles how the pretrained CNN weights are used in practical applications.\nTo further compare the data efficiency of different pretraining methods, for each setting we evaluate the image encoders with 1%, 10% and all training data, respectively (except for the COVIDx dataset where we omit the 1% setting due to the scarcity of data for some categories). To control the variance in results, for all settings and models, we report average results aggregated over 5 independent training runs. We include further dataset processing and model training details in Appendix B.\nZero-shot Image-image Retrieval. This evaluation is similar to the conventional content-based image retrieval setting in which we search for images of a particular category using a representative query image. For evaluation, a group of query images and a larger collection of candidate images, each with a categorical label, are given to a pretrained CNN encoder. We encode each query and candidate image with this encoder, and then for each query, rank all candidates by their cosine similarities to the query in descending order. Since a widely-used annotated benchmark for this setting is not available, we create our own dataset by re-using existing annotations in the CheXpert dataset (Irvin et al., 2019) and additional expert annotations from a board-certified radiologist. The resulting dataset covers 8 different chest abnormality categories, each with 10 expert-annotated query and 200 candidate images. We include the detailed collection and annotation procedure in Appendix C, and refer to this dataset as CheXpert 8×200 Retrieval Dataset. We focus our evaluation on retrieval precision, and evaluate our models with Precision@k metrics where k = 5, 10, 100.\nZero-shot Text-image Retrieval. 
This setting is similar to the image-image retrieval setting, but instead of using query images, we retrieve images of a particular category with textual queries. For this purpose, we ask a radiologist to write 5 diverse and representative textual descriptions for each of the 8 abnormality categories for the same CheXpert 8x200 candidate images (see Appendix D for details). At test time, for each query we encode its text with the learned text encoder fu and then retrieve from candidate images in a similar way. This evaluation assesses not only the quality of the learned image representations, but also the alignment between the text representations and the image representations. We again use Precision@k metrics where k = 5, 10, 100." }, { "heading": "3.3 BASELINE METHODS", "text": "We compare ConVIRT against the following standard or competitive initialization methods:

• Random Init.: For all tasks we initialize the ResNet50 with its default random initialization.

• ImageNet Init.: We use CNN weights pretrained on ImageNet (Russakovsky et al., 2015), which remains a dominant initialization approach for medical imaging work (Raghu et al., 2019).

• Caption-LSTM: We initialize the CNN weights with ImageNet, and then pretrain it with an image captioning task using the standard CNN-LSTM with attention architecture (Xu et al., 2015). For the captioning task, we train the model to decode the paired textual report from the encoded image representations. Compared to the random or ImageNet initializations, this is an “in-domain” initialization baseline which uses the paired text data for representation learning.

• Caption-Transformer: In this initialization we replace the CNN-LSTM model in Caption-LSTM with a CNN-Transformer-based captioning model in Cornia et al.
(2020), which recently achieved state-of-the-art results on the COCO image captioning benchmark (Lin et al., 2014).

• Contrastive-Binary: This baseline differs from our method by contrasting the paired image and text representations with a binary classification head, as is widely done in visual-linguistic pretraining work (Tan & Bansal, 2019; Su et al., 2020). For each input pair, we first project encoder outputs hv and hu into the same dimension with linear layers, concatenate them, and use an MLP network to predict a binary probability of whether the input is a real or a “fake” pair, which we train with a binary cross-entropy loss. During training, for each (xv, xu) pair in the training set, we construct a “fake” pair by replacing xu with a randomly sampled one from the dataset. We expect that the binary classification task requires the encoder to learn reasonable representations of the input images, and therefore is a stronger in-domain initialization baseline.

For fair comparison, for all baselines that require paired image-text data, we use the same datasets as in our contrastive pretraining. For the captioning-based methods, we always use the model checkpoints that achieve the best CIDEr score (Vedantam et al., 2015) on a held-out validation set." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 CLASSIFICATION TASKS", "text": "Linear Classification. We present all linear classification results for the medical imaging tasks in Table 1a. We find that compared to random initialization, ImageNet initialization provides markedly better representations, despite being pretrained on a very different domain of images; in-domain initialization methods that use paired image-text data further improve over ImageNet initialization in almost all settings. Among the in-domain initialization methods, our proposed ConVIRT pretraining achieves the best overall results in all settings.
Notably, we find that on three out of the four tasks, with only 1% of the training data ConVIRT is able to achieve classification results better than the default ImageNet initialization with 100% training data, highlighting the high quality of the learned representations from ConVIRT.

Fine-tuning. We show the fine-tuning evaluation results in Table 1b. Similar to the linear setting, we find that: 1) ImageNet initialization is again better than random initialization, with smaller margins; 2) all in-domain initialization methods are better than the popular ImageNet initialization in most settings; and 3) our proposed ConVIRT pretraining again achieves the best overall results in 10 out of the 11 settings, with the exception of the CheXpert dataset with all training data used, where the result of ConVIRT is similar to that of Caption-Transformer. Most notably, on all datasets, with only 10% labeled training data ConVIRT achieves classification results that are better than or close to those of ImageNet initialization with 100% training data.

We also notice that our conclusion on using ImageNet versus random initialization differs from that of Raghu et al. (2019): while they showed comparable results from the two strategies, we find that using ImageNet initialization is still superior to random initialization in most results, justifying its popularity. Upon closer examination, we conjecture that this is likely due to under-optimization of their models: while our ResNet50 with random initialization achieves an average AUC of 85.8 on the CheXpert dataset, their ResNet50 model only achieved 83.5 AUC on the same evaluation set." }, { "heading": "4.2 RETRIEVAL TASKS", "text": "We present the zero-shot image-image and text-image retrieval results in Table 2.
For the image-image retrieval setting, we present additional results from fine-tuning our pretrained model on all CheXpert training data, and use them as “upper bounds” on the results obtainable with supervised labels. We find that: 1) using ImageNet pretrained CNN weights in a zero-shot image retrieval setting is only better than random guessing by small margins; 2) all in-domain pretrained CNN weights achieve much better retrieval performance than ImageNet weights; and 3) our proposed ConVIRT pretraining achieves the best overall retrieval results on all metrics. We find that while Contrastive-Binary performs notably better than other baselines in the image-image retrieval setting, its text-image retrieval results are far behind those of ConVIRT pretraining. We conjecture that the lack of an explicit similarity-based loss function in the Contrastive-Binary baseline results in misaligned representations in the image and text space, leading to poor results in text-image retrieval.

To understand how well ConVIRT pretraining helps separate images from different abnormality categories in its encoding space, in Figure 3 we present t-SNE plots (Maaten & Hinton, 2008) of candidate images in the CheXpert 8x200 dataset for five selected categories, from the ImageNet pretrained CNN encoder and the ConVIRT pretrained encoder. It is worth noting that clustering images in our setting is much more challenging than in the general object classification setting due to the high inter-class similarity of the medical images. Nevertheless, we find that ConVIRT pretraining achieves a better clustering of the images in the t-SNE plots. On the other hand, the lack of clear separations between groups suggests room for further improvement." }, { "heading": "5 ANALYSIS AND DISCUSSION", "text": "Comparisons to Image-only Contrastive Learning.
ConVIRT shows superior results to the baselines in our evaluation, but an important question remains as to how it compares against existing image-only contrastive visual representation learning methods. We study this by running two such popular methods, SimCLR (Chen et al., 2020a) and MoCo v2 (Chen et al., 2020b), on the same collection of images that we used in our pretraining. We present the results in Table 3 and include model training details in Appendix E. We find that compared to ImageNet initialization, both contrastive methods lead to marginal to moderate improvements on the classification and retrieval tasks. However, our training strategy substantially outperforms both methods on all tasks, demonstrating its effective use of information from the paired text data.

To understand the representational difference that has led to this difference in performance, for all four initialization methods, we visualize in Figure 4 the saliency maps (Simonyan et al., 2014) corresponding to the correct class on sampled images from the CheXpert dataset. Models for all initialization methods are trained with 1% of the CheXpert training data under the linear classification setting (with pretrained CNN weights frozen). We find that ImageNet pretraining has led to models that focus on trivial visual features that are mostly irrelevant to the task, and that the model with ConVIRT pretrained weights has focused on much more relevant areas than those with SimCLR and MoCo v2 pretraining, suggesting more effective representation learning. For example, for atelectasis, while the ConVIRT model has correctly focused on the bottom of the lung regions, the SimCLR model has much more scattered focus and the MoCo model has incorrectly focused on the heart region.

Correlation Between Contrastive Loss and End Task Performance.
To understand the relation between a model’s performance on the ConVIRT pretraining task and its performance on the downstream tasks, we ran an analysis where, for every 5 epochs during pretraining, we transferred the pretrained checkpoint to the downstream tasks and evaluated its performance. The pretraining was run for a total of 200 epochs, and 40 points were obtained with varying validation loss and end task results. Figure 5 presents the model’s validation loss on the pretraining task against its achieved performance on the RSNA 1% data linear evaluation and the two retrieval tasks. For all three tasks, we find a clear positive correlation between the pretraining performance and the end task performance. This corroborates that by learning with the ConVIRT objective, the image encoder learns gradually improved representations for the end tasks, and suggests that further improvement on the pretraining task may have a positive impact on the end task performance.

Hyperparameter Analysis. We run experiments to study the impact of hyperparameters, and find that: 1) similar to previous work on image-only contrastive learning (Chen et al., 2020a), the pretraining results are most sensitive to the choice of the temperature value τ; 2) unlike previous work, changing batch size does not lead to substantial change in the classification results; and 3) using linear projection heads instead of non-linear ones notably hurts the retrieval results. We include our detailed comparisons in Appendix F." }, { "heading": "6 RELATED WORK", "text": "Our work is most relevant to existing work on medical image classification, which we have covered in Section 1, and textual report generation from medical images (Wang et al., 2018; Jing et al., 2018; Liu et al., 2019). A dominant approach for initializing medical image encoders in this work has been using encoder weights pretrained on ImageNet, despite the drastic difference in image characteristics (Raghu et al., 2019).
Instead, our work proposes an alternative in-domain pretraining strategy, and compares ImageNet pretraining with different pretraining approaches that also use the paired text data. To our knowledge, our work represents the first systematic attempt in this direction.

Our work is inspired by the recent line of work on image view-based contrastive visual representation learning (Hénaff et al., 2020; Chen et al., 2020a; He et al., 2020; Grill et al., 2020), but differs from existing studies in its use of contrastive learning with the text modality, which, as we show in Section 5, is more effective in learning high-quality representations of medical images.

Another line of work related to ours is visual-linguistic representation learning (Lu et al., 2019; Tan & Bansal, 2019; Su et al., 2020). Among existing studies, Ilharco et al. (2020) and Gupta et al. (2020) used a cross-modality contrastive objective related to ours, but for the purpose of probing visual-linguistic models and learning phrase grounding, respectively. Our work differs from this line of work in several crucial ways: 1) existing work in visual-linguistic learning focused on learning visual representations from paired text via a binary contrastive prediction task, whereas we show the superior performance of the new cross-modality NCE objectives in our setting; 2) existing work has primarily used object representations extracted from image segmentation models in its preprocessing steps, making it less applicable to medical image understanding tasks where anatomical segmentations are extremely hard to obtain; 3) while existing work has run evaluation primarily on visual-linguistic tasks such as visual question answering, we instead focus on evaluation with classification and retrieval tasks, which are at the center of medical image understanding research."
}, { "heading": "7 CONCLUSION", "text": "We presented ConVIRT, an unsupervised method for learning medical visual representations from naturally occurring pairing of images and text. Our method relies on contrasting the image representations with the paired text data via a bidirectional objective between the two modalities. On 4 medical image classification tasks and 2 image retrieval tasks, ConVIRT outperformed other strong in-domain initialization methods that also use the text data, and led to representations with notably higher quality. Compared to ImageNet pretraining, ConVIRT is able to achieve the same level of classification accuracy with an order of magnitude less labeled data. We hope that our work can inspire future work that makes more efficient use of textual data for medical image understanding." }, { "heading": "A MODEL IMPLEMENTATION AND PRETRAINING DETAILS", "text": "Dataset Preprocessing. For the MIMIC-CXR chest radiograph dataset, we use the publicly available JPG version of it.1 For both the MIMIC-CXR chest dataset and the Rhode Island Hospital bone image datasets, we resize the image files to have a size of 256 on the larger side. For the textual radiology report data, we first tokenize all reports with the default English tokenizer in version 4.0.0 of the CoreNLP library (Manning et al., 2014). Next, we keep only the Findings and Impression sections and remove all other sections. We remove all image-text pairings from the dataset where the text section is empty or has less than 3 tokens. This preprocessing procedure gives us about 217k total image-text pairs for pretraining our chest image encoder and 48k total pairs for pretraining our bone image encoder.\nImage and Text Encoders. For the image encoder, we use the standard ResNet50 implementation provided by the torchvision library. 
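For concreteness, the non-linear projection head gv(·) = W^(2) σ(W^(1)(·)) described in Section 2.3 is just a single hidden layer with a ReLU. A minimal NumPy sketch follows; the hidden width and the weight scale here are illustrative assumptions (the paper specifies only the output dimension d = 512), and the function name is our own:

```python
import numpy as np

def projection_head(h, W1, W2):
    """g(h) = W2 @ relu(W1 @ h): maps an encoder output h into the
    shared d-dimensional space used by the contrastive loss."""
    return W2 @ np.maximum(W1 @ h, 0.0)

# Illustrative shapes: ResNet50 global-pools to a 2048-dim vector, d = 512.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.02, size=(512, 2048))  # hidden width 512 (assumed)
W2 = rng.normal(scale=0.02, size=(512, 512))
h = rng.normal(size=2048)                      # a mock encoder output
v = projection_head(h, W1, W2)                 # v lives in the shared space
```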
For the text encoder, we use the BERT base encoder offered by the Transformers library (Wolf et al., 2019) and initialize it with the ClinicalBERT model (Alsentzer et al., 2019) pretrained on the MIMIC clinical notes. We also experimented with training a specialized BERT encoder on a large collection of radiology notes but found that it made no substantial difference in the pretraining results. At pretraining time we freeze the embeddings and the first 6 layers of this BERT encoder, and only fine-tune the last 6 layers for our contrastive task.

Other Hyperparameters. For contrastive learning, we use projection layers with an output dimension d = 512, a temperature value τ = 0.1, and a loss weight λ = 0.75. These hyperparameter settings were obtained by comparing the linear evaluation validation scores on the RSNA image classification task with the pretrained ResNet50 weights. For the image transformation family T , we adopt the implementations offered by the torchvision library.2 We apply random cropping with a ratio sampled from [0.6, 1.0]; horizontal flipping with p = 0.5; affine transformation with a degree sampled from [−20, 20], max horizontal and vertical translation fractions of 0.1, and a scaling factor sampled from [0.95, 1.05]; color jittering with brightness and contrast adjustment ratios sampled from [0.6, 1.4]; and Gaussian blur with σ ∈ [0.1, 3.0]. All images are resized to 224×224 after the transformation tv is applied. Limited by computational resources, we arrived at these image transformation parameters via preliminary experiments rather than a systematic search.

Pretraining Details. At pretraining time, for each dataset, we randomly sample 5k image-text pairs to form a held-out validation set. We use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 1e-4 and weight decay of 1e-6. We initialize the image encoder with ImageNet pretrained weights at the beginning of pretraining, and use a fixed batch size of 32.
We calculate the validation loss every 5000 steps, and if the validation loss does not decrease after 5 straight evaluation runs, we anneal the learning rate by a factor of 0.5. We stop pretraining after 200 evaluation runs, and save the model checkpoint that achieves the lowest validation loss. For efficiency, we employ mixed-precision training, and for reference, the whole pretraining run on the MIMIC-CXR dataset took about 3 days on a single Titan RTX GPU card.

B IMAGE CLASSIFICATION EXPERIMENTS

We prepared and used the 4 image classification datasets following the procedures below:

1. RSNA Pneumonia Detection (Wang et al., 2017; Shih et al., 2019): we used the original version of this dataset available at its Kaggle page,3 which contains 25184/1500/3000 annotated images in its training/validation/test sets, respectively.

2. CheXpert image classification (Irvin et al., 2019): we downloaded the original version of this dataset from its official website.4 Since the original expert-labeled test set of this dataset is hidden and not included as part of the release, we instead followed Raghu et al. (2019) and used the original expert-labeled validation set as our test set, and randomly sampled 5000 images from the original training set for validation purposes. The resulting dataset contains 218414/5000/234 images in each split.

1 https://physionet.org/content/mimic-cxr-jpg/2.0.0/
2 https://github.com/pytorch/vision
3 https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
4 https://stanfordmlgroup.github.io/competitions/chexpert/

3. COVIDx image classification (Wang & Wong, 2020): we prepared this dataset following the scripts provided by its authors.5 We used version 4 of this dataset, the latest version at the time of this work. We additionally randomly sampled 300 images from the training set for validation, resulting in a dataset with 13598/300/300 images in each split.
4.
MURA bony abnormality detection (Rajpurkar et al., 2018a): we downloaded the original version of this dataset from its website.6 Similar to the CheXpert dataset, we again used the original validation set as our test set, and randomly sampled 10% of the images from the training set for validation, resulting in a dataset with 33078/3730/3197 images in each split. Different from the other 3 datasets, the MURA dataset uses patient-level evaluation, meaning that the prediction results from different images of the same patient need to be aggregated to produce a final prediction for the patient, which is then scored against the gold patient label. We therefore followed Rajpurkar et al. (2018a) and at test time aggregated results for a patient by averaging the predicted probabilities from multiple images.
Classification Model Training Details. For all models that require ImageNet pretrained initialization, we use the pretrained weights from torchvision, which achieve an ImageNet top-5 error rate of 7.13%. For all datasets, we first zero-pad the input image to be square, and then resize it to 224×224. For training, we use the Adam optimizer with an initial learning rate of 1e-3 for the COVIDx task and 1e-4 for the other three tasks. We additionally apply a weight decay of 1e-6 and a dropout before the last classification layer with p = 0.2 in all tasks. All classification models are trained with a batch size of 64. In the fine-tuning evaluation setting, we first “warmup” the classification head by freezing the CNN weights and only training the classification head with a learning rate of 1e-3 for 200 steps, after which we unfreeze the CNN weights and fine-tune the entire network together. The validation score is obtained after each epoch of training, and we anneal the learning rate by a factor of 0.5 if the validation score has not improved for 3 epochs.
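The patience-based annealing and early-stopping schedule used here can be sketched as a small helper. The class name and interface are ours, not from the released code; the thresholds (anneal by 0.5 after 3 stagnant epochs, stop after 10) follow the schedule described in this section.

```python
class PatienceScheduler:
    """Anneal the learning rate when the validation score stagnates, and
    signal a training stop after a longer stretch of stagnation."""

    def __init__(self, lr, anneal_patience=3, stop_patience=10, factor=0.5):
        self.lr = lr
        self.anneal_patience = anneal_patience
        self.stop_patience = stop_patience
        self.factor = factor
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_score):
        """Record one epoch's validation score; return False to stop training."""
        if val_score > self.best:
            self.best = val_score
            self.bad_epochs = 0
            return True
        self.bad_epochs += 1
        # Anneal every `anneal_patience` consecutive epochs without improvement.
        if self.bad_epochs % self.anneal_patience == 0:
            self.lr *= self.factor
        return self.bad_epochs < self.stop_patience
```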
The training is stopped after no validation improvement is observed for 10 straight epochs, at which point the model checkpoint with the highest validation score is evaluated on the test set.
C IMAGE-IMAGE RETRIEVAL DATASET COLLECTION
We create the CheXpert 8×200 Retrieval Dataset with 8 different abnormality categories commonly found in chest radiograph images, including atelectasis, cardiomegaly, edema, fracture, pleural effusion, pneumonia, pneumothorax and a special no finding category indicating that no obvious abnormality is found in the image. We create the dataset by reusing existing rule-labeled annotations in the CheXpert dataset (Irvin et al., 2019) and additional expert annotations. To create the candidate images for a category label `, we go through all images in the CheXpert training set, and keep an image as a candidate image only if its label for ` is positive and its labels for all other categories are negative. We only include images with this “exclusive positivity” as candidate images, mainly to avoid confounding results between categories in retrieval evaluation.
To create the query images for a category `, we again first pre-select 50 exclusively positive images for this category in the CheXpert training set (with all candidate images excluded). Next, we ask a board-certified radiologist to examine each of the 50 images, and exclude images that: 1) might indicate additional abnormalities other than `, 2) have uncommon color or contrast distortions in the image, or 3) are not well posed during the capture of the image. This procedure is mainly to avoid including query images that have uncommon features and may therefore bias the retrieval evaluation results. At the end, we aggregate the annotation results from the radiologist and keep 10 query images for each abnormality category."
}, { "heading": "D TEXT-IMAGE RETRIEVAL DATASET COLLECTION", "text": "For the text-image retrieval dataset, we first reuse all candidate images from the CheXpert 8×200 image-image retrieval dataset described above, with 200 images for each of 8 categories. To create\n5https://github.com/lindawangg/COVID-Net 6https://stanfordmlgroup.github.io/competitions/mura/\nthe textual queries for each abnormality category, we ask a board-certified radiologist to write at least 5 different sentences that he will use to describe this abnormality in radiology reporting. We additionally set the following requirements: 1) the sentences must describe the category with no ambiguity and must not include other categories; 2) the sentences must be diverse from each other; and 3) the sentences should not include very specific anatomic locations or rare clinical observations. At the end, we aggregate the results and keep 5 textual queries for each abnormality category. For reference, we present example textual queries in Table 4." }, { "heading": "E EXPERIMENTS ON IMAGE-ONLY CONTRASTIVE LEARNING METHODS", "text": "We run experiments with two popular image-only contrastive visual representation learning methods: SimCLR (Chen et al., 2020a) and MoCo v2 (Chen et al., 2020b). For a fair comparison, in both experiments we use the exact same set of images from the MIMIC-CXR dataset that we use in the pretraining of our method and the baselines. Our settings for each method are:\n• SimCLR: We use the open PyTorch implementation available at https://github.com/ sthalles/SimCLR. For image encoder we use ResNet50. We use cosine similarity in the loss function, set the temperature value to 0.1 and set the output dimension to 128. We use the default image augmentation functions in the paper except for the color jittering transformation where we set the saturation and hue adjustment to 0 due to the monochrome nature of our medical images. 
For training, we use the Adam optimizer with an initial learning rate of 3e-4 and weight decay of 1e-4. We set batch size to 128 and run training on a single GPU card for 100 epochs, as we find that increasing the batch size or number of epochs does not lead to improved results. We use the default settings for all other parameters.\n• MoCo v2: We use the authors’ PyTorch implementation available at https://github.com/ facebookresearch/moco. For image encoder we use ResNet50. We follow the default MoCo v2 setting and use a temperature value of 0.07 and an output dimension of 128. Similarly, we adopt the default image augmentation functions except for the color jittering transformation where we set the saturation and hue adjustment to 0. For training, we use the SGD optimizer with a learning rate of 0.0075 and weight decay of 1e-4. We use a batch size of 64 and a queue size of 4096, and run parallel training on two GPU cards for 100 epochs, as we find that further increasing the batch size or number of epochs does not lead to improved results. During training, we anneal the learning rate by a factor of 0.1 at the 60th and 80th epochs." }, { "heading": "F HYPERPARAMETER ANALYSIS", "text": "Similar to previous work on unsupervised image representation learning (Chen et al., 2020a; He et al., 2020), we first find that the effectiveness of ConVIRT pretraining is most sensitive to the temperature value τ . As shown in Table 5, using a temperature much lower than the ideal value (τ = 0.01) hurts the retrieval results, and a temperature much larger (τ = 1) notably hurts the performance on all tasks. Unlike previous work, we find that using a smaller or larger batch size hurts the retrieval performance, but neither setup brings substantial impact to the classification results. Lastly, we find that replacing the non-linear projection heads in gv and gu with linear layers hurts the\nretrieval results moderately, suggesting worse representations. 
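To make the temperature's effect concrete, here is a toy, pure-Python version of a temperature-scaled contrastive (InfoNCE) loss for one image-text pair in a batch; a lower τ sharpens the softmax over batch similarities. The function names are ours, and the real implementation operates on tensors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(image_embs, text_embs, i, tau=0.1):
    """Image-to-text loss for pair i: negative log-softmax of the true
    pair's cosine similarity against all texts in the batch, scaled by tau."""
    sims = [cosine(image_embs[i], t) / tau for t in text_embs]
    return -(sims[i] - math.log(sum(math.exp(s) for s in sims)))
```

With well-separated pairs, a moderate temperature like 0.1 yields a near-zero loss, while τ = 1 leaves a substantially larger loss for the same embeddings, which is consistent with the sensitivity to τ observed above.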
However, this is again not reflected notably in the RSNA classification results." } ]
2,020
null
SP:4514e92c7a02cd2765a9cc4b35392594b022fa3e
[ "This paper works on long-tailed classification. The authors conducted an analysis and claimed that the difference of gradients computed on the head and tail classes plays an important role in the performance drop. The authors then proposed a two-stage approach to first train on the head classes and then train on the tail classes in an incremental learning fashion. The proposed algorithm achieved better performance than existing methods on benchmark datasets.", "This paper proposes an interesting view to analyze the long-tailed problem. It states that the gradients are dominated by the head classes so that the tail classes perform poorly. From this observation, the authors propose a dual-phase approach that first train $W_r, W_c^1$ with only head-class data, and extend to train $W_r, W_c$ with tail-class data and the constructed exemplar memory bank for head classes with a newly proposed memory retentive loss." ]
This work explores deep learning based classification models on real-world datasets with a long-tailed distribution. Most previous works deal with the long-tailed classification problem by re-balancing the overall distribution within the whole dataset or directly transferring knowledge from data-rich classes to data-poor ones. In this work, we consider the gradient distortion in long-tailed classification that arises when the gradients on data-rich classes and data-poor ones are incorporated simultaneously, i.e., the gradient direction is shifted towards data-rich classes and the variance is enlarged by the gradient fluctuation on data-poor classes. Motivated by this phenomenon, we propose to disentangle the distinctive effects of the data-rich and data-poor gradients and asynchronously train a model via a dual-phase learning process. The first phase only concerns the data-rich classes. In the second phase, besides the standard classification upon data-poor classes, we propose an exemplar memory bank to reserve representative examples and a memory-retentive loss via graph matching to retain the relation between the two phases. The extensive experimental results on four commonly used long-tailed benchmarks including CIFAR100-LT, Places-LT, ImageNet-LT and iNaturalist 2018 highlight the excellent performance of our proposed method.
[]
[ { "authors": [ "Jonathon Byrd", "Zachary Lipton" ], "title": "What is the effect of importance weighting in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "Nikos Arechiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nitesh V Chawla", "Kevin W Bowyer", "Lawrence O Hall", "W Philip Kegelmeyer" ], "title": "Smote: synthetic minority over-sampling technique", "venue": "Journal of artificial intelligence research,", "year": 2002 }, { "authors": [ "Peng Chu", "Xiao Bian", "Shaopeng Liu", "Haibin Ling" ], "title": "Feature space augmentation for long-tailed data", "venue": "arXiv preprint arXiv:2008.03673,", "year": 2020 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Chris Drummond", "Robert C Holte" ], "title": "C4. 
5, class imbalance, and cost sensitivity: why undersampling beats over-sampling", "venue": "In Workshop on learning from imbalanced datasets II,", "year": 2003 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hui Han", "Wen-Yuan Wang", "Bing-Huan Mao" ], "title": "Borderline-smote: a new over-sampling method in imbalanced data sets learning", "venue": "In International conference on intelligent computing,", "year": 2005 }, { "authors": [ "Munawar Hayat", "Salman Khan", "Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Max-margin class imbalanced learning with gaussian affinity", "venue": null, "year": 1901 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Chen Huang", "Yining Li", "Change Loy Chen", "Xiaoou Tang" ], "title": "Deep imbalanced learning for face recognition and attribute prediction", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Muhammad Abdullah Jamal", "Matthew Brown", "Ming-Hsuan Yang", "Liqiang Wang", "Boqing Gong" ], "title": "Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "In Eighth International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Salman 
Khan", "Munawar Hayat", "Syed Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Striking the right balance with uncertainty", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Salman H Khan", "Munawar Hayat", "Mohammed Bennamoun", "Ferdous A Sohel", "Roberto Togneri" ], "title": "Cost-sensitive learning of deep feature representations from imbalanced data", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Wanli Ouyang", "Xiaogang Wang", "Cong Zhang", "Xiaokang Yang" ], "title": "Factors in finetuning deep model for object detection with long-tail distribution", "venue": "In Proceedings of the IEEE conference on 
computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "William J Reed" ], "title": "The pareto, zipf and other power laws", "venue": "Economics letters,", "year": 2001 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "arXiv preprint arXiv:1803.09050,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weightnet: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Grant Van Horn", "Pietro Perona" ], "title": "The devil is in the tails: Fine-grained classification in the wild", "venue": "arXiv preprint arXiv:1709.01450,", "year": 2017 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Learning to model the tail", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xi Yin", "Xiang Yu", "Kihyuk Sohn", "Xiaoming Liu", "Manmohan Chandraker" ], "title": "Feature transfer learning for face recognition with under-represented data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { 
"authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Xiao Zhang", "Zhiyuan Fang", "Yandong Wen", "Zhifeng Li", "Yu Qiao" ], "title": "Range loss for deep face recognition with long-tailed training data", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 5409–5418,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Boyan Zhou", "Quan Cui", "Xiu-Shen Wei", "Zhao-Min Chen" ], "title": "Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Linchao Zhu", "Yi Yang" ], "title": "Inflated episodic memory with region self-attention for long-tailed visual recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Past years have witnessed huge progress in visual recognition with the successful application of deep convolutional neural networks (CNNs) on large-scale datasets, e.g., ImageNet ILSVRC 2012 (Russakovsky et al., 2015), Places (Zhou et al., 2017). Such datasets are usually artificially collected and exhibit an approximately uniform distribution over the number of samples in each class. Real-world datasets, however, are typically long-tailed, in that only a few classes occupy the majority of instances in the dataset (data-rich) and most classes have only a few samples (data-poor) (Reed, 2001; Van Horn & Perona, 2017). When modeling such datasets, many standard methods suffer from severe degradation of overall performance. More specifically, the recognition ability on classes with very few instances is significantly impaired (Liu et al., 2019).
One prominent direction is to apply class re-sampling or loss re-weighting to balance the influence of different classes (Byrd & Lipton, 2019; Shu et al., 2019), and another alternative is to conduct knowledge transfer (Wang et al., 2017; Liu et al., 2019) under the assumption that knowledge obtained on the data-rich classes should benefit the recognition of data-poor classes. Recently, more sophisticated models have been designed either based on some new findings (Zhou et al., 2020; Kang et al., 2020) or by combining all available techniques (Zhu & Yang, 2020).
However, the nature of the long-tailed setting still makes it difficult to achieve gains as large as those on balanced datasets.
In contrast to the aforementioned strategies, we approach the long-tailed recognition problem by analyzing gradient distortion in long-tailed data, which we attribute to the interaction between gradients generated by data-rich and data-poor classes, i.e., the direction of the overall gradient is shifted to be closer to the gradient on data-rich classes, and its norm variance is increased due to the dramatic variation in the gradient generated by data-poor classes. The degenerated performance compared with balanced datasets indicates that the gradient distortion is harmful during model training. Motivated by this, we hypothesize that the combined treatment of gradients generated by data-rich and data-poor classes could be improper in long-tailed data, and attempt to disentangle these two gradients. We thus propose the conception of asynchronous modeling and split the original network to promote a dual-phase learning process, along with a partition of the given dataset. In phase I, the data-rich classes keep the bulk of the original dataset. This facilitates better local representation learning and more precise classifier boundary determination by eliminating the negative gradient interaction produced by data-poor classes. Based on the model learned in phase I, we involve the remaining data to explore new boundaries in the second phase.
When transitioning from the first phase to the second, we aim to preserve the knowledge learned in the first phase. Specifically, we design an exemplar memory bank and introduce a memory-retentive loss. The memory bank reserves a few of the most prominent examples from classes in the first phase and collaborates with data in the second phase for classification.
Also, the combined data, together with the new memory-retentive loss, tries to preserve old knowledge when the model adapts to new classes in the second phase.
In the experiments, we evaluate the proposed asynchronous modeling strategy by comparing to typical strategies, which include the re-balancing based methods (Cao et al., 2019) and transferring based methods (Liu et al., 2019). Furthermore, we also consider the latest, more sophisticated works, like BBN (Zhou et al., 2020) and IEM (Zhu & Yang, 2020). The comprehensive study and comparison across four commonly used long-tailed benchmarks, including CIFAR100-LT, Places-LT, ImageNet-LT and iNaturalist 2018, validate the efficacy of our method." }, { "heading": "2 RELATED WORK", "text": "Class re-sampling. Most works along this line can be categorized as over-sampling of tail classes (Chawla et al., 2002; Han et al., 2005; Byrd & Lipton, 2019) or under-sampling of head classes (Drummond et al., 2003). While the idea of re-sampling makes the overall distribution more balanced, it may encounter the problem of over-fitting on rare data and the loss of critical information on dominant classes (Chawla et al., 2002; Cui et al., 2019), thus hurting the overall generalization. Beyond that, Ouyang et al. (2016); Liu et al. (2019) also involve a more refined idea of fine-tuning after representation extraction to adjust the final decision boundary.
Loss re-weighting. Methods based on loss re-weighting generally allocate larger weights to tail classes to increase their importance (Lin et al., 2017; Ren et al., 2018; Shu et al., 2019; Cui et al., 2019; Khan et al., 2017; 2019; Huang et al., 2019). However, direct re-weighting methods are difficult to optimize when tackling a large-scale dataset (Mikolov et al., 2013). Recently, Cao et al. (2019) considers the margins of the training set and introduces a label-distribution-aware loss to enlarge the margins of tail classes. Hayat et al. 
(2019) proposes the first hybrid loss function to jointly cluster and classify feature vectors in the Euclidean space and to ensure uniformly spaced and equidistant class prototypes.
Knowledge transfer. Along this line, methods based on knowledge transfer handle the challenge of imbalanced datasets by transferring the information learned on head classes to assist tail classes. While Wang et al. (2017) proposes to transfer meta-knowledge from the head in a progressive manner, recent strategies take into consideration intra-class variance (Yin et al., 2019), semantic features (Liu et al., 2019; Chu et al., 2020) or domain adaptation (Jamal et al., 2020).
Recently, BBN (Zhou et al., 2020) and LWS (Kang et al., 2020) have advanced the landscape of the long-tailed problem based on some insightful findings. The former asserts that prominent class re-balancing methods can impair representation learning, and the latter claims that data imbalance might not be an issue in learning high-quality representations. IEM (Zhu & Yang, 2020) designs a more complex model that combines available techniques, like feature transfer and attention. In this paper, we are motivated by gradient distortion in long-tailed data, which is caused by the gradient interaction between data-rich classes and data-poor classes. We thus propose to split the learning stage into two phases. We demonstrate that this separation allows straightforward approaches to achieve high recognition performance, without introducing extra parameters." }, { "heading": "3 OUR METHOD", "text": "Let X = {xi, yi}, i ∈ {1, ..., n} be the training set, where xi is the training data and yi is its corresponding label. The number of instances in class j is denoted as nj and the total number of training samples is denoted as $n = \sum_{j=1}^{C} n_j$, where C is the number of classes. Without loss of generality, we assume that the classes are sorted in decreasing order, that is, if i > j, then ni ≤ nj. 
We define the whole network as f(x; [Wr;Wc]), where f is the implemented deep learning model with parameters Wr for representation learning and parameters Wc for classification, and x is the input." }, { "heading": "3.1 GRADIENT DISTORTION IN LONG TAIL", "text": "Given a long-tailed dataset, our goal is to achieve better overall performance across all classes. In contrast to previous common heuristics (e.g., resampling, reweighting and feature transfer), we revisit the problem of long-tailed classification from the perspective of gradient distortion. The overall gradient for updating is modulated by the gradients generated by data-rich classes in the head and data-poor classes in the tail. To state the details, we visualize the associated metrics in the training process of vanilla CIFAR100 and long-tailed CIFAR100 (CIFAR100-LT) in Fig. 1. Specifically, the cosine similarity between the gradients is visualized in Fig. 1(a) (CIFAR100-LT) and Fig. 1(b) (vanilla CIFAR100). Similarly, the norm of each gradient is recorded in Fig. 1(c) (CIFAR100-LT) and Fig. 1(d) (vanilla CIFAR100). The higher similarity between the overall gradient and the datarich gradient indicates that the overall gradient is shifted to the direction of the data-rich gradient. Meanwhile, the norm variance of overall gradient is enlarged due to more dramatic fluctuation of the gradient on data-poor classes. Motivated by the degenerated performance in long-tailed dataset, it is hypothesized that synchronous application of two distinctive gradients could impair the overall performance." }, { "heading": "3.2 ASYNCHRONOUS MODELING", "text": "Rather than directly regulating the overall gradient as previous methods, we begin with the disentanglement of two gradients and propose a dual-phase asynchronous modeling strategy. The data from data-rich classes is first considered in model training and then the rest classes are involved. 
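Before detailing the two phases, note that the gradient diagnostics of Section 3.1 (Fig. 1) can be reproduced schematically: represent each gradient as a flat list of floats, sum the data-rich and data-poor parts to get the overall gradient, and compare directions and norms. All names below are illustrative, not from the paper's code.

```python
import math

def cos_sim(g1, g2):
    """Cosine similarity between two flattened gradients."""
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    return dot / (n1 * n2)

def grad_stats(g_rich, g_poor):
    """Overall gradient = data-rich part + data-poor part of one mini-batch;
    report its alignment with each part and its norm."""
    g_all = [a + b for a, b in zip(g_rich, g_poor)]
    return {
        "cos_rich": cos_sim(g_all, g_rich),
        "cos_poor": cos_sim(g_all, g_poor),
        "norm_all": math.sqrt(sum(a * a for a in g_all)),
    }
```

When the data-rich part dominates in magnitude, `cos_rich` exceeds `cos_poor`, which is the shifted-direction effect the section describes.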
Such asynchronous operation not only reduces the potential disturbance between the two gradients, but also ensures that the benefits of each gradient are exploited. Mathematically, the original dataset is X with C classes. Suppose C1 classes are considered in phase I; we then write X1 as the set of data from the C1 classes. The data in the remaining C2 classes is denoted as X2, where C2 = C − C1. Accordingly, the parameters Wc for C classes in f(x; [Wr,Wc]) are truncated as $W_c^1$ for the C1 classes in the first phase." }, { "heading": "3.2.1 LEARNING IN THE FIRST PHASE", "text": "In model learning from data X1, the consideration of the gradient on data-poor categories is avoided, which keeps the truncated model $f(x; [W_r; W_c^1])$ more concentrated. In optimization, the cross-entropy loss over the classes in X1 is minimized with respect to the parameters Wr and $W_c^1$:
$L_1 = -\sum_{(x,y)\in X_1} y \log f(x; [W_r; W_c^1])$. (1)
For further improvement in the training, some balanced sampling strategies could be incorporated in this phase. For example, the progressively-balanced strategy in (Kang et al., 2020) combines instance-balanced sampling and class-balanced sampling, that is, $p_j(t) = (1 - \frac{t}{T})\frac{n_j}{\sum_{i=1}^{C} n_i} + \frac{t}{T}\cdot\frac{1}{C}$, where $p_j(t)$ denotes the sampling probability for class j in training epoch t. It is computed as a linear combination of the instance-based probability $\frac{n_j}{\sum_{i=1}^{C} n_i}$ and the class-based probability $\frac{1}{C}$. T is the total epoch number." }, { "heading": "3.2.2 JOINT PREDICTION IN THE SECOND PHASE", "text": "We wish to involve the data in X2 to obtain a complete model across all C classes for overall evaluation. To do so, on the basis of the parameters Wr obtained in phase I, we introduce the classifier parameters $W_c^2$ for the recognition of the new classes in X2. Similar to phase I, the standard cross-entropy loss across all data in X2 is considered. However, training solely on data X2 tends to forget the knowledge learned in the first phase.
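The progressively-balanced sampling probability of (Kang et al., 2020) quoted above can be computed directly from the class counts; the function name below is ours.

```python
def progressive_prob(j, t, T, counts):
    """Progressively-balanced sampling probability for class j at epoch t:
    a linear interpolation between instance-balanced sampling
    n_j / sum_i n_i and class-balanced sampling 1 / C."""
    C = len(counts)
    instance_based = counts[j] / sum(counts)
    class_based = 1.0 / C
    return (1 - t / T) * instance_based + (t / T) * class_based
```

At t = 0 this is pure instance-balanced sampling, at t = T it is pure class-balanced sampling, and the probabilities sum to 1 at every epoch.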
To tackle this obstacle, we design a memory bank and a memory-retentive loss to realize a seamless connection between the two data splits. First, representative samples in X1 are retained in an augmented memory module to enable the joint prediction over all classes. Second, the examples reserved in the memory are combined with X2 and collaboratively trained with a unified memory-retentive loss.
Exemplar memory bank. To maintain the knowledge obtained in the first phase, we design an exemplar memory bank that selects only a few of the most representative samples from the classes in X1. For simplicity, the number of selected samples from each class is set to be equal. We denote the reserved data in the memory bank as M. Ideally, the most representative examples are the samples that are closest to the center of each class. However, a precise class center is not always accessible. Thus in practice, the center is progressively estimated by accessing the entries generated in previous steps to infer the new entry in the memory bank.
Without loss of generality, we consider class j in dataset X1 to demonstrate the detailed operation. We first compute the average feature over all examples of class j in the original training set X1 to serve as a class prototype cj, which is thus the initial estimate of the class center. We return the instance which is closest to cj in X1 and set it as the first selected sample for the memory bank,
$m_1 = \arg\max_{x_i \in X_1} s(c_j, x_i)$, (2)
where s is a vector space similarity metric, like cosine similarity. m1 is used to denote the returned sample xi. Before selecting the remaining instances from X1, we need to update the estimated center cj. Without loss of generality, suppose we have selected k samples from X1 and denote the feature maps of the data in the memory bank as $M_j = [m_1, m_2, .., m_k] \in \mathbb{R}^{k \times d}$, where d is the dimension of each feature map.
Each sample in Mj serves as a guided hypothesis and its correlation with cj can then be computed for the new state zk+1, that is,
$p_i = \frac{\exp(s(c_j, m_i))}{\sum_{i} \exp(s(c_j, m_i))}$, (3)
$z_{k+1} = \sum_{i=1}^{k} p_i m_i = p M_j$, (4)
where s is the same similarity metric as above. pi is computed from the distances between the selected data and the center prototype, and it serves as the weights to update the state zk+1. zk+1 is the weighted average of all feature maps in Mj. A new sample can then be returned for the (k+1)-th step by performing
$m_{k+1} = \arg\max_{x_i \in X_1} s(c_j + \Delta, x_i)$, (5)
where ∆ is the residual between cj and zk+1, i.e., $\Delta = c_j - z_{k+1}$. mk+1 is used to denote the returned sample xi.
Memory-retentive loss. Based on the memory bank, we obtain a combined data set D by extending X2 with the examples in the memory bank M, i.e., $D = M \cup X_2$. Similarly, the joint prediction with a cross-entropy loss is first considered. When the model is adapted to fit data X2, the knowledge learned on X1 tends to be forgotten. We thus introduce a new memory-retentive loss $L^G_{dis}$ based on graph matching, which provides a strong constraint for memorizing previous knowledge. Specifically, the feature map of each datum in the training set D is a node in a graph. Based on the model learned in the first phase and the new model to be trained in the second phase, two graphs Gold and Gnew can thus be constructed. That is, we not only consider the feature similarity of a single example on the old model and the new model, but also compute the global matching similarity on the whole training set D. Suppose the feature map of a node in Gold is zi and that of a node in Gnew is ẑi; the similarity between the old graph Gold and the new graph Gnew is then measured by computing the change between any node zi in Gold and any node ẑj in Gnew, that is,
$a_{ji} = \frac{\exp(s(z_i, \hat{z}_j))}{\sum_{j'} \exp(s(z_i, \hat{z}_{j'}))}$, $z_i \in G_{old}$, $\hat{z}_j, \hat{z}_{j'} \in G_{new}$, (6)
$\mu_i = \sum_j a_{ji} \|z_i - \hat{z}_j\|$, $z_i \in G_{old}$, $\hat{z}_j \in G_{new}$, (7)
$L^G_{dis} = \sum_i \mu_i$, $z_i \in G_{old}$, (8)
where s is the vector similarity metric.
aji represents the normalized similarity between node i in graph Gold and node j in graph Gnew; µi thus intuitively measures the difference between zi and its soft nearest neighbor in graph Gnew. Considering all nodes in graph Gold together, we obtain the memory-retentive loss LGdis, which describes the similarity between the two graphs.\nOverall loss. Combining the above analysis, the overall loss in phase II is thus as below:\nL = 1 |D| ∑ x∈D (Lcls(x) + Lintra(x)) + λL G dis, (9)\nwhere the first term is for classification and the second is the designed loss which constrains the knowledge of the old model through graph matching; λ is a hyperparameter to balance the two terms. Notice that, apart from the standard cross-entropy loss Lcls(x) for input x in the first term, we also consider an intra-classification loss Lintra(x) to avoid memory data in M being dominated by the new classes in X2. When we consider a cosine linear classifier, one instantiation could be Lintra(x) = ∑K k=1 max(0,m− 〈w̄, z̄(x)〉+ 〈w̄k, z̄(x)〉), in which w̄ is the ground-truth class embedding, w̄k denotes the other class embeddings, z̄(x) is the normalized feature map of x, and m is a margin value. 〈w̄, z̄(x)〉 denotes the positive score between w̄ and z̄(x), while 〈w̄k, z̄(x)〉 denotes the negative score between w̄k and z̄(x). Lintra optimizes the network to maintain a margin of m between the positive score and the highest negative score. Finally, a comprehensive overview of the asynchronous modeling procedure is given in Algorithm 1 in Appendix B." }, { "heading": "4 EXPERIMENTS", "text": "Datasets. We perform extensive experiments on four long-tailed datasets, including CIFAR100-LT (Cao et al., 2019), Places-LT (Liu et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (iNaturalist, 2018). CIFAR100-LT is created with three different imbalance factors 50, 100, 200. 
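The graph-matching loss of Eqs. (6)–(8) above can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: s is cosine similarity computed on normalized features, while the distances of Eq. (7) are taken on the raw feature maps; the function name is illustrative.

```python
import numpy as np

def memory_retentive_loss(z_old, z_new):
    """Graph-matching loss L^G_dis of Eqs. (6)-(8).

    z_old: (n, d) features of the training set D under the old (phase-I) model.
    z_new: (n, d) features of the same samples under the new model."""
    zo = z_old / np.linalg.norm(z_old, axis=1, keepdims=True)
    zn = z_new / np.linalg.norm(z_new, axis=1, keepdims=True)
    sims = zo @ zn.T                                             # s(z_i, z^_j) for all pairs
    a = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)   # Eq. (6): softmax over j
    # ||z_i - z^_j|| for every pair of old/new nodes
    dists = np.linalg.norm(z_old[:, None, :] - z_new[None, :, :], axis=-1)
    mu = (a * dists).sum(axis=1)                                 # Eq. (7)
    return mu.sum()                                              # Eq. (8)
```

Because the weights aji concentrate on the most similar new-graph nodes, the loss grows as the new model's features move away from their soft nearest neighbors in the old graph.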
The different versions of CIFAR100-LT are created from the original CIFAR100 by truncating the samples in class y to nyµ^(y/(c−1)), where c is the total number of classes, y is the class index and ny is the original number of training examples in class y. By varying µ to be 0.02, 0.01, 0.005, we obtain three groups of CIFAR100-LT with imbalance factors 50, 100, 200. More dataset details can be found in Appendix A.\nEvaluation Metrics. We evaluate the models on the corresponding balanced test/validation datasets and report the overall top-1 accuracy over all classes, denoted as Overall. Furthermore, to better describe the internal diversity across classes with different numbers of training samples, we follow Liu et al. (2019) to split the given dataset into three disjoint sets: Many-shot (classes with more than 100 images), Medium-shot (20∼100 images) and Few-shot (fewer than 20 images) and report the corresponding accuracy for comparison." }, { "heading": "4.1 COMPARISON WITH STATE-OF-THE-ART", "text": "In this section, we compare our method with a wide range of previous works addressing long-tailed classification from different directions.\nPlaces-LT. We initialize the ResNet-152 backbone with ImageNet pre-trained parameters following Kang et al. (2020). In Table 1, we report the result of our baseline without asynchronous modeling and denote it as Ours (plain), that is, treating the dataset together without distinguishing head and tail. The result based on asynchronous modeling is denoted as Ours. In order to compare with baselines like Zhu & Yang (2020), in which more parameters are introduced, we also consider the upgraded version Ours† with extended parameters. By comparing our asynchronous modeling with the plain baseline, we notice that the introduction of asynchronous modeling improves the overall result notably. We also outperform the state-of-the-art methods, including OLTR (Liu et al., 2019), LWS (Kang et al., 2020), etc. 
In comparison with IEM (Zhu & Yang, 2020), we see that a comparable result is achieved without introducing any extra parameters. With more parameters considered, much higher accuracy is achieved in our setting.\nImageNet-LT. For ImageNet-LT, the most commonly adopted architecture is ResNet-10. We also evaluate with different backbones for a thorough comparison to previous works. Table 2 shows the overall results on three different backbones, i.e., ResNet-10, ResNet-50 and ResNet-152. We find that our asynchronously trained model achieves the top performance, with impressive improvements over the decoupled methods cRT, NCM and LWS of Kang et al. (2020) across all backbones. Also, when comparing with OLTR (Liu et al., 2019), which also applies a memory mechanism, the memory bank in our strategy is clearly more efficient and useful. What is more, our method also outperforms IEM (Zhu & Yang, 2020) when more parameters are considered. More detailed results, i.e., the performance on the three splits, can be found in Appendix C.\nCIFAR100-LT. We follow Cao et al. (2019) and consider three different long-tailed versions with imbalance factors 50, 100, 200. The results in Table 3 demonstrate that in comparison with state-of-the-art methods including CB-Focal (Cui et al., 2019), LDAM (Cao et al., 2019) and BBN (Zhou et al., 2020), our method consistently achieves the best performance across all three versions. Especially for CIFAR100-LT with imbalance factor 100, the incorporation of asynchronous modeling introduces more than 2% gains over our plain baseline.\niNaturalist 2018. We further evaluate our method on iNaturalist 2018. iNaturalist 2018 is a real-world long-tailed dataset, consisting of over 8K categories. We follow Kang et al. (2020) to train the network for 200 epochs and show the results of two backbones, i.e., ResNet-50 and ResNet-152. 
From Table 4 we see that the results are consistent with the previous datasets: training with the asynchronous modeling strategy performs best across different backbones. It not only achieves better results than loss re-weighting or transfer-based methods (Cao et al., 2019; Chu et al., 2020) but also outperforms the decoupled cRT, NCM, LWS (Kang et al., 2020)." }, { "heading": "5 ABLATION STUDY", "text": "We now perform an ablation study to investigate the effect of specific modules. We use ResNet-152 as the backbone and conduct related experiments on Places-LT to study the size of the exemplar memory bank and the ratio between the classification loss and the memory-retentive loss. We consider the results under the separated {Many, Medium, Few}-shot splits and the overall result. In Fig. 2 and Fig. 3, the axis describing the different shot splits is on the left. The change of the overall result is depicted on the right of each figure, on an independent axis.\nSize of memory bank. We first explore the effect of memory banks of different sizes. In the experiment, the size of the memory bank depends on the number of samples selected from each class. Particularly, we consider five cases and set the reserved number of samples from each class in X1 as 2, 6, 10, 14, 18, respectively. For each case, other operations are kept the same. From Fig. 2(a), we see that as the memory size increases, the performance on Few-shot decreases, which is opposite to the result on Many-shot. Generally speaking, the best overall result is achieved when the memory size equals 10. We notice that the overall result varies under different memory sizes, but it is rather stable, ranging from 39.4 to 39.8.\nThe ratio between the classification loss and the memory-retentive loss. Similarly, we also study how the ratio between the classification loss and the memory-retentive loss affects the final results. In practice, this balance is controlled by the parameter λ in Eq. 9. 
Based on the initial option λ = √ C/C1, in which C is the total number of classes and C1 is the number of classes used in phase I, the initial λ is scaled to obtain four other values. As shown in Fig. 2(b), we conclude that the best overall result on Places-LT is achieved when λ equals 2.03. More importantly, the overall performance remains good for a wide range of λ, i.e., λ ≤ 2.03.\nThrough the above analysis of the memory bank size and λ, we notice that changes to the different modules do affect the overall performance. However, the mild variation indicates that our method is robust and stable.\nInfluence of different partitions. In this part, we investigate the influence of the disentanglement point on the final performance. The disentanglement point also corresponds to a class index, since we order the classes by their number of instances in this paper. We conduct experiments on three datasets, including CIFAR100 with imbalance factor 100, Places-LT and ImageNet-LT, and explore five disentanglement points for each dataset. The final results are shown in Fig. 3. To better show the variation of the overall performance (the red line), we depict it using a separate vertical axis (the right one in each figure). We also show the change of the different splits in each dataset: Many-shot in orange, Medium-shot in blue and Few-shot in purple. From the comparison on the three datasets, we identify the best disentanglement point for each dataset." }, { "heading": "6 CONCLUSION", "text": "In this paper, we begin with the visual phenomenon of gradient distortion in long-tailed data and propose an asynchronous modeling strategy that learns a unified recognition model through two phases to better exploit the gradients generated by data-rich classes and data-poor classes. In unifying the training process, we introduce a memory bank and a memory-retentive loss to retain the knowledge learned in the first phase while exploring new boundaries in the second phase. 
Extensive results on four long-tailed benchmark datasets, where our method significantly outperforms previous works, validate its superior efficacy." }, { "heading": "A APPENDIX", "text": "Dataset Details. Places-LT and ImageNet-LT are artificially truncated to follow a long-tailed distribution from Places-2 (Zhou et al., 2017) and ImageNet-2012 (Deng et al., 2009), respectively. Places-LT contains 62.5K images from 365 categories and the number of images per class varies from 4980 to 5. ImageNet-LT has 115.8K samples from 1000 classes and the number of images per class decreases from 1280 to 5. iNaturalist 2018 is a real-world visual recognition dataset that naturally exhibits a long-tailed distribution. It consists of 435,713 samples from 8,142 species.\nImplementation Details. We use the PyTorch platform (Paszke et al., 2019) for all experiments. For CIFAR100-LT, we adopt ResNet-32 as the backbone. The batch size is 64 and the learning rate is initialized to 0.1. The number of training epochs is 200 and we decay the learning rate at the 160th and 180th epochs by 0.01. For Places-LT, we choose ResNet-152 as the backbone with pretrained parameters from ImageNet 2012. The learning rate for representation learning is initialized to 5e-4 and that for the classifier to 0.05. We train the model for 60 epochs and the learning rate is decayed at the 20th and 40th epochs by 0.01. On ImageNet-LT, we report results with ResNet-10, 50, 101, 152 (He et al., 2016). Similarly, ResNet-50, 152 are also used for iNaturalist 2018. For ImageNet-LT and iNaturalist 2018, the learning rate is initialized to 0.05 and a cosine learning rate scheduler (Loshchilov & Hutter, 2016) is applied to gradually decay the learning rate from 0.05 to 0. For all experiments, if not specified, we use the SGD optimizer with momentum 0.9 and weight decay 5e-4. The image resolution for CIFAR100-LT is 32×32 and 224×224 for the rest. 
The λ is empirically set based on √(num_old/num_new), where “num_old” indicates the number of classes in the first stage and “num_new” is the number of new classes in the second stage. The threshold to split the dataset is set as the sum of the classes in Many- and Medium-shot. For CIFAR100-LT, the threshold is 70, which means that we first learn the 70 classes in the head and then involve the rest. For ImageNet-LT, the threshold is 864, and for Places-LT, the threshold is 294." }, { "heading": "B APPENDIX", "text": "Algorithm 1 Asynchronous Modeling for Long-Tailed Recognition Input: Dataset X = {xi, yi}, learning rate η, training epoch T ; 1: Divide dataset X into two parts according to the number of instances in each class. The one\ncovering the data-rich classes is X1 and the rest is X2. 2: Model parameters W1 = [Wr;W 1c ] in phase I; 3: for i = 1, 2, · · · , T do 4: Sample mini-batch B from training set X1; 5: Compute cross-entropy loss L1 on B; 6: Update overall parameters W1 ←W1 − η∇W1L1; 7: end for 8: Construct memory bank with a few samples from classes in X1 and denote the set as M ; 9: Update training set D = X2 ⋃ M , extend model parameters as W = [Wr;Wc];\n10: for i = 1, 2, · · · , T do 11: Sample mini-batch B from training set D; 12: Compute classification loss 1|B| ∑ x∈B(Lcls(x) + Lintra(x)); 13: Compute memory-retentive loss LGdis on B; 14: Compute overall loss L = 1|B| ∑ x∈B(Lcls(x) + Lintra(x)) + λL G dis; 15: Update W ←W − η∇WL; 16: end for Output: Model with parameters W ." }, { "heading": "C APPENDIX", "text": "In Table 5, the detailed results of the {Many, Medium, Few}-shot splits on ImageNet-LT are described. Besides ResNet-{50, 152}, ResNet-101 is also considered here. Compared to the baseline without asynchronous learning (Ours (plain)), our method sacrifices little in Many-shot but improves a lot in Medium- and Few-shot. More importantly, we see that our asynchronous strategy boosts the overall performance across all backbones." } ]
2020
null
SP:447a69bbd183f33b2950448c3d2bd50b7400410e
[ "The paper proposes a solution to few-shot meta learning approaches overfitting to the number of shots they are finetuned on, and not generalizing as well as expected to novel shots. In order to mitigate this problem, the paper suggests a parameterization of the meta learner which also conditions on the number of shots the model trains on. In practice, this is done via manipulation of the batch normalization parameters based on the number of shots. With this conditioning, the paper shows that the models perform better across a range of shots that they are evaluated on, compared to various sensible baselines.", "This paper proposed an implementation method of using different numbers of shots of data for few-shot learning such as to mitigate the negative effect of \"different shots\". It optimized the FiLM parameters using meta gradient descent during episodic meta-training with different-shot learning tasks. It conducted the experiments on quite a set of \"meta-dataset benchmarks\"." ]
Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (“pre-training”) followed by episodic finetuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the “shot” setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.
[]
[ { "authors": [ "Mohammad Babaeizadeh", "Golnaz Ghiasi" ], "title": "Adjustable real-time style transfer", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Peyman Bateni", "Raghav Goyal", "Vaden Masrani", "Frank Wood", "Leonid Sigal" ], "title": "Improved few-shot visual classification", "venue": "arXiv preprint arXiv:1912.03432,", "year": 2019 }, { "authors": [ "Peyman Bateni", "Raghav Goyal", "Vaden Masrani", "Frank Wood", "Leonid Sigal" ], "title": "Improved few-shot visual classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "John Bronskill", "Jonathan Gordon", "James Requeima", "Sebastian Nowozin", "Richard E Turner" ], "title": "Tasknorm: Rethinking batch normalization for meta-learning", "venue": "arXiv preprint arXiv:2003.03284,", "year": 2020 }, { "authors": [ "Tianshi Cao", "Marc Law", "Sanja Fidler" ], "title": "A theoretical analysis of the number of shots in few-shot learning", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Wei-Lun Chao", "Han-Jia Ye", "De-Chuan Zhan", "Mark Campbell", "Kilian Q Weinberger" ], "title": "Revisiting meta-learning as supervised learning", "venue": "arXiv preprint arXiv:2002.00573,", "year": 2020 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yinbo Chen", "Xiaolong Wang", "Zhuang Liu", "Huijuan Xu", "Trevor Darrell" ], "title": "A new meta-baseline for few-shot learning", "venue": "arXiv preprint arXiv:2003.04390,", "year": 2020 }, { "authors": [ "Harm De Vries", "Florian Strub", "Jérémie Mary", "Hugo Larochelle", "Olivier Pietquin", "Aaron C Courville" ], 
"title": "Modulating early visual processing by language", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Guneet S Dhillon", "Pratik Chaudhari", "Avinash Ravichandran", "Stefano Soatto" ], "title": "A baseline for few-shot image classification", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alexey Dosovitskiy", "Josip Djolonga" ], "title": "You only train once: Loss-conditional training of deep networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Vincent Dumoulin", "Jonathon Shlens", "Manjunath Kudlur" ], "title": "A learned representation for artistic style", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Nikita Dvornik", "Cordelia Schmid", "Julien Mairal" ], "title": "Selecting relevant features from a universal representation for few-shot classification", "venue": "arXiv preprint arXiv:2003.09338,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Micah Goldblum", "Steven Reich", "Liam Fowl", "Renkun Ni", "Valeriia Cherepanova", "Tom Goldstein" ], "title": "Unraveling meta-learning: Understanding feature representations for few-shot tasks", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-learning in neural networks: A survey", "venue": "arXiv preprint arXiv:2004.05439,", "year": 2020 }, { "authors": [ "Meiyu Huang", "Xueshuang Xiang", "Yao Xu" ], "title": "Training few-shot classification via the perspective 
of minibatch and pretraining", "venue": "arXiv preprint arXiv:2004.05910,", "year": 2020 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML deep learning workshop,", "year": 2015 }, { "authors": [ "Xinzhe Li", "Qianru Sun", "Yaoyao Liu", "Qin Zhou", "Shibao Zheng", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Learning to self-train for semi-supervised few-shot classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yaoyao Liu", "Yuting Su", "An-An Liu", "Bernt Schiele", "Qianru Sun" ], "title": "Mnemonics training: Multi-class incremental learning without forgetting", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Leland McInnes", "John Healy", "James Melville" ], "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "venue": "arXiv preprint arXiv:1802.03426,", "year": 2018 }, { "authors": [ "Boris Oreshkin", "Pau Rodríguez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville" ], "title": "FiLM: Visual reasoning with a general conditioning layer", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "arXiv preprint arXiv:1803.00676,", "year": 2018 }, { "authors": [ "James Requeima", "Jonathan Gordon", "John Bronskill", "Sebastian Nowozin", "Richard E Turner" ], "title": 
"Fast and flexible multi-task classification using conditional neural adaptive processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tonmoy Saikia", "Thomas Brox", "Cordelia Schmid" ], "title": "Optimized generic feature learning for few-shot classification across domains", "venue": "arXiv preprint arXiv:2001.07926,", "year": 2020 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weightnet: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Eleni Triantafillou", "Richard Zemel", "Raquel Urtasun" ], "title": "Few-shot learning through an information retrieval lens", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Utku Evci", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol" ], "title": "Meta-Dataset: A dataset of datasets for learning to learn from few examples", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Few-shot classification is the problem of learning a classifier using only a few examples. Specifically, the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to ‘quickly’ learn about new classes from few examples. Success is evaluated on a number of test episodes, each posing a classification task between previously-unseen test classes. In each such episode, we are given a few examples, or “shots”, of each new class that can be used to adapt this model to the task at hand, and the objective is to correctly classify a held-out set of examples of the new classes.\nA simple approach to this problem is to learn a classifier over the training classes, parameterized as a neural network feature extractor followed by a classification layer. While the classification layer is not useful at test time due to the class shift, the embedding weights that are learned during this “pre-training” phase evidently constitute a strong representation that can be used to tackle test tasks when paired with a simple “inference algorithm” (e.g. nearest-neighbour, logistic regression) to make predictions for each example in the test episode given the episode’s small training set. Alternatively, early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training, a regime where the training objective is expressed in terms of performance on a number of training episodes of the same structure as the test episodes, but with the classes sampled from the training set. It was hypothesized that this episodic approach captures a more appropriate inductive bias for the problem of few-shot classification and would thus lead to better generalization.\nHowever, there is an ongoing debate about whether episodic training is in fact required for obtaining the best few-shot classification performance. 
Notably, recent work (Chen et al., 2019; Dhillon et al., 2020) proposed strong “pre-training” baselines that leverage common best practices for supervised training (e.g. normalization schemes, data augmentation) to obtain a powerful representation that works well for this task. Interestingly, other recent work combines the pre-training of a single classifier with episodic fine-tuning by removing the classification head and continuing to train the embedding network using the episodic inference algorithm that will be applied at test time (Triantafillou et al., 2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes\nhave complementary strengths, but the role of this episodic fine-tuning is poorly understood: what is the nature of the modification it induces into the pre-trained solution? Under which conditions is it required in order to achieve the best performance?\nAs a step towards answering those questions, we investigate the effect of the shot used during episodic fine-tuning on the resulting model’s performance on test tasks of a range of shots. We are particularly interested in understanding whether the shot of the training episodes constitutes a source of information that the model can leverage to improve its few-shot classification performance on episodes of that shot at test time. Our analysis reveals that indeed a particular functionality that this fine-tuning phase may serve is to specialize a pre-trained model to solving tasks of a particular shot; accomplished by performing the fine-tuning on episodes of that shot. 
However, perhaps unsurprisingly, we find that specializing to a given shot comes at the expense of hurting performance for other shots, in agreement with (Cao et al., 2020)’s theoretical finding in the context of Prototypical Networks (Snell et al., 2017) where inferior performance was reported when the shot at training time did not match the shot at test time.\nGiven those trade-offs, how can our newfound understanding of episodic fine-tuning as shotspecialization help us in practice? It is unrealistic to assume that we will always have the same number of labeled examples for every new class we hope to learn at test time, so we are interested in approaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune a separate episodic model for every shot, and intuitively that seems wasteful as we expect that tasks of similar shots should require similar models. Motivated by this, we propose to train a single shot-conditional model for specializing the pre-trained solution to a wide spectrum of shots without suffering trade-offs. This leads to a compact but flexible model that can be conditioned to be made appropriate for the shot appearing in each test episode.\nIn what follows we provide some background on few-shot classification and episodic models and then introduce our proposed shot-conditioning approach and related work. We then present our experimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe that our shot-conditional training approach is beneficial for obtaining a general flexible model that does not suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experiment with our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shot classification, and demonstrate its effectiveness in that challenging environment." 
}, { "heading": "2 BACKGROUND", "text": "Problem definition Few-shot classification aims to classify test examples of unseen classes from a small labeled training set. The standard evaluation procedure involves sampling classification episodes by picking N classes at random from a test set of classes Ctest and sampling two disjoint sets of examples from the N chosen classes: a support set (or training set) of k labeled examples per class, and a query set (or test set) of unlabeled examples, forming N -way, k-shot episodes. The model is allowed to use the support set, in addition to knowledge acquired while training on a disjoint set of classes Ctrain, to make a prediction for examples in the query set, and is evaluated on its query set accuracy averaged over multiple test episodes.\nEpisodic training Early few-shot classification approaches (Vinyals et al., 2016) operate under the assumption that obtaining a model capable of few-shot classification requires training it on (mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standard supervised learning. These learning episodes are sampled in the same way as described above for test episodes, but with classes sampled from Ctrain this time. In other words, the model is trained to minimize a loss of the form:\nES,Q∼PN,ktrain 1 |Q| ∑ (x∗,y∗)∈Q − log pθ(y∗ | x∗,S) (1) where S and Q are support and query sets sampled from the distribution PN,ktrain of N -way, k-shot training episodes induced by Ctrain, and θ represents the model’s parameters. This training regime is often characterized as meta-learning or learning to learn, i.e. learning over many episodes how to learn within an episode (from few labeled examples). Episodic models differ by their “inference\nalgorithm”, i.e. 
the manner in which pθ(y∗ | x∗,S) is computed to classify query examples based on the support set.\nPrototypical Networks Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype φc for each class c in an episode as\nφc = 1 |Sc| ∑ x∈Sc fθ(x), (2)\nwhere f is an embedding function parametrized by θ and Sc represents the set of support examples belonging to class c, and classifies a given query example as\np(y∗ = c | x∗,S) = exp(−||x∗ − φc||22) / ∑ c′ exp(−||x∗ − φc′ ||22). (3)" }, { "heading": "3 SHOT CONDITIONAL EPISODIC (SCONE ) TRAINING", "text": "In this section we introduce Shot CONditional Episodic (SCONE) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.\nTraining objective Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from PN,ktrain and compute a prediction pθ(y∗ | x∗,S) for each query example x∗. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to θ into the inference algorithm. In this work we concern ourselves with models that use an embedding function fθ to obtain a representation for the support and query examples of each episode, on top of which the inference algorithm is applied. In Prototypical Networks, for instance, fθ contains all of the model’s learnable parameters.\nSCONE trains on episodes of varying shots and conditions the model on each episode’s shot distribution (Figure 1) by minimizing\nEk∼Pk ES,Q∼PN,ktrain 1 |Q| ∑ (x∗,y∗)∈Q − log pθk(y∗ | x∗,S) , (4)\nwhere Pk is the distribution over shots at training time and θk depends on an episode’s sampled shots. 
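For reference, the Prototypical Networks inference of Eqs. (2)–(3) above can be sketched on pre-embedded features. This is a minimal NumPy sketch under the assumption that the embedding fθ has already been applied; function and variable names are our own.

```python
import numpy as np

def prototypical_predict(support, support_labels, query):
    """Prototypical Networks inference (Eqs. 2-3) on pre-embedded features.

    support: (n_s, d) embedded support examples; query: (n_q, d)."""
    classes = np.unique(support_labels)
    # Eq. (2): per-class prototypes as mean support embeddings
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # squared Euclidean distance between every query and every prototype
    d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # Eq. (3)
    return classes[probs.argmax(axis=1)], probs
```

Each query is assigned to the class of its nearest prototype, with the softmax over negative squared distances giving the class probabilities.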
In the Appendix, we include an algorithm box outlining SCONE fine-tuning.
Conditioning mechanism Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of the model's parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input x based on conditioning information k (in our case, the episode's number of shots):
$$\text{FiLM}(x) = \gamma(k) \odot x + \beta(k). \tag{5}$$
The dependency of γ and β on k is handled by maintaining distinct values for each shot setting and selecting the appropriate γ and β based on an episode's shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.
More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shot settings greater than or equal to MAX-SHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers in the network's architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients). These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2 norm of β (regularizing the offset towards 0) and the L2 norm of γ − 1 (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.
Handling class-imbalanced episodes SCONE can also be used on imbalanced episodes, where different classes have different shots.
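A minimal sketch of the per-shot feature modulation in Equation 5, with one row of FiLM parameters per supported shot. This is illustrative only (names and shapes are ours): in the actual model the FiLM parameters live inside the batch normalization layers rather than in standalone tables.

```python
import numpy as np

MAX_SHOT = 5     # hyperparameter; shots >= MAX_SHOT share the last entry
N_FEATURES = 3

# One row of scaling/shifting parameters per supported shot, initialized to the
# identity transform (gamma = 1, beta = 0), mirroring initialization from
# pre-trained batch normalization coefficients.
gammas = np.ones((MAX_SHOT, N_FEATURES))
betas = np.zeros((MAX_SHOT, N_FEATURES))

def film(x, shot):
    # Eq. 5: feature-wise affine transform with parameters selected by the shot.
    idx = min(shot, MAX_SHOT) - 1  # shots are 1-indexed; cap at MAX_SHOT
    return gammas[idx] * x + betas[idx]
```

Because the identity initialization leaves features untouched, fine-tuning starts from the pre-trained behavior and lets each shot's row drift away only as far as the data (and the FiLM weight decay) allows.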
In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.
Concretely, the episode's “shot distribution” s (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode's FiLM parameters. This can be thought of as an embedding lookup $s^\top F$ in a matrix F of FiLM parameters using a shot distribution s.
Smoothing the shot distribution We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTH-SHOT procedure in the Appendix in Algorithm 1, which receives the shot s of a class (an integer), and a smoothing hyperparameter m (a float in [0, 1]) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of s is not strictly one-hot with only the position corresponding to the observed shot s being ‘on’. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to s receive the value m, the entries two spots away from s the value $m^2$, and so on, with entries further away from s receiving exponentially-decaying values." }, { "heading": "4 RELATED WORK", "text": "Few-shot classification A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey.
Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example.\nGradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set.\nIntuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode’s classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.\nRecently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. 
(2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.
Understanding episodic learning Our work is part of a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempts to understand episodic learning from the perspective of how classes cluster in feature space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020); Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discusses batch normalization in episodic learning, drawing parallels from its use in non-episodic learning, and Chen et al. (2020) contrasts episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigates the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time. Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.
Feature-wise conditioning Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. (Shu et al., 2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, (Sun et al., 2019) use feature-wise transformations as a means of transfer to new tasks.
(Oreshkin et al., 2018; Requeima et al., 2019; Bateni et al., 2019) use FiLM to condition metric learners' backbones on the support set, while (Dvornik et al., 2020) uses it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning (Liu et al., 2020) and semi-supervised few-shot learning (Li et al., 2019). Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this and thus the ‘shot’ information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020)." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPLORING THE ROLE OF ‘SHOTS’ DURING EPISODIC FINE-TUNING", "text": "In this subsection, we examine the effect of the ‘shot’ that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model's ability to solve test episodes of different shots. We consider either using a fixed shot k throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).
Experimental setup We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet.
We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round. We ran the following variants of episodic fine-tuning: exclusively on 1-shot episodes (‘Fine-tune on 1-shot’), exclusively on 5-shot episodes (‘Fine-tune on 5-shot’), on episodes whose shot is drawn uniformly from the range [1, 40] (‘Fine-tune on all shots’), and on episodes with that same shot distribution but using SCONE (‘SCONE Fine-tune on all shots’), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider ‘Fine-tune on best k-shot’, an additional baseline that fine-tunes exclusively on the shot k that is found to work best on average on the validation set (on the range of shots 1–40). For this, we trained models for k = 1, 5, 10, 15, 20, 30, 40 and found the best to be k = 15.
As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of the FiLM parameters. For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE's MAX-SHOT hyperparameter to 40 for this experiment.
We applied EST on top of the embeddings of ‘Fine-tune on all shots’ and we tuned ρ and the hyperparameter d controlling the projection dimensionality very extensively. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1–40) are substantially different from those used in the original EST paper: d = 480 and ρ = 5e-8 (versus the original d = 120 and ρ = 1e-3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.
In all cases, we fix the ‘way’ to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is not seen during pre-training nor episodic fine-tuning, on 5-way episodes of different shot settings.
Findings We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, 1-shot accuracies show that ‘Fine-tune on 1-shot’ surpasses the performance of all other variants on 1-shot test episodes, with analogous findings in the 5-shot and 40-shot accuracies for ‘Fine-tune on 5-shot’ and ‘Fine-tune on 40-shot’, respectively. Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots.
For instance, the model that is specialized for 40-shot tasks (‘Fine-tune on 40-shot’) performs very poorly on 1-shot test tasks and vice-versa. We also note that the ‘Fine-tune on best k-shot’ model does not suffice to perform well in all settings either: since k = 15 there, it performs poorly on 1-shot episodes, for instance.
In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models. A reasonable approach to that is episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that ‘Fine-tune on all shots’ does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.
Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings (‘SCONE Fine-tune on all shots’ vs ‘Fine-tune on all shots’). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters. SCONE also outperforms the EST approach in this setting, which also strives for shot resiliency but does so by encouraging invariance to the shot setting rather than shot awareness as in SCONE." }, { "heading": "5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET", "text": "In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset is comprised of ten distinct image datasets, including natural images, handwritten characters and sketches.
It also defines a generative process for episodes that varies the way and shot across episodes, and within a particular episode varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section. It is a long-tailed distribution under which small and medium shots are more likely but it is possible to also encounter very large shots (e.g. >400), though this would happen very infrequently. We include histograms of the shot distributions of Meta-Dataset's training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and imbalanced episodes.
Prototypical Network on ImageNet For our first set of experiments on Meta-Dataset, we explore different strategies of episodic fine-tuning of the pre-trained classifier's embedding weights using Prototypical Networks. For this, we use Meta-Dataset's sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning (‘Standard’) to SCONE episodic fine-tuning (‘SCONE’). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters (‘L2 BN’). We also include an ablation of our method that does not use any smoothing of the shot distribution. Finally, we compare to EST as well, where we computed the EST transformation on the ‘L2 BN’ instead of the ‘Standard’ Prototypical Network variant, since that worked best. We tuned EST's hyperparameters very extensively, as described in Section 5.1, this time model-selecting on the validation sets of all datasets of Meta-Dataset. The values that worked best in this case are d = 480 and ρ = 5e-9.
As noted in Section 5.1, these are substantially different from those used in the original EST paper, likely due to our deeper backbones and significantly broader range of shots explored. Finally, we ran the same ‘Fine-tune on best k-shot’ baseline described in Section 5.1. In this case we found that the best k was 20.
We evaluate these models on the held-out set of ImageNet classes as well as the remaining 9 datasets. We set SCONE's MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE's hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open source our code upon publication.
Meta-Baseline on all datasets Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase (‘Classifier-Baseline’) followed by an episodic fine-tuning phase (‘Meta-Baseline’). Classifier-Baseline refers to simply training a classifier on the set of training classes. This variant is evaluated on few-shot episodes by discarding the ultimate classification layer and utilizing a cosine similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline's pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.
When training on all datasets of Meta-Dataset, they obtained strong results using their Classifier-Baseline, which is in this case trained in a multi-task setup with separate output heads for the different datasets.
They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of datasets).
Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline's strong pre-trained solution where we froze the embedding weights to that powerful representation and we optimized only the set of SCONE's FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using Meta-Baseline's nearest centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, where we optimized only the batch normalization parameters, keeping the remainder of the embedding weights frozen (‘Control’). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.
Findings The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, that show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance described in the Appendix to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2 norm of batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE, but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE. We also observe that EST is competitive in this setting, only slightly worse than SCONE, though we note that SCONE is a more general approach that is not tied to Gaussian classifiers.
Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful (‘Control’), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE, these results demonstrate its effectiveness on different shot distributions, and in different backbones.
FiLM parameter visualization Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3). As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.
Example smoothed shot distribution To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using m = 1 − 1e-6, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice therefore, we are activating ‘blocks’ of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots." }, { "heading": "6 CONCLUSION", "text": "In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification.
We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shot-conditioning mechanism is beneficial both in smaller-scale experiments and in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism for determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?" }, { "heading": "A APPENDIX", "text": "" }, { "heading": "SCONE’S TRAINING ALGORITHM IN MORE DETAIL", "text": "For clarity, we provide pseudocode for SCONE's training algorithm, including our procedure for shot smoothing, in Algorithm 1. We will also release our code upon publication for reproducibility.

Algorithm 1 SCONE training

Input: Distribution of training episodes P_train, pre-trained embedding weights θ, pre-trained batch norm weights γ and β, embedding function f, learning rate α (a float), smoothing coefficient m (a float in the range [0, 1]) and maximum supported shot MAX-SHOT (an int).
Output: Fine-tuned embedding weights θ′ and FiLM parameters F = {γ′, β′}.

procedure SMOOTH-SHOT(s, m, MAX-SHOT)
    if s > MAX-SHOT then
        s ← MAX-SHOT                          ▷ Cap s to the max supported shot
    end if
    s ← s − 1                                 ▷ So that s is in the range [0, MAX-SHOT − 1]
    s̃ ← ONE-HOT(s, DEPTH=MAX-SHOT)           ▷ Init the smoothed shot
    for 0 ≤ j ≤ MAX-SHOT do
        l ← s − j − 1                         ▷ The index j + 1 slots to the left of s
        l ← ONE-HOT(l, DEPTH=MAX-SHOT) ∗ m    ▷ Outputs the zero vector if l < 0
        r ← s + j + 1                         ▷ The index j + 1 slots to the right of s
        r ← ONE-HOT(r, DEPTH=MAX-SHOT) ∗ m    ▷ Outputs the zero vector if r ≥ MAX-SHOT
        s̃ ← s̃ + l + r
        m ← m²                                ▷ Adjust the next iteration's smoothing
    end for
    return s̃
end procedure

θ′ ← θ                                        ▷ Init the embedding weights from the pre-trained embeddings
for 1 ≤ k ≤ MAX-SHOT do                       ▷ Init the FiLM params from the pre-trained batch norm params
    γ′(k) ← γ
    β′(k) ← β
end for
while validation accuracy is improving do
    Sample a training episode with support set S and query set Q
    Let k_1, . . . , k_N be the shots of the episode's classes
    s ← ZEROS(MAX-SHOT)                       ▷ Init the (unnormalized) shot distribution
    for each class i do
        s_i ← SMOOTH-SHOT(k_i, m, MAX-SHOT)   ▷ Smooth the one-hot shot of class i
        s ← s + s_i
    end for
    s ← s ÷ SUM(s)                            ▷ Normalize to get the episode's shot distribution
    γ′_s ← sᵀγ′                               ▷ Select the FiLM params for the episode
    β′_s ← sᵀβ′
    Let S_H = {(f(x; θ′, γ′_s, β′_s), y)}_{(x,y)∈S}   ▷ The embedded support set
    Let Q_H = {(f(x; θ′, γ′_s, β′_s), y)}_{(x,y)∈Q}   ▷ The embedded query set
    L ← (1 / |Q_H|) Σ_{(h∗,y∗)∈Q_H} − log p(y∗ | h∗, S_H)   ▷ Compute the episode's loss
    θ′ ← θ′ − α ∂L/∂θ′                        ▷ Update the model via gradient descent
    γ′ ← γ′ − α ∂L/∂γ′
    β′ ← β′ − α ∂L/∂β′
end while

Hypothesis testing We follow the same procedure as in (Triantafillou et al., 2020) to compute ranks for different methods that in turn determine which entries to bold in our tables. Specifically, we perform a 95% confidence interval statistical test on the difference between the mean accuracies of pairs of entries of each row. If for two entries we are not able to reject the null hypothesis that the difference between their means is 0, they will receive the same rank.
For example, if model A and model B are tied for the first place according to that test, they will each receive the rank 1.5 (the average of the ranks 1 and 2). If we are able to reject that hypothesis, however, the entry with the larger mean accuracy will receive a higher rank than the other. In each row, we bold the entries that are tied for the highest rank. For convenience, in the last section of the Appendix, we show a more heavily-annotated copy of each table in the paper to make the rank computation procedure more transparent." }, { "heading": "ADDITIONAL RESULTS", "text": "First, we provide in Figure 5 additional plots to cover more evaluation shot settings than those shown in Figure 2 in the main paper. The setup for this is exactly the same as for Figure 2.
Next, since we observe that the ‘Best k-shot’ baseline performs well in Table 1, which reflects average performance across episodes of varying shots, we further break down its performance across different ranges of shots in Figure 6. We find that while indeed ‘Best k-shot’ performs well for large shots, it actually performs poorly for low shots. This finding strengthens our case against this baseline: not only is it computationally expensive, requiring training multiple different models to pick the one that performs best, but it is also not as consistent as SCONE in its performance on different shot ranges.
Finally, to place our results into context, we display the results of Meta-Baseline with SCONE alongside the performance of recent work, controlling for model capacity, in Table 3. The approaches we compare against are: Classifier-Baseline (Chen et al., 2020), SUR-pf (Dvornik et al., 2020), TaskNorm (Bronskill et al., 2020) and Simple CNAPs (Bateni et al., 2020). In particular, we report the performance of the parametric family of SUR (‘SUR-pf’) instead of full SUR (which has 8x more parameters), in order to make apples-to-apples comparisons with the remaining approaches.
We find that the Meta-Baseline method, when combined with SCONE, achieves state-of-the-art on Meta-Dataset in this context, according to the average rank metric." }, { "heading": "EXPERIMENTAL DETAILS", "text": "We plan to open source our code upon publication, including all experimental details. In the meantime, we outline these details below for completeness.
Architecture We use ResNet-18 as the feature extractor for all of our experiments, following the implementation in (Triantafillou et al., 2020). For the SCONE variants, we add FiLM to all of the batch normalization layers throughout the network.
Image processing For all experiments, we use Meta-Dataset's input pipeline to obtain images, and we follow the image processing performed in (Chen et al., 2020), which yields images of size 128 × 128. We apply standard data augmentation consisting of horizontal flipping and random cropping followed by standardization using a commonly-used mean and standard deviation as in (Chen et al., 2020). For episodic models, data augmentation is applied in both the support and query sets. No data augmentation is used at validation or test time.
Optimization We use ADAM with exponential learning rate decay and weight decay of 1e-8 to optimize all models in this work. We tune the initial learning rate, the decay rate, and the number of updates between each learning rate decay separately for each model presented in the paper. The initial learning rate values we considered are 0.0005 and 0.001, with a decay factor of 0.8 or 0.9 applied every 1000, 2000, or 3000 steps. We ran a variant for every combination of those values. We also tune the weight decay applied to the FiLM parameters (for SCONE variants) or the batch normalization parameters (for non-SCONE variants).
We tried the values: 1e-8, 1e-6, 1e-4.
SCONE hyperparameters For the SCONE variants, aside from the above hyperparameters, we additionally tune the smoothing parameter m described in the main paper that is used for training and for evaluation. We did not tune the MAX-SHOT hyperparameter mentioned in the main paper as we found that our initial choices worked reasonably. Specifically, we set it to 40 for the smaller-scale experiments where the maximum shot was 40, and to 200 for the large-scale experiments. The latter choice was performed heuristically since shots much larger than 200 are unlikely under the shot distribution induced by Meta-Dataset's episode generator. For more information on that shot distribution, we refer the reader to the next section.
SCONE smoothing hyperparameter We tuned the value of this hyperparameter that will be used both at training and at evaluation. At training time, we considered the values 0, 0.2, 0.4, 0.6, and 0.9 for Prototypical Network experiments, and we picked the variant that worked best according to the validation performance that was computed without smoothing. Once the model was trained and all of the remaining hyperparameters were tuned, we performed a final validation round to tune the evaluation-time smoothing that will be used in the chosen model. We found it beneficial to use larger values here, picking the value of 1 − 1e-6, for example, for the Prototypical Network on ImageNet. In the Meta-Baseline codebase, we trained with larger values of smoothing (the best we found was 1 − 1e-10) and did not find it beneficial to additionally smooth at evaluation time.
Model selection For each experiment, we perform early stopping according to the performance on the validation set. For the models that train on a single shot k in the smaller-scale experiments, the validation performance that we monitor for early stopping is the average query set accuracy on k-shot 5-way episodes drawn from the validation set.
For the models in the small-scale experiments that train on a distribution of shots, we use the average validation performance over 5-way episodes whose shot is sampled according to the same distribution used for training the respective model. For the larger-scale Meta-Dataset experiments, we draw validation episodes only from the validation set of ImageNet for the experiments that train on ImageNet only, or from the validation sets of all datasets for the experiments that train on all datasets. In both cases, the validation episodes are drawn using Meta-Dataset’s episode generator that yields episodes of variable ways and variable shots with class imbalance. In all cases, the average validation performance is computed over 600 validation episodes and is monitored every 2K training updates. We apply exponential smoothing to the resulting validation \"curve\" (using the default value of 0.6 in TensorBoard). Then, we choose the update step at which the highest peak of that curve is found and we use the checkpoint corresponding to that update step for testing." }, { "heading": "DISTRIBUTION OF SHOTS IN META-DATASET EPISODES", "text": "For reference, Figure 7 displays histograms of the number of shots produced by Meta-Dataset’s episode sampling algorithm. These are computed by sampling 600 episodes per dataset for each of the training, validation and test splits of Meta-Dataset." }, { "heading": "TABLES WITH MORE DETAILED RANKS", "text": "In this section, we include a copy of the same tables appearing previously in the paper, but additionally annotated with per-row ranks, to make the rank computation method more transparent. These tables are Table 4, Table 5 and Table 6." } ]
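The model-selection rule described in the appendix above (validation accuracy measured every 2K updates, TensorBoard-style exponential smoothing with the default factor 0.6, checkpoint taken at the smoothed peak) can be sketched as follows; the function names and step bookkeeping are illustrative assumptions, not taken from the authors' code.

```python
def smooth(values, weight=0.6):
    """TensorBoard-style exponential smoothing of a metric curve."""
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return smoothed

def best_checkpoint_step(val_accuracies, steps_per_eval=2000, weight=0.6):
    """Pick the training step whose smoothed validation accuracy peaks.

    val_accuracies[i] is assumed to be measured at step (i + 1) * steps_per_eval,
    i.e. validation is run every 2K updates as described in the text.
    """
    curve = smooth(val_accuracies, weight)
    best_idx = max(range(len(curve)), key=curve.__getitem__)
    return (best_idx + 1) * steps_per_eval
```

The checkpoint saved at the returned step would then be the one used for testing.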
2,020
null
SP:0a87278c0da53a0b1989fad7932566b8ddd8634b
[ "The paper describes a technique based on a modified generalized gradient descent for finding multiple high-quality local optima of deep neural networks. The search method does not require re-initialization of the model parameters and can be carried out in a single training session. The identified local optima are then used to build model ensembles, which appear to outperform several other ensembling approaches.", "This paper proposes a new method for applying the TRUST-TECH method to ensembles of deep neural networks (DNNs). When applying TRUST-TECH to a deep neural network, it is difficult to determine the search direction and the exit point. This paper introduces Dynamic Searching Paths (DSP) to solve these problems. The proposed method can apply the TRUST-TECH method to DNNs trained with Stochastic Gradient Descent (SGD) with only minor memory overhead. " ]
The success of deep neural networks has relied heavily on efficient stochastic gradient descent-like training methods. However, these methods are sensitive to initialization and hyper-parameters. In this paper, a systematic method for finding multiple high-quality local optimal deep neural networks from a single training session, using the TRUST-TECH (TRansformation Under Stability-reTaining Equilibria Characterization) method, is introduced. To realize effective TRUST-TECH searches for training deep neural networks on large datasets, a dynamic search path (DSP) method is proposed to provide improved search guidance in the TRUST-TECH method. The proposed DSP-TT method is implemented such that the computation graph remains constant during the search process, with only minor GPU memory overhead, and requires just one training session to obtain multiple local optimal solutions (LOS). To take advantage of these LOSs, we also propose an improved ensemble method. Experiments on image classification datasets show that our method improves the testing performance by a substantial margin. Specifically, our fully-trained DSP-TT ResNet ensemble improves on the SGD baseline by 15% (CIFAR10) and 13% (CIFAR100). Furthermore, our method shows several advantages over other ensembling methods.
[]
[ { "authors": [ "Peter Auer", "Mark Herbster", "Manfred K Warmuth" ], "title": "Exponentially many local minima for single neurons", "venue": "In Advances in Neural Information Processing Systems", "year": 1996 }, { "authors": [ "Hsiao-Dong Chiang", "Luı́s F.C. Alberto" ], "title": "Stability Regions of Nonlinear Dynamical Systems: Theory, Estimation, and Applications", "venue": null, "year": 2015 }, { "authors": [ "Hsiao-Dong Chiang", "Chia-Chi Chu" ], "title": "A systematic search method for obtaining multiple local optimal solutions of non-linear programming problems", "venue": "IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications,", "year": 1996 }, { "authors": [ "Hsiao-Dong Chiang", "L. Fekih-Ahmed" ], "title": "Quasi-stability regions of nonlinear dynamical systems: Theory", "venue": "IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications,", "year": 1996 }, { "authors": [ "Hsiao-Dong Chiang", "Chandan K. Reddy" ], "title": "Trust-tech based neural network training", "venue": "In International Joint Conference on Neural Networks, pp", "year": 2007 }, { "authors": [ "Hsiao-Dong Chiang", "Bin Wang", "Quan-Yuan Jiang" ], "title": "Applications of trust-tech methodology in optimal power flow of power systems", "venue": "In Optimization in the Energy Industry,", "year": 2009 }, { "authors": [ "Anna Choromanska", "Mikael Henaff", "Michael Mathieu", "Gérard Ben Arous", "Yann LeCun" ], "title": "The loss surfaces of multilayer networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Yann N. 
Dauphin", "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Surya Ganguli", "Yoshua Bengio" ], "title": "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", "venue": "In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume", "year": 2014 }, { "authors": [ "Yann N Dauphin", "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Surya Ganguli", "Yoshua Bengio" ], "title": "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Felix Draxler", "Kambis Veschgini", "Manfred Salmhofer", "Fred Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "In Proceedings of the 35th International Conference on Machine Learning (PMLR),", "year": 2018 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep ensembles: A loss landscape perspective", "venue": null, "year": 1912 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In 32nd Conference on Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "V.G. Gudise", "G.K. 
Venayagamoorthy" ], "title": "Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks", "venue": "In Proceedings of the 2003 IEEE Swarm Intelligence Symposium", "year": 2003 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. Weinberger" ], "title": "Snapshot ensembles: Train 1, get M for free", "venue": "In 5th International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Daniel Jiwoong Im", "Michael Tao", "Kristin Branson" ], "title": "An empirical analysis of deep network loss surfaces", "venue": null, "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In The 32th International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry P. Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2018 }, { "authors": [ "Hannes Jonsson", "Greg Mills", "Karsten W. 
Jacobsen" ], "title": "Classical and Quantum Dynamics in Condensed Phase Simulations, chapter Nudged Elastic Band Method for Finding Minimum Energy Paths of Transitions", "venue": null, "year": 1998 }, { "authors": [ "Chia-Feng Juang" ], "title": "A hybrid of genetic algorithm and particle swarm optimization for recurrent network design", "venue": "IEEE Transactions on Systems, Man, and Cybernetics,", "year": 2004 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": null, "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Lei Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": null, "year": 2016 }, { "authors": [ "Jaewook Lee", "Hsiao-Dong Chiang" ], "title": "A dynamical trajectory-based methodology for systematically computing multiple optimal solutions of general nonlinear programming problems", "venue": "IEEE Transactions on Automatic Control,", "year": 2004 }, { "authors": [ "Frank H.F. Leung", "H.K. Lam", "S.H. Ling", "Peter K.S. Tam" ], "title": "Tuning of the structure and parameters of a neural network using an improved genetic algorithm", "venue": "IEEE Transactions on Neural Networks,", "year": 2003 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Mohammad Moghimi", "Mohammad Saberian", "Jian Yang", "Li-Jia Li", "Nuno Vasconcelos", "Serge Belongie" ], "title": "Boosted convolutional neural networks", "venue": "In British Machine Vision Conference (BMVC), York,", "year": 2016 }, { "authors": [ "Chandan K. 
Reddy", "Hsiao-Dong Chiang", "Bala Rajaratnam" ], "title": "Trust-tech-based expectation maximization for learning finite mixture models", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2008 }, { "authors": [ "Zhiqiang Shen", "Zhankui He", "Xiangyang Xue" ], "title": "Meal: Multi-model ensemble via adversarial learning", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv 1409.1556,", "year": 2014 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Bin Wang", "Hsiao-Dong Chiang" ], "title": "Elite: Ensemble of optimal input-pruned neural networks using trust-tech", "venue": "IEEE Transactions on Neural Networks,", "year": 2011 }, { "authors": [ "Jingjing Xie", "Bing Xu", "Chuang Zhang" ], "title": "Horizontal and vertical ensemble with deep representation for classification", "venue": null, "year": 2013 }, { "authors": [ "Yongquan Yang", "Haijun Lv", "Ning Chen", "Yang Wu", "Jiayi Zheng", "Zhongxi Zheng" ], "title": "Local minima found in the subparameter space can be effective for ensembles of deep convolutional neural networks", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Jing-Ru Zhang", "Jun Zhang", "Tat-Ming Lok", "Michael R. Lyu" ], "title": "A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training", "venue": "Applied Mathematics and Computation,", "year": 2007 }, { "authors": [ "Ruqi Zhang", "Chunyuan Li", "Jianyi Zhang", "Changyou Chen", "Andrew Gordon Wilson" ], "title": "Cyclical stochastic gradient mcmc for bayesian deep learning", "venue": "In ICLR,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Due to the high redundancy of the parameters of deep neural networks (DNN), the number of local optima is huge and can grow exponentially with the dimensionality of the parameter space (Auer et al. (1996); Choromanska et al. (2015); Dauphin et al. (2014b)). It still remains a challenging task to locate high-quality optimal solutions in the parameter space, where the model performs satisfactorily on both training and testing data. A popular metric for the quality of a local solution is its generalization capability, which is commonly defined as the gap between the training and testing performances (LeCun et al. (2015)). For deep neural networks with high expressivity, the training error is near zero, so that it suffices to use the test error to represent the generalization gap. Generally, local solvers do not have a global vision of the parameter space, so there is no guarantee that starting from a random initialization will locate a high-quality local optimal solution. On the other hand, one can apply a non-local solver in the parameter space to find multiple optimal solutions and select the high-quality ones. Furthermore, one can improve the DNN performance by ensembling these high-quality solutions with high diversity.\nTRUST-TECH plays an important role in achieving the above goal. In general, it computes high-quality optimal solutions for general nonlinear optimization problems, and its theoretical foundations can be found in (Chiang & Chu (1996); Lee & Chiang (2004)). It helps local solvers escape from one local optimal solution (LOS) and search for other LOSs. It has been successfully applied in guiding the Expectation Maximization method to achieve higher performance (Reddy et al. (2008)), training ANNs (Chiang & Reddy (2007); Wang & Chiang (2011)), estimating finite mixture models (Reddy et al. (2008)), and solving optimal power flow problems (Chiang et al. (2009); Zhang & Chiang (2020)). 
Additionally, it does not interfere with existing local or global solvers, but cooperates with them. TRUST-TECH efficiently searches the neighboring subspace of the promising candidates for new LOSs in a tier-by-tier manner. Eventually, a set of high-quality LOSs can be found. The idea of the TRUST-TECH method is the following: for a given loss surface of an optimization problem, each LOS has its own stability region. If one starts from a local optimum and tracks the loss values along a given direction, one will find an exit point where the loss starts to decrease steadily, which means another stability region, corresponding to a nearby LOS, has been found. By following a trajectory in that stability region, the other LOS is computed.\nWe propose an optima-exploring algorithm designed for DNNs that is able to find high-quality local optima in a systematic way, and thereby form optimal and robust ensembles. Normally, for a deep neural network, exit points can hardly be found by the original TRUST-TECH due to the huge dimensionality. So, in this work we introduce the Dynamic Searching Paths (DSP) method instead of fixed directions. We set the search directions to be trainable parameters. After an exploration step forward along the current direction, we calibrate the direction using the current gradient. By doing so, the method benefits not only from the mature Stochastic Gradient Descent (SGD) training paradigm with powerful GPU acceleration capability, but also from the fact that exit points can be easily found.\nThe overall DSP-TT method consists of four stages. First, we train the network using local solvers to get a tier-0 local optimal solution. Second, our proposed Dynamic Search Path TRUST-TECH (DSP-TT) method is called to find nearby solutions in a tier-by-tier manner. Third, a selection process is performed so that candidates with high quality are chosen. Finally, ensembles are built, with necessary fine-tuning of the selected member networks. 
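On a one-dimensional toy loss, the exploration-plus-calibration loop described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the ramping step-size schedule and the gradient-sign exit test are simplified assumptions (the paper detects exits from the loss along the path).

```python
import numpy as np

def loss(x):   # double-well toy loss with local optima at x = -1 and x = +1
    return (x * x - 1.0) ** 2

def grad(x):
    return 4.0 * x * (x * x - 1.0)

def dsp_tt_search(w0, d0=0.05, rho_max=0.2, rho_cal=0.02, max_iter=2000):
    """Escape the basin of the tier-0 solution w0 along a trainable
    direction d: step forward along d (exploration), then calibrate d
    with the gradient at w0 + d. Once the negative gradient points away
    from w0 (a toy exit-point test), plain gradient descent converges
    to a tier-1 solution in the neighboring stability region."""
    d = d0
    for i in range(1, max_iter + 1):
        rho1 = min(1e-3 * i, rho_max)      # exploration step size ramps up
        d += rho1 * np.sign(d)             # exploration step along the path
        d -= rho_cal * grad(w0 + d)        # calibration with the gradient
        if np.sign(d) * grad(w0 + d) < 0:  # exit point: new stability region
            break
    x = w0 + d                             # local descent in the new region
    for _ in range(500):
        x -= 0.05 * grad(x)
    return x
```

Starting from the local optimum at w0 = -1, the search climbs over the barrier at x = 0 despite the calibration initially pulling it back, and gradient descent then converges to the neighboring optimum at x = +1 (and symmetrically in the other direction).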
To the best of our knowledge, this paper is the first one to search for multiple solutions on deep neural networks in a systematic way.\nOur major contributions and highlights are summarized as follows:\n• We propose the Dynamic Searching Path (DSP) method that enables efficient exploration of high-dimensional parameter spaces.\n• We show that combining the TRUST-TECH method with DSP (DSP-TT) is effective in finding multiple optimal solutions on deep neural networks systematically.\n• We design and implement the algorithm efficiently, so that it obtains multiple local solutions within one training session with minor GPU memory overhead.\n• We develop DSP-TT Ensembles from the high-quality, diverse solutions found by DSP-TT, further improving DNN performance." }, { "heading": "2 RELATED WORK", "text": "The synergy between massive numbers of parameters and nonlinear activations in deep neural networks leads to the existence of multiple LOSs when trained on a specific dataset. Experiments show that different initializations lead to different solutions with various qualities (Dauphin et al. (2014a)). Even with the same initialization, the network can converge to different solutions depending on the loss function and the solver (Im et al. (2016)). Many regularization techniques have therefore been proposed to force the network to converge to a better solution, some of which are proven to be useful and popular (Kingma & Ba (2015); Srivastava et al. (2014); Ioffe & Szegedy (2015)). However, it is still mysterious how these regularized solutions compare to the global optimum.\nThere are researchers that focus on characterizing different local optima and investigating the internal relations among them. It is claimed in (Hochreiter & Schmidhuber (1997); Keskar et al. (2016)) that sharp minima prevent deep neural networks from generalizing well on the testing dataset. Later, Dinh et al. (2017) argued that the definition of flatness in (Keskar et al. 
(2016)) is problematic and came up with an example where solutions with different geometries can have similar test-time performances. Li et al. (2018) designed a new visualization method that rebuilt the correspondence between the sharpness of the minimizer and the generalization capability. On the other hand, some researchers apply meta-heuristic algorithms to obtain a better local minimizer (Gudise & Venayagamoorthy (2003); Zhang et al. (2007); Juang (2004); Leung et al. (2003)). However, these methods were either designed for obsolete toy models or evaluated on explicit benchmark objective functions with analytical forms for the global optimum, and therefore the effectiveness of these algorithms on deep architectures and large datasets seems unconvincing. Moreover, the advantage of the global searching ability seems to be crippled when it comes to deep neural networks, and the minimizers they found are still local. Recently, Garipov et al. (2018) revealed the relations among local optima by building pathways, called Mode Connectivities, as simple as polygonal chains or Bezier curves, that connect any two local optima. Draxler et al. (2018) also found similar results at the same time, although they used the Nudged Elastic Band (Jonsson et al. (1998)) method from quantum chemistry.\nTo address the issue of converging to suboptimal solutions, a great deal of research effort was directed to ensembles. Xie et al. (2013) proposed horizontal and vertical ensembles that combine the output of networks at different training epochs. Laine & Aila (2016) used a group of models with different regularization and augmentation conditions to create variety. Moghimi et al. (2016) borrowed the concept of boosting to create a strong ensemble of CNNs. Izmailov et al. (2018) found that averaging weights from different iterations leads to flatter solutions than from SGD and helps in generalization. Huang et al. 
(2017a) proposed a method that obtains ensembles by collecting several local minima along a single training process using a cyclical learning rate schedule. Zhang et al. (2020) used a similar approach, but with a sampling capability that fully exploits each mode. Garipov et al. (2018) developed Fast Geometric Ensembling based on Mode Connectivity. Although the methods in these papers obtain multiple networks within one training session, these ensembles are still largely dependent on initialization. While these ensemble methods perform better than a single network, a naive randomly initialized ensemble is still the best choice when the training budget is unconstrained. Fort et al. (2020) explained this phenomenon by showing that independently initialized networks explore different modes in function space, unlike weight-averaging approaches. Shen et al. (2019) improved the ensemble inference efficiency via a teacher-student paradigm distilling the knowledge of an ensemble into one single network. Yang et al. (2020) built ensembles by randomly initializing on a subparameter space, aiming to alleviate the exponentially growing number of local minima in deep networks. Wang & Chiang (2011) used the TRUST-TECH (Chiang & Chu (1996); Lee & Chiang (2004); Chiang & Alberto (2015)) method to perform a systematic search for diversified minimizers to obtain their ensembles. They implemented TRUST-TECH for training and constructing high-quality ensembles of artificial neural networks and showed that their method consistently outperforms other training methods. We generalize this method and tailor it for deep architectures and to work efficiently with popular local solvers in deep learning.\n3 TRUST-TECH METHOD FOR MULTIPLE OPTIMAL SOLUTIONS\n3.1 TRUST-TECH METHODOLOGY\nAnother category of methods has been developed in recent years for systematically computing a set of local optimal solutions in a deterministic manner. 
This family of methods is termed TRUST-TECH methodology, standing for Transformation Under Stability-reTaining Equilibria Characterization. It is based on the following transformations:\n(i) the transformation of a local optimal solution (LOS) of a nonlinear optimization problem into a stable equilibrium point (SEP, Chiang & Chu (1996)) of a continuous nonlinear dynamical system.\n(ii) the transformation of the search space of nonlinear optimization problems into the union of the closure of the stability regions of SEPs.\nHence, the optimization problem (i.e. the problem of finding LOSs) is transformed into the problem of finding SEPs, and therefore we use the terms LOS and SEP interchangeably in the following discussion. It will become clear that the stability regions of SEPs play an important role in finding these local optimal solutions. We note that, given a LOS, its corresponding first-tier LOSs are defined as those optimal solutions whose corresponding stability boundaries have a non-empty intersection with the stability boundary of the LOS (Chiang & Chu (1996); Lee & Chiang (2004)). The definition of the stability boundary and its characterization can be found in Chiang & Fekih-Ahmed (1996). Similarly, its second-tier LOSs are defined as those optimal solutions whose corresponding stability boundaries have a non-empty intersection with the stability boundary of first-tier LOSs (Chiang & Chu (1996); Lee & Chiang (2004)). See fig. 1 for an illustration.\nWe consider a general nonlinear unconstrained optimization problem defined as follows:\nmin_x c(x)    (1)\nwhere c : D ⊂ R^n → R is assumed to be continuously differentiable and D is the set of feasible points (or search space). A point x∗ ∈ D is called a local minimum if c(x∗) ≤ c(x) for all x ∈ D with ‖x − x∗‖ < σ for some σ > 0. To systematically search for multiple LOSs, a generalized negative gradient system based on the objective eq. 
(1) is constructed and is described by\ndx/dt = −grad_R c(x) = −R(x)^{−1} · ∇c(x) = f(x(t))    (2)\nwhere the state vector x(t) of this dynamic system belongs to the Euclidean space R^n and the function f : R^n → R^n satisfies the sufficient condition for the existence and uniqueness of solutions. R(x) is a positive definite symmetric matrix (also known as the Riemannian metric) that generalizes various training algorithms. For example, if R(x) = I (identity), it is the naive gradient descent algorithm. If R(x) = J(x)^T J(x) (J is the Jacobian matrix), then it is the Gauss-Newton method. If R(x) = J(x)^T J(x) + µI, it becomes the Levenberg-Marquardt (LM) algorithm. The Theorem of the Equilibrium Points and Local Optima (Lee & Chiang (2004)) shows one nice property of the gradient system (2): a critical point of the optimization problem (1) is an asymptotically stable equilibrium point (SEP) of the dynamic system (2), i.e., x̄ is a SEP of (2) if and only if x̄ is an isolated local minimum of (1). Hence, the task of finding the LOSs of (1) can be achieved by finding the corresponding SEPs of (2). In short, TRUST-TECH is a dynamical method designed to systematically compute multiple LOSs with the following features:\n(i) it is a systematic and deterministic method to escape from a LOS towards another LOS, (ii) it finds multiple LOSs in a tier-by-tier manner (see fig. 1), and\n(iii) it has a solid theoretical foundation (Chiang & Chu (1996); Lee & Chiang (2004); Chiang & Alberto (2015); Zhang & Chiang (2020)).\nAnother distinguishing feature of TRUST-TECH is its ability to guide a local method and/or a metaheuristic method for the effective computation of a set of LOSs or even the global optimal solution." }, { "heading": "3.2 SYSTEMATIC SEARCH ON DEEP NEURAL NETS", "text": "Our method follows the paradigm of TRUST-TECH. The central idea is to find multiple LOSs in a tier-by-tier manner. 
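The role of the metric R(x) in eq. (2) can be made concrete on a small least-squares problem c(x) = 0.5‖Ax − b‖², where ∇c = Jᵀr with residual r = Ax − b and Jacobian J = A. The sketch below (an illustration, not the paper's code) computes the descent direction −R(x)⁻¹∇c(x) for the three choices of R mentioned above:

```python
import numpy as np

def descent_direction(J, r, metric="gd", mu=1.0):
    """Direction -R(x)^{-1} grad c(x) for c(x) = 0.5 * ||r(x)||^2,
    with grad c(x) = J^T r; R encodes the solver (cf. eq. (2))."""
    g = J.T @ r
    n = J.shape[1]
    if metric == "gd":        # R = I             -> naive gradient descent
        R = np.eye(n)
    elif metric == "gn":      # R = J^T J         -> Gauss-Newton
        R = J.T @ J
    elif metric == "lm":      # R = J^T J + mu*I  -> Levenberg-Marquardt
        R = J.T @ J + mu * np.eye(n)
    else:
        raise ValueError(metric)
    return -np.linalg.solve(R, g)

# For a linear residual, a single Gauss-Newton step from x = 0 lands
# exactly on the least-squares solution of min_x 0.5 * ||A x - b||^2.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 0.0])
x_gn = descent_direction(A, A @ np.zeros(2) - b, metric="gn")
```

Here x_gn matches the normal-equations solution, while the LM direction interpolates between the Gauss-Newton direction (small µ) and the plain gradient direction (large µ).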
On small-scale problems, applying fixed searching directions has proven to be effective in practice (Chiang et al. (2009); Reddy et al. (2008); Chiang & Reddy (2007); Wang & Chiang (2011)). In these applications, either random directions or eigenvectors of the objective Hessian evaluated at each SEP were used. But in training deep neural networks, finding a proper direction is challenging. For a deep neural network, when searching along a random and fixed direction, the loss value will grow indefinitely. Another issue is that the computational cost of the original TRUST-TECH is high. Specifically, it assumes cheap evaluation of the objective function at each search step. However, in supervised learning on a large dataset, only the empirical loss is accessible instead of the ground-truth objective function. Evaluating it would mean computing the loss over the entire training set at each step, which is practically impossible due to computational restrictions.\nTo tackle both challenges, we propose the Dynamic Search Path (DSP) method that enables exploration of deep neural networks’ parameter space. Furthermore, we apply the DSP method to serve as the search paths for TRUST-TECH (DSP-TT). Details are discussed in section 3.2.1. An example of a one-tier DSP-TT method is shown in Algorithm 1.\nAlgorithm 1 One-Tier DSP-TT Search\n1: procedure T1SEARCH(model, dataset, maxiter, batchsize)\n2: Initialize paths, candidates = {model.parameters} ▷ initialize search paths and solution set\n3: for k ← 1 to maxiter do\n4: batch ← getBatch(dataset, batchsize)\n5: ρ1, ρ2 ← Schedule(iter, maxiter, exit_found) ▷ update learning rates\n6: ∆k ← Select(paths) ▷ randomly select one path\n7: Update(model, ∆k, ρ1) ▷ forward search step\n8: exit_found ← CheckExit(model, ∆k, batch) ▷ check for exit on the kth path\n9: if exit_found then\n10: ωs,k ← LocalSolver(model, ∆k, dataset) ▷ converge to tier-1 optimum\n11: candidates = candidates ∪ {ωs,k} ▷ update solution set\n12: else\n13: Calibrate(model, ρ2, ∆k, batch) ▷ calibration step\n14: return candidates" }, { "heading": "3.2.1 OBTAINING DYNAMIC SEARCHING PATHS FOR TRUST-TECH", "text": "In this section, we go through the details of how to construct dynamic searching paths during the TRUST-TECH computation and how they help converge to nearby LOSs. Construction of the searching path is inspired by the mode connectivity proposed in Garipov et al. (2018), in which the authors found that there exist low-loss “tunnels” between different LOSs. But Mode Connectivity is used to find a high-accuracy pathway between two local solutions by optimizing the expectation over a uniform distribution on the path. Our focus is finding the proper searching directions towards nearby optima when starting from one LOS. They also claimed that a path φθ cannot be explicitly learned when given one starting point. However, we find that such a construction is possible. Specifically, by redesigning the objective and combining the exploration capability of TRUST-TECH with the exploitation capability of SGD, another local optimum can be found starting from one LOS. More generally, by using such an optimization-based path-finding technique, one can find multiple tier-one SEPs (i.e. nearby LOSs) simultaneously.\nTo do this, we first train the neural network to obtain a LOS ω0 ∈ R|net|. Then we define a trainable search direction vector di ∈ R|net| (randomly initialized at d0), so that during a TRUST-TECH search from ω0, the instantaneous parameter vector at step i is represented as (ω0 + di). 
The second term is the DSP calibration term, where ρ2(ti) is the step size schedule for the calibration, descentd represents a general local descent solvers, such as Gradient Descent, Newton’s Method, etc. f(·) is the dynamics defined in Equation (2), where various local solvers can be applied here. The stopping criteria of ρ1(ti) is determined dynamically by either an exit point is found or ρmax is reaches.\nThe above steps repeats until (ω0 + di) converges to another LOS, which we call it a tier-1 solution associated with ω0. An intuitive demonstration of this process is shown in Figure 2b.\nOur proposed scheme is scalable to performing multiple search directions starting from one LOS. To do this, we initialize multiple directions, and at each step, each search direction is updated via eq. (3). It is also worth noting that during training, the computation graph size is the same as the original network, since the algorithm only picks one direction to be included in the computation graph. Thus, minor memory overhead is introduced in practice. As for the computational efficiency, our proposed method evaluates objectives on mini-batches instead of the entire dataset, and determines the stopping criteria by an exponential moving average of past batch evaluations. To further stabilize the stochastic behavior caused by mini-batch evaluations, buffer variables are used to determine the state transition between up (loss values are climbing in current stability region) and down (reaches a nearby stability region and the loss decreases steadily). These resolve the efficiency issue of the original TRUST-TECH on large scale supervised learning problems." }, { "heading": "4 DSP-TT ENSEMBLES OF DEEP NEURAL NETWORKS", "text": "When training budget is less constrained, high-quality of each tier-1 solution is emphasized as having better test accuracy than the tier-0 network. 
On the other hand, for building ensembles with a limited budget, high quality is judged more by the diversity among the collection of locally optimal neural networks found, which better serves the ensemble, than by the performance of any single network.
With the proposed DSP-TT, a set of optimal network parameters with high accuracy can be found systematically given enough training budget; with a limited budget, the high diversity among tier-0 and tier-1 solutions still remedies the weaker performance of the tier-1 networks when serving the ensemble. Individual quality is guaranteed because the starting point of any search is already a high-quality LOS obtained from mature SGD-based solvers, as is also shown in the experiments, especially in Table 4. As for diversity, SEPs (i.e., optimal parameter values, or LOSs) are separated by at least two stability regions because each SEP has its own stability region. It is necessary to initialize parameters in different stability regions in order to find multiple optimal solutions. The proposed TRUST-TECH-based method is systematic in characterizing stability regions, while other heuristic-based algorithms are not. Therefore, the diversity among the SEPs found by our method is also high due to the mutual exclusiveness of stability regions.
The high-quality LOSs with high diversity further motivate us to build ensembles that are more robust and accurate than each single member. First, a list of candidates with high quality and diversity is selected. After that, a fine-tuning process is executed if necessary to help any under-fitted candidates toward better convergence. Since the search process already integrates the gradient information, the fine-tuning in our algorithm requires little effort. In fact, as shown in the experiments, fine-tuning does not benefit the ensembling performance, so this procedure is skipped by default.
Finally, we build the final ensembles by either averaging (regression) or voting on (classification) the outputs. More sophisticated ensembling methods could be applied here, but they are beyond the scope of this paper." }, { "heading": "5 EXPERIMENTS", "text": "Exit-point verification is run using MLPs on the UCI-wine and MNIST datasets. Further experiments are run using VGG-16 (Simonyan & Zisserman (2014)), DenseNet-100-BC (Huang et al. (2017b)) and ResNet-164 (He et al. (2016)) on the CIFAR datasets. The program is developed on the PyTorch framework. Each configuration is run multiple times and the average performance is shown.
Hyperparameters Training budget: DenseNet has a training budget of 300 epochs, and ResNet/VGG have 200 epochs. Batch size: 128 for VGG and ResNet, and 64 for DenseNet. DSP-TT parameters: ρ1 increases by 0.001 per iteration, and ρ2 is 0.1× the initial tier-0 learning rate. The fine-tuning phase requires 10 epochs per solution. All others: DenseNet follows Huang et al. (2017b); VGG and ResNet follow Garipov et al. (2018).
For DSP-TT ensembles, exit points are usually found in negligible time (e.g. around 1 min on CIFAR, compared to a full training which takes hours). So 50 epochs are given to one tier of DSP-TT search with all exit points, while the rest of the budget is given to tier-0 training." }, { "heading": "5.1 EXIT POINT VERIFICATION", "text": "Exit points play an important role in the TRUST-TECH method for finding multiple locally optimal solutions. Figures 3a and 3b show the full-gradient and batch versions of the loss change with respect to the DSP-TT search iterations along one search path. The loss value first goes up, escaping from the tier-0 solution.
At a certain point, the loss reaches a local maximum and then goes down, suggesting that the search path hits the stability boundary and enters a nearby stability region.
To further verify that an exit point lies on the stability boundary, we perform the following visualization: several points along the search path near the exit point are sampled. Then a forward integration (gradient descent with a small step size) is executed starting from each sample. Trajectories are plotted by projecting the parameter space onto two random orthogonal directions. Due to the high computation cost, this process is only simulated using a 1-layer MLP with 5 neurons (61 parameters) trained on the UCI-wine dataset. Each integration process is executed for 50,000 steps with a step size of 0.01. As shown in fig. 3c, the points before (red) and after (blue) the exit converge to two different points in the 2D projection space. We also observe that the cosine between the initial and updated search directions remains close to 1.0 throughout the search process, suggesting that gradients only calibrate extreme dimensions of the initial direction but do not interfere with the remaining majority of dimensions.
5.2 TIER-BY-TIER SEARCH
The proposed DSP-TT computes 5 tier-one (from the tier-zero LOS) and 5 tier-two (from the best tier-one LOS) LOSs. Among these, we form the following ensembles: Tier-1 (5 tier-one LOSs); Tier-1-tune (5 tier-one LOSs, each with fine-tuning); Tier-0-1 (1 tier-zero and 5 tier-one LOSs); Tier-0-1-2 (1 tier-zero, 5 tier-one and 5 tier-two LOSs). We use SGD as the local solver and DenseNet as the architecture. As shown in Table 1, all DSP-TT-enhanced ensembles outperform the baseline model. Although Tier-0-1-2 mostly performs best among all, Tier-0-1 is sufficient in practice for efficiency, and therefore we use Tier-0-1 in all the following experiments.
From Table 1, we also find that although fine-tuning individual members can improve their own performance, it does not help much with the ensemble performance. This shows that the diversity introduced by our algorithm dominates the fine-tuning improvements of individuals. So in later experiments, all fine-tunings are omitted." }, { "heading": "5.3 COMPARISON WITH OTHER ENSEMBLE ALGORITHMS", "text": "In this section, we compare our method with other popular ensemble methods in deep learning (Huang et al. (2017a); Garipov et al. (2018)). Results are shown in Tables 2 and 3.
Besides accuracy, member diversity is another major quality for ensembles. Ideally, we want all members to perform relatively well, while each member learns some knowledge that differs from that of the others. We measure the output correlation (Huang et al. (2017a)) and the parameter distance (Garipov et al. (2018)). In Table 2, the correlation achieved by DSP-TT outperforms that of other ensemble methods.
A more detailed analysis in Table 3 shows that both the parameter distance and the output correlation of DSP-TT Ensembles are better than those of SSE and FGE, and are at a similar level to those of Individual Ensembles (multiple networks trained from scratch). Moreover, our fully trained DSP-TT Ensembles outperform Individual Ensembles, and improve the individual baseline by 15% (CIFAR10) and 13% (CIFAR100). Table 4 shows that fully trained tier-1 networks perform at least as well as the tier-0 network. This suggests that training from an exit point found by the DSP-TT method is better than training from a random initialization. It is notable that in multiple cases, FGE members are more correlated, indicating that these members are not multiple LOSs, but perturbations near one LOS. From this perspective, FGE can be regarded as fine-tuning around one locally optimal point.
From the hardware side, the DSP-TT search process introduces minor overhead to the GPU memory usage.
Specifically, baseline training of ResNet-164 takes 3,819 MB of GPU memory, which increases to 3,921 MB during the DSP-TT search. This justifies our previous claim that TRUST-TECH does not increase the size of the computation graph and introduces only a little additional overhead." }, { "heading": "5.4 ABLATION TEST ON DSP-TT HYPERPARAMETERS", "text": "The key hyperparameters for DSP-TT are ρ1 (pace of the search step) and ρ2 (step size of the calibration step) defined in Section 3.2.1. In this part we test the sensitivity to these two. We perform tests on a grid of (dρ1/dt, ρ2) pairs, and record (1) the number of iterations to finish a DSP-TT search for exit points, (2) the average ρ1 of each search path when an exit point is reached, and (3) the average distance between the search origin (tier-0 solution) and each exit point. As shown in Figure 4, DSP-TT is insensitive to ρ2. Figures 4b and 4c show that (1) ρ1 and the distance between the tier-0 solution and the exit points are highly correlated, and (2) the surface becomes flat after the increment speed of ρ1 passes 5e−4, suggesting that other stability regions are reached.
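The two diversity measures used in Section 5.3 (output correlation between member predictions and distance between member parameter vectors) can be sketched as follows. The function names and the Pearson-correlation formulation are our illustrative choices, not the paper's exact implementation.

```python
import numpy as np

# Minimal sketch of two ensemble-diversity measures: Pearson correlation
# between two members' predicted outputs, and L2 distance between their
# flattened parameter vectors.

def output_correlation(p1, p2):
    """Pearson correlation between two members' flattened output probabilities."""
    a, b = p1.ravel(), p2.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def parameter_distance(w1, w2):
    """Euclidean distance between two members' flattened parameter vectors."""
    return float(np.linalg.norm(w1.ravel() - w2.ravel()))

# Identical members should give correlation 1 and distance 0.
p = np.array([[0.9, 0.1], [0.2, 0.8]])
w = np.ones(10)
print(round(output_correlation(p, p), 6), parameter_distance(w, w))  # prints: 1.0 0.0
```

Lower pairwise correlation and larger parameter distance across members indicate higher diversity, which is the quality the DSP-TT ensembles aim for.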
Our method is general-purpose, so it can be applied to various architectures with various local solvers.
Moreover, it is observed from Table 1 that the percentage improvement in error rate is not as significant as that in loss. This suggests that the cross-entropy loss may be the bottleneck for further improvements in performance on classification tasks. Thus, designing a loss function that is more sensitive to classification accuracy would be a valuable topic in the future." } ]
2020
CONSTRUCTING MULTIPLE HIGH-QUALITY DEEP NEURAL NETWORKS: A TRUST-TECH-BASED APPROACH
SP:8b35f7c054e1ac74e0ad260add9723766df8613d
[ "This paper addresses disentanglement in the latent space of autoencoders. To this end, it combines ideas from four existing papers, namely the reconstruction loss of the Wasserstein autoencoder, the regularization term decomposition from the total correlation autoencoder, and entropy estimation using minibatch-weighted sampling or the density-ratio trick. This combination certainly makes sense, as it brings together methods that have previously been shown to work well in isolation.", "This paper extends the Wasserstein Autoencoder (WAE) work by splitting the divergence on the variational marginal into 2 terms, akin to what was done in TC-VAE. This enables directly controlling the explicit contribution of the total correlation term, which is likely to contribute to disentanglement more directly. They explore 2 variations of their model, based on different estimators of the TC term (TCWAE-MWS, using minibatch-weighted sampling; TCWAE-GAN, using a density ratio trick)." ]
Disentangled representation learning has undoubtedly benefited from objective function surgery. However, a delicate balancing act of tuning is still required in order to trade off reconstruction fidelity versus disentanglement. Building on previous successes of penalizing the total correlation in the latent variables, we propose TCWAE (Total Correlation Wasserstein Autoencoder). Working in the WAE paradigm naturally enables the separation of the total-correlation term, thus providing disentanglement control over the learned representation, while offering more flexibility in the choice of reconstruction cost. We propose two variants using different KL estimators and perform extensive quantitative comparisons on data sets with known generative factors, showing competitive results relative to state-of-the-art techniques. We further study the trade off between disentanglement and reconstruction on more-difficult data sets with unknown generative factors, where the flexibility of the WAE paradigm in the reconstruction term improves reconstructions.
[ { "affiliations": [], "name": "WASSERSTEIN AUTOENCODER" } ]
[ { "authors": [ "A. Achille", "S. Soatto" ], "title": "Information dropout: Learning optimal representations through noisy computation", "venue": "In IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "M. Aubry", "D. Maturana", "A. Efros", "B. Russell", "J. Sivic" ], "title": "Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "P. Bachman", "R.D. Hjelm", "W. Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Y. Bengio", "A. Courville", "P. Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "In IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "C.P. Burgess", "I. Higgins", "A. Pal", "L. Matthey", "N. Watters", "G. Desjardins", "A. Lerchner" ], "title": "Understanding disentangling in β-VAE", "venue": null, "year": 2018 }, { "authors": [ "R.T.K. Chen", "X. Li", "R. Grosse", "D. Duvenaud" ], "title": "Isolating sources of disentanglement in VAEs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "K. Do", "T. Tran" ], "title": "Theory and evaluation metrics for learning disentangled representations", "venue": null, "year": 2019 }, { "authors": [ "C. Eastwood", "C.K.I. Williams" ], "title": "A framework for the quantitative evaluation of disentangled representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "B. Esmaeili", "H.B. Wu", "S. Jain", "A. Bozkurt", "N. Siddharth", "B. Paige", "D.H. Brooks", "J. Dy", "J.-W. van de Meent" ], "title": "Structured disentangled representations", "venue": null, "year": 2018 }, { "authors": [ "C. Frogner", "C. Zhang", "H. Mobahi", "M. Araya", "T.A. 
Poggio" ], "title": "Learning with a Wasserstein loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "S. Gao", "R. Brekelmans", "G. Ver Steeg", "A. Galstyan" ], "title": "Auto-encoding total correlation explanation", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "M. Heusel", "H. Ramsauer", "T. Unterthiner", "B. Nessler", "S. Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a local Nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M.M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "I. Higgins", "D. Amos", "D. Pfau", "S. Racanière", "L. Matthey", "D.J. Rezende", "A. Lerchner" ], "title": "Towards a definition of disentangled representations", "venue": null, "year": 2018 }, { "authors": [ "R.D. Hjelm", "A. Fedorov", "S. Lavoie-Marchildon", "K. Grewal", "P. Bachman", "A. Trischler", "Y. Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "M.D. Hoffman", "M.J. Johnson" ], "title": "ELBO surgery: yet another way to carve up the variational evidence lower bound", "venue": "In NIPS Workshop on Advances in Approximate Bayesian Inference,", "year": 2016 }, { "authors": [ "H. Kim", "A. Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "D.P. Kingma", "J. 
Ba" ], "title": "Adam: a method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "A. Kumar", "P Sattigeri", "A Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Y. LeCun", "F.J. Huang", "L. Bottou" ], "title": "Learning methods for generic object recognition with invariance to pose and lighting", "venue": "In IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2004 }, { "authors": [ "Z. Liu", "P. Luo", "X. Wang", "X. Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "F. Locatello", "S. Bauer", "M. Lucic", "G Raetsch", "S. Gelly", "B. Schölkopf", "O. Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dSprites: Disentanglement testing", "venue": "Sprites dataset. https://github.com/deepmind/dsprites-dataset/,", "year": 2017 }, { "authors": [ "X. Nguyen", "M.J. Wainwright", "I.J. Michael" ], "title": "Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2008 }, { "authors": [ "D.J. Rezende", "S. Mohamed", "D. 
Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "P. Rubenstein", "O. Bousquet", "J. Djolonga", "C. Riquelme", "I. Tolstikhin" ], "title": "Practical and consistent estimation of f-divergences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "P.K. Rubenstein", "B. Schoelkopf", "I. Tolstikhin" ], "title": "Learning disentangled representations with Wasserstein Auto-Encoders", "venue": "In ICLR Workshop,", "year": 2018 }, { "authors": [ "M. Sugiyama", "T. Suzuki", "T. Kanamori" ], "title": "Density ratio matching under the Bregman divergence: A unified framework of density ratio estimation", "venue": "In Annals of the Institute of Statistical Mathematics,", "year": 2011 }, { "authors": [ "N. Tishby", "F.C. Pereira", "W. Bialek" ], "title": "The information bottleneck method", "venue": "In Annual Allerton Conference on Communication, Control and Computing,", "year": 1999 }, { "authors": [ "I. Tolstikhin", "O. Bousquet", "S. Gelly", "B. Schoelkopf" ], "title": "Wasserstein Auto-Encoders", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "M. Tschannen", "J. Djolonga", "P.K. Rubenstein", "S. Gelly", "M. Lucic" ], "title": "On mutual information maximization for representation learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "A. van den Oord", "Y. Li", "O. Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "S. van Steenkiste", "F. Locatello", "J. Schmidhuber", "O. Bachem" ], "title": "Are disentangled representations helpful for abstract visual reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "G. Ver Steeg", "A. 
Galstyan" ], "title": "Discovering structure in high-dimensional data through correlation explanation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "C. Villani" ], "title": "Optimal Transport: Old and New", "venue": null, "year": 2008 }, { "authors": [ "S. Watanabe" ], "title": "Information theoretical analysis of multivariate correlation", "venue": "In IBM Journal of Research and Development,", "year": 1960 }, { "authors": [ "S. Zhao", "J. Song", "S. Ermon" ], "title": "InfoVAE: Balancing learning and inference in variational autoencoders", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Tolstikhin" ], "title": "2018) first restrain the space of couplings to the joint distributions of the form", "venue": null, "year": 2018 }, { "authors": [ "Secondly", "Tolstikhin" ], "title": "2018) relax the constraint in Eq. 15 using a soft constraint with a Lagrange multiplier", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning representations of data is at the heart of deep learning; the ability to interpret those representations empowers practitioners to improve the performance and robustness of their models (Bengio et al., 2013; van Steenkiste et al., 2019). In the case where the data is underpinned by independent latent generative factors, a good representation should encode information about the data in a semantically meaningful manner with statistically independent latent variables encoding for each factor. Bengio et al. (2013) define a disentangled representation as having the property that a change in one dimension corresponds to a change in one factor of variation, while being relatively invariant to changes in other factors. While many attempts to formalize this concept have been proposed (Higgins et al., 2018; Eastwood & Williams, 2018; Do & Tran, 2019), finding a principled and reproducible approach to assess disentanglement is still an open problem (Locatello et al., 2019).\nRecent successful unsupervised learning methods have shown how simply modifying the ELBO objective, either re-weighting the latent regularization terms or directly regularizing the statistical dependencies in the latent, can be effective in learning disentangled representation. Higgins et al. (2017) and Burgess et al. (2018) control the information bottleneck capacity of Variational Autoencoders (VAEs, (Kingma & Welling, 2014; Rezende et al., 2014)) by heavily penalizing the latent regularization term. Chen et al. (2018) perform ELBO surgery to isolate the terms at the origin of disentanglement in β-VAE, improving the reconstruction-disentanglement trade off. Esmaeili et al. (2018) further improve the reconstruction capacity of β-TCVAE by introducing structural dependencies both between groups of variables and between variables within each group. 
Alternatively, directly regularizing the aggregated posterior to the prior with density-free divergences (Zhao et al., 2019) or moment matching (Kumar et al., 2018), or simply penalizing a high Total Correlation (TC, (Watanabe, 1960)) in the latent (Kim & Mnih, 2018), has shown good disentanglement performances.
In fact, information theory has been fertile ground for tackling representation learning. Achille & Soatto (2018) re-interpret VAEs from an Information Bottleneck view (Tishby et al., 1999), re-phrasing it as a trade-off between sufficiency and minimality of the representation, regularizing a pseudo TC between the aggregated posterior and the true conditional posterior. Similarly, Gao et al. (2019) use the principle of Total Correlation Explanation (CorEX) (Ver Steeg & Galstyan, 2014) and maximize the mutual information between the observation and a subset of anchor latent points. Maximizing the mutual information (MI) between the observation and the latent has been broadly used (van den Oord et al., 2018; Hjelm et al., 2019; Bachman et al., 2019; Tschannen et al., 2020), showing encouraging results in representation learning. However, Tschannen et al. (2020) argued that MI maximization alone cannot explain the disentanglement performances of these methods.
Building on the Optimal Transport (OT) problem (Villani, 2008), Tolstikhin et al. (2018) introduced the Wasserstein Autoencoder (WAE), an alternative to the VAE for learning generative models. Similarly to the VAE, the WAE maps the data into a (low-dimensional) latent space while regularizing the averaged encoding distribution. This is in contrast with VAEs, where the posterior is regularized at each data point, and allows the encoding distribution to capture significant information about the data while still matching the prior when averaged over the whole data set.
Interestingly, by directly regularizing the aggregated posterior, the WAE hints at more explicit control over the way the information is encoded, and thus better disentanglement. The reconstruction term of the WAE allows for any cost function on the observation space, opening the door to better-suited reconstruction terms, for example when working with continuous RGB data sets, where the Euclidean distance or any metric on the observation space can result in more accurate reconstructions of the data.
In this work, following the success of regularizing the TC in disentanglement, we propose to use the Kullback-Leibler (KL) divergence as the latent regularization function in the WAE. We introduce the Total Correlation WAE (TCWAE) with an explicit dependency on the TC of the aggregated posterior. Using two different estimators for the KL terms, we perform extensive comparisons with successful methods on a number of data sets. Our results show that TCWAEs achieve competitive disentanglement performances while improving modelling performance by allowing flexibility in the choice of reconstruction cost." }, { "heading": "2 IMPORTANCE OF TOTAL CORRELATION IN DISENTANGLEMENT", "text": "" }, { "heading": "2.1 TOTAL CORRELATION", "text": "The TC of a random vector Z ∈ 𝒵 under P is defined by

\mathrm{TC}(Z) \triangleq \sum_{d=1}^{d_Z} H_{p_d}(Z_d) - H_p(Z) \qquad (1)

where p_d(z_d) is the marginal density over only z_d and H_p(Z) \triangleq -\mathbb{E}_p \log p(Z) is the Shannon differential entropy, which encodes the information contained in Z under P. Since

\sum_{d=1}^{d_Z} H_{p_d}(Z_d) \ge H_p(Z) \qquad (2)

with equality when the marginals Z_d are mutually independent, the TC can be interpreted as the loss of information when assuming mutual independence of the Z_d; namely, it measures the mutual dependence of the marginals. Thus, in the context of disentanglement learning, we seek a low TC of the aggregated posterior, p(z) = ∫_X p(z|x) p(x) dx, which forces the model to encode the data into statistically independent latent codes.
High MI between the data and the latent is then obtained when the posterior, p(z|x), manages to capture relevant information from the data." }, { "heading": "2.2 TOTAL CORRELATION IN ELBO", "text": "We consider latent generative models p_θ(x) = ∫_Z p_θ(x|z) p(z) dz with prior p(z) and decoder network p_θ(x|z) parametrized by θ. VAEs approximate the intractable posterior p(z|x) by introducing an encoding distribution (the encoder), q_φ(z|x), and learning θ and φ simultaneously when optimizing the variational lower bound, or ELBO, defined in Eq. 3:

\mathcal{L}_{\mathrm{ELBO}}(\theta, \phi) \triangleq \mathbb{E}_{p_{\mathrm{data}}(X)} \Big[ \mathbb{E}_{q_\phi(Z|X)} [\log p_\theta(X|Z)] - \mathrm{KL}\big(q_\phi(Z|X)\,\|\,p(Z)\big) \Big] \le \mathbb{E}_{p_{\mathrm{data}}(X)} \log p_\theta(X) \qquad (3)

Following Hoffman & Johnson (2016), we can decompose the KL term in Eq. 3 as:

\frac{1}{N} \sum_{n=1}^{N} \mathrm{KL}\big(q_\phi(Z|x_n)\,\|\,p(Z)\big) = \underbrace{\mathrm{KL}\big(q(Z,N)\,\|\,q(Z)p(N)\big)}_{\text{(i) index-code MI}} + \underbrace{\mathrm{KL}\big(q(Z)\,\|\,p(Z)\big)}_{\text{(ii) marginal KL}} \qquad (4)

where p(n) = 1/N, q(z|n) = q(z|x_n), q(z, n) = q(z|n)p(n) and q(z) = \sum_{n=1}^{N} q(z|n) p(n). Term (i) refers to the index-code mutual information and represents the MI between the data and the latent under the joint distribution q(z, n), and term (ii) to the marginal KL matching the aggregated posterior to the prior. While the discussion on the impact of a high index-code MI on disentanglement learning is still open, the marginal KL term plays an important role in disentanglement. Indeed, it pushes the encoder network to match the prior on average, as opposed to matching the prior for each data point. Combined with a factorized prior p(z) = \prod_d p_d(z_d), as is often the case, the aggregated posterior is forced to factorize and align with the axes of the prior. More specifically, the marginal KL term in Eq. 4 can be decomposed as the sum of a TC term and a dimension-wise KL term:

\mathrm{KL}\big(q(Z)\,\|\,p(Z)\big) = \mathrm{TC}\big(q(Z)\big) + \sum_{d=1}^{d_Z} \mathrm{KL}\big(q_d(Z_d)\,\|\,p_d(Z_d)\big) \qquad (5)

Thus maximizing the ELBO implicitly minimizes the TC of the aggregated posterior, enforcing the aggregated posterior to disentangle, as Higgins et al. (2017) and Burgess et al.
(2018) observed when strongly penalizing the KL term in Eq. 3. Chen et al. (2018) leverage the KL decomposition in Eq. 5 by refining the heavy latent penalization to the TC only. However, the index-code MI term in Eq. 4 seems to have little to no role in disentanglement (see the ablation study of Chen et al. (2018)), while potentially harming the reconstruction performances (Hoffman & Johnson, 2016)." }, { "heading": "3 WAE NATURALLY GOOD AT DISENTANGLING?", "text": "In this section we introduce the OT problem and the WAE objective, and discuss the compelling properties of WAEs for representation learning. Mirroring the β-TCVAE decomposition, we derive the TCWAE objective." }, { "heading": "3.1 WAE", "text": "The Kantorovich formulation of the OT between the true-but-unknown data distribution P_D and the model distribution P_θ, for a given cost function c, is defined by:

\mathrm{OT}_c(P_D, P_\theta) = \inf_{\Gamma \in \mathcal{P}(P_D, P_\theta)} \int_{\mathcal{X} \times \mathcal{X}} c(x, \tilde{x})\, \gamma(x, \tilde{x})\, \mathrm{d}x\, \mathrm{d}\tilde{x} \qquad (6)

where \mathcal{P}(P_D, P_\theta) is the space of all couplings of P_D and P_θ; namely, the space of joint distributions Γ on 𝒳 × 𝒳 whose densities γ have marginals p_D and p_θ. Tolstikhin et al. (2018) derive the WAE objective by restraining this space and relaxing the hard constraint on the marginal using a soft constraint with a Lagrange multiplier (see Appendix A for more details):

W_{D,c}(\theta, \phi) \triangleq \mathbb{E}_{p_D(x)} \mathbb{E}_{q_\phi(z|x)} \mathbb{E}_{p_\theta(\tilde{x}|z)}\, c(x, \tilde{x}) + \lambda\, D\big(q(Z)\,\|\,p(Z)\big) \qquad (7)

where D is any divergence function and λ a relaxation parameter. The decoder, p_θ(x̃|z), and the encoder, q_φ(z|x), are optimized simultaneously, by dropping the closed-form minimization over the encoder network, with standard stochastic gradient descent methods.
Similarly to the ELBO, objective 7 consists of a reconstruction cost term and a latent regularization term, preventing the latent codes from drifting away from the prior. However, the WAE explicitly penalizes the aggregated posterior. This motivates, following Section 2.2, the use of the WAE in disentanglement learning. Rubenstein et al.
(2018) have shown promising disentanglement performances without modifying objective 7. Another important difference lies in the functional form of the reconstruction cost in the reconstruction term. Indeed, the WAE allows for more flexibility in the reconstruction term, with any cost function allowed; in particular, it allows for cost functions better suited to the data at hand and for the use of deterministic decoder networks (Tolstikhin et al., 2018; Frogner et al., 2015). This can potentially result in an improved reconstruction-disentanglement trade-off, as we empirically find in Sections 4.2 and 4.1." }, { "heading": "3.2 TCWAE", "text": "In this section, for notational simplicity, we drop the explicit dependency of the distributions on their respective parameters.
Following Section 2.2 and Eq. 5, we choose the divergence function D in Eq. 7 to be the KL divergence and assume a factorized prior (e.g. p(z) = \mathcal{N}(0_{d_Z}, I_{d_Z})), obtaining the same decomposition as in Eq. 5. Re-weighting each term in Eq. 5 with hyper-parameters β and γ, and plugging into Eq. 7, we obtain our TCWAE objective:

W_{\mathrm{TC}} \triangleq \mathbb{E}_{p(x_n)} \mathbb{E}_{q(z|x_n)} \Big[ \mathbb{E}_{p(\tilde{x}_n|Z)}\, c(x_n, \tilde{x}_n) \Big] + \beta\, \mathrm{KL}\Big(q(Z)\,\Big\|\, \prod_{d=1}^{d_Z} q_d(Z_d)\Big) + \gamma \sum_{d=1}^{d_Z} \mathrm{KL}\big(q_d(Z_d)\,\|\,p_d(Z_d)\big) \qquad (8)

Given the positivity of the KL divergence, the TCWAE objective in Eq. 8 is an upper bound of the WAE objective of Eq. 7 with λ = min(β, γ).
Eq. 8 can be directly related to the β-TCVAE objective of Chen et al. (2018):

-\mathcal{L}_{\beta\text{-TC}} \triangleq \mathbb{E}_{p(x_n)} \mathbb{E}_{q(z|x_n)} \big[ -\log p(x_n|Z) \big] + \beta\, \mathrm{KL}\Big(q(Z)\,\Big\|\, \prod_{d=1}^{d_Z} q_d(Z_d)\Big) + \gamma \sum_{d=1}^{d_Z} \mathrm{KL}\big(q_d(Z_d)\,\|\,p_d(Z_d)\big) + \alpha\, \mathrm{KL}\big(q(Z,N)\,\|\,q(Z)p(N)\big) \qquad (9)

As already mentioned, the main differences are the absence of the index-code MI term and a different reconstruction cost function. Setting α = 0 in Eq. 9 makes the two latent regularizations match but breaks the inequality in Eq. 3. Matching the two reconstruction terms would be possible if we could find a ground cost function c such that \mathbb{E}_{p(\tilde{x}_n|Z)} c(x_n, \tilde{x}_n) = -\log p(x_n|Z)."
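To make the bookkeeping of the TCWAE objective (Eq. 8) concrete, the sketch below assembles it from a squared-Euclidean reconstruction term and two pluggable KL estimators (MWS- or GAN-based in the paper). The function names, the dummy estimator callables and the toy numbers are our illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

# Schematic assembly of Eq. (8): reconstruction cost + beta * TC term
# + gamma * dimension-wise KL term. The two estimators are passed in as
# callables; here we use constant stand-ins to show the structure only.

def tcwae_objective(x, x_recon, z, tc_estimator, dimwise_kl_estimator,
                    beta=1.0, gamma=1.0):
    # reconstruction term with the squared Euclidean ground cost c(x, y) = ||x - y||^2
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    tc = tc_estimator(z)                 # estimate of KL(q(Z) || prod_d q_d(Z_d))
    dim_kl = dimwise_kl_estimator(z)     # estimate of sum_d KL(q_d(Z_d) || p_d(Z_d))
    return recon + beta * tc + gamma * dim_kl

x = np.zeros((4, 3))
x_recon = np.ones((4, 3))
z = np.zeros((4, 2))
val = tcwae_objective(x, x_recon, z,
                      tc_estimator=lambda z: 0.5,
                      dimwise_kl_estimator=lambda z: 0.25,
                      beta=2.0, gamma=4.0)
print(val)  # 3.0 + 2*0.5 + 4*0.25 = 5.0
```

The hyper-parameters β and γ weight the total-correlation and dimension-wise KL penalties independently, which is what gives the TCWAE its explicit disentanglement control.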
}, { "heading": "3.3 ESTIMATORS", "text": "While being grounded and motivated by information theory and earlier works on disentanglement, using the KL as the latent divergence function, as opposed to other sample-based divergences (Tolstikhin et al., 2018; Patrini et al., 2018), presents its own challenges. Indeed, the KL terms are intractable, and especially, we need estimators to approximate the entropy terms. We propose to use two estimators, one based on importance-weighted sampling (Chen et al., 2018), the other on adversarial estimation using the density-ratio trick (Kim & Mnih, 2018).

TCWAE-MWS

Chen et al. (2018) propose to estimate the intractable terms E_q log q(Z) and E_{qd} log qd(Zd) in the KL terms of Eq. 8 with Minibatch-Weighted Sampling (MWS). Considering a batch of observations {x1, . . . , xNbatch}, they sample the latent codes zi ∼ q(z|xi) and compute:

E_{q(z)} log q(z) ≈ (1/Nbatch) ∑_{i=1}^{Nbatch} log [ (1/(N × Nbatch)) ∑_{j=1}^{Nbatch} q(zi|xj) ] (10)

This estimator, while being easily computed from samples, is a biased estimator of E_q log q(Z). Chen et al. (2018) also proposed an unbiased version, the Minibatch-Stratified Sampling (MSS). However, they found that it did not result in improved performances, and thus, following Chen et al. (2018), we choose to use the simpler MWS estimator. We call the resulting algorithm the TCWAE-MWS. Other sample-based estimators of the entropy or the KL divergence have been proposed (Rubenstein et al., 2019; Esmaeili et al., 2018). However, we choose the solution of Chen et al. (2018) for 1) its simplicity and 2) the similarities between the TCWAE and β-TCVAE objectives.

TCWAE-GAN

A different approach, similar in spirit to the WAE-GAN originally proposed by Tolstikhin et al. (2018), is based on adversarial training. While Tolstikhin et al. (2018) use the adversarial training to approximate the JS divergence, Kim & Mnih (2018) use the density-ratio trick and adversarial training to estimate the intractable terms in Eq. 8. 
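Before stating the estimator (Eq. 11 below), a small numpy sanity check of the density-ratio idea: on a toy 2-D Gaussian aggregate posterior q(z), the Bayes-optimal discriminator between q and the product of its marginals is known in closed form, and plugging it into E_q log D/(1−D) recovers the true total correlation (the correlation value 0.8 is an arbitrary choice for this demo):

```python
import numpy as np

def kl_from_density_ratio(d_probs):
    # Eq. 11: KL(q || prod_d q_d) ≈ E_{z~q}[ log D(z) - log(1 - D(z)) ]
    return float(np.mean(np.log(d_probs) - np.log1p(-d_probs)))

rng = np.random.default_rng(0)
rho = 0.8  # correlation of the toy aggregate posterior q(z)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)  # z ~ q

prec = np.linalg.inv(cov)
log_q = (-np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(cov))
         - 0.5 * np.einsum("ni,ij,nj->n", z, prec, z))
log_qbar = -np.log(2 * np.pi) - 0.5 * np.sum(z ** 2, axis=1)  # product of N(0,1) marginals

d_star = 1.0 / (1.0 + np.exp(log_qbar - log_q))  # Bayes-optimal D(z) = q/(q + qbar)
tc_est = kl_from_density_ratio(d_star)
tc_true = -0.5 * np.log(1.0 - rho ** 2)  # ≈ 0.511; the Monte-Carlo estimate lands close
```

In practice D is of course not available in closed form; it is the trained discriminator described next.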
The density-ratio trick (Nguyen et al., 2008; Sugiyama et al., 2011) estimates the KL divergence as:

KL( q(z) ‖ ∏_{d=1}^{dZ} qd(zd) ) ≈ E_{q(z)} log [ D(z) / (1 − D(z)) ] (11)

where D plays the same role as the discriminator in GANs and outputs an estimate of the probability that z is sampled from q(z) and not from ∏_{d=1}^{dZ} qd(zd). Given that we can easily sample from q(z), we can use Monte-Carlo sampling to estimate the expectation in Eq. 11. The discriminator D is adversarially trained alongside the decoder and encoder networks. We call this adversarial version the TCWAE-GAN." }, { "heading": "4 EXPERIMENTS", "text": "We perform a series of quantitative and qualitative experiments, starting with an ablation study on the impact of using different latent regularization functions in WAEs, followed by a quantitative comparison of the disentanglement performances of our methods with existing ones on toy data sets, before moving to a qualitative assessment of our method on more challenging data sets. Details of the data sets, the experimental setup as well as the network architectures are given in Appendix B. In all the experiments we fix the ground-cost function of the WAE-based methods to be the squared Euclidean distance: c(x, y) = ‖x − y‖²_{L2}." }, { "heading": "4.1 QUANTITATIVE ANALYSIS: DISENTANGLEMENT ON TOY DATA SETS", "text": "Ablation study of the latent divergence function We compare the impact of the different latent regularization functions in WAE-MMD (Tolstikhin et al., 2018), TCWAE-MWS and TCWAE-GAN. We take β = γ in the TCWAE objectives, isolating the impact of the different latent divergence functions used in the TCWAE and the original WAE. We train the methods with β ∈ {1, 2, 4, 6, 8, 10}, and report the results in Figure 1 in the case of the NoisydSprites data set (Locatello et al., 2019). As expected, the higher the penalization on the latent regularization (high β), the poorer the reconstructions. 
We can see that the trade off between reconstruction and latent regularization is more sensitive for TCWAE-GAN, where a relatively modest improvement in latent regularization results in an important deterioration of reconstruction performances, while TCWAE-MWS is less sensitive. This is better illustrated in Figure 1c, with a much higher slope for TCWAE-GAN than for TCWAE-MWS. WAE seems to be relatively little impacted by the latent penalization weight. We note in Figure 1b the bias of the MWS estimator (Chen et al., 2018). Finally, we plot the reconstruction versus the MMD between the aggregated posterior and the prior for all the models in Figure 1d. Interestingly, TCWAEs actually achieved a lower MMD (left part of the plot) even if they are not being trained with that regularization function. However, as expected given that the TCWAEs do not optimize the reconstruction-MMD trade off, the WAE achieved a better reconstruction (bottom part of the plot).

Disentanglement performances We compare our methods with β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018) and the original WAE-MMD (Tolstikhin et al., 2018) on the dSprites (Matthey et al., 2017), NoisydSprites (Locatello et al., 2019), ScreamdSprites (Locatello et al., 2019) and smallNORB (LeCun et al., 2004) data sets, whose ground-truth generative factors are known and given in Table 3, Appendix B.1. We use three different disentanglement metrics to assess the disentanglement performances: the Mutual Information Gap (MIG, Chen et al. (2018)), the FactorVAE metric (Kim & Mnih, 2018) and the Separated Attribute Predictability score (SAP, Kumar et al. (2018)). We follow Locatello et al. (2019) for the implementation of these metrics. We use the Mean Square Error (MSE) of the reconstructions to assess the reconstruction performances of the methods. 
For each model, we use 6 different values for each parameter, resulting in thirty-six different models for TCWAEs, and six for the remaining methods (see Appendix B.1 for more details).

Mirroring the benchmark methods, we first tune γ in the TCWAEs, regularizing the dimension-wise KL, subsequently focusing on the role of the TC term in the disentanglement performances. The heat maps of the different scores for each method and data set are given in Figures 5, 6, 7 and 8 in Appendix C. As expected, while β controls the trade off between reconstruction and disentanglement, γ affects the range achievable when tuning β. Especially, for γ > 1, we can see in Figures 5, 6, 7 and 8 that better disentanglement is obtained without much deterioration in reconstruction.

Table 1 reports the results, averaged over 5 random runs, for the four different data sets. For each method, we report the best β, taken to be the one achieving an overall best ranking on the four different metrics (see Appendix ?? for details). Note that the performances of WAE on the dSprites data set, in terms of both reconstruction and disentanglement, were significantly worse and meaningless; thus, in order to avoid unfair extra tuning of the parameters, we chose not to include them. TCWAEs achieve competitive performances across all the data sets, with top scores in several metrics. Especially, the squared Euclidean distance seems to improve the trade off and perform better than the cross-entropy with color images (NoisydSprites, ScreamdSprites) but less so with black and white images (dSprites). See Appendix C for more results on the different data sets.

As a sanity check, we plot in Figure 2 the latent traversals of the different methods on the smallNORB data set. More specifically, we encode one observation, traverse the latent dimensions one at a time (rows) and reconstruct the resulting latent traversals (columns). 
Visually, all methods, with the exception of WAE, learn to disentangle, capturing four different factors in line with the ground-truth generative factors. Model reconstructions and samples for the different data sets are given in Appendix C.

(Figure 2 caption fragment:) KL( (1/Ntest) ∑_{test set} q(zi|x) ‖ p(zi) ), and the traversal range is [−2, 2].

Finally, we visualise the reconstruction-disentanglement trade off by plotting the different disentanglement metrics against the MSE in Figure 3. As expected, when the TC regularization weight is increased, the reconstruction deteriorates while the disentanglement improves, up to a certain point. Then, when too much penalization is put on the TC term, the poor quality of the reconstructions prevents any disentanglement in the generative factors. Reflecting the results of Table 1, TCWAE-MWS seems to perform better (the top-left corner represents better reconstruction and disentanglement). TCWAE-GAN presents better reconstruction but slightly lower disentanglement performances (bottom-left corner)." }, { "heading": "4.2 QUALITATIVE ANALYSIS: DISENTANGLEMENT ON REAL-WORLD DATA SETS", "text": "We train our methods on 3Dchairs (Aubry et al., 2014) and CelebA (Liu et al., 2015), whose generative factors are not known, and qualitatively find that TCWAEs achieve good disentanglement. Figure 4 shows the latent traversals of four different factors learned by the TCWAEs, while Figures 16 and 18 in Appendix D show the models' reconstructions and samples. Visually, TCWAEs manage to capture different generative factors while retaining good reconstructions and samples. This confirms our intuition that the flexibility offered in the construction of the reconstruction term, mainly the possibility to choose the reconstruction cost function and use deterministic decoders, improves the reconstruction-disentanglement trade off. 
In order to assess the quality of the reconstructions, we compute the MSE of the reconstructions and the FID scores (Heusel et al., 2017) of the reconstructions and samples. Results are reported in Table 2. TCWAEs indeed beat their VAE counterparts on both data sets. It is worth noting that, while the performances of FactorVAE in Table 2 seem good, the inspection of the reconstructions and samples in Appendix D shows that FactorVAE in fact struggles to generalize and to learn a smooth latent manifold." }, { "heading": "5 CONCLUSION", "text": "Leveraging the surgery of the KL regularization term of the ELBO objective, we design a new disentanglement method based on the WAE objective, whose latent divergence function is taken to be the KL divergence between the aggregated posterior and the prior. The WAE framework naturally enables the latent regularization to depend explicitly on the TC of the aggregated posterior, a quantity previously associated with disentanglement. Using two different estimators of the KL terms, we show that our methods achieve competitive disentanglement on toy data sets. Moreover, the flexibility in the choice of the reconstruction cost function offered by the WAE framework makes our method more compelling when working with more challenging data sets." }, { "heading": "A WAE DERIVATION", "text": "We recall the Kantorovich formulation of the OT between the true-but-unknown data distribution PD and the model distribution Pθ, with a given cost function c:

OTc(PD, Pθ) = inf_{Γ∈P(PD,Pθ)} ∫_{X×X} c(x, x̃) γ(x, x̃) dx dx̃ (12)

where P(PD, Pθ) is the space of all couplings of PD and Pθ:

P(PD, Pθ) = { Γ | ∫_X γ(x, x̃) dx̃ = pD(x), ∫_X γ(x, x̃) dx = pθ(x̃) } (13)

Tolstikhin et al. (2018) first restrict the space of couplings to the joint distributions of the form:

γ(x, x̃) = ∫_Z pθ(x̃|z) q(z|x) pD(x) dz (14)

where q(z|x), for x ∈ X, plays the same role as the variational distribution in variational inference. 
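For completeness, any coupling of the form of Eq. 14 satisfies the first marginal constraint of Eq. 13, since the decoder density integrates to one:

```latex
\int_{\mathcal{X}} \gamma(x,\tilde{x})\, d\tilde{x}
  = p_D(x) \int_{\mathcal{Z}} q(z\mid x)
      \Big( \int_{\mathcal{X}} p_\theta(\tilde{x}\mid z)\, d\tilde{x} \Big) dz
  = p_D(x) \int_{\mathcal{Z}} q(z\mid x)\, dz
  = p_D(x).
```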
While the marginal constraint on x (first constraint in Eq. 13) is satisfied in Eq. 14 by construction, the second marginal constraint (that over x̃, giving pθ, in Eq. 13) is not guaranteed. A sufficient condition is to have, for all z ∈ Z:

∫_X q(z|x) pD(x) dx = p(z) (15)

Secondly, Tolstikhin et al. (2018) relax the constraint in Eq. 15 using a soft constraint with a Lagrange multiplier:

Ŵc(PD, Pθ) = inf_{q(Z|X)} [ ∫_{X×X} c(x, x̃) γ(x, x̃) dx dx̃ + λ D( q(Z) ‖ p(Z) ) ] (16)

where D is any divergence function, λ a relaxation parameter, γ is defined in Eq. 14 and q(Z) is the aggregated posterior as defined in Section 2. Finally, they drop the closed-form minimization over the variational distribution q(z|x) to obtain the WAE objective, as defined in Section 3.1:

WD,c(θ, φ) ≜ E_{pD(x)} E_{qφ(z|x)} E_{pθ(x̃|z)} c(x, x̃) + λ D( q(Z) ‖ p(Z) ) ≈ E_{p(xn)} E_{qφ(z|xn)} E_{pθ(x̃n|z)} c(xn, x̃n) + λ D( q(Z) ‖ p(Z) ) (17)

B IMPLEMENTATION DETAILS

B.1 EXPERIMENTAL SETUP

We train and compare our methods on four different data sets, two with known ground-truth generative factors (see Table 3): dSprites (Matthey et al., 2017) with 737,280 binary, 64 × 64 images and smallNORB (LeCun et al., 2004) with 48,600 greyscale, 64 × 64 images; and two with unknown ground-truth generative factors: 3Dchairs (Aubry et al., 2014) with 86,366 RGB, 64 × 64 images and CelebA (Liu et al., 2015) with 202,599 RGB, 64 × 64 images.

We use a batch size of 64 in Section 4.2, while in the main experiments of Section 4.1, we take a batch size of 100. In the ablation study of Section 4.1, we use a bigger batch size of 256 in order to reduce the impact of the bias of the MWS estimator (Chen et al. (2018) however show that there is very little impact on the performance of the MWS when using smaller batch sizes). For all experiments, we use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0005, beta1 of 0.9, beta2 of 0.999 and epsilon of 0.0008, and train for 300,000 iterations. 
For all the data sets of Section 4.1, we take the latent dimension dZ = 10, while we use dZ = 16 for 3Dchairs and dZ = 32 for CelebA. We use Gaussian encoders with a diagonal covariance matrix in all the models, and deterministic decoder networks when possible (WAE-based methods). We follow Locatello et al. (2019) for the architectures in all the experiments except for CelebA, where we follow Tolstikhin et al. (2018) (details of the network architectures are given in Section B.2). We use a (positive) mixture of Inverse MultiQuadratic (IMQ) kernels and the associated reproducing kernel Hilbert space to compute the MMD when it is needed (WAE and ablation study of Section 4.1).

The different parameter values used for each experiment are given in Table 4. γ is chosen such that the resulting method achieves the best score s, when averaging over all the β values, where the score is defined as the sum of the rankings on each individual metric: s = rMSE + ∑_{metric} r_{metric}, where rMSE denotes the ranking of the MSE (lower is better) and r_{metric}, for metric in {MIG, FactorVAE, SAP}, is the ranking of the disentanglement performances as measured by the given metric (higher is better). β is then chosen such that the resulting method, with the previously found γ, achieves the best overall score s defined above. In Section 4.1, we use a validation run to select the parameter values and report the MSE and FID scores on a test run. MSEs are computed on a test set of size 10,000 with a batch size of 1,000, while we follow Heusel et al. (2017) for the FID implementation: we first compute the activation statistics of the feature maps on the full test set for both the reconstructions (respectively, samples) and the true observations. 
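The Fréchet distance between two Gaussians N(µ1, Σ1) and N(µ2, Σ2) has the closed form ‖µ1 − µ2‖² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}) (Heusel et al., 2017); a minimal numpy sketch (the symmetrized square root is a standard numerical device, not something specified in the paper):

```python
import numpy as np

def _sqrtm_psd(a):
    # matrix square root of a symmetric PSD matrix via eigendecomposition
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}); the trace of the
    # product root equals Tr((S2^{1/2} S1 S2^{1/2})^{1/2}), which is symmetric PSD
    s2_half = _sqrtm_psd(sigma2)
    tr_sqrt = np.trace(_sqrtm_psd(s2_half @ sigma1 @ s2_half))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt)
```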
We then compute the Fréchet distance between the two Gaussians with the computed statistics.

B.2 MODELS ARCHITECTURES

The Gaussian encoder network, qφ(z|x), and the decoder network, pθ(x|z), are parametrized by neural networks as follows:

pθ(x|z) = δ_{fθ(z)} if WAE-based method, and pθ(x|z) = N( µθ(z), σ²θ(z) ) otherwise;

qφ(z|x) = N( µφ(x), σ²φ(x) )

where fθ, µθ, σ²θ, µφ and σ²φ are the outputs of convolutional neural networks. All the experiments use the architectures of Locatello et al. (2019), except for CelebA where we use an architecture inspired by Tolstikhin et al. (2018). The details of the architectures are given in Table 5.

All the discriminator networks, D, are fully connected networks and share the same architecture, given in Table 5. The optimisation setup for the discriminator is given in Table 6." }, { "heading": "C QUANTITATIVE EXPERIMENTS", "text": "HYPER PARAMETER TUNING

DISENTANGLEMENT SCORES vs β

For each method, we plot the distribution (over five random runs) of the different metrics for different β values.

RECONSTRUCTIONS AND SAMPLES" }, { "heading": "D QUALITATIVE EXPERIMENTS", "text": "3DCHAIRS

CELEBA

(Figure 18 row labels: TCWAE-MWS, TCWAE-GAN; columns: Reconstructions, Samples.)

Figure 18: Same as Figure 16 for the CelebA data set." } ]
2020
null
SP:d9a70ada6ed2324c5b430e0a7a6785b1eb49d3ef
[ "The paper studies the problem of Post-Training Quantization of NNs, where no fine-tuning is performed to quantize the model. In particular, the authors focus on sub-8 bit quantization and propose a novel integer linear programming formulation to find the optimal bit width for a given model size. Additional approaches are proposed to minimize accuracy degradation after quantization. These include", "This paper proposed a set of methods for post-training quantization of dnns. The methods include AdaQuant (which jointly optimizes quantization steps for weight and activation per output activation of each layer), Integer Programming (which determines bit-precision for all the layers), and the batchnorm tuning. The authors presented promising experimental results on various neural networks to support the proposed methods. " ]
Lately, post-training quantization methods have gained considerable attention, as they are simple to use, and require only a small unlabeled calibration set. This small dataset cannot be used to fine-tune the model without significant over-fitting. Instead, these methods only use the calibration set to set the activations’ dynamic ranges. However, such methods always resulted in significant accuracy degradation, when used below 8-bits (except on small datasets). Here we aim to break the 8-bit barrier. To this end, we minimize the quantization errors of each layer separately by optimizing its parameters over the calibration set. We empirically demonstrate that this approach is: (1) much less susceptible to over-fitting than the standard fine-tuning approaches, and can be used even on a very small calibration set; and (2) more powerful than previous methods, which only set the activations’ dynamic ranges. Furthermore, we demonstrate how to optimally allocate the bit-widths for each layer, while constraining accuracy degradation or model compression by proposing a novel integer programming formulation. Finally, we suggest model global statistics tuning, to correct biases introduced during quantization. Together, these methods yield state-of-the-art results for both vision and text models. For instance, on ResNet50, we obtain less than 1% accuracy degradation — with 4-bit weights and activations in all layers, but the smallest two. Our code is publicly available at https://github.com/papers-submission/CalibTIP
[]
[ { "authors": [ "Yonathan Aflalo", "Asaf Noy", "Ming Lin", "Itamar Friedman", "Lihi Zelnik" ], "title": "Knapsack pruning with inner distillation", "venue": "arXiv preprint arXiv:2002.08258,", "year": 2020 }, { "authors": [ "Ron Banner", "Yury Nahshan", "Elad Hoffer", "Daniel Soudry" ], "title": "Aciq: Analytical clipping for integer quantization of neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Yaohui Cai", "Zhewei Yao", "Zhen Dong", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Zeroq: A novel zero shot quantization framework", "venue": "arXiv preprint arXiv:2001.00281,", "year": 2020 }, { "authors": [ "Yoni Choukroun", "Eli Kravchik", "Fan Yang", "Pavel Kisilev" ], "title": "Low-bit quantization of neural networks for efficient inference", "venue": "IEEE/CVF International Conference on Computer Vision Workshop (ICCVW),", "year": 2019 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Alexander Finkelstein", "Uri Almog", "Mark Grobman" ], "title": "Fighting quantization bias with bias", "venue": "arXiv preprint arXiv:1906.03193,", "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Matan Haroush", "Itay Hubara", "Elad Hoffer", "Daniel Soudry" ], "title": "The knowledge within: Methods for data-free model compression", "venue": "arXiv preprint arXiv:1912.01274,", "year": 2019 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized neural networks: Training neural networks with low precision weights and activations", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry" ], "title": "Kalenichenko. 
Quantization and training of neural networks for efficient integer-arithmetic-only inference", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jangho Kim", "Yash Bhalgat", "Jinwon Lee", "Chirag Patel", "Nojun Kwak" ], "title": "Qkd: Quantization-aware knowledge distillation", "venue": "arXiv preprint arXiv:1911.12491,", "year": 2019 }, { "authors": [ "Raghuraman Krishnamoorthi" ], "title": "Quantizing deep convolutional networks for efficient inference: A whitepaper", "venue": "arXiv preprint arXiv:1806.08342,", "year": 2018 }, { "authors": [ "Darryl Lin", "Sachin Talathi", "Sreekanth Annapureddy" ], "title": "Fixed point quantization of deep convolutional networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Eldad Meller", "Alexander Finkelstein", "Uri Almog", "Mark Grobman" ], "title": "Same, same but differentrecovering neural network quantization error through weight factorization", "venue": null, "year": 1902 }, { "authors": [ "Markus Nagel", "Mart van Baalen", "Tijmen Blankevoort", "Max Welling" ], "title": "Data-free quantization through weight equalization and bias correction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Markus Nagel", "Rana Ali Amjad", "Mart van Baalen", "Christos Louizos", "Tijmen Blankevoort" ], "title": "Up or down? 
adaptive rounding for post-training quantization", "venue": "arXiv preprint arXiv:2004.10568,", "year": 2020 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Xiao Sun", "Jungwook Choi", "Chia-Yu Chen", "Naigang Wang", "Swagath Venkataramani", "Vijayalakshmi Viji Srinivasan", "Xiaodong Cui", "Wei Zhang", "Kailash Gopalakrishnan" ], "title": "Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jeffrey P Van Doormaal", "George D Raithby" ], "title": "Enhancements of the simple method for predicting incompressible fluid flows", "venue": "Numerical heat transfer,", "year": 1984 }, { "authors": [ "Yonghui Wu", "Mike Schuster", "Zhifeng Chen", "Quoc V Le", "Mohammad Norouzi", "Wolfgang Macherey", "Maxim Krikun", "Yuan Cao", "Qin Gao", "Klaus Macherey" ], "title": "Google’s neural machine translation system: Bridging the gap between human and machine translation", "venue": "arXiv preprint arXiv:1609.08144,", "year": 2016 }, { "authors": [ "Ritchie Zhao", "Yuwei Hu", "Jordan Dotzel", "Chris De Sa", "Zhiru Zhang" ], "title": "Improving neural network quantization without retraining using outlier channel splitting", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "arXiv preprint arXiv:1702.03044,", 
"year": 2017 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "arXiv preprint arXiv:1606.06160,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The pursuit of advanced Deep Neural Networks (DNNs) causes researchers to construct deeper and wider networks, making them expensive to use in terms of power and time. This increases the need for efficient implementations of these networks. Efficient networks reduce cloud-vendor costs and make it possible to run them on low-power devices such as smartphones and wearable devices. The most common off-the-shelf approach to improving network efficiency is quantization, which reduces the numerical precision of the network and its complexity and memory footprint.

DNN quantization techniques can be classified as either post-training or quantization-aware training (QAT) techniques (Han et al., 2015; Courbariaux et al., 2015; Hubara et al., 2017; Zhou et al., 2016). Although QAT techniques, in general, achieve better results, there are important real-world scenarios in which they are not applicable. These are the cases where the training data is sensitive or simply unavailable at the time of deployment, for instance, when off-the-shelf or legacy models are being used, or when medical records are involved. Therefore, much attention has recently been dedicated to post-training quantization methods (Nagel et al., 2019; Banner et al., 2018; Zhao et al., 2019), which can be more easily applied in practice. These methods allow network quantization to happen seamlessly at deployment, without requiring additional information from the user except a small unlabeled calibration set.

Unfortunately, post-training quantization below 8 bits always incurs significant accuracy degradation, and in some cases even higher numerical precision is required. In this paper, our goal is to break this barrier by distilling all the information the pre-trained model and calibration set encode. Specifically, we aim to find an optimal quantization scheme for current state-of-the-art hardware, which usually supports 16-, 8-, and 4-bit data types with per-channel quantization of the weights. To that end, we suggest a three-stage
To that end, we suggest a three-stage\npipeline that consists of methods applied solely on a small calibration set to reduce the local error introduced during the quantization process (e.g., round-off errors) followed by integer programming to determine the bit-width of different layers so that the overall accuracy degradation is minimized. Even without using mixed-precision, the suggested method is much less prone to over-fitting than current methods and yields best in class results for 8-bits Mobilenet-V2 and BERT-base trained on ImageNet and SQuAD1.1 datasets, respectively. Our paper suggests several contributions for mixed-precision post-training quantization:\n1. AdaQuant: A layer-by-layer optimization method that minimizes the error between the quantized layer output and the full-precision layer output. This method can consume only a small calibration dataset from training data without overfitting. In a comprehensive study, we show that AdaQuant defines a new state-of-the-art for post-training quantization on several networks and tasks, including vision models (Resnet18, Resnet50, MobilenetV2) and language (BERT).\n2. Integer programming: As some parts of the network may allow lower precision compared to other layers, we suggest an integer-linear programming based approach for determining the precision level of different layers. This method aims at maximizing either the expected speedup or savings in power consumption without violating a predefined constraint on network accuracy degradation or compression.\n3. Batch-norm tuning: Following quantization we observe an inherent bias in the mean and the variance of batch norm statistics. We show that by employing the re-estimated statistics in batch normalization, much of the quantized network degradation can be recovered.\n4. 
Light and Advanced pipelines: We analyze the advantages and disadvantages of each of the given methods and suggest two pipelines: (1) a light pipeline that does not require a backward pass and can thus be invoked even on inference-only hardware; and (2) an advanced pipeline that also includes AdaQuant and bias tuning." }, { "heading": "2 RELATED WORK", "text": "There has been a significant effort to accelerate inference via quantization (Courbariaux et al., 2015; Han et al., 2015; Rastegari et al., 2016; Zhou et al., 2017). These works involve re-training in order to compensate for the degradation due to the quantization process. Post-training quantization, on the other hand, is applied to a model after it was trained. Thus, it avoids re-training and as such is much simpler to use. However, naively quantizing a full-precision model to INT4 or lower to accelerate computation usually incurs significant accuracy degradation (Krishnamoorthi, 2018; Jacob et al., 2018).

AdaQuant: A recent post-training quantization method (Nagel et al., 2020), termed AdaRound, suggested optimizing the rounding policy. Instead of using the predominant rounding-to-nearest approach, they suggest formulating a per-layer quadratic optimization problem to optimize the round-off error. Our proposed method, AdaQuant, takes another step and relaxes AdaRound's implicit constraint, which forces the quantized weights to be within ±1 of their round-to-nearest value. This is done by optimizing the weights and quantization parameters of each layer separately, over the calibration set, to minimize the MSE between the layer's original and quantized outputs. As opposed to AdaRound, we apply AdaQuant to find an optimal quantization not only for the weights but also for the activations. In addition, we suggest two flavors of AdaQuant: (1) parallel-AdaQuant, suited for the mixed-precision setting; and (2) sequential-AdaQuant, suited for a fixed configuration.

Integer programming: Early work by Lin et al. 
(2016) used a convex optimization formulation which results in a simple greedy compression scheme. Aflalo et al. (2020) used a combinatorial optimization approach for network pruning. Their problem was formulated as a Knapsack problem that optimizes the trade-off between the channels' importance and their associated computational cost. Cai et al. (2020) find a mixed-precision configuration with a guaranteed Pareto-efficient allocation with respect to model size and accuracy degradation. While this provides a "best-effort" standard (e.g., the configuration cannot be further compressed without hurting accuracy), it does not suggest which of all possible outcomes is best. To the best of our knowledge, this work is the first to formalize a generic integer program, which can easily be adapted to various types of models and requirements with a clear objective and constraints.

Batch norm tuning: Finkelstein et al. (2019) were the first to recognize that a significant source of degradation is a shift in the mean activation value. They show a simple method to compensate for this bias by updating the bias terms. Nagel et al. (2019) suggest equalizing the weight ranges in the network and correcting biases in the error that are introduced during quantization. Recently, Sun et al. (2019) suggested batch-norm tuning for FP8 models. Here we detail how to perform this procedure on a per-channel quantized (PCQ) model with fused batch-norm layers. The procedure is light, as it only requires invoking the quantized model a few times (on the calibration set) and adjusting the quantization parameters. Moreover, after retuning, the BN layers can be reabsorbed, which reduces the inference complexity. To the best of our knowledge, we are the first to suggest it." }, { "heading": "3 OPTIMIZING THE QUANTIZATION PIPELINE", "text": "In most post-training quantization settings, a model and a small unlabeled calibration set are given. To avoid overfitting the calibration set, most studies utilize it only to extract the network's internal statistics, which are later used to set the quantization parameters. 
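The integer-programming idea discussed above can be illustrated with a toy brute-force search (the paper solves a proper integer program; the per-layer costs and the additive degradation numbers below are made up purely for this illustration):

```python
import itertools

# Hypothetical per-layer measurements: bit-width -> (relative compute cost,
# accuracy degradation in % that quantizing this layer adds, assumed additive).
layers = {
    "conv1": {8: (1.0, 0.0), 4: (0.5, 0.1)},
    "conv2": {8: (2.0, 0.0), 4: (1.0, 0.6)},
    "fc":    {8: (0.5, 0.0), 4: (0.25, 0.05)},
}

def best_allocation(layers, max_degradation):
    """Minimize total cost subject to a bound on total accuracy degradation."""
    names = list(layers)
    best = None
    for bits in itertools.product(*(layers[n] for n in names)):  # all bit choices
        cost = sum(layers[n][b][0] for n, b in zip(names, bits))
        deg = sum(layers[n][b][1] for n, b in zip(names, bits))
        if deg <= max_degradation and (best is None or cost < best[0]):
            best = (cost, dict(zip(names, bits)))
    return best

cost, alloc = best_allocation(layers, max_degradation=0.2)
# conv2 is too sensitive to drop to 4 bits under this budget; the rest can
```

Brute force is exponential in the number of layers; this is exactly why the paper casts the bit allocation as an integer program.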
To avoid overfitting the calibration set, most studies utilize it only to extract the network's internal statistics, which are later used to set the quantization parameters.
Here we suggest using the calibration set much more extensively to tune the model while avoiding over-fitting the data. In the following subsections, we detail three different optimization methods over the calibration set: (1) AdaQuant, a layer-wise optimization of weights and quantization parameters; (2) an integer programming formulation for a mixed-precision setting; and (3) Batch Normalization Tuning (BNT), for tuning the model's internal statistics to match the numerical precision setting. We discuss the strengths and weaknesses of each method and suggest an optimization flow that exploits all the additive merits and leads to state-of-the-art results." }, { "heading": "3.1 ADAQUANT - LAYERWISE OPTIMIZATION OVER THE CALIBRATION SET", "text": "Several researchers suggested per-tensor optimization to reduce the quantization error by minimizing some form of MSE objective between the quantized and the full-precision tensor X (either weights or activations). They look for an optimized quantization step size $\hat{\Delta}$ obtained by

$$\hat{\Delta} = \arg\min_{\Delta} \|X - Q_{\Delta}(X)\|^2; \quad Q_{\Delta}(X) = \Delta \cdot \left\lfloor \frac{X}{\Delta} \right\rceil, \qquad (1)$$

where Q(·) is the quantization function. Although these methods are fast and easy to use, they often result in an inferior solution: the loss in Eq. 1 is sub-optimal, as it penalizes all quantization errors equally. However, the loss should penalize more heavily those quantization errors which affect the classification. Accordingly, researchers suggested Quantization-Aware Training (QAT) methods to fix this error by training the entire model at once. However, those methods have three limitations: (a) they require a large training set to avoid over-fitting, (b) they approximate the back-propagation gradients through a discrete function (the quantizer), and (c) they have high computational and memory footprints.
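For illustration, the per-tensor search in Eq. (1) can be sketched as a simple grid search over the step size. This is a hypothetical numpy sketch, not the authors' implementation; the 4-bit setting, the grid resolution, and the tensor shape are assumptions:

```python
import numpy as np

def quantize(x, delta, n_bits=4):
    # Uniform quantizer Q_delta(x) = delta * round(x / delta),
    # with the integer code clipped to the signed n-bit range.
    q_max = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(x / delta), -q_max - 1, q_max)
    return delta * q

def search_step_size(x, n_bits=4, n_grid=100):
    # Grid search for the step size minimizing ||x - Q_delta(x)||^2 (Eq. 1).
    max_delta = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    best_delta, best_mse = max_delta, np.inf
    for delta in np.linspace(max_delta / n_grid, max_delta, n_grid):
        mse = np.mean((x - quantize(x, delta, n_bits)) ** 2)
        if mse < best_mse:
            best_delta, best_mse = delta, mse
    return best_delta, best_mse

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
delta, mse = search_step_size(x, n_bits=4)
# The naive max-range step size is one of the grid points, so the
# searched step size can never be worse than it.
naive_mse = np.mean((x - quantize(x, np.abs(x).max() / 7, 4)) ** 2)
print(mse <= naive_mse)
```

For a Gaussian tensor, clipping the range (a smaller delta) typically trades a larger rounding error for a smaller clipping error, which is exactly the trade-off this search resolves.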
We suggest a modified objective for per-layer joint optimization of the weights and quantization parameters:

$$\left(\hat{\Delta}_w, \hat{\Delta}_x, \hat{V}\right) = \arg\min_{\Delta_w, \Delta_x, V} \|WX - Q_{\Delta_w}(W + V) \cdot Q_{\Delta_x}(X)\|^2, \qquad (2)$$

where V is a continuous variable added to W, and the quantized network weights are defined as $W_q = Q_{\hat{\Delta}_w}(W + \hat{V})$. In this new objective the quantized tensor is not required to be “close” to the original tensor, as in Eq. 1, thus benefiting from the flexibility that Quantization-Aware Training methods have. Yet, it can be executed in parallel over all layers and is much less prone to over-fitting. Moreover, under a fixed configuration we can optimize the model globally and infer the error between layers. Thus, instead of running AdaQuant on all layers in parallel, we can run it sequentially and correct the error induced by quantizing the former layers. Eq. 2 then changes to:

$$\left(\hat{\Delta}_{w_l}, \hat{\Delta}_{x_l}, \hat{V}_l\right) = \arg\min_{\Delta_{w_l}, \Delta_{x_l}, V_l} \|W_l X_l - Q_{\Delta_{w_l}}(W_l + V_l) \cdot Q_{\Delta_{x_l}}(X^q_l)\|^2, \qquad (3)$$

$$X^q_l = \sigma\left(Q_{\Delta_{w_{l-1}}}(W_{l-1} + V_{l-1}) \cdot Q_{\Delta_{x_{l-1}}}(X^q_{l-1})\right), \qquad (4)$$

where σ(·) is some activation function. Note that sequential AdaQuant should not be applied before the bit allocation is set, as it optimizes over noisy inputs obtained from the predecessor quantized layers. We evaluate both flavors of AdaQuant (named AdaQuant and sequential AdaQuant) and detail our findings in Section 5.1. We note that AdaQuant also optimizes over biases and offsets, and optimizes fused conv-bn-relu layers when present; these were removed from the formulation in Equation 2 for simplicity.
Size of calibration set: Perhaps surprisingly, although we experiment with a very small calibration set, no over-fitting is observed. Let us examine a simple fully connected layer $W \in \mathbb{R}^{M \times N}$. The input and output are of sizes N and M, respectively. For each output we have B equations and N separate parameters (i.e., with no overlap in parameters between different outputs). Therefore, if B < N we generically have an infinite number of solutions and we can overfit the data.
If B ≥ N, then we might underfit the data. Thus, the size of the calibration set required for AdaQuant should roughly be O(N). A similar derivation for convolution layers reveals that the calibration set should have $B \geq \frac{C_i \cdot k^2}{HW}$ samples to avoid over-fitting, where B is the number of unique samples, k is the convolution's kernel size, $C_i$ and $C_o$ are the numbers of input and output channels, respectively, and H, W represent the output height and width. In Fig. 1 we compare AdaQuant to current state-of-the-art methods, including QAT with knowledge distillation (QAT-KLD) (Kim et al., 2019) and AdaRound (Nagel et al., 2020). For each method, we measured the top-1 accuracy with respect to the number of samples in the calibration set over five runs and present the mean and standard deviation. As can be seen, AdaQuant is superior to previous methods and specifically excels on small calibration sets. Remarkably, AdaQuant does not overfit even when optimized on a single image. Additional details can be found in Sections A and D of the Appendix." }, { "heading": "3.2 PER-LAYER BIT ALLOCATIONS WITH INTEGER PROGRAMMING", "text": "AdaQuant significantly enhances network accuracy at lower bit widths. However, it is often not sufficient by itself to attain acceptable accuracy. Therefore, in practical use cases, the user would like to balance accuracy and performance (e.g., power and speed) by setting several layers to higher precision. Our high-level goal in this section is to optimize the overall network performance while maintaining a predefined accuracy degradation or a model compression constraint.
In the following, we provide an integer-programming (IP) formulation for optimizing per-layer bit allocations. Depending on the needs, our performance metric P would be either the execution time of the network or its power consumption. Also, with every layer quantization, there is an associated quantization error that affects the training loss L.
We chose the latter to be our penalty metric. Integer programming is applied in those situations where a given problem can clearly be represented in the form of a linear relationship between different decision variables. Unlike other previous works on compression, it attains a global optimum. For example, Lin et al. (2016) suggested a convex optimization problem, but the constraints and the objective are not linear. This typically has a drastic impact on convergence time and the quality of the results, since the Simplex method can no longer be applied (Van Doormaal & Raithby, 1984).
Basic formulation: We are given a neural network with L layers. For each layer l, we have weights $W_l$ that need to be multiplied with the activations of the previous layer $X_{l-1}$. Such lower-bit-width multiplications can be executed by quantizing the weights and activations to achieve higher throughput and energy-efficient solutions. Let $W^k_l$ and $X^n_{l-1}$ represent quantized versions of $W_l$ and $X_{l-1}$ to k and n bits, respectively. For each layer l, a low-bit-width multiplication $W^k_l \cdot X^n_{l-1}$ results in a loss degradation $\Delta L^{k,n}_l$ and in a performance improvement $\Delta P^{k,n}_l$ with respect to the original product $W_l \cdot X_{l-1}$. This performance improvement measure needs to be additive and sum up to a total benefit in end-to-end network performance (e.g., power, model size, etc.). Our goal is to maximize the total performance improvement without exceeding the total network degradation ∆L.
We now turn to solve the above problem using an integer program. We define a binary variable $I^{k,n}_l$, which is set to one if and only if the weights $W^k_l$ are multiplied with the activations $X^n_{l-1}$ at layer l; otherwise we set the indicator to zero, i.e., $I^{k,n}_l = 0$.
Then, the basic bit allocation problem can be formulated as follows:

$$\text{Maximize} \quad \sum_{l=0}^{L-1} \Delta P_l \qquad (5a)$$

$$\text{Subject to} \quad \sum_{l} \Delta L_l \leq \Delta L, \qquad (5b)$$

$$\forall l \in \{1, \ldots, L\}: \quad \Delta P_l = \sum_{k,n} I^{k,n}_l \cdot \Delta P^{k,n}_l, \quad \Delta L_l = \sum_{k,n} I^{k,n}_l \cdot \Delta L^{k,n}_l \qquad (5c)$$

$$\forall l \in \{1, \ldots, L\}: \quad \sum_{k,n} I^{k,n}_l = 1, \quad I^{k,n}_l \in \{0, 1\} \qquad (5d)$$

The objective function (5a) maximizes the total performance improvement. The constraints in (5c) ensure that the total degradation in loss and the total improvement in performance due to the quantization of layer l to k-bit weights and n-bit activations are $\Delta L_l$ and $\Delta P_l$, respectively. Constraint (5b) states that the restriction on the total degradation ∆L is obeyed, and (5d) ensures that only one configuration (of quantized weights and activations) per layer is selected." }, { "heading": "3.3 BATCH NORMALIZATION TUNING", "text": "A common practice is fusing BN layers into their predecessor weight layers before applying post-training quantization to reduce the amount of Multiply-Accumulate (MAC) operations. However, the reduction in bit-width after quantization can cause the model's internal statistics to deviate further from those of the full-precision model. To compensate for this deviation, we suggest updating the BN statistics. First, we reconstruct the BN layers, then we re-tune the BN layers' statistics (by a few iterations of running-mean updates to re-collect the statistics). Finally, we re-absorb (re-fuse) the BN layers into the weight layers (this is possible only in a per-channel weight quantization setting, which is the current standard). Next, we give more details on each phase.
Reconstructing BN layers: Assume the original (pre-fusing) BN parameters $\gamma_o$, $\beta_o$ and $\epsilon$ are known, as is usually the case. We would like to initialize µ, σ², as well as the BN parameters $\gamma_r$ and $\beta_r$ (r for “reconstructed”), so that the reconstructed BN

$$BN_r(x) = \gamma_r \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta_r \approx x \qquad (6)$$

will re-adjust the model statistics.
To do so, we first initialize the reconstructed BN layers by setting the following parameters (denoted by r):

$$\mu = \beta_r = \beta_o; \quad \sigma^2 = \gamma_o^2; \quad \gamma_r = \sqrt{\gamma_o^2 + \epsilon} \qquad (7)$$

so that $BN_r(x) = x$. Then, we update µ and σ² by collecting the running mean and running variance on the calibration data. We stress that the BN parameters $\gamma_r$, $\beta_r$ do not change while applying BN tuning, as we only invoke forward propagation.
Re-fusing BN layers: Due to the per-channel quantization setting we use, the collected statistics can be fused back into the current quantization scale as follows:

$$W'_i = W_i \frac{\gamma_r}{\sigma}; \quad b'_i = \frac{\gamma_r}{\sigma}(b_i - \mu) + \beta_r; \quad \Delta'_{w_i} = \frac{\gamma_r}{\sigma} \Delta_{w_i} \qquad (8)$$

Thus, in addition to the regular BN fusion, the quantization step is adjusted by $\gamma_r \sigma^{-1}$. Additional details are given in Section B of the Appendix.
Bias tuning: Much like Finkelstein et al. (2019), we suggest applying a global bias-tuning procedure on the final mixed-precision model by applying quantization-aware training to minimize a Knowledge Distillation (KD) loss (which does not require labels). Since we restrict the trainable variables to be the biases only, we can train on the calibration set alone without experiencing overfitting." }, { "heading": "4 QUANTIZATION FLOW", "text": "Past years have seen the rapid development of efficient deployment techniques (Nagel et al., 2019; Haroush et al., 2019). Deployment flows can vary based on the user setting, such as hardware constraints, deployment time, and task/dataset availability. While some users are willing to invest time and effort at initialization to gain another fraction of accuracy, others require a simple and fast solution. We address this by suggesting two novel pipelines, light and advanced.
Our pipelines are designed for the current, most common setting: per-channel quantization with a small calibration set.
Our light pipeline requires three steps: (1) fuse layers and define quantization parameters; (2) find an optimal mixed-precision configuration using IP; and (3) use BN tuning to correct the internal statistics. We note that none of these steps require back-propagation, and thus they are very light and fast. In addition to the light setting, in the advanced pipeline we apply AdaQuant to reduce each layer's output distortion from its full-precision counterpart before invoking the IP algorithm. A detailed comparison between the two pipelines is given in Table 1. Models that were optimized using AdaQuant to different bit-widths can be seamlessly stitched, giving the ability to create an optimized model in a mixed-precision setting. Subsequently, global methods such as tuning both the BN statistics and the layers' biases can be applied to reduce a Knowledge Distillation loss. Although there are additional post-training quantization techniques that could potentially be combined with our methods, such as bias correction (Banner et al., 2018), equalization (Meller et al., 2019), and outlier channel splitting (Zhao et al., 2019), we did not find it necessary: our results demonstrate that our relatively simple pipeline yields state-of-the-art accuracy on both vision and text models, even without combining such methods. In the following sections we show our findings and give an ablation study that highlights the importance of each method and their combination." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we demonstrate our methods and pipelines on several models and datasets. We first start by analyzing image recognition models such as ResNet18/50 and MobileNet-V2, which were trained on the ImageNet dataset.
Next, we demonstrate our method's robustness by applying it to a question answering task using the popular BERT model (Devlin et al., 2018), which was fine-tuned on the SQuAD1.1 dataset (Rajpurkar et al., 2016). In all our experiments, we used a small calibration set taken from the training dataset. Unless stated otherwise, we applied asymmetric per-channel quantization (i.e., GEMMLOWP, Wu et al. (2016)) with a quantized offset (i.e., zero point). Next, we analyze each method's strengths and weaknesses separately and argue for its validity. Additional implementation details and the code are given in Sections D and E of the Appendix." }, { "heading": "5.1 ADAQUANT", "text": "Recently, several researchers suggested different types of MSE optimization. In most cases, the optimization was done per-tensor (i.e., for the weights and activations separately). Here we argue that by optimizing both the quantization parameters and the weights jointly, we can reduce the MSE even further and hence improve the accuracy, as demonstrated in Fig. 2b. In contrast to AdaRound (Nagel et al., 2020), which restricted the change of the weights to be within ±1, we allow the weights to change as needed. As can be seen in Fig. 2a, the weights indeed change their quantized value by more than one. Since our pipeline is focused on the mixed-precision setting, we optimize each layer separately to enable maximum flexibility when stitching the optimized models. Under that setting, AdaQuant can be performed in parallel across all layers. However, since most recent papers do not show full compression-accuracy curves and only a few attempt 4-bit compression, we also compare our results to common fixed configurations using our sequential-AdaQuant flavor. While sequential AdaQuant cannot be parallelized or used for the mixed-precision setting, it yields best-in-class results for all models tested, as can be seen in Tables 2 and 3.
For instance, on the extensively studied 8-bit MobileNet-V2 topology we achieved 71.6% top-1 accuracy, less than 0.5% degradation compared to the full-precision counterpart (71.9%).
Testing the strength of this method on both vision and text topologies resulted in state-of-the-art results. As can be seen in Table 3, on the BERT-base model over the SQuAD1.1 dataset (BERT-Base-SQuAD1.1) we managed to obtain an 88.45% F1 score using just AdaQuant, less than 0.5% from its full-precision counterpart (81.3%). Throughout our experiments, we avoided using any augmentation technique and followed the standard validation-set preprocessing." }, { "heading": "5.2 INTEGER PROGRAMMING", "text": "Our integer programming formulation requires two quantities per layer: (1) the loss degradation and (2) the performance improvement. Obtaining those quantities requires invoking the model over a small calibration set L times (once per layer) and measuring the loss degradation and the performance gain. In our experiments, we set the performance value to be the number of parameters, but this measure could be changed to any additive measure. In all experiments, we used 1000 samples from the training set as our calibration set. Our setting considers only a mixture of 8-bit and 4-bit layers; to further test IP capabilities, we investigated a mixture of 2-4-8 bits as well. Unfortunately, since 2-bit quantization in the post-training setting results in high degradation, the IP algorithm chose only a mixture of 4-8 bits for compression ratios higher than 12.5%. Yet, for a 12.5% compression ratio, the IP method found that by setting one layer to 2 bits while setting 8 smaller layers to 8 bits, accuracy gains of over 5.5% are obtained with respect to uniform 4-bit quantization. Also, allowing a less hardware-friendly setting, where the numerical precision can be any integer between 2 and 8, yields the highest compression-accuracy ratio (Fig. 3 - relaxed advanced pipeline)."
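To make the bit-allocation program of Eq. (5) concrete, the following toy sketch solves the same maximize-gain-under-a-loss-budget problem by exhaustive search over per-layer (weight, activation) bit configurations. The layer sizes and degradation values are invented for illustration, and a real deployment would use an ILP solver rather than enumeration:

```python
import itertools

# Hypothetical per-layer tables: candidate (w_bits, a_bits) configurations,
# their loss degradation dL, and their performance gain dP
# (here dP = weight bits saved relative to an 8/8 baseline).
configs = [(4, 4), (4, 8), (8, 8)]
params = [100, 400, 200]                       # parameters per layer
dL = [                                         # made-up loss degradations
    {(4, 4): 0.30, (4, 8): 0.10, (8, 8): 0.0},
    {(4, 4): 0.05, (4, 8): 0.02, (8, 8): 0.0},
    {(4, 4): 0.40, (4, 8): 0.20, (8, 8): 0.0},
]
dP = [{(k, n): p * (8 - k) for (k, n) in configs} for p in params]

def allocate(budget):
    # Maximize sum dP subject to sum dL <= budget,
    # with exactly one configuration per layer (Eq. 5).
    best, best_gain = None, -1.0
    for choice in itertools.product(configs, repeat=len(params)):
        loss = sum(dL[l][c] for l, c in enumerate(choice))
        gain = sum(dP[l][c] for l, c in enumerate(choice))
        if loss <= budget and gain > best_gain:
            best, best_gain = choice, gain
    return best, best_gain

choice, gain = allocate(budget=0.25)
print(choice, gain)
```

Note how the search prefers quantizing the large, robust middle layer and leaves sensitive layers at high precision, which is the behavior the IP exploits at the network scale.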
}, { "heading": "5.3 BATCH-NORM TUNING", "text": "Batch-Norm Tuning (BNT) has a significant advantage, as it does not require weight optimization. Since BNT is applied by invoking the entire model, we must apply it only after setting the mixedprecision bit-width configuration. This is the case for all global optimization methods including bias-tuning. Notably, BNT requires only a few (at most 10) forward passes over the calibration set and yield significant gains (fig. 3). In this study, we applied BNT on models trained with BN layers only. However, it might be possible to extend this method to models without BN layers by reconstructing it from the statistics. We encourage the reader to investigate this path." }, { "heading": "5.4 FULL PIPELINE AND ABLATION STUDY", "text": "Although several researchers suggested different methods for post-training mixed-precision quantization, none offer their code. Each paper focuses on a different quantization setting (e.g., quantizing only the weights, per-tensor quantization, etc.). Therefore, to demonstrate our pipeline strength, we created two different baselines based on common practices:\n• Greedy-accuracy: recent studies suggested measuring each layer sensitivity and, based on the compression target, reduce the precision for the most robust layers.\n• Greedy-compression: the complementary greedy approach (Lin et al., 2016) to sort the layers by their number of parameters and increase the precision of the layers from the smallest to the largest layer until the compression budget is reached.\nSurprisingly, although the size of the layer should correlate with its sensitivity to quantization, the two greedy methods yield entirely different configurations. Investigating the configuration greedycompression found that sorting by compression correlates with the location of the layers in the model. In most vision models, the layers closer to the input have fewer parameters. 
This aligns with current common practice (Banner et al., 2018). Notably, even when not combined with any other technique, the IP method obtained the best bit-width configurations, stressing its importance.
Next, we turn to consider the light and advanced pipelines. Under challenging compression rates, our light-pipeline results highlight the importance of BN tuning. As can be seen in our experiment in Fig. 3, by merely invoking the model in inference mode for a few iterations and fixing the intermediate statistics, one can recover more than 1.5% of the accuracy (73.7% vs. 75.37%). As expected, by applying the advanced pipeline, one can obtain state-of-the-art accuracy. Arguably, our most impressive results are at a 0.13% compression rate, at which we managed to stay within 1% of the full-precision accuracy while converting 96% of the model to 4-bit. For the challenging MobileNet-V2, we managed to switch 25% of the layers to 4-bit (weights and activations) while maintaining less than 2% degradation. Additionally, we achieved, for the first time, a reasonable top-1 accuracy of 65% when almost the entire model is in 4-bit." }, { "heading": "A SIZE OF CALIBRATION SET", "text": "Fully connected layers: Let us assume that we have weights of size $W \in \mathbb{R}^{M \times N}$, and the input and output are of sizes N and M, respectively. Recalling Eq. 2 and setting Y = WX and W' = W + V results in:

$$\left(\hat{\Delta}_{w'}, \hat{\Delta}_x, \hat{V}\right) = \arg\min_{\Delta_{w'}, \Delta_x, V} \|Y - Q_{\Delta_{w'}}(W') \cdot Q_{\Delta_x}(X)\|^2.$$

For simplicity we assume that $\Delta_x$ is fixed and define $X_q = Q_{\Delta_x}(X)$, $W_q = Q_{\Delta_{w'}}(W')$. Therefore, if we have B unique samples, then the problem we aim to solve has the following structure:

$$\begin{pmatrix} w_{11} & \cdots & w_{1N} \\ \vdots & \ddots & \vdots \\ w_{M1} & \cdots & w_{MN} \end{pmatrix} \begin{pmatrix} x_{11} & \cdots & x_{1B} \\ \vdots & \ddots & \vdots \\ x_{N1} & \cdots & x_{NB} \end{pmatrix} = \begin{pmatrix} y_{11} & \cdots & y_{1B} \\ \vdots & \ddots & \vdots \\ y_{M1} & \cdots & y_{MB} \end{pmatrix}$$

which translates to the scalar equations $\sum_{n=1}^{N} w_{mn} x_{nb} = y_{mb}$ for $m = 1, \ldots, M$ and $b = 1, \ldots, B$. Notice that in the above equations, for each output we have a different set of parameters; therefore we can examine each output separately. For a single output we are in scalar linear regression with N parameters and B equations. If B ≥ N we are under-parameterized, and if B < N we are over-parameterized.
Convolution layers: Similarly, for convolution layers with $C_o$ output channels, $C_i$ input channels, and kernel size k, each element of the output is a dot product of $C_i \cdot k \cdot k$ parameters. We have in total $C_o \times H \times W$ outputs, where H, W are the output height and width. Thus we need $B \geq \frac{C_i \cdot k^2}{HW}$ samples to avoid over-fitting, where B is the number of unique samples." }, { "heading": "B RECONSTRUCTION AND RE-FUSING OF BATCH NORMALIZATION", "text": "In this section, we provide more details on the Batch Normalization reconstruction and re-fusing procedure.
Reconstructing BN layers: Consider a Batch Normalization layer with parameters $\gamma_o$, $\beta_o$ that was fused into the previous convolutional layer's weight and bias. Fusing a batch normalization layer transforms the weights and bias as follows:

$$W'_i = W_i \frac{\gamma_o}{\sigma}; \quad b'_i = \frac{\gamma_o}{\sigma}(b_i - \mu) + \beta_o \qquad (9)$$

To reconstruct the batch normalization, we would like to initialize µ, σ², as well as the BN parameters $\gamma_r$ and $\beta_r$ (r for “reconstructed”), so that the reconstructed BN is approximately the identity (Fig. 4):

$$BN_r(x) = \gamma_r \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta_r \approx x \qquad (10)$$

To do so, we first initialize the reconstructed BN layers by setting the following parameters (denoted by r):

$$\mu = \beta_r = \beta_o; \quad \sigma^2 = \gamma_o^2; \quad \gamma_r = \sqrt{\gamma_o^2 + \epsilon} \qquad (11)$$

so that $BN_r(x) = x$.
Now, we can update µ and σ² by collecting the running mean and running variance on the calibration data. We stress that the BN parameters $\gamma_r$, $\beta_r$ do not change while applying BN tuning, as we only invoke forward propagation.
Re-fusing BN layers: After the BNT phase we need to fuse the Batch Normalization layer again into the convolution's weights and bias. Regular batch normalization fusing would cause degradation due to the quantization of the weights.
To resolve this issue, we can leverage the per-channel quantization setting we use.
Denote by $s_{w_i}$, $z_{w_i}$ the scale and zero point of the weights; the quant/dequant operation is defined as:

$$W_q = s_{w_i}\left(\left\lfloor \frac{W}{s_{w_i}} - \left\lfloor \frac{z_{w_i}}{s_{w_i}} \right\rceil \right\rceil + \left\lfloor \frac{z_{w_i}}{s_{w_i}} \right\rceil\right) \qquad (12)$$

We can fuse the parameters of the batch normalization layer as follows:

$$W'_i = W_i \frac{\gamma_r}{\sigma_x}; \quad b'_i = \frac{\gamma_r}{\sigma_r}(b_i - \mu_x) + \beta_r; \quad s'_{w_i} = \frac{\gamma_r}{\sigma_x} s_{w_i}; \quad z'_{w_i} = \frac{\gamma_r}{\sigma_x} z_{w_i} \qquad (13)$$

Finally, we can show that the transformations in Eq. (13) are equivalent to $\frac{\gamma_r}{\sigma_r} W_q$:

$$W'_q = s'_{w_i}\left(\left\lfloor \frac{W'}{s'_{w_i}} - \left\lfloor \frac{z'_{w_i}}{s'_{w_i}} \right\rceil \right\rceil + \left\lfloor \frac{z'_{w_i}}{s'_{w_i}} \right\rceil\right) = \frac{\gamma_r}{\sigma_r} s_{w_i}\left(\left\lfloor \frac{W}{s_{w_i}} - \left\lfloor \frac{z_{w_i}}{s_{w_i}} \right\rceil \right\rceil + \left\lfloor \frac{z_{w_i}}{s_{w_i}} \right\rceil\right) = \frac{\gamma_r}{\sigma_r} W_q \qquad (14)$$
" }, { "heading": "C ADDITIVE LOSS ASSUMPTION FOR INTEGER-PROGRAMMING", "text": "Suppose the loss function of the network L depends on a certain set of variables (weights, activations, etc.), which we denote by a vector v. We would like to measure the effect of adding quantization noise to this set of variables.
Since the quantization is emulated with additive noise, the loss is smooth and thus can be expanded into a Taylor series:

$$\Delta L = L(v + \varepsilon) - L(v) \qquad (15)$$

$$= \frac{\partial L^T}{\partial v}\varepsilon + \varepsilon^T \frac{\partial^2 L}{\partial^2 v}\varepsilon + O\left(\|\varepsilon\|^3\right). \qquad (16)$$

One can see from Eq. 16 that when the quantization error ε is sufficiently small, the overall degradation ∆L can be approximated as a sum of N independent degradation processes by neglecting the quadratic terms in ε:

$$\Delta L \approx \frac{\partial L^T}{\partial v}\varepsilon = \sum_i^{n} \frac{\partial L}{\partial v_i} \cdot \varepsilon_i \qquad (17)$$

We note that Lin et al. (2016) and Choukroun et al. (2019) used a similar assumption with respect to the additivity of quantization noise." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "In all our experiments, we used a small subset of the training set to run our methods. Specifically, for vision models, we used 1000 unlabeled images from the ImageNet training set (a single image for each class) as the calibration set. For the BERT model, we used one paragraph from the training set. All presented methods (AdaQuant, BNT, BT, and IP) performed well on such a small calibration set, producing SOTA results.
Next, we detail our settings for each of the techniques in our pipelines.
D.1 ADAQUANT
The AdaQuant optimization problem is defined as follows (the zero point of the quantizer is omitted from Eq. (18)):

$$\left(\hat{\Delta}_w, \hat{\Delta}_x, \hat{V}_W, \hat{V}_b\right) = \arg\min_{\Delta_w, \Delta_x, V_W, V_b} \|WX + b - Q_{\Delta_w}(W + V_W) \cdot Q_{\Delta_x}(X) - Q(b + V_b)\|^2 \qquad (18)$$

Technically, to find a solution for Eq. (18), we use the Adam optimizer with different learning rates per type of parameter. We set different learning rates for the weights, bias, and quantization parameters of the inputs and weights. After experimenting with different models, we found that the same set of LR parameters worked for each model. The learning rates are 1e−5, 1e−3, 1e−1, and 1e−3 for the weights, bias, quantization parameters of the inputs, and quantization parameters of the weights, respectively.
For vision models, we used 1000 unlabeled images from the ImageNet training set (a single image for each class), running the Adam optimizer for 100 iterations with a batch size of 50, unless otherwise stated.
For the BERT-base model, we used one paragraph from the training set, running the Adam optimizer for 50-100 iterations, depending on the type of layer. The learning rates and batch size are the same as for the vision models.
In Fig. 1 we aimed to answer the following question: assuming you have a small calibration set and no resource constraints (time, power), which method is the most accurate and robust? Our methods were evaluated by running each experiment five times and reporting the mean and standard deviation. Here, in Fig. 5, we add an additional naive early-stop plot on top of the QAT-KLD experiment. We split the calibration data into two equal sets and train on half the examples while evaluating our performance on the other half. Both KLD experiments used an SGD optimizer over 10 epochs, starting with a learning rate of 0.1 and decreasing it by a factor of 1e-2 after 2 and 8 epochs. We also conducted KLD experiments with the Adam optimizer and a learning rate of 1e-3, but their results were inferior.
As can be seen in the plot, AdaQuant is superior to the other methods and remarkably excels on small calibration sets. As can be seen in Fig. 5, the early-stop results were inferior to QAT-KLD, as they use a much smaller training set. However, other types of training-validation splits (e.g., 80-20) may boost the results.
D.2 INTEGER PROGRAMMING
Our IP method requires two steps: the first is measuring the properties of each layer, and the second is applying the program based on these measurements with a user-defined constraint. As a reference, we measure the loss (this can also be the accuracy) of the base-precision model on the calibration set. Next, we measure the sensitivity of each layer by evaluating a model where all layers are quantized to the base precision except one layer, which is quantized to a lower precision (e.g., all 8-bit but one layer with 4-bit). The $\Delta L_l$ in Eq. (5) is defined as the difference between the reference model's loss and the measured loss. If a layer is robust to quantization, $\Delta L_l$ will be small, and if a layer is sensitive to quantization, $\Delta L_l$ will be large. The performance gain, in the case of compression, is simply the difference in model parameter size when lowering the precision of the examined layer. Hence, if a layer has N parameters, lowering its precision from 8-bit to 4-bit results in a compression gain of $\Delta P_l = 8N - 4N = 4N$ bits. In the second stage, we run the integer program based on the sensitivity and compression measured for each layer, along with the user-defined constraint.
D.3 BATCH NORMALIZATION AND BIAS TUNING
The Batch Norm tuning phase is the most lightweight phase of the pipeline. We found empirically that fewer than ten iterations of statistics updates are sufficient. We also found that as compression grows, more iterations of batch norm tuning are required. In the bias tuning phase, we perform 200 iterations of fine-tuning with a learning rate of 0.1."
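As a rough, hypothetical illustration of the layer-wise objective in Eq. (18) above (the paper uses Adam with the per-parameter learning rates listed in D.1; this sketch replaces that with naive random search, fixed step sizes, and no zero points, only to show that perturbing W before quantization can reduce the layer output error):

```python
import numpy as np

def quantize(t, delta):
    # Round-to-nearest uniform quantizer (offsets omitted for simplicity).
    return delta * np.round(t / delta)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
X = rng.normal(size=(16, 32))            # tiny "calibration" batch
Y = W @ X                                 # full-precision layer output
dw = np.abs(W).max() / 7                  # assumed fixed 4-bit-style weight step
dx = np.abs(X).max() / 7                  # assumed fixed activation step

def layer_loss(V):
    # ||WX - Q(W + V) Q(X)||^2, the AdaQuant-style per-layer objective.
    return np.sum((Y - quantize(W + V, dw) @ quantize(X, dx)) ** 2)

V = np.zeros_like(W)
loss0 = layer_loss(V)
for _ in range(300):                      # naive random search over V
    cand = V + rng.normal(scale=0.01, size=W.shape)
    if layer_loss(cand) < layer_loss(V):
        V = cand
loss1 = layer_loss(V)
print(loss1 <= loss0)
```

Because only improving perturbations are accepted, the final objective is never worse than rounding-to-nearest (V = 0), which mirrors why AdaQuant dominates plain MSE step-size search.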
}, { "heading": "E CODE", "text": "For all our vision dataset we used the default torchvision pre-trained model. For BERT-base experiment we fined-tuned on SQUAD1.1 dataset and provide the script for that as a part of our repository. Our code can be found at: https://github.com/papers-submission/CalibTIP." } ]
2020
IMPROVING POST TRAINING NEURAL QUANTIZATION: LAYER-WISE CALIBRATION
SP:c4662d0c24d1744837443315de5f92042cada40b
[ "Recently, several researchers have been trying to combine the goodnesses of direct policy search approaches (mostly based on evolutionary computation approaches) and those of policy gradient approaches in control tasks. This paper proposes a novel combination of an evolutionary direct policy search and an actor-critic approach. The authors combines a cross-entropy method, which directly samples parameters of actor network (policy) from a Gaussian distribution that is trained during the search process, and the twin delayed deep deterministic policy gradient (TD3), which is an off-policy actor-critic approach. ", "The paper proposes a new method combining evolutionary methods and RL. In particular, the authors combine CEM and TD3 in PGPS. PGPS maintains a population of policies, which interact with the environment to collect data filling the replay buffer. The data in replay buffer is then used to train TD3. PGPS enables information flow in both directions: when the TD3 policy performs poorly, the elite policy from the population is used to guide TD3 by an imitation learning loss; The TD3 critic helps select top policies in the population and the TD3 actor is also included in the population. The experiments on simple Mujoco domains demonstrate the utility of PGPS and the ablation study analyzes the utility of each part of PGPS." ]
Gradient-based policy search algorithms (such as PPO, SAC, or TD3) in deep reinforcement learning (DRL) have shown successful results on a range of challenging control tasks. However, they often suffer from deceptive gradient problems in flat or gentle regions of the objective function. As an alternative to policy gradient methods, population-based evolutionary approaches have been applied to DRL. While population-based search algorithms show more robust learning in a broader range of tasks, they are usually inefficient in their use of samples. Recently, a few attempts (such as CEM-RL) have been reported to combine gradients with a population in searching for an optimal policy. This kind of hybrid algorithm takes advantage of both camps. In this paper, we propose yet another hybrid algorithm, which more tightly couples policy gradient with population-based search. More specifically, we use the Cross Entropy Method (CEM) for population-based search and Twin Delayed Deep Deterministic Policy Gradient (TD3) for policy gradient. In the proposed algorithm, called Coupling Policy Gradient with Population-based Search (PGPS), a single TD3 agent, which learns by a gradient from all experiences generated by the population, leads the population by providing its critic function Q as a surrogate to select a better-performing next-generation population from candidates. On the other hand, if the TD3 agent falls behind the CEM population, then the TD3 agent is updated toward the elite member of the CEM population using a loss function augmented with the distance between the TD3 and the CEM elite. Experiments on five challenging control tasks in a MuJoCo environment show that PGPS is robust to deceptive gradients and also outperforms the state-of-the-art algorithms.
[]
[ { "authors": [ "Cristian Bodnar", "Ben Day", "Pietro Lió" ], "title": "Proximal distilled evolutionary reinforcement learning", "venue": "arXiv preprint arXiv:1906.09807,", "year": 2019 }, { "authors": [ "Cédric Colas", "Olivier Sigaud", "Pierre-Yves Oudeyer" ], "title": "Gep-pg: Decoupling exploration and exploitation in deep reinforcement learning algorithms", "venue": "arXiv preprint arXiv:1802.05054,", "year": 2018 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Pieter-Tjerk De Boer", "Dirk P Kroese", "Shie Mannor", "Reuven Y Rubinstein" ], "title": "A tutorial on the cross-entropy method", "venue": "Annals of operations research,", "year": 2005 }, { "authors": [ "Scott Fujimoto", "Herke Van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Tanmay Gangwani", "Jian Peng" ], "title": "Policy optimization by genetic distillation", "venue": "arXiv preprint arXiv:1711.01012,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Nikolaus Hansen" ], "title": "The cma evolution strategy: A tutorial", "venue": "arXiv preprint arXiv:1604.00772,", "year": 2016 }, { "authors": [ "Ashley Hill", "Antonin Raffin", "Maximilian Ernestus", "Adam Gleave", "Anssi Kanervisto", "Rene Traore", "Prafulla Dhariwal", "Christopher Hesse", "Oleg Klimov", "Alex Nichol", "Matthias Plappert", "Alec Radford", "John Schulman", 
"Szymon Sidor", "Yuhuai Wu" ], "title": "Stable baselines. https://github.com/ hill-a/stable-baselines, 2018", "venue": null, "year": 2018 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan" ], "title": "Population based training of neural networks", "venue": "arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "Yaochu Jin" ], "title": "Surrogate-assisted evolutionary computation: Recent advances and future challenges", "venue": "Swarm and Evolutionary Computation,", "year": 2011 }, { "authors": [ "Whiyoung Jung", "Giseung Park", "Youngchul Sung" ], "title": "Population-guided parallel policy search for reinforcement learning", "venue": "arXiv preprint arXiv:2001.02907,", "year": 2020 }, { "authors": [ "Shauharda Khadka", "Kagan Tumer" ], "title": "Evolution-guided policy gradient in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shauharda Khadka", "Somdeb Majumdar", "Santiago Miret", "Evren Tumer", "Tarek Nassar", "Zach Dwiel", "Yinyin Liu", "Kagan Tumer" ], "title": "Collaborative evolutionary reinforcement learning", "venue": null, "year": 1905 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Joel Lehman", "Jay Chen", "Jeff Clune", "Kenneth O Stanley" ], "title": "Safe mutations for deep and recurrent neural networks through output gradients", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference,", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", 
"year": 2015 }, { "authors": [ "Guoqing Liu", "Li Zhao", "Feidiao Yang", "Jiang Bian", "Tao Qin", "Nenghai Yu", "Tie-Yan Liu" ], "title": "Trust region evolution strategies", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": null, "year": 1905 }, { "authors": [ "Aloïs Pourchot", "Olivier Sigaud" ], "title": "Cem-rl: Combining evolutionary and gradient-based methods for policy search", "venue": "arXiv preprint arXiv:1810.01222,", "year": 2018 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Juergen Schmidhuber", "Jieyu Zhao" ], "title": "Direct policy search and uncertain policy evaluation", "venue": null, "year": 1998 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John 
Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": null, "year": 2014 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Joe Staines", "David Barber" ], "title": "Optimization by variational bounding", "venue": "In ESANN,", "year": 2013 }, { "authors": [ "Felipe Petroski Such", "Vashisht Madhavan", "Edoardo Conti", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning", "venue": "arXiv preprint arXiv:1712.06567,", "year": 2017 }, { "authors": [ "Yee Teh", "Victor Bapst", "Wojciech M Czarnecki", "John Quan", "James Kirkpatrick", "Raia Hadsell", "Nicolas Heess", "Razvan Pascanu" ], "title": "Distral: Robust multitask reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "George E Uhlenbeck", "Leonard S Ornstein" ], "title": "On the theory of the brownian motion", "venue": "Physical review,", "year": 1930 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI conference on 
artificial intelligence,", "year": 2016 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Tobias Glasmachers", "Yi Sun", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "The Journal of Machine Learning Research,", "year": 2014 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nIn Reinforcement Learning (RL), an agent interacts with the environment, and its goal is to find the policy that maximizes the objective function, which is generally defined as a cumulative discounted reward. Recently, many researchers have worked on combining deep neural networks and a gradient-based RL algorithm, generally known as Deep Reinforcement Learning (DRL). This approach has achieved great success not only in the discrete action domain, such as in Go (Silver et al., 2017) and Atari games (Mnih et al., 2015; 2016), but also in the continuous action domain, such as in Robot control (Fujimoto et al., 2018; Lillicrap et al., 2015; Schulman et al., 2015). However, it is difficult to use the gradient-based method for the objective function (J),\nwhich includes “many wide flat regions” since the gradient (∇θJ) is near zero at a flat point. Figure 1 is an extreme case consisting of only flat regions, which is called a piece-wise constant function. This problem remains an unsolved issue in gradient-based DRL with continuous control domains (Colas et al., 2018). The Swimmer in a MuJoCo environment (Todorov et al., 2012) has already been reported to be hard to use the gradient-based method (Jung et al., 2020; Liu et al., 2019). Our experiment shows that the objective function of Swimmer includes wide flat regions (Appendix A).\nThe population-based Evolutionary Approach (EA), which is an alternative to the gradient-based method, has also shown successful results in various control tasks (Conti et al., 2018; Liu et al., 2019;\nSalimans et al., 2017; Such et al., 2017). As a population-based search, the EA generates a population of agents to explore policy, and the population is regenerated with improvement in each generation. The EA is also known as a direct policy search (Schmidhuber & Zhao, 1998) because it directly searches by perturbing the parameter of policy. 
Figure 1 briefly illustrates the Cross-Entropy Method (CEM), a kind of population-based search: the current population, sampled from the target distribution, is evaluated, and the distribution is then updated in the direction that generates a more promising population. Because they do not depend on the gradient, these approaches are robust to flat or deceptive gradients (Staines & Barber, 2013; Liu et al., 2019). However, the EA is sample-inefficient because it requires Monte-Carlo evaluations, and previous results and data generally cannot be reused.\nThe off-policy Policy Gradient (PG) algorithm can use data from arbitrary policies to train its actor and critic functions. This creates exciting potential for combining the EA and PG: the data that is discarded in a standard EA can be used directly to train the PG's functions. Khadka & Tumer (2018) and Pourchot & Sigaud (2018) each introduced a framework combining the EA and off-policy PG. However, the framework of Khadka & Tumer (2018) is less efficient at training the policy for general tasks than the PG algorithm alone, and the framework of Pourchot & Sigaud (2018) is unsuitable for training the policy on tasks with deceptive gradients.\nIn this paper, we propose another hybrid algorithm, called Policy Gradient with Population-based Search (PGPS), in which the CEM and Twin Delayed Deep Deterministic Policy Gradient (TD3) (Fujimoto et al., 2018) are combined. It is as robust to deceptive gradients as the CEM and more efficient at training the policy for general tasks than the TD3 alone. To be robust to deceptive gradients, the proposed algorithm is constructed in a way similar to that of Khadka & Tumer (2018): the TD3 is trained using data from the CEM and periodically participates in the CEM population as an individual (PG guides EA). However, in this basic framework, the TD3 sometimes falls into an inferior solution and searches inefficiently. 
To get the TD3 out of an inferior solution, we let the EA guide the TD3 by guided policy learning (Jung et al., 2020) (EA guides PG). Furthermore, the TD3 critic contributes to generating a more promising population by filtering the set of actors sampled from the CEM (Q-critic filtering). Lastly, to control the trade-off between search frequency and stable estimation, we use evaluation-step scheduling in the population-evaluation process (increasing evaluation steps): frequent searches are carried out when the search is far from the optimum, whereas stable estimation is carried out when the search is close to the optimum. These approaches bring out more synergy between the CEM and the TD3 while maintaining both the population-based search and the gradient-based search. Consequently, the proposed algorithm is not only robust to deceptive gradients, but also produces outstanding performance with a low additional computational cost." }, { "heading": "2 RELATED WORKS", "text": "Recently, beyond the view of one approach as an alternative to the other, a few attempts have been proposed in which one approach supports the other. One attempt is to use the EA to fill a replay buffer with diverse samples. In Colas et al. (2018), a Goal Exploration Process (GEP), a kind of EA, is first applied to search the policy and fill a replay buffer with diverse samples, and the off-policy PG algorithm is then used to fine-tune the parameters of the policy. Another attempt is to combine a population-based approach and PG for efficiently searching for a good policy, or for good hyper-parameters of an algorithm, in a parallel multi-learner setting. These applications generally consist of periodically evaluating the population, followed by distributing good knowledge to the other learners. To find the best architecture and hyper-parameters, Jaderberg et al. (2017) proposed a Population-Based Training (PBT) method in which the current best knowledge is periodically transferred to PG learners. 
Gangwani & Peng (2017) developed a distilled crossover using imitation learning and a mutation based on the PG. The proposed operators transfer the information of current good policies into the next population without destructive changes to the neural network. Jung et al. (2020) introduced a soft-manner guided policy learning to fuse the knowledge of the best policy with multiple identical learners while maintaining a more extensive search area for exploration.\nThe idea of combining the population-based EA and off-policy PG was recently introduced by Khadka & Tumer (2018). Their approach, called Evolutionary-Guided Reinforcement Learning (ERL), combines the Genetic Algorithm (GA) and the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015). In the ERL framework, the GA transfers the experience from evaluation into the DDPG through a replay buffer, and the DDPG transfers the knowledge learned from the policy gradient into the GA by periodically injecting a PG actor into the GA population. Khadka et al. (2019) expanded the PG part of ERL from a single DDPG learner to multiple TD3 learners with a resource manager. Bodnar et al. (2019) revised the GA's crossover and mutation into the distilled crossover and proximal mutation, inspired by Gangwani & Peng (2017) and Lehman et al. (2018), to prevent the destruction of neural networks. Pourchot & Sigaud (2018) introduced another framework, which combines the CEM and the TD3. In this framework, the TD3 algorithm has only a critic function, trained using the experience from the CEM. In order to propagate the knowledge learned by the policy gradient to the CEM, half of the population is updated in the direction indicated by the TD3 critic for a fixed number of steps, followed by the evaluation. 
The policy gradient applied to half of the population not only enhances the gradient-based learning, but also deteriorates the CEM's robustness against deceptive gradients.\nIn this paper, we introduce another hybrid algorithm, in which the CEM and the TD3 are combined as in CEMRL (Pourchot & Sigaud, 2018). However, the TD3 has both an actor and a critic, which are trained by gradients from the experiences generated by the CEM. The TD3 actor then periodically participates in the CEM population, as in ERL (Khadka & Tumer, 2018). This structure is an effective way to maintain the direct policy search of the CEM. To enhance the performance, we introduce new interaction processes between the CEM and the TD3 instead of carrying out policy gradients for numerous individual actors." }, { "heading": "3 BACKGROUNDS", "text": "Twin Delayed Deep Deterministic Policy Gradient (TD3) In the RL framework, an agent interacts with an environment generally defined by a Markov Decision Process (MDP). At each timestep t, the agent receives the state st and takes an action at according to the policy π; it then receives a reward rt and the next state st+1 at time step t + 1. The goal of RL is to find the policy that maximizes the discounted cumulative return Rt = Σ_{k=t}^∞ γ^(k−t) rk, where γ is a discount factor. Off-policy RL can use the data from arbitrary policies to train its actor and critic functions repeatedly, which is a key point for improving recent gradient-based RL. Silver et al. (2014) introduced the off-policy Deterministic Policy Gradient (DPG), which has an advantage for high-dimensional action spaces. The DDPG (Lillicrap et al., 2015) extended the DPG to deep neural networks. TD3 (Fujimoto et al., 2018) is an advanced version of the DDPG that addresses the overestimation bias of the critic: two critics are introduced, and the lower of the two state-action values is taken during the critic update, as in the Double Deep Q-Network (DDQN) (Van Hasselt et al., 2016). Figure 2(a) represents the architecture of the TD3.\nCross Entropy Method (CEM) The Evolutionary Approach (EA) is a heuristic search method inspired by nature, where the current population is evaluated, and the next population is regenerated using the current evaluation result in order to produce a higher Return, also known as Fitness, defined as the cumulative sum of immediate rewards over a fixed number of steps. The Estimation of Distribution Algorithm (EDA) is a class of the EA: it updates the target distribution to generate a better population. Depending on the update method for the distribution, EDAs are classified as the CEM (De Boer et al., 2005), the Covariance Matrix Adaptation Evolutionary Strategy (Hansen, 2016), the Evolutionary Strategy (Salimans et al., 2017), and the Natural Evolutionary Strategy (Wierstra et al., 2014). We use the CEM as one part of our proposed algorithm. As shown in Figure 2(b), the CEM procedure is as follows: the population is sampled from the multivariate Gaussian N(µ,Σ) and
(2017) introduced GPL for joint learning of numerous tasks in which a common policy encourages local policies to act better. Jung et al. (2020) proposed a soft-manner GPL, called the Population-guided Parallel Policy Search, for multiple identical learners with the same objective, where a population is evaluated periodically. Then sub-policies are trained to maximize their critic value and to concurrently minimize the distance from the elite policy for the next period. For this purpose, Augmented Loss (2) is used to train the sub-policies instead of Original Loss (1).\nOriginal Loss : LO(π) = Es∼SS [−Qπ(s, π(s))] (1) Augmented Loss : LA(π, πelite, β) = Es∼SS [−Qπ(s, π(s)) + β||π(s)− πelite(s)||22] (2)\nwhere π is a trained policy, Qπ is a critic function depending on π, πelite is the elite policy, SS is the set of states, and ||π(s)− πelite(s)||22 is the Euclidean distance measure between the trained policy and the elite policy. β is a distance weight and is controlled adaptively. In this paper, we used a revised GPL inspired by P3S so that the CEM elite actor guides the TD3 to better space." }, { "heading": "4 COUPLING POLICY GRADIENT WITH POPULATION BASED SEARCH ALGORITHM", "text": "As shown in Figure 3, the Policy Gradient with Population-based Search (PGPS) is a coupling framework between the CEM and the TD3. In this framework, two algorithms encourage each other to be efficient. The general flow is as follows. The generated actor population is evaluated by interacting with the environment, where each actor can fail to reach the max step (T ). The experience (the set of state transitions) is saved in a replay buffer, and Returns go to the CEM. The CEM updates the parameters of the population using top K (K is set to half of the population) high performing actors, and then the TD3 trains its critic and actor using the mini-batches sampled from the replay buffer. 
The knowledge of the TD3 is periodically transferred to the CEM by copying the TD3 actor into the last slot of the CEM population. The knowledge of the CEM is transferred to the TD3 by GPL when the TD3 is presumed to have fallen into an inferior solution. Lastly, the next population is regenerated, where the TD3 critic is used to select promising actors among the set sampled from N(µ,Σ) so that the population will be better. Pseudocode for the PGPS is given in Appendix B.\nGradient-based Update In a standard CEM, the experience of state transitions is discarded immediately because only the Returns are required to update the target distribution. However, the TD3 enables the discarded experience to be reused to train its actor and critic functions. Therefore, the experience is saved in the replay buffer and then used repeatedly in gradient-based updates.\nPG Guides EA In order to transfer the knowledge learned in the TD3 to the CEM, the TD3 actor is periodically copied to the last slot of the population. If this actor is included in the top-performing K actors, the multivariate Gaussian moves in the direction indicated by the TD3. On the other hand, if the TD3 actor is excluded from the top-performing K actors, the CEM ignores the knowledge from the TD3 and focuses on the direct policy search. A high copying frequency benefits knowledge propagation from the TD3 to the CEM, but it can disturb the population-based search.\nEA Guides PG When the TD3 falls into an inferior solution, mainly due to a deceptive gradient, it is difficult to escape by relying solely on the gradient-based method, despite the experience from a good behavior policy (Colas et al., 2018). Therefore, we use Guided Policy Learning (GPL), where the CEM elite actor leads the TD3 to escape the inferior solution. We judge the TD3 to have fallen into an inferior solution if its actor shows a lower Return than (mean − one standard deviation) of the Returns of the current population. 
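The guidance trigger above, together with the augmented loss of equation (2) and the β adaptation of equation (3), can be sketched in NumPy as follows. The names are illustrative; in the actual algorithm the loss is minimized by gradient descent on the TD3 actor parameters rather than evaluated on fixed action arrays.

```python
import numpy as np

def needs_guidance(td3_return, pop_returns):
    # TD3 is presumed stuck in an inferior solution when its Return is below
    # (mean - one standard deviation) of the population Returns.
    return td3_return < np.mean(pop_returns) - np.std(pop_returns)

def augmented_loss(q_values, actions, elite_actions, beta):
    # Equation (2): maximize the critic value while penalizing the squared
    # Euclidean distance from the elite actor on the same batch of states.
    dist = np.sum((actions - elite_actions) ** 2, axis=1)
    return np.mean(-q_values + beta * dist)

def adapt_beta(beta, distance, d_target):
    # Equation (3): keep the actor-to-elite distance near d_target.
    if distance > d_target * 1.5:
        return beta * 2
    if distance < d_target / 1.5:
        return beta / 2
    return beta
```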
The TD3 actor that has fallen into an inferior solution is updated in the direction that minimizes the Augmented Loss LA of equation (2). Moreover, the TD3 critic is trained through the target actor, which is indirectly guided by the elite actor, and this helps correct the critic. The distance weight (β) in LA is adapted several times during the GPL-based update by equation (3). It is a simplified version of P3S (Jung et al., 2020) and similar to Adaptive TRPO, which was introduced in Schulman et al. (2017):\nβ ← β × 2 if D(πTD3, πelite) > Dtarget × 1.5; β ← β / 2 if D(πTD3, πelite) < Dtarget / 1.5 (3)\nwhere the distance measure D(πTD3, πelite) is defined as Es∼SS[||πTD3(s) − πelite(s)||₂²], SS is the set of states in the replay buffer, and Dtarget is a hyper-parameter that determines how close the TD3 actor and the elite actor should be. During the GPL-based update, the TD3 actor stays around the CEM elite actor while maximizing its critic value.\nQ-critic Filtering It is important to generate a good population, since it not only leads N(µ,Σ) to be better but also encourages the TD3 to be trained well by filling the replay buffer with good experience. However, one cannot know which actor is better before the evaluation. To estimate the potential in advance, we use the Q-critic as a surrogate model (Jin, 2011). It can sort out promising actors from the set of actors sampled from N(µ,Σ) before the evaluation.\nProposition 1 If Ea∼πi(·|s)[QTD3(s, a)] ≥ Ea∼πTD3(·|s)[QTD3(s, a)] for all s, then Ea∼πi(·|s)[Qπi(s, a)] ≥ Ea∼πTD3(·|s)[QTD3(s, a)].
The overall procedure is as follows: M ( N) actors are sampled from N(µ,Σ), and then Q-critic fills half of the population with the actors with higher relative potential by filtering M actors. The remaining half consists of the elite actor and the actors sampled from N(µ,Σ) for the exploration.\nIncreasing Interaction Steps In order to efficiently control the trade-off between the frequency of searches and stable performance estimation of actors, we used evaluation steps scheduling, where the evaluation steps between actor and environment increase with cumulative evaluation step by equation (4).\nT = min(Tinit + 100×mod(cumulative evaluation steps, Tinter), Tmax) (4)\nwhere T is the current evaluation step, which means that each actor can maximally interact with the environment as much as T . Tinit is the initial evaluation step, Tinter is the interval for increasing evaluation step, and Tmax is the maximum evaluation step depending on the task.\nThe evaluation step should be sufficiently long for stable performance estimation of the population, but it reduces the number of the CEM generation and delays the update of the TD3. Short evaluation steps make it possible to carry out more population-based searches and frequent TD3 updates but causes unstable performance estimation. As the arbitration approach, we used the increasing evaluation steps. This approach guarantees more searches and frequent updates at the beginning stage of learning and fine estimation at the later stage of learning." 
}, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 COMPARISON TO BASELINES", "text": "Environment The proposed algorithm (PGPS) was empirically evaluated on five games in Mujoco managed by OpenAI Gym, which is widely used as continuous control benchmarks.\nBaseline Algorithms We compared the performance of PGPS with various EA, PG, and EAPG algorithms, such as CEM, PPO(Schulman et al., 2017), DDPG(Lillicrap et al., 2015), TD3(Fujimoto et al., 2018), SAC(Haarnoja et al., 2018), CERL(Khadka et al., 2019), PDERL(Bodnar et al., 2019), and CEMRL(Pourchot & Sigaud, 2018). Almost algorithms were implemented using the code provided by its authors. However, the DDPG is implemented by the code provided by the authors of the TD3. OpenAI Spinningup (Achiam, 2018) was used to implement PPO. PGPS and CEM were implemented using PyTorch. Our code is available at http://github.com/NamKim88/PGPS.\nHyper-parameter Setting For stable learning, we performed tuning on the architecture of the neural network and the hyper-parameters of the learning rate, population size, the period of copying TD3 actors to the population, increasing evaluation steps, and Q-critic filtering. The detailed setting is described in Appendix D. Adam (Kingma & Ba, 2014) was used to train the neural networks in the TD3. The hyper-parameters of baseline algorithms were set as the same as the reference code.\nPerformance Evaluation Metrics Each algorithm learns five times at different seeds for a task. Each learning runs for a million timesteps. That is, the total numbers of interacting all agents with the environment are a million. The evaluation tests are performed every 10,000 steps. Each evaluation test performed without any exploration behavior and reports the average reward over ten episodes, where the evaluation step of a episode is 1,000. For the evaluation test of the PGPS, the current mean (µ) of N(µ,Σ) was used as the parameters of the evaluation policy. 
If training is performed at seed 1, the evaluation test proceeds at random seed 1 + 100. This approach is the same as in the original code of the TD3 (Fujimoto et al., 2018) and is applied to all baseline algorithms. The curves in Figure 4 report the average performance of policies trained with five random seeds, 1 to 5.\nResults In Figure 4, the performance of all baseline algorithms is similar to that reported in the original authors' papers and reference papers. Some differences come from the training seeds, evaluation metrics, the variance of the algorithms, and the versions of MuJoCo and PyTorch.\nThe results show that all PG algorithms suffer from a deceptive gradient in Swimmer-v2. In contrast, the CEM, which is a direct policy search method, yields the best result despite the deceptive gradient in Swimmer-v2. CERL and PDERL, as algorithms combining GA and PG, show better performance than the PG algorithms. However, their performance is lower than the CEM's because the knowledge propagated from the PG to the GA disturbs the direct policy search of the GA. Although CEMRL combines the CEM and TD3, it shows performance similar to the TD3. This result comes from the fact that the gradient-based update of half of the population ruins the population-based search of the CEM.\nIn the remaining four tasks, which favor the gradient-based method, advanced PG algorithms show much better performance than the CEM. The performance of CERL and PDERL is located between that of an advanced PG algorithm and the CEM in most cases. In particular, CERL yields lower performance than the TD3, which is one part of CERL. This is due to the failure of GA and
CEMRL outperforms all baseline algorithms in four tasks since the gradient-based update for multiple actors amplifies the advantage of the gradient method.\nPGPS carried out gradient-based update on the TD3, and then the TD3 actor is periodically copied to the last actor of the CEM population. It keeps the computational cost lower and also minimizes the disturbance from the gradient method on a direct policy search. As a result, the PGPS can achieve results comparable to the CEM in Swimmer-v2 which is advantageous to a direct policy search. Furthermore, the PGPS shows an outstanding performance in the remaining four tasks, which are advantageous to the gradient-based method. That performance is due to the additional interaction processes for coupling two algorithms efficiently, such as mutual guidance (PG-EA), Q-critic filtering, and increasing evaluation steps." }, { "heading": "5.2 ABLATIONS STUDIES", "text": "In this subsection, we performed ablations studies to investigate the effect on the final performance and computational time when a specific process is cumulatively added to the base model. The added sequence is as follows: PG guides EA (P to E), EA guides PG (E to P), Q-filtering, and increasing evaluation steps. HalfCheetah-v2, Walker2d-v2, and Ant-v2 are used for the ablation studies. Each task was learned five times at random seeds of 1 to 5. Each learning runs for a million timesteps. Table 1 reports the average over 15 (three games×five learning) runs. Evaluation test is performed using the µ of the CEM except for the Base model.\nBase model The experience from CEM is saved to a replay buffer. However, the TD3 and CEM are trained independently without any guidance. Two evaluation tests are performed using the TD3 and the µ of the CEM at the end of learning. A better one is selected for the final performance.\nP to E shows the most noteworthy performance improvement within the proposed algorithm. 
A good experience is essential to train the TD3's actor and critic functions well. As the TD3 actor directly guides the population to be better, the population fills the replay buffer with good experience, which in turn ensures that the TD3 is trained well again. From the perspective of the CEM, the TD3 actor provides good exploration, which cannot be obtained from random sampling, and improves the population. From the perspective of the TD3, the experience from the population is a richer set of exploration behavior than PG-based action space exploration (Plappert et al., 2017). The other processes additionally contribute performance improvements of about 4 ∼ 18%. The remarkable point is that these processes incur a low additional computational cost: Q-filtering requires the highest additional computational cost, but it is only about 11% of that of the TD3 alone.\nThe effect of EA guidance In contrast to the existing hybrid algorithms (Bodnar et al., 2019; Khadka et al., 2019), in which the PG only guides the EA, the proposed algorithm lets the EA also guide the TD3. This is beneficial for pulling the TD3 agent out of an inferior solution, especially when it is trapped by a deceptive gradient. Figure 5 shows the effect of EA guidance (EA guides PG) in Swimmer-v2. As shown in Figure 5, the TD3 actor without EA guidance stays at the inferior solution most of the time, whereas the TD3 actor with EA guidance quickly escapes the inferior solution and goes on searching for a better space. The proposed algorithm also learns robustly even if both deceptive and ordinary gradients occur in the environment, since it lets the CEM lead the TD3 when a deceptive gradient occurs and lets the TD3 lead the CEM when an ordinary gradient occurs.\nThe population provides diverse exploration behavior for both the EA and the PG. To further improve exploration, the distance-based criteria introduced in (Bodnar et al., 2019) can also be used with Q-critic filtering."
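The PG-side guidance that this section ablates is controlled by an adaptive coupling coefficient β (Algorithm 1, line 28 in Appendix B). A minimal Python sketch of that update rule follows; the function and variable names are illustrative, not taken from the paper's code:

```python
def update_beta(beta, dist, d_target):
    """Adaptive coupling coefficient for guided policy learning
    (Algorithm 1, line 28): `dist` stands for the mean squared action
    distance E_s[||pi_TD3(s) - pi_elite(s)||^2] over sampled states."""
    if dist > d_target * 1.5:
        return beta * 2.0   # TD3 actor drifted too far -> strengthen guidance
    if dist < d_target / 1.5:
        return beta / 2.0   # close enough -> relax the guidance term
    return beta

beta = 1.0
for dist in [4.0, 4.0, 0.1, 0.5]:
    beta = update_beta(beta, dist, d_target=1.0)
print(beta)  # 1.0 after doubling twice and halving twice
```

This keeps the augmented-loss term LA(πTD3, πelite, β) from dominating once the TD3 actor is near the elite, while ramping it up quickly when the actor is stuck far away.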
}, { "heading": "6 CONCLUSIONS", "text": "In this paper, we proposed a hybrid algorithm coupling the Cross-Entropy Method (CEM) and the Twin Delayed Deep Deterministic Policy Gradient (TD3), called Policy Gradient with Population-based Search (PGPS). The proposed algorithm is not only robust to a deceptive gradient, under which the TD3 alone is difficult to train, but also achieves the high sample efficiency that the EA alone lacks. To be robust to a deceptive gradient at a low additional computational cost, we revised the existing hybrid algorithm framework into an improved structure. To further enhance performance, we introduced new interaction processes: mutual guidance (PG↔EA), Q-critic filtering, and an increasing evaluation step. First, mutual guidance is the most crucial process, where the TD3 guides the CEM to make it better, and the CEM in turn guides the TD3 when it has fallen into an inferior solution. Second, the Q-critic helps the population consist of more promising actors by filtering the set of actors sampled from a multivariate Gaussian. Lastly, the increasing evaluation step controls the trade-off between the frequency of searches and stable estimation: it performs frequent searches and updates when exploring a coarse policy space far from the optimum at the beginning of learning, and fine estimation close to the optimum at the later stage of learning. Our experiments on MuJoCo confirmed that the proposed algorithm outperforms state-of-the-art PG and hybrid algorithms, while its computational cost is kept only about 13.5% higher than that of the TD3 algorithm." }, { "heading": "A APPENDIX A.
THE SHAPE OF SWIMMER-V2", "text": "Average Episode Discounted Reward = E_{π(·|s)}[∑_{t=0}^{∞} γ^t r_t^π] ≈ (1/30) ∑_{n=1}^{30} ∑_{t=0}^{999} γ^t r_t^π\nAverage Episode Return = E_{π(·|s)}[∑_{t=0}^{∞} r_t^π] ≈ (1/30) ∑_{n=1}^{30} ∑_{t=0}^{999} r_t^π\nwhere γ = 0.99, and the architecture of the linear policy is [state dim, action dim] → tanh.\nTo find out why previous state-of-the-art policy gradient methods, such as TRPO, PPO, SAC, and TD3, struggle to solve the Swimmer task in the MuJoCo environment, we performed an interesting two-step experiment. In the first step, we set up an actor following a simple linear policy and executed the CEM algorithm to find optimal policy parameters θ for the Swimmer. When the Return of the actor exceeded 200, we saved the policy parameters θ200. In the next step, while changing one parameter value at a time, we evaluated the changed policy thirty times with different seeds and recorded all the Discounted Rewards (J(S0)) and Returns. We present some cases of this experiment in Figure 1. As shown in Figure 1, we can observe several interesting facts: 1) on graphs (a), (d), and (f), there are wide regions where the gradient ∇θJ(S0) is near zero; 2) except for (b), at particular parameter values the gradients of J(S0) and the Return are steep enough to appear piecewise; and 3) graphs (c), (d), and (e) are shaped like valleys near those steep points. We suspect these facts to be the cause of the deceptive gradient problem in the Swimmer, and raised this question in the Introduction. Finally, considering these issues, we propose the combined algorithm of the TD3 and CEM." }, { "heading": "B APPENDIX B.
PSEUDOCODE OF PGPS ALGORITHM", "text": "Algorithm 1 Coupling Policy Gradient with Population based Search Algorithm\nSet hyper-parameters: TD3: lractor, lrcritic, τ, γ, and κ ; CEM: pop-size N , top K, Σinit,Σend, and τΣ ; Mutual guidance: FTD3→CEM , βinit, and Dtarget ; Q-critic filtering: Tstart−Q and SR ; Increasing interaction steps: Tmax, Tinit, and Tinterval\n1: Initialize the mean µ of the multivariate Gaussian of the CEM 2: Initialize the TD3 actor πTD3 and TD3 critic QTD3 3: initialize replay buffer R 4: total_steps = 0 5: for generation=1:∞ do 6: if total_steps ≥ Tstart−Q then 7: pop← Q-critic Filtering(N, πelite, µ,Σ, QTD3, R) 8: else 9: pop[1]← πelite, pop[2 : N ] are sampled from N(µ,Σ)\n10: end if 11: if generation mod FTD3→CEM = 0 then 12: pop[N ]← πTD3 13: end if\n14: T = min(Tinit + 100×mod(total_steps, Tinterval), Tmax)\n15: interaction_steps = 0 16: for i=1:pop_size N do 17: Set the current actor π as pop[i]. 18: Returni, (st, at, rt, st+1)t=1:tend(tend≤T ) ← Evaluate(π, T ) 19: Fill replay buffer R with (st, at, rt, st+1)t=1:tend 20: interaction_steps = interaction_steps + tend 21: end for 22: total_steps = total_steps + interaction_steps\n23: Update (πelite, µ, Σ) with the top-K Return actors\n24: num_update = interaction_steps / 5 25: if generation mod FTD3→CEM = 0 and ReturnN < MEAN(Returns)-STD(Returns) then 26: for i=1:5 do 27: Sampled states 2 (SS2) are drawn from R\n28: Update β =\n{ β × 2 if Es∼SS2 [||πTD3(s)− πelite(s)|| 2 2] > Dtarget × 1.5\nβ / 2 if Es∼SS2 [||πTD3(s)− πelite(s)|| 2 2] < Dtarget / 1.5\n29: Train QTD3 for num_update mini-batches from R using a standard TD3 algorithm 30: Train πTD3 for num_update mini-batches from R to minimize LA(πTD3, πelite, β) 31: end for 32: else 33: for i=1:5 do 34: Train QTD3 for num_update mini-batches from R using a standard TD3 algorithm 35: Train πTD3 for num_update mini-batches from R to minimize LO(πTD3) 36: end for 37: end if 38: end for\nIn contrast to a standard TD3(Fujimoto et al., 2018) 
that repeatedly performs one environment interaction step and then one update, our TD3 carries out, after each evaluation, as many updates as the sum of the evaluation steps of the current generation. In the proposed algorithm, the total update steps are divided into 5 iterations. At each iteration, the critic is first trained for a fixed number of steps, and then the actor is trained for the same number of steps. For example, if the total update steps are 10,000, at each iteration the critic is first trained on 2,000 mini-batches, and then the actor is trained on 2,000 mini-batches in the direction that maximizes the critic. This scheme stabilizes the volatility of the critic and is widely used in implementations (Achiam, 2018; Hill et al., 2018; Pourchot & Sigaud, 2018). LO(πTD3) is the original TD3 loss in equation (1). LA(πTD3, πelite, β) is the augmented loss for guided policy learning in equation (2).\nAlgorithm 2 Function Q-critic Filtering 1: procedure Q-critic Filtering(N, πelite, µ,Σ, QTD3, R) 2: pop[1]← πelite, pop[2 : N/2] are sampled from N(µ,Σ) 3: πj=1:M(=SR∗N) are sampled from N(µ,Σ) 4: Sampled states 1 (SS1) are drawn from replay buffer R 5: for j=1:M do 6: Pj = Es∼SS1 [QTD3(s, πj(s))] 7: end for 8: pop[N/2 + 1 : N ]← Select the policies πj with the highest Pj among πj=1:M 9: Return pop 10: end procedure\nAlgorithm 3 Function Evaluate 1: procedure Evaluate(π, T ) 2: returns, t, buffer (BF ) = 0, 0, [ ] 3: Reset environment and get initial state s0 4: while env is not done and t ≤ T do 5: Select action at = π(st) 6: Execute action at and receive reward rt and next state st+1 7: Fill BF with state transition (st, at, rt, st+1) 8: returns = returns+ rt and t = t+ 1 9: end while 10: Return returns,BF 11: end procedure\nIn a standard TD3 algorithm, Gaussian noise or Ornstein-Uhlenbeck (Uhlenbeck & Ornstein, 1930) noise is added to the action at for exploration, which is usually known as action space noise. Pourchot & Sigaud (2018) empirically showed that action space noise does not contribute to performance improvement.
We also could not find any evidence of an advantage from action space noise. Therefore, the proposed algorithm does not use action space noise." }, { "heading": "C APPENDIX C. PROOF OF PROPOSITION 1", "text": "In this section, we prove Proposition 1.\nProposition 1 If E_{a∼πi(·|s)}[QTD3(s, a)] ≥ E_{a∼πTD3(·|s)}[QTD3(s, a)] for all s, then E_{a∼πi(·|s)}[Qπi(s, a)] ≥ E_{a∼πTD3(·|s)}[QTD3(s, a)].\nProof. For an arbitrary st,\nVπTD3(st)\n= E_{at∼πTD3(·|st)}[QπTD3(st, at)]\n≤ E_{at∼πi(·|st)}[QπTD3(st, at)]\n= E_{at∼πi(·|st)}[r^{πi}_t + γ E_{at+1∼πTD3(·|st+1)}[QπTD3(st+1, at+1)]]\n≤ E_{at∼πi(·|st)}[r^{πi}_t + γ E_{at+1∼πi(·|st+1)}[QπTD3(st+1, at+1)]]\n= E_{at∼πi(·|st)}[r^{πi}_t + γ r^{πi}_{t+1} + γ² E_{at+2∼πTD3(·|st+2)}[QπTD3(st+2, at+2)]]\n≤ E_{at∼πi(·|st)}[r^{πi}_t + γ r^{πi}_{t+1} + γ² E_{at+2∼πi(·|st+2)}[QπTD3(st+2, at+2)]]\n· · ·\n≤ E_{at∼πi(·|st)}[r^{πi}_t + γ r^{πi}_{t+1} + γ² r^{πi}_{t+2} + · · ·] ∼= E_{at∼πi(·|st)}[∑_{k=t}^{∞} γ^{k−t} r^{πi}_k]\n∼= E_{a∼πi(·|s)}[Qπi(s, a)]\n= Vπi(st)\nWe assume that a higher E_{a∼πi(·|s)}[QTD3(s, a)] implies a higher E_{a∼πi(·|s)}[Qπi(s, a)]. Therefore, a policy π with a higher E_{a∼π(·|s)}[QTD3(s, a)] is a better policy. We use states sampled (SS) from the replay buffer to estimate the performance of a policy; that is, E_{s∼SS}[QTD3(s, πi(s))] is an estimator of E_{a∼πi(·|s)}[QTD3(s, a)] over all s. To sum up, the policy with the higher E_{s∼SS}[QTD3(s, πi(s))] is the better policy." }, { "heading": "D APPENDIX D. DETAILED HYPERPARAMETER SETTINGS", "text": "Table 2 describes the architecture of the neural networks, Table 3 lists the hyperparameters kept constant across all tasks, and Table 4 describes the hyperparameters that vary with the task." } ]
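Appendix B's Q-critic filtering (Algorithm 2) can be sketched in a few lines of numpy. In this toy version, policies are linear maps and the critic is a fixed function, so all names, shapes, and the omission of the elite slot are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_critic_filter(mu, sigma, critic, states, pop_size, sample_ratio):
    # First half of the population: direct samples from N(mu, sigma)
    # (the elite insertion of Algorithm 2, line 2, is omitted in this toy).
    pop = [rng.normal(mu, sigma) for _ in range(pop_size // 2)]
    # Oversample M = SR * N candidates for the second half (line 3).
    m = int(sample_ratio * pop_size)
    candidates = [rng.normal(mu, sigma) for _ in range(m)]
    # Score each candidate by its mean critic value on sampled states
    # (lines 5-7): P_j = E_{s ~ SS1}[Q(s, pi_j(s))], with pi_j(s) = s @ theta_j.
    scores = np.array([np.mean(critic(states, states @ theta))
                       for theta in candidates])
    # Keep the highest-scoring candidates to fill the population (line 8).
    top = np.argsort(scores)[::-1][: pop_size - len(pop)]
    pop += [candidates[j] for j in top]
    return pop, scores[top]

# Toy setup: 4-dim states, 2-dim actions, a critic that prefers small actions.
states = rng.normal(size=(32, 4))
critic = lambda s, a: -np.sum(a ** 2, axis=1)
mu, sigma = np.zeros((4, 2)), 0.5 * np.ones((4, 2))
pop, top_scores = q_critic_filter(mu, sigma, critic, states,
                                  pop_size=10, sample_ratio=3)
print(len(pop))  # 10
```

The oversample-then-filter step is cheap because it only evaluates the critic on a batch of stored states, never the environment, which is why it adds little to the interaction budget.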
2020
null
SP:4e5cbc8389be556e7f0bc008d19d635e6736622f
[ "The paper presents an analysis of differential privacy in machine learning, with a focus on neural networks trained via differentially private stochastic gradient descent (DPSGD). The main focus and the message in the paper is that the handcrafted features work better compared to learned features during training of NNs and having more training data results in better outcomes (i.e. a better privacy-utility trade-off).", "The paper considers ways of improving private versions of SGD in the context of image classification. The main finding is that providing \"hand crafted\" features can significantly improve the privacy/accuracy trade-off. In some cases, even a linear model built on top of such features (like those produced by ScatterNet), can improve over differentially private SGD. A plausible explanation for this phenomenon is that extra features can reduce the number of iterations required in SGD, resulting in better privacy and/or less noise. (It is also argued that having much more data similarly improves the trade-off, but this is unsurprising and, it seems, has been observed before by McMahan et al.)" ]
We demonstrate that differentially private machine learning has not yet reached its “AlexNet moment” on many canonical vision tasks: linear models trained on handcrafted features significantly outperform end-to-end deep neural networks for moderate privacy budgets. To exceed the performance of handcrafted features, we show that private learning requires either much more private data, or access to features learned on public data from a similar domain. Our work introduces simple yet strong baselines for differentially private learning that can inform the evaluation of future progress in this area.
[ { "affiliations": [], "name": "Florian Tramèr" }, { "affiliations": [], "name": "Dan Boneh" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Joakim Andén", "Stéphane Mallat" ], "title": "Deep scattering spectrum", "venue": "IEEE Transactions on Signal Processing,", "year": 2014 }, { "authors": [ "Mathieu Andreux", "Tomás Angles", "Georgios Exarchakis", "Roberto Leonarduzzi", "Gaspar Rochette", "Louis Thiry", "John Zarka", "Stéphane Mallat", "Joakim Andén", "Eugene Belilovsky", "Joan Bruna", "Vincent Lostanlen", "Matthew J. Hirn", "Edouard Oyallon", "Sixin Zhang", "Carmine Cella", "Michael Eickenberg" ], "title": "Kymatio: Scattering transforms in Python", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang", "Dingli Yu" ], "title": "Harnessing the power of infinitely wide deep nets on small-data tasks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Eugene Bagdasaryan", "Omid Poursaeed", "Vitaly Shmatikov" ], "title": "Differential privacy has disparate impact on model accuracy", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Raef Bassily", "Adam Smith", "Abhradeep Thakurta" ], "title": "Private empirical risk minimization: Efficient algorithms and tight error bounds", "venue": "In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science,", "year": 2014 }, { "authors": [ "Keith Bonawitz", "Vladimir Ivanov", "Ben Kreuter", "Antonio Marcedone", "H Brendan McMahan", "Sarvar Patel", "Daniel Ramage", "Aaron Segal", "Karn Seth" ], "title": "Practical secure aggregation for privacy-preserving machine learning", "venue": "In ACM SIGSAC Conference on Computer and 
Communications Security (CCS),", "year": 2017 }, { "authors": [ "Joan Bruna", "Stéphane Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Zhiqi Bu", "Jinshuo Dong", "Qi Long", "Weijie J Su" ], "title": "Deep learning with Gaussian differential privacy", "venue": "arXiv preprint arXiv:1911.11607,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "Chang Liu", "Úlfar Erlingsson", "Jernej Kos", "Dawn Song" ], "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", "venue": "In 28th USENIX Security Symposium,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "Florian Tramer", "Eric Wallace", "Matthew Jagielski", "Ariel Herbert-Voss", "Katherine Lee", "Adam Roberts", "Tom Brown", "Dawn Song", "Ulfar Erlingsson", "Alina Oprea", "Colin Raffel" ], "title": "Extracting training data from large language models", "venue": "arXiv preprint arXiv:2012.07805,", "year": 2020 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "John C Duchi", "Percy S Liang" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kamalika Chaudhuri", "Claire Monteleoni", "Anand D Sarwate" ], "title": "Differentially private empirical risk minimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Chen Chen", "Jaewoo Lee" ], "title": "Stochastic adaptive line search for differentially private optimization", "venue": "arXiv preprint arXiv:2008.07978,", "year": 2020 }, { "authors": [ "Mia Xu Chen", "Benjamin N Lee", "Gagan Bansal", "Yuan Cao", "Shuyuan Zhang", "Justin Lu", "Jackie Tsay", "Yinan Wang", "Andrew M Dai", "Zhifeng Chen" ], "title": "Gmail smart compose: Real-time assisted writing", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on 
Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Adam Coates", "Andrew Y Ng" ], "title": "Learning feature representations with k-means", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Navneet Dalal", "Bill Triggs" ], "title": "Histograms of oriented gradients for human detection", "venue": "IEEE computer society conference on computer vision and pattern recognition (CVPR’05),", "year": 2005 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Cynthia Dwork", "Krishnaram Kenthapadi", "Frank McSherry", "Ilya Mironov", "Moni Naor" ], "title": "Our data, ourselves: Privacy via distributed noise generation", "venue": "In Annual International Conference on the Theory and Applications of Cryptographic Techniques,", "year": 2006 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Theory of cryptography conference,", "year": 2006 }, { "authors": [ "Cynthia Dwork", "Vitaly Feldman", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Aaron Leon Roth" ], "title": "Preserving statistical validity in adaptive data analysis", "venue": "In Proceedings of the forty-seventh annual ACM symposium on Theory of computing,", "year": 
2015 }, { "authors": [ "Vitaly Feldman" ], "title": "Does learning require memorization? a short tale about a long tail", "venue": "In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2020 }, { "authors": [ "Vitaly Feldman", "Tijana Zrnic" ], "title": "Individual privacy accounting via a Rényi filter", "venue": "arXiv preprint arXiv:2008.11193,", "year": 2020 }, { "authors": [ "Vitaly Feldman", "Ilya Mironov", "Kunal Talwar", "Abhradeep Thakurta" ], "title": "Privacy amplification by iteration", "venue": "IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2018 }, { "authors": [ "Robin C Geyer", "Tassilo Klein", "Moin Nabi" ], "title": "Differentially private federated learning: A client level perspective", "venue": "arXiv preprint arXiv:1712.07557,", "year": 2017 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Matthew Jagielski", "Jonathan Ullman", "Alina Oprea" ], "title": "Auditing differentially private machine learning: How private is private SGD", "venue": "arXiv preprint arXiv:2006.07709,", "year": 2020 }, { "authors": [ "Bargav Jayaraman", "Lingxiao Wang", "David Evans", "Quanquan Gu" ], "title": "Distributed learning without distress: Privacy-preserving empirical risk minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", 
"Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Peter Kairouz", "Mónica Ribero", "Keith Rush", "Abhradeep Thakurta" ], "title": "Dimension independence in unconstrained private erm via adaptive preconditioning", "venue": "arXiv preprint arXiv:2008.06570,", "year": 2020 }, { "authors": [ "Daniel Kifer", "Adam Smith", "Abhradeep Thakurta" ], "title": "Private convex empirical risk minimization and highdimensional regression", "venue": "In Conference on Learning Theory, pp", "year": 2012 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V Le" ], "title": "Do better imagenet models transfer better", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Zhiyuan Li", "Ruosong Wang", "Dingli Yu", "Simon S Du", "Wei Hu", "Ruslan Salakhutdinov", "Sanjeev Arora" ], "title": "Enhanced convolutional neural tangent kernels", "venue": null, "year": 1911 }, { "authors": [ "Jingcheng Liu", "Kunal Talwar" ], "title": "Private selection from private candidates", "venue": "In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2019 }, { "authors": [ "David G Lowe" ], "title": "Object recognition from local scale-invariant features", "venue": "In Proceedings of the seventh IEEE international conference on computer vision,", "year": 1999 }, { "authors": [ "Alexander Selvikvåg Lundervold", "Arvid Lundervold" ], "title": "An overview of deep learning in medical imaging focusing on MRI", "venue": "Zeitschrift für Medizinische Physik,", "year": 2019 }, { "authors": [ "Christopher Manning", "Hinrich Schutze" ], "title": "Foundations of statistical natural language processing", "venue": null, "year": 1999 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communicationefficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "H Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning differentially private recurrent language models", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Ilya Mironov" ], "title": "Rényi differential privacy", "venue": "IEEE 30th Computer Security Foundations Symposium (CSF),", "year": 2017 }, { "authors": [ "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Rényi differential privacy of the sampled Gaussian mechanism", "venue": "arXiv preprint arXiv:1908.10530,", "year": 2019 }, { "authors": [ "Milad Nasr", "Reza Shokri", "Amir 
houmansadr" ], "title": "Improving deep learning with differential privacy using gradient encoding and denoising", "venue": "arXiv preprint arXiv:2007.11524,", "year": 2020 }, { "authors": [ "Kobbi Nissim", "Sofya Raskhodnikova", "Adam Smith" ], "title": "Smooth sensitivity and sampling in private data analysis", "venue": "In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing,", "year": 2007 }, { "authors": [ "Edouard Oyallon", "Stéphane Mallat" ], "title": "Deep roto-translation scattering for object classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Edouard Oyallon", "Sergey Zagoruyko", "Gabriel Huang", "Nikos Komodakis", "Simon Lacoste-Julien", "Matthew Blaschko", "Eugene Belilovsky" ], "title": "Scattering networks for hybrid representation learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Nicolas Papernot", "Martín Abadi", "Ulfar Erlingsson", "Ian Goodfellow", "Kunal Talwar" ], "title": "Semi-supervised knowledge transfer for deep learning from private training data", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Shuang Song", "Ilya Mironov", "Ananth Raghunathan", "Kunal Talwar", "Úlfar Erlingsson" ], "title": "Scalable private learning with PATE", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Nicolas Papernot", "Steve Chien", "Shuang Song", "Abhradeep Thakurta", "Ulfar Erlingsson" ], "title": "Making the shoe fit: Architectures, initializations, and tuning for learning with privacy, 2020a", "venue": "URL https://openreview. 
net/forum?id=rJg851rYwH", "year": 2020 }, { "authors": [ "Nicolas Papernot", "Abhradeep Thakurta", "Shuang Song", "Steve Chien", "Úlfar Erlingsson" ], "title": "Tempered sigmoid activations for deep learning with differential privacy", "venue": "In Theory and Practice of Differential Privacy,", "year": 2020 }, { "authors": [ "Vinay Uday Prabhu", "Abeba Birhane" ], "title": "Large image datasets: A pyrrhic win for computer vision", "venue": "arXiv preprint arXiv:2006.16923,", "year": 2020 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Ali Sharif Razavian", "Hossein Azizpour", "Josephine Sullivan", "Stefan Carlsson" ], "title": "CNN features off-the-shelf: an astounding baseline for recognition", "venue": "In Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2014 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do CIFAR-10 classifiers generalize to CIFAR-10", "venue": "arXiv preprint arXiv:1806.00451,", "year": 2018 }, { "authors": [ "Benjamin Rubinstein", "Peter Bartlett", "Ling Huang", "Nina Taft" ], "title": "Learning in a large function space: Privacypreserving mechanisms for SVM learning", "venue": "Journal of Privacy and Confidentiality,", "year": 2012 }, { "authors": [ "Vaishaal Shankar", "Alex Fang", "Wenshuo Guo", "Sara Fridovich-Keil", "Ludwig Schmidt", "Jonathan Ragan-Kelley", "Benjamin Recht" ], "title": "Neural kernels without tangents", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Reza Shokri", "Vitaly Shmatikov" ], "title": "Privacy-preserving deep learning", "venue": "In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security,", "year": 2015 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], 
"title": "Membership inference attacks against machine learning models", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Congzheng Song", "Thomas Ristenpart", "Vitaly Shmatikov" ], "title": "Machine learning models that remember too much", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Om Thakkar", "Galen Andrew", "H Brendan McMahan" ], "title": "Differentially private learning with adaptive clipping", "venue": "arXiv preprint arXiv:1905.03871,", "year": 2019 }, { "authors": [ "Antonio Torralba", "Rob Fergus", "William T Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1970 }, { "authors": [ "Yu-Xiang Wang", "Borja Balle", "Shiva Prasad Kasiviswanathan" ], "title": "Subsampled Rényi differential privacy and analytical moments accountant", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Shaomei Wu", "Hermes Pique", "Jeffrey Wieland" ], "title": "Using artificial intelligence to help blind people ‘see", "venue": null, "year": 2016 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Da Yu", "Huishuai Zhang", "Wei Chen", "Tie-Yan Liu", "Jian Yin" ], "title": "Gradient perturbation is underrated for differentially private convex optimization", "venue": "arXiv preprint arXiv:1911.11363,", "year": 2019 }, { 
"authors": [ "Lei Yu", "Ling Liu", "Calton Pu", "Mehmet Emre Gursoy", "Stacey Truex" ], "title": "Differentially private model publishing for deep learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Tao Yu", "Eugene Bagdasaryan", "Vitaly Shmatikov" ], "title": "Salvaging federated learning by local adaptation", "venue": "arXiv preprint arXiv:2002.04758,", "year": 2020 }, { "authors": [ "Yingxue Zhou", "Xiangyi Chen", "Mingyi Hong", "Zhiwei Steven Wu", "Arindam Banerjee" ], "title": "Private stochastic nonconvex optimization: Adaptive algorithms and tighter generalization bounds", "venue": "arXiv preprint arXiv:2006.13501,", "year": 2020 }, { "authors": [ "Yingxue Zhou", "Zhiwei Steven Wu", "Arindam Banerjee" ], "title": "Bypassing the ambient dimension: Private SGD with gradient subspace identification", "venue": "arXiv preprint arXiv:2007.03813,", "year": 2020 }, { "authors": [ "Yuqing Zhu", "Xiang Yu", "Manmohan Chandraker", "Yu-Xiang Wang" ], "title": "Private-kNN: Practical differential privacy for computer vision", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Coates", "Ng" ], "title": "achieve above 80% test accuracy on CIFAR-10 with linear models trained on top of a dictionary of features extracted from a mixture of image patches. Their approach relies on a combination of many ‘tricks”, including data normalization, data whitening, tweaks to standard Gaussian-Mixture-Model (GMM) algorithms, feature selection", "venue": null, "year": 2012 }, { "authors": [ "B DP-SGD" ], "title": "RDP AND PRIVATE DATA NORMALIZATION Throughout this work, we use the DP-SGD algorithm of Abadi et al. 
(2016): Algorithm 1: DP-SGD (Abadi et al., 2016) input :Data", "venue": null, "year": 2016 }, { "authors": [ "activations", "which Papernot" ], "title": "2020b) found to outperform the more common ReLU activations", "venue": null, "year": 2020 }, { "authors": [ "Data Norm" ], "title": "σnorm = 8) C.7 PRIVATE LEARNING ON LARGER DATASETS For the experiment in Section 5.1, we use an additional 500K images from the Tiny Images dataset (Torralba et al., 2008), which were collected and labeled by Carmon et al. (2019) using a pre-trained CIFAR-10 classifier", "venue": "(see (Carmon et al.,", "year": 2019 }, { "authors": [ "Yu" ], "title": "2019a) show that DP-SGD can achieve higher utility than many of these approaches, both asymptotically and empirically. Here, we take a closer look at the “Privacy Amplification by Iteration” work of (Feldman et al., 2018)", "venue": null, "year": 2018 }, { "authors": [ "man" ], "title": "2018) requires less noise than DPSGD only for very small or very large privacy budgets", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning (ML) models have been successfully applied to the analysis of sensitive user data such as medical images (Lundervold & Lundervold, 2019), text messages (Chen et al., 2019) or social media posts (Wu et al., 2016). Training these ML models under the framework of differential privacy (DP) (Dwork et al., 2006b; Chaudhuri et al., 2011; Shokri & Shmatikov, 2015; Abadi et al., 2016) can protect deployed classifiers against unintentional leakage of private training data (Shokri et al., 2017; Song et al., 2017; Carlini et al., 2019; 2020).\nYet, training deep neural networks with strong DP guarantees comes at a significant cost in utility (Abadi et al., 2016; Yu et al., 2020; Bagdasaryan et al., 2019; Feldman, 2020). In fact, on many ML benchmarks the reported accuracy of private deep learning still falls short of “shallow” (non-private) techniques. For example, on CIFAR-10, Papernot et al. (2020b) train a neural network to 66.2% accuracy for a large DP budget of ε = 7.53, the highest accuracy we are aware of for this privacy budget. Yet, without privacy, higher accuracy is achievable with linear models and non-learned “handcrafted” features, e.g., (Coates & Ng, 2012; Oyallon & Mallat, 2015). This leads to the central question of our work:\nCan differentially private learning benefit from handcrafted features?\nWe answer this question affirmatively by introducing simple and strong handcrafted baselines for differentially private learning, that significantly improve the privacy-utility guarantees on canonical vision benchmarks.\nOur contributions. We leverage the Scattering Network (ScatterNet) of Oyallon & Mallat (2015)— a non-learned SIFT-like feature extractor (Lowe, 1999)—to train linear models that improve upon the privacy-utility guarantees of deep learning on MNIST, Fashion-MNIST and CIFAR-10 (see Table 1). For example, on CIFAR-10 we exceed the accuracy reported by Papernot et al. 
(2020b) while simultaneously improving the provable DP-guarantee by 130×. On MNIST, we match the privacy-utility guarantees obtained with PATE (Papernot et al., 2018) without requiring access to any public data. We find that privately training deeper neural networks on handcrafted features also significantly improves over end-to-end deep learning, and even slightly exceeds the simpler linear models on CIFAR-10. Our results show that private deep learning remains outperformed by handcrafted priors on many tasks, and thus has yet to reach its “AlexNet moment” (Krizhevsky et al., 2012).\nWe find that models with handcrafted features outperform end-to-end deep models, despite having more trainable parameters. This is counter-intuitive, as the guarantees of private learning degrade\nwith dimensionality in the worst case (Bassily et al., 2014).1 We explain the benefits of handcrafted features by analyzing the convergence rate of non-private gradient descent. First, we observe that with low enough learning rates, training converges similarly with or without privacy (both for models with and without handcrafted features). Second, we show that handcrafted features significantly boost the convergence rate of non-private learning at low learning rates. As a result, when training with privacy, handcrafted features lead to more accurate models for a fixed privacy budget.\nConsidering these results, we ask: what is the cost of private learning’s “AlexNet moment”? That is, which additional resources do we need in order to outperform our private handcrafted baselines? Following McMahan et al. (2018), we first consider the data complexity of private end-to-end learning. On CIFAR-10, we use an additional 500,000 labeled Tiny Images from Carmon et al. (2019) to show that about an order of magnitude more private training data is needed for end-to-end deep models to outperform our handcrafted features baselines. 
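A back-of-the-envelope calculation (ours, using a crude first-order proxy for the subsampled Gaussian RDP cost, roughly α·q²/σ² per step with sampling rate q = B/N) illustrates why more data helps: at a fixed budget, noise scale and batch size, the affordable number of training epochs grows linearly with the dataset size N.

```python
# Crude, hedged back-of-the-envelope: the per-step privacy cost of the
# subsampled Gaussian mechanism scales like alpha * q^2 / sigma^2 (up to
# constants), with sampling rate q = B / N.  At a fixed total budget,
# the affordable number of steps is then T ~ 1/q^2, so the number of
# epochs E = T * B / N grows linearly in N.

def affordable_epochs(n, batch_size, sigma, budget, alpha=10.0):
    q = batch_size / n                      # Poisson sampling rate
    eps_per_step = alpha * q**2 / sigma**2  # crude per-step cost proxy
    steps = budget / eps_per_step           # steps until the budget is spent
    return steps * batch_size / n           # convert steps to epochs

e_small = affordable_epochs(n=50_000, batch_size=8192, sigma=3.0, budget=3.0)
e_large = affordable_epochs(n=500_000, batch_size=8192, sigma=3.0, budget=3.0)
# Ten times more data buys ten times more epochs at the same budget/noise.
assert abs(e_large / e_small - 10.0) < 1e-6
```

This is only a scaling heuristic (the constants above are placeholders, not the accountant's values), but it matches the qualitative behavior reported in Section 5.1 and in McMahan et al. (2018).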
The high sample-complexity of private deep learning could be detrimental for tasks that cannot leverage “internet-scale” data collection (e.g., most medical applications).
We further consider private learning with access to public data from a similar domain. In this setting, handcrafted features can be replaced by features learned from public data via transfer learning (Razavian et al., 2014). While differentially private transfer learning has been studied in prior work (Abadi et al., 2016; Papernot et al., 2020a), we find that its privacy-utility guarantees have been underestimated. We revisit these results and show that with transfer learning, strong privacy comes at only a minor cost in accuracy. For example, given public unlabeled ImageNet data, we train a CIFAR-10 model to 92.7% accuracy for a DP budget of ε = 2.
Our work demonstrates that higher quality features—whether handcrafted or transferred from public data—are of paramount importance for improving the performance of private classifiers in low (private) data regimes.
Code to reproduce our experiments is available at https://github.com/ftramer/Handcrafted-DP." }, { "heading": "2 STRONG SHALLOW BASELINES FOR DIFFERENTIALLY PRIVATE LEARNING", "text": "We consider the standard central model of differential privacy (DP): a trusted party trains an ML model f on a private dataset D ∈ D, and publicly releases the model. The learning algorithm A satisfies (ε, δ)-differential privacy (Dwork et al., 2006a), if for any datasets D, D′ that differ in one record, and any set of models S:
Pr[A(D) ∈ S] ≤ e^ε · Pr[A(D′) ∈ S] + δ.
1A number of recent works have attempted to circumvent this worst-case dimensionality dependence by leveraging the empirical observation that model gradients lie in a low-dimensional subspace (Kairouz et al., 2020; Zhou et al., 2020b).
DP bounds an adversary’s ability to infer information about any individual training point from the model. 
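To make this guarantee concrete, a standard consequence of (ε, δ)-DP (a known fact, not stated in the paper) is that any membership-inference test with false-positive rate FPR can achieve a true-positive rate of at most e^ε · FPR + δ:

```python
import math

def max_tpr(eps, delta, fpr):
    """Upper bound on a membership-inference attacker's true-positive
    rate at a given false-positive rate, implied by (eps, delta)-DP."""
    return min(1.0, math.exp(eps) * fpr + delta)

# At a budget of eps = 3, delta = 1e-5, an attacker operating at 0.1%
# false positives can succeed on at most ~2% of true members.
bound = max_tpr(eps=3.0, delta=1e-5, fpr=0.001)
assert bound < 0.021
```

The bound follows directly from applying the DP inequality to the attacker's acceptance region; it is only informative at small false-positive rates, since e^ε quickly pushes the bound to 1.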
Cryptography can split the trust in a central party across users (Jayaraman et al., 2018; Bonawitz et al., 2017).\nPrior work has trained private deep neural networks “end-to-end” (e.g., from image pixels), with large losses in utility (Shokri & Shmatikov, 2015; Abadi et al., 2016; Papernot et al., 2020b). In contrast, we study the benefits of handcrafted features that encode priors on the learning task’s public domain (e.g., edge detectors for images). Although end-to-end neural networks outperform such features in the non-private setting, our thesis is that handcrafted features result in an easier learning task that is more amenable to privacy. We focus on computer vision, a canonical domain for private deep learning (Abadi et al., 2016; Yu et al., 2019b; Papernot et al., 2020b; Nasr et al., 2020)), with a rich literature on handcrafted features (Lowe, 1999; Dalal & Triggs, 2005; Bruna & Mallat, 2013). Our approach can be extended to handcrafted features in other domains, e.g., text or speech." }, { "heading": "2.1 SCATTERING NETWORKS", "text": "We use the Scattering Network (ScatterNet) of Oyallon & Mallat (2015), a feature extractor that encodes natural image priors (e.g., invariance to small rotations and translations) using a cascade of wavelet transforms (Bruna & Mallat, 2013). As this cascade of transforms is data independent, we can obtain a differentially private classifier by privately fine-tuning a (linear) model on top of locally extracted features. In Appendix A, we discuss other candidate “non-deep” approaches that we believe to be less suitable for differentially private learning.\nWe use the default parameters in (Oyallon & Mallat, 2015), a ScatterNet S(x) of depth two with wavelets rotated along eight angles. For images of size H ×W , this network extracts features of dimension (K,H/4,W/4), with K = 81 for grayscale images, and K = 243 for RGB images. Note that the transform is thus expansive. More details on ScatterNets are in Appendix C.1." 
}, { "heading": "2.2 DIFFERENTIALLY PRIVATE SCATTERNET CLASSIFIERS", "text": "To train private classifiers, we use the DP-SGD algorithm2 of Abadi et al. (2016) (see Appendix B). DP-SGD works as follows: (1) batches of expected size B are sampled at random;3 (2) gradients are clipped to norm C; (3) Gaussian noise of variance σ2C2/B2 is added to the mean gradient. DP-SGD guarantees privacy for gradients, and is thus oblivious to preprocessing applied independently to each data sample, such as the ScatterNet transform.\nWhen training a supervised classifier on top of ScatterNet features with gradient descent, we find that normalizing the features is crucial to obtain strong performance. We consider two approaches:\n• Group Normalization (Wu & He, 2018): the channels of S(x) are split into G groups, and each is normalized to zero mean and unit variance. Data points are normalized independently so this step incurs no privacy cost.\n• Data Normalization: the channels of S(x) are normalized by their mean and variance across the training data. This step incurs a privacy cost as the per-channel means and variances need to be privately estimated.\nTable 2 shows that normalization significantly accelerates convergence of non-private linear models trained on ScatterNet features, for MNIST, Fashion-MNIST and CIFAR-10. For CIFAR-10, Data\n2Yu et al. (2019a) show that DP-SGD outperforms other algorithms for private convex optimization, e.g., logistic regression with output or objective perturbation (Chaudhuri et al., 2011; Bassily et al., 2014; Kifer et al., 2012). In Appendix D.3, we show that DP-SGD also outperforms Privacy Amplification by Iteration (Feldman et al., 2018) in our setting.\n3Existing DP-SGD implementations (tensorflow/privacy, 2019; pytorch/opacus, 2020) and many prior works (e.g., (Abadi et al., 2016; Papernot et al., 2020b)) heuristically split the data into random batches of size exactly B. 
We use the same heuristic and show in Appendix D.4 that using the correct batch sampling does not affect our results.\nNormalization performs significantly better than Group Normalization, so the small privacy cost of estimating channel statistics is warranted. While the maximal test accuracy of these models falls short of state-of-the-art CNNs, it exceeds all previously reported results for differentially private neural networks (even for large privacy budgets)." }, { "heading": "3 EVALUATING PRIVATE SCATTERNET CLASSIFIERS", "text": "We compare differentially private ScatterNet classifiers and deep learning models on MNIST (LeCun et al., 2010), Fashion-MNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009). Many prior works have reported improvements over the DP-SGD procedure of Abadi et al. (2016) for these datasets. As we will show, ScatterNet classifiers outperform all prior approaches while making no algorithmic changes to DP-SGD. ScatterNet classifiers can thus serve as a strong canonical baseline for evaluating proposed improvements over DP-SGD in the future." }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "Most prior works find the best model for a given DP budget using a hyper-parameter search. As the private training data is re-used many times, this overestimates the privacy guarantees. Private hyper-parameter search is possible at a small cost in the DP budget (Liu & Talwar, 2019), but we argue that fully accounting for this privacy leakage is hard as even our choices of architectures, optimizers, hyper-parameter ranges, etc. are informed by prior analysis of the same data. As in prior work, we thus do not account for this privacy leakage, and instead compare ScatterNet models and end-to-end CNNs with similar hyper-parameter searches. Moreover, we find that ScatterNet models are very robust to hyper-parameter changes and achieve near-optimal utility with random hyper-parameters (see Table 3). 
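Before detailing the search, recall the DP-SGD update from Section 2.2 (sample a batch, clip each per-sample gradient to norm C, add Gaussian noise to the average); a minimal stdlib-only sketch of one such update (ours, not the opacus implementation):

```python
import math, random

def l2(v):
    return math.sqrt(sum(x * x for x in v))

def dp_sgd_step(params, per_sample_grads, lr, C, sigma, B, rng):
    """One DP-SGD update: clip each per-sample gradient to L2 norm C,
    sum, add N(0, sigma^2 C^2) noise per coordinate, average, and step."""
    d = len(params)
    total = [0.0] * d
    for g in per_sample_grads:
        scale = min(1.0, C / max(l2(g), 1e-12))   # clip to norm C
        for j in range(d):
            total[j] += scale * g[j]
    noisy = [(total[j] + rng.gauss(0.0, sigma * C)) / B for j in range(d)]
    return [params[j] - lr * noisy[j] for j in range(d)]

rng = random.Random(0)
params = [0.0] * 5
grads = [[rng.gauss(0, 1) for _ in range(5)] for _ in range(4)]
new_params = dp_sgd_step(params, grads, lr=0.1, C=0.1, sigma=1.0, B=4, rng=rng)
assert len(new_params) == 5
```

Because clipping caps each sample's contribution at C, the Gaussian noise of scale σC added to the sum suffices to mask any single example, which is what the RDP accountant analyzes.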
To evaluate ScatterNet models, we apply the following hyper-parameter search:\n• We begin by fixing a privacy schedule. We target a moderate differential privacy budget of (ε = 3, δ = 10−5) and compute the noise scale σ of DP-SGD so that the privacy budget is consumed after T epochs. We try different values of T , with larger values resulting in training for more steps but with higher noise.\n• We fix the gradient clipping threshold for DP-SGD to C = 0.1 for all our experiments. Thakkar et al. (2019) suggest to vary this threshold adaptively, but we did not observe better performance by doing so.\n• We try various batch sizes B and base learning rates η, with linear learning rate scaling (Goyal et al., 2017).4\n• We try both Group Normalization (Wu & He, 2018) with different choices for the number of groups, and private Data Normalization with different choices of privacy budgets (see Appendix B for details).\nWe perform a grid-search over all parameters as detailed in Appendix C.5. We compare our ScatterNet classifiers to the CNN models of Papernot et al. (2020b) (see Appendix C.2), which achieve the\n4Our decision to try various batch sizes is inspired by Abadi et al. (2016) who found that this parameter has a large effect on the performance of DP-SGD. Yet, in Appendix D.1 we show empirically, and argue formally that with a linear learning rate scaling (Goyal et al., 2017), DP-SGD performs similarly for a range of batch sizes. As a result, we recommend following the standard approach for tuning non-private SGD, wherein we fix the batch size and tune the learning rate.\nhighest reported accuracy for our targeted privacy budget for all three datasets. We also perform a grid-search for these models, which reproduces the results of Papernot et al. (2020b). 
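The claim in footnote 4, that DP-SGD with linear learning-rate scaling behaves similarly across batch sizes, can be seen at the level of a single update's noise scale (a sketch of the intuition only; the paper's formal argument is in its Appendix D.1):

```python
# With linear scaling (Goyal et al., 2017) the learning rate is
# eta = eta0 * B.  The Gaussian noise DP-SGD injects into one parameter
# update then has standard deviation eta * sigma * C / B = eta0 * sigma * C,
# which is independent of the batch size B.

def update_noise_std(batch_size, eta0, sigma, C):
    eta = eta0 * batch_size            # linear learning-rate scaling
    return eta * sigma * C / batch_size

stds = {B: update_noise_std(B, eta0=0.25, sigma=2.0, C=0.1)
        for B in (256, 1024, 8192)}
assert len(set(stds.values())) == 1    # identical for all batch sizes
```

This is why, as recommended above, one can fix the batch size and tune only the learning rate, as for non-private SGD.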
We use the ScatterNet implementation from Kymatio (Andreux et al., 2020), and the DP-SGD implementation in opacus (pytorch/opacus, 2020) (formerly called pytorch-dp).\nWe use a NVIDIA Titan Xp GPU with 12GB of RAM for all our experiments. To run DP-SGD with large batch sizes B, we use the “virtual batch” approach of opacus: the average of clipped gradients is accumulated over multiple “mini-batches”; once B gradients have been averaged, we add noise and take a gradient update step. Code to reproduce our experiments is available at https://github.com/ftramer/Handcrafted-DP." }, { "heading": "3.2 RESULTS", "text": "To measure a classifier’s accuracy for a range of privacy budgets, we compute the test accuracy as well as the DP budget ε after each training epoch (with the last epoch corresponding to ε = 3). For various DP budgets (ε, δ = 10−5) used in prior work, Table 1 shows the maximal test accuracy achieved by a linear ScatterNet model in our hyper-parameter search, averaged over five runs. We also report results with CNNs trained on ScatterNet models, which are described in more detail below. Figure 1 further compares the full privacy-accuracy curves of our ScatterNets and of the CNNs of Papernot et al. (2020b). Linear models with handcrafted features significantly outperform prior results with end-to-end CNNs, for all privacy budgets ε ≤ 3 we consider. Even when prior work reports results for larger budgets, they do not exceed the accuracy of our baseline.\nIn particular, for CIFAR-10, we match the best CNN accuracy in (Papernot et al., 2020b)—namely 66.2% for a budget of ε = 7.53—with a much smaller budget of ε = 2.6. This is an improvement in the DP-guarantee of e4.9 ≈ 134. On MNIST, we significantly improve upon CNN models, and match the results of PATE (Papernot et al., 2018), namely 98.5% accuracy at ε = 1.97, in a more restricted setting (PATE uses 5,000 public unlabeled MNIST digits). 
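As an implementation aside, the “virtual batch” accumulation described above can be sketched as follows (a simplified 1-D illustration of the idea, not the opacus code): clipped per-sample gradients are accumulated across mini-batches, and noise is added only once B gradients have been collected.

```python
import random

def virtual_batch_updates(sample_grads, mini_batch, B, C, sigma, rng):
    """Accumulate clipped 1-D gradients over mini-batches; once B samples
    have been accumulated, add noise once and emit a noisy average."""
    acc, count, updates = 0.0, 0, []
    for i in range(0, len(sample_grads), mini_batch):
        for g in sample_grads[i:i + mini_batch]:
            acc += g * min(1.0, C / max(abs(g), 1e-12))   # clip
            count += 1
        if count >= B:                                    # logical batch full
            updates.append((acc + rng.gauss(0.0, sigma * C)) / B)
            acc, count = 0.0, 0
    return updates

rng = random.Random(0)
grads = [rng.gauss(0, 1) for _ in range(4096)]
ups = virtual_batch_updates(grads, mini_batch=128, B=1024, C=0.1, sigma=1.0, rng=rng)
assert len(ups) == 4   # 4096 samples / logical batch of 1024 -> 4 noisy steps
```

The privacy accounting only sees the logical batch size B, while the mini-batch size is chosen to fit per-sample gradients in GPU memory.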
In Appendix C.5, we provide the hyperparameters that result in the highest test accuracy for our target DP budget of (ε = 3, δ = 10−5). We did not consider larger privacy budgets for ScatterNet classifiers, as the accuracy we achieve at ε = 3 is close to the accuracy of non-private ScatterNet models (see Table 2).\nAs noted above, our models (and those of most prior work) are the result of a hyper-parameter search. While we do not account for the privacy cost of this search, Table 3 shows that an additional advantage of ScatterNet classifiers is an increased robustness to hyper-parameter changes. In particular, for CIFAR-10 the worst configuration for linear ScatterNet classifiers outperforms the best configuration for end-to-end CNNs. Moreover, on MNIST and Fashion-MNIST, the median accuracy of linear ScatterNet models outperforms the best end-to-end CNN.\nTraining CNNs on Handcrafted Features. Since linear models trained on handcrafted features outperform the privacy-utility guarantees of deep models trained end-to-end, a natural question is whether training deeper models on these features achieves even better results. We repeat the above experiment with a similar CNN model trained on ScatterNet features (see Appendix C.2). The privacy-accuracy curves for these models are in Figure 2. We find that handcrafted features also improve the utility of private deep models, a phenomenon which we analyze and explain in Section 4. On CIFAR-10, the deeper ScatterNet models even slightly outperform the linear models, while for MNIST and Fashion-MNIST the linear models perform best. This can be explained by the fact that\nin the non-private setting, linear ScatterNet models achieve close to state-of-the-art accuracy on MNIST and Fashion-MNIST, and thus there is little room for improvement with deeper models (see Table 11). 
Table 3 further shows that ScatterNet CNNs are also less sensitive to hyper-parameters than end-to-end CNNs.\nNote that on each dataset we consider, end-to-end CNNs can outperform ScatterNet models when trained without privacy. Thus, end-to-end CNNs trained with DP-SGD must eventually surpass ScatterNet models for large enough privacy budgets. But this currently requires settling for weak provable privacy guarantees. On CIFAR-10 for example, ScatterNet classifiers still outperform end-to-end CNNs for ε = 7.53 (Papernot et al., 2020b). While the analysis of DP-SGD might not be tight, Jagielski et al. (2020) suggest that the true ε guarantee of DP-SGD is at most one order of magnitude smaller than the current analysis suggests. Thus, surpassing handcrafted features for small privacy budgets on CIFAR-10 may require improvements beyond a tighter analysis of DP-SGD." }, { "heading": "4 HOW DO HANDCRAFTED FEATURES HELP?", "text": "In this section, we analyze why private models with handcrafted features outperform end-to-end CNNs. We first consider the dimensionality of our models, but show that this does not explain the utility gap. Rather, we find that the higher accuracy of ScatterNet classifiers is due to their faster convergence rate when trained without noise.\nSmaller models are not easier to train privately. The utility of private learning typically degrades as the model’s dimensionality increases (Chaudhuri et al., 2011; Bassily et al., 2014). This is also the case with DP-SGD which adds Gaussian noise, of scale proportional to the gradients, to each model parameter. We thus expect smaller models to be easier to train privately. Yet, as we see from Table 4, for MNIST and Fashion-MNIST the linear ScatterNet model has more parameters than the CNNs. For CIFAR-10, the end-to-end CNN we used is larger, so we repeat the experiment from Section 3 with a CNN of comparable size to the ScatterNet classifiers (see Appendix D.5). 
This has a minor effect on the performance of the CNN. Thus, the dimensionality of ScatterNet classifiers fails to explain their better performance.\nModels with handcrafted features converge faster without privacy. DP-SGD typically requires a smaller learning rate than noiseless (clipped) SGD, so that the added noise gets averaged out over small steps. We indeed find that the optimal learning rate when training with DP-SGD is an order of magnitude lower than the optimal learning rate for training without noise addition (with gradients clipped to the same norm in both cases).\nTable 4: Number of trainable parameters of our models. For CIFAR-10, we consider two different end-to-end CNN architectures (see Appendix C.2), the smaller of which has approximately as many parameters as the linear ScatterNet model.\nMNIST & Fashion-MNIST CIFAR-10 ScatterNet+Linear 40K 155K ScatterNet+CNN 33K 187K CNN 26K 551K / 168K\n0 20 40 60\nEpochs\n0\n20\n40\n60\n80\n100\nT ra\nin A\ncc ur\nac y\n(% )\nLow LR (η = 0.25)\nScatterNet+Linear ScatterNet+CNN CNN No Noise\n0 20 40 60\nEpochs\n0\n20\n40\n60\n80\n100\nHigh LR (η = 4.0)\nFigure 3: Convergence of DP-SGD with and without noise on CIFAR-10, for ScatterNet classifiers and end-to-end CNNs. (Left): low learning rate. (Right): high learning rate.\nTo understand the impact of gradient noise on the learning process, we conduct the following experiment: we select a low learning rate that is near-optimal for training models with gradient noise, and a high learning rate that is near-optimal for training without noise. For both learning rates, we train CIFAR-10 models both with and without noise (with gradient clipping in all cases). Figure 3 shows that with a high learning rate, all classifiers converge rapidly when trained without noise, but gradient noise vastly degrades performance. With a low learning rate however, training converges similarly whether we add noise or not. 
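The learning-rate effect in Figure 3 can be reproduced in miniature with a toy experiment (ours, not from the paper): noisy gradient descent on a one-dimensional quadratic, where small steps average the injected noise out and large steps amplify it.

```python
import random

def noisy_gd_final_error(eta, steps=5000, seed=1):
    """Noisy GD on f(t) = t^2 / 2, i.e. t <- t - eta * (t + noise);
    returns the average loss-proxy t^2 over the last 1000 steps."""
    rng = random.Random(seed)
    t = 5.0
    errs = []
    for s in range(steps):
        t -= eta * (t + rng.gauss(0.0, 1.0))
        if s >= steps - 1000:
            errs.append(t * t)
    return sum(errs) / len(errs)

low, high = noisy_gd_final_error(0.01), noisy_gd_final_error(1.0)
assert low < high   # small steps average the noise out; large steps amplify it
```

For this quadratic, the stationary error of the iterates scales like η/(2 − η), so shrinking the step size suppresses the noise floor, at the cost of slower convergence of the noiseless part of the dynamics.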
What distinguishes the ScatterNet models is the faster convergence rate of noiseless SGD. The experimental setup and similar qualitative results on MNIST and Fashion-MNIST are in Appendix C.6. Thus, we find that handcrafted features are beneficial for private learning because they result in a simpler learning task where training converges rapidly even with small update steps. Our analysis suggests two avenues towards obtaining higher accuracy with private deep learning:\n• Faster convergence: Figure 3 suggests that faster convergence of non-private training could translate to better private learning. DP-SGD with adaptive updates (e.g., Adam (Kingma & Ba, 2015)) indeed sometimes leads to small improvements (Papernot et al., 2020b; Chen & Lee, 2020; Zhou et al., 2020a). Investigating private variants of second-order optimization methods is an interesting direction for future work.\n• More training steps (a.k.a more data): For a fixed DP-budget ε and noise scale σ, increasing the training set size N allows for running more steps of DP-SGD (McMahan et al., 2018). In Section 5.1, we investigate how the collection of additional private data impacts the utility of private end-to-end models." }, { "heading": "5 TOWARDS BETTER PRIVATE DEEP LEARNING", "text": "We have shown that on standard vision tasks, private learning strongly benefits from handcrafted features. Further improving our private baselines seems hard, as they come close to the maximal accuracy of ScatterNet models (see Table 2). We thus turn to other avenues for obtaining stronger privacy-utility guarantees. We focus on CIFAR-10, and discuss two natural paths towards better private models: (1) access to a larger private training set, and (2) access to a public image dataset from a different distribution (some works also consider access to public unlabeled data from the same distribution as the private data (Papernot et al., 2017; 2018; Zhu et al., 2020))." 
}, { "heading": "5.1 IMPROVING PRIVACY BY COLLECTING MORE DATA", "text": "We first analyze the benefits of additional private labeled data on the utility of private models. Since the privacy budget consumed by DP-SGD scales inversely with the size of the training data N , collecting more data allows either to train for more steps, or to lower the amount of noise added per step—for a fixed DP budget ε.\nTo obtain a larger dataset comparable to CIFAR-10, we use 500K pseudo-labeled Tiny Images5 (Torralba et al., 2008) collected by Carmon et al. (2019).6 We then train private models on subsets of size 10,000 ≤ N ≤ 550,000 from this dataset. Figure 4 reports the highest test accuracy achieved for a privacy budget of (ε = 3, δ = 1/2N) (see Appendix C.7 for the experimental setup). We find that we need about an order-of-magnitude increase in the size of the private training dataset in order for end-to-end CNNs to outperform ScatterNet features. As we show in Appendix C.7, larger datasets allow DP-SGD to be run for more steps at a fixed privacy budget and noise level (as also observed in (McMahan et al., 2018))—thereby overcoming the slow convergence rate we uncovered in Section 4. While the increased sample complexity of private deep learning might be viable for “internet-scale” applications (e.g., language modeling across mobile devices), it is detrimental for sensitive applications with more stringent data collection requirements, such as in healthcare." }, { "heading": "5.2 TRANSFER LEARNING: BETTER FEATURES FROM PUBLIC DATA", "text": "Transfer learning is a natural candidate for privacy-preserving computer vision, as features learned on public image data often significantly outperform handcrafted features (Razavian et al., 2014). We first consider transfer learning from CIFAR-100 to CIFAR-10, where the labeled CIFAR-100 data is assumed public. We extract features from the penultimate layer of a ResNeXt (Xie et al., 2017) model trained on CIFAR-100. 
A non-private linear model trained on these features achieves 84% accuracy on CIFAR-10. When training linear models with DP-SGD, we get the privacy-utility curve in Figure 5 (see Appendix C.8 for details). We reach an accuracy of 80.0% at a budget of (ε = 2, δ = 10−5), a significant improvement over prior work for the same setting and privacy budget, e.g., 67% accuracy in (Abadi et al., 2016) and 72% accuracy in (Papernot et al., 2020a). The large gap between our results and prior work is mainly attributed to a better choice of source model (e.g., the transfer learning setup in (Papernot et al., 2020a) achieves 75% accuracy on CIFAR-10 in the non-private setting). Mirroring the work of Kornblith et al. (2019) on non-private transfer learning, we thus find that the heuristic rule “better models transfer better” also holds with differential privacy.\n5The Tiny Images dataset has been withdrawn after the discovery of offensive class labels (Prabhu & Birhane, 2020). The subset used by Carmon et al. (2019) is filtered to match the CIFAR-10 labels, and is thus unlikely to contain offensive content.\n6The privacy guarantees obtained with this dataset could be slightly overestimated, as the pseudo-labels of Carmon et al. (2019) are obtained using a model pre-trained on CIFAR-10, thus introducing dependencies between private data points.\nWe further consider access to a public dataset of unlabeled images. We extract features from the penultimate layer of a SimCLR model (Chen et al., 2020a) trained on unlabeled ImageNet. A nonprivate linear model trained on these features achieves 95% accuracy on CIFAR-10 (using labeled ImageNet data marginally improves non-private transfer learning to CIFAR-10 (Chen et al., 2020a)). With the same setup as for CIFAR-100 (see Appendix C.8), we train a linear model to 92.7% accuracy for a DP budget of (ε = 2, δ = 10−5) (see Figure 5)." 
}, { "heading": "6 CONCLUSION AND OPEN PROBLEMS", "text": "We have demonstrated that differentially private learning benefits from “handcrafted” features that encode priors on the learning task’s domain. In particular, we have shown that private ScatterNet classifiers outperform end-to-end CNNs on MNIST, Fashion-MNIST and CIFAR-10. We have further found that handcrafted features can be surpassed when given access to more data, either a larger private training set, or a public dataset from a related domain. In addition to introducing strong baselines for evaluating future improvements to private deep learning and DP-SGD, our work suggests a number of open problems and directions for future work:\nImproving DP by accelerating convergence: Our analysis in Section 4 shows that a limiting factor of private deep learning is the slow convergence rate of end-to-end deep models. While the existing literature on second-order optimization for deep learning has mainly focused on improving the overall wall-clock time of training, it suffices for DP to reduce the number of private training steps—possibly at an increase in computational cost.\nFederated learning: While we have focused on a standard centralized setting for DP, our techniques can be extended to decentralized training schemes such as Federated Learning (McMahan et al., 2017; Bonawitz et al., 2017; Kairouz et al., 2019). DP has been considered for Federated Learning (Geyer et al., 2017; McMahan et al., 2018), but has also been found to significantly degrade performance in some settings (Yu et al., 2020).\nHandcrafted features for ImageNet and non-vision domains: To our knowledge, there have not yet been any attempts to train ImageNet models with DP-SGD, partly due to the cost of computing per-sample gradients. While linear classifiers are unlikely to be competitive on ImageNet, handcrafted features can also help private learning by accelerating the convergence of CNNs, as we have shown in Figure 2. 
Notably, Oyallon et al. (2018) match the (non-private) accuracy of AlexNet (Krizhevsky et al., 2012) on ImageNet with a small six-layer CNN trained on ScatterNet features. Another interesting direction is to extend our results to domains beyond vision, e.g., with handcrafted features for text (Manning & Schutze, 1999) or speech (Andén & Mallat, 2014)." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank: Mani Malek, Ilya Mironov, Vaishaal Shankar and Ludwig Schmidt for fruitful discussions about differential privacy and computer vision baselines, and comments on early drafts of this paper; Nicolas Papernot and Shuang Song for helping us reproduce the results in (Papernot et al., 2020b); Nicolas Papernot for comments on early drafts of this paper; Edouard Oyallon for enlightening discussions about Scattering networks." }, { "heading": "A WHY SCATTERNETS?", "text": "In this paper, we propose to use the ScatterNet features of Oyallon & Mallat (2015) as a basis for shallow differentially private vision classifiers. We briefly discuss a number of other shallow approaches that produce competitive results for canonical vision tasks, but which appear less suitable for private learning.\nUnsupervised feature dictionaries. Coates & Ng (2012) achieve above 80% test accuracy on CIFAR-10 with linear models trained on top of a dictionary of features extracted from a mixture of image patches. Their approach relies on a combination of many ‘tricks”, including data normalization, data whitening, tweaks to standard Gaussian-Mixture-Model (GMM) algorithms, feature selection, etc. While it is conceivable that each of these steps could be made differentially private, we opt here for a much simpler unlearned baseline that is easier to analyze and to apply to a variety of different tasks. 
We note that existing work on differentially-private learning of mixtures (e.g., Nissim et al., 2007) has mainly focused on asymptotic guarantees, and we are not aware of any existing algorithms that have been evaluated on high-dimensional datasets such as CIFAR-10.
Kernel Machines. Recent work on Neural Tangent Kernels (Jacot et al., 2018) has shown that the performance of deep neural networks on CIFAR-10 could be matched by specialized kernel methods (Li et al., 2019; Arora et al., 2020; Shankar et al., 2020). Unfortunately, private learning with non-linear kernels is intractable in general (Chaudhuri et al., 2011; Rubinstein et al., 2012). Chaudhuri et al. (2011) propose to obtain private classifiers by approximating kernels using random features (Rahimi & Recht, 2008), but the very high dimensionality of the resulting learning problem makes it challenging to outperform our handcrafted features baseline. Indeed, we had originally considered a differentially-private variant of the random-feature CIFAR-10 classifier proposed in (Recht et al., 2018), but found the model’s high dimensionality (over 10 million features) to be detrimental to private learning." }, { "heading": "B DP-SGD, RDP AND PRIVATE DATA NORMALIZATION", "text": "Throughout this work, we use the DP-SGD algorithm of Abadi et al. (2016):
Algorithm 1: DP-SGD (Abadi et al., 2016)
input: Data {x1, . . . 
,xN}, learning rate η, noise scale σ, batch size B, gradient norm bound C, epochs T\n1 Initialize θ0 randomly for t ∈ [T · N/B] do\n2 Sample a batchBt by selecting each xi independently with probability B/N 3 For each xi ∈ Bt: gt(xi)← ∇θtL(θt,xi) // compute per-sample gradients 4 g̃t(xi)← gt(xi) ·min(1,C/‖gt(xi)‖2) // clip gradients 5 g̃t ← 1B (∑ xi∈Bt g̃t(xi) +N (0, σ 2C2I) ) // add noise to average gradient with\nGaussian mechanism\n6 θt+1 ← θt − ηg̃t // SGD step output :θTN/B\nThe tightest known privacy analysis of the DP-SGD algorithm is based on the notion of Rényi differential privacy (RDP) from Mironov (2017), which we recall next.\nDefinition B.1 (Rényi Divergence). For two probability distributions P and Q defined over a range R, the Rényi divergence of order α > 1 is\nDα(P‖Q) := 1\nα− 1 log Ex∼Q\n( P (x)\nQ(x)\n)α .\nDefinition B.2 ((α, ε)-RDP (Mironov, 2017)). A randomized mechanism f : D → R is said to have ε-Rényi differential privacy of order α, or (α, ε)-RDP for short, if for any adjacent D,D′ ∈ D it holds that\nDα(f(D)‖f(D′)) ≤ ε .\nTo analyze the privacy guarantees of DP-SGD, we numerically compute Dα(f(D)‖f(D′)) for a range of orders α (Mironov et al., 2019; Wang et al., 2019) in each training step, where D and D′ are training sets that differ in a single element. To obtain privacy guarantees for t training steps, we use the composition properties of RDP:\nLemma B.3 (Adaptive composition of RDP (Mironov et al., 2019)). Let f : D → R1 be (α, ε1)-RDP and g : R1 × D → R2 be (α, ε2)-RDP, then the mechanism defined as (X,Y ), where X ∼ f(D) and Y ∼ g(X,D), satisfies (α, ε1 + ε2)-RDP.\nFinally, the RDP guarantees of the full DP-SGD procedure can be converted into a (ε, δ)-DP guarantee:\nLemma B.4 (From RDP to (ε, δ)-DP (Mironov et al., 2019)). If f is an (α, ε)-RDP mechanism, it also satisfies (ε+ log\n1/δ α−1 , δ)-DP for any 0 < δ < 1.\nPrivate Data Normalization. 
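As a concrete illustration of Algorithm 1 and the RDP accounting above, the sketch below implements the clip-and-noise step (lines 4–5) and the conversion of Lemma B.4. Note the caveat: `gaussian_mech_rdp` is the basic (non-subsampled) Gaussian-mechanism RDP curve of Mironov (2017), not the tighter subsampled analysis (Mironov et al., 2019; Wang et al., 2019) used for the paper's actual accounting, and all parameter values are illustrative.

```python
import math
import random

def clipped_noisy_gradient(per_sample_grads, C, sigma, B, rng):
    """Lines 4-5 of Algorithm 1: clip each per-sample gradient to L2 norm C,
    sum, add N(0, sigma^2 C^2 I), and divide by the batch size B."""
    dim = len(per_sample_grads[0])
    total = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(gi * gi for gi in g))
        scale = min(1.0, C / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += g[i] * scale
    return [(t + rng.gauss(0.0, sigma * C)) / B for t in total]

def gaussian_mech_rdp(alpha, sigma, steps=1):
    """Non-subsampled Gaussian-mechanism RDP curve (Mironov, 2017), composed
    over `steps` releases via Lemma B.3: each step is (alpha, alpha/(2 sigma^2))-RDP."""
    return steps * alpha / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_eps_by_order, delta):
    """Lemma B.4, minimized over a grid of orders alpha:
    (alpha, eps)-RDP implies (eps + log(1/delta)/(alpha - 1), delta)-DP."""
    return min(eps + math.log(1.0 / delta) / (alpha - 1.0)
               for alpha, eps in rdp_eps_by_order.items())

rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, -0.4]]                         # L2 norms 5.0 and 0.5
g_tilde = clipped_noisy_gradient(grads, C=1.0, sigma=0.0, B=2, rng=rng)
# With sigma=0: the first gradient is rescaled to [0.6, 0.8], the second kept.

orders = [1.5, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
curve = {a: gaussian_mech_rdp(a, sigma=5.0, steps=100) for a in orders}
eps = rdp_to_dp(curve, delta=1e-5)                         # about 11.8, at alpha = 4
```

The final two lines mirror the accounting pipeline described above: track an RDP curve over a grid of orders during training, then convert once at the end.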
In order to apply Data Normalization to the ScatterNet features (which greatly improves convergence, especially on CIFAR-10), we use the PrivDataNorm procedure in Algorithm 2 to compute private estimates of the per-channel mean and variance of the ScatterNet features.
Algorithm 2: Private Data Normalization
Function PrivChannelMean(data D ∈ R^{N×K×H×W}, norm bound C, noise scale σ_norm):
1  For 1 ≤ i ≤ N: μ_i ← E_{h,w}[D_{(i,·,h,w)}] ∈ R^K   // compute per-channel means for each sample
2  μ_i ← μ_i · min(1, C/‖μ_i‖_2)   // clip each sample’s per-channel means
3  μ̃ ← E_i[μ_i] + (1/N) · N(0, σ_norm² C² I)   // private mean using Gaussian mechanism
4  return μ̃
Function PrivDataNorm(data D, norm bounds C_1, C_2, noise scale σ_norm, threshold τ):
1  μ̃ ← PrivChannelMean(D, C_1, σ_norm)   // private per-channel mean
2  μ̃_{D²} ← PrivChannelMean(D², C_2, σ_norm)   // private per-channel mean-square
3  Ṽar ← max(μ̃_{D²} − μ̃², τ)   // private per-channel variance
4  For each 1 ≤ i ≤ N: D̂_i ← (D_i − μ̃)/√Ṽar   // normalize each sample independently
5  return D̂
In order to obtain tight privacy guarantees for the full training procedure (i.e., privacy-preserving Data Normalization followed by DP-SGD), we first derive the RDP guarantees of PrivDataNorm:
Claim B.5. The PrivDataNorm procedure is (α, α/σ_norm²)-RDP for any α > 1.
The above claim follows from the RDP guarantees of the Gaussian mechanism in (Mironov, 2017), together with the composition properties of RDP in Lemma B.3 above.
Finally, given an RDP guarantee of (α, ε_1) for PrivDataNorm, and an RDP guarantee of (α, ε_2) for DP-SGD, we apply Lemma B.3 to obtain an RDP guarantee of (α, ε_1 + ε_2), and convert to a DP guarantee using Lemma B.4." }, { "heading": "C EXPERIMENTAL SETUP", "text": "" }, { "heading": "C.1 SCATTERING NETWORKS", "text": "We briefly review the scattering network (ScatterNet) of Oyallon & Mallat (2015). Consider an input x. 
The output of a scattering network of depth J is a feature vector given by
S(x) := A_J |W_2 |W_1 x|| ,   (1)
where the operators W_1 and W_2 are complex-valued wavelet transforms, each followed by a non-linear complex modulus, and the final operator A performs spatial averaging over patches of 2^J features. Both wavelet transforms W_1 and W_2 are linear operators that compute a cascade of convolutions with filters from a fixed family of wavelets. For an input image of spatial dimensions H × W, the ScatterNet is applied to each of the image’s color channels independently to yield an output tensor of dimension (K, H/2^J, W/2^J). The channel dimensionality K depends on the network depth J and the granularity of the wavelet filters, and is chosen so that K/2^{2J} = O(1) (i.e., the ScatterNet approximately preserves the data dimensionality).
For all experiments, we use the default parameters proposed by Oyallon & Mallat (2015), namely a Scattering Network of depth J = 2, consisting of wavelet filters rotated along eight angles. For an input image of spatial dimensions H × W, this configuration produces an output of dimension (K, H/4, W/4), with K = 81 for grayscale images, and K = 243 for RGB images." }, { "heading": "C.2 MODEL ARCHITECTURES", "text": "Below, we describe the ScatterNet+Linear, ScatterNet+CNN and end-to-end CNN architectures used in Section 3 and Section 4. The CNN architectures are adapted from Papernot et al. (2020b).
Linear ScatterNet Classifiers. The default Scattering Network of Oyallon & Mallat (2015) extracts feature vectors of size (81, 7, 7) for MNIST and Fashion-MNIST and of size (243, 8, 8) for CIFAR-10. We then train a standard logistic regression classifier (with per-class bias) on top of these features, as summarized below:
End-to-end CNNs. We use the CNN architectures proposed by Papernot et al. 
(2020b), which were found as a result of an architecture search tailored to DP-SGD [7]. Notably, these CNNs are quite small (since the noise of DP-SGD grows with the model’s dimensionality) and use Tanh activations, which Papernot et al. (2020b) found to outperform the more common ReLU activations. For the experiments in Section 4, we also consider a smaller CIFAR-10 model, with a dimensionality comparable to the linear ScatterNet classifier. While the standard model has six convolutional layers of size 32-32-64-64-128-128, the smaller model has five convolutional layers of size 16-16-32-32-64 (with max-pooling after the 2nd, 4th and 5th convolution).
[7] The CNN architecture for CIFAR-10 in Table 7 differs slightly from that described in (Papernot et al., 2020b). Based on discussions with the authors of (Papernot et al., 2020b), the architecture in Table 7 is the correct one to reproduce their best results.
Table 6: End-to-end CNN model for MNIST and Fashion-MNIST, with Tanh activations (Papernot et al., 2020b).
Layer           | Parameters
Convolution     | 16 filters of 8x8, stride 2, padding 2
Max-Pooling     | 2x2, stride 1
Convolution     | 32 filters of 4x4, stride 2, padding 0
Max-Pooling     | 2x2, stride 1
Fully connected | 32 units
Fully connected | 10 units
ScatterNet CNNs. To fine-tune CNNs on top of ScatterNet features, we adapt the CNNs from Table 6 and Table 7. As the ScatterNet feature vector is larger than the input image (784 → 3969 features for MNIST and Fashion-MNIST, and 3072 → 15552 features for CIFAR-10), we use smaller CNN models. For MNIST and Fashion-MNIST, we reduce the number of convolutional filters. 
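The feature dimensions quoted in Appendix C.1–C.2 and the size of the Table 6 model can be reproduced with some simple shape bookkeeping. A sanity-check sketch under standard convolution-arithmetic assumptions: the 81/243 channel counts follow the usual second-order scattering path count, and the parameter total assumes biases on every layer (it is our own estimate, consistent with the "quite small" characterization above, not a number from the paper):

```python
def scatter_channels(J, L, in_channels=1):
    """Channel count of a depth-2 scattering transform with L angles per scale:
    1 zeroth-order path + J*L first-order paths + L^2 * J*(J-1)/2 second-order
    paths, computed independently for each input color channel."""
    return in_channels * (1 + J * L + L * L * J * (J - 1) // 2)

def conv_out(size, kernel, stride, padding=0):
    """Output spatial size of a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

# ScatterNet shapes from Appendix C.1 (J = 2, eight angles, output H/4 x W/4):
K_gray = scatter_channels(2, 8)                  # 81 for MNIST / Fashion-MNIST
K_rgb = scatter_channels(2, 8, in_channels=3)    # 243 for CIFAR-10

# Trace the Table 6 CNN on a 28x28 grayscale input:
h = conv_out(28, 8, 2, 2)   # conv1     -> 13x13
h = conv_out(h, 2, 1)       # max-pool  -> 12x12
h = conv_out(h, 4, 2)       # conv2     -> 5x5
h = conv_out(h, 2, 1)       # max-pool  -> 4x4
flat = 32 * h * h           # flattened input to the first fully-connected layer

n_params = (16 * 1 * 8 * 8 + 16) + (32 * 16 * 4 * 4 + 32) \
    + (flat * 32 + 32) + (32 * 10 + 10)   # roughly 26K trainable parameters
```

The same two helpers can be used to check the (81, 7, 7) and (243, 8, 8) ScatterNet feature sizes quoted for the linear classifiers.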
For CIFAR-10, we reduce the network depth from 8 to 3, which results in a model with approximately as many parameters as the linear ScatterNet classifier.
Table 8: CNN model fine-tuned on ScatterNet features for MNIST and Fashion-MNIST, with Tanh activations.
Layer           | Parameters
Convolution     | 16 filters of 3x3, stride 2, padding 1
Max-Pooling     | 2x2, stride 1
Convolution     | 32 filters of 3x3, stride 1, padding 1
Max-Pooling     | 2x2, stride 1
Fully connected | 32 units
Fully connected | 10 units" }, { "heading": "C.3 EFFECT OF NORMALIZATION", "text": "To evaluate the effect of feature normalization in Table 2, we train linear models on ScatterNet features using DP-SGD without noise (σ = 0). We train one model without feature normalization, one with Data Normalization, and three with Group Normalization (Wu & He, 2018) with G ∈ {9, 27, 81} groups. For Group Normalization, Table 2 reports results for the best choice of groups. The remaining hyper-parameters are given below." }, { "heading": "C.4 NON-PRIVATE MODEL PERFORMANCE", "text": "For each of the model architectures described in Appendix C.2, we report the best achieved test accuracy without privacy, and without any other form of explicit regularization. For MNIST and Fashion-MNIST, fine-tuning a linear model or a CNN on top of ScatterNet features results in similar performance, whereas on CIFAR-10, the CNN performs slightly better. For Fashion-MNIST the end-to-end CNN performs slightly worse than the linear model (mainly due to a lack of regularization). For CIFAR-10, the end-to-end CNN significantly outperforms the ScatterNet models." }, { "heading": "C.5 EVALUATING PRIVATE SCATTERNET CLASSIFIERS", "text": "We use DP-SGD with momentum for all experiments. Prior work found that the use of adaptive optimizers (e.g., Adam (Kingma & Ba, 2015)) provided only marginal benefits for private learning (Papernot et al., 2020a). Moreover, we use no data augmentation, weight decay, or other mechanisms aimed at preventing overfitting. 
The reason is that differential privacy is itself a powerful regularizer (informally, differential privacy implies low generalization error (Dwork et al., 2015)), so our models all underfit the training data.
The table below lists the ranges of hyper-parameters used for the experiments in Section 3, to train linear ScatterNet classifiers, end-to-end CNNs, and CNNs fine-tuned on ScatterNet features.
In Table 13, we give the set of hyper-parameters that resulted in the maximal accuracy for our target DP budget of (ε = 3, δ = 10^−5). For each model, we report the base learning rate, before re-scaling by B/512. We find that some hyper-parameters that result in the best performance are at the boundary of our search range. Yet, as we show in Figure 8, modifying these hyper-parameters results in no significant upward trend, so we refrained from further increasing our search space." }, { "heading": "C.6 MEASURING MODEL CONVERGENCE SPEED", "text": "For the experiments in Section 4, we compare the convergence of the models from Appendix C.2 when trained with and without noise, and with either a low or high learning rate. The table below lists the hyper-parameters for the CIFAR-10 experiment in Figure 3, as well as for the corresponding experiments for MNIST and Fashion-MNIST in Figure 11. When training without privacy, we still clip gradients to a maximal norm of C = 0.1, but omit the noise addition step of DP-SGD (and we also omit the noise when using Data Normalization)." }, { "heading": "C.7 PRIVATE LEARNING ON LARGER DATASETS", "text": "For the experiment in Section 5.1, we use an additional 500K images from the Tiny Images dataset (Torralba et al., 2008), which were collected and labeled by Carmon et al. (2019) using a pre-trained CIFAR-10 classifier (see (Carmon et al., 2019, Appendix B.6) for details on the selection process for this dataset) [8]. We create datasets of size N ∈ {10K, 25K, 50K, 100K, 250K, 550K} by taking subsets of this larger dataset. 
We only use the data of Carmon et al. (2019) to complement the CIFAR-10 dataset when N > 50K. As noted by Carmon et al. (2019), the additional 500K images do not entirely match the distribution of CIFAR-10. Nevertheless, we find that training our classifiers without privacy on augmented datasets of size N > 50K does not negatively impact the test accuracy on CIFAR-10.
For each training set size, we re-train our models with a hyper-parameter search. To limit computational cost, and informed by our prior experiments, we fix some parameters, as shown in Table 15. When applying Data Normalization to ScatterNet features, we compute the per-channel statistics only over the original CIFAR-10 samples, and compute the privacy guarantees of PrivDataNorm using the Rényi DP analysis of the sampled Gaussian mechanism (Mironov et al., 2019; Wang et al., 2019).
The only hyper-parameters are thus the number of epochs (normalized by the size of the original CIFAR-10 data) and the learning rate η. The optimal values we found for these parameters are given below in Table 16. As we increase the dataset size, we obtain better accuracy by training for more steps and with higher learning rates. Figure 4 reports the final accuracy for these best-performing models." }, { "heading": "C.8 EVALUATION OF PRIVATE TRANSFER LEARNING", "text": "For the transfer learning experiments in Figure 5, we use a ResNeXt-29 model pre-trained on CIFAR-100 [9], and a ResNet-50 model trained on unlabeled ImageNet (Deng et al., 2009) using SimCLRv2 (Chen et al., 2020b) [10].
To train private linear classifiers on CIFAR-10, we first extract features from the penultimate layer of the above pre-trained models. For the ResNeXt model, we obtain features of dimension 1024, and for
[8] The full Tiny Images dataset was recently withdrawn by its curators, following the discovery of a large number of offensive class labels (Prabhu & Birhane, 2020). The subset collected by Carmon et al. 
(2019) contains images that most closely match the original CIFAR-10 labels, and is thus unlikely to contain offensive content.
[9] https://github.com/bearpaw/pytorch-classification
[10] https://github.com/google-research/simclr
the SimCLRv2 ResNet, we obtain features of dimension 4096. We then use DP-SGD with a similar setup as for the linear ScatterNet classifiers, except that we do not normalize the extracted features. We also target a tighter privacy budget of (ε = 2, δ = 10^−5). We then run a hyper-parameter search as listed below in Table 17. Figure 5 shows the best test accuracy achieved for each DP budget, averaged across five runs. We further report the set of hyper-parameters that resulted in the maximal accuracy for the targeted privacy budget of (ε = 2, δ = 10^−5)." }, { "heading": "D ADDITIONAL EXPERIMENTS AND FIGURES", "text": "" }, { "heading": "D.1 ON THE EFFECT OF BATCH SIZES IN DP-SGD", "text": "In this section, we revisit the question of the selection of an optimal batch size for DP-SGD. In their seminal work, Abadi et al. (2016) already investigated this question, and noted that the choice of batch size can have a large influence on the privacy-utility tradeoff. They empirically found that for a dataset of size N, a batch size of approximately √N produced the best results. However, their experiments measured the effect of the batch size while keeping other parameters, including the noise multiplier σ and the learning rate η, fixed.
When training without privacy, it has been shown empirically that the choice of batch size has little effect on the convergence rate of SGD, as long as the learning rate η is scaled linearly with the batch size (Goyal et al., 2017). 
Hereafter, we argue formally and demonstrate empirically that if we use a linear learning rate scaling, and fix the number of training epochs T for a target privacy budget ε, then the choice of batch size also has a minimal influence on the performance of DP-SGD.
We first consider the effect of the sampling rate B/N on the noise scale σ required to attain a fixed privacy budget of ε after T epochs. There is no known closed form expression for σ, so it is usually estimated numerically. We empirically establish the following claim, and verify numerically that it holds for our setting in Figure 6:
Claim D.1. Given a fixed DP budget (ε, δ) to be reached after T epochs, the noise scale σ as a function of the sampling rate B/N is given by σ(B/N) ≈ c · √(B/N), for some constant c ≥ 0.
Given this relation between batch size and noise scale, we proceed with a similar analysis as in (Goyal et al., 2017), for the case of DP-SGD. Given some initial weight θ_t, performing k steps of DP-SGD with clipping norm C = 1, batch size B, learning rate η and noise scale σ yields:
θ_{t+k} = θ_t − η Σ_{j<k} (1/B) ( Σ_{x∈B_{t+j}} g̃_{t+j}(x) + N(0, σ²I) )
        = ( θ_t − η (1/B) Σ_{j<k} Σ_{x∈B_{t+j}} g̃_{t+j}(x) ) + N(0, (k η² σ² / B²) I)
If we instead take a single step of DP-SGD with larger batch size kB, a linearly scaled learning rate of kη, and an adjusted noise scale σ̃ = √k · σ (by Claim D.1), we get [11]:
θ_{t+1} = θ_t − kη (1/(kB)) ( Σ_{j<k} Σ_{x∈B_{t+j}} g̃_t(x) + N(0, σ̃²I) )
        = ( θ_t − η (1/B) Σ_{j<k} Σ_{x∈B_{t+j}} g̃_t(x) ) + N(0, (k η² σ² / B²) I)
Thus, we find that the total noise in both updates is identical. Under the same heuristic assumption as in (Goyal et al., 2017) that g̃_t(x) ≈ g̃_{t+j}(x) for all j < k, the two DP-SGD updates above are thus similar. This analysis suggests that as in the non-private case (Goyal et al., 2017), increasing the batch size and linearly scaling the learning rate should have only a small effect on a model’s learning curve.
We now verify this claim empirically. 
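Before the full experiment, the equality of the two noise terms in the derivation above can also be checked numerically. The sketch below compares only the injected Gaussian noise under the σ̃ = √k · σ rescaling of Claim D.1 (it ignores the gradient terms, and the parameter values are illustrative):

```python
import random

def total_noise_var(n_steps, eta, B, sigma, trials, rng):
    """Empirical variance (one coordinate, C = 1) of the Gaussian noise that
    n_steps DP-SGD updates with learning rate eta, batch size B and noise
    scale sigma add to the weights: each step contributes eta * N(0, sigma^2) / B."""
    totals = [sum(eta * rng.gauss(0.0, sigma) / B for _ in range(n_steps))
              for _ in range(trials)]
    mean = sum(totals) / trials
    return sum((t - mean) ** 2 for t in totals) / trials

rng = random.Random(0)
k, eta, B, sigma = 4, 0.1, 64, 2.0
# k small steps vs. one big step with batch k*B, learning rate k*eta, noise sqrt(k)*sigma:
small_steps = total_noise_var(k, eta, B, sigma, 100_000, rng)
one_big_step = total_noise_var(1, k * eta, k * B, (k ** 0.5) * sigma, 100_000, rng)
analytic = k * eta ** 2 * sigma ** 2 / B ** 2   # k * eta^2 * sigma^2 / B^2 in both cases
```

Both Monte-Carlo estimates should agree (up to sampling error) with the analytic value k η² σ² / B² appearing in both update equations above.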
We follow the experimental setup in Section 3, and set a privacy budget of (ε = 3, δ = 10^−5) to be reached after a fixed number of epochs T. For different choices of batch size B, we numerically compute the noise scale σ that fits this “privacy schedule”. For the initial batch size of B_0 = 512, we select a base learning rate η that maximizes test accuracy at epoch T. As we increase the batch size to B = kB_0, we linearly scale the learning rate to kη. The concrete parameters are given below:
As we can see in Figure 7, the training curves for CNNs trained with DP-SGD are indeed near identical across a variety of batch sizes.
[11] We make a small simplification to our analysis here and assume that one batch of DP-SGD sampled with selection probability kB/N is identical to k batches sampled with selection probability B/N." }, { "heading": "D.2 ANALYSIS OF HYPER-PARAMETERS", "text": "To understand the effect of varying the different hyper-parameters of DP-SGD, Figure 8 shows the median and maximum model performance for different choices of a single parameter. The median and maximum are computed over all choices for the other hyper-parameters in Table 12. As we can see, the maximal achievable test accuracy is remarkably stable when fixing one of the algorithm’s hyper-parameters, with the exception of overly large batch sizes or overly low learning rates for end-to-end CNNs." }, { "heading": "D.3 COMPARING DP-SGD AND PRIVACY AMPLIFICATION BY ITERATION", "text": "While DP-SGD is the algorithm of choice for differentially private non-convex learning, it is unclear why it should be the best choice for learning private linear models. Indeed, starting with the work of Chaudhuri et al. (2011), there have been many other proposals of algorithms for private convex optimization with provable utility guarantees, e.g., (Bassily et al., 2014; Kifer et al., 2012; Feldman et al., 2018). Yet, Yu et al. 
(2019a) show that DP-SGD can achieve higher utility than many of these approaches, both asymptotically and empirically.
Here, we take a closer look at the “Privacy Amplification by Iteration” work of (Feldman et al., 2018). Feldman et al. (2018) observe that DP-SGD guarantees differential privacy for every gradient update step. Under the assumption that intermediate model updates can be hidden from the adversary, they propose a different analysis of DP-SGD for convex optimization problems that has a number of conceptual advantages. First, the algorithm of Feldman et al. (2018) does not require the training indices selected for each batch B_t to be hidden from the adversary. Second, their approach can support much smaller privacy budgets than DP-SGD.
However, we show that these benefits come at a cost in practice: for the range of privacy budgets we consider in this work, DP-SGD requires adding less noise than Privacy Amplification by Iteration (PAI). To compare the two approaches, we proceed as follows: We analytically compute the noise scale σ that results in a privacy guarantee of (ε, δ = 10^−5) after 10 training epochs with a batch sampling rate of 512/50000 [12]. Figure 9 shows that DP-SGD requires adding less noise, except for large privacy budgets (ε > 40), or very small ones (ε < 0.2). In the latter case, both algorithms require adding excessively large amounts of noise. We observe a qualitatively similar behavior for other sampling rates.
[12] The guarantees of Privacy Amplification by Iteration apply unevenly to the elements of the training data. We choose the noise scale so that at least 99% of the data elements enjoy (ε, δ)-DP.
For completeness, we evaluate the PAI algorithm of Feldman et al. (2018) for training linear ScatterNet classifiers on CIFAR-10. We evaluate a broader range of hyper-parameters, including different clipping thresholds C ∈ {0.1, 1, 10} (PAI clips the data rather than the gradients), a wider range of batch sizes B ∈ {32, 64, . . . 
, 2048}, and a wider range of base learning rates η ∈ {2^−3, 2^−2, . . . , 2^3}. We find that for privacy budgets 1 ≤ ε ≤ 3, the optimal hyper-parameters for PAI and DP-SGD are similar, but the analysis of PAI requires a larger noise scale σ. As a result, PAI performs worse than DP-SGD, as shown in Figure 10." }, { "heading": "D.4 DP-SGD WITH POISSON SAMPLING", "text": "The analysis of DP-SGD (Abadi et al., 2016; Mironov et al., 2019) assumes that each batch B_t is created by independently selecting each training sample with probability B/N. This is in contrast to typical implementations of SGD, where the training data is randomly shuffled once per epoch, and divided into successive batches of size exactly B. The latter “random shuffle” approach has been used in most implementations of DP-SGD (e.g., (tensorflow/privacy, 2019; pytorch/opacus, 2020)) as well as in prior work (e.g., (Abadi et al., 2016; Papernot et al., 2020b)), with the (implicit) assumption that this difference in batch sampling strategies will not affect model performance. We verify that this assumption is indeed valid in our setting. We re-train the linear ScatterNet and end-to-end CNN models that achieved the highest accuracy for a DP budget of (ε = 3, δ = 10^−5) (with the hyper-parameters detailed in Table 13), using the correct “Poisson sampling” strategy. The test accuracies of these models (averaged over five runs) are shown in Table 19. For all datasets and models, the two sampling schemes achieve similar accuracy when averaged over five runs." }, { "heading": "D.5 EXPERIMENTS WITH SMALLER END-TO-END CNN MODEL ON CIFAR-10", "text": "In Section 4, we investigate whether the dimensionality of different classifiers has a noticeable impact on their privacy-utility tradeoffs. To this end, we repeat the CIFAR-10 experiments from Section 3 with a smaller end-to-end CNN architecture. 
Specifically, we take the end-to-end CNN architecture from Table 7, reduce the number of filters in each convolutional layer by a factor of two, and remove the last convolutional layer. This results in a CNN model with a comparable number of trainable parameters as the linear ScatterNet classifier (see Table 4). In Table 20, we compare the privacy-utility tradeoff of this smaller CNN model with the original larger CNN model evaluated in Section 3. While the change of model architecture does affect the model accuracy, the effect is minor, and the accuracy remains far below that of the ScatterNet classifiers with a comparable number of parameters." }, { "heading": "D.6 MODEL CONVERGENCE SPEED ON MNIST AND FASHION-MNIST", "text": "We run the same experiment as in Figure 3 for MNIST and Fashion-MNIST, to compare the convergence rate of different classifiers with and without privacy, for different learning rates. The experimental setup is described in Appendix C.6. Figure 11 shows qualitatively similar results as Figure 3: with a high learning rate, all models converge quickly when trained without gradient noise, but the addition of noise is detrimental to the learning process. In contrast, with a much lower learning rate the training curves for DP-SGD are nearly identical, whether we add noise or not. In this regime, the ScatterNet classifiers converge significantly faster than end-to-end CNNs when trained without privacy." } ]
2021
DIFFERENTIALLY PRIVATE LEARNING NEEDS BETTER FEATURES (OR MUCH MORE DATA)
SP:88108abfa920eda1a0766301bdfd70113f61f8b3
[ "- This paper presents a simple but effective method rooted in trust region theory for fine-tuning pre-trained models without 'representational collapse'. Compared to previous methods (such as SMART by Jiang et al. (2019)), the newly proposed methods (R3F and R4F) are computationally simple while achieving more strong performance on several NLP tasks including GLUE, XNLI and summarization. The authors also introduce the concept of 'representational collapse', which means the degradation of generalizable representations of pre-trained models during the fine-tuning stage. Moreover, they empirically demonstrated that SMART and their proposed methods are effective in relieving representational collapse, compared to typical fine-tuning based on normal gradient descent (i.e., one without constraints).", "The paper proposes a method for finetuning pre-trained models that ensures the generalization ability of the representation is maintained. The key innovation is that the computationally expensive ascent step in the mirror descent method of SMART can be replaced by simply injecting noise. The results support the hypothesis that this works well for keeping the generalization-ability of the model. The authors also define the degradation of the generalizability of the representation during finetuning as “representational collapse”. " ]
Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods. This paper presents a simplified and efficient method rooted in trust region theory that replaces previously used adversarial objectives with parametric noise (sampling from either a normal or uniform distribution), thereby discouraging representation change during fine-tuning when possible without hurting performance. We also introduce a new analysis to motivate the use of trust region methods more generally, by studying representational collapse; the degradation of generalizable representations from pre-trained models as they are fine-tuned for a specific end task. Extensive experiments show that our fine-tuning method matches or exceeds the performance of previous trust region methods on a range of understanding and generation tasks (including DailyMail/CNN, Gigaword, Reddit TIFU, and the GLUE benchmark), while also being much faster. We also show that it is less prone to representation collapse; the pre-trained models maintain more generalizable representations every time they are fine-tuned.
[ { "affiliations": [], "name": "Armen Aghajanyan" }, { "affiliations": [], "name": "Akshat Shrivastava" }, { "affiliations": [], "name": "Anchit Gupta" }, { "affiliations": [], "name": "Naman Goyal" } ]
[ { "authors": [ "Luisa Bentivogli", "Peter Clark", "Ido Dagan", "Danilo Giampiccolo" ], "title": "The fifth pascal recognizing textual entailment challenge", "venue": "In TAC,", "year": 2009 }, { "authors": [ "Daniel Cer", "Mona Diab", "Eneko Agirre", "Inigo Lopez-Gazpio", "Lucia Specia" ], "title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation", "venue": "arXiv preprint arXiv:1708.00055,", "year": 2017 }, { "authors": [ "Zewen Chi", "Li Dong", "Furu Wei", "Nan Yang", "Saksham Singhal", "Wenhui Wang", "Xia Song", "Xian-Ling Mao", "Heyan Huang", "Ming Zhou" ], "title": "Infoxlm: An information-theoretic framework for crosslingual language model pre-training", "venue": null, "year": 2020 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D Manning" ], "title": "What does bert look at? an analysis of bert’s attention", "venue": null, "year": 1906 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Ruty Rinott", "Adina Williams", "Samuel R Bowman", "Holger Schwenk", "Veselin Stoyanov" ], "title": "Xnli: Evaluating cross-lingual sentence representations", "venue": "arXiv preprint arXiv:1809.05053,", "year": 2018 }, { "authors": [ "Alexis Conneau", "Kartikay Khandelwal", "Naman Goyal", "Vishrav Chaudhary", "Guillaume Wenzek", "Francisco Guzmán", "Edouard Grave", "Myle Ott", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Unsupervised cross-lingual representation learning at scale", "venue": null, "year": 1911 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Jesse Dodge", "Gabriel Ilharco", "Roy Schwartz", "Ali Farhadi", "Hannaneh Hajishirzi", "Noah Smith" ], "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "venue": null, "year": 2002 }, { "authors": [ "William B Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the Third International Workshop on Paraphrasing (IWP2005),", "year": 2005 }, { "authors": [ "Karl Moritz Hermann", "Tomas Kocisky", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Shankar Iyer", "Nikhil Dandekar", "Kornel Csernai" ], "title": "First quora dataset release: Question pairs, 2017", "venue": null, "year": 2017 }, { "authors": [ "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Tuo Zhao" ], "title": "Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization", "venue": null, "year": 1911 }, { "authors": [ "Byeongchang Kim", "Hyunwoo Kim", "Gunhee Kim" ], "title": "Abstractive summarization of reddit posts with multi-level memory networks", "venue": "arXiv preprint arXiv:1811.00783,", "year": 2018 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Ves Stoyanov", "Luke Zettlemoyer" ], "title": "Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": null, "year": 1910 }, { "authors": [ "Mike Lewis", "Marjan Ghazvininejad", "Gargi Ghosh", "Armen Aghajanyan", "Sida Wang", 
"Luke Zettlemoyer" ], "title": "Pre-training via paraphrasing", "venue": null, "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Marius Mosbach", "Maksym Andriushchenko", "Dietrich Klakow" ], "title": "On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines", "venue": "arXiv preprint arXiv:2006.04884,", "year": 2020 }, { "authors": [ "Courtney Napoles", "Matthew R Gormley", "Benjamin Van Durme" ], "title": "Annotated gigaword", "venue": "In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX),", "year": 2012 }, { "authors": [ "Razvan Pascanu", "Yoshua Bengio" ], "title": "Revisiting natural gradient for deep networks", "venue": "arXiv preprint arXiv:1301.3584,", "year": 2013 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Garvesh Raskutti", "Sayan Mukherjee" ], "title": "The 
information geometry of mirror descent", "venue": "IEEE Transactions on Information Theory,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 conference on empirical methods in natural language processing,", "year": 2013 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Alex Warstadt", "Amanpreet Singh", "Samuel R Bowman" ], "title": "Neural network acceptability judgments", "venue": "arXiv preprint arXiv:1805.12471,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 
1112–1122", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Dongling Xiao", "Han Zhang", "Yukun Li", "Yu Sun", "Hao Tian", "Hua Wu", "Haifeng Wang" ], "title": "Ernie-gen: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation", "venue": null, "year": 2001 }, { "authors": [ "Yu Yan", "Weizhen Qi", "Yeyun Gong", "Dayiheng Liu", "Nan Duan", "Jiusheng Chen", "Ruofei Zhang", "Ming Zhou" ], "title": "Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training", "venue": null, "year": 2001 }, { "authors": [ "Jingqing Zhang", "Yao Zhao", "Mohammad Saleh", "Peter J Liu" ], "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "venue": null, "year": 1912 }, { "authors": [ "Tianyi Zhang", "Felix Wu", "Arzoo Katiyar", "Kilian Q Weinberger", "Yoav Artzi" ], "title": "Revisiting fewsample bert fine-tuning", "venue": "arXiv preprint arXiv:2006.05987,", "year": 2020 }, { "authors": [ "Chen Zhu", "Yu Cheng", "Zhe Gan", "Siqi Sun", "Tom Goldstein", "Jingjing Liu" ], "title": "Freelb: Enhanced adversarial training for natural language understanding", "venue": "In International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Pre-trained language models (Radford et al., 2019; Devlin et al., 2018; Liu et al., 2019; Lewis et al., 2019; 2020) have been shown to capture a wide array of semantic, syntactic, and world knowledge (Clark et al., 2019), and provide the de facto initialization for modeling most existing NLP tasks. However, fine-tuning them for each task is a highly unstable process, with many hyperparameter settings producing failed fine-tuning runs, unstable results (considerable variation between random seeds), over-fitting, and other unwanted consequences (Zhang et al., 2020; Dodge et al., 2020).
Recently, trust region or adversarial based approaches, including SMART (Jiang et al., 2019) and FreeLB (Zhu et al., 2019), have been shown to increase the stability and accuracy of fine-tuning by adding additional constraints limiting how much the fine-tuning changes the initial parameters. However, these methods are significantly more computationally and memory intensive than the more commonly adopted simple-gradient-based approaches.
This paper presents a lightweight fine-tuning strategy that matches or improves performance relative to SMART and FreeLB while needing just a fraction of the computational and memory overhead and no additional backward passes. Our approach is motivated by trust region theory while also reducing to simply regularizing the model relative to parametric noise applied to the original pre-trained representations. We show uniformly better performance, setting a new state of the art for RoBERTa fine-tuning on GLUE and reaching state of the art on XNLI using no novel pre-training approaches (Liu et al., 2019; Wang et al., 2018; Conneau et al., 2018).
Furthermore, the low overhead of our family of fine-tuning methods allows them to be applied to generation tasks, where we consistently outperform standard fine-tuning, setting state of the art on summarization tasks.
We also introduce a new analysis to motivate the use of trust-region-style methods more generally, by defining a new notion of representational collapse and introducing a new methodology for measuring it during fine-tuning. Representational collapse is the degradation of generalizable representations of pre-trained models during the fine-tuning stage. We empirically show that standard fine-tuning degrades generalizable representations through a series of probing experiments on GLUE tasks. Furthermore, we attribute this phenomenon to using standard gradient descent algorithms for the fine-tuning stage. We also find that (1) recently proposed fine-tuning methods rooted in trust region, i.e., SMART, can alleviate representation collapse, and (2) our methods alleviate representational collapse to an even greater degree, manifesting in better performance across almost all datasets and models.
Our contributions in this paper are the following.
• We propose a novel approach to fine-tuning rooted in trust-region theory, which we show directly alleviates representational collapse at a fraction of the cost of other recently proposed fine-tuning methods.
• Through extensive experimentation, we show that our method outperforms standard finetuning methodology following recently proposed best practices from Zhang et al. (2020). We improve various SOTA models from sentence prediction to summarization, from monolingual to cross-lingual.
• We further define and explore the phenomenon of representational collapse in fine-tuning and directly correlate it with generalization in tasks of interest."
}, { "heading": "2 LEARNING ROBUST REPRESENTATIONS THROUGH REGULARIZED FINE-TUNING", "text": "We are interested in deriving methods for fine-tuning representations that provide guarantees on the movement of representations, in the sense that they do not forget the original pre-trained representations when they are fine-tuned for new tasks (see Section 4 for more details). We introduce a new fine-tuning method rooted in an approximation to trust region, which provides guarantees for stochastic gradient descent algorithms by bounding some divergence between the model at update t and the model at update t+1 (Pascanu & Bengio, 2013; Schulman et al., 2015b; Jiang et al., 2019).
Let f : R^{m×n} → R^p be a function which returns some pre-trained representation parameterized by θf from m tokens embedded into a fixed vector of size n. Let the learned classification head g : R^p → R^q be a function which takes an input from f and outputs a valid probability distribution parameterized by θg in q dimensions, and let X be our dataset. In the case of generation, we can assume the classification head is simply an identity function or softmax depending on the loss function. Let L(θ) denote a loss function given by θ = [θf , θg]. We are interested in minimizing L with respect to θ such that each update step is constrained by movement in the representational density space p(f). More formally, given an arbitrary ε:
arg min_{∆θ} L(θ + ∆θ)
s.t. KL(p(f(· ; θf)) || p(f(· ; θf + ∆θf))) = ε   (1)
This constrained optimization problem is equivalent to doing natural gradient descent directly over the representations (Pascanu & Bengio, 2013). Unfortunately, we do not have direct access to the density of representations; therefore, it is not trivial to directly bound this quantity. Instead, we propose to do natural gradient over g · f with an additional constraint that g is at most 1-Lipschitz (which naturally constrains change of representations, see Section A.1 in the Appendix).
Traditional computation of natural gradient is computationally prohibitive due to the need for inverting the Hessian. An alternative formulation of natural gradient can be stated through mirror descent, using Bregman divergences (Raskutti & Mukherjee, 2015; Jiang et al., 2019).
This method primarily serves as a robust regularizer by preventing large updates in the model’s probability space. This family of methods is classically known as trust-region methods (Pascanu & Bengio, 2013; Schulman et al., 2015a).
L_SMART(θ, f, g) = L(θ) + λ E_{x∼X} [ sup_{x̃ : |x̃ − x| ≤ ε} KL_S(g · f(x) ‖ g · f(x̃)) ]   (2)
However, the supremum is computationally intractable. An approximation is possible by doing gradient ascent steps, similar to finding adversarial examples. This was first proposed by SMART with a symmetrical KL_S(X, Y) = KL(X||Y) + KL(Y||X) term (Jiang et al., 2019). We propose an even simpler approximation which does not require extra backward computations and empirically works as well as or better than SMART. We altogether remove the adversarial nature from SMART and instead optimize for a smoothness parameterized by KL_S. Furthermore, we optionally also add a constraint on the smoothness of g by making it at most 1-Lipschitz, the intuition being that if we can bound the volume of change in g we can more effectively bound f.
L_R3(f, g, θ) = L(θ) + λ E_{x∼X} [KL_S(g · f(x) ‖ g · f(x + z))]   (R3F method)   (3)
s.t. z ∼ N(0, σ²I) or z ∼ U(−σ, σ)   (4)
s.t. Lip{g} ≤ 1   (optional; R4F method)   (5)
where KL_S is the symmetric KL divergence and z is a sample from a parametric distribution. In our work we test against two distributions, normal and uniform centered around 0. We denote this as the Robust Representations through Regularized Finetuning (R3F) method.
Additionally we propose an extension to R3F (R4F; Robust Representations through Regularized and Reparameterized Finetuning), which reparameterizes g to be at most 1-Lipschitz via Spectral Normalization (Miyato et al., 2018).
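In code, the R3F objective of Equation 3 amounts to only a few extra lines on top of the task loss: sample a noise vector z, and penalize the symmetric KL divergence between the model's output distributions on the clean and perturbed inputs. The following pure-Python sketch is illustrative only; the function names and hyperparameter values are ours, not from the authors' implementation.

```python
# Sketch of the R3F objective (Eq. 3): task loss + lambda * KL_S between
# predictions on clean and noise-perturbed inputs. Illustrative names only.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL(p || q) for discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sym_kl(p, q):
    # KL_S(p, q) = KL(p || q) + KL(q || p)
    return kl(p, q) + kl(q, p)

def perturb(embedding, sigma=0.1):
    # z ~ N(0, sigma^2 I), added to the token embeddings (Eq. 4)
    return [e + random.gauss(0.0, sigma) for e in embedding]

def r3f_loss(task_loss, clean_logits, noisy_logits, lam=1.0):
    # noisy_logits would come from a second forward pass on perturb(embedding)
    return task_loss + lam * sym_kl(softmax(clean_logits), softmax(noisy_logits))
```

Note that, unlike SMART, no gradient ascent over the perturbation is needed: the extra cost is one additional forward pass on the perturbed embeddings.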
By constraining g to be at most 1-Lipschitz, we can more directly bound the change in representation (Appendix Section A.1). Specifically, we scale all the weight matrices of g by the inverse of their largest singular values, W_SN := W/σ(W). Given that the spectral radius σ(W_SN) = 1, we can bound Lip{g} ≤ 1. In the case of generation, g does not have any weights; therefore, we can only apply the R3F method." }, { "heading": "2.1 RELATIONSHIP TO SMART AND FREELB", "text": "Our method is most closely related to the SMART algorithm, which utilizes an auxiliary smoothness-inducing regularization term that directly optimizes the Bregman divergence mentioned above in Equation 2 (Jiang et al., 2019).
SMART solves the supremum by using an adversarial methodology to ascend to the largest KL divergence within an ε-ball. We instead propose to remove the ascent step completely, optionally fixing the smoothness of the classification head g. This completely removes SMART’s adversarial nature and is more akin to optimizing the smoothness of g · f directly. Another recently proposed adversarial method for fine-tuning, FreeLB, optimizes a direct adversarial loss L_FreeLB(θ) = sup_{∆θ : |∆θ| ≤ ε} L(θ + ∆θ) through iterative gradient ascent steps. This is similar to SMART in the sense that both are adversarial and require gradient ascent steps. Unfortunately, the need for extra forward-backward passes can be prohibitively expensive when fine-tuning large pre-trained models (Zhu et al., 2019).
Our method is significantly more computationally efficient than adversarial-based fine-tuning methods, as seen in Table 1. We show that this efficiency does not hurt performance; we can match or exceed FreeLB and SMART on a large number of tasks. In addition, the relatively low costs of our methods allow us to improve over fine-tuning on an array of generation tasks.
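The rescaling W_SN := W/σ(W) requires the largest singular value σ(W), which in practice is estimated by power iteration, as in Miyato et al. (2018). A minimal pure-Python sketch for illustration only; a real implementation would operate on the framework's weight tensors and reuse the iteration vectors across steps:

```python
# Power-iteration estimate of the spectral norm sigma(W), then W_SN := W / sigma(W).
import math

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def matvec_t(W, u):
    # W^T u for W given as a list of rows
    return [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(len(W[0]))]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def spectral_norm(W, iters=100):
    v = [1.0] * len(W[0])
    for _ in range(iters):
        u = matvec(W, v)
        u = [x / norm(u) for x in u]
        v = matvec_t(W, u)
        v = [x / norm(v) for x in v]
    # sigma(W) ~= u^T W v at convergence
    return sum(ui * wi for ui, wi in zip(u, matvec(W, v)))

def spectral_normalize(W):
    s = spectral_norm(W)
    return [[wij / s for wij in row] for row in W]
```

After `spectral_normalize`, the largest singular value of the returned matrix is 1, which is exactly the condition used above to bound Lip{g} ≤ 1.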
The next sections report why methods rooted in trust region, including ours, outperform standard fine-tuning. We aimed for fair comparisons throughout all of our experiments by using fixed-budget hyperparameter searches across all methods. Furthermore, for computationally tractable tasks, we report median/max numbers as well as show distributions across a large number of runs." }, { "heading": "3.1 SENTENCE PREDICTION", "text": "" }, { "heading": "GLUE", "text": "We will first test R3F and R4F on sentence classification tasks from the GLUE benchmark (Wang et al., 2018). We select the same subset of GLUE tasks that have been reported by prior work in this space (Jiang et al., 2019): MNLI (Williams et al., 2018), QQP (Iyer et al., 2017), RTE (Bentivogli et al., 2009), QNLI (Rajpurkar et al., 2016), MRPC (Dolan & Brockett, 2005), CoLA (Warstadt et al., 2018), SST-2 (Socher et al., 2013).1
Consistent with prior work (Jiang et al., 2019; Zhu et al., 2019), we focus on improving the performance of RoBERTa-Large based models in the single-task setting (Liu et al., 2019). We report the performance of all models on the GLUE development set.
[Figure 1: SST-2 walltime analysis, in seconds, comparing the fine-tuning methods Standard++, SMART, R3F, and R4F.]
We also show the distribution of results of the fine-tuning methods across ten seeds to demonstrate the stability properties of individual methods in Figure 2.
R3F and R4F unanimously improve over Standard and Standard++ fine-tuning. Furthermore, our methods match or exceed adversarial methods such as SMART/FreeLB at a fraction of the computational cost when comparing median runs. We show computational cost in Figure 1 for a single task, but the relative behavior of wall times is consistent across all other GLUE tasks. We note that we could not find a discernible difference in the experimental setting, which would make the selection between R3F vs.
R4F trivial.
1We do not test against STS-B because it is a regression task where our KL divergence is not defined (Cer et al., 2017)." }, { "heading": "SST-2", "text": "" }, { "heading": "XNLI", "text": "We hypothesize that staying closer to the original representations is especially crucial for cross-lingual tasks, particularly in the zero-shot setting, where drifting away from pre-trained representations for a single language might manifest in a loss of cross-lingual capabilities. In particular, we take a look at the popular XNLI benchmark, containing 15 languages (Conneau et al., 2018). We compare our method against the standard trained XLM-R model in the zero-shot setting (Conneau et al., 2019).
We present our results in Table 3. R3F and R4F dominate standard pre-training on 14 out of the 15 languages in the XNLI task. R4F improves over the best-known XLM-R XNLI results, reaching SOTA with an average language score of 81.4 across five runs. The current state of the art, INFOXLM, required a novel pre-training method to reach the same numbers (Chi et al., 2020)." }, { "heading": "3.2 SUMMARIZATION", "text": "While prior work in non-standard finetuning methods tends to focus on sentence prediction and GLUE tasks (Jiang et al., 2019; Zhu et al., 2019; Zhang et al., 2020), we look to improve abstractive summarization, due to its additional complexity and computational cost. Specifically, we look at three datasets: CNN/Dailymail (Hermann et al., 2015), Gigaword (Napoles et al., 2012) and Reddit TIFU (Kim et al., 2018).
Like most other NLP tasks, summarization has recently been dominated by the fine-tuning of large pre-trained models. For example, PEGASUS explicitly defines a pre-training objective to facilitate the learning of representations tailored to summarization tasks, manifesting in state-of-the-art performance on various summarization benchmarks (Zhang et al., 2019).
ProphetNet (Yan et al., 2020) improved over these numbers by introducing its own novel self-supervised task, as did ERNIE-GEN (Xiao et al., 2020).
Independent of the pre-training task, standard fine-tuning on downstream tasks follows a simple formula of using a label smoothing loss while directly fine-tuning the whole model without adding any new parameters. We propose the addition of the R3F term directly to the label smoothing loss. We note that R4F cannot be applied directly to generation tasks due to its reparameterization nature.
We present our results in Table 4. Our method (R3F) outperforms standard fine-tuning across the board on all three tasks and all of the ROUGE metric variants. Notably, we improve Gigaword and Reddit TIFU ROUGE-1 scores by a point and four points, respectively." }, { "heading": "4 REPRESENTATIONAL COLLAPSE", "text": "Catastrophic forgetting, proposed initially as catastrophic interference, is a phenomenon that occurs during sequential training where new updates interfere catastrophically with previous updates, manifesting in the forgetting of particular examples for a fixed task (McCloskey & Cohen, 1989). Catastrophic forgetting has historically been associated with continual learning, and recent work (Mosbach et al., 2020) showed that catastrophic forgetting concerning the original MLM objective is not detrimental for end-task training. Instead, the issue lies in optimization. Inspired by this work, we explore the related problem of representational collapse, the degradation of generalizable representations of pre-trained models during the fine-tuning stage. This definition is independent of a specific fine-tuning task but rather concerns the generalizability of the internal representations over a large union of tasks.
Another view of this phenomenon is that fine-tuning collapses the wide range of information available in the representations into a smaller set needed only for the immediate task and particular training set.
Measuring such degradations is non-trivial. Simple metrics such as the distance between pre-trained representations and fine-tuned representations are not sufficient (e.g., adding a constant to the pre-trained representations will not change representation power, but will change distances). One approach would be to estimate mutual information of representations across tasks before and after fine-tuning, but the estimation of mutual information is notoriously hard, especially in high dimensions (Tschannen et al., 2019). We instead propose a series of probing experiments meant to provide us with empirical evidence of the existence of representation collapse on the GLUE benchmark (Wang et al., 2018)." }, { "heading": "4.1 PROBING EXPERIMENTS", "text": "" }, { "heading": "PROBING GENERALIZATION OF FINE-TUNED REPRESENTATIONS", "text": "To measure the generalization properties of various fine-tuning methodologies, we follow a probing methodology: we first freeze the representations from the model trained on one task and then fine-tune a linear layer on top of the model for another task. Doing this form of probing can directly measure the quality of representations learned by various fine-tuning methods and how much they collapse when fine-tuned on a sequence of tasks.
In particular, we fine-tune a RoBERTa model on SST-2 and train a linear layer for each of six other GLUE tasks. Our results are shown in Figure 3. Appendix A.2 presents the hyperparameters. Across all tasks, one of the two variants of our method performed best across various fine-tuning methods.
Conversely, standard fine-tuning produced representations that were worse than other fine-tuning methods across the board, hinting at the sub-optimality of standard fine-tuning.
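The probing protocol just described (frozen encoder, trainable linear head) can be sketched as follows. The toy `encoder` below is a stand-in for the frozen fine-tuned model; the point is that only the linear head's parameters are updated, so probe accuracy reflects the quality of the fixed features:

```python
# Linear probing sketch: the encoder is frozen; only (w, b) of the linear
# head are trained, here with plain SGD on a logistic loss. Toy data/encoder.
import math

def encoder(x):
    # frozen representation function (stand-in for the fine-tuned model)
    return [x[0] + x[1], x[0] - x[1]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def probe(data, labels, lr=0.5, epochs=300):
    feats = [encoder(x) for x in data]      # encoder is never updated
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y                       # gradient of the logistic loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]                       # toy task: label = x[0]
w, b = probe(data, labels)
preds = [int(sigmoid(sum(wi * fi for wi, fi in zip(w, encoder(x))) + b) > 0.5)
         for x in data]
```

In the actual experiments, the frozen encoder is a RoBERTa model fine-tuned on one GLUE task and the probe is trained on another.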
Furthermore, R3F/R4F consistently outperforms the adversarial fine-tuning method SMART." }, { "heading": "PROBING REPRESENTATION DEGRADATION", "text": "To show the effect of representation collapse, we propose an experiment to measure how the fine-tuning process degrades representations by sequentially training on a series of GLUE tasks. We arbitrarily select three GLUE tasks (QNLI, QQP, and RTE) and a source task (SST-2). We begin by training a model on our source task and then train on QNLI, QQP, and RTE in sequential order, using the best checkpoint from the prior iteration. At each point in the chain, we probe the source task and measure performance. We compare standard SGD with the best trust-region fine-tuning approach (R4F). Our results are depicted in Figure 4.
As we can see, with the standard fine-tuning process our model diverges from the source task, resulting in lower probe performance; however, with our method, the probes change much less with sequential probing, resulting in better probing and end-task performance." }, { "heading": "PROBING REPRESENTATION RETENTION", "text": "To further understand representational collapse’s impact, we extend our probing experiments to train a cyclic chain of tasks. We showed in our prior experiments that traditional fine-tuning degrades representations during the fine-tuning process, meaning standard fine-tuning learns poorer representations compared to alternative fine-tuning methods. The dual to looking at degradation is to look at the retention of learned representations. To do this, we take a look at cyclic sequential probing. Sequential probing involves training a model on task A, probing B, then training the model fine-tuned on B and probing task C, and so forth.
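Structurally, the sequential-probing experiment above is a simple loop over the task chain. In the sketch below, `finetune` and `probe` are placeholders standing in for full fine-tuning from the best checkpoint and for frozen-encoder linear-probe evaluation, respectively; the names are ours, for illustration only:

```python
# Structural sketch of sequential probing: fine-tune along a chain of tasks,
# probing the source task after each step. finetune/probe are stand-ins.
source, chain = "SST-2", ["QNLI", "QQP", "RTE"]

def finetune(model, task):
    # stand-in: record the fine-tuning history carried by the checkpoint
    return model + [task]

def probe(model, task):
    # stand-in: a real probe freezes `model` and trains only a linear head
    return {"checkpoint": tuple(model), "probed": task}

model = finetune([], source)        # start by training on the source task
source_probes = []
for task in chain:
    model = finetune(model, task)   # continue from the prior checkpoint
    source_probes.append(probe(model, source))
```

The cyclic variant described next simply repeats this loop over the same task list for multiple cycles and records probe performance at each cycle.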
We then create a cyclic chain A → B → C (Cycle 1) → A → B → C (Cycle 2), from which we compare tasks via their probe performance at each cycle.
We expect probing performance to increase at every cycle, since at every cycle the task we are probing on will undergo a full fine-tuning. What we are interested in is the level of retention in representations after the fine-tuning. Specifically, we hypothesize that our method, specifically R4F, will retain representations significantly better than the Standard++ fine-tuning method.
In our experiments we consider the following sequence of GLUE tasks: SST-2 → QNLI → QQP → RTE. We defer hyperparameter values to the Appendix (Section A.2).
Looking at Figure 5, we see that R4F retains the quality of representations significantly better than standard fine-tuning methods." }, { "heading": "5 CONCLUSION", "text": "We propose a family of new fine-tuning approaches for pre-trained representations based on trust-region theory: R3F and R4F. Our methods are more computationally efficient and outperform prior work in fine-tuning via adversarial learning (Jiang et al., 2019; Zhu et al., 2019). We show that this is due to a new phenomenon during fine-tuning: representational collapse, where representations learned during fine-tuning degrade, leading to worse generalization. Our analysis shows that standard fine-tuning is sub-optimal when it comes to learning generalizable representations, and instead, our methods retain representation generalizability and improve end-task performance.
With our method, we improve upon monolingual and multilingual sentence prediction tasks as well as generation tasks compared to standard and adversarial fine-tuning methods.
Notably, we set state of the art on DailyMail/CNN, Gigaword, Reddit TIFU, improve the best-known results on finetuning RoBERTa on GLUE, and reach state of the art on zero-shot XNLI without the need for any new pre-training method.
We note there are many flavors of RXF that can occur with various noise distributions or perturbation strategies. We believe a larger, more general framework exists which connects trust region methods and fine-tuning in general. We leave this area of exploration for future work." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 CONTROLLING CHANGE OF REPRESENTATION VIA CHANGE OF VARIABLE", "text": "Let us say we have random variables in a Markov chain x, y, z with y = f(x; θf) and z = g(y; θg).
The change of variable formulation for probability densities is
p(f(x; θf)) = p(g(f(x; θf))) · |det dg(f(x; θf))/df(x; θf)|   (6)
Direct application of change of variable gives us
KL(p(f(x; θf)) || p(f(x; θf + ∆θf)))
= ∑ p(f(x; θf)) log [p(f(x; θf)) / p(f(x; θf + ∆θf))]   (7–8)
= ∑ p(g(f(x; θf))) |det dg(f(x; θf))/df(x; θf)| · [log p(g(f(x; θf))) + log |det dg(f(x; θf))/df(x; θf)| − log p(g(f(x; θf + ∆θf))) − log |det dg(f(x; θf + ∆θf))/df(x; θf + ∆θf)|]   (9–12)
Let us make some more assumptions. Let g(y) = Wy, where the spectral norm of W satisfies ρ(W) = 1. We can then trivially bound |det W| ≤ 1. Since g is linear, dg/df = W at both θf and θf + ∆θf, so the two log-determinant terms cancel and we have
= ∑ p(g(f(x; θf))) |det W| · [log p(g(f(x; θf))) − log p(g(f(x; θf + ∆θf)))]   (13)
= ∑ p(g(f(x; θf))) |det W| · log [p(g(f(x; θf))) / p(g(f(x; θf + ∆θf)))]   (14)
≤ ∑ p(g(f(x; θf))) log [p(g(f(x; θf))) / p(g(f(x; θf + ∆θf)))]   (15)
= KL(p(g(f(x; θf))) || p(g(f(x; θf + ∆θf))))   (16)
We also see that tightness is controlled by |det W|, which is bounded by the largest singular value, giving us intuition into the importance of using spectral normalization."
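A quick numeric sanity check of this tightness factor (illustrative, not from the paper): for a diagonal matrix the singular values are simply the absolute diagonal entries, so after dividing by the largest one, the determinant of the normalized matrix, the product of the singular values, is at most 1 in magnitude.

```python
# After spectral normalization, sigma_max(W_SN) = 1, so
# |det W_SN| = product of singular values <= 1 (the tightness factor above).
W = [[2.0, 0.0],
     [0.0, 0.5]]
sigma_max = max(abs(W[0][0]), abs(W[1][1]))   # spectral norm of a diagonal matrix
W_sn = [[w / sigma_max for w in row] for row in W]
det_sn = W_sn[0][0] * W_sn[1][1] - W_sn[0][1] * W_sn[1][0]
```

Here the singular values of W_SN are 1 and 0.25, so |det W_SN| = 0.25 ≤ 1, and the bound in (15) is loose exactly when the smaller singular values are far below 1.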
}, { "heading": "A.2 EXPERIMENT HYPER-PARAMETERS", "text": "For our GLUE-related experiments, both full fine-tuning and probing, the following parameters are used. For probing experiments, the difference is that our RoBERTa encoder is frozen and the encoder dropout is removed." } ]
2021
null
SP:be719de25d3d60635a9508fd610f2da3f4fd164d
[ "In this submission, a common modelling assumption for unsupervised disentanglement is challenged: that the disentangled representation follows the independence structure of the underlying (data generating) factors. Instead, the paper proposes to consider *action sequences* which describe how datapoints are interrelated. The paper provides evidence that the capacity of the latent representation (controlled by Lagrange parameter beta in beta-VAE related models) is related to the significance of particular action sequence for disentanglement. To leverage this insight, the fractional VAE (FVAE) is proposed, consisting of several sub-encoders and different training stages. The disentangling properties of the FVAE is demonstrated on the dSprites and 3D chairs datasets, with the FVAE performing favourably to the beta-VAE w.r.t. the Mutual Information Gap (MIG) disentanglement metric on dSprites.", "This paper addresses the problem of disentangling representations using Variational Autoencoders. In particular, the authors introduce the concept of disentangling action sequences and propose the fractional variational autoencoder framework to disentangle them step-by-step. To this end, they analyze the inductive biases on the data and define latent information thresholds which are correlated with the significance of the actions." ]
Disentanglement is a highly desirable property of representations due to its similarity to human understanding and reasoning. It improves interpretability, improves performance on down-stream tasks, and enables controllable generative models. However, this domain is challenged by the abstract notion of disentanglement and by incomplete theories supporting unsupervised disentanglement learning. We demonstrate that the data itself, such as the orientation of images, plays a crucial role in disentanglement instead of the ground-truth factors, and that the disentangled representations align the latent variables with the action sequences. We further introduce the concept of disentangling action sequences, which facilitates the description of the behaviours of the existing disentangling approaches. An analogy for this process is discovering the commonality between things and categorizing them. Furthermore, we analyze the inductive biases on the data and find that the latent information thresholds are correlated with the significance of the actions. For the supervised and unsupervised settings, we respectively introduce two methods to measure the thresholds. We further propose a novel framework, the fractional variational autoencoder (FVAE), to disentangle the action sequences of different significance step by step. Experimental results on dSprites and 3D Chairs show that FVAE improves the stability of disentanglement.
[]
[ { "authors": [ "Mathieu Aubry", "Daniel Maturana", "Alexei A. Efros", "Bryan C. Russell", "Josef Sivic" ], "title": "Seeing 3d chairs: Exemplar part-based 2d-3d alignment using a large dataset of cad models", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "TPAMI, 35(8):1798–1828,", "year": 2013 }, { "authors": [ "Tian Qi Chen", "X. Li", "Roger B. Grosse", "D. Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "Kien Do", "Truyen Tran" ], "title": "Theory and evaluation metrics for learning disentangled representations", "venue": "In 8th International Conference on Learning Representations,ICLR", "year": 2020 }, { "authors": [ "Ahmed Elgammal", "Bingchen Liu", "Mohamed Elhoseiny", "Marian Mazzone" ], "title": "Can: Creative adversarial networks, generating art by learning about styles and deviating from style norms", "venue": "arXiv preprint arXiv:1706.07068,", "year": 2017 }, { "authors": [ "I. Higgins", "Nicolas Sonnerat", "Loic Matthey", "A. Pal", "C. Burgess", "Matko Bosnjak", "M. Shanahan", "M. Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "Scan: Learning hierarchical compositional visual concepts", "venue": "arXiv: Machine Learning,", "year": 2018 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei A. 
Rusu", "Loïc Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner" ], "title": "DARLA: improving zero-shot transfer in reinforcement learning", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Insu Jeon", "Wonkwang Lee", "Gunhee Kim" ], "title": "Ib-gan: Disentangled representation learning with information bottleneck gan, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "35th International Conference on Machine Learning, ICML 2018,", "year": 2018 }, { "authors": [ "Abhishek Kumar", "P. Sattigeri", "A. Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "Brenden M. Lake", "Tomer D. Ullman", "Joshua B. Tenenbaum", "Samuel J. Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Guillaume Lample", "Neil Zeghidour", "Nicolas Usunier", "Antoine Bordes", "Ludovic DENOYER" ], "title": "Fader networks:manipulating images by sliding attributes", "venue": "NIPS 30,", "year": 2017 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucie", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Romain Lopez", "Jeffrey Regier", "N. Yosef", "Michael I. 
Jordan" ], "title": "Information constraints on auto-encoding variational bayes", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset", "venue": null, "year": 2017 }, { "authors": [ "J. Peters", "D. Janzing", "B. Schölkopf" ], "title": "Elements of causal inference: Foundations and learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "K. Ridgeway" ], "title": "A survey of inductive biases for factorial representation-learning", "venue": "ArXiv,", "year": 2016 }, { "authors": [ "Michal Rolinek", "Dominik Zietlow", "G. Martius" ], "title": "Variational autoencoders pursue pca directions (by accident)", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Andrew M Saxe", "Yamini Bansal", "Joel Dapello", "Madhu Advani", "Artemy Kolchinsky", "Brendan D Tracey", "David D Cox" ], "title": "On the information bottleneck theory of deep learning", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning factorial codes by predictability minimization", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "B. Schölkopf", "D. Janzing", "J. Peters", "Eleni Sgouritsa", "K. Zhang", "J. 
Mooij" ], "title": "On causal and anticausal learning", "venue": "In ICML,", "year": 2012 }, { "authors": [ "Satosi Watanabe" ], "title": "Information theoretical analysis of multivariate correlation", "venue": "IBM Journal of research and development,", "year": 1960 }, { "authors": [ "Yizhe Zhu", "Mohamed Elhoseiny", "Bingchen Liu", "Xi Peng", "Ahmed Elgammal" ], "title": "A generative adversarial approach for zero-shot learning from noisy texts", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yizhe Zhu", "Jianwen Xie", "Bingchen Liu", "Ahmed Elgammal" ], "title": "Learning feature-to-feature translator by alternating back-propagation for zero-shot learning", "venue": null, "year": 1904 }, { "authors": [ "The basic architecture for all experiments follows the settings on Locatello" ], "title": "The hyperparameters of our proposed methods are listed in Tab", "venue": "1. Tab. 3 and 2 show the measured thresholds of the intrinsic action sequences.", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The basis of artificial intelligence is to understand and reason about the world based on a limited set of observations. Unsupervised disentanglement learning is highly desirable due to its similarity to the way we as humans think. For instance, we can infer the movement of a running ball based on a single glance. This is because the human brain is capable of disentangling the positions from a set of images. It has been suggested that a disentangled representation is helpful for a large variety of downstream tasks (Schölkopf et al., 2012; Peters et al., 2017). According to Kim & Mnih (2018), a disentangled representation promotes interpretable semantic information. That brings substantial advancement, including but not limited to reducing the performance gap between humans and AI approaches (Lake et al., 2017; Higgins et al., 2018). Other instances of disentangled representation include semantic image understanding and generation (Lample et al., 2017; Zhu et al., 2018; Elgammal et al., 2017), zero-shot learning (Zhu et al., 2019), and reinforcement learning (Higgins et al., 2017b). Despite the advantages of disentangled representation approaches, two issues remain to be addressed: the abstract notion and the weak explanations.\nNotion The conception of disentangling factors of variation was first proposed in 2013. It is claimed in Bengio et al. (2013) that, for observations, the considered factors should be explanatory and independent of each other. The explanatory factors are, however, hard to formalize and measure. An alternative way is to disentangle the ground-truth factors (Ridgeway, 2016; Do & Tran, 2020). However, if we consider the uniqueness of the ground-truth factors, the question that arises is how to discover them from multiple equivalent representations. As the proverb goes, “one cannot make bricks without straw”; Locatello et al. 
(2019) prove the impossibility of disentangling factors without the help of inductive biases in the unsupervised setting.\nExplanation There are mainly two types of explanations for unsupervised disentanglement: the information bottleneck and the independence assumption. The ground-truth factors affect the data independently; therefore, the disentangled representations must follow the same structure. The approaches holding the independence assumption encourage independence between the latent variables (Schmidhuber, 1992; Chen et al., 2018; Kim & Mnih, 2018; Kumar et al., 2018; Lopez et al., 2018). However, real-world problems impose no strict constraint on the independence assumption, and the factors may be correlated. The other explanation incorporates information theory into disentanglement. Burgess et al.; Higgins et al.; Insu Jeon et al.; Saxe et al. suggest that a limit on the capacity of the latent information channel promotes disentanglement by forcing the model to acquire the most significant latent representation. They further hypothesize that the information bottleneck forces the model to allocate capacity to the factors that yield the most significant improvement.\nIn this paper, we first demonstrate that the disentangling approaches learn, instead of the ground-truth factors, translation actions that depend on the orientation of the images. We then propose the concept of disentangling actions, which discovers the commonalities between the images and categorizes them into sequences. We treat disentangling action sequences as a necessary step toward disentangling factors; it can capture the internal relationships in the data and makes it possible to analyze the inductive biases from the data perspective. Furthermore, the results on a toy example show that the significance of actions is positively correlated with the threshold of latent information. Then, we extend that conclusion to complex problems. 
Our contributions are summarized in the following:\n• We show that the significance of an action is related to the capacity of the learned latent information, resulting in different thresholds for the factors.\n• We propose a novel framework, the fractional variational autoencoder (FVAE), to extract explanatory action sequences step-by-step; at each step, it learns specific actions by blocking the others’ information.\nWe organize the rest of this paper as follows. Sec.2 describes the development of unsupervised disentanglement learning and the proposed methods based on VAEs. In Sec.3, through an example, we show that the disentangled representations are relative to the data itself and further introduce a novel concept, i.e., disentangling action sequences. Then, we investigate the inductive biases on the data and find that a significant action has a high threshold of latent information. In Sec.4, we propose a step-by-step disentangling framework, namely the fractional VAE (FVAE), to disentangle action sequences. For the labelled and unlabelled tasks, we introduce two methods to measure their thresholds, respectively. We then evaluate FVAE on a labelled dataset (dSprites, Matthey et al. (2017)) and an unlabelled dataset (3D Chairs, Aubry et al. (2014)). Finally, we conclude the paper and discuss future work in Sec.5." }, { "heading": "2 UNSUPERVISED DISENTANGLEMENT LEARNING", "text": "We first introduce the abstract concepts and the basic definitions, followed by the explanations based on information theory and other related work. This article focuses on the information-theoretic explanation and the proposed models based on VAEs." }, { "heading": "2.1 THE CONCEPT", "text": "Disentanglement learning is fascinating and challenging because of its intrinsic similarity to human intelligence. As depicted in the seminal paper by Bengio et al., humans can understand and reason from a complex observation to the explanatory factors. 
A common modeling assumption of disentanglement learning is that the observed data is generated by a set of ground-truth factors. Usually, the data has a high number of dimensions and is hence hard to understand, whereas the factors have a low number of dimensions and are thus simpler and easier to understand. The task of disentanglement learning is to uncover the ground-truth factors. Such factors are invisible to the training process in an unsupervised setting. The invisibility of factors makes it hard to define and measure disentanglement (Do & Tran, 2020).\nFurthermore, it is shown in Locatello et al. (2019) that it is impossible to disentangle the underlying factors of arbitrary generative models in an unsupervised manner without inductive biases. In particular, they suggest that the inductive biases on the models and the data should be exploited. However, they do not provide a formal definition of the inductive bias, and such a definition is still unavailable." }, { "heading": "2.2 INFORMATION BOTTLENECK", "text": "Most of the dominant disentangling approaches are variants of the variational autoencoder (VAE), a popular generative model that assumes the latent variables obey a specific prior (a normal distribution in practice). The key idea of the VAE is to maximize the likelihood objective via the following approximation:\nL(θ,φ;x,z)=Eqφ(z|x)[logpθ(x|z)]−DKL(qφ(z|x)||p(z)), (1)\nwhich is known as the evidence lower bound (ELBO), where the conditional probabilities pθ(x|z) and qφ(z|x) are parameterized with deep neural networks.\nHiggins et al. find that the KL term of VAEs encourages disentanglement and introduce a hyperparameter β in front of the KL term. They propose the β-VAE, maximizing the following expression:\nL(θ,φ;x,z)=Eqφ(z|x)[logpθ(x|z)]−βDKL(qφ(z|x)||p(z)). (2)\nβ controls the pressure for the posterior qφ(z|x) to match the factorized unit Gaussian prior p(z). 
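To make the objectives above concrete, here is a minimal NumPy sketch (an illustration, not the authors' implementation) of the β-VAE objective of Eq. (2) for a diagonal Gaussian posterior, where the KL term against the unit Gaussian prior has a closed form:

```python
import numpy as np

def kl_to_unit_gaussian(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def beta_vae_objective(recon_log_lik, mu, log_var, beta=1.0):
    """Eq. (2): reconstruction log-likelihood minus the beta-weighted KL, batch-averaged."""
    return np.mean(recon_log_lik - beta * kl_to_unit_gaussian(mu, log_var))
```

Setting beta = 1 recovers the standard ELBO of Eq. (1); a larger beta penalizes the KL term more heavily and tightens the information bottleneck.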
Higher values of β lead to a lower implicit capacity of the latent information and ambiguous reconstructions. Burgess et al. propose the Annealed-VAE, which progressively increases the information capacity of the latent code during training:\nL(θ,φ;x,z,C)=Eqφ(z|x)[logpθ(x|z)]−γ|DKL(qφ(z|x)||p(z))−C| (3)\nwhere γ is a constant large enough to constrain the latent information, and C is a value gradually increased from zero to a large number to produce high reconstruction quality. As the total information bottleneck gradually increases, they hypothesize that the model allocates capacity to the encoding axes of the factors that most improve the reconstruction log-likelihood. However, they did not explain why each factor makes a different contribution to the reconstruction log-likelihood." }, { "heading": "2.3 OTHER RELATED WORK", "text": "The other dominant direction starts from the prior over the factors. These methods assume that the ground-truth factors are independent of each other and enforce the latent variables to have the same structure as the factors. FactorVAE (Kim & Mnih, 2018) applies a discriminator to approximately calculate the total correlation (TC, Watanabe (1960)); β-TCVAE (Chen et al., 2018) promotes the TC penalty by decomposing the KL term; DIP-VAE (Kumar et al., 2018) regularizes the covariance matrix of q(z)." }, { "heading": "3 DISENTANGLING ACTION SEQUENCES", "text": "For machines and humans, disentangling the underlying factors is a challenging task. For instance, there are more than 1,400 breeds of dogs in the world, and it seems impossible for an ordinary person to distinguish all of them just by looking at their pictures. The challenge in disentangling the underlying factors is mainly due to the complexity of establishing relationships without supervision, where the corresponding models should contain some level of prior knowledge or inductive biases. 
However, it is possible to determine the differences without extensive knowledge. For example, one may mistakenly identify the breed of a dog from a picture, but it is almost impossible to misrecognize a dog as a cat. Therefore, in practice, discovering differences or similarities is often a much easier task than uncovering the underlying factors, and it also does not need much prior knowledge. One may conclude that discovering the commonalities between things is an important step toward disentanglement." }, { "heading": "3.1 ACTION SEQUENCES", "text": "Action: a continuous set of images along a certain direction of variation.\nThe observed data consists of a set of actions such as rotation, translation, and scaling. Such actions are meaningful and underlying. Therefore, the goal of disentanglement learning is to separate the underlying actions of an observation. Usually, continuous actions are infeasible for a machine learning system, and we have to convert them into discrete and consistent sequences.\nGiven a dataset S, we assume it consists of parameterized action sequences, and each action sequence is a subset of the dataset. We can model a parameterized action sequence as:\nSi(mj, j≠i)=Tmj, j≠i({t}), (4)\nwhere Si∈S denotes an action sequence, i denotes the type of action, m denotes a parameter vector, T denotes a transformation, and t denotes the step of the action. The procedure is similar to the latent traversal in Higgins et al. (2017a). Differently, action sequences describe both the real observed data and the reconstructed data: for the real observed data, m is the ground-truth factor vector and T is the generating function; for the reconstructed data, m is the latent variable vector and T is a neural network. 
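As a concrete illustration of Eq. (4) (a NumPy sketch with assumed toy parameters, not the paper's data-generation code), one action sequence can be produced by holding all factors but one fixed and stepping through t:

```python
import numpy as np

def action_sequence(canvas=16, rect=(3, 2), y_fixed=5, steps=8):
    """One parameterized action sequence, Eq. (4): translate a rectangle
    along x while the other factor (the y position) is held fixed.
    Returns an array of shape (steps, canvas, canvas)."""
    h, w = rect
    frames = []
    for t in range(steps):                       # t: the step of the action
        img = np.zeros((canvas, canvas))
        img[y_fixed:y_fixed + h, t:t + w] = 1.0  # T_{m_j}: place the rectangle at step t
        frames.append(img)
    return np.stack(frames)
```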
For clarity, we denote generating actions as sequences of images controlled by the ground-truth factors and action sequences as the approximations of these actions generated by a neural network.\nHowever, neural networks are flexible enough to approximate any actions, and it is tricky to infer and learn the generating actions. The marvelous thing about disentanglement learning is that the models seem to learn these actions unsupervisedly. A popular view of disentanglement holds that the ground-truth factors should be separated into independent latent variables (Locatello et al. (2019); Chen et al. (2018)). That means the learned action sequences have to match the generating actions precisely. However, Rolinek et al. (2019) show that VAEs exhibit PCA-like behavior and prefer to extract the principal components. Hence, the models may select action sequences with larger variations and learn these significant action sequences rather than the generating actions. With VAEs, minimizing the objective increases the overlap between the posterior distributions across the dataset (Burgess et al. (2018)), which then leads to the learned action sequences showing some internal relationships. Although Locatello et al. (2019) suggest that the inductive biases on both the models and the data should be exploited, current research focuses on the role of the models in disentanglement (Burgess et al. (2018); Chen et al. (2018); Rolinek et al. (2019))." }, { "heading": "3.2 INDUCTIVE BIAS ON THE DATA", "text": "We believe the data itself contains vital information for disentanglement. In other words, keeping the applied model the same, the learned representations change in response to modifications of the data. Therefore, there exist clues inside the data that guide the models to disentangle. 
We create a toy dataset family: each dataset contains 40x40 images generated from an original image of an 11x5 rectangle by translating it on a 64x64 canvas. Each image in the dataset has a unique label (position of X, position of Y) describing the ground-truth factors. In this dataset family, there are two variables: the orientation of the rectangle and the way to determine the two factors. There are infinitely many solutions for determining these two factors; the polar coordinate system and the Cartesian coordinate system are the most common solutions. We then create a baseline dataset, A1, with a horizontal rectangle in the Cartesian coordinate system and obtain its variants. A2 differs in the positions, determined by the polar coordinate system, and A3 differs in the orientation (45 degrees) of the images. A4 uses the results of dSprites, and we only show the rotation parts w.r.t. three shapes. For the experiment settings in this section, we choose the well-examined baseline model, β-VAE (β=50), and the other settings, such as the backbone network, learning rate, and optimizer, follow the settings in Locatello et al. (2019).\nAs shown in Fig. 1, we visualize the learned representations in the latent space. One can see that A1 and A3 have the same generating actions but differ in the orientation of the rectangles, and the difference of the rectangles causes a rotation of the learned representations; the model fails to learn the generating actions, but the learned action sequences are explanatory and similar to A1’s (see A.4). We argue that current approaches do not guarantee separating the ground-truth factors into isolated dimensions (the same conclusion as Locatello et al. (2019)), because A2 and A3 fail to learn the generating action sequences. However, the data determine the learned action sequences (A1, A3). 
One can see that the invariant across A1, A2, and A3 is that they have learned two actions: moving along the direction of the long side of the rectangle and along the orthogonal direction (see Fig. 10)." }, { "heading": "3.3 SIGNIFICANCE OF ACTION SEQUENCES", "text": "It is suggested in Burgess et al. (2018) that the underlying components make different contributions to the objective function. Rolinek et al. (2019) showed that VAEs exhibit PCA-like behavior. However, they did not explicitly exploit the inductive biases on the data. Therefore, we use entropy to measure the information in an action:\nH(S)=−∫x∈S p(x)log(p(x))dx, (5)\nwhere S is an action, x is an image belonging to this action, and p(x) is the probability of the image occurring in this action. However, this formula cannot be applied directly. For the discrete situation, we assume each image is a sample from an action distribution that obeys the Gaussian distribution N(µ,σ²), and we use X̄ to estimate the action distribution. Hence, we obtain the approximate entropy by sampling:\nH(S′)=−(1/N) Σxi∈S′ log((1/(σ√2π)) exp(−(xi−X̄)²/(2σ²))), (6)\nwhere S′ is the sample set of an action and X̄=(1/N) Σxi∈S′ xi.\nIn this part, we build a new translation family (A5) with two controllable parameters θ and L, where θ is the orientation of the rectangle and L is the translating distance of the rectangle. A5 has only one factor and indicates an action moving from left to right, and θ and L control the entropy of this action.\nAs shown in Fig. 3(a), a high value of L means a long traveling area and a small overlap, which gives large entropy. If the value of L is small enough, the images in the sequence are almost the same, and the action has small entropy, i.e., it is insignificant. Similarly, the action has more overlap when θ=90. Note that the maximum for both is reached when θ=0 or 180 and L is at its maximum. To measure how much information the model has learned, we select the KL divergence to indicate the latent information. 
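The estimate in Eq. (6) can be sketched as follows (a NumPy illustration; applying the Gaussian per pixel and treating σ as a fixed hyperparameter are our assumptions):

```python
import numpy as np

def action_entropy(images, sigma=0.1):
    """Approximate entropy of an action sequence, Eq. (6).

    images: array of shape (N, ...), N frames sampled from one action S'.
    Each frame is scored under N(X_bar, sigma^2) per pixel, and the negative
    log-likelihoods are averaged over the N samples."""
    x = images.reshape(len(images), -1)
    x_bar = x.mean(axis=0)                               # sample mean image X_bar
    log_p = (-np.log(sigma * np.sqrt(2.0 * np.pi))
             - (x - x_bar) ** 2 / (2.0 * sigma ** 2))    # per-pixel Gaussian log-density
    return -log_p.sum(axis=1).mean()                     # averaged negative log-likelihood
```

Sequences whose frames barely differ concentrate near X̄ and receive low entropy, while sequences with large motion receive high entropy, matching the trend discussed for Fig. 3.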
The experimental results in Fig. 3(b) reveal a similar trend between the entropy and the KL divergence. Therefore, a higher significance of the action results in more information in the latent variables. In contrast, as the pressure on the KL term gradually increases, the latent information decreases until it reaches zero. One can infer that there exists a critical point at which the model starts to learn information from the action. We call this the threshold of latent information. We hypothesize that the significance of actions and the latent information thresholds are positively correlated (A.5)." }, { "heading": "4 PLAIN IMPLEMENTATION ACCORDING TO THRESHOLDS", "text": "We have discussed the situation with only one action, and there are various thresholds for actions of different significance. In particular, if β is large enough, the information of the insignificant actions will be blocked, and the problem decays to a single-factor discovery problem. From the modeling perspective, the learning process is similar to a single-action learning problem. However, the difficulty of disentanglement is that different kinds of ground-truth actions are mixed, and a single fixed parameter β is unable to separate them. Therefore, the plain idea is to set different thresholds for the learning phases, and then in each phase, we enforce the model to learn specific actions by blocking the information of the secondarily significant actions. We propose a fractional variational autoencoder (FVAE), which disentangles the action sequences step-by-step. The architecture of FVAE is shown in Fig. 4(a). The encoder consists of several groups of sub-encoders, and the input of the decoder is the concatenated codes of all sub-encoders. Besides, to prevent re-entangling the learned actions (see Fig. 5), we set different learning rates for the sub-encoders: the learning rate for the rest of the N-1 groups is reduced, which prevents the model from reallocating the learned codes. 
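The two mechanisms just described can be sketched as follows (an illustrative NumPy sketch; the learning-rate scale factor is our assumption, not a value from the paper):

```python
import numpy as np

def fvae_stage_lrs(n_groups, active_group, base_lr=1e-3, frozen_scale=1e-2):
    """Per-sub-encoder learning rates for one FVAE training stage: the group
    trained in this stage gets the full rate, while the rest of the N-1 groups
    get a strongly reduced rate to avoid re-entangling the learned actions."""
    lrs = np.full(n_groups, base_lr * frozen_scale)
    lrs[active_group] = base_lr
    return lrs

def decoder_input(codes):
    """The decoder input is the concatenation of all sub-encoder codes (Fig. 4(a))."""
    return np.concatenate(codes, axis=-1)
```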
The training process of FVAE is similar to a common operation in chemistry for separating mixtures: distillation. To separate a mixture of liquids with different boiling points, we repeat the heating process, and in each step the heating temperature is set so that only one component is collected.\nDiscussion Although AnnealedVAE follows the same principles as FVAE, it differs in the interpretation of the effects of β, and it does not explicitly prevent mixing the factors. Moreover, the performance of AnnealedVAE depends on the choice of hyperparameters in practice (Locatello et al. (2019)). "A large enough value" is hard to determine, and the disentangled representation is re-entangled for an extremely large C. To address this issue, here we introduce two methods to determine the thresholds in each phase, for the labelled and unlabelled tasks, respectively." }, { "heading": "4.1 LABELLED TASK", "text": "For the labelled setting, we focus on one type of action and clip the rest of them at first. However, the samples of one action are usually insufficient. For example, there are only three types of shapes in dSprites. Besides, the label information may be corrupted, and only parts of the dataset may be labelled.\nTo address these issues, we introduce the architecture shown in Fig. 4(b), in which the label information except for the target actions is directly provided to the decoder. We evaluate FVAE on the dSprites dataset (involving five actions: translating along x and y, rotating, scaling, and shaping). We first measure the threshold of each action, and the result is shown in Fig. 6(a). One can see that the thresholds of translating and scaling are higher than the others. This suggests that these actions are significant and easy to disentangle. This is in line with the results in Burgess et al. (2018); Higgins et al. (2017a).\nAccording to these thresholds, we then arrange three stages for dSprites. 
At each stage, we set β to a value larger than the threshold of the secondarily significant action. The pressure on the KL term then prevents the insignificant actions from being disentangled and ensures that the model only learns from the information of the target action. The training of each stage can be found in the Appendix. As shown in Fig. 7(a), the translation factor is easily disentangled first, while it is hard to distinguish shape, orientation, and scale. Gradually, scaling and orientation also emerge in order. Nevertheless, it should be noted that shape is still hard to separate. This could be attributed to the lack of commonalities between the three shapes in dSprites and the lack of motion compensation for a smooth transition. In other words, in terms of shape, the lack of intermediate states between different shapes is an inevitable hurdle for its disentanglement. Fig. 7 shows a more substantial difference between β-VAE and FVAE: β-VAE has an unstable performance compared to FVAE, and position information entangles with orientation in some dimensions." }, { "heading": "4.2 UNLABELLED TASK", "text": "For the unlabelled setting, we introduce the annealing test to detect the potential components. In the beginning, a very large value of β is set to ensure that no action is learned. Then, we gradually decrease β to disentangle the significant actions. There exists a critical point at which the latent information starts increasing, and that point approximates the threshold of the corresponding action.\n3D Chairs is an unlabelled dataset containing 1394 3D models from the Internet. Fig. 6(b) shows the result of the annealing test on 3D Chairs. One can recognize three points where the latent information suddenly increases: 60, 20, 4. Therefore, we arrange a three-stage training process for 3D Chairs (more details in the Appendix). As shown in Fig. 7(b), one can see the change of azimuth in the first stage. 
In the second stage, one can see the change of size, and in the third stage, one can see the change of the leg style, the backrest, and the material of the chair." }, { "heading": "5 CONCLUSION", "text": "We demonstrated an example of the effects of image orientation on the disentangled representations. We further investigated the inductive biases on the data by introducing the concept of disentangling action sequences, and we regarded that as discovering the commonalities between things, which is essential for disentanglement. The experimental results revealed that actions with higher significance have larger thresholds of latent information. We further proposed the fractional variational autoencoder (FVAE) to disentangle action sequences of different significance step-by-step. We then evaluated the performance of FVAE on dSprites and 3D Chairs. The results suggested robust disentanglement in which re-entangling is prevented.\nThis paper proposed a novel tool to study the inductive biases through action sequences. However, other properties of the inductive biases on the data remain to be exploited. The current work focuses on an alternative explanation for disentanglement from the perspective of information theory. In the future, the influence of independence on disentanglement requires further investigation." }, { "heading": "A APPENDIX", "text": "A.1 DATASETS\nA.2 DSPRITES AND 3D CHAIRS\nA.3 TRAINING DETAILS\nThe basic architecture for all experiments follows the settings in Locatello et al. (2019). The hyperparameters of our proposed methods are listed in Tab. 1. Tab. 3 and 2 show the measured thresholds of the intrinsic action sequences.\nA.4 LEARNED ACTION SEQUENCES\nFig. 10 shows the supplemental results of the experiments in Sec. 3.2.\nA.5 THE HYPOTHESIS OF KL\nWe hypothesize that the KL divergence is inversely related to β and positively proportional to H:\nKL=H(S)/(β²+C1)·C2, (7)\nwhere C1 and C2 are constants. We examine this in Fig. 11.\nA.6 SAMPLES" } ]
2020
DISENTANGLING ACTION SEQUENCES: FINDING COR-
SP:ad4ab0f3fa32fd60cf01ee69259206d6d6f4ed22
[ "The paper concerns model-based meta-RL. It exploits the fact that meta-RL can be formulated as POMDP in which the task indicator is part of the (unobserved) hidden state. Thus, the paper effectively analyzes and proposes model-based algorithms for POMDPs. The paper bounds the gap between the expected reward of a policy in the actual POMDP and the estimated model and then theoretically shows that this gap can be reduced when using dyna-style / branched rollouts instead of full rollouts under the learned model. Motivated by this finding, the paper proposes a Dyna-like algorithm for POMDPs. In the experimental evaluation, the paper compares its proposed method, M3PO, to two recent meta-RL approaches in a range of meta-RL environments for continuous control.", "This paper focus on model-based RL on a POMDP setting (they call it \"meta RL\"), where the policy and model need to infer the current hidden state according to history. It provides a theoretical relation between true environment returns and the returns from learned models in a POMDP setting. And it also provides a practical algorithm called M3PO and shows this algorithm is more sample efficient than some meta-RL baselines in some continuous control tasks." ]
[ "Model-based reinforcement learning (MBRL) has been applied to meta-learning settings and has demonstrated its high sample efficiency. However, in previous MBRL for meta-learning settings, policies are optimized via rollouts that fully rely on a predictive model of an environment. Thus, their performance in a real environment tends to degrade when the predictive model is inaccurate. In this paper, we prove that this performance degradation can be suppressed by using branched meta-rollouts. On the basis of this theoretical analysis, we propose Meta-Model-based Meta-Policy Optimization (M3PO), in which the branched meta-rollouts are used for policy optimization. We demonstrate that M3PO outperforms existing meta reinforcement learning methods in continuous-control benchmarks.
[]
[ { "authors": [ "Maruan Al-Shedivat", "Trapit Bansal", "Yuri Burda", "Ilya Sutskever", "Igor Mordatch", "Pieter Abbeel" ], "title": "Continuous adaptation via meta-learning in nonstationary and competitive environments", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": "In Proc. EMNLP,", "year": 2014 }, { "authors": [ "Ignasi Clavera", "Jonas Rothfuss", "John Schulman", "Yasuhiro Fujita", "Tamim Asfour", "Pieter Abbeel" ], "title": "Model-based reinforcement learning via meta-policy optimization", "venue": "In Proc. CoRL,", "year": 2017 }, { "authors": [ "Vladimir Feinberg", "Alvin Wan", "Ion Stoica", "Michael I. Jordan", "Joseph E. Gonzalez", "Sergey Levine" ], "title": "Model-based value expansion for efficient model-free reinforcement learning", "venue": "In Proc. ICML,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proc. ICML,", "year": 2017 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "In Proc. 
ICRA,", "year": 2017 }, { "authors": [ "Abhishek Gupta", "Russell Mendonca", "YuXuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Metareinforcement learning of structured exploration strategies", "venue": "In Proc. NeurIPS,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proc. ICML,", "year": 2018 }, { "authors": [ "Mikael Henaff" ], "title": "Explicit explore-exploit algorithms in continuous state spaces", "venue": "In Proc. NeurIPS,", "year": 2019 }, { "authors": [ "Takuya Hiraoka", "Takahisa Imagawa", "Tatsuya Mori", "Takashi Onishi", "Yoshimasa Tsuruoka" ], "title": "Learning robust options by conditional value at risk optimization", "venue": "In Proc. NeurIPS,", "year": 2019 }, { "authors": [ "Maximilian Igl", "Luisa Zintgraf", "Tuan Anh Le", "Frank Wood", "Shimon Whiteson" ], "title": "Deep variational reinforcement learning for POMDPs", "venue": "In Proc. ICML,", "year": 2018 }, { "authors": [ "Allan Jabri", "Kyle Hsu", "Abhishek Gupta", "Ben Eysenbach", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised curricula for visual meta-reinforcement learning", "venue": "In Proc. NeurIPS,", "year": 2019 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "In Proc. NeurIPS,", "year": 2019 }, { "authors": [ "Taylor W Killian", "Samuel Daulton", "George Konidaris", "Finale Doshi-Velez" ], "title": "Robust and efficient transfer learning with hidden parameter Markov decision processes", "venue": "In Proc. 
NIPS,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2015 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Proc. NIPS,", "year": 2017 }, { "authors": [ "Alex X. Lee", "Anusha Nagabandi", "Pieter Abbeel", "Sergey Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": null, "year": 1907 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Yuanzhi Li", "Yuandong Tian", "Trevor Darrell", "Tengyu Ma" ], "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "Russell Mendonca", "Abhishek Gupta", "Rosen Kralev", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Guided meta-policy search", "venue": "In Proc. NeurIPS,", "year": 2019 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearner", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "A. Nagabandi", "G. Kahn", "R.S. Fearing", "S. Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "In Proc. 
ICRA,", "year": 2018 }, { "authors": [ "Anusha Nagabandi", "Ignasi Clavera", "Simin Liu", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to adapt in dynamic, real-world environments via meta-reinforcement learning", "venue": "In Proc. ICLR,", "year": 2019 }, { "authors": [ "Anusha Nagabandi", "Chelsea Finn", "Sergey Levine" ], "title": "Deep online learning via meta-learning: Continual adaptation for model-based RL", "venue": "In Proc. ICLR,", "year": 2019 }, { "authors": [ "Christian F Perez", "Felipe Petroski Such", "Theofanis Karaletsos" ], "title": "Generalized hidden parameter MDPs transferable model-based RL in a handful of trials", "venue": "In Proc. AAAI,", "year": 2020 }, { "authors": [ "Aravind Rajeswaran", "Sarvjeet Ghotra", "Sergey Levine", "Balaraman Ravindran" ], "title": "EPOpt: Learning Robust Neural Network Policies Using Model Ensembles", "venue": "In Proc. ICLR,", "year": 2017 }, { "authors": [ "Aravind Rajeswaran", "Igor Mordatch", "Vikash Kumar" ], "title": "A game theoretic framework for model based reinforcement learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "In Proc. ICML,", "year": 2019 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "Jonas Rothfuss", "Dennis Lee", "Ignasi Clavera", "Tamim Asfour", "Pieter Abbeel" ], "title": "Promp: Proximal meta-policy search", "venue": "In Proc. 
ICLR,", "year": 2019 }, { "authors": [ "Steindór Sæmundsson", "Katja Hofmann", "Marc Peter Deisenroth" ], "title": "Meta reinforcement learning with latent variable Gaussian processes", "venue": "arXiv preprint arXiv:1803.07551,", "year": 2018 }, { "authors": [ "Juergen Schmidhuber", "Jieyu Zhao", "MA Wiering" ], "title": "Simple principles of metalearning", "venue": "Technical report IDSIA,", "year": 1996 }, { "authors": [ "David Silver", "Joel Veness" ], "title": "Monte-Carlo planning in large POMDPs", "venue": "In Proc. NIPS,", "year": 2010 }, { "authors": [ "Bradly C Stadie", "Ge Yang", "Rein Houthooft", "Xi Chen", "Yan Duan", "Yuhuai Wu", "Pieter Abbeel", "Ilya Sutskever" ], "title": "Some considerations on learning to explore via meta-reinforcement learning", "venue": "arXiv preprint arXiv:1803.01118,", "year": 2018 }, { "authors": [ "Wen Sun" ], "title": "Towards Generalization and Efficiency in Reinforcement Learning", "venue": "PhD thesis,", "year": 2019 }, { "authors": [ "Wen Sun", "Nan Jiang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford" ], "title": "Model-based RL in contextual decision processes: PAC bounds and exponential improvements over model-free approaches", "venue": "In Proc. COLT,", "year": 2019 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to learn: Introduction and overview", "venue": "In science & business media,", "year": 1998 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In Proc. 
IROS,", "year": 2012 }, { "authors": [ "Jane X Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z Leibo", "Remi Munos", "Charles Blundell", "Dharshan Kumaran", "Matt Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arXiv preprint arXiv:1611.05763,", "year": 2016 }, { "authors": [ "Grady Williams", "Andrew Aldrich", "Evangelos Theodorou" ], "title": "Model predictive path integral control using covariance variable importance sampling", "venue": "arXiv preprint arXiv:1509.01149,", "year": 2015 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": "In Proc. CoRL,", "year": 2019 }, { "authors": [ "Luisa Zintgraf", "Kyriacos Shiarlis", "Maximilian Igl", "Sebastian Schulze", "Yarin Gal", "Katja Hofmann", "Shimon Whiteson" ], "title": "VariBAD: A very good method for Bayes-adaptive deep RL via metalearning", "venue": "In Proc. ICLR,", "year": 2020 }, { "authors": [ "Janner" ], "title": "2019)), they do not take the use of the replay buffers into account in the their theoretical analysis", "venue": "Dmodel in Algorithm", "year": 2019 }, { "authors": [ "Janner" ], "title": "dance with a current policy π under a predictive model pθ", "venue": null, "year": 2019 }, { "authors": [ "Perez" ], "title": "M3PO. The figures show learning curves of GHP-MDP and M3PO. In each figure, the vertical axis represents expected returns and the horizontal axis represents the number of training samples (x1000). GHP-MDP was evaluated in two trials, and each trial was run for three days in real-times", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "Model-based reinforcement learning (MBRL) has been applied to meta-learning settings and has demonstrated its high sample efficiency. However, in previous MBRL for meta-learning settings, policies are optimized via rollouts that fully rely on a predictive model of an environment. Thus, its performance in a real environment tends to degrade when the predictive model is inaccurate. In this paper, we prove that performance degradation can be suppressed by using branched meta-rollouts. On the basis of this theoretical analysis, we propose Meta-Modelbased Meta-Policy Optimization (M3PO), in which the branched meta-rollouts are used for policy optimization. We demonstrate that M3PO outperforms existing meta reinforcement learning methods in continuous-control benchmarks." }, { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) methods have achieved remarkable success in many decision-making tasks, such as playing video games or controlling robots (e.g., Gu et al. (2017); Mnih et al. (2015)). In conventional RL methods, when multiple tasks are to be solved, a policy is independently learned for individual tasks. In general, each learning requires millions of training samples from the environment. This independent learning with a large number of samples prevents conventional RL methods from being applied to practical multi-task problems (e.g., robotic manipulation problems involving grasping or moving different types of objects (Yu et al., 2019)). Meta-learning methods (Schmidhuber et al., 1996; Thrun & Pratt, 1998) have recently gained much attention as a promising solution to this problem (Finn et al., 2017). They learn a structure shared in the tasks by using a large number of samples collected across the parts of the tasks. 
Once learned, these methods can adapt quickly to new (or the rest of the) tasks with a small number of samples given.\nMeta-RL methods have previously been introduced into both model-free and model-based settings. For model-free settings, there are two main types of approaches proposed so far, recurrent-based policy adaptation (Duan et al., 2017; Mishra et al., 2018; Rakelly et al., 2019; Wang et al., 2016) and gradient-based policy adaptation (Al-Shedivat et al., 2018; Finn & Levine, 2018; Finn et al., 2017; Gupta et al., 2018; Rothfuss et al., 2019; Stadie et al., 2018). In these approaches, policies adapt to a new task by leveraging the history of past trajectories. Following previous work (Clavera et al., 2018), we refer to these adaptive policies as meta-policies in our paper. In these modelfree meta-RL methods, in addition to learning control policies, the learning of policy adaptation is also required (Mendonca et al., 2019). Thus, these methods require more training samples than conventional RL methods.\nFor model-based settings, there have been relatively few approaches proposed so far. Sæmundsson et al. (2018) and Perez et al. (2020) use a predictive model (i.e., a transition model) conditioned by a latent variable for model predictive control. Nagabandi et al. (2019a;b) introduced both recurrentbased and gradient-based meta-learning methods into model-based RL. In these approaches, the predictive models adapt to a new task by leveraging the history of past trajectories. In analogy to the meta-policy, we refer to these adaptive predictive models as meta-models in our paper. Generally, these model-based meta-RL approaches are more sample efficient than the model-free approaches. However, in these approaches, the meta-policy (or the course of actions) is optimized via rollouts relying fully on the meta-model. Thus, its performance in a real environment tends to degrade when the meta-model is inaccurate. 
In this paper, we address this performance degradation problem in model-based meta-RL.\nAfter reviewing related work (Section 2) and preliminaries (Section 3), we present our work by first formulating model-based meta-RL (Section 4). Model-based (and model-free) meta-RL settings have typically been formulated as special cases of solving partially observable Markov decision processes (POMDPs) (e.g.,Duan et al. (2017); Killian et al. (2017); Perez et al. (2020)). In these special cases, specific assumptions, such as intra-episode task invariance, are additionally introduced. However, there are model-based meta-RL settings where such assumptions do not hold (e.g., Nagabandi et al. (2019a;b)). To include these settings into our scope, we formulate model-based meta-RL settings as solving POMDPs without introducing such additional assumptions. Then, we conduct theoretical analysis on its performance guarantee (Section 5). We first analyse the performance guarantee in full meta-model-based rollouts, which most of the previous model-based metaRL methods hold. We then introduce the notion of branched meta-rollouts. Branched meta-rollouts are Dyna-style rollouts (Sutton, 1991) in which we can adjust the reliance on the meta-model and real environment data. We show that the performance degradation due to the meta-model error in the branched meta-rollouts is smaller than that in the full meta-model-based rollouts. On the basis of this theoretical analysis, we propose a practical model-based meta-RL method called Meta-Model-based Meta-Policy Optimization (M3PO) where the meta-model is used in the branched rollout manner (Section 6). Finally, we experimentally demonstrate that M3PO outperforms existing methods in continuous-control benchmarks (Section 7).\nWe make the following contributions in both theoretical and empirical frontiers. Theoretical frontier: 1. 
Our work is the first attempt to provide a theoretical relation between learning the metamodel and the real environment performance. In the aforementioned model-based meta-RL literature, it has not been clear how learning the meta-model relates to real environment performance. Our theoretical analysis provides relations between them (Theorems 1, 2 and 3). This result theoretically justifies meta-training a good transition model to improve overall performance in the real environment. 2. Our analysis also reveals that the use of branched meta-rollouts can suppress performance degradation due to meta-model errors. 3. We refine previous fundamental theories proposed by Janner et al. (2019) to consider important premises more properly (Theorems 4 and 5). This modification is important to strictly guarantee the performance especially when the model-rollout length is long. Empirical frontier: We propose and show the effectiveness of M3PO. Notably, we show that M3PO achieves better sample efficiency than existing meta-RL methods in complex tasks, such as controlling humanoids." }, { "heading": "2 RELATED WORK", "text": "In this section, we review related work on POMDPs and theoretical analysis in model-based RL.\nPartially observable Markov decision processes 1: In our paper, we formulate model-based metaRL as solving POMDPs, and provide its performance guarantee under the branched meta-rollout scheme. POMDPs are a long-studied problem (e.g., (Ghavamzadeh et al., 2015; Sun, 2019; Sun et al., 2019)), and many works have discussed a performance guarantee of RL methods to solve POMDPs. However, the performance guarantee of the RL methods based on branched meta-rollouts has not been discussed in the literature. On the other hand, a number of researchers (Igl et al., 2018; Lee et al., 2019; Zintgraf et al., 2020) have proposed model-free RL methods to solve a POMDP without prior knowledge of the accurate model. However, they do not provide theoretical analyses of performance. 
In this work, by contrast, we propose a model-based meta-RL method and provide theoretical analyses on its performance guarantee.\nTheoretical analysis on the performance of model-based RL: Several theoretical analyses on the performance of model-based RL have been provided in previous work (Feinberg et al., 2018; Henaff, 2019; Janner et al., 2019; Luo et al., 2018; Rajeswaran et al., 2020). In these theoretical analyses, standard Markov decision processes (MDPs) are assumed, and the meta-learning (or POMDP) setting is not discussed. In contrast, our work provides a theoretical analysis on the meta-learning (and POMDP) setting, by substantially extending the work of Janner et al. (2019). Specifically, Janner et al. (2019) analysed the performance guarantee of branched rollouts on MDPs, and introduced branched rollouts into a model-based RL algorithm. We extend their analysis and algorithm to a meta-learning (POMDP) case. In addition, we modify their theorems so that important premises\n1We include works on Bayes-adaptive MDPs (Ghavamzadeh et al., 2015; Zintgraf et al., 2020) because they are a special case of POMDPs.\n(e.g., the effect of multiple-model rollout factors) are more properly considered. See A.1 in the appendix for a more detailed discussion of our contribution." }, { "heading": "3 PRELIMINARIES", "text": "Meta reinforcement learning: We assume online adaptation situations (Nagabandi et al., 2019a;b) where the agent can leverage a few samples to adapt to a new task. Here, a task specifies the transition probability and the reward function. Information about task identity cannot be observed by the agent, and the task may change at any step in an episode. A meta-RL process is composed of meta-training and meta-testing. In meta-training, a policy and a predictive model that are prepared for efficient adaptation are learned with a meta-training task set. 
In meta-testing, on the basis of the meta-training result, the policy and the predictive model adapt to a new task. For the adaptation, the trajectory observed from the beginning of the episode to the current time step is leveraged. As we noted earlier, we call an adaptive policy and a predictive model of this sort a meta-policy and a meta-model, respectively.\nPartially observable Markov decision processes: We formalize our problem with a POMDP, which is defined as a tuple 〈O,S,A, pob, r, γ, pst〉. Here, O is a set of observations, S is a set of hidden states, A is a set of actions, pob := O × S × A → [0, 1] is the observation probability, pst := S × S × A → [0, 1] is the state transition probability, r : S × A → R is a reward function and γ ∈ [0, 1) is a discount factor. At time step t, these functions are used as pst(st|st−1, at−1), pob(ot|st, at−1) 2 and rt = r(st, at). The agent cannot directly observe the hidden state, but receives the observation instead. The agent selects an action on the basis of a policy π := p(at+1|ht). Here, ht is a history (the past trajectories) defined as ht := {a0, o0, ..., at, ot}. We denote the set of the histories by H. Given the definition of the history, the history transition probability can be defined as p(ht+1|at+1, ht) := p(ot+1|ht). Here, p(ot+1|ht) := ∑ st+1 ∑ st p(st|ht)p(st+1|st, at)p(ot+1|st+1, at), where p(st|ht) is the belief about the hidden state. The goal of RL in the POMDP is to find the optimal policy π∗ that maximizes the expected return R := ∑∞ t=0 γ\ntrt (i.e., π∗ = arg max π Ea∼π,h∼p [R])." }, { "heading": "4 FORMULATING MODEL-BASED META-RL", "text": "In this section, we formulate model-based meta-RL as solving a POMDP by using a parameterized meta-policy and a meta-model. The outline of our formulation is shown in Figure 4 in the appendix.\nIn our formulation, the task is included in the hidden state: S := T × S ′. Here T is the set of task τ and S ′ is the set of the other hidden state factors s′. 
With this definition, the state transition probability, observation probability and reward function can be defined respectively as follows: p(st+1|st, at) = p(τt+1, s′t+1|τt, s′t, at), p(ot+1|st+1, at) = p(ot+1|τt+1, s′t+1, at) and r(st, at) = r(τt, s′t, at). In addition, as with Finn & Levine (2018); Finn et al. (2017); Rakelly et al. (2019), we assume that the task set T and the initial task distribution p(τ0) do not change in meta-training and meta-testing. Owing to this assumption, in our analysis and algorithm, meta-training and meta-testing can be seen as identical. Note that the task can change during an episode. Namely, the value of τt+1 is not necessarily equal to that of τt.\nWe define a meta-policy and meta-model as πφ(at+1|ht) and pθ(rt, ot+1|ht), respectively. Here, φ and θ are learnable parameters for them. rt and ot+1 are assumed to be conditionally independent given ht, i.e., pθ(rt, ot+1|ht) = pθ(rt|ht) · pθ(ot+1|ht). As with p(ht+1|at+1, ht), the meta-model for the history can be defined as pθ(ht+1|at+1, ht) := pθ(ot+1|ht). We use the parameterized meta-model and meta-policy as shown in Algorithm 1. This algorithm is composed of 1) data collection in the real environment and 2) optimization of the meta-policy and meta-model. In 1), the data is collected from the real environment with the meta-policy and stored into a dataset D. In 2), the meta-policy and meta-model are optimized to maximize Ea∼πφ,r,h∼pθ [R] − C(ε_m(θ), ε_π(φ)). Here, Ea∼πφ,r,h∼pθ [R] is a meta-model return (the return of the meta-policy on the meta-model) 3. C(ε_m(θ), ε_π(φ)) is a discrepancy depending on the two error quantities ε_m and ε_π. Their detailed definitions are introduced in the next section.\n2For simplicity, we use these probabilities by abbreviating the subscripts “st” and “ob.”\nAlgorithm 1 Abstract Meta Model-based Meta-Policy Optimization (Abstract M3PO)\n1: Initialize meta-policy πφ, meta-model pθ and dataset D.\n2: for N epochs do\n3: Collect trajectories from environment in accordance with πφ: D = D ∪ {(ht, ot+1, rt)}.\n4: Optimize πφ and pθ: (φ, θ) ← arg max(φ,θ) Ea∼πφ,r,h∼pθ [R] − C(ε_m(θ), ε_π(φ)). Here, D is used to evaluate ε_m(θ) and ε_π(φ) and generate an initial history h0.\n5: end for\nOur POMDP-based formulation covers a wide range of meta-RL settings including Bayes-adaptive MDPs (Zintgraf et al., 2020), hidden parameter-MDPs (Killian et al., 2017; Perez et al., 2020), parameterized MDPs (Duan et al., 2017; Finn & Levine, 2018), and the settings considered by Nagabandi et al. (2019a;b) and Jabri et al. (2019). These settings can be primarily recovered in our formulation by introducing both or either of the following assumptions (Asm1 and Asm2) 4. Asm1: the task does not change during an episode. Asm2: observation o is identical to the hidden state s′: o = s′. The Bayes-adaptive MDPs, hidden-parameter MDPs and parameterized MDPs can be recovered by introducing both Asm1 and Asm2. The setting of Nagabandi et al. (2019a;b) can be recovered by introducing Asm2, and the setting of Jabri et al. (2019) can be recovered by introducing Asm1. As a detailed example, we recover the parameterized MDPs in Appendix A.5." }, { "heading": "5 PERFORMANCE GUARANTEES OF MODEL-BASED META-RL", "text": "In this section, we analyse the performance guarantee of model-based meta-RL with an inaccurate meta-model. In Section 5.1, we provide the performance guarantee in a full meta-model-based rollout. In Section 5.2, we introduce the notion of a branched meta-rollout and analyse its performance guarantee. We show that the meta-model error is less harmful in the branched meta-rollout than in the full meta-model-based rollout." }, { "heading": "5.1 PERFORMANCE GUARANTEE IN A FULL META-MODEL-BASED ROLLOUT CASE", "text": "Our goal is to outline a theoretical framework in which we can provide performance guarantees for Algorithm 1.
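As a minimal illustration, the control flow of Algorithm 1 can be sketched in Python as follows. The helpers `collect_trajectories` and `optimize` are hypothetical placeholders for the real-environment sampler and for the joint update of (φ, θ); the toy stand-ins only exercise the loop and are not part of our implementation.

```python
import random

def abstract_m3po(n_epochs, collect_trajectories, optimize):
    """Abstract M3PO (Algorithm 1): alternate real-environment data collection
    with joint optimization of the meta-policy (phi) and meta-model (theta)."""
    dataset = []                           # D: (h_t, o_{t+1}, r_t) tuples
    params = {"phi": 0.0, "theta": 0.0}    # learnable parameters
    for _ in range(n_epochs):
        # 1) collect trajectories from the real environment under pi_phi
        dataset.extend(collect_trajectories(params))
        # 2) (phi, theta) <- argmax E[R under meta-model] - C(eps_m, eps_pi);
        #    D supplies initial histories h_0 and the error estimates
        params = optimize(params, dataset)
    return params, dataset

# Toy stand-ins, only to exercise the control flow.
def toy_collect(params):
    return [((), random.random(), random.random()) for _ in range(5)]

def toy_optimize(params, dataset):
    return {k: v + 0.1 for k, v in params.items()}

params, data = abstract_m3po(3, toy_collect, toy_optimize)
```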
To show the guarantees, we construct a lower bound taking the following form:\nEπφ,p[R] ≥ Eπφ,pθ[R] − C(ε_m(θ), ε_π(φ)). (1)\nHere, Eπφ,p[R] denotes the true return (i.e., the return of the meta-policy in the real environment). The discrepancy between these returns, C, can be expressed as a function of two error quantities: the generalization error of the meta-model and the distribution shift due to the updated meta-policy. For our analysis, we define the bounds of the generalization error ε_m and the distribution shift ε_π as follows:\nDefinition 1. ε_m(θ) := maxt Ea∼πD,h∼p[DTV(p(ht+1|at+1, ht)||pθ(ht+1|at+1, ht))]. Here, DTV is the total variation distance and πD is the data-collection policy that the actions contained in D follow 5.\nDefinition 2. ε_π(φ) := maxht DTV(πD(at+1|ht)||πφ(at+1|ht)).\nWe also assume that the expected reward is bounded by a constant rmax.\nDefinition 3. rmax > maxt |∑st p(st|ht)r(st, at)|.\nNow we present our bound, which is an extension of the theorem proposed in Janner et al. (2019).\n3For simplicity, we use the abbreviated style Eπφ,pθ[R]. 4For simplicity, we omit implementation-level assumptions (e.g., “meta-policy or meta-model are implemented on the basis of gradient-based MAML” in Finn & Levine (2018); Nagabandi et al. (2019a)). 5As with Janner et al. (2019), to simplify our analysis, we assume that the meta-model can accurately estimate the reward. We discuss the case in which the reward prediction of a meta-model is inaccurate in A.8.\nTheorem 1 (The POMDP extension of Theorem 4.1 in Janner et al. (2019)). Let ε_m = maxt Ea∼πD,h∼p[DTV(p(ht+1|at+1, ht)||pθ(ht+1|at+1, ht))] and ε_π = maxht DTV(πD(at+1|ht)||πφ(at+1|ht)).
Then, the true return is bounded from below by the meta-model return of the meta-policy and the discrepancy:\nEπφ,p[R] ≥ Eπφ,pθ[R] − rmax[2γ(ε_m + 2ε_π)/(1 − γ)² + 4ε_π/(1 − γ)], (2)\nwhere the subtracted term is the discrepancy C(ε_m(θ), ε_π(φ)).\nThis theorem implies that the discrepancy of the returns under the full meta-model-based rollout scales linearly with both ε_m and ε_π. If we can reduce the discrepancy C, the two returns are closer to each other. As a result, the performance degradation is more significantly suppressed. In the next section, we discuss a new meta-model usage to reduce the discrepancy induced by the meta-model error ε_m." }, { "heading": "5.2 PERFORMANCE GUARANTEE IN THE BRANCHED META-ROLLOUT CASE", "text": "The analysis of Theorem 1 relies on running full rollouts through the meta-model, causing meta-model errors to compound. This is reflected in the bound by a factor scaling quadratically with the effective horizon, 1/(1 − γ). In such cases, we can improve the algorithm by choosing to rely less on the meta-model and instead more on real environment data.\nTo allow for adjustment between meta-model-based and model-free rollouts, we introduce the notion of a branched meta-rollout. The branched meta-rollout is a kind of Dyna-style rollout (Sutton, 1991), in which the meta-model-based rollout is run as being branched from real environment data. More concretely, the rollout is run in the following two processes. 1) We begin a rollout from a history under the data-collection meta-policy’s history distribution pπD(ht), and 2) we then run k steps in accordance with πφ under the learned meta-model pθ.\nUnder such a scheme, the true return can be bounded from below: Theorem 2.
Under k-step branched meta-rollouts, using the bound of the meta-model error under πD, ε_m = maxt Ea∼πD,h∼p,t[DTV(p(h′|h, a)||pθ(h′|h, a))], the bound of the meta-policy shift, ε_π = maxht DTV(πD||πφ), and the return on the meta-model E(a,h)∼Dmodel[R], where Dmodel is the set of samples collected through branched rollouts, the following inequality holds:\nEπφ,p[R] ≥ E(a,h)∼Dmodel[R] − rmax{[(1 + γ²)/(1 − γ)²]·2ε_π + [(γ − kγ^k + (k − 1)γ^(k+1))/(1 − γ)²]·(ε_π + ε_m) + [(γ^k − γ)/(γ − 1)]·(ε_π + ε_m) + [γ^k/(1 − γ)]·(k + 1)(ε_π + ε_m)}. (3)\nThe discrepancy factors relying on ε_m in Theorem 2 can be smaller than those relying on ε_m in Theorem 1 6. This indicates that the performance degradation due to the meta-model error can be more suppressed than that in the full meta-model-based rollout 7." }, { "heading": "6 META-MODEL-BASED META-POLICY OPTIMIZATION WITH DEEP RL", "text": "In the previous section, we showed that the use of branched meta-rollouts can suppress performance degradation. In this section, on the basis of this result, we modify Algorithm 1 so that the meta-policy and meta-model are optimized with E(a,h)∼Dmodel[R] − C(ε_m(θ), ε_π(φ)) instead of with Eπφ,pθ[R] − C(ε_m(θ), ε_π(φ)). More specifically, we propose the following modifications to Algorithm 1:\nMeta-policy optimization: The meta-policy is optimized with the branched meta-rollout return E(a,h)∼Dmodel[R] 8. For the optimization, we adapt PEARL (Rakelly et al., 2019) because it achieved good learning performance in meta-learning settings. We use the fictitious trajectories generated from the branched meta-rollouts to optimize the meta-policy. Formally, πφ is optimized by using the gradient of the optimization objective JDmodel(φ) := E(a,h)∼Dmodel[DKL(πφ||exp(Qπφ − Vπφ))]. Here, DKL is the Kullback-Leibler divergence, Qπφ(at+1, ht) := E(r,h)∼Dmodel[R|at+1, ht] and Vπφ(ht) := ∑at+1 Qπφ(at+1, ht)πφ(at+1|ht). As in PEARL, in πφ, the latent context is estimated by using past trajectories. The estimated context is then used to augment the policy input. Formally, the meta-policy is implemented as πφ(at+1|ht) = ∑z πφ(at+1|ot, z)pφ(z|a0, o0, ..., at, ot). Here, z is the latent context, pφ(z|a0, o0, ..., at, ot) is a context encoder, and πφ(at+1|ot, z) is the conditioned policy. Similarly, in Qπφ and Vπφ, the estimated latent context is also used to augment their input.\n6See Corollary 1 in the appendix. 7In A.6, we also prove that the discrepancy can be further reduced by introducing an additional assumption. 8To stabilize the learning, we omit C(ε_m(θ), ε_π(φ)) from the optimization objective for the meta-policy. The transition of ε_π(φ) during learning with this limited optimization objective is shown in Figure 12 in the appendix. The result indicates that ε_π(φ) tends to decrease as the training data size (epoch) grows.\nAlgorithm 2 Meta-Model-based Meta-Policy Optimization with Deep RL (M3PO)\n1: Initialize meta-policy πφ, meta-model pθ, environment dataset Denv, meta-model dataset Dmodel.\n2: for N epochs do\n3: Train meta-model pθ with Denv: θ ← arg maxθ EDenv[pθ(rt, ot+1|ht)]\n4: for E steps do\n5: Take actions according to πφ; add the trajectory to Denv\n6: for M model rollouts do\n7: Sample ht uniformly from Denv\n8: Perform k-step meta-model rollouts starting from ht using meta-policy πφ; add fictitious trajectories to Dmodel\n9: end for\n10: for G gradient updates do\n11: Update policy parameters with Dmodel: φ ← φ − ∇φJDmodel(φ)\n12: end for\n13: end for\n14: end for\nMeta-model optimization: The meta-model is optimized to minimize the discrepancy (i.e., minimize ε_m) 9. For the meta-model, to consider both aleatoric and epistemic uncertainties, we use a bootstrap ensemble of B dynamics models {p1θ, ..., pBθ}.
Here, piθ is the i-th conditional Gaussian distribution with diagonal covariance: piθ(rt, ot+1|ht) = N(rt, ot+1|µiθ(ht), σiθ(ht)). µiθ and σiθ are the mean and standard deviation, respectively. In our implementation, we use a recurrent-based architecture inspired by Duan et al. (2017); Nagabandi et al. (2019b); Rakelly et al. (2019); at each evaluation of the model, {a1, o1, ..., at−1, ot−1} in ht is fed to a recurrent neural network (RNN), and then its hidden unit output and {at, ot} in ht are fed to a feed-forward neural network that outputs the mean and standard deviation of the Gaussian distribution. We use the gated recurrent unit (GRU) (Cho et al., 2014) for the RNN. The recurrent layer in the GRU is composed of five sigmoid units. In addition, the feed-forward neural network is composed of an input layer, two hidden layers, and an output layer. The input and hidden layers are composed of 400 swish units (Ramachandran et al., 2017). To minimize ε_m, we learn the meta-model via maximum likelihood estimation with Denv. As in Lakshminarayanan et al. (2017), the ensemble {p1θ, ..., pBθ} is learned on shuffles of the real trajectories in Denv. We apply the re-parameterization trick to evaluate the distributions, and the meta-model parameters θ are optimized with a gradient-based optimizer. For the gradient-based optimizer we use Adam (Kingma & Ba, 2015). To avoid overfitting, we use weight decay and early termination (Bishop, 2006).\nThe resulting algorithm is shown in Algorithm 2. The modifications “Meta-model optimization” and “Meta-policy optimization” above are reflected in line 3 and lines 4–13, respectively. In line 8, k-step branched meta-rollouts are run. An appropriately set k contributes to decreasing the discrepancy and suppresses performance degradation. Thus, we treat k as a hyperparameter, and set it to different values in different environments so that the discrepancy decreases.
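As a minimal sketch of the branched meta-rollout generation (lines 6–9 of Algorithm 2), the procedure can be written as follows. `meta_model`, `meta_policy`, and the toy stand-ins are hypothetical placeholders rather than our actual implementation, and histories are represented as tuples of (action, observation) pairs purely for illustration.

```python
import random

def branched_rollouts(d_env, meta_model, meta_policy, k, n_rollouts):
    """Branched meta-rollouts: start each model rollout from a real history
    sampled uniformly from D_env, then roll the meta-model forward k steps
    under the current meta-policy, collecting fictitious data into D_model."""
    d_model = []
    for _ in range(n_rollouts):
        h = random.choice(d_env)          # branch point: real history h_t
        for _ in range(k):
            a = meta_policy(h)            # a_{t+1} ~ pi_phi(.|h_t)
            r, o = meta_model(h, a)       # (r_t, o_{t+1}) ~ p_theta(.|h_t)
            h = h + ((a, o),)             # extend the history to h_{t+1}
            d_model.append((h, a, r))     # fictitious sample for policy updates
    return d_model

# Toy stand-ins, only to exercise the control flow.
def toy_policy(h):
    return len(h) % 2

def toy_model(h, a):
    return 1.0, a + 0.1

d_env = [((0, 0.0),), ((1, 0.5),)]        # two real one-step histories
fictitious = branched_rollouts(d_env, toy_model, toy_policy, k=3, n_rollouts=4)
```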
For the experiments described in the next section, we tune the hyperparameters for this algorithm by a grid search. The search result for the hyperparameter values is described in Table 1 in the appendix.\n9The transition of m over training epochs is shown in Figure 11 in the appendix, and it indicates that the model error tends to decrease as the number of epochs increases.\nFor our experiments, we implemented our algorithm by extending the codebase of Model-based Policy Optimization 10. We made two main extensions: (1) introduce a latent context to the policy as in PEARL and (2) replace a predictive model with the meta-model based on the RNN." }, { "heading": "7 EXPERIMENTS", "text": "In this section, we report our experiments 11 aiming to answer the following questions: Q.1: Can our method (M3PO) outperform existing meta-RL methods? Q.2: How do meta-model-rollout lengths k affect the actual performance?\nIn our experiments, we compare our method (M3PO) with two baseline methods: PEARL (Rakelly et al., 2019) and Learning to adapt (L2A) (Nagabandi et al., 2019a). More detailed information is described in A.10 in the appendix. We conduct a comparative evaluation of the methods on a variety of simulated robot environments using the MuJoCo physics engine (Todorov et al., 2012). We prepare the environments proposed in the meta-RL (Finn & Levine, 2018; Nagabandi et al., 2019a; Rakelly et al., 2019; Rothfuss et al., 2019) and robust-RL (Hiraoka et al., 2019; Rajeswaran et al., 2017) literature: Halfcheetah-fwd-bwd, Halfcheetah-pier, Ant-fwd-bwd, Ant-crippledleg, Walker2D-randomparams and Humanoid-direc. In the environments, the agent is required to adapt to a fluidly changing task that the agent cannot directly observe. Detailed information about each environment is described in A.11 in the appendix.\nRegarding Q1, our experimental results indicate that M3PO outperforms existing meta-RL methods. 
In Figure 1, the learning curves of M3PO and existing meta-RL methods (L2A and PEARL) on meta-training phases are shown. These learning curves indicate that the sample efficiency of M3PO is better than those of L2A and PEARL 12. The performance (return) of L2A remains poor and does not improve even when the training data increases. PEARL can improve meta-policy performance via training in all environments. However, the degree of improvement of PEARL is smaller than that of M3PO. In a number of the environments (e.g., Halfcheetah-pier), the relative performance of M3PO against PEARL becomes asymptotically worse. This indicates that, as with Nagabandi et al. (2018), dynamic switching from M3PO to PEARL or other model-free approaches needs to be considered to further improve overall performance.\nRegarding Q2, we conducted an evaluation of M3PO by varying its model-rollout length k. The evaluation results (Figure 2) indicate that the performance tends to degrade when the model-rollout length is long. We can see significant performance degradation especially in Ant-fwd-bwd and Humanoid-direc. In Ant-fwd-bwd, the performance at k = 100 is significantly worse than that at k = 10. In Humanoid-direc, the performance at k = 5 is significantly worse than that at k = 1. As we have seen, the performance degradation in Humanoid-direc is more sensitive to the model-rollout length than that in Ant-fwd-bwd. One reason for this is that the meta-model error in Humanoid-direc is larger than that in Ant-fwd-bwd (Figure 11 in the appendix).\nAn example of meta-policies learned by M3PO with 200k samples in Humanoid-direc is shown in Figure 3, and it indicates the learned policy successfully adapts to different tasks. 
Additional examples of meta-policies learned by PEARL and M3PO are shown in the video at the following link: https://drive.google.com/file/d/1DRA-pmIWnHGNv5G_gFrml8YzKCtMcGnu/view?usp=sharing

8 CONCLUSION

In this paper, we analysed the performance guarantee (and performance degradation) of MBRL in a meta-learning setting. We first formulated model-based reinforcement learning in a meta-learning setting as solving a POMDP. We then conducted theoretical analyses of the performance guarantee in both the full model-based rollout and the branched meta-rollout. We showed that the performance degradation due to the meta-model error in the branched meta-rollout is smaller than that in the full meta-model-based rollout. Based on the theoretical result, we introduced branched meta-rollouts into policy optimization and proposed M3PO. Our experimental results show that it achieves better sample efficiency than PEARL and L2A.

10 https://github.com/JannerM/mbpo
11 The source code to replicate the experiments will be open to the public.
12 Note that, in an early stage of the training phase, there are many test episodes in which unseen tasks appear. Therefore, the improvement of M3PO over L2A and PEARL at the early stage of learning indicates its high adaptation capability for unseen situations.

A APPENDICES

A.1 HOW DOES OUR WORK DIFFER FROM JANNER ET AL. (2019)?

Although our work is grounded primarily on the basis of Janner et al. (2019), we provide non-trivial contributions on both the theoretical and practical frontiers: (1) We provide theorems about the relation between true returns and returns on inaccurate predictive models (model returns) in a "meta-learning (POMDP)" setting (Section 5). In their work, they provide theorems about the relation between the true returns and the model returns in the branched rollout in MDPs.
In contrast, we provide theorems about the relation between the true returns and the model returns in the branched rollout in POMDPs. In addition, in the derivation of the theorems (Theorems 4.2 and 4.3) in their work, a number of important premises are not properly taken into consideration (a detailed discussion is given in the second paragraph of A.7). We provide new theorems, in which these premises are more properly reflected, for both MDPs and POMDPs (A.3, A.6, and A.7). (2) We extend the model-based policy optimization (MBPO) proposed by Janner et al. to the meta-learning (POMDP) setting (Section 6). MBPO is for the MDP setting and does not support POMDP settings, while our method (M3PO) supports POMDP settings. Furthermore, we empirically demonstrate the usefulness of using the meta-model in the branched rollout manner in POMDP settings (Section 7).

A.2 OUTLINE OF OUR MODEL-BASED META-RL FORMULATION

A.3 PROOFS OF THEOREMS IN THE MAIN CONTENT

Before starting the derivation of the main theorems, we introduce a lemma useful for bridging POMDPs and MDPs.

Lemma 1 (Silver & Veness (2010)). Given a POMDP 〈O, S, A, pob, r, γ, pst〉, consider the derived MDP with histories as states, 〈H, A, γ, r̄, phi〉, where ∀t. phi := p(ht+1|at+1, ht) = ∑st∈S ∑st+1∈S p(st|ht) p(st+1|st, at) p(ot+1|st+1, at) and r̄(ht, at) := ∑st∈S p(st|ht) r(st, at). Then, the value function V̄π(ht) of the derived MDP is equal to the value function Vπ(ht) of the POMDP.

Proof. The statement can be derived by backward induction on the value functions. See the proof of Lemma 1 in Silver & Veness (2010) for details.

PROOF OF THEOREM 1:

Proof. By Lemma 1, our problem in POMDPs can be mapped into a problem in MDPs, and then Theorem 4.1 in Janner et al.
(2019) can be applied to the problem.

Similarly, the proof of Theorem 2 is derived by mapping our problem into one in MDPs by Lemma 1 and leveraging theoretical results on MDPs.

PROOF OF THEOREM 2:

Proof. By Lemma 1, our problem in POMDPs can be mapped into a problem in MDPs, and then Theorem 4 in A.7 can be applied to the problem.

A.4 THE DISCREPANCY RELYING ON THE META-MODEL ERROR IN THEOREM 1 AND THAT IN THEOREM 2

Corollary 1. The discrepancy factors relying on εm in Theorem 1, CTh1,εm, are equal to or larger than those relying on εm at k = 1 in Theorem 2, CTh2,εm.

Proof. By Theorems 1 and 2,

(4) CTh1,εm = rmax · 2γεm / (1 − γ)².

(5) CTh2,εm = rmax · (γ / (1 − γ)) · 2εm.

Given that γ ∈ [0, 1), rmax > 0 and εm ≥ 0,

(6) CTh1,εm − CTh2,εm = rmax · (2γεm − 2γ(1 − γ)εm) / (1 − γ)² = rmax · 2γ²εm / (1 − γ)² ≥ 0.

A.5 CONNECTION TO A TYPICAL META-RL SETTING

In Section 4, we formulate model-based meta-RL as solving a POMDP. However, this formulation may make it difficult for certain readers to comprehend the connection to a typical meta-RL setting. Although in the normative meta-RL setting (the parameterized MDPs (Finn et al., 2017)) the objective to be optimized is given as the return expected with respect to a task distribution, such an objective does not appear in the formulation in Section 4. In this section, we show that such an objective can be derived by specializing the formulation in Section 4 under a number of assumptions (Corollary 2). Then, we explain why we did not adopt such a specialization and maintained a more abstract formulation in Section 4.

First, letting a task set and a task distribution be denoted by T and p(τ), where τ ∈ T, respectively, we introduce the following assumptions:

Assumption 1. S := O × T.
Assumption 2. p(st+1|st, at) := p(ot+1|ot, τt, at) · 1(τt+1 = τt).
Assumption 3. For t > 0, p(st|ht) := p(τt|ht) · 1(τt = τ0).
Assumption 4.
p(τ0|h0) := p(τ0).\nHere, 1(·) is the indicator function that returns one if the argument is true, and zero otherwise. With these assumptions, the following corollary holds: Corollary 2. Given a POMDP 〈O,S,A, pob, r, γ, pst〉 and a task set T , consider the parameterized MDP with histories as states, 〈H,A, γ, r̄, p̄ob〉, where ∀t. p̄ob := p(ot+1|ot, τ0, at) and r̄ := r(ot, τ0, at). Under Assumptions 2∼5, the expected return on the parameterized MDP Ea∼π,h∼p,τ∼p(τ) [ ∑∞ t γ tr̄t] := ∑ τ∈T p(τ)Ea∼π,h∼p [ ∑∞ t γ\ntr̄t|τ ] is equal to the expected return on the POMDP Ea∼π,h∼p [R].\nProof. By applying Lemma 1, the value function in a POMDP 〈O,S,A, pob, r, γ, pst〉 can be mapped to the value function V̄π(ht) in the derived MDP, which is 〈H,A, γ, r̄, phi〉, where ∀t. phi := p(ht+1|at+1, ht) = ∑ st∈S ∑ st+1∈S p(st|ht)p(st+1|st, at)p(ot+1|st+1, at) and r̄(ht, at) :=∑\nst∈S p(st|ht)r(st, at). By applying the assumptions, this value function can be transformed to a different representation that explicitly contains τ and its distribution: For t > 0,\n(7)\nV̄π(ht) = ∑ at+1 π(at+1|ht) ∑ st∈S p(st|ht)r(st, at)\n+ γ ∑\not+1∈O ∑ st∈S ∑ st+1∈S p(st|ht)p(st+1|st, at)p(ot+1|st+1, at)V̄π(ht+1) = ∑ at+1 π(at+1|ht) r(ot, τ0, at)︸ ︷︷ ︸ r̄t +γ ∑ ot+1∈O p(ot+1|ot, τ0, at)︸ ︷︷ ︸ p̄ob V̄π(ht+1)\n . Likewise, for t = 0,\n(8)\nV̄π(h0) = ∑ a1 π(a1|h0) {∑ τ0 p(τ0)r(o0, τ0, a0) + γ ∑ o1∈O ∑ τ0 p(τ0)p(o1|o0, τ0, a0)V̄π(h1) }\n= ∑ τ0 p(τ0) ∑ a1 π(a1|h0)\n{ r(o0, τ0, a0) + γ\n∑ o1∈O p(o1|o0, τ0, a0)V̄π(h1) } ︸ ︷︷ ︸\nV̄π(h0)\n.\nTherefore,\n(9)\nEa ∼π,h∼p [R] = ∑ h0 p(h0)V̄π(h0)\n= ∑ τ0 p(τ0) ∑ h0 p(h0)V̄π(h0)\n= Ea∼π,h∼p,τ∼p(τ) [ ∞∑ t γtr̄t ]\nvertical axis represents k ∈ [1, 50] and each horizontal axis represents m, m′ ∈ [0, 1]. In all figures, for evaluating the discrepancy values, we set the other variables as rmax = 1, π = 1 − m for (a), and π = 1− m′ for (b). A key insight with the figures is following two points. 
First, the discrepancy values in Theorem 2 (a) tend to increase as the value of k increases, whereas those in Theorem 3 (b) do not. Thus, there could be an optimal value of k that is larger than one. Second, the contour colours in (b) are paler than those in (a). This implies that, given εm = εm′, Theorem 3 provides a tighter bound on the performance.

By Corollary 2, our formulation in Section 4 can be specialized into a problem where the objective to be optimized is given as the return expected with respect to a task distribution. We can derive the meta-model returns with discrepancies for bounding the true return (i.e., Ea∼πφ,h∼pθ,τ∼p(τ) [∑∞t γ^t r̄t] − C(εm, επ)) by using Corollary 2 instead of Lemma 1 in the proof of Theorem 1 and replacing pθ(ot|o0, τ0, at) with pθ(ot|ht−1). The main reason that we do not adopt such a specialization in Section 4 is to avoid the restrictions induced by the assumptions (Assumptions 2∼5). For example, Assumption 2 states that hidden states are composed of observations and a task. Since observations can, by definition, be observed by the agent, the only information in the hidden states that cannot be observed by the agent is the task. However, in many meta-RL settings (e.g., application domains (Jabri et al., 2019) where video images are used as the observation), there should be other information that cannot be observed by the agent. In addition, Assumptions 3 and 4 state that the task is invariant within each episode. However, in many meta-RL settings (e.g., Nagabandi et al. (2019a;b)), the task can change to a different one within each episode. To avoid such restrictions, we decided not to specialize the formulation by introducing these assumptions.

A.6 ADDITIONAL ANALYSIS OF THE PERFORMANCE DEGRADATION IN THE BRANCHED META-ROLLOUT

Figure 5a shows that the discrepancy value in Theorem 2 tends to monotonically increase as the value of k increases, regardless of the values of γ and εm.
This means that the optimal value of k is always 1. However, intuitively, we may expect that there is an optimal value of k higher than 1 when the meta-model error is small. As will be shown in this section, this intuition is correct. One of the main reasons for the mismatch between this intuition and the tendency of the discrepancy in Theorem 2 is that, for its analysis, the meta-model error under the data collection meta-policy πD (i.e., εm) is used instead of that under the current meta-policy πφ. Ignoring the meta-model error under the current meta-policy induces a pessimistic estimate of the discrepancy in the analysis (see the evaluation of "term B" and "term C" in the proof of Theorem 4 via the proof of Theorem 2 in the appendix for more details).

To estimate the discrepancy more tightly, as in Janner et al. (2019), we introduce the assumption that the meta-model error under the current meta-policy can be approximated by επ and εm:

Assumption 5. An approximated meta-model error under the current policy, εm′: εm′(επ) ≈ εm + επ dεm′/dεπ, where dεm′/dεπ is the local change of εm′ with respect to επ.

To see the tendency of this approximated meta-model error, we plot the empirical value of dεm′/dεπ, varying the size of training samples for the meta-model, in Figure 10 in A.12. The figure shows that, as the training sample size increases, the value tends to gradually approach zero. This means that training meta-models with more samples provides better generalization on nearby distributions.

Equipped with the approximated meta-model error on the distribution of the current meta-policy πφ, we arrive at the following bound:

Theorem 3. Let εm′ ≥ maxt Ea∼πφ,h∼p [DTV (p(h′|h, a)||pθ(h′|h, a))]. Then

(10) Eπφ,p [R] ≥ E(a,h)∼Dmodel [R] − rmax { (1 + γ²)/(1 − γ)² · 2επ + (γ − kγ^k + (k − 1)γ^(k+1))/(1 − γ)² · (εm′ − επ) + (γ^k − γ)/(γ − 1) · (εm′ − επ) + γ^k/(1 − γ) · (k + 1)(εm′ − επ) }.

Proof.
By Lemma 1, our problem in POMDPs can be mapped into a problem in MDPs, and then Theorem 5 in A.7 can be applied to the problem.

Given that εm = εm′, it is obvious that the discrepancy in Theorem 3 is equal to or smaller than that in Theorem 2. In the discrepancy in Theorem 3, all terms except the first become negative when εm′ < επ. This implies that the optimal k that minimizes the discrepancy can take a value higher than 1 when the meta-model error is relatively small. The empirical trend of the discrepancy value (Figure 5b) supports this; when εm′ is lower than 0.5 (i.e., εm′ < επ), the discrepancy values decrease as the value of k grows, regardless of the value of γ. This result motivates us to set k to a value higher than 1, in accordance with the meta-model error, to reduce the discrepancy.

A.7 THE DERIVATION OF THE RELATION OF THE RETURNS IN k-STEP BRANCHED ROLLOUTS (k ≥ 1) IN MARKOV DECISION PROCESSES

In this section, we discuss the relation between the true returns and the model returns under the branched rollout in an MDP, which is defined by a tuple 〈S, A, r, γ, pst〉. Here, S is the set of states, A is the set of actions, pst := p(s′|s, a) is the state transition probability for any s′, s ∈ S and a ∈ A, r is the reward function, and γ ∈ [0, 1) is the discount factor. At time step t, the former two functions are used as p(st|st−1, at) and rt = r(st, at). The agent selects an action on the basis of a policy π := p(at+1|st). We denote the data collection policy by πD and the state visitation probability under πD and p(s′|s, a) by pπD(st). We also denote the predictive model for the next state by pθ(s′|s, a). In addition, we define the upper bound of the reward scale as rmax > maxs,a |r(s, a)|. Note that, in this section, to discuss the MDP case, we override the definitions of the variables and functions that were defined for the POMDP case in the main body.
In addition, for simplicity, we use the abbreviated style Eπ,p [R] for the true return Ea∼π,s∼p [R := ∑∞t=0 γ^t rt].

Although a theoretical analysis of the relation between the returns in the MDP case is provided by Janner et al. (2019), in their analysis, a number of important premises are not properly taken into consideration. First, although they use replay buffers for the branched rollout (i.e., the datasets Denv and Dmodel in Algorithm 2 in Janner et al. (2019)), they do not take the use of the replay buffers into account in their theoretical analysis. Furthermore, they calculate state-action visitation probabilities based solely on a single model-based rollout. In the branched rollout, state-action visitation probabilities (except for the one at t = 0) should be affected by multiple past model-based rollouts. For example, a state-action visitation probability at t (s.t. t > k) is affected by the model-based rollout branched from real trajectories at t − k and the ones branched from t − k + 1 to t − 1 (in total, k model-based rollouts). However, in their analysis (the proof of Lemma B.4 in Janner et al. (2019)), they calculate state-action visitation probabilities based solely on a single model-based rollout. For example, in their analysis, it is assumed that a state-action visitation probability at t (s.t. t > k) is affected only by the model-based rollout branched from real trajectories at t − k. These oversights of important premises in their analysis induce a large mismatch between the premises of their theorems and those underlying the actual implementation of the branched rollout (i.e., Algorithm 2 in Janner et al. (2019)) 13. Therefore, we decided to newly derive the theorems on the branched rollout, reflecting these premises more appropriately.

The outline of our branched rollout is shown in Figure 6. Here, we assume that the trajectories collected from the real environment are stored in a dataset Denv.
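The data flow of the branched rollout between Denv and Dmodel can be sketched as follows; the `model` and `policy` callables below are placeholder stand-ins, not the paper's actual implementations:

```python
import random

# Sketch of the branched rollout: start states are drawn uniformly from
# the environment buffer D_env, rolled k steps under the learned model
# and current policy, and the fictitious transitions are stored in D_model.
def branched_rollout(d_env, model, policy, k, n_branches, d_model):
    for _ in range(n_branches):
        s = random.choice(d_env)                   # initial state follows p_piD(s)
        for t in range(k):
            a = policy(s)
            s_next, r = model(s, a)                # one step of the learned model
            d_model.append((s, a, r, s_next, t))   # keep the step index t
            s = s_next
    return d_model

# Toy usage with deterministic stand-ins for the model and policy.
d_model = branched_rollout(
    d_env=[0.0, 1.0, 2.0],
    model=lambda s, a: (s + a, -abs(s)),           # dummy dynamics / reward
    policy=lambda s: 1.0,
    k=3, n_branches=2, d_model=[],
)
print(len(d_model))  # 2 branches x 3 steps = 6 transitions
```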
The trajectories stored in Denv can be seen as trajectories following the true dynamics p(s′|s, a) and the data collection policy (i.e., a mixture of the previous policies used for data collection) πD. At each branched rollout, trajectories in Denv are uniformly sampled 14, and then, starting from the sampled trajectories, k-step model-based rollouts following π under pθ are run. The fictitious trajectories generated by the branched rollout are stored in a model dataset Dmodel 15. This process more appropriately reflects the actual implementation of the branched rollout (i.e., lines 5–8 in Algorithm 2) in Janner et al. (2019) 16. The performance of π is evaluated as the expected return under the state-action visitation probability in Dmodel.

13 The mismatch becomes larger especially when the model-rollout length k is large, because state-action visitation probabilities are affected by these rollouts more significantly with large k.
14 Thus, the initial state probability for the rollout starting from the sampled trajectories follows pπD(s).
15 Here, when the trajectories are stored in Dmodel, the states in the trajectories are augmented with time-step information to deal with state transitions that depend on the time step.
16 Note that the extension of this process to the POMDP case is compatible with the implementation of the branched meta-rollout in our algorithm (lines 4–13 in Algorithm 2).

Formally, we define the return in the branched rollout, E(a,s)∼Dmodel [R], as:

(11) E(a,s)∼Dmodel [R] := ∑s0,a0 pπD(s0, a0) r(s0, a0) + ∑_{t=1}^{k−1} ∑st,at γ^t p^br_{t<k}(st, at) r(st, at) + ∑_{t=k}^{∞} ∑st,at γ^t p^br_{t≥k}(st, at) r(st, at)

(12) p^br_{t<k}(st, at) := (1/t) ∑_{i=0}^{t−1} p^br_{t<k,i}(st, at)

(13) p^br_{t≥k}(st, at) := (1/k) ∑_{i=0}^{k−1} p^br_{t≥k,i}(st, at)

(14) p^br_{t<k,i}(st, at) := ∑_{si,...,st−1} ∑_{ai,...,at−1} pπD(si) Π_{j=i}^{t−1} pθ(sj+1|sj, aj+1) π(aj+1|sj)

(15) p^br_{t≥k,i}(st, at) := ∑_{st−k+i,...,st−1} ∑_{at−k+i,...,at−1} pπD(st−k+i) Π_{j=t−k+i}^{t−1} pθ(sj+1|sj, aj+1) π(aj+1|sj)

Here, p^br_{t<k,i}(st, at) and p^br_{t≥k,i}(st, at) are the state-action visitation probabilities that the i-th yellow trajectory (node) from the bottom at each time step t in Figure 6 follows. In later discussions, for simplicity, we use the abbreviated style EDmodel [R] for E(a,s)∼Dmodel [R].

Before starting the derivation of the theorems, we introduce a useful lemma.

Lemma 2. Assume a rollout process in which the policy and dynamics can be switched to other ones at time step tsw. Let the two probabilities be p1 and p2. For 1 ≤ t′ ≤ tsw, we assume that the dynamics distributions are bounded as εm,pre ≥ maxt′ Es∼p1 [DTV (p1(st′|st′−1, at′)||p2(st′|st′−1, at′))]. In addition, for tsw < t′ ≤ t, we assume that the dynamics distributions are bounded as εm,post ≥ maxt′ Es∼p1 [DTV (p1(st′|st′−1, at′)||p2(st′|st′−1, at′))]. Likewise, the policy divergence is bounded by επ,pre and επ,post. Then, the following inequation holds:

(16) ∑st,at |p1(st, at) − p2(st, at)| ≤ 2(t − tsw)(εm,post + επ,post) + 2tsw(εm,pre + επ,pre)

Proof.
The proof is done in a similar manner to those of Lemma B.1 and B.2 in (Janner et al., 2019).\n∑ st,at |p1(st, at)− p2(st, at)|\n= ∑ st,at |p1(at)p1(st|at)− p2(at)p2(st|at)|\n= ∑ st,at |p1(at)p1(st|at)− p1(at)p2(st|at) + (p1(at)− p2(at))p2(st|at)|\n≤ ∑ st,at p1(at) |p1(st|at)− p2(st|at)|+ ∑ at |p1(at)− p2(at)|\n≤ ∑ st,at p1(at) |p1(st|at)− p2(st|at)|+ ∑ at,st−1 |p1(at, st−1)− p2(at, st−1)|\n= ∑ st,at p1(at) |p1(st|at)− p2(st|at)|\n+ ∑\nat,st−1\n|p1(st−1)p1(at|st−1)− p1(st−1)p2(at|st−1) + (p1(st−1)− p2(st−1))p2(at|st−1)|\n≤ ∑ st,at p1(at) |p1(st|at)− p2(st|at)|+ ∑ at,st−1 p1(st−1) |p1(at|st−1)− p2(at|st−1)|\n+ ∑ st−1 |p1(st−1)− p2(st−1)|\n≤ ∑ st,at p1(at) |p1(st|at)− p2(st|at)|+ ∑ at,st−1 p1(st−1) |p1(at|st−1)− p2(at|st−1)|\n+ ∑\nst−1,at−1\n|p1(st−1, at−1)− p2(st−1, st−1)|\n≤ 2 m,post + 2 π,post + ∑\nst−1,at−1\n|p1(st−1, at−1)− p2(st−1, st−1)|\n≤ 2(t− tsw)( m,post + π,post) + ∑\nstsw ,atsw\n|p1(stsw , atsw)− p2(stsw , stsw)|\n≤ 2(t− tsw)( m,post + π,post) + 2tsw( m,pre + π,pre)\n(17)\nNow, we start the derivation of our bounds.\nTheorem 4. Under the k-step branched rollouts, using the bound of a model error under πD, m = maxt Ea∼πD,s∼p,t [DTV (p(s′|s, a)||pθ(s′|s, a))] and the bound of the policy shift π = maxsDTV (π||πD), the following inequation holds,\n(18)Eπ,p [R] ≥ EDmodel [R]− rmax { 1 + γ2 (1− γ)2 2 π + γ − kγk + (k − 1)γk+1 (1− γ)2 ( π + m)\n+ γk − γ γ − 1 ( π + m) + γk 1− γ (k + 1)( π + m)\n} ." 
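The linear error accumulation given by Lemma 2, which the proof below invokes repeatedly, can also be checked empirically on a toy two-state chain; all probabilities here are illustrative, not taken from the paper:

```python
# Toy check of Lemma 2's linear error accumulation: two Markov processes
# whose per-step policy / dynamics differ by at most eps_pi / eps_m in
# total variation (single regime, i.e. t_sw = 0).
pi1 = [[0.7, 0.3], [0.4, 0.6]]                  # pi(a | s), 2 states x 2 actions
pi2 = [[0.75, 0.25], [0.45, 0.55]]              # shifted policy
T1 = [[[0.8, 0.2], [0.3, 0.7]],                 # T[s][a] = p(s' | s, a)
      [[0.5, 0.5], [0.9, 0.1]]]
T2 = [[[0.85, 0.15], [0.3, 0.7]],               # perturbed dynamics
      [[0.5, 0.5], [0.85, 0.15]]]

tv = lambda p, q: 0.5 * sum(abs(x - y) for x, y in zip(p, q))
eps_pi = max(tv(pi1[s], pi2[s]) for s in range(2))
eps_m = max(tv(T1[s][a], T2[s][a]) for s in range(2) for a in range(2))

def joints(pi, T, horizon, p0=(1.0, 0.0)):
    """Forward recursion for the joint visitation p(s_t, a_{t+1})."""
    p_s, out = list(p0), []
    for _ in range(horizon):
        joint = [[p_s[s] * pi[s][a] for a in range(2)] for s in range(2)]
        out.append(joint)
        p_s = [sum(joint[s][a] * T[s][a][z] for s in range(2) for a in range(2))
               for z in range(2)]
    return out

# Lemma 2 with a single regime: sum |p1 - p2| <= 2 t (eps_m + eps_pi).
for t, (j1, j2) in enumerate(zip(joints(pi1, T1, 10), joints(pi2, T2, 10)), 1):
    gap = sum(abs(j1[s][a] - j2[s][a]) for s in range(2) for a in range(2))
    assert gap <= 2 * t * (eps_m + eps_pi) + 1e-12
print("Lemma 2 bound holds on the toy chain")
```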
}, { "heading": "Proof.", "text": "∣∣Eπ,p [R]− EDmodel [R]∣∣ = ∣∣∣∣∣∣∣\n∑ s0,a0\n{pπ(s0, a0)− pπD (s0, a0)} r(s0, a0) + ∑k−1 t=1 ∑ st,at γt { pπ(st, at)− pbrt<k(st, at) } r(st, at)\n+ ∑∞ t=k ∑ st,at γt { pπ(st, at)− pbrt≥k(st, at) } r(st, at) ∣∣∣∣∣∣∣ ≤ ∑ s0,a0 |pπ(s0, a0)− pπD (s0, a0)| |r(s0, a0)| + ∑k−1 t=1 γ t ∑ st,at ∣∣pπ(st, at)− pbrt<k(st, at)∣∣ |r(st, at)| + ∑∞ t=k γ t ∑ st,at ∣∣∣pπ(st, at)− pbrt≥k(st, at)∣∣∣ |r(st, at)| \n≤ \nrmax ∑ s0,a0\n|pπ(s0, a0)− pπD (s0, a0)|︸ ︷︷ ︸ term A\n+rmax ∑k−1 t=1 γ t ∑ st,at ∣∣pπ(st, at)− pbrt<k(st, at)∣∣︸ ︷︷ ︸ term B\n+rmax ∑∞ t=k γ t ∑ st,at ∣∣pπ(st, at)− pbrt≥k(st, at)∣∣︸ ︷︷ ︸ term C (19)\nFor term A, we can bound the value in similar manner to the derivation of Lemma 2.∑ s0,a0 |pπ(s0, a0)− pπD (s0, a0)| = ∑ s0,a0 |pπ(a0)p(s0)− pπD (a0)p(s0)|\n= ∑ s0,a0 |pπ(a0)p(s0)− pπ(a0)p(s0) + (pπ(a0)− pπD (a0)) p(s0)|\n≤ ∑ s0,a0\npπ(a0) |p(s0)− p(s0)|︸ ︷︷ ︸ =0\n+ ∑ a0\n|pπ(a0)− pπD (a0)|︸ ︷︷ ︸ ≤2 π\n≤ 2 π (20)\nFor term B, we can apply Lemma 2 to bound the value, but it requires the bounded model error under the current policy π. Thus, we need to decompose the distance into two by adding and subtracting pπD : ∑\nst,at |pπ(st, at)− pbr,t<k(st, at)| = ∑ st,at ∣∣∣∣ pπ(st, at)− pπD (st, at)+pπD (st, at)− pbrt<k(st, at) ∣∣∣∣\n≤ ∑ st,at\n|pπ(st, at)− pπD (st, at)|︸ ︷︷ ︸ ≤2t π\n+ ∑ st,at ∣∣pπD (st, at)− pbrt<k(st, at)∣∣ (21) ∑ st,at ∣∣pπD (st, at)− pbrt<k(st, at)∣∣ = ∑ st,at\n∣∣∣ 1t ∑t−1i=0 pπD (st, at)− 1t ∑t−1i=0 pbrt<k,i(st, at)∣∣∣ ≤ 1\nt t−1∑ i=0 ∑ st,at ∣∣pπD (st, at)− pbrt<k,i(st, at)∣∣ (A)\n≤ 1 t t−1∑ i=0 {2 (t− i) · ( π + m)}\n= 1\nt\n{ t2( π + m) + t( π + m) } (22)\nFor (A), we apply Lemma 2 with setting m,post = m and π,post = π for the rollout following π and pθ(s′|s), and m,pre = 0 and π,pre = 0 for the rollout following πD and p(s′|s), respectively. 
To recap term B, the following equation holds:\n∑ st,at |pπ(st, at)− pbr,t<k(st, at)| ≤ 2t π + t( π + m) + ( π + m) (23)\nFor term C, we can derive the bound in a similar manner to the term B case:\n∑ st,at |pπ(st, at)− pbr,t≥k(st, at)| = ∑ st,at ∣∣∣∣ pπ(st, at)− pπD (st, at)+pπD (st, at)− pbrt≥k(st, at) ∣∣∣∣\n≤ ∑ st,at\n|pπ(st, at)− pπD (st, at)|︸ ︷︷ ︸ ≤2t π\n+ ∑ st,at ∣∣pπD (st, at)− pbrt≥k(st, at)∣∣ (24)\n∑ st,at ∣∣pπD (st, at)− pbrt≥k(st, at)∣∣ = ∑ st,at ∣∣∣ 1k∑k−1i=0 pπD (st, at)− 1k∑k−1i=0 pbrt≥k,i(st, at)∣∣∣ ≤ 1\nk k−1∑ i=0 ∑ st,at ∣∣pπD (st, at)− pbrt≥k,i(st, at)∣∣ ≤ 1\nk k−1∑ i=0 {2 (k − i) · ( π + m)}\n= 1\nk\n{ k2( π + m) + k( π + m) } (25)\nTo recap term C, the following equation holds:\n∑ st,at ∣∣pπ(st, at)− pbrt≥k(st, at)∣∣ ≤ 2t π + k( π + m) + ( π + m) (26)\nBy substituting Eqs. 20, 23, and 26, into Eq. 19, we obtain the result:∣∣Eπ,p [R]− EDmodel [R]∣∣ ≤ rmax2 π +rmax ∑k−1 t=1 γ\nt {2t π + t( π + m) + ( π + m)} +rmax ∑∞ t=k γ t {2t π + k( π + m) + ( π + m)} = rmax { 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ (3 π + m) + γk−γ γ−1 ( π + m)\n+ ∑∞ t=k γ t {2t π + k( π + m) + ( π + m)}\n}\n= rmax 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ (3 π + m) + γk−γ γ−1 ( π + m) + ∑∞ t=1 γ\nt {2t π + k( π + m) + ( π + m)} − ∑k−1 t=1 γ t {2t π + k( π + m) + ( π + m)} = rmax 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ (3 π + m) + γk−γ γ−1 ( π + m) + 2(1−γ)2 γ π + γ\n1−γ {k( π + m) + ( π + m)} − ∑k−1 t=1 γ t {2t π + k( π + m) + ( π + m)}\n\n= rmax 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ (3 π + m) + γk−γ γ−1 ( π + m) + 2(1−γ)2 γ π + γ 1−γ {k( π + m) + ( π + m)} − 1−kγ (k−1)+(k−1)γk (1−γ)2 2γ π\n−γ k−γ γ−1 {k( π + m) + ( π + m)} = rmax 1+γ2 (1−γ)2 2 π + γ−kγk+(k−1)γk+1 (1−γ)2 ( π + m)\n+γ k−γ γ−1 ( π + m) +\n( γ\n1−γ − γk−γ γ−1 ) (k + 1)( π + m) = rmax { 1+γ2 (1−γ)2 2 π + γ−kγk+(k−1)γk+1 (1−γ)2 ( π + m)\n+γ k−γ γ−1 ( π + m) + γk 1−γ (k + 1)( π + m)\n} (27)\nTheorem 5. 
Let m′ ≥ maxtEa∼π,s∼p [DTV (p(s′|s, a)||pθ(s′|s, a))],\n(28)Eπ,p [R] ≥ EDmodel [R]− rmax { 1 + γ2 (1− γ)2 2 π + 1− kγ(k−1) + (k − 1)γk (1− γ)2 γ( m′ − π)\n+ γk − γ γ − 1 ( m′ − π) + γk 1− γ (k + 1)( m′ − π)\n} .\nProof. The derivation of this theorem is basically the same as that in the Theorem 4 case except for the way of evaluation of the bound of terms B and C.\nFor term B, we can apply Lemma 2 to bound the value:∑ st,at ∣∣pπ(st, at)− pbrt<k(st, at)∣∣ = ∑ st,at ∣∣∣ 1t ∑t−1i=0 pπD (st, at)− 1t ∑t−1i=0 pbrt<k,i(st, at)∣∣∣ ≤ 1\nt t−1∑ i=0 ∑ st,at ∣∣pπ(st, at)− pbrt<k,i(st, at)∣∣ (D)\n≤ 1 t t−1∑ i=0 {(t− i)2 m′ + i2 π}\n= 1\nt\n{ 2t2 m′ − (t− 1)t m′ + (t− 1)t π } = 1\nt\n{ 2t2 m′ − t2 m′ + t m′ + t2 π − t π } = 1\nt\n{ t2( m′ + π) + t( m′ − π) } = t( m′ + π) + ( m′ − π) (29)\nFor (D), we apply Lemma 2 with setting m,post = m′ and π,post = 0 for the rollout following π and pθ(s′|s), and m,pre = 0 and π,pre = π for the rollout following πD and p(s′|s). For term C, we can derive the bound in a similar manner to the case of term B:∑ st,at ∣∣pπ(st, at)− pbrt≥k(st, at)∣∣ = ∑ st,at\n∣∣∣ 1k∑k−1i=0 pπD (st, at)− 1k∑k−1i=0 pbrt≥k,i(st, at)∣∣∣ ≤ 1\nk k−1∑ i=0 ∑ st,at ∣∣pπ(st, at)− pbrt≥k,i(st, at)∣∣ ≤ 1\nk k−1∑ i=0 {(k − i)2 m′ + (t− k + i)2 π}\n= 1\nk\n{ 2k2 m′ − (k − 1)k m′ + 2tk π − 2k2 π + (k − 1)k π } = 1\nk\n{ 2kt π + 2( m′ − π)k2 − (k − 1)k( m′ − π) } = 1\nk\n{ 2kt π + ( m′ − π)k2 + k( m′ − π) } (30)\nBy substituting Eqs. 20, 29, and 30, into Eq. 
19, we obtain the result:∣∣Eπ,p [R]− EDmodel [R]∣∣ ≤ rmax{ 2 π +∑k−1t=1 γt {t( m′ + π) + ( m′ − π)}+∑∞t=k γt { 1k {2kt π + ( m′ − π)k2 + k( m′ − π)}} }\n= rmax\n{ 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ( m′ + π) + γk−γ γ−1 ( m′ − π)\n+ ∑∞ t=k γ t 1 k { 2kt π + ( m′ − π)k2 + k( m′ − π)\n} }\n= rmax 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ( m′ + π) + γk−γ γ−1 ( m′ − π) + ∑∞ t=1 γ t 1 k { 2kt π + ( m′ − π)k2 + k( m′ − π) } − ∑k−1 t=1 γ t 1 k { 2kt π + ( m′ − π)k2 + k( m′ − π) } \n= rmax 2 π + 1−kγ(k−1)+(k−1)γk (1−γ)2 γ( m′ + π) + γk−γ γ−1 ( m′ − π) + 1(1−γ)2 2γ π + γ 1−γ {( m′ − π)k + ( m′ − π)} − 1−kγ (k−1)+(k−1)γk (1−γ)2 2γ π\n−γ k−γ γ−1 {( m′ − π)k + ( m′ − π)} = rmax 1+γ2 (1−γ)2 2 π + γ−kγk+(k−1)γk+1 (1−γ)2 ( m′ − π) +γ\nk−γ γ−1 ( m′ − π) +\n( γ\n1−γ − γk−γ γ−1 ) (k + 1)( m′ − π) = rmax { 1+γ2 (1−γ)2 2 π + γ−kγk+(k−1)γk+1\n(1−γ)2 ( m′ − π) +γ\nk−γ γ−1 ( m′ − π) + γk 1−γ (k + 1)( m′ − π)\n} (31)" }, { "heading": "A.8 THE RELATION OF RETURNS IN THE CASE WHERE A REWARD PREDICTION IS INACCURATE", "text": "In Section 5, we discuss the relation between the true returns and the returns estimated on the metamodel under the assumption that the reward prediction error is zero. The theoretical result under this assumption is still useful because there are many cases where the true reward function is given and the reward prediction is not required. However, a number of readers still may want to know what the relation of the returns is under the assumption that the reward prediction is inaccurate. In this section, we provide the relation of the returns under inaccurate reward prediction in the MDP case 17.\n17Here, we do not discuss the theorems in the POMDP case because those in the MDP case can be easily extended into the POMDP case by utilizing Lemma 1.\nWe start our discussion by defining the bound of the reward prediction error r: Definition 4. 
r := maxt E(at,st)∼Dmodel [|r(st, at)− rθ(st, at)|], where rθ(st, at) := Ert∼pθ [rt|at, st].\nWe also define the return on the branched rollout with inaccurate reward prediction.\nEDmodel [R̂] := ∑ s0,a0 pπD (s0, a0)rθ(s0, a0) + k−1∑ t=1 ∑ st,at γtpbrt<k(st, at)rθ(st, at)\n+ ∞∑ t=k ∑ st,at γtpbrt≥k(st, at)rθ(st, at) (32)\nNow, we provide the relation between the returns under inaccurate reward prediction. Theorem 6 (Extension of Theorem 4 into the case where reward prediction is inaccurate). Under the k steps branched rollouts, using the bound of a model error under πD, m = maxt Ea∼πD,s∼p,t [DTV (p(s′|s, a)||pθ(s′|s, a))], the bound of the policy shift π = maxsDTV (π||πD) and the bound of the reward prediction error r = maxt E(at,st)∼Dmodel [|r(st, at)− rθ(st, at)|], the following inequation holds,\n(33)Eπ,p [R] ≥ EDmodel [R̂]− rmax { 1 + γ2 (1− γ)2 2 π + γ − kγk + (k − 1)γk+1 (1− γ)2 ( π + m)\n+ γk − γ γ − 1 ( π + m) + γk 1− γ (k + 1)( π + m)\n} − γ\n1− γ r.\nProof.\n(34) ∣∣∣Eπ,p[R]− EDmodel [R̂]∣∣∣ = ∣∣∣Eπ,p[R]− EDmodel [R] + EDmodel [R]− EDmodel [R̂]∣∣∣\n≤ ∣∣Eπ,p[R]− EDmodel [R]∣∣+ ∣∣∣EDmodel [R]− EDmodel [R̂]∣∣∣\n(35)\n∣∣∣EDmodel [R]− EDmodel [R̂]∣∣∣ = ∣∣∣∣∣∣\n∑ s0,a0\npπD (s0, a0) {r(s0, a0)− rθ(s0, a0)} + ∑k−1 t=1 ∑ st,at\nγtpbrt<k(st, at) {r(st, at)− rθ(st, at)} + ∑∞ t=k ∑ st,at γtpbrt≥k(st, at) {r(st, at)− rθ(st, at)} ∣∣∣∣∣∣ ≤ ∑ s0,a0 pπD (s0, a0) |r(s0, a0)− rθ(s0, a0)|\n+ k−1∑ t=1 ∑ st,at γtpbrt<k(st, at) |r(st, at)− rθ(st, at)|\n+ ∞∑ t=k ∑ st,at γtpbrt≥k(st, at) |r(st, at)− rθ(st, at)|\n≤ ∞∑ t=0 γt r = γ\n1− γ r\nBy substituting Eqs. 27 and 35 into Eq. 34, we obtain the result:\n(36) ∣∣∣Eπ,p[R]− EDmodel [R̂]∣∣∣ ≤ rmax\n{ 1+γ2\n(1−γ)2 2 π + γ−kγk+(k−1)γk+1 (1−γ)2 ( π + m)\n+γ k−γ γ−1 ( π + m) + γk 1−γ (k + 1)( π + m)\n} + γ\n1− γ r\nTheorem 7 (Extension of Theorem 5 into the case where reward prediction is inaccurate). 
Let εm′ ≥ maxt Ea∼π,s∼p [DTV (p(s′|s, a)||pθ(s′|s, a))] and εr = maxt E(at,st)∼Dmodel [|r(st, at) − rθ(st, at)|]. Then

(37) Eπ,p [R] ≥ EDmodel [R̂] − rmax { (1 + γ²)/(1 − γ)² · 2επ + (γ − kγ^k + (k − 1)γ^(k+1))/(1 − γ)² · (εm′ − επ) + (γ^k − γ)/(γ − 1) · (εm′ − επ) + γ^k/(1 − γ) · (k + 1)(εm′ − επ) } − γ/(1 − γ) · εr.

Proof. Similar to the derivation of Theorem 6, we obtain the result by substituting Eqs. 31 and 35 into Eq. 34.

A.9 PEARL IN SECTIONS 6 AND 7

The PEARL algorithm used in Sections 6 and 7 refers to "PEARL with RNN-traj" in Rakelly et al. (2019). The comparison of M3PO with the other types of PEARL in Rakelly et al. (2019) (i.e., vanilla PEARL and PEARL with RNN-tran) is shown in Figure 7. The figure indicates that M3PO achieves better sample efficiency than them.

A.10 BASELINE METHODS FOR OUR EXPERIMENT

PEARL: The model-free meta-RL method proposed in Rakelly et al. (2019). This is an off-policy method implemented by extending Soft Actor-Critic (Haarnoja et al., 2018). By leveraging experience replay, this method shows high sample efficiency. We reimplemented the PEARL algorithm in TensorFlow, referring to the original PyTorch implementation (https://github.com/katerakelly/oyster).

Learning to adapt (L2A): The model-based meta-RL method proposed in Nagabandi et al. (2019a). In this method, the meta-model is implemented with MAML (Finn et al., 2017), and the optimal action is found by model predictive path integral control (Williams et al., 2015) on full meta-model-based rollouts.
We adapt the following implementation of L2A to our experiment: https://github.com/iclavera/learning_to_adapt

A.11 ENVIRONMENTS FOR OUR EXPERIMENT

For our experiment in Section 7, we prepare simulated robot environments using the MuJoCo physics engine (Todorov et al., 2012) (Figure 8):

Halfcheetah-fwd-bwd: In this environment, meta-policies are used to control the half-cheetah, which is a planar biped robot with eight rigid links, including two legs and a torso, along with six actuated joints. The half-cheetah's moving direction is randomly selected from "forward" and "backward" around every 15 seconds (in simulation time). If the half-cheetah moves in the correct direction, a positive reward is fed to it in accordance with the magnitude of the movement; otherwise, a negative reward is fed.

Halfcheetah-pier: In this environment, the half-cheetah runs over a series of blocks that are floating on water. Each block moves up and down when stepped on, and the dynamics change rapidly because each block has different damping and friction properties. These properties are randomly determined at the beginning of each episode.

Ant-fwd-bwd: Same as Halfcheetah-fwd-bwd, except that the meta-policies are used to control the ant, which is a quadruped robot with nine rigid links, including four legs and a torso, along with eight actuated joints.

Ant-crippled-leg: In this environment, we randomly sample a leg of the ant to cripple. The crippling of the leg causes unexpected and drastic changes to the underlying dynamics. One of the four legs is randomly crippled every 15 seconds.

Walker2D-randomparams: In this environment, the meta-policies are used to control the walker, which is a planar biped robot consisting of seven links, including two legs and a torso, along with six actuated joints. The walker's torso mass and ground friction are randomly determined every 15 seconds.
Humanoid-direc: In this environment, the meta-policies are used to control the humanoid, which is a biped robot with 13 rigid links, including two legs, two arms and a torso, along with 17 actuated joints. In this task, the humanoid’s moving direction is randomly selected from two different directions around every 15 seconds. If the humanoid moves in the correct direction, a positive reward is given to the humanoid in accordance with the magnitude of its movement; otherwise, a negative reward is given." }, { "heading": "A.12 COMPLEMENTARY EXPERIMENTAL RESULTS", "text": "(Figure: the horizontal axis represents the training sample size (x1000); the red-dotted line is the linear interpolation of the blue dots, which shows the trend of the local change decreasing as the training sample size grows.)" }, { "heading": "A.13 COMPLEMENTARY ANALYSIS", "text": "In addition to Q1 and Q2 in the main content, we also conducted a complementary analysis to answer the following question. Q.3: Does the use of a meta-model in M3PO contribute to the improvement of the meta-policy?

In the analysis in this section, we compare M3PO with the following method. Model-Based Meta-Policy Optimization (M2PO): This method is a variant of M3PO, in which a non-adaptive predictive model is used instead of the meta-model. The predictive model architecture is the same as that in the MBPO algorithm (Janner et al., 2019) (i.e., an ensemble of Gaussian distributions based on four-layer feed-forward neural networks).

Regarding Q3, our experimental result indicates that the use of a meta-model contributed to the performance improvement in a number of the environments. In Figure 15 in the appendix, we can clearly see the improvement of M3PO over M2PO in Halfcheetah-fwd-bwd.
In addition, in the Ant environments, although M3PO’s performance is seemingly the same as that of M2PO, the qualitative performance is quite different; M3PO can produce a meta-policy for walking in the correct direction, while M2PO fails to do so (M2PO produces the meta-policy “always standing” with a very small amount of control signal). For Humanoid-direc, in contrast, M2PO tends to achieve better sample efficiency than M3PO. We hypothesize that the primary reason for this is that, during the plateau at the early stage of training in Humanoid-direc, the predictive model used in M2PO generates fictitious trajectories that make meta-policy optimization more stable. To verify this hypothesis, we compare TD-errors (Q-function errors) during training, which are an indicator of the stability of meta-policy optimization, in M3PO and M2PO. The evaluation result (Figure 16 in the appendix) shows that during the performance plateau (10–60 epochs), the TD-error in M2PO was actually lower than that in M3PO; this result supports our hypothesis. In this paper, we did not focus on the use of the meta-model to generate trajectories that make meta-policy optimization stable, but this experimental result indicates that such a study is important for further improving M3PO." }, { "heading": "A.14 HYPERPARAMETER SETTING", "text": "Table 1: Hyperparameter settings for M3PO results shown in Figure 1.
x → y over epochs a → b denotes a thresholded linear function, i.e., at epoch e, f(e) = min(max(x + ((e − a)/(b − a)) · (y − x), x), y).

Environments (columns): Halfcheetah-fwd-bwd, Halfcheetah-pier, Ant-fwd-bwd, Ant-crippled-leg, Walker2D-randomparams, Humanoid-direc.
N (epochs): 200
E (environment steps per epoch): 1000
M (meta-model rollouts per environment step): 1e3, 5e2, 1e3, 5e2
B (ensemble size): 3
G (meta-policy updates per environment step): 40, 20
k (meta-model horizon): 1, 1, 1 → 25 over epochs 20 → 100, 1, 1

A.15 NOTATIONS

Table 3: Ek[k = 1] settings for results shown in Figure 17. a → b denotes a thresholded linear function, i.e., at epoch e, f(e) = min(max(1 − (e − a)/(b − a), 0), 1).

Environments (columns): Halfcheetah-pier, Walker2D-randomparams, Humanoid-direc.
Ek[k = 1] (expected meta-model horizon): 80 → 130, 50 → 100, 150 → 250" }, { "heading": "A.16 COMPLEMENTARY ANALYSIS 2", "text": "In Figures 1 and 13, we can see that, in a number of the environments (Halfcheetah-pier, Walker2D-randomparams and Humanoid-direc), the long-term performance of M3PO is worse than that of PEARL. This indicates that a gradual transition from M3PO to PEARL (or other model-free approaches) needs to be considered to further improve overall performance. In this section, we propose to introduce such a gradual transition approach to M3PO and evaluate it on the environments where the long-term performance of M3PO is worse than that of PEARL.

For the gradual transition, we introduce the notion of the “expected” model-rollout length Ek[k] for k ∈ {0, 1} to M3PO. In this notion, Ek[k = 1] is the probability of a one-step meta-model rollout. Namely, with probability Ek[k = 1], the fictitious trajectory generated by a one-step meta-model rollout is used for the policy update (in line 11 in Algorithm 2), and, with probability 1 − Ek[k = 1], real trajectories in Denv are used for the policy update.
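The thresholded linear schedules of Tables 1 and 3 and the per-sample mixing of Dmodel and Denv just described can be sketched as follows (a minimal Python illustration, not the authors' code; the function names are ours, and we clamp to [min(x, y), max(x, y)] so the same helper covers both the increasing Table 1 schedules and the decreasing Ek[k = 1] schedule of Table 3):

```python
import random

def thresholded_linear(e, x, y, a, b):
    """Schedule "x -> y over epochs a -> b": move linearly from x to y
    between epochs a and b, clamped so f(e) always stays between x and y."""
    v = x + (e - a) / (b - a) * (y - x)
    return min(max(v, min(x, y)), max(x, y))

def sample_transition(d_model, d_env, ek1, rng=random):
    """With probability ek1 = Ek[k = 1], draw a fictitious one-step
    meta-model transition from D_model; otherwise draw a real one from D_env."""
    source = d_model if rng.random() < ek1 else d_env
    return rng.choice(source)
```

For instance, with the Table 1 setting “1 → 25 over epochs 20 → 100”, the meta-model horizon at epoch 60 evaluates to 13, and before epoch 20 (or after epoch 100) it stays clamped at 1 (or 25).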
In our implementation, we replace Dmodel in line 11 in Algorithm 2 with the mixed dataset Dmix. Here, Dmix is defined as

Dmix = Ek[k = 1] · Dmodel + (1 − Ek[k = 1]) · Denv. (38)

We linearly reduce the value of Ek[k = 1] from 1 to 0 over training epochs. With this value reduction, M3PO is gradually transitioned to PEARL (i.e., the model-free approach).

We evaluate M3PO with the gradual transition (M3PO-h) in three environments (Halfcheetah-pier, Walker2D-randomparams and Humanoid-direc) where the long-term performance of M3PO is worse than that of PEARL in Figures 1 and 13. The hyperparameter settings for the experiment (except for the schedule of Ek[k = 1]) are the same as those for Figures 1 and 13 (i.e., those shown in Table 1). Regarding the schedule of Ek[k = 1], we reduce its value in accordance with Table 3.

Evaluation results are shown in Figure 17. Due to the limitations of our computational resources, we could not continue the training of M3PO-h until convergence, and the resulting number of training epochs is much smaller than that of PEARL. Nevertheless, we can see that some trials of M3PO-h achieve the same or better scores than the long-term performance of PEARL in all environments (e.g., M3PO-h-1 and M3PO-h-2 achieve the same or better performance than the best scores of PEARL-2 and PEARL-3 in Halfcheetah-pier)." }, { "heading": "A.17 HIGH-LEVEL SUMMARY", "text": "" } ]
2020
null
SP:da04daf3c2ef194dd3e9460acf3c967bb0222062
[ "This paper delves deeper into understanding shape-based representations of CNNs in an empirical way. Building on stylized images, it proposes to use edge maps to more explicitly feed shape information to learning models. The common way to let models learn shape-based representations is to train on a dataset that contains the shape information while the texture information is severely distorted. Based on the observation that changing the statistics of feature maps results in style changes, the paper proposes style randomization to help CNNs better focus on shape information. It also relates the degree to which models are biased toward shape information to their robustness against common corruptions, such as additive Gaussian noise and blur. The paper draws the conclusion that there is no clear correlation between shape bias and robustness against common corruptions, and justifies it with extensive experiments.", "The paper disproves the hypothesis, stated by previous studies [1, 2], that increasing shape bias improves the robustness of neural networks to corruptions. The paper demonstrates experimentally that the degree of shape bias of a model is not correlated with classification accuracy on corrupted images. For the experiments, this paper presents two novel methods to encourage CNNs to be shape-biased: 1) an edge dataset and 2) style randomization (SR). In the experiments, the authors train CNNs to be shape-biased to various degrees based on the proposed methods and compare the test accuracies of the models to evaluate shape bias and robustness to corruption. In addition, this paper shows that through fine-tuning the affine parameters of the normalization layers, a CNN trained on original images can achieve comparable, if not better, performance than a CNN trained with data augmentation." ]
Convolutional neural networks (CNNs) learn to extract representations of complex features, such as object shapes and textures to solve image recognition tasks. Recent work indicates that CNNs trained on ImageNet are biased towards features that encode textures and that these alone are sufficient to generalize to unseen test data from the same distribution as the training data but often fail to generalize to out-of-distribution data. It has been shown that augmenting the training data with different image styles decreases this texture bias in favor of increased shape bias while at the same time improving robustness to common corruptions, such as noise and blur. Commonly, this is interpreted as shape bias increasing corruption robustness. However, this relationship is only hypothesized. We perform a systematic study of different ways of composing inputs based on natural images, explicit edge information, and stylization. While stylization is essential for achieving high corruption robustness, we do not find a clear correlation between shape bias and robustness. We conclude that the data augmentation caused by style-variation accounts for the improved corruption robustness and increased shape bias is only a byproduct.
[ { "affiliations": [], "name": "Chaithanya Kumar Mummadi" }, { "affiliations": [], "name": "Ranjitha Subramaniam" }, { "affiliations": [], "name": "Julien Vitay" }, { "affiliations": [], "name": "Jan Hendrik Metzen" } ]
[ { "authors": [ "Pablo Arbelaez", "Michael Maire", "Charless Fowlkes", "Jitendra Malik" ], "title": "Contour detection and hierarchical image segmentation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2010 }, { "authors": [ "Nicholas Baker", "Hongjing Lu", "Gennady Erlikhman", "Philip J Kellman" ], "title": "Deep convolutional networks do not classify based on global object shape", "venue": "PLoS computational biology,", "year": 2018 }, { "authors": [ "W. Brendel", "M. Bethge" ], "title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet", "venue": null, "year": 2019 }, { "authors": [ "John Canny" ], "title": "A computational approach to edge detection", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 1986 }, { "authors": [ "Woong-Gi Chang", "Tackgeun You", "Seonguk Seo", "Suha Kwak", "Bohyung Han" ], "title": "Domain-specific batch normalization for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": null, "year": 1909 }, { "authors": [ "Vincent Dumoulin", "Jonathon Shlens", "Manjunath Kudlur" ], "title": "A learned representation for artistic style", "venue": "arXiv preprint arXiv:1610.07629,", "year": 2016 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Florian Tramer", "Atul Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Physical Adversarial Examples for Object Detectors", "venue": "In 12th USENIX Workshop on Offensive Technologies,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Carlos RM Temme", "Jonas Rauber", "Heiko H Schütt", "Matthias Bethge", "Felix A Wichmann" ], "title": "Generalisation in humans and deep 
neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Dan Hendrycks", "Thomas G. Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "venue": "arXiv preprint arXiv:2006.16241,", "year": 2020 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin Dogus Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple method to improve robustness and uncertainty under data shift", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { 
"authors": [ "Xun Huang", "Serge Belongie" ], "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jason Jo", "Yoshua Bengio" ], "title": "Measuring the tendency of cnns to learn surface statistical regularities", "venue": "arXiv preprint arXiv:1711.11561,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "International Conference on Learning Representations (Workshop),", "year": 2017 }, { "authors": [ "Mark Lee", "J. Zico Kolter" ], "title": "On physical adversarial patches for object detection", "venue": "International Conference on Machine Learning (Workshop),", "year": 2019 }, { "authors": [ "Boyi Li", "Felix Wu", "Ser-Nam Lim", "Serge Belongie", "Kilian Q. 
Weinberger" ], "title": "On feature normalization and data augmentation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yun Liu", "Ming-Ming Cheng", "Xiaowei Hu", "Kai Wang", "Xiang Bai" ], "title": "Richer convolutional features for edge detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Raphael Gontijo Lopes", "Dong Yin", "Ben Poole", "Justin Gilmer", "Ekin D Cubuk" ], "title": "Improving robustness without sacrificing accuracy with patch gaussian augmentation", "venue": null, "year": 1906 }, { "authors": [ "Tiange Luo", "Tianle Cai", "Mengxiao Zhang", "Siyu Chen", "Di He", "Liwei Wang" ], "title": "Defective convolutional layers learn robust cnns", "venue": null, "year": 1911 }, { "authors": [ "Claudio Michaelis", "Benjamin Mitzkus", "Robert Geirhos", "Evgenia Rusak", "Oliver Bringmann", "Alexander S Ecker", "Matthias Bethge", "Wieland Brendel" ], "title": "Benchmarking robustness in object detection: Autonomous driving when winter is coming", "venue": null, "year": 1907 }, { "authors": [ "Hyeonseob Nam", "HyunJae Lee", "Jongchan Park", "Wonjun Yoon", "Donggeun Yoo" ], "title": "Reducing domain gap via style-agnostic networks", "venue": "arXiv preprint arXiv:1910.11645,", "year": 2019 }, { "authors": [ "E. Rusak", "L. Schott", "R. Zimmermann", "J. Bitterwolf", "O. Bringmann", "M. Bethge", "W. 
Brendel" ], "title": "Increasing the robustness of dnns against image corruptions by playing the game of noise", "venue": "URL https://arxiv.org/abs/2001.06057", "year": 2020 }, { "authors": [ "Baifeng Shi", "Dinghuai Zhang", "Qi Dai", "Zhanxing Zhu", "Yadong Mu", "Jingdong Wang" ], "title": "Informative dropout for robust representation learning: A shape-bias perspective", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Haohan Wang", "Songwei Ge", "Zachary Lipton", "Eric P Xing" ], "title": "Learning robust global representations by penalizing local predictive power", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qizhe Xie", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "arXiv preprint arXiv:1911.04252,", "year": 2019 }, { "authors": [ "Saining Xie", "Zhuowen Tu" ], "title": "Holistically-nested edge detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jon Shlens", "Ekin Dogus Cubuk", "Justin Gilmer" ], "title": "A fourier perspective on model robustness in computer vision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Richard Zhang" 
], "title": "Making convolutional networks shift-invariant again", "venue": "In ICML,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "As deep learning is increasingly applied to open-world perception problems in safety-critical domains such as robotics and autonomous driving, its robustness properties become of paramount importance. Generally, a lack of robustness against adversarial examples has been observed (Szegedy et al., 2014; Goodfellow et al., 2015), making physical-world adversarial attacks on perception systems feasible (Kurakin et al., 2017; Eykholt et al., 2018; Lee & Kolter, 2019). In this work, we focus on a different kind of robustness: namely, robustness against naturally occurring common image corruptions. Robustness of image classifiers against such corruptions can be evaluated using the ImageNet-C benchmark (Hendrycks & Dietterich, 2019), in which corruptions such as noise, blur, weather effects, and digital image transformations are simulated. Hendrycks & Dietterich (2019) observed that recent advances in neural architectures increased performance on undistorted data without significant increase in relative corruption robustness.

∗Equal contribution.

One hypothesis for the lack of robustness is an over-reliance on non-robust features that generalize well within the distribution used for training but fail to generalize to out-of-distribution data. Ilyas et al. (2019) provide evidence for this hypothesis on adversarial examples. Similarly, it has been hypothesized that models which rely strongly on texture information are more vulnerable to common corruptions than models based on features encoding shape information (Geirhos et al., 2019; Hendrycks & Dietterich, 2019). Alternative methods for increasing corruption robustness not motivated by enhancing shape bias use more (potentially unlabeled) training data (Xie et al., 2019) or use stronger data augmentation (Lopes et al., 2019; Hendrycks* et al., 2020). Note that our meaning of “shape” & “texture” is built on the definitions by Geirhos et al.
(2019).\nIn this paper, we re-examine the question of whether increasing the shape bias of a model actually helps in terms of corruption robustness. While prior work has found that there are training methods that increase both shape bias and corruption robustness (Geirhos et al., 2019; Hendrycks & Dietterich, 2019), this only establishes a correlation and not a causal relationship. To increase the shape bias, Geirhos et al. (2019) “stylize” images by imposing the style of a painting onto the image, leaving the shape-related structure of the image mostly unchanged while modifying texture cues so that they get largely uninformative of the class. Note that image stylization can be interpreted as a specific form of data augmentation, providing an alternative hypothesis for increased corruption robustness which would leave the enhanced shape bias as a mostly unrelated byproduct.\nIn this work, we investigate the role of the shape bias for corruption robustness in more detail. We propose two novel methods for increasing the shape bias:\n• Similar to Geirhos et al. (2019), we pre-train the CNN on an auxiliary dataset which encourages learning shape features. In contrast to Geirhos et al. (2019) that use stylized images, this dataset consists of the edge maps for the training images that are generated using the pre-trained neural network of Liu et al. (2017) for edge detection. This method maintains global object shapes but removes texture-related information, thereby encouraging learning shape-based representations. • In addition to pre-training on edge maps, we also propose style randomization to further enhance the shape bias. Style randomization is based upon sampling parameters of the affine transformations of normalization layers for each input from a uniform distribution.\nOur key finding is summarized in Figure 1. 
While pre-training on stylized images increases both shape bias and corruption robustness, these two quantities are not necessarily correlated: pre-training on edge maps increases the shape bias without consistently helping in terms of corruption robustness. In order to explain this finding, we conduct a systematic study in which we create inputs based on natural images, explicit edge information, and different ways of stylization (see Figure 2 for an illustration). We find that the shape bias gets maximized when combining edge information with stylization without including any texture information (Stylized Edges). However, for maximal corruption robustness, superimposing the image (and thus its textures) on these stylized edges is required. This, however, strongly reduces shape bias. In summary, corruption robustness seems to benefit most from style variation in the vicinity of the image manifold, while shape bias is mostly\nunrelated. Thus, image stylization is best interpreted as a strong data augmentation technique that encourages robust representations, regardless whether these representations are shape-based or not.\nMoreover, we present results for a setting where we fine-tune only parameters of the affine transformation of a normalization layer on the target distribution (stylized or corrupted images, respectively) for a CNN trained on regular images. Surprisingly, this is already sufficient for increasing the shape bias/corruption robustness considerably. We conclude that CNNs trained on normal images do learn shape-based features and features robust to corruptions but assign little weight to them. It may thus be sufficient to perform augmentation in feature space (extending Nam et al. (2019); Li et al. (2020)) so that higher weights are assigned to features that are robust to relevant domain shifts." }, { "heading": "2 RELATED WORK", "text": "Texture-vs-Shape Bias Geirhos et al. (2019) and Baker et al. 
(2018) hypothesized that CNNs tend to be biased towards textural cues rather than shape cues. This line of research is further supported by Brendel & Bethge (2019), where the authors show that BagNets, Deep Neural Networks (DNN) trained and evaluated only on small restricted local image patches, already perform reasonably well on ImageNet. Similarly, Yin et al. (2019) and Jo & Bengio (2017) showed using a Fourier space analysis that DNNs rely on surface statistical regularities and high-frequency components. The texture-vs-shape bias can be quantified by evaluating a network either on images with texture-shape cue conflict (Geirhos et al., 2019) or on images which were patch-wise shuffled (Luo et al., 2019).\nRobustness Against Common Corruptions Common corruptions are potentially stochastic image transformations motivated by real-world effects that can be used for evaluating model robustness. Hendrycks & Dietterich (2019) proposed the ImageNet-C dataset that contains simulated corruptions such as noise, blur, weather effects and digital image transformations. Geirhos et al. (2018) showed that humans are more robust to image corruptions than CNNs.\nApproaches to improve corruption robustness include data augmentation (Lopes et al., 2019; Yun et al., 2019; Hendrycks* et al., 2020; Cubuk et al., 2019), self-training with more training data (Xie et al., 2019), novel architectures and building blocks (Zhang, 2019; Hu et al., 2018), and changes in the training procedure (Hendrycks et al., 2019; Rusak et al., 2020; Wang et al., 2019). Motivated by the texture-vs-shape hypothesis, Geirhos et al. (2019) and Michaelis et al. (2019) train their network on a stylized version of ImageNet. The idea is that style transfer removes textural cues and models trained on stylized data thus have to rely more on shape information. The observed increase in corruption robustness on this stylized data was attributed to the shape bias. 
In this work, we provide evidence that contradicts this claim.\nSimilar to training on stylized images, Style Blending (Nam et al., 2019) employs style transfer in latent space by interpolating between feature statistics of different samples in a batch. Li et al. (2020) extend this idea and use feature space blending along with label interpolation. Hendrycks et al. (2019) considers self-supervised training with the prediction of image rotations as an auxiliary task. The authors argue that predicting rotation requires shape information and thus improves robustness. Similarly, Shi et al. (2020) proposes Dropout-like algorithm to reduce the texture bias and thereby increase the shape bias to improve model robustness. However, the authors also discuss that a “sweet\nspot” between shape and texture is needed for the model to be robust for domain generalization. With Patch-wise Adversarial Regularization, Wang et al. (2019) try to penalize reliance on local predictive representations in early layers and encourage the network to learn global concepts. Other augmentation techniques that aim to improve common corruption robustness are PatchGaussian (Lopes et al., 2019), CutMix (Yun et al., 2019), AugMix (Hendrycks* et al., 2020), and RandAugment (Cubuk et al., 2019). At this point, it remains unclear whether the increase in robustness caused by these augmentations is due to learning fundamentally different representations such as more shapebiased ones or to more incremental improvements in feature quality.\nEdge-based Representations A classical method for extracting edge maps is the Canny edge extractor Canny (1986). More recent approaches use DNNs (Xie & Tu, 2015; Liu et al., 2017) (see Figure A1). Geirhos et al. (2019) evaluate their shape-biased models on edge maps obtained with a Canny edge detector. ImageNet-Sketch (Wang et al., 2019) is a newly collected sketch-like dataset matching the ImageNet validation dataset in shape and size. 
It is used to evaluate generalization to domain shifts. In contrast to these works, we generate the edge-based representations with an edge detector using Richer Convolutional Features (RCF) (Liu et al., 2017) (see Figure A1) and use them explicitly for training. We provide evidence that edge-based representations enhance the shape bias, through an evaluation on images with induced texture-shape cue conflict and patch-shuffled images." }, { "heading": "3 LEARNING SHAPE-BASED REPRESENTATIONS", "text": "Similar to Geirhos et al. (2019), we aim to enhance the shape bias of a network so that it bases its decision more on shape details than on the style of objects encoded in textures. While Geirhos et al. (2019) augment training data with different styles (stylization), thereby making texture cues less predictive, we extract edge information (edge maps) from the training images to maintain explicit shape details and remove texture-related information completely. Here, we consider grayscale intensity edge maps rather than separate edge maps for each color channel. We propose to train CNNs using the edge maps in addition to the standard training data to learn shape-based representations for more effective shape-based decision-making.\nBesides training on the dataset with explicit shape cues, high capacity networks learn different feature representations when trained jointly on datasets from different distributions. Despite edge maps encouraging CNNs to learn shape-based representations, we observe that the network learns to encode features with texture details when introduced to the standard image data during training. We propose here to further restrain the network from learning texture details on standard image data. 
We discuss below the extraction of edge details from images to create the edge map dataset and explain the technique to reduce the texture bias of the CNN.

Edge dataset Given a standard image dataset, we construct a new dataset with edge maps (named the Edge dataset) by extracting the edge details of each image. The edge details are extracted by the CNN-based edge detector using richer convolutional features (RCF) proposed in Liu et al. (2017). The RCF network produces a single-channel edge map that contains pixel values in [0, 255]. We convert the non-binary edge map into a binary map with values in {0, 255} using a threshold of 128 and transform it into a 3-channel RGB edge map by duplicating the channels, so we can use the edge maps as a direct input to train the CNNs. The edge maps from the Edge dataset are used as input and can be independently used to train or evaluate CNNs without necessarily being combined with the standard image data. Please refer to Section A.1 for the details of the RCF network.

Style Randomization (SR) While using a dataset with explicit shape cues enhances shape-based representations, we propose to further reduce the texture bias of the network when training on standard images. It is shown in the literature on style transfer (Dumoulin et al., 2016; Huang & Belongie, 2017) that the statistics of feature maps (e.g., mean and standard deviation) of a CNN effectively capture the style of an image and that changing these statistics corresponds to a change in the style of an image. The SIN dataset is generated using such a style transfer technique and has been shown to reduce the texture bias of networks. Inspired by this observation, we propose a simple technique to effectively reduce the texture bias using the feature statistics when training on standard data. We modify the style of an image in the feature space so that the network becomes style-invariant. In particular, we randomize the style details, i.e.
feature statistics, of an image during training such that the network cannot rely on the texture cues. A similar approach named Style Blending (SB) is proposed in Nam et al. (2019), which randomizes the style information by interpolating the feature statistics between different samples in a mini-batch. We propose here a slightly different approach to make the network invariant to style information. Instead of interpolating the statistics of data from a similar distribution, i.e., training samples, we completely randomize the feature statistics (mean and standard deviation) by randomly sampling them from a uniform distribution. Considering Xi as the ith feature map of an intermediate layer in the CNN, and µi & σi as the feature statistics of Xi, the style-randomized feature map X̂i is defined as:\nX̂i := σ̂i · ((Xi − µi) / σi) + µ̂i (1)\nwhere σ̂i ∼ Uniform(0.1, 1) and µ̂i ∼ Uniform(−1, 1). These specific choices of sampling for σ̂i and µ̂i were found to perform best in our evaluations. The style transfer technique described in Huang & Belongie (2017) replaces the feature statistics of a content image with the statistics of a desired style image to change the style. Similarly, we replace the statistics of the content image with random statistics to change the style information. Training the network with SR reduces the texture bias and improves shape-based decision making. An advantage of SR over SB is that the feature statistics are sampled from a different distribution than the training data, which encourages the learned representations to generalize better to out-of-distribution data. We show in Section 5 that SR outperforms SB and aids the network to induce stronger shape-based representations."
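As a concrete reference for Eq. (1), Style Randomization can be sketched in a few lines. This is an illustrative NumPy version: a real implementation would operate on intermediate CNN feature maps (e.g., as a layer before each ResNet stage), and the small `eps` term is an added assumption for numerical stability, not stated in the text.

```python
import numpy as np

def style_randomize(x, rng, sigma_range=(0.1, 1.0), mu_range=(-1.0, 1.0), eps=1e-5):
    """Style Randomization (Eq. 1): strip a feature map's own per-channel
    statistics, then re-style it with statistics drawn from uniform distributions."""
    # x: feature maps of shape (C, H, W); statistics are computed per channel
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True) + eps  # eps avoids division by zero
    sigma_hat = rng.uniform(*sigma_range, size=(x.shape[0], 1, 1))
    mu_hat = rng.uniform(*mu_range, size=(x.shape[0], 1, 1))
    return sigma_hat * (x - mu) / sigma + mu_hat
```

By construction, each output channel's mean equals the sampled µ̂ (up to floating-point error), so the original style statistics carry no information downstream.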
}, { "heading": "4 EXPERIMENTAL SETTINGS", "text": "Dataset We use a subset of 20 classes from ImageNet dataset (ImageNet20, or IN) that are chosen randomly, to study the role of shape bias towards corruption robustness; the main reason being that extensive experiments on this dataset are feasible with limited computation. Details about this dataset can be found in Section A.2. The Edge dataset of IN (referred to as E) is generated as described in Section 3.\nStylization variants In addition to enhancing the shape bias using the edge maps, we further study the contribution of different factors of Stylized ImageNet (SIN) (Geirhos et al., 2019) to gain insights on its improved performance on corruptions. We break down SIN into different factors to understand their influence on corruption robustness. We segregate the factors that jointly generate the stylized images and the factors that are hypothesized to improve corruption robustness. These include i) shape bias of the network, ii) styles that are transferred from paintings and iii) statistics of natural images from IN. The role of shape bias is studied using the Edge dataset (E) proposed in Section 3. Other variants study the role of the remaining factors and are explained below:\nRole of stylization We create Stylized Edges (SE, see Figure 2) for which the styles from the paintings are transferred to the edge maps of Edge ImageNet20 (E). Here, we study the significance of stylization without the presence of the statistics (texture details) of natural images.\nRole of out-of-distribution styles SIN is generated by transferring the styles from out-of-distribution images, namely paintings. We create its variant called Intra-Stylized IN (I-SIN, see Figure 2) for which in-distribution images from IN are chosen randomly to transfer the styles. 
We also generate Intra-Stylized Edges (I-SE) where the image styles of IN are transferred to the Edge dataset E.\nRole of natural image statistics The above variants of E or SE test the role of shape and stylization without retaining texture cues of natural images. We create another variant called Superposition (SE+IN, see Figure 2) that interpolates images ISE from SE with images IIN from IN to embed the statistics (texture details) from natural images: ISE+IN := (1 − α) · ISE + α · IIN. We set α = 0.5. These different stylized variants including E allow insights into the interplay between shape bias and corruption robustness. For simplicity, we refer to the networks that are trained on a certain dataset by the name of that dataset. For example, the network trained on Stylized Edges (SE) is referred to as SE. The evaluation of SIN and I-SIN reveals the significance of the choice of styles, and the evaluation of Edge (E) indicates the role of the shape bias for corruption robustness. SE explains the importance of stylization, and finally SE+IN allows us to understand the importance of natural image statistics that are preserved in SIN and I-SIN but are missing from SE. Table 3 provides an overview of the input image compositions of the different variants described above.\nNetwork details We employ a ResNet18 architecture with group normalization (Wu & He, 2018) and weight standardization (Qiao et al., 2019). We include the SR described in Section 3 in the architecture. ResNet18 contains 4 stages of series of residual blocks, and SR is inserted before every stage. We train ResNet18 on the different datasets and variants described above. IN and SIN are considered as baselines. We show that E possesses more global shape details of the objects whereas SIN demonstrates little or no texture bias for decision making. The two datasets are complementary to each other and further enhance shape-based predictions when combined (termed E-SIN).
Note that SR is used to reduce texture bias and IN contains by far the strongest texture cues. Hence, SR is applied only on the training samples of IN but not on other dataset variants. Nevertheless, applying SR on the other dataset variants made no difference in the results.\nTraining details Networks on the different dataset variants except IN are trained in two stages. The first stage begins with training the network on the respective dataset variant (e.g., E) for a total of 75 epochs starting with a learning rate of 0.1, which is dropped at the 60th epoch by a factor of 10. In the second stage, the networks are then fine-tuned on the respective dataset along with IN (e.g., E & IN) for another round of 75 epochs starting with a learning rate of 0.01, later reduced to 0.001 at the 60th epoch. On the other hand, the network on IN is trained for 100 epochs with a learning rate of 0.1, reduced to 0.01 and 0.001 at the 60th and 90th epochs, respectively. We use a batch size of 128 samples with the SGD optimizer and weight decay 10^−4.\nDuring the fine-tuning stage, we freeze the first convolutional layer and the first normalization layer’s affine parameters. We observed that freezing these two layers yields a more global shape bias than fine-tuning all the layers in the network. During fine-tuning, the networks receive an equal number of training samples from both datasets (e.g., 128 samples from E and 128 samples from IN in a mini-batch). Note that the data distribution of edge maps from the datasets E, SE and I-SE is different from the distribution of images from the other datasets. Fine-tuning the network on inputs with different distributions results in degradation of the performance. In other words, the datasets E, SE and I-SE do not preserve natural image statistics and degrade task performance when fine-tuning along with clean images. Hence, we weigh the loss of training samples of edge maps from E, SE and I-SE when fine-tuning along with IN.
The loss between training samples is weighted as follows: L = LIN + λ · Ledgemaps, with λ = 0.01. Fine-tuning on the style variants SIN and I-SIN, which better preserve natural image statistics, does not affect classification performance significantly; hence λ is not used there. A larger λ preserves the shape bias but affects the clean accuracy, while a smaller λ reduces the shape bias of the network. In the case of E-SIN, we fine-tune the network that is pre-trained on E in the first stage of training with SIN and IN in the second stage and show that this setup further improves shape-based predictions. All ResNet18 models have a validation accuracy of about 87% on IN." }, { "heading": "5 EVALUATION OF SHAPE BIAS", "text": "In this section, we evaluate different methods in terms of their shape bias using two evaluation criteria, Shuffled image patches and Texture-shape cue conflict, that are described below.\nShuffled image patches: Following Luo et al. (2019), we manipulate images by perturbing the shape details while preserving the local texture of the objects. We divide an image into an n × n grid of patches with n ∈ {2, 4, 8} and randomly shuffle the patches as shown in Figure A2a. A larger n corresponds to more distorted shapes. The performance of networks that rely more on shape is expected to deteriorate more strongly as the number of patches increases. We conduct this evaluation only on the ImageNet20 validation images that were correctly classified by all the networks that are selected for comparison.\nTexture-shape cue conflict: The cue conflict image dataset proposed by Geirhos et al. (2019) consists of images where the shape of an object carries the texture of a different object. For example, the object cat holds the texture of elephant as shown in Figure A2b. Each image in the dataset carries two class labels: one with respect to shape and one with respect to texture. The evaluation is carried out to test the network’s bias towards shape or texture.
Networks with a strong shape bias will exhibit higher accuracy according to the shape label, while networks with a texture bias will have higher accuracy for the texture-based label. The original dataset contains a total of 1280 cue conflict images designed for the evaluation of networks trained on the entire ImageNet dataset. 400 of these images have classes (shape labels) present in ImageNet20. A subset of 100 instances (20 instances from 5 different categories) from the selected images also has a texture label that belongs to ImageNet20 (see Figure A2b bottom). The remaining 300 images with texture labels that do not belong to the classes of ImageNet20 are not considered for texture-based classification.\nResults The results in Table 1 compare style blending (SB) (Nam et al., 2019), style randomization (SR) (Section 3), and no styling in feature space for networks trained on IN, SIN and E. In terms of performance on 4 × 4 shuffled patches, SB performs worse than no styling, and SR performs even worse than SB. This indicates increasing shape bias from no styling over SB to SR. This finding is reinforced by an increasing number of images classified according to the shape label for texture-shape cue conflict images from no styling over SB to SR. Similarly, when comparing different training datasets, SIN results in a stronger shape bias than IN, and E exhibits a stronger shape bias than SIN.\nIn Table 2, we compare additional networks, all with SR enabled. Here, we again see a consistent trend of increasing shape bias from IN over SIN to E. Moreover, stylized edges (SE) increase shape bias further beyond E. Lastly, E-SIN improves shape bias even slightly beyond SE. In summary, we see a clear increase in shape bias for the methods proposed in this paper over IN or SIN. Next, we investigate to what extent this also results in increased corruption robustness."
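The patch-shuffling manipulation used in this evaluation can be sketched as follows. This is an illustrative implementation, assuming the image side is divisible by n; the actual evaluation uses n ∈ {2, 4, 8}.

```python
import numpy as np

def shuffle_patches(img, n, rng):
    """Cut an image into an n x n grid of patches and randomly permute them,
    destroying global shape while preserving local texture statistics."""
    h, w = img.shape[0], img.shape[1]
    ph, pw = h // n, w // n
    patches = [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
               for r in range(n) for c in range(n)]
    order = rng.permutation(n * n)
    # Reassemble the permuted patches row by row
    rows = [np.concatenate([patches[order[r * n + c]] for c in range(n)], axis=1)
            for r in range(n)]
    return np.concatenate(rows, axis=0)
```

Accuracy of a shape-reliant network on such images should drop faster with growing n than that of a texture-reliant one.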
}, { "heading": "6 INFLUENCE OF SHAPE BIAS ON COMMON CORRUPTIONS", "text": "We compare different networks in terms of their corruption robustness. Figure 3 shows the accuracy of different networks for two types of corruptions: Gaussian noise and frost (refer Figure A5 for all corruptions). Table 3 presents the corruption accuracy averaged over 15 ImageNet-C corruptions along with shape and texture results on the texture-shape cue conflict dataset. Generally, a CNN trained on IN performs poorly in terms of corruption robustness while SIN is relatively robust. On the other hand, E performs considerably worse than SIN and is not consistently better than IN despite having an even stronger shape bias than SIN. Networks SE and E-SIN further increase shape bias but\nstill perform considerably worse than SIN in terms of corruption robustness. These results contradict the hypothesis that stronger shape bias results in increased corruption robustness.\nThe only method that slightly surpasses SIN in terms of corruption robustness is the superposition of SE with natural images (SE+IN). However, this method has a relatively small shape bias. A common theme of SIN and SE+IN is that both exhibit properties of a natural image but are strongly distorted by stylization (see Figure 2). We hypothesize that these methods correspond to strong augmentation methods that stay close enough to the data manifold while inducing high diversity in appearance and thereby encourage learning robust representations, which need not necessarily be shape-based. We extend these findings to larger datasets with 200 classes of ImageNet, deeper architectures like ResNet50, DenseNet121, MobileNetV2 and different normalization layers like BatchNorm in Section A.5. Lastly, as can be seen from Figure 3, intra-stylization is nearly as effective as stylization based on paintings, implying that style need not necessarily be out-of-distribution for being useful." 
}, { "heading": "7 ON THE ADAPTABILITY OF LEARNED REPRESENTATIONS", "text": "As seen in the previous section, style augmentation on natural images is important for the network to be able to generalize to different domains such as common corruptions. We now study how easily\na pre-trained network can be adapted to a different distribution such as corruptions. Importantly, this uses the “unknown” distortion during training; this experiment is not meant as a practical procedure for the ImageNet-C benchmark but rather for understanding internal representations of a network.\nChang et al. (2019) showed that domain-specific affine parameters in normalization layers are essential when training a network on different input data distributions jointly. We conduct a similar experiment with the key difference that our network is already pre-trained on IN/E and only the affine parameters of normalization layers are fine-tuned to fit the distribution of the respective target domain. First, we fine-tune affine parameters of the network on several ImageNet-C corruptions separately and evaluate the mean corruption accuracy on the same corruption across different severity levels. As shown in Table 4 (left), performance on the corruptions can be greatly improved even with fixed convolutional parameters trained on IN/E by just tuning the affine parameters. Similarly, we also fine-tune the affine parameters of pre-trained CNN on SIN. Results in Table 4 (right) show not only an improvement on SIN validation accuracy but also improved shape-based classification results on texture-shape cue conflict images. These results suggest that the standard CNN encodes robust representations that can be leveraged when adapting affine parameters on a target domain." }, { "heading": "8 CONCLUSION", "text": "We performed a systematic empirical evaluation of the hypothesis that enhanced shape bias of a neural network is predictive for increased corruption robustness. 
Our evidence suggests that this is not the case and that increased shape bias is mostly an unrelated byproduct. Increased corruption robustness by image stylization is better explained as a strong form of augmentation which encourages robust representations regardless of whether those are shape-based or based on other cues. We conclude that if corruption robustness is the main objective, one should not primarily focus on increasing the shape bias of learned representations. Potential future research directions include understanding whether shape-biased representations offer advantages in domains other than corruption robustness (Hendrycks et al., 2020). Moreover, one could try devising stronger augmentation procedures in image or feature space based on our findings. Lastly, gaining a better understanding of which kinds of features (if not shape-based ones) are essential for corruption robustness is an important direction." } ]
2021
DOES ENHANCED SHAPE BIAS IMPROVE NEURAL NETWORK ROBUSTNESS TO COMMON CORRUPTIONS?
SP:10461f5707fe6a701045ed1c3a96c22ceb858960
[ "The paper proposes an Bayesian-symbolic physics (BSP), an intuitive physics model that jointly infers symbolic force laws and object properties (mass, friction coefficient). The inductive bias is force summation, F=ma, and a grammar of force laws to express object interactions. The inference is done via an EM method that alternates between object property estimation (E-step) and force law induction (M-step), using techniques like symbolic regression and Hamiltonian Monte Carlo (HMC). Some preliminary experiments are shown for the method's effectiveness and data efficiency. ", "The paper addresses the problem of sample-efficient inference for symbolic physical rules. In the literature, there exists neural-network based models for learning a physical engine which have good predictive accuracy but poor sample efficiency, as well as symbolic models which are highly sensitive to deviations from their fixed physics engine. To be able to overcome issues as such, authors propose a generative model along with a symbolic regression framework, in which forces are produced from a probabilistic context free grammar that is designed to mimic simple Newtonian physics. This particular grammar is parameterized by a few latent variables related to unobserved properties of the physical environment, such as mass and charge. Finally, they develop an Expectation-Maximization algorithm, in order for estimating these latent variables as well as inferring the underlying physical laws of the system." ]
Humans are capable of reasoning about physical phenomena by inferring laws of physics from a very limited set of observations. The inferred laws can potentially depend on unobserved properties, such as mass, texture, charge, etc. This sample-efficient physical reasoning is considered a core domain of human common-sense knowledge and hints at the existence of a physics engine in the head. In this paper, we propose a Bayesian-symbolic framework for learning sample-efficient models of physical reasoning and prediction, which are of special interest in the field of intuitive physics. In our framework, the environment is represented by a top-down generative model with a collection of entities, some of whose properties are known and others unknown; the unknown properties are treated as latent variables to capture uncertainty. The physics engine depends on physical laws which are modeled as interpretable symbolic expressions and are assumed to be functions of the latent properties of the entities interacting under simple Newtonian physics. As such, learning the laws reduces to symbolic regression, and Bayesian inference methods are used to obtain the distribution of the unobserved properties. These inference and regression steps are performed in an iterative manner following the expectation–maximization algorithm to infer the unknown properties and use them to learn the laws from a very small set of observations. We demonstrate on three physics learning tasks that, compared to existing methods of learning physics, our proposed framework is more data-efficient and accurate, and makes joint reasoning and learning possible.
[ { "affiliations": [], "name": "A BAYESIAN-SYMBOLIC" } ]
[ { "authors": [ "Kelsey R Allen", "Kevin A Smith", "Joshua B Tenenbaum" ], "title": "The tools challenge: Rapid trial-anderror learning in physical problem solving", "venue": null, "year": 1907 }, { "authors": [ "Brandon Amos", "Laurent Dinh", "Serkan Cabi", "Thomas Rothörl", "Sergio Gómez Colmenarejo", "Alistair Muldal", "Tom Erez", "Yuval Tassa", "Nando de Freitas", "Misha Denil" ], "title": "Learning awareness models", "venue": "arXiv preprint arXiv:1804.06318,", "year": 2018 }, { "authors": [ "Fabien Baradel", "Natalia Neverova", "Julien Mille", "Greg Mori", "Christian Wolf" ], "title": "Cophy: Counterfactual learning of physical dynamics", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Christopher Bates", "Peter W Battaglia", "Ilker Yildirim", "Joshua B Tenenbaum" ], "title": "Humans predict liquid dynamics using probabilistic simulation", "venue": "In CogSci,", "year": 2015 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Joshua B Tenenbaum" ], "title": "Simulation as an engine of physical scene understanding", "venue": "Proceedings of the National Academy of Sciences,", "year": 2013 }, { "authors": [ "Peter W. Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Rezende", "Koray Kavukcuoglu" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "[cs],", "year": 2016 }, { "authors": [ "Neil R Bramley", "Tobias Gerstenberg", "Joshua B Tenenbaum", "Todd M Gureckis" ], "title": "Intuitive experimentation in the physical world", "venue": "Cognitive psychology,", "year": 2018 }, { "authors": [ "Philip G. Breen", "Christopher N. Foley", "Tjarda Boekholt", "Simon Portegies Zwart" ], "title": "Newton vs the machine: solving the chaotic three-body problem using deep neural networks", "venue": null, "year": 1910 }, { "authors": [ "Frank Brescia" ], "title": "Fundamentals of Chemistry: A Modern Introduction", "venue": null, "year": 1966 }, { "authors": [ "Brian M. 
Cerny", "Peter C. Nelson", "Chi Zhou" ], "title": "Using differential evolution for symbolic regression and numerical constant creation", "venue": "In Proceedings of the 10th annual conference on Genetic and evolutionary computation,", "year": 2008 }, { "authors": [ "Michael B Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "arXiv preprint arXiv:1612.00341,", "year": 2016 }, { "authors": [ "Miles Cranmer", "Alvaro Sanchez-Gonzalez", "Peter Battaglia", "Rui Xu", "Kyle Cranmer", "David Spergel", "Shirley Ho" ], "title": "Discovering symbolic models from deep learning with inductive biases", "venue": null, "year": 2020 }, { "authors": [ "J.W. Davidson", "D.A. Savic", "G.A. Walters" ], "title": "Symbolic and numerical regression: Experiments and applications", "venue": "Developments in Soft Computing, Advances in Soft Computing,", "year": 2001 }, { "authors": [ "Simon Duane", "Anthony D Kennedy", "Brian J Pendleton", "Duncan Roweth" ], "title": "Hybrid Monte Carlo", "venue": "Physics letters B,", "year": 1987 }, { "authors": [ "Sebastien Ehrhardt", "Aron Monszpart", "Niloy J Mitra", "Andrea Vedaldi" ], "title": "Learning a physical longterm predictor", "venue": "arXiv preprint arXiv:1703.00247,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "[cs],", "year": 2017 }, { "authors": [ "Katerina Fragkiadaki", "Pulkit Agrawal", "Sergey Levine", "Jitendra Malik" ], "title": "Learning visual predictive models of physics for playing billiards", "venue": "arXiv preprint arXiv:1511.07404,", "year": 2015 }, { "authors": [ "Tobias Gerstenberg", "Noah D Goodman", "David A Lagnado", "Joshua B Tenenbaum" ], "title": "How, whether, why: Causal judgments as counterfactual contrasts", "venue": "In CogSci,", "year": 2015 }, { "authors": [ "NeuroAnimator 
Grzeszczuk", "Terzopoulos D Hinton G" ], "title": "Neuro Animator. Fast neural network emulation and control of physics-based models", "venue": "Proc. ACM SIGGRAPH ‘98", "year": 1998 }, { "authors": [ "Matthew D. Hoffman", "Andrew Gelman" ], "title": "The no-u-turn sampler: Adaptively setting path lengths in hamiltonian monte carlo", "venue": null, "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas Kipf", "Ethan Fetaya", "Kuan-Chieh Wang", "Max Welling", "Richard Zemel" ], "title": "Neural relational inference for interacting systems", "venue": "arXiv preprint arXiv:1802.04687,", "year": 2018 }, { "authors": [ "Michael Kommenda", "Gabriel Kronberger", "Stephan Winkler", "Michael Affenzeller", "Stefan Wagner" ], "title": "Effects of constant optimization by nonlinear least squares minimization in symbolic regression", "venue": "In Proceedings of the 15th annual conference companion on Genetic and evolutionary computation,", "year": 2013 }, { "authors": [ "John R. Koza" ], "title": "Genetic programming as a means for programming computers by natural selection", "venue": "Statistics and Computing,", "year": 1994 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Radford M Neal" ], "title": "MCMC using Hamiltonian dynamics", "venue": "Handbook of Markov chain Monte Carlo,", "year": 2011 }, { "authors": [ "Markus Quade", "Markus Abel", "Kamran Shafi", "Robert K. Niven", "Bernd R. 
Noack" ], "title": "Prediction of dynamical systems by symbolic regression", "venue": "Physical Review E,", "year": 2016 }, { "authors": [ "Adam N Sanborn", "Vikash K Mansinghka", "Thomas L Griffiths" ], "title": "Reconciling intuitive physics and newtonian mechanics for colliding objects", "venue": "Psychological review,", "year": 2013 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Victor Bapst", "Kyle Cranmer", "Peter Battaglia" ], "title": "Hamiltonian graph networks with ode integrators", "venue": "[physics],", "year": 2019 }, { "authors": [ "Michael Schmidt", "Hod Lipson" ], "title": "Distilling free-form natural laws from experimental data", "venue": null, "year": 2009 }, { "authors": [ "Sungyong Seo", "Chuizheng Meng", "Yan Liu" ], "title": "Physics-aware difference graph networks for sparsely-observed dynamics", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kevin Smith", "Lingjie Mei", "Shunyu Yao", "Jiajun Wu", "Elizabeth Spelke", "Josh Tenenbaum", "Tomer Ullman" ], "title": "Modeling expectation violation in intuitive physics with coarse probabilistic object representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Silviu-Marian Udrescu", "Max Tegmark. Ai" ], "title": "feynman: A physics-inspired method for symbolic regression", "venue": "Science Advances,", "year": 2020 }, { "authors": [ "Tomer D. Ullman", "Andreas Stuhlmüller", "Noah D. Goodman", "Joshua B. 
Tenenbaum" ], "title": "Learning physical parameters from dynamic scenes", "venue": "Cognitive Psychology,", "year": 2018 }, { "authors": [ "Nicholas Watters", "Daniel Zoran", "Theophane Weber", "Peter Battaglia", "Razvan Pascanu", "Andrea Tacchetti" ], "title": "Visual interaction networks: Learning a physics simulator from video", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Frank Wood", "Jan Willem Meent", "Vikash Mansinghka" ], "title": "A new approach to probabilistic programming inference", "venue": "In Artificial Intelligence and Statistics,", "year": 2014 }, { "authors": [ "Jiajun Wu", "Ilker Yildirim", "Joseph J Lim", "Bill Freeman", "Josh Tenenbaum" ], "title": "Galileo: Perceiving physical object properties by integrating a physics engine with deep learning", "venue": "In Advances in neural information processing systems,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imagine a ball rolling down a ramp. If asked to predict the trajectory of the ball, most of us will find it fairly easy to make a reasonable prediction. Not only that, simply by observing a single trajectory people can make reasonable guesses about the material and weight of the ball and the ramp. It is astonishing that while the exact answers to any of these prediction and reasoning tasks requires an indepth knowledge of Newtonian mechanics and solving of some intricate equations, yet an average human can perform such tasks without any formal training in physics. Even from an early age, humans demonstrate an innate ability to quickly learn and discover the laws of physical interactions with very limited supervision. This allows them to efficiently reason and plan action about commonsense tasks even in absence of complete information (Spelke, 2000; Battaglia et al., 2013). Recent studies suggest that this ability of efficient physical reasoning with limited supervision is driven by a noisy model of the exact Newtonian dynamics, referred as the intuitive physics engine (IPE; Bates et al., 2015; Gerstenberg et al., 2015; Sanborn et al., 2013; Lake et al., 2017; Battaglia et al., 2013).\nAs sample-efficient physical reasoning is recognized as a core domain of human common-sense knowledge (Spelke & Kinzler, 2007); therefore an important problem in artificial intelligence is to develop agents that not only learn faster but also generalize beyond the training data. This has lead to a surge in works aimed at developing agents with an IPE or a model of the environment dynamics (Amos et al., 2018; Chang et al., 2016; Grzeszczuk & Animator, 1998; Fragkiadaki et al., 2015; Watters et al., 2017; Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019; Ehrhardt et al., 2017; Kipf et al., 2018; Seo et al., 2019; Baradel et al., 2020). 
Among these, neural-network based learned models of physics (Breen et al., 2019; Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019) tend to have good predictive accuracy but poor sample efficiency for learning. On the other hand, symbolic models (Ullman et al., 2018; Smith et al., 2019; Sanborn et al., 2013; Bramley et al., 2018) are sample efficient but fail to adapt or accommodate any deviation from their fixed physics engine.\nInspired by humans’ highly data-efficient ability of learning and reasoning about their environment, we present Bayesian-symbolic physics (BSP), the first fully Bayesian approach to symbolic intuitive physics that, by combining symbolic learning of physical force laws and statistical learning of unobserved properties of objects, enjoys the sample efficiency of symbolic methods with the accuracy and generalization of data-driven learned approaches. In BSP, we pose the evolution of the environment dynamics over time as a generative program of its objects interacting under Newtonian mechanics using forces, as shown in figure 2. Being a fully Bayesian model, we treat objects and their properties such as mass, charge, etc. as random variables. As force laws are simply functions of these properties under the Newtonian assumption, in BSP we replace data-hungry neural networks (NN) with symbolic regression (SR) to learn explicit force laws (in symbolic form) and then evolve them deterministically using equations of motion. But a naive SR implementation is not enough: a vanilla grammar that does not constrain the search space of the force-laws can potentially have far worse sample efficiency and accuracy than a neural network. 
Therefore, we also introduce a grammar of Newtonian physics that leverages dimensional analysis to induce a physical unit system over the search space and then imposes physics-based constraints on the production rules, which help prune away any physically meaningless laws, thus drastically speeding up SR.\nOur main contributions are threefold:\n• We introduce a fully differentiable, top-down, Bayesian model for physical dynamics and an expectation-maximization (EM) based algorithm, which combines Markov chain Monte Carlo (MCMC) and SR, for maximum likelihood fitting of the model.\n• We introduce a grammar of Newtonian physics that appropriately constrains SR to allow data-efficient physics learning.\n• Through empirical evaluations, we demonstrate that the BSP approach reaches human-like sample efficiency, often requiring just 1 to 5 observations to learn the exact force laws – usually more than 10x fewer than the closest neural alternatives." }, { "heading": "2 RELATED WORK", "text": "At a high level, the logic of physics engines can be decomposed into a dynamics module and a model of how the entities interact with each other depending on their mutual properties. These modules can be further divided into more components depending on how the module is realized. Using this break-down, we can categorize different models of physics based on which components of the model are learned. In figure 1, we compare some of the recent models of physics that are closely related to our work. Starting on the right end, we have the fully learned, deep neural-network approach used by Breen et al. (2019), which does not use any prior knowledge about physics and therefore learns to predict dynamics in a purely data-driven way. In the middle are hybrid models that introduce some prior knowledge about physical interaction or dynamics into their deep-network-based prediction model.
These include interaction networks (INs; Battaglia et al., 2016), ODE graph networks (OGNs) and Hamiltonian ODE graph networks (HOGNs; Sanchez-Gonzalez et al., 2019). Since these approaches employ deep networks to learn, they tend to have very good predictive accuracy but extremely poor sample efficiency, and therefore require orders of magnitude more data to train than humans (Ullman et al., 2018; Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019). On the other end of the spectrum (left) are the fully symbolic, rule-based physics models and engines (Smith et al., 2019; Allen et al., 2019; Wu et al., 2015; Ullman et al., 2018). While these methods are suitable for reasoning tasks, they lack the flexibility of the data-driven, learned models as they cannot generalize or adapt to any changes in the environment that their fixed physics engine simulates. For example, in such fixed models, inference can fail on physically implausible scenes and may require additional tricks to resolve such issues (Smith et al., 2019).\nSymbolic regression has been used for general physics learning in many prior works, ranging from Schmidt & Lipson (2009), who used SR to discover force laws from experimental data, to the more recent work of Cranmer et al. (2020) on distilling symbolic forces from INs using genetic algorithms. Even more recently, Udrescu & Tegmark (2020) proposed an interesting framework, AI Feynman, which recursively simplifies the SR problem using dimensional analysis and symmetries discovered by neural networks to discover the underlying physics equation that generated the data. The focus of these prior works has been to discover the underlying physical equations that directly lead to the observed data, but unlike our approach, they do not aim to support reasoning in physical environments, which is a common task of interest in intuitive physics studies."
}, { "heading": "3 BAYESIAN-SYMBOLIC PHYSICS", "text": "Our framework, Bayesian-symbolic physics, combines symbolic learning of physical force laws with Bayesian statistical learning of object properties such as mass and charge. The environment is modelled by a probabilistic generative model governed by Newtonian dynamics. Physical laws are learnable symbolic expressions that determine the force exerted on each object, based on the position and properties of other objects. These properties might not be observed, so they are treated as latent variables and learned in a Bayesian fashion. The physical laws themselves have a prior distribution to organize the search space and discourage the model from learning physically meaningless laws; we call this distribution a grammar of Newtonian physics.\nTo learn with incomplete data, we are inspired by results from Ullman et al. (2018), humans are able to simultaneously predict the trajectory and update their inference about the properties of the object under new observation. This motivates an EM-based learning and inference method to fit BSP models. In the E-step, we obtain the distribution of entity properties by sampling from their posterior distribution using the current guess of force laws, and in the M-step, we use the samples from the E-step to perform SR to update the force functions. This enables BSP models to learn and reason in environment with incomplete information." }, { "heading": "3.1 GENERATIVE MODEL OF THE ENVIRONMENT", "text": "We represent each entity i ∈ {1 . . . N} by a vector of properties zi, such as mass, charge, coefficient of friction, and shape, some of which may be unobserved. At each time step t, a state vector sit = (p i t,v i t) is associated with each entity i, where p i t ∈ Rd and vit ∈ Rd are position and velocity vectors respectively and d is the dimensionality of the environment, typically 2 or 3. Let {τ i}Ni=1, where τ i = pi1:T := (p i 1, . . . 
, p^i_T), be the set of observed trajectories from an environment with N entities. Together with a prior on z, the generative process of BSP defines a joint probability distribution p(D, z; F) over the observed trajectory data D and latent properties z given the force function F.1 An example of the generative process of a three-body problem is shown in figure 2.\nThe state transition of an entity in a Newtonian system depends not only on its properties and current state but also on its interaction with other entities in the environment. Therefore, we define a pairwise interaction function F(z^i, s^i, z^j, s^j), where i, j ∈ {1 . . . N}; we interpret F as the force applied to i due to its interaction with j. Then, the trajectory τ^i of each entity is generated by a transition function T that consumes the current state and all of its interactions as\ns^i_{t+1} = T(s^i_t, F(z^i, s^i_t, z^1, s^1_t), . . . , F(z^i, s^i_t, z^N, s^N_t)). (1)\nAs forces are additive, the forces on entity i can be summed to get the total applied force f^i_t = Σ_{j=1}^N F(z^i, s^i_t, z^j, s^j_t). Similar to Sanchez-Gonzalez et al. (2019), we use numerical integration to simulate the Newtonian dynamics, updating s with the acceleration obtained from f^i_t. 1As physical dynamics are typically sensitive to initial states, we assume the noise-free initial states are given either as part of the data D or as a point-mass prior over the initial state, and they are thus omitted in the notation.\nSpecifically, we choose the Euler integrator since its update rules correspond to the basic relations between position, velocity and acceleration. With these specifications, equation 1 becomes\na^i_t = f^i_t / m^i, v^i_{t+1} = v^i_t + a^i_t ∆t, p^i_{t+1} = p^i_t + v^i_{t+1} ∆t, (2)\nwhere m^i is the mass of the recipient of the force f^i_t and ∆t is the step size of the Euler integrator. Finally, we add Gaussian noise to each trajectory {τ^i}_{i=1}^N, that is, D := {τ̃^i}_{i=1}^N where τ̃^i := (p̃^i_1, . . .
, p̃^i_T), p̃^i_t ∼ N(p^i_t, σ^2) and σ is the noise level. For clarity, Appendix A provides the complete generative process represented by a probabilistic program." }, { "heading": "3.2 A GRAMMAR OF NEWTONIAN PHYSICS", "text": "In order to attain sample efficiency, we chose to learn F(z^i, s^i, z^j, s^j) using symbolic search, but this approach can be inefficient if the search space of possible functions is too large, or inaccurate if the search space is too small. Therefore, we constrain the function F to be a member of a context-free language called the grammar of Newtonian physics, G, which we now describe. We consider the following terminal nodes in G: learnable constants c1, c2, c3, masses m^i, m^j, friction coefficients µ^i, µ^j, shapes s^i, s^j, positions p^i, p^j, velocities v^i, v^j and the contact point c for a pair of entities,2 together with the following arithmetic expressions: (·)^2 (square), +, −, ×, ÷, ‖·‖_2 (L2 norm), normalise(·) and project(·, ·) (projection of a vector onto a unit vector). However, naively supporting all possible arithmetic expressions for any combination of terminal nodes would make SR highly inefficient and even lead to physically meaningless force laws. Therefore, in order to constrain the expression search further, we introduce physics-inspired production rules, along with preterminal and nonterminal nodes, as shown in Figure 3. We now discuss the design choices of our grammar.\nMotivated by dimensional analysis in the natural sciences, in which the relations between different units of measurement are tracked (Brescia, 2012), we build the concept of units of measurement into the nonterminals of G. That is, mass has the unit kilogram (Kg), distance the unit meter (Meter), and speed the unit meter per second (MeterSec). With this unit system in place, we only allow addition\n2In cases of no contact, c is set as the middle position of the two objects, i.e.
c = (p^i + p^j)/2.\nAlgorithm 1: Robust expectation–maximization for Bayesian-symbolic physics initialize the force function F0 as constantly zero ; for i = 1, . . . ,m do\nsimulate k + k′ chains from p(z | D; F_{i−1}) using HMC ; // E-step starts compute ESS for each chain and remove k′ chains with the smallest ESS ; select the last sample from each chain as {z1, . . . , zk} ; get current loss function L_i(e, c) = Σ_{j=1}^k L(e, c; z_j, D) ; // M-step starts get candidates C = {(t∗1, c∗1), . . . , (t∗r, c∗r)} by Algorithm 2 with L_i for r repetitions ; find (t∗, c∗) from C with the best loss and set F_i = getF(t∗, c∗, G) ; // Update force\nend return F = F_m and {z1, . . . , zk} ∼ p(z | D; F_m) ;\nand subtraction of symbols with the same units, thereby pruning away physically meaningless expressions, e.g. Kg − Meter. Importantly, this leads to force laws that have the unit of Newton. We also forbid the direct use of the absolute positions p^i, p^j and c and only allow their differences.3 This ensures that all force laws are reference-invariant, that is, independent of the choice of the reference frame. To be consistent with the unit system, we call vectors obtained from operations on positions meter vectors (MeterVec) and those obtained from operations on velocities meter-per-second vectors (MeterSecVec). These variables are all vectors, thus we call them reference-invariant vectors (RefInvVec). When such vectors are normalised, or divided by their corresponding “scalar variables”, they become unitless vectors (UnitlessVec) that can be used to describe a direction.\nThe start symbol of the grammar is Force. We allow forces to be summed by right-branching or to be conditioned on a Boolean expression; this Boolean conditioning supports conditional forces, i.e., forces that only apply when a condition is true, such as collision forces and friction.
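To make the dimensional-analysis constraint concrete, here is a minimal sketch (our own toy implementation, not the paper's actual grammar engine) that tracks units as vectors of base-unit exponents and rejects additions of mismatched units such as Kg − Meter:

```python
# Toy unit system: a unit is a dict of base-unit exponents, e.g.
# Newton = Kg * Meter / Sec^2 -> {"kg": 1, "m": 1, "s": -2}.
KG = {"kg": 1, "m": 0, "s": 0}
METER = {"kg": 0, "m": 1, "s": 0}
SEC = {"kg": 0, "m": 0, "s": 1}
METER_PER_SEC = {"kg": 0, "m": 1, "s": -1}
NEWTON = {"kg": 1, "m": 1, "s": -2}

def mul(u, v):
    """Unit of a product: exponents add."""
    return {k: u[k] + v[k] for k in u}

def div(u, v):
    """Unit of a quotient: exponents subtract."""
    return {k: u[k] - v[k] for k in u}

def add(u, v):
    """Addition/subtraction is only defined for identical units;
    anything else is pruned as physically meaningless."""
    if u != v:
        raise ValueError("physically meaningless: %s +/- %s" % (u, v))
    return u

# mass * (speed / time) has the unit of force (Newton), so a production
# producing this expression would be kept by the grammar ...
accel = div(METER_PER_SEC, SEC)          # m/s^2
assert mul(KG, accel) == NEWTON
# ... while Kg - Meter would raise and be pruned.
```

A real implementation would attach such unit vectors to the nonterminals of G and check them when expanding production rules.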
Since the goal of our work is not perceptual learning, we provide perceptual primitives (collision detection as a function doesCollide and isOn to check if a disc is on a mat) in the grammar. Collision is then handled by applying an extra force to the entity when doesCollide is true; the force must still be learned.\nFinally, some care is needed to ensure the grammar is unambiguous. For example, if we used a rule like Coeff → Coeff × Coeff , then the grammar could generate many expressions that redundantly represent the same function. This would make search much more expensive. Instead, we represent this rule in a right-branching way by the introduction of BaseCoeff and BaseForce as nonterminals. Although the grammar puts basic constraints on plausible physical laws, it is still expressive. For example, there are more than 6 million possible trees of depth 6." }, { "heading": "3.3 LEARNING ALGORITHM", "text": "Following the EM approach, our learning method (Algorithm 1) alternates between an E-step, where object properties are estimated given the current forces (Section 3.3.2), and an M-step, where forces are learned given object properties (Section 3.3.1). See Appendix B.1 for hyperparameters." }, { "heading": "3.3.1 SYMBOLIC REGRESSION WITH LEARNABLE CONSTANTS", "text": "Symbolic regression is a function-approximation method that searches over the space of mathematical expressions specified by a user-provided context-free grammar (CFG; Koza, 1994). A CFG consists of a start symbol, sets of nonterminal, preterminal and terminal symbols, and a set of production rules. If each production rule in the grammar is specified with a probability (with the probabilities of all rules summing to 1), the grammar is called a probabilistic context-free grammar (PCFG), which effectively defines a distribution over possible expression trees. As such, one can sample from a PCFG and/or evaluate the probability of a given tree. In our work, we use the cross-entropy method for SR.
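To illustrate the cross-entropy method in this setting, here is a deliberately tiny sketch (our own toy candidates and data, with the tree distribution collapsed to a categorical over three fixed expressions and the constant held fixed rather than optimized by L-BFGS):

```python
import math, random

random.seed(0)

# Toy candidate force laws (stand-ins for trees sampled from a PCFG).
CANDIDATES = {
    "c*m":     lambda m, r, c: c * m,
    "c/r":     lambda m, r, c: c / r,
    "c*m/r^2": lambda m, r, c: c * m / r**2,
}

# Synthetic data generated by the "true" law F = 2*m/r^2.
data = [(m, r, 2.0 * m / r**2) for m in (1.0, 2.0, 3.0) for r in (1.0, 2.0)]

def loss(f, c):
    """Squared-error fitness of a candidate law on the data."""
    return sum((f(m, r, c) - y) ** 2 for m, r, y in data)

# Cross-entropy method: sample candidates from the current distribution,
# keep the elite (best-fitting samples), refit the distribution, repeat.
probs = {k: 1.0 / len(CANDIDATES) for k in CANDIDATES}
for _ in range(5):
    names = random.choices(list(probs), weights=list(probs.values()), k=20)
    elite = sorted(names, key=lambda n: loss(CANDIDATES[n], 2.0))[:5]
    probs = {k: (elite.count(k) + 1e-3) / (len(elite) + 1e-3 * len(probs))
             for k in probs}

# The distribution should concentrate on the zero-loss form "c*m/r^2".
best = max(probs, key=probs.get)
```

In the actual method the distribution is a full PCFG over production rules, refit by maximum likelihood on the elite trees.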
The method starts with a PCFG for the given grammar that assigns equal probabilities to all production rules. At each iteration, it samples n trees (up to a specified depth d) from the current PCFG and evaluates their fitness by a loss function L. After this, the trees with the top-k fitness are selected and used to fit, via maximum likelihood, the PCFG that will be used in the next iteration.\n3This is in fact consistent with how such variables are pre-processed in the neural network approaches. Usually the mean of a pair of positions is subtracted from the pair to make them reference-invariant.\nAlgorithm 2: Cross-entropy method with learnable constants initialize a PCFG P0 for G uniformly ; for i = 1, . . . ,m do\ninitialize an empty candidate set C ; for j = 1, . . . , n do\nsample an expression e_j ∼ P_{i−1} with a maximum depth of d ; solve c∗_j = arg min_c L(e_j, c) by L-BFGS ; // Lower-level optimization compute the loss of the sampled tree ℓ_j = L(e_j, c∗_j) and add (e_j, ℓ_j) to C ;\nend if i < m then\nfit a PCFG P_i via maximum likelihood on the trees from C with the top-k fitness ; end return the best expression tree e∗ from C and the corresponding constant as c∗ ;\nTo learn the force laws, we need to find an expression e ∈ G and a setting for the learnable constants c = [c1, c2, c3] that define the force function F_{e,c}. The loss used by the cross-entropy method involves computing the log-likelihood of the generative model. As the observed trajectory is generated sequentially given an initial state, the computation of the log-likelihood term cannot be parallelized and can be computationally expensive in practice. Following the loss form of (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2019), we use a teacher-forced or vectorized version of the log-likelihood\nLL(e, c; z, D) = Σ_{i=1}^N Σ_{t=1}^{T−1} log N(p̃^i_{t+1}; T(s̃^i_t, F_{e,c}(z^i, s̃^i_t, z^1, s̃^1_t), . . . , F_{e,c}(z^i, s̃^i_t, z^N, s̃^N_t)), σ) (3)\nwhere T follows equation 2 and s̃^i_t := (p̃^i_t, ṽ^i_t).
As such, we assume velocity is also available in the dataset D. Clearly, equation 3 differs from the sequential version, as the input to the integrator contains noise at each step. However, similar to previous works, we found that this is not an issue when learning forces by regression, and it allows a 10x speed-up in computing the log-likelihood.\nIn order to favor simpler trees, we add a regularization term, a weighted log-probability under a uniform PCFG prior of G, to the negative log-likelihood; our final loss per trajectory is\nL(e, c; z, D) = −LL(e, c; z, D) + λ log P0(e). (4)\nHere P0 is the uniform PCFG of G and λ is a hyper-parameter that controls the regularization. The loss for multiple trajectories is simply the summation of L over the individual trajectories. Optimizing equation 4 can be seen as maximum a posteriori (MAP) estimation.\nWhen using the cross-entropy method for symbolic regression, the continuous constants c = [c1, c2, c3] require care as they can take any value. To handle this, we use bilevel optimization, where the upper level is the original symbolic regression problem and the inner level is an extra optimization over the constants. Specifically, we use an L-BFGS step to optimize the constants before computing the loss of each candidate tree within the cross-entropy iterations. For cases where the constants are very small, e.g. the gravitational constant G = 6.67 × 10−11, we parameterize the constants as c × 10−9 to avoid numerical issues in the inner-level optimization. Traditionally, if such a strategy is not used, constants are either randomly generated from a predefined, fixed integer set or a continuous interval, or, for evolutionary algorithms, they can be mutated and combined during evolution to produce constants that fit better; such constants are often referred to as ephemeral constants (Davidson et al., 2001).
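The inner-level constant fit can be sketched as follows (a toy one-constant example of our own; a simple golden-section search stands in for L-BFGS, and the loss is plain squared error rather than the negative log-likelihood of equation 4):

```python
def fit_constant(expr, data, lo=0.0, hi=10.0, iters=60):
    """Inner-level optimization of Algorithm 2: fit the constant c of a
    candidate expression by 1-D golden-section search (L-BFGS stand-in)."""
    phi = (5 ** 0.5 - 1) / 2
    loss = lambda c: sum((expr(x, c) - y) ** 2 for x, y in data)
    a, b = lo, hi
    for _ in range(iters):
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        if loss(c1) < loss(c2):
            b = c2   # minimum lies in [a, c2]
        else:
            a = c1   # minimum lies in [c1, b]
    c = (a + b) / 2
    return c, loss(c)

# Candidate expression c * x^2, data generated by the true law 3 * x^2:
# the fitted constant should converge to 3 with near-zero residual loss.
data = [(x, 3.0 * x ** 2) for x in (1.0, 2.0, 3.0)]
c_star, l_star = fit_constant(lambda x, c: c * x ** 2, data)
```

Because the constants are optimized away inside the loss, the upper-level search only has to compare symbolic forms, as described above.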
Compared to these methods, the benefit of our formulation is that the evaluation of each tree candidate depends on the symbolic form only, as the constants are optimized away, making the search more efficient. Note that although the literature has not explicitly framed this way of constant learning as bilevel optimization, such a strategy is not new and is similar in spirit to (Cerny et al., 2008; Kommenda et al., 2013; Quade et al., 2016). In contrast to the recent use of bilevel optimization in meta-learning, e.g. (Finn et al., 2017), our method is simpler: as our upper-level optimization is gradient-free, we do not need to pass gradients from the lower level to the upper level.\nIn practice, as the cross-entropy method itself is sensitive to random initialization, in order to robustify the M-step, we repeat it for r runs and pick the best optimum. We provide a complete description of the cross-entropy method with learnable constants in Algorithm 2." }, { "heading": "3.3.2 REASONING ABOUT UNKNOWN PROPERTIES", "text": "With a force function F given, reasoning about unknown properties reduces to posterior inference on z in the generative model specified in Section 3.1. Since, for a fixed F, the model in our framework is end-to-end piecewise differentiable with respect to the properties, we perform inference by sampling using Hamiltonian Monte Carlo (HMC; Duane et al., 1987; Neal et al., 2011). Other particle-based alternatives like importance sampling and sequential Monte Carlo are possible but are less efficient.\nIn order to draw k samples from the posterior robustly in the E-step, we first run k + k′ independent HMC chains using the no-U-turn sampler (NUTS; Hoffman & Gelman, 2011) for a reasonably large number of iterations, where k′ is also a hyper-parameter to choose. After this, we remove the k′ chains with the smallest effective sample size (ESS). This reduces the chance of using samples from chains that mixed poorly or got stuck in a bad region due to random initialization.
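The chain-selection heuristic just described can be sketched as follows (our own simplification: scalar chains and a crude lag-1-autocorrelation ESS in place of a full ESS estimator):

```python
def ess(chain):
    """Crude effective sample size from lag-1 autocorrelation:
    ESS ~ n * (1 - rho) / (1 + rho)."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    if var == 0:
        return 0.0  # a completely stuck chain carries no information
    cov1 = sum((chain[t] - mean) * (chain[t + 1] - mean)
               for t in range(n - 1)) / n
    rho = cov1 / var
    return n * (1 - rho) / (1 + rho)

def select_chains(chains, k_prime):
    """Drop the k' chains with the smallest ESS, keep the rest."""
    ranked = sorted(chains, key=ess, reverse=True)
    return ranked[: len(chains) - k_prime]

sticky = [1.0] * 50                        # a stuck chain: ESS of 0
mixing = [(-1.0) ** t for t in range(50)]  # fast-mixing chain: high ESS
kept = select_chains([sticky, mixing], k_prime=1)
```

The last sample of each kept chain would then be returned as one of the E-step samples.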
Finally, we pick the last sample from each chain as the samples returned by the E-step, {z1, . . . , zk}." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present a battery of empirical evaluations of the symbolic M-step and demonstrate how our complete EM algorithm is capable of joint reasoning and learning.\nDatasets. We consider three simulated datasets for our evaluation. These three datasets, if combined together, correspond to the dataset used in Ullman et al. (2018) for assessing physics learning and reasoning in humans. The first dataset, NBODY (n-body simulation with 4 bodies), is populated by placing a heavy body with large mass and no velocity at [0, 0] and three other bodies at random positions with random velocities such that they would orbit the heavy body in the middle in the absence of the other two bodies. The gravitational constant is set such that the system is stable within the duration of the simulation. The ground truth force to learn is the gravitational force between the bodies. The second dataset, BOUNCE, is generated by simulating elastic collisions between a set of discs and the box that contains them. The gravitational constant is set small such that the gravitational force is negligible, and the ground truth force to learn is the collision resolution force. The last dataset, MAT, simulates the friction-based interaction of discs and a mat. We populate this dataset by rolling discs with different initial states over mats with random sizes and positions and applying friction when they come in contact. The ground truth force to learn is this friction. Each dataset consists of 100 scenes, 20 of which are held out for testing.
All scenes are simulated using a physics engine with a discretization of 0.02, for 50 frames; see figure 4 for an example of each dataset and Appendix B.2 for the ground truth force expressions under our grammar.\nBaselines. Being most closely related to our work, a specific instantiation of the OGN model (Sanchez-Gonzalez et al., 2019) serves as our neural baseline. The original OGN model does not assume any particular Hamiltonian dynamics and thus outputs the partial derivatives of both the position and momentum variables to integrate the dynamics. In our specific realization, we only make it output the partial derivative of the velocity variable, since under Newtonian dynamics the partial derivative of the position variable is analytically known as the velocity. Since OGN (and other neural methods, such as IN) is designed to learn with all properties given, we assume all the properties are fully observed and compare our symbolic force model against OGN. See Appendix B.3 for details of the neural architecture, training and parameterization setup for OGN." }, { "heading": "4.1 DATA-EFFICIENCY: SYMBOLIC VS NEURAL", "text": "We now compare the symbolic M-step of BSP against the OGN-based neural baselines in terms of data-efficiency. Specifically, we check how accurate the per-frame predictions of these models are on held-out data, and how their accuracy changes with the amount of training data. We use a noise-free version of the trajectories in this evaluation and provide all the properties as observed data, since the neural baselines cannot be trained if the properties are not fully observed. For each dataset, holding out 20 scenes for evaluation, we randomly shuffle the remaining 80 scenes for training and use the first k scenes to fit the model. Because an average human can perform such learning tasks with about 5 scenes (Ullman et al., 2018), we only vary k from 1 to 10 in our experiments.
We use the root mean squared error (RMSE) per frame as the performance metric, repeat each of the experiments five times with different training sets, and finally report the mean, maximum and minimum performance for all the methods, as shown in figure 5. As a reference, we include the performance of F = 0, a zero-force baseline, corresponding to the constant-velocity baseline in (Battaglia et al., 2016). We also include the performance of the ground truth force F∗, which has an RMSE per frame of 0. As can be seen, the symbolic M-step is more sample-efficient than the neural baseline for all datasets within this limited data regime. For NBODY and MAT, BSP can find the ground truth force function with 1 scene and 4 scenes respectively. For BOUNCE, the neural baseline cannot reach the performance of F = 0 even after 10 training scenes. This is a known issue with neural network approaches when learning collision, as the inherently sparse nature of the collision interaction does not provide enough training signal (Battaglia et al., 2016). As BOUNCE is the only case where our method fails to find the true law within 10 scenes, we include the typical inferred law in Appendix C, as well as the predicted trajectories of some selected scenes for inspection." }, { "heading": "4.2 GENERALIZATION", "text": "It is worth checking how the laws learned in Section 4.1 generalize to new scenes beyond the training data. In cases where the true law is successfully recovered, the expression will undoubtedly generalize to novel scenes. Therefore, it is more interesting to inspect the generalization ability of an approximate law, that is, a law which is not completely equivalent to the true law but is close. The emerged law for the BOUNCE dataset is such an example, as mentioned earlier. It has the expression F† = c ‖v^i − v^j‖_2 (p^i − c)/‖p^i − c‖_2 doesCollide(p^i, s^i, p^j, s^j); see figure 10 in Appendix C for the actual tree.
Although it is not identical to the true law, it is still a good approximation: it takes the velocity difference into account and finds the correct force direction. We now consider applying this law to a completely new scene: a vertical-view world where gravity points in the downward direction. figure 6 shows the trajectories predicted with the true and the approximate law under two different initial conditions. As can be seen, the approximate law successfully generalizes to this novel world. For the first condition, the prediction is very close to the true one, while\n[Figure 7 caption: force function F† = 239.99 m^i m^j / ‖p^i − p^j‖_2 · (p^i − p^j)/‖p^i − p^j‖_2. The constant in figure 7d is c = 2.04 × 10^3.]\nfor the second condition, the concept of bounce is also correctly transferred. The corresponding animations for these plots can also be found in the supplementary material for further inspection." }, { "heading": "4.3 LEARNING WITH UNOBSERVED PROPERTIES", "text": "Now, we demonstrate how the joint learning and reasoning in the BSP method (Algorithm 1) can recover the true force law when some properties are unobserved. As the first experiment, we use three scenes from the (noisy) NBODY dataset (with four entities per scene), such that if the true masses are given, the M-step can successfully learn the true force law. Next, we assume that the masses of the three light entities are unknown, with a uniform prior U(0.02, 9), and the mass of the heavy entity is known. We use Algorithm 1 to fit the same generative model that simulates the data using BSP. figure 7 shows the posterior distribution over mass and the force function at initialization (figure 7a), middle (figure 7b) and convergence (figure 7c). In this run, after 3 iterations, our algorithm successfully recovers the true force function. We repeat this experiment ten times with randomly sampled scenes, and for eight of them BSP successfully recovers the true force law.
Note that because the intermediate learned force law F† is incorrect, the variance of the posterior (in figure 7b) is larger than the one obtained from the true force law (in figure 7c). Comparing the expression at convergence in figure 8d with the true law, the algorithm replaces p^i − p^j with p^i − c and a scaled constant. This is valid, as the contact point is defined as c = (p^i + p^j)/2 when there is no contact.\nFor the second experiment, we use five scenes from the (noisy) MAT dataset. We assume that the only unknown is the friction coefficient of the mat, with a truncated Gaussian prior Truncated(N(µ0, 2^2), 0, 5) (truncated between 0 and 5), where µ0 is the true coefficient. Note that the variance 2^2 is large enough to make the prior uncertain, justifying it as a fair choice. Similarly, we use Algorithm 1 to fit the same generative model that simulates the data using BSP. figure 8 shows the posterior distribution over mass and the force function at initialization (8a), middle (8b) and convergence (8c) of the algorithm. Comparing the expression at convergence with the true law, the algorithm learns v^i − v^j instead of v^i, as the mat velocity is zero, i.e. v^j = 0, in all scenes." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "We present BSP, the first fully Bayesian approach to symbolic intuitive physics, combining symbolic learning of physical force laws and statistical learning of unobserved properties. Our work paves the way for using learnable, data-efficient IPEs in intuitive physics by providing a computational framework to study how humans’ iterative reasoning-learning is mentally performed." }, { "heading": "A COMPLETE GENERATIVE PROCESS", "text": "Section 3 describes the top-down generative model piece by piece. To improve the clarity of the EM framework, we provide the complete generative process of the observations given the force function F, which corresponds to the E-step in our method, as a probabilistic program in Algorithm 3.
In this probabilistic program, we use the keywords ASSUME and OBSERVE for sampling latent variables and observations respectively, following the notation of Wood et al. (2014)." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "B.1 HYPER-PARAMETERS FOR ALGORITHM 1\nFor the E-step, we use k = 3 and k′ = 2, and the hyper-parameters for NUTS are: 150 adaptation steps, 150 HMC iterations, a maximum tree depth of 4 and a target acceptance ratio of 0.75. For the M-step, we repeat r = 2 runs, and the hyper-parameters for the cross-entropy method are: 1,000 total populations, 500 selected populations, 4 iterations and a maximum depth of 8. The weighting parameter for the PCFG prior is 1 for the NBODY and BOUNCE datasets and 1 × 10−4 for the MAT dataset.\nB.2 GROUND TRUTH FORCES\nThe symbolic trees of the ground truth forces used to generate the datasets in Section 4 are given in figure 9.\nB.3 TRAINING SETUP OF OGNS\nFor the OGN baseline, we use a multilayer perceptron (MLP) of d_in → 50 → 50, where the activation function is the rectified linear unit (ReLU), for the node model and an MLP of (50 + 50) → 50 → 50 → 50 → d_out as the edge model. For training, we use the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 5 × 10−3 for a total of 10,000 passes over scenes for the NBODY and MAT datasets and a total of 20,000 passes over scenes for the BOUNCE dataset. For example, if 5 scenes are used, the total number of passes over the dataset would be 10000/5 for the NBODY dataset. This makes sure that the training time for all experiments is fixed.\nIn addition, we found that we need to provide additional prior knowledge on how forces are related to mass and acceleration by parameterizing them as F_e(·) = m a_θ(·), where θ are the NN parameters; otherwise they fail to learn. This parameterization is in fact consistent with (Sanchez-Gonzalez et al., 2019), in which NNs output the partial derivatives of the Hamiltonian system."
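The generative process behind the probabilistic program of Algorithm 3 can be sketched in a few lines (our own simplification: a single entity in one dimension under a known force law, with the Euler updates of equation 2 and Gaussian observation noise on positions):

```python
import random

random.seed(0)

def rollout(p0, v0, mass, force_fn, T=50, dt=0.02, sigma=0.01):
    """ASSUME-style latent dynamics rolled out with the Euler updates of
    equation 2, followed by OBSERVE-style Gaussian noise on positions."""
    p, v, noisy = p0, v0, []
    for _ in range(T):
        a = force_fn(p, v, mass) / mass       # a_t = f_t / m
        v = v + a * dt                        # v_{t+1} = v_t + a_t * dt
        p = p + v * dt                        # p_{t+1} = p_t + v_{t+1} * dt
        noisy.append(random.gauss(p, sigma))  # p~_t ~ N(p_t, sigma^2)
    return noisy

# Constant downward force (toy stand-in for a learned force law F).
traj = rollout(p0=0.0, v0=1.0, mass=2.0, force_fn=lambda p, v, m: -9.8 * m)
```

In the full model, the latent properties z (mass, friction coefficient, etc.) are themselves sampled from their priors before the rollout, and the pairwise forces of equation 1 are summed over all entities.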
}, { "heading": "C THE LEARNED BOUNCE LAW", "text": "As mentioned in Section 4.1 and discussed in Section 4.2, the only case in which BSP fails to infer the true law (within 10 scenes) is of special interest for further inspection. A typical approximate law learned in Section 4.1 is shown in figure 10; see Section 4.2 for discussion on how this law differs from the true one. To highlight, there are basically two mismatches between the true law and the learned law. First, there is no projection operation that correctly calculates the effect of speed. Second, the mass-based coefficient is missing. To assist inspection, we also provide some visualizations in figure 11 using initial conditions from the training set for inspection. The corresponding animations can be found in the supplementary material." } ]
2,020
null
SP:5343a29c611b40fc6df160bff09a9aaf8140d0ab
[ "This paper develops an intrinsic reward to help identify factors of variation within a family of MDPs. This intrinsic reward takes a form of curiosity and is used to develop initial behaviors to identify the causes of the latent variation in the environment dynamics. The experiments are used to validate the proposed intrinsic reward across several analyses used to identify its utility and effectiveness.", "This paper considers the problem of skill discovery in settings where the data appears to be a Markov Decision Process and part of the state is unobservable. The hidden state variables are interpreted as causal factors that control important aspects of the environment dynamics. Under this interpretation, the paper advocates for the use of a reward that encourages learned skills that exercise individual components of the hidden state. These skills are learned with a model-based RL algorithm -- one skill per causal factor -- then transferred for use in a downstream control problem that uses a different learning algorithm. The paper claims the learned skills are qualitatively meaningful, and that they enable agents to solve downstream problems without any additional training. Data used for empirical evidence comes from a simulated manipulation robot. " ]
Humans show an innate ability to learn the regularities of the world through interaction. By performing experiments in our environment, we are able to discern the causal factors of variation and infer how they affect the dynamics of our world. Analogously, here we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner. We introduce a novel intrinsic reward, called causal curiosity, and show that it allows our agents to learn optimal sequences of actions, and to discover causal factors in the dynamics. The learned behavior allows the agent to infer a binary quantized representation for the ground-truth causal factors in every environment. Additionally, we find that these experimental behaviors are semantically meaningful (e.g., to differentiate between heavy and light blocks, our agents learn to lift them), and are learnt in a self-supervised manner with approximately 2.5 times less data than conventional supervised planners. We show that these behaviors can be re-purposed and fine-tuned (e.g., from lifting to pushing or other downstream tasks). Finally, we show that the knowledge of causal factor representations aids zero-shot learning for more complex tasks.
[]
[ { "authors": [ "Ossama Ahmed", "Frederik Träuble", "Anirudh Goyal", "Alexander Neitz", "Manuel Wüthrich", "Yoshua Bengio", "Bernhard Schölkopf", "Stefan Bauer" ], "title": "Causalworld: A robotic manipulation benchmark for causal structure and transfer learning. Under submission 2020", "venue": null, "year": 2020 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-vae. arxiv 2018", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Eduardo F Camacho", "Carlos Bordons Alba" ], "title": "Model predictive control", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marco Cuturi", "Mathieu Blondel" ], "title": "Soft-dtw: a differentiable loss function for time-series", "venue": "arXiv preprint arXiv:1703.01541,", "year": 2017 }, { "authors": [ "Veronica Czitrom" ], "title": "One-factor-at-a-time versus designed experiments", "venue": "The American Statistician,", "year": 1999 }, { "authors": [ "Pieter-Tjerk De Boer", "Dirk P Kroese", "Shie Mannor", "Reuven Y Rubinstein" ], "title": "A tutorial on the cross-entropy method", "venue": "Annals of operations research,", "year": 2005 }, { "authors": [ "Finale Doshi-Velez", "George Konidaris" ], "title": "Hidden parameter markov decision processes: A semiparametric regression approach for discovering latent task parametrizations", "venue": "In IJCAI: proceedings of 
the conference,", "year": 2016 }, { "authors": [ "Ronald Aylmer Fisher" ], "title": "Design of experiments", "venue": "Br Med J,", "year": 1936 }, { "authors": [ "Charles Robert Hicks" ], "title": "Fundamental concepts in the design of experiments", "venue": null, "year": 1964 }, { "authors": [ "Ashley Hill", "Antonin Raffin", "Maximilian Ernestus", "Adam Gleave", "Anssi Kanervisto", "Rene Traore", "Prafulla Dhariwal", "Christopher Hesse", "Oleg Klimov", "Alex Nichol", "Matthias Plappert", "Alec Radford", "John Schulman", "Szymon Sidor", "Yuhuai Wu" ], "title": "Stable baselines. https://github.com/ hill-a/stable-baselines, 2018", "venue": null, "year": 2018 }, { "authors": [ "Maximilian Ilse", "Jakub M Tomczak", "Christos Louizos", "Max Welling" ], "title": "Diva: Domain invariant variational autoencoders", "venue": "arXiv preprint arXiv:1905.10427,", "year": 2019 }, { "authors": [ "Taylor W Killian", "Samuel Daulton", "George Konidaris", "Finale Doshi-Velez" ], "title": "Robust and efficient transfer learning with hidden parameter markov decision processes", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "arXiv preprint arXiv:1802.05983,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "arXiv preprint arXiv:1811.12359,", "year": 2018 }, { "authors": [ "Francesco Locatello", "Ben Poole", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem", "Michael Tschannen" ], "title": 
"Weakly-supervised disentanglement without compromises", "venue": "arXiv preprint arXiv:2002.02886,", "year": 2020 }, { "authors": [ "Hung Ngo", "Matthew Luciw", "Alexander Forster", "Juergen Schmidhuber" ], "title": "Learning skills from play: artificial curiosity on a katana robot arm", "venue": "In The 2012 international joint conference on neural networks (IJCNN),", "year": 2012 }, { "authors": [ "Giambattista Parascandolo", "Niki Kilbertus", "Mateo Rojas-Carulla", "Bernhard Schölkopf" ], "title": "Learning independent causal mechanisms", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Christian F Perez", "Felipe Petroski Such", "Theofanis Karaletsos" ], "title": "Generalized hidden parameter mdps: Transferable model-based rl in a handful of trials", "venue": "AAAI Conference On Artifical Intelligence,", "year": 2020 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference", "venue": null, "year": 2017 }, { "authors": [ "Peter J Rousseeuw" ], "title": "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis", "venue": "Journal of computational and applied mathematics,", "year": 1987 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts", "venue": "Connection Science,", "year": 2006 }, { "authors": [ "B. Schölkopf" ], "title": "Artificial intelligence: Learning to see and act", "venue": "(News & Views). 
Nature,", "year": 2015 }, { "authors": [ "Bernhard Schölkopf" ], "title": "Causality for machine learning", "venue": "arXiv preprint arXiv:1911.10500,", "year": 2019 }, { "authors": [ "Jiayu Yao", "Taylor Killian", "George Konidaris", "Finale Doshi-Velez" ], "title": "Direct policy transfer via hidden parameter markov decision processes", "venue": "In LLARLA Workshop, FAIM,", "year": 2018 }, { "authors": [ "Luisa Zintgraf", "Kyriacos Shiarlis", "Maximilian Igl", "Sebastian Schulze", "Yarin Gal", "Katja Hofmann", "Shimon Whiteson" ], "title": "Varibad: A very good method for bayes-adaptive deep rl via meta-learning", "venue": null, "year": 1910 }, { "authors": [ "De Boer" ], "title": "We sampled 40 plans per iteration from the distribution initialized to uniform U(controlLow, controlHigh). Each of the sampled plans are applied to each of the training environments and the top 10% of the plans are used to update the distribution", "venue": "Overview of training", "year": 2005 }, { "authors": [ "∈ O" ], "title": "d(·, ·) in the space of trajectories is set to be Soft Dynamic Time Warping (Cuturi", "venue": null, "year": 2017 }, { "authors": [ "Assumption Peters" ], "title": "Consider the outcome S obtained by applying an action sequence", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Discovering causation in environments an agent might encounter remains an open and challenging problem for causal reinforcement learning (Schölkopf (2015), Bengio et al. (2013), Schölkopf (2019)). Most approaches take the form of BAMDPs (Bayes Adaptive Markov Decision Processes) (Zintgraf et al. (2019)) or Hi-Param MDP (Hidden Parameter MDPs) (Doshi-Velez & Konidaris (2016); Yao et al. (2018); Killian et al. (2017); Perez et al. (2020)) which condition the transition p(st+1|st, at;H) and/or reward function R(rt+1|st, at, st+1;H) of each environment on hidden parameters (also referred to as causal factors in some of the above studies). Let s ∈ S, a ∈ A, r ∈ R, H ∈ H where S, A, R, and H are the set of states, actions, rewards and feasible hidden parameters. In the physical world and in the case of mechanical systems, examples of the parameter hj ∈ H include gravity, coefficients of friction, masses and sizes of objects. Typically, H is treated as a latent variable for which an embedding is learned during training, using variational methods (Kingma et al. (2014); Ilse et al. (2019)). Let s0:T be the entire state trajectory of length T . Similarly, a0:T is the sequence of actions applied during that trajectory by the agent that results in s0:T . In an environment parameterized by these causal factors, these latent variable approaches define a probability distribution over the entire sequence of (rewards, states, actions) conditioned on a latent z as p(r0:T , s0:T , a0:T−1; z) that factorizes as\nT−1∏ i=1 p(rt+1|st, at, st+1, z)p(st+1|st, at, z)p(at|st, z) (1)\ndue to the Markov assumption. 
At test time, the agent infers the causal factor associated with its environment by observing the trajectories produced by its initial actions, which can be issued by any policy, e.g., one obtained via model-based reinforcement learning.\nIn practice, however, discovering causal factors in a physical environment is prone to various challenges that are caused by the disjointed nature of the influence of these factors on the produced\ntrajectories. More specifically, at each time step, the transition function is affected by a subset of global causal factors. This subset is implicitly defined on the basis of the current state and the action taken. For example, if a body in an environment loses contact with the ground, the coefficient of friction between the body and the ground no longer affects the outcome of any action that is taken. Likewise, the outcome of an upward force applied by the agent to a body on the ground is unaffected by the friction coefficient. We can therefore take advantage of this natural discontinuity to discern causal factors.\nWithout knowledge of how independent causal mechanisms affect the outcome of a particular action in a given state in an environment, it becomes impossible for the agent to conclude where the variation it encountered came from. Unsurprisingly, Hi-Param and BAMDP approaches fail to learn a disentangled embedding for the causal factors, making their behaviors uninterpretable (Perez et al. (2020)). For example, if, in an environment, a body remains stationary under a particular force, the Hi-Param or BAMDP agent may apply a higher force to achieve its goal of perhaps moving the body, but will be unable to conclude whether the "un-movability" was caused by high friction or high mass of the body. 
Additionally, these approaches require human-supervised reward engineering, making it difficult to apply them outside of the simulated environments they are tested in.\nOur goal is, instead of focusing on maximizing reward for some particular task, to allow agents to discover causal processes through exploratory interaction. During training, our agents discover self-supervised experimental behaviors which they apply to a set of training environments. These behaviors allow them to learn about the various causal mechanisms that govern the transitions in each environment. During inference in a novel environment, they perform these discovered behaviors sequentially and use the outcome of each behavior to infer the embedding for a single causal factor (Figure 1).\nThe main challenge while learning a disentangled representation for the causal factors of the world is that several causal factors may affect the outcome of behaviors in each environment. For example, when pushing a body on the ground, the outcome, i.e., whether the body moves, or how far the body is pushed, depends on several factors, e.g., mass, shape and size, frictional coefficients, etc. However, if, instead of pushing on the ground, the agent executes a perfect grasp-and-lift behavior, only mass will affect whether the body is lifted off the ground or not.\nThus, it is clear that not all experimental behaviors are created equal and that the outcomes of some behaviors are caused by fewer causal factors than others. Our agents learn these behaviors without supervision using causal curiosity, an intrinsic reward. The outcome of a single such experimental behavior is then used to infer a binary quantized embedding describing the single isolated causal factor. Even though causal factors of variation in a physical world are easily identifiable to humans, a concrete definition is required to back up our proposed method. 
We conjecture that the causality of a factor of variation depends on the actions available to the agent. If the set of actions that an agent can take is very limited, there is no way for it to discern a diverse set of causal factors in the environment. Definition 1 (Causal factors). Consider the POMDP (O, S, A, p, r) with observation space O, state space S, action space A, transition function p, and reward function r. Let o0:T ∈ O^T denote a trajectory of observations and T be the length of such trajectories. Let d(·, ·) : O^T × O^T → R+ be a distance function defined on the space of trajectories of length T. The set H = {h1, h2, . . . , hk} is called a set of ε-causal factors if for every hj ∈ H, there exists a unique sequence of actions a0:T that clusters the observation trajectories into two sets O and O′ such that\nmin{d(o0:T, o′0:T) : o0:T ∈ O, o′0:T ∈ O′} > ε (2)\nand such that hj is the cause of the trajectory of states obtained, i.e.,\np(o0:T | do(hj = k), a0:T) ≠ p(o0:T | do(hj = k′), a0:T) ∀ k ≠ k′ (3)\nIntuitively, a factor of variation affecting a set of environments is called causal if there exists a sequence of actions available to the agent under which the resultant trajectories are clustered into two or more sets (for simplicity, here we assume binary clusters). This is analogous to the human ability to conclude whether objects are heavy or light, big or small. For a gentle introduction to the intuition behind this definition, we refer the reader to Appendix D.\nAccording to Def. 1, a causal factor is a parameter in the environment whose value, when intervened on (i.e. varied) over a set of values, results in trajectories of states that are divisible into disjoint
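The separation condition of Definition 1 can be checked mechanically for a toy setup: given trajectories and a candidate clustering, test whether every cross-cluster pair is more than ε apart under the distance d. The Euclidean distance and the numbers below are illustrative assumptions, not the paper's implementation (which uses Soft Dynamic Time Warping, per its appendix).

```python
def is_epsilon_separated(trajectories, labels, dist, eps):
    """Check the separation condition of Definition 1 (Eq. 2): every pair
    of trajectories drawn from opposite clusters must be more than eps
    apart under the distance function dist."""
    cluster0 = [o for o, l in zip(trajectories, labels) if l == 0]
    cluster1 = [o for o, l in zip(trajectories, labels) if l == 1]
    if not cluster0 or not cluster1:
        return False  # a single cluster cannot witness a causal factor
    min_cross = min(dist(o, o2) for o in cluster0 for o2 in cluster1)
    return min_cross > eps

# Toy 1D trajectories (block height over time) under a lifting action:
# heavy blocks stay near the ground, light blocks rise.
euclid = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
trajs = [[0.0, 0.0, 0.1], [0.0, 0.1, 0.0],   # heavy: not lifted
         [0.0, 0.5, 1.0], [0.0, 0.6, 1.1]]   # light: lifted
labels = [0, 0, 1, 1]
separated = is_epsilon_separated(trajs, labels, euclid, eps=0.5)  # True
```

Under a poorly chosen action sequence (e.g. no contact with the block), all trajectories collapse into one cluster and no ε > 0 witnesses the factor.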
For example, mass, which is a causal factor of a body, under an action sequence of a grasping and lifting motion, results in 2 clusters, liftable (low mass) and not liftable (high mass). However, such an action sequence is not known in advance. Therefore, discovering a causal factor in the environment boils down to finding a sequence of actions that makes the effect of that factor prominent by producing clustered trajectories for different values of that environmental factor.\nUsing the above, we propose an intrinsic reward, which allows our agents to discover experimental behaviors which are semantically meaningful and can be used to re-train for downstream tasks, resulting in high sample efficiency. Our work, therefore, forms an important link between structured representation learning and skill discovery, two largely disjoint fields in RL, which stand to benefit from each other.\nThe contributions of the work are as follows:\n• We equip agents with the ability to perform experiments and behave meaningfully in a set of environments in an unsupervised manner. These behaviors can expose or obfuscate specific independent causal mechanisms that occur in the world of the agent, allowing the agent to learn about each in the absence of the others, an important human behavioral trait.\n• We introduce an intrinsic reward, causal curiosity, which allows our agents to discover these behaviors without human-engineered rewards. The outcomes of the experiments are used to learn a disentangled quantized binary representation for the causal factors of the environment, analogous to the human ability to conclude whether objects are light/heavy, big/small etc.\n• Through extensive experiments, we conclude that knowledge of the causal factors aids sample efficiency in two ways - first, that the knowledge of the causal factors aids transfer learning across multiple environments, and, second, that the experimental behaviors acquired can be repurposed for downstream tasks." 
}, { "heading": "2 METHOD", "text": "Consider a set of N environments E with e(i) ∈ E where e(i) denotes the ith environment. The letter H is overloaded. While H is a set of global causal factors (as defined in Def. 1) such that hj ∈ H , each causal factor hj is itself a random variable which assumes a particular value for every instantiation of an environment. Thus every environment e(i) is represented by a set of causal factors {h(i)j ∀j}. For each environment e(i), (z (i) (0), z (i) (1)...z (i) (K−1)) represents the disentangled embedding vector, such that z(i)(j) encodes h (i) j .\nAlgorithm 1 Training Scheme 1: Initialize j = 0 2: Initialize training environment set Envs 3: for iteration m to M do . Experiment Planner Training Loop 4: Sample experimental behavior a0:T ∼ CEM(·) 5: for ith env in Envs do 6: Apply a0:T to env 7: Collect S(i) = O(i)0:T 8: Reset env 9: Calculate −L(S|M) given that M is bimodal clustering model . Calculate Curiosity 10: Update CEM(·) distribution with highest reward trajectories 11: Use learnt qM (z|S) for cluster assignment of each env in Envs i.e. z(i)j = qM (z|S(i)) 12: Update j = j + 1 13: Repeat from step 2, first setting Envs = {e(i) : z(i)j−1 = 0} and then, setting\nEnvs = {e(i) : z(i)j−1 = 1}" }, { "heading": "2.1 TRAINING THE EXPERIMENT PLANNER", "text": "To learn about causal processes through interaction, the agent must produce a sequence of actions a0:T−1 that we call experimental behavior, which, when applied to environment e(i) ∈ E , produces a sequence of observations (state) s(i) = [o(i)0 , o (i) 1 ..o (i) T ], which is then used to infer the value of the embedding for a single causal factor z(i)(j).\nWe motivate this using model selection criterion. Normally in model selection applications, the observations are fixed and the goal is to find a model M∗ that is closest to reality, as represented by:\nM∗ = argmin M (L(M) + L(S|M)) (4)\nwhere L(·) is the description length. However, here, the situation is reversed. 
A simple bi-modal clustering model is fixed, motivated by Definition 1. Then, the agent is motivated to produce actions that result in observations that are best explained by this model. These discovered action sequences are the experimental behaviors we desire.\na∗0:T = argmin_{a0:T} (L(M) + L(S|M)) (5)\nwhere each observed trajectory S = S(a0:T) is a function of the action sequence. As mentioned earlier, the model is fixed in this formulation; hence, the first term L(M) is constant and not a function of the actions. This leaves −L(S|M), which is fed back to the RL agent as a reward function to maximize. We regard this reward function as causal curiosity.\nNote that since each causal factor has its own independent causal mechanism that causes S, the MDL of S will be higher if multiple causal factors cause S. On the contrary, if the agent produces actions which result in an S that is easily explained by a low-capacity bi-modal model M, then this implies that S is caused by fewer causal factors. Consequently, the causal curiosity reward for such an action sequence, −L(S|M), will be high. Therefore, causal curiosity favors experimental behaviors that result in observations caused by few causal factors, thereby allowing us to use S to infer a representation for a single causal factor. For details, please refer to Appendix A." }, { "heading": "2.2 CAUSAL INFERENCE MODULE", "text": "By maximizing the causal curiosity reward, it is possible to achieve behaviors which result in trajectories of states caused by only a single hidden parameter. However, we wish to use the outcome of performing these experimental behaviors in each environment to infer a representation for the causal factor isolated by the experiment in question.\nWe achieve this through cluster membership. After training the Model Predictive Control Planner (Camacho & Alba (2013)), we sample an action sequence a0:T and apply it to each of the
The learnt clustering model M is then used to infer a representation for each environment using the collected outcome S(i) obtained by applying a0:T to each environment.\nz (i) j = qM (z|S (i)) (6)\nThis corresponds to Step 11 of Algorithm (1). The representation learnt is binary in nature corresponding to the quantization of the continuous spectrum of values a causal factor takes in the training set into high and low values. Note however that a binary quantized embedding is not a necessary part of our method. A dense embedding may alternatively be learnt here similar to (Perez et al. (2020); Zintgraf et al. (2019)) using approximate variational inference. However, performing interventions on a dense embedding (Section 2.3) increases the computational complexity exponentially. Balancing space and time complexity, we report results using the quantized binary form of Equation (6). We discuss the implications of increasing the complexity of z(i)j in the discussion." }, { "heading": "2.3 INTERVENTIONS ON BELIEFS", "text": "Having learnt about the effects of a single causal factor of the environment we wish to learn such experimental behaviors for each of the remaining hidden parameters that may vary in an environment. To achieve this, in an ideal setting, the agent would require access to the generative mechanism of the environments it encounters. Ideally, it would hold the values of the causal factor already learnt about constant i.e. do(hj = constant), and intervene over (vary the value of) another causal factor over a set of values K i.e. do(hj = k) such that k ∈ K. 
For example, if a human scientist were to study the effects of a causal factor, say the mass of a body, she would hold the values of all other causal factors constant (interact with cubes of the same size and external texture) and vary only mass to see how it affects the outcome of specific behaviors she applies to each body.\nHowever, in the real world, the agent does not have access to the generative mechanism of the environments it encounters, but merely has the ability to act in them. Thus, it can intervene on the representations of a causal factor of the environment, i.e. do(zi = constant). For example, having learnt about gravity, the agent picks all environments it believes have low gravity, and uses them to learn about a separate causal factor, say friction.\nThis corresponds to Step 13 of Algorithm (1). Thus, to learn about the jth causal factor, we repeat steps 3 onwards on each of the clusters obtained for the (j−1)th.\nEnvs = {e(i) : z(i)j−1 = k}, k ∈ {0, 1} (7)\nThis process continues in the form of a tree (Figure 4), where, for each cluster of environments, a new experiment learns to split the cluster into 2 sub-clusters depending on the value of another hidden parameter. At level n, the agent produces 2^n experiments and inference models, having already intervened on the binary quantized representations of n causal factors." }, { "heading": "3 RELATED WORK", "text": "Doshi-Velez & Konidaris (2016) define a class of Markov Decision Processes where transition probabilities p(st+1|st, at; θ) depend on a hidden parameter θ, whose value is not observed, but whose effects are felt. Killian et al. (2017) and Yao et al. (2018) utilize these Hidden Parameter MDPs (Markov Decision Processes) to enable efficient policy transfer, assuming that transition probabilities across states are a function of hidden parameters. Perez et al. (2020) relax this assumption, allowing both transition probabilities and reward functions to be functions of hidden parameters. Zintgraf et al. 
(2019) approach the problem from a Bayes-optimal policy standpoint, defining transition probabilities and reward functions to be dependent on a hidden parameter characteristic of the MDP in consideration. We utilize this setup to define causal factors. Substantial attempts have been made at unsupervised disentanglement, most notably the β-VAE (Higgins et al.; Burgess et al. (2018)), where a combination of factored priors and the information bottleneck forces disentangled representations. Kim & Mnih (2018) enforce explicit factorization of the prior without compromising on the mutual information between the data and latent variables, a shortcoming of the β-VAE. Chen et al. (2018) factor the KL divergence into a more explicit form, highlighting an improved objective function and a classifier-agnostic disentanglement metric. Locatello et al. (2018) show theoretically that unsupervised disentanglement (in the absence of inductive\nbiases) is impossible and highly unstable, susceptible to random seed values. They follow this up with Locatello et al. (2020), where they show, both theoretically and experimentally, that pair-wise images provide sufficient inductive bias to disentangle causal factors of variation. However, these works have been applied to supervised learning problems, whereas we attempt to disentangle the effects of hidden variables in dynamical environments, a relatively untouched question. Curiosity for robotics is not a new area of research. Schmidhuber (2006), Ngo et al. (2012), and Pathak et al. (2017) describe curiosity as the motivation behind the behavior of an agent in an environment for which the outcome is unpredictable, i.e., an intrinsic reward that motivates the agent to explore the unseen portions of the state space (and subsequent transitions). While causal curiosity is an intrinsic reward, it differs from these traditional definitions of curiosity in that it motivates the agent to produce structure in the outcome of its behavior." 
}, { "heading": "4 EXPERIMENTS", "text": "Our work has 2 main thrusts - the discovered experimental behaviors and the representations obtained from the outcome of the behaviors in environments. The experimental behaviors are tied to contributions 1 and 2 in the Introduction. The causal factors allow us to achieve contribution 3 in the Introduction. We visualize these learnt behaviors and verify that they are indeed semantically meaningful and interpretable. We quantify the utility of the learned behaviors by using the behaviors as pre-training for a downstream task. In our experimental setup, we verify that these behaviors are indeed invariant to all other causal factors except one. We visualize the representations obtained using these behaviors and verify that they are indeed the binary quantized representations for each of the ground truth causal factors that we manipulated in our experiments. Finally, we verify that the knowledge of the representation does indeed aid transfer learning and zero-shot generalizability in downstream tasks. Causal World We use the Causal World Simulation (Ahmed et al. (Under submission 2020)) based on the Pybullet Physics engine to test our approach. The simulator consists of a 3-fingered robot, with 3 joints on each finger. We constrain each environment to consist of a single object that the agent can interact with. The causal factors that we manipulate for each of the objects are size, shape and mass of the blocks. The simulator allows us to capture and track the positions and velocities of each of the movable objects in an environment. While, for most experiments, the 3D position and 3D pose of the blocks is used as the state at each time step, we perform ablation studies where less information is provided to the agent." 
}, { "heading": "4.1 VISUALIZING DISCOVERED BEHAVIORS", "text": "We would like to analyze whether the discovered experimental behaviors are human interpretable, i.e., are the experimental behaviors discovered in each of the setups semantically meaningful? We find that our agents learn to perform several useful behaviors without any supervision. For instance, to differentiate between objects with varying mass, we find that they acquire a perfect grasp-and-lift behavior with an upward force. In other random seed experiments, the agents learn to lift the blocks by using the wall of the environment for support. To differentiate between cubes and spheres, the agent discovers a pushing behavior which gently rolls the spheres along a horizontal direction. Qualitatively, we find that these behaviors are stable and predictable. See videos of discovered behaviors here (website under construction).\nConcurrent with the objective they are trained on, we find that the acquired behaviors impose structure on the outcome when applied to each of the training environments. The outcome of each experimental behavior on the set of training environments results in dividing it into 2 subsets. These subsets correspond to the binary quantized values of a single factor, e.g., large or small, while being invariant to the values of other causal factors of the environments. We also perform ablation studies where instead of providing the full state vector, we provide only one coordinate (e.g., only x, y or z coordinate of the block). We find that causal curiosity results in behaviors that differentiate the environments based on outcomes along the direction provided. For example, when only the x coordinate was provided, the agent learned to evaluate mass by applying a pushing behavior along the x direction. Similarly, a lifting behavior was obtained when only the z coordinate was supplied to the curiosity module (Figure 2)." 
}, { "heading": "4.2 UTILITY OF LEARNED BEHAVIORS FOR DOWNSTREAM TASKS", "text": "While the behaviors acquired are semantically meaningful, we would like to quantify their utility as pre-training for downstream tasks. We analyze the performance on Lifting where the agent must grasp and lift a block to a predetermined height and Travel, where the agent must impart a velocity to the block along a predetermined direction. We re-train the learnt planner using an external reward for these tasks (Curious). We implement a baseline vanilla Cross Entropy Method optimized Model Predictive Control Planner (De Boer et al. (2005)) trained using the identical reward function and compare the rewards per trajectory during training. We also run a baseline (Additive reward) which explores whether the agent recieves both the causal curiosity reward and the external reward. We find high zero-shot generalizability and quicker convergence as compared to the vanilla CEM planner (Figure ??). We find that maximizing the curiosity reward in addition to simultaneously maximizing external rewards results in suboptimal performance due to our formulation of the curiosity reward. To maximize curiosity, the agent must discover behaviors that divide environments into 2 clusters. Thus in the context of the experimental setups, this corresponds to acquiring a lifting/pushing behavior that allows the agent to lift/impart horizontal velocity to blocks in half of the environments, while not being able to do so in the remaining environments. However, the explicit external reward incentivizes the agent to lift/impart horizontal velocity blocks in all environments. Thus these competing objectives result in sub-par performance." }, { "heading": "4.3 VISUALIZATION OF HIERARCHICAL BINARY LATENT SPACE", "text": "Our agents discover a disentangled latent space such that they are able to isolate the sources of causation of the variability they encounters in their environments. 
For every environment, they learn a disentangled embedding vector which describes each of the causal factors.\nTo show this, we use 3 separate experimental setups - Mass, SizeMass and ShapeSizeMass - where each of the causal factors is allowed to vary over a range of discrete values. During Mass, the agent is allowed access to 5 environments with objects having the same shape (cuboids) and size but differing only in mass. During SizeMass, the agent has access to 30 environments with cuboids having sizes and masses ranging over 6 and 5 values respectively. Finally, during ShapeSizeMass,\nthe agent has access to 60 environments with objects having shapes, sizes and masses ranging over 2, 6, and 5 values respectively.\nDuring training, the agent discovers a hierarchical binary latent space (Figure 4), where each level of hierarchy corresponds to a single causal factor. The binary values at each level of hierarchy correspond to the high/low values of the causal factor in question. To our knowledge, we obtain the first interpretable latent space describing the various causal processes in the environment of an agent. This implies that it learns to quantify each physical attribute of the blocks it encounters in a completely unsupervised manner." }, { "heading": "4.4 KNOWLEDGE OF CAUSAL FACTORS AIDS TRANSFER", "text": "Next, we test whether knowledge of the causal factors does indeed aid transfer and zero-shot generalizability. To this end, we supply the representations obtained by the agent during the experimental behavior phase as input to a policy network, in addition to the state of the simulator, and train it for a place-and-orient downstream task (Figure 1). We define 2 experimental setups - TransferMass and TransferSizeMass. In TransferMass, the agent is given access to 10 environments, with 10 varying values of mass. In TransferSizeMass, the agent is allowed access to 10 environments, with 2
In both setups, the agent learns about the varying causal mechanisms by optimizing causal curiosity. Subsequently, using the causal representation along with the state for each environment, it is trained to maximize external reward. For details of the setup, please see Appendix B.\nAfter training, the agents are exposed to a set of unseen test environments, where we analyze their zero-shot generalizability. These test environments consist of unseen masses and sizes and their unseen combinations. This corresponds to \"Strong Generalization\" as defined by Perez et al. (2020). We report results averaged over 10 random seeds.\nFor each setup, we train a PPO-optimized Actor-Critic Policy (referred to as Causally-curious agent) with access to the causal representations and a 56-dimensional state vector from the environment, i.e., at ∼ π(·|st, z0:K) (thus, a total of a 57-dimensional input for TransferMass, and a 58-dimensional one for TransferSizeMass). Similar to Perez et al. (2020), we implement 2 baselines - the Generalist and the Specialist. The Specialist consists of an agent with architecture identical to the Causally-curious agent's, but without access to causal representations (i.e., it receives a 56-dimensional state vector). It is initialized randomly and is trained only on the test environments, serving as a benchmark for the complexity of the test tasks. It performs poorly, indicating that the test tasks are complex. The architecture of the Generalist is identical to the Specialist. Like the Specialist, the Generalist also does not have access to the causal representations, but is trained on the same set of training environments that the Causally-curious agent is trained on. The poor performance of the Generalist indicates that the distributions of training and test tasks differ significantly and that memorization of behaviors does not yield good transfer. 
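As an illustration of this input construction, the sketch below concatenates the simulator state with the binary causal code before feeding the policy network; the dimensions follow the text (a 56-dimensional state plus one binary entry per causal factor), while the function and variable names are our own illustrative choices.

```python
def policy_input(state, causal_code):
    """Concatenate the simulator state with the binary causal code z_0:K.

    `state` is the 56-dimensional observation vector; `causal_code` holds
    one binary entry per discovered causal factor.
    """
    return list(state) + list(causal_code)

# TransferMass: 56-dim state + 1 causal factor (mass) -> 57-dim input
x_mass = policy_input([0.0] * 56, [1])
assert len(x_mass) == 57

# TransferSizeMass: 56-dim state + 2 causal factors -> 58-dim input
x_size_mass = policy_input([0.0] * 56, [0, 1])
assert len(x_size_mass) == 58
```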
We find that causally-curious agents significantly outperform both baselines, indicating that knowledge of the causal representation does indeed aid zero-shot generalizability." }, { "heading": "5 CONCLUSION", "text": "We introduce causal curiosity, an intrinsic reward that allows agents to discover binary quantized representations for the causal factors that affect the environments an RL agent may encounter. We show that optimizing causal curiosity rewards results in the agent performing self-supervised experiments. We find that these experiments happen to be semantically meaningful and can be used as pre-training for downstream tasks. While our work learns binary quantized causal representations, a dense encoding may improve the amount of encoded information about the causal mechanisms of the environments. We leave this to future work." }, { "heading": "E SCALABILITY LIMITATION", "text": "We utilize the extremely popular One-Factor-at-a-time (OFAT) general paradigm of scientific investigation as an inspiration for our method. In the case of many hundreds of causal factors, the complexity of this method will scale exponentially. However, we believe that this would equally be the case for a human experimenter attempting to discover the causation in any system she is studying. Learning about causation is a computationally expensive affair. We point the reader towards a wealth of material on the design of scientific experiments and more specifically the lack of scalability of OFAT (Fisher (1936); Hicks (1964); Czitrom (1999)). Nevertheless, OFAT remains the de facto standard for scientific investigation.\nAlgorithm 2 Inference Loop 1: Input: Unseen Test Environment env, trained Planner and Causal Inference Module 2: Initialize causalRep = [ ] 3: Initialize training environment set Envs 4: for k in range(K) do 5: Reset env 6: Sample experimental behavior a0:T ∼ CEM(·| causalRep) 7: Apply a0:T to env . 
Exploration Phase 8: Collect S = o0:T 9: Use learnt qM (z|S) for cluster assignment, i.e., zk = qM (z|S, causalRep) 10: Append zk to causalRep . Causal Inference Module 11: Learn a policy conditioned on causal factors at ∼ π(·|ot, z0:K) to maximize external reward." } ]
2,020
null
SP:59f3aa13da7e04d36e60a67555cd8254047e949a
[ "The proposed BlendedSearch (BS) presents an intuitive next step in the combination of global and local search schemes for hyper-parameter optimization (HPO). Global search schemes are widely used for HPO but can suffer from large HPO times since their vanilla forms do not account for function evaluation costs. Local search schemes are usually not widely used for HPO but seem useful if the goal is to restrict the search to a region of the search space where the function evaluation costs do not grow drastically. The proposed BS interleaves global and local search steps to ensure that the global search does not go into regions of high evaluation costs while also avoiding being stuck in local minima.", "This paper proposes BlendSearch, which combines global and local optimisation for the problem of hyperparameter optimisation when search cost is heterogenous. To achieve so, they use the combination of one global search instance (e.g. Bayesian optimisation; used to identify promising regions as starting points for local search) with multiple local search instances (which actually do the search). The local search instances will be created, merged and deleted on the fly using the criteria proposed by the authors. The paper finally experimentally validates their approach in various hyperparameter tuning experiments to show promising results." ]
We study the problem of using low cost to search for hyperparameter configurations in a large search space with heterogeneous evaluation cost and model quality. We propose a blended search strategy to combine the strengths of global and local search, and prioritize them on the fly with the goal of minimizing the total cost spent in finding good configurations. Our approach demonstrates robust performance for tuning both tree-based models and deep neural networks on a large AutoML benchmark, as well as superior performance in model quality, time, and resource consumption for a production transformer-based NLP model fine-tuning task.
[ { "affiliations": [], "name": "Chi Wang" }, { "affiliations": [], "name": "Qingyun Wu" }, { "affiliations": [], "name": "Silu Huang" }, { "affiliations": [], "name": "Amin Saied" } ]
[ { "authors": [ "Takuya Akiba", "Shotaro Sano", "Toshihiko Yanase", "Takeru Ohta", "Masanori Koyama" ], "title": "Optuna: A next-generation hyperparameter optimization framework", "venue": "In Proceedings of the 25rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2019 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Eric Brochu", "Vlad M. Cora", "Nando de Freitas" ], "title": "A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning", "venue": "CoRR, abs/1012.2599,", "year": 2010 }, { "authors": [ "Adam D. Bull" ], "title": "Convergence rates of efficient global optimization algorithms", "venue": "J. Mach. Learn. Res.,", "year": 2011 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In KDD,", "year": 2016 }, { "authors": [ "Andrew M Dai", "Quoc V Le" ], "title": "Semi-supervised sequence learning", "venue": "Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "David Eriksson", "Michael Pearce", "Jacob Gardner", "Ryan D Turner", "Matthias Poloczek" ], "title": "Scalable global optimization via local bayesian optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "BOHB: Robust and efficient hyperparameter optimization at scale", "venue": "In Proceedings of the 35th International Conference on Machine Learning", "year": 2018 }, { "authors": [ "Yuzhou Gao", "Tengchao Yu", "Jinglai Li" ], "title": "Bayesian optimization with local search", "venue": "In International Conference on Machine Learning, Optimization, and Data Science,", "year": 
2020 }, { "authors": [ "Pieter Gijsbers", "Erin LeDell", "Janek Thomas", "Sébastien Poirier", "Bernd Bischl", "Joaquin Vanschoren" ], "title": "An open source automl benchmark", "venue": "In AutoML Workshop at ICML", "year": 2019 }, { "authors": [ "András György", "Levente Kocsis" ], "title": "Efficient multi-start strategies for local search algorithms", "venue": "Journal of Artificial Intelligence Research,", "year": 2011 }, { "authors": [ "Frank Hutter", "Holger H. Hoos", "Kevin Leyton-Brown" ], "title": "Sequential model-based optimization for general algorithm configuration", "venue": "In Learning and Intelligent Optimization,", "year": 2011 }, { "authors": [ "Kevin Jamieson", "Ameet Talwalkar" ], "title": "Non-stochastic best arm identification and hyperparameter optimization", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Kirthevasan Kandasamy", "Gautam Dasarathy", "Jeff Schneider", "Barnabás Póczos" ], "title": "Multi-fidelity bayesian optimisation with continuous approximations", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Guolin Ke", "Qi Meng", "Thomas Finley", "Taifeng Wang", "Wei Chen", "Weidong Ma", "Qiwei Ye", "TieYan Liu" ], "title": "Lightgbm: A highly efficient gradient boosting decision tree", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Simon Bartels", "Philipp Hennig", "Frank Hutter" ], "title": "Fast bayesian optimization of machine learning hyperparameters on large datasets", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Tor Lattimore", "Csaba Szepesvári" ], "title": "Bandit algorithms", "venue": null, "year": 2020 }, { "authors": [ "Liam Li", "Kevin Jamieson", "Afshin Rostamizadeh", "Ekaterina Gonina", "Jonathan Ben-tzur", "Moritz Hardt", "Benjamin Recht", "Ameet Talwalkar" ], "title": "A system for 
massively parallel hyperparameter tuning", "venue": "In Proceedings of Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "In ICLR’17,", "year": 2017 }, { "authors": [ "Zhiyun Lu", "Liyu Chen", "Chao-Kai Chiang", "Fei Sha" ], "title": "Hyper-parameter tuning under a budget constraint", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yurii Nesterov", "Vladimir Spokoiny" ], "title": "Random gradient-free minimization of convex functions", "venue": "Foundations of Computational Mathematics,", "year": 2017 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "James C Spall" ], "title": "Multivariate stochastic approximation using a simultaneous perturbation gradient approximation", "venue": "IEEE transactions on automatic control,", "year": 1992 }, { "authors": [ "Niranjan Srinivas", "Andreas Krause", "Sham M Kakade", "Matthias Seeger" ], "title": "Gaussian process optimization in the bandit setting: No regret and experimental design", "venue": "arXiv preprint arXiv:0912.3995,", "year": 2009 }, { "authors": [ "Qingyun Wu", "Chi Wang", "Silu Huang" ], "title": "Frugal optimization for cost-related hyperparameters", "venue": "In AAAI’21,", "year": 2021 }, { "authors": [ "Li Yang", "Abdallah Shami" ], "title": "On hyperparameter optimization of machine learning algorithms: Theory and practice", "venue": null, "year": 2020 }, { "authors": [ "Wu" ], "title": "Optuna 2.0.0 (https://optuna. readthedocs.io/en/stable/index.html) with default settings for TPE sampler. 
For LS, we follow the implementation guidelines", "venue": "Settings of BO and LS. For BO,", "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hyperparameter optimization (HPO) of modern machine learning models is a resource-consuming task, which is unaffordable to individuals or organizations with little resource (Yang & Shami, 2020). Operating HPO in a low-cost regime has numerous benefits, such as democratizing ML techniques, enabling new applications of ML, which requires frequent low-latency tuning, and reducing the carbon footprint. It is inherently challenging due to the nature of the task: trying a large number of configurations of heterogeneous cost and accuracy in a large search space. The expense can accumulate from multiple sources: either a large number of individually cheap trials or a small number of expensive trials can add up the required resources.\nThere have been multiple attempts to address the efficiency of HPO from different perspectives. Each of them has strengths and limitations. For example, Bayesian optimization (BO) (Brochu et al., 2010), which is a class of global optimization algorithms, is used to minimize the total number of iterations to reach global optima. However, when the cost of different hyperparameter configurations is heterogeneous, vanilla BO may select a configuration that incurs unnecessarily high cost. As opposed to BO, local search (LS) methods (Wu et al., 2021) are able to control total cost by preventing very expensive trials until necessary, but they may get trapped in local optima. Multi-fidelity methods (Jamieson & Talwalkar, 2016) aim to use cheap proxies to replace some of the expensive trials and approximate the accuracy assessment, but can only be used when such proxies exist. 
It is difficult for a single search strategy to meet the generic goal of economical HPO.\nIn this work, we propose a blended search strategy that combines global and local search so that we can enjoy the benefits of both worlds: (1) global search can ensure convergence to the global optima when the budget is sufficient; and (2) local search methods enable better control of the cost incurred along the search trajectory. Given a particular global and local search method, our framework, named BlendSearch, combines them according to the following design principles. (1) Instead of sticking with a particular method for configuration selection, we consider both of the candidate search methods and decide which one to use at each round of the configuration selection. (2) We use the global search method to help decide the starting points of local search threads. (3) We use the local search method to intervene in the global search method’s configuration selection to avoid configurations that may incur unnecessarily large evaluation cost. (4) We prioritize search instances of both methods according to their performance and efficiency of performance improvement on the fly. Extensive empirical evaluation on the AutoML Benchmark (Gijsbers et al., 2019) validates the robust performance of our method on a wide variety of datasets. BlendSearch is now publicly available in an open-source AutoML Library1.\n∗Equal contribution" }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "We first briefly introduce the vanilla Bayesian optimization methods and local search methods, which are among the building blocks of our method. Bayesian optimization is a class of global optimization algorithms which is suitable for optimizing expensive black-box functions. It models the probabilistic distribution of the objective conditioned on the optimization variables. 
Typical models include Gaussian process (Snoek et al., 2012), random forest (Hutter et al., 2011), and tree Parzen estimator (TPE) (Bergstra et al., 2011). In BO methods, an acquisition function is used to determine the next point to evaluate. Two common acquisition functions are the expected improvement (EI) (Bull, 2011) over the currently best-observed objective and the upper confidence bound (UCB) (Srinivas et al., 2009). Local search methods are prevalent in the general optimization literature (Spall, 1992; Nesterov & Spokoiny, 2017) but less studied in the HPO literature due to the possibility of getting trapped in local optima (György & Kocsis, 2011). Recent work (Wu et al., 2021) shows that a local search method, FLOW2, can make HPO cost-effective when combined with low-cost initialization and random restart. At each iteration, it samples a pair of vectors (with opposite directions) uniformly at random from a sphere, the center of which is the best configuration found so far (a.k.a. the incumbent) and the radius of which is the current stepsize. Expensive configurations are avoided in the beginning as each iteration proposes a configuration near the incumbent. Random restart of the local search is performed once the convergence condition is satisfied.\nThere are several attempts to address the limitations of vanilla BO or local search methods. BOwLS (BO with local search) (Gao et al., 2020) uses a BO model to select the starting point of a local search thread. Each local search thread is run until convergence and the BO model is updated with the start point and the converged loss. Trust region BO (Eriksson et al., 2019) fits a fixed number of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. It is primarily designed for HPO problems with high-dimensional numerical hyperparameters. 
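To make the FLOW2 iteration above concrete, here is a minimal sketch of one proposal step (our own simplified rendering, not the authors' implementation): sample a random unit direction, probe the incumbent plus and minus the stepsize along that direction, and move the incumbent if either probe improves the loss.

```python
import math
import random

def flow2_step(loss_func, incumbent, best_loss, stepsize):
    """One FLOW2-style proposal step (simplified sketch)."""
    d = len(incumbent)
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g)) or 1.0
    u = [v / norm for v in g]  # random direction on the unit sphere
    for sign in (+1, -1):      # the pair of opposite directions
        probe = [xi + sign * stepsize * ui for xi, ui in zip(incumbent, u)]
        loss = loss_func(probe)
        if loss < best_loss:
            return probe, loss   # move the incumbent to the improved point
    return incumbent, best_loss  # no improvement along this direction

# Usage: minimize a toy quadratic starting from an inexpensive point.
random.seed(0)
x = [5.0, 5.0]
fx = sum(v * v for v in x)  # initial loss: 50
for _ in range(200):
    x, fx = flow2_step(lambda p: sum(v * v for v in p), x, fx, stepsize=0.5)
assert fx < 25.0  # the loss has improved well below the initial 50
```

Because every probe stays within one stepsize of the incumbent, expensive far-away configurations are never proposed early on, which is the cost-control property the text highlights.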
Unfortunately, none of the existing work that tries to combine global search with local search methods considers the heterogeneity of evaluation cost incurred along the search. There are also many attempts to make HPO efficient by speeding up configuration evaluation. Multi-fidelity optimization methods (Klein et al., 2017; Li et al., 2017; Kandasamy et al., 2017; Falkner et al., 2018; Lu et al., 2019; Li et al., 2020) have been proposed for this purpose. They usually require an additional degree of freedom in the problem called ‘fidelity’, to allow performance assessment on a configuration with different fidelities.\n1https://github.com/microsoft/FLAML\nThere is surprisingly little prior work for generic cost-effective HPO. Gaussian process with expected improvement per second (GPEIPS) (Snoek et al., 2012) models the evaluation cost using another Gaussian process, and heuristically adds the estimated cost into the acquisition function. It does not always outperform GPEI as the acquisition function can over-penalize good but expensive configurations." }, { "heading": "3 BLENDSEARCH", "text": "Our framework needs the following information as inputs.\n• B is the total budget of cost. In this work, we measure the cost by CPU/GPU time. • P is the input HPO problem, which has the following attributes characterizing the problem.\n– P.X is the search space in which each element x is a d-dimensional hyperparameter configuration. For a non-categorical hyperparameter coordinate i ∈ [d], if different values of xi lead to heterogeneous cost and there is a known value P.xLowCosti corresponding to a low cost, it is considered a controlled dimension. We use P.D to denote such a set of controlled dimensions. – P.LossFunc(·) is the loss function to be minimized in terms of the configurations x ∈ P.X . – P.CostFunc(·) is the cost function that outputs the cost incurred when evaluating x. 
The goal of an HPO algorithm is to minimize the loss P.LossFunc(x) with the constraint that the total cost incurred G(π) := ∑x∈I(π) P.CostFunc(x) ≤ B, where I(π) is the search trajectory of algorithm π. Note that both P.LossFunc(x) and P.CostFunc(x) are black-box functions, meaning that typically the analytic form is not available and only function values can be observed. To distinguish the operation of querying the loss/cost function from the loss/cost observation, we use P.LossFunc(x) and P.CostFunc(x) to denote the former, and use l(x) and c(x) to denote the latter. l and c (omitting x) are used when there is no ambiguity. • G is the global search method to be used. L is the local search method to be used. L.∆ is the largest stepsize used in local search method L, i.e., the largest possible change in a hyperparameter value between two consecutive search steps.\nThe overall design of our framework is presented in Figure 2 and Algorithm 1. The key idea is to maintain a pool of search threads, one of which corresponds to global search and the others to local search. The pool starts with one global search thread and gradually adds local search threads as the search goes on. Here a search thread is an instance of a global search or local search method, each with its own search trajectory. At each round, a search thread selector selects one of the search threads from the pool according to a priority metric that reflects the search threads’ current performance and efficiency of performance improvement. The selected search thread will then be used to propose a configuration to evaluate in this round. When the selected search thread is the global search thread, a config validator first checks whether the proposed configuration is within the ‘admissible’ region for evaluation. If not, it uses a backup local search thread instead. 
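The cost-constrained objective above can be mirrored by a simple driver loop. The sketch below is illustrative only (all names are ours, not the framework's API): it keeps running an arbitrary proposer until the next observed cost c(x) would push the accumulated cost over the budget B, and returns the best observed loss.

```python
def tune(propose_config, loss_func, cost_func, budget):
    """Run an HPO loop under the total-cost constraint G(pi) <= B.

    `propose_config` stands in for one step of a search method;
    `loss_func` / `cost_func` play the roles of P.LossFunc / P.CostFunc.
    """
    spent, best_x, best_loss = 0.0, None, float("inf")
    trajectory = []  # I(pi), the search trajectory
    while True:
        x = propose_config()
        c = cost_func(x)
        if spent + c > budget:  # the next evaluation would exceed B
            break
        spent += c
        trajectory.append(x)
        l = loss_func(x)
        if l < best_loss:
            best_x, best_loss = x, l
    return best_x, best_loss, spent, trajectory

# Usage: a sequential proposer over one "hyperparameter" whose
# evaluation cost grows with its value (heterogeneous cost).
candidates = iter(range(1, 100))
best_x, best_loss, spent, traj = tune(
    propose_config=lambda: next(candidates),
    loss_func=lambda x: (x - 4) ** 2,  # minimized at x = 4
    cost_func=lambda x: float(x),      # larger x -> more expensive
    budget=21.0,
)
assert spent <= 21.0
assert best_x == 4 and best_loss == 0
```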
A local search thread will be created only when the global search thread proposes a valid config and a certain create thread condition is met, and will be deleted once it converges. Two local search threads will be merged into one if they are close enough. The priority of each search thread is updated after each evaluation round.\nAlgorithm 1 BlendSearch Inputs: HPO problem P , a global search method G, a local search method L, total budget B.\n1: Initialization: Initialize F .S = [S0] where S0 is an instance of G. We denote by F .x0 the initial point of the search. By design, the values of the controlled dimensions of F .x0 are set to be P.xLowCost and those for the other dimensions are proposed by S0. 2: while F .c < B do 3: S̃, S̃bak ← SelectThread(F) 4: (x, l, c)← SearchEvaluate1Step(S̃,F , P ) 5: if x is invalid then (x, l, c)← SearchEvaluate1Step(S̃bak,F , P ) 6: if x is proposed by global search & CreateNewLSCondition is satisfied then 7: Initialize S = InitializeSearchThread(L, P, (x, l, c)) 8: Add the new LS thread S into the pool: F .S← F .S + S 9: DeleteAndMergeLS(F) . Merge or delete existing LS threads when necessary\n10: Update F .Priority 11: if x is proposed by global search then UpdateGSModel(S0, (x, l, c))\nFor convenience, we use F to denote a collection of framework-level variables: • F .S is the list of search threads maintained in our framework. F .S contains at least one search\nthread and among them there is one and only one global search thread, i.e., S0, in F .S. • F .Priority is a priority dictionary, in which the keys are the search threads in F .S, and the\nvalues are the priority of the corresponding search threads. • Bookkeeping information of F : F .l∗ is the best loss achieved among all the search threads and F .c is the total cost consumed in our framework. 
• F .R is the ‘admissible’ region on the controlled dimensions of the current search, which is a hyperrectangle and can be written in the form of F .R := {[F .xmini ,F .xmaxi ]}i∈D with F .xmini and F .xmaxi denoting the minimum and maximum value along the i-th dimension in F .R respectively. They are initially set as F .xmini = F .xmaxi = P.xLowCosti for all i ∈ D. The ‘admissible’ region gradually expands during the search: (1) it is expanded to cover all the points evaluated by all the search threads and all the points that are possible to be reached by the local search within one search step, as shown in line 7 and 8 of Algorithm 2; (2) it expands if a local search thread converges, as shown in line 3 of Algorithm 7 (included in Appendix A).\nIn the following, we explain the key steps in our algorithm.\nStep 1: Search thread selector (line 3 of Alg 1). In addition to the primary search thread S̃, SelectThread also outputs a backup search thread S̃bak which is guaranteed to be a local search thread. It is set to be none when there is no local search thread yet in F .S. Specifically,\nS̃ = arg maxS∈F.S F .Priority(S), S̃bak = arg maxS∈(F.S\\S0) F .Priority(S) if F .S \\ S0 ≠ ∅, and S̃bak = None if F .S \\ S0 = ∅. (1)\nThe design of the priority metric follows the principle of optimism in the face of uncertainty from the multi-armed bandit problem to balance exploitation and exploration (Lattimore & Szepesvári, 2020). Specifically, linear extrapolation is performed adaptively and locally to calculate the improvement speed of each search thread. The estimated future reward based on such a linear extrapolation provides a first-order upper bound of the ground truth future reward assuming each search thread has a diminishing return, i.e., the speed of improvement decreases as more resource is spent. Formally, we introduce the following variables and functions for each search thread S ∈ F .S. 
• Bookkeeping information: S.l1st and S.l2nd are the best loss so far and the second best loss before the best loss is achieved. S.c1st and S.c2nd are the total cost taken when S.l1st and S.l2nd are achieved respectively. S.c is the total cost spent in S. S.x1st is the best configuration found so far.\n• S.s is the performance improvement speed of S. It is calculated as S.s = (S.l2nd − S.l1st)/(S.c − S.c2nd). This formula is only valid when there is at least one improvement. Otherwise, we do not have enough information to estimate the speed of improvement. We set the speed to the highest speed of all the search threads when S.l2nd = S.l1st. This is due to an implicit assumption of diminishing returns. • S.xmini and S.xmaxi are the minimum and maximum values of the i-th dimension over all hyperparameter configurations evaluated in S, respectively. • S.CostImp(·) is a function whose input is a target loss and whose output is the anticipated cost for S to generate a better loss than this target. We use the following formula to compute it, which intuitively uses the past cost of improvement to estimate the future cost of improvement.\nS.CostImp(l) = max{ S.c − S.c1st, S.c1st − S.c2nd, 2(S.l1st − l)/S.s } (2)\nOur proposed priority metric is essentially the negative of the projected loss of S:\nF .Priority(S) = −(S.l1st − S.s× b) (3) in which b = min(maxS∈F.S S.CostImp(F .l∗), B − F .c). maxS∈F.S S.CostImp(F .l∗) can be considered as the resource needed for every S to achieve a better performance than the currently best performance F .l∗. Our priority metric estimates the loss of each search thread if such an amount of resource (restricted by the budget left B − F .c) is given. 
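To make the bookkeeping concrete, the following sketch (our own illustrative rendering of Equations (2) and (3) and the selection rule (1), with made-up thread statistics) computes the improvement speed, the anticipated cost of improvement, and the resulting priorities:

```python
def speed(l1st, l2nd, c, c2nd):
    # S.s = (S.l2nd - S.l1st) / (S.c - S.c2nd)
    return (l2nd - l1st) / (c - c2nd)

def cost_imp(t, target_loss):
    # Eq. (2): anticipated cost for thread t to beat `target_loss`
    return max(t["c"] - t["c1st"],
               t["c1st"] - t["c2nd"],
               2 * (t["l1st"] - target_loss) / t["s"])

def priority(t, b):
    # Eq. (3): negative projected loss after spending b more resource
    return -(t["l1st"] - t["s"] * b)

# Two hypothetical search threads (S0 = global, S1 = local) with
# made-up bookkeeping statistics.
threads = {
    "S0": {"l1st": 0.20, "l2nd": 0.30, "c1st": 40.0, "c2nd": 25.0, "c": 50.0},
    "S1": {"l1st": 0.25, "l2nd": 0.27, "c1st": 12.0, "c2nd": 10.0, "c": 14.0},
}
for t in threads.values():
    t["s"] = speed(t["l1st"], t["l2nd"], t["c"], t["c2nd"])

best_loss, remaining = 0.20, 100.0  # F.l* and the budget left B - F.c
b = min(max(cost_imp(t, best_loss) for t in threads.values()), remaining)

# Eq. (1): primary thread = overall argmax; backup = best local thread.
selected = max(threads, key=lambda k: priority(threads[k], b))
backup = max((k for k in threads if k != "S0"),
             key=lambda k: priority(threads[k], b))
assert selected == "S0" and backup == "S1"
```

With these numbers the local thread improves faster (S1.s = 0.005 vs. S0.s = 0.004), but the global thread's better current loss still wins the projected comparison, illustrating the exploitation/exploration trade-off.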
By considering both the search threads’ current performance and potential improvement, it provides a fair trade-off between exploiting the currently-best and exploring the potentially-better choices.\nAlgorithm 2 SearchEvaluate1Step Inputs: HPO problem P , search thread S, and F\n1: if S is None then Construct x as follows: generate the controlled dimensions of x by adding Gaussian noise on the corresponding dimensions of F .x0 and for the rest of the dimensions sample uniformly at random from the search space P.X . 2: else x← S.ProposeConfig() 3: if S = S0 (i.e., S is the global search thread) & x /∈ F .R then x←invalid 4: else l, c← P.LossFunc(x), P.CostFunc(x) . Evaluate configuration x 5: if x ≠ invalid then 6: BookKeeping(S,x, l, c,F) and update speed S.s, 7: ∀i ∈ P.D, S.xmini ← min{xi, S.xmini }, F .xmini ← min{S.xmini − L.∆,F .xmini }, 8: ∀i ∈ P.D, S.xmaxi ← max{xi, S.xmaxi }, F .xmaxi ← max{S.xmaxi + L.∆,F .xmaxi } 9: Outputs: x, l, c\nStep 2: Config validator and evaluator (line 4-5 of Alg 1). After a search thread (and a backup search thread) is selected, the next step is to propose the next configuration to try with the chosen search thread(s). Intuitively speaking, we generate the next configuration primarily according to the selected search thread S̃, whose priority is ranked the highest. But we set a guard rail for the global search thread, as it may propose an unnecessarily high-cost configuration. We thus introduce a config validator to validate the configurations proposed by global search according to whether they are within the current admissible region of our framework F .R (line 3 of Alg 2). A configuration marked as ‘invalid’ means that it is considered prone to incur unnecessarily high cost and will not be evaluated at this round. In this case, the selected backup search thread will be used to perform another round of SearchEvaluate1Step (line 5 of Alg 1) if it is a valid search thread (i.e., not none). 
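The validity check in line 3 and the region updates in lines 7-8 of Algorithm 2 can be sketched as follows (a simplified rendering of ours that tracks only the admissible hyperrectangle over the controlled dimensions D):

```python
def is_admissible(x, region, controlled_dims):
    """Line 3 of Alg. 2: x is valid iff every controlled dimension of x
    lies inside the hyperrectangle F.R = {[x_i^min, x_i^max]} for i in D."""
    return all(region[i][0] <= x[i] <= region[i][1] for i in controlled_dims)

def expand_region(x, region, controlled_dims, delta):
    """Lines 7-8 of Alg. 2 (simplified): grow F.R to cover the evaluated
    point plus one largest local-search step (L.delta) in each direction."""
    for i in controlled_dims:
        lo, hi = region[i]
        region[i] = (min(lo, x[i] - delta), max(hi, x[i] + delta))

# One controlled dimension, initialized at its low-cost value 4.0,
# so F.R starts as the degenerate interval [4, 4].
D, delta = [0], 2.0
region = {0: (4.0, 4.0)}

assert not is_admissible({0: 100.0}, region, D)  # too costly for now
expand_region({0: 6.0}, region, D, delta)        # a nearby point was evaluated
assert region[0] == (4.0, 8.0)                   # region grew by one step
assert is_admissible({0: 7.0}, region, D)        # now admissible
```

This illustrates why global search cannot jump straight to an expensive region: the region only grows as evaluated points (plus one local-search step) accumulate.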
In the case where the backup thread is none, we generate the new configuration according to line 1 of Alg 2. The config validator helps avoid potentially high-cost evaluations and thus avoids creating local search threads from high-cost points until necessary. The framework does not stick to local search forever because the admissible region F .R gets expanded. Note that according to the definition of F .R, only the controlled dimensions of the hyperparameter configurations are subject to the validation check. If needed, a multi-fidelity pruning strategy can be used in this config evaluator component. Since multi-fidelity pruning does not necessarily yield better performance, its adoption in BlendSearch is optional.\nStep 3: Search thread creator, updater and cleaner (line 6-11 of Alg 1). If the newly proposed configuration is proposed by global search and it is not marked as ‘invalid’, we consider creating a new local search thread using the proposed configuration as a starting point. To make sure the newly created local search thread is relatively good, we first check whether the proposed configuration’s performance is better than at least half of the existing threads’ performance (specified in the CreateNewLSCondition). If so, a new local search thread will be initialized and added to the active search thread pool S. In DeleteAndMergeLS, we check whether a local search thread has converged according to the convergence condition of the specific local search method. If so, the search thread will be removed from S. In addition, we also go through all the local search threads to see whether the incumbent of an LS thread is reachable in one step by another LS thread with lower loss (ref. Appendix A). If so, the former LS thread will be deleted. After a configuration proposed by global search is evaluated, the observation tuple (x, l, c) is then used to update the model of the global search method through function UpdateGSModel. 
For example, when the global search method is a Bayesian optimization method, the model is the surrogate model used.\nDue to the page limit, detailed pseudocode for several of the straightforward functions mentioned in our framework is provided in Appendix A, including CreateNewLSCondition, DeleteAndMergeLS, InitializeSearchThread and BookKeeping." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate BlendSearch in tuning tabular machine learning libraries with an AutoML benchmark (Gijsbers et al., 2019), and in fine-tuning NLP models for text data. The AutoML benchmark consists of 39 tabular datasets that represent real-world data science classification problems. It includes datasets of all sizes, of different problem domains and with various levels of difficulty. As each dataset has 10 cross-validation folds, all the results reported in this paper are averaged over the 10 folds. With this benchmark, we are able to evaluate multiple HPO methods on a large number of datasets within a manageable computational budget, for tuning three machine learning libraries: XGBoost (Chen & Guestrin, 2016), LightGBM (Ke et al., 2017) and DeepTables2. The first two are popular libraries based on gradient boosted trees, and the third is an easy-to-use deep learning toolkit that utilizes the latest research findings for tabular data. We chose them because gradient boosted trees and deep neural networks are the most frequent winners in data science competitions. We run experiments for XGBoost and LightGBM on the AutoML benchmark and report the results in Section 4.1. As a real application, we report an NLP model fine-tuning task for a production use case in Section 4.2. Finally, in Section 4.3 we perform an ablation study to investigate the effectiveness of several important components of our framework. Due to the page limit, we include the results for tuning DeepTables in Appendix B. 
We include the following baselines in our experiments.\n• BO (Akiba et al., 2019) – the Bayesian optimization baseline. We choose a modern HPO library Optuna and use the TPE sampler because of its flexibility in handling mixed continuous and discrete spaces and its good performance reported in existing work (Falkner et al., 2018).\n• LS (Wu et al., 2021) – the recent baseline of using local search with random restart, based on FLOW2. It has been shown to control cost effectively and to outperform BO methods for numerical cost-related hyperparameter search.\n• BOwLS (Gao et al., 2020) – the baseline of an existing approach of combining local search with BO, i.e., using BO to propose start points for local search.\n• ASHA (Li et al., 2020), i.e., asynchronous successive halving – a state-of-the-art HPO method that uses multi-fidelity optimization and supports parallel tuning.\nInitialization setting. For the local search method used in our framework, a low-cost initialization is needed to realize its unique advantages in controlling the cost. It is implemented via setting a low-cost initial value for each of the controlled dimensions. For example, among the hyperparameters tuned in LightGBM (shown in Table 2 of Appendix B), three hyperparameters, including ‘tree num’, ‘leaf num’ and ‘min child weight’, have initial values corresponding to the min or max values in their range, depending on whether they have a positive or negative correlation with the evaluation cost. It does not require the loss of the initial configuration to be low. Only one single low-cost initial value for each controlled dimension needs to be specified as input. 
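To make the low-cost initialization concrete, here is a small illustrative Python sketch of such a search space. The exact ranges and the `low_cost_init` key are our own assumptions, not the authors' implementation; the hyperparameter names echo Table 2 (underscored for code):

```python
# Controlled dimensions start at the cheap end of their range: more
# trees/leaves raise training cost (so they start at their min), while
# a larger min_child_weight yields smaller trees (so it starts at max).
search_space = {
    "tree_num":         {"range": (4, 32768),  "low_cost_init": 4},
    "leaf_num":         {"range": (4, 32768),  "low_cost_init": 4},
    "min_child_weight": {"range": (0.01, 128), "low_cost_init": 128},
    "learning_rate":    {"range": (1e-4, 1.0)},  # uncontrolled dimension
}

def low_cost_point(space):
    """Collect the single low-cost initial value of every controlled
    dimension; uncontrolled dimensions are left to the searcher."""
    return {name: spec["low_cost_init"]
            for name, spec in space.items() if "low_cost_init" in spec}
```

Note that only the controlled dimensions contribute to the initial point; nothing about the *loss* of this configuration is assumed to be good — only its evaluation cost.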
To ensure a fair evaluation, we use the same low-cost initial point (if controlled dimensions exist) as the starting point of all the baselines.\n2https://github.com/DataCanvasIO/DeepTables" }, { "heading": "4.1 TUNING XGBOOST AND LIGHTGBM", "text": "We tune a set of 9-dimensional hyperparameters (all numerical) in LightGBM and 11-dimensional hyperparameters (9 numerical and 2 categorical) in XGBoost. A detailed description of the search space can be found in Appendix B. In this section, we omit the multi-fidelity baseline method, i.e., ASHA, because we do not find a good ‘fidelity’ dimension that works well in tuning LightGBM and XGBoost (ref. additional results in Appendix B). We perform the evaluation on 37 out of the 39 datasets from the AutoML benchmark (‘Robert’ and ‘Dionis’ are excluded due to out-of-memory errors and extremely long training time). The input budget B (in terms of CPU time) is set to 4 hours for the 3 largest datasets among the 37 datasets, and 1 hour for the rest. From the performance curves shown in Figure 3, we observe that BO tends to perform well on small datasets, e.g., ‘credit’ in Figure 3(a), where the 1h budget is sufficient. Under the same budget, it may perform badly on large datasets, e.g., ‘volkert’ in Figure 3(c), as it may try configurations which consume a very large portion of the budget at the early stage of the search. On the medium-size dataset, e.g., ‘KDDCup09’ in Figure 3(b), the local search method is more efficient than BO in the early stage but is outperformed in the later stage. BlendSearch performs similarly to the better one between LS and BO in the early stage, and surpasses both of them in the later stage. We also observe that BOwLS performs similarly to LS (sometimes worse). This is because BOwLS needs to wait until a local search converges before proposing a new one. 
The aggregated result over all the test cases in Figure 4(a) & (b) is consistent with these observations.\nThe interplay between local and global search in BlendSearch. We investigated the dynamics of local and global search thread selection in BlendSearch. As a case study, we show the results in tuning XGBoost on two datasets in Figure 1(b) and Figure 5(a). The result in Figure 1(b) shows that the global search indeed plays a role after 1000s, which contributes to BlendSearch’s better performance compared to LS. The result in Figure 5(a) shows that although the first point suggested by the global search (at around 200s) does not yield significantly better performance immediately, the induced new local search thread (the thread in the green circle) led to significantly better performance soon after. These two figures together indicate that global search is not only responsible for directly finding better configurations, but also for creating local search threads that can achieve yet better performance. We show two statistics about the overall interplay between global and local search in BlendSearch on all the datasets evaluated for tuning XGBoost in Figure 6.\nWe provide additional experiments for tuning XGBoost and LightGBM in Appendix B, including a comparison with ASHA under different settings of fidelity, and an empirical study about the effect of low-cost initialization.\nTakeaway. (1) BlendSearch is able to overcome the limitations of BO and LS and, at the same time, inherit their advantages. By blending BO and LS, BlendSearch is able to outperform both on this large collection of datasets over time. (2) The interplay between global and local search indeed contributes to BlendSearch’s good performance." }, { "heading": "4.2 TRANSFORMER-BASED NLP MODEL FINE-TUNING", "text": "This section presents an application of economical hyperparameter optimization to an NLP model fine-tuning task used in a large software company. 
It starts from a large transformer-based model Turing-NLRv2 with 24 transformer layers, each with 16 attention heads and hidden dimension 1024, totaling 340M parameters. It is pretrained on English Wikipedia (2,500M words) and BookCorpus (800M words), and uses byte-pair encoding3. This pre-trained model is then fine-tuned (Dai & Le, 2015) for use in multiple production scenarios, including sequence classification, named entity recognition and question answering. The fine-tuning procedure is performed for a dozen separate tasks, and is repeated on a regular cadence, typically every few weeks. We focus our experiment on a single sequence classification fine-tuning task where the objective is to label a document (consisting of one or more sentences) with one of five possible classes. For fine-tuning this model, we introduce a classification layer with 1024 × 5 = 5120 additional weight parameters, randomly initialized. The dataset used for training consists of 52K labeled examples, which we split 80/20 for training/validation. The objective to maximize is the f1-score obtained on the validation set of 10.4K labeled documents. Selecting hyperparameters for fine-tuning this model has been a manual process that typically takes a data scientist a few days.\n3msturing.org\nIn our experiment we use a 6-dimensional search space (4 numerical and 2 categorical). A detailed description of all the hyperparameters tuned, including their ranges, can be found in Table 5 in Appendix B. We compare to ASHA and let it use 16 VMs with 4 NVIDIA Tesla V100 GPUs on each VM. We run ASHA with 16 concurrent jobs for 6 hours of wallclock time. That amounts to 4×16×6 = 384 GPU hours of hardware cost. We run BlendSearch on a single VM of the same configuration for 3 hours, which uses 12 GPU hours in total. For comparison, we include ASHA-1 using the same single VM. The results are summarized in Table 1.\nTakeaway. 
Not only does BlendSearch find a more accurate model than both ASHA and manual effort, it also does so faster while consuming only 3% of the resources of ASHA-16." }, { "heading": "4.3 ABLATION STUDY", "text": "To investigate the effectiveness of several important modules of our framework (colored in orange in Figure 2), we perform an ablation study in tuning XGBoost on a random subset of the datasets (one third of the datasets mentioned in Section 4.1). We show the aggregated rank of different variants of our method in Figure 4(c). Specifically, we study the following three modules of the framework. (1) Priority metric. ‘BS-Priority:RoundRobin’ uses the round-robin policy in the search thread selector. (2) Config validator. ‘BS-w/o-ConfigValidator’ skips the validity check of the proposed configuration. (3) Create new thread condition. In ‘BS-CreateCond:Always’ and ‘BS-CreateCond:BestOrFirst’, the following two conditions are used as the condition for creating new local search threads respectively: always create, and create a new thread only when the loss is better than all the existing search threads’ loss or there is no local search thread yet.\nFrom this ablation study, we have the following observations: (1) Round-robin selection performs worse than selection with our designed priority metric. Round-robin’s relative performance becomes worse in the later stage of the search because it cannot avoid bad-performing search threads. (2) The config validator is also vital in our framework. (3) Overall, the conditions for creating new threads have a smaller impact on our method compared to the other designs studied. The ‘better than half’ condition used by default tends to perform best." }, { "heading": "5 EXTENSION AND FUTURE WORK", "text": "In the low-resource scenario targeted by this paper, each single trial is not resource-saturated if we spend all resources in it. So we do not recommend parallel trials in this low-resource scenario. 
When more resources are available than a single trial can consume, our framework can be extended by running the trials from different search threads on multiple workers. For example, if there are additional workers available, we can keep invoking the search thread selector (but skip the local search threads that have O(d) trials running). Our design of having multiple independent local search threads naturally allows efficient asynchronous parallel trials. The design of utilizing existing global optimization methods allows existing easy-to-parallelize global optimization methods (such as random search or batch versions of BO) to be plugged in. The prioritization of search threads is still useful as long as the maximal concurrent number of trials divided by the number of search threads is smaller than O(d). Since our method can be used together with multi-fidelity pruning methods, it can naturally inherit the asynchronous resource scheduling when used in the parallel setting. Parallelization is now supported in the latest version of BlendSearch’s implementation.\nIn this work, we show the effectiveness of BlendSearch through an extensive empirical study. As future work, it is worth studying the theoretical properties of BlendSearch, including theoretical guarantees about its convergence rate and total resource consumption." }, { "heading": "B MORE DETAILS ABOUT EXPERIMENTS AND ADDITIONAL RESULTS", "text": "B.1 EXPERIMENT SETUP\nSettings of BO and LS. For BO, we use the implementation from Optuna 2.0.0 (https://optuna.readthedocs.io/en/stable/index.html) with default settings for the TPE sampler. For LS, we follow the implementation guidelines from Wu et al. (2021). After a local search thread is created from a particular starting point, we fix the categorical dimensions and only search for numerical dimensions in that local search thread. 
A local search thread S is considered to have converged (corresponding to S.converged() in Algorithm 7) once the stepsize of the local search thread is smaller than a lower bound introduced by Wu et al. (2021).\nExperiments in tuning XGBoost and LightGBM. The XGBoost and LightGBM experiments are performed on a server with an Intel Xeon E5-2690 v4 2.6GHz and 256GB RAM. A full list of the hyperparameters tuned and their ranges can be found in Table 3 and Table 2. The search space for numerical hyperparameters aligns with the search space used in (Wu et al., 2021). On the same fold, the same random seed is used for LS, BO and BS. Experiments on different folds use different random seeds.\nExperiments in NLP model fine-tuning. For ASHA, we set the min and max epochs to 1 and 16, and the reduction factor to 4.\nAlgorithm 6 CreateNewLSCondition. Inputs: l, F. Outputs: |F.S| = 1 or l ≤ Median({S.l_1st : S ∈ F.S \\ S0})\nAlgorithm 7 DeleteAndMergeLS. Inputs: S, F\n1: if S.converged() then\n2:   F.S ← F.S \\ S\n3:   F.R.x_i^min ← F.R.x_i^min − L.∆ and F.R.x_i^max ← F.R.x_i^max + L.∆, ∀i ∈ D′\n4: else\n5:   for all S′ ∈ F.S \\ S do\n6:     if S ∈ S′.ReachableInOneStep() and S′.l < S.l then\n7:       F.S ← F.S \\ S\n8:       break\n9:     else if S′ ∈ S.ReachableInOneStep() and S.l < S′.l then F.S ← F.S \\ S′\nResult aggregation details. Aggregated rank in Figure 4(a)&(c) and 12(c) is calculated as follows: (1) per dataset per fold, each method is ranked based on the loss on the validation set at each second (x-axis), starting from when there is at least one finished config evaluation in any method; (2) the rank is then averaged across datasets per fold; (3) we finally compute the average rank (line) and confidence interval (shaded area) across 10 folds. Scaled loss in Figure 4(b) is calculated similarly. 
Per dataset per fold, min-max scaling is applied to each method using the maximum and minimum loss along the whole performance curve across all methods.\nB.2 ADDITIONAL EXPERIMENTAL RESULTS ON LIGHTGBM AND XGBOOST\nMore performance curves on LightGBM and XGBoost. The performance curves for tuning LightGBM on 3 representative datasets with a 1h budget are shown in Figure 7. We observe that the performance of LS is quite good (compared to BO), especially on large datasets. This result is consistent with the results reported in (Wu et al., 2021), where all the hyperparameters for tuning are numerical. In our experiment of XGBoost tuning, we include categorical hyperparameters. LS performs worse in this case because the introduction of categorical hyperparameters amplifies the local search method’s limitation of being trapped in local optima. The observations about BlendSearch for LightGBM are similar to XGBoost tuning. The performance curves on the three large datasets with a 4h budget are shown in Figures 8 and 9, where similar conclusions can be drawn.\nMulti-fidelity. We compare BO and BlendSearch with the multi-fidelity baseline ASHA for tuning LightGBM and XGBoost in Figure 10. In this experiment, we tried two choices of fidelity dimensions with ASHA, including number of iterations and sample size (the sample size begins with 10K, so small datasets are excluded) respectively. The results show that the multi-fidelity baselines overall perform no better than BO and are significantly worse than BlendSearch.\nAblation study on the low-cost initialization. In this work, we use low-cost initialization for the controlled dimensions of the hyperparameters. Although such information is fairly easy to obtain, we investigate our method’s robustness when no controlled dimension is provided. 
We test BlendSearch in a controlled-dimension-agnostic setting: there are still hyperparameters with heterogeneous cost, but the controlled dimensions and a low-cost initial point are not specified as input. In such scenarios, BlendSearch will use random initialization and the config validator always returns ‘yes’. We compare BlendSearch with local search and BO under such a setting using the same random initial point. In Figure 11(a) & (b) we report the aggregated rank and scaled loss on LightGBM across half of the datasets mentioned in Section 4.1. The results show that even if BlendSearch is agnostic to the controlled dimensions and a random initialization is used, it is still able to outperform both the local search method and BO.\nB.3 TUNING DEEPTABLES.\nIn this experiment, we tune 9-dimensional hyperparameters (5 numerical and 4 categorical as detailed in Table 4) in DeepTables. Since the training of deep neural networks is more time-consuming than that of XGBoost, we run experiments for DeepTables on the datasets where its results are worse than the best known performance in the benchmark, including ‘shuttle’, ‘cnae’, ‘mfeat’, ‘vehicle’, ‘phoneme’, ‘kc1’. All experiments for DeepTables are performed on a server with the same CPU, 110GB RAM, and one Tesla P100 GPU. A full list of hyperparameters tuned and their ranges can be found in Table 4.\nRecall that we mentioned multi-fidelity pruning strategies could be incorporated into BlendSearch in the config evaluator component. In this experiment, we are particularly interested in showing the performance of BlendSearch when combined with multi-fidelity methods. To this end, we include three state-of-the-art multi-fidelity methods, including BOHB (Falkner et al., 2018), ASHA (Li et al., 2020), and asynchronous HyperBand (Li et al., 2017; 2020), which have been shown to be efficient for tuning deep neural networks, as well as the BlendSearch variant based on each of them. 
We use the following libraries for baselines: For BOHB, we use HpBandSter 0.7.4 (https://github.com/automl/HpBandSter). For ASHA and asynchronous HyperBand, we use implementations from Optuna 2.0.0. In all the methods compared, including both existing methods and variants of BlendSearch, the number of training epochs is used as the fidelity dimension, with maximum epochs set to 1024, a reduction factor of 3, and minimum epochs 4. For ASHA, we set the minimum early stopping rate to 4 (we adopted this setting as it yields better performance compared to the default setting, i.e., 0).\nBlendSearch incorporates existing multi-fidelity methods in the following way: Each config, either proposed by global search or local search, uses the same schedule to increase the fidelity and check its pruning condition. For example, when ASHA (Li et al., 2020), i.e., asynchronous successive halving, with a reduction factor of η, is used as the pruning strategy, after each config is evaluated at a certain fidelity, it is compared with other configs already evaluated at the same fidelity. The config will be pruned if its loss is ranked in the worst 1/η. Otherwise, the fidelity is multiplied by η. In addition to the original pruning conditions specified by the multi-fidelity method, a configuration will also be pruned at a particular fidelity level where no pruning is performed yet if it does not yield superior performance (compared to the current best) when evaluated at that fidelity level.\nWe present the performance of all compared methods for tuning DeepTables in Figure 12. Figure 12(a) shows the learning curves on the dataset ‘cnae’ with budget 1h. Figure 12(b) and (c) show the aggregated rank and loss on all 6 datasets within the 1h budget. The performance of multi-fidelity methods is significantly improved when used in our BlendSearch framework." } ]
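For illustration, the ASHA-style promotion rule just described (prune a config whose loss ranks in the worst 1/η of its fidelity rung, otherwise multiply its fidelity by η) can be sketched as follows; the function names and tie-breaking are ours, not taken from the authors' code:

```python
import math

def promote(loss, rung_losses, eta=3):
    """Return True if a config evaluated at some fidelity should be
    promoted: it is pruned when its loss ranks in the worst 1/eta
    among all configs already evaluated at that fidelity."""
    ranked = sorted(rung_losses + [loss])      # ascending: best first
    n = len(ranked)
    pruned_tail = max(1, math.ceil(n / eta))   # size of the worst-1/eta tail
    return ranked.index(loss) < n - pruned_tail

def next_fidelity(fidelity, eta=3, max_fidelity=1024):
    """On promotion, multiply the fidelity (training epochs here) by eta,
    capped at the maximum number of epochs."""
    return min(fidelity * eta, max_fidelity)
```

With η = 3 and epoch counts 4 → 12 → 36 → …, a config survives a rung only if it is outside the worst third of its peers at that epoch budget.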
2021
ECONOMICAL HYPERPARAMETER OPTIMIZATION WITH BLENDED SEARCH STRATEGY
SP:ba5cdfc4c1ad55f08c3e39934785e11e61b202ea
[ "The paper proposes to use capsules to perform object detection on COCO. Capsules, while showing promise, are usually too expensive for tasks beyond MNIST and CIFAR. The authors propose three key improvements in DeformCaps, SplitCaps and SE-Routing to improve the efficiency and therefore allow capsules to be applied to larger tasks such as object detection. The authors claim novelties in:", "This paper introduces a capsule network for object detection. To solve the issues of capsule networks when applied to large-scale detection problems, this paper develops deformable capsules, a new prediction head SplitCaps, and a dynamic routing algorithm, SE-Routing. Experiments are conducted on COCO where it performs slightly worse than the baselines but arguably predicts fewer false positives." ]
Capsule networks promise significant benefits over convolutional networks by storing stronger internal representations, and routing information based on the agreement between intermediate representations’ projections. Despite this, their success has been mostly limited to small-scale classification datasets due to their computationally expensive nature. Recent studies have partially overcome this burden by locally-constraining the dynamic routing of features with convolutional capsules. Though memory efficient, convolutional capsules impose geometric constraints which fundamentally limit the ability of capsules to model the pose/deformation of objects. Further, they do not address the bigger memory concern of class-capsules scaling-up to bigger tasks such as detection or large-scale classification. In this study, we introduce deformable capsules (DeformCaps), a new capsule structure (SplitCaps), and a novel dynamic routing algorithm (SE-Routing) to balance computational efficiency with the need for modeling a large number of objects and classes. We demonstrate that the proposed methods allow capsules to efficiently scale-up to large-scale computer vision tasks for the first time, and create the first-ever capsule network for object detection in the literature. Our proposed architecture is a one-stage detection framework and obtains results on MS COCO which are on-par with state-of-the-art one-stage CNN-based methods, while producing fewer false positive detections.
[ { "affiliations": [], "name": "DEFORMABLE CAPSULES" } ]
[ { "authors": [ "Michael A Alcorn", "Qi Li", "Zhitao Gong", "Chengfei Wang", "Long Mai", "Wei-Shinn Ku", "Anh Nguyen" ], "title": "Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Zhe Chen", "Jing Zhang", "Dacheng Tao" ], "title": "Recursive context routing for object detection", "venue": "International Journal of Computer Vision,", "year": 2020 }, { "authors": [ "Robert T Clemen", "Robert L Winkler" ], "title": "Combining probability distributions from experts in risk analysis", "venue": "Risk analysis,", "year": 1999 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Kevin Duarte", "Yogesh Rawat", "Mubarak Shah" ], "title": "Videocapsulenet: A simplified network for action detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kevin Duarte", "Yogesh S. 
Rawat", "Mubarak Shah" ], "title": "Capsulevos: Semi-supervised video object segmentation using capsule routing", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Ross Girshick" ], "title": "Fast r-cnn", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2014 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Alex Krizhevsky", "Sida D Wang" ], "title": "Transforming auto-encoders", "venue": "In International conference on artificial neural networks,", "year": 2011 }, { "authors": [ "Geoffrey E Hinton", "Sara Sabour", "Nicholas Frosst" ], "title": "Matrix capsules with em routing", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Amelia Jiménez-Sánchez", "Shadi Albarqouni", "Diana Mateus" ], "title": "Capsule networks against medical imaging data challenges", "venue": null, "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Adam Kosiorek", "Sara Sabour", "Yee Whye Teh", "Geoffrey E Hinton" ], "title": "Stacked capsule autoencoders", "venue": "In Advances in 
Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Rodney LaLonde", "Ulas Bagci" ], "title": "Capsules for object segmentation", "venue": "In Medical Imaging with Deep Learning (MIDL),", "year": 2018 }, { "authors": [ "Rodney LaLonde", "Drew Torigian", "Ulas Bagci" ], "title": "Encoding visual attributes in capsules for explainable medical diagnoses", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer,", "year": 2020 }, { "authors": [ "Hei Law", "Jia Deng" ], "title": "Cornernet: Detecting objects as paired keypoints", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yanghao Li", "Yuntao Chen", "Naiyan Wang", "Zhaoxiang Zhang" ], "title": "Scale-aware trident networks for object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Shu Liu", "Lu Qi", "Haifang Qin", "Jianping Shi", "Jiaya Jia" ], "title": "Path aggregation network for instance segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "Ssd: Single shot multibox detector", "venue": "In Proceedings of the European Conference on Computer 
Vision (ECCV),", "year": 2016 }, { "authors": [ "Fausto Milletari", "Nassir Navab", "Seyed-Ahmad Ahmadi" ], "title": "V-net: Fully convolutional neural networks for volumetric medical image segmentation", "venue": "In 2016 Fourth International Conference on 3D Vision (3DV),", "year": 2016 }, { "authors": [ "Inyoung Paik", "Taeyeong Kwak", "Injung Kim" ], "title": "Capsule networks need an improved routing algorithm", "venue": "arXiv preprint arXiv:1907.13327,", "year": 2019 }, { "authors": [ "Arjun Punjabi", "Jonas Schmid", "Aggelos K Katsaggelos" ], "title": "Examining the benefits of capsule neural networks", "venue": "arXiv preprint arXiv:2001.10964,", "year": 2020 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolo9000: better, faster, stronger", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolov3: An incremental improvement", "venue": "arXiv preprint arXiv:1804.02767,", "year": 2018 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. 
In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Bharat Singh", "Mahyar Najibi", "Larry S Davis" ], "title": "Sniper: Efficient multi-scale training", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Mervyn Stone" ], "title": "The opinion pool", "venue": "The Annals of Mathematical Statistics, pp", "year": 1961 }, { "authors": [ "Yao-Hung Hubert Tsai", "Nitish Srivastava", "Hanlin Goh", "Ruslan Salakhutdinov" ], "title": "Capsules with inverted dot-product attention routing", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "T Vijayakumar" ], "title": "Comparative study of capsule neural network in various applications", "venue": "Journal of Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yequan Wang", "Aixin Sun", "Jialong Han", "Ying Liu", "Xiaoyan Zhu" ], "title": "Sentiment analysis by capsules", "venue": "In Proceedings of the World Wide Web Conference,", "year": 2018 }, { "authors": [ "Fisher Yu", "Dequan Wang", "Evan Shelhamer", "Trevor Darrell" ], "title": "Deep layer aggregation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shifeng Zhang", "Longyin Wen", "Xiao Bian", "Zhen Lei", "Stan Z Li" ], "title": "Single-shot refinement neural", "venue": null, "year": 2021 }, { "authors": [ "Zhao", "Jianbo Ye", "Min Yang", "Zeyang Lei", "Suofei Zhang", "Zhou Zhao" ], "title": "Investigating capsule", "venue": "pattern recognition,", "year": 2018 }, { "authors": [ "2019a. 
Xingyi Zhou", "Jiacheng Zhuo", "Philipp Krahenbuhl" ], "title": "Bottom-up object detection by grouping", "venue": null, "year": 2019 }, { "authors": [ "Zhou" ], "title": "2019a). While we do not make any sweeping claims, we wish to comment on a few general patterns that seemed to emerge in these examples. In Figure 3, we show a prototypical example of the general trend we are describing", "venue": "In Figures", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Capsule networks promise many potential benefits over convolutional neural networks (CNNs). These include practical benefits, such as requiring less data for training or better handling unbalanced class distributions (Jiménez-Sánchez et al., 2018), and important theoretical benefits, such as building in stronger internal representations of objects (Punjabi et al., 2020), and modeling the agreement between those intermediate representations which combine to form final object representations (e.g. part-whole relationships) (Kosiorek et al., 2019; Sabour et al., 2017). Although these benefits might not be seen in the performance metrics (e.g. average precision) on standard benchmark computer vision datasets, they are important for real-world applications. As an example, it was found by Alcorn et al. (2019) that CNNs fail to recognize 97% of their pose space, while capsule networks have been shown to be far more robust to pose variations of objects (Hinton et al., 2018); further, real-world datasets are not often as extensive and cleanly distributed as ImageNet or MS COCO.\nThese benefits are achieved in capsule networks by storing richer vector (or matrix) representations of features, rather than the simple scalars of CNNs, and dynamically choosing how to route that information through the network. The instantiation parameters for a feature are stored in these capsule vectors and contain information (e.g. pose, deformation, hue, texture) useful for constructing the object being modeled. Early studies have shown strong evidence that these vectors do in fact capture important local and global variations across objects’ feature components (or parts) within a class (Punjabi et al., 2020; Sabour et al., 2017). 
Inside their networks, capsules dynamically route their information, seeking to maximize the agreement between these vector feature representations and the higher-level feature vectors they are attempting to form.\nDespite their potential benefits, many have remained unconvinced about the general applicability of capsule networks to large-scale computer vision tasks. To date, no capsule-based study has achieved classification performance comparable to a CNN on datasets such as ImageNet, instead relegated to smaller datasets such as MNIST or CIFAR. Worse still, to the best of our knowledge, no capsule network has shown successful results in object detection, a very important problem in computer vision, robotics, and medical imaging. Now, the argument can be made that standard benchmark datasets such as ImageNet or MS COCO likely contain the majority of objects in that 3% range of usual poses, and thus CNNs will appear to perform extremely well when measured in terms of accuracy, stripping capsule networks of one of their largest advantages. However, until capsule networks can perform on-par with CNNs on these typical object poses, few will care about the benefits of stronger internal representations and better generalization to unseen poses.\nSummary of Our Contributions: (1) We propose the first-ever capsule-based object detection framework in the literature. Our network is a one-stage (single-shot) architecture, where objects are both localized and classified using capsules, and can perform on-par with the state-of-the-art CNNs on a large-scale dataset (MS COCO). (2) We address the geometric constraint of convolutional capsules (and locally-constrained routing) by introducing deformable capsules, where parent capsules learn to adaptively sample child capsules, effectively eliminating rigid spatial restrictions while remaining memory efficient. 
(3) We design a new capsule-based prediction head structure, SplitCaps, which reformulates the projections of an object’s instantiation parameters, presence, and class, eliminating the previous dimensional increase of capsules by the number of classes. This crucial addition enables the training of capsule networks on large-scale computer vision datasets for the first time in the literature. (4) To route information across SplitCaps’ unique structure, we introduce a novel Squeeze-and-Excitation inspired dynamic routing algorithm, SE-Routing, which seeks to maximize agreement between child capsule projections, without the need for iterative loops." }, { "heading": "2 DEFORMABLE CAPSULES: FIXING LOCALLY-CONSTRAINED DYNAMIC ROUTING", "text": "The capsule network architecture proposed by Sabour et al. (2017) acted on global information, where digit capsules represented the pose and presence of digits in an image regardless of spatial location. The information from all children in the previous layer was sent to every parent in the following layer, weighted via the routing coefficients found in a cosine similarity routing algorithm. While this proved to be a highly-effective strategy, it was also computationally expensive, limiting its use to only small-scale datasets. Recent works attempted to scale up capsule networks to larger problems such as biomedical image segmentation (LaLonde & Bagci, 2018) or action detection in video (Duarte et al., 2018) by using convolutional capsules and locally-constraining the routing algorithm. Although efficient solutions were presented in those studies, the representation power of capsule networks was fundamentally limited due to these imposed local constraints. This is because convolutions, by design, have a fixed geometric structure (Dai et al., 2017), and such a geometric constraint significantly inhibits capsules’ ability to model part-whole relationships, relying on parts of objects to fall within a fixed local grid.
Therefore, it is unreasonable to expect a capsule to effectively represent the pose and deformations of an object when the information related to the parts of that object is locked into a fixed spatial relationship.\nIn this study, we propose to effectively solve this aforementioned problem by introducing a method that balances efficiency with the ability for a capsule to represent any pose and deformation of an object (i.e. where child capsules can be found in different spatial relationships to one another for the same parent). In such a formulation, global information is not explicitly required, but it does require parent capsules to have more flexibility over which child capsules they draw information from. Our proposed solution is deformable capsules. The idea behind the proposed algorithm is simple: if parent capsules are supposed to capture common deformations of the objects they represent within their vectors, then the choice of which children to aggregate information from must be handled in a deformable manner as well. Deformable capsules allow parents to adaptively gather projections from a non-spatially-fixed set of children, and thus effectively and efficiently model objects’ poses.\nTo achieve this overall goal, we follow the same efficient convolutional capsule paradigm, where projection vectors are formed via a convolution operation with a kernel centered on the parent capsules’ spatial location, but now we learn an additional set of weights for each parent capsule. These learnable weights are the same shape as each parent’s kernel, and represent the offset values for the spatial sampling of child capsules for that parent. Based on the child capsule representation vectors in the previous layer, these weights learn which children a parent capsule should adaptively sample from for a given input image.
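To make the offset-based sampling just described concrete, the following is a minimal, single-parent numpy sketch. The function names, shapes, and plain bilinear interpolation are our own simplifications (the per-parent projection matrices and batching of the actual implementation are omitted):

```python
import numpy as np

def bilinear_sample(grid, y, x):
    """Bilinearly interpolate grid (H, W, A) at a fractional location (y, x)."""
    H, W, _ = grid.shape
    y, x = np.clip(y, 0, H - 1), np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * grid[y0, x0] + (1 - wy) * wx * grid[y0, x1]
            + wy * (1 - wx) * grid[y1, x0] + wy * wx * grid[y1, x1])

def deformable_gather(children, center, offsets, k=5):
    """Gather the k x k window of child capsule vectors around `center`,
    displacing each tap by its learned (dy, dx) offset before sampling."""
    cy, cx = center
    taps, idx = [], 0
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            oy, ox = offsets[idx]  # learned spatial offset for this tap
            taps.append(bilinear_sample(children, cy + dy + oy, cx + dx + ox))
            idx += 1
    return np.stack(taps)  # (k*k, A) child vectors to be projected and routed
```

With all offsets zero this reduces to an ordinary 5 × 5 convolutional sampling window; the learned offsets are what free the parent capsule from the fixed local grid.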
Dynamic routing then determines how to weight the information coming from each of these children based on their agreement for each projected parent.\nLet us give a concrete example to better illustrate the parameter savings of this technique. Given an H = 128 by W = 128 grid of child capsules with ci = 32 capsule types of ai = 8 atoms each, being routed to a set of cj = 10 parent capsule types of aj = 16 atoms each, the fully-connected capsules of Sabour et al. (2017) would require H × W × ci × ai × cj × aj ⇒ 128 × 128 × 32 × 8 × 10 × 16 ≈ 671M parameters for this layer alone (assuming the goal is classification, with detection requiring a multiplicative increase by the detection grid size). Instead, using our proposed deformable capsules with a k × k = 5 × 5 kernel, we only require 2 × k × k × ai × cj × aj ⇒ 2 × 5 × 5 × 8 × 10 × 16 = 64K parameters. Convolutional capsules with locally-constrained routing require 32K parameters, not needing the additional spatial offsets kernel, but as mentioned above, they are fundamentally limited in the poses and deformations that they can represent. In our experiments, we found deformable capsules to converge faster and to much higher performance than convolutional capsules.\n3 OBJECTS AS CAPSULES: SplitCaps WITH SE-Routing\nWe propose a novel one-stage (single-shot) capsule network architecture for object detection, called DeformCaps, where objects are detected, classified, and modeled with capsules. Our overall network architecture, shown in Fig. 1, is built upon CenterNet by Zhou et al. (2019a), who proposed to represent objects in images as scalar point values located at the center of their bounding boxes. The authors then regress the remaining characteristics of the object (e.g. height, width, depth) for each center-point detected. In our work, we follow the same center-point detection paradigm, but represent our objects with capsule vectors instead.
Since several recent studies have found utilizing a CNN backbone before forming capsule types to be beneficial to overall performance (Duarte et al., 2018; Kosiorek et al., 2019; Tsai et al., 2020), we adopt the preferred backbone of CenterNet, DLA-34 (Yu et al., 2018), as this gave the best trade-off between speed and accuracy. Features extracted from the backbone are sent to our capsule object detection head and a bounding box regression head. In order to perform object detection using capsules, we introduce a new capsule structure, called SplitCaps, composed of class-agnostic and class-presence capsules, and a novel routing algorithm, called SE-Routing." }, { "heading": "3.1 SPLITCAPS: CLASS-AGNOSTIC CAPSULES AND CLASS-PRESENCE CAPSULES", "text": "As discussed in Section 2, the original CapsNet by Sabour et al. (2017) was extremely expensive in computation, and our proposed deformable capsules balance the modeling of non-rigid object deformations with memory efficiency. However, there is a more significant memory hurdle capsule networks must overcome when scaling up to large-scale datasets, such as MS COCO. In their current implementation, capsule networks represent each class with its own parent\ncapsule vector. On small-scale classification datasets (and using deformable routing), this is not an issue; it amounts to 2 × k × k × ai × cj × aj parameters and N × ci × cj × aj × 4 bytes to store the intermediate representations to be routed for the parents, where cj is usually around 10 classes to represent. Let us suppose we have 5 × 5 kernels with 32 input capsule types of 8 atoms per capsule, 10 output capsule types of 16 atoms per capsule, and a batch size of 32. In total, we would have 2 × 5 × 5 × 8 × 10 × 16 = 64K parameters and 32 × 32 × 10 × 16 × 4 ≈ 655 KB. When we scale this up to object detection, we now need to store representations for every possible object location, and for MS COCO we need to represent 80 possible classes.
This gives us 2 × k × k × ai × cj × aj ⇒ 2 × 5 × 5 × 8 × 80 × 16 = 512K parameters and N × H × W × ci × cj × aj × 4 bytes ⇒ 32 × 128 × 128 × 32 × 80 × 16 × 4 ≈ 86 GB for the intermediate representations, where we assume the output grid of detections is 128 × 128 with a single detection (i.e. bounding box) predicted per class per location. The problem is not any better for large-scale classification datasets such as ImageNet either, where we lose the grid of predictions but grow to 1000 classes, which would require 2 × 5 × 5 × 8 × 1000 × 16 = 6.4M parameters and 32 × 32 × 1000 × 16 × 4 ≈ 66 GB for the intermediate representations. Clearly, with most GPU memories limited to 12–24 GB, a solution must be found for capsule networks to scale up to larger-scale computer vision tasks.\nTo overcome this issue, we propose a new type of capsule architecture, SplitCaps, to more efficiently scale capsule networks to large-scale computer vision tasks. SplitCaps contains two parent capsule types, each with a different number of atoms per capsule, for each location of the detection grid. As before, the idea is to balance efficiency with the ability to learn powerful representations. Towards this goal, SplitCaps proposes to divide up between its two parent capsules the tasks of (i) learning the instantiation parameters necessary to model the possible variations of an object and (ii) predicting which classes of objects are present in a given input. We refer to the first capsule type as our class-agnostic object instantiation capsules, and to the second as our class presence capsules.\nClass-agnostic object instantiation capsules: The purpose of these capsules is similar to those in previous works: model the possible variations (in pose, deformation, texture, etc.) of objects within a vector of instantiation parameters, the span of which should cover all possible variations for that object at test time (hence why capsules are better than CNNs at generalizing to unseen poses).
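The parameter and memory figures quoted earlier in this subsection reduce to simple arithmetic; the following sanity check reproduces them (function and variable names are ours, illustrative only):

```python
# Back-of-the-envelope check of the capsule-layer parameter and memory
# figures discussed above (names are ours; not from the released code).

def deform_caps_params(k, a_i, c_j, a_j):
    """Deformable capsules: one projection kernel plus one offset kernel."""
    return 2 * k * k * a_i * c_j * a_j

def routing_buffer_bytes(n, grid_cells, c_i, c_j, a_j, bytes_per_float=4):
    """Memory for the intermediate child-to-parent projections held for routing."""
    return n * grid_cells * c_i * c_j * a_j * bytes_per_float

# Classification example: 5x5 kernel, 8-atom children, 10 parent types of
# 16 atoms, 32 child capsule types, batch size 32.
print(deform_caps_params(5, 8, 10, 16))             # 64000 parameters (64K)
print(routing_buffer_bytes(32, 1, 32, 10, 16))      # 655360 bytes (~655 KB)

# MS COCO detection: 80 classes over a 128 x 128 detection grid.
print(deform_caps_params(5, 8, 80, 16))             # 512000 parameters (512K)
print(routing_buffer_bytes(32, 128 * 128, 32, 80, 16) / 1e9)  # ~85.9 GB
```

The last figure makes plain why per-class parent capsules cannot scale to detection grids, motivating the SplitCaps design below.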
While previous capsule networks did this in a class-wise manner, we argue such a formulation is not required and is possibly redundant. Many variations (e.g. rotation, skew, stroke thickness) may be class-independent, and thus to model these variations class-wise would require repetition across each capsule type. Instead, we propose to model all classes within a single capsule type (i.e. class-agnostic). In this way, while class-dependent variations would each require their own dimensions of the vector, class-independent variations can each be modeled along a single dimension for all possible objects. Since it is reasonable to assume there will be at least some class-specific variations, we increase the default capsule vector dimension from 16 to 64 to accommodate possible class-specific instantiation parameters.\nDuring training, if an object is present at a given spatial location, the 64-dimensional capsule vector for that location is fed to a reconstruction regularization sub-network to construct the mask of that object, similar to the reconstruction regularization used by Sabour et al. (2017) for classification. This sub-network is a relatively small and fast addition: a set of three ReLU-activated 1 × 1 convolutional layers with 256 filters each, followed by a final sigmoid-activated 1 × 1 convolution with N = n^2 = 28^2 = 784 filters, before reshaping outputs to n × n. Since objects’ scales vary dramatically, we scale-normalize all objects’ ground-truth masks to be 28 × 28 (following He et al. (2017)). Supervised training is conducted by computing the Dice loss (Milletari et al., 2016) between the predicted reconstruction, r, and the object’s mask, m:\nLr = 2 ∑i ri mi / (∑i ri^2 + ∑i mi^2), (1)\nwhere Lr is used to provide a regularization signal to the instantiation parameters being learned. Class presence capsules: The class presence capsules attempt to model which classes of objects are present in the input at each spatial location, if any.
We accomplish this by setting the atoms per capsule to the number of classes being represented (i.e. 80 for MS COCO). Just as Hinton et al. (2018) separately modeled pose (with a matrix) and activation (with a scalar), this 80-dimensional vector can be viewed as a class-dependent set of activation values. The activation values are then passed through a sigmoid function and thresholded; if one or more activation values are above the\nthreshold, an object is determined to be at that spatial location, with the strongest activated dimension determining the class label.\nIn order to produce a smooth loss function during training, we create a ground-truth heatmap by fitting a Gaussian distribution, rather than a single point, to the center-point of each object’s bounding box, with variance proportional to the size of the box following Zhou et al. (2019a). More specifically, we create a heatmap H ∈ [0, 1]^(X/d × Y/d × K) containing each down-scaled ground-truth center-point p̃ = (px/d, py/d) for class k ∈ K using a Gaussian kernel Hxyk = exp(−((x − p̃x)^2 + (y − p̃y)^2) / (2σp^2)), where d is the amount of downsampling in the network and σp is an object-size-adaptive standard deviation (Law & Deng, 2018). In the case of overlapping Gaussians, we take the element-wise maximum. To handle the large class imbalance between objects and background in our heatmaps, we use a penalty-reduced pixel-wise logistic regression with a focal loss (Lin et al., 2017):\nLh = −(1/P) ∑xyk { (1 − Ĥxyk)^α log(Ĥxyk) if Hxyk = 1; (1 − Hxyk)^β (Ĥxyk)^α log(1 − Ĥxyk) otherwise }, (2)\nwhere α and β are hyper-parameters of the focal loss and P is the number of center-points in the input, used to normalize all positive focal loss instances to 1 (Zhou et al., 2019a). We use α = 2 and β = 4 in all our experiments, following Law & Deng (2018). At test time, to efficiently retrieve the object’s exact center, we run a 3 × 3 max-pooling over the thresholded spatial map.
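The supervision signals above (the Dice regularization of Eq. (1), the Gaussian heatmap, and the penalty-reduced focal loss of Eq. (2)) and the test-time 3 × 3 max-pooling trick can be sketched in numpy as follows. This is our own simplified single-class version; note that Eq. (1) as printed is the Dice overlap itself, so in practice one would minimize 1 − Dice:

```python
import numpy as np

def dice_score(r, m, eps=1e-8):
    """Eq. (1) as printed: the Dice overlap between reconstruction r and mask m."""
    return 2 * (r * m).sum() / ((r ** 2).sum() + (m ** 2).sum() + eps)

def gaussian_heatmap(h, w, centers, sigmas):
    """Single-class ground-truth heatmap: one Gaussian per down-scaled
    center-point, with the element-wise maximum taken where they overlap."""
    ys, xs = np.mgrid[0:h, 0:w]
    hm = np.zeros((h, w))
    for (cx, cy), s in zip(centers, sigmas):
        hm = np.maximum(hm, np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                   / (2 * s ** 2)))
    return hm

def focal_loss(pred, gt, alpha=2, beta=4, eps=1e-12):
    """Penalty-reduced pixel-wise focal loss of Eq. (2), normalized by the
    number of center-points (cells where gt == 1)."""
    pos = gt == 1
    pos_term = ((1 - pred[pos]) ** alpha * np.log(pred[pos] + eps)).sum()
    neg_term = ((1 - gt[~pos]) ** beta * pred[~pos] ** alpha
                * np.log(1 - pred[~pos] + eps)).sum()
    return -(pos_term + neg_term) / max(pos.sum(), 1)

def extract_centers(hm, thresh=0.3):
    """Test-time pseudo-NMS: keep cells that equal their own 3 x 3
    neighborhood maximum (the 3 x 3 max-pooling trick)."""
    padded = np.pad(hm, 1, constant_values=-np.inf)
    pooled = np.max([padded[dy:dy + hm.shape[0], dx:dx + hm.shape[1]]
                     for dy in range(3) for dx in range(3)], axis=0)
    return np.argwhere((hm == pooled) & (hm > thresh))  # (row, col) peaks
```

The threshold value here is a placeholder of ours, not the one used in the experiments.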
To predict the height and width of the bounding boxes of objects and recover the x, y offsets needed to map back to the upscaled image dimensions, we follow the same formulation as Zhou et al. (2019a) and pass the backbone features through a 3 × 3 convolutional layer with 256 feature maps, then a 1 × 1 convolutional layer with 2 feature maps. These layers predict the local offset, Ô ∈ R^(W/d × H/d × 2), and size prediction, Ŝ ∈ R^(W/d × H/d × 2), for each center-point and are supervised by\nLo = (1/P) ∑p |Ôp̃ − (p/d − p̃)| and Ls = (1/P) ∑p |Ŝp − (x2 − x1, y2 − y1)|, (3)\nrespectively. Our final objective function is thus defined as L = Lh + λrLr + λsLs + λoLo. We keep λs = 0.1 and λo = 1 as done in Zhou et al. (2019a), and set λr = 0.1 initially, then step up to λr = 2.0 halfway through training." }, { "heading": "3.2 SE-ROUTING: SPLITCAPS NEW ROUTING ALGORITHM", "text": "As a core component of capsule networks, dynamic routing seeks to maximize the agreement between child capsule projections for parent capsules and to fully leverage the richer representations being stored. Since SplitCaps introduces a unique capsule head structure, where instantiation parameters and activations are split across different capsule types, previous dynamic routing algorithms can no longer be directly applied. To overcome this, we propose a new dynamic routing algorithm that takes inspiration from Squeeze-and-Excitation networks (Hu et al., 2018), which we call SE-Routing, illustrated in Fig. 2.\nPreviously proposed dynamic routing algorithms (e.g. Sabour et al. (2017) and Hinton et al. (2018)) were typically iterative, requiring a hand-tuned loop of routing iterations, which proved to be slow and temperamental in practice. Different studies found different numbers of iterations to be effective, and one meta-study of five different iterative dynamic routing algorithms found them all to be largely ineffective (Paik et al., 2019).
To avoid this pitfall, we propose a new routing algorithm to dynamically assign weights to child capsule projections based on their agreement, computed in a single forward pass using a simple gating mechanism with sigmoid activation. Unlike Sabour et al. (2017), which uses a routing softmax to force a one-hot mapping of information from each child to parents, our proposed SE-Routing learns a non-mutually-exclusive relationship between children and parents, allowing multiple children to be emphasized for each parent.\nCreating child capsule projection descriptors (squeeze): Following the Squeeze-and-Excitation paradigm, we first must compute the squeeze (i.e. a set of descriptors which summarize relevant information about each feature) to create a set of child capsule projection descriptors. In Hu et al. (2018), the authors proposed to use the global average activation of each channel with the goal of modeling channel interdependencies. In this study, our goal is to maximize the agreement\nbetween child projections, for both the instantiation parameters and class presence of the object being modeled. With that motivation, we compute three separate descriptors which are fed into the excitation phase of the routing: (i) the cosine angle between the mean projection vector and each child’s projection, which captures object instantiation agreement; (ii) the Kullback–Leibler (KL) divergence of each child’s predicted class distribution and an aggregated distribution, which captures class presence agreement; and (iii) the variance of each child’s predicted class distribution, which captures class presence uncertainty.\nThe cosine angle descriptor, a, is calculated in a similar manner to Sabour et al. (2017). A mean projection vector, ũ = (1/N) ∑i ûi, is first computed using the set of child capsule projections, Û = {û1, û2, ..., ûN}. Then we compute a set of cosine angles between each individual projection and this mean, a = {a1, a2, ..., aN}, where ai = (ũ · ûi)/(|ũ| · |ûi|).
In a similar fashion, we compute a KL divergence descriptor, b, by first creating an aggregate object class distribution. To create this aggregate distribution, we follow the work of Clemen & Winkler (1999), insofar as each child capsule type is treated as an expert giving its prediction about the true underlying class distribution. First, we compute a simple linear opinion pool (Stone, 1961), p(z̃) = (1/N) ∑i σs(zi), where p(z̃) is the aggregated probability distribution, Z = {z1, z2, ..., zN} is the\nset of child class presence projection vectors, and σs(zi)j = e^(zij) / ∑k e^(zik) for j = {1, ..., K}, i = {1, ..., N} is the softmax function used to transform projection vectors into normalized probability distributions over the K classes. Then, we measure the agreement between each child’s predicted distribution, σs(zi), and the aggregate distribution, p(z̃), as the KL divergence between them: bi = ∑k p(z̃k) log(p(z̃k)/σs(zi)k).\nLastly, we take our child capsules’ predicted distributions, σs(zi), and compute their variance to estimate the uncertainty each child has: ci = ∑k (σs(zi)k − (1/K) ∑k' σs(zi)k')^2. Our three sets of descriptors are efficiently computed for all capsules simultaneously (i.e. for the entire batch and across spatial locations) on GPU in parallel with matrix operations. They are then concatenated, s = a ⊕ b ⊕ c, and fed to the excitation layers of our routing mechanism (Fig. 2).\nDetermining routing coefficients (excitation): The excitation stage of the SE-Routing algorithm has the task of learning a mapping from the concatenated set of capsule descriptors, s, into a set\nof routing coefficients for the child capsule projections. Since parent capsule types are no longer different classes in our formulation, but rather two separate aspects of modeling objects, we compute a single set of routing coefficients at each spatial location for both parents.
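Putting the three squeeze descriptors together with the sigmoid-gated excitation mapping that the text formalizes as r = σ(W2 δ(W1 s)), one routing pass at a single spatial location might look like the following simplified numpy sketch. Shapes, names, and the toy excitation weights are our own; the child-to-parent projections are assumed precomputed:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def se_routing(U_obj, Z, W1, W2):
    """One SE-Routing pass at a single spatial location (simplified sketch).

    U_obj: (N, d) child projections for the object instantiation parent.
    Z:     (N, K) child projections for the class presence parent.
    W1, W2: weights of the two-layer fully-connected excitation bottleneck.
    """
    # Squeeze descriptor (i): cosine angle to the mean projection.
    u_mean = U_obj.mean(axis=0)
    a = (U_obj @ u_mean) / (np.linalg.norm(U_obj, axis=1)
                            * np.linalg.norm(u_mean) + 1e-12)
    # Squeeze descriptor (ii): KL divergence from the linear opinion pool.
    P = softmax(Z)                  # each child's class distribution
    p_agg = P.mean(axis=0)          # aggregated distribution p(z~)
    b = (p_agg * np.log(p_agg / (P + 1e-12) + 1e-12)).sum(axis=1)
    # Squeeze descriptor (iii): per-child distribution variance (uncertainty).
    c = P.var(axis=1)
    s = np.concatenate([a, b, c])   # s = a (+) b (+) c, length 3N
    # Excitation: ReLU bottleneck then sigmoid gating -- one coefficient
    # per child, non-mutually-exclusive (no softmax over children).
    r = 1 / (1 + np.exp(-(W2 @ np.maximum(W1 @ s, 0))))
    # Weighted combination into the two SplitCaps parents.
    v_obj = (r[:, None] * U_obj).sum(axis=0)
    v_cls = (r[:, None] * Z).sum(axis=0)
    return r, v_obj, v_cls
```

Because the gate is a sigmoid rather than a softmax, several children can receive coefficients near 1 simultaneously, matching the non-mutually-exclusive routing described above.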
Formally, this mapping is computed as r = σ(W2 δ(W1 s)), where W1 ∈ R^(3N/t × 3N), W2 ∈ R^(N × 3N/t), δ is the ReLU activation function, σ is the sigmoid activation function, and t is the reduction ratio used to form this mapping into a two-layer fully-connected (FC) bottleneck. A brief note: although excitation routing has interesting parallels to self-attention (e.g. dynamically conditioned on the input), our learned mapping is non-mutually-exclusive, while self-attention and CapsNet’s dynamic routing both rely on applying a softmax function over outputs.\nFinally, with the determined routing coefficients r = {r1, r2, ..., rN}, we can compute the output of the SplitCaps detection head. Projection vectors from each child to each parent are computed using the proposed deformable capsules (as described in Section 2). These projections are then combined and weighted by the routing coefficients to form the final parent capsules. These final parents contain the instantiation parameters, vobj = ∑i ri ûobj|i, and class presence, vcls = ∑i ri ûcls|i, of any objects being represented at the given spatial location within the detection grid." }, { "heading": "4 DEFORMABLE CAPSULES ON MS COCO", "text": "We evaluated our deformable capsule object detection framework on the MS COCO dataset (Lin et al., 2014), which contains 118K training, 5K validation, and 20K hold-out testing images. Average precision (AP) is reported over all IoU thresholds and at thresholds 0.5 (AP50) and 0.75 (AP75). We followed the training procedure proposed in Zhou et al. (2019a), training on 512 × 512 pixel inputs, yielding 128 × 128 detection grids, using random flip, random scaling (between 0.6 and 1.3), cropping, and color jittering as data augmentation, and Adam (Kingma & Ba, 2014) to optimize our objective function.
Due to limited compute resources, we initialized the backbone network weights from CenterNet and only trained for 40 epochs with a batch size of 12 and a learning rate of 5e-4, with 5× drops at 5, 15, and 25 epochs. Longer training would likely yield superior results, as found by Zhou et al. (2019a), who obtained better results for CenterNet when increasing from 140 to 230 epochs.\nIn Table 1, we provide results of our proposed deformable capsule network with and without flip and multi-scale augmentations following Zhou et al. (2019a). Inference time on our hardware (Intel Xeon E5-2687 CPU, Titan V GPU, PyTorch 1.2.0, CUDA 10.0, and cuDNN 7.6.5) was consistent with that reported by Zhou et al. (2019a).1 While DeformCaps performs slightly worse than CenterNet in terms of AP, it does so while producing far fewer false positive detections, as shown in Table 2 in Appendix A.2. For ablations, we trained a version of DeformCaps which replaces the proposed deformable capsules with the standard locally-constrained convolutional capsules (non-DeformCaps), and a version which removed the routing procedures (No-Routing). These ablations show the contribution of each component of the proposed method.\n1 We will make our code publicly available to the community for reproducible research." }, { "heading": "5 RELATED WORKS", "text": "" }, { "heading": "5.1 CAPSULE NETWORKS", "text": "The idea of capsules was first introduced by Hinton et al. (2011). Sabour et al. (2017) extended this and proposed dynamic routing between capsules. The EM routing algorithm was then modified by Hinton et al. (2018). Recently, capsule networks have achieved state-of-the-art performance in a wide range of applications: video object segmentation (Duarte et al., 2019), point cloud segmentation (Zhao et al., 2019), explainable medical diagnosis (LaLonde et al., 2020), text classification (Zhao et al., 2018), sentiment analysis (Wang et al., 2018), and various other applications (Vijayakumar, 2019)."
}, { "heading": "5.2 OBJECT DETECTION", "text": "Region proposal-based approaches: R-CNN was one of the first successful deep object detectors, in which a selective search algorithm was used to select a number of region proposals, CNN features were extracted from each of the region proposals and were used to both classify the object and regress its bounding box (Girshick et al., 2014). The later addition of Fast R-CNN (Girshick, 2015) provided end-to-end training and addressed the speed and efficiency issues of R-CNN.\nAnchors-based approaches: Anchors-based approaches sample fixed-shape bounding boxes (anchors) around a low-resolution image grid, then attempt to classify anchors into object classes. Faster R-CNN (Ren et al., 2015) generates region proposals in a first stage network, then attempts to classify and regress bounding boxes for the top-k highest scoring anchors in a second stage network. Later studies such as Redmon et al. (2016) dramatically speed up the process by converting the proposal classifier to a multi-class one-stage detector. Since then, researchers have been working on improving one-stage detectors by including shape priors (Redmon & Farhadi, 2017; 2018), multiple feature resolutions (Liu et al., 2016), re-weighting the loss among different samples (Lin et al., 2017), or modeling channel-wise attention (Chen et al., 2020).\nKeypoint estimation-based approaches: CornerNet (Law & Deng, 2018) attempts to detect objects by predicting two bounding box corners as keypoints. ExtremeNet (Zhou et al., 2019b) extends CornerNet’s approach by estimating all corners and the center of the objects’ bounding box. However, these methods rely on significantly slow combinatorial grouping post-processing stage. Zhou et al. (2019a) proposed CenterNet which attempts to predict only an objects’ center point, and regress all other necessary values from there without the need for grouping or post-processing." 
}, { "heading": "6 DISCUSSIONS, LIMITATIONS, & FUTURE WORK", "text": "Our proposed deformable capsules (DeformCaps) with SplitCaps object-class representations and Squeeze-and-Excitation inspired SE-Routing algorithm represents an important step for capsule networks to scale-up to large-scale computer vision problems, such as object detection or large-scale classification. Our proposed one-stage object detection capsule network is able to obtain results on MS COCO which are on-par with other state-of-the-art one-stage CNN-based networks for the first time in the literature, while also producing fewer false positives. Examining the qualitative results, provided in Appendix A.3, lends empirical evidence that DeformCaps can better generalize to unusual poses/viewpoints of objects than CenterNet (Zhou et al., 2019a). We hope our work will inspire future research into the considerable potential of capsule networks.\nLimitations: Our study contains some limitations, discussed in greater detail in Appendix A.1. Briefly, (1) we had difficulty integrating the bounding box regression values into our capsule object detection head; (2) the choice of descriptors used in the squeeze is somewhat handcrafted, and is open to further investigation; (3) the choice of dimensions to model the class-agnostic instantiation parameters of objects was chosen semi-arbitrarily and could likely improve from fine-search; and (4) the choice of reconstructing objects’ masks versus image patches is not thoroughly explored.\nFuture directions: The reconstruction sub-network of DeformCaps could possibly be trained to produce a fast single-shot instance segmentation framework. At test, potentially detected objects could have their instantiation vectors reconstructed into objects’ masks, then these masks would simply be resized to the predicted bounding-boxes, similar to He et al. (2017) but without needing to have the initial reshape and ROI alignment required in their two-stage approach." 
}, { "heading": "A APPENDIX", "text": "A.1 EXTENDED EXPLANATIONS OF LIMITATIONS AND POSSIBLE RECOMMENDATIONS\nIn the discussion section of the main body of our paper, we mentioned four potential limitations in our study. We would like to discuss these in a bit more detail here. Since so many components of our method are newly introduced, there is a wide range of choices which could be investigated and improved by future researchers and engineers, and we suggest a few of those here:\n(1) We had difficulty in integrating the bounding box regression values into our capsule object detection head. In our implementation, the class-agnostic capsules are trained to predict scalenormalized masks of 28× 28. Ultimately, we would like to integrate predicting the object masks and the boxes for those masks together, as these tasks surely share mutual information. However, to the best of our knowledge, no published works exist for using capsules on a real-valued regression task.\n(2) For our proposed SE-Routing, as with the original Squeeze-and-Excitation network, the choice of descriptors computed in the squeeze is somewhat handcrafted. We propose to use the cosine angle, KL divergence, and variance, and provide justifications for each of these choices, then allow the excitation to learn which of these pieces of information is most beneficial dynamically for each given input. Nonetheless, it is completely plausible that different descriptors could yield superior results. We unfortunately do not have the compute resources to run ablation studies over each of these chosen descriptors individually.\n(3) The choice of 64 dimensions to model the class-agnostic instantiation parameters was decided somewhat empirically. As we argued in the main paper, it is unlikely that all variations across object poses are completely class independent; thus, to represent these extra dimensions of variation, we increase our vector lengths considerably (16→ 64). 
However, it is possible that the number of class-independent and class-dependent variations is significantly higher or lower than the value chosen, and will largely depend on the complexity of the data being modeled. This difficulty is analogous to determining the optimal number of convolutional filters to use at every given layer of a CNN. Related to this, there is the potential for the class-dependent dimensions of the instantiation vectors to have unwanted influence over the cosine angle descriptors when attempting to represent objects of other classes. It could be beneficial to pass class information from the class presence capsule type over to the object instantiation capsule type to dynamically attend to the relevant dimensions of its vector for a given object. In a similar manner, it could be beneficial when computing the probability aggregation using the linear opinion pool to weight the expert opinions in proportion to their uncertainty instead of uniformly.\n(4) We chose to reconstruct objects’ masks with the motivation of forcing the network to learn variations in shape, pose, and deformations. Since CNNs are known to be biased toward texture information\nover shape, we chose not to explicitly supervise the learning of any texture information. Nonetheless, it is plausible that reconstructing the object with texture could yield superior performance. Further, we chose to set the value of the reconstruction regularization’s contribution to the loss to 0.1, following what was found most beneficial by CenterNet (Zhou et al., 2019a) for weighting the size loss contribution, and from a concern to not over-regularize the network early in training, then stepped this value to 2.0 halfway through training to make its value roughly equal to the other loss terms. From our experience, the accuracy remained fairly consistent across values up to 2.0 for this term, while setting its weight to 0.0 resulted in a degradation of performance.
We found that increasing the value during training led to faster improvements in performance, consistent with other works in the literature that use such a regularization term. Engineering efforts on this parameter, such as a temperature function to automatically increase this weight during training, may prove beneficial if the goal is to reach the maximum possible accuracy.\nA.2 ANALYSIS OF FALSE POSITIVES\nDeformCaps tends to be more conservative with its detections than CenterNet. This can be observed both by the slightly lower confidence scores (typically 0.1 less than CenterNet for most detections), and by the overall smaller number of boxes placed in scenes. CenterNet tends to produce far more false positives than DeformCaps, both in the case of incorrect detections and of multiple detections for the same object which failed to be suppressed by the NMS algorithm. Though DeformCaps producing slightly lower confidence scores might account for some of the reduction in false positives, we observe CenterNet consistently producing fairly confident false predictions while DeformCaps does not produce a detection in the same region at all (see qualitative examples in Appendix A.3). A quantitative analysis of this is provided in Table 2. These numbers are generated using the official MS COCO evaluation code in its standard operation. However, instead of only returning the average precision (AP) ratio of true positives (TP) and false positives (FP), namely TP/(TP + FP), we also return the raw FP count.
Therefore, the false positives being seen in Appendix A.3 are a direct result of multiple object centers being predicted incorrectly for the same object or object centers being predicted where there are no objects. We find that DeformCaps, which predicts objects as capsule vectors and not scalars, does not suffer from the former class of FPs.\nThis observation of fewer false positive detections is consistent with what we would expect from a capsule network with dynamic routing as compared to a traditional convolutional neural network (CNN). Where a CNN passes on all activations to the next layer, capsule networks utilize a dynamic routing algorithm to only pass on activations if there is agreement amongst the child capsule projections. In our proposed method specifically, with a SplitCaps structure and SE-Routing, the agreement is computed for projections of both the pose and class of the object being represented. It follows naturally that this would limit the number of false positive detections produced, by reducing the number of activations that get passed on. Further, we find from a survey of these qualitative examples that DeformCaps is better able to detect objects when presented in an unusual pose or from an unusual viewpoint than its CenterNet counterpart. This gives empirical support to one of the purported benefits of capsule networks: to better generalize to unseen poses and viewpoints.\nA.3 QUALITATIVE RESULTS ON MS COCO\nIncluded in this section are a number of qualitative examples for CenterNet (Zhou et al., 2019a) and the proposed DeformCaps on the MS COCO test-dev dataset (Lin et al., 2014) using both flip and multi-scale augmentations. Results for CenterNet were obtained using the official code and trained models provided by the authors (Zhou et al., 2019a). While we do not make any sweeping claims, we wish to comment on a few general patterns that seemed to emerge in these examples. 
In Figure 3, we show a prototypical example of the general trend we are describing. In Figures 4– 5, we include more examples of this trend. In Figures 6– 7, we include a set of interesting examples of unusual object viewpoints or poses being better captured by DeformCaps than by CenterNet." } ]
2,020
null
SP:38d522d92ad048087149a9d612a694c8ab95f3af
[ "The authors present a working memory model composed of a recurrent neural network trained via gradient descent and an associative memory based on the approach taken by Ba et al. (2016) in \"Using Fast Weights to Attend to the Recent Past\". The model consists of an LSTM to which takes the input and its own state from the previous step to produce an output (or new state) which is then passed to a fast weight memory (FWM) module.", "The solution proposed is the combination of an RNN (LSTM) and Fast Weighted Memory (FWM). The LSTM produces a query to the memory used to retrieve information from the memory and be presented at the model output. It also controls the memory through fast weights that are updated through a Hebbian mechanism. The FWM is based on Tensor Product Representations (TPR). The FWM is differentiable and builds upon the work of TPR-RNN from Schlag and Schmidhuber and Metalearned Neural Memory (MNM) by Munkhdalai et al. In the experimental section, the authors propose a concatenated version of the bAbI dataset to test their model with language modeling and question answering. Further the model is trained on a meta-learning task over POMDPs on graphs, and on language modeling on the PennTree Bank dataset. They show that the LSTM-FWM model generalizes better than without memory and similar models and with smaller capacity." ]
Humans can quickly associate stimuli to solve problems in novel contexts. Our novel neural network model learns state representations of facts that can be composed to perform such associative inference. To this end, we augment the LSTM model with an associative memory, dubbed Fast Weight Memory (FWM). Through differentiable operations at every step of a given input sequence, the LSTM updates and maintains compositional associations stored in the rapidly changing FWM weights. Our model is trained end-to-end by gradient descent and yields excellent performance on compositional language reasoning problems, meta-reinforcement-learning for POMDPs, and small-scale word-level language modelling.1
[ { "affiliations": [], "name": "Imanol Schlag" }, { "affiliations": [], "name": "Tsendsuren Munkhdalai" } ]
[ { "authors": [ "Aishwarya Agrawal", "Aniruddha Kembhavi", "Dhruv Batra", "Devi Parikh" ], "title": "C-vqa: A compositional split of the visual question answering (vqa", "venue": "v1.0 dataset. ArXiv,", "year": 2017 }, { "authors": [ "Yuval Atzmon", "Jonathan Berant", "Vahid Kezami", "Amir Globerson", "Gal Chechik" ], "title": "Learning to generalize to new compositions in image understanding", "venue": "arXiv preprint arXiv:1608.07639,", "year": 2016 }, { "authors": [ "Jimmy Ba", "Geoffrey E Hinton", "Volodymyr Mnih", "Joel Z Leibo", "Catalin Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sergey Bartunov", "Jack Rae", "Simon Osindero", "Timothy Lillicrap" ], "title": "Meta-learning deep energybased memory models", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Robert Csordas", "Juergen Schmidhuber" ], "title": "Improving differentiable neural computers through memory masking, de-allocation, and link distribution sharpness control", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "S. Das", "C.L. Giles", "G.Z. 
Sun" ], "title": "Learning context-free grammars: Capabilities and limitations of a neural network with an external stack memory", "venue": "In Proceedings of the The Fourteenth Annual Conference of the Cognitive Science Society, Bloomington,", "year": 1992 }, { "authors": [ "Mostafa Dehghani", "Stephan Gouws", "Oriol Vinyals", "Jakob Uszkoreit", "Lukasz Kaiser" ], "title": "Universal transformers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mete Demircigil", "Judith Heusel", "Matthias Löwe", "Sven Upgang", "Franck Vermet" ], "title": "On a model of associative memory with huge storage capacity", "venue": "Journal of Statistical Physics,", "year": 2017 }, { "authors": [ "Jerome A Feldman" ], "title": "Dynamic connections in neural networks", "venue": "Biological cybernetics,", "year": 1982 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Jerry A Fodor", "Zenon W" ], "title": "Pylyshyn. Connectionism and cognitive architecture: A critical analysis", "venue": null, "year": 1988 }, { "authors": [ "F.A. Gers", "J. Schmidhuber", "F. 
Cummins" ], "title": "Learning to forget: Continual prediction with LSTM", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Samuel J Gershman", "Kenneth A Norman", "Yael Niv" ], "title": "Discovering latent causes in reinforcement learning", "venue": "Current Opinion in Behavioral Sciences,", "year": 2015 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Robert F Hadley" ], "title": "Systematicity in connectionist language learning", "venue": "Mind & Language,", "year": 1994 }, { "authors": [ "Geoffrey E Hinton", "David C Plaut" ], "title": "Using fast weights to deblur old memories", "venue": "In Proceedings of the ninth annual conference of the Cognitive Science Society,", "year": 1987 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "S. Hochreiter", "A.S. Younger", "P.R. Conwell" ], "title": "Learning to learn using gradient descent", "venue": "In Lecture Notes on Comp", "year": 2001 }, { "authors": [ "John J Hopfield" ], "title": "Neural networks and physical systems with emergent collective computational abilities", "venue": "Proceedings of the national academy of sciences,", "year": 1982 }, { "authors": [ "Daniel D. 
Johnson" ], "title": "Learning graphical state transitions", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Pentti Kanerva" ], "title": "Sparse distributed memory", "venue": "MIT press,", "year": 1988 }, { "authors": [ "Louis Kirsch", "Sjoerd van Steenkiste", "Juergen Schmidhuber" ], "title": "Improving generalization in meta reinforcement learning using learned objectives", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tamara G Kolda", "Brett W Bader" ], "title": "Tensor decompositions and applications", "venue": "SIAM review,", "year": 2009 }, { "authors": [ "Bart Kosko" ], "title": "Bidirectional associative memories", "venue": "IEEE Transactions on Systems, man, and Cybernetics,", "year": 1988 }, { "authors": [ "Ben Krause", "Emmanuel Kahembwe", "Iain Murray", "Steve Renals" ], "title": "Dynamic evaluation of neural sequence models", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dmitry Krotov", "John J Hopfield" ], "title": "Dense associative memory for pattern recognition", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Joseph B Kruskal" ], "title": "Rank, decomposition, and uniqueness for 3-way and n-way arrays", "venue": "Multiway data analysis,", "year": 1989 }, { "authors": [ "Ankit Kumar", "Ozan Irsoy", "Peter Ondruska", "Mohit Iyyer", "James Bradbury", "Ishaan Gulrajani", "Victor Zhong", "Romain Paulus", "Richard Socher" ], "title": "Ask me anything: Dynamic memory networks for natural language processing", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Brenden M. 
Lake", "Marco Baroni" ], "title": "Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks", "venue": "CoRR, abs/1711.00350,", "year": 2017 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and Brain Sciences,", "year": 2017 }, { "authors": [ "Hung Le", "Truyen Tran", "Svetha Venkatesh" ], "title": "Self-attentive associative memory", "venue": "arXiv preprint arXiv:2002.03519,", "year": 2020 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Gábor Melis", "Chris Dyer", "Phil Blunsom" ], "title": "On the state of the art of evaluation in neural language models", "venue": "CoRR, abs/1707.05589,", "year": 2017 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Regularizing and optimizing LSTM language models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Thomas Miconi", "Kenneth Stanley", "Jeff Clune" ], "title": "Differentiable plasticity: training plastic neural networks with backpropagation", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Thomas Miconi", "Aditya Rawal", "Jeff Clune", "Kenneth O. 
Stanley" ], "title": "Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tomáš Mikolov", "Martin Karafiát", "Lukáš Burget", "Jan Černockỳ", "Sanjeev Khudanpur" ], "title": "Recurrent neural network based language model", "venue": "In Eleventh annual conference of the international speech communication association,", "year": 2010 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive meta-learner", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Michael C Mozer", "Sreerupa Das" ], "title": "A connectionist symbol manipulator that discovers the structure of context-free languages", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Tsendsuren Munkhdalai", "Hong Yu" ], "title": "Neural semantic encoders", "venue": "In Proceedings of the conference. Association for Computational Linguistics. Meeting,", "year": 2017 }, { "authors": [ "Tsendsuren Munkhdalai", "Alessandro Sordoni", "Tong Wang", "Adam Trischler" ], "title": "Metalearned neural memory", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Denis Paperno", "Germán Kruszewski", "Angeliki Lazaridou", "Ngoc Quan Pham", "Raffaella Bernardi", "Sandro Pezzelle", "Marco Baroni", "Gemma Boleda", "Raquel Fernández" ], "title": "The LAMBADA dataset: Word prediction requiring a broad discourse context. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Steven Andrew Phillips" ], "title": "Connectionism and the problem of systematicity", "venue": "PhD thesis, University of Queensland,", "year": 1995 }, { "authors": [ "Alexander Pritzel", "Benigno Uria", "Sriram Srinivasan", "Adria Puigdomenech Badia", "Oriol Vinyals", "Demis Hassabis", "Daan Wierstra", "Charles Blundell" ], "title": "Neural episodic control", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Hubert Ramsauer", "Bernhard Schäfl", "Johannes Lehner", "Philipp Seidl", "Michael Widrich", "Lukas Gruber", "Markus Holzleitner", "Milena Pavlović", "Geir Kjetil Sandve", "Victor Greiff" ], "title": "Hopfield networks is all you need", "venue": null, "year": 2008 }, { "authors": [ "Frank Rosenblatt" ], "title": "The perceptron: a probabilistic model for information storage and organization in the brain", "venue": "Psychological review,", "year": 1958 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "Metalearning with memory-augmented neural networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Imanol Schlag", "Jürgen Schmidhuber" ], "title": "Gated fast weights for on-the-fly neural program generation", "venue": "In NIPS Metalearning Workshop,", "year": 2017 }, { "authors": [ "Imanol Schlag", "Jürgen Schmidhuber" ], "title": "Learning to reason with third order tensor products", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Imanol Schlag", "Paul Smolensky", "Roland Fernandez", "Nebojsa Jojic", "Jürgen Schmidhuber", "Jianfeng Gao" ], "title": "Enhancing the transformer with explicit relational encoding for math problem solving", "venue": null, 
"year": 1910 }, { "authors": [ "Margaret L Schlichting", "Alison R Preston" ], "title": "Memory integration: neural mechanisms and implications for behavior", "venue": "Current opinion in behavioral sciences,", "year": 2015 }, { "authors": [ "J. Schmidhuber" ], "title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-.", "venue": "hook. Diploma thesis, Inst. f. Inf., Tech. Univ. Munich,", "year": 1987 }, { "authors": [ "J. Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to recurrent nets", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "J. Schmidhuber" ], "title": "On decreasing the ratio between learning complexity and number of time-varying variables in fully recurrent nets", "venue": "In Proceedings of the International Conference on Artificial Neural Networks,", "year": 1993 }, { "authors": [ "J. Schmidhuber" ], "title": "On learning how to learn learning strategies", "venue": "Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München,", "year": 1994 }, { "authors": [ "H.T. Siegelmann", "E.D. Sontag" ], "title": "Turing computability with neural nets", "venue": "Applied Mathematics Letters,", "year": 1991 }, { "authors": [ "P. Smolensky" ], "title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems", "venue": "Artif. Intell.,", "year": 1990 }, { "authors": [ "Paul Smolensky" ], "title": "Symbolic functions from neural computation", "venue": "Phil. Trans. R. Soc. 
A,", "year": 2012 }, { "authors": [ "Kelsey N Spalding", "Margaret L Schlichting", "Dagmar Zeithamova", "Alison R Preston", "Daniel Tranel", "Melissa C Duff", "David E Warren" ], "title": "Ventromedial prefrontal cortex is necessary for normal associative inference and memory integration", "venue": "Journal of Neuroscience,", "year": 2018 }, { "authors": [ "Sainbayar Sukhbaatar", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Volker Tresp", "Yunpu Ma" ], "title": "The tensor memory", "venue": "hypothesis. ArXiv,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Christoph von der Malsburg" ], "title": "The correlation theory of brain function (internal report 81-2)", "venue": "Goettingen: Department of Neurobiology, Max Planck Intitute for Biophysical Chemistry,", "year": 1981 }, { "authors": [ "Jason Weston", "Antoine Bordes", "Sumit Chopra", "Tomas Mikolov" ], "title": "Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015a", "venue": "URL http:// arxiv.org/abs/1502.05698", "year": 2015 }, { "authors": [ "Caiming Xiong", "Stephen Merity", "Richard Socher" ], "title": "Dynamic memory networks for visual and textual question answering", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Ruslan Salakhutdinov", "William W. 
Cohen" ], "title": "Breaking the softmax bottleneck: A high-rank RNN language model", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Wei Zhang", "Bowen Zhou" ], "title": "Learning to update auto-associative memory in recurrent neural networks for improving sequence memorization", "venue": "arXiv preprint arXiv:1709.06493,", "year": 2017 }, { "authors": [ "Merity" ], "title": "An alternative which we do not explore here is to use multiple FWM-layers each with one LSTM cell and one FWM. We trained our model for 1000 epochs on PTB and 1600 epochs on WT2. Similar to Merity et al. (2018) we switched from Adam to Averaged Stochastic Gradient Descent (ASGD) after 916 epochs and 1372 epochs for PTB and WT2 models respectively. We tune the dropout", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans continually adapt in order to understand new situations in changing environments. One important adaptive ability is associative inference for composing features extracted from distinct experiences and relating them to each other (Schlichting & Preston, 2015; Gershman et al., 2015). Suppose Alice has shared with you pictures of her toddler. Later, at the office party, you see a man carrying the depicted toddler. Since the toddler yields a shared feature in two different contexts, it may be plausible to infer that the man is Alice’s partner, without ever seeing him and Alice together. The ability to rapidly associate and bind together novel stimuli can help to derive knowledge systematically, in addition to the knowledge gained directly from observation.\nVirtually all modern cognitive architectures applied to challenging artificial intelligence problems are based on deep artificial neural networks (NNs). Despite their empirical successes and theoretical generality, NNs tend to struggle to generalise in situations similar to the given example (Lake et al., 2017; Phillips, 1995; Lake & Baroni, 2017). This weakness becomes even more severe if the training and test data exhibit systematic differences (Atzmon et al., 2016; Agrawal et al., 2017). For example, during training, the man’s representation might never be associated with the toddler’s, but during testing, this association might be necessary to make a useful prediction. In problems where humans excel, this sort of inference is likely ubiquitous since data is often combinatorially complex in a way that observations used during training will likely cover just a small fraction of all possible compositions. 
Such a lack of productivity and systematicity is a long-standing argument against the use of NNs as a substrate of an artificial cognitive architecture (Fodor & Pylyshyn, 1988; Hadley, 1994; McLaughlin, 2009).\nThe hidden state of a neural model is a learned representation of the task-relevant information extracted from the input. To generalise to never-seen-before compositions of stimuli, the function which produces the state representation must be able to systematically construct all possible states. This requires a general and preferably differentiable method, such as the Tensor Product Representation (TPR; Smolensky (1990)). TPRs provide a general and differentiable method for embedding symbolic structures in vector spaces.\n1Source code and data used in this paper are available at github.com/ischlag/Fast-Weight-Memory-public\nA TPR state representation is constructed via the tensor product (i.e. the generalised outer-product) of learned component representations. Under certain constraints, such a mechanism guarantees a unique representation for every possible combination of components (Smolensky, 1990; 2012).\nIn this work, we augment a recurrent NN (RNN) with an additional TPR-like memory representation. To facilitate the learning of multi-step associative inference, the TPR memory can be queried multiple times in a row, allowing the model to chain together various independent associations. In contrast to previous work on fast weights, we apply our memory-augmented RNN to much longer sequences. This requires the model to update its associative memory. Furthermore, we demonstrate the generality of our method by applying it to meta-reinforcement learning and small scale language modelling problems.\nIn the next section, we cover related memory-augmented NNs. Section 3 describes the FWM in detail. Section 4 demonstrates the generality of our method through experiments in the supervised, self-supervised, and meta-reinforcement learning setting. 
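The binding and unbinding behind TPRs can be illustrated with a small numpy sketch. The dimensions and variable names here are our own illustrative choices; the key property is that, with orthonormal role vectors, every filler can be recovered exactly from the superimposed state matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four orthonormal "role" vectors (rows of an orthogonal matrix) and four
# arbitrary "filler" vectors.
roles = np.linalg.qr(rng.normal(size=(4, 4)))[0]
fillers = rng.normal(size=(4, 4))

# Binding: superimpose all role/filler outer products into one state matrix.
state = sum(np.outer(fillers[i], roles[i]) for i in range(4))

# Unbinding: orthonormality of the roles makes retrieval a single product,
# since state @ roles[i] = sum_j fillers[j] * (roles[j] . roles[i]) = fillers[i].
recovered = state @ roles[2]
```

This is the sense in which the representational space is factorised: any role can be bound to any filler, so novel role/filler combinations get unique, decodable representations.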
The supervised-learning experiments in subsection 4.1 consist of a more challenging version of the bAbI dataset dubbed concatenated-bAbI or catbAbI. The meta-reinforcement learning experiment in section 4.2 demonstrates the FWM’s ability to learn to explore a partially observable environment through its ability to perform associative inference. Finally, the self-supervised experiments in subsection 4.3 demonstrate that the FWM can compete with the state-of-the-art word-level language models on small benchmark datasets." }, { "heading": "2 RELATED WORK", "text": "RNNs such as the Long Short-Term Memory (LSTM; Hochreiter & Schmidhuber (1997); Gers et al. (2000)) are in theory capable of implementing any algorithm (Siegelmann & Sontag, 1991). However, the linear growth of the hidden state of a fully connected RNN leads to quadratic growth in the number of trainable weights. Early work addressed this issue through the use of additional memory (Das et al., 1992; Mozer & Das, 1993) and differentiable fast weights (Schmidhuber, 1992; 1993). Recently, memory-augmented NNs have solved algorithmic toy problems (Graves et al., 2014; 2016) as well as reasoning and inference problems in synthetic and natural language (Weston et al., 2015b; Xiong et al., 2016).\nInspired by the random-access memory of computer architectures, a common approach is to incorporate a soft and differentiable lookup table into the NN model. Such slot-based memory matrices have shown to be difficult to train (Munkhdalai & Yu, 2017b) and require sophisticated mechanisms for the allocation and deallocation of memory (Csordas & Schmidhuber, 2019). The TransformerXL (TXL; Dai et al. (2019)), an autoregressive language model variant of the Transformer (Vaswani et al., 2017), can be understood as a slot-based memory-augmented RNN where every new state is pushed into an immutable queue of finite size. 
Although it is recurrent, the layers of a transformer architecture are strictly forced to use inputs from a lower layer which limits its generality. Nevertheless, a sufficiently deep and well regularised TXL model has achieved state-of-the-art performance in large scale language modelling tasks.\nA biologically more plausible alternative for increasing the memory capacity of NNs is fast-changing weights, i.e. stateful weights that can adapt as a function of their input. Non-differentiable fast weights or “dynamic links” have been published since 1981 (von der Malsburg, 1981; Feldman, 1982; Hinton & Plaut, 1987). Subsequent work showed that a regular network can be trained by gradient descent to control the fast weights of a separate network (Schmidhuber, 1992) or of itself (Schmidhuber, 1993) in an end-to-end differentiable fashion. Recently, fast weights have made a comeback and achieved good results in small toy problems where regular NNs fall short (Ba et al., 2016a; Schlag & Schmidhuber, 2017; Munkhdalai & Yu, 2017a; Pritzel et al., 2017; Ha et al., 2017; Zhang & Zhou, 2017; Miconi et al., 2018; 2019; Schlag & Schmidhuber, 2018; Munkhdalai et al., 2019; Bartunov et al., 2020).\nMost memory-augmented NNs are based on content-based or key-based lookup mechanisms. An alternative to the storage of patterns in a lookup table is the idea that patterns are reconstructed through the implicit iterative minimisation of an energy function, such as in the classical Hopfield network (Steinbuch, 1961; Willshaw et al., 1969; Hopfield, 1982; Kanerva, 1988) or the modern Hopfield network (Krotov & Hopfield, 2016; Demircigil et al., 2017; Ramsauer et al., 2020). This is often described as an auto-associative type of memory as it reconstructs a previously stored pattern that mostly resembles the current pattern. A much less studied variation is the hetero-associative memory (see e.g. Kosko (1988)) where the retrieved pattern is different from the input pattern. 
This is more relevant for our use case. We aim to train an LSTM to construct, maintain, and edit its associative memory. The ability to edit Hopfield networks partially is not very well studied. For this reason, we employ a simple (multi-)linear hetero-associative memory as it is more closely related to the theory of TPRs (whose manipulation is well understood) and because the association is retrieved in a single step.\nOur work directly builds on two examples of differentiable fast weight memories: the TPR-RNN by Schlag & Schmidhuber (2018) and the Metalearned Neural Memory (MNM) by Munkhdalai et al. (2019). The TPR-RNN is a sentence-level model for reasoning on text. It achieves excellent results on the regular bAbI tasks but it underperforms on word-level bAbI (Schlag et al., 2019) or algorithmic toy problems (Le et al., 2020). In contrast, the MNM is a word-level model which augments the LSTM with a fully-connected multi-layer feed-forward network as its memory and trains it using a meta-learning objective. Both, MNM and TPR-RNN were developed on the regular bAbI dataset which only contains short sequences and does not require the model to remove deprecated associations from its memory. In this work, we train on an infinite sequence of bAbI stories where our FWM achieves excellent performance and improves over MNM. We further demonstrate strong performance in small-scale language modelling and meta reinforcement-learning which demonstrates the generality of our contribution." }, { "heading": "3 PROPOSED METHOD", "text": "Our FWM is a fast-changing, multi-linear map which is controlled by a slowly-changing, non-linear LSTM. The slow weights of the LSTM are regular NN weights which are updated during training by gradient descent. In contrast, the fast weights of the FWM are updated by the LSTM at every step of the input sequence through a Hebb-like differentiable mechanism. 
This allows the FWM function to change rapidly even during testing—hence the name fast weights. Along with updating the fast weights, the LSTM also generates a memory query which is used to retrieve information that was previously stored. The retrieved information then becomes part of the model’s output.\n3.1 THE FAST WEIGHT MEMORY\nGiven a sequence of tokens x = (x1, ..., xT ) from a vocabulary V, the task of language modelling is to train a model which maximizes the joint probability p(x), which we factorize autoregressively as p(x1:T ) = ∏_{t=1}^{T} p(xt|x0:t−1), where x0 is an artificial start token.2 In this work, we train an RNN model to encode the input sequence x1:t into ht, the hidden state of the LSTM, and Ft, the fast weight tensor of the FWM, to maximize the probability of the next token xt+1.\nAt step t of the input sequence, the input token xt is embedded in a dE-dimensional vector space using a lookup table et = embedding(xt). An LSTM with dLSTM hidden units encodes the sequence of embedded tokens into a fixed size vector representation ht = LSTM(et,ht−1). The probability distribution over the next token is x̂t+1 = softmax(W (s)(ht + FWM(ht,Ft))), where Ft ∈ RdFWM×d²FWM are the fast weights of the FWM at step t and W (s) ∈ R|V|×dLSTM . Note that the fast weight matrix Ft is a reshaped third-order tensor Ft ∈ RdFWM×dFWM×dFWM . This allows us to describe third-order tensor operations using matrix multiplications. We’ll now describe in detail the FWM function and how its fast weights are updated.\n2We use the notation x1:t to refer to the sequence (x1, x2, ..., xt)." }, { "heading": "3.1.1 WRITING", "text": "The FWM is updated at every step t using the write mechanism described in this section. 
To this end, we extract from the hidden state ht: the write strength β (a scalar bounded by 0 and 1 using the sigmoid function σ), the two key vectors k1,k2, and the new value v.\n[k1,k2,v] = φ(Wwriteht) (1) β = σ(Wβht) (2)\nThe purpose of writing to memory is to learn a context-specific association between the input pattern k1 ⊗ k2 and the output pattern v. The usage of the tensor-product in the input pattern factorises the representational space which guarantees unique orthogonal vector representations for novel key pairs. A specific example of this is demonstrated by Schlag & Schmidhuber (2018), where the first key learns to represent an entity and the second key a specific action, thereby learning a representational space that generalises to never-seen entity and action compositions.\nIn stark contrast to the complex memory operations of the TPR-RNN, we employ a single, simple, and word-level operation which is closely related to the perceptron learning rule (Rosenblatt, 1958). It allows the model to replace the previous association vold with a convex combination of the old and new value βv + (1 − β)vold. With the scalar β the LSTM controls whether the new association fully replaces the previous value (β = 1) or whether the information of both is mixed together. Our fast weight update works as follows: First, the current value vold that is associated with k1 ⊗ k2 is retrieved. Second, we remove the old association from the map by subtracting vec(k1 ⊗ k2) ⊗ vold from our memory, where vec vectorises the matrix. Third, we add vec(k1⊗k2)⊗(βv+(1−β)vold). All three steps can be achieved at once using the following update rule (see appendix section B for the proof): F ′t = Ft−1 + β vec(k1 ⊗ k2)⊗ (v − vold). (3) To prevent the fast weights from potentially growing endlessly, we scale down the fast weights whenever ||F ′t ||2 > 1. This is achieved through the following element-wise scaling.\nFt = F ′t / max(1, ||F ′t ||2). 
(4)" }, { "heading": "3.1.2 READING", "text": "For each step of the input sequence, the model queries the memory in order to retrieve a previously stored value. Due to the keys and values being generated separately, the network can retrieve values which are informationally independent from their keys. In order to perform more complex associative inference, like e.g. transitive inference (a → b, b → c, therefore, a → c), we employ multiple reads where we use the retrieved value as one of the keys in the next query (see equation 7).\nn (0) t = φ(Wnht) (5)\ne (i) t = φ(W (i) e ht), 1 ≤ i ≤ Nr (6)\nn (i) t = LN(Ft vec(n (i−1) t ⊗ e (i) t )), 1 ≤ i ≤ Nr (7)\nFWM(ht,Ft) = Won (Nr) t . (8)\nHere LN refers to layernorm without the learned element-wise affine map (Ba et al., 2016b), vec reshapes the matrix into a vector, φ is the hyperbolic tangent function, and the matrices Wn,W (i) e ∈ RdFWM×dLSTM , i ∈ {1..Nr} and Wo ∈ RdLSTM×dFWM are regular slow weights trained by gradient descent which allow us to decouple the dimensionality of the LSTM from the dimensionality of the FWM. In eq. 7, Ft is the multi-linear map which we query using the LSTM-generated “input” e(i) and the previous retrieval n(i−1) (except for the first query where both keys are LSTM-generated)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 CONCATENATED-BABI", "text": "The bAbI tasks is a popular toy dataset to benchmark neural networks with memory augmentations and reasoning capabilities (Weston et al., 2015a). It consists of a set of short stories with\nquestions embedded in the text. The stories were generated by simulating multiple entities in a virtual environment and cover different contexts in which entities change their state on their own or through an interaction. Each story-sample belongs to one of 20 different tasks that the authors of the dataset considered important for intelligent dialogue agents. 
The tasks contain questions which require reasoning capabilities like deduction, coreference, or counting. All tasks require some level of symbolic reasoning, and the first neural and non-neural baselines demonstrated poor generalisation performance on test data (Weston et al., 2015a).\nWe aim to improve the bAbI benchmark as a means of developing intelligent dialogue agents. To this end, we propose concatenated-bAbI (catbAbI): an infinite sequence of bAbI stories. catbAbI is generated from the bAbI dataset and, during training, a random sample/story from any task is drawn without replacement and concatenated to the ongoing story. The preprocessing for catbAbI addresses several issues: it removes the supporting facts, leaves the questions embedded in the story, inserts the correct answer after the question mark, and tokenises the full sample into a single sequence of words. As such, catbAbI is designed to be trained in an autoregressive way and is analogous to closed-book question answering.\ncatbAbI models can be trained in two different ways: language modelling mode (LM-mode) or question-answering mode (QA-mode). In LM-mode, the catbAbI models are trained like autoregressive word-level language models. In QA-mode, the catbAbI models are only trained to predict the tokens that are answers to questions—making it more similar to regular bAbI. QA-mode is simply implemented by masking out losses on non-answer predictions. In both training modes, the model performance is solely measured by its accuracy and perplexity when answering the questions. Performance on non-answers is irrelevant on catbAbI because the tokens are either very predictive or inherently unpredictable, and there is nothing appealing to be learned. Despite measuring performance only for answers, we argue that LM-mode is interesting for three reasons. First, LM-mode removes the bias of knowing which words would benefit from a symbolic inference mechanism. 
Second, LM-mode trains the model on a sequence with tokens which are inherently unpredictable. Such tokens could also appear in natural language and might harm the model’s ability to learn a useful representation of the story. Indeed, in the next section, we will give evidence for such a generalisation gap. Third, the LM-mode setting allows us to directly compare our method with state-of-the-art language models." }, { "heading": "4.1.1 RESULTS", "text": "We compare our FWM directly with the current state-of-the-art on word-level bAbI: Metalearned Neural Memory (MNM; Munkhdalai et al. (2019)). We also include two strong autoregressive word-level language models as baselines: a regularized LSTM (Merity et al., 2018; Melis et al., 2017) and a regularized Transformer-XL (TXL; Dai et al. (2019)). Lastly, we also evaluate Ba’s Fast Weights which attend to the recent past (JBFW; Ba et al. (2016a)), but we were unable to find hyperparameters for which it converged. We truncate backpropagation through time (tBPTT) to 200 tokens for all models and limit the amount of GPU memory to ~16GB for practical reasons. For every model, we performed a hyperparameter search in QA mode over the first 3k steps, of which a smaller selection was trained for 30-60k steps. For all models, we adopt the best QA mode hyperparameters for the LM mode results. Table 1 lists the best accuracy and perplexity of each model over three seeds while figure 2 shows the learning curves of the best seeds. Further hyperparameter search results can be found in the appendix section F.\nOur experiments on catbAbI show that a regularized, 4-layer, residual LSTM, and a 3-layer TXL with attention over the last 1400 tokens, achieve strong performance on catbAbI. MNM, on the other hand, suffered a ~10% drop in QA mode accuracy compared to its performance on bAbI, which demonstrates the increased difficulty of catbAbI. 
The JBFW model is not able to make meaningful predictions on catbAbI, which may be due to its inability to remove previous associations and its fixed fast weight memory decay. Our FWM achieves excellent accuracy on catbAbI while being by far the smallest in parameter count and weight-to-activation ratio. The performance gap between FWM and MNM suggests the importance of our fast weight memory mechanism. In figure 3 we visualise how the FWM can chain memories from different points in time to perform transitive inference.\nWe chose to include the TXL model in our comparison due to its autoregressive nature and strong performance in large-scale language modelling benchmarks. However, we point out that the TXL’s context window is larger than the average bAbI story. In this case, due to the shortness of the stories, catbAbI becomes more of an open-book problem for the TXL model since it has the capability of looking up representations of its previous input, whereas the RNN models do not. This fundamentally limits the TXL model as it can only condition its prediction on information that lies within its attention window over past states. The RNN models, which are general function approximators, for better or for worse, are instead forced to learn to carry the necessary information through time." }, { "heading": "4.2 META-REINFORCEMENT LEARNING", "text": "Meta reinforcement learning (Meta-RL) applies meta-learning (Schmidhuber, 1987; Hochreiter et al., 2001; Finn et al., 2017) to the field of reinforcement learning (Schmidhuber, 1994). An agent is trained on multiple environments (or tasks) and receives environmental feedback as part of its input. To maximise its total reward in an environment, the agent has to leverage the feedback signals and adapt. A successful agent is capable of maximising its reward in novel environments that it has not been exposed to during training. 
Recent work achieved notable progress in this domain (Santoro et al., 2016; Mishra et al., 2018; Kirsch et al., 2020). We experiment with tasks drawn randomly from a large set of partially observable Markov decision processes (POMDPs). In this set, every environment consists of precisely five states and three actions. Globally, every environment can be viewed as a sparse directed graph where nodes are locations, and the directed edges are one-way modes of transportation—similar to a metro transit map of a city (Graves et al., 2016). To generate a new environment, we sample the adjacency matrix of the graph such that actions are deterministic, and every location is reachable from any other location (see figure 4). We sample graphs such that there are no actions that lead to the same location, and such that not every action is always a valid way of transitioning. We added the exact algorithm to generate graphs, as well as further details, to the appendix section I.\n3Bigger JBFW models did not improve performance. See appendix section F.5.\n4The number of immutable activations is 512 × 2 × 3 × (1200 + 199) while the number of mutable activations is merely 512 × 2 × 3 = 3072. Only the TXL model maintains immutable activations.\nThe agent’s goal is to reach the reward location. Upon arrival, the agent receives the reward, followed by a random reset of the agent’s and reward’s location. Whenever the agent takes an action that does not lead to a new location, it receives a penalty. At every step, the agent receives as an input: its current location, the reward location, its last action, and the reward received so far.\nWe run our experiment for 30 steps and compare our FWM to an LSTM baseline. Both methods are trained on the same training set of 600 graphs and tested on 600 novel graphs. We optimise our agent with the Advantage Actor-Critic (A2C) algorithm, a non-asynchronous version of the A3C method (Mnih et al., 2016). 
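The transition dynamics described above can be sketched with a small deterministic example. This is an illustration, not the paper's code: the graph, the `step` function, and all names below are assumed for the sketch, while the reward of 10 and the penalty of 0.05 follow the values given in appendix I.

```python
import numpy as np

# Illustrative 3-state, 2-action graph (the paper samples 5-state, 3-action
# graphs as in Listing 1); A[action, from_state, to_state] = 1 means taking
# `action` in `from_state` moves the agent to `to_state`.
A = np.zeros((2, 3, 3))
A[0, 0, 1] = 1  # action 0: location 0 -> 1
A[0, 1, 2] = 1  # action 0: location 1 -> 2
A[1, 2, 0] = 1  # action 1: location 2 -> 0

def step(A, state, action, reward_state):
    """One transition: +10 at the reward location, -0.05 for invalid actions."""
    row = A[action, state]
    if row.sum() == 0:                   # action is not a valid way of transitioning
        return state, -0.05, False
    next_state = int(row.argmax())       # transitions are deterministic
    if next_state == reward_state:
        return next_state, 10.0, True    # reward reached -> locations are reset
    return next_state, 0.0, False
```

A full episode would repeatedly call `step`, re-sampling the agent and reward locations whenever the reward is reached, as described in appendix I.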
In our experiments, the LSTM-based agent requires more episodes and a bigger network, and eventually overfits to the training graphs. The FWM-based agent, however, trains faster and generalises to randomly sampled graphs.\nWe argue that the bAbI stories and the episodes on the graphs are similar in the following three ways. First, in both problems, the network has to construct a useful and context-specific representation from its ongoing input. Second, as part of its input, the network repeatedly receives an objective (the reward location versus the question) which requires the exploitation of the context-specific information. Third, the model has to produce a discrete sequence (actions in the environment in RL and reasoning steps in catbAbI) to optimise its training signal (high reward versus low uncertainty)." }, { "heading": "4.3 LANGUAGE MODELLING", "text": "Comparing FWM to autoregressive language models on catbAbI raises the question: how does FWM perform on popular word-level language modelling datasets such as Penn Treebank (PTB; Mikolov et al. (2010)) or WikiText-2 (WT2; Merity et al. (2017))? It is unclear to what extent a symbolic inference mechanism is beneficial for language modelling. PTB and WT2 contain virtually no questions and are constructed from Wikipedia and news articles which are designed to be easily parsed by the reader. Nevertheless, in figure 6 we show how our FWM exploits recurring subject names to reduce its uncertainty. Not many memory augmented NNs have been able to bridge from small and toy reasoning tasks to general language models—and those which did, underperformed (Paperno et al., 2016; Sukhbaatar et al., 2015). We use the regularized 3-layer AWD-LSTM (Merity et al.,
However, in contrast to catbAbI, all three models achieve very similar results which might indicate that PTB and WT2 do not benefit as strongly from an associative reasoning capacity. We added the experimental details to the appendix section H.\nSince the publication of AWD-LSTM (Merity et al., 2018), various extensions (some of which are orthogonal to our memory augmentation) have been proposed (Krause et al., 2018; Merity et al., 2018; Yang et al., 2018). In this work, we are not primarily interested in beating the state-of-the-art in language modelling and leave it for future work to explore the possible synergies between these methods." }, { "heading": "5 DISCUSSION", "text": "An order-three memory tensor is a computationally demanding method for constructing compositional state representations. With vector components in Rn, the tensor product computation alone has a space and time complexity of O(n3). For practical reasons, this forces the FWM to remain small, relative to the slow NN, which limits the number of associations that can be maintained at once. Previous work has proposed approximations of such memory tensors in a variance-optimal way (Schlag et al., 2019). In our ablation experiments in section E, we show on catbAbI that concatenating the keys results in a performance accuracy drop of ~5%. We also experiment with fewer read operations (smallerNr) which also results in a performance degradation (appendix figure 7). However, further improvements might not come from scaling up but from more general symbolic manipulations. We address the capacity of the FWM and the necessity of the tensor product from a linear hetero-associative memory perspective in section A of the appendix. Finally, our fast weight memory can be thought of as a primitive “working memory” of the model—analogous to the working memory in the human brain (Spalding et al., 2018). 
This idea is supported by recent work which proposes a cognitive model of the human brain that is based on such higher-order tensors (Tresp & Ma, 2017)." }, { "heading": "6 CONCLUSION", "text": "Our new FWM is a fast weights architecture capable of learning from synthetic data to answer questions which require various symbolic reasoning skills. To improve generality, we overcome issues of the popular bAbI dataset by introducing a more general and more difficult variation dubbed catbAbI. We report excellent performance on catbAbI and compare with strong baselines based on state-of-the-art language models, as well as the previous state-of-the-art in word-level bAbI. We also apply the FWM in a challenging meta-reinforcement learning environment where the agent generalises to novel environments by learning from its observations and actions. Finally, in a self-supervised setting, we apply the FWM to word-level language modelling on PTB and WT2 where it beats the AWD-LSTM and AWD-Transformer-XL baselines." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank NVIDIA Corporation for donating several DGX machines, and IBM for donating a Minsky machine. This research was supported by an European Research Council Advanced Grant (no: 742870)." }, { "heading": "A Further Discussion", "text": "" }, { "heading": "B Derivation of the Update Rule", "text": "" }, { "heading": "C A Comment on the Regular bAbI Dataset and Previous Work", "text": "" }, { "heading": "D Concatenated-bAbI Details", "text": "" }, { "heading": "E Ablation", "text": "" }, { "heading": "F Hyperparameter search for catbAbI", "text": "F.1 Fast Weight Memory\nF.2 Metalearned Neural Memory\nF.3 Transformer-XL\nF.4 LSTM\nF.5 Attention to the Recent Past Fast Weights" }, { "heading": "G Best catbAbI Runs Broken Down by Task", "text": "" }, { "heading": "H Language Modelling", "text": "H.1 Results\nI Meta Reinforcement Learning" }, { "heading": "A FURTHER DISCUSSION", "text": "One way of assessing the capacity of the third-order tensor memory is its rank (which is analogous to the rank of a matrix). However, there exists no general algorithm to determine the rank of a given higher-order tensor A ∈ R^{I×J×K}. There exists only a loose upper bound described by rank(A) ≤ min{IJ, IK, JK} (Kruskal, 1989; Kolda & Bader, 2009). It might be tempting to simplify the FWM by replacing the outer product of the input with a concatenation as a means to reduce the space and time complexity. However, in highly compositional domains, the concatenated input will suffer from interference between memories. Consider a problem which, from a set of 10 symbols, requires the association of any three symbols represented by the vectors s, r, t ∈ R^10. In the case of a concatenation, one rank of the fast weight memory is [s; r] ⊗ t, where we refer to [s; r] as the key representation. The read vectors s′, r′ ∈ R^10 are then concatenated and matrix multiplied to retrieve the previous association t̂ = F[s′; r′]. Here we refer to [s′; r′] as the query representation. Since there are ten distinct symbols of which any two can behave as a key representation, there exist 10^2 = 100 unique key patterns. To guarantee noise-free retrieval in any context, the vectors of the key representations have to be orthogonal. However, [s′; r′] is only a 20-dimensional space, which means that certain key representations cannot be used simultaneously without interference.
The tensor product, on the other hand, is capable of noise-free retrieval because it represents the key as s ⊗ r ∈ R^{10×10}, which allows for 100 orthogonal keys and as such the possibility of noise-free retrieval. We conclude that if the problem is highly compositional, in the sense that every component can be composed with any other component, then the tensor product will be better suited than a concatenation. Experimentally we evaluate concatenated keys in section E. The results show that concatenated keys result in a slightly worse performance (see figure 8). As an alternative, a non-linear memory, e.g. through the use of a softmax, would not require orthogonality in its keys to be free of interference and could result in a larger storage capacity." }, { "heading": "B DERIVATION OF THE UPDATE RULE", "text": "Theorem B.1. Given two key vectors k_1, k_2 ∈ R^d and two value vectors v_old, v_new ∈ R^d with d ∈ Z_{>0}, a mixing coefficient β ∈ (0, 1), and a fast weight memory F_old = vec(k_1 ⊗ k_2) ⊗ v_old, where vec refers to the vectorisation of the higher-order tensor, then the (recurrent) fast weight update rule given by F_old + β vec(k_1 ⊗ k_2) ⊗ (v_new − v_old) results in F_new = vec(k_1 ⊗ k_2) ⊗ [(1 − β)v_old + βv_new].\nProof.\nF_new = F_old + β vec(k_1 ⊗ k_2) ⊗ (v_new − v_old) (9)\n= vec(k_1 ⊗ k_2) ⊗ v_old + vec(k_1 ⊗ k_2) ⊗ (βv_new − βv_old) (10)\n= vec(k_1 ⊗ k_2) ⊗ [v_old + βv_new − βv_old] (11)\n= vec(k_1 ⊗ k_2) ⊗ [(1 − β)v_old + βv_new] (12)" }, { "heading": "C A COMMENT ON THE REGULAR BABI DATASET AND PREVIOUS WORK", "text": "The bAbI dataset is a popular toy benchmark for neural networks with memory augmentations and reasoning capabilities (Weston et al., 2015a). It consists of a set of short stories with questions embedded in the text. The stories were generated by simulating multiple entities in a virtual environment and cover different contexts in which entities change their state or interact with each other. 
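As a quick numerical sanity check of the update rule proved in appendix B (this snippet is an illustration, not part of the paper; variable names are assumed), a few lines of NumPy confirm that the update stores the convex combination of the old and new value:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
k1, k2 = rng.standard_normal(d), rng.standard_normal(d)
v_old, v_new = rng.standard_normal(d), rng.standard_normal(d)
beta = 0.3

key = np.outer(k1, k2).reshape(-1)                    # vec(k1 (x) k2)
F_old = np.outer(key, v_old)                          # memory holding the old value
F_new = F_old + beta * np.outer(key, v_new - v_old)   # the update rule of Theorem B.1

# Theorem B.1: the memory now stores (1 - beta) * v_old + beta * v_new.
target = np.outer(key, (1 - beta) * v_old + beta * v_new)
mixed = key @ F_new / (key @ key)                     # retrieval with the same key
```

Querying `F_new` with the same key (and normalising by the squared key norm) recovers exactly the mixed value, matching the theorem.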
Each story-sample belongs to one of 20 different tasks that the authors of the dataset considered important for intelligent dialogue agents. The tasks contain questions which require reasoning capabilities like deduction, coreference, or counting. All tasks require some level of symbolic reasoning, and the first neural and non-neural baselines demonstrated poor generalisation performance on test data (Weston et al., 2015a). In addition to the story sentences, the questions, and the answers, the dataset also included supporting facts which demarcated question-relevant sentences in the story. The stories often follow multiple parallel plots where each new sentence advances one of the plots by a single fact.\nThe bAbI dataset did not include a strict experimental protocol, which resulted in several variations that differed slightly. Early methods achieved good results by relying on the supporting facts (Weston et al., 2015b; Kumar et al., 2016) or other supervised training signals (see e.g. Johnson (2017); Li et al. (2016)).\nSome researchers achieved great results by reformatting the data such that the question is read before the story or, similarly, by giving the model the capacity to look up parts of the story, e.g. through some attentional mechanism, after the question has been read (Sukhbaatar et al., 2015; Xiong et al., 2016; Dehghani et al., 2019). Such methods have been shown to be useful for answering questions while maintaining access to the full story. We argue that this is similar to open-book question answering. In such a setting, the model is incentivised to look up information instead of capturing the useful bits of the data it has seen. The advantage of the latter becomes more evident in a different scenario: imagine the model is processing a book where a user can ask a question about the content at any time. An open-book approach will have to store all previous sentences in its memory and apply its answer-search mechanism to all of the data. 
Instead, a closed-book approach would store a compressed version of the story, or the question-relevant information of the story.\nIt is essential to acknowledge that the sentences in the bAbI stories of all tasks are short and simplistic. Virtually every sentence contains precisely one fact. Because of that, it might be that sentence-level models have an advantage over word-level models. Indeed, a previous sentence-level model has reported poor performance in the word-level setting (Schlag & Schmidhuber, 2018). This limits their generality since sentences in natural language are often not limited to a single fact.\nLastly, even though the bAbI dataset was initially designed with the questions embedded in the story, virtually all methods so far preprocess the dataset such that a sample with four questions is split into four samples with one question each (Weston et al., 2015b). This arguably simplifies the problem because the model does not need to maintain the state of other entities which are not relevant to the question once it is read. However, it remains to be tested if this would result in inferior performance." }, { "heading": "D CONCATENATED-BABI DETAILS", "text": "Concatenated-bAbI (catbAbI) is a preprocessing and experimental procedure to evaluate autoregressive models in their capability of predicting words which require certain reasoning skills (here answers of questions). In this work we only focused on the 10k samples per task version of bAbI, but all our scripts can be applied to the 1k version as well. We used the same train/test/valid split of the data as in regular bAbI. In contrast to previous work, we do not split the stories to contain only one question. We remove the sentence indices and concatenate the sentences, with answers following a question mark, into one long sequence of words. The preprocessed data is a shuffled list of samples. Each sample comes with its task id for diagnosis. 
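The preprocessing just described can be sketched as follows. This is a simplified illustration, not the released scripts: the function name, the toy story, and the exact tokenisation are assumptions, while the steps (drop sentence indices, drop supporting facts, splice the answer in after the question mark, flatten to one word sequence, append an end-of-story token) follow the description above.

```python
def preprocess_story(lines, eos="<eos>"):
    """Turn one raw bAbI story into a flat catbAbI token sequence (sketch)."""
    tokens = []
    for line in lines:
        idx, text = line.split(" ", 1)          # drop the leading sentence index
        if "\t" in text:                         # question line: q \t answer \t facts
            question, answer, _facts = text.split("\t")
            tokens += question.replace("?", " ?").split() + [answer]
        else:                                    # plain story sentence
            tokens += text.replace(".", " .").split()
    return tokens + [eos]                        # separate concatenated stories

story = [
    "1 Mary moved to the bathroom.",
    "2 John went to the hallway.",
    "3 Where is Mary?\tbathroom\t1",
]
```

During training, many such flattened stories would be concatenated into one long sequence, as described above.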
All answers are preceded by a question mark.\nTo ensure that stories do not overlap and become ambiguous, we add a special end-of-story token before concatenating the new story. For each word, the preprocessing script provides its task id to measure the performance on different tasks. Similarly, it also provides a special answer token which signifies if the current word is an answer or not. Naturally, the task id and answer information are not provided to the model as an input. The validation and test data are processed likewise, but for a proper comparison of various models, validation and test data are shuffled only once5. During training and evaluation, the validation and test stories are drawn deterministically.\nDuring training we uniformly sample stories without replacement and concatenate them into a long sequence. Since a question mark is not always the end of a story, we resolve any ambiguity by separating the stories with a special end-of-story token. The model is trained on this long sequence in an autoregressive way with truncated backpropagation. At the end of the epoch, we fill the batch with padding symbols if the sequences in the batch have different lengths.\n5We provide the preprocessed catbAbI data together with our code so future work can compare using the same validation and test sequence.\nIn LM-mode we mask padding tokens and in QA-mode we mask everything except the steps with a question mark as input. At the end of the epoch we carry over the hidden states to the new epoch. Resetting all hidden states to the same value or to zeros had a weak negative effect on final performance but was not explored thoroughly. For evaluation on the valid and test splits, a copy of the hidden state of the first batch element is used. Evaluation on valid is done throughout training with a large batch-size to maintain speed. Evaluation on test is done with a batch-size of one. 
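The QA-mode masking can be sketched as a masked cross-entropy. This is an illustration with assumed names and array shapes, not the training code; in LM-mode the mask would instead exclude only the padding tokens.

```python
import numpy as np

def qa_mode_loss(log_probs, targets, is_answer):
    """Average negative log-likelihood over answer positions only (sketch).

    log_probs: (T, V) array of log-probabilities per step,
    targets:   length-T target token ids,
    is_answer: length-T 0/1 mask marking answer positions.
    """
    per_step = -log_probs[np.arange(len(targets)), targets]  # NLL at every step
    mask = np.asarray(is_answer, dtype=float)
    return float((per_step * mask).sum() / max(mask.sum(), 1.0))
```

Only the masked average reaches the optimizer, so gradients flow solely through predictions of answer tokens.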
During evaluation on valid and test the samples are picked sequentially to ensure that all models are evaluated on the same valid and test sequence of bAbI stories." }, { "heading": "E ABLATION", "text": "We evaluate the FWM model with different numbers of recurrent steps. Experiments in figure 7 indicate that just one step already achieves over 95% accuracy, but more inference steps help on rarer and harder tasks. We also test a FWM version where the read and query keys are concatenated instead of multiplied through the tensor product. In this version, the FWM results in a weight matrix in R^{2 d_FWM × d_FWM} instead of R^{d_FWM^2 × d_FWM}. The results in figure 8 indicate a drop in performance." }, { "heading": "F HYPERPARAMETER SEARCH FOR CATBABI", "text": "Since catbAbI is an ongoing sequence of stories, backpropagation through time (BPTT) is infeasible for all models, which is why we truncate BPTT to the last 200 tokens. Hyperparameters were chosen such that they fit roughly on one GPU with 16GB of memory. All models use a token embedding size of 256 and the Adam optimizer. We exclusively tuned the hyperparameters for the QA setting and transfer only the best to the LM setting. We run a grid search over the batch-size, learning rate, and various model-specific parameters such as dropout rates or number of layers, on top of additional manually chosen settings. For computational reasons we run two rounds of grid-search: an initial round of 3,000 steps, of which the best are moved to the second round, where we train them for 30,000 or 60,000 steps. In the following subsections we give further details for each model separately.\nF.1 FAST WEIGHT MEMORY\nWe set d_LSTM = 256, d_FWM = 32, N_r = 3 and experimented with two seeds for batch sizes 64, 128 and learning rates 0.0001, 0.00025, 0.0005, 0.001, 0.002.\nF.2 METALEARNED NEURAL MEMORY\nWe only experimented with the plastic version of MNM as it was reported to be the best. 
We used the same hyperparameters for the fast weights as reported by Munkhdalai et al. (2019): 3 layers of fast weights with a dimensionality of 100. We searched over the batch sizes 64, 128; learning rates 0.00025, 0.0005, 0.001, 0.002; and meta-objective coefficients (reg) 1.0, 2.0. In the first 3,000 steps the MNM didn’t show any instability, but for longer runs the MNM would sometimes result in NaNs or become unstable.\nF.3 TRANSFORMER-XL\nWe ported the official Transformer-XL implementation6 to our own codebase, fully reusing the model code for our catbAbI experiments. We employ a linear learning-rate warm-up schedule over the first 1000 steps and run a grid search over batch size, learning rate, number of layers, and memory length, with some additional manually selected parameters. Our best setting uses a learning rate of 0.00025, a memory width of 1200, a hidden state size of d_model = 512, an inner dimension of the fully connected part of d_inner = 2048, and 3 transformer layers. Several long runs can be seen in figure 12. Our experiments show how various seeds eventually become unstable and overfit. Some settings also resulted in NaNs, which we have removed from figure 12. The best performing and most stable models were 3-layer models with a large memory and a small learning rate (see figure 13).\n6Source: github.com/kimiyoung/transformer-xl/blob/master/pytorch/mem_transformer.py\nF.4 LSTM\nWe heavily regularize a four-layer stack of residually connected LSTM cells, each with 512 hidden units. Inspired by AWD-LSTM (Merity et al., 2018), we use dropout in four different ways to regularize the model. We dropout the tokens of the input sequence, elements of the embedding vector, elements of the recurrent weight matrix, and elements of the hidden representation between LSTM layers.\nF.5 ATTENTION TO THE RECENT PAST FAST WEIGHTS\nWe evaluate our own implementation of Fast Weights as introduced by Ba et al. (2016a). 
They propose an RNN augmented with fast weights which modulate the slow weights of an Elman RNN using a fixed fast weight learning and decay rate (JBFW). Our hyperparameter search did not result in any model performing over 15% on the test data." }, { "heading": "G BEST CATBABI RUNS BROKEN DOWN BY TASK", "text": "" }, { "heading": "H LANGUAGE MODELLING", "text": "The code of our language modelling experiments is forked from Uber AI Lab’s (github.com/uberresearch/differentiable-plasticity/tree/master/awd-lstm-lm), which is itself forked from the Salesforce language model toolkit (github.com/Smerity/awd-lstm-lm). The FWM uses the same three-layer LSTM as the slow RNN with the same optimisations as done by Merity et al. (2018). An alternative which we do not explore here is to use multiple FWM layers, each with one LSTM cell and one FWM. We trained our model for 1000 epochs on PTB and 1600 epochs on WT2. Similar to Merity et al. (2018), we switched from Adam to Averaged Stochastic Gradient Descent (ASGD) after 916 epochs and 1372 epochs for the PTB and WT2 models respectively. We tune the dropout parameters on the validation set and, after training, we also tune the softmax temperature (tuning the softmax temperature results in ~1 ppl of improvement). The embedding layers were initialized randomly from a uniform distribution, uniform(-0.25, 0.25), which was crucial in our FWM language models. The hyperparameters used for all reported results are in table 4.\nThe Transformer-XL PTB results are based on the authors’ official code and hyperparameter setting (see zihangdai.github.io/misc/ptb.zip), which includes AWD-style regularisation, model averaging, and softmax tuning. The WT2 results are based on the same code using the best hyperparameters found by Tim Dettmers (see github.com/TimDettmers/transformer-xl/tree/wikitext2/pytorch)." 
}, { "heading": "I META REINFORCEMENT LEARNING", "text": "The meta reinforcement learning experiments trains an agent in training POMDPs and evaluates it on test POMDPs. The environments are directed graphs with labeled edges. As part of the data generating process, novel graphs are sampled according the python algorithm in listing 1. Actions and states are one-hot encoded. The agent receives a 17 dimensional input: the reward location, the current location, the previous action, a fixed bit, the fractional progress as current steptotal steps , and the current reward sum. Getting to the reward location gives a reward of 10. Choosing an invalid action gives a penalty of 0.05. We use a discounting factor of 0.9 and a value coefficient of 0.1. The entropy coefficient of A2C is set to 0.03.\nThe agent and reward locations are randomly selected at the beginning of the episode. With only 5 states, the reward is reachable in at most 5 steps. As elaborated in section 4.2, such optimal behaviour is only possible once the agent has learned the graphs from its experience. Whenever the reward is placed in the environment a reset timer is set to 0. When the agent reaches the reward, or after 6 unsuccessful steps, the reset timer is set to 0 and the reward and agent are randomly placed in the environment. 
We train with a batch size of 600 agents and optimize the average step loss using the Adam optimizer.\nimport numpy as np\n\ndef sample_adjacency_matrix(n_actions, n_states, random_state):\n    while True:\n        A = np.zeros((n_actions, n_states, n_states))\n        # every state has to be leavable by at least one action\n        for from_state in range(n_states):\n            to_state = random_state.choice([i for i in range(n_states) if i != from_state])\n            action = random_state.randint(0, n_actions)\n            A[action, from_state, to_state] = 1\n        # every state has to be reachable by one or more from-states\n        for to_state in range(n_states):\n            # only select states which don't have any neighbours given an action\n            action_list, from_list = np.where(A.sum(2) == 0)\n            # remove self from the selection\n            options = np.asarray(list(filter(lambda x: x[0] != to_state, zip(from_list, action_list))))\n            indices = np.arange(options.shape[0])\n            chosen_idx = random_state.choice(indices)\n            from_state, action = options[chosen_idx]\n            A[action, from_state, to_state] = 1\n        # reject if they are not all connected\n        Q = A.sum(0)\n        Q[Q > 0] = 1\n        for _ in range(n_states):\n            Q = np.matmul(Q, Q)\n        if (Q == 0).sum() == 0:\n            return A\n\nListing 1: Python3 code to sample new environments such that any state is reachable by any other state." } ]
2021
LEARNING ASSOCIATIVE INFERENCE USING FAST WEIGHT MEMORY
SP:22b6740eb3b2977aaffb8919aee4883f62af815f
[ "The paper studies an off-policy evaluation (OPE) problem for Markov decision processes (MDPs). It suggests an optimization-based method that can construct a non-asymptotic confidence interval, for a given confidence level, for the value function of a policy starting from a fixed initial distribution. The paper builds on the works of Feng et al. (2019, 2020); the main advantages of the current work with respect to the previous methods are that the suggested approach guarantees a faster convergence rate, it does not require full independence between transition pairs, and it does not need the global optimal solution of the underlying optimization problem, in order to construct guaranteed confidence intervals. The authors present some theoretical results about the construction, including a discussion on the special case of using RKHS approaches, and also present numerical experiments on benchmark problems, such as the inverted-pendulum, cartpole and type-1 diabetes.", "This work constructs non-asymptotic confidence intervals for off-policy evaluation. This is achieved by assuming that the reward at any given time only depends on the state action pair, leveraging that assumed structure to define the difference between the empirical and estimated bellman residual operators as a Martingale difference sequence. This, in turn, then allows the authors to apply a Hoeffding-like concentration inequality which applies to Hilbert spaces. The authors then provide a derivation of the confidence bounds by considering the divergence between policies. The work improves on the rate of prior work from $O(n^{-\\frac{1}{4}})$ to $O(n^{-\\frac{1}{2}})$ and allows for estimation without the need of global optimality via the dual formulation, both of which are very nice additions to the literature. Experimental evaluation backs up the authors’ claims, showing very strong performance with respect to prior art. " ]
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. Therefore, OPE is a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tends to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimate, when applying OPE to make high-stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. (2019) and a new martingale concentration inequality of KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimum assumptions on the data and the function class of the Q-function, and works for the behavior-agnostic settings where the data is collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.
[ { "affiliations": [], "name": "DUAL BOUNDS" }, { "affiliations": [], "name": "Yihao Feng" }, { "affiliations": [], "name": "Ziyang Tang" }, { "affiliations": [], "name": "Na Zhang" }, { "affiliations": [], "name": "Qiang Liu" } ]
[ { "authors": [ "Sylvain Arlot", "Gilles Blanchard", "Etienne Roquain" ], "title": "Some nonasymptotic results on resampling in high dimension, I: confidence regions", "venue": "The Annals of Statistics,", "year": 2010 }, { "authors": [ "Kavosh Asadi", "Evan Cater", "Dipendra Misra", "Michael L Littman" ], "title": "Equivalence between wasserstein and value-aware loss for model-based reinforcement learning", "venue": "arXiv preprint arXiv:1806.01265,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Bo Dai", "Ofir Nachum", "Yinlam Chow", "Lihong Li", "Csaba Szepesvári", "Dale Schuurmans" ], "title": "Coindice: Off-policy confidence interval estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Yaqi Duan", "Zeyu Jia", "Mengdi Wang" ], "title": "Minimax-optimal off-policy evaluation with linear function approximation", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yaakov Engel", "Shie Mannor", "Ron Meir" ], "title": "Reinforcement learning with Gaussian processes", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Yihao Feng", "Lihong Li", "Qiang Liu" ], "title": "A kernel loss for solving the Bellman equation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yihao Feng", "Tongzheng Ren", "Ziyang Tang", "Qiang Liu" ], "title": "Accountable off-policy evaluation with kernel Bellman statistics", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Raphael Fonteneau", "Susan A. 
Murphy", "Louis Wehenkel", "Damien Ernst" ], "title": "Batch mode reinforcement learning based on the synthesis of artificial trajectories", "venue": "Annals of Operations Research,", "year": 2013 }, { "authors": [ "Mohammad Ghavamzadeh", "Shie Mannor", "Joelle Pineau", "Aviv Tamar" ], "title": "Bayesian reinforcement learning: A survey", "venue": "arXiv preprint arXiv:1609.04436,", "year": 2016 }, { "authors": [ "Mohammad Ghavamzadeh", "Shie Mannor", "Joelle Pineau", "Aviv Tamar" ], "title": "Bayesian reinforcement learning: A survey", "venue": "arXiv preprint arXiv:1609.04436,", "year": 2016 }, { "authors": [ "Josiah P Hanna", "Peter Stone", "Scott Niekum" ], "title": "Bootstrapping with models: Confidence intervals for off-policy evaluation", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Botao Hao", "Yaqi Duan", "Hao Lu", "Csaba Szepesvári", "Mengdi Wang" ], "title": "Bootstrapping statistical inference for off-policy evaluation", "venue": "arXiv preprint arXiv:2102.03607,", "year": 2021 }, { "authors": [ "Wassily Hoeffding" ], "title": "Probability inequalities for sums of bounded random variables", "venue": "Journal of the American Statistical Association,", "year": 1963 }, { "authors": [ "Nan Jiang", "Jiawei Huang" ], "title": "Minimax confidence interval for off-policy evaluation and policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Nan Jiang", "Lihong Li" ], "title": "Doubly robust off-policy evaluation for reinforcement learning", "venue": "In Proceedings of the 23rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum" ], "title": "Statistical bootstrapping for uncertainty estimation in off-policy evaluation", "venue": "arXiv preprint arXiv:2007.13609,", "year": 2020 }, { "authors": [ "Soumendra Nath Lahiri" ], "title": "Resampling methods for dependent data", "venue": 
"Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Nevena Lazic", "Dong Yin", "Mehrdad Farajtabar", "Nir Levine", "Dilan Gorur", "Chris Harris", "Dale Schuurmans" ], "title": "A maximum-entropy approach to off-policy evaluation in average-reward MDPs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Qiang Liu", "Lihong Li", "Ziyang Tang", "Dengyong Zhou" ], "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yao Liu", "Pierre-Luc Bacon", "Emma Brunskill" ], "title": "Understanding the curse of horizon in off-policy evaluation via conditional importance sampling", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ali Mousavi", "Lihong Li", "Qiang Liu", "Denny Zhou" ], "title": "Black-box off-policy estimation for infinitehorizon reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Susan A. Murphy", "Mark van der Laan", "James M. 
Robins" ], "title": "Marginal mean models for dynamic regimes", "venue": "Journal of the American Statistical Association,", "year": 2001 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Bo Dai", "Ilya Kostrikov", "Yinlam Chow", "Lihong Li", "Dale Schuurmans" ], "title": "Algaedice: Policy gradient from arbitrary experience", "venue": "arXiv preprint arXiv:1912.02074,", "year": 2019 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex optimization: A basic course, volume 87", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Daniel Paulin" ], "title": "Concentration inequalities for markov chains by marton couplings and spectral methods", "venue": "Electron. J. Probab,", "year": 2015 }, { "authors": [ "Iosif Pinelis" ], "title": "An approach to inequalities for the distributions of infinite-dimensional martingales", "venue": "In Probability in Banach Spaces,", "year": 1992 }, { "authors": [ "Doina Precup" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "Computer Science Department Faculty Publication Series, pp", "year": 2000 }, { "authors": [ "Doina Precup" ], "title": "Temporal abstraction in reinforcement learning", "venue": "ProQuest Dissertations and Theses,", "year": 2001 }, { "authors": [ "Doina Precup", "Richard S. Sutton", "Satinder P. 
Singh" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "In Proceedings of the 17th International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Alexander Rakhlin", "Karthik Sridharan", "Ambuj Tewari" ], "title": "Sequential complexities and uniform martingale laws of large numbers", "venue": "Probability Theory and Related Fields,", "year": 2015 }, { "authors": [ "Lorenzo Rosasco", "Mikhail Belkin", "Ernesto De Vito" ], "title": "On learning with integral operators", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Bernhard Scholkopf", "Alexander J Smola" ], "title": "Learning with kernels: support vector machines, regularization, optimization, and beyond", "venue": "Adaptive Computation and Machine Learning series,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Alex Smola", "Arthur Gretton", "Le Song", "Bernhard Schölkopf" ], "title": "A hilbert space embedding for distributions", "venue": "In Algorithmic learning theory,", "year": 2007 }, { "authors": [ "Richard S. Sutton", "Andrew G. 
Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 1998 }, { "authors": [ "Ziyang Tang", "Yihao Feng", "Lihong Li", "Dengyong Zhou", "Qiang Liu" ], "title": "Doubly robust bias reduction in infinite horizon off-policy estimation", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Ziyang Tang", "Yihao Feng", "Na Zhang", "Jian Peng", "Qiang Liu" ], "title": "Off-policy interval estimation with lipschitz value iteration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Philip S Thomas" ], "title": "Safe reinforcement learning", "venue": "PhD thesis, University of Massachusetts,", "year": 2015 }, { "authors": [ "Philip S. Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In Proceedings of the 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Philip S. 
Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High confidence policy improvement", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Philip S Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High-confidence off-policy evaluation", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Masatoshi Uehara", "Jiawei Huang", "Nan Jiang" ], "title": "Minimax weight and q-function learning for off-policy evaluation", "venue": "Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Junfeng Wen", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Batch stationary distribution estimation", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Martha White", "Adam White" ], "title": "Interval estimation for reinforcement-learning algorithms in continuous-state domains", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Tengyang Xie", "Nan Jiang" ], "title": "Q* approximation schemes for batch reinforcement learning: A theoretical comparison", "venue": "In Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2020 }, { "authors": [ "Tengyang Xie", "Yifei Ma", "Yu-Xiang Wang" ], "title": "Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mengjiao Yang", "Bo Dai", "Ofir Nachum", "George Tucker", "Dale Schuurmans" ], "title": "Offline policy selection under uncertainty", "venue": "arXiv preprint arXiv:2012.06919,", "year": 2020 }, { "authors": [ "Mengjiao Yang", "Ofir Nachum", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Off-policy evaluation via the regularized Lagrangian", "venue": "In Advances in Neural 
Information Processing Systems,", "year": 2020 }, { "authors": [ "Ming Yin", "Yu-Xiang Wang" ], "title": "Asymptotically efficient off-policy evaluation for tabular reinforcement learning", "venue": "In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2020 }, { "authors": [ "Ming Yin", "Yu Bai", "Yu-Xiang Wang" ], "title": "Near optimal provable uniform convergence in off-policy evaluation for reinforcement learning", "venue": "arXiv preprint arXiv:2007.03760,", "year": 2020 }, { "authors": [ "Ruiyi Zhang", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Gendice: Generalized offline estimation of stationary values", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Shangtong Zhang", "Bo Liu", "Shimon Whiteson" ], "title": "Gradientdice: Rethinking generalized offline estimation of stationary values", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Q (ω" ], "title": "The lower bound follows analogously. The strong duality holds when the Slater’s condition is satisfied (Nesterov, 2013), which amounts to saying that the primal problem in (8) is convex and strictly feasible; this requires that Q is convex and there exists at least one solution q ∈ Q that satisfy that constraint strictly, that is, LW(q; D̂n", "venue": null, "year": 2013 }, { "authors": [ "B.4. (Feng" ], "title": "Assume the reward function and kernel function is bounded with supx |r(x)| ≤ rmax and supx,x", "venue": null, "year": 2020 }, { "authors": [ "Mousavi" ], "title": "IQ(ω; D̂n) WHEN Q IS RKHS Similar to LK(q; D̂n), whenQ is taken to be the unit ball K̃ of the RKHS of a positive definite kernel k̃(x, x̄)", "venue": null, "year": 2020 }, { "authors": [ "Feng" ], "title": "Policy Construction We follow a similar setup", "venue": null, "year": 2017 }, { "authors": [ "Tang" ], "title": "2020b). 
These methods are based on either estimating the value function, or the stationary visitation distribution, which is shown to form a primal-dual relation", "venue": "(Tang et al., 2020a; Uehara et al.,", "year": 2020 }, { "authors": [ "Besides Feng" ], "title": "2020) which directly motivated this work, there has been a recent surge of interest in interval estimation under infinite-horizon OPE", "venue": "2020b; Yin et al.,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Off-policy evaluation (OPE) seeks to estimate the expected reward of a target policy in reinforcement learning (RL) from observational data collected under different policies (e.g., Murphy et al., 2001; Fonteneau et al., 2013; Jiang & Li, 2016; Liu et al., 2018a). OPE plays a central role in applying RL with only observational data and has found important applications in areas such as medicine and self-driving, where interactive “on-policy” data is expensive or even infeasible to collect. A critical challenge in OPE is the uncertainty estimation, as having reliable confidence bounds is essential for making high-stakes decisions. In this work, we aim to tackle this problem by providing non-asymptotic confidence intervals of the expected value of the target policy. Our method allows us to rigorously quantify the uncertainty of the prediction and hence avoid the dangerous case of being overconfident in making costly and/or irreversible decisions.\nHowever, off-policy evaluation per se has remained a key technical challenge in the literature (e.g., Precup, 2000; Thomas & Brunskill, 2016; Jiang & Li, 2016; Liu et al., 2018a), let alone gaining rigorous confidence estimation of it. This is especially true when 1) the underlying RL problem is long- or infinite-horizon, and 2) the data is collected under arbitrary and unknown algorithms (a.k.a. behavior-agnostic). As a consequence, the collected data can exhibit arbitrary dependency structure, which makes constructing rigorous non-asymptotic confidence bounds particularly challenging. Traditionally, the only approach to providing non-asymptotic confidence bounds in OPE is to combine importance sampling (IS) with concentration inequalities (e.g., Thomas et al., 2015a;b), which, however, tends to degenerate for long/infinite-horizon problems (Liu et al., 2018a).
Furthermore, neither can this approach be applied to the behavior-agnostic settings, nor can it effectively handle the complicated time dependency structure inside individual trajectories. Instead, it requires using a large number of independently collected trajectories drawn under known policies.\n∗Equal contribution.\nIn this work, we provide a practical approach for Behavior-agnostic, Off-policy, Infinite-horizon, Non-asymptotic, Confidence intervals based on arbitrarily Dependent data (BONDIC). Our method is motivated by a recently proposed optimization-based (or variational) approach to estimating OPE confidence bounds (Feng et al., 2020), which leverages a tail bound of kernel Bellman statistics (Feng et al., 2019). Our approach achieves a new bound that is both an order of magnitude tighter and more computationally efficient than that of Feng et al. (2020). Our improvements are based on two pillars: 1) developing a new primal-dual perspective on the non-asymptotic OPE confidence bounds, which is connected to a body of recent works on infinite-horizon value estimation (Liu et al., 2018a; Nachum et al., 2019a; Tang et al., 2020a; Mousavi et al., 2020); and 2) offering a new tight concentration inequality on the kernel Bellman statistics that applies to behavior-agnostic off-policy data with arbitrary dependency between transition pairs. Empirically, we demonstrate that our method can provide reliable and tight bounds on a variety of well-established benchmarks.\nRelated Work Besides the aforementioned approach based on the combination of IS and concentration inequalities (e.g., Thomas et al., 2015a), bootstrapping methods have also been widely used in off-policy estimation (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020). But the latter is limited to asymptotic bounds. Alternatively, Bayesian methods (e.g.
Engel et al., 2005; Ghavamzadeh et al., 2016a) offer a different way to estimate the uncertainty in RL, but fail to guarantee frequentist coverage. In addition, distributional RL (Bellemare et al., 2017) seeks to quantify the intrinsic uncertainties inside the Markov decision process, which is orthogonal to the estimation of uncertainty that we consider.\nOur work is built upon the recent advances in behavior-agnostic infinite-horizon OPE, including Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020), as well as the DICE family (e.g., Nachum et al., 2019a; Zhang et al., 2020a; Yang et al., 2020b). In particular, our method can be viewed as extending the minimax framework of infinite-horizon OPE in the infinite-data regime by Tang et al. (2020a); Uehara et al. (2020); Jiang & Huang (2020) to the non-asymptotic finite-sample regime.\nOutline For the rest of the paper, we start with the problem statement in Section 2, and an overview of the two dual approaches to infinite-horizon OPE that are tightly connected to our method in Section 3. We then present our main approach in Section 4 and perform empirical studies in Section 5. The proofs and an abundance of additional discussion can be found in the Appendix." }, { "heading": "2 BACKGROUND, DATA ASSUMPTION, PROBLEM SETTING", "text": "Consider an agent acting in an unknown environment. At each time step t, the agent observes the current state st in a state space S, takes an action at ∼ π(· | st) in an action space A according to a given policy π; then, the agent receives a reward rt and the state transits to s′t = st+1, following an unknown transition/reward distribution (rt, st+1) ∼ P(· | st, at). Assume the initial state s0 is drawn from a known initial distribution D0. Let γ ∈ (0, 1) be a discount factor.
In this setting, the expected reward of π is defined as Jπ := Eπ[∑_{t=0}^T γ^t r_t | s_0 ∼ D_0], which is the expected total discounted reward when we execute π starting from D0 for T steps. In this work, we consider the infinite-horizon case with T → +∞. Our goal is to provide an interval estimation of Jπ for a general and challenging setting with significantly relaxed constraints on the data. In particular, we assume the data is behavior-agnostic and off-policy, which means that the data can be collected from multiple experiments, each of which can execute a mix of arbitrary, unknown policies, or even follow a non-fixed policy. More concretely, suppose that the model P is unknown, and we have a set of transition pairs D̂_n = {(s_i, a_i, r_i, s′_i)}_{i=1}^n collected from previous experiments in a sequential order, such that for each data point i, the pair (r_i, s′_i) is drawn from the model P(· | s_i, a_i), while (s_i, a_i) is generated by an arbitrary black box given the previous data points. We formalize both the data assumption and the goal as below.\nAssumption 2.1 (Data Assumption). Assume the data D̂_n = {(s_i, a_i, r_i, s′_i)}_{i=1}^n is drawn from an arbitrary joint distribution, such that for each i = 1, . . . , n, conditional on D̂_{<i} := {(s_j, a_j, r_j, s′_j)}_{j<i} ∪ {(s_i, a_i)}, the subsequent local reward and next state (r_i, s′_i) are drawn from P(· | s_i, a_i).\nGoal Given a confidence level δ ∈ (0, 1), we want to construct an interval [Ĵ−, Ĵ+] ⊂ R based on the data D̂_n, such that Pr(Jπ ∈ [Ĵ−, Ĵ+]) ≥ 1 − δ, where Pr(·) is w.r.t. the randomness of the data. The partial ordering on the data points is introduced to accommodate the case that s_{i+1} equals s′_j for some j ≤ i. The data assumption only requires that (r_i, s′_i) is generated from P(· | s_i, a_i), and imposes no constraints on how (s_i, a_i) is generated. This provides great flexibility in terms of the data collection process.
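For concreteness, a data-generating process satisfying Assumption 2.1 can be mimicked in a few lines: the pairs (s_i, a_i) below come from an arbitrary, mid-stream-switching behavior policy, while (r_i, s'_i) is always drawn from the fixed model P(· | s_i, a_i). This is an illustrative sketch only; the chain environment, the `step` function, and the behavior policies are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2

def step(s, a):
    # Unknown environment P(r, s' | s, a): a hypothetical noisy random-walk chain.
    s_next = (s + (1 if a == 1 else -1)) % n_states
    r = float(s_next == 0) + 0.1 * rng.standard_normal()
    return r, s_next

# Behavior-agnostic collection: (s_i, a_i) may come from an arbitrary black box
# (here a uniform policy that switches to a deterministic one halfway through),
# while (r_i, s'_i) ~ P(.|s_i, a_i), as Assumption 2.1 requires.
data, s = [], 0
for i in range(100):
    a = int(rng.integers(n_actions)) if i < 50 else 1  # two different behaviors
    r, s_next = step(s, a)
    data.append((s, a, r, s_next))
    s = s_next  # s_{i+1} = s'_i: transition pairs are dependent, not i.i.d.
```

Note that consecutive tuples share states (s_{i+1} = s'_i), so the resulting D̂_n deliberately violates the independence assumed in earlier work while still satisfying Assumption 2.1.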
In particular, we do not require {(s_i, a_i)}_{i=1}^n to be independent, as is always assumed in recent works (Liu et al., 2018a; Mousavi et al., 2020).\nA crucial fact is that our data assumption implies a martingale structure on the empirical Bellman residual operator of the Q-function. As we will show in Section 4.1, this enables us to derive a key concentration inequality underpinning our non-asymptotic confidence bounds.\nHere, we summarize a few notations that will simplify the presentation in the rest of the work. First of all, we append each (s_i, a_i, r_i, s′_i) with an action a′_i ∼ π(· | s′_i) following s′_i. This can be done for free as long as π is given (see the Remark in Section 3). Also, we write x_i = (s_i, a_i), x′_i = (s′_i, a′_i), and y_i = (x′_i, r_i) = (s′_i, a′_i, r_i). Correspondingly, define X = S × A to be the state-action space and Y = X × R. Denote Pπ(y | x) = P(s′, r | x) π(a′ | s′). In this way, the observed data can be written as pairs {x_i, y_i}_{i=1}^n, and Assumption 2.1 is equivalent to saying that y_i ∼ Pπ(· | x_i) given D̂_{<i}, which is similar to a supervised learning setting. We identify the data D̂_n with its empirical measure D̂_n = (1/n) ∑_{i=1}^n δ_{x_i, y_i}, where δ is the Dirac delta measure." }, { "heading": "3 TWO DUAL APPROACHES TO INFINITE-HORIZON OFF-POLICY ESTIMATION", "text": "The deficiency of the traditional IS methods on long-/infinite-horizon RL problems (a.k.a. the curse of horizon (Liu et al., 2018a)) has motivated a line of work on developing efficient infinite-horizon value estimation (e.g., Liu et al., 2018a; Feng et al., 2019; Nachum et al., 2019a; Zhang et al., 2020a; Mousavi et al., 2020; Tang et al., 2020a). The main idea is to transform the value estimation problem into estimating either the Q-function or the visitation distribution (or its related density ratio) of the policy π.
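The two routes can be sanity-checked exactly on a tabular example: under a fixed policy, the value-function route solves a Bellman linear system, while the visitation route averages rewards under the discounted occupancy, and both recover the same Jπ. The sketch below uses a hypothetical 2-state chain with the state-value simplification of the Q-route (all numbers are illustrative, not from the paper).

```python
import numpy as np

# Toy 2-state MDP under a fixed policy pi: P[s, s'] is the state transition
# matrix induced by pi, r[s] the expected per-state reward, gamma the discount.
gamma = 0.9
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
r = np.array([1.0, 0.0])
d0 = np.array([0.5, 0.5])  # initial distribution D_0

# Primal (value-function) route: v = (I - gamma * P)^{-1} r, then J = <D_0, v>.
v = np.linalg.solve(np.eye(2) - gamma * P, r)
J_primal = d0 @ v

# Dual (visitation) route: the unnormalized discounted occupancy
# d = sum_t gamma^t d0 P^t solves d = d0 + gamma * P^T d; then J = <d, r>.
d = np.linalg.solve(np.eye(2) - gamma * P.T, d0)
J_dual = d @ r  # equals J_primal up to floating-point error
```

The agreement J_primal = J_dual is the finite, exact analogue of the identity Jπ = E_{x∼Dπ,0}[qπ(x)] = E_{r∼Dπ}[r] that the two estimation approaches exploit.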
This section introduces and reinterprets these two tightly connected methods, which serves to lay out a foundation for our main confidence bounds from a primal and dual perspective.\nGiven a policy π, its Q-function is defined as qπ(x) = Eπ[∑_{t=0}^∞ γ^t r_t | x_0 = x], where the expectation is taken when we execute π initialized from a fixed state-action pair (s_0, a_0) = x_0 = x. Let D_{π,t} be the distribution of (x_t, y_t) = (s_t, a_t, s′_t, a′_t, r_t) when executing policy π starting from s_0 ∼ D_0 for\nt steps. The visitation distribution of π is defined as Dπ = ∑_{t=0}^∞ γ^t D_{π,t}. Note that Dπ integrates to 1/(1 − γ), while we treat it as a probability measure in the notation. The expected reward Jπ can be expressed using either qπ or Dπ as follows:\nJπ := Eπ[∑_{t=0}^∞ γ^t r_t] = E_{r∼Dπ}[r] = E_{x∼D_{π,0}}[qπ(x)],  (1)\nwhere r ∼ Dπ (resp. x ∼ D_{π,0}) denotes sampling from the r- (resp. x-) marginal distribution of Dπ (resp. D_{π,0}). Eq. (1) plays a key role in the infinite-horizon value estimation by transforming the estimation of Jπ into estimating either qπ or Dπ.\nValue Estimation via Q-Function Because D_{π,0}(x) = D_0(s) π(a | s) is known, we can estimate Jπ by E_{x∼D_{π,0}}[q̂(x)] with any estimate q̂ of the true Q-function qπ; the expectation under x ∼ D_{π,0} can be estimated to any accuracy with Monte Carlo. To estimate qπ, we consider the empirical and expected Bellman residual operators:\nR̂q(x, y) = q(x) − γ q(x′) − r,  Rπq(x) = E_{y∼Pπ(·|x)}[R̂q(x, y)].  (2)\nIt is well known that qπ is the unique solution of the Bellman equation Rπq = 0. Since y_i ∼ Pπ(· | x_i) for each data point in D̂_n, if q = qπ, then R̂q(x_i, y_i), i = 1, . . . , n are all zero-mean random variables.\nLet ω be any function from X to R; then ∑_i R̂q(x_i, y_i) ω(x_i) also has zero mean. This motivates the following functional Bellman loss (Feng et al., 2019; 2020; Xie & Jiang, 2020):\nL_W(q; D̂_n) := sup_{ω∈W} { (1/n) ∑_{i=1}^n R̂q(x_i, y_i) ω(x_i) },  (3)\nwhere W is a set of functions ω : X → R.
To ensure that the sup is finite, W is typically set to be a unit ball of some normed function space W_o, such that W = {ω ∈ W_o : ‖ω‖_{W_o} ≤ 1}. Feng et al. (2019) considers the simple case when W is taken to be the unit ball K of the reproducing kernel Hilbert space (RKHS) with a positive definite kernel k : X × X → R, in which case the loss has a simple closed-form solution:\nL_K(q; D̂_n) = √( (1/n²) ∑_{i,j=1}^n R̂q(x_i, y_i) k(x_i, x_j) R̂q(x_j, y_j) ).  (4)\nNote that the RHS of Eq. (4) is the square root of the kernel Bellman V-statistic in Feng et al. (2019). Feng et al. (2019) showed that, when the support of the data distribution D̂_n covers the whole space (which may require an infinite data size) and k is an integrally strictly positive definite kernel, L_K(q; D̂_n) = 0 iff q = qπ. Therefore, one can estimate qπ by minimizing L_K(q; D̂_n).\nRemark The empirical Bellman residual operator R̂ can be extended to R̂q(x, y) = q(x) − r − γ (1/m) ∑_{ℓ=1}^m q(s′, a′_ℓ), where {a′_ℓ}_{ℓ=1}^m are i.i.d. drawn from π(· | s′). As m increases, this gives a lower-variance estimate of Rπq. If m = +∞, we have R̂q(x, y) = q(x) − r − γ E_{a′∼π(· | s′)}[q(s′, a′)], which coincides with the operator used in expected SARSA (Sutton & Barto, 1998). In fact, without any modification, all results in this work can be applied to R̂q for any m.\nValue Estimation via Visitation Distribution Another way to estimate Jπ in Eq. (1) is to approximate Dπ with a weighted empirical measure of the data (Liu et al., 2018a; Nachum et al., 2019a; Mousavi et al., 2020; Zhang et al., 2020a). The key idea is to assign an importance weight ω(x_i) to each data point x_i in D̂_n. We can choose the function ω : X → R properly such that Dπ, and hence Jπ, can be approximated by the ω-weighted empirical measure of D̂_n as follows:\nJπ ≈ Ĵ_ω := E_{D̂_n^ω}[r] = (1/n) ∑_{i=1}^n ω(x_i) r_i,  Dπ ≈ D̂_n^ω := (1/n) ∑_{i=1}^n ω(x_i) δ_{x_i, y_i}.
(5)\nIntuitively, ω can be viewed as the density ratio between Dπ and D̂_n, although the empirical measure D̂_n may not have a well-defined density. Liu et al. (2018a); Mousavi et al. (2020) proposed to estimate ω by minimizing a discrepancy measure between D̂_n^ω and Dπ. To see this, note that D = Dπ if and only if ∆(D, q) = 0 for any function q, where\n∆(D, q) = E_D[γ q(x′) − q(x)] − E_{Dπ}[γ q(x′) − q(x)] = E_D[γ q(x′) − q(x)] + E_{D_{π,0}}[q(x)],  (6)\nusing the fact that E_{Dπ}[γ q(x′) − q(x)] = −E_{D_{π,0}}[q(x)] (Theorem 1, Liu et al., 2018a). Also note that the RHS of Eq. (6) can be practically calculated given any D and q without knowing Dπ. Let Q be a set of functions q : X → R. One can define the following loss for ω:\nI_Q(ω; D̂_n) = sup_{q∈Q} { ∆(D̂_n^ω, q) }.  (7)\nSimilar to L_W(q; D̂_n), when Q is a ball in an RKHS, I_Q(ω; D̂_n) also has a bilinear closed form analogous to Eq. (4); see Mousavi et al. (2020) and Appendix F. As we show in Section 4, I_Q(ω; D̂_n) and L_W(q; D̂_n) are connected to the primal and dual views of our confidence bounds, respectively." }, { "heading": "4 MAIN APPROACH", "text": "Let Q be a large enough function set including the true Q-function qπ, that is, qπ ∈ Q. Following Feng et al. (2020), a confidence interval [Ĵ⁻_{Q,W}, Ĵ⁺_{Q,W}] of Jπ can be constructed as follows:\nĴ⁺_{Q,W} = sup_{q∈Q} { E_{D_{π,0}}[q]  s.t.  L_W(q; D̂_n) ≤ ε_n },  (8)\nand Ĵ⁻_{Q,W} is defined in a similar way by replacing the sup over q ∈ Q with an inf. The idea here is to seek the extreme q functions with the largest (resp. smallest) expected values in the set F := Q ∩ {q : L_K(q; D̂_n) ≤ ε_n}. Therefore, Eq. (8) would be a 1 − δ confidence interval if qπ is included in F with probability at least 1 − δ, which is ensured when qπ ∈ Q and\nPr(L_W(qπ; D̂_n) ≤ ε_n) ≥ 1 − δ.  (9)\nFeng et al. (2020) showed that in the RKHS case when W = K, Eq. (9) can be achieved with\nε_n = √( 2 c_{qπ,k} ( (n−1)/n · √(log(1/δ)/n) + 1/n ) ),  (10)\nwhen n is an even number, where c_{qπ,k} = sup_{x,y} R̂qπ(x, y)² k(x, x).
This was proved using Hoeffding’s inequality for U-statistics (Hoeffding, 1963). To solve Eq. (8) efficiently, Feng et al. (2020) took Q to be a ball in an RKHS with a random feature approximation. Unfortunately, the method described by Eq. (8)-(10) has two major disadvantages:
1) Bound Needs to Be Tightened (Section 4.1) The bound of εn = O(n−1/4) in Eq. (10) is sub-optimal in rate. In Section 4.1, we improve it to an εn = O(n−1/2) bound under the mild Assumption 2.1, which also removes the independence requirement between the transition pairs. Our tightened bound is obtained by first noting a martingale structure on the empirical Bellman operator under Assumption 2.1, and then applying an inequality of Pinelis (1992).
2) Dependence on Global Optimization (Section 4.2) The bound in Eq. (8) is guaranteed to be a 1 − δ confidence bound only when the maximization in Eq. (8) is solved to global optimality. With a large n, this leads to a high computational cost, even when Q is an RKHS ball. Feng et al. (2020) solved Eq. (8) approximately using a random feature technique, but this introduces a gap between theory and practice. In Section 4.2, we address this problem by presenting a dual form of Eq. (8), which sidesteps the challenging global optimization in Eq. (8). Moreover, the dual form enables us to better analyze the tightness of the confidence interval and issues regarding the choices of Q and W." }, { "heading": "4.1 A TIGHTER CONCENTRATION INEQUALITY", "text": "In this section, we explain our method for improving the bound in Eq. (10) by giving a tighter concentration inequality for the kernel Bellman loss in Eq. (4). We introduce the following semi-expected kernel Bellman loss:
L∗K(q; D̂n) = √( (1/n²) ∑_{i,j=1}^n Rπq(xi) k(xi, xj) Rπq(xj) ), (11)
in which we replace the empirical Bellman residual operator R̂q in Eq. (3) with its expected counterpart Rπq, but still take the empirical average over {xi}ni=1 in D̂n.
For a more general function set W , we can similarly define L∗W(q; D̂n) by replacing R̂q with Rπq in Eq. (3). Obviously, we have L∗W(q; D̂n) = 0 when q = qπ .\nTheorem 4.1 below shows that LK(q; D̂n) concentrates around L∗K(q; D̂n) with an O(n −1/2) error under Assumption 2.1. At a first glance, it may seem surprising that the concentration bound is able to hold even without any independence assumption between {xi}. An easy way to make sense of this is by recognizing that the randomness in yi conditional on xi is aggregated through averaging, even when {xi} are deterministic. As Assumption 2.1 does not impose any (weak) independence between {xi}, we cannot establish that LK(q; D̂n) concentrates around its mean ED̂n [LK(q; D̂n)] (which is a full expected kernel bellman loss), without introducing further assumptions.\nTheorem 4.1. Assume K is the unit ball of RKHS with a positive definite kernel k(·, ·). Let cq,k := supx∈X ,y∈Y(R̂q(x, y)−Rπq(x))2k(x, x) <∞. Under Assumption 2.1, for any δ ∈ (0, 1), with at\nleast probability 1− δ, we have∣∣∣LK(q; D̂n)− L∗K(q; D̂n)∣∣∣ ≤ √ 2cq,k log(2/δ)\nn . (12)\nIn particular, when q = qπ , we have cqπ,k = supx,y(R̂qπ(x, y)) 2k(x, x), and LK(qπ; D̂n) ≤ √ 2cqπ,k log(2/δ)\nn . (13)\nIntuitively, to see why we can expect an O(n−1/2) bound, note that LK(q, D̂n) consists of the square root of the product of two R̂q terms, each of which contributes an O(n−1/2) error w.r.t. Rπq.\nTechnically, the proof is based on a key observation: Assumption 2.1 ensures that Zi := R̂q(xi, yi)− Rπq(xi), i = 1, . . . , n forms a martingale difference sequence w.r.t. {D̂<i : ∀i = 1, . . . , n}, in the sense that E[Zi | D̂<i] = 0, ∀i. See Appendix B for details. The proof also leverages a special property of RKHS and applies a Hoeffding-like inequality on the Hilbert spaces as in Pinelis (1992) (see Appendix B). 
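As a concrete illustration of Eq. (4) and the threshold in Eq. (13), the sketch below computes the empirical kernel Bellman loss and εn with NumPy. The RBF kernel, the toy deterministic transition data, and all function names here are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 h^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * bandwidth ** 2))

def kernel_bellman_loss(q, X, R, Xp, gamma, K):
    # Eq. (4): L_K(q) = sqrt( (1/n^2) sum_ij res_i K_ij res_j ),
    # with empirical Bellman residuals res_i = q(x_i) - r_i - gamma q(x'_i).
    res = q(X) - R - gamma * q(Xp)
    n = len(R)
    return np.sqrt(max(res @ K @ res, 0.0)) / n

def eps_threshold(c_qk, n, delta):
    # Eq. (13): eps_n = sqrt(2 c log(2/delta) / n).
    return np.sqrt(2.0 * c_qk * np.log(2.0 / delta) / n)
```

For example, with constant reward r = 1 and γ = 0.9, the true Q-function is the constant 1/(1 − γ) = 10, so its residuals vanish and the loss is 0 ≤ εn, while a wrong q gives a strictly positive loss.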
For other more general function setsW , we establish in Appendix E a similar bound by using Rademacher complexity, although it yields a less tight bound than Eq. (12) when W = K." }, { "heading": "4.2 DUAL CONFIDENCE BOUNDS", "text": "We derive a dual form of Eq. (8) that sidesteps the need for solving the challenging global optimization in Eq. (8). To do so, let us plug the definition of LW(q; D̂n) into Eq. (3) and introduce a Lagrange multiplier:\nĴ+Q,W = sup q∈Q inf h∈W inf λ≥0\nEDπ,0 [q]− λ ( 1\nn n∑ i=1\nh(xi)R̂q(xi, yi)− εn )\n(14)\n= sup q∈Q inf ω∈Wo\n{ EDπ,0 [q]− 1\nn n∑ i=1 ω(xi)R̂q(xi) + εn ‖ω‖Wo\n} , (15)\nwhere we use ω(x) = λh(x). Exchanging the order of min/max and some further derivation yields the following main result.\nTheorem 4.2. I) LetW be the unit ball of a normed function spaceWo. We have Ĵ+Q,W ≤ F̂+Q (ω) := ED̂ωn [r] + IQ(ω; D̂n) + εn ‖ω‖Wo , ∀ω ∈ Wo , Ĵ−Q,W ≥ F̂−Q (ω) := ED̂ωn [r]− I−Q(ω; D̂n)− εn ‖ω‖Wo , ∀ω ∈ Wo ,\n(16)\nwhere −Q = {−q : q ∈ Q} and hence I−Q(ω; D̂n) = IQ(ω; D̂n) if Q = −Q. Further, we have Ĵ+Q,W = infω∈Wo F̂ + Q (ω) and Ĵ − Q,W = supω∈Wo F̂ − Q (ω) if Q is convex and there exists a q ∈ Q that satisfies the strict feasibility condition that LW(q; D̂n) < εn.\nII) For D̂n and δ ∈ (0, 1), assume Wo and εn ∈ R satisfy Eq. (9) (e.g., via Theorem 4.1). Then for any function set Q with qπ ∈ Q, and any function ω+, ω− ∈ Wo (the choice of Q, ω+, ω− can depend on D̂n arbitrarily), we have\nPr ( Jπ ∈ [ F̂−Q (ω−), F̂ + Q (ω+) ]) ≥ 1− δ . (17)\nTheorem 4.2 transforms the original bound in Eq. (8), framed in terms of q and LW(q; D̂n), into a form that involves the density-ratio ω and the related loss IQ(ω; D̂n). The bounds in Eq. (16) can be interpreted as assigning an error bar around the ω-based estimator Ĵω = ED̂ωn [r] in Eq. (5), with the error bar of I±Q(ω; D̂n) + εn ‖ω‖Wo . Specifically, the first term I±Q(ω; D̂n) measures the discrepancy between D̂ωn and Dπ as discussed in Eq. 
(7), whereas the second term captures the randomness in the empirical Bellman residual operator R̂qπ .\nCompared with Eq. (8), the global maximization on q ∈ Q is now transformed inside the IQ(ω; D̂n) term, which yields a simple closed form solution in the RKHS case (see Appendix F). In practice, we can optimize ω+ and ω− to obtain the tightest possible bound (and hence recover the primal bound) by minimizing/maximizing F̂+Q (ω) and F̂ − Q (ω), but it is not necessary to solve the optimization to global optimality. WhenWo is an RKHS, by the standard finite representer theorem (Scholkopf & Smola, 2018), the optimization on ω reduces to a finite dimensional optimization, which can be sped up with any favourable approximation techniques. We elaborate on this in Appendix D.\nLength of the Confidence Interval The form in Eq. (16) also makes it much easier to analyze the tightness of the confidence interval. Suppose ω = ω+ = ω− and Q = −Q, the length of the optimal confidence interval is\nlength([Ĵ−Q,W , Ĵ + Q,W ]) = inf\nω∈Wo\n{ 2IQ(ω; D̂n) + 2εn ‖ω‖Wo } .\nGiven εn is O(n−1/2), we can make the overall length of the optimal confidence interval also O(n−1/2), as long asWo is rich enough to include a good density ratio estimator ω∗ that satisfies IQ(ω\n∗; D̂n) = O(n −1/2) and has a bounded norm ‖ω∗‖Wo .\nWe can expect to achieve IQ(ω∗; D̂n) = O(n−1/2), when (1) Q has an O(n−1/2) sequential Rademacher complexity (Rakhlin et al., 2015) (e.g., a finite ball in RKHS); and (2) D̂n is collected following a Markov chain with strong mixing condition and weakly converges to some limit distribution D∞ whose support is X , and therefore we can define ω∗ as the density ratio between Dπ and D∞. Refer to Appendix C for more discussions. 
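To make the dual bound in Eq. (16) concrete: once IQ(ω; D̂n), ‖ω‖Wo, and εn are available (e.g., via the RKHS closed form of Appendix F and Theorem 4.1), evaluating the interval for a candidate ω is a one-line computation. The sketch below is our own illustration under those assumptions; any ω yields a valid interval, and optimizing ω only tightens it.

```python
import numpy as np

def dual_interval(w, rewards, I_plus, I_minus, w_norm, eps_n):
    # Eq. (16): an error bar around the weighted estimator
    # J_hat_w = (1/n) sum_i w_i r_i, valid for ANY choice of w.
    J_w = float(np.mean(w * rewards))
    upper = J_w + I_plus + eps_n * w_norm   # F^+_Q(w)
    lower = J_w - I_minus - eps_n * w_norm  # F^-_Q(w)
    return lower, upper
```

The resulting interval length is I_plus + I_minus + 2 εn ‖ω‖Wo, matching the length expression discussed in the text.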
Indeed, our experiments show that the lengths of practically constructed confidence intervals do tend to decay with an O(n−1/2) rate.
Choice of W and Q To ensure that the concentration inequality in Theorem 4.1 is valid, the choice of Wo cannot depend on the data D̂n. Therefore, we should use a separate holdout dataset to construct a data-dependent Wo. In contrast, the choice of Q can depend on the data D̂n arbitrarily, since it is part of the optimization bound Eq. (8) but not of the tail bound Eq. (9). In this light, one can construct the best possible Q by exploiting the data information in the most favourable way. For example, we can construct an estimator q̂ ≈ qπ based on any state-of-the-art method (e.g., Q-learning or model-based methods), and set Q to be a ball centered around q̂ that is large enough to ensure qπ ∈ Q. This enables post-hoc analysis based on prior information on qπ, as suggested in Feng et al. (2020).
Mis-specification of Q and Oracle Upper/Lower Estimates Our result relies on the assumption that qπ ∈ Q. However, as with other statistical estimation problems, there exists no provable way to empirically verify the correctness of model assumptions such as qπ ∈ Q: the empirical data only reveal information about the unknown function (in our case qπ) on a finite number of data points, and no conclusion can be made about the unseen points without imposing certain smoothness assumptions. Typically, what we can do is the opposite: reject qπ ∈ Q when the Bellman loss LW(q; D̂n) of every q ∈ Q is larger than the threshold εn. We highlight that, even without verifying qπ ∈ Q, our method can still be viewed as a confidence interval for the best possible (oracle) upper and lower estimates given the data D̂n plus the assumption that qπ ∈ Q, defined as
Ĵ+Q,∗ = sup q∈Q { EDπ,0 [q] s.t. R̂q(xi, yi) = R̂qπ(xi, yi), ∀i = 1, . . . , n } .
(18)
In fact, it is impossible to derive empirical upper bounds lower than Ĵ+Q,∗, as there is no way to distinguish q and qπ if R̂q(xi, yi) = R̂qπ(xi, yi) for all i. But our interval [Ĵ−Q,K, Ĵ+Q,K] provides a 1 − δ confidence outer bound of [Ĵ−Q,∗, Ĵ+Q,∗] once Eq. (9) holds, regardless of whether qπ ∈ Q holds or not. Hence, it is of independent interest to further explore the dual form of Eq. (18), which is another starting point for deriving our bound. We provide more discussion in Appendix G.
Lastly, we argue that it is important to include the constraint q ∈ Q in the bound. Proposition G.1 in the Appendix shows that removing the q ∈ Q constraint in Eq. (18) would lead to an infinite upper bound, unless the {si, s′i}ni=1 from D̂n almost surely cover the whole state space S in the sense that Prs∼D0(s ∈ {si, s′i}ni=1) = 1." }, { "heading": "5 EXPERIMENTS", "text": "We compare our method with a variety of existing algorithms for obtaining asymptotic and non-asymptotic bounds on a number of benchmarks. We find that our method provides confidence intervals that correctly cover the true expected reward with probability larger than the specified success probability 1 − δ (and are hence safe) across the multiple examples we tested. In comparison, the non-asymptotic bounds based on IS provide much wider confidence intervals. On the other hand, the asymptotic methods, such as bootstrap, despite giving tighter intervals, often fail to capture the true values with the given probability in practice.
Environments and Dataset Construction We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator.1 We follow a similar procedure as Feng et al. (2020) to construct the behavior and target policies. More details on the environments and the data collection procedure are included in Appendix H.1.
Algorithm Settings We test the dual bound described in our paper.
Throughout the experiment, we always set W = K, the unit ball of the RKHS with positive definite kernel k, and set Q = rQ K̃, the ball of radius rQ in the RKHS with another kernel k̃. We take both kernels to be Gaussian RBF kernels and choose rQ and the bandwidths of k and k̃ using the procedure in Appendix H.2. We use a fast approximation method to optimize ω in F+Q(ω) and F−Q(ω), as shown in Appendix D. Once ω is found, we evaluate the bound in Eq. (16) exactly to ensure that the theoretical guarantee holds.
Baseline Algorithms We compare our method with four existing baselines, including the IS-based non-asymptotic bound using the empirical Bernstein inequality by Thomas et al. (2015b), the IS-based bootstrap bound of Thomas (2015), the bootstrap bound based on fitted Q evaluation (FQE) by Kostrikov & Nachum (2020), and the bound in Feng et al. (2020), which is equivalent to the primal bound in (8) but with a looser concentration inequality (they use an εn = O(n−1/4) threshold).
Results Figure 1 shows that our method obtains much tighter bounds than Feng et al. (2020). This is because we use a much tighter concentration inequality, even though the dual bound that we use can be slightly looser than the primal bound used in Feng et al. (2020). Our method is also more computationally efficient than that of Feng et al. (2020), because the dual bound can be tightened approximately while the primal bound requires solving a global optimization problem. Figure 1 (b) shows that we provide increasingly tight bounds as the data size n increases, and the length of the interval decays with an approximately O(n−1/2) rate. Figure 1 (c) shows that when we increase the significance level δ, our bounds become tighter while still capturing the ground truth. Figure 1 (d) shows the percentage of times that the interval fails to capture the true value in a total of 100 random trials (denoted as δ̂) as we vary δ.
1 https://github.com/jxx123/simglucose.
We can see that δ̂ remains close to zero even when δ is large, suggesting that our bound is very conservative. Part of the reason is that the bound is constructed by considering the worst case, and we used a conservative choice of the radius rQ and the coefficient cqπ,k in Eq. (13) (see Appendix H.2).
In Figure 2 we compare different algorithms on more examples with δ = 0.1. We can again see that our method provides tight and conservative intervals that always capture the true value. Although FQE (Bootstrap) yields tighter intervals than our method, it fails to capture the ground truth much more often than the promised δ = 0.1 (e.g., it fails in all the random trials in Figure 2 (a)).
We conduct more ablation studies on different hyper-parameters and data collection procedures. See Appendix H.2 and H.3 for more details." }, { "heading": "6 CONCLUSION", "text": "We develop a dual approach to construct high-confidence bounds for off-policy evaluation with an improved rate over Feng et al. (2020). Our method can handle dependent data, and does not require global optimization to yield a valid bound. Empirical results demonstrate that our bounds are tight and valid compared with a range of existing baselines. Future directions include leveraging our bounds for policy optimization and safe exploration." }, { "heading": "A PROOF OF THE DUAL BOUND IN THEOREM 4.2", "text": "Proof. Introducing a Lagrange multiplier, the bound in (8) is equivalent to
Ĵ+Q,W = max_{q∈Q} min_{λ≥0} { EDπ,0[q] − λ ( max_{h∈W} (1/n) ∑_{i=1}^n h(xi) R̂q(xi, yi) − εn ) }
= max_{q∈Q} min_{λ≥0} min_{h∈W} { EDπ,0[q] − λ ( (1/n) ∑_{i=1}^n h(xi) R̂q(xi, yi) − εn ) }
= max_{q∈Q} min_{ω∈Wo} { EDπ,0[q] − (1/n) ∑_{i=1}^n ω(xi) R̂q(xi, yi) + εn ‖ω‖Wo } ,
where we use ω = λh(x), such that λ is replaced by ‖ω‖Wo.
Define\nM(q, ω; D̂n) = EDπ,0 [q] − 1\nn n∑ i=1 ω(xi)R̂q(xi, yi) + εn ‖ω‖Wo\n= ED̂ωn [r] + ∆(D̂ ω n , q) + εn ‖ω‖Wo .\nThen we have\nmax q∈Q M(q, ω; D̂n) = ED̂ωn [r] + maxq∈Q ∆(D̂ ω n , q) + εn ‖ω‖Wo\n= ED̂ωn [r] + IQ(ω; D̂n) + εn ‖ω‖Wo = F̂+Q (ω).\nTherefore,\nĴ+Q,W = max q∈Q min ω∈Wo M(q, ω; D̂n)\n≤ min ω∈Wo max q∈Q M(q, ω; D̂n)\n= min ω∈Wo\nF̂+Q (ω).\nThe lower bound follows analogously. The strong duality holds when the Slater’s condition is satisfied (Nesterov, 2013), which amounts to saying that the primal problem in (8) is convex and strictly feasible; this requires that Q is convex and there exists at least one solution q ∈ Q that satisfy that constraint strictly, that is, LW(q; D̂n) < εn; note that the objective function Q is linear on q and the constraint function LW(q; D̂n) is always convex on q (since it is the sup a set of linear functions on q following (3))." }, { "heading": "B PROOF OF CONCENTRATION BOUND IN THEOREM 4.1", "text": "Our proof require the following Hoeffding inequality on Hilbert spaces by Pinelis (Theorem 3, 1992); see also Section 2.4 of Rosasco et al. (2010).\nLemma B.1. (Theorem 3, Pinelis, 1992) Let H be a Hilbert space and {fi}ni=1 is a Martingale sequence inH that satisfies supi ‖fi‖H ≤ σ almost surely. We have for any > 0,\nPr (∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ H ≥ ) ≤ 2 exp ( −n 2 2σ2 ) .\nTherefore, with probability at least 1− δ, we have ∥∥ 1 n ∑n i=1 fi ∥∥ H ≤ √ 2σ2 log(2/δ) n .\nLemma B.2. Let k(x, x′) be a positive definite kernel whose RKHS isHk. Define fi(·) = R̂q(xi, yi)k(xi, ·)−Rπq(xi)k(xi, ·).\nAssume Assumption 2.1 holds, then {fi}ni=1 is a Martingale difference sequence inHk w.r.t. T<i := (xj , yj)j<i ∪ (xi). That is, E [fi+1(·) | T<i] = 0. In addition,∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ 2\nHk\n= 1\nn2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) ,\nand ‖fi‖2Hk ≤ cq,k for ∀i = 1, . . . , n.\nProof of Theorem 4.1. 
Following Lemma B.1 and Lemma B.2, since {fi}ni=1 is a martingale difference sequence in Hk with ‖fi‖Hk ≤ √cq,k almost surely, we have with probability at least 1 − δ,
(1/n²) ∑_{i,j=1}^n ( R̂q(xi, yi) − Rπq(xi) ) k(xi, xj) ( R̂q(xj, yj) − Rπq(xj) ) = ‖ (1/n) ∑_{i=1}^n fi ‖²Hk ≤ 2cq,k log(2/δ) / n.
Using Lemma B.3 below, we have
| LK(q; D̂n) − L∗K(q; D̂n) | ≤ ‖ (1/n) ∑_{i=1}^n fi ‖Hk ≤ √( 2cq,k log(2/δ) / n ).
This completes the proof.
Lemma B.3. Assume k(x, x′) is a positive definite kernel. We have
| LK(q; D̂n) − L∗K(q; D̂n) |² ≤ (1/n²) ∑_{i,j=1}^n ( R̂q(xi, yi) − Rπq(xi) ) k(xi, xj) ( R̂q(xj, yj) − Rπq(xj) ).
Proof. Define
ĝ(·) = (1/n) ∑_{i=1}^n R̂q(xi, yi) k(xi, ·), g(·) = (1/n) ∑_{i=1}^n Rπq(xi) k(xi, ·).
Then we have
‖ĝ‖²Hk = (1/n²) ∑_{i,j=1}^n R̂q(xi, yi) k(xi, xj) R̂q(xj, yj) = LK(q; D̂n)²,
‖g‖²Hk = (1/n²) ∑_{i,j=1}^n Rπq(xi) k(xi, xj) Rπq(xj) = L∗K(q; D̂n)²,
‖ĝ − g‖²Hk = (1/n²) ∑_{i,j=1}^n ( R̂q(xi, yi) − Rπq(xi) ) k(xi, xj) ( R̂q(xj, yj) − Rπq(xj) ).
The result then follows from the triangle inequality | ‖ĝ‖Hk − ‖g‖Hk | ≤ ‖ĝ − g‖Hk.
B.1 CALCULATION OF cqπ,k
The practical calculation of the coefficient cqπ,k in the concentration inequality was discussed in Feng et al. (2020); we include it here for completeness.
Lemma B.4. (Feng et al. (2020), Lemma 3.1) Assume the reward function and the kernel function are bounded, with supx |r(x)| ≤ rmax and supx,x′ |k(x, x′)| ≤ Kmax. Then we have
cqπ,k := sup_{x∈X, y∈Y} (R̂qπ(x, y))² k(x, x) ≤ 4 Kmax r²max / (1 − γ)².
In practice, we get access to Kmax from the kernel function that we choose (e.g., Kmax = 1 for RBF kernels), and rmax from knowledge of the environment." }, { "heading": "C MORE ON THE TIGHTNESS OF THE CONFIDENCE INTERVAL", "text": "The benefit of having both upper and lower bounds is that we can empirically assess the tightness of the bound by checking the length of the interval [F̂−Q(ω−), F̂+Q(ω+)].
However, from the theoretical perspective, it is desirable to know a priori that the length of the interval decreases at a fast rate as the data size n increases. We now show that this is the case if Wo is chosen to be sufficiently rich so that it includes an ω ∈ Wo such that D̂ωn ≈ Dπ.
Theorem C.1. Assume Wo is sufficiently rich to include a “good” ω∗ ∈ Wo with D̂ω∗n ≈ Dπ, in that
sup_{q∈Q} | E_{D̂ω∗n}[R̂q(x; x′, r)] − EDπ[R̂q(x; x′, r)] | ≤ c / nα, (19)
where c and α are two positive coefficients. Then we have
max { Ĵ+Q,W − Jπ, Jπ − Ĵ−Q,W } ≤ c / nα + εn ‖ω∗‖Wo.
Assumption (19) holds if D̂n is collected following a Markov chain that satisfies a certain strong mixing condition and weakly converges to some limit distribution D∞ whose support is X, for which we can define ω∗(x) = Dπ(x)/D∞(x). In this case, if Q is a finite ball in an RKHS, then we can achieve (19) with α = 1/2, which yields an overall bound of rate O(n−1/2). For more general function classes, α depends on the martingale Rademacher complexity of the function set R̂Q = {R̂q(x, y) : q ∈ Q} (Rakhlin et al., 2015). In our empirical results, we observe that the gap of the practically constructed bounds tends to follow the O(n−1/2) rate.
Proof. Note that Jπ = EDπ[r], and
IQ(ω; D̂n) = sup_{q∈Q} { E_{D̂ωn}[γq(x′) − q(x)] − EDπ[γq(x′) − q(x)] }.
Because ω∗ ∈ Wo, we have
Ĵ+Q,W − Jπ ≤ F̂+Q(ω∗) − Jπ
= E_{D̂ω∗n}[r] − EDπ[r] + IQ(ω∗; D̂n) + εn ‖ω∗‖Wo
= sup_{q∈Q} { E_{D̂ω∗n}[ R̂q(x, y) ] − EDπ[ R̂q(x, y) ] } + εn ‖ω∗‖Wo
≤ c / nα + εn ‖ω∗‖Wo.
The case of the lower bound follows similarly.
D OPTIMIZATION ON Wo
Consider the optimization of ω in Wo:
F̂+Q(ω) := (1/n) ∑_{i=1}^n ri ω(xi) + IQ(ω; D̂n) + ‖ω‖Wo √( 2cqπ,k log(2/δ) / n ) (20)
Assume Wo is the RKHS of a kernel k(x, x̄), that is, Wo = Hk. By the finite representer theorem of RKHS (Smola et al., 2007), the optimization of ω in the RKHS Hk can be reduced to a finite-dimensional optimization problem.
Specifically, the optimal solution of (20) can be written into a form of ω(x) = ∑n i=1 k(x, xi)αi with ‖ω‖ 2 Hk = ∑n i,j=1 k(xi, xj)αiαj for some vector α := [αi] n i=1 ∈ Rn. WriteK = [k(xi, xj)]ni,j=1 and r = [ri] n i=1. The optimization of ω reduces to a finite dimensional optimization on α:\nmin α∈Rn\n1 n r>Kα+ IQ(Kα; D̂n) +\n√ αKα\n√ 2cqπ,k log(2/δ)\nn ,\nwhere\nIQ(Kα; D̂n) = max q∈Q\n{ EDπ,0 [q] + 1\nn (T̂q)>Kα\n} ,\nand T̂q = [γq(x′i) − q(xi)]ni=1. When Q is RKHS, we can calculate IQ(Kα; D̂n) using (22) in section F.\nThis computation can be still expensive when n is large. Fortunately, our confidence bound holds for any ω; better ω only gives tighter bounds, but it is not necessary to find the global optimal ω. Therefore, one can use any approximation algorithm to find ω, which provides a trade-off of tightness and computational cost. We discuss two methods:\n1) Approximating α The length of α can be too large when n is large. To address this, we assume αi = g(xi, θ), where g is any parametric function (such as a neural network) with parameter θ which can be much lower dimensional than α. We can then optimize θ with stochastic gradient descent, by approximating all the data averaging 1n ∑n i=1(·) with averages over small mini-batches; this would introduce biases in gradient estimation, but it is not an issue when the goal is only to get a reasonable approximation.\n2) Replacing kernel k Assume the kernel k yields a random feature expansion: k(x, x̄) = Eβ∼π[φ(x, β)φ(x̄, β)], where φ(x, β) is a feature map with parameter β and π is a distribution of β. We draw {βi}mi=1 i.i.d. from π, where m is taken to be much smaller than n. We replace k with k̂(x, x̄) = 1m ∑m i=1 φ(x, βi)φ(x̄, βi) andHk withHk̂, That is, we consider to solve\nĴ+Q,W = min ω∈Hk̂ F̂+Q (ω) := 1n n∑ i=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Hk̂ √ 2cqπ,k̂ log(2/δ) n . 
It is known that any function ω in Hk̂ can be represented as ω(x) = (1/m) ∑_{i=1}^m wi φ(x, βi) for some w = [wi]_{i=1}^m ∈ Rm, and satisfies ‖ω‖²Hk̂ = (1/m) ∑_{i=1}^m w²i. In this way, the problem reduces to optimizing an m-dimensional vector w, which can be solved by standard convex optimization techniques." }, { "heading": "E CONCENTRATION INEQUALITIES OF GENERAL FUNCTIONAL BELLMAN LOSSES", "text": "When W is a general function set, one can still obtain a general concentration bound using Rademacher complexity. Define R̂q ◦ W := {h(x, y) = R̂q(x, y)ω(x) : ω ∈ W}. Using the standard derivation in Rademacher complexity theory in conjunction with martingale theory (Rakhlin et al., 2015), we have
sup_{ω∈W} { (1/n) ∑_{i=1}^n (R̂q(xi, yi) − Rπq(xi)) ω(xi) } ≤ 2 Rad(R̂q ◦ W) + √( 2cq,W log(2/δ) / n ),
where Rad(R̂q ◦ W) is the sequential Rademacher complexity as defined in Rakhlin et al. (2015). A triangle inequality yields
| LW(q; D̂n) − L∗W(q; D̂n) | ≤ sup_{ω∈W} { (1/n) ∑_{i=1}^n (R̂q(xi, yi) − Rπq(xi)) ω(xi) }.
Therefore,
| LW(q; D̂n) − L∗W(q; D̂n) | ≤ 2 Rad(R̂q ◦ W) + √( 2cq,W log(2/δ) / n ), (21)
where cq,W = sup_{ω∈W} sup_{x,y} (R̂q(x, y) − Rπq(x))² ω(x)². When W equals the unit ball K of the RKHS of kernel k, we have cq,k = cq,W, and hence this bound is strictly worse than the bound in Theorem 4.1.
F CLOSED FORM OF IQ(ω; D̂n) WHEN Q IS RKHS
Similar to LK(q; D̂n), when Q is taken to be the unit ball K̃ of the RKHS of a positive definite kernel k̃(x, x̄), (7) can be expressed in a bilinear closed form shown in Mousavi et al. (2020):
IQ(ω; D̂n)² = A − 2B + C, (22)
where
A = E_{(x,x̄)∼Dπ,0×Dπ,0}[ k̃(x, x̄) ],
B = E_{(x,x̄)∼D̂ωn×Dπ,0}[ T̂xπ k̃(x, x̄) ],
C = E_{(x,x̄)∼D̂ωn×D̂ωn}[ T̂xπ T̂x̄π k̃(x, x̄) ],
where T̂πf(x) = γf(x′) − f(x); in T̂xπ T̂x̄π k̃(x, x̄), we apply T̂x̄π and then T̂xπ in sequential order by treating k̃ as a function of x̄ and then of x.
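A minimal sketch of the random-feature construction from Appendix D (method 2): random Fourier features approximate an RBF kernel, ω is parameterized by an m-dimensional weight vector w, and ‖ω‖²Hk̂ = (1/m) ∑ᵢ w²ᵢ. The class and parameter names below are our own illustrative choices, assuming a Gaussian RBF kernel.

```python
import numpy as np

class RandomFeatureOmega:
    # omega(x) = (1/m) sum_i w_i * phi(x, beta_i), where
    # phi(x, beta) = sqrt(2) * cos(beta^T x + b) are random Fourier
    # features whose inner products approximate an RBF kernel.
    def __init__(self, dim, m, bandwidth=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.B = rng.normal(scale=1.0 / bandwidth, size=(m, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=m)
        self.w = np.zeros(m)

    def features(self, X):
        return np.sqrt(2.0) * np.cos(X @ self.B.T + self.b)  # shape (n, m)

    def __call__(self, X):
        # omega evaluated at the n points in X.
        return self.features(X) @ self.w / len(self.w)

    def norm(self):
        # ||omega||_{H_khat} = sqrt( (1/m) sum_i w_i^2 ).
        return float(np.sqrt(np.mean(self.w ** 2)))
```

Plugging omega(x_i) and norm() into the dual objective then reduces the search over Wo to a finite-dimensional convex optimization over w, as described above.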
}, { "heading": "G MORE ON THE ORACLE BOUND AND ITS DUAL FORM", "text": "The oracle bound (18) provides another starting point for deriving optimization-based confidence bounds. We derive its due form here. Using Lagrangian multiplier, the optimization in (18) can be rewritten into\nĴ+Q,∗ = max q∈Q min ω M(q, ω; D̂n), (23)\nwhere M∗(q, ω; D̂n) = EDπ,0 [q]− 1\nn n∑ i=1 ω(xi) ( R̂q(xi, yi)− R̂qπ(xi, yi) ) ,\nwhere ω now serves as the Lagrangian multiplier. By the weak duality, we have J∗Q,+ ≤ F̂+Q,∗(ω) := ED̂ωn [r] + IQ(ω; D̂n)︸ ︷︷ ︸\nknown\n+R(ω, qπ)︸ ︷︷ ︸ unknown , ∀ω.\nand\nR(ω, qπ) = 1\nn n∑ i=1 ω(xi)R̂qπ(xi).\nThe derivation follows similarly for the lower bound. So for any ω ∈ Wo, we have [Ĵ−Q,∗, Ĵ+Q,∗] ⊆ [F̂−Q,∗(ω), F̂ + Q,∗(ω)].\nHere the first two terms of F̂+Q,∗(ω) can be empirically estimated (it is the same as the first two terms of (16)), but the third term R(ω, qπ) depends on the unknown qπ and hence need to be further upper bounded.\nOur method can be viewed as constraining ω inW , which is assumed to be the unit ball ofWo, and applying a worst case bound:\nF̂+Q,∗(ω) := ED̂ωn [r] + IQ(ω; D̂n) +R(ω, qπ), ∀ω ∈ Wo ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖w‖Wo suph∈W R(h, qπ), ∀ω ∈ Wo\n≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖w‖Wo LW(qπ, D̂n), ∀ω ∈ Wo w.p.1−δ ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖w‖Wo , ∀ω ∈ Wo\n= F̂+Q (ω).\nwhere the last step applies the high probability bound that Pr(LW(qπ, D̂n) ≤ ε) ≥ 1− δ. Similar derivation on the lower bound counterpart gives\nPr ([ F̂−Q,∗(ω), F̂ + Q,∗(ω) ] ⊆ [ F̂−Q (ω), F̂ + Q (ω) ]) ≥ 1− δ.\nTherefore, our confidence bound [F̂−Q (ω), F̂ + Q (ω)] is a 1− δ confidence outer bound of the oracle bound [Ĵ−Q,∗, Ĵ + Q,∗] ⊆ [F̂−Q,∗(ω), F̂+Q,∗(ω)].\nIntroducing Q is necessarily Our method does not require any independence assumption between the transition pairs, the trade-off is that that we have to assume that qπ falls into a function set Q that imposes certain smoothness assumption. 
This is necessary because the data only provide information regarding qπ on a finite number of points, and qπ can be arbitrarily non-smooth outside of the data points, and hence no reasonable upper/lower bound can be obtained without any smoothness condition that extend the information on the data points to other points in the domain.\nProposition G.1. Unless Prs∼Dπ,0(s /∈ {si, s′i}ni=1) = 0, for any u ∈ R, there exists a function q : S ×A → R, such that\nEDπ,0 [q] = u, R̂q(xi, yi) = R̂qπ(xi, yi), ∀i = 1, . . . , n.\nProof. Let Qnull be the set of functions that are zero on {si, s′i}ni=1, that is, Qnull = {g : S ×A → R : g(s, a) = 0, ∀s ∈ {si, s′i}ni=1, a ∈ A}.\nThen we have R̂π(qπ + g)(xi, yi) = R̂πqπ(xi, yi), ∀i = 1, . . . , n.\nand EDπ,0 [qπ + g] = EDπ,0[qπ] + EDπ,0 [g] = Jπ + EDπ,0 [g]. Taking g(s, a) = zI(s /∈ {si, s′i}ni=1), where z is any real number. Then we have EDπ,0 [qπ + g] = Jπ + zPrs∼Dπ,0(s /∈ {si, s′i}ni=1).\nBecause Prs∼Dπ,0(s /∈ {si, s′i}ni=1). 6= 0, we can take z to be arbitrary value to make EDπ,0 [qπ + g] to take arbitrary value." }, { "heading": "H ABLATION STUDY AND EXPERIMENTAL DETAILS", "text": "H.1 EXPERIMENTAL DETAILS\nEnvironments and Dataset Construction We test our method on three environments: InvertedPendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator. For Inverted-Pendulum we discretize the action space to be {−1,−0.3,−0.2, 0, 0.2, 0.3, 1}. The action space of CartPole and the medical treatment simulator are both discrete.\nPolicy Construction We follow a similar setup as Feng et al. (2020) to construct behavior and target policies. 
For all of the environments, we constrain our policy class to be softmax policies and use PPO (Schulman et al., 2017) to train a good policy π, and we use different temperatures of the softmax policy to construct the target and behavior policies (we set the temperature to τ = 0.1 for the target policy and τ = 1 for the behavior policy; in this way the target policy is more deterministic than the behavior policy). We consider other choices of behavior policies in Section H.3.
For horizon lengths, we fix γ = 0.95 and set the horizon length to H = 50 for Inverted-Pendulum, H = 100 for CartPole, and H = 50 for the Diabetes simulator.
Algorithm Settings We test the bound in Eq. (16)-(17). Throughout the experiment, we always set W = K, the unit ball of an RKHS with kernel k(·, ·). We set Q = rQ K̃, the zero-centered ball of radius rQ in an RKHS with kernel k̃(·, ·). We take both k and k̃ to be Gaussian RBF kernels. The bandwidths of k and k̃ are selected to make sure the functional Bellman loss is not large on a validation set. The radius is selected to be sufficiently large to ensure that q∗ is included in Q. To ensure a sufficiently large radius, we use the data to approximate a q̂ so that its functional Bellman loss is smaller than εn. Then we set rQ = 10 · ‖q̂‖K̃. We optimize ω using the random feature approximation method described in Appendix D. Once ω+ and ω− are found, we evaluate the bound in Eq. (16) exactly, to ensure the theoretical guarantee holds.
Ideally, we want to ensure that rQ ≥ ‖q∗‖K̃ so that q∗ ∈ Q.\nSince it is hard to analyze the behavior of the algorithm when q∗ is unknown, we consider a synthetic environment where the true q∗ is known. This is done by explicitly specifying a q∗ inside K̃ and then infer the corresponding deterministic reward function r(x) by inverting the Bellman equation:\nr(x) := q∗(x)− γEx′∼Pπ(·|x)[q∗(x′)]. Here r is a deterministic function, instead of a random variable, with an abuse of notation. In this way, we can get access to the true RKHS norm of q∗:\nρ∗ = ‖q∗‖K̃ . For simplicity, we set both the state space S and action space A to be R and set a Gaussian policy π(a|s) ∝ exp(f(s, a)/τ), where τ is a positive temperature parameter. We set τ = 0.1 as target policy and τ = 1 as behavior policy.\nFigure 3 shows the results as we set rQ to be ρ∗, 10ρ∗ and 100ρ∗, respectively. We can see that the tightness of the bound is affected significantly by the radius when the number n of samples is very small. However, as the number n of samples grow (e.g., n ≥ 2× 103 in our experiment), the length of the bounds become less sensitive to the changing of the predefined norm of Q.\nSimilarity Between Behavior Policy and Target Policy We study the performance of changing temperature of the behavior policy. We test on Inverted-Pendulum environment as previous described. Not surprisingly, we can see that the closer the behavior policy to the target policy (with temperature τ = 0.1), the tighter our confidence interval will be, which is observed in Figure 4(a).\nBandwidth of RBF kernels We study the results as we change the bandwidth in kernel k and k̃ forW and Q, respectively. Figure 4(b) shows the length of the confidence interval when we use different bandwidth pairs in the Inverted-Pendulum environment. 
We can see that we get relatively tight confidence bounds as long as we set the bandwidth in a reasonable region (e.g., we set the bandwidth of k in [0.1, 0.5], the bandwidth of k̃ in [0.5, 3]).\nH.3 SENSITIVITY TO THE DATA COLLECTION PROCEDURE\nWe investigate the sensitivity of our method as we use different behavior policies to collect the dataset D̂n.\nVarying Behavior Policies We study the effect of using different behavior policies. We consider the following cases:\n1. Data is collected from a single behavior policy of form πα = απ + (1− α)π0, where π is the target policy and π0 is another policy. We construct π and π0 to be Gaussian policies of form π(a|s) ∝ exp(f(s, a)/τ) with different temperature τ , where temperature for target policy is τ = 0.1 and temperature for π0 is τ = 1.\n2. The dataset D̂n is the combination of the data collected from multiple behavior policies of form πα defined as above, with α ∈ {0.0, 0.2, 0.4, 0.6, 0.8}.\nWe show in Figure 5(a) that the length of the confidence intervals by our method as we vary the number n of transition pairs and the mixture rate α. We can see that the length of the interval decays with the sample size n for all mixture rate α. Larger α yields better performance because the behavior policies are closer to the target policy.\nVarying Trajectory Length T in D̂n As we collect D̂n, we can either have a small number of long trajectories, or a larger number of short trajectories. In Figure 5(b)-(c), we vary the length T of the trajectories as we collect D̂n, while fixing the total number n of transition pairs. In this way, the number of trajectories in each D̂n would be m = n/T . We can see that the trajectory length does not impact the results significantly, especially when the length is reasonably large (e.g., T ≥ 20)." 
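The policy construction used throughout these ablations (softmax policies with different temperatures, and the mixtures πα = απ + (1 − α)π0) can be sketched as follows; the function names and the toy logits are our own illustrative assumptions.

```python
import numpy as np

def softmax_policy(logits, tau):
    # pi(a|s) proportional to exp(f(s, a) / tau); a smaller tau gives a
    # more deterministic policy (tau = 0.1 for the target, 1.0 for pi_0).
    z = logits / tau
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def mixture_policy(pi_target, pi_0, alpha):
    # pi_alpha = alpha * pi + (1 - alpha) * pi_0.
    return alpha * pi_target + (1.0 - alpha) * pi_0
```

Mixing probabilities (rather than logits) keeps πα a valid distribution for any α ∈ [0, 1], and larger α moves the behavior policy closer to the target.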
}, { "heading": "I MORE RELATED WORKS", "text": "We give a more detailed overview of different approaches for uncertainty estimation in OPE.\nFinite-Horizon Importance Sampling (IS) Assume the data is collected by rolling out a known behavior policy π0 up to a trajectory length T; then we can estimate the finite-horizon reward by changing Eπ,P[·] to Eπ0,P[·] with importance sampling (e.g., Precup et al., 2000; Precup, 2001; Thomas et al., 2015a;b). Taking trajectory-wise importance sampling as an example, assume we collect a set of independent trajectories τi := {sit, ait, rit}T−1t=0 , i = 1, . . . ,m up to a trajectory length T by unrolling a known behavior policy π0. When T is large, we can estimate J∗ by a weighted average:\nĴ^IS = (1/m) ∑_{i=1}^{m} ω(τi) J(τi), where ω(τi) = ∏_{t=0}^{T−1} π(a_t^i | s_t^i) / π0(a_t^i | s_t^i) and J(τi) = ∑_{t=0}^{T−1} γ^t r_t^i. (24)\nOne can construct non-asymptotic confidence bounds based on Ĵ^IS using variants of concentration inequalities (Thomas, 2015; Thomas et al., 2015b). Unfortunately, a key problem with this IS estimator is that the importance weight ω(τi) is a product of the density ratios over time, and hence tends to cause an explosion in variance when the trajectory length T is large. Although improvements can be made by using per-step and self-normalized weights (Precup, 2001), or control variates (Jiang & Li, 2016; Thomas & Brunskill, 2016), the curse of horizon remains a key issue for the classical IS-based estimators (Liu et al., 2018a).\nMoreover, due to the time dependency between the transition pairs inside each trajectory, the non-asymptotic concentration bounds can only be applied at the trajectory level and hence decay with the number m of independent trajectories at an O(1/√m) rate, though m can be small in practice. 
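Concretely, the trajectory-wise estimator Ĵ^IS in Eq. (24) is just a weighted average of discounted returns; a minimal sketch (the tabular policy representation and trajectory format are our simplification):

```python
import numpy as np

def is_estimate(trajectories, pi, pi0, gamma):
    """Trajectory-wise IS (Eq. 24): each trajectory is a list of (s, a, r)
    triples; pi and pi0 give action probabilities, here as tabular [s, a] arrays."""
    total = 0.0
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi[s, a] / pi0[s, a]  # product of T density ratios: the variance driver
            ret += gamma ** t * r           # discounted return J(tau_i)
        total += weight * ret
    return total / len(trajectories)
```

When pi equals pi0, every weight is 1 and the estimate reduces to the average discounted return; as T grows, the product of per-step ratios is exactly what causes the variance explosion discussed above.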
We could in principle apply concentration inequalities for Markov chains (e.g., Paulin, 2015) to the time-dependent transition pairs, but such inequalities require an upper bound on a certain mixing coefficient of the Markov chain, which is unknown and hard to estimate empirically. Our work addresses these limitations by constructing a non-asymptotic bound that decays with the number n = mT of transition pairs, without requiring known behavior policies or independent trajectories.\nInfinite-Horizon, Behavior-Agnostic OPE Our work is closely related to the recent advances in infinite-horizon and behavior-agnostic OPE, including, for example, Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020); Liu et al. (2020); Yang et al. (2020b); Xie et al. (2019); Yin & Wang (2020), as well as the DICE family (e.g., Nachum et al., 2019a;b; Zhang et al., 2020a; Wen et al., 2020; Zhang et al., 2020b). These methods are based on estimating either the value function or the stationary visitation distribution, which are known to form a primal-dual relation (Tang et al., 2020a; Uehara et al., 2020; Jiang & Huang, 2020) that we elaborate on in depth in Section 3.\nBesides Feng et al. (2020), which directly motivated this work, there has been a recent surge of interest in interval estimation under infinite-horizon OPE (e.g., Liu et al., 2018b; Jiang & Huang, 2020; Duan et al., 2020; Dai et al., 2020; Feng et al., 2020; Tang et al., 2020b; Yin et al., 2020; Lazic et al., 2020). For example, Dai et al. (2020) develop an asymptotic confidence bound (CoinDice) for DICE estimators under an i.i.d. assumption on the off-policy data; Duan et al. 
(2020) provide data-dependent confidence bounds based on Fitted Q-Iteration (FQI) with linear function approximation when the off-policy data consists of a set of independent trajectories; Jiang & Huang (2020) provide a minimax method closely related to ours but do not provide an analysis of the data error; Tang et al. (2020b) propose a fixed-point algorithm for constructing deterministic intervals of the true value function when the reward and transition models are deterministic and the true value function has a bounded Lipschitz norm.\nModel-Based Methods Since the model P is the only unknown variable, we can construct an estimator P̂ of P using maximum likelihood estimation or other methods, and plug it into Eq. (1) to obtain a plug-in estimator Ĵ = Jπ,P̂. This yields the model-based approach to OPE (e.g., Jiang & Li, 2016; Liu et al., 2018b). One can also estimate the uncertainty in Jπ,P̂ by propagating the uncertainty in P̂ (e.g., Asadi et al., 2018; Duan et al., 2020), but it is hard to obtain non-asymptotic and computationally efficient bounds unless P̂ is assumed to be a simple linear model. In general, estimating the whole model P can be an unnecessarily complicated intermediate step for the possibly simpler problem of estimating Jπ,P.\nBootstrapping, Bayes, Distributional RL As a general approach to uncertainty estimation, bootstrapping has been used for interval estimation in RL in various ways (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020; Hao et al., 2021). Bootstrapping is simple and highly flexible, and can be applied to time-dependent data (as appears in RL) using variants of block bootstrapping methods (e.g., Lahiri, 2013; White & White, 2010). 
However, bootstrapping typically only provides asymptotic guarantees; although non-asymptotic bounds for the bootstrap exist (e.g., Arlot et al., 2010), they are sophisticated, difficult to use in practice, and would require knowledge of the mixing condition of the dependent data. Moreover, bootstrapping is time-consuming, since it requires repeating the whole off-policy evaluation pipeline on a large number of resampled datasets.\nBayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016b; Yang et al., 2020a) offer another general approach to uncertainty estimation in RL, but require approximate inference algorithms and do not come with non-asymptotic frequentist guarantees. In addition, distributional RL (e.g., Bellemare et al., 2017) seeks to quantify the intrinsic uncertainties inside the Markov decision process, which is orthogonal to the epistemic uncertainty that we consider in off-policy evaluation." } ]
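For reference, the simplest (percentile) form of the bootstrap mentioned above, applied at the trajectory level, looks like this; block-bootstrap variants for time-dependent data resample contiguous blocks instead of single trajectories. The function is our generic sketch, not code from any of the cited works:

```python
import numpy as np

def bootstrap_ci(per_traj_estimates, n_boot=2000, alpha=0.1, seed=0):
    """Percentile bootstrap: resample whole trajectories with replacement,
    recompute the estimate, and take empirical quantiles of the replicates."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(per_traj_estimates, dtype=float)
    means = np.array([rng.choice(vals, size=len(vals), replace=True).mean()
                      for _ in range(n_boot)])
    return np.quantile(means, alpha / 2), np.quantile(means, 1 - alpha / 2)
```

Even in this simple form, the cost issue in the text is visible: each replicate reruns the estimator, so a full OPE pipeline would be repeated n_boot times.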
2021
null
SP:dd6a80c29d23d8ee356c637fff15c8d28956fba3
[ "The authors focus on the class imbalance problem and propose an algorithm named ROGA to generate samples of minority classes to balance the quantitative difference between classes. Specifically, the proposed ROGA generates samples of minority classes by using a Genetic Algorithm (GA) to explore the sample space. Moreover, to reduce the noise samples generated by ROGA, the authors propose to calculate the fitness as the Gaussian similarity with the surrounding samples and eliminate the samples with low fitness. My detailed comments are as follows.", "In this work, the authors propose an oversampling technique which creates a population of synthetic samples for an imbalanced dataset using genetic algorithms. Each individual in the GA population corresponds to a synthetic sample, and the fitness function is based on the similarity (in feature space) of the synthetic sample compared to nearby samples of the same and different classes; standard crossover and mutation operators are used. In a limited set of experiments, the proposed approach sometimes outperforms competing approaches." ]
When using machine learning to solve practical tasks, we often face the problem of class imbalance. Unbalanced classes cause the model to develop a preference during learning, thereby ignoring the classes with fewer samples. Oversampling algorithms balance the difference in class sizes by generating minority-class samples, and the quality of these artificial samples determines the impact of oversampling on model training. A key challenge for an oversampling algorithm is therefore to find a suitable sample generation space. Strong constraints can keep the generated samples from becoming noise points, but at the same time they limit the search space, which hinders the discovery of higher-quality new samples. To address this problem, we propose ROGA, an oversampling algorithm based on a genetic algorithm. Starting from random sampling, new samples are gradually generated and samples that may become noise are filtered out. ROGA keeps the sample generation space as wide as possible while also reducing the number of noisy samples generated. Experiments on multiple datasets verify that ROGA achieves good results.
[]
[ { "authors": [ "Lida Abdi", "Sattar Hashemi" ], "title": "To combat multi-class imbalanced problems by means of oversampling techniques", "venue": "Soft Computing,", "year": 2015 }, { "authors": [ "Md. Adnan Arefeen", "Sumaiya Tabassum Nimi", "Mohammad Sohel Rahman" ], "title": "Neural networkbased undersampling techniques", "venue": "IEEE Transactions on Systems, Man, and Cybernetics: Systems,", "year": 2020 }, { "authors": [ "Jerzy Bala", "J Huang", "H Vafaie", "K Dejong", "Harry Wechsler" ], "title": "Hybrid learning using genetic algorithms and decision trees for pattern classification", "venue": null, "year": 1995 }, { "authors": [ "Colin Bellinger", "Nathalie Japkowicz", "Chris Drummond" ], "title": "Synthetic oversampling for advanced radioactive threat detection", "venue": null, "year": 2015 }, { "authors": [ "Chumphol Bunkhumpornpat", "Krung Sinapiromsaran", "Chidchanok Lursinsap" ], "title": "Safe-level-smote: Safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem", "venue": null, "year": 2009 }, { "authors": [ "Chumphol Bunkhumpornpat", "Krung Sinapiromsaran", "Chidchanok Lursinsap" ], "title": "Dbsmote: Density-based synthetic minority over-sampling technique", "venue": "Applied Intelligence,", "year": 2012 }, { "authors": [ "Nitesh V. Chawla", "Kevin W. Bowyer", "Lawrence O. Hall", "W. Philip Kegelmeyer" ], "title": "Smote: Synthetic minority over-sampling technique", "venue": "Journal of Artificial Intelligence Research,", "year": 2002 }, { "authors": [ "Nitesh V. Chawla", "Aleksandar Lazarevic", "Lawrence O. Hall", "Kevin W. 
Bowyer" ], "title": "Smoteboost: Improving prediction of the minority class in boosting", "venue": "In European Conference on Knowledge Discovery in Databases: Pkdd,", "year": 2003 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": null, "year": 2016 }, { "authors": [ "Barnan Das", "Narayanan C Krishnan", "Diane J Cook" ], "title": "Racog and wracog: Two probabilistic oversampling techniques", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2015 }, { "authors": [ "Yanjie Dong", "Xuehua Wang" ], "title": "A new over-sampling approach: Random-smote for learning from imbalanced data sets. In Knowledge Science, Engineering and Management ", "venue": "5th International Conference,", "year": 2011 }, { "authors": [ "Georgios Douzas", "Fernando Bacao" ], "title": "Geometric smote: Effective oversampling for imbalanced learning through a geometric extension of smote", "venue": "arXiv: Learning,", "year": 2017 }, { "authors": [ "Georgios Douzas", "Fernando Bacao" ], "title": "Self-organizing map oversampling (somo) for imbalanced data set learning", "venue": "Expert Systems With Applications,", "year": 2017 }, { "authors": [ "Georgios Douzas", "Fernando Bacao" ], "title": "Improving imbalanced learning through a heuristic oversampling method based on k-means and smote", "venue": "Information Sciences,", "year": 2018 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "C.M. 
Fonseca", "Peter John Fleming" ], "title": "Genetic algorithms for multiobjective optimization: Formulationdiscussion and generalization", "venue": null, "year": 1993 }, { "authors": [ "Vicente Garcia", "Javier Salvador Sanchez", "Raul Martinfelez", "Ramon A Mollineda" ], "title": "Surrounding neighborhood-based smote for learning from imbalanced data sets", "venue": "Progress in Artificial Intelligence,", "year": 2012 }, { "authors": [ "Haibo He", "E. A Garcia" ], "title": "Learning from imbalanced data", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2009 }, { "authors": [ "Haibo He", "Yang Bai", "E A Garcia", "Shutao Li" ], "title": "Adasyn: Adaptive synthetic sampling approach for imbalanced learning", "venue": null, "year": 2008 }, { "authors": [ "John H Holland" ], "title": "Adaptation in natural and artificial systems", "venue": null, "year": 1975 }, { "authors": [ "Abdollah Homaifar", "Charlene X Qi", "Steven H Y Lai" ], "title": "Constrained optimization via genetic algorithms", "venue": null, "year": 1994 }, { "authors": [ "Shengguo Hu", "Yanfeng Liang", "Lintao Ma", "Ying He" ], "title": "Msmote: Improving classification performance when training data is imbalanced", "venue": null, "year": 2009 }, { "authors": [ "A. José", "Sáez", "Julián", "Luengo", "Jerzy", "Stefanowski", "Francisco", "Herrera" ], "title": "Smote–ipf: Addressing the noisy and borderline examples problem in imbalanced classification by a re-sampling method with filtering", "venue": "Information Sciences,", "year": 2015 }, { "authors": [ "Dong Seong Kim", "Ha Nam Nguyen", "Jong Sou Park" ], "title": "Genetic algorithm to improve svm based network intrusion detection system", "venue": "In Advanced Information Networking and Applications,", "year": 2005 }, { "authors": [ "Fajri Koto" ], "title": "Smote-out, smote-cosine, and selected-smote: An enhancement strategy to handle imbalance in data level", "venue": null, "year": 2014 }, { "authors": [ "S.B. 
Kotsiantis", "D. Kanellopoulos", "P.E. Pintelas" ], "title": "Handling imbalanced datasets: A review", "venue": null, "year": 2005 }, { "authors": [ "Micha Koziarski" ], "title": "Radial-based undersampling for imbalanced data classification", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "K. Krishna", "M. Narasimha Murty" ], "title": "Genetic k-means algorithm", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),", "year": 1999 }, { "authors": [ "Xiangrui Li", "Dongxiao Zhu" ], "title": "Crcen: A generalized cost-sensitive neural network approach for imbalanced classification", "venue": null, "year": 2019 }, { "authors": [ "Y. Lin" ], "title": "Support vector machines for classification in nonstandard situations", "venue": "Machine Learning,", "year": 2021 }, { "authors": [ "Ying Liu", "Jianxin Wu", "Zhi Hua Zhou" ], "title": "Exploratory undersampling for class-imbalance", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2005 }, { "authors": [ "Ma", "Suohai Fan" ], "title": "Cure-smote algorithm and hybrid algorithm for feature selection", "venue": "IEEE Transactions on Systems Man and Cybernetics Part B,", "year": 2009 }, { "authors": [ "Hien M. Nguyen", "Eric W. Cooper", "Katsuari Kamei" ], "title": "Borderline over-sampling for imbalanced", "venue": "RSCTC", "year": 2010 }, { "authors": [ "John Wiley", "Sons", "Inc", "1999. 
William A Rivera", "Amit Goel", "J Peter Kincaid" ], "title": "Oups: A combined approach using smote", "venue": null, "year": 2014 }, { "authors": [ "Jerzy Stefanowski", "Szymon Wilk" ], "title": "Selective pre-processing of imbalanced data for improving", "venue": null, "year": 2008 }, { "authors": [ "Nele Verbiest", "Enislay Ramentol", "Chris Cornelis", "Francisco Herrera" ], "title": "Improving smote with", "venue": null, "year": 2012 }, { "authors": [ "Tianlun Zhang", "Xi Yang" ], "title": "G-smote: A gmm-based synthetic minority oversampling technique", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "When modeling a classification problem, balanced classes ensure that information is balanced during the learning process, but classes are often imbalanced in practical tasks (Kotsiantis et al. (2005); He & Garcia (2009)), which leads the machine learning model to prefer the majority class. To solve this problem, a commonly used approach applies an oversampling algorithm to increase the number of minority class samples and close the gap between the minority and majority classes. The quality of the generated samples therefore determines the training quality of the model after oversampling, but this effect is difficult to characterize. In past studies, oversampling methods could only estimate which samples were likely to be noise, such as overlapping samples or outliers, and eliminate them as much as possible, to ensure that the generated samples do not make training the model more difficult.\nSMOTE (Chawla et al. (2002)) and its derivative algorithms generate new samples by interpolation. There has been a lot of work on selecting the samples to be interpolated, the number of interpolations, and the interpolation method, with the common goal of reducing the generation of noise samples by restricting the interpolation position. Some researchers have pointed out that sampling should not be performed in the original sample space but in a projection to another space (Wang et al. (2007), Zhang & Yang (2018)). In addition, some researchers have proposed sampling from the minority class distribution (Bellinger et al. (2015), Das et al. (2015)) to ensure distributional consistency. In all cases, reducing the generation of noise samples requires restricting the space in which samples are generated; that is, the oversampling algorithm can only sample within a limited range.\nHowever, focusing only on noise generation may not achieve better results. 
Too strong constraints limit the sample generation space, and the generated samples fall into a local optimum, making it impossible to find samples that are more conducive to model training. Therefore, to relax the limitation on the sample generation space while still reducing noise as much as possible, this paper proposes ROGA, an oversampling algorithm based on a genetic algorithm. Before the first iteration, ROGA randomly samples the feature space to generate a set of artificial samples as the basic population for the next iteration of the genetic algorithm. Random sampling increases the noise among the samples, but it also prevents ROGA from getting stuck in a local optimum. To judge the noise, ROGA calculates the fitness of each sample from its Gaussian similarity to its surrounding neighbors; the fitness measures how unlikely the sample is to be noise. In the iterative process, new samples are generated through crossover and mutation operations, and samples with lower fitness are gradually removed so as to reduce noise.\nROGA generates new samples across the entire feature space, mainly because the initial population is drawn by random sampling from the entire feature space. The crossover and mutation operations of the genetic algorithm continuously modify the samples in the population in search of samples with higher fitness values, and the screening mechanism eliminates noisy samples from the population. ROGA can therefore balance a wide generation space against noise generation. Through experiments on multiple datasets, we find that ROGA achieves the best F1-scores on several datasets." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 CLASS IMBALANCE", "text": "Data-based methods are represented by oversampling and undersampling. 
In order to balance the gap in the number of samples, oversampling generates new minority class samples, while undersampling (Arefeen et al. (2020); Koziarski (2020)) removes redundant majority class samples.\nAlgorithm-based methods improve the model's learning algorithm to make it more suitable for training on unbalanced data. The first improvement is a cost-sensitive loss function that balances the loss contributed by the majority and minority classes. Cost-sensitivity can be combined with various machine learning algorithms, such as Adaboost (Wei (1999)), SVM (Lin (2002)), Decision Tree (Ling et al. (2005)), cross entropy (Li & Zhu (2019)), and so on. The second improvement is ensemble learning. Liu et al. (2009) proposed EasyEnsemble and BalanceCascade, which perform under-sampling multiple times and ensemble multiple models. Some researchers have also combined the ideas of Bagging (Wang & Yao (2009)) and Boosting (Chawla et al. (2003)) with SMOTE." }, { "heading": "2.2 OVERSAMPLING", "text": "Oversampling algorithms generate new minority class samples to balance the quantitative difference between classes. Current oversampling algorithms can be divided into random sampling methods, interpolation methods, distribution sampling methods and copy-replacement methods.\nRandom sampling methods sample randomly in the feature space, so the generated samples can easily deviate from the distribution of the original samples and produce noisy data. Interpolation oversampling algorithms, represented by SMOTE (Chawla et al. (2002)), restrict sample generation to the segment between two samples and generate new minority class samples through linear interpolation. On the basis of the Chawla et al. 
(2002) algorithm, a large number of improved algorithms have appeared.\nFirst, in order to further reduce the generation of noise, improvements can be made in the selection of the samples to be interpolated, the interpolation weights of the samples, and the position of the interpolation. SMOTE-Borderline (Nguyen et al. (2011)) argues that the minority class samples located at the decision boundary should be selected, and some researchers argue that interpolation should be performed within minority class sample clusters produced by a clustering algorithm, as in KMeans-SMOTE (Douzas & Bacao (2018)), CURE-SMOTE (Ma & Fan (2017)), DBSMOTE (Bunkhumpornpat et al. (2012)), SOMO (Douzas & Bacao (2017b)), etc. ADASYN (He et al. (2008)), SMOTE-D (Torres et al. (2016)) and MSMOTE (Hu et al. (2009)) reduce the number of interpolations for some samples by weighting the samples. Safe-level SMOTE (Bunkhumpornpat et al. (2009)) generates samples closer to regions dense in same-class samples. In addition to controlling noise during the generation stage, noise can also be filtered out of the samples after generation, as in SMOTE-FRST (Verbiest et al. (2012)), SMOTE-IPF (José et al. (2015)), MDO (Abdi & Hashemi (2015)), etc.\nSecond, the above methods all use linear interpolation when generating samples, but linear interpolation has limitations. Therefore, improved algorithms that generate samples by nonlinear interpolation have appeared. Random-SMOTE (Dong & Wang (2011)) performs interpolation in a triangular area formed by three samples, and Geometric SMOTE (Douzas & Bacao (2017a)) generates samples in a geometric area instead of on a line segment.\nThird, a more appropriate generation space can be chosen, such as a projection to a low-dimensional space (Wang et al. (2007)) or a high-dimensional space (Zhang & Yang (2018)). 
Fourth, when selecting neighbor samples, other indicators can be used to measure the distance between samples, such as cosine distance (Koto (2014)), propensity score (Rivera et al. (2014)), surrounding neighborhoods (Garcia et al. (2012)), etc.\nDistribution sampling methods learn the distribution function or density function of the minority class samples and then sample from that distribution to obtain new artificial samples. DEAGO (Bellinger et al. (2015)) generates samples with a denoising autoencoder, while RACOG and WRACOG (Das et al. (2015)) use a dependency tree algorithm to estimate the discrete probability distribution and then generate artificial samples from it through Gibbs sampling. Copy-replacement methods copy safe minority class samples and eliminate unsafe majority class samples (Stefanowski & Wilk (2008), Napierala et al. (2010))." }, { "heading": "2.3 GENETIC ALGORITHM", "text": "The genetic algorithm (GA) is a bio-inspired algorithm proposed by Holland (Holland (1975)) in 1975. Its main idea is to simulate the inheritance and mutation of genes during natural evolution, and to control the direction of evolution of candidate solutions through a fitness function. As a heuristic algorithm, it is often used for optimization problems (Rahmatsamii & Michielssen (1999), Fonseca & Fleming (1993), Wright (1990), Homaifar et al. (1994)), using the entire problem space as the search space and searching for the best value in that space. Genetic algorithms are also often used to optimize machine learning algorithms, such as SVM (Kim et al. (2005)), K-means (Krishna & Narasimha Murty (1999)), Decision Tree (Bala et al. (1995)), neural networks (Montana (1989)), etc.\nThe genetic algorithm relies on randomness in the optimization process, which means that the speed of finding the optimal solution is unstable, but with an appropriate fitness function the genetic algorithm does not get stuck in a local solution. 
In the process of solving, because it does not rely on additional knowledge, its search range is wide. It is for this reason that we believe that genetic algorithms can find useful patterns that have not yet been discovered." }, { "heading": "3 ROGA", "text": "" }, { "heading": "3.1 GA", "text": "" }, { "heading": "3.1.1 BASIC POPULATION", "text": "In the first iteration, this paper randomly generates N artificial samples as the basic population. In each subsequent iteration, the artificial samples generated in the previous iteration are used as the basic population for that iteration." }, { "heading": "3.1.2 FITNESS", "text": "The fitness of a sample mainly reflects how well it can be attributed to the minority class. The fitness value of a sample is calculated from its Gaussian similarity to the surrounding samples:\nfitness(xi) = ∑_{j=1}^{K} yj · sim(xi, xj) (1)\nsim(xi, xj) = exp(−‖xi − xj‖² / (2σ²)), where ‖xi − xj‖ = √(∑_{k=1}^{n} (xik − xjk)²) (2)\nxi is the sample whose fitness is currently being computed. K is the number of neighbors, set to 5 in this paper. yj ∈ {−1, 1}: if xi and xj belong to the same class, then yj = 1; if they belong to different classes, then yj = −1. σ is the bandwidth of the Gaussian kernel and is set to 1 in this paper." }, { "heading": "3.1.3 SELECTION", "text": "Before each crossover and mutation operation, M candidates need to be selected from the population based on fitness. The candidates for the crossover and mutation operations are the M samples with the lowest fitness in the population, and M must be an even number. In the traditional genetic algorithm, the crossover operation selects the samples with the highest fitness in the population, hoping that the excellent genes will be inherited by the offspring. However, when generating minority class samples, individual feature values cannot be mapped to the quality of the sample. 
Therefore, it is impossible to achieve the purpose of inheriting excellent genes when performing crossover operations; on the contrary, doing so would degrade the samples with high fitness in the population. Therefore, this paper only selects samples with low fitness for the crossover and mutation operations and retains the samples with high fitness in the current population." }, { "heading": "3.1.4 CROSSOVER", "text": "Each crossover operation generates two new samples from two candidate samples. For each pair of candidate samples (xi, xj), where each sample contains m feature values xi = {fi1, fi2, . . . , fim}, p feature values are randomly selected. For each selected pair of features fik and fjk, new feature values are generated by linear interpolation, as shown in Eq. 3 and Eq. 4:\nf′ik = α ∗ fik + (1 − α) ∗ fjk (3)\nf′jk = α ∗ fjk + (1 − α) ∗ fik (4)\nα ∈ [0, 1] is randomly generated." }, { "heading": "3.1.5 MUTATION", "text": "Each mutation operation yields one new sample. For each candidate sample xi = {fi1, fi2, . . . , fim}, p feature values are randomly selected, and each selected feature value is multiplied by a randomly generated β ∈ [0, 2] to produce the feature value of the new sample at that position, as shown in Eq. 5. The unselected feature values directly inherit the feature values of the original sample:\nf′ik = β ∗ fik (5)" }, { "heading": "3.1.6 UPDATE", "text": "After the crossover and mutation operations, 3M new samples are obtained. After calculating the fitness of all samples, the N samples with the highest fitness are selected as the basic population for the next iteration." }, { "heading": "3.2 ROGA", "text": "The ROGA algorithm is divided into two stages, a random sampling stage and a genetic algorithm stage. The pseudo code is shown in Algorithm 1. In the random sampling stage, random sampling is performed in the feature space to obtain the required initial population P. 
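The fitness (Eqs. 1–2) and the crossover and mutation operators (Eqs. 3–5) described above can be sketched as follows; the function names and array layout are our own illustration, not the authors' code:

```python
import numpy as np

def fitness(x, X, y, k=5, sigma=1.0):
    """Eqs. (1)-(2): signed Gaussian similarity to the K nearest samples;
    same-class neighbors (y=+1) add to fitness, other-class ones (y=-1) subtract."""
    d = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(d)[:k]
    return float(np.sum(y[nn] * np.exp(-d[nn] ** 2 / (2 * sigma ** 2))))

def crossover(xi, xj, p, rng):
    """Eqs. (3)-(4): linear interpolation on p randomly chosen features."""
    a, b = xi.copy(), xj.copy()
    idx = rng.choice(len(xi), size=p, replace=False)
    alpha = rng.random()
    a[idx] = alpha * xi[idx] + (1 - alpha) * xj[idx]
    b[idx] = alpha * xj[idx] + (1 - alpha) * xi[idx]
    return a, b

def mutate(x, p, rng):
    """Eq. (5): scale p randomly chosen features by beta ~ U[0, 2]."""
    out = x.copy()
    idx = rng.choice(len(x), size=p, replace=False)
    out[idx] *= rng.uniform(0.0, 2.0)
    return out
```

A sample sitting inside a minority-class cluster thus gets high fitness, while one surrounded by majority-class samples gets a negative score and is a removal candidate.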
At this stage, the number of samples in the population needs to be calculated first. In this paper, the size of the basic population Nsyn is set to the difference in quantity between the majority class samples and the minority class samples. Random sampling then draws, for each feature f ∈ {f1, f2, f3, . . . , fn}, a value within that feature's range of values. In the genetic algorithm stage, new artificial samples are generated in each iteration from the current basic population, and artificial samples with low fitness values are eliminated. First, the candidate samples for the crossover and mutation operations are determined: the fitness of each sample in the current population is computed in the sample space through Eq. 1 and Eq. 2, and the M samples with the lowest fitness are selected. M is set to Nsyn/2 in this paper. After performing crossover and mutation operations on the candidate samples, 3M new samples P′ are obtained, and the Nsyn samples with the highest fitness are selected from P ∪ P′ as the basic population for the next iteration.\nAlgorithm 1 ROGA\nRequire: X: all original samples; Y: all labels; K: number of neighbors; σ: Gaussian kernel bandwidth; T: number of iterations\nEnsure: P: synthetic samples\n1: Nsyn = number of synthetic samples\n2: P = RandomSampling(Nsyn)\n3: for i ∈ [1, T] do\n4: for x ∈ P do\n5: Find the K nearest neighbors, and compute fitness by Eq. 1 and Eq. 2\n6: end for\n7: Candidates = select the M samples with the lowest fitness from P\n8: SynSamples_cross = CrossoverOperation(Candidates)\n9: SynSamples_mutation = Mutation(Candidates)\n10: P′ = SynSamples_cross ∪ SynSamples_mutation\n11: P = Update(P, P′)\n12: end for" }, { "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 DATASET", "text": "The datasets come from the UCI database (Dua & Graff (2017)), and the detailed information of the datasets is shown in Table 1. The classification goals of all datasets are binary classification problems. 
If the goal of a dataset is a multi-classification problem, a certain class is set as the target class and the remaining categories are grouped as the non-target class, converting it into a binary classification problem. Each dataset is divided into an 80% training set and a 20% test set." }, { "heading": "4.2 EVALUATION", "text": "For a binary classification problem, a confusion matrix can be obtained from the results, as shown in Table 2. TP is the number of positive samples predicted as positive, FP the number of negative samples predicted as positive, FN the number of positive samples predicted as negative, and TN the number of negative samples predicted as negative. The confusion matrix can be used to calculate the accuracy, precision, recall and F1-score.\nTable 2: Confusion Matrix\n| | Predicted Positive | Predicted Negative |\n| Positive | TP | FN |\n| Negative | FP | TN |\nAccuracy can be defined as:\nAccuracy = (TP + TN) / (TP + TN + FP + FN) (6)\nPrecision can be defined as:\nPrecision = TP / (TP + FP) (7)\nRecall can be defined as:\nRecall = TP / (TP + FN) (8)\nF1-score can be defined as:\nF1 = 2 · Recall · Precision / (Recall + Precision) (9)" }, { "heading": "4.3 PERFORMANCE", "text": "This paper compares ROGA with six settings: Baseline, SMOTE (Chawla et al. (2002)), SMOTE-Borderline1 (Nguyen et al. (2011)), ADASYN (He et al. (2008)), KMeans-SMOTE (Douzas & Bacao (2018)), and SMOTE-NC (Chawla et al. (2002)). Baseline means that no oversampling algorithm is used. The experimental results are shown in Table 3. Due to the large fluctuations in ROGA's experimental results, the reported results for ROGA are the best among 20 runs. The evaluation metric used in the table is the F1-score, and the classification algorithm is XGBoost (Chen & Guestrin (2016)). For each experiment, the hyperparameters of the classification algorithm are fixed. 
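As a quick sanity check, the metrics in Eqs. (6)–(9) follow directly from the confusion-matrix counts; the numbers below are arbitrary:

```python
# Toy confusion-matrix counts (our own numbers, purely for illustration).
TP, FN, FP, TN = 30, 10, 5, 55

accuracy  = (TP + TN) / (TP + TN + FP + FN)   # Eq. (6)
precision = TP / (TP + FP)                    # Eq. (7)
recall    = TP / (TP + FN)                    # Eq. (8)
f1 = 2 * recall * precision / (recall + precision)  # Eq. (9)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
# 0.85 0.857 0.75 0.8
```

Note that under class imbalance the accuracy can look high even for a trivial majority-class predictor, which is why the F1-score is used as the main metric here.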
The remaining metrics are presented in the appendix. † denotes the best score on each dataset. From Table 3, it can be seen that ROGA achieves the best results on most datasets, and on some datasets, such as ecoli, the improvement is substantial. Both ROGA and the other oversampling algorithms measure noise against the data distribution in order to remove it, but ROGA's wider generation range produces samples that are more conducive to model training.\nAt the same time, it is also observed that on some datasets the artificial samples generated by the oversampling algorithms lead to lower scores on the test set. This shows that the constrained generation space of these algorithms does not suit the distribution of every dataset: it can only balance the class sizes and cannot benefit the training of the model. The limited generation range also prevents the oversampling algorithm from escaping the current local optimum. To avoid this kind of solidification, ROGA performs random sampling over the entire sample space before the first iteration. This random sampling allows ROGA to move away from the current local optimum and achieve better results. From the results on the isolet dataset, it can be seen that the comparison oversampling algorithms score below the baseline on the test set, while ROGA improves the model." }, { "heading": "4.4 LIMITATION", "text": "ROGA generates the basic population through random sampling, which on the one hand expands the sample generation space, but on the other increases uncertainty. Therefore, the artificial samples generated by ROGA are not fixed, and compared with other oversampling algorithms this uncertainty is more pronounced.\nFigure 1 shows that the ROGA test results are unstable. The uncertainty introduced during initial population generation may yield a better model after training, or may not improve the model noticeably.
Therefore, when using ROGA to address class imbalance, it is necessary to run multiple experiments to find the artificial samples that perform best on a given metric. Since the influence of artificial samples cannot be well characterized in advance, the traditional oversampling paradigm that tries to compensate for class imbalance with a single experiment is unrealistic; repeated sampling and evaluation make better use of oversampling." }, { "heading": "5 CONCLUSION", "text": "To avoid generating noise, current oversampling algorithms sample within a limited generation space. However, this limited space can cause an oversampling algorithm to fall into a local optimum, so that it cannot effectively generate artificial samples that benefit model learning. To balance a wide generation space against noise generation, this paper proposes the ROGA algorithm. ROGA uses random sampling to generate the initial population, ensuring a wide generation space; new artificial samples are then continuously generated by a genetic algorithm, and noise points in the population are eliminated according to fitness. Experiments show that ROGA achieves the best F1-score on multiple datasets. A wide generation space therefore helps an oversampling algorithm collect more high-quality samples, which not only balances the class sizes but also benefits model learning." } ]
2020
null
SP:14c8ee6cc91f94c22b9cd29c98be73e551677937
[ "This paper addresses challenges faced by the multi-task learning (MTL) models used to analyze multimodal conversational data. The main challenge the paper tries to solve is how to select relevant auxiliary tasks while avoiding negative transfer. The authors explore how the preprocessed data used for feature engineering can be re-used as auxiliary tasks in the model. They identify sixteen relevant auxiliary tasks, a method to distribute learning capacity between primary and auxiliary tasks, and a relative supervision hierarchy between primary and auxiliary tasks. An extensive set of experiments is conducted to show the effectiveness of the approach. ", "This paper studies how preprocessed data can be reused as auxiliary tasks in primary multi-task learning (MTL) for the multimodal emotion detection task. The authors propose and test three hypotheses for primary MTL. Two models at different hierarchical levels, the FLAT-MTL hierarchical attention model and the HAN-Rock model, are proposed to improve the performance of the primary MTL. " ]
Conversational analysis systems are trained using noisy human labels and often require heavy preprocessing during multi-modal feature extraction. Using noisy labels in single-task learning increases the risk of over-fitting. However, auxiliary tasks can improve the performance of primary task learning, an approach known as Primary Multi-Task Learning (MTL). A challenge of MTL is the selection of beneficial auxiliary tasks that avoid negative transfer. In this paper, we explore how the preprocessed data used for feature engineering can be re-used as auxiliary tasks in Primary MTL, thereby promoting the productive use of data in the form of auxiliary supervised learning. Our main contributions are: (1) the identification of sixteen beneficial auxiliary tasks, (2) a method of distributing learning capacity between the primary and auxiliary tasks, and (3) a relative supervision hierarchy between the primary and auxiliary tasks. Extensive experiments on IEMOCAP and SEMAINE data validate the improvements over single-task approaches, and suggest that the approach may generalize across multiple primary tasks.
[]
[ { "authors": [ "Udit Arora", "William Scott Paka", "Tanmoy Chakraborty" ], "title": "Multitask learning for blackmarket tweet detection", "venue": "In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Tadas Baltrušaitis", "Peter Robinson", "Louis-Philippe Morency" ], "title": "Openface: an open source facial behavior analysis toolkit", "venue": "In Applications of Computer Vision (WACV),", "year": 2016 }, { "authors": [ "Peter Bell", "Pawel Swietojanski", "Steve Renals" ], "title": "Multitask learning of context-dependent targets in deep neural network acoustic models", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2016 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Johannes Bjerva" ], "title": "Will my auxiliary tagging task help? 
estimating auxiliary tasks effectivity in multitask learning", "venue": "In Proceedings of the 21st Nordic Conference on Computational Linguistics,", "year": 2017 }, { "authors": [ "Stefano B Blumberg", "Ryutaro Tanno", "Iasonas Kokkinos", "Daniel C Alexander" ], "title": "Deeper image quality transfer: Training low-memory neural networks for 3d images", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2018 }, { "authors": [ "Carlos Busso", "Murtaza Bulut", "Chi-Chun Lee", "Abe Kazemzadeh", "Emily Mower", "Samuel Kim", "Jeannette N Chang", "Sungbok Lee", "Shrikanth S Narayanan" ], "title": "IEMOCAP: Interactive emotional dyadic motion capture database", "venue": "Language resources and evaluation,", "year": 2008 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Rich Caruana", "Virginia R De Sa" ], "title": "Promoting poor features to supervisors: Some inputs work better as outputs", "venue": "In Advances in Neural Information Processing Systems,", "year": 1997 }, { "authors": [ "Rich Caruana", "Virginia R de Sa" ], "title": "Using feature selection to find inputs that work better as extra outputs", "venue": "In International Conference on Artificial Neural Networks,", "year": 1998 }, { "authors": [ "Rich Caruana", "Shumeet Baluja", "Tom Mitchell" ], "title": "Using the future to “sort out” the present: Rankprop and multitask learning for medical risk evaluation", "venue": "In Advances in neural information processing systems,", "year": 1996 }, { "authors": [ "Jonathan Chang", "Stefan Scherer" ], "title": "Learning representations of emotional speech with deep convolutional generative adversarial networks", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Dongpeng Chen", "Brian Kan-Wing Mak" ], "title": "Multitask learning of deep neural networks for 
lowresource speech recognition", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2015 }, { "authors": [ "Dongpeng Chen", "Brian Mak", "Cheung-Chi Leung", "Sunil Sivadas" ], "title": "Joint acoustic modeling of triphones and trigraphemes by multi-task learning deep neural networks for low-resource speech recognition", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2014 }, { "authors": [ "Hao Cheng", "Hao Fang", "Mari Ostendorf" ], "title": "Open-domain name error detection using a multitask rnn", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Maximilian Christ", "Nils Braun", "Julius Neuffer", "Andreas W" ], "title": "Kempa-Liehr. Time series feature extraction on basis of scalable hypothesis tests (tsfresh–a python package)", "venue": null, "year": 2018 }, { "authors": [ "Gregory F Cooper", "Vijoy Abraham", "Constantin F Aliferis", "John M Aronis", "Bruce G Buchanan", "Richard Caruana", "Michael J Fine", "Janine E Janosky", "Gary Livingston", "Tom Mitchell" ], "title": "Predicting dire outcomes of patients with community acquired pneumonia", "venue": "Journal of biomedical informatics,", "year": 2005 }, { "authors": [ "Yongping Du", "Yunpeng Pan", "Junzhong Ji" ], "title": "A novel serial deep multi-task learning model for large scale biomedical semantic indexing", "venue": "IEEE International Conference on Bioinformatics and Biomedicine (BIBM),", "year": 2017 }, { "authors": [ "Rosenberg Ekman" ], "title": "What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS)", "venue": null, "year": 1997 }, { "authors": [ "Anna Fariha" ], "title": "Automatic image captioning using multitask learning", "venue": "Proceedings of Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jose Maria Garcia-Garcia", "Victor MR 
Penichet", "Maria D Lozano" ], "title": "Emotion detection: a technology review", "venue": "In Proceedings of the XVIII international conference on human computer interaction,", "year": 2017 }, { "authors": [ "Joumana Ghosn", "Yoshua Bengio" ], "title": "Multi-task learning for stock selection", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Daniel Golovin", "Benjamin Solnik", "Subhodeep Moitra", "Greg Kochanski", "John Karro", "D Sculley" ], "title": "Google vizier: A service for black-box optimization", "venue": "In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2017 }, { "authors": [ "Ting Gong", "Tyler Lee", "Cory Stephenson", "Venkata Renduchintala", "Suchismita Padhy", "Anthony Ndirango", "Gokce Keskin", "Oguz H Elibol" ], "title": "A comparison of loss weighting strategies for multi task learning in deep neural networks", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Judith A Hall", "Debra L Roter", "Danielle C Blanch", "Richard M Frankel" ], "title": "Observer-rated rapport in interactions between medical students and standardized patients", "venue": "Patient Education and Counseling,", "year": 2009 }, { "authors": [ "Kaveh Hassani", "Mike Haley" ], "title": "Unsupervised multi-task feature learning on point clouds", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Devamanyu Hazarika", "Soujanya Poria", "Rada Mihalcea", "Erik Cambria", "Roger Zimmermann" ], "title": "Icon: Interactive conversational memory network for multimodal emotion detection", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Devamanyu Hazarika", "Soujanya Poria", "Amir Zadeh", "Erik Cambria", "Louis-Philippe Morency", "Roger Zimmermann" ], "title": "Conversational memory network for emotion recognition in dyadic 
dialogue videos", "venue": "In Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Mohammed Hoque", "Matthieu Courgeon", "Jean-Claude Martin", "Bilge Mutlu", "Rosalind W Picard" ], "title": "Mach: My automated conversation coach", "venue": "In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing,", "year": 2013 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Dehao Chen", "Mia Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V Le", "Yonghui Wu" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Gareth James", "Daniela Witten", "Trevor Hastie", "Robert Tibshirani" ], "title": "An introduction to statistical learning, volume", "venue": null, "year": 2013 }, { "authors": [ "Gail Jefferson" ], "title": "Glossary of transcript symbols with an introduction", "venue": "Pragmatics and Beyond New Series,", "year": 2004 }, { "authors": [ "Joshua Y Kim", "Rafael A Calvo", "Kalina Yacef", "NJ Enfield" ], "title": "A review on dyadic conversation visualizations-purposes, data, lens of analysis", "venue": "arXiv preprint arXiv:1905.00653,", "year": 2019 }, { "authors": [ "Joshua Y Kim", "Greyson Y Kim", "Kalina Yacef" ], "title": "Detecting depression in dyadic conversations with multimodal narratives and visualizations", "venue": "In Australasian Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": 
"Physical review E,", "year": 2004 }, { "authors": [ "Kalpesh Krishna", "Shubham Toshniwal", "Karen Livescu" ], "title": "Hierarchical multitask learning for ctcbased speech recognition", "venue": "arXiv preprint arXiv:1807.06234,", "year": 2018 }, { "authors": [ "Siddique Latif", "Rajib Rana", "Sara Khalifa", "Raja Jurdak", "Julien Epps", "Bjórn Wolfgang Schuller" ], "title": "Multi-task semi-supervised adversarial autoencoding for speech emotion recognition", "venue": "IEEE Transactions on Affective Computing,", "year": 2020 }, { "authors": [ "Yann LeCun", "Yoshua Bengio" ], "title": "Convolutional networks for images, speech, and time series", "venue": "The handbook of brain theory and neural networks,", "year": 1995 }, { "authors": [ "Giwoong Lee", "Eunho Yang", "Sung Hwang" ], "title": "Asymmetric multi-task learning based on task relatedness and loss", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Hae Beom Lee", "Eunho Yang", "Sung Ju Hwang" ], "title": "Deep asymmetric multi-task feature learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yuanchao Li", "Tianyu Zhao", "Tatsuya Kawahara" ], "title": "Improved end-to-end speech emotion recognition using self attention mechanism and multitask learning", "venue": "In Interspeech,", "year": 2019 }, { "authors": [ "Zachary C Lipton", "David C Kale", "Charles Elkan", "Randall Wetzel" ], "title": "Learning to diagnose with lstm recurrent neural networks", "venue": "arXiv preprint arXiv:1511.03677,", "year": 2015 }, { "authors": [ "Shengchao Liu", "Yingyu Liang", "Anthony Gitter" ], "title": "Loss-balanced task weighting to reduce negative transfer in multi-task learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "Ssd: 
Single shot multibox detector", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Navonil Majumder", "Soujanya Poria", "Devamanyu Hazarika", "Rada Mihalcea", "Alexander Gelbukh", "Erik Cambria" ], "title": "Dialoguernn: An attentive rnn for emotion detection in conversations", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Gary McKeown", "Michel Valstar", "Roddy Cowie", "Maja Pantic", "Marc Schroder" ], "title": "The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent", "venue": "IEEE transactions on affective computing,", "year": 2011 }, { "authors": [ "Afonso Menegola", "Michel Fornaciali", "Ramon Pires", "Flávia Vasques Bittencourt", "Sandra Avila", "Eduardo Valle" ], "title": "Knowledge transfer for melanoma screening with deep learning", "venue": "IEEE 14th International Symposium on Biomedical Imaging (ISBI", "year": 2017 }, { "authors": [ "Trisha Mittal", "Uttaran Bhattacharya", "Rohan Chandra", "Aniket Bera", "Dinesh Manocha" ], "title": "M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Taylor Mordan", "Nicolas Thome", "Gilles Henaff", "Matthieu Cord" ], "title": "Revisiting multi-task learning with rock: a deep residual auxiliary block for visual detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Iftekhar Naim", "M Iftekhar Tanveer", "Daniel Gildea", "Mohammed Ehsan Hoque" ], "title": "Automated prediction and analysis of job interview performance: The role of what you say and how you say it", "venue": "In 2015 11th IEEE international conference and workshops on automatic face and gesture recognition (FG),", "year": 2015 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], 
"title": "Deep double descent: Where bigger models and more data hurt", "venue": null, "year": 1912 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Soujanya Poria", "Navonil Majumder", "Rada Mihalcea", "Eduard Hovy" ], "title": "Emotion recognition in conversation: Research challenges, datasets, and recent advances", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Adriana Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "Fitnets: Hints for thin deep nets", "venue": "arXiv preprint arXiv:1412.6550,", "year": 2014 }, { "authors": [ "Brian C Ross" ], "title": "Mutual information between discrete and continuous data sets", "venue": "PloS one,", "year": 2014 }, { "authors": [ "Najmeh Sadoughi", "Carlos Busso" ], "title": "Expressive speech-driven lip movements with multitask learning", "venue": "IEEE International Conference on Automatic Face & Gesture Recognition (FG", "year": 2018 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Multi-task learning as multi-objective optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wei Shen", "Xiaonan He", "Chuheng Zhang", "Qiang Ni", "Wanchun Dou", "Yan Wang" ], "title": "Auxiliary-task based deep reinforcement learning for participant selection problem in mobile crowdsourcing", 
"venue": "In Proceedings of the 29th ACM International Conference on Information & Knowledge Management,", "year": 2020 }, { "authors": [ "Jaak Simm", "Ildefons Magrans de Abril", "Masashi Sugiyama" ], "title": "Tree-based ensemble multi-task learning method for classification and regression", "venue": "IEICE TRANSACTIONS on Information and Systems,", "year": 2014 }, { "authors": [ "Anders Søgaard", "Yoav Goldberg" ], "title": "Deep multi-task learning with low level tasks supervised at lower layers", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2016 }, { "authors": [ "M Iftekhar Tanveer", "Emy Lin", "Mohammed Hoque" ], "title": "Rhema: A real-time in-situ intelligent interface to help people with public speaking", "venue": "In Proceedings of the 20th International Conference on Intelligent User Interfaces,", "year": 2015 }, { "authors": [ "Fei Tao", "Carlos Busso" ], "title": "End-to-end audiovisual speech recognition system with multitask learning", "venue": "IEEE Transactions on Multimedia,", "year": 2020 }, { "authors": [ "Lisa Torrey", "Jude Shavlik" ], "title": "Transfer learning. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pp. 
242–264", "venue": "IGI global,", "year": 2010 }, { "authors": [ "Trieu H Trinh", "Andrew M Dai", "Minh-Thang Luong", "Quoc V Le" ], "title": "Learning longer-term dependencies in rnns with auxiliary losses", "venue": "arXiv preprint arXiv:1803.00144,", "year": 2018 }, { "authors": [ "Sen Wu", "Hongyang R Zhang", "Christopher Ré" ], "title": "Understanding and improving information transfer in multi-task learning", "venue": "arXiv preprint arXiv:2005.00944,", "year": 2020 }, { "authors": [ "Rui Xia", "Yang Liu" ], "title": "A multi-task learning framework for emotion recognition using 2d continuous space", "venue": "IEEE Transactions on affective computing,", "year": 2015 }, { "authors": [ "Jianliang Yang", "Yuenan Liu", "Minghui Qian", "Chenghua Guan", "Xiangfei Yuan" ], "title": "Information extraction from electronic medical records using multitask recurrent neural network with contextual word embedding", "venue": "Applied Sciences,", "year": 2019 }, { "authors": [ "Le Yang", "Dongmei Jiang", "Lang He", "Ercheng Pei", "Meshia Cédric Oveneke", "Hichem Sahli" ], "title": "Decision tree based depression classification from audio video and language information", "venue": "In Proceedings of the 6th international workshop on audio/visual emotion challenge,", "year": 2016 }, { "authors": [ "Min Yang", "Wei Zhao", "Wei Xu", "Yabing Feng", "Zhou Zhao", "Xiaojun Chen", "Kai Lei" ], "title": "Multitask learning for cross-domain image captioning", "venue": "IEEE Transactions on Multimedia,", "year": 2018 }, { "authors": [ "Zichao Yang", "Diyi Yang", "Chris Dyer", "Xiaodong He", "Alex Smola", "Eduard Hovy" ], "title": "Hierarchical attention networks for document classification", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "ByungIn Yoo", "Youngjun Kwak", "Youngsung Kim", "Changkyu Choi", "Junmo Kim" ], "title": "Deep facial age 
estimation using conditional multitask learning with weak label expansion", "venue": "IEEE Signal Processing Letters,", "year": 2018 }, { "authors": [ "Abdallah Yousif", "Zhendong Niu", "Ally S Nyamawe" ], "title": "Citation classification using multitask convolutional neural network model", "venue": "In International Conference on Knowledge Science, Engineering and Management,", "year": 2018 }, { "authors": [ "Jianfei Yu", "Jing Jiang" ], "title": "Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification", "venue": "Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "Amir Zadeh", "Paul Pu Liang", "Navonil Mazumder", "Soujanya Poria", "Erik Cambria", "LouisPhilippe Morency" ], "title": "Memory fusion network for multi-view sequential learning", "venue": "arXiv preprint arXiv:1802.00927,", "year": 2018 }, { "authors": [ "Amir Zadeh", "Paul Pu Liang", "Soujanya Poria", "Prateek Vij", "Erik Cambria", "Louis-Philippe Morency" ], "title": "Multi-attention recurrent network for human communication comprehension", "venue": "In Proceedings of the... AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Nasser Zalmout", "Nizar Habash" ], "title": "Adversarial multitask learning for joint multi-feature and multidialect morphological modeling", "venue": "arXiv preprint arXiv:1910.12702,", "year": 2019 }, { "authors": [ "Fengda Zhu", "Yi Zhu", "Xiaojun Chang", "Xiaodan Liang" ], "title": "Vision-language navigation with selfsupervised auxiliary reasoning tasks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The sharp increase in the use of video-conferencing creates both a need and an opportunity to better understand these conversations (Kim et al., 2019a). In post-event applications, analyzing conversations can give feedback to improve communication skills (Hoque et al., 2013; Naim et al., 2015). In real-time applications, such systems can be useful in legal trials, public speaking, e-health services, and more (Poria et al., 2019; Tanveer et al., 2015).\nAnalyzing conversations requires both human expertise and a lot of time, which is what many multimodal conversational analysis systems try to solve with automation. However, to build such systems, analysts often require a training set annotated by humans (Poria et al., 2019). The annotation process is costly, thereby limiting the amount of labeled data. Moreover, third-party annotations of emotions are often noisy. Noisy data coupled with limited labeled data increases the chance of overfitting (James et al., 2013).\nFrom the perspective of feature engineering for analyzing video-conferences, analysts often employ pre-built libraries (Baltrušaitis et al., 2016; Vokaturi, 2019) to extract multimodal features as inputs to training. This preprocessing phase is often computationally heavy, and the resulting features are only used as inputs. In this paper, we investigate how the preprocessed data can be re-used as auxiliary tasks in Primary Multi-Task Learning (MTL), thereby promoting a more productive use of data in the form of auxiliary supervised learning. Specifically, our main contributions are (1) the identification of beneficial auxiliary tasks, (2) the method of distributing learning capacity between the primary and auxiliary tasks, and (3) the relative supervision hierarchy between the primary and auxiliary tasks.
We demonstrate the value of our approach by predicting emotions on two publicly available datasets, IEMOCAP (Busso et al., 2008) and SEMAINE (McKeown et al., 2011)." }, { "heading": "2 RELATED WORKS AND HYPOTHESES", "text": "Multitask learning has a long history in machine learning (Caruana, 1997). In this paper, we focus on Primary MTL, a less commonly discussed subfield within MTL (Mordan et al., 2018). Primary MTL is concerned with the performance on one (primary) task – the sole motivation for adding auxiliary tasks is to improve the primary task performance.\nIn recent years, primary MTL has been gaining attention in computer vision (Yoo et al., 2018; Fariha, 2016; Yang et al., 2018; Mordan et al., 2018; Sadoughi & Busso, 2018), speech recognition (Krishna et al., 2018; Chen & Mak, 2015; Tao & Busso, 2020; Bell et al., 2016; Chen et al., 2014), and natural language processing (NLP) (Arora et al., 2019; Yousif et al., 2018; Zalmout & Habash, 2019; Yang et al., 2019; Du et al., 2017). The benefit of adding multiple tasks is the inductive bias provided through multiple noisy supervision signals (Caruana, 1997; Lipton et al., 2015; Ghosn & Bengio, 1997). On the other hand, adding multiple tasks increases the risk of negative transfer (Torrey & Shavlik, 2010; Lee et al., 2016; 2018; Liu et al., 2019; Simm et al., 2014), which leads to many design considerations. Two such considerations are identifying (a) which tasks are beneficial and (b) how much of the model parameters to share between the primary and auxiliary tasks.
In addition, because we are performing Primary MTL, we have the third consideration of (c) whether we should prioritize primary supervision by giving it a higher hierarchy than the auxiliary supervision.\nIn contrast with previous MTL works, our approach (a) identifies sixteen beneficial auxiliary targets, (b) dedicates a primary-specific branch within the network, and (c) investigates the efficacy and generalization of prioritizing primary supervision across eight primary tasks.\nSince our input representation is fully text-based, we dive deeper into primary MTL in the NLP community. Regarding model architecture designs for primary MTL in NLP, Søgaard & Goldberg (2016) found that lower-level tasks like part-of-speech tagging are better kept at the lower layers, enabling higher-level tasks like Combinatory Categorial Grammar tagging to use these lower-level representations. In our approach, the model hierarchy is not based on the difficulty of the tasks; more simply, we prioritize the primary task. Regarding identifying auxiliary supervisors in NLP, existing works have included tagging the input text (Zalmout & Habash, 2019; Yang et al., 2019; Søgaard & Goldberg, 2016). Text classification with auxiliary supervisors has included research article classification (Du et al., 2017; Yousif et al., 2018) and tweet classification (Arora et al., 2019). There is a large body of work in multimodal sentiment analysis, but not in the use of multimodal auxiliary supervisors, as detailed in the next paragraph.\nMultimodal analysis of conversations has been gaining attention in deep learning research, particularly for emotion recognition in conversations (Poria et al., 2019). The methods of the past three years have been intelligently fusing numeric vectors from the text, audio, and video modalities before feeding them to downstream layers.
This approach is seen in MFN (Zadeh et al., 2018a), MARN (Zadeh et al., 2018b), CMN (Hazarika et al., 2018b), ICON (Hazarika et al., 2018a), DialogueRNN (Majumder et al., 2019), and M3ER (Mittal et al., 2020). Our approach is different in two ways. (1) Our audio and video information is encoded within text before feeding only the text as input. Having only text as input has the benefits of interpretability and the ability to present the conversational analysis on paper (Kim et al., 2019b). This is similar to how the linguistics community performs manual conversational analysis using the Jefferson transcription system (Jefferson, 2004), where the transcripts are marked up with symbols indicating how the speech was articulated. (2) Instead of using the audio and video information only as inputs to a Single Task Learning (STL) model, the contribution of this paper is that we demonstrate how to use multimodal information both as inputs and as auxiliary supervisors that provide inductive bias helping the primary task.\nHypothesis H1: The introduced set of auxiliary supervision features improves primary MTL. We introduce and motivate the full set of sixteen auxiliary supervisions, all based on existing literature: these are grouped into four families, each with four auxiliary targets. The four families are (1) facial action units, (2) prosody, (3) historical labels, and (4) future labels: (1) Facial action units, from the Facial Action Coding System, identify universal facial expressions of emotions (Ekman, 1997). In particular, AU 05, 17, 20, 25 have been shown to be useful in detecting depression (Yang et al., 2016a; Kim et al., 2019b) and rapport-building (Anonymous, 2021). (2) Prosody, the tone of voice – happiness, sadness, anger, and fear – can project warmth and attitudes (Hall et al., 2009), and has been used as inputs in emotion detection (Garcia-Garcia et al., 2017).
(3 and 4) Using features at different historical time-points is a common practice in statistical learning, especially in time-series modelling (Christ et al., 2018). Lastly, predicting future labels as auxiliary tasks can help in learning (Caruana et al., 1996; Cooper et al., 2005; Trinh et al., 2018; Zhu et al., 2020; Shen et al., 2020). Inspired by their work, we propose using historical and future\n(up to four talkturns ago or later) target labels as auxiliary targets. Although historical and future target labels can be used as a pre-training objective and fine-tuned on the current target label, sequential transfer learning is not the focus of this paper.\nGiven that we are extracting actions and prosody families as inputs, we propose to explore whether they can be reused as supervisors (see Fig. 1). Our hypothesis H1 is that re-using them as auxiliary supervision improves primary MTL. This is related to using hints in the existing MTL literature, where the auxiliary tasks promote the learning of the feature (Cheng et al., 2015; Yu & Jiang, 2016).\nHypothesis H2: When the primary branch is given maximum learning capacity, it would not be outperformed by models whose primary branch has less than the maximum learning capacity. Deeper models with higher learning capacity produce better results (Huang et al., 2019; Nakkiran et al., 2019; Menegola et al., 2017; Blumberg et al., 2018; Romero et al., 2014). Also, since the auxiliary branch is shared with the primary supervision, the auxiliary capacity should be limited to enforce information transfer and improve performance (Wu et al., 2020). Therefore, given a fixed learning capacity budget, our hypothesis H2 implies that we should allocate the maximum learning capacity to the primary branch because we care only about the primary task performance.\nHypothesis H3: Auxiliary supervision at the lower hierarchy yields better primary MTL as compared to flat-MTL. 
Having the auxiliary tasks at the same supervisory level as the primary task is inherently sub-optimal because we care only about the performance of the primary task (Mordan et al., 2018). To prioritize the primary task, we could change the model architecture such that the auxiliary supervision is at a lower hierarchy than the primary supervision, as discussed in the next section." }, { "heading": "3 MODEL ARCHITECTURE", "text": "" }, { "heading": "3.1 FLAT-MTL HIERARCHICAL ATTENTION MODEL", "text": "We start with an introduction of the Hierarchical Attention Model (HAN) (Yang et al., 2016b). We chose HAN because of its easy interpretability, as it only uses single-head attention layers. There are four parts to the HAN model: (1) text input, (2) word encoder, (3) talkturn encoder, and (4) the predictor. In our application, we perform our predictions at the talkturn-level for both IEMOCAP and SEMAINE. For notation, let si represent the i-th talkturn and wit represent the t-th word in the i-th talkturn. Each talkturn can contain up to T words, and each input talkturn can contain up to L past talkturns to give content context (discussed in section 4.2).\nText Input Given a talkturn of words, we first convert the words into vectors through an embedding matrix We and the word selection one-hot vector, wit.\nWord encoder The word encoder comprises bidirectional GRUs (Bahdanau et al., 2014) and a single-head attention layer to aggregate word embeddings into talkturn embeddings. Given the vectors xit, the bidirectional GRU reads the words from left to right as well as from right to left (as indicated by the direction of the GRU arrows) and concatenates the two hidden states together to form hit. We then aggregate the hidden states into one talkturn embedding through the attention mechanism. uit is the hidden state from feeding hit into a one-layer perceptron (with weights Ww and biases bw). 
The attention weight (αit) given to uit is the softmax-normalized weight of the similarity between itself (uit) and uw, which are all randomly initialized and learnt jointly.\nxit = We wit, t ∈ [1, T]\n→hit = →GRU(xit), t ∈ [1, T]\n←hit = ←GRU(xit), t ∈ [T, 1]\nhit = (→hit, ←hit)\nuit = relu(Ww hit + bw)\nαit = exp(uit⊤uw) / Σt exp(uit⊤uw)\nsi = Σt αit uit\nTalkturn encoder With the current and past talkturn embeddings (content context, discussed in section 4.2), the talkturn encoder aggregates them into a single talkturn representation (v) in a similar fashion, as shown below.\n→hi = →GRU(si), i ∈ [1, L]\n←hi = ←GRU(si), i ∈ [L, 1]\nhi = (→hi, ←hi)\nui = relu(Ws hi + bs)\nαi = exp(ui⊤us) / Σi exp(ui⊤us)\nv = Σi αi ui\nThe simplest way of adding the sixteen auxiliary task predictors (four auxiliary targets from each of the four families of auxiliary supervision) would be to append them to where the primary task predictor is, as illustrated in Fig. 2. That way, all predictors use the same representation v. We refer to this architecture as flat-MTL because the auxiliary supervision is at the same level as the primary supervision. We are unable to test H2 and H3 using this architecture." }, { "heading": "3.2 HAN-ROCK", "text": "We adapted1 the ROCK architecture (Mordan et al., 2018), which was built for Convolutional Neural Networks (LeCun et al., 1995) found in ResNet-SSD (He et al., 2016; Liu et al., 2016), to suit the GRUs (Bahdanau et al., 2014) found in HAN (Yang et al., 2016b) (see Fig. 3).\nTo study H3, we bring the auxiliary task predictors forward, so that the auxiliary supervision is of a lower hierarchy than primary supervision (see Fig. 3). 
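The single-head attention aggregation used by the word and talkturn encoders above can be sketched as follows. This is a minimal numpy sketch; the dimensions and randomly initialized parameters are illustrative placeholders, not the paper's actual settings.

```python
import numpy as np

def attention_pool(h, W, b, u_ctx):
    """Additive single-head attention over a sequence of hidden states.

    h     : (T, d) concatenated bidirectional-GRU hidden states
    W, b  : one-layer perceptron parameters, shapes (d_att, d) and (d_att,)
    u_ctx : (d_att,) learned context vector (u_w / u_s in the equations)
    """
    u = np.maximum(0.0, h @ W.T + b)       # u_t = relu(W h_t + b)
    scores = u @ u_ctx                     # similarity u_t^T u_ctx
    alpha = np.exp(scores - scores.max())  # softmax (max-shifted for stability)
    alpha = alpha / alpha.sum()
    return alpha @ u                       # weighted sum: s = sum_t alpha_t u_t

rng = np.random.default_rng(0)
T, d, d_att = 7, 16, 8                     # hypothetical sizes
h = rng.normal(size=(T, d))
s = attention_pool(h, rng.normal(size=(d_att, d)), np.zeros(d_att), rng.normal(size=d_att))
print(s.shape)  # (8,)
```

The same pooling is applied at the word level (over T words) and at the talkturn level (over L talkturns), with separate parameters per encoder.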
It is of a lower hierarchy because the backpropagation from the primary supervision is able to temper the backpropagation from the auxiliary supervision, but not vice-versa.\n1Implementation available at https://github.com/anonymous/placeholder; Please see attached supplementary material for implementation details during the review phase.\nThis also sets us up to study H2, the impact of distributing different learning capacities to the auxiliary and primary branches. Each of the auxiliary tasks has its own talkturn encoder but shares one word encoder in the auxiliary branch (to keep the network small). Subscript a indicates whether the word encoder is for the primary or auxiliary branch:\nxit = We wit, t ∈ [1, T]\n→hait = →GRUa(xit), t ∈ [1, T], a ∈ {pri, aux}\n←hait = ←GRUa(xit), t ∈ [T, 1], a ∈ {pri, aux}\nhait = (→hait, ←hait)\nuait = relu(Waw hait + baw)\nαait = exp(uait⊤uaw) / Σt exp(uait⊤uaw)\nsai = Σt αait uait\nEach task has its own talkturn encoder. Subscript b indicates which of the seventeen tasks – the primary talkturn task or one of the sixteen auxiliary tasks – the talkturn encoder is dedicated to:\n→habi = →GRUab(sai), i ∈ [1, L], a ∈ {pri, aux}, b ∈ {pri, aux1, aux2, ..., aux16}\n←habi = ←GRUab(sai), i ∈ [L, 1], a ∈ {pri, aux}, b ∈ {pri, aux1, aux2, ..., aux16}\nhabi = (→habi, ←habi)\nuabi = relu(Wb habi + bb)\nαabi = exp(uabi⊤ub) / Σi exp(uabi⊤ub)\nvab = Σi αabi uabi\nThe seventeen talkturn embeddings (vab) go through a concatenation, then a single-head attention, aggregating talkturn embeddings across the seventeen tasks into one talkturn embedding for the primary task predictor. 
Subscript c pertains to the fusion module.\nconcatenation: vc = (vab), a ∈ {pri, aux}, b ∈ {pri, aux1, aux2, ..., aux16}\nattention: αc = exp(vc⊤uc) / Σc exp(vc⊤uc)\noverall primary talkturn vector: v = Σc αc vc" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATA AND PRIMARY TASKS", "text": "We validate our approach using two datasets with a total of eight primary tasks: the IEMOCAP (Busso et al., 2008) and the SEMAINE (McKeown et al., 2011) datasets. Both datasets are used in multimodal emotion detection research (Poria et al., 2019). We divided the datasets into train, development, and test sets in an approximate 60/20/20 ratio such that the sets do not share any speaker (Appendix A.1 details the splits).\nThe target labels of the eight primary tasks are all at the talkturn-level. The four primary tasks of IEMOCAP consist of the four-class emotions classification (angry, happy, neutral, sad) and three regression problems – valence (1-negative, 5-positive), activation (1-calm, 5-excited), and dominance (1-weak, 5-strong). The four-class emotions classification target is common for IEMOCAP (Latif et al., 2020; Xia & Liu, 2015; Li et al., 2019; Hazarika et al., 2018b; Mittal et al., 2020), albeit not universal. Some researchers have gone up to five-class (Chang & Scherer, 2017), six-class (Majumder et al., 2019; Hazarika et al., 2018a), or nine-class emotions classification targets (Zadeh et al., 2018a).\nFor SEMAINE, there are four regression problems – activation, intensity, power, valence. We note that the valence, power, and activation tasks might be related across the two datasets, but cross-domain learning is beyond the scope of this paper. We use two standard evaluation metrics: mean absolute error (MAE) and 4-class weighted mean classification accuracy, MA(4)." }, { "heading": "4.2 INPUT", "text": "Multimodal feature extraction is computed using the MONAH framework (Anonymous, 2021). 
This framework uses a variety of pre-trained models to extract nine multimodal features, associated with the prosody of the speech and the actions of the speaker, and weaves them into a multimodal text narrative. We refer the reader to Anonymous (2021) for the details and efficacy of the MONAH framework. The benefit of the created narrative is that it describes what is said together with how it is said for each talkturn, giving richer nonverbal context to the talkturn (see Fig. 4 for an example). Being fully text-based means that the analysis product can be printed out on paper, without the need for speakers or monitors to replay the conversation on a computer.\nIn addition to nonverbal context, we concatenated a variable number of preceding talkturns to the current talkturn as content context. Content context has been proven to be useful in CMN (Hazarika et al., 2018b), ICON (Hazarika et al., 2018a), and DialogueRNN (Majumder et al., 2019). The content-context size is tuned as a hyperparameter. The resulting multimodal text narrative, consisting of both nonverbal and content context, is used as the sole input to the model. The impact of progressively removing nonverbal and content context is not a contribution of this paper and hence not analyzed." }, { "heading": "4.3 AUXILIARY TARGETS", "text": "We first clarify the method of extraction for the auxiliary families. The OpenFace algorithm (Baltrušaitis et al., 2016) is used to extract the four continuous facial action units (AU) – AU 05, 17, 20, 25. The Vokaturi algorithm (Vokaturi, 2019) is used to extract the four continuous dimensions in the tone of voice – happiness, sadness, anger, and fear. As for historical and future features, we simply look up the target label for the past four talkturns and the future four talkturns. 
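A minimal sketch of constructing these historical/future auxiliary targets from a dialogue's talkturn label sequence is shown below. The helper is hypothetical; out-of-range lookups are clamped to the nearest available talkturn, matching the substitution rule used for missing labels.

```python
def shifted_label_targets(labels, max_offset=4):
    """Build historical/future auxiliary targets from a talkturn label list.

    For talkturn i, the target at offset k is labels[i-k] (historical) or
    labels[i+k] (future); lookups past the sequence boundary fall back to
    the nearest available label.
    """
    n = len(labels)
    targets = {}
    for k in range(1, max_offset + 1):
        targets[f"hist_{k}"] = [labels[max(i - k, 0)] for i in range(n)]
        targets[f"future_{k}"] = [labels[min(i + k, n - 1)] for i in range(n)]
    return targets

t = shifted_label_targets(["sad", "neutral", "happy"], max_offset=4)
print(t["future_1"])  # ['neutral', 'happy', 'happy']
```

Each of the eight resulting sequences (four historical, four future) serves as one auxiliary target aligned with the current talkturn.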
If any of the historical or future labels are not available (for example, the target label four talkturns ago is not available for the third talkturn), we substitute with the next nearest non-missing label.\nFor all auxiliary targets that reused the input features (actions and prosody), we converted them into a percentile rank that has the range [0,1] using the values from the train partition. This is a subtle but noteworthy transformation. When reusing an input as an auxiliary target, it would be trivial if the input can easily predict the target. For example, given the following MONAH transcript as input, “The woman sadly and slowly said no.”, it would be trivial to use a binary (quantized) auxiliary target of “was the tone sad?” because we would only be training the model to look for the word “sadly”. However, if the auxiliary target is a percentile rank (less quantized) of the sadness in tone, then the presence of the word “sadly” increases the predicted rank, but the model could still use the rest of the nonverbal cues (“slowly”) and what is being said (“no”) to predict the degree of sadness.\nThat way, representations learnt for the auxiliary tasks use more of the input. Additionally, this transformation also helped us side-step the decision to use the multimodal streams exclusively in the inputs or in the supervision (Caruana & De Sa, 1997; Caruana & de Sa, 1998), because we could use them in both.\nPercentile rank also has the convenient property of having the range [0,1]. We scaled the percentile ranks so that they all have the same range as the primary task (see appendix A.2 for transformation details). This ensures that if we assigned equal loss weights to all tasks, the contribution of every task (both auxiliary and primary) is of the same order of magnitude, an important consideration for MTL (Gong et al., 2019; Hassani & Haley, 2019; Sener & Koltun, 2018)."
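The percentile-rank transformation and range scaling described above can be sketched as follows. This is an illustrative sketch; the empirical-CDF definition of percentile rank (fraction of training values at or below a given value) is our assumption, and the [1,5] range matches the IEMOCAP regression tasks.

```python
import numpy as np

def percentile_rank_targets(train_vals, vals, lo=0.0, hi=1.0):
    """Map a continuous auxiliary feature to percentile ranks in [0, 1]
    computed on the *train* partition, then rescale to the primary task's
    range [lo, hi] (e.g. p * 4 + 1 for a [1, 5] regression target)."""
    train_sorted = np.sort(np.asarray(train_vals))
    # fraction of training values <= v, i.e. the empirical CDF
    p = np.searchsorted(train_sorted, np.asarray(vals), side="right") / len(train_sorted)
    return lo + p * (hi - lo)

train = [0.1, 0.4, 0.2, 0.9]          # hypothetical train-partition feature values
out = percentile_rank_targets(train, [0.4, 0.05], lo=1.0, hi=5.0)
print(out)  # [4. 1.]
```

Using the train partition alone to define the ranks avoids leaking development/test statistics into the auxiliary targets.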
}, { "heading": "4.4 MODELS, TRAINING, AND HYPERPARAMETERS TUNING", "text": "The overall loss is calculated as the weighted average across all seventeen tasks: (1) we picked a random weight for the primary task from the range [0.50, 0.99]; this ensures that the primary task has the majority weight. (2) We allocated the remaining weight (1 - primary weight) to the sixteen auxiliary tasks by: (a) random, (b) linearly-normalized mutual information, or (c) softmax-normalized mutual information. (a) is self-explanatory. As for (b) and (c), mutual information has been shown to be the best predictor – compared to entropy and conditional entropy – of whether an auxiliary task would be helpful to primary MTL (Bjerva, 2017). We computed the mutual information (vector m) of each auxiliary variable with the primary target variable (Kraskov et al., 2004; Ross, 2014) using scikit-learn (Pedregosa et al., 2011). Then, we linearly-normalized or softmax-normalized m to sum up to 1. Finally, we multiplied the normalized m with the remaining weights from (2); this ensures that the primary weight and the sixteen auxiliary weights sum up to one. (a), (b), and (c) have ten trials each during hyperparameters tuning.\nTwo variants of the HAN architectures are used (Figs. 2 and 3). GloVe word embeddings (300 dimensions) are used to represent the words (Pennington et al., 2014). Hyperparameters tuning is crucial because different combinations of primary and auxiliary tasks require different sets of hyperparameters. For hyperparameters tuning, we used random search (Bergstra & Bengio, 2012) with thirty trials. We tuned the learning rate, batch size, L2 regularization, the number of GRUs assigned to the primary and auxiliary branches, the auxiliary weights assignment, the content-context size, and lastly the GRU dropout and recurrent dropout (as detailed in Appendix A.3).\nTraining is done on an RTX2070 or a V100 GPU, for up to 350 epochs. 
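The loss-weight allocation described above can be sketched as follows. This is a simplified sketch; `task_loss_weights` is a hypothetical helper, and in practice the mutual-information vector m would come from scikit-learn's mutual-information estimators.

```python
import numpy as np

def task_loss_weights(primary_w, mi, mode="softmax", rng=None):
    """Split the total loss weight: the primary task gets primary_w
    (sampled from [0.50, 0.99]); the remainder is divided over the
    auxiliary tasks in proportion to normalized mutual information m,
    or uniformly at random."""
    mi = np.asarray(mi, dtype=float)
    if mode == "softmax":
        e = np.exp(mi - mi.max())
        norm = e / e.sum()
    elif mode == "linear":
        norm = mi / mi.sum()
    else:  # "random": ignore MI, split the remainder randomly
        r = (rng or np.random.default_rng()).random(len(mi))
        norm = r / r.sum()
    return primary_w, (1.0 - primary_w) * norm

pw, aux_w = task_loss_weights(0.8, [0.3, 0.1, 0.6], mode="linear")
print(aux_w)  # proportional to MI, sums to 0.2
```

By construction, the primary weight plus all auxiliary weights sum to one, so the overall loss remains a proper weighted average.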
Early stopping is possible via the median-stopping rule (Golovin et al., 2017) after the fifth epoch and after every two epochs (i.e., at epoch number 5, 7, 9, ..., 349). Appendix A.4 details the hyperparameters of models that performed the best on the development set.\nFor hypothesis testing, we bootstrapped confidence intervals of the test set performance of both baseline and challenger models, as well as the confidence intervals of the differences in performances. Please see appendix A.5 for details on the bootstrap procedure." }, { "heading": "5 RESULTS AND DISCUSSION", "text": "The key takeaways are: (H1) The introduced set of auxiliary supervision improves primary MTL significantly in six of the eight primary tasks. (H2) Maximum learning capacity should be given to the primary branch as a default. (H3) HAN-ROCK is unlikely (in one of the eight tasks) to degrade primary MTL significantly, and sometimes significantly improves it (in four of the eight tasks).\n(H1): To test H1 (whether the introduced set of auxiliary supervision improves Primary MTL), we first train the model with all sixteen auxiliary targets (from the families actions, prosody, historical, and future). Then, to differentiate the effect from the historical and future supervision, we set the loss weights from historical and future targets to be zero; effectively, there is only supervision from eight auxiliary targets (actions and prosody). Lastly, for the baseline model (no auxiliary supervision), we set the loss weights from all sixteen auxiliary targets to zero.\nGiven auxiliary supervision, the model significantly outperforms the baseline of not having auxiliary supervision in six out of the eight primary tasks (Table 1). Comparing the baseline model with the models with two auxiliary target families, the latter significantly outperformed the baseline in five out of eight primary tasks. 
The addition of two auxiliary target families (historical and\nfuture labels) sometimes significantly improved primary MTL (valence in IEMOCAP), but it also sometimes significantly made it worse (activation and intensity in SEMAINE). This shows that the value of auxiliary tasks, and the associated risk of negative transfer, depends on the auxiliary task.\n(H2): To test H2 (whether maximum learning capacity should be given to the primary branch), we let P represent the number of GRUs assigned to the primary talkturn encoder, and A represent the number of GRUs assigned to each of the sixteen auxiliary talkturn encoders. We constrained P + A to be equal to 257. During our experiments, we set P to 1, 64, 128, 192, and 256. We set 256 as the baseline model because it is the maximum learning capacity we can give to the primary branch while giving 1 GRU (= 257 − 256) to each of the sixteen auxiliary talkturn encoders.\nIn all primary tasks, the baseline model of assigning 256 GRUs to the primary branch is not significantly outperformed by models that assigned 1, 64, 128, or 192 GRUs (Table 2). Generally, the performance decreased as the number of GRUs assigned to the primary talkturn encoder decreased from 256 to 1. We observed significantly worse performance in two out of eight tasks – in power and valence in SEMAINE. Also, assigning 256 GRUs to the primary talkturn encoder and 1 to each of the sixteen auxiliary talkturn encoders yields the smallest model2, and thus trains the fastest. Therefore, we recommend that the maximum capacity be given to the primary branch as a default.\nThat said, the presence of an auxiliary branch is still important. The baseline of H1 (no auxiliary supervision, Table 1) can be approximated as P=256 + 16× 1, A=0 because the model architecture is the same, except that the loss weights of all auxiliary tasks are zero. 
We compared the former to the baseline in Table 2, and found that four out of eight primary tasks have significant improvements from changing the number of talkturn encoders assigned to each auxiliary task from zero to one.\n(H3): To test H3 (whether auxiliary supervision should be given a lower hierarchy), we compare the results from the flat-MTL HAN architecture (baseline) against the HAN-ROCK architecture (Table 3). Placing auxiliary supervision at the lower hierarchy significantly improves primary MTL in four out of eight tasks. In only one out of eight tasks (power in SEMAINE) does auxiliary supervision significantly degrade primary MTL. We posit that further improvements are possible through the fusion module with future research.\n2As opposed to, say, assigning 1 GRU to the primary talkturn encoder and 256 GRUs to each of the sixteen auxiliary talkturn encoders." }, { "heading": "5.1 CLASS-WISE PERFORMANCE AND STATE-OF-THE-ART", "text": "We discuss the IEMOCAP classification model in depth by investigating the class-wise performance under the three hypotheses (Table 4). Generally, we found that the effects of all hypotheses are stronger on lower-resource labels (sad and anger). We also present the performance of M3ER (Mittal et al., 2020), a previous state-of-the-art (SoTA) approach. We do not expect the performance of our text-only input to match the SoTA approach, which is confirmed in Table 4. SoTA approaches (Zadeh et al., 2018a;b; Hazarika et al., 2018b;a; Majumder et al., 2019; Mittal et al., 2020) fuse numerical vectors from the three modalities, so their inputs are of a much higher granularity compared to our approach of describing the multimodal cues using discrete words. Although the text-based input is likely to constrain model performance, the multimodal transcription could be helpful for a human to analyze the conversation even before supervised learning. 
We could also overlay the model perspective on the multimodal transcription to augment human analysis (see Appendix A.6)." }, { "heading": "6 CONCLUSION, LIMITATIONS, AND FUTURE WORK", "text": "We proposed to re-use feature engineering pre-processing data as auxiliary tasks in primary MTL. Three hypotheses were tested for primary MTL. The experimental results confirm H1 – introducing our set of sixteen auxiliary supervisors resulted in better performance in most primary tasks. For H2, maximum learning capacity should be given to the primary branch. Lastly, for H3, placing the auxiliary supervision in a lower hierarchy is unlikely to hurt performance significantly, and it sometimes significantly improves performance. This is encouraging news for multi-modal conversational analysis systems, as we have demonstrated how pre-processed data can be used twice to improve performance: once as inputs, and again as auxiliary tasks. This paper has limitations. The first limitation is that the solutions are evaluated on eight tasks in the conversational analysis domain, and it is not clear if these would generalize outside of this domain. The second limitation is that we have evaluated on HAN, but not other network architectures.\nA challenge to be addressed is the a priori selection of the auxiliary targets. Future research could investigate target selection, including how to use a much larger range of auxiliary targets, how to decide the optimum number of auxiliary targets, and whether it is possible to perform these automatically." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DATASET PARTITIONS", "text": "We detail the dataset partitions in Table 5 to aid reproducibility.\nA.2 SCALING THE AUXILIARY TARGETS\nWe detail the operations in scaling the percentile scores that range [0,1] to the various primary tasks. For IEMOCAP primary tasks that are regression problems, we multiply the percentile score by 4 and add 1 to obtain the range [1,5]. 
For the IEMOCAP classification task, we leave the auxiliary targets in the range of [0,1]. As for the SEMAINE tasks, which are all regression problems, we multiply the percentile score by 2 and subtract 1 to obtain the range [-1,1].\nA.3 RANGE OF HYPER PARAMETERS TUNED\nWe document the range of hyperparameters tuned in Table 6 to aid reproducibility.\nA.4 HYPER PARAMETERS WITH BEST DEVELOPMENT SET PERFORMANCE\nWe document our best-performing hyperparameters in Table 7 to aid reproducibility.\nA.5 DETAILS OF COMPUTING THE BOOTSTRAP CONFIDENCE INTERVAL\nBaseline models for each hypothesis are detailed in section 2. All non-baseline models are referred to as challenger models. We created 1000 bootstrap samples of the test set performance by (1) resampling the development set performance, then (2) selecting the set of hyperparameters that resulted in the best development set performance, and (3) looking up the test set performance given the set of best-performing hyperparameters for the development set. To judge whether the challenger outperforms the baseline, we computed the 95 percent confidence interval by (1) performing element-wise subtraction between the resampled test set performances of the baseline and the challenger, (2) removing the top and bottom 2.5 percent from the differences, and (3) observing whether the remaining 95 percent confidence interval includes zero. If it does not include zero, then the difference is statistically significant.\nA.6 VISUALIZATION FROM HAN-ROCK\nWe demonstrate how the HAN-ROCK model could be used to support humans in analyzing conversations using only text-based inputs. We visualized the attention weights from two models: (1) MTL refers to the classification model with auxiliary supervisors, whilst (2) STL refers to the same model architecture, but with its auxiliary supervisors' loss weights set to zero. 
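The bootstrap procedure of Appendix A.5 can be sketched as follows. This is a simplified sketch that assumes higher scores are better and that both models share the same resampled trial indices; the function name is ours.

```python
import numpy as np

def bootstrap_sig_test(dev_base, test_base, dev_chal, test_chal, n_boot=1000, seed=0):
    """A.5-style bootstrap: resample dev-set scores over hyperparameter trials,
    pick each model's best-dev trial, record its test score, and check whether
    the 95% CI of the challenger-minus-baseline test differences excludes zero."""
    rng = np.random.default_rng(seed)
    n = len(dev_base)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)                    # resample trials
        base_t = test_base[idx[np.argmax(dev_base[idx])]]   # test score of best-dev trial
        chal_t = test_chal[idx[np.argmax(dev_chal[idx])]]
        diffs[b] = chal_t - base_t
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return (lo, hi), not (lo <= 0.0 <= hi)                  # significant if CI excludes 0

rng = np.random.default_rng(1)
dev_b, test_b = rng.random(30), rng.random(30)              # 30 hypothetical trials
dev_c, test_c = dev_b + 0.1, test_b + 0.1                   # challenger uniformly better
(lo, hi), sig = bootstrap_sig_test(dev_b, test_b, dev_c, test_c)
print(sig)  # True
```

For MAE-based tasks, where lower is better, the selection and the sign of the difference would be flipped accordingly.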
In principle, the MTL model should exhibit attention weights that are less likely to overfit because the weights are tempered by the auxiliary supervisors. We observe that, across the two models, both use the historical talkturns more so than the current talkturn; secondly, both assign high attention to the second word of the talkturns, which is interesting because the second word is where the multimodal annotations are inserted.\nAs explained in Section 3.2, there are three levels of attention over the internal representations: word (αit), talkturn (αabi), and task (αc). To compute the overall word and talkturn attention, we compute the weighted average of αit and αabi using αc (task attention) as the weights. Once we have the overall word and talkturn attention, we standardize the weights by computing the z-score. Depending on the z-score, we bucket the attention into none (z < 0), low (0 < z < 1), medium (1 < z < 2), or high (2 < z). We plan to validate the efficacy of the attention weights with human users in future research." } ]
2020
null
SP:313f52f0734140154ab31a602457829ae6eda9c0
[ "The paper proposes a new approach for open set semi-supervised learning, where there are unlabeled data from classes not in the labeled data. The paper uses a contrastive representation learning paradigm to learn a feature encoder and a similarity measurement. Then the paper filters outlier samples by the similarity measurement and further utilizes outlier samples with soft labels. The separate BN layers address the distribution shift between in-class and out-class data.", "This paper considers the problem of semi-supervised learning, where the unlabeled data may include out-of-class samples. To address this task, the paper proposes a method consisting of three steps: (1) detecting out-of-class samples in the unlabeled set, (2) assigning soft-labels to the detected out-of-class samples using class-conditional likelihoods from labeled data, and (3) using auxiliary batch normalization layers to help mitigate the class distribution mismatch problem. Experiments are conducted on CIFAR-10, CIFAR-100, ImageNet datasets. Results show improvements over competing methods." ]
Modern semi-supervised learning methods conventionally assume both labeled and unlabeled data have the same class distribution. However, unlabeled data may include out-of-class samples in practice; those that cannot have one-hot encoded labels from a closed-set of classes in label data, i.e., unlabeled data is an open-set. In this paper, we introduce OpenCoS, a method for handling this realistic semisupervised learning scenario based on a recent framework of contrastive learning. One of our key findings is that out-of-class samples in the unlabeled dataset can be identified effectively via (unsupervised) contrastive learning. OpenCoS utilizes this information to overcome the failure modes in the existing state-of-the-art semisupervised methods, e.g., ReMixMatch or FixMatch. In particular, we propose to assign soft-labels for out-of-class samples using the representation learned from contrastive learning. Our extensive experimental results show the effectiveness of OpenCoS, fixing the state-of-the-art semi-supervised methods to be suitable for diverse scenarios involving open-set unlabeled data. The code will be released.
[]
[ { "authors": [ "Abhijit Bendale", "Terrance E Boult" ], "title": "Towards open set deep networks", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Liron Bergman", "Yedid Hoshen" ], "title": "Classification-based anomaly detection for general data", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D. Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Olivier Chapelle", "Bernhard Scholkopf", "Alexander Zien" ], "title": "Semi-supervised learning", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Yanbei Chen", "Xiatian Zhu", "Wei Li", "Shaogang Gong" ], "title": "Semi-supervised learning under class distribution mismatch", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Yun-Chun Chen", "Chao-Te Chou", "Yu-Chiang Frank Wang" ], "title": "Learning to learn in a semi-supervised fashion", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical data augmentation with no separate search", "venue": null, "year": 1909 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia 
Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Andrew F Emmott", "Shubhomoy Das", "Thomas Dietterich", "Alan Fern", "Weng-Keen Wong" ], "title": "Systematic construction of anomaly detection benchmarks from real data", "venue": "In Proceedings of the ACM SIGKDD workshop on outlier detection and description,", "year": 2013 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In NeurIPS,", "year": 2004 }, { "authors": [ "Lan-Zhe Guo", "Zhen-Yu Zhang", "Yuan Jiang", "Yu-Feng Li", "Zhi-Hua Zhou" ], "title": "Safe deep semisupervised learning for unseen-class unlabeled data", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Alex 
Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In ICML Workshop,", "year": 2013 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Kimin Lee", "Sukmin Yun", "Kibok Lee", "Honglak Lee", "Bo Li", "Jinwoo Shin" ], "title": "Robust inference via generative classifiers for handling noisy labels", "venue": null, "year": 2019 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "Rayadurgam Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Si Liu", "Risheek Garrepalli", "Thomas G Dietterich", "Alan Fern", "Dan Hendrycks" ], "title": "Open category detection with pac guarantees", "venue": "arXiv preprint arXiv:1808.00529,", "year": 2018 }, { "authors": [ "Yunru Liu", "Tingran Gao", "Haizhao Yang" ], "title": "Selectnet: Learning to sample from the wild for imbalanced data training", "venue": "In MSML,", "year": 2020 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { 
"authors": [ "Varun Nair", "Javier Fuentes Alonso", "Tony Beltramelli" ], "title": "Realmix: Towards realistic semi-supervised deep learning algorithms", "venue": "arXiv preprint arXiv:1912.08766,", "year": 2019 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NeurIPS Workshop,", "year": 2011 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin A Raffel", "Ekin Dogus Cubuk", "Ian Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Rajat Raina", "Alexis Battle", "Honglak Lee", "Benjamin Packer", "Andrew Y Ng" ], "title": "Self-taught learning: transfer learning from unlabeled data", "venue": "In ICML,", "year": 2007 }, { "authors": [ "Mehdi Sajjadi", "Mehran Javanmardi", "Tolga Tasdizen" ], "title": "Regularization with stochastic transformations and perturbations for deep semi-supervised learning", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Christian Szegedy Sergey Ioffe" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Kihyuk Sohn", "David Berthelot", "Chun-Liang Li", "Zizhao Zhang", "Nicholas Carlini", "Ekin D Cubuk", "Alex Kurakin", "Han Zhang", "Colin Raffel" ], "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Jihoon Tack", "Sangwoo Mo", 
"Jongheon Jeong", "Jinwoo Shin" ], "title": "Csi: Novelty detection via contrastive learning on distributionally shifted instances", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Yisen Wang", "Weiyang Liu", "Xingjun Ma", "James Bailey", "Hongyuan Zha", "Le Song", "Shu-Tao Xia" ], "title": "Iterative learning with open-set noisy labels", "venue": null, "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance-level discrimination", "venue": null, "year": 2018 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Jiang Wang", "Alan Yuille", "Quoc V Le" ], "title": "Adversarial examples improve image recognition", "venue": null, "year": 2020 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": null, "year": 1904 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Xiaohua Zhai", "Avital Oliver", "Alexander Kolesnikov", "Lucas Beyer" ], "title": "S4l: Self-supervised semisupervised learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In ECCV,", "year": 2016 }, { "authors": [ 
"Chen" ], "title": "scratch, contrary to the results of ResNet-50", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020a) and fine-tune the projection header for 5 epochs on", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the recent success of deep neural networks with large-scale labeled data, many real-world scenarios suffer from expensive data acquisition and labeling costs. This has motivated the community to develop semi-supervised learning (SSL; Grandvalet & Bengio 2004; Chapelle et al. 2009), i.e., by further incorporating unlabeled data for training. Indeed, recent SSL works (Berthelot et al., 2019; 2020; Sohn et al., 2020) demonstrate promising results on several benchmark datasets, as they could even approach the performance of fully supervised learning using only a small number of labels, e.g., 93.73% accuracy on CIFAR-10 with 250 labeled data (Berthelot et al., 2020).\nHowever, SSL methods often fail to generalize when there is a mismatch between the classdistributions of labeled and unlabeled data (Oliver et al., 2018; Chen et al., 2020c; Guo et al., 2020), i.e., when the unlabeled data contains out-of-class samples, whose ground-truth labels are not contained in the labeled dataset (as illustrated in Figure 1(a)). In this scenario, various label-guessing techniques used in the existing SSL methods may label those out-of-class samples incorrectly, which in turn significantly harms the overall training through their inner-process of entropy minimization (Grandvalet & Bengio, 2004; Lee, 2013) or consistency regularization (Xie et al., 2019; Sohn et al., 2020). This problem may largely hinder the existing SSL methods from being used in practice, considering the open-set nature of unlabeled data collected in the wild (Bendale & Boult, 2016).\nContribution. In this paper, we focus on a realistic SSL scenario, where unlabeled data may contain some unknown out-of-class samples, i.e., there is a class distribution mismatch between labeled and unlabeled data (Oliver et al., 2018). 
Compared to prior approaches that have bypassed this problem by simply filtering them out with some heuristic detection scores (Nair et al., 2019; Chen et al., 2020c), the unique characteristic in our approach is to further leverage the information in out-of-class samples by assigning soft-labels to them: they may still contain some useful features for the in-classes.\nSomewhat surprisingly, we found that a recent technique of contrastive unsupervised learning (Wu et al., 2018; He et al., 2020; Chen et al., 2020a) can play a key role in achieving our goal. More specifically, we show that a pre-trained representation via contrastive learning, namely SimCLR (Chen et al., 2020a), on both labeled and unlabeled data enables us to design (a) an effective score for detecting out-of-class samples in unlabeled data, and (b) a systematic way to assign soft-labels to the detected out-of-class samples, by modeling class-conditional likelihoods from labeled data. Finally, we found (c) auxiliary
We also compare our method to other recent works (Nair et al., 2019; Chen et al., 2020c; Guo et al., 2020) addressing the same class distribution mismatch problem in SSL, and again confirm the effectiveness of our framework: e.g., we achieve an accuracy of 68.37% with 40 labels (just 4 labels per class) on CIFAR-10 with TinyImageNet as out-of-class, compared to 56.32% for DS3L (Guo et al., 2020).\nOverall, our work highlights the benefit of unsupervised representations in (semi-)supervised learning: such a label-free representation turns out to enhance model generalization due to its robustness to novel, out-of-class samples." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 SEMI-SUPERVISED LEARNING", "text": "The goal of semi-supervised learning for classification is to train a classifier f : X → Y from a labeled dataset Dl = {(x_l^(i), y_l^(i))}_{i=1}^{N_l}, where each label y_l is from a set of classes Y := {1, · · · , C}, and an unlabeled dataset Du = {x_u^(i)}_{i=1}^{N_u}, where each y_u exists but is assumed to be unknown. In an attempt to leverage the extra information in Du, a number of techniques have been proposed, e.g., entropy minimization (Grandvalet & Bengio, 2004; Lee, 2013) and consistency regularization (Sajjadi et al., 2016). In general, recent approaches in semi-supervised learning can be distinguished by the prior they adopt for the representation of unlabeled data: for example, the consistency regularization technique (Sajjadi et al., 2016) attempts to minimize the cross-entropy loss between any two predictions of different augmentations t1(xu) and t2(xu) from a given unlabeled sample xu, jointly with the standard training for a labeled sample (xl, yl):\nLSSL(xl, xu) := H(yl, f(xl)) + β · H(f(t1(xu)), f(t2(xu))), (1) where H is a standard cross-entropy loss for labeled data, and β is a hyperparameter. 
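As a concrete illustration of the combined loss in Eq. (1), a minimal NumPy sketch is given below. The function and variable names are hypothetical and the probability vectors are toy values, not from the authors' code:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_c p_c * log q_c for probability vectors p and q
    return -np.sum(p * np.log(q + eps))

def ssl_loss(y_l, p_l, p_u1, p_u2, beta=1.0):
    """Eq. (1): supervised cross-entropy on a labeled sample, plus a
    consistency term between predictions on two augmented views of an
    unlabeled sample, weighted by beta."""
    return cross_entropy(y_l, p_l) + beta * cross_entropy(p_u1, p_u2)

# toy example with C = 3 classes (values are illustrative only)
y_l  = np.array([1.0, 0.0, 0.0])   # one-hot label
p_l  = np.array([0.7, 0.2, 0.1])   # f(x_l)
p_u1 = np.array([0.5, 0.3, 0.2])   # f(t1(x_u))
p_u2 = np.array([0.4, 0.4, 0.2])   # f(t2(x_u))
loss = ssl_loss(y_l, p_l, p_u1, p_u2, beta=0.5)
```

Note that the consistency term compares two predictions on the same unlabeled sample, so no ground-truth label is needed for the second term.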
Recently, several “holistic” approaches combining various techniques (Zhang et al., 2018; Cubuk et al., 2019) have shown remarkable performance in practice, e.g., MixMatch (Berthelot et al., 2019), ReMixMatch (Berthelot et al., 2020), and FixMatch (Sohn et al., 2020), which we mainly consider in this paper. We note that our scheme can be integrated with any recent semi-supervised learning method." }, { "heading": "2.2 CONTRASTIVE REPRESENTATION LEARNING", "text": "Contrastive learning (Oord et al., 2018; Hénaff et al., 2019; He et al., 2020; Chen et al., 2020a) defines an unsupervised task for an encoder fe : X → R^{d_e} from a set of samples {xi}: assume that a “query” sample xq is given and there is a positive “key” x+ ∈ {xi} that xq matches. Then the contrastive loss is defined to let fe extract the necessary information to identify x+ from xq as follows:\nLcon(fe, xq, x+; {xi}) := − log [ exp(h(fe(xq), fe(x+))/τ) / Σ_i exp(h(fe(xq), fe(xi))/τ) ], (2)\nwhere h(·, ·) is a pre-defined similarity score, and τ is a temperature hyperparameter. In this paper, we primarily focus on SimCLR (Chen et al., 2020a), a particular form of contrastive learning: for a given {xi}_{i=1}^{N}, SimCLR first samples two separate data augmentation operations from a pre-defined family T , namely t1, t2 ∼ T , and matches (x̃i, x̃_{i+N}) := (t1(xi), t2(xi)) as a query-key pair interchangeably. The actual loss is then defined as follows:\nLSimCLR(fe; {xi}_{i=1}^{N}) := (1/2N) Σ_{q=1}^{2N} Lcon(fe, x̃q, x̃_{(q+N) mod 2N}; {x̃i}_{i=1}^{2N} \\ {x̃q}), (3)\nhSimCLR(v1, v2) := CosineSimilarity(g(v1), g(v2)) = (g(v1) · g(v2)) / (||g(v1)||2 ||g(v2)||2), (4)\nwhere g : R^{d_e} → R^{d_p} is a 2-layer neural network called the projection header. In other words, the SimCLR loss defines a task to identify a “semantically equivalent” sample to xq up to the set of data augmentations T ." }, { "heading": "3 OPENCOS: A FRAMEWORK FOR OPEN-SET SEMI-SUPERVISED LEARNING", "text": "We consider semi-supervised classification problems involving C classes. 
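The SimCLR objective of Section 2.2 (Eqs. 2–4) can be sketched in a few lines of NumPy. In this sketch the encoder and projection header are replaced by a given array of projected embeddings z = g(fe(x̃)); this is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def cosine_sim(u, v):
    # Eq. (4): cosine similarity between two projected embeddings
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def contrastive_loss(z, q, k, tau=0.5):
    """Eq. (2) for query index q with positive key index k, where the
    denominator runs over all views except the query itself."""
    sims = np.array([cosine_sim(z[q], z[i]) for i in range(len(z)) if i != q])
    pos = cosine_sim(z[q], z[k])
    return -np.log(np.exp(pos / tau) / np.sum(np.exp(sims / tau)))

def simclr_loss(z, tau=0.5):
    """Eq. (3): average over all 2N queries; view q and view
    (q + N) mod 2N are the two augmentations of the same sample."""
    two_n = len(z)
    n = two_n // 2
    return np.mean([contrastive_loss(z, q, (q + n) % two_n, tau)
                    for q in range(two_n)])

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))   # 2N = 8 augmented views, 16-dim projections
loss = simclr_loss(z)
```

With random embeddings the loss is high; it decreases as the two views of each sample become more similar, which is what the pre-training optimizes.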
In addition to the standard assumption of semi-supervised learning (SSL), we assume that the unlabeled dataset Du is open-set, i.e., the hidden labels yu of xu may not be in Y := {1, · · · , C}. In this scenario, existing semi-supervised learning techniques may degrade the classification performance, possibly due to an incorrect label-guessing procedure for those out-of-class samples. In this respect, we introduce OpenCoS, a generic method for detecting and labeling out-of-class unlabeled samples in semi-supervised learning. Overall, our key intuition is to utilize the unsupervised representation from contrastive learning (Wu et al., 2018; He et al., 2020; Chen et al., 2020a) to leverage such out-of-class samples in an appropriate manner. We present a brief overview of our method in Section 3.1, and describe how our approach, OpenCoS, can handle out-of-class samples in Sections 3.2 and 3.3." }, { "heading": "3.1 OVERVIEW OF OPENCOS", "text": "Recall that our goal is to train a classifier f : X → Y from a labeled dataset Dl and an open-set unlabeled dataset Du. Overall, OpenCoS aims to overcome the presence of out-of-class samples in Du through the following procedure:\n1. Pre-training via contrastive learning. OpenCoS first learns an unsupervised representation of f via SimCLR1 (Chen et al., 2020a), using both Dl and Du without labels. More specifically, we learn the penultimate features of f , denoted by fe, by minimizing the contrastive loss defined in (3). We also introduce a projection header g (4), which is a 2-layer MLP as per (Chen et al., 2020a).\n2. Detecting out-of-class samples. From a learned representation of fe and g, OpenCoS identifies the out-of-class unlabeled data Doutu from the given data Du = Dinu ∪ Doutu . This detection process is based on the similarity score between Dl and Du in the representation space of fe and g (see Section 3.2).\n3. Semi-supervised learning with auxiliary loss and batch normalization. 
Now, one can use any semi-supervised learning scheme to train f using Dl and Dinu , e.g., ReMixMatch (Berthelot et al., 2020). In addition, OpenCoS minimizes an auxiliary loss that assigns a soft-label to each sample in Doutu , which is also based on the representation of fe and g (see Section 3.3). Furthermore, we found maintaining auxiliary batch normalization layers (Xie et al., 2020) for Doutu is beneficial to our loss, as they mitigate the distribution mismatch arising from Doutu .\nPutting it all together, OpenCoS provides an effective and systematic way to detect and utilize out-of-class data for semi-supervised learning. Due to its simplicity, our framework can incorporate the most recently proposed semi-supervised learning methods (Berthelot et al., 2019; 2020; Sohn et al., 2020) and improve their performance in the presence of out-of-class samples. Figure 2 illustrates the overall training scheme of OpenCoS." }, { "heading": "3.2 DETECTION CRITERION OF OPENCOS", "text": "For a given labeled dataset Dl and an open-set unlabeled dataset Du, we aim to detect a subset of the unlabeled training data Doutu ⊆ Du whose elements are out-of-class, i.e., yu /∈ Y . A standard way to handle this task is to train a confidence-calibrated classifier using Dl (Hendrycks & Gimpel, 2017; Liang et al., 2018; Lee et al., 2018a;b; Hendrycks et al., 2019a;b; Bergman & Hoshen, 2020; Tack et al., 2020). However, such methods typically assume a sufficient number of in-class samples (i.e., large Dl), which does not hold in our case due to the label-scarce nature of SSL. This motivates us to consider a more suitable approach which leverages the open-set unlabeled dataset Du for contrastive learning. Then, OpenCoS utilizes the labeled dataset Dl to estimate the class-wise distributions of (pre-trained) embeddings, and uses them to define a detection score for Du. We assume that an encoder fe : X → R^{d_e} and a projection header g : R^{d_e} → R^{d_p} are pre-trained via SimCLR on Dl ∪ Du. 
Motivated by the similarity metric used in the pre-training objective of SimCLR (4), we propose a simple yet effective detection score s(xu) for an unlabeled input xu based on the cosine similarity between xu and class-wise prototypical representations {v_c}_{c=1}^{C} obtained from Dl. Namely, we first define a class-wise similarity score2 simc(xu) for each class c as follows:\nv_c(Dl; fe, g) := (1/N_l^c) Σ_i 1[y_l^(i) = c] · g(fe(x_l^(i))), and (5)\nsimc(xu; Dl, fe, g) := CosineSimilarity(g(fe(xu)), v_c), (6)\nwhere N_l^c := |{(x_l^(i), y_l^(i)) | y_l^(i) = c}| is the sample size of class c in Dl. Then, our detection score s(xu) is defined by the maximal similarity score between xu and the prototypes {v_c}_{c=1}^{C}:\ns(xu) := max_{c=1,··· ,C} simc(xu). (7)\nIn practice, we use a pre-defined threshold t for detecting out-of-class samples in Du, i.e., we detect a given sample xu as out-of-class if s(xu) < t. In our experiments, we found an empirical value of t := µl − 2σl performs well across all the datasets tested, where µl and σl are the mean and standard deviation computed over {s(x_l^(i))}_{i=1}^{N_l}, respectively, although more tuning of t could further improve the performance. Further analysis of our detection threshold can be found in Appendix B.4. 1Nevertheless, our framework is not restricted to a single method of SimCLR; it is easily generalizable to other contrastive learning methods (Hénaff et al., 2019; He et al., 2020; Chen et al., 2020b). 2In this work, we adopt the well-known cosine similarity to define our score, but there can be other designs as long as it represents class-wise similarity (Chen et al., 2020d; Vinyals et al., 2016; Snell et al., 2017)." }, { "heading": "3.3 AUXILIARY LOSS AND BATCH NORMALIZATION OF OPENCOS", "text": "Based on the detection criterion defined in Section 3.2, the open-set unlabeled dataset Du can be split into (a) the in-class unlabeled dataset Dinu and (b) the out-of-class unlabeled dataset Doutu . 
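The prototype-based detection rule of Section 3.2 (Eqs. 5–7 with threshold t = µl − 2σl) can be sketched as follows. The embeddings stand in for g(fe(·)), and the toy two-class data and names are illustrative assumptions, not the authors' implementation; the last two lines also show the soft-label weights of Eq. (9) as a softmax over the same class-wise similarities:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def class_prototypes(z_l, y_l, num_classes):
    """Eq. (5): per-class mean of the labeled projected embeddings."""
    return np.stack([z_l[y_l == c].mean(axis=0) for c in range(num_classes)])

def detection_scores(z_u, protos):
    """Eqs. (6)-(7): max cosine similarity to any class prototype."""
    sims = normalize(z_u) @ normalize(protos).T   # (N_u, C) class-wise scores
    return sims.max(axis=1), sims

# toy labeled embeddings: 2 classes clustered around two directions
rng = np.random.default_rng(1)
z_l = np.concatenate([rng.normal([5, 0], 0.5, size=(20, 2)),
                      rng.normal([0, 5], 0.5, size=(20, 2))])
y_l = np.array([0] * 20 + [1] * 20)
protos = class_prototypes(z_l, y_l, num_classes=2)

# threshold t = mu_l - 2 * sigma_l over the labeled samples' own scores
s_l, _ = detection_scores(z_l, protos)
t = s_l.mean() - 2 * s_l.std()

z_u = np.array([[5.0, 0.1],    # in-class-looking unlabeled sample
                [-4.0, -4.0]]) # out-of-class-looking unlabeled sample
s_u, sims_u = detection_scores(z_u, protos)
is_out = s_u < t               # flagged as out-of-class

# soft-labels (Eq. 9) from the same similarities, with tau = 1
q_u = np.exp(sims_u / 1.0)
q_u /= q_u.sum(axis=1, keepdims=True)
```

Samples far from every prototype get a low maximal similarity and a near-uniform soft-label, matching the intended behavior of the score.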
The labeled dataset Dl and Dinu are now used to train the classifier f using any existing semi-supervised learning method (Berthelot et al., 2019; 2020; Sohn et al., 2020).\nIn addition, we propose to further utilize Doutu via an auxiliary loss that assigns a soft-label to each xoutu ∈ Doutu . More specifically, for any semi-supervised learning objective LSSL(xl, xinu ; f), we consider the following loss:\nLOpenCoS = LSSL(xl, xinu ; f) + λ ·H(q(xoutu ), f(xoutu )), (8) where H denotes the cross-entropy loss, λ is a hyperparameter, and q(xoutu ) defines a specific assignment of distribution over Y for xoutu . In this paper, we propose to assign q(xoutu ) based on the class-wise similarity scores simc(xoutu ) defined in (6), again utilizing the contrastive representation fe and g:\nqc(xu) := exp(simc(xu; fe, g)/τ) / Σ_i exp(simi(xu; fe, g)/τ), (9)\nwhere τ is a (temperature) hyperparameter.\nAt first glance, assigning a label of Y to xoutu may seem counter-intuitive, as the true label of xoutu is not in Y by definition. However, even when out-of-class samples cannot be represented as one-hot labels, one can still model their class-conditional likelihoods as a linear combination (i.e., soft-label) of Y: for instance, although “cat” images are out-of-class for CIFAR-100, there are still some classes in CIFAR-100 that are semantically similar to “cat”, e.g., “leopard”, “lion”, or “tiger”, so that assigning a soft-label, e.g., 0.1 · “leopard” + 0.2 · “lion” + 0.7 · “tiger”, might be beneficial. Even if out-of-classes are totally different from in-classes, one can assign uniform labels to ignore them. We empirically found that such soft-labels based on representations learned via contrastive learning offer an effective way to utilize out-of-class samples, while such samples are known to significantly harm vanilla semi-supervised learning schemes. We present a detailed discussion on our soft-label assignments in Section 4.4.\nAuxiliary batch normalization. 
Finally, we suggest handling the data-distribution shift originating from the class-distribution mismatch (Oliver et al., 2018), i.e., Dl and Doutu are drawn from different underlying distributions. This may degrade the in-class classification performance as the auxiliary loss utilizes out-of-class samples. To handle the issue, we use additional batch normalization layers (BN; Ioffe & Szegedy, 2015) for training samples in Doutu to disentangle those two distributions. In our experiments, we observe such auxiliary BNs are beneficial when using out-of-class samples via the auxiliary loss (see Section 4.4). Auxiliary BNs have also been studied in the adversarial learning literature (Xie et al., 2020): decoupling BNs improves the performance of adversarial training by handling a distribution mismatch between clean and adversarial samples. In this paper, we found that a similar strategy can improve model performance in realistic semi-supervised learning." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we verify the effectiveness of our method over a wide range of semi-supervised learning benchmarks in the presence of various out-of-class data. The full details on experimental setups can be found in Appendix A.\nDatasets. We perform experiments on image classification tasks for several benchmarks in the literature of semi-supervised learning (Berthelot et al., 2020; Sohn et al., 2020): CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009) datasets. Specifically, we focus on settings where each dataset is extremely label-scarce: only 4 or 25 labels per class are given during training, while the rest of the training data are assumed to be unlabeled. To configure realistic semi-supervised learning scenarios, we additionally assume that unlabeled data contain samples from an external dataset: for example, in the case of CIFAR-10, we use unlabeled samples from SVHN (Netzer et al., 2011) or TinyImageNet3 datasets.\nBaselines. 
We evaluate MixMatch (Berthelot et al., 2019), ReMixMatch (Berthelot et al., 2020), and FixMatch (Sohn et al., 2020) as baselines in our experimental setup, which are considered to be state-of-the-art methods in conventional semi-supervised learning. We also compare our method with three prior works applicable to our setting: namely, we consider Uncertainty-Aware Self-Distillation (UASD; Chen et al. 2020c), RealMix (Nair et al., 2019) and DS3L (Guo et al., 2020), which propose schemes to detect and filter out out-of-class samples in the unlabeled dataset: e.g., DS3L learns to re-weight unlabeled samples to reduce the effect of out-of-class samples. Recall that our method uses SimCLR (Chen et al., 2020a) for pre-training. Unless otherwise noted, we also pre-train the baselines via SimCLR for a fair comparison, denoting those fine-tuned models by “-ft,” e.g., MixMatch-ft and UASD-ft. We confirm that fine-tuned models show comparable or better performance compared to those trained from scratch, as presented in Figure 1(b) and Appendix A.2. Also, we report the performance purely obtainable from (unsupervised) SimCLR: namely, we additionally consider (a) SimCLR-le: a SimCLR model with linear evaluation protocol (Zhang et al., 2016; Chen et al., 2020a), i.e., it additionally learns a linear layer with the labeled dataset, and (b) SimCLR-ft: the whole SimCLR model is fine-tuned with the labeled dataset. Somewhat interestingly, these models turn out to be the strongest baselines in our setups; they often outperform the state-of-the-art semi-supervised baselines under large proportions of out-of-class samples (see Table 1). Finally, we remark that our framework can incorporate any conventional semi-supervised methods for training. We denote our method built upon an existing method by “+ OpenCoS”, e.g., ReMixMatch + OpenCoS.\n3https://tiny-imagenet.herokuapp.com/\nTraining details. As suggested by Oliver et al. 
(2018), we have re-implemented all baseline methods considered, including SimCLR, under the same codebase and performed experiments with the same model architecture of ResNet-50 (He et al., 2016).4 Due to the label-scarce nature of semi-supervised learning, we do not use a validation set in our setting. Instead, we checkpoint per 2^16 training samples and report (a) the median test accuracy of the last 5 checkpoints out of 50 checkpoints in total and (b) the best accuracy among all the checkpoints. We fix τ = 1, the temperature hyperparameter in (9), and λ = 0.5 in (8), in all our experiments. The full details on model architecture and hyperparameters can be found in Appendix A.2 and B.1, respectively." }, { "heading": "4.1 EXPERIMENTS ON VARYING PROPORTIONS OF OUT-OF-CLASS SAMPLES", "text": "We first evaluate the effect of out-of-class unlabeled samples in semi-supervised learning, while varying their proportion of the total dataset. We consider CIFAR-10 and TinyImageNet datasets, and synthetically control the proportion between the two in 50K training samples. For example, an 80% proportion means the training dataset consists of 40K samples from TinyImageNet, and 10K samples from CIFAR-10. In this experiment, we assume that 25 labels per class are always given on the CIFAR-10 side. We compare three models on varying proportions of out-of-class: (a) a ReMixMatch model trained from scratch (ReMixMatch), (b) a SimCLR model fine-tuned by ReMixMatch (ReMixMatch-ft), and (c) our OpenCoS model applied to ReMixMatch-ft (+ OpenCoS).\nFigure 1(b) demonstrates the results. Overall, we observe that the performance of ReMixMatch rapidly degrades as the proportion of out-of-class samples increases in unlabeled data. While ReMixMatch-ft significantly mitigates this problem, it still fails at a larger proportion: e.g., at 80% of out-of-class, the performance of ReMixMatch-ft falls to that of ReMixMatch. 
OpenCoS, in contrast, successfully prevents the performance degradation of ReMixMatch-ft, especially in the regime where out-of-class samples dominate in-class samples." }, { "heading": "4.2 EXPERIMENTS ON CIFAR DATASETS", "text": "In this section, we evaluate our method on several benchmarks where CIFAR datasets are assumed to be in-class: more specifically, we consider scenarios where either CIFAR-10 or CIFAR-100 is an in-class dataset, with an out-of-class dataset of either SVHN or TinyImageNet. Additionally, we also consider a separate benchmark called CIFAR-Animals + CIFAR-Others following the setup in the related work (Oliver et al., 2018): the in-class dataset consists of 6 animal classes from CIFAR-10, while the remaining samples are considered as out-of-class. We fix every benchmark to have 50K training samples. We assume an 80% proportion of out-of-class, i.e., 10K for in-class and 40K for out-of-class samples, except for CIFAR-Animals + CIFAR-Others, which consists of 30K and 20K samples for in- and out-of-class, respectively. We report ReMixMatch-ft + OpenCoS as it tends to outperform FixMatch-ft + OpenCoS in such CIFAR-scale experiments, while FixMatch-ft + OpenCoS does in the large-scale ImageNet experiments in Section 4.3. Table 1 shows the results: OpenCoS consistently improves ReMixMatch-ft, outperforming the other baselines simultaneously. For example, OpenCoS improves the test accuracy of ReMixMatch-ft from 28.51% to 68.37% with 4 labels per class on CIFAR-10 + TinyImageNet.\n4Note that this architecture is larger than Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) used in the semi-supervised learning literature (Oliver et al., 2018). We use ResNet-50 following the standard of SimCLR. 
Also, we observe large discrepancies between the median and best accuracy of semi-supervised learning baselines, MixMatch-ft, ReMixMatch-ft, and FixMatch-ft, especially in the extreme label-scarce scenario of 4 labels per class, i.e., these methods suffer from over-fitting on out-of-class samples. One can also confirm this significant over-fitting in state-of-the-art SSL methods by comparing other baselines with detection schemes, e.g., UASD-ft, RealMix-ft, and DS3L-ft, which show less over-fitting but with lower best accuracy." }, { "heading": "4.3 EXPERIMENTS ON IMAGENET DATASETS", "text": "We also evaluate OpenCoS on ImageNet to verify its scalability to a larger and more complex dataset. We design 9 benchmarks from the ImageNet dataset, similarly to Restricted ImageNet (Tsipras et al., 2019): more specifically, we define 9 super-classes of ImageNet, each of which consists of 11∼118 sub-classes. We perform our experiments on each super-class as an individual dataset. Each of the benchmarks (a super-class) contains 25 labels per sub-class, and we use the full ImageNet as an unlabeled dataset (excluding the labeled ones). In this experiment, we checkpoint per 2^15 training samples and report the median test accuracy of the last 3 out of 10. We present additional experimental details, e.g., configuration of the dataset, in Appendix A.3. Table 2 shows the results: OpenCoS still effectively improves the baselines, largely surpassing SimCLR-le and SimCLR-ft as well. For example, OpenCoS improves the test accuracy on Bird from 78.73% (FixMatch-ft) to 81.78%, also significantly improving over SimCLR-le (75.81%). This shows the efficacy of OpenCoS in exploiting open-set unlabeled data from unknown (but related) classes, or even from an unseen distribution of another dataset in the real world." }, { "heading": "4.4 ABLATION STUDY", "text": "We perform an ablation study to understand further how OpenCoS works. 
Specifically, we assess the individual effects of the components in OpenCoS and show that each of them has an orthogonal contribution to the overall improvements. We also provide a detailed evaluation of our proposed detection score (7) compared to other out-of-distribution detection methods.\nComponent analysis. To further analyze the individual contribution of each component of OpenCoS, we incrementally apply these components one-by-one to ReMixMatch-ft (CIFAR-scale) and FixMatch-ft (ImageNet-scale) baselines. Specifically, we consider CIFAR-Animals + CIFAR-Others, CIFAR-10 + SVHN for CIFAR-scale, and Produce, Bird, Food + ImageNet for ImageNet-scale benchmarks. Table 3 summarizes the results, and they indeed confirm that each component of OpenCoS has an orthogonal contribution to improving the accuracy on the benchmarks tested. We observe that leveraging out-of-class samples via the auxiliary loss (“Aux. loss”) achieves consistent improvements, and also outperforms the baselines significantly. Finally, we remark that auxiliary batch normalization layers (“Aux. BNs”) give a consistent and often significant improvement: e.g., 55.78% → 57.77% on CIFAR-10 + SVHN. Other detection scores. In Section 3.2, we propose a detection score s(·) (7) for detecting out-of-class samples in an unlabeled dataset, based on the contrastive representation of SimCLR. This setup is different from the standard out-of-distribution (OOD) detection task (Emmott et al., 2013; Liu et al., 2018): OOD detection targets unseen (i.e., “out-of-distribution”) samples at test time, while our setup aims to detect seen out-of-class samples during training, assuming only few in-class labels. Due to this lack of labeled information, the standard techniques for OOD detection (Hendrycks & Gimpel, 2017; Liang et al., 2018; Lee et al., 2018b) are not guaranteed to still perform well in our setup. 
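As a side note, detection comparisons of this kind are commonly summarized by AUROC over the raw detection scores. A minimal rank-based computation, assuming nothing beyond two score arrays (this is a generic metric, not part of the OpenCoS method itself):

```python
import numpy as np

def auroc(scores_in, scores_out):
    """AUROC as the probability that a random in-class sample receives a
    higher detection score than a random out-of-class sample, with ties
    counted as half a win."""
    s_in = np.asarray(scores_in, dtype=float)[:, None]
    s_out = np.asarray(scores_out, dtype=float)[None, :]
    wins = (s_in > s_out).sum() + 0.5 * (s_in == s_out).sum()
    return wins / (s_in.size * s_out.size)

perfect = auroc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1])  # fully separated scores
partial = auroc([0.9, 0.5, 0.4], [0.6, 0.3, 0.2])  # some overlap
```

A score of 1.0 means the detector perfectly separates the two groups, while 0.5 corresponds to random guessing.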
We examine this in Appendix B.3 by comparing detection performance of such OOD detection scores with ours (7) upon a shared SimCLR representation: in short, we indeed observe that our approach of directly leveraging the contrastive representation could perform better than simply applying OOD scores relying on few labeled samples, e.g., our score achieves an AUROC of 98.10% on the CIFAR-Animals + CIFAR-Others benchmark compared to the maximum softmax probability based score (Hendrycks & Gimpel, 2017) of 80.79%. We present the detailed experimental setups and more results in Appendix B.3.\nEffect of soft-labeling. We emphasize that our soft-labeling scheme can rather be viewed as a more reasonable way to label such out-of-class samples compared to existing state-of-the-art SSL methods, e.g., MixMatch simply assigns its sharpened predictions. On the other hand, a prior work (Li & Hoiem, 2016) makes a similar observation to ours: assigning soft-labels to novel data could be beneficial for transfer learning. This motivates us to consider an experiment to further support the claim that our soft-labeling gives informative signals: we train a classifier by minimizing only the cross-entropy loss with soft-labels (i.e., without in-class samples) from scratch. In Table 4, the trained classifier performs much better than (random) guessing, and even approaches some baselines, although this model is trained without any in-class samples; this supports that the generated soft-labels contain informative features of the in-classes.\nExamples of actual soft-labels. 
We also present some concrete examples of our soft-labeling scheme in Figure 3 for a better understanding, which are obtained from random unlabeled samples in the CIFAR-10 + TinyImageNet benchmark: Overall, we qualitatively observe that out-of-class samples that share some semantic features with the in-classes (e.g., Figure 3(a)) receive relatively high-confidence soft-labels capturing such similarity, while the soft-labels are very close to uniform otherwise (e.g., Figure 3(b))." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a simple and general framework for handling novel unlabeled data, aiming toward a more realistic assumption for semi-supervised learning. Our key idea is (intentionally) not to use label information, i.e., by relying on unsupervised representation, when handling novel data, which can be naturally incorporated into semi-supervised learning with our framework: OpenCoS. In contrast to previous approaches, OpenCoS opens a way to further utilize those novel data by assigning them soft-labels, which are again obtained from unsupervised learning. We hope our work will motivate researchers to extend this framework under even more realistic assumptions, e.g., noisy labels (Wang et al., 2018; Lee et al., 2019) or imbalanced learning (Liu et al., 2020)." }, { "heading": "A TRAINING DETAILS", "text": "A.1 DETAILS ON THE EXPERIMENTAL SETUP\nFor the experiments reported in Table 1, we generally follow the training details of FixMatch (Sohn et al., 2020), including the optimizer, the learning rate schedule, and an exponential moving average. Specifically, we use the Nesterov SGD optimizer with momentum 0.9, a cosine learning rate decay with an initial learning rate of 0.03, and an exponential moving average with a decay of 0.999. The batch size is 64, which is widely adopted in semi-supervised learning (SSL) methods. We do not use weight decay for these models, as they are fine-tuned. We use a simple augmentation strategy, i.e., flip and crop, by default.
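The fine-tuning recipe above maintains an exponential moving average (EMA) of the model weights with a decay of 0.999. As a minimal sketch (not the authors' code), the EMA update over a dictionary of scalar parameters looks like this; the toy loop uses a decay of 0.5 only so the pull toward the current weights is visible within a few steps:

```python
def ema_update(ema_params, params, decay=0.999):
    """One EMA step: ema <- decay * ema + (1 - decay) * params."""
    return {k: decay * ema_params[k] + (1.0 - decay) * params[k] for k in params}

# Toy run: EMA of a single weight, starting at 0 while training keeps it at 1.
ema = {"w": 0.0}
for step in range(5):
    ema = ema_update(ema, {"w": 1.0}, decay=0.5)  # large (1 - decay) for visibility
print(ema["w"])  # approaches the current weight value 1.0
```

With the paper's decay of 0.999, the EMA weights would instead lag the training weights by roughly a thousand steps, which is what makes the averaged model smoother at evaluation time.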
We use the augmentation scheme of SimCLR (Chen et al., 2020a) (i.e., random crop with resize, random color distortion, and random Gaussian blur) when an SSL method requires a strong augmentation strategy to be specified, e.g., for consistency regularization in the SSL literature (Berthelot et al., 2020; Sohn et al., 2020). We fix the number of augmentations to 2, following Berthelot et al. (2019): e.g., MixMatch-ft generates two augmentations of each unlabeled sample, while ReMixMatch-ft generates one weak augmentation and one strong augmentation. In the case of ReMixMatch-ft, for efficient computation we do not use the ramp-up weighting function, the pre-mixup, or the rotation loss, as they make only a marginal difference in fine-tuning.5 For FixMatch-ft, we set the relative size of the labeled and unlabeled batches to µ = 1 for a fair comparison with other baselines, and scale the learning rate linearly with µ, as suggested by Sohn et al. (2020). Following Chen et al. (2020c), UASD-ft computes the predictions by accumulative ensembling instead of using an exponential moving average. OpenCoS shares all hyperparameters of the baseline SSL methods, e.g., FixMatch + OpenCoS shares the hyperparameters of FixMatch-ft. For the results of ReMixMatch (from scratch) in Figure 1(b), we report the median accuracy of the last 10 checkpoints out of 200 checkpoints, where a checkpoint is saved every 2^16 training samples.\nA.2 CIFAR EXPERIMENTS\nTraining from scratch. We pre-train all the baselines via SimCLR for a fair comparison, as mentioned in Section 4. In Table 5, we also report the performance of each baseline model when trained from scratch. Here, we report the median accuracy of the last 10 checkpoints out of 500 checkpoints in total. We also present the fine-tuned baselines (see Section 4.2), denoted by “-ft,” e.g., MixMatch-ft. Here, we follow the training details originally used in each baseline method.
For example, ReMixMatch from scratch uses the Adam optimizer with a fixed learning rate of 0.002 and a weight decay of 0.02. We use the simple strategy (i.e., flip and crop) and RandAugment (Cubuk et al., 2019) as the weak and strong augmentations, respectively. In addition, we use the ramp-up weighting function, the pre-mixup, and the rotation loss for ReMixMatch. We consider the CIFAR-100 + TinyImageNet benchmark assuming an 80% proportion of out-of-class samples, i.e., 10K samples for in-class and 40K samples for out-of-class.\nTraining without in-class samples from scratch. For the experiments reported in Table 4, we train a classifier from scratch only with the unlabeled out-of-class samples and their soft-labels, on the CIFAR-10 benchmarks with 4 labels per class. We use ResNet-50, and SGD with momentum 0.9, weight decay 0.0001, and an initial learning rate of 0.1. The learning rate is divided by 10 after epochs 100 and 150, and we train for 200 epochs in total. We set the batch size to 128, and use a simple data augmentation strategy, i.e., flip and crop. We minimize the cross-entropy loss between the soft-labels q(·) and the model predictions f(·), i.e., L = H(q(x_u^out), f(x_u^out)). We apply temperature scaling to both the soft-labels and the model predictions for stable training, with temperatures of 0.1 and 4, respectively.\nAnalysis of model architectures. For all our experiments, we use ResNet-50 following the standard of SimCLR (Chen et al., 2020a). This architecture is larger than Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016), a more widely adopted architecture in the semi-supervised learning literature (Oliver et al., 2018). We have found that using a larger network, i.e., ResNet-50, is necessary to leverage the pre-trained features of SimCLR: in Table 5, we provide an evaluation on another choice of model architecture, i.e., Wide-ResNet-28-2. The hyperparameters are the same as in the experiments on ResNet-50.
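The objective in "Training without in-class samples from scratch", L = H(q(x_u^out), f(x_u^out)) with temperature scaling on both sides (0.1 for the soft-labels, 4 for the predictions), can be sketched in plain Python; the logit values below are illustrative, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def soft_label_ce(q_logits, f_logits, tau_q=0.1, tau_f=4.0):
    """Cross-entropy H(q, f): low tau_q sharpens the soft-label target,
    high tau_f smooths the model prediction."""
    q = softmax(q_logits, tau_q)
    p = softmax(f_logits, tau_f)
    return -sum(qi * math.log(pi + 1e-12) for qi, pi in zip(q, p))

loss = soft_label_ce([2.0, 0.5, 0.1], [1.0, 0.2, -0.3])
```

The loss is smallest when the (smoothed) prediction agrees with the (sharpened) soft-label, which is the signal the out-of-class samples provide in this experiment.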
Here, one can observe that OpenCoS trained on Wide-ResNet-28-2 still improves ReMixMatch-ft, outperforming the other baselines. More importantly, however, we observe that pre-training Wide-ResNet-28-2 via SimCLR does not significantly improve the baselines trained from scratch, contrary to the results of ResNet-50. As also explored by Chen et al. (2020a), we suspect this is because pre-training via SimCLR requires a larger model in practice, and suggest that future SSL research explore larger architectures to incorporate richer features into their methods, e.g., features learned via unsupervised learning (Hénaff et al., 2019; Chen et al., 2020a;b).\n5For a fair comparison, ReMixMatch-ft + OpenCoS shares these settings.\nExperiments on more labeled data. We have performed additional experiments on the CIFAR-10 + SVHN benchmark with 400 labels per class, and the results are given in Table 6. One can still observe that OpenCoS consistently outperforms the other methods when more labeled data are available.\nA.3 IMAGENET EXPERIMENTS\nBenchmarks of the ImageNet dataset. In Section 4.3, we introduce 9 benchmarks from the ImageNet dataset, similar to Restricted ImageNet (Tsipras et al., 2019). In detail, we group together subsets of semantically similar classes into 9 different super-classes, as shown in Table 7.\nDetails of ImageNet experiments. For the experiments reported in Table 2, we use a pre-trained ResNet-50 model6 of Chen et al. (2020a) and fine-tune the projection header for 5 epochs on ImageNet. We follow the optimization details of the fine-tuning experiments of SimCLR (Chen et al., 2020a): specifically, we use the Nesterov SGD optimizer with momentum 0.9 and a learning rate of 0.00625 (following LearningRate = 0.05 · BatchSize/256). We set the batch size to 32, and report the median accuracy of the last 3 checkpoints out of 10 checkpoints in total. Data augmentation, regularization techniques, and other hyperparameters are the same as in the CIFAR experiments.
In the case of FixMatch-ft + OpenCoS, we empirically observe that it is more beneficial not to discard the detected out-of-class samples in FixMatch training, as it performs better than using in-class samples only: the auxiliary loss still uses only out-of-class samples. Since FixMatch filters out low-confidence unlabeled samples, this is possibly due to a decrease in the number of training samples.\n6https://github.com/google-research/simclr" }, { "heading": "B ABLATION STUDY", "text": "B.1 EFFECTS OF THE TEMPERATURE AND THE LOSS WEIGHT\nIn Section 4, we perform all the experiments with the fixed temperature τ = 1 and loss weight λ = 0.5. To examine the effect of the hyperparameters τ and λ, we additionally test them across an array of τ ∈ {0.1, 0.5, 1, 2, 4} and λ ∈ {0.1, 0.5, 1, 2, 4} on the CIFAR-100 + TinyImageNet benchmark with ResNet-50. The results are presented in Table 8. Overall, we found our method to be fairly robust to τ and λ.\nB.2 EFFECTS OF OUT-OF-CLASS SAMPLES IN CONTRASTIVE LEARNING\nTo clarify how the improvements of OpenCoS come from out-of-class samples, we have considered additional CIFAR-scale experiments with 4 labels per class. We newly pre-train and fine-tune SimCLR models using in-class samples only, i.e., 30,000 for CIFAR-Animals and 10,000 for the CIFAR-10 and CIFAR-100 benchmarks, and compare two baselines: SimCLR-le and ReMixMatch-ft. Interestingly, we found that just merging out-of-class samples into the training dataset improves the performance of SimCLR models in several cases (see Table 9), e.g., SimCLR-le on CIFAR-10 improves from 55.27% to 58.20% with TinyImageNet. Also, OpenCoS significantly outperforms all baselines, even when out-of-class samples hurt the performance of SimCLR-le or ReMixMatch-ft. We confirm that the proposed method effectively utilizes the contrastive representations of out-of-class samples, compared to other SSL baselines.\nRobustness to incorrect detection.
We observe that our method is quite robust to incorrectly detected out-of-class samples, i.e., such samples are still leveraged via the auxiliary loss instead of the SSL algorithm. We have considered an additional experiment on CIFAR-10 with 250 labels (out of 50,000 samples), which assumes that (i) all the unlabeled samples are in-class, and (ii) 80% of those in-class samples are incorrectly detected as out-of-class in OpenCoS. Here, we compare OpenCoS with a baseline which only uses the correctly-detected (in-class) samples without the auxiliary loss, i.e., the baseline is trained on 10,000 samples while OpenCoS is trained on 50,000 in total. In this scenario, OpenCoS achieves 89.54% in median test accuracy, while the baseline achieves 89.27%: this shows that our auxiliary loss does not harm training even when it is incorrectly applied to in-class samples.\nB.3 EVALUATIONS OF OUR DETECTION SCORE\nBaselines. We consider the maximum softmax probability (MSP; Hendrycks & Gimpel 2017), ODIN (Liang et al., 2018), and the Mahalanobis distance-based score (Lee et al., 2018b) as baseline detection methods. As MSP and ODIN require a classifier to obtain their scores, we employ SimCLR-le, a SimCLR model which additionally learns a linear layer on the labeled dataset, for both baselines.\nODIN performs an input pre-processing by adding small perturbations with a temperature scaling as follows:\nP(y = c | x; T) = exp(f_c(x)/T) / Σ_y exp(f_y(x)/T), x′ = x − ε · sign(−∇_x log P(y = c | x; T)), (10)\nwhere f = (f_1, ..., f_C) is the logit vector of the deep neural network, T > 0 is a temperature scaling parameter, and ε is the magnitude of the noise. ODIN computes the pre-processed input x′ and feeds it into the classifier to obtain the confidence score, i.e., max_y P(y | x′; T), and identifies the input as in-class if the confidence score is higher than some threshold δ.
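As a rough illustration of the ODIN baseline above (Eq. 10), the sketch below instantiates the temperature-scaled confidence and the input pre-processing step for a toy linear classifier, for which the required input gradient is available in closed form; all numbers (weights, input, hyperparameters) are made up for the example, and a real implementation would use autograd on the actual network:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def odin_score(x, W, temperature=1000.0, eps=0.0014):
    """ODIN confidence (Eq. 10) for a toy linear classifier f(x) = W x.

    For this linear model, d/dx log P(c | x; T) = (W_c - sum_y p_y W_y) / T."""
    logits = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    p = softmax([z / temperature for z in logits])
    c = max(range(len(p)), key=p.__getitem__)
    mix = [sum(p[y] * W[y][d] for y in range(len(W))) for d in range(len(x))]
    grad_log_p = [(W[c][d] - mix[d]) / temperature for d in range(len(x))]
    # input pre-processing: x' = x - eps * sign(-grad log P)
    x_pert = [xd - eps * (1.0 if -g > 0 else -1.0 if -g < 0 else 0.0)
              for xd, g in zip(x, grad_log_p)]
    logits_pert = [sum(wi * xi for wi, xi in zip(row, x_pert)) for row in W]
    return max(softmax([z / temperature for z in logits_pert]))  # max_y P(y | x'; T)

W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # hypothetical 3-class linear weights
score = odin_score([2.0, -1.0], W)
```

The perturbation nudges the input toward its predicted class, which tends to raise the confidence more for in-distribution inputs than for out-of-distribution ones.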
We choose the temperature T and the noise magnitude ε from {1, 10, 100, 1000} and {0, 0.0005, 0.001, 0.0014, 0.002, 0.0024, 0.005, 0.01, 0.05, 0.1, 0.2}, respectively, using 2,000 validation samples.\nThe Mahalanobis distance-based score (Mahalanobis) assumes that the features of the neural network f follow a class-conditional Gaussian distribution. It then computes the Mahalanobis distance between an input x and the closest class-conditional Gaussian distribution, i.e.,\nM(x) = max_c −(f(x) − µ_c)^⊤ Σ^{−1} (f(x) − µ_c), (11)\nwhere µ_c is the class mean and Σ is the covariance of the labeled data. We fix the covariance matrix to the identity because the number of labeled samples is insufficient to estimate it: the feature dimensions of the SimCLR encoder f_e and the projection header g are 2048. Moreover, Mahalanobis has the noise magnitude parameter ε for input pre-processing like ODIN, and uses the feature ensemble method of Lee et al. (2018b). We choose ε from {0, 0.0005, 0.001, 0.0014, 0.002, 0.005, 0.01}, and perform the feature ensemble over intermediate features including those of f_e and g, using 2,000 validation samples.\nMetrics. We follow the threshold-free detection metrics used in Lee et al. (2018b) to measure the effectiveness of detection scores in identifying out-of-class samples.\n• True negative rate (TNR) at 95% true positive rate (TPR). We denote true positive, true negative, false positive, and false negative as TP, TN, FP, and FN, respectively. We measure TNR = TN / (FP+TN) when TPR = TP / (TP+FN) is 95%.\nDetection method | TNR at TPR 95% ↑ | Detection Accuracy ↑ | AUROC ↑ | AUPR-in ↑ | AUPR-out ↑\nMSP | 18.20 | 72.81 | 80.79 | 88.72 | 66.38\nODIN | 24.10 | 79.41 | 86.55 | 92.55 | 72.75\nMahalanobis | 90.90 | 93.57 | 97.48 | 98.60 | 94.36\nOurs w/o header | 40.44 | 80.11 | 88.80 | 93.46 | 79.35\nOurs | 91.20 | 93.49 | 98.10 | 98.79 | 96.86\nIn this section, we present evaluations of our detection score s(·) (7) under various detection metrics on the CIFAR-Animals + CIFAR-Others benchmark with 4 labels per class.
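Under the identity-covariance assumption stated above, the Mahalanobis score of Eq. (11) reduces to the negative squared Euclidean distance to the nearest class mean. A minimal sketch with made-up class means and features:

```python
def mahalanobis_score(feature, class_means):
    """Eq. (11) with the covariance fixed to the identity:
    M(x) = max_c -(f(x) - mu_c)^T (f(x) - mu_c), i.e. the negative squared
    Euclidean distance from the feature to the closest class mean."""
    return max(-sum((f - m) ** 2 for f, m in zip(feature, mu)) for mu in class_means)

means = [[0.0, 0.0], [4.0, 4.0]]                 # toy class means
s_in = mahalanobis_score([0.1, -0.1], means)     # close to a class mean
s_out = mahalanobis_score([10.0, 10.0], means)   # far from every class mean
print(s_in > s_out)  # in-class-like features score higher
```

Features near some class mean receive a high (close to zero) score, while features far from all class means receive a strongly negative one, which is the basis for thresholding.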
Table 10 shows the results: interestingly, our score outperforms MSP and ODIN and also performs comparably to Mahalanobis, even though these baselines require more computational cost, e.g., for input pre-processing. We confirm that the design of our score is an effective and efficient way to detect out-of-class samples based on the representation of SimCLR. In Figure 4, we provide the receiver operating characteristic (ROC) curves that support the above results. We remark that the projection header g (4) is crucial for the detection, e.g., g enhances the AUROC of our score from 88.80% to 98.10%. According to the definition of our score (7), it can be viewed as a simpler version of Mahalanobis without its input pre-processing and feature ensembles, under an assumption of identity covariance, which may explain their comparable performances.\nWe additionally provide the performance of OpenCoS under various detection methods, including the above baselines and two artificial methods: we consider (a) Random, a random detection with a probability of 0.5, and (b) Oracle, a perfect detection. For MSP, ODIN, and Mahalanobis, we choose their detection thresholds at TPR 95%. Table 11 shows the results: we observe that the classification accuracy is proportional to the detection performance. Remarkably, our detection method achieves accuracy comparable to Oracle, which is the optimal performance of OpenCoS.\nB.4 EVALUATIONS OF THE DETECTION THRESHOLD\nWe additionally provide the detection performance for various proportions of out-of-class samples, i.e., 50% and 67%, on this benchmark. For each setting, the number of out-of-class samples is fixed at 20K, while the number of in-class samples is set to 20K and 10K, respectively. We choose the same detection threshold t := µ_l − 2σ_l throughout all experiments: it is a reasonable choice, giving ≈ 95% confidence if the score follows a Gaussian distribution.
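The threshold rule t := µ_l − 2σ_l and the resulting in-/out-of-class split can be sketched as follows; the scores below are hypothetical stand-ins for the similarity scores s(·) of Eq. (7):

```python
import math

def split_unlabeled(labeled_scores, unlabeled_scores, k=2.0):
    """Threshold t = mean - k * std of the labeled scores (k = 2 in the paper);
    unlabeled samples scoring below t are flagged as out-of-class."""
    n = len(labeled_scores)
    mean = sum(labeled_scores) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in labeled_scores) / n)
    t = mean - k * std
    in_mask = [s >= t for s in unlabeled_scores]   # True = kept as in-class
    return t, in_mask

labeled = [0.9, 1.0, 1.1, 1.0, 0.95]    # hypothetical scores on labeled data
unlabeled = [1.05, 0.2, 0.98]
t, in_mask = split_unlabeled(labeled, unlabeled)
print(t, in_mask)  # the 0.2 sample falls below the threshold
```

Varying k trades off true-positive against true-negative rate, which is exactly what Table 12(b) reports for k = 1, 2, 3, 4.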
Table 12(a) shows the detection performance of our threshold and its applicability over various proportions of out-of-class samples. Although tuning t gives further improvements (see Table 12(b)), we fix the threshold without any tuning.\nTable 12: The detection performance across different (a) proportions of out-of-class samples and (b) detection thresholds in the CIFAR-Animals + CIFAR-Others benchmark with 4 labels per class.\n(a) The detection performance of the proposed threshold t := µ_l − 2σ_l on varying proportions of out-of-class samples.\nProportion of out-of-class | 40% | 50% | 67%\nTrue Positive Rate (TPR) | 63.61 | 72.19 | 81.10\nTrue Negative Rate (TNR) | 99.76 | 99.66 | 99.55\nAUROC | 98.10 | 98.50 | 98.90\n(b) The detection performance and median test accuracy across different thresholds t = µ_l − k · σ_l, for k = 1, 2, 3, 4.\nk | 1 | 2 | 3 | 4\nTrue Positive Rate (TPR) | 37.48 | 63.61 | 85.55 | 98.62\nTrue Negative Rate (TNR) | 99.94 | 99.76 | 97.22 | 74.48\nAccuracy | 75.47 | 80.67 | 81.12 | 78.37" }, { "heading": "C ALGORITHM", "text": "The full training procedure of OpenCoS is summarized in Algorithm 1.\nAlgorithm 1 OpenCoS: A general framework for open-set semi-supervised learning (SSL).\n1: Input: The classifier f, the encoder f_e (i.e., the penultimate features of f), and the projection header g; a labeled dataset D_l and an open-set unlabeled dataset D_u.\n2: Pre-train f_e and g via SimCLR.\n3: S_l = ∅\n4: for each labeled sample x_l ∈ D_l do\n5: S_l = S_l ∪ {s(x_l; f_e, g)} ▷ The similarity score (7).\n6: end for\n7: t := E[S_l] − 2·sqrt(Var[S_l]) ▷ Compute the threshold t.\n8:\n9: D_u^in = ∅, D_u^out = ∅\n10: for each unlabeled sample x_u ∈ D_u do\n11: if s(x_u; f_e, g) < t then\n12: D_u^out = D_u^out ∪ {x_u} ▷ Detect out-of-class unlabeled samples.\n13: else\n14: D_u^in = D_u^in ∪ {x_u}\n15: end if\n16: end for\n17:\n18: for each sample x_l ∈ D_l, x_u^in ∈ D_u^in, and x_u^out ∈ D_u^out do\n19: q(x_u^out) ← Compute a soft-label of x_u^out ▷ Using contrastive representations (9).\n20: L_OpenCoS = L_SSL(x_l, x_u^in; f) + λ · H(q(x_u^out), f(x_u^out)) ▷ SSL with auxiliary loss (8).
21: Update the parameters of f by computing the gradients of the proposed loss L_OpenCoS.\n22: end for" }
]
2020
OPENCOS: CONTRASTIVE SEMI-SUPERVISED LEARNING FOR HANDLING OPEN-SET UNLABELED DATA
SP:794393405d88536ffc86021bf4939b168ff7f791
[ "The authors tackle the problem of self-supervised representation learning, and validate their approach on downstream Reinforcement Learning tasks. Building on the insight that predictability in local patches is a good inductive bias for salient regions that characterize objects, the authors propose a well-reasoned, well-engineered and thoroughly validated pipeline for learning object keypoints without supervision. The authors present a wide range of ablative studies to validate their design choices, and demonstrate the superiority of their method both illustratively as well as quantitatively on a number of standard Atari benchmarks. ", "This paper works on unsupervised discovery of keypoints in Atari game frames to help improve Atari game performance. The keypoint discovery is based on predicting \"predictable\" local structure, i.e., the authors consider points that cannot be predicted from their neighborhood to be good keypoints. Experiments show the learned keypoints perform better on 3 Atari games (Table 1) than a counterpart keypoint discovery method, Transporter. " ]
We propose PermaKey, a novel approach to representation learning based on object keypoints. It leverages the predictability of local image regions from spatial neighborhoods to identify salient regions that correspond to object parts, which are then converted to keypoints. Unlike prior approaches, it utilizes predictability to discover object keypoints, an intrinsic property of objects. This ensures that it does not overly bias keypoints to focus on characteristics that are not unique to objects, such as movement, shape, colour etc. We demonstrate the efficacy of PermaKey on Atari where it learns keypoints corresponding to the most salient object parts and is robust to certain visual distractors. Further, on downstream RL tasks in the Atari domain we demonstrate how agents equipped with our keypoints outperform those using competing alternatives, even on challenging environments with moving backgrounds or distractor objects.
[ { "affiliations": [], "name": "Anand Gopalakrishnan" }, { "affiliations": [], "name": "Sjoerd van Steenkiste" }, { "affiliations": [], "name": "Jürgen Schmidhuber" } ]
[ { "authors": [ "B. Alexe", "Thomas Deselaers", "V. Ferrari" ], "title": "What is an object", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2010 }, { "authors": [ "V. Bapst", "Alvaro Sanchez-Gonzalez", "C. Doersch", "Kimberly L. Stachenfeld", "Pushmeet Kohli", "P. Battaglia", "Jessica B. Hamrick" ], "title": "Structured agents for physical construction", "venue": null, "year": 2019 }, { "authors": [ "Horace B Barlow" ], "title": "Unsupervised learning", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Marc G. Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "J. Artif. Int. Res.,", "year": 2013 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Ali Borji", "Laurent Itti" ], "title": "Exploiting local and global patch rarities for saliency detection", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Neil D.B. Bruce", "John K. 
Tsotsos" ], "title": "Saliency based on information maximization", "venue": "In Proceedings of the 18th International Conference on Neural Information Processing Systems,", "year": 2005 }, { "authors": [ "Christopher P Burgess", "Loic Matthey", "Nicholas Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick", "Alexander Lerchner" ], "title": "MONet: Unsupervised scene decomposition and representation", "venue": "arXiv preprint arXiv:1901.11390,", "year": 2019 }, { "authors": [ "Michael Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B. Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kaiwen Duan", "Song Bai", "Lingxi Xie", "Honggang Qi", "Qingming Huang", "Qi Tian" ], "title": "Centernet: Keypoint triplets for object detection", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "SM Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Geoffrey E Hinton" ], "title": "Attend, infer, repeat: Fast scene understanding with generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "F.A. Gers" ], "title": "Learning to forget: continual prediction with lstm", "venue": "IET Conference Proceedings,", "year": 1999 }, { "authors": [ "Klaus Greff", "Sjoerd Van Steenkiste", "Jürgen Schmidhuber" ], "title": "Neural expectation maximization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Klaus Greff", "Sjoerd van Steenkiste", "Jürgen Schmidhuber" ], "title": "On the binding problem in artificial neural networks", "venue": "arXiv preprint arXiv:2012.05208,", "year": 2020 }, { "authors": [ "H.V. Hasselt", "A. Guez", "D.
Silver" ], "title": "Deep reinforcement learning with double q-learning", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "Matthew J. Hausknecht", "P. Stone" ], "title": "Deep recurrent q-learning for partially observable mdps", "venue": "In AAAI Fall Symposia,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Phillip Isola", "Daniel Zoran", "Dilip Krishnan", "Edward H Adelson" ], "title": "Crisp boundary detection using pointwise mutual information", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Phillip Isola", "Daniel Zoran", "Dilip Krishnan", "Edward H Adelson" ], "title": "Learning visual groups from co-occurrences in space and time", "venue": "arXiv preprint arXiv:1511.06811,", "year": 2015 }, { "authors": [ "Laurent Itti", "Christof Koch", "Ernst Niebur" ], "title": "A model of saliency-based visual attention for rapid scene analysis", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 1998 }, { "authors": [ "Martin Jagersand" ], "title": "Saliency maps and attention selection in scale and spatial coordinates: An information theoretic approach", "venue": "In Proceedings of IEEE International Conference on Computer Vision,", "year": 1995 }, { "authors": [ "Tomas Jakab", "Ankush Gupta", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Unsupervised learning of object landmarks through conditional image generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Michael Janner", "Sergey Levine", "William T. Freeman", "Joshua B. 
Tenenbaum", "Chelsea Finn", "Jiajun Wu" ], "title": "Reasoning about physical interactions with object-centric models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Timor Kadir", "Michael Brady" ], "title": "Saliency, scale and image description", "venue": "Int. J. Comput. Vision,", "year": 2001 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H. Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine", "Afroz Mohiuddin", "Ryan Sepassi", "George Tucker", "Henryk Michalewski" ], "title": "Model based reinforcement learning for atari", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Thomas Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive learning of structured world models", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Adam Kosiorek", "Hyunjik Kim", "Yee Whye Teh", "Ingmar Posner" ], "title": "Sequential attend, infer, repeat: Generative modelling of moving objects", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Andreas Küchler", "Christoph Goller" ], "title": "Inductive learning in symbolic domains using structure-driven recurrent neural networks", "venue": "In Annual Conference on Artificial Intelligence,", "year": 1996 }, { "authors": [ "Tejas D Kulkarni", "Ankush Gupta", "Catalin Ionescu", "Sebastian Borgeaud", "Malcolm Reynolds", "Andrew Zisserman", "Volodymyr Mnih" ], "title": "Unsupervised learning of object keypoints for perception and control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Zhixuan Lin", "Yi-Fu Wu", "Skand Vishwanath Peri", "Weihao Sun", "Gautam Singh", "Fei Deng", "Jindong Jiang", "Sungjin Ahn" ], "title": "SPACE: unsupervised object-oriented scene representation via spatial attention and decomposition", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Matthias Minderer", "Chen Sun", "Ruben Villegas", "Forrester Cole", "Kevin P Murphy", "Honglak Lee" ], "title": "Unsupervised learning of object structure and dynamics from videos", "venue": "In Advances in Neural 
Information Processing Systems,", "year": 2019 }, { "authors": [ "Sarthak Mittal", "Alex Lamb", "Anirudh Goyal", "Vikram Voleti", "Murray Shanahan", "Guillaume Lajoie", "Michael Mozer", "Yoshua Bengio" ], "title": "Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules", "venue": "Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Alexander Mott", "Daniel Zoran", "Mike Chrzanowski", "Daan Wierstra", "Danilo Jimenez Rezende" ], "title": "Towards interpretable reinforcement learning using attention augmented agents", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jordan B. Pollack" ], "title": "Recursive distributed representations", "venue": "Artificial Intelligence,", "year": 1990 }, { "authors": [ "F. Scarselli", "M. Gori", "A.C. Tsoi", "M. Hagenbuchner", "G. Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Aleksandar Stanić", "Sjoerd van Steenkiste", "Jürgen Schmidhuber" ], "title": "Hierarchical relational inference", "venue": "In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence,", "year": 2021 }, { "authors": [ "Felipe Petroski Such", "Vashisht Madhavan", "Rosanne Liu", "Rui Wang", "Pablo Samuel Castro", "Yulun Li", "Ludwig Schubert", "Marc G. 
Bellemare", "Jeff Clune", "Joel Lehman" ], "title": "An atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents", "venue": null, "year": 2019 }, { "authors": [ "Sjoerd van Steenkiste", "Michael Chang", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Relational neural expectation maximization: Unsupervised discovery of objects and their interactions", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sjoerd van Steenkiste", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "A perspective on objects and systematic generalization in model-based rl. ICML Workshop on Generative Modeling and Model-Based Reasoning for Robotics and AI, 2019", "venue": null, "year": 2019 }, { "authors": [ "Rishi Veerapaneni", "John D Co-Reyes", "Michael Chang", "Michael Janner", "Chelsea Finn", "Jiajun Wu", "Joshua B Tenenbaum", "Sergey Levine" ], "title": "Entity abstraction in visual model-based reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2019 }, { "authors": [ "Tingwu Wang", "Renjie Liao", "Jimmy Ba", "Sanja Fidler" ], "title": "Nervenet: Learning structured policy with graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia" ], "title": "Deep reinforcement learning with relational inductive biases", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yuting Zhang", "Yijie Guo", "Yixin Jin", "Yijun Luo", "Zhiyuan He", "Honglak Lee" ], "title": "Unsupervised discovery of object landmarks as structural representations", "venue": "In Proceedings of the IEEE Conference on 
Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Andrey Zhmoginov", "Ian Fischer", "Mark Sandler" ], "title": "Information-bottleneck approach to salient region discovery", "venue": "ICML Workshop on Self-Supervised Learning,", "year": 2019 }, { "authors": [ "Xingyi Zhou", "Dequan Wang", "Philipp Krähenbühl" ], "title": "Objects as points", "venue": "In arXiv preprint arXiv:1904.07850,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "An intelligent agent situated in the visual world critically depends on a suitable representation of its incoming sensory information. For example, a representation that captures only information about relevant aspects of the world makes it easier to learn downstream tasks efficiently (Barlow, 1989; Bengio et al., 2013). Similarly, when explicitly distinguishing abstract concepts, such as objects, at a representational level, it is easier to generalize (systematically) to novel scenes that are composed of these same abstract building blocks (Lake et al., 2017; van Steenkiste et al., 2019; Greff et al., 2020).\nIn recent work, several methods have been proposed to learn unsupervised representations of images that aim to facilitate agents in this way (Veerapaneni et al., 2019; Janner et al., 2019). Of particular interest are methods based on learned object keypoints that correspond to highly informative (salient) regions in the image as indicated by the presence of object parts (Zhang et al., 2018; Jakab et al., 2018; Kulkarni et al., 2019; Minderer et al., 2019). Many real world tasks primarily revolve around (physical) interactions between objects and agents. Therefore it is expected that a representation based on a set of task-agnostic object keypoints can be re-purposed to facilitate downstream learning (and generalization) on many different tasks (Lake et al., 2017).\nOne of the main challenges for learning representations based on object keypoints is to discover salient regions belonging to objects in an image without supervision. Recent methods take an information bottleneck approach, where a neural network is trained to allocate a fixed number of keypoints (and learn corresponding representations) in a way that helps making predictions about an image that has undergone some transformation (Jakab et al., 2018; Minderer et al., 2019; Kulkarni et al., 2019). 
However, keypoints that are discovered in this way strongly depend on the specific transformation that is considered and therefore lack generality. For example, as we will confirm in our experiments, the recent Transporter (Kulkarni et al., 2019) learns to prioritize image regions that change over time, even when they are otherwise uninformative. Indeed, when relying on extrinsic object properties (i.e. that are not unique to objects) one becomes highly susceptible to distractors, as we will demonstrate.\nIn this work, we propose PermaKey, a novel representation learning approach based on object keypoints that does not overly bias keypoints in this way. The key idea underlying our approach is to view objects as local regions in the image that have high internal predictive structure (self-information). We argue that local predictability is an intrinsic property of an object and therefore more reliably captures objectness in images (Alexe et al., 2010). This allows us to formulate a local spatial prediction problem to infer which of the image regions contain object parts. We perform this prediction task in the learned feature space of a convolutional neural network (CNN) to assess predictability based on a rich collection of learned low-level features. Using PointNet (Jakab et al., 2018) we can then convert these predictability maps to highly informative object keypoints.\nWe extensively evaluate our approach on a number of Atari environments and compare to Transporter (Kulkarni et al., 2019). We demonstrate how our method is able to discover keypoints that focus on image regions that are unpredictable and which often correspond to salient object parts. By leveraging local predictability to learn about objects, our method profits from a simpler yet better generalizable definition of an object. 
Indeed, we demonstrate how it learns keypoints that do not solely focus on temporal motion (or any other extrinsic object property) and how it is more robust to uninformative (but predictable) distractors in the environment, such as moving background. On Atari games, agents equipped with our keypoints outperform those using Transporter keypoints. Our method shows good performance even on challenging environments such as Battlezone, which involves shifting viewpoints, where the Transporter’s explicit motion bias fails to capture any task-relevant objects. As a final contribution, we investigate the use of graph neural networks (Battaglia et al., 2018) for processing keypoints, which potentially better accommodates their discrete nature when reasoning about their interactions, and provide an ablation study." }, { "heading": "2 METHOD", "text": "To learn representations based on task-agnostic object keypoints we require a suitable definition of an object that can be applied in an unsupervised manner. At a high level, we define objects as abstract patterns in the visual input that can serve as modular building blocks (i.e. they are self-contained and reusable independent of context) for solving a particular task, in the sense that they can be separately intervened upon or reasoned with (Greff et al., 2020). This lets us treat objects as local regions in input space that have high internal predictive structure based on statistical co-occurrence of features such as color, shape, etc. across a large number of samples. Hence, our focus is on their local predictability, which can be viewed as an “intrinsic” object property according to this definition. For example, Bruce & Tsotsos (2005) have previously shown that local regions with high self-information typically correspond to salient objects. 
More generally, self-information approximated via a set of cues involving center-surround feature differences has been used to quantify objectness (Alexe et al., 2010).\nIn this paper we introduce Prediction ERror MAp based KEYpoints (PermaKey), which leverages this definition to learn about object keypoints and corresponding representations. The main component of PermaKey is a local spatial prediction network (LSPN), which is trained to solve a local spatial prediction problem in feature space (light-blue trapezoid in Figure 1). It involves predicting the value of a feature from its surrounding neighbours, which can only be solved accurately when they belong to the same object. Hence, the error map (predictability map) that is obtained by evaluating the LSPN at different locations carves the feature space up into regions that have high internal predictive structure (see rows 4 & 5 in Figure 2(a)). In what follows, we delve into each of the 3 modules that constitute our PermaKey system: 1) the VAE to learn spatial features, 2) the LSPN to solve the local spatial prediction task, and 3) PointNet to group error maps into keypoints (see Figure 1 for an overview).\nSpatial Feature Embedding To solve the local spatial prediction task we require an informative set of features at each image location. While prior approaches focus on low-level features of image patches (e.g. RGB values, Isola et al. (2014), or ICA features, Bruce & Tsotsos (2005)), we propose to learn features using a Variational Auto-Encoder (VAE; Kingma & Welling (2014)). It consists of an Encoder that parametrizes the approximate posterior $q_\phi(z|x)$ and a Decoder that parametrizes the generative model $p_\theta(x_i|z)$, which are trained to model the observed data via the ELBO objective:\n$$\mathcal{L}_e(\theta, \phi) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - D_{KL}[q_\phi(z|x)\,\|\,p(z)]. \quad (1)$$\nThe encoder is based on several layers of convolutions that offer progressively more abstract image features by trading off spatial resolution with depth.
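As a toy illustration of the local spatial prediction idea introduced above (the learned LSPN is a trained network; the averaging "predictor" below is only a hypothetical stand-in), the following sketch scores every interior location of a scalar feature map by the squared error of predicting it as the mean of its 8 neighbours, so that locations which cannot be explained from their surroundings receive high error:

```python
def predictability_map(feat):
    """feat: H x W grid (list of lists) of scalar features.
    Returns an (H-2) x (W-2) map of squared prediction errors, where the
    'prediction' for each interior cell is the mean of its 8 first-order
    neighbours (a stand-in for the learned LSPN)."""
    h, w = len(feat), len(feat[0])
    errors = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            neigh = [feat[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            pred = sum(neigh) / 8.0
            row.append((pred - feat[i][j]) ** 2)
        errors.append(row)
    return errors

# A flat background is perfectly predictable; a lone bright "object" pixel is not.
flat = [[0.0] * 5 for _ in range(5)]
blob = [row[:] for row in flat]
blob[2][2] = 1.0
```

In this toy example the error map peaks exactly at the unexplainable "object" pixel, mirroring how the LSPN's predictability maps highlight salient regions.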
Hence, which layer(s) we choose as our feature embedding will have an effect on the outcome of the local spatial prediction problem. While more abstract high-level features are expected to better capture the internal predictive structure of an object, it will be more difficult to attribute the error of the prediction network to the exact image location. On the other hand, while more low-level features can be localized more accurately, they may lack the expressiveness to capture high-level properties of objects. Nonetheless, in practice we find that a spatial feature embedding based on earlier layers of the encoder works well (see also Section 5.3 for an ablation).\nLocal Spatial Prediction Task Using the learned spatial feature embedding we seek out salient regions of the input image that correspond to object parts. Our approach is based on the idea that objects correspond to local regions in feature space that have high internal predictive structure, which allows us to formulate the following local spatial prediction (LSP) task. For each location in the learned spatial feature embedding, we seek to predict the value of the features (across the feature maps) from its neighbouring feature values. When neighbouring areas correspond to the same object (part), i.e. they regularly appear together, we expect that this prediction problem is easy (green arrow in Figure 3). In contrast, this is much harder when insufficient context is available (red arrow in Figure 3), or when features rarely co-occur. Similarly, when a part of an image remains fixed across a large number of observations, it is easily predictable regardless of how much context is available.\nFigure 3: Two different prediction scenarios encountered when attempting to solve the LSP task. In the first case, too little context information is available to predict the center patch from its surroundings, likely yielding a prediction error (assuming the character is not always present). Alternatively, when sufficient context is available as in the second case, prediction becomes easy.\nWe implement the local spatial prediction task by training a neural network (LSPN in Figure 1) to predict a center patch of features from the 8 first-order neighbouring patches, using the learned spatial feature embedding at layer l of the VAE encoder. The weights of this network are trained to solve this prediction problem across images and across feature locations using the following objective:\n$$\mathcal{L}_p(\psi_l) = \frac{1}{NHW} \sum_{x} \sum_{i,j}^{H,W} \left(f^{\mathrm{LSPN}}_{\psi_l}(A[\mathrm{ne}(i, j)]) - A[i, j]\right)^2, \quad (2)$$\nwhere $A \in \mathbb{R}^{H \times W \times C}$ is the set of feature maps at layer l of the encoder $q_\phi(z|x)$, $\mathrm{ne}(i, j)$ are the coordinates of the first-order neighbours surrounding $i, j$, and $H, W$ are the height and width of the spatial feature maps respectively. The features of the neighbouring locations are concatenated before being fed to the network. We train a separate LSPN for each of the chosen feature embeddings specified by the encoder layer(s) and record their error at each location to obtain predictability maps that are expected to indicate the presence of objects (rows 4 & 5 in Figure 2(a)).\nExtracting Keypoints from Error Maps The predictability maps can be thought of as a superposition of the local predictability errors due to each of the individual objects in the scene. Hence, what remains is to extract the k most salient regions (belonging to object parts) as our object keypoints from these predictability maps. We do so by training a PointNet (Jakab et al., 2018) to reconstruct the error maps through a bottleneck consisting of k fixed-width 2D Gaussian windows with learnable centers $\mu$ that are the keypoints ($\mathcal{L}_g$ loss in Figure 1), as shown below:\n$$\mathcal{L}_g(\omega) = \frac{1}{MHW} \sum_{m}^{M} \sum_{i,j}^{H,W} \left(f^{\mathrm{PointNet}}_{\omega}(\mathcal{H}[m, i, j]) - \mathcal{H}[m, i, j]\right)^2. \quad (3)$$
Here $\mathcal{H} \in \mathbb{R}^{M \times H \times W}$ is the concatenation of layer-wise LSP error maps for the various encoder layer(s) l (M in total), and $H, W$ are the height and width of the error maps (potentially up-sampled). Since the bottleneck layer has only limited capacity, it is forced to consider locations that are most helpful in minimizing reconstruction error. This also requires the PointNet to consider statistical regularities across the predictability maps that can efficiently be described. In that sense, this procedure can be thought of as a way of (spatially) clustering the local errors belonging to the same object by placing keypoints." }, { "heading": "3 USING KEYPOINT REPRESENTATIONS FOR DOWNSTREAM PROCESSING", "text": "Given a set of keypoint locations and their associated features learned in a purely unsupervised framework, we examine how to incorporate them into a state representation suitable for solving a set of downstream RL tasks. Here we will consider Atari games as an example, which essentially consist of entities such as a player, enemies, and reward objects like health potions, food, etc. Player(s) obtain positive reward through collecting these reward objects or killing enemies, and in turn risk getting attacked by these enemies. Successful game-play in such environments therefore requires an understanding of the interaction effects between player, enemies, and reward objects in order to estimate the best next set of actions to take to maximize the game score.\nPrior work like the Transporter (Kulkarni et al., 2019) uses a CNN to encode features of object keypoints for downstream RL tasks on the Atari domain. However, a CNN permits an implicit encoding of relational factors between keypoints only in a limited sense, since it measures interactions between spatially close keypoints through the weight-sharing scheme of the convolutional kernel.
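As a toy illustration of how keypoint centers gate spatial features (both the fixed-width Gaussian windows of the PointNet bottleneck in Section 2 and the per-keypoint feature encoding used downstream rely on this), the sketch below renders a Gaussian window around a hypothetical keypoint and spatially averages a scalar feature map under it; all shapes and the feature map are illustrative assumptions, not the paper's implementation:

```python
import math

def gaussian_window(mu_y, mu_x, h, w, sigma=1.0):
    """Fixed-width 2D Gaussian mask of size h x w centered at (mu_y, mu_x)."""
    return [[math.exp(-((i - mu_y) ** 2 + (j - mu_x) ** 2) / (2 * sigma ** 2))
             for j in range(w)] for i in range(h)]

def keypoint_feature(feat, window):
    """Weighted spatial average of a scalar feature map under a keypoint window."""
    total = sum(window[i][j] * feat[i][j]
                for i in range(len(feat)) for j in range(len(feat[0])))
    norm = sum(w for row in window for w in row)
    return total / norm

# Example: a keypoint at the center of a 5x5 grid of features.
win = gaussian_window(2, 2, 5, 5)
const_map = [[3.0] * 5 for _ in range(5)]
```

The window peaks at the keypoint center and decays smoothly, so the pooled feature is dominated by the image region the keypoint attends to.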
An alternative is to use a graph neural network (GNN) (Scarselli et al., 2009; Battaglia et al., 2018) (see also Pollack (1990); Küchler & Goller (1996) for earlier approaches), which has been applied to model relational inductive biases in the state representations of deep RL agents (Zambaldi et al., 2019). In a similar spirit, we consider treating keypoints as an unordered set and using a GNN to explicitly encode relational factors between pairs of such entities. In this case, we spatially average the convolutional features within each keypoint mask to obtain a keypoint feature. These are appended with learned positional embeddings and initialized as node values of the underlying graphs. The underlying graphs are fully connected, with edge factors initialized to zeros. We use the Encode-Process-Decode design strategy (Battaglia et al., 2018) for the graph keypoint encoder in our PKey+GNN model for Atari in Section 5.1. The Encode and Decode blocks independently process nodes and edges of the underlying graphs. We use an Interaction Network (Battaglia et al., 2016) for the Process block with a single message-passing step to compute updated edges $e'$ and nodes $v'$ as shown below:\n$$e'_{ij} = f^e([v_i, v_j, e_{ij}]), \qquad e'_i = \rho^{e \to v}(E'_i), \qquad v'_i = f^v([v_i, e'_i]). \quad (4)$$\nHere $e_{ij}$ is the edge connecting nodes $v_i$ and $v_j$, $f^e$ is the edge update function, $\rho^{e \to v}$ is the edge aggregation function, $E'_i$ represents all incoming edges to node $v_i$, and $f^v$ is the node update function. After one round of message-passing by the Interaction Network the resultant nodes are decoded as outputs. These output node features of all nodes are then concatenated and passed through an MLP before being input to the agent. For further implementation details we refer to Appendix A.2.3." }, { "heading": "4 RELATED WORK", "text": "Inferring object keypoints offers a flexible approach to segmenting a scene into a set of informative parts useful for downstream processing. 
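For concreteness, the single Interaction-Network message-passing step of Eq. (4) can be sketched as follows. Small lambdas stand in for the learned update functions $f^e$ and $f^v$, summation for the aggregator $\rho^{e \to v}$, and the previous edge state is dropped since edges are initialized to zeros; this is a simplified illustration, not the trained model:

```python
def interaction_step(nodes, f_edge, f_node):
    """One message-passing step over a fully-connected graph (Eq. 4, simplified):
    e'_ij = f_edge(v_i, v_j); each node aggregates its incoming edges by
    summation and is updated as v'_i = f_node(v_i, sum_j e'_ji)."""
    n = len(nodes)
    new_edges = {(i, j): f_edge(nodes[i], nodes[j])
                 for i in range(n) for j in range(n) if i != j}
    new_nodes = []
    for i in range(n):
        incoming = sum(new_edges[(j, i)] for j in range(n) if j != i)
        new_nodes.append(f_node(nodes[i], incoming))
    return new_nodes, new_edges

# Example with scalar node states; edges carry pairwise differences.
nodes = [1.0, 2.0, 4.0]
out_nodes, out_edges = interaction_step(
    nodes,
    f_edge=lambda vi, vj: vi - vj,        # stand-in for the learned f^e
    f_node=lambda v, msg: v + 0.1 * msg,  # stand-in for the learned f^v
)
```

Each node's update depends on all other nodes through the aggregated edge messages, which is what lets the keypoint encoder model pairwise relational factors explicitly.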
Recent approaches (Jakab et al., 2018; Zhang et al., 2018; Kulkarni et al., 2019; Minderer et al., 2019) apply a transformation (i.e. rotation, deformation, viewpoint shift or temporal shift) to a source image to generate a target image. An autoencoder network with a keypoint-bottleneck layer is trained to reconstruct the target image given the source image and thereby learns to infer the most salient image regions that differ between source and target image pairs (Jakab et al., 2018). However, since the initial transformation does not focus on intrinsic object properties, such methods are more susceptible to ’false positives’ and ’false negatives’. In extreme scenarios, such approaches may even fail to infer any useful object keypoints at all (e.g. as shown in Figure 2(b)).\nThe more general framework of discovering salient image regions (Itti et al., 1998) is closely related to inferring object keypoints. Early work explored the connections between information-theoretic measures of information gain or local entropy (Kadir & Brady, 2001; Jagersand, 1995) and their role in driving fixation (attention) patterns in human vision. For example, Bruce & Tsotsos (2005) use Shannon’s self-information metric as a measure of saliency, while Zhmoginov et al. (2019) leverage mutual information to guide saliency masks. An interesting distinction is between local and global saliency, where the former is concerned with (dis)similarity between local image regions and the latter focuses on how (in)frequently information content occurs across an entire dataset (Borji & Itti, 2012). Our prediction objective accounts for both: the LSP accounts for local saliency, while more global saliency is achieved by having the prediction network solve this local prediction problem across an entire dataset of images. 
In this way, it can also reason about statistical regularities that only occur at this more global level.\nAn alternative to inferring object keypoints (and corresponding representations) is to learn entire object representations. Recent approaches can be broadly categorized into spatial mixture models (Greff et al., 2017; 2019; Burgess et al., 2019), sequential attention models (Eslami et al., 2016; Kosiorek et al., 2018) or hybrid models that are combinations of both (Lin et al., 2020). While these methods have shown promising results, they have yet to scale to more complex visual settings. In contrast, object keypoints focus on more low-level information (i.e. object parts) that can be inferred more easily, even in complex visual settings (Minderer et al., 2019). Recent work on object detection (Duan et al., 2019; Zhou et al., 2019) has explored the use of a center point (i.e. keypoint) and a bounding box as an object abstraction. However, they use ground-truth bounding boxes of objects to infer keypoint locations, whereas our method is fully unsupervised. Alternatively, Isola et al. (2015) propose a self-supervised framework for instance segmentation where local image patches are classified as belonging together based on their spatial or temporal distance.\nGraph neural networks (GNNs) have been previously used for intuitive physics models (Battaglia et al., 2016; Chang et al., 2017; van Steenkiste et al., 2018; Janner et al., 2019; Veerapaneni et al., 2019; Kipf et al., 2020; Stanić et al., 2021). Other work instantiates the policy (Wang et al., 2018) or value function (Bapst et al., 2019) as a GNN for RL tasks on continuous control or physical construction domains. However, the latter assume access to underlying ground-truth object(-part) information. In contrast, we learn our keypoint representations in a fully unsupervised manner from raw images. 
Further, we do not instantiate the policy or value function as a GNN, but rather use it to model relational factors between learned keypoints for downstream RL tasks, in a similar spirit to Zambaldi et al. (2019)." }, { "heading": "5 EXPERIMENTS", "text": "We empirically evaluate the efficacy of our approach on a subset of Atari games (Bellemare et al., 2013) that ensures good variability in terms of object sizes, colours, types, counts, and movement. As a baseline, we compare against the recently proposed Transporter (Kulkarni et al., 2019), which was previously evaluated on Atari. To evaluate the efficacy of object keypoints in facilitating sample-efficient learning on RL tasks we train agents in the low-data regime of 100,000 interactions, following the same experimental protocol as in Kulkarni et al. (2019); Kaiser et al. (2020).\nWe use a deep recurrent Q-learning variant (Hausknecht & Stone, 2015) using double-Q learning (Hasselt et al., 2016), target networks (Mnih et al., 2015) and a 3-step return as our RL algorithm for all agents, with a window size of 8 and batch size of 16. To implement the Q-function we use a 128 unit LSTM network (Hochreiter & Schmidhuber, 1997; Gers, 1999). Full experimental details can be found in Appendix A and additional results in Appendix B [1]." }, { "heading": "5.1 ATARI", "text": "We train PermaKey on Atari and find that it is frequently able to recover keypoints corresponding to object parts. For example, in Figure 2(a) (row 3) it can be seen how our method discovers keypoints corresponding to the road and traffic in Enduro (column 3) and player and enemy tank parts (e.g. wheels, cannon) in Battlezone (column 4). On Frostbite we find that it learns about the borders of the ice platforms, the player, scoreboard and birds, while on MsPacman we observe that it tracks the various characters like the blue ghosts and Pac-Man. 
We emphasize that while the precise location of the keypoints often seems to focus on the borders of predictability, their associated window of attention (i.e. in PointNet) is able to capture most of the object part. This is better seen in Figure 4 (and in Figure 10 in Appendix B), where the corresponding windows of attention (available to an agent) are shown. From the error maps (rows 4 and 5 in Figure 2(a)) it can be seen how the LSPN implicitly learns to discover salient parts of the image, and how the choice of feature embedding shifts the focus from local edges to entire objects. In Figure 5 (and Figures 8 and 9) it is shown how PermaKey is stable across multiple different runs.\nCompared to Transporter (Figure 2(a) row 2) we find that our approach often produces qualitatively better keypoints. For example, on Enduro and Battlezone the Transporter fails to place keypoints on salient object parts belonging to the road and traffic or parts of the player and enemy tank (e.g. its cannon, wheels etc.). This is intuitive, since in these environments moving image regions do not always correspond to objects. Indeed, on Frostbite and MsPacman, where this is not the case, the keypoints produced by Transporter and our method are of comparable quality.\nIn Table 1 we provide quantitative results of training agents that use these keypoints on 4 Atari environments. We observe that agents using PermaKey keypoints (PKey-CNN in Table 1) outperform corresponding variants using Transporter keypoints (original from Kulkarni et al. (2019) and our re-implementation) on 3 environments [2]. Notice how on Battlezone we were unable to train a policy that is better than random when using Transporter keypoints. This is intuitive, since in Figure 2(a) we observed how Transporter learns keypoints that are not concerned with salient objects on this environment.\n[1] Code to reproduce all experiments is available at https://github.com/agopal42/permakey.
For comparison, we also include scores from Kaiser et al. (2020) in Table 1, where a similar experimental setup is used. It can be seen that SimPLE (Kaiser et al., 2020), Rainbow (Hessel et al., 2018) and PPO (Schulman et al., 2017) also achieve worse scores than our PKey agents.\nWhen using a GNN for downstream processing (PKey+GNN) we are able to obtain additional improvements on the Battlezone and Frostbite environments. Meanwhile, on MsPacman and Seaquest performance drops, indicating that the stronger spatial inductive bias (i.e. nearby keypoints are processed similarly) implemented by CNNs can be advantageous as well. Note that the relative improvements of (Transporter+GNN) compared to (Transporter+CNN) largely mirror the relative changes shown by analogous PermaKey variants (CNN vs GNN keypoint encoder) across all environments. Further, it can be seen how PermaKey consistently outperforms Transporter, even when using GNN keypoint encoders for both. On MsPacman we observed a larger standard deviation for Transporter, compared to the results reported by Kulkarni et al. (2019), although we did tune the hyper-parameters for our re-implementation separately. A possible reason is our use of a lower batch size of 16 (compared to 32) and a lower window size of 8 (compared to 16). We were unable to investigate this further due to constraints on the available amount of compute. On Frostbite, we observe large variances for both (PKey+GNN) and (Transporter+GNN) agents, as some policies complete the first level (resulting in very high scores of about 2000 points), while others learn sub-optimal strategies yielding episodic returns around 220." }, { "heading": "5.2 ROBUSTNESS TO DISTRACTORS", "text": "In order to demonstrate the limitations of the explicit motion bias in Transporter we synthesized a modified version of Atari (“noisy” Atari) that contains a coloured strip (either horizontal, vertical or both) at random locations in the environment (see row 1 in Figure 2(b)), thereby creating the illusion
}, { "heading": "5.2 ROBUSTNESS TO DISTRACTORS", "text": "In order to demonstrate the limitations of the explicit motion bias in Transporter we synthesized a modified version of Atari (“noisy” Atari) that contains a coloured strip (either horizontal, vertical or both) at random locations in the environment (see row 1 in Figure 2(b)), thereby creating the illusion\n2Unfortunately, Kulkarni et al. (2019) have not open-sourced their code (needed to reproduce RL results and apply Transporter on other environments) and their self-generated dataset containing ‘ground-truth’ keypoints for evaluation on Atari. We were able to obtain certain hyper-parameters through personal communication.\nGame Transp. (re-imp.) PKey + CNN Seaquest 357.3 (99.10) 562.0 (114.78) MsPacman 470.0 (212.90) 574.7 (290.07)\nMsPacman 5 keypoints 7 keypoints 10 keypoints Transp. (re-imp.) 923.0 (433.95) 983.0 (806.3) 907.0 (317.41)\nPKey + CNN 1004.3 (319.15) 1038.5 (417.1) 1003.3 (313.07)\nTable 3: Keypoint ablation on the MsPacman env.\nof motion. Figure 2(b) (row 3) demonstrates how our method is more resistant against this distractor by focusing on predictability and is able to recover keypoints belonging to object parts as before (eg. player, skull, ladder positions in Montezuma’s Revenge). Indeed, notice how the distractor does not show up in the error maps (rows 4 and 5 in Figure 2(b)) as it corresponds to a predictable pattern that was frequently encountered across the entire dataset3. In contrast, we find that Transporter (row 2) dedicates many of the available keypoints to the distractor as opposed to the static salient object parts.\nAs a quantitative experiment, we consider the downstream performance of agents that receive either Transporter keypoints or PermaKey keypoints on “Noisy” versions of Seaquest and MsPacman. 
In Table 2 it can be seen how the performance of the PermaKey agent (PKey + CNN) remains largely unaffected on Seaquest (compared to Table 1), while the sensitivity of Transporter to such moving distractors causes a significant drop in performance. On “Noisy” MsPacman, both agents perform worse compared to before, but PermaKey only to a lesser extent. We hypothesize that the general performance drop is caused by the distractor bars occluding the fruits (collecting fruits yields rewards) located along the horizontal and vertical columns at every timestep. In general, we observe that PermaKey consistently outperforms Transporter on these tasks." }, { "heading": "5.3 ABLATION STUDY", "text": "Number of keypoints We gain additional insight into our approach by modulating the number of keypoints and observing the qualitative effects. On the Frostbite environment we use 15, 20, 25 keypoints (shown in Figure 6), while keeping all the other hyperparameters the same as described in Appendix A. When using additional keypoints we observe that PermaKey prioritizes regions in the error map that contribute most to the reconstruction loss. On the other hand, when the number of keypoints is too large, redundant keypoints are placed at seemingly random locations, e.g. at the border of the image.\nWe also quantitatively measure the effects of varying the number of keypoints {5, 7, 10} produced by PermaKey and Transporter on downstream performance in MsPacman (Table 3), using the same evaluation protocol as before. The agent using PermaKey outperforms the one using Transporter keypoints across the entire keypoint range. Since the Transporter keypoints are biased towards image regions that change across frames, additional keypoints only seem to track the different characters that move around the maze. 
With PermaKey focusing on predictability, additional keypoints tend to capture other salient parts of the maze, although this did not yield improved downstream performance in this case.\nSpatial resolution of the feature embedding We experiment with the choice of feature layer(s) used for the local spatial prediction task and observe the qualitative effects on the keypoints discovered. On the Space Invaders environment we use the following sets of feature layer(s) = {(0), (0, 1), (2, 3), (0, 1, 2, 3)} and retain the same values for all the other hyperparameters as described in Appendix A. Results are shown in Figure 7 in Appendix B. When only the lower layers are used, we find that the corresponding error maps tend to focus more on highly local image features. In contrast, when only higher layers are used, we find that the loss of spatial resolution leads to highly coarse keypoint localization. In practice, we found that using feature layers 0 and 1 offers the right balance between spatial locality and expressiveness of features.\nThe ice floats in Frostbite, which are captured as keypoints (Figure 2(a)), serve as a useful example that illustrates the importance of integrating multiple spatial resolutions (i.e. that trade off local and global information, as also noted in Itti et al. (1998); Jagersand (1995)). Ice floats seen at the default image resolution (layer-0, with ‘conv’ receptive field = 4) might seem very predictable in their highly local context of other ice float parts. However, casting the same local prediction task of predicting the ice float features at layer-1 (image resolution 0.5x due to stride = 2 in the conv-encoder) given only blue background context would lead to some error spikes. This is because at many locations in the image, it seems reasonable to expect blue background patches to surround other blue background patches.\n[3] This observation also confirms that the LSPN does not collapse to a naive color thresholder. 
Only at certain locations are white ice floats interspersed with blue background patches, thereby driving some amount of error (surprise) in local predictability (in Figure 2(a), column 1, compare rows 4 & 5). Hence, varying the spatial resolution of the feature embedding and integrating the corresponding error maps lets us accommodate objects of varying shapes, sizes, geometry etc." }, { "heading": "6 DISCUSSION", "text": "We have seen how PermaKey, by focusing on local predictability, is able to reliably discover salient object keypoints while being more robust to certain distractors compared to Transporter (Kulkarni et al., 2019). However, it should be noted that PermaKey can only account for a “bottom-up” notion of saliency (Itti, 2007) based on the observed inputs, and does not consider task-dependent information. Therefore, a potential limitation of this approach is that it might capture objects that are not relevant for solving the task at hand (i.e. corresponding to “unpredictable distractors”). On the other hand, a purely unsupervised approach to extracting keypoints may allow for greater re-use and generalizability. The same (overcomplete) set of keypoints can now be used to facilitate a number of tasks, for example by querying (and attending to) only the relevant keypoints using top-down attention (Mott et al., 2019; Mittal et al., 2020). In this way it may readily facilitate other tasks, such as a “new object” acting as the key in Montezuma’s Revenge, without having to re-train the bottom-up vision module.
In that case, since PermaKey acts on these learned spatial embeddings, it could leave the quality of the error map(s) relatively unchanged. Further, we note that such distractors are typically the result of artificial noise and are uncommon in the real world, unlike distractors due to motion, which are therefore more important to handle effectively.\nIn this work, we have trained PermaKey in a purely unsupervised fashion by pre-training it once on a dataset of collected episodes of game-play. On games where new objects are introduced only at much later levels, it would be beneficial to continue refining the keypoint module using the unsupervised loss (Kulkarni et al., 2019). Further, it may be interesting to consider whether there are benefits to learning both the keypoint module and the policy network in an end-to-end fashion. While this is expected to reduce the re-use and generalizability of the discovered keypoints, it may make it easier to learn relevant keypoints due to directly considering task-specific information (e.g. via policy gradients)." }, { "heading": "7 CONCLUSION", "text": "We proposed PermaKey, a novel approach to learning object keypoints (and representations) that leverages predictability to identify salient regions that correspond to object parts. Through extensive experiments on Atari it was shown how our method is able to learn accurate object keypoints that are more robust to distractors, unlike the Transporter baseline approach. RL agents that employ PermaKey keypoints as input representations show sample-efficient learning on several Atari environments and outperform several other baselines. Further, it was shown how additional improvements can sometimes be obtained when processing keypoints using graph neural networks. Future work includes scaling PermaKey to more complex 3D visual worlds involving changing viewpoints, occlusions, egocentric vision, etc.
Similarly, designing quantitative evaluation metrics to directly measure the quality of keypoints extracted by different methods, without the need for computationally expensive downstream tasks, would allow for efficient performance analysis and model design as we begin to scale up these ideas to more complex visual domains." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Aleksandar Stanić and Aditya Ramesh for valuable discussions. This research was partially funded by ERC Advanced grant no: 742870 and by Swiss National Science Foundation grants: 200021 165675/1 & 200021 192356. We also thank NVIDIA Corporation for donating a DGX-1 as part of the Pioneers of AI Research Award and IBM for donating a Minsky machine." }, { "heading": "A EXPERIMENT DETAILS", "text": "" }, { "heading": "A.1 DATASETS", "text": "To obtain Atari game frames, we use rollouts of various pre-trained agents in the Atari Model Zoo (Such et al., 2019). We split the aggregated set of game frames for each of the chosen environments into separate train, validation and test sets of 85,000, 5000 and 5000 samples respectively.\nFor “noisy” Atari, we start with the regular Atari dataset and superimpose colored bars (either horizontal, vertical or both) centered at random x-y co-ordinates. This is done on-the-fly during training/evaluation to generate the required Noisy Atari samples." }, { "heading": "A.2 ARCHITECTURE AND TRAINING DETAILS", "text": "" }, { "heading": "A.2.1 PERMAKEY", "text": "We train our method using the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.0002 and a decay rate of 0.85 every 10000 steps. We use a batch size of 32 and in all cases train for 100 epochs, using early stopping of 10 epochs on the validation set to prevent overfitting. All modules (i.e. the VAE, the prediction network, and PointNet) are trained in parallel using their respective losses. We do not backpropagate gradients between modules to improve stability.
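The training schedule above (initial learning rate 0.0002, decay rate 0.85 every 10000 steps) can be sketched as a plain function. This assumes a staircase-style decay, which is the natural reading of "every 10000 steps"; the function name is ours:

```python
def permakey_learning_rate(step, base_lr=2e-4, decay=0.85, every=10_000):
    """Staircase exponential decay: multiply the base rate by `decay`
    once per completed block of `every` optimizer steps."""
    return base_lr * decay ** (step // every)
```

For example, the rate stays at 2e-4 for the first 10000 steps and drops to 2e-4 * 0.85 at step 10000.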
The same hyperparameters are used across all environments except for the number of keypoints.\nVariational Autoencoder We use a convolutional neural network with 4 Conv-BatchNorm-ReLU layers for the encoder network of the VAE. The encoder network uses kernel sizes [4, 3, 3, 3], filters [32, 64, 64, 128] and strides [1, 2, 2, 1]. The architecture of the decoder is the transpose of that of the encoder, with 2× bi-linear upsampling used to undo the striding. Preliminary experiments were used to determine the kernel size of the convolutional layers in the encoder. There it was observed that a kernel size of 4 on the first layer for 84 × 84 images achieves the best results, and that larger kernel sizes and strides (≥ 2) reduce the spatial resolution of the feature maps and lead to coarser keypoint localization.\nPrediction Network We use a separate 3-layer MLP with hidden layer sizes [8 × p × p × C, 512, 256, p × p × C], where p denotes the height and width of an activation patch and C the number of channels of the activation map of the encoder CNN. We use linear output activations for the prediction network and use a separate network for each of the selected layers of the VAE encoder, which each perform a separate local spatial prediction task. We use encoder layers [0, 1] and p = 2 for the local spatial prediction task.\nPointNet For the PointNet network we use the same encoder architecture as for the VAE, but add a final 1 × 1 regressor to K feature-maps corresponding to K keypoints (Jakab et al., 2018), with a standard deviation of 0.1 for the Gaussian masks. We resize predictability maps to 84 × 84 and concatenate them channel-wise before feeding them to the PointNet." }, { "heading": "A.2.2 TRANSPORTER RE-IMPLEMENTATION", "text": "We re-implemented the Transporter model (Kulkarni et al., 2019) for our baseline method. We used an encoder network of 4 Conv-BatchNorm-ReLU layers.
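The encoder strides [1, 2, 2, 1] determine the spatial resolution of each feature layer (and hence of the corresponding error maps). The sketch below traces the feature-map size through the four layers, assuming "same"-style padding so that only striding changes resolution — an assumption on our part, consistent with the 0.5x-per-stride-2 behaviour described for layer-1:

```python
import math

def feature_map_sizes(input_hw=84, strides=(1, 2, 2, 1)):
    """Spatial size after each encoder layer, assuming 'same' padding:
    out = ceil(in / stride), so resolution is reduced only by striding."""
    sizes, hw = [], input_hw
    for s in strides:
        hw = math.ceil(hw / s)
        sizes.append(hw)
    return sizes
```

For an 84 × 84 input this gives layer sizes 84, 42, 21, 21, i.e. layer-0 keeps full resolution while layers 1 and 2 each halve it.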
The feature extractor network Φ uses kernel sizes [3, 3, 3, 3], filters [16, 16, 32, 32] and strides [1, 1, 2, 1]. The PointNet Ψ uses the same architecture but includes a final 1× 1 regressor to K feature-maps corresponding to K keypoints. 2D co-ordinates are computed from these K maps as described in Jakab et al. (2018). The RefineNet uses the transpose of Φ with 2× bi-linear upsampling to undo striding. We used the Adam optimizer with a learning rate of 0.0002, a decay rate of 0.9 every 30000 steps, and a batch size of 64. We trained for 100 epochs with an early-stopping parameter of 5 to prevent over-fitting.\nThe same hyperparameters are used across all environments except for the number of keypoints." }, { "heading": "A.2.3 RL AGENT AND TRAINING", "text": "The convolutional keypoint encoder in the Transporter and PKey+CNN models consists of 4 Conv-BatchNorm-ReLU layers with kernel sizes [3, 3, 3, 3], strides [1, 1, 2, 1] and filters [128, 128, 128, 128]. The final convolution layer activation is flattened and passed through a Dense layer with 128 units and ReLU activation. The convolutional keypoint encoder receives the keypoint features obtained in the following manner: let Φ(xt) be a [H,W,C] tensor of convolutional features and H(xt) a [H,W,K] tensor of keypoint masks for an input image xt. Keypoint features are obtained by multiplying keypoint masks with convolutional features and superposing the masked features of all K keypoints.\nThe MLP architecture used as the building block throughout the graph neural network consists of 2 layers, each having 64 ReLU units. We use a similar MLP (but having linear output units) as the positional encoder network to produce learned positional embeddings given keypoint centers.
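The keypoint-feature computation described above (mask the convolutional features with each keypoint mask, then superpose over the K keypoints) reduces to a per-location weighted sum. A dependency-free sketch with nested lists standing in for the [H,W,C] and [H,W,K] tensors; the function name is ours:

```python
def keypoint_features(features, masks):
    """features: [H][W][C] conv features; masks: [H][W][K] keypoint masks.
    Returns [H][W][C]: sum over keypoints k of mask_k * features, which
    equals (sum of masks) * features at each spatial location."""
    H, W = len(features), len(features[0])
    C, K = len(features[0][0]), len(masks[0][0])
    out = [[[0.0] * C for _ in range(W)] for _ in range(H)]
    for i in range(H):
        for j in range(W):
            m = sum(masks[i][j][k] for k in range(K))  # superposed mask weight
            for c in range(C):
                out[i][j][c] = m * features[i][j][c]
    return out
```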
Output node values from the Decode block of the GNN are concatenated and fed into a 2-layer MLP with 128 ReLU units each before being input to the agent.\nFor training the RL agents, we use a Polyak weight averaging scheme at every step (constant=0.005) to update the target Q-network. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0002 and clip gradients to a maximum norm of 10. For ε-greedy exploration we linearly anneal ε from 1 to 0.1 over the entire 100,000 steps (or 400,000 frames).\nDuring training we checkpoint policies by running 10 episodes on a separate validation environment initialized with a seed different from the one used for training. We checkpoint policies based on their mean scores on this validation environment. For final evaluation (reporting test scores), we take the best checkpointed models from 3 different runs and run 10 evaluation episodes for each of the 3 policies on 10 previously unseen test environment seeds. We report the mean (and std. dev.) for the 3 different policies using the median evaluation run over the 10 test seeds. We emphasize that different environment seeds are used for training, validation (checkpointing), and testing. Further, we do not add any noise to the policy used for evaluation episodes." }, { "heading": "B ADDITIONAL RESULTS", "text": "" }, { "heading": "B.1 ADDITIONAL VISUALIZATIONS", "text": "" }, { "heading": "B.1.1 NUMBER OF KEYPOINTS", "text": "On the Frostbite environment we modulate the number of keypoints produced by the PointNet while keeping all other hyperparameters the same as described above in Appendix A (visualization shown in Figure 6)." }, { "heading": "B.1.2 EFFECT OF LAYER CHOICE", "text": "On the Space Invaders environment we modulate the feature layer(s) chosen = {(0), (0, 1), (2, 3), (0, 1, 2, 3)} for the local spatial prediction task while keeping all other hyperparameters the same as described above in Appendix A."
}, { "heading": "B.1.3 SEEDS", "text": "We evaluate the stability of our keypoint discovery method over 5 random seeds on the Enduro and Battlezone environments, with all hyperparameter settings the same as described in Appendix A; the qualitative results are shown in Figure 8 and Figure 9." }, { "heading": "B.1.4 KEYPOINT MASKED REGIONS", "text": "Figure 10 shows the keypoint mean (center) as well as the Gaussian mask around it. We can see clearly from Figure 10 that although the keypoint centers might be slightly focused towards the borders of predictability, their corresponding attention windows ensure good coverage of salient object parts in the scene." } ]
2021
UNSUPERVISED OBJECT KEYPOINT LEARNING USING LOCAL SPATIAL PREDICTABILITY
SP:37921395ed7b214f2921391b2fde1f2ba209719f
[ "The paper \"Do Transformers Understand Polynomial Simplification?\" introduces a new reasoning task (convert polynomials into a normal form) and studies the performance and errors of Transformers on this task. The task itself is quite simple: given a randomly generated term involving small constants, variables, additions, and multiplications, bring the term into a well-defined normal form. Each task has to be solved in a unique sequence of steps. The authors study the performance of Transformers to either simplify the expressions step by step or to predict the simplified version directly.", "The authors analyze the performance of Transformer models on simplifying polynomials and - importantly - generating proofs at the same time. This is a very nice idea that allows to study the performance of Transformers in depth and at the same time in an important setting where verification is performed as part of running the model. And the authors show a strong baseline, with models performing very well in a number of settings. A few areas seem to have been neglected though. For one, the authors only train a 4-layer 4-head model, which is quite small as far as Transformers go. Maybe it's irrelevant for this problem - but having at least one bigger model as a point of comparison would be good. Next, the out-of-distribution question warrants more experiments. Can the Transformers simplify polynomials with way more factors than trained on? With a higher number of variables? Higher degrees? The authors also show that one main problem for Transformers is learning to multiply the coefficients. But - assuming this reviewer understood correctly - the authors do not apply the proof requirement to multiplication. E.g., for \"12*3\" the model has to immediately output \"36\" rather than \"10*3 + 2*3 = 30 + 6 = 36\". Maybe this could help the Transformer learn and be more resilient to coefficient size? 
So while the current version of the paper is ok, there are a few areas for improvement which prevent it from being a clear accept." ]
Recently researchers have demonstrated that Transformers can be trained to learn symbolic tasks such as solving integration and differential equations in an end-to-end fashion. In these setups, for an input symbolic expression, the Transformer predicts the final solution in a single step. Since such tasks may consist of a sequence of logical steps, the question remains whether such networks have understood and learnt individual steps to reach the solution. To take a deeper look, we consider the task of polynomial simplification. Polynomials can be written in a simple normal form as a sum of monomials which are ordered in a lexicographic order. For a polynomial which is not necessarily in this normal form, a sequence of simplification steps is applied to reach the fully simplified (i.e., in the normal form) polynomial. For this task, we describe a synthetic Polynomial dataset generation algorithm which generates polynomials with unique proof steps. Then, we conduct an extensive analysis of the Transformer’s abilities to learn the polynomial simplification task along different dimensions.
[]
[ { "authors": [ "Eser Aygün", "Zafarali Ahmed", "Ankit Anand", "Vlad Firoiu", "Xavier Glorot", "Laurent Orseau", "Doina Precup", "Shibl Mourad" ], "title": "Learning to prove from synthetic theorems", "venue": "arXiv preprint arXiv:2006.11259,", "year": 2020 }, { "authors": [ "Ernest Davis" ], "title": "The use of deep learning for symbolic integration: A review of (lample and charton, 2019)", "venue": "arXiv preprint arXiv:1912.05752,", "year": 2019 }, { "authors": [ "Leonardo de Moura", "Soonho Kong", "Jeremy Avigad", "Floris van Doorn", "Jakob von Raumer" ], "title": "The lean theorem prover (system description)", "venue": "Automated Deduction - CADE-25,", "year": 2015 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Alex Graves", "Marc G Bellemare", "Jacob Menick", "Rémi Munos", "Koray Kavukcuoglu" ], "title": "Automated curriculum learning for neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Christopher Hahn", "Frederik Schmitt", "Jens U. Kreber", "Markus N. Rabe", "Bernd Finkbeiner" ], "title": "Transformers generalize to the semantics of logics, 2020", "venue": null, "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Guillaume Lample", "François Charton" ], "title": "Deep learning for symbolic mathematics", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tambet Matiisen", "Avital Oliver", "Taco Cohen", "John Schulman" ], "title": "Teacher-student curriculum learning", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2019 }, { "authors": [ "Aditya Paliwal", "Sarah M. Loos", "Markus N. 
Rabe", "Kshitij Bansal", "Christian Szegedy" ], "title": "Graph representations for higher-order logic and theorem proving", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Bartosz Piotrowski", "Josef Urban", "Chad E. Brown", "Cezary Kaliszyk" ], "title": "Can neural networks learn symbolic rewriting", "venue": null, "year": 2019 }, { "authors": [ "Stanislas Polu", "Ilya Sutskever" ], "title": "Generative language modeling for automated theorem proving", "venue": "CoRR, abs/2009.03393,", "year": 2020 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in bertology: What we know about how bert", "venue": null, "year": 2020 }, { "authors": [ "David Saxton", "Edward Grefenstette", "Felix Hill", "Pushmeet Kohli" ], "title": "Analysing mathematical reasoning abilities of neural models", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Christian Szegedy" ], "title": "A promising path towards autoformalization and general artificial intelligence", "venue": "Intelligent Computer Mathematics,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jesse Vig" ], "title": "A multiscale visualization of attention in the transformer model", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations,", "year": 2019 }, { "authors": [ "Lucas Willems", "Salem Lahlou", "Yoshua 
Bengio" ], "title": "Mastering rate based curriculum learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yuhuai Wu", "Albert Jiang", "Jimmy Ba", "Roger B. Grosse" ], "title": "INT: an inequality benchmark for evaluating generalization in theorem proving", "venue": "CoRR, abs/2007.02924,", "year": 2020 }, { "authors": [ "Willems" ], "title": "tiplication (of numeric coefficients and symbolic variables); where multiplying variables precludes learning to add exponents of similar variables. As these sub-tasks are well-defined and dependencies among them are clear, we explore different types of curriculums based on the Mastering-Rate-based (MR) curriculum learning algorithm", "venue": null, "year": 2020 }, { "authors": [ "Willems" ], "title": "Model parameters and the training configurations remains the same as before6", "venue": "We show the results in Table 17 for COARSE configuration", "year": 2020 } ]
[ { "heading": null, "text": "Recently researchers have demonstrated that Transformers can be trained to learn symbolic tasks such as solving integration and differential equations in an end-to-end fashion. In these setups, for an input symbolic expression, the Transformer predicts the final solution in a single step. Since such tasks may consist of a sequence of logical steps, the question remains whether such networks have understood and learnt individual steps to reach the solution. To take a deeper look, we consider the task of polynomial simplification. Polynomials can be written in a simple normal form as a sum of monomials which are ordered in a lexicographic order. For a polynomial which is not necessarily in this normal form, a sequence of simplification steps is applied to reach the fully simplified (i.e., in the normal form) polynomial. For this task, we describe a synthetic Polynomial dataset generation algorithm which generates polynomials with unique proof steps. Then, we conduct an extensive analysis of the Transformer’s abilities to learn the polynomial simplification task along different dimensions." }, { "heading": "1 INTRODUCTION", "text": "With the state-of-the-art performance of Deep Neural Nets (DNNs) in perceptual tasks, researchers have started to explore their logical reasoning capabilities, in particular within the domain of Automated Theorem Proving (ATP).
In these domains (LEAN (de Moura et al., 2015), HOL Light and Mizar (miz, 2020)), many recent works (Paliwal et al., 2020; Aygün et al., 2020; Hahn et al., 2020) have shown that Graph Neural Networks (Gori et al., 2005; Veličković et al., 2018) and Transformers (Vaswani et al., 2017) can be trained to perform impressively on the theorem-proving task as part of a neuro-symbolic system.\nIn a related but different development, recently Lample & Charton (2019) showed that for symbolic integration and differential equations, a large amount of synthetic end-to-end examples can be generated using symbolic systems. In these tasks, the authors show that Transformer networks can be trained to produce the final solution from an input integral (or differential equation) in a single step. This points to the exciting possibility of using deep neural nets to learn end-to-end theorem provers, and can be beneficial for formal mathematics (Szegedy, 2020). However, the setup combines multiple reasoning steps in a single shot. Additionally, integration (or differential equation solving) is a complex task requiring understanding of the integral symbols, functions, variables, and the basic concepts of arithmetic. As the system in Lample & Charton (2019) is simply trained to output the top solution(s) and a corresponding confidence score(s), it is unclear what internal mechanisms enable these models to solve these problems. This lack of transparency has been noted in this context (Davis, 2019). An earlier work by Piotrowski et al. (2019) showed similar results for certain symbolic manipulation tasks and their work shares the same limitation.\nIn this paper we ask if instead of only producing the end-result of symbolic manipulation or integral, can we have the model produce a human-readable proof as well. 
While we do not know if these models reason in the way humans do, one way to produce proofs would be to “extract” a proof from the models of the above type by “probing” them in some manner. The problem of unraveling the inner workings of Transformers by probing is an active area of research; however, at present our understanding is still evolving (Rogers et al., 2020). Hence taking a detour, we instead train the model to produce the full proof.\nInspired by Piotrowski et al. (2019), we explore a novel but simpler setting of polynomial simplification. We illustrate the task with an example. We begin with a polynomial which is a sum of products of factors, where each factor is again a sum of monomials (including constants), as shown below (the second summand is a product, (3 ∗ x_2^1 + 4) is a factor, and 3 ∗ x_2^1 is a term):\nP0 = (2 ∗ x_2^2) ∗ (3 ∗ x_2^1 + 4) + (5 ∗ x_1^2 + x_1^1 ∗ x_2^1) ∗ (3 ∗ x_1^1) ∗ (2), /* Initial */\nTo construct unique simplification steps, first each term in a factor is simplified. Once all factors are simplified (facstep), then within a product, all factors are multiplied (mulstep). Lastly, simplified products are summed (sumstep).\nP0 = (2 ∗ x_2^2) ∗ (3 ∗ x_2^1 + 4) + (5 ∗ x_1^2 + x_1^1 ∗ x_2^1) ∗ (3 ∗ x_1^1) ∗ (2), /* FACSTEP */\n\n= (2 ∗ x_2^2) ∗ (3 ∗ x_2 + 4) + (5 ∗ x_1^2 + x_1^1 ∗ x_2^1) ∗ (3 ∗ x_1^1) ∗ (2), (P1), /* FACSTEP */\n\n= (2 ∗ x_2^2) ∗ (3 ∗ x_2 + 4) + (5 ∗ x_1^2 + x_1 ∗ x_2) ∗ (3 ∗ x_1) ∗ (2), (P2), /* MULSTEP */\n\n= (6 ∗ x_2^3 + 8 ∗ x_2^2) + (5 ∗ x_1^2 + x_1 ∗ x_2) ∗ (3 ∗ x_1) ∗ (2), (P3), /* MULSTEP */\n\n= (6 ∗ x_2^3 + 8 ∗ x_2^2) + (30 ∗ x_1^3 + 6 ∗ x_1^2 ∗ x_2), (P4), /* SUMSTEP */\n\n= 30 ∗ x_1^3 + 6 ∗ x_2^3 + 6 ∗ x_1^2 ∗ x_2 + 8 ∗ x_2^2. (P5), /* ENDPOINT */\nPiotrowski et al. (2019) explores the task of learning symbolic re-writes of an entire expression. In contrast, in our setting, for step-wise prediction, at each step the system needs to find the candidate sub-expression and a relevant simplification type to perform the simplification.
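The worked example above can be replayed mechanically by representing a polynomial as a map from exponent tuples (d_1, d_2) to integer coefficients: multiplying out each product and summing recovers the endpoint normal form. This is an illustrative sketch in our own representation, not the paper's Sympy-based pipeline:

```python
from functools import reduce

def poly_mul(p, q):
    """Multiply two polynomials given as {exponent-tuple: coefficient}."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))  # add exponents variable-wise
            out[e] = out.get(e, 0) + c1 * c2
    return out

def poly_add(p, q):
    """Add two polynomials, dropping cancelled (zero) monomials."""
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def expand(products):
    """products: list of products, each a list of factors; returns the
    fully simplified polynomial (the endpoint)."""
    return reduce(poly_add, (reduce(poly_mul, fs) for fs in products))
```

Running it on the initial polynomial of the example, with (d_1, d_2) the exponents of x_1 and x_2, yields the monomials of P5 (30 x_1^3, 6 x_2^3, 6 x_1^2 x_2, 8 x_2^2).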
This setup resembles the traditional ATP setup where a system needs to learn and execute symbolic steps to reach a final solution. But it is simpler as for each step only one type of simplification is applicable. By proof for an initial polynomial (P0) we mean the sequence of simplification steps (P1 to P5). A model trained on step-wise prediction task, can be used to generate a full proof. Essentially, we start with an initial polynomial, and recursively feed the model output to itself, till it generates the final simplified polynomial (in normal form). A proof is correct when all steps are correct.\nIn the above setting (termed COARSE), all terms in a factor are simplified at once in a facstep, and similarly all factors in a product are simplified at once in a mulstep. Additionally, we define another setting FINER, where a facstep involves simplification of a single term, and a mulstep involves multiplications of only two factors at once, illustrated below with an example (for facstep):\nP0 = (5 ∗ x21 + x11 ∗ x12) ∗ (3 ∗ x11) ∗ (2), /* FACSTEP */\n= (5 ∗ x21 + x1 ∗ x2) ∗ (3 ∗ x11) ∗ (2), /* FACSTEP */\n= (5 ∗ x21 + x1 ∗ x2) ∗ (3 ∗ x1) ∗ (2).\nAs a state-of-the-art model, we explore Transformers. While both Graph Neural Networks and Transformers have been used for single-step representation learning of symbolic theorems and single step goal-theorem scoring, Transformer-based sequence-to-sequence networks have shown superiority in end-to-end tasks in integration, differential equations (Lample & Charton, 2019) and temporal logic (Hahn et al., 2020) domains. Hence for the aforementioned tasks of step-wise polynomial simplification, we explore the Transformer’s ability along several dimensions. 
Our contributions are the following: 1) we propose polynomial simplification tasks requiring multiple steps of symbolic manipulation, 2) we show how datasets of different configurations can be generated synthetically for the task, 3) we propose an array of metrics to dissect the performance of Transformers, and 4) lastly through extensive experiments we show the performance of the Transformer on this task, establishing a strong baseline for future endeavors.\nResults Summary By varying over coefficient size, proof granularity and input representation (in Tables 1, 2, Appendix Table 6) we observe that 1) full proof accuracy is only slightly lower than single-shot endpoint prediction accuracy in many 1-variable configurations, 2) coarse granular proofs help learn somewhat more accurate proofs, 3) prefix representation helps in most cases but infix sometimes provides higher accuracy. More than 80% errors (Tab. 7 and 8 in Appendix) occur in multiplication steps, and we observe (through independent experiments) Transformer’s struggle\nto learn how to multiply numeric coefficients. By letting the system annotate the candidate subexpression, we observe that the system can understand candidate sub-expressions and which next step to perform explicitly (Tables 3, and Appendix Tables 9, 10, 11). Also, through visualization we observe similar effects (Figures 1, 2 Appendix). We see systems trained for 2-variable outperform corresponding 1-variable systems on 1-variable test sets. For 1 variable, we observe steady and significant higher gains (till 10% for full proof) using curriculum learning (Table 17 Appendix)." }, { "heading": "2 RELATED WORK AND DISCUSSION", "text": "Unlike the problems dealt with by the aforementioned automatic theorem provers and related neuralbased systems, polynomial simplification does not involve any search. Our problem is simpler and tests a specific ability, namely certain kinds of symbol manipulations. 
This simplicity affords certain advantages (shared by Piotrowski et al. (2019) and Lample & Charton (2019)): (1) We can generate artificial data to train models without limitations on the size. (2) It is easier to test the abilities of the models more thoroughly along multiple axes. (3) Accuracy achieved is much higher than for harder tasks, suggesting that fully solving such tasks may be possible in the near future.\nTo compare with symbolic manipulation systems we note that in more detail the ability tested by our task is the following: the model must be able to identify smallest parts of the polynomial that can be simplified: (1) simplification of a factor, (2) multiplication of two factors, (3) addition of two sub-polynomials. Having identified what simplification to apply the model must produce a new polynomial with just that simplification. This ability is not tested by previous neural-based symbolic manipulation systems such as Piotrowski et al. (2019) and Lample & Charton (2019) and related works such as Saxton et al. (2019) and Hahn et al. (2020). Several recent works have produced synthetic datasets for theorem proving tasks (Aygün et al., 2020; Wu et al., 2020; Polu & Sutskever, 2020), however, their focus remains more on search-based proofs." }, { "heading": "3 POLYNOMIAL SIMPLIFICATION DATASET", "text": "We proceed similarly to Lample & Charton (2019) to generate the symbolic polynomials and simplified steps synthetically using the Sympy library of Python. To have a fine-grained control over the generated polynomials and well-defined proof steps, we consider polynomials which are sums of products1. We also note that symbolic generation using the Sympy library lets us ensure correctness of each generated expressions and validity of each steps." }, { "heading": "3.1 NOTATIONS", "text": "We start with the set of variables xP = {x1, . . . , xnvar}. We represent the starting point polynomial P0 in xP as the sum of products of factors:\nP0 = P1 + P2 + . . . 
+ P_nprod, with P_i = ∏_{j=1}^{nfac_i} f_ij. (1)\nEach factor (f_ij) has the form f = ∑_k (a_k ∗ ∏_l x_kl^{d_kl}), where x_kl ∈ xP (dropping i, j for clarity). Here coefficients a_k ∈ N+, and the powers of the variables d_kl ∈ N. nprod is the number of products and nfac_i denotes the number of factors in P_i. We denote the set of factors as fP = {f_ij | ∃i, P_i = ∏_{j=1}^{nfac_i} f_ij}. The simplified endpoint polynomial is of the form P̂ = ∑_{m=1}^{q} t̂_m, where t̂_m = â_m ∗ ∏_n x_n^{d_mn}, with x_n ∈ xP. We use the symbol P̂_i to denote the simplified form of P_i. The functions terms(), vars(), coeffs() return lists of the terms, variables, and coefficients in the input expression. Our sampling algorithm guarantees that the generated polynomial and its simplified endpoint abide by constraints on the number of terms, products, factors and variables, and by limits on degree and coefficient sizes. An example is nprod ∈ {2, . . . , maxPP} (the full list is provided in Appendix Table 4).\n1The generation algorithm in Lample & Charton (2019) may generate nested sums and products. For such polynomials, a unique proof sequence is hard to define, which makes whole proofs harder to evaluate. Our restriction over the form of the polynomial helps us generate unique proofs, which are easier to evaluate." }, { "heading": "3.2 BUILDING A POLYNOMIAL PROOF", "text": "Here, we briefly describe the starting polynomial generation process; the detailed algorithm is in the appendix. Any randomly sampled polynomial (represented as a sum of products) can be included as a starting point in the dataset as long as the polynomial respects certain configuration parameters (in Appendix Table 4). This is unlike Lample & Charton (2019), where many randomly generated integrals (or differential equations) might not have a solution. Hence, we randomly sample the constraint parameters in a top-down manner, and then construct terms, factors and products in a bottom-up manner using the parameters.
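The top-down/bottom-up split described above can be sketched as follows. This is a simplified stand-in for Algorithms 1-2 in the paper's appendix: the function and parameter names are ours, and the real generator additionally enforces per-product degree, term, and coefficient budgets (rejecting or truncating factors that violate them):

```python
import random

def sample_polynomial(nvar=2, max_pp=4, max_deg=6, max_fac=3, seed=0):
    """Top-down: sample global parameters (number of products, factors).
    Bottom-up: build each product as a list of factors, each factor being
    a dict {exponent-tuple: coefficient} with positive coefficients."""
    rng = random.Random(seed)
    nprod = rng.randint(2, max_pp)          # number of products in the sum
    polynomial = []
    for _ in range(nprod):
        nfac = rng.randint(1, max_fac)      # number of factors in this product
        product = []
        for _ in range(nfac):
            factor = {}
            for _ in range(rng.randint(1, 2)):  # one or two monomials per factor
                exps = tuple(rng.randint(0, max_deg // max_fac) for _ in range(nvar))
                factor[exps] = factor.get(exps, 0) + rng.randint(1, 5)
            product.append(factor)
        polynomial.append(product)
    return polynomial
```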
We first sample the following 1) a set of participating variables ( xP ), 2) maximum degree for any monomial in the simplified polynomial (mdeg), and 3) the number of products in the starting polynomial (nprod). We then call the algorithm buildProduct (Algorithm 1 in appendix) to create nprod individual products.\nBuilding a Product In buildProduct (Algorithm 1 in Appendix), first we sample nfaci, the maximum number of factors in the product (Pi). We then build factors sequentially. For each new factor, we sample a subset of variables in a factor. We pass on product-level constraints such as maximum degree in a product, maximum terms in a product, and maximum coefficient for a product as rdegree, rterms and rcoeff respectively; and call the sub-routine buildFactor (Algorithm 2 to create a factor. After a factor is sampled, the constraints rdegree, rterms and rcoeff are updated. buildFactor is used to create at most nfaci factors, that all abide by the above constraints and stops if the limit of maximum degree in the product is reached. The terms in a factor are arranged in a lexicographical order. Since, this sequential generation of factors may induce a certain pattern of decreasing degrees and coefficients, we shuffle the factors to create the final product.\nSimplification Steps and Full Proof For both COARSE and FINER configurations, we build the proof steps in the following way: 1) first we do a sequence of facsteps where terms get collected within a factor (such as 2x + 3x to 5x, x1 and 1x becomes x); 2) then a sequence of mulsteps are performed where simplified factors are multiplied out; and 3) lastly, in sumstep simplified products are added together. As mentioned before, the sequence of simplification steps till the endpoint constitute a full proof." 
}, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASET", "text": "We vary dataset configurations along the following dimensions:\n• Number of Variables in the polynomial, product and factor is varied between 1 and 2.\n• Coefficients Size: The maximum coefficients in the polynomial, product and factor are gradually varied from {60, 20, 5} (SMALL), to {120, 40, 8} (MEDIUM) and {300, 100, 10} (LARGE). DEFAULT is {120, 40, 8}.\n• Maximum degree in the polynomial and a factor has two configurations: {6, 3} (DEFAULT), and {12, 5} (MEDIUM DEGREE).\n• Maximum number of terms in a simplified product and a factor has two configurations: {8, 3} (DEFAULT), and {20, 4} (MEDIUM TERMS). For the latter, we also set the maximum products in a sum and maximum factors in a product as 5 and 4 respectively.\n• No Backtrack: We also try a very large configuration (NO BACKTRACK) where the maximum coefficients in the polynomial, product and factor are {10125, 3375, 5}, and the maximum degrees in the polynomial and factor are set to {9, 3}. The maximum number of terms in a product is set to 27. This is a configuration where no sampled factor or product is ever rejected for violating any higher-level constraint.\nInfix and Prefix We focus on exploring seq2seq networks for all our experiments. We consider the prefix and infix traversals of the abstract syntax tree of the polynomial input as sequences. Lample & Charton (2019) briefly touched upon the usefulness of the prefix notation over infix, but did not provide any empirical evidence supporting the statement. Hence, in our experiments, we consider both INFIX and PREFIX representations." }, { "heading": "4.2 TASKS AND METRICS", "text": "We identify two central tasks: 1) Step-wise prediction: where an input polynomial is provided and the task is to perform the next proof step, and 2) Endpoint Prediction: where given a polynomial, the task is to predict the fully simplified polynomial in a single step.
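The INFIX and PREFIX representations mentioned in §4.1 are traversals of the expression's abstract syntax tree; a minimal sketch (ours, not the paper's tokenizer) shows why the prefix form needs no parentheses:

```python
def infix(node):
    """Serialize an expression tree in infix order (parentheses required)."""
    if isinstance(node, str):
        return node
    op, left, right = node
    return f"( {infix(left)} {op} {infix(right)} )"

def prefix(node):
    """Serialize the same tree in prefix (Polish) order: no parentheses."""
    if isinstance(node, str):
        return node
    op, left, right = node
    return f"{op} {prefix(left)} {prefix(right)}"

# 3*x1 + 2 as a tree of (op, left, right) tuples with string leaves:
tree = ('+', ('*', '3', 'x1'), '2')
print(infix(tree))   # ( ( 3 * x1 ) + 2 )
print(prefix(tree))  # + * 3 x1 2
```

The prefix string is shorter and unambiguous, which is one plausible reason it can help a seq2seq model; the actual tokenization in the Lample & Charton setup differs in details.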
To compare with the Endpoint prediction task, we use the Step-wise prediction task to compute the full proof accuracy as the percentage of proofs where all individual proof steps are accurate2. Apart from the accuracy, we also compare the examples seen by the systems trained on the above two types of tasks. For the Step-wise task, a training example corresponds to an individual simplification step; whereas for the Endpoint task an example is a pair denoting the initial and the endpoint polynomial. We also report the following: 1) error percentages grouped by the different types of steps (facstep, mulstep, and sumstep), 2) calibration scores of the systems based on a threshold. To compute accuracy for an example (in both tasks), we use the simplify method of the Sympy library and check symbolically whether the difference between the predicted expression and the ground-truth expression is equal to zero. Calibration: As end-to-end models grow more accurate and their usage increases, it is important that users can trust such models. In addition to reporting each simplified step and a confidence score, we also report a calibration score computed from the ratio of the top two outputs predicted for each step (using beam width 5). Using a calibration threshold (usually 5), we report the sure rate, which is the percentage of times the ratio (in natural log scale) exceeds the threshold. We also report precision, recall and F-1 score for calibration." }, { "heading": "4.3 MODEL", "text": "Adapting the experimental setup of Lample & Charton (2019)3, we train a seq2seq network to predict the next proof step given a polynomial as a sequence. For all the experiments, we train a Transformer (Vaswani et al., 2017) architecture with 4 attention heads, 4 encoder and decoder layers, and a hidden embedding size of 256. We use an Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^-4.
We limit the maximum token length to 512 and use a batch size of 32 polynomial pairs.\nDuring training, we synthetically generate each batch of equations. To avoid collisions between train and test sets, we first use a fixed seed to generate the test and validation sets of polynomial simplification full proofs and collect the simplified end-points. We make sure that the simplified versions of the input polynomials in the training batches do not collide with any endpoints in the test and validation sets. Piotrowski et al. (2019) show that the probability of such collisions in the integration dataset generated by Lample & Charton (2019) is quite high, and urge reporting test accuracy by accounting for such collisions explicitly.\nDuring inference, we use beam-search with different beam widths (beam 1 and 5) to decode the expressions. For our results, beam width 1 is used for proof accuracy. Calibration results are produced using beam 5 decoding. During decoding, if any malformed (prefix or infix) expressions are generated, we report the percentage of such expressions4." }, { "heading": "4.4 EXPERIMENT ORGANIZATION", "text": "In the next sub-sections, we provide a problem space-size estimate (§4.5) to understand if the accuracies are an effect of memorization. Then we vary the proof granularity, coefficient configurations and input representation to test Transformers’ accuracy and errors in both tasks (§4.6). Next, to assess whether Transformers can specifically predict the candidate next sub-expression to be simplified, we try an annotated proof setting (§4.6.1). To estimate the learning ability of addition and multiplication on symbolic variables, we test a setting where the coefficients are also symbolic, thus bypassing the need for the Transformer to do integer multiplication. Next, we discuss the out-of-distribution generalization ability of the systems (§4.7). We also explore several curriculum strategies to take advantage of the well-defined sub-tasks and their varying complexities (§4.8). Lastly, we provide layer-wise attention visualizations of a trained system in the Appendix (Figs. 1 & 2).\n2We have also attempted recursive proof generation, where the output from the decoder is fed to the encoder in the next step. It does not differ from teacher-forcing since, if the model is wrong at any step, it does not recover after that.\n3https://github.com/facebookresearch/SymbolicMathematics\n4Similar to Lample & Charton (2019), we find that the percentage of malformed outputs was very low (< 0.5%), so we did not explicitly correct for it." }, { "heading": "4.5 PROBLEM SPACE SIZE ESTIMATION", "text": "For smaller configurations, it is probable that eventually all simplified polynomials would be included in the training data. To account for this, we estimate the problem space size for each configuration and report the size of training data for comparison. We randomly generate two sets of starting polynomials, say S1 and S2, and check for collisions among them. Assuming the actual size is X and a uniform distribution over all starting polynomials, the expected number of collisions would be R = (S1 ∗ S2)/X. Using the above method, we estimate the number of un-simplified polynomials and the number of unique endpoints, and report them in Appendix Table 5. We observe that, compared to the number of training examples it took for the models to converge in both End-point and Step-wise prediction tasks, the space of possible equations is often 25 (or more) times higher.\nSampled polynomials are not uniformly distributed, as we assign an equal probability while sampling polynomials of lower and higher degrees, say 3 and 6, whereas there are more polynomials of degree 6 than of degree 3. For non-uniform distributions, we expect more collisions, as higher-probability equations are more likely to occur in both S1 and S2. Moreover, since many equations may map to the same endpoint, such collisions for endpoints are even more likely.
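The collision-based estimate R = (S1 ∗ S2)/X can be inverted to recover X; a small self-contained simulation (our own illustration, with made-up sample sizes) sketches it:

```python
import random

def estimate_space_size(sample_a, sample_b):
    """Collision-based size estimate: with S1 and S2 independent draws from a
    space of unknown size X, the expected number of collisions is
    R = (S1 * S2) / X, so X can be estimated as (S1 * S2) / R.
    Under non-uniform sampling this is a lower bound on the true size."""
    collisions = len(set(sample_a) & set(sample_b))  # distinct collisions only
    if collisions == 0:
        return float("inf")  # no collisions observed: space too large to estimate
    return len(sample_a) * len(sample_b) / collisions

# Simulated check against a known space of 10^6 "polynomials":
random.seed(0)
a = [random.randrange(1_000_000) for _ in range(5_000)]
b = [random.randrange(1_000_000) for _ in range(5_000)]
print(estimate_space_size(a, b))  # within a factor of ~2 of 1e6 (Poisson noise)
```

With 5,000 draws per sample the expected collision count is 25, so the estimate carries noticeable Poisson noise; larger samples tighten it.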
Thus, our empirical estimate of the population size provides a lower bound on the true value." }, { "heading": "4.6 INPUT REPRESENTATION", "text": "We report the results for one and two variables, for all configurations in Tables 1 and 2. In Table 1, we include results for both COARSE and FINER configurations. We observe that COARSE proof-steps with PREFIX representation provides the best full proof accuracy for four out of six configurations (especially for larger coefficient sizes). Across COARSE and FINER, in five out of six configurations PREFIX representation increases the full proof accuracy over INFIX, while the improvement is not always substantial. In the SMALL COEFF configuration, the FINER setting improves over COARSE for full proof accuracy. From the calibration results, we see that the winning combinations often provide the highest calibration F-1 score (more prominent for 2 variables), indicating less ambiguity in the decisions made. In Table 2, using PREFIX representation for two variables provides 3 to 4% boosts in full proof accuracy for 4 out of 6 configurations. Since FINER steps do not improve full proof accuracy for two variables, we report the results in Table 6 in the appendix. However, for NO BACKTRACK, the infix representation clocks a 9.5% improvement over prefix. Comparing with Endpoint accuracy, as coefficient sizes grow from SMALL to NO BACKTRACK, for 1 variable, the Endpoint accuracy is only slightly higher (1 to 2%) than the full proof accuracy. However, for MEDIUM TERMS and MEDIUM DEGREE, the Endpoint accuracy shows a 3.6% and 13% improvement respectively. For 2 variables, Endpoint task accuracy is higher in most cases.\nIn Tables 7 and 8 (in Appendix) we show the model errors for each step type. We observe that more than 80% of the model errors occur in the multiplication step. In the MEDIUM TERMS setting, factor simplification causes 15-25% of the errors, possibly because of the higher number of factors to simplify.
For the 2 variable case, the addition step accounts for 10-15% of the errors. In all other cases, both factor simplification and addition cause close to 5% of the model errors each. As mentioned in §4.4, we experimented with symbolic coefficients to mitigate the difficulties with integer multiplication. This, however, did not give good results, possibly because the outputs become too long." }, { "heading": "4.6.1 ANNOTATED PROOFS", "text": "In each step, simplification is performed over a sub-expression of the polynomial. To check explicitly whether the system can locate the sub-expression and find the type of simplification step, we devise the annotated proof setting. For each simplification step, we add an intermediate step, in which the model annotates the part of the polynomial to operate on. For example, the starting input sequence is “MARK $ (5 ∗ x1^2 + x1 ∗ x2) ∗ (3 ∗ x1) ∗ (2)”; and the corresponding expected output sequence is “MUL $ #(5 ∗ x1^2 + x1 ∗ x2) ∗ (3 ∗ x1)# ∗ (2)”. Each sequence has two parts: 1) the step index to perform (MARK, MUL, FAC, SUM), and 2) the polynomial expression. For the MARK step, a marker token (#) is used to annotate the candidate sub-expression to be simplified next.\nWe experiment only with INFIX representation. The results for 1 variable and 2 variables are in Tables 3 and 9 (in Appendix). The errors per step type are shown in Appendix Tables 10 and 11. Compared to the non-annotated setting, while the step-wise accuracy is similar, the proof accuracy often suffers by 7-10%. A reason for such a decrease in accuracy is that the annotated proofs are twice as long as the non-annotated ones. However, we note that the errors in the MARK step are the lowest compared to other types of steps. This indicates that the models are able to learn the candidate sub-expression for simplification, and predict the next operation correctly."
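A minimal sketch of how such annotated (input, target) pairs could be constructed (the helper name and exact token spacing are our assumptions; the paper's tokenizer may differ). It brackets the first two factors as the candidate for the next MUL step:

```python
def mark_pair(factors):
    """Given a product's factors as strings, build an annotated training pair:
    the MARK-step input, and a target that wraps the next two factors to
    multiply in '#' markers and names the next operation (MUL)."""
    expr = " * ".join(f"({f})" for f in factors)
    src = f"MARK $ {expr}"
    marked = f"#({factors[0]}) * ({factors[1]})#"
    rest = "".join(f" * ({f})" for f in factors[2:])
    tgt = f"MUL $ {marked}{rest}"
    return src, tgt

src, tgt = mark_pair(["5*x1^2 + x1*x2", "3*x1", "2"])
print(src)  # MARK $ (5*x1^2 + x1*x2) * (3*x1) * (2)
print(tgt)  # MUL $ #(5*x1^2 + x1*x2) * (3*x1)# * (2)
```

This assumes the next operation is a MUL over the first two factors; in the dataset the marked sub-expression and operation depend on the current proof state.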
}, { "heading": "4.7 OUT-OF-DISTRIBUTION EVALUATION", "text": "We also test out-of-distribution generalization by choosing different test configurations than those used for training. The best 2 Variable models (COARSE/PREFIX) were tested on the 1 Variable dataset with the same coefficient configuration. Interestingly, we observe (in Appendix Table 14) that in all settings except one (MEDIUM COEFF), the 2 variable models outperform the corresponding 1 variable models. For the LARGE COEFF case, the improvement is close to 6% over the 1 variable model. As expected, the 2 Variable models perform better on the 1 variable dataset than on the 2 variable one. The results for OOD evaluation with respect to coefficient limits, polynomial degree and polynomial length (no. of terms in the starting polynomial) are discussed in the Appendix (Tables 15 & 16)." }, { "heading": "4.8 CURRICULUM LEARNING", "text": "Simplification steps entail learning of addition and multiplication of numeric coefficients and symbolic variables. But, as some of the individual sub-tasks seem harder to grasp, we employ different types of curricula based on the Mastering-Rate-based (MR) curriculum learning algorithm proposed by Willems et al. (2020)5. For all our experiments, we use the MR algorithm with gAmax Linreg A2D converter functions described in Willems et al. (2020). Model parameters and the training configurations remain the same. We show the results in Table 17 for the 1 variable COARSE configuration. As coefficient size grows from SMALL, MEDIUM, LARGE to NO BACKTRACK, improvements in full proof accuracy steadily increase from 1% to 10.84% (COARSE/INFIX). For NO BACKTRACK, the improvement in top-1 accuracy is 20% over the no-curriculum setting. However, we observe that for MEDIUM TERMS there is a drop in accuracy for all curricula and input representations. It is possible that more carefully designed curricula may improve the results. There is no clear advantage observed between infix and prefix representations.
However, compared to learning without a curriculum, the improvement observed for the infix representation is often larger than for prefix." }, { "heading": "5 CONCLUSION", "text": "We explored the polynomial simplification task to investigate the capabilities and shortcomings of Transformer networks across various dimensions. We proposed a synthetic polynomial generation algorithm which generates constrained polynomials with unique proof steps. While Transformers perform impressively in many settings, reaching above 90% proof accuracies, there were also clear limitations, and there are many avenues for future work. Among notable results, in many cases full proof accuracy is lower than endpoint accuracy, but by a small margin. This is perhaps not surprising, because the model is trained to optimize for stepwise accuracy and generating a valid proof requires getting all of the multiple proof steps correct. Thus a more proof-centric training approach might further improve proof-wise accuracies. Prefix representation has a slight advantage over infix, and coarse proofs have a slight advantage over fine proofs. Transformers quickly learn addition, but consistently struggle with multiplication. Carefully designed curricula can boost full proof accuracy by up to 10% for large coefficient sizes. Models trained on two variable datasets often did very well on single variable datasets—even better than the models trained on single variable datasets. Exploring multivariate polynomial manipulations and more general algebraic systems are immediate future directions, though even for the polynomial simplification task significant gaps remain in our understanding.\n5For full details, please see Appendix Section I."
}, { "heading": "B ALGORITHMS", "text": "The polynomial sampling algorithms buildProduct and buildFactor are provided in Algorithms 1 and 2 respectively.\nAlgorithm 2: BuildFactor (Sampling A Factor)\nInput: x_Pi, rdegree, rterms, rcoeff\nConstraints: num_vars_fac, max_coeff_fac, max_terms_fac, max_degree_fac\nOutput: A factor f_j, number of terms nterms_j\n1 Sample nvar ∈ {1, . . . , num_vars_fac}\n2 cvars = Sample nvar variables from x_Pi // Variable set for this factor\n3 Sample nterms ∈ {1, . . . , min(max_terms_fac, rterms)} // # Terms for this factor\n4 Sample {d_k}_{k=1}^{nterms}, s.t. d_k ∈ {0, . . . , min(max_degree_fac, rdegree)} // Term degrees: degree 0 allows for constant terms\n5 Sample {c_k}_{k=1}^{nterms}, s.t. c_k ∈ {1, . . . , min(max_coeff_fac, rcoeff)} // Term coefficients\n6 for k ← 1 to nterms do\n7 select d_k variables from cvars with replacement // E.g. if d_k = 4 and cvars = [x1, x2], we may sample [x1, x2, x1, x1]\n8 convert the selected d_k variables to a term // e.g. t_k = c_k ∗ x1^3 ∗ x2\n9 end\n10 f_j = ∑_{k=1}^{nterms} t_k\n11 return f_j" }, { "heading": "C TABLE OF CONSTRAINTS AND NOTATIONS", "text": "We provide the full list of constraints and notations in Table 4." }, { "heading": "D PROBLEM SPACE SIZE ESTIMATION", "text": "We present the problem space size estimates here in Table 5.\nE INPUT REPRESENTATION (ADDITIONAL RESULTS)\nWe present the results for the FINER configuration in the 2 variable setting in Table 6. The errors made by the models for the 1 Variable and 2 Variable settings are presented in Tables 7 and 8 respectively." }, { "heading": "F ANNOTATED PROOF (ADDITIONAL RESULTS)", "text": "We present the results for the COARSE and FINER configurations in the 2 variable setting for annotated proofs in Table 9. The errors made by the models for the 1 Variable and 2 Variable settings are presented in Tables 10 and 11 respectively."
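Algorithm 2 (buildFactor, Appendix B) can be turned into a short runnable sketch. The constraint names follow the paper; the uniform sampling choices and the (coefficient, variable-multiset) term representation are our simplifications:

```python
import random

def build_factor(cvars_pool, rdegree, rterms, rcoeff,
                 num_vars_fac=2, max_coeff_fac=8, max_terms_fac=3, max_degree_fac=3):
    """Runnable sketch of Algorithm 2 (buildFactor). Returns a factor as a
    list of (coeff, variable_multiset) terms; each multiset is a sorted tuple
    of variable names, drawn with replacement (degree 0 gives a constant)."""
    nvar = random.randint(1, min(num_vars_fac, len(cvars_pool)))
    cvars = random.sample(cvars_pool, nvar)          # variable set for this factor
    nterms = random.randint(1, min(max_terms_fac, rterms))
    terms = []
    for _ in range(nterms):
        d = random.randint(0, min(max_degree_fac, rdegree))  # term degree
        c = random.randint(1, min(max_coeff_fac, rcoeff))    # term coefficient
        # select d variables from cvars with replacement, e.g. [x1, x2, x1, x1]
        vs = tuple(sorted(random.choices(cvars, k=d)))
        terms.append((c, vs))
    return terms

random.seed(1)
f = build_factor(["x1", "x2"], rdegree=3, rterms=3, rcoeff=8)
print(f)  # list of (coeff, sorted variable tuple) terms
```

A fuller implementation would also return the remaining budgets (rdegree, rterms, rcoeff) so that buildProduct can update its product-level constraints after each factor.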
}, { "heading": "G FULLY SYMBOLIC PROOFS", "text": "As > 80% of the errors occurred in the multiplication step, we separately tested the Transformer’s ability to do arithmetic, by creating datasets involving multiplication and addition of 4-digit and 9-digit numbers. While the models quickly achieved an accuracy of close to 99% for addition, for multiplication they could not go beyond even 1% after seeing 2M examples. Hence, we envision a setting where polynomial simplification steps only involve symbolic addition and multiplication, without any arithmetic manipulation. For example, instead of multiplying 3 and 4 as 12, the model will output c1 ∗ c2 given coefficients c1 and c2. The results for the 1 Variable setting are presented in Table 12. Here, MEDIUM COEFF and MEDIUM DEGREE denote the same configuration as the case with integer coefficients. The only difference is that the limits on coefficients no longer apply. The errors made by the model for each kind of step are summarized in Table 13. We observe that the proof accuracy is about 20% less than for the non-symbolic models. This could be because the intermediate polynomials in the simplification sequence become very long with symbolic coefficients." }, { "heading": "H OUT-OF-DISTRIBUTION EVALUATION", "text": "We present the results for Out-of-Distribution evaluation here. Table 14 contains results for the best 2 variable models (Prefix/Coarse) tested on the 1 Variable setting. Table 15 contains results for the best 1 variable models (Prefix/Coarse) tested on the SMALL, MEDIUM and LARGE coefficient settings. As expected, the SMALL and MEDIUM models perform much worse when tested on higher coefficients. We also evaluated the best 1 variable models (Prefix/Coarse) on the MEDIUM DEGREE and TERMS settings, to check generalization with respect to the # terms and degree of the polynomial. Table 16 contains results for the same. The MEDIUM COEFF model is not able to generalize to more terms or polynomials of higher degree."
}, { "heading": "I CURRICULUM LEARNING", "text": "Learning the simplification steps should entail learning the sub-tasks, such as addition and multiplication (of numeric coefficients and symbolic variables), where multiplying variables includes learning to add the exponents of similar variables. As these sub-tasks are well-defined and the dependencies among them are clear, we explore different types of curricula based on the Mastering-Rate-based (MR) curriculum learning algorithm proposed in Willems et al. (2020). Willems et al. (2020) define curriculum learning by 1) a curriculum, i.e., a set of tasks C = {c_1, . . . , c_n}, where a task is a set of examples of similar type with a sampling distribution, and 2) a program which, for each training step, defines the tasks to train the learner on given its learning state and the curriculum. Formally, the program d : N → D_C is a sequence of distributions over C. The authors estimate the program function through an attention function, which defines attention over the tasks at a time-step, and an attention-to-distribution converter, which converts the attention to a distribution over C. The authors observe that other algorithms (Matiisen et al., 2019; Graves et al., 2017) are special cases of the above setting with different choices for the program.\nTo learn on tasks that are learnable but not learnt yet, the authors define an ordered curriculum OC, which is a directed graph over tasks in C. An edge from A to B indicates that learning task A before B is preferable. For supervised learners, the learnability of each task depends on its mastering rate M_c(t), computed from the normalized mean accuracy for that task at time-step t. At each time-step, the MR algorithm computes the attention over a task, a_c(t), from the mastering rates of its ancestors and successors. During training, to sample batches, a hyperparameter N_b for the curriculum determines the number of batches to be considered at a step before re-computing the attention over tasks.
Using the program d, we first sample N_b ∗ b examples from tasks in C. The model is then trained on N_b randomly sampled minibatches, updating the mastering rates.\nFor polynomial simplification with 1 variable, we define the following tasks: ADD, MUL2, MUL3, SCOEFF and MIXED. For ADD, only one factor per product is allowed, so there is no multiplication. For MUL2 and MUL3, only 1 product is allowed, with a maximum of two factors and three factors respectively. SCOEFF points to the SMALL COEFF configuration, and MIXED is the final configuration of the target variable setting. We define the following curricula:\n• C: {(ADD, MUL3), (MUL3, MIXED), (ADD, MIXED)}.\n• C2: {(ADD, MUL2), (MUL2, MUL3), (MUL3, MIXED), (ADD, MIXED)}.\n• C4: {(ADD, MUL2), (MUL2, MUL3), (MUL3, SCOEFF), (ADD, SCOEFF), (SCOEFF, MIXED)}.\nFor all our experiments, we use the MR algorithm with gAmax Linreg A2D converter functions described in Willems et al. (2020). Model parameters and the training configurations remain the same as before6. We show the results in Table 17 for the COARSE configuration. As coefficient size grows from SMALL, MEDIUM, LARGE to NO BACKTRACK, the improvements in full proof accuracy steadily increase from 1% to 10.84%. For NO BACKTRACK, the improvement in top-1 accuracy is 20% over the no-curriculum setting. However, we observe that for MEDIUM TERMS there is a drop in accuracy for all curricula and input representations. It is possible that more carefully designed curricula may improve the results. There is no clear pattern observed between infix and prefix representations. However, compared to learning without a curriculum, the improvement observed for the infix representation is larger than for prefix.\n6We use N_b as 10. For other default parameters in CL, please check github.com/lcswillems/automatic-curriculum." } ]
2020
DO TRANSFORMERS UNDERSTAND POLYNOMIAL SIMPLIFICATION?
SP:f71ceed51963fee2042d66da98c14aeb91b93f74
[ "The paper presents a method for capturing the shape (type of layers) and their respective parameters of a neural network through the magnetic field induced as the GPU drains power. In particular, the GPU is snooped using an off-the-shelf magnetic induction sensor which is placed along the power cable of the GPU. It turns out that under some assumptions (knowledge of GPU model and deep learning framework, knowledge of input size and ability to launch query of specific batch size), based on the correlation of the power consumption pattern of the GPU with the operations being performed it is possible to recognize the type of operation being performed as well as the respective hyper-parameters with very few errors.", "This paper demonstrates that magnetic side channel information from a GPU (that is processing a deep neural net) can be snooped to recover the architecture and hyperparameters of the neural network. While the concept of side channel information snooping to recover codes/software (including ML models) is widely studied, the novelty claim is that recovering detailed structures of deep models is new. The paper also demonstrates that black-box attacks mounted using a recovered model is quite powerful compared to traditional black-box attacks. " ]
We examine the magnetic flux emanating from a graphics processing unit’s (GPU) power cable, as acquired by a cheap $3 induction sensor, and find that this signal betrays the detailed topology and hyperparameters of a black-box neural network model. The attack acquires the magnetic signal for one query with unknown input values, but known input dimension and batch size. The network reconstruction is possible due to the modular layer sequence in which deep neural networks are evaluated. We find that each layer component’s evaluation produces an identifiable magnetic signal signature, from which layer topology, width, function type, and sequence order can be inferred using a suitably trained classifier and an optimization based on integer programming. We study the extent to which network specifications can be recovered, and consider metrics for comparing network similarity. We demonstrate the potential accuracy of this side channel attack in recovering the details for a broad range of network architectures, including random designs. We consider applications that may exploit this novel side channel exposure, such as adversarial transfer attacks. In response, we discuss countermeasures to protect against our method and other similar snooping techniques.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Lejla Batina", "Dirmanto Jap", "Shivam Bhasin", "S Picek" ], "title": "Csi nn: Reverse engineering of neural network architectures through electromagnetic side channel", "venue": "In Proceedings of the 28th USENIX Security Symposium. USENIX Association,", "year": 2019 }, { "authors": [ "Ian Buck" ], "title": "Gpu computing: Programming a massively parallel processor", "venue": "In International Symposium on Code Generation and Optimization", "year": 2007 }, { "authors": [ "Ambra Demontis", "Marco Melis", "Maura Pintor", "Matthew Jagielski", "Battista Biggio", "Alina Oprea", "Cristina Nita-Rotaru", "Fabio Roli" ], "title": "Why do adversarial attacks transfer? 
explaining transferability of evasion and poisoning attacks", "venue": "In 28th USENIX Security Symposium Security", "year": 2019 }, { "authors": [ "Anuj Dubey", "Rosario Cammarota", "Aydin Aysu" ], "title": "Maskednet: The first hardware inference engine aiming power side-channel protection", "venue": "arXiv: Cryptography and Security,", "year": 2019 }, { "authors": [ "Vasisht Duddu", "D Vijay Rao" ], "title": "Quantifying (hyper) parameter leakage in machine learning", "venue": "arXiv preprint arXiv:1910.14409,", "year": 2019 }, { "authors": [ "Vasisht Duddu", "Debasis Samanta", "D Vijay Rao", "Valentina E Balas" ], "title": "Stealing neural networks via timing side channels", "venue": "arXiv preprint arXiv:1812.11720,", "year": 2018 }, { "authors": [ "Daniel Genkin", "Adi Shamir", "Eran Tromer" ], "title": "Rsa key extraction via low-bandwidth acoustic cryptanalysis", "venue": "In Annual Cryptology Conference,", "year": 2014 }, { "authors": [ "Alex Graves", "Santiago Fernández", "Jürgen Schmidhuber" ], "title": "Bidirectional lstm networks for improved phoneme classification and recognition", "venue": "In International Conference on Artificial Neural Networks,", "year": 2005 }, { "authors": [ "Alex Graves", "Santiago Fernández", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "David J Griffiths" ], "title": "Introduction to electrodynamics", "venue": null, "year": 2005 }, { "authors": [ "Ed Grochowski", "Murali Annavaram" ], "title": "Energy per instruction trends in intel microprocessors", "venue": "Technology@ Intel Magazine,", "year": 2006 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer 
vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sanghyun Hong", "Michael Davinroy", "Yiǧitcan Kaya", "Dana Dachman-Soled", "Tudor Dumitraş" ], "title": "How to 0wn the nas in your spare time", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xing Hu", "Ling Liang", "Shuangchen Li", "Lei Deng", "Pengfei Zuo", "Yu Ji", "Xinfeng Xie", "Yufei Ding", "Chang Liu", "Timothy Sherwood" ], "title": "Deepsniffer: A dnn model extraction framework based on learning architectural hints", "venue": "In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems,", "year": 2020 }, { "authors": [ "Weizhe Hua", "Zhiru Zhang", "G Edward Suh" ], "title": "Reverse engineering convolutional neural networks through side-channel information leaks", "venue": "In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC),", "year": 2018 }, { "authors": [ "Paul Kocher", "Joshua Jaffe", "Benjamin Jun" ], "title": "Differential power analysis", "venue": "In Annual international cryptology conference,", "year": 1999 }, { "authors": [ "Paul C Kocher" ], "title": "Timing attacks on implementations of diffie-hellman, rsa, dss, and other systems", "venue": "In Annual International Cryptology Conference,", "year": 1996 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "Chao Luo", "Yunsi Fei", "Pei Luo", "Saoni Mukherjee", "David Kaeli" ], "title": "Side-channel power analysis of a gpu aes implementation", "venue": "In 2015 33rd IEEE International Conference on Computer Design (ICCD),", "year": 2015 }, { "authors": [ "Seong Joon Oh", "Bernt Schiele", "Mario Fritz" ], "title": "Towards reverse-engineering black-box neural networks. 
In Explainable AI: Interpreting", "venue": "Explaining and Visualizing Deep Learning,", "year": 2019 }, { "authors": [ "Christos H Papadimitriou", "Kenneth Steiglitz" ], "title": "Combinatorial optimization: algorithms and complexity", "venue": "Courier Corporation,", "year": 1998 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Vojtech Petrucha", "David Novotny" ], "title": "Testing and application of an integrated fluxgate sensor drv425", "venue": "Journal of Electrical Engineering,", "year": 2018 }, { "authors": [ "Ville Timonen" ], "title": "Multi-GPU CUDA stress test", "venue": "(accessed Oct", "year": 2020 }, { "authors": [ "Florian Tramèr", "Fan Zhang", "Ari Juels", "Michael K Reiter", "Thomas Ristenpart" ], "title": "Stealing machine learning models via prediction apis", "venue": "In 25th {USENIX} Security Symposium ({USENIX} Security", "year": 2016 }, { "authors": [ "Lingxiao Wei", "Bo Luo", "Yu Li", "Yannan Liu", "Qiang Xu" ], "title": "I know what you see: Power side-channel attack on convolutional neural network accelerators", "venue": "In Proceedings of the 34th Annual Computer Security Applications Conference,", "year": 2018 }, { "authors": [ "Yun Xiang", "Zhuangzhi Chen", "Zuohui Chen", "Zebin Fang", "Haiyang Hao", "Jinyin Chen", "Yi Liu", "Zhefu Wu", "Qi Xuan", "Xiaoniu Yang" ], "title": "Open dnn box by power side-channel attack", 
"venue": "IEEE Transactions on Circuits and Systems II: Express Briefs,", "year": 2020 }, { "authors": [ "Mengjia Yan", "Christopher W Fletcher", "Josep Torrellas" ], "title": "Cache telepathy: Leveraging shared resource attacks to learn {DNN} architectures", "venue": "In 29th {USENIX} Security Symposium ({USENIX} Security", "year": 2020 }, { "authors": [ "Yuval Yarom", "Katrina Falkner" ], "title": "Flush+ reload: a high resolution, low noise, l3 cache side-channel attack", "venue": "In 23rd {USENIX} Security Symposium ({USENIX} Security", "year": 2014 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The Graphics Processing Unit (GPU) is a favored vehicle for executing a neural network. As it computes, it also hums—electromagnetically. What can this hum tell us? Could listening to the GPU’s electromagnetic (EM) radiation reveal details about the neural network? We study this question and find that magnetic induction sensing reveals a detailed network structure, including both topology and hyperparameter values, from inferences of otherwise unknown networks running on GPUs.
Reverse engineering a network structure has attracted increasing research effort, motivated by several concerns. First, it is well known that the performance of a network model hinges on its judiciously designed structure—but finding an effective design is no easy task. Significant time and energy are expended in searching and fine-tuning network structures (Zoph et al., 2018). Moreover, in industry, optimized network structures are often considered confidential intellectual property. It is therefore important to understand the extent to which this valuable, privileged information can be compromised.
Worse yet, a reverse-engineered “surrogate” model also makes the black-box “victim” model more susceptible to adversarial transfer attacks (Papernot et al., 2017; Liu et al., 2016), in which a vulnerability identified in the surrogate is exploited on the victim. Success in the exploit is contingent on how faithfully the surrogate models the vulnerabilities of the victim. Recovering accurate, detailed network topology and hyperparameters informs the modeling of a good surrogate.
We examine the fluctuation of magnetic flux from the GPU’s power cable, and ask whether a passive observer can glean the information needed to reconstruct neural network structure. Remarkably, we show that, through magnetic induction sensing, a passive observer can reconstruct the complete network structure even for large and deep networks.
Threat model.
We consider an adversary that (i) is able to place a magnetic induction sensor in close proximity to the GPU’s power cable, (ii) knows the dimension of the input feature vector, and (iii) is able to launch a query of known batch size. We also assume that the attacker uses the same deep learning framework (e.g., PyTorch, TensorFlow) as the black-box model. The adversary is otherwise weak, lacking access to the model source, binaries, training data, and underlying training data distribution; without the ability to execute code on the host CPU and GPU; and without knowledge of the input values and output results of the launched queries. Not only that—it also lacks direct access to the GPU hardware, beyond the proximity to the power cable. The adversary only requires access to their own GPU hardware, matching the brand/version of the victim, e.g., as purchased on the open market.
Physical principle. The GPU consumes energy at a variable rate that depends on the operations performed. Every microprocessor instruction is driven by transistor electron flows, and different instructions require different power levels (Grochowski & Annavaram, 2006). The many compute cores of a GPU amplify the fluctuation in energy consumption, and so too the current drawn from the power cable. Current induces magnetic flux governed by the Biot-Savart law (Griffiths, 2005), and current fluctuations induce EM ripples whose propagation through the environment is governed by the Ampère-Maxwell law. Even a cheap, $3 magnetic induction sensor (see Fig. 2) placed within a few millimeters of the power cable suffices to record these EM ripples.
Technique and results. To reconstruct the black-box network’s structure, we propose a two-step approach. First, we estimate the network topology, such as the number and types of layers, and the types of activation functions, using a suitably trained neural network classifier.
Then, for each layer, we estimate its hyperparameters using another set of deep neural network (DNN) models. The individually estimated hyperparameters are then jointly optimized by solving an integer programming problem to enforce consistency between the layers. We demonstrate the potential accuracy of this side-channel attack in recovering the details of a wide range of networks, including large, deep networks such as ResNet101. We further apply this recovery approach to demonstrate black-box adversarial transfer attacks." }, { "heading": "1.1 RELATED WORK: MODEL EXTRACTION BY QUERIES AND SIDE-CHANNEL ANALYSIS", "text": "Our work falls under the umbrella of black-box model extraction. Absent access to the model’s internals, one might infer structure from observed input-output pairs. For instance, Tramèr et al. (2016) demonstrated that, for simple models such as decision trees and support vector machines hosted on a cloud, certain internal information can be extracted via a multitude of queries. This approach, which was extended to infer details of deep neural networks (Oh et al., 2019; Liu et al., 2016; Duddu & Rao, 2019), is typically able to recover certain information, such as the optimization learning rate and network structure type, but has not demonstrated recovery of full structural details.
A contrasting approach, side-channel analysis (SCA), extracts information from the physical implementation of a model, rather than from the mathematical model itself.
Analyses of timing (Kocher, 1996), power (Kocher et al., 1999; Luo et al., 2015), cache flushes (Yarom & Falkner, 2014), and audio (Genkin et al., 2014) have been prominently demonstrated to extract secret keys from cryptographic procedures such as the Digital Signature and Advanced Encryption Standards.
SCA was recently used to infer machine learning models by observing power consumption profiles (Xiang et al., 2020; Wei et al., 2018; Dubey et al., 2019), timing information (Duddu et al., 2018), and memory/cache access (Hu et al., 2020; Hong et al., 2019; Hua et al., 2018; Yan et al., 2020). These methods placed a malware process on the machine hosting the black-box model. Our threat model does not involve introducing processes on the host.
Recently, Batina et al. (2019) exploited EM radiation for network model extraction. They focused on EM radiation from embedded processors. In contrast to GPUs, embedded processors emit a relatively weak EM signal, necessitating delicate measurement devices and even mechanical opening of the chip package.
Our advance. All these previous works were demonstrated on shallow networks (e.g., fewer than 20 layers). It remains unclear whether these methods can also extract deep network models, ones that are structurally more complex and more prevalent in practice. We demonstrate successful recovery of the full structure of deep networks, such as ResNet101 (He et al., 2016). With that, we hope to raise awareness of the GPU’s EM radiation as an information-rich, easily-probed side channel." }, { "heading": "2 MAGNETIC SIGNALS FROM GPUS", "text": "Before getting into the weeds, let us provide our intuition: we think of the magnetic signal as the GPU’s “speech.” The GPU speaks a series of “words,” demarcated by silence. Each word names the computational step that was executed.
Let us now refine this explanation further, and ground it in physical principles.
We use step to refer to performing a specific kind of network operation, such as a linear operation, batch normalization, pooling, an activation function, etc. A layer is a sequence of steps, e.g., (i) a linear operation, then (ii) pooling, then (iii) activation. While there may be data dependencies between steps, there are no such dependencies within a step.
The parallel nature of GPU computation lends itself to a natural implementation of networks, wherein each step is executed in parallel, i.e., single instruction multiple data (SIMD) parallelism. Transitions between steps, however, are synchronized (Buck, 2007): in our example above, activation begins only after pooling completes. This cross-step synchronization allows for implementations structured into modules, or GPU kernels. This modular approach is employed in widely-used deep learning frameworks such as PyTorch and TensorFlow (Paszke et al., 2019; Abadi et al., 2016).
Signal. Kernel execution demands transistor flips, which place electric load on the GPU processor, in turn emitting magnetic flux from its power cable. An induction sensor measures this flux and produces a proportional voltage. The time-varying voltage is our acquired signal (see Fig. 1).
Different steps correspond to different GPU kernels, transistor flips, electric loads, and signal characteristics, which are distinguishable even by the naked eye (see Fig. 1). Cross-step synchronization involves idling, dramatically reducing electric load and signal level (see Fig. 1). These rapid sharp drops demarcate steps.
We observe that the signal correlates strongly with the kind of GPU operation, rather than the specific values of the computed floating point numbers. We verify this by examining signals using both PyTorch and TensorFlow and on multiple kinds of GPUs (see Sec. 5).
The signal is also affected by the input to the network.
Although the specific input data values do not influence the signal, the input data size does. When the GPU launches a network, the size of its single input (e.g., image resolution) is fixed. But the network may be provided with a batch of input data (e.g., multiple images). As the batch size increases, more GPU cores will be utilized in each step. The GPU consequently draws more power, which in turn strengthens the signal. Once all GPU cores are involved, further increase of the input batch size will not increase the signal strength, but will elongate the execution time until the GPU runs out of memory.
Therefore, in launching a query to the black-box network model, the adversary should choose a batch size large enough to activate enough GPU cores to produce an adequate signal-to-noise ratio. We find that the range of proper batch sizes is often relatively large (e.g., 64 ∼ 96 for ImageNet networks), loosely depending on the size of the single input’s features and the parallel computing ability of the GPU. In practice, the adversary can choose the batch size by experimenting with their own GPUs under various image resolutions.
Notably, however, we do not require knowledge of the batch size to robustly recover network topology (as opposed to hyperparameters), only that the batch size is large enough to provide a clear signal. While we used a consumer-friendly sensor with a limited sampling rate (see Sec. 4.1) and corresponding signal-to-noise ratio (SNR), a sensor with a higher sampling rate and SNR would correspondingly require a smaller minimum batch size." }, { "heading": "3 SIGNAL ANALYSIS AND NETWORK RECONSTRUCTION", "text": "We prepare for the attack by training several recovery DNN models; we refer to training before the attack as pretraining. After the attacker launches a batch query (whose input and output values are irrelevant), we recover structure from the acquired signal in two stages: (i) topology and (ii) hyperparameters.
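For intuition, the way the sharp level drops of Sec. 2 demarcate steps can be illustrated with a toy thresholding pass over a trace. This is only a sketch: the actual pipeline segments the signal via per-sample biLSTM classification (Sec. 3.1), and the threshold and minimum segment length below are assumptions.

```python
def segment_by_drops(signal, low=0.1, min_len=3):
    """Split a 1-D voltage trace into step segments separated by
    low-level "silence" (idling during cross-step synchronization)."""
    segments, start = [], None
    for i, v in enumerate(signal):
        if v > low and start is None:
            start = i                      # a step segment begins
        elif v <= low and start is not None:
            if i - start >= min_len:       # ignore tiny blips
                segments.append((start, i))
            start = None
    if start is not None and len(signal) - start >= min_len:
        segments.append((start, len(signal)))
    return segments

# two "words" (steps) separated by idle silence
trace = [0.0, 0.8, 0.9, 0.85, 0.9, 0.0, 0.0, 0.5, 0.6, 0.55, 0.6, 0.0]
print(segment_by_drops(trace))  # -> [(1, 5), (7, 11)]
```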
To recover topology, a pretrained DNN model assigns a step label to every signal sample. This per-sample classification partitions the signal into segments corresponding to steps. We estimate hyperparameters for each individual segment in isolation, using a step-specific pretrained DNN model, and resolve inconsistencies between consecutive segments using an integer program. The pretraining of our recovery DNN models is hardware-specific, and good recovery requires pretraining on like hardware." }, { "heading": "3.1 TOPOLOGY RECOVERY", "text": "Bidirectional Long Short Term Memory (biLSTM) networks are well-suited for processing time-series signals (Graves et al., 2005). We train a biLSTM network to classify each signal sample si, predicting the step C(si) that generated si (see Fig. 1-b). The training dataset consists of annotated signals constructed automatically (see Sec. 4.2). We train the biLSTM by minimizing the standard cross-entropy loss between the predicted per-sample labels and the ground-truth labels (see Appx. A for details). By identifying the sequence of steps, we recover the layers of the network, including their type (e.g., fully connected, convolution, recurrent, etc.), activation function, and any subsequent forms of pooling or batch normalization. What remains is to recover layer hyperparameters." }, { "heading": "3.2 HYPERPARAMETER ESTIMATION", "text": "Hyperparameter consistency. The number of hyperparameters that describe a layer type depends on its linear step. For instance, a CNN layer type’s linear step is described by size, padding, kernel size, number of channels, and stride hyperparameters. Hyperparameters within a layer must be intra-consistent. Of the six CNN hyperparameters (stride, padding, dilation, input, output, and kernel size), any one is determined by the other five. Hyperparameters must also be inter-consistent across consecutive layers: the output of one layer must fit the input of the next.
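These intra- and inter-consistency conditions can be checked mechanically with the standard convolution output-size relation (as documented, e.g., for PyTorch's Conv2d). A minimal sketch; the example layers and field names are illustrative:

```python
def conv_out(s_in, kernel, stride=1, padding=0, dilation=1):
    """Standard convolution output size (floor division)."""
    return (s_in + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

def layers_consistent(layers, input_size):
    """Inter-consistency: each layer's input size must equal the
    previous layer's output size, starting from the query input size."""
    s = input_size
    for layer in layers:
        if layer["s_in"] != s:
            return False
        s = conv_out(layer["s_in"], layer["k"], layer["stride"], layer["pad"])
    return True

# first two downsampling stages of a ResNet-like network on a 224x224 input
layers = [{"s_in": 224, "k": 7, "stride": 2, "pad": 3},
          {"s_in": 112, "k": 3, "stride": 2, "pad": 1}]
print(conv_out(224, 7, 2, 3))          # -> 112
print(layers_consistent(layers, 224))  # -> True
```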
A brute-force search of consistent hyperparameters easily becomes intractable for deeper networks; we therefore first estimate hyperparameters for each layer in isolation, and then jointly optimize to obtain consistency.
Initial estimation. We estimate a specific hyperparameter of a specific layer type by pretraining a DNN. We pretrain a suite of such DNNs, one for each (layer type, hyperparameter) pairing. Once the layers (and their types) are recovered, we estimate each hyperparameter using these pretrained (layer type, hyperparameter) recovery DNNs.
Each DNN comprises two 1024-node fully connected layers with dropout. The DNN accepts two (concatenated) feature vectors describing two signal segments: the linear step and the immediately subsequent step. That subsequent step (e.g., activation, pooling, batch normalization) tends to require effort proportional to the linear step’s output dimensions; thus its inclusion informs the estimated output dimension. Each segment’s feature vector is assembled by (i) partitioning the segment uniformly into N windows and computing the average value of each window, and (ii) appending the time duration of the segment. The concatenated feature vector has a length of 2N + 2.
The DNN is trained with our automatically generated dataset (see Sec. 4.2). The choice of loss function depends on the hyperparameter type: for a hyperparameter drawn from a wide range, such as a size, we minimize the mean squared error between the predicted size and the ground truth (i.e., regression). For a hyperparameter drawn from a small discrete distribution, such as stride, we minimize the cross-entropy loss between the predicted value and the ground truth (i.e., classification). In particular, we used regression for sizes, and classification for all other parameters.
Joint optimization. The initial estimates of the hyperparameters are generally neither fully accurate nor consistent.
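The featurization that feeds these initial estimates (N window averages plus the duration, per segment) can be sketched as follows; the choice of N and the toy segments are assumptions:

```python
def segment_features(samples, duration, n_windows):
    """N window means plus the segment duration -> N + 1 features."""
    w = max(1, len(samples) // n_windows)
    means = [sum(samples[i * w:(i + 1) * w]) / w for i in range(n_windows)]
    return means + [duration]

def step_feature_vector(linear_seg, next_seg, n_windows=4):
    """Concatenate features of the linear step and the immediately
    subsequent step, giving the 2N + 2 values fed to the estimation DNN."""
    return (segment_features(*linear_seg, n_windows) +
            segment_features(*next_seg, n_windows))

linear = ([1.0] * 8, 8.0)   # (signal samples, duration) of the linear step
nxt = ([0.5] * 8, 4.0)      # the immediately subsequent step
vec = step_feature_vector(linear, nxt, n_windows=4)
print(len(vec))  # -> 10, i.e., 2N + 2 with N = 4
```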
To enforce consistency, we jointly optimize all hyperparameters, seeking values that best fit their initial estimates, subject to consistency constraints. Our optimization minimizes the convex quadratic form

$$\min_{x_i \in \mathbb{Z}_{\geq 0}} \; \sum_{i \in X} \left( x_i - x_i^* \right)^2, \quad \text{subject to consistency constraints,} \qquad (1)$$

where $X$ is the set of all hyperparameters across all layers; $x_i^*$ and $x_i$ are the initial estimate and optimal value of the $i$-th hyperparameter, respectively. The imposed consistency constraints are:
(i) The output size of a layer agrees with the input size of the next layer.
(ii) The input size of the first layer agrees with the input feature size.
(iii) The output size of a CNN layer does not exceed its input size (due to convolution).
(iv) The hyperparameters of a CNN layer satisfy
$$s_{\mathrm{out}} = \left\lfloor \frac{s_{\mathrm{in}} + 2\beta - \gamma(k-1) - 1}{\alpha} + 1 \right\rfloor, \qquad (2)$$
where α, β, γ, and k denote the layer’s stride, padding, dilation, and kernel size, respectively.
(v) Heuristic constraint: the kernel size must be odd.
Among these constraints, (i–iii) are linear constraints, which preserves the convexity of the problem. The heuristic (v) can also be expressed as a linear constraint: for every kernel size parameter $k_j$, we introduce a dummy variable $\tau_j$ and require $k_j = 2\tau_j + 1$ with $\tau_j \in \mathbb{Z}_{\geq 0}$. Constraint (iv), however, is troublesome, because the appearance of stride α and dilation γ, both of which are optimization variables, makes the constraint nonlinear.
Since all hyperparameters are non-negative integers, the objective must be optimized via integer programming (IP): IP in the general case is NP-complete (Papadimitriou & Steiglitz, 1998), and the nonlinear constraint (iv) does not make life easier. Fortunately, both α and γ have very narrow ranges in practice: α is often set to 1 or 2, and γ is usually 1, and they rarely change across the CNN layers of a network. As a result, they can be accurately predicted by our DNN models; we therefore retain the initial estimates and do not optimize for α and γ, rendering (2) linear.
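For intuition, the joint optimization of problem (1) can be emulated at toy scale by exhaustive search over a couple of layers (the paper uses a real IP solver for this; the search ranges and noisy estimates below are made-up examples):

```python
from itertools import product

def conv_out(s_in, k, stride, pad, dil=1):
    """Constraint (iv): standard convolution output size."""
    return (s_in + 2 * pad - dil * (k - 1) - 1) // stride + 1

def joint_optimize(estimates, input_size, stride=1):
    """Toy version of problem (1): pick odd kernel sizes and paddings
    near the initial estimates so that consecutive layers chain exactly."""
    best, best_cost = None, float("inf")
    choices = [(k, p) for k in (1, 3, 5, 7) for p in (0, 1, 2, 3)]
    for combo in product(choices, repeat=len(estimates)):
        s, cost, ok = input_size, 0.0, True
        for (k, p), (k_est, p_est, out_est) in zip(combo, estimates):
            s = conv_out(s, k, stride, p)     # chain layer outputs
            if s <= 0:                        # invalid configuration
                ok = False
                break
            cost += (k - k_est) ** 2 + (p - p_est) ** 2 + (s - out_est) ** 2
        if ok and cost < best_cost:
            best, best_cost = combo, cost
    return best

# noisy per-layer estimates: (kernel, padding, output size)
estimates = [(3, 1, 31), (5, 2, 33)]
print(joint_optimize(estimates, input_size=32))  # -> ((3, 1), (5, 2))
```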
Even if the DNN models could not reliably recover α and γ, one could exhaustively enumerate the few possible α and γ combinations, solve the IP problem (1) for each combination, and select the best recovery.
The IP problem with a quadratic objective function and linear constraints can be easily solved, even when the number of hyperparameters is large (e.g., > 1000). In practice, we use IBM CPLEX (Cplex, 2009), a widely used IP solver. Optimized hyperparameters remain close to the initial DNN estimates, and are guaranteed to define a valid network structure." }, { "heading": "4 EXPERIMENTAL SETUP", "text": "" }, { "heading": "4.1 HARDWARE SENSORS", "text": "We use the DRV425 fluxgate magnetic sensor from Texas Instruments for reliable high-frequency sensing of magnetic signals (Instruments, 2020; Petrucha & Novotny, 2018). This sensor, though costing only $3 USD, outputs robust analog signals with a 47kHz sampling rate and ±2mT sensing range. For analog-to-digital conversion (ADC), we use the USB-204 Digital Acquisition card, a 5-Volt ADC from Measurement Computing (Computing, 2020). This allows a 12-bit conversion of the signal, mapping sensor readings from -2mT∼2mT to 0V∼5V." }, { "heading": "4.2 DATASET CONSTRUCTION", "text": "Sensor placement. To avoid interference from other electric components, we place the sensor near the GPU’s magnetic induction source, anywhere along the power cable. Because magnetic flux decays inversely proportionally to the squared distance from the source, according to the Biot-Savart law (Griffiths, 2005), we position the sensor within millimeters of the cable casing (see Fig. 2).
Data capture. Pretraining the recovery DNN models (recall Sec. 3) requires an annotated dataset with pairwise correspondence between signal and step types (see Fig. 2). We can automatically generate an annotated signal for a given network and specific GPU hardware, simply by executing a query (with arbitrary input values) on the GPU to acquire the signal.
Timestamped ground-truth GPU operations are made available by most deep learning libraries (e.g., torch.autograd.profiler in PyTorch and tf.profiler in TensorFlow). A difficulty in this process lies in the fact that the captured (47kHz) raw signals and the ground-truth GPU traces run on different clocks. Similar to the use of a clapperboard to synchronize picture and sound in filmmaking, we precede the inference query with a short, intensive GPU operation to induce a sharp spike in the signal, yielding a synchronization landmark (see Fig. S3). We implemented this “clapperboard” by filling a vector with random floating point numbers.
Training set. The set of networks to be annotated could in principle consist solely of randomly generated networks, on the basis that data values and “functionality” are irrelevant to us, and the training serves to recover the substeps of a layer; or of curated networks or those found in the wild, on the basis that such networks are more indicative of what lies within the black-box. We chose to construct our training set as a mixture of both of these approaches. All in all, we consider 500 networks for training, leading to a total of 5708 network steps we aim to identify. We will release the complete training and test datasets, along with source code and hardware schematics, for full reproducibility." }, { "heading": "5 RESULTS", "text": "This section presents the major empirical evaluations of our method. We refer the reader to Appx. B for complete results, additional experiments, and more thorough discussion." }, { "heading": "5.1 ACCURACY OF NETWORK RECONSTRUCTION", "text": "Test dataset. We construct a test dataset fully separate from the training dataset. Our test dataset consists of 64 networks randomly generated in a way similar to Sec. 4.2. The number of layers ranges from 30 to 50.
To diversify our zoology of test models, we also include smaller networks with fewer than 10 layers, LSTM networks, as well as ResNets (18, 34, 50, and 101). Altogether, each test network has up to 514 steps. In total, the test dataset includes 5708 network steps, broken down into 1808 activation functions, 1975 batch normalization and pooling steps, and 1925 convolutional, fully connected, and recurrent layers. When we construct these networks, their input image resolutions are randomly chosen from [224×224, 96×96, 64×64, 48×48, 32×32]: the highest resolution is used in ImageNet, and lower ones are used in datasets such as CIFAR.
Topology reconstruction. As discussed in Sec. 3, we use a biLSTM model to predict the network step for each individual sample. Table 1 reports its accuracy, measured on an Nvidia TITAN V GPU. There, we also break the accuracy down into measures of individual types of network steps, with an overall accuracy of 96.8%. An interesting observation is that the training and test datasets are both unbalanced in terms of signal samples (see the last column of Table 1). This is because, in practice, convolutional layers are computationally the most expensive, while activation functions and pooling are lightweight. Also, certain steps like average pooling are much less frequently used. While such data imbalance does reflect reality, when we use these datasets to train and test, most of the misclassifications occur at those rarely used, lightweight network steps, whereas the majority of network steps are classified correctly.
We evaluate the quality of topology reconstruction using the normalized Levenshtein distance (an edit distance metric), which has been used to evaluate network structure similarity (Graves et al., 2006; Hu et al., 2020). Here, the Levenshtein distance measures the minimum number of operations—including adding/removing network steps and altering step type—needed to fully rectify a recovered topology.
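The Levenshtein distance over step sequences is the standard dynamic program; a minimal sketch (the step sequences here are hypothetical, and normalization divides by the target's step count):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    turning sequence a into sequence b (standard DP, row by row)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute
        prev = cur
    return prev[-1]

recovered = ["conv", "relu", "conv", "pool"]
target = ["conv", "relu", "pool", "conv", "pool"]
d = levenshtein(recovered, target)
print(d, d / len(target))  # -> 1 0.2
```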
This distance is then normalized by the total number of steps of the target network.
We report the detailed results in Fig. S2 in the appendix. Among the 64 tested networks, 40 of the reconstructed networks precisely match their targets, resulting in zero Levenshtein distance. The average normalized Levenshtein distance over all tested networks is 0.118. To provide a sense of how the Levenshtein distance is related to the network’s ultimate performance (i.e., its classification accuracy), we conduct an additional experiment and discuss it in Appx. B.1.
DNN hyperparameter estimation. Next, we report the test accuracies of our DNN models (discussed in Sec. 3.2) for estimating hyperparameters of convolutional layers. Our test data here consists of 1804 convolutional layers. On average, our DNN models have 96∼97% accuracy. The accuracies broken down by individual hyperparameters are shown in Table S3 of the appendix.
Reconstruction quality measured as classification accuracy. Ultimately, the reconstruction quality must be evaluated by how well the reconstructed network performs in the task that the original network aims for. To this end, we test seven networks, including VGGs, AlexNet, and ResNets, that have been used for CIFAR-10 classification (shown in Table 2). We treat those networks as black-box models and reconstruct them from their magnetic signals. We then train those reconstructed networks and compare their test accuracies with the original networks’ performance. Both the reconstructed and original networks are trained with the same training dataset for the same number of epochs. The results in Table 2 show that for all seven networks, including large networks (e.g., ResNet101), the reconstructed networks perform almost as well as their original versions. We also conduct similar experiments on ImageNet and report the results in Table S1 of Appx. B.2.
GPU transferability.
Our proposed attack requires the adversary to have the same brand/version of GPU as the victim, but not necessarily the same physical copy (see Fig. S3). Here, we obtain two copies of an Nvidia GTX-1080 GPU running on two different machines, using one to generate training data and the other for black-box reconstruction. We demonstrate that in this setting the models can still be well reconstructed. The experiment details and results are described in Appx. B.3." }, { "heading": "5.2 TRANSFER ATTACK", "text": "To demonstrate a potential exploit of this side-channel exposure, we use reconstructed networks to launch adversarial transfer attacks. A transfer attack relies on a surrogate model, one that approximates the target model, to craft adversarial examples of the target model. In a black-box setting, it is known to be hard in the general case to find an effective surrogate model (Demontis et al., 2019). But under our threat model, the adversary can recover the network structure of a black-box model from the leaked magnetic signals, and use it as the surrogate model.
Here we test on six networks found in the wild, ranging from VGGs to AlexNet to ResNets (listed in Table 3). For each network (and each column in Table 3), we treat it as a black-box model and reconstruct its structure by running it on four different GPUs (listed in Table 3), obtaining four reconstructed versions. Afterwards, we train each reconstructed network and use it to craft adversarial examples for transfer-attacking the original network. The transfer attack success rates are reported in the top four rows of Table 3, and are compared against several baselines shown in the bottom half of the table. Using our side-channel-based surrogate model, the transfer attack success rate is comparable to the white-box transfer attack baseline, that is, using the target model’s network structure (but trained separately) to attack the target model itself.
In other words, our side-channel-based reconstruction effectively turns a black-box attack into a white-box attack.
We also conducted additional experiments for transfer attacks on the MNIST dataset. We reconstruct network models downloaded online and then launch attacks. The results are reported in Appx. B.4." }, { "heading": "5.3 DISCUSSION: DEFENSES AGAINST MAGNETIC SIDE CHANNEL LEAKAGE", "text": "At this point, we have shown the robustness and accuracy of the magnetic side channel exploits. Traditionally, countermeasures fall under the categories of prevention, detection, or jamming.
Since our approach is passive, in that it does not alter any code or hardware operation of GPUs, detection methods, which consist of somehow discovering that someone is listening to the GPU, are not applicable to magnetic leakage. Here we focus on prevention and jamming.
Prevention. As shown in Figure 1, each rise and drop of the magnetic signal corresponds to a boundary between GPU operations. This is only possible when the input batch is large enough to keep every GPU operation sustained and stabilized at a high-load state. To prevent this behavior, one can keep the input sufficiently small (e.g., a single image) such that the magnetic signals never reach any stable state and suffer from a low signal-to-noise ratio, rendering our sensing setup futile. Another way to prevent magnetic side channel leakage is to use a non-standard framework for inference, for which the adversary has no training data to start with.
Jamming. While running on a tiny input batch size might be infeasible for a large input dataset, we find jamming to be another effective defense mechanism. Specifically, during the inference of a large input batch, we ran a third-party CUDA GPU stress test in the background (Timonen, 2020). We found that the magnetic signals are completely distorted because of the constantly high utilization of the GPU. Moreover, we observed little speed degradation for the foreground inference.
The main caveat with this approach is higher power consumption and the possible effects on the lifetime of a GPU.
Another possible defense mechanism results from the fact that we are not tracking the actual dataflow in the GPU. For example, we can correctly identify two GPU operations, convolution and batch norm, in a long sequence. But there is no evidence proving the dataflow follows the same pattern: the output from convolution could be a dead end, while batch norm takes input from a previous GPU operation. This mismatch between the dataflow and the underlying network model makes it hard to decipher robustly. While this defense can handle arbitrary inputs and networks in theory, we are unsure about the implementation hurdles of this defense and how modern deep learning libraries would optimize for such unconventional graphs." }, { "heading": "6 CONCLUDING REMARKS", "text": "We set out to study what can be learned from passively listening to a magnetic side channel in the proximity of a running GPU. Our prototype shows it is possible to extract both the high-level network topology and detailed hyperparameters. To better understand the robustness and accuracy, we collected a dataset of magnetic signals by running inference through thousands of layers on four different GPUs. We also investigated how one might use this side channel information to turn a black-box attack into a white-box transfer attack.
Limitations. In our formulation, we assume networks progress in a linear fashion and do not handle complex graph networks with intricate branching topologies. We cannot tell if a network is trained with dropout, since dropout layers do not appear at inference time. Indeed, any operation that only appears during training is beyond the capability of magnetic side channel snooping.
Our reconstruction DNNs require knowledge of the victim’s GPU model and version.
When these are unknown, the adversary may still exhaustively pretrain multiple sets of reconstruction DNNs for all GPU models and, at runtime, scan through all reconstruction options. Software upgrades, which can lead to significant performance boosts and therefore alter the emitted magnetic signals, may be viewed as further increasing the set of possible GPU models. In our experiments, we keep all the software versions constant, including the OS version, CUDA version, and PyTorch/TensorFlow version.
Ethical Considerations. Our main intention is to study the extent to which a magnetic side channel can leak information. We hope to help people understand that running deep learning inferences on GPUs can leak critical information, and in particular model architecture and hyperparameters, to nearby induction sensors. Sharing these findings creates the potential for malicious use. We introduced several viable defense mechanisms in Sec. 5.3." }, { "heading": "A BILSTM NETWORK STRUCTURE", "text": "Classifying steps in a network model requires taking in a time-series signal and converting it to labeled operations. The EM signal responds only to the GPU’s instantaneous performance, but because the GPU executes a neural network sequence, there is rich context in both the window before and after any one segment of the signal. Some steps are often followed by others, such as pooling operations after a series of convolutions. We take advantage of this bidirectional context in our sequence-to-sequence classification problem by utilizing a BiLSTM network to classify the observed signal. To retrieve a network’s topology, we pass normalized EM values into a two-layer BiLSTM network, with dropout of 0.2 in between. From there, we compute a categorical cross-entropy loss on a time-distributed output that we achieve by sliding an input window across our EM signal.
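A toy version of this sliding-window, per-sample labeling is sketched below; the window classifier stands in for the BiLSTM, and the window width, labels, and boundary handling are assumptions:

```python
def sliding_windows(signal, width, stride=1):
    """Yield (center_index, window) pairs covering the trace; each
    window is classified and its label assigned to the center sample."""
    half = width // 2
    for c in range(half, len(signal) - half, stride):
        yield c, signal[c - half:c + half + 1]

def label_trace(signal, classify, width=5):
    """Per-sample labeling via a window classifier; boundary samples
    inherit the label of the nearest classified center."""
    labels = {c: classify(win) for c, win in sliding_windows(signal, width)}
    keys = sorted(labels)
    return [labels[min(max(i, keys[0]), keys[-1])] for i in range(len(signal))]

# toy classifier: high window mean -> "conv", low window mean -> "idle"
clf = lambda w: "conv" if sum(w) / len(w) > 0.5 else "idle"
trace = [0.9] * 10 + [0.1] * 10
labels = label_trace(trace, clf)
print(labels[0], labels[-1])  # -> conv idle
```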
This approach proves robust, is the method used in all of our experiments, and works on all GPUs tested.\nThe segmented output of our BiLSTM network on our extracted signal is for the most part unambiguous. Operations that follow one another (i.e., convolution, non-linear activation function, pooling) are distinct in their signatures and easily captured from the context enabled by the sliding window signal we use as input to the BiLSTM classifier. Issues arise for very small-sized steps, closer to our sensor’s sampling limit. In such regions a non-linear activation may be over-segmented and split into two (possibly different) activation steps. To ensure consistency we postprocess the segmented results to merge identical steps that are output in sequence, cull out temporal inconsistencies such as pooling before a non-linear activation, and remove activation functions that are larger than the convolutions that precede them." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "B.1 USING LEVENSHTEIN DISTANCE TO MEASURE NETWORK RECONSTRUCTION QUALITY To provide a sense of how the Levenshtein edit distance is related to the network’s ultimate performance, we consider AlexNet (referred to as model A) and its four variants (referred to as models B, C, D, and E). The variants are constructed by randomly altering some of the network steps in model A. The Levenshtein distances between model A and its variants are 1, 2, 2, and 5, respectively (see Fig. S1), and the normalized Levenshtein distances are shown in the brackets of Fig. S1. We then measure the performance (i.e., standard test accuracy) of these models on CIFAR-10. 
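The Levenshtein edit distance between two step sequences, and its normalized form, can be computed with a standard dynamic program. A minimal sketch (function names are ours, not the paper's code):

```python
def levenshtein(a, b):
    """Edit distance between two sequences of network steps."""
    m, n = len(a), len(b)
    # dp[j] holds the distance between the current prefix of a and b[:j]
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev tracks the diagonal entry D(i-1, j-1)
        for j in range(1, n + 1):
            cur = min(dp[j] + 1,                        # deletion
                      dp[j - 1] + 1,                    # insertion
                      prev + (a[i - 1] != b[j - 1]))    # substitution
            prev, dp[j] = dp[j], cur
    return dp[n]

def normalized_levenshtein(a, b):
    """Edit distance divided by the length of the longer sequence."""
    return levenshtein(a, b) / max(len(a), len(b), 1)
```

Note that the bracketed values in Fig. S1 (0.05, 0.11, 0.28) are consistent with normalizing distances 1, 2, and 5 by a step-sequence length of about 18, though the exact length is our inference, not stated in the text.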
As the edit distance increases, the model’s performance drops.\nFigure S1: The model’s classification accuracy drops as its Levenshtein distance from the original model (model A: AlexNet) increases.\nB.2 RECONSTRUCTION QUALITY ON IMAGENET\nWe treat ResNet18 and ResNet50 for ImageNet classification as our black-box models, and reconstruct them from their magnetic signals. We then train those reconstructed networks and compare their test accuracies with the original networks’ performance.\nTable S1: Model reconstruction evaluated on ImageNet classification.\nModel | ResNet18 Original | ResNet18 Extracted | ResNet50 Original | ResNet50 Extracted\nTop-1 Acc. | 64.130 | 64.608 | 62.550 | 61.842\nTop-5 Acc. | 86.136 | 86.195 | 85.482 | 84.738\nKL Div. | - | 2.39 | - | 4.85\nFigure S2: Distribution of normalized Levenshtein distance. (left) We plot the distribution of the normalized Levenshtein distances between the reconstructed and target networks. These results, corresponding to Table 1 in the main text, use signals collected on an Nvidia TITAN V. (right) We also conduct similar experiments on two Nvidia GTX-1080 GPUs. One is used for collecting training signals, and the other is used for testing our side-channel-based reconstruction algorithm.\nBoth the reconstructed and original networks are trained with the same training dataset for the same number of epochs. The results are shown in Table S1, where we report both top-1 and top-5 classification accuracies. 
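The top-1/top-5 metrics reported in Table S1 can be computed from per-class scores as follows (a generic sketch, not the authors' evaluation code):

```python
def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores.

    scores: list of per-class score lists, one per sample.
    labels: list of integer class labels, one per sample.
    """
    hits = 0
    for s, y in zip(scores, labels):
        # indices of the k largest scores for this sample
        top_k = sorted(range(len(s)), key=lambda c: s[c], reverse=True)[:k]
        hits += y in top_k
    return hits / len(labels)
```

For ImageNet, k=5 gives the top-5 accuracy over 1000 classes.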
In addition, we also report a KL-divergence measuring the difference between the 1000-class image label distributions (over the entire ImageNet test dataset) predicted by the original network and by the reconstructed network. Not only are those KL-divergence values small, we also observe that the reconstructed network with the smaller KL-divergence from its original network (i.e., ResNet18) also approaches its original network’s performance more closely.\nB.3 GPU TRANSFERABILITY\nHere we verify that (i) the leaked magnetic signals are largely related to the GPU brand/version but not to other factors such as CPUs and (ii) the signal characteristics from two physical copies of the same GPU type stay consistent.\nWe obtain two copies of an Nvidia GTX-1080 GPU running on two different machines. When we run the same network structure on both GPUs, the resulting magnetic signals are similar to each other, as shown in Fig. S3. This suggests that the GPU cards are indeed the primary sources contributing to the captured magnetic signals.\nNext, we use one GPU to generate training data and the other to collect signals and test our black-box reconstruction. The topology reconstruction results are shown in Table S2, arranged in a way similar to Table 1, and the distribution of the normalized Levenshtein edit distance over the tested networks is shown in Fig. S2-right. These accuracies are very close to the case wherein a single GPU is used. The later part of the reconstruction pipeline (i.e., the hyperparameter recovery) directly depends on the topology reconstruction. Therefore, it is expected that the final reconstruction is also very similar to the single-GPU results.\nB.4 TRANSFER ATTACKS ON MNIST\nWe also conduct transfer attack experiments on the MNIST dataset. We download four networks online, which are not commonly used. 
Two of them are convolutional networks (referred to as CNN1 and CNN2), and the other two are fully connected networks (referred to as DNN1 and DNN2). None of these networks appeared in the training dataset. We treat these networks as black-box models, and reconstruct a network for each of them. We then use the four reconstructed models to transfer attack the four original models, and the results are shown in Table S4. As baselines, we also use the four original models to transfer attack each other, including themselves.\nTable S2: Classification accuracy of network steps (GTX-1080).\nStep | Prec. | Rec. | F1 | # samples\nLSTM | .997 | .999 | .998 | 12186\nConv | .985 | .989 | .987 | 141164\nFully-connected | .818 | .969 | .887 | 9301\nAdd | .962 | .941 | .951 | 30214\nBatchNorm | .956 | .944 | .950 | 48433\nMaxPool | .809 | .701 | .751 | 1190\nAvgPool | .927 | .874 | .900 | 294\nReLU | .868 | .859 | .863 | 11425\nELU | .861 | .945 | .901 | 8311\nLeakyReLU | .962 | .801 | .874 | 3338\nSigmoid | .462 | .801 | .585 | 5106\nTanh | .928 | .384 | .543 | 8050\nWeighted Avg. | .945 | .945 | .945 | -\nTable S3: DNN estimation accuracies. Using the 1804 convolutional layers in our test dataset, we measure the accuracies of our DNN models for estimating the convolutional layers’ hyperparameters. Here, we break the accuracies down into the accuracies for individual hyperparameters.\nMetric | Kernel | Stride | Padding | Image-in | Image-out\nPrecision | 0.971 | 0.976 | 0.965 | 0.968 | 0.965\nRecall | 0.969 | 0.975 | 0.964 | 0.969 | 0.968\nF1 Score | 0.969 | 0.975 | 0.962 | 0.967 | 0.965\nIn Table S4, every row shows the transfer attack success rates when we use different source (surrogate) models to attack a specific original model (CNN1, CNN2, DNN1, or DNN2). Each column labeled “extr.” corresponds to the extracted (reconstructed) model whose target model is given in the column right before it. In addition, we also show all the models’ test accuracies on MNIST in the last row of the table. 
The results show that all the reconstructed models approximate their targets closely, both in terms of their ability to launch transfer attacks and their classification performance." }, { "heading": "C SENSOR SETUP", "text": "The magnetic induction signal we utilize comes from digitally converting the analog readings of a Texas Instruments DRV425 fluxgate sensor with Measurement Computing’s USB-204 data acquisition card. The sensor samples at a frequency of 47 kHz and the converter operates at 50 kHz, mapping the original −2 mT to 2 mT readings across 0 to 5 volts using a 12-bit conversion. Calibrating the sensor requires (a) that the sensor is within range of the electromagnetic signal and (b) that the sensor orientation is consistent. The magnetic induction signal falls off at a rate inversely proportional to distance squared, so the sensor must be placed within 7 mm of the GPU power cable for reliable measurement.\nFigure S3: Here we plot the resulting signals from the same network model deployed on two different instances of an NVIDIA GTX-1080 (running on two different computers). In the green boxes on the left are the spikes that we inject on purpose (discussed in Sec. 4.2) to synchronize the measured signal with the runtime trace of the GPU operations.\nTable S4: MNIST results. Rows are target models; columns are source (surrogate) models.\nTarget | CNN1 | extr. | CNN2 | extr. | DNN1 | extr. | DNN2 | extr.\nCNN1 | .858 | .802 | .226 | .202 | .785 | .795 | .476 | .527\nCNN2 | .395 | .319 | .884 | .878 | .354 | .351 | .354 | .211\nDNN1 | .768 | .812 | .239 | .223 | .999 | .999 | .803 | .885\nDNN2 | .703 | .768 | .219 | .194 | .975 | .979 | .860 | .874\nAccuracy | .989 | .987 | .993 | .991 | .981 | .981 | .980 | .983\nFlipping the flat sensor over will result in a sign change of the magnetic induction signal; thus, a uniform orientation should be maintained so that readings across the dataset do not need to be preprocessed into alignment." } ]
2020
null
SP:93f8114b248a8fbae75eadc40d70c6d38f3faff4
[ "The paper proposes an MCMC based sampling mechanism for GANs. In contrast to earlier work, the proposal distribution is conditioning conditioned on the previous state (here in latent space), which is supposed to help sampling efficiency. This is achieved by a clever re-parametrization of intermediate steps of the MCMC chain. As an example, the authors provide a Langevin version (which uses gradient information) of their method.", "The paper proposes an MCMC sampling strategy for GANs. The idea is clear: for high-dimensional x, making a good proposal is difficult, so they propose to do that in the latent space. Then they use a similar strategy as MH-GAN to compute a rejection strategy. The difference between the two methods is that proposal in MH-GAN does not depend on x while the proposed method does, and the argument is that it results in a higher acceptance rate. " ]
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs). However, in practice, they typically have poor sample efficiency because of the independent proposal sampling from the generator. In this work, we propose REP-GAN, a novel sampling method that allows general dependent proposals by REParameterizing the Markov chains into the latent space of the generator. Theoretically, we show that our reparameterized proposal admits a closed-form Metropolis-Hastings acceptance ratio. Empirically, extensive experiments on synthetic and real datasets demonstrate that our REP-GAN largely improves the sample efficiency and obtains better sample quality simultaneously.
[]
[ { "authors": [ "Samaneh Azadi", "Catherine Olsson", "Trevor Darrell", "Ian Goodfellow", "Augustus Odena" ], "title": "Discriminator rejection sampling", "venue": null, "year": 2019 }, { "authors": [ "Adi Ben-Israel" ], "title": "The change-of-variables formula using matrix volume", "venue": "SIAM Journal on Matrix Analysis and Applications,", "year": 1999 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": null, "year": 2019 }, { "authors": [ "Tong Che", "Ruixiang Zhang", "Jascha Sohl-Dickstein", "Hugo Larochelle", "Liam Paull", "Yuan Cao", "Yoshua Bengio" ], "title": "Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling", "venue": null, "year": 2020 }, { "authors": [ "Andrew Gelman", "John B Carlin", "Hal S Stern", "David B Dunson", "Aki Vehtari", "Donald B Rubin" ], "title": "Bayesian data analysis", "venue": "CRC press,", "year": 2013 }, { "authors": [ "Mevlana C Gemici", "Danilo Rezende", "Shakir Mohamed" ], "title": "Normalizing flows on Riemannian manifolds", "venue": "arXiv preprint arXiv:1611.02304,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Matthew Hoffman", "Pavel Sountsov", "Joshua V Dillon", "Ian Langmore", "Dustin Tran", "Srinivas Vasudevan" ], "title": "Neutra-lizing bad geometry in Hamiltonian Monte Carlo using neural transport", "venue": null, "year": 1903 }, { "authors": [ "Matthew D Hoffman" ], "title": "Learning deep latent gaussian models with markov chain monte carlo", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "NeurIPS,", 
"year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Daniel Levy", "Matthew D Hoffman", "Jascha Sohl-Dickstein" ], "title": "Generalizing Hamiltonian Monte Carlo with neural networks", "venue": null, "year": 2018 }, { "authors": [ "Yingzhen Li", "Richard E Turner", "Qiang Liu" ], "title": "Approximate inference with amortised mcmc", "venue": "arXiv preprint arXiv:1702.08343,", "year": 2017 }, { "authors": [ "Youssef Marzouk", "Tarek Moselhy", "Matthew Parno", "Alessio Spantini" ], "title": "An introduction to sampling via measure transport", "venue": "arXiv preprint arXiv:1602.05023,", "year": 2016 }, { "authors": [ "Luke Metz", "Ben Poole", "David Pfau", "Jascha Sohl-Dickstein" ], "title": "Unrolled generative adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Radford M Neal" ], "title": "MCMC using Hamiltonian dynamics. 
Handbook of markov chain monte carlo", "venue": null, "year": 2010 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Tim Salimans", "Diederik Kingma", "Max Welling" ], "title": "Markov chain monte carlo and variational inference: Bridging the gap", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training GANs", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Jiaming Song", "Shengjia Zhao", "Stefano Ermon" ], "title": "A-NICE-MC: Adversarial training for MCMC", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Yuxuan Song", "Qiwei Ye", "Minkai Xu", "Tie-Yan Liu" ], "title": "Discriminator contrastive divergence: semi-amortized generative modeling by exploring energy of the discriminator", "venue": "arXiv preprint arXiv:2004.01704,", "year": 2020 }, { "authors": [ "Akinori Tanaka" ], "title": "Discriminator optimal transport", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Shichang Tang" ], "title": "Lessons learned from the training of gans on artificial datasets", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "Michalis K Titsias" ], "title": "Learning model reparametrizations: implicit variational inference by fitting MCMC distributions", "venue": "arXiv preprint arXiv:1708.01529,", "year": 2017 }, { "authors": [ "Ryan Turner", "Jane Hung", "Yunus Saatci", "Jason Yosinski" ], "title": "Metropolis-Hastings generative adversarial networks", "venue": null, "year": 2019 }, { "authors": [ "Tongzhou Wang", "Yi Wu", "Dave Moore", "Stuart J Russell" ], "title": "Meta-learning mcmc proposals", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [], "title": "For the change of variables in Eqn", "venue": null, "year": 
1999 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have achieved a great success on generating realistic images in recent years (Karras et al., 2019; Brock et al., 2019). Unlike previous models that explicitly parameterize the data distribution, GANs rely on an alternative optimization between a generator and a discriminator to learn the data distribution implicitly. However, in practice, samples generated by GANs still suffer from problems such as mode collapse and bad artifacts.\nRecently, sampling methods have shown promising results on enhancing the sample quality of GANs by making use of the information in the discriminator. In the alternative training scheme of GANs, the generator only performs a few updates for the inner loop and has not fully utilized the density ratio information estimated by the discriminator. Thus, after GAN training, the sampling methods propose to further utilize this information to bridge the gap between the generative distribution and the data distribution in a fine-grained manner. For example, DRS (Azadi et al., 2019) applies rejection sampling, and MH-GAN (Turner et al., 2019) adopts Markov chain Monte Carlo (MCMC) sampling for the improved sample quality of GANs. Nevertheless, these methods still suffer a lot from the sample efficiency problem. For example, as will be shown in Section 5, MH-GAN’s average acceptance ratio on CIFAR10 can be lower than 5%, which makes the Markov chains slow to mix. As MH-GAN adopts an independent proposal q, i.e., q(x′|x) = q(x′), the difference between samples can be so large that the proposal gets rejected easily.\nTo address this limitation, we propose to generalize the independent proposal to a general dependent proposal q(x′|x). To the end, the proposed sample can be a refinement of the previous one, which leads to a higher acceptance ratio and better sample quality. 
We can also balance between the exploration and exploitation of the Markov chains by tuning the step size. However, it is hard to design a proper dependent proposal in the high-dimensional sample space X because the energy landscape could be very complex (Neal et al., 2010).\nNevertheless, we notice that the generative distribution pg(x) of GANs is implicitly defined as the push-forward of the latent prior distribution p0(z), and designing proposals in the low-dimensional latent space is generally much easier. Hence, GAN’s latent variable structure motivates us to design a structured dependent proposal with two pairing Markov chains, one in the sample space X and the other in the latent space Z. As shown in Figure 1, given the current pairing samples (zk, xk), we draw the next proposal x′ in a bottom-up way: 1) drawing a latent proposal z′ following q(z′|zk); 2) pushing it forward through the generator and getting the sample proposal x′ = G(z′); 3) assigning xk+1 = x′ if the proposal x′ is accepted, and otherwise keeping xk+1 = xk. By utilizing the underlying structure of GANs, the proposed reparameterized sampler becomes more efficient in the low-dimensional latent space. We summarize our main contributions as follows:\n\n• We propose a structured dependent proposal for GANs, which reparameterizes the sample-level transition x → x′ into the latent-level z → z′ with two pairing Markov chains. We prove that our reparameterized proposal admits a tractable acceptance criterion.\n\n• Our proposed method, called REP-GAN, serves as a unified framework for the existing sampling methods of GANs. It provides a better balance between exploration and exploitation by the structured dependent proposal, and also corrects the bias of Markov chains by the acceptance-rejection step.\n\n• Empirical results demonstrate that REP-GAN achieves better image quality and much higher sample efficiency than the state-of-the-art methods on both synthetic and real datasets."
}, { "heading": "2 RELATED WORK", "text": "Although GANs are able to synthesize high-quality images, the minimax nature of GANs makes it quite unstable, which usually results in degraded sample quality. A vast literature has been developed to fix the problems of GANs ever since, including novel network modules (Miyato et al., 2018), training mechanism (Metz et al., 2017), and alternative objectives (Arjovsky et al., 2017).\nMoreover, there is another line of work using sampling methods to improve the sample quality of GANs. DRS (Azadi et al., 2019) firstly proposes to use rejection sampling. MH-GAN (Turner et al., 2019) instead uses the Metropolis-Hasting (MH) algorithm with an independent proposal. DDLS (Che et al., 2020) and DCD (Song et al., 2020) apply gradient-based proposals by viewing GAN as an energy-based model. Tanaka (2019) proposes a similar gradient-based method named DOT from the perspective of optimal transport.\nDifferent from them, our REP-GAN introduces a structured dependent proposal through latent reparameterization, and includes all three effective sampling mechanisms, the Markov Chain Monte Carlo method, the acceptance-rejection step, and the latent gradient-based proposal, to further improve the sample efficiency. As shown in Table 1, many existing works are special cases of our REP-GAN.\nOur method also belongs to the thread of works that combine MCMC and neural networks for better sample quality. Previously, some works combine variational autoencoders (Kingma & Welling, 2014) and MCMC to bridge the amorization gap (Salimans et al., 2015; Hoffman, 2017; Li et al., 2017), while others directly learn a neural proposal function for MCMC (Song et al., 2017; Levy et al., 2018; Wang et al., 2018). Our work instead reparameterizes the high-dimensional sample-level transition into a simpler low-dimensional latent space via the learned generator network." 
}, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 GAN", "text": "GAN models the data distribution pd(x) implicitly with a generator G : Z → X mapping from a low-dimensional latent space Z to a high-dimensional sample space X ,\nx = G(z), z ∼ p0(z), (1)\nwhere the sample x follows the generative distribution pg(x) and the latent variable z follows the prior distribution p0(z), e.g., a standard normal distribution N (0, I). In GAN, a discriminator D : X → [0, 1] is learned to distinguish samples from pd(x) and pg(x) in an adversarial way\nmin G max D Ex∼pd(x) log(D(x)) + Ez∼p0(z) log(1−D(G(z))). (2)\nGoodfellow et al. (2014) point out that an optimal discriminator D implies the density ratio between the data and generative distributions\nD(x) = pd(x) pd(x) + pg(x) ⇒ pd(x) pg(x) =\n1\nD(x)−1 − 1 . (3)" }, { "heading": "3.2 MCMC", "text": "Markov Chain Monte Carlo (MCMC) refers to a kind of sampling methods that draw a chain of samples x1:K ∈ XK from a target distribution pt(x). We denote the initial distribution as p0(x) and the proposal distribution as q(x′|xk). With the Metropolis-Hastings (MH) algorithm, we accept the proposal x′ ∼ q(x′|xk) with probability\nα (x′,xk) = min ( 1, pt (x\n′) q (xk|x′) pt (xk) q (x′|xk)\n) ∈ [0, 1]. (4)\nIf x′ is accepted, xk+1 = x′, otherwise xk+1 = xk. Under mild assumptions, the Markov chain is guaranteed to converge to pt(x) as K →∞. In practice, the sample efficiency of MCMC crucially depends on the proposal distribution to trade off between exploration and exploitation." }, { "heading": "4 THE PROPOSED REP-GAN", "text": "In this section, we first review MH-GAN and point out the limitations. We then propose our structured dependent proposal to overcome these obstacles, and finally discuss its theoretical properties as well as practical implementations." }, { "heading": "4.1 FROM INDEPENDENT PROPOSAL TO DEPENDENT PROPOSAL", "text": "MH-GAN (Turner et al., 2019) first proposes to improve GAN sampling with MCMC. 
Specifically, given a perfect discriminator D and a decent (but imperfect) generator G after training, they take the data distribution pd(x) as the target distribution and use the generator distribution pg(x) as an independent proposal,\nx′ ∼ q(x′|xk) = q(x′) = pg(x′). (5)\nWith the MH criterion (Eqn. (4)) and the density ratio (Eqn. (3)), we should accept x′ with probability\nαMH(x′, xk) = min(1, [pd(x′) q(xk)] / [pd(xk) q(x′)]) = min(1, (D(xk)^{-1} − 1) / (D(x′)^{-1} − 1)). (6)\nHowever, to achieve tractability, MH-GAN adopts an independent proposal q(x′) with poor sample efficiency. As the proposed sample x′ is independent of the current sample xk, the difference between the two samples can be so large that it results in a very low acceptance probability. Consequently, samples can be trapped in the same place for a long time, leading to a very slow mixing of the chain.\nA natural solution is to take a dependent proposal q(x′|xk) that will propose a sample x′ close to the current one xk, which is more likely to be accepted. Nevertheless, the problem of such a dependent proposal is that its MH acceptance criterion\nαDEP(x′, xk) = min(1, [pd(x′) q(xk|x′)] / [pd(xk) q(x′|xk)]), (7)\nis generally intractable because the data density pd(x) is unknown. Besides, it is hard to design a proper dependent proposal q(x′|xk) in the high-dimensional sample space X with its complex landscape. These obstacles prevent us from adopting a dependent proposal that is more suitable for MCMC." }, { "heading": "4.2 A TRACTABLE STRUCTURED DEPENDENT PROPOSAL WITH REPARAMETERIZED MARKOV CHAINS", "text": "As discussed above, the major difficulty of a general dependent proposal q(x′|xk) is to compute the MH criterion. We show that it can be made tractable by considering an additional pairing Markov chain in the latent space.\nAs we know, samples of GANs lie on a low-dimensional manifold induced by the push-forward of the latent variable. 
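As a concrete reference point before we proceed, MH-GAN's acceptance probability in Eqn. (6) reduces to a one-line function of the two discriminator scores (a sketch; the function name is ours):

```python
def alpha_mh(d_new, d_old):
    """MH-GAN acceptance probability (Eqn. (6)).

    d_new = D(x'), d_old = D(x_k), both in (0, 1).
    """
    return min(1.0, (1.0 / d_old - 1.0) / (1.0 / d_new - 1.0))
```

If the proposal looks at least as real to the discriminator as the current sample (d_new ≥ d_old), the ratio is at least 1 and the proposal is always accepted; a much less realistic proposal is accepted only rarely, which is exactly the slow-mixing behavior described above.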
Suppose that at the k-th step of the Markov chain, we have a GAN sample xk with latent zk. Instead of drawing a sample x′ directly from a sample-level proposal distribution q(x′|xk), we first draw a latent proposal z′ from a dependent latent proposal distribution q(z′|zk). Afterward, we push the latent z′ forward through the generator and get the output x′ as our sample proposal.\nAs illustrated in Figure 1, our bottom-up proposal relies on the transition reparameterization with two pairing Markov chains in the sample space X and the latent space Z. Hence we call it a REP (reparameterized) proposal. Through a learned generator, we transport the transition xk → x′ in the high-dimensional space X into the low-dimensional space Z, zk → z′, which enjoys a much better landscape and makes it easier to design proposals in MCMC algorithms. For example, the latent target distribution is nearly standard normal when the generator is nearly perfect. In fact, under mild conditions, the REP proposal distribution qREP(x′|xk) and the latent proposal distribution q(z′|zk) are tied with the following change of variables (Gemici et al., 2016; Ben-Israel, 1999),\nlog qREP(x′|xk) = log q(x′|zk) = log q(z′|zk) − (1/2) log det(J_{z′}^T J_{z′}), (8)\nwhere J_z denotes the Jacobian matrix of the push-forward G at z, i.e., [J_z]_{ij} = ∂x_i / ∂z_j, x = G(z).\nNevertheless, it remains unclear whether we can perform the MH test to decide the acceptance of the proposal x′. Note that a general dependent proposal distribution does not meet a tractable MH acceptance criterion (Eqn. (7)). Perhaps surprisingly, it can be shown that with our structured REP proposal, the MH acceptance criterion is tractable for general latent proposals q(z′|zk).\nTheorem 1. Consider a Markov chain of GAN samples x_{1:K} with initial distribution pg(x). For step k + 1, we accept our REP proposal x′ ∼ qREP(x′|xk) with probability\nαREP(x′, xk) = min(1, [p0(z′) q(zk|z′)] / [p0(zk) q(z′|zk)] · (D(xk)^{-1} − 1) / (D(x′)^{-1} − 1)), (9)\ni.e., 
let xk+1 = x′ if x′ is accepted and xk+1 = xk otherwise. Further assume the chain is irreducible, aperiodic, and not transient. Then, according to the Metropolis-Hastings algorithm, the stationary distribution of this Markov chain is the data distribution pd(x) (Gelman et al., 2013).\nProof. Note that, similar to Eqn. (8), we also have the change of variables between pg(x) and p0(z),\nlog pg(x)|_{x=G(z)} = log p0(z) − (1/2) log det(J_z^T J_z). (10)\nAccording to Gelman et al. (2013), the assumptions that the chain is irreducible, aperiodic, and not transient make sure that the chain has a unique stationary distribution, and the MH algorithm ensures that this stationary distribution equals the target distribution pd(x). Thus we only need to show that the MH criterion in Eqn. (9) holds. Together with Eqn. (3), (7) and (8), we have\nαREP(x′, xk) = [pd(x′) q(xk|x′)] / [pd(xk) q(x′|xk)]\n= [(pd(x′)/pg(x′)) pg(x′) q(zk|z′) (det J_{zk}^T J_{zk})^{-1/2}] / [(pd(xk)/pg(xk)) pg(xk) q(z′|zk) (det J_{z′}^T J_{z′})^{-1/2}]\n= [q(zk|z′) (det J_{zk}^T J_{zk})^{-1/2} p0(z′) (det J_{z′}^T J_{z′})^{-1/2} (D(xk)^{-1} − 1)] / [q(z′|zk) (det J_{z′}^T J_{z′})^{-1/2} p0(zk) (det J_{zk}^T J_{zk})^{-1/2} (D(x′)^{-1} − 1)]\n= [p0(z′) q(zk|z′) (D(xk)^{-1} − 1)] / [p0(zk) q(z′|zk) (D(x′)^{-1} − 1)]. (11)\nHence the proof is completed.\nThe theorem above demonstrates the following favorable properties of our method:\n• The discriminator score ratio is the same as in αMH(x′, xk), but MH-GAN is restricted to a specific independent proposal. Our method instead works for any latent proposal q(z′|zk). When we take q(z′|zk) = p0(z′), our method reduces to MH-GAN.\n• Compared to αDEP(x′, xk) of a general dependent proposal (Eqn. 
(7)), the unknown data distribution terms cancel out in the reparameterized acceptance criterion.\n• The reparameterized MH acceptance criterion becomes tractable as it only involves the latent priors, the latent proposal distributions, and the discriminator scores.\nCombining the REP proposal qREP(x′|xk) and its tractable MH criterion αREP(x′, xk), we have developed a novel sampling method for GANs, coined REP-GAN. See Appendix 1 for a detailed description. Moreover, our method can serve as a general approximate inference technique for Bayesian models by bridging MCMC and GANs. Previous works (Marzouk et al., 2016; Titsias, 2017; Hoffman et al., 2019) also propose to avoid the bad geometry of a complex probability measure by reparameterizing the Markov transitions into a simpler measure. However, these methods are limited to explicit invertible mappings without dimensionality reduction. In our work, we first show that it is also tractable to conduct such model-based reparameterization with implicit models like GANs." }, { "heading": "4.3 A PRACTICAL IMPLEMENTATION", "text": "REP-GAN enables us to utilize the vast literature of existing MCMC algorithms (Neal et al., 2010) to design dependent proposals for GANs. We take Langevin Monte Carlo (LMC) as an example. As an Euler-Maruyama discretization of the Langevin dynamics, LMC updates the Markov chain with\nxk+1 = xk + (τ/2) ∇x log pt(xk) + √τ · ε, ε ∼ N(0, I), (12)\nfor a target distribution pt(x). Compared to MH-GAN, LMC utilizes the gradient information to explore the energy landscape more efficiently. However, if we directly take the (unknown) data distribution pd(x) as the target distribution pt(x), LMC does not admit a tractable update rule.\nAs discussed above, the reparameterization of REP-GAN makes it easier to design transitions in the low-dimensional latent space. Hence, we instead propose to use LMC for the latent Markov chain. 
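The acceptance test of Eqn. (9) itself is cheap to evaluate once the latent log densities and discriminator scores are available; a direct transcription (argument names are ours, in log space for stability):

```python
import math

def alpha_rep(logp0_new, logp0_old, logq_old_given_new, logq_new_given_old,
              d_new, d_old):
    """REP-GAN acceptance probability (Eqn. (9)).

    logp0_*:            log prior densities log p0(z') and log p0(z_k).
    logq_old_given_new: log q(z_k | z').
    logq_new_given_old: log q(z' | z_k).
    d_new, d_old:       discriminator scores D(x') and D(x_k) in (0, 1).
    """
    log_ratio = (logp0_new + logq_old_given_new
                 - logp0_old - logq_new_given_old)
    score_ratio = (1.0 / d_old - 1.0) / (1.0 / d_new - 1.0)
    return min(1.0, math.exp(log_ratio) * score_ratio)
```

With a symmetric latent proposal and equal prior densities, the first factor is 1 and the criterion matches MH-GAN's discriminator score ratio, as noted in the first bullet above.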
We assume that the data distribution also lies on the low-dimensional manifold induced by the generator, i.e., Supp(pd) ⊂ Im(G). This implies that the data distribution pd(x) also has a pairing distribution in the latent space, denoted as pt(z). They are tied with the change of variables\nlog pd(x)|_{x=G(z)} = log pt(z) − (1/2) log det(J_z^T J_z). (13)\nTaking pt(z) as the (unknown) target distribution of the latent Markov chain, we have the following Latent LMC (L2MC) proposal,\nz′ = zk + (τ/2) ∇z log pt(zk) + √τ · ε\n= zk + (τ/2) ∇z log [pt(zk) (det J_{zk}^T J_{zk})^{-1/2} / (p0(zk) (det J_{zk}^T J_{zk})^{-1/2})] + (τ/2) ∇z log p0(zk) + √τ · ε\n= zk + (τ/2) ∇z log [pd(xk) / pg(xk)] + (τ/2) ∇z log p0(zk) + √τ · ε\n= zk − (τ/2) ∇z log(D^{-1}(xk) − 1) + (τ/2) ∇z log p0(zk) + √τ · ε, ε ∼ N(0, I), (14)\nwhere xk = G(zk). As we can see, L2MC is made tractable by our structured dependent proposal with pairing Markov chains. DDLS (Che et al., 2020) proposes a similar Langevin proposal by formalizing GANs as an implicit energy-based model, while here we provide a straightforward derivation through reparameterization. Our major difference to DDLS is that REP-GAN also includes a tractable MH correction step (Eqn. (9)), which accounts for the numerical errors introduced by the discretization and ensures that detailed balance holds." }, { "heading": "4.4 EXTENSION TO WGAN", "text": "Our method can also be extended to other kinds of GANs, like Wasserstein GAN (WGAN) (Arjovsky et al., 2017). The WGAN objective is\nmin_G max_D E_{x∼pd(x)}[D(x)] − E_{x∼pg(x)}[D(x)], (15)\nwhere D : X → R is restricted to be a Lipschitz function. Under certain conditions, WGAN also implies an approximate estimation of the density ratio (Che et al., 2020),\nD(x) ≈ log(pd(x) / pg(x)) + const ⇒ pd(x) / pg(x) ≈ exp(D(x)) · const. (16)\nFollowing the same derivations as in Eqns. (11) and (14), we will have the WGAN version of REP-GAN. 
Specifically, with xk = G(zk), the L2MC proposal follows\nz′ = zk + (τ/2) ∇z D(xk) + (τ/2) ∇z log p0(zk) + √τ · ε, ε ∼ N (0, I), (17)\nand the MH acceptance criterion is\nαREP−W(x′, xk) = min ( 1, [q(zk|z′)p0(z′) / (q(z′|zk)p0(zk))] · exp(D(x′)) / exp(D(xk)) ) . (18)" }, { "heading": "5 EXPERIMENTS", "text": "We show our empirical results both on synthetic and real image datasets." }, { "heading": "5.1 SYNTHETIC DATA", "text": "Following DOT (Tanaka, 2019) and DDLS (Che et al., 2020), we apply REP-GAN to the synthetic Swiss Roll dataset, where data samples lie on a Swiss roll manifold in the two-dimensional space. We construct the dataset with scikit-learn using 100,000 samples, and train a WGAN as in Tanaka (2019), where both the generator and discriminator are fully connected neural networks with leaky ReLU nonlinearities. We optimize the model using the Adam optimizer, with α = 0.0001, β1 = 0.5, β2 = 0.9. After training, we draw 1,000 samples with different sampling methods.\nAs shown in Figure 2, with an appropriate step size (τ = 0.01), the gradient-based methods (DDLS and REP-GAN) outperform the independent proposals (DRS and MH-GAN) by a large margin, while the samples of DDLS are more discontinuous in shape than those of REP-GAN. In DDLS, when the step size becomes too large (τ = 0.1, 1), the numerical error of the Langevin dynamics becomes so large that the chain either collapses or diverges. In contrast, those bad proposals are rejected by the MH correction steps of REP-GAN, which prevents the misbehavior of the Markov chain." }, { "heading": "5.2 REAL IMAGE DATA", "text": "Following MH-GAN (Turner et al., 2019), we conduct experiments on two real-world image datasets, CIFAR-10 and CelebA, for DCGAN (Radford et al., 2015) and WGAN (Arjovsky et al., 2017). Following the conventional evaluation protocol, we initialize each Markov chain with a GAN sample, run it for 640 steps, and take the last sample for evaluation.
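This sampling protocol — propose with the latent Langevin step of Eqn. (17), accept or reject with the criterion of Eqn. (18), and keep the last state of the chain — can be sketched end-to-end. The linear "generator", quadratic "critic", and all hyperparameters below are toy assumptions for illustration only, not the trained models used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's trained models):
A = np.array([[1.0, 0.5], [0.0, 1.0]])   # "generator" G(z) = A z
mu = np.array([2.0, -1.0])               # peak of the toy critic

def G(z):
    return A @ z

def D(x):
    # Toy WGAN critic score (higher = more "real")
    return -0.5 * np.sum((x - mu) ** 2)

def grad_z_D_of_G(z):
    # d/dz D(G(z)) for the toy G and D above, computed analytically
    return -A.T @ (A @ z - mu)

def log_p0(z):
    # Standard normal latent prior (up to a constant)
    return -0.5 * np.sum(z ** 2)

def proposal_mean(z, tau):
    # L2MC drift, Eqn. (17): z + (tau/2) d/dz D(G(z)) + (tau/2) d/dz log p0(z)
    return z + 0.5 * tau * grad_z_D_of_G(z) + 0.5 * tau * (-z)

def log_q(z_to, z_from, tau):
    # Gaussian proposal density N(z_to; proposal_mean(z_from), tau * I),
    # up to a constant that cancels in the MH ratio
    return -np.sum((z_to - proposal_mean(z_from, tau)) ** 2) / (2 * tau)

def rep_wgan_chain(z0, tau=0.05, n_steps=640):
    z, n_accept = np.array(z0, dtype=float), 0
    for _ in range(n_steps):
        z_prop = proposal_mean(z, tau) + np.sqrt(tau) * rng.standard_normal(2)
        # MH criterion of Eqn. (18), computed in log space
        log_alpha = (log_q(z, z_prop, tau) + log_p0(z_prop) + D(G(z_prop))
                     - log_q(z_prop, z, tau) - log_p0(z) - D(G(z)))
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            z, n_accept = z_prop, n_accept + 1
    return G(z), n_accept / n_steps

x_final, acc_rate = rep_wgan_chain(z0=rng.standard_normal(2))
print(x_final, acc_rate)
```

With a small step size the proposal stays close to an exact Langevin step on the reparameterized target, so the MH correction mostly rejects only the proposals whose discretization error is large, keeping the acceptance ratio high.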
We collect 50,000 samples to evaluate the Inception Score1 (Salimans et al., 2016). The step size τ of our L2MC proposal is 0.01 on CIFAR-10 and 0.1 on CelebA. We calibrate the discriminator with Logistic Regression as in Turner et al. (2019).\nFrom Table 2, we can see that our method outperforms state-of-the-art methods on both datasets. We also plot the Inception Score and acceptance ratio per epoch in Figure 3 based on our re-implementation. Although the training process of GANs is known to be very unstable, our REP-GAN still outperforms previous sampling methods both consistently (superior in most epochs) and significantly (the improvement is larger than the error bar), as shown in the left panel of Figure 3. In the right panel, we find that the average acceptance ratio of MH-GAN is lower than 0.2 in most cases, while REP-GAN has an acceptance ratio of 0.4-0.8, which is known to be a good tradeoff for MCMC algorithms. We also notice that the acceptance ratio goes down as training continues. We suspect this is because the distribution landscape becomes more complex, so a constant sampling step size produces more distinct samples that are more likely to be rejected.\n1For a fair comparison, our training and evaluation follow the official code of MH-GAN (Turner et al., 2019): https://github.com/uber-research/metropolis-hastings-gans\nAblation study. From Table 3, we can see that without the MH correction step, the Langevin steps often result in worse sample quality. Meanwhile, the acceptance ratio is very small on CIFAR-10 without the dependent REP proposal. As a result, our REP-GAN (REP+MH) is the only setup that consistently improves over the baseline and obtains the best Inception Score on each dataset. The only exception is DCGAN on CelebA, where the independent proposal outperforms our REP proposal with a higher acceptance ratio.
We believe that this is because the human face samples of CelebA are very similar to each other, such that independent samples from the generator can also be easily accepted. Nevertheless, the acceptance ratio of the independent proposal can be much lower on datasets with diverse sources of images, like CIFAR-10.\nMarkov chain visualization. In Figure 4, we demonstrate two Markov chains sampled with different methods. We can see that MH-GAN is often trapped in the same place because of the independent proposals. DDLS and REP-GAN instead gradually refine the samples with gradient steps. In addition, comparing the two gradient-based methods, we can see that the MH rejection steps of REP-GAN help avoid some bad artifacts in the images. For example, in the camel-like images marked in red, the body of the camel is separated in the sample of DDLS (middle) while it is not in the sample of REP-GAN (bottom). Note that the evaluation protocol only needs the last step of the chain, so we prefer a small step size that fine-tunes the initial samples for better sample quality. As shown in Figure 5, our REP proposal can also produce very diverse images with a large step size." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have proposed a novel method, REP-GAN, to improve the sampling of GANs. We devise a structured dependent proposal that reparameterizes the sample-level transition of a GAN into a latent-level transition. More importantly, we show for the first time that this general proposal admits a tractable MH criterion. Experiments show that our method not only improves sample efficiency but also achieves state-of-the-art sample quality on benchmark datasets compared to existing sampling methods." }, { "heading": "A APPENDIX", "text": "A.1 ASSUMPTIONS AND IMPLICATIONS\nNote that our method needs a few assumptions on the models for our analysis to hold. Here we state them explicitly and discuss their applicability and potential impacts. Assumption 1.
The generator mapping G : Rn → Rm (n < m) is injective, and its Jacobian matrix [∂G(z)/∂z], of size m × n, has full column rank for all z ∈ Rn.\nFor the change of variables in Eqn. (8) and (10) to hold, according to Ben-Israel (1999), we need the mapping to be injective and its Jacobian to have full column rank. A mild sufficient condition for injectivity is that the generator only contains (non-degenerate) affine layers and injective nonlinearities, like LeakyReLU. It is not hard to show that such a condition also implies the full column rank of the Jacobian. In fact, this architecture has already been found to benefit GANs and achieve state-of-the-art results (Tang, 2020). The affine layers here are also likely to be non-degenerate because their weights are randomly initialized and typically do not degenerate in practice during the training of GANs.\nAssumption 2. The discriminator D offers a perfect estimate of the density ratio between the generative distribution pg(x) and the data distribution pd(x) as in Eqn. (3).\nThis is a common, critical, but less practical assumption among the existing sampling methods for GANs. It is unlikely to hold exactly in practice: during the alternating training of GANs, the generator is changing all the time, and a few updates of the discriminator cannot fully learn the corresponding density ratio. Nevertheless, we believe the discriminator captures the density ratio to some extent, which explains why the sampling methods can consistently improve over the baseline at each epoch.\nFrom our understanding, the estimated density ratio is accurate enough to improve the generated samples, but not accurate enough to bring them all the way to the data distribution. This could be the reason why the Inception Scores obtained by the sampling methods improve over the baselines but cannot reach that of real data and fully close the gap, even with very long runs of the Markov chains.\nHence, there is still much room for improvement.
To list a few directions, one can develop mechanisms that provide a more accurate density ratio estimate, relax the assumptions required for the method to hold, or establish estimation error bounds. Overall, we believe GANs offer an interesting alternative scenario for the development of sampling methods.\nA.2 ALGORITHM PROCEDURE\nWe give a detailed description of the algorithm procedure of our REP-GAN in Algorithm 1.\nAlgorithm 1 GAN sampling with Reparameterized Markov chains (REP-GAN)\nInput: trained GAN with (calibrated) discriminator D and generator G, Markov chain length K, latent prior distribution p0(z), latent proposal distribution q(z′|zk)\nOutput: an improved GAN sample xK\nDraw an initial sample x1: 1) draw an initial latent z1 ∼ p0(z) and 2) push forward x1 = G(z1)\nfor each step k ∈ [1, K − 1] do\nDraw a REP proposal x′ ∼ qREP(x′|xk): 1) draw a latent proposal z′ ∼ q(z′|zk), and 2) push forward x′ = G(z′)\nCalculate the MH acceptance criterion αREP(xk, x′) following Eqn. (9)\nAccept the proposal with probability αREP(xk, x′)\nif x′ is accepted then let xk+1 = x′, zk+1 = z′; else let xk+1 = xk, zk+1 = zk\nend for\nA.3 ADDITIONAL EMPIRICAL RESULTS\nHere we list some additional empirical results of our methods.\nFréchet Inception Distance (FID). We additionally report the comparison of Fréchet Inception Distance (FID) in Table 4. Because previous works do not report FID on these benchmarks, we report our re-implementation results instead. We can see that the rankings are consistent with the Inception Scores in Table 2 and our method is superior in most cases.\nComputation overhead. In Table 5, we compare different gradient-based sampling methods of GANs. DDLS and our REP-GAN take 88.94s and 88.85s, respectively, hence the difference is negligible.
Without the MH step, our method takes 87.62s, meaning the additional MH step only costs a 1.4% computation overhead, which is also negligible, while it brings a significant improvement in sample quality as shown in Table 3.\nMarkov chain visualization on CelebA. We demonstrate two Markov chains on CelebA with different MCMC sampling methods of WGAN in Figure 6. We can see that on CelebA, the acceptance ratio of MH-GAN becomes much higher than that on CIFAR-10. Nevertheless, the sample quality is still relatively low. In comparison, the gradient-based methods can gradually refine the samples with Langevin steps, and our REP-GAN can alleviate image artifacts with MH correction steps.\nA.4 MULTI-MODAL EXPERIMENTS\nAside from the manifold learning example shown in Figure 2, we additionally conduct experiments to illustrate the performance of our sampling methods on multi-modal distributions.\n25-Gaussians. To begin with, we consider the 25-Gaussians dataset widely discussed in previous work (Azadi et al., 2019; Turner et al., 2019; Che et al., 2020). The 25-Gaussians dataset is generated by a mixture of twenty-five two-dimensional isotropic Gaussian distributions with variance 0.01 and means separated by 1, arranged in a grid. We train a small Wasserstein GAN model with the standard WGAN-GP objective following the setup in Tanaka (2019). After training, we draw 1,000 samples with different sampling methods. As before, we start a Markov chain with a GAN sample, run it for 100 steps, and collect the last example for evaluation.\nAs shown in Figure 7, compared to MH-GAN, the gradient-based methods (DDLS and ours) produce much better samples close to the data distribution with a proper step size. Comparing DDLS and ours, DDLS tends to concentrate so much on the mode centers that its standard deviation can be even smaller than that of the data distribution. Instead, our method preserves more sample diversity while still concentrating on the mode centers.
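(For reference, the 25-Gaussians data described above — a grid of isotropic 2D Gaussians with variance 0.01 and unit spacing between means — can be generated in a few lines; the sample count and seed below are arbitrary choices.)

```python
import numpy as np

def sample_grid_gaussians(n_samples, n_per_axis=5, spacing=1.0, var=0.01, seed=0):
    """Mixture of n_per_axis**2 isotropic 2D Gaussians whose means form a grid."""
    rng = np.random.default_rng(seed)
    coords = (np.arange(n_per_axis) - (n_per_axis - 1) / 2) * spacing
    means = np.array([(x, y) for x in coords for y in coords])  # grid of centers
    idx = rng.integers(len(means), size=n_samples)              # pick modes uniformly
    return means[idx] + np.sqrt(var) * rng.standard_normal((n_samples, 2))

data = sample_grid_gaussians(1000)
print(data.shape)  # (1000, 2)
```

The 9x9 and 13x13 variants used for the "scale to more modes" experiments correspond to changing n_per_axis.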
When the step size becomes larger, the difference becomes more obvious. When τ = 0.1, as marked with blue circles, the samples of DDLS become so concentrated that some modes are missed. When τ = 1, the samples of DDLS diverge far beyond the 5x5 grid. In comparison, our REP-GAN does not suffer from these issues, with the MH correction steps accounting for the bias introduced by numerical errors.\nScale to more modes. In the above, we have experimented with a relatively easy scenario where the multi-modal distribution only has 5x5 modes (n = 5 modes along each axis). In fact, the distinctions between the sampling methods become even more obvious when we scale to more modes. Specifically, as shown in Figure 8, we also compare them on mixtures of Gaussians with 9x9 and 13x13 modes, respectively. The rest of the setup is similar to the 25-Gaussians experiment. Note that throughout the experiments in this part, we adopt a proper step size, τ = 0.01, for the gradient-based methods (DDLS and REP-GAN) by default.\nUnder the more challenging scenarios, we can see that the gradient-based methods still consistently outperform MH-GAN. Moreover, our REP-GAN has a clearer advantage over DDLS. Specifically, for 9x9 modes, our REP-GAN produces samples that are less noisy (i.e., fewer examples distinct from the modes), while preserving all the modes. For 13x13 modes, DDLS makes a critical mistake: it drops one of the modes (lower left corner, marked with a red circle) during the Markov chain update. As discussed above, we believe this is because DDLS has a bias towards regions with high probability, while ignoring the diversity of the distribution. In comparison, REP-GAN effectively prevents such bias with the MH correction steps." } ]
2020
null
SP:dac2e985d39e3466dafbf20124fdafe0f5b9bd24
[ "This paper studies how to estimate the performance of pruned networks using regression models. The authors first empirically observe that there exist three distinct regions of sparsity: (1) in the low-sparsity regime, pruning does not decrease the accuracy; (2) in the mid-sparsity regime, a linear relationship between the sparsity and the accuracy is observed; (3) in the high-sparsity regime, the accuracy again becomes insensitive to further pruning. Based on this observation, the authors proposed a regression model called the rational family and empirically verified its performance. The authors further extended this model to incorporate the network width and depth under an empirical observation called the error-preserving invariant. The authors performed experiments to verify different perspectives of the proposed functional form.", "The authors propose a functional approximation to the error of pruned convolutional neural networks as a function of network hyperparameters. This functional approximation depends on a number of hyperparameters that are fit on the error of already trained and pruned networks on a certain task (in this case, image classification on CIFAR-10 and ImageNet are the tasks under consideration). The authors demonstrate that this fit is very accurate over many orders of magnitude, which demonstrates their hypothesis on the power law nature of the error distribution as a function of the hyperparameters under consideration." ]
We show that the error of iteratively-pruned networks empirically follows a scaling law with interpretable coefficients that depend on the architecture and task. We functionally approximate the error of the pruned networks, showing that it is predictable in terms of an invariant tying width, depth, and pruning level, such that networks of vastly different sparsities are freely interchangeable. We demonstrate the accuracy of this functional approximation over scales spanning orders of magnitude in depth, width, dataset size, and sparsity. We show that the scaling law functional form holds (generalizes) for large scale data (CIFAR-10, ImageNet), architectures (ResNets, VGGs) and iterative pruning algorithms (IMP, SynFlow). As neural networks become ever larger and more expensive to train, our findings suggest a framework for reasoning conceptually and analytically about pruning.
[]
[ { "authors": [ "Davis Blalock", "Jose Javier Gonzalez Ortiz", "Jonathan Frankle", "John Guttag" ], "title": "What is the state of neural network pruning", "venue": "Conference on Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners, 2020", "venue": null, "year": 2020 }, { "authors": [ "Han Cai", "Chuang Gan", "Song Han" ], "title": "Once for all: Train one network and specialize it for efficient deployment", "venue": null, "year": 1908 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M Roy", "Michael Carbin" ], "title": "Linear mode connectivity and the lottery ticket hypothesis", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks", "venue": "arXiv preprint arXiv:1902.09574,", "year": 2019 }, { "authors": [ "Mitchell A Gordon", "Kevin Duh", "Nicholas Andrews" ], "title": "Compressing bert: Studying the effects of weight pruning on transfer learning", "venue": "arXiv preprint arXiv:2002.08307,", "year": 2020 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md Patwary", "Mostofa Ali", "Yang Yang", "Yanqi Zhou" ], "title": "Deep learning scaling is predictable, empirically", "venue": "arXiv preprint arXiv:1712.00409,", "year": 2017 }, { "authors": [ "Steven A. 
Janowsky" ], "title": "Pruning versus clipping in neural networks", "venue": "Phys. Rev. A,", "year": 1989 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Zhuohan Li", "Eric Wallace", "Sheng Shen", "Kevin Lin", "Kurt Keutzer", "Dan Klein", "Joseph E Gonzalez" ], "title": "Train large, then compress: Rethinking model size for efficient training and inference of transformers", "venue": "arXiv preprint arXiv:2002.11794,", "year": 2020 }, { "authors": [ "Russell Reed" ], "title": "Pruning algorithms-a survey", "venue": "IEEE transactions on Neural Networks,", "year": 1993 }, { "authors": [ "Alex Renda", "Jonathan Frankle", "Michael Carbin" ], "title": "Comparing rewinding and fine-tuning in neural network pruning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jonathan S. Rosenfeld", "Amir Rosenfeld", "Yonatan Belinkov", "Nir Shavit" ], "title": "A constructive prediction of the generalization error across scales", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "F. Appendix" ], "title": "A discussion of extrapolation for our scaling law. Appendix G. A more detailed comparison between our scaling law and that of Rosenfeld et al", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "For decades, neural network pruning—eliminating unwanted parts of the network—has been a popular approach for reducing network sizes or computational demands of inference (LeCun et al., 1990; Reed, 1993; Han et al., 2015). In practice, pruning can reduce the parameter-counts of contemporary models by 2x (Gordon et al., 2020) to 5x (Renda et al., 2020) with no reduction in accuracy. More than 80 pruning techniques have been published in the past decade (Blalock et al., 2020), but, despite this enormous volume of research, there remains little guidance on important aspects of pruning. Consider a seemingly simple question one might ask when using a particular pruning technique:\nGiven a family of neural networks (e.g., ResNets on ImageNet of various widths and depths), which family member should we prune (and by how much) to obtain the network with the smallest parametercount such that error does not exceed some threshold k?\nAs a first try, we could attempt to answer this question using brute force: we could prune every member of a family (i.e., perform grid search over widths and depths) and select the smallest pruned network that satisfies our constraint on error. However, depending on the technique, pruning one network (let alone grid searching) could take days or weeks on expensive hardware.\nIf we want a more efficient alternative, we will need to make assumptions about pruned networks: namely, that there is some structure to the way that their error behaves. For example, that pruning a particular network changes the error in a predictable way. Or that changing the width or depth of a network changes the error when pruning it in a predictable way. We could then train a smaller number of networks, characterize this structure, and estimate the answer to our question.\nWe have reason to believe that such structure does exist for pruning: techniques already take advantage of it implicitly. For example, Cai et al. 
(2019) create a single neural network architecture that can be scaled down to many different sizes; to choose which subnetwork to deploy, Cai et al. train an auxiliary, black-box neural network to predict subnetwork performance. Although this black-box approach implies the existence of structure, it does not reveal this structure explicitly or make it possible to reason analytically in a fashion that could answer our research question.\nOutside the context of pruning algorithms, such structure has been observed—and further codified explicitly—yielding insights and predictions in the form of scaling laws. Tan and Le (2019) design the EfficientNet family by developing a heuristic for predicting efficient tradeoffs between depth, width, and resolution. Hestness et al. (2017) observe a power-law relationship between dataset size\nand the error of vision and NLP models. Rosenfeld et al. (2020) use a power scaling law to predict the error of all variations of architecture families and dataset sizes jointly, for computer vision and natural language processing settings. Kaplan et al. (2020) develop a similar power law for language models that incorporates the computational cost of training.\nInspired by this line of work, we address our research question about pruning by developing a scaling law to predict the error of networks as they are pruned. To the best of our knowledge, no explicit scaling law holding over pruning algorithms and network types currently exists. In order to formulate such a predictive scaling law, we consider the dependence of generalization error on the pruning-induced density for networks of different depths and width trained on different dataset sizes.\nWe begin by developing a functional form that accurately estimates the generalization error of a specific model as it is pruned (Section 3). 
We then account for other architectural degrees of freedom, expanding the functional form for pruning into a scaling law that jointly considers density alongside width, depth, and dataset size (Section 4). The basis for this joint scaling law is an invariant we uncover that describes ways that we can interchange depth, width, and pruning without affecting error. The result is a scaling law that accurately predicts the performance of pruned networks across scales. Finally, we use this scaling law to answer our motivating question (Section 7).\nThe same functional form can accurately estimate the error for both unstructured magnitude pruning (Renda et al., 2020) and SynFlow (Tanaka et al., 2020) when fit to the corresponding data, suggesting we have uncovered structure that may be applicable to iterative pruning more generally. And now that we have established this functional form, fitting it requires only a small amount of data (Appendix 5). In summary, our contributions are as follows:\n• We develop a scaling law that accurately estimates the error when pruning a single network. • We observe and characterize an invariant that allows error-preserving interchangeability among\ndepth, width, and pruning density.\n• Using this invariant, we extend our single-network scaling law into a joint scaling law that predicts the error of all members of a network family at all dataset sizes and all pruning densities.\n• In doing so, we demonstrate that there is structure to the behavior of the error of iteratively pruned networks that we can capture explicitly with a simple functional form.\n• Our scaling law enables a framework for reasoning analytically about pruning, allowing us to answer our motivating question and similar questions about pruning." }, { "heading": "2 EXPERIMENTAL SETUP", "text": "Pruning. 
We study two techniques for pruning neural networks: iterative magnitude pruning (IMP) (Janowsky, 1989; Han et al., 2015; Frankle et al., 2020) in the main body of the paper and SynFlow (Tanaka et al., 2020) in Appendix E. We describe IMP in detail here and SynFlow in Appendix A. IMP prunes by removing a fraction—typically 20%, as we do here—of individual weights with the lowest magnitudes at the end of training.1 We choose these weights globally throughout the network, i.e., without regard to specific layers. We use per-weight magnitude pruning because it is generic, well-studied (Han et al., 2015), and matches the sparsity/accuracy tradeoffs of more complicated methods (Gale et al., 2019; Blalock et al., 2020; Renda et al., 2020).\nPruning weights typically reduces the accuracy of the trained network, so it is standard practice to further train after pruning to recover accuracy. For IMP, we use a practice called weight rewinding, in which the values of unpruned weights are rewound to their values at epoch 10 and the training process is repeated from there to completion. To achieve density levels below 80%, this process is repeated iteratively—pruning by 20%, rewinding, and retraining—until a desired density level is reached. Renda et al. (2020) demonstrate that IMP with weight rewinding achieves state-of-the-art tradeoffs between sparsity and accuracy. For a formal statement of this pruning algorithm, see Appendix A.\nDatasets. We study the image classification tasks CIFAR-10 and ImageNet. Our scaling law predicts the error when training with the entire dataset and smaller subsamples. To subsample a dataset to a size of n, we randomly select n of the training examples without regard to individual classes such\n1We do not prune biases or BatchNorm, so pruning 20% of weights prunes fewer than 20% of parameters.\nthat in expectation we preserve the original dataset distribution (we always retain the entire test set). 
When performing iterative pruning, we maintain the same subsample for all pruning iterations.\nNetworks. We study three families of neural networks: ResNets for CIFAR-10, ResNets for ImageNet, and (in Appendix E) VGG-style networks for CIFAR-10.2 We develop a scaling law that predicts the error (when pruned) of an entire family of networks with varying widths and—in the case of the CIFAR-10 ResNets—depths. To vary width, we multiply the number of channels in each layer by a width scaling factor. To vary depth of the CIFAR-10 ResNets, we vary the number of residual blocks. We refer to a network by its depth l (the number of layers in the network, not counting skip connections) and its width scaling factor w.\nNotation and terminology. Throughout the paper, we use the following notation and terminology:\n• DN = {xi, yi}, i = 1, . . . , N, is a labeled training set with N examples. A subsample of size n is a subset of DN containing n examples selected uniformly at random.\n• l and w are, respectively, the depth (i.e., the number of layers, excluding skip connections) and the width scaling factor of a particular network.\n• A collection of networks that vary by width and depth is a network family.\n• s is the sparsity of a pruned network (i.e., the fraction of weights that have been pruned) and d := 1 − s is the density (i.e., the fraction of weights that have not been pruned).\n• ε(d, l, w, n) is the test error of a network with the specified density, depth, width scaling factor, and dataset size.\n• εnp(l, w, n) = ε(1, l, w, n) is the test error of the unpruned network with the specified depth, width scaling factor, and dataset size. When clear from context, we omit (w, l, n) and write εnp.\n• ε̂(εnp, d | l, w, n) is an estimate of the error of a pruned model for a scaling law that has been fit to a specific network with the specified depth, width scaling factor, and dataset size (Section 3).\n• ε̂(εnp, d, l, w, n) is an estimate of the error of a pruned model with the specified depth, width scaling factor, and dataset size for a scaling law that has been fit to a network family (Section 4).\nDimensions. In developing scaling laws, we vary four different dimensions: the dataset subsample size (n) and the network degrees of freedom: density (d), depth (l), and width scaling factor (w). We consider the following ranges of these values in our experiments in the main body of the paper:\nNetwork Family | Ntrain | Ntest | Densities (d) | Depths (l) | Width Scalings (w) | Subsample Sizes (n)\nCIFAR-10 ResNet | 50K | 10K | 0.8^i, i ∈ {0, . . . , 40} | 8, 14, 20, 26, 50, 98 | 2^i, i ∈ {−4, . . . , 2} | N/i, i ∈ {1, 2, 4, 8, 16, 32, 64}\nImageNet ResNet | 1.28M | 50K | 0.8^i, i ∈ {0, . . . , 30} | 50 | 2^i, i ∈ {−4, . . . , 0} | N/i, i ∈ {1, 2, 4}\nWe use sanity checks to filter infeasible or unusual configurations from this table. In many cases, networks become disconnected before we reach the lowest density (e.g., ∼30% of the CIFAR-10 ResNet configurations). We also eliminate configurations where increasing the width or depth of the unpruned network lowers test accuracy (e.g., 144 of the 294 CIFAR-10 ResNet configurations of l, w, and n); these are typically unusual, imbalanced configurations (e.g., l = 98, w = 1/16). Of the 12,054 possible CIFAR-10 ResNet configurations, about 8,000 are eliminated based on these sanity checks." }, { "heading": "3 MODELING THE ERROR OF A PRUNED NETWORK", "text": "Our goal in this section is to develop a functional form that accurately models the error of a member of a network family as it is pruned (using IMP here and SynFlow in Appendix E) based on its unpruned error εnp(w, l, n). In other words, we wish to find a function ε̂(εnp, d | l, w, n) that predicts the error at each density d for a network with a particular depth l, width scaling factor w, and dataset size n.\nIntuition.
Since IMP prunes a network 20% at a time, it produces pruned networks at intermediate levels of density dk = 0.8^k in the process of creating a final pruned network at density dK = 0.8^K. In Figure 1 (left), we plot the error of these pruned networks for members of the CIFAR-10 ResNet family with different widths w. All of these curves follow a similar pattern:3\n2See Appendix B for full details on architectures and hyperparameters. 3The same patterns occur when varying l and n for CIFAR-10 and w and n for ImageNet (Appendix C). We focus on varying width for CIFAR-10 here for illustrative purposes.\n(Figure 1 annotations: power-law fits of the form log ε = −γ log d + c′.)\nObservation 1: Low-error plateau. The densest pruned networks (right part of the curves) have approximately the same error as the unpruned network: εnp(w). We call this the low-error plateau.\nObservation 2: Power-law region. When pruned further, error increases in a linear fashion on the logarithmic axes of the figure. Linear behavior on a logarithmic scale is the functional form of a power law, in which error relates to density through an exponent γ and a coefficient c: ε(d, w) ≈ c d^−γ. In particular, γ controls the slope of the line on the logarithmic axes.\nObservation 3: High-error plateau. When pruned further, error again flattens; we call this the high-error plateau and call the error of the plateau ε↑.\nFigure 1 (center) labels these regions for CIFAR-10 ResNet-20 (width scaling factor 1, dataset size N) and shows an approximation of these regions that is piece-wise linear on logarithmic axes. These observations are our starting point for developing a functional form that estimates error when pruning.\nFunctional form. Our next task is to find a functional form that accurately captures these observations about the relationship between density and error. In prior work, Rosenfeld et al.
(2020) observe that the relationship between width and error shares the same general shape: it has a region of lower error, a power-law region, and a region of higher error. However, this relationship is different enough from the one we observe (see Appendix G) to merit an entirely new functional form.
To develop this functional form, we note that the three regions of the curves in Figure 1 (the low-error plateau, the power-law region, and the high-error plateau) can be described by three power laws: two plateaus with exponent zero and one intermediate region with exponent γ. A functional family that arises frequently in the context of systems that exhibit different power-law regions is the rational family. The particular family member we consider is as follows:4

ε̂(ε_np, d | l, w, n) = ε_np · ‖(d − jp·(ε↑/ε_np)^{1/γ}) / (d − jp)‖^γ, where j = √−1   (1)

This function's shape is controlled by ε_np, ε↑, γ, and p (visualized in Figure 1, right). ε_np and ε↑ are the values of the low- and high-error plateaus. γ is the slope of the power-law region on logarithmic axes. p controls the density at which the high-error plateau transitions to the power-law region.
Fitting. To fit ε̂(ε_np, d | l, w, n) to the actual data ε(d, l, w, n), we estimate values for the free parameters ε↑, γ, and p by minimizing the relative error δ ≜ (ε̂(ε_np, d | l, w, n) − ε(d, l, w, n)) / ε(d, l, w, n) using least squares regression. The fit is performed separately for each configuration (l, w, n) over all 30–40 densities, resulting in per-configuration estimates of ε̂↑, γ̂, and p̂.
Evaluating fit. For a qualitative view,5 we plot the actual error6 ε(d, l, w, n) and the estimated error ε̂(ε_np, d | l, w, n) as a function of density for CIFAR-10 ResNets of varying widths (Figure 2, left). Our
4The expression ‖(d − ja)/(d − jb)‖^γ = ((d² + a²)/(d² + b²))^{γ/2}, meaning Eq. 1 can be rewritten as ε_np·((d² + p²(ε↑/ε_np)^{2/γ})/(d² + p²))^{γ/2}.
5Since the error is a 4-dimensional function, projections of it yield qualitative analysis—see Appendix D.
6We compute the error as the mean across three replicates with different random seeds and dataset subsamples.
Figure 3: Projections of ε(d, l, w, n) onto two-dimensional planes for the CIFAR-10 ResNets, showing contours of constant error. For low enough densities, the contours have linear slopes on the logarithmic axes—depicted by a reference black-dotted line. The density/depth plane (left). The density/width plane (right).
estimated error appears to closely follow the actual error. The most noticeable deviations occur at large densities, where the error dips slightly when pruning whereas we treat it as flat (see Section 6).
Quantitatively, we measure the extent to which estimated error departs from the actual error using the mean µ and standard deviation σ of the relative deviation δ. Figure 2 (center) compares the estimated and actual errors for the networks in Figure 2 (left); Figure 2 (right) shows the same comparison for all configurations of l, w, and n on CIFAR-10 and the more than 4000 pruned ResNets that result. The relative deviation on all configurations has mean µ < 2% and standard deviation σ < 4%; this means that, if the actual error is 10%, the estimated error is 9.8 ± 0.4% (ε̂ = (1 − δ)ε)." }, { "heading": "4 JOINTLY MODELING ERROR ACROSS ALL DIMENSIONS", "text": "In Section 3, we found a functional form ε̂(ε_np, d | l, w, n) (Equation 1) that accurately predicts the error when pruning a specific member of a network family (with depth l and width w) trained with a dataset of size n. The parameters governing Equation 1 (ε↑, p, and γ) were allowed to vary between different configurations and so depend on l, w, and n.
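To make the per-configuration fit of Section 3 concrete, here is a minimal sketch (not the authors' code). It generates a synthetic pruning curve from the real-valued form of Eq. 1 (footnote 4) with hypothetical parameter values, then minimizes the squared relative deviation δ over the three free parameters; a simple grid search stands in for the paper's least-squares optimizer.

```python
import numpy as np

def eps_hat(d, eps_np, eps_up, gamma, p):
    """Real-valued form of Eq. 1 (footnote 4):
    eps_np * ((d^2 + p^2 (eps_up/eps_np)^(2/gamma)) / (d^2 + p^2))^(gamma/2)."""
    a2 = p**2 * (eps_up / eps_np) ** (2.0 / gamma)
    return eps_np * ((d**2 + a2) / (d**2 + p**2)) ** (gamma / 2.0)

# Synthetic pruning curve for one (l, w, n) configuration; all values hypothetical.
eps_np = 0.08
d = 0.8 ** np.arange(40)                       # IMP densities 0.8^k
rng = np.random.default_rng(0)
eps = eps_hat(d, eps_np, 0.9, 1.2, 1e-3) * (1 + 0.01 * rng.standard_normal(d.size))

# Minimize the squared relative deviation delta = (eps_hat - eps) / eps
# over (eps_up, gamma, p); a grid search stands in for least squares here.
best = (np.inf, None)
for eps_up in np.linspace(0.5, 1.0, 26):
    for gamma in np.linspace(0.6, 2.0, 29):
        for p in np.logspace(-4, -1.5, 26):
            delta = (eps_hat(d, eps_np, eps_up, gamma, p) - eps) / eps
            loss = np.mean(delta**2)
            if loss < best[0]:
                best = (loss, (eps_up, gamma, p))
eps_up_hat, gamma_hat, p_hat = best[1]
mean_abs_dev = np.mean(np.abs((eps_hat(d, eps_np, *best[1]) - eps) / eps))
print(eps_up_hat, gamma_hat, p_hat, mean_abs_dev)
```

With 1% multiplicative noise on 40 densities, the recovered plateau and slope land close to the generating values, illustrating that the three parameters are identifiable from a single pruning curve.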
However, we are interested in a single joint scaling law ε̂(ε_np, d, l, w, n) that, given the unpruned network error ε_np(l, w, n), accurately predicts error across all dimensions we consider: all members of a network family that vary in depth and width, all densities, and all dataset sizes. Importantly, the parameters of such a joint scaling law must be constants as a function of all dimensions. In this section, we develop this joint scaling law.
Intuition: the error-preserving invariant. Our desired scaling law ε̂(ε_np, d, l, w, n) will be a four-dimensional function of d, w, l, and n. To develop it, we study the interdependence between density and depth or width by examining two-dimensional projections of the actual error ε(d, l, w, n) (Figure 3). These plots display contours of constant error as density and depth or width vary.
Consider the projection onto the plane of density and depth (Figure 3, left). The constant-error contours are linear except in the densest networks, meaning each contour traces a power-law relationship between d and l. In other words, we can describe all combinations of densities and depths that produce error v using l^φ d = v, where v is a constant associated with that error and φ is the slope of the contour on the logarithmic axes. The contours of density and width also have this pattern (Figure 3, right), meaning we can describe a similar relationship w^ψ d = v′. Finally, we can combine these observations about depth and width into the expression l^φ w^ψ d = v″.
We refer to the expression l^φ w^ψ d as the error-preserving invariant, and we denote it m*. This invariant captures the observation that there exist many interchangeable combinations of depth, width, and density that achieve the same error and tells us which combinations do so. For example, networks of vastly different densities reach the same error if we vary l and w according to the invariant.
Functional form.
The invariant allows us to convert the functional form ε̂(ε_np, d | l, w, n) for a specific l, w, and n from Section 3 into a joint functional form ε̂(ε_np, d, l, w, n) for all l, w, and n. Rewriting the definition of the invariant, d = m*/(l^φ w^ψ). We can substitute this for d in the functional form from Section 3. Finally, by rewriting p as p′/(l^φ w^ψ) and canceling, we arrive at the expression:

ε̂(ε_np, d | l, w, n) = ε_np · ‖(m* − jp′(ε↑/ε_np)^{1/γ}) / (m* − jp′)‖^γ = ε_np · ‖(l^φ w^ψ d − jp′(ε↑/ε_np)^{1/γ}) / (l^φ w^ψ d − jp′)‖^γ = ε̂(ε_np, d, l, w, n)   (2)

which is the joint functional form ε̂(ε_np, d, l, w, n) of all four dimensions d, l, w, and n. Critically, for this to be a useful joint form, the free parameters ε↑, p′, and γ must be constants shared across all possible values of d, l, w, and n. We will assume this is the case and directly quantify how well this assumption holds in the fit discussion to follow. To glean some qualitative intuition as to why this may be a reasonable assumption, we can examine the relationship between m* and the generalization error of pruned networks as we vary depth, width, and dataset size (Figure 4). Across all projections, the annotated ε↑ (error of the high-error plateau), γ (slope of the power-law region), and p′ (value of m* where the high-error plateau transitions to the power-law region) are the same. Note that in Eq. 2 the dependence on n is implicit, through ε_np. We retain the explicit form ε̂(…, n) to stress that the lack of explicit dependency on n is non-trivial and was not known prior to our work.
Fitting. To fit ε̂(ε_np, d, l, w, n) to the actual data ε(d, l, w, n), we estimate values for the free parameters ε↑, γ, p′, φ, and ψ by minimizing the relative error δ ≜ (ε̂(ε_np, d, l, w, n) − ε(d, l, w, n)) / ε(d, l, w, n) using least squares regression. The fit is performed jointly over all configurations of d, l, w, and n, resulting in joint estimates of ε̂↑, γ̂, p̂′, φ̂, and ψ̂.
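The invariant's interchangeability claim can be checked numerically. The sketch below (hypothetical parameter values; real-valued modulus form of Eq. 2, per footnote 4) verifies that two very different (l, w, d) configurations with the same m* = l^φ w^ψ d are assigned identical error:

```python
import numpy as np

def eps_joint(d, l, w, eps_np, eps_up, gamma, p_prime, phi, psi):
    """Eq. 2 in real (modulus) form, with the invariant m* = l^phi * w^psi * d."""
    m = l**phi * w**psi * d
    a2 = p_prime**2 * (eps_up / eps_np) ** (2.0 / gamma)
    return eps_np * ((m**2 + a2) / (m**2 + p_prime**2)) ** (gamma / 2.0)

# Hypothetical scaling-law parameters (not fitted values from the paper).
params = dict(eps_np=0.08, eps_up=0.9, gamma=1.2, p_prime=0.05, phi=0.6, psi=1.1)

e1 = eps_joint(d=0.01, l=20, w=1.0, **params)
m_star = 20**0.6 * 1.0**1.1 * 0.01            # invariant of the first configuration

# A deeper, wider network pruned further, with d chosen so l^phi w^psi d matches:
l2, w2 = 50, 2.0
d2 = m_star / (l2**0.6 * w2**1.1)
e2 = eps_joint(d=d2, l=l2, w=w2, **params)
print(e1, e2, d2)                              # same predicted error, much lower density
```

Because error depends on (l, w, d) only through m*, the deeper and wider network can be pruned to roughly a quarter of the density while keeping the same predicted error.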
One can also perform a partial fit for a subset of dimensions (e.g., just d, l, and n) by omitting φ and/or ψ (see Appendix D).
Evaluating fit. In Figure 5, we plot the actual error ε(d, l, w, n) and the estimated error ε̂(ε_np, d, l, w, n) for the CIFAR-10 ResNets and ImageNet ResNets (single depth). As in Section 3, our estimated error appears to closely follow the actual error. Deviations arise mainly at high densities, where error dips below ε_np, and at low densities approaching high-error saturation.
We again quantify the fit of the estimated error using the mean µ and standard deviation σ of the relative deviation δ. The relative deviation of the joint scaling laws for the CIFAR-10 and ImageNet networks has mean µ < 2% and standard deviation σ < 6%.
To contextualize these results, Figure 5 (right) quantifies the variation in error we see over multiple trials of the CIFAR-10 experiments due to using different random seeds. It plots the minimum, maximum, and mean errors across the three trials we ran.7 The variation across trials has a standard deviation of σ = 3.4%, sizeable relative to the estimation error of σ = 5.8% for the joint scaling law. This indicates that a significant portion of our error may stem from measurement noise.
The functional form has just five parameters and obtains an accurate fit on over 4000 points, suggesting it is a good approximation. In Appendix E, we show that it achieves a similarly good fit for VGG-style networks and for the SynFlow pruning algorithm. In Section 5, we show that it is possible to get a good fit with far fewer points and that the fit has low sensitivity to the choice of points." }, { "heading": "5 ANALYZING THE SENSITIVITY OF THE FIT TO NUMBER OF POINTS", "text": "In Section 4, we showed that our scaling law was accurate when we fit it on all of the available data.
Now that we possess the functional form and know that it can accurately model the behavior of IMP, we study the amount of data necessary to obtain a stable,8 accurate fit. This question is especially relevant when the functional form is applied to new settings—new networks, datasets, or pruning algorithms—and we must collect new data to do so. The functional form has only five parameters, suggesting that few experiments will be necessary.
Experiments. To evaluate the effect of the number of points on the stability and accuracy of the fit, we randomly sample varying numbers of points, fit the scaling law to those points, and evaluate the quality of the fit over all points. We sample these points in two ways.
Experiment 1. Randomly sample T network configurations (w, l, n, d). This experiment captures the use case of algorithms such as SynFlow (Appendix E), where obtaining data at any density d relies only on possessing the unpruned network, not on other densities d′ > d.
Experiment 2. Randomly sample T network configurations (w, l, n) and include all densities d for each configuration. This experiment captures the use case of algorithms such as IMP, where obtaining data at density d requires obtaining all densities d′ > d. As such, we anticipate that data will be obtained by iteratively pruning a small number of configurations (w, l, n) to low density.
Results. We perform each experiment for many different values of T on the CIFAR-10 ResNets pruned with IMP. We repeat the experiment at each value of T 30 times (with a different sample of points each time) and report the mean and standard deviation of µ and σ for the fit. Experiments 1 and 2 appear in Figure 6. The shaded areas represent one standard deviation from the mean in each direction. In Experiment 1, when just 40 configurations of (w, l, d, n) are available, the standard deviation of both µ and σ is just one percentage point.
In Experiment 2, when just 15 random configurations of (w, l, n) are available at all densities, we similarly achieve standard deviations below 1%. In both cases, as the number of networks increases, the standard deviation decreases further.
7We only ran a single trial of the ImageNet experiments due to the significant cost of collecting data. 8Stability is defined as a small change in output relative to a change in input. The requirement here is that a change in the choice of points leads to a small expected change in estimation accuracy.
These results show that, now that our scaling law is known, it is possible to obtain an accurate (and stable) estimation using far less data than we used to evaluate the quality of the fit in the main body of the paper. Importantly, the experiments we perform in this section are particularly naive. We make no effort to ensure that the configurations we select represent a diverse range of widths, depths, dataset sizes, and densities. By selecting these configurations in a strategic way, we believe it would be possible to further reduce the number of configurations necessary to obtain a similarly accurate fit." }, { "heading": "6 PRINCIPLES FOR SELECTING A FUNCTIONAL FAMILY", "text": "In this section, we discuss some of the key criteria that led us to select this particular functional form, along with opportunities for further refinement.
Criterion 1: Transitions. In Section 3, we observe that, when pruning a neural network, error has a low-error plateau, a power-law region, and a high-error plateau. Between these regions are transitions where error varies smoothly from one region to the next. Matching the shape of these transitions was a key consideration in selecting our functional family. To illustrate the importance of properly fitting the transitions, Figure 7 (left) shows two possible functional families for fitting the relationship between density and error for CIFAR-10 ResNets.
Actual error is in gray, and the fit of the functional form from Section 3 is in blue. In red is the fit of a functional form adapted from the one that Rosenfeld et al. (2020) use to model the relationship between width and error. The difference between these functional families is the way they model transitions, and the one we choose in this paper better models the transitions in our setting. For further discussion of this comparison, see Appendix G.
Criterion 2: A small number of interpretable parameters. Selecting a functional form is not merely a curve-fitting exercise. We seek the underlying structure that governs the relationships between d, l, w, n, and error in a manner akin to a law of physics. As such, our functional form should have a small number of parameters that are interpretable. In our functional form (Equation 2), each parameter has a clear meaning. The parameters ε↑, p′, and γ control the high-error plateau, the transition to the power-law region, and the slope of the power-law region, respectively. φ and ψ control the interchangeability of width and depth with density. We approximate error over multiple orders of magnitude and over 4,000 configurations of ResNet-20 on CIFAR-10 with just five parameters, indicating that we have distilled key information about the behavior of pruning into our functional form.
Sources of systemic error and limitations of our approximation form. By seeking to minimize the number of parameters in our functional form, we leave some phenomena unmodeled. In particular, there are two phenomena we have chosen not to model that introduce systemic error. First, the low-error plateau is not an exact plateau: error often improves slightly at high densities before returning to ε_np during the transition to the power-law region. Our model treats the region as flat and treats error as monotonically increasing as density decreases. This source of error accounts for a bias of ∼1% relative error in our estimation (Appendix H).
Second, we model both transitions (between the power-law region and each plateau) with a single shape and the same transition rate. If we treated each transition separately and used higher-order terms in the rational form, we could potentially reduce some of the residual error in our estimation at the cost of additional complexity." }, { "heading": "7 IMPLICATIONS AND CONCLUSIONS", "text": "Our main contribution is a functional form ε̂(ε_np, d, l, w, n) that accurately predicts the error when pruning members of a network family using both IMP and SynFlow. There are several broader implications of our ability to characterize pruning in this way. The mere existence of this functional form means there is indeed structure to the way pruning affects error. Although prior work (Cai et al., 2019) has implicitly relied on such structure, we are the first to explicitly describe it. This functional form enables a framework in which we can reason conceptually and analytically about pruning. In doing so, we can make new observations about pruning that are non-obvious or costly to demonstrate exhaustively through experiments. For example, recall our motivating question:
Given a family of neural networks, which should we prune (and by how much) to obtain the network with the smallest parameter count such that its error does not exceed some threshold k?
This is an optimization problem—find the configuration of d, l, and w that minimizes the parameter count m subject to an error constraint: argmin_{w,l,d} m s.t. ε̂ = k. For ResNets, m ∝ d·l·w², yielding:

l, w, d = argmin_{l,w,d} l·w²·d   s.t.   ε_np · ‖(l^φ w^ψ d − jp′(ε↑/ε_np)^{1/γ}) · (l^φ w^ψ d − jp′)^{−1}‖^γ = k

which is solvable directly without running any further experiments.
Using this approach, we can derive a useful insight. In the pruning literature, it is typical to report the minimum density at which the pruned network can match the error ε_np(l, w) of the unpruned network (Han et al., 2015).
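The constrained minimization can indeed be carried out directly, since Eq. 2 (in its real-valued form, per footnote 4) is monotone in d and hence invertible. The sketch below uses hypothetical values for all scaling-law parameters, and a made-up ε_np(l, w) standing in for measured unpruned errors:

```python
import numpy as np

# Hypothetical scaling-law parameters (roles of eps_up, gamma, p', phi, psi in Eq. 2).
eps_up, gamma, p_prime, phi, psi = 0.9, 1.2, 0.05, 0.6, 1.1

def eps_np(l, w):
    """Made-up unpruned error: larger networks have lower eps_np."""
    return 0.04 + 0.5 * (l * w**2) ** -0.4

def eps_hat(d, l, w):
    """Eq. 2 in real (modulus) form, with invariant m* = l^phi w^psi d."""
    e0 = eps_np(l, w)
    m2 = (l**phi * w**psi * d) ** 2
    a2 = p_prime**2 * (eps_up / e0) ** (2.0 / gamma)
    return e0 * ((m2 + a2) / (m2 + p_prime**2)) ** (gamma / 2.0)

def density_for_error(k, l, w):
    """Invert eps_hat in d. Pruning only raises error, so k must lie in (eps_np, eps_up)."""
    e0 = eps_np(l, w)
    if not (e0 < k < eps_up):
        return None
    r = (k / e0) ** (2.0 / gamma)              # = (m*^2 + a2) / (m*^2 + p'^2)
    a2 = p_prime**2 * (eps_up / e0) ** (2.0 / gamma)
    m2 = (r * p_prime**2 - a2) / (1.0 - r)     # solve for the invariant squared
    return np.sqrt(m2) / (l**phi * w**psi)

# argmin_{l,w,d} l w^2 d  s.t.  eps_hat(d, l, w) = k  (m is proportional to l w^2 d).
k, best = 0.12, None
for l in [14, 20, 26, 50]:
    for w in [0.5, 1.0, 2.0, 4.0]:
        d = density_for_error(k, l, w)
        if d is not None and 0 < d <= 1:
            if best is None or l * w**2 * d < best[0]:
                best = (l * w**2 * d, l, w, d)
print(best)   # (relative parameter count, l, w, d) of the minimal network
```

Under these (made-up) parameters, the winning configuration is a large network pruned well past its low-error plateau rather than a small network pruned lightly, matching the qualitative insight discussed next.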
However, our scaling law suggests that this is not the smallest model that achieves error ε_np(l, w). Instead, it is better to train a larger network with depth l′ and width w′ and prune until error reaches ε_np(l, w), even though that results in error well above ε_np(l′, w′). This analytic result parallels and extends the findings of Li et al. (2020) on NLP tasks.
Figure 7 (center) illustrates this behavior: it shows the error predicted by our scaling law for CIFAR-10 ResNets with varying widths. The dotted black line shows the minimal parameter count at which we predict it is possible to achieve each error. Importantly, none of the low-error plateaus intersect this black dotted line, meaning a model cannot be minimal until it has been pruned to the point where it increases in error. This occurs because the transitions of our functional form are gradual. On the other hand, if we start with a model that is too large, it will no longer be on the black line by the time it has been pruned to the point where its error reaches ε_np(l, w).9 In Figure 7 (right), we plot the same information from the actual CIFAR-10 data and see the same phenomena occur in practice. The difference between the estimated and actual optimal parameter counts is no more than 25%.
Looking ahead, there are several opportunities for future work. Better understanding the sources of systematic error (error dips and transition shape) is a promising avenue for making it possible to extrapolate from small-scale settings to large-scale settings (see Appendix F for a forward-looking discussion of extrapolation). Furthermore, although we focus on pruning for image classification, and networks and pruning methods differ in other contexts (e.g., NLP), the generality of our functional form across different pruning strategies and network families suggests it may have broader applicability.
9This behavior occurs since m ̸∝ m* for IMP. Interestingly, for SynFlow m ∝ m*, such that sufficiently large networks are equivalent.
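For reference, the magnitude-selection step that underlies IMP (stated formally in Appendix A) can be sketched as follows. Training between iterations and weight rewinding are omitted, and the weights are held fixed purely for illustration, so this only shows how the 0.8^k density schedule arises:

```python
import numpy as np

def imp_masks(w0, iterations=5, frac=0.2):
    """Sketch of IMP's masking step: each iteration prunes the 20% of surviving
    weights with the lowest magnitudes, yielding densities near 0.8^k.
    (In actual IMP the network is retrained before each pruning step and the
    survivors are rewound to their epoch-10 values; both are omitted here.)"""
    mask = np.ones_like(w0, dtype=bool)
    masks = []
    for _ in range(iterations):
        thresh = np.quantile(np.abs(w0[mask]), frac)   # 20th percentile of survivors
        mask = mask & (np.abs(w0) > thresh)
        masks.append(mask.copy())
    return masks

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)
masks = imp_masks(w)
densities = [m.mean() for m in masks]
print(densities)
```

The resulting density sequence decays geometrically toward 0.8, 0.64, 0.512, and so on, which is exactly the grid of intermediate densities d_k = 0.8^k used throughout the paper.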
}, { "heading": "A PRUNING ALGORITHMS", "text": "A.1 FORMAL STATEMENT OF ITERATIVE MAGNITUDE PRUNING (IMP)
Algorithm 1 Iterative Magnitude Pruning (IMP) with weight rewinding to epoch 10 and N iterations.
1: Create a neural network with randomly initialized weights W0 ∈ R^d and initial pruning mask m = 1^d
2: Train W0 to epoch 10, resulting in weights W10
3: for n ∈ {1, . . . , N} do
4: Train m ⊙ W10 (the element-wise product of m and W10) to final epoch T, yielding weights m ⊙ WT,n
5: Prune the 20% of weights in m ⊙ WT,n with the lowest magnitudes; set m[i] = 0 if WT,n[i] is pruned
6: Return m and WT,n
A.2 SYNFLOW
Unlike IMP, SynFlow is a pruning algorithm that prunes neural networks before any training has taken place (Tanaka et al., 2020). To do so, SynFlow computes the “synaptic strengths” of each connection and prunes those weights with the lowest synaptic strengths (see Algorithm 2 below for the details on computing the synaptic strengths).
Importantly, SynFlow prunes iteratively. It prunes a small number of weights, recalculates the synaptic strengths once those weights have been fixed to zero, and then prunes again. To prune to sparsity s, SynFlow iteratively prunes from sparsity s·(n−1)/100 to sparsity s·n/100 for n ∈ {1, . . . , 100}.
After pruning, SynFlow trains the network normally using the standard hyperparameters. SynFlow computes the synaptic strengths as follows:10
Algorithm 2 Computing the synaptic strengths of a network with weights W.
1: Replace all weights w ∈ W with their magnitudes |w|.
2: Forward-propagate an input of all 1's.
3: Take the sum of the logits, R.
4: The synaptic strength for each weight w is the gradient dR/dw.
Note that this algorithm leads to exploding activations on deeper networks, so we do not vary network depths in any of our experiments involving SynFlow.
10https://github.com/ganguli-lab/Synaptic-Flow" }, { "heading": "B EXPERIMENTAL DETAILS", "text": "B.1 RESNETS
We study the residual networks (ResNets) designed by He et al.
(2016) for CIFAR-10 and ImageNet. ResNets for CIFAR-10 are composed of an initial convolutional layer, three sets of B residual blocks (each with two convolutional layers and a skip connection), and a linear output layer. The sets of blocks have 16, 32, and 64 convolutional channels, respectively.
ResNets for ImageNet are composed of an initial convolutional layer, a max-pooling layer, four sets of residual blocks (each with three convolutional layers and a skip connection), and a linear output layer. The sets of blocks have 64, 128, 256, and 512 convolutional channels, respectively.
We place batch normalization before the ReLU activations.
To vary the width of the networks, we multiply the number of convolutional channels by the width scaling factor w. To vary the depth of the CIFAR-10 ResNets, we vary the value of B. The depth l of the network is the total number of layers in the network, not counting skip connections.
B.2 VGG NETWORKS
We study the VGG-16 variant of the VGG networks for CIFAR-10 as provided by the OpenLTH repository.11 The network is divided into five sections, each of which is followed by max pooling with kernel size 2 and stride 2. The sections contain 3x3 convolutional layers arranged as follows:

Section | Width | Layers
1 | 64 | 2
2 | 128 | 2
3 | 256 | 3
4 | 512 | 3
5 | 512 | 3

The network has ReLU activations and batch normalization before each activation. To vary the width of VGG-16, we multiply each of the per-segment widths by the width scaling factor w.
When pruning VGG, we consider the following configurations:

Network Family | Ntrain | Ntest | Densities (d) | Depths (l) | Width Scalings (w) | Subsample Sizes (n)
CIFAR-10 VGG-16 | 50K | 10K | 0.8^i, i ∈ {0, …, 37} | 16 | 2^i, i ∈ {−4, …, 0} | N/i, i ∈ {1}

B.3 TRAINING HYPERPARAMETERS
We train CIFAR-10 ResNets and VGG-16 for 160 epochs with a batch size of 128. The initial learning rate is 0.1, and it drops by an order of magnitude at epochs 80 and 120. We optimize using SGD with momentum (0.9).
We initialize with He uniform initialization. Data is augmented by normalizing, randomly flipping left and right, and randomly shifting by up to four pixels in any direction (cropping afterwards). All CIFAR-10 networks are trained on GPUs.
We train ImageNet ResNets for 90 epochs with a batch size of 1024. The initial learning rate is 0.4, and it drops by an order of magnitude at epochs 30, 60, and 80. We perform linear learning rate warmup from 0 to 0.4 over the first 5 epochs. We optimize using SGD with momentum (0.9). We initialize with He uniform initialization. Data is augmented by normalizing, randomly flipping left and right, selecting a random aspect ratio between 0.8 and 1.25, selecting a random scaling factor between 0.1 and 1.0, and cropping accordingly. All ImageNet networks are trained on GPUs.
11github.com/facebookresearch/open_lth" }, { "heading": "C FULL DATA FOR KEY OBSERVATIONS IN SECTION 3", "text": "In this appendix, we show that our observations from Section 3 hold when varying all dimensions (depth, width, and dataset size) on both the CIFAR-10 and ImageNet ResNets for IMP. Figure 8 shows error versus density when varying width (left), depth (center), and dataset size (right). In Figure 9, we similarly show the dependence of error on density for ImageNet when varying width (left) and dataset size (right).
In Figure 8, we observe that all curves have a similar slope in the power-law region. In Equation 1, this implies that while γ is allowed to vary with l, w, and n, it is in practice approximately constant. Similarly, the high-error plateau ε↑ is also shared across curves, such that it too is approximately constant. In contrast, the transition from the high-error plateau to the power-law region is not constant as a function of density. Section 4 finds exactly this dependency of the transition parameter p.
}, { "heading": "D PARTIAL (PROJECTIONS) FIT RESULTS FOR SECTION 4", "text": "In Section 4, we fit the error jointly as a function of all dimensions, showing that Equation 2 provides a good approximation to the error in practice. In this appendix, we consider important sub-cases, such as the case where one wishes to scale only one degree of freedom while pruning. This serves a practical scenario, but it also allows for a qualitative visualization of the fit (and of typical sources of error), which is otherwise difficult to perform over all dimensions jointly. From a practical standpoint, in this case one need not estimate the parameters associated with the fixed degrees of freedom.
Recall that, given the non-pruned network error ε_np, all dependencies on the individual structural degrees of freedom l, w are captured by the invariant m* ≜ l^φ w^ψ d. This means that, if one wishes to estimate the error while pruning when holding width fixed, we need not estimate ψ. Similarly, if depth is held constant, we need not estimate φ.
Figure 10 shows these partial fits. Shown from left to right are the fits performed while pruning and varying width, depth, and data, respectively. Correspondingly, these fits omit ψ or φ separately, or omit both when neither depth nor width is scaled. The fits were performed with all available density points for each dimension. For CIFAR-10: 7 widths, 224 points for the width partial fit; 7 dataset fractions, 240 points for the data partial fit; 4 depths, 164 points for the depth partial fit. For ImageNet: 5 widths, 83 points for the width partial fit; 3 dataset fractions, 86 points for the data partial fit.
This exercise, apart from its practical implications, highlights the fact that there are in effect two groups of parameters comprising the estimation. The first comprises the parameters ε↑, γ, and p′, which control the dependency as a function of density (or, more generally, as a function of the invariant).
The second comprises φ and ψ, which are properties of the architectural degrees of freedom captured by the invariant. Moreover, within the first group, the parameters ε↑ and γ can be isolated and found from a single pruning curve, as they are not functions of l, w, n." }, { "heading": "E ADDITIONAL PRUNING ALGORITHMS AND ARCHITECTURES", "text": "In this appendix, we show that our functional form applies to both an additional network architecture (VGG-16 on CIFAR-10) and an additional pruning algorithm (SynFlow). We add these additional comparisons in the following figures:
• Figure 11: VGG-16 on CIFAR-10 with IMP as width varies. µ < 3%, σ < 7%. Notably, the measurement error in this case is large (σ ∼ 12%), dominating the total fit error (over the approximation error). The fit averages out some of this error, resulting in a fit error that is lower than the measurement error.
• Figure 12: ResNet-20 on CIFAR-10 with SynFlow as width varies. µ < 1%, σ < 4%.
• Figure 13: VGG-16 on CIFAR-10 with SynFlow as width varies. µ < 1%, σ < 4%.
Note that SynFlow suffers from exploding activations on deeper networks, so we do not vary ResNet depth in any of our SynFlow experiments.
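Algorithm 2's synaptic strengths can be computed without an autodiff framework for a purely linear toy network, which is enough to illustrate the procedure (this is an illustrative sketch, not the SynFlow implementation):

```python
import numpy as np

def synflow_scores(weights, x):
    """Synaptic strengths (Algorithm 2) for a purely linear toy network:
    replace each weight with its magnitude, forward an all-ones input, and
    take dR/d|w| for R = sum of the logits. For linear layers the gradient
    has the closed form g_l h_{l-1}^T, so no autodiff is needed here."""
    abs_w = [np.abs(w) for w in weights]
    hs = [x]                                  # forward pass: h_l = |W_l| h_{l-1}
    for w in abs_w:
        hs.append(w @ hs[-1])
    g = np.ones_like(hs[-1])                  # backward pass: dR/d(logits) = 1
    scores = [None] * len(weights)
    for l in range(len(weights) - 1, -1, -1):
        scores[l] = np.outer(g, hs[l])        # dR/d|W_l| = g_l h_{l-1}^T
        g = abs_w[l].T @ g
    return scores

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
s1, s2 = synflow_scores([W1, W2], np.ones(3))

# Sanity check one score against a finite difference of R = sum(|W2| |W1| 1):
def R(w1, w2):
    return np.sum(np.abs(w2) @ np.abs(w1) @ np.ones(3))
h = 1e-6
W1p = W1.copy()
W1p[0, 0] += h * np.sign(W1[0, 0])            # increases |W1[0, 0]| by h
fd = (R(W1p, W2) - R(W1, W2)) / h
print(fd, s1[0, 0])
```

Pruning then removes the weights with the smallest scores; in SynFlow this is done a small fraction at a time, recomputing the scores after each round, as described in Appendix A.2.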
If so, we could make informed decisions about pruning large-scale models through small-scale experiments alone, saving the costs associated with large scale training and pruning.\nOutside the context of pruning, the scaling laws of Rosenfeld et al. (2020) (for both language models and image classification) and Kaplan et al. (2020) (for predicting the expected performance of GPT-3 (Brown et al., 2020) at very large scale) have been shown to extrapolate successfully in this manner.\nResults on CIFAR-10. In Figure 14, we show the result of extrapolating from small-scale networks on CIFAR-10 (w = 18 , 1 4 ; l = 14, 20) to all widths and depths on CIFAR-10. Extrapolation prediction is still accurate: µ < 7%, σ < 6% (vs. µ < 1%, σ < 6% in the main body).\nFuture work. However, extrapolation is particularly sensitive to systemic errors. Specifically, the transitions and the error dips can lead to large deviations when extrapolating. For ImageNet, the error dips (especially on small dataset sizes) are especially pronounced, preventing stable extrapolation. In order to improve extrapolation performance, future work should explore the challenges we discuss in Section 6: approaches to either model or mitigate these dips and to improve the fit of the transitions." }, { "heading": "G COMPARISON OF PRUNING AND NON-PRUNING SCALING LAWS", "text": "In this appendix, we contrast the behavior of the error when pruning with the behavior of the error in the non-pruning setting. Hestness et al. (2017) show the the error follows a saturating power-law form when scaling data (with both low and high-error plateaus) but does not model them. Rosenfeld et al. (2020) unify the dependency on data and model size while approximating the transitions between regions; they propose the following form:\ñ(m,n) = an−α + bm−β + c∞ (3)\n̂(m,n) = 0 ∥∥∥∥ ̃(m,n)̃(m,n)− jη ∥∥∥∥ (4)\nwherem is the total number of parameters and n is the dataset size. 
a, b, α, β, c_∞, and η are constants, and ε_0 plays the role of ε↑ in our notation.
Rosenfeld et al. model the upper transition—from the power-law region to the high-error plateau—with a rational form, in a fashion similar to the approach we take. The key difference is that we consider a power of the polynomials in the numerator and denominator of the rational form, whereas in Eq. 3 the power is hidden in the term ε̃.
The biggest difference arises when considering the lower transition (between the low-error plateau and the power-law region). This transition is captured by Eq. 3. Considering either the width or depth degree of freedom x ∈ {w, l}, Eq. 3 can be rewritten as:

ε̃(x) = b_x·x^{−β_x} + c_x   (5)

where b_x and β_x are constants and c_x is a constant as a function of x (it is only a function of the dataset size n).
Figure 15 (right) shows the error versus depth for different dataset sizes. In grey is the actual error, while in red is the best fit when approximating the error by Eq. 5. Qualitatively, one sees that the fit using Eq. 5 does indeed closely match the error in practice.
Recall that we are interested in comparing the errors as a function of the density. A requirement of any functional form used to model the dependency on density is that it degenerate to the error of the non-pruned model ε_np at d = 1. We adapt Eq. 5 by solving for the relation between b_x and c_x that meets this constraint, arriving at:

ε̃(x) = b_x·x^{−β_x} + ε_np − b_x   (6)

Contrast Eq. 6 with the functional form we propose in Eq. 1, rewritten here for convenience:

ε̂(d, ε_np | l, w, n) = ε_np · ‖(d − jp·(ε↑/ε_np)^{1/γ}) / (d − jp)‖^γ, where j = √−1   (7)

Far enough from the upper transition (d ≫ p), this can be simplified to capture only the lower transition:

ε̂(d, ε_np | l, w, n) = ε_np · ‖(d − jp·(ε↑/ε_np)^{1/γ}) / d‖^γ   (8)

Figure 15 (left) shows error versus density for different widths. In blue is the fit with Eq. 8, which closely follows the actual error (black), while in red is the fit with Eq.
6, which deviates noticeably in comparison.\nWe have seen that, in practice, the form of Eq. 6 does not match the pruning case well, where the mismatch originates from the shape of the lower transition. We have thus reached a phenomenological observation distinguishing the pruning and non-pruning forms; we leave the study of the origins of this phenomenon for future work." }, { "heading": "H THE EFFECT OF ERROR DIPS ON ESTIMATION BIAS", "text": "In this appendix, we consider the effect of the error dips on our estimator as discussed in Section 4. As we mention in that section, when pruning a network, the error often dips below $\epsilon_{np}$ during the low-error plateau.\nRecall that we find the parameters in our estimator (Equation 2) by minimizing the MSE of the relative error δ. Our estimation has bias if $\mathbb{E}(\hat{\epsilon} - \epsilon) \neq 0$, where the expectation is over all model and data configurations. Equivalently, the relative bias is $\mu \triangleq \mathbb{E}\,\delta$, and $\mu = 0$ iff the estimator is unbiased. The estimator captured by the joint form in Equation 2 is a monotonically increasing function of the density. It is also constrained such that at density d = 1 it is equal to the non-pruned error $\epsilon_{np}$. It thus cannot reduce the MSE to zero, as it cannot decrease to match the actual error dips. This results in a bias in the relative error µ, which in practice is ∼1%." } ]
2020
null
SP:30580fb0f3acf76221f8b031518a30228c4d6162
[ "In this paper, the authors present a few-shot learning model for non-contact physiological signal measurement to build a more accurate and convenient personalized health sensing system. The motivation is that the traditional fine-tuning approach for this task is difficult since it requires large sets of high-quality training data for specific individuals, due to differences between each individual, measurement environment, and camera sensor condition. Therefore, the authors applied MAML on top of the existing deep learning network (TS-CAN) and implemented a model that aims to learn fast from a small number of training samples. The main contributions of this paper are: a meta-learning model that supports both supervised and unsupervised few-shot adaptation; improved performance by about 40% compared to a baseline that does not use meta-learning; empirical analysis of performance for subjects with different skin types. ", "The authors propose a system called MetaPhys for personalized remote physiological sensing from videos. Their system combines a pre-trained CNN with an existing meta-learning method (MAML). They investigated both supervised and unsupervised training of their system. Performance evaluation of their methods on benchmark datasets shows their model significantly outperforms SOTA methods using multiple metrics as well as for different skin types. They further show that the unsupervised model achieves comparable results to the supervised model." ]
There are large individual differences in physiological processes, making designing personalized health sensing algorithms challenging. Existing machine learning systems struggle to generalize well to unseen subjects or contexts, especially in video-based physiological measurement. Although fine-tuning for a user might address this issue, it is difficult to collect large sets of training data for specific individuals because supervised algorithms require medical-grade sensors for generating the training target. Therefore, learning personalized or customized models from a small number of unlabeled samples is very attractive, as it would allow fast calibration. In this paper, we present a novel meta-learning approach called MetaPhys for learning personalized cardiac signals from 18 seconds of video data. MetaPhys works in both supervised and unsupervised manners. We evaluate our proposed approach on two benchmark datasets and demonstrate superior performance in cross-dataset evaluation with substantial reductions (42% to 44%) in errors compared with state-of-the-art approaches. Visualization of attention maps and ablation experiments reveal how the model adapts to each subject and why our proposed approach leads to these improvements. We also demonstrate that our proposed method significantly helps reduce bias across skin types.
[]
[ { "authors": [ "Freddy Abnousi", "Guson Kang", "John Giacomini", "Alan Yeung", "Shirin Zarafshar", "Nicholas Vesom", "Euan Ashley", "Robert Harrington", "Celina Yong" ], "title": "A novel noninvasive method for remote heart failure monitoring: the eulerian video magnification applications in heart failure study (amplify)", "venue": "NPJ digital medicine,", "year": 2019 }, { "authors": [ "Guha Balakrishnan", "Fredo Durand", "John Guttag" ], "title": "Detecting pulse from head motions in video", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2013 }, { "authors": [ "Nannapas Banluesombatkul", "Pichayoot Ouppaphan", "Pitshaporn Leelaarporn", "Payongkit Lakhan", "Busarakum Chaitusaney", "Nattapong Jaimchariyatam", "Ekapol Chuangsuwanich", "Wei Chen", "Huy Phan", "Nat Dilokthanakul" ], "title": "Metasleeplearner: A pilot study on fast adaptation of bio-signalsbased sleep stage classifier to new individual subject using meta-learning", "venue": null, "year": 2020 }, { "authors": [ "Serge Bobbia", "Richard Macwan", "Yannick Benezeth", "Alamin Mansouri", "Julien Dubois" ], "title": "Unsupervised skin tissue segmentation for remote photoplethysmography", "venue": "Pattern Recognition Letters,", "year": 2019 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "In Conference on fairness, accountability and transparency,", "year": 2018 }, { "authors": [ "Weixuan Chen", "Daniel McDuff" ], "title": "Deepphys: Video-based physiological measurement using convolutional attention networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Janghoon Choi", "Junseok Kwon", "Kyoung Mu Lee" ], "title": "Deep meta learning for real-time target-aware visual tracking", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 
}, { "authors": [ "Gerard De Haan", "Vincent Jeanne" ], "title": "Robust pulse rate from chrominance-based rppg", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2013 }, { "authors": [ "Justin R Estepp", "Ethan B Blackford", "Christopher M Meier" ], "title": "Recovering pulse rate during motion artifact with a multi-imager array for non-contact imaging photoplethysmography", "venue": "IEEE International Conference on Systems, Man, and Cybernetics (SMC),", "year": 2014 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Thomas B Fitzpatrick" ], "title": "The validity and practicality of sun-reactive skin types i through vi", "venue": "Archives of dermatology,", "year": 1988 }, { "authors": [ "Taesik Gong", "Yeonsu Kim", "Jinwoo Shin", "Sung-Ju Lee" ], "title": "Metasense: few-shot adaptation to untrained conditions in deep mobile sensing", "venue": "In Proceedings of the 17th Conference on Embedded Networked Sensor Systems,", "year": 2019 }, { "authors": [ "Edward Grefenstette", "Brandon Amos", "Denis Yarats", "Phu Mon Htut", "Artem Molchanov", "Franziska Meier", "Douwe Kiela", "Kyunghyun Cho", "Soumith Chintala" ], "title": "Generalized inner loop meta-learning", "venue": null, "year": 1910 }, { "authors": [ "Junfeng He", "Khoi Pham", "Nachiappan Valliappan", "Pingmei Xu", "Chase Roberts", "Dmitry Lagun", "Vidhya Navalpakkam" ], "title": "On-device few-shot personalization for real-time gaze estimation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-learning in neural networks: A survey", "venue": "arXiv preprint arXiv:2004.05439,", "year": 2020 }, { "authors": [ 
"Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Eugene Lee", "Evan Chen", "Chen-Yi Lee" ], "title": "Meta-rppg: Remote heart rate estimation using a transductive meta-learner", "venue": "Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Jessica Lee", "Deva Ramanan", "Rohit Girdhar" ], "title": "Metapix: Few-shot video retargeting", "venue": "arXiv preprint arXiv:1910.04742,", "year": 2019 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for few-shot learning", "venue": "arXiv preprint arXiv:1707.09835,", "year": 2017 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Tsm: Temporal shift module for efficient video understanding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xin Liu", "Josh Fromm", "Shwetak Patel", "Daniel McDuff" ], "title": "Multi-task temporal shift attention networks for on-device contactless vitals measurement", "venue": "arXiv preprint arXiv:2006.03790,", "year": 2020 }, { "authors": [ "Andrea Madotto", "Zhaojiang Lin", "Chien-Sheng Wu", "Pascale Fung" ], "title": "Personalizing dialogue agents via meta-learning", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Daniel McDuff", "Ethan Blackford" ], "title": "iphys: An open non-contact imaging-based physiological measurement toolbox", "venue": "41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),", "year": 2019 }, { "authors": [ "Ewa M Nowara", "Daniel McDuff", "Ashok Veeraraghavan" ], "title": "A meta-analysis of the impact of skin tone and gender on non-contact photoplethysmography measurements", "venue": "In Proceedings of the IEEE/CVF 
Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Eunbyung Park", "Alexander C Berg" ], "title": "Meta-tracker: Fast and robust online adaptation for visual object trackers", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Ming-Zher Poh", "Daniel J McDuff", "Rosalind W Picard" ], "title": "Advancements in noncontact, multiparameter physiological measurements using a webcam", "venue": "IEEE transactions on biomedical engineering,", "year": 2010 }, { "authors": [ "Ming-Zher Poh", "Daniel J McDuff", "Rosalind W Picard" ], "title": "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation", "venue": "Optics express,", "year": 2010 }, { "authors": [ "Valentina O Puntmann", "M Ludovica Carerj", "Imke Wieters", "Masia Fahim", "Christophe Arendt", "Jedrzej Hoffmann", "Anastasia Shchendrygina", "Felicitas Escher", "Mariuca Vasa-Nicotera", "Andreas M Zeiher" ], "title": "Outcomes of cardiovascular magnetic resonance imaging in patients recently recovered from coronavirus disease", "venue": "JAMA cardiology,", "year": 2020 }, { "authors": [ "Anthony C Smith", "Emma Thomas", "Centaine L Snoswell", "Helen Haydon", "Ateev Mehrotra", "Jane Clemensen", "Liam J Caffery" ], "title": "Telehealth for global emergencies: Implications for coronavirus disease 2019 (covid-19)", "venue": "Journal of telemedicine and telecare, pp. 
1357633X20916567,", "year": 2020 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chihiro Takano", "Yuji Ohta" ], "title": "Heart rate measurement based on a time-lapse image", "venue": "Medical engineering & physics,", "year": 2007 }, { "authors": [ "Wim Verkruysse", "Lars O Svaasand", "J Stuart Nelson" ], "title": "Remote plethysmographic imaging using ambient light", "venue": "Optics express,", "year": 2008 }, { "authors": [ "Mauricio Villarroel", "Sitthichok Chaichulee", "João Jorge", "Sara Davis", "Gabrielle Green", "Carlos Arteta", "Andrew Zisserman", "Kenny McCormick", "Peter Watkinson", "Lionel Tarassenko" ], "title": "Non-contact physiological monitoring of preterm infants in the neonatal intensive care unit", "venue": "npj Digital Medicine,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Wenjin Wang", "Albertus C den Brinker", "Sander Stuijk", "Gerard de Haan" ], "title": "Algorithmic principles of remote ppg", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2016 }, { "authors": [ "Hao-Yu Wu", "Michael Rubinstein", "Eugene Shih", "John Guttag", "Frédo Durand", "William Freeman" ], "title": "Eulerian video magnification for revealing subtle changes in the world", "venue": "ACM transactions on graphics (TOG),", "year": 2012 }, { "authors": [ "Zitong Yu", "Xiaobai Li", "Guoying Zhao" ], "title": "Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks", "venue": "In Proc. 
BMVC,", "year": 2019 }, { "authors": [ "Qi Zhan", "Wenjin Wang", "Gerard de Haan" ], "title": "Analysis of cnn-based remote-ppg to understand limitations and sensitivities", "venue": "Biomedical Optics Express,", "year": 2020 }, { "authors": [ "Zheng Zhang", "Jeff M Girard", "Yue Wu", "Xing Zhang", "Peng Liu", "Umur Ciftci", "Shaun Canavan", "Michael Reale", "Andy Horowitz", "Huiyuan Yang" ], "title": "Multimodal spontaneous emotion corpus for human behavior analysis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The importance of scalable health sensing has been acutely highlighted during the SARS-CoV-2 (COVID-19) pandemic. The virus has been linked to increased risk of myocarditis and other serious cardiac (heart) conditions (Puntmann et al., 2020). Contact sensors (electrocardiograms, oximeters) are the current gold standard for measurement of heart function. However, these devices are still not ubiquitously available, especially in low-resource settings. The development of video-based contactless sensing of vital signs presents an opportunity for highly scalable physiological monitoring. Furthermore, in clinical settings non-contact sensing could reduce the risk of infection for vulnerable patients (e.g., infants and the elderly) and the discomfort caused to them (Villarroel et al., 2019).\nWhile there are compelling advantages of camera-based sensing, the approach also presents unsolved challenges. The use of ambient illumination means camera-based measurement is sensitive to environmental differences in the intensity and composition of the incident light. Camera sensor differences mean that hardware can differ in sensitivity across the frequency spectrum. People (the subjects) exhibit large individual differences in appearance (e.g., skin type, facial hair) and physiology (e.g., pulse dynamics). Finally, contextual differences mean that motions in a video at test time might be different from those seen in the training data. One specific example is that there exist biases in performance across skin types (Nowara et al., 2020). This problem is not isolated to physiological measurement, as studies have found systematic biases in facial gender classification, with error rates up to 7x higher on women than men and poorer performance on people with darker skin types (Buolamwini & Gebru, 2018).
Moreover, there are several challenges in collecting large corpora of high-quality physiological data: 1) recruiting and instrumenting participants is often expensive and requires advanced technical expertise, 2) the data can reveal the identity of the subjects and/or sensitive health information, meaning it is difficult for researchers to share such datasets. Therefore, training supervised models that generalize well across environments and subjects is challenging. For these reasons, we observe that performance on cross-dataset evaluation is significantly worse than within-dataset evaluation using current state-of-the-art methods (Chen & McDuff, 2018; Liu et al., 2020).\nCalibration of consumer health sensors is often performed in a clinic, where a clinician will collect readings from a high-end sensor to calibrate a consumer-level device the patient owns. This is partly due to the variability in readings from consumer devices across different individuals. Ideally, we would be able to train a personalized model for each individual; however, standard supervised learning training schemes require large amounts of labeled data. Getting enough physiological training data for each individual is difficult because it requires using medical-grade devices to provide reliable labels. Being able to generate a personalized model from a small number of training samples would enable customization based on a few seconds or minutes of video captured while visiting a clinic where people have access to a gold-standard device. Furthermore, if this process could be achieved without even the need for these devices (i.e., in an unsupervised manner), that would have even greater impact. Finally, combining remote physiological measurement with telehealth could provide patients’ vital signs for clinicians during remote diagnosis.
Given that requests for telehealth appointments have increased more than 10x during COVID-19, and that this is expected to continue into the future (Smith et al., 2020), robust personalized models are of growing importance.\nMeta-learning, or learning to learn, has been extensively studied in the past few years (Hospedales et al., 2020). Instead of learning a specific generalized mapping, the goal of meta-learning is to design a model that can adapt to a new task or context with a small amount of data. Due to its inherent ability for fast adaptation, meta-learning is a good candidate strategy for building personalized models (e.g., personalization in dialogue and video retargeting (Madotto et al., 2019; Lee et al., 2019)). However, we argue that meta-learning is underused in healthcare, where clinicians can quickly adapt their clinical knowledge to different patients. The goal of this work is to develop a meta-learning based personalization framework in remote physiological measurement with a limited amount of data from an unseen individual (task), to mimic how a clinician manually calibrates sensor readings for a specific patient. When meta-learning is applied to remote physiological measurement, there are two kinds of scenarios: 1) supervised adaptation with a few samples of labeled data from a clinical-grade sensor, and 2) unsupervised adaptation with unlabeled data. We hypothesize that supervised adaptation is more likely to yield a robust personalized model with only a few labels, while unsupervised adaptation may personalize the model less effectively but with much lower effort and complexity.\nIn this paper, we propose a novel meta-learning approach, called MetaPhys, to address the aforementioned challenges.
Our contributions are: 1) A meta-learning based deep neural framework, supporting both supervised and unsupervised few-shot adaptation, for camera-based vital sign measurement; 2) A systematic cross-dataset evaluation showing that our system considerably outperforms the state-of-the-art (42% to 52% reduction in heart rate error); 3) An ablation experiment freezing weights in the temporal and appearance branches to test sensitivity during adaptation; 4) An analysis of performance for subjects with different skin types. Our code, example models, and video results can be found on our GitHub page.1" }, { "heading": "2 BACKGROUND", "text": "Video-Based Physiological Measurement: Video-based physiological measurement is a growing interdisciplinary domain that leverages ubiquitous imaging devices (e.g., webcams, smartphones’ cameras) to measure vital signs and other physiological processes. Early work established that changes in light reflected from the body could be used to capture subtle variations in blood volume and motion related to the photoplethysmogram (PPG) (Takano & Ohta, 2007; Verkruysse et al., 2008) and ballistocardiogram (BCG) (Balakrishnan et al., 2013), respectively. Video analysis enables non-contact, spatial and temporal measurement of arterial and peripheral pulsations and allows for magnification of these signals (Wu et al., 2012), which may help with examination (e.g., Abnousi et al. (2019)). Based on the PPG and BCG signals, heart rate can be extracted (Poh et al., 2010b; Balakrishnan et al., 2013).\nHowever, the relationship between pixels and underlying physiological changes in a video is complex, and neural models have shown strong performance compared to source separation techniques (Chen & McDuff, 2018; Yu et al., 2019; Zhan et al., 2020). Conventional supervised learning requires a large amount of training data to produce a generalized model. However, obtaining a large body of physiological and facial data is complicated and expensive.
Current public datasets have limited numbers of subjects and diversity in terms of appearance (including skin type), camera sensors, environmental conditions, and subject motions.\n1https://github.com/anonymous0paper/MetaPhys\nTherefore, if the subject of interest is not in the training data or the video is otherwise different, performance can be considerably degraded, a result that is not acceptable for a physiological sensor.\nLee et al. (2020) recognized the potential for meta-learning applied to imaging-based cardiac pulse measurement. Their method (Meta-rPPG) focuses on using unsupervised meta-learning and an LSTM encoder-decoder architecture which, to our knowledge, was not validated in previous work. Instead, our proposed meta-learning framework is built on top of a state-of-the-art on-device network (Liu et al., 2020) and aims to explore the potential of both supervised and unsupervised on-device personalized meta-learning. Meta-rPPG uses a synthetic gradient generator and a prototypical distance minimizer to perform transductive inference to enable unsupervised meta-learning. This learning mechanism requires a number of rather complex steps. We propose a relatively simpler mechanism that is physiologically and optically grounded (Wang et al., 2016; Liu et al., 2020) and achieves greater accuracy.\nMeta-Learning and Person Specific Models: The ability to learn from a small number of samples or observations is often used as an example of the unique capabilities of human intelligence. However, machine learning systems are often brittle in a similar context. Meta-learning approaches tackle this problem by creating a general learner that is able to adapt to a new task with a small number of training samples, inspired by how humans can often master a new skill without many observations (Hospedales et al., 2020).
However, most of the previous work in meta-learning focuses on supervised vision problems (Zoph et al., 2018; Snell et al., 2017) and in the computer vision literature has mainly been applied to image analysis (Vinyals et al., 2016; Li et al., 2017). Supervised regression in video settings has received less attention. One of the few examples is object or face tracking (Choi et al., 2019; Park & Berg, 2018). In these tasks, the learner needs to adapt to the individual differences in appearance of the target and then track it across frames, even if the appearance changes considerably over time in the video. Choi et al. (2019) present a matching network architecture providing the meta-learner with information in the form of loss gradients obtained using the training samples.\nThe property of fast adaptation makes meta-learning a good candidate for personalizing models; it has been used in various applications such as dialogue agents (Madotto et al., 2019), gaze estimation (He et al., 2019), sleep stage classification (Banluesombatkul et al., 2020), activity recognition (Gong et al., 2019), and video retargeting (Lee et al., 2019). For example, Banluesombatkul et al. proposed a MAML-based meta-learning system to perform fast adaptation of a sleep stage classification model using biosignals (Banluesombatkul et al., 2020). More recently, MetaPix (Lee et al., 2019) leveraged a meta-learning training schema with a small amount of video to adapt a universal generator to a particular background and human in the problem of video retargeting. Similarly, our proposed meta-learning framework is also capable of personalizing a universal remote physiological model to a new person or an environmental setting."
}, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 PHYSIOLOGICAL META-LEARNING", "text": "In camera-based cardiac measurement, the goal is to separate pixel changes due to volumetric variations in blood and pulsatile motions from other variations that are not related to the pulse signal. Examples of “noise” in this context that might impact performance on the task include changes in the environment (illumination) and changes in the subject’s appearance and motions (e.g., facial expressions, rigid head motions). A model trained within a traditional supervised learning regime might perform well if illumination, non-pulsatile motions, and appearances in the test set are similar to those in the training set. However, empirical evidence shows that performance usually degrades significantly from one dataset to another, suggesting that traditional training is likely to overfit to the training set to some extent (Chen & McDuff, 2018). Therefore, to achieve state-of-the-art performance in remote physiological measurement on cross-dataset evaluation, the system should have: 1) a good initial representation of the mapping from the raw video data to the pulse signal, and 2) a strategy for adapting to unseen individuals and environments.\nTo achieve this, we propose a system called MetaPhys, an adaptable meta-learning based on-device framework aimed at efficient and personalized remote physiological sensing. MetaPhys uses a pretrained convolutional attention network as the backbone (described below) and leverages a novel personalized meta-learning schema to overcome the aforementioned limitations. We adopt Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) as our personalized parameter update schema. MAML produces a general initialization as the starting point for fast adaptation to a diverse set of unseen tasks with only a few training samples.
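The MAML update schema just described (inner adaptation of per-task parameters on a support set, outer update of the shared initialization on a query set) can be sketched on a toy problem. The sketch below is a first-order approximation in plain NumPy, not the actual TS-CAN/`higher`-based implementation; the linear "per-subject" tasks and all constants are illustrative stand-ins for per-subject video data.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(slope, n):
    # one toy "subject": y = slope * x + small noise (stands in for per-person data)
    x = rng.uniform(-1.0, 1.0, n)
    y = slope * x + 0.01 * rng.normal(size=n)
    return x, y

def grad(w, x, y):
    # gradient of the MSE loss L(w) = mean((w*x - y)**2) w.r.t. the scalar weight w
    return float(np.mean(2.0 * (w * x - y) * x))

def maml_train(steps=500, alpha=0.1, beta=0.05, tasks_per_step=4):
    w = 0.0  # global initialization (theta)
    for _ in range(steps):
        meta_grad = 0.0
        for slope in rng.uniform(-2.0, 2.0, tasks_per_step):
            xs, ys = task_batch(slope, 10)      # support set (K samples)
            xq, yq = task_batch(slope, 10)      # query set (K' samples)
            w_i = w - alpha * grad(w, xs, ys)   # inner update -> personalized theta_i
            meta_grad += grad(w_i, xq, yq)      # first-order outer gradient on query loss
        w -= beta * meta_grad / tasks_per_step  # outer update of the initialization
    return w

w0 = maml_train()
# test-time personalization on a new "subject" using a small support set only
xs, ys = task_batch(1.5, 10)
w_adapted = w0 - 0.1 * grad(w0, xs, ys)
```

The full method differentiates through the inner update (second-order); the first-order shortcut here keeps the sketch short while preserving the support/query structure.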
However, applying MAML to the task of camera-based physiological measurement differs from many previously explored meta-learning problems. Existing meta-learning approaches are often evaluated on classification or some toy regression tasks due to the lack of regression benchmark datasets (Hospedales et al., 2020). Our problem is a nontrivial vision-based regression task due to the subtle nature of the underlying physiological signal. Algorithm 1 outlines the training process for MetaPhys: we first pretrain the backbone network to get an initial spatial-temporal representation. Then we treat each individual as a task τi. During training, we split the data into a support set (K video frames) and a query set (K′ video frames) for each individual (task). The support set is used to update the task’s parameters and yield a personalized model θi. The query set is used to assess the effectiveness of the personalized model and further update the global initialization θ to make future adaptation better. A robust personalized model θi aims to provide a more accurate attention mask to the corresponding motion branch and to perform precise physiological measurement for the target individual as well as the target’s environment. During the testing stage, MetaPhys has the updated global initialization θ̂ and can generate θ̂i for each test individual (task) by optimizing on the test support set as θ̂τi ← θ̂ − α∇θ̂ Lτi f(θ̂). With this training and testing schema, the robust global initialization θ̂ generated by MetaPhys not only leverages the pretrained representation but also learns how to adapt to individual and environmental noise quickly." }, { "heading": "3.2 SPATIAL AND TEMPORAL MODEL ARCHITECTURE BACKBONE", "text": "Our ultimate goal is a computationally efficient on-device meta-learning framework that offers inference at 150 fps. Therefore, we adopt the state-of-the-art architecture (TS-CAN) (Liu et al., 2020) for remote cardiopulmonary monitoring.
TS-CAN is an end-to-end neural architecture with appearance and motion branches. The inputs are video frames and the output is the first derivative of the pulse estimate. Tensor shifting modules (TSM) (Lin et al., 2019) are used to shift frames along the temporal axis, allowing for information exchange across time. This helps capture temporal dependencies beyond consecutive frames. The appearance branch and attention mechanism help guide the motion branch to focus on regions with high pulsatile signal (e.g., skin) instead of others (e.g., clothes, hair) (see Fig. 1). However, we discover empirically that this network does not necessarily generalize well across datasets with differences in subjects, lighting, backgrounds, and motions (see Table 1). One of the main challenges when employing TS-CAN is that the appearance branch may not generate an accurate mask while testing on unseen subjects or environments because of the differences in appearance of skin pixels. Without a good attention mask, motions from other sources are likely to be given more weight, thus damaging the quality of our physiological estimate." }, { "heading": "3.3 SUPERVISED OR UNSUPERVISED LEARNING", "text": "We explore both supervised and unsupervised training regimes for MetaPhys. Supervised personalization may be suitable in clinical settings that require highly precise adaptation and where there is access to reference devices. Unsupervised personalization may be preferable for consumer measurement, when convenience and scalability are of greater priority and calibration with a clinical-grade device might be difficult.\nFor the supervised version of MetaPhys we use the gold standard reference signal from a finger PPG or blood pressure wave (BPW) to train the meta-learner and perform few-shot adaptation when testing.
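As an aside before describing the unsupervised variant: the tensor-shift operation at the heart of the TS-CAN backbone (Section 3.2) is simple enough to sketch in a few lines. In the NumPy sketch below, a fraction of feature channels is shifted one frame backward and another fraction one frame forward, so a 2-D convolution applied per frame can mix information from neighboring frames at zero extra parameter cost; the 1/8 channel fractions follow Lin et al. (2019), and the shapes are illustrative.

```python
import numpy as np

def temporal_shift(x, shift_frac=0.125):
    """x: features of shape (T, C, H, W). Shift the first shift_frac of the
    channels back by one frame, the next shift_frac forward by one frame,
    and leave the rest in place; vacated slots are zero-filled."""
    t, c, h, w = x.shape
    n = int(c * shift_frac)
    out = np.zeros_like(x)
    out[:-1, :n] = x[1:, :n]            # frame t sees features of frame t+1
    out[1:, n:2 * n] = x[:-1, n:2 * n]  # frame t sees features of frame t-1
    out[:, 2 * n:] = x[:, 2 * n:]       # remaining channels untouched
    return out

feats = np.random.default_rng(0).normal(size=(20, 8, 4, 4))  # 20-frame window
shifted = temporal_shift(feats)
```

Each frame's features thus carry information from its temporal neighbors before any per-frame convolution is applied.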
In contrast to the supervised version, in the unsupervised case we use pseudo labels, rather than the ground-truth signal from medical-grade devices, during the training of the MetaPhys meta-learner and its parameter updates. We use a physiologically-based unsupervised remote physiological measurement model to generate pseudo pulse signal estimates without relying on gold-standard measurements. More specifically, we leverage the Plane-Orthogonal-to-Skin (POS) method (Wang et al., 2016), which is the current state-of-the-art for demixing in this context. POS calculates a projection plane orthogonal to the skin tone, derived from optical and physiological principles, that is then used for pulse extraction. In detail, POS can be summarized in four steps: 1) spatially averaging each frame, 2) temporal normalization within a certain window size, 3) applying a fixed matrix projection to offset specular reflections and other noise, and 4) band-pass filtering.\nWe observe that even though our unsupervised model uses the POS signal for meta-training, MetaPhys significantly outperforms POS once trained. As Algorithm 1 illustrates, the pseudo label generator G produces pseudo labels for both the K support frames and the K′ query frames for adaptation and parameter updates.
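A minimal sketch of such a pseudo-label generator, following the four POS steps, is shown below in NumPy. It assumes the video has already been reduced to a spatially averaged RGB trace per frame (step 1) and omits the final band-pass filter; the window length, projection matrix, and overlap-add scheme follow Wang et al. (2016), while the synthetic trace and its amplitudes are purely illustrative.

```python
import numpy as np

def pos_pulse(rgb, fps=30, win_sec=1.6):
    # rgb: (N, 3) trace of spatially averaged R, G, B values, one row per frame
    n = rgb.shape[0]
    l = int(win_sec * fps)                   # sliding-window length
    proj = np.array([[0.0, 1.0, -1.0],       # fixed plane-orthogonal-to-skin
                     [-2.0, 1.0, 1.0]])      # projection (Wang et al., 2016)
    h = np.zeros(n)
    for t in range(n - l + 1):
        c = rgb[t:t + l]
        cn = c / c.mean(axis=0)              # step 2: temporal normalization
        s = cn @ proj.T                      # step 3: fixed matrix projection
        p = s[:, 0] + (s[:, 0].std() / (s[:, 1].std() + 1e-9)) * s[:, 1]
        h[t:t + l] += p - p.mean()           # overlap-add of zero-mean windows
    return h

# toy trace: a 1.2 Hz pulse riding on constant skin reflection
t = np.arange(300) / 30.0
pulse = np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([1.0 + 0.002 * pulse,
                1.0 + 0.005 * pulse,
                1.0 + 0.001 * pulse], axis=1)
recovered = pos_pulse(rgb)
```

On this toy trace, the recovered signal tracks the embedded pulse closely, which is what makes POS usable as a pseudo-label source for meta-training.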
We used pseudo labels for the query set (K′) in training, as preliminary testing showed similar empirical results whether pseudo labels or ground-truth labels were used.\nAlgorithm 1 MetaPhys: Meta-learning for physiological signal personalization Require: S: Subject-wise video data Require: A batch of personalized tasks τ where each task τi contains N data points from Si Require: A pseudo label generator G for unsupervised meta-learning\n1: θ ← Pre-training TS-CAN on AFRL dataset 2: for each τi ∈ τ do 3: if Supervised then 4: K ← Sample K support frames from videos of τi with ground-truth labels 5: K′ ← Sample K′ query frames from videos of τi with ground-truth labels 6: else 7: K ← Sample K support frames from videos of τi with pseudo labels from G 8: K′ ← Sample K′ query frames from videos of τi with pseudo labels from G 9: end if\n10: θτi ← θ − α∇θLτi(f(K, θ)), Update the personalized params. based on indiv. support loss 11: end for 12: θ̂ ← θ − β∇θ ∑τi Lτi(f(K′τi, θτi)), Update the global params. based on individuals’ query loss" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS", "text": "AFRL (Estepp et al., 2014): 300 videos of 25 participants (17 males) were recorded at 658x492 resolution and 30 fps. Pulse measurements were recorded via a contact reflectance PPG sensor and used for training. Electrocardiograms (ECG) were recorded for evaluating performance. Each participant was recorded six times with increasing head motion in each task (10 degrees/second, 20 degrees/second, 30 degrees/second). The participants were asked to sit still for the first two tasks and to perform three motion tasks rotating their head about the vertical axis.\nUBFC (Bobbia et al., 2019): 42 videos of 42 participants were recorded at 640x480 resolution and 30 fps in uncompressed 8-bit RGB format. A CMS50E transmissive pulse oximeter was used to obtain the ground-truth PPG data.
All the experiments were conducted indoors with varying sunlight and indoor illumination. Participants were also asked to play time-sensitive mathematical games to raise their heart rate during data collection.\nMMSE (Zhang et al., 2016): 102 videos of 40 participants were recorded at 1040x1392 resolution and 25 fps. A blood pressure wave signal was measured at 1000 fps as the gold standard. The blood pressure wave was used as the training signal for this data as a PPG signal was not available. The distribution of skin types based on the Fitzpatrick scale (Fitzpatrick, 1988) is: II=8, III=11, IV=17, V+VI=4." }, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "MetaPhys was implemented in PyTorch (Paszke et al., 2019), and all the experiments were conducted on an Nvidia 2080Ti GPU. We first implemented the backbone network (TS-CAN) and modified it to use a window size of 20 frames (rather than 10) because we empirically observed that a larger window size led to better overall performance. We then implemented MetaPhys based on a gradient computation framework called higher (Grefenstette et al., 2019). Compared with most previous meta-learning studies that were trained and evaluated on a single dataset (e.g., mini-ImageNet (Vinyals et al., 2016)), we used three datasets to perform pretraining and cross-dataset training and evaluation. Our backbone was pretrained on the AFRL dataset, and the training (described in Algorithm 1) and evaluation of our meta-learner were performed with the UBFC and MMSE datasets. We picked the size of the support set (K) for personalization to be 540 video frames for each individual. For a 30 fps video recording this equates to an 18-second recording, which is a reasonably short calibration period. During meta-training and adaptation, we used an Adam optimizer (Kingma & Ba, 2014) with an outer learning rate (β) of 0.001 and a stochastic gradient descent (SGD) optimizer with an inner learning rate (α) of 0.005.
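With these learning rates, one meta-update of Algorithm 1 can be sketched on a toy model. The scalar linear model and the tasks below are hypothetical stand-ins for TS-CAN and the per-subject videos, and this uses the cheaper first-order approximation; the actual implementation differentiates through the inner update via the `higher` library:

```python
import numpy as np

def fomaml_step(theta, tasks, alpha=0.005, beta=0.001):
    """One meta-update following Algorithm 1 (first-order approximation).

    theta: global scalar parameter of a toy model f(x) = theta * x.
    tasks: list of ((xs, ys), (xq, yq)) support/query pairs.
    The inner step personalizes theta on each support set (line 10);
    the outer step updates the global theta from the personalized
    query-loss gradients (line 12).
    """
    def grad(w, x, y):                       # d/dw of mean((w*x - y)^2)
        return 2.0 * np.mean(x * (w * x - y))

    outer = 0.0
    for (xs, ys), (xq, yq) in tasks:
        theta_i = theta - alpha * grad(theta, xs, ys)  # inner (support) update
        outer += grad(theta_i, xq, yq)                 # query-loss gradient
    return theta - beta * outer                        # outer (global) update
```

Repeated calls move the global parameter toward a point from which a single inner step adapts well to each individual task, which is the personalization behavior Algorithm 1 is after.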
We trained the meta-learner for 10 epochs and performed one-step adaptation (i.e., a single gradient descent step).\nAs baselines, we implemented traditional supervised training (TS-CAN) on AFRL and evaluated on MMSE and UBFC. Conventional fine-tuning on the support set followed by testing on the query set was implemented as our adaptation baseline. To ensure a fair comparison across all experiments, we forced the test data (test query set) to remain the same within each task. We also implemented three established unsupervised algorithms (CHROM, POS, ICA) using iPhys-Toolbox (McDuff & Blackford, 2019). We applied post-processing to the outputs of all the methods in the same way. We first divided the remainder of the recordings for each participant into 360-frame windows (approximately 12 seconds), with no overlap, and applied a 2nd-order Butterworth filter with cutoff frequencies of 0.75 and 2.5 Hz (these represent a realistic range of heart rates we would expect for adults). We then computed four metrics for each window: mean absolute error (MAE), root mean squared error (RMSE), signal-to-noise ratio (SNR) and correlation (ρ) in heart-rate estimations. Unlike most prior work, which evaluated performance on whole videos (often 30 or 60 seconds worth of data), we perform evaluation on 12-second sequences, which is considerably more challenging as the model has much less information for inference." }, { "heading": "5 RESULTS AND DISCUSSION", "text": "Comparison with the State-of-the-Art: For the MMSE dataset, our proposed supervised and unsupervised MetaPhys with pretraining outperformed the state-of-the-art results by 7% and 42% in MAE, respectively (see Table 1). On the UBFC dataset, supervised and unsupervised MetaPhys with pretraining showed even greater benefits, reducing error by 57% and 44% compared to the previous state-of-the-art, respectively.
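The windowing and heart-rate estimation protocol described above (non-overlapping 360-frame windows at 30 fps, a 0.75-2.5 Hz band) can be sketched as follows. As a simplifying assumption, the Butterworth filtering is replaced here by restricting the FFT peak search to the same band:

```python
import numpy as np

def estimate_hr(pulse, fs=30.0, lo=0.75, hi=2.5):
    """Estimate heart rate (BPM) from one pulse window.

    Restricts the spectrum to the plausible heart-rate band
    (0.75-2.5 Hz, i.e. 45-150 BPM) and returns the dominant frequency;
    limiting the peak search to this band stands in for the 2nd-order
    Butterworth filter used in the evaluation pipeline.
    """
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    power = np.abs(np.fft.rfft(pulse - pulse.mean())) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

def windowed_hr(pulse, fs=30.0, win=360):
    """Split a recording into non-overlapping 360-frame (~12 s) windows
    and estimate one HR per window, as in the evaluation protocol."""
    n = len(pulse) // win
    return [estimate_hr(pulse[i * win:(i + 1) * win], fs) for i in range(n)]
```

Per-window HR estimates from all methods then feed into the same MAE/RMSE/SNR/correlation computations, which keeps the comparison fair.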
Meta-learning alone is not as effective as meta-learning using weights initialized in a pretraining stage (19% and 50% improvements on MMSE and UBFC). We also compared our method against the only other meta-learning based method (Meta-rPPG), reducing the MAE by 68%. Furthermore, we compared MetaPhys against the traditional personalization method (fine-tuning), and our approach gained 54% and 61% improvements in terms of MAE on MMSE and UBFC, respectively. We also evaluated support set sizes of 6s, 12s and 18s during meta-training and testing, and the results showed that training with 18s (RMSE: 3.12) outperformed 6s (RMSE: 5.43) and 12s (RMSE: 5.53) on the MMSE dataset. A similar trend was also observed on the UBFC dataset (RMSE of 18s: 3.12, RMSE of 12s: 4.48, RMSE of 6s: 3.46).\nUnsupervised vs. Supervised Adaptation: Next, we examine the difference between using a supervised and an unsupervised training regime in MetaPhys. For UBFC, the supervised model (MAE=1.90 BPM) outperformed the unsupervised model (MAE=2.46 BPM), whereas for the MMSE dataset the unsupervised model (MAE=1.87 BPM) outperformed the supervised model (MAE=2.98 BPM). The fact that the unsupervised model achieves broadly comparable results to the supervised model is surprising and encouraging, because there are many applications where unsupervised adaptation would be more convenient and efficient (e.g., calibrating a heart rate measurement app on a smartphone without needing a reference device). We also observe that the unsupervised model, even though it used the POS signal as training input, significantly outperforms POS on both datasets, suggesting MetaPhys is able to form a better representation.\nVisualizing Adaptation: To help us understand why MetaPhys outperforms the state-of-the-art models, we visualized the attention masks for different subjects. In Fig.
3-A, we compare the attention masks from the appearance branch of TS-CAN under four training schemes: 1) supervised training with TS-CAN, 2) pretraining TS-CAN on AFRL and then fine-tuning it on the support set used for the meta-learning experiments, 3) pretraining on AFRL and supervised MetaPhys training, 4) pretraining on AFRL and unsupervised MetaPhys training. The differences are subtle, but on inspection we can notice that MetaPhys leads to masks that put higher weight on regions with greater pulsatile signal (e.g., forehead and cheeks) and less weight on less important regions (e.g., t-shirt; see P5 as an example). In Fig. 3-B, we visualize the progression of learning for the four different methods. Again the changes during learning are subtle, but the traditional supervised methods seem more likely to overfit, even over a relatively small number of epochs, meaning that the attention to important regions of the face is not as high as with the meta-learning approaches, presumably because traditional supervised learning has to capture a more generic model which is not well adapted to any one specific individual.\nFreezing Appearance vs. Motion: We questioned whether the adaptation of the appearance mask was the main or sole reason for the improvements provided by MetaPhys. To test this, we froze the weights in the motion branch of TS-CAN during the meta-training stage and only updated weights in the appearance branch. From the results of these experiments, we observe a 20% increase in MAE, indicating that MetaPhys not only noticeably improves the quality of the attention mask, but also learns additional temporal dynamics specific to an individual’s pulse waveform.\nRobustness to Skin Type: Our motivation for adopting a meta-learning approach is to improve generalization. One challenge with photoplethysmography methods is their sensitivity to skin type.
A larger melanin concentration in people with darker skin leads to higher light absorption compared to lighter skin types (Nowara et al., 2020), thus reducing the signal-to-noise ratio of the reflected signal. Fig. 2 shows a bar plot of the MAE in heart rate estimates by skin type (we group types I+II and V+VI as there were relatively few subjects in these categories). Both the AFRL and UBFC datasets are heavily skewed towards lighter Caucasian skin type categories. Therefore, supervised methods trained on these datasets (e.g., TS-CAN) tend to overfit and do not perform well on other skin types. Entirely unsupervised baselines do not perform any better, possibly because they were mostly designed and validated with lighter skin type data as well. While the highest errors for unsupervised MetaPhys still come in the darkest skin type categories, the reduction in error for types V+VI is considerable (68% compared to POS, 50% compared to TS-CAN). We are encouraged that these results are a step towards more consistent performance across people of different appearances.\nLimitations: There is a trend of performance degradation as skin type gets darker. We acknowledge this limitation and plan to use resampling to help address this bias in future work. Both the MMSE and UBFC datasets have somewhat limited head motion, and future work will investigate whether meta-learning can help with generalization to other motion conditions." }, { "heading": "6 CONCLUSIONS", "text": "We present a novel unsupervised few-shot adaptation framework for non-contact physiological measurement called MetaPhys. Our proposed method substantially improves on the state-of-the-art, as well as performance across various skin types, and we also shed light on why and how our method achieves these improvements."
}, { "heading": "A APPENDIX", "text": "A.1 EVALUATION METRICS\nMean Absolute Error (MAE): The MAE between our model estimates and the gold-standard heart rates from the contact sensor measurements was calculated as follows for each 12-second time window:\nMAE = \frac{1}{T} \sum_{i=1}^{T} |HR_i - HR'_i| (1)\nRoot Mean Squared Error (RMSE): The RMSE between our model estimates and the gold-standard heart rates from the contact sensor measurements was calculated as follows for each 12-second time window:\nRMSE = \sqrt{\frac{1}{T} \sum_{i=1}^{T} (HR_i - HR'_i)^2} (2)\nIn both cases, HR_i is the gold-standard heart rate and HR'_i is the heart rate estimated from the video. The gold-standard HR frequency was determined from the gold-standard PPG signal (UBFC dataset) or blood pressure wave (MMSE dataset).\nWe also compute the Pearson correlation between the estimated heart rates and the gold-standard heart rates from the contact sensor measurements across all the subjects.\nSignal-to-Noise Ratios (SNR): We calculate blood volume pulse signal-to-noise ratios (SNR) (De Haan & Jeanne, 2013). This captures the signal quality of the recovered pulse estimates without penalizing heart rate estimates that are slightly inaccurate. The gold-standard HR frequency was determined from the gold-standard PPG waveform (UBFC dataset) or blood pressure wave (MMSE dataset).\nSNR = 10 \log_{10} \left( \frac{\sum_{f=30}^{240} (U_t(f) \hat{S}(f))^2}{\sum_{f=30}^{240} ((1 - U_t(f)) \hat{S}(f))^2} \right) (3)\nwhere \hat{S} is the power spectrum of the BVP signal (S), f is the frequency (in BPM), HR is the heart rate computed from the gold-standard device, and U_t(f) is a binary template that is one in the heart rate region from HR-6 BPM to HR+6 BPM and in its first harmonic region from 2*HR-12 BPM to 2*HR+12 BPM, and zero elsewhere.\nA.2 CODE AND ADDITIONAL RESULTS:\nAvailable at: https://github.com/anonymous0paper/MetaPhys." } ]
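The three metrics in Equations 1-3 can be computed directly. A NumPy sketch follows, in which the SNR template implements the +/-6 BPM and +/-12 BPM bands described above; the spectrum handling is a straightforward FFT rather than any toolbox-specific estimator:

```python
import numpy as np

def mae(hr, hr_hat):
    """Eq. 1: mean absolute error between per-window HR estimates."""
    return float(np.mean(np.abs(np.asarray(hr) - np.asarray(hr_hat))))

def rmse(hr, hr_hat):
    """Eq. 2: root mean squared error between per-window HR estimates."""
    return float(np.sqrt(np.mean((np.asarray(hr) - np.asarray(hr_hat)) ** 2)))

def snr_db(bvp, hr_bpm, fs=30.0):
    """Eq. 3: BVP signal-to-noise ratio (De Haan & Jeanne, 2013).

    Power within +/-6 BPM of the gold-standard HR and +/-12 BPM of its
    first harmonic counts as signal; everything else in 30-240 BPM is
    noise. bvp is the estimated pulse waveform.
    """
    freqs_bpm = 60.0 * np.fft.rfftfreq(len(bvp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(bvp - np.mean(bvp))) ** 2
    band = (freqs_bpm >= 30) & (freqs_bpm <= 240)
    template = (np.abs(freqs_bpm - hr_bpm) <= 6) | \
               (np.abs(freqs_bpm - 2 * hr_bpm) <= 12)
    sig = power[band & template].sum()
    noise = power[band & ~template].sum()
    return 10.0 * np.log10(sig / noise)
```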
2020
null
SP:adf55a0c96d1e5ffb8016b8bec41aa0caca79793
[ "This paper proposes a stochastic subset selection method for reducing the storage / transmission cost of datasets. The proposed method minimizes the expected loss over selected datasets. The data selection algorithm consists of a candidate selection stage and an autoregressive selection stage, both parameterized with neural networks and trainable by gradient methods. The authors formulate and test their approach on four tasks. The problem formulation and methodology are technically sound. The proposed method also seems to be more general than competing methods, such as coresets.", "This work introduces a method to select instances from any set (stochastic subset selection, or SSS). The experiments demonstrate a diverse set of use cases, including feature selection and core-set selection. The proposed approach is a two-stage method involving candidate selection (learning a function $\\rho$ to determine a Bernoulli probability for each input) and autoregressive subset selection (learning a function $f$ to generate probabilities for sampling elements from a reduced set); both stages use the Concrete distribution to ensure differentiability." ]
Current machine learning algorithms are designed to work with huge volumes of high-dimensional data such as images. However, these algorithms are being increasingly deployed to resource-constrained systems such as mobile devices and embedded systems. Even in cases where large computing infrastructure is available, the size of each data instance, as well as of datasets, can provide a huge bottleneck in data transfer across communication channels. Also, there is a huge incentive, in both energy and monetary terms, in reducing both the computational and memory requirements of these algorithms. For non-parametric models that need to leverage the stored training data at inference time, the increased cost in memory and computation could be even more problematic. In this work, we aim to reduce the volume of data these algorithms must process through an end-to-end two-stage neural subset selection model, where the first stage selects a set of candidate points using a conditionally independent Bernoulli mask, followed by an iterative coreset selection via a conditional Categorical distribution. The subset selection model is trained by meta-learning with a distribution of sets. We validate our method on set reconstruction and classification tasks with feature selection, as well as on the selection of representative samples from a given dataset, on which our method outperforms relevant baselines. We also show in our experiments that our method enhances the scalability of non-parametric models such as Neural Processes.
[]
[ { "authors": [ "Muhammed Fatih Balın", "Abubakar Abid", "James Zou" ], "title": "Concrete autoencoders: Differentiable feature selection and reconstruction", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Abhinav Bhatia", "Pradeep Varakantham", "Akshat Kumar" ], "title": "Resource constrained deep reinforcement learning", "venue": "In Proceedings of the International Conference on Automated Planning and Scheduling,", "year": 2019 }, { "authors": [ "Trevor Campbell", "Tamara Broderick" ], "title": "Bayesian coreset construction via greedy iterative geodesic ascent", "venue": "arXiv preprint arXiv:1802.01737,", "year": 2018 }, { "authors": [ "Trevor Campbell", "Tamara Broderick" ], "title": "Automated scalable bayesian inference via hilbert coresets", "venue": "The Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Michael Chan", "Daniel Scarafoni", "Ronald Duarte", "Jason Thornton", "Luke Skelly" ], "title": "Learning network architectures of deep cnns under resource constraints", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Jianbo Chen", "Le Song", "Martin J Wainwright", "Michael I Jordan" ], "title": "Learning to explain: An information-theoretic perspective on model interpretation", "venue": "arXiv preprint arXiv:1802.07814,", "year": 2018 }, { "authors": [ "Cody Coleman", "Christopher Yeh", "Stephen Mussmann", "Baharan Mirzasoleiman", "Peter Bailis", "Percy Liang", "Jure Leskovec", "Matei Zaharia" ], "title": "Selection via proxy: Efficient data selection for deep learning", "venue": null, "year": 1906 }, { "authors": [ "Gwendoline De Bie", "Gabriel Peyré", "Marco Cuturi" ], "title": "Stochastic deep networks", "venue": "arXiv preprint arXiv:1811.07429,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale 
hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Oren Dovrat", "Itai Lang", "Shai Avidan" ], "title": "Learning to sample", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Towards a neural statistician", "venue": "arXiv preprint arXiv:1606.02185,", "year": 2016 }, { "authors": [ "Yuval Eldar", "Michael Lindenbaum", "Moshe Porat", "Yehoshua Y Zeevi" ], "title": "The farthest point strategy for progressive image sampling", "venue": "IEEE Transactions on Image Processing,", "year": 1997 }, { "authors": [ "Yarin Gal", "Jiri Hron", "Alex Kendall" ], "title": "Concrete dropout", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "HERBERT Hensel" ], "title": "Neural processes in thermoregulation", "venue": "Physiological Reviews,", "year": 1973 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jonathan Huggins", "Trevor Campbell", "Tamara Broderick" ], "title": "Coresets for scalable bayesian logistic regression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "arXiv preprint arXiv:1803.00942,", "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali 
Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes", "venue": "arXiv preprint arXiv:1901.05761,", "year": 2019 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes", "venue": "arXiv preprint arXiv:1901.05761,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 and cifar-100 datasets", "venue": "URl: https://www. cs. toronto. edu/kriz/cifar. html,", "year": 2009 }, { "authors": [ "Mengtian Li", "Ersin Yumer", "Deva Ramanan" ], "title": "Budgeted training: Rethinking deep neural network training under resource constraints", "venue": "arXiv preprint arXiv:1905.04753,", "year": 2019 }, { "authors": [ "Mu Li", "Wangmeng Zuo", "Shuhang Gu", "Debin Zhao", "David Zhang" ], "title": "Learning convolutional networks for content-weighted image compression", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yangyan Li", "Rui Bu", "Mingchao Sun", "Wei Wu", "Xinhan Di", "Baoquan Chen" ], "title": "Pointcnn: Convolution on x-transformed points", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Large-scale celebfaces attributes (celeba) dataset", "venue": "Retrieved August,", "year": 2018 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete 
distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Fabian Mentzer", "Eirikur Agustsson", "Michael Tschannen", "Radu Timofte", "Luc Van Gool" ], "title": "Conditional probability models for deep image compression", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Carsten Moenning", "Neil A Dodgson" ], "title": "Fast marching farthest point sampling", "venue": "Technical report,", "year": 2003 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Siamak Ravanbakhsh", "Jeff Schneider", "Barnabas Poczos" ], "title": "Deep learning with sets and point clouds", "venue": "arXiv preprint arXiv:1611.04500,", "year": 2016 }, { "authors": [ "Oren Rippel", "Lubomir Bourdev" ], "title": "Real-time adaptive image compression", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Akiyoshi Sannai", "Yuuki Takai", "Matthieu Cordonnier" ], "title": "Universal approximations of permutation invariant/equivariant functions by deep neural networks", "venue": "arXiv preprint 
arXiv:1903.01939,", "year": 2019 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "arXiv preprint arXiv:1708.00489,", "year": 2017 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "George Toderici", "Damien Vincent", "Nick Johnston", "Sung Jin Hwang", "David Minnen", "Joel Shor", "Michele Covell" ], "title": "Full resolution image compression with recurrent neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tongzhou Wang", "Jun-Yan Zhu", "Antonio Torralba", "Alexei A. Efros" ], "title": "Dataset distillation", "venue": "arXiv preprint arXiv:1811.10959,", "year": 2018 }, { "authors": [ "Kai Wei", "Rishabh Iyer", "Jeff Bilmes" ], "title": "Submodularity in data subset selection and active learning", "venue": "In International Conference on Machine Learning,", "year": 2015 } ]
[ { "heading": null, "text": "Current machine learning algorithms are designed to work with huge volumes of high-dimensional data such as images. However, these algorithms are being increasingly deployed to resource-constrained systems such as mobile devices and embedded systems. Even in cases where large computing infrastructure is available, the size of each data instance, as well as of datasets, can provide a huge bottleneck in data transfer across communication channels. Also, there is a huge incentive, in both energy and monetary terms, in reducing both the computational and memory requirements of these algorithms. For non-parametric models that need to leverage the stored training data at inference time, the increased cost in memory and computation could be even more problematic. In this work, we aim to reduce the volume of data these algorithms must process through an end-to-end two-stage neural subset selection model, where the first stage selects a set of candidate points using a conditionally independent Bernoulli mask, followed by an iterative coreset selection via a conditional Categorical distribution. The subset selection model is trained by meta-learning with a distribution of sets. We validate our method on set reconstruction and classification tasks with feature selection, as well as on the selection of representative samples from a given dataset, on which our method outperforms relevant baselines. We also show in our experiments that our method enhances the scalability of non-parametric models such as Neural Processes." }, { "heading": "1 INTRODUCTION", "text": "The recent success of deep learning algorithms owes partly to the availability of huge volumes of data (Deng et al., 2009; Krizhevsky et al., 2009; Liu et al., 2015), which enable the training of very large deep neural networks.
However, the high dimensionality of each data instance and the large size of datasets make it difficult, especially for resource-limited devices (Chan et al., 2018; Li et al., 2019; Bhatia et al., 2019), to store and transfer the dataset, or to perform on-device learning with the data. This issue is even more pronounced for non-parametric models such as Neural Processes (Hensel, 1973; Kim et al., 2019a), which require the training dataset to be stored for inference. Therefore, it is appealing to reduce the size of the dataset, both at the instance level (Dovrat et al., 2019; Li et al., 2018b) and at the dataset level, such that we select only a small number of samples from the dataset, each of which contains only a few selected input features (e.g. pixels). Then, we could use the selected subset for the reconstruction of the entire set (either each instance or the entire dataset) or for a prediction task, such as classification.\nThe simplest way to obtain such a subset is random sampling, but it is highly sub-optimal in that it treats all elements in the set equally. However, the pixels of each image and the examples of each dataset have varying degrees of importance (Katharopoulos & Fleuret, 2018) for a target task, whether it is reconstruction or prediction, and thus random sampling will generally incur a large loss of accuracy on the target task. There exists some work on coreset construction (Huggins et al., 2016; Campbell & Broderick, 2018; 2019) which proposes to construct a small subset with the most important samples for Bayesian posterior inference. However, these methods cannot be applied straightforwardly to deep learning with an arbitrary target task. How can we then sample elements from the given set to construct a subset, such that it incurs minimal accuracy loss on any target task?
To this end, we propose to learn a sampler that selects the most important samples for a given task, by training it jointly with the target task, and we additionally meta-learn the sampler over a distribution of datasets for instance selection in the classification task.\nSpecifically, we learn the sampling rate for individual samples in two stages. First, we learn a Bernoulli sampling rate for each individual sample to efficiently screen out less important elements. Then, to select the most important elements out of this candidate set while considering relative importance, we use a Categorical distribution to model the conditional distribution of sampling each element given a set of selected elements. After learning the sampling probability for each stage, we can perform stochastic selection of a given set with linear time complexity. Our Stochastic Subset Selection (SSS) is a general framework for sampling elements from a set, and it can be applied to both feature sampling and instance sampling. SSS can reduce the memory and computation cost required to process data while retaining performance on downstream tasks.\nOur model can benefit a wide range of practical applications. For example, when sending an image to an edge device with low computing power, instead of sending the entire image, we could send a subset of pixels with their coordinates, which will reduce both communication and inference cost. Similarly, edge devices may need to perform real-time inference on a huge amount of data that can be represented as a set (e.g. video, point clouds), and our feature selection could be used to speed up inference. Moreover, our model could also help with on-device learning on personal data (e.g. photos), as it can select out examples to train the model at a reduced cost.
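The first-stage Bernoulli masks described above can be made differentiable with a continuous relaxation, the binary Concrete (relaxed Bernoulli) distribution (Maddison et al., 2016). A NumPy sketch follows; the per-element probabilities would come from a learned scoring network, which is left hypothetical here:

```python
import numpy as np

def relaxed_bernoulli_mask(probs, tau=0.5, rng=None):
    """Differentiable candidate-selection mask via the binary Concrete
    (relaxed Bernoulli) distribution.

    probs: per-element selection probabilities (from a scoring network);
    tau: temperature. As tau -> 0 the samples approach hard {0, 1}
    Bernoulli draws, which is how low-importance elements get screened
    out at test time.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(probs))
    logits = np.log(probs) - np.log1p(-np.asarray(probs))  # log p / (1-p)
    noise = np.log(u) - np.log1p(-u)                       # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / tau))   # soft mask in (0, 1)
```

The second stage would then draw from a Categorical (via an analogous Gumbel-softmax relaxation) over the surviving candidates, conditioning each draw on the elements already selected.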
Finally, it can help with the scalability of non-parametric models that require the storage of training examples, such as Neural Processes, to scale up to large-scale problems.\nWe validate our SSS model on multiple datasets for 1D function regression and 2D image reconstruction and classification, for both feature selection and instance selection. The results show that our method is able to select samples with a minimal decrease in target-task accuracy, largely outperforming random sampling and an existing sampling method. Our contribution in this work is threefold:\n• We propose a novel two-stage stochastic subset selection method that learns to sample a subset from a larger set with linear time complexity, with minimal loss of accuracy on the downstream task.\n• We propose a framework that trains the subset selection model via meta-learning, such that it can generalize to unseen tasks.\n• We validate the efficacy and generality of our model on various datasets for feature selection from an instance and instance selection from a dataset, on which it significantly outperforms relevant baselines." }, { "heading": "2 RELATED WORK", "text": "Set encoding - Permutation invariant networks Recently, extensive research efforts have been made in the area of set representation learning, with the goal of obtaining order-invariant (or equivariant) and size-invariant representations. Many propose simple methods to obtain set representations by applying non-linear transformations to each element before a pooling layer (e.g. average pooling or max pooling) (Ravanbakhsh et al., 2016; Qi et al., 2017b; Zaheer et al., 2017; Sannai et al., 2019). However, these models are known to have limited expressive power and are sometimes incapable of capturing high moments of distributions.
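The simple pooling-based set encoders just described can be sketched in a few lines. A NumPy illustration with placeholder weights and a tanh nonlinearity (both arbitrary choices for the sketch):

```python
import numpy as np

def deepsets_encode(X, W1, W2):
    """Minimal DeepSets-style set encoder: a per-element transform phi,
    mean-pooling, then a set-level transform rho. W1 and W2 stand in
    for learned parameters. The output is invariant to the ordering of
    the rows of X, which is the property discussed above.
    """
    phi = np.tanh(X @ W1)          # per-element transform, shape (n, h)
    pooled = phi.mean(axis=0)      # permutation-invariant pooling, shape (h,)
    return np.tanh(pooled @ W2)    # set-level representation
```

Because pooling collapses the set into one vector regardless of ordering or size, such encoders are order- and size-invariant by construction, at the cost of the limited expressiveness noted above.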
Yet approaches such as Stochastic Deep Network (De Bie et al., 2018) and Set Transformer (Lee et al., 2018) consider the pairwise (or higher-order) interactions among set elements and hence can capture more complex statistics of the distributions. These methods often result in higher performance in classification/regression tasks; however, they have run time complexities of O(n2) or higher.\nSubset sampling Several works have been proposed to handle large sets. Dovrat et al. (2019) proposed to learn to sample a subset from a set by generating k virtual points, then matching them back to a subset of the original set. However, such an element generation and matching process is highly inefficient. Our method on the other hand only learns to select from the original elements and does not suffer from such overhead. Wang et al. (2018) proposed to distill the knowledge of a large dataset to a small number of artificial data instances. However, these artificial data instances are only for faster training and do not capture the statistics of the original set. Moreover, the instances are generated artificially and can differ from the original set, making the method less applicable to other tasks. Also, several works (Qi et al., 2017a;c; Li et al., 2018b; Eldar et al., 1997; Moenning & Dodgson, 2003) propose farthest point sampling, which selects k points from a set by ensuring that the selected samples are far from each other on a given metric space.\nImage Compression Due to the huge demand for image and video transfer over the internet, a number of works have attempted to compress images with minimal distortion. These models (Toderici et al., 2017; Rippel & Bourdev, 2017; Mentzer et al., 2018; Li et al., 2018a) typically consist of a pair of encoder and decoder, where the encoder transforms the image into a compact matrix to reduce the memory footprint and communication cost, while the decoder is used to reconstruct the image back.
These methods, while achieving huge successes in the image compression problem, are less flexible than ours. Firstly, our model can be applied to any type of set (and instances represented as sets), while the aforementioned models mainly work for images represented in tensor form. Furthermore, our method can be applied both at the instance and dataset level.\nRepresentation learning Our instance-sampling model is also related to the Variational Auto Encoder (VAE) (Kingma & Welling, 2013). However, while a VAE learns a compact representation of a data point, our model learns a compact representation of a set. Balın et al. (2019) learn a global feature selection model for reconstruction of the input data from selected features via unsupervised learning. Chen et al. (2018) learn instance-wise feature selection with the goal of model interpretation, extracting the subset of features most informative for a given sample. Our method also falls in this category.\nActive Learning Active learning methods aim to select data points for labeling given a small labelled set. This setting differs from ours, since active learning does not consider label information while our method does. Moreover, our motivation is quite different: we focus on efficiency in the inference and training of non-parametric models by reducing the sizes of the inputs, be it pixels or instances, which greatly differs from the goal of active learning. Methods such as (Sener & Savarese, 2017; Coleman et al., 2019; Wei et al., 2015) all tackle the data selection problem in the active learning setting." }, { "heading": "3 APPROACH", "text": "" }, { "heading": "3.1 PRELIMINARIES", "text": "In this work, we consider data of the type D = {d1, . . . , dn} where individual di’s are possibly represented as input xi and target yi. D is the complete set and we assume that within D, there exists a subset Ds = {si, . . .
, sk} ⊂ D such that k ≪ n and that for an arbitrarily defined loss function `(., D) that we are interested in optimizing over the full set D, Ds can be used as a proxy for D such that `(., D) ≈ `(., Ds). In what follows, we present a method that learns the conditional distribution p(Ds|D) of the subset Ds via a two-stage selection procedure dubbed candidate selection and autoregressive subset selection. The overall objective is then to minimize the loss function with respect to the subset Ds, Ep(Ds|D)[`(., Ds)]. When the set D itself follows a distribution of sets as in the meta-learning framework, then the objective becomes ED[Ep(Ds|D)[`(., Ds)]]. In essence, we seek to construct a subset Ds that is optimally representative of the full set D w.r.t. `(.)." }, { "heading": "3.2 STOCHASTIC SUBSET SELECTION", "text": "In order to select Ds, we need to model the interactions among the elements of D and construct Ds based on said interactions. However, when the cardinality |D| of the set D is large or its elements di are high-dimensional, modeling such pairwise interactions becomes computationally infeasible. As such, we first present the candidate selection procedure used to construct a smaller set, Dc, without considering inter-sample dependencies. This is then followed by the autoregressive subset selection procedure used to construct Ds from Dc by modeling inter-sample dependencies. The complete model is depicted in Figure 2." }, { "heading": "3.3 CANDIDATE SELECTION", "text": "We model the task of candidate selection as a random Bernoulli process where the logits of the Bernoulli distribution are conditioned on the set representation of the full set D and the individual elements di ∈ D.
For a set D with cardinality n we define Z := {zi}ni=1 such that zi ∈ {0, 1} and zi = 1 implies that di ∈ Dc and for each di, zi is computed according to:\np(zi|di, D) = Ber(zi; ρ(di, r(D))), (1)\nwhere r(D) is a permutation-invariant function that compresses D into a single vector set representation and ρ(di, r(D)) computes the logits used to calculate the probability of di belonging to Dc. We implement both r(D) and ρ(di, r(D)) as neural networks and specifically for r(D), we use Deep Sets (Zaheer et al., 2017). Since Ber is non-differentiable, we use the continuous relaxations of the Bernoulli distribution introduced in (Maddison et al., 2016; Jang et al., 2016; Gal et al., 2017).\nSpecifically, to sample zi, we execute the following computational routine:\nzi = σ((1/τ)(log(πi/(1 − πi)) + log(u/(1 − u)))), πi = ρ(di, r(D)), u ∼ Unif(0, 1), (2)\nwhere σ is the Sigmoid function, τ is the temperature for the continuous relaxation and u is sampled from the uniform distribution. τ is set to 0.05 in all our experiments. Given that pair-wise interactions\nAlgorithm 1 Fixed Size Subset Selection Input k(subset size), q(# elements selected at each iteration), D = {d1, d2, . . . , dn} (Full Set) Output Ds = {s1, s2, . . . , sk} (selected subset) 1: procedure STOCHASTIC SUBSET SELECTION(k, q,D) 2: (π1, π2, . . . , πn)← (ρ(d1, r(D)), . . . , ρ(dn, r(D))) 3: zi ∼ Ber(πi) for i = 1, . . . , n. 4: Dc ← {di for i = 1 : n if zi} . Candidate Selection 5: Ds ← ∅ 6: for i = 1, . . . , k/q do . AutoRegressive Subset Selection 7: Ds ← Ds ∪ AUTOSELECT(q,Ds, Dc) . Select q elements 8: return Ds 9: procedure AUTOSELECT(q,Ds, Dc) 10: C = {w1, w2, . . . , wm} ← Dc \\Ds 11: (p1, p2, . . . , pm)← (f(w1, Dc, Ds), f(w2, Dc, Ds), . . . , f(wm, Dc, Ds)) 12: (p1, p2, . . . , pm)← (p1, p2, . . . , pm)/ ∑m j=1 pj 13: Q← Select q elements from C with probability (p1, p2, . . .
, pm) 14: return Q\nbetween elements are not considered in this stage, learning p(zi|di, D) ensures that highly activating samples are selected instead of a random subset of the original set." }, { "heading": "3.4 AUTOREGRESSIVE SUBSET SELECTION", "text": "The candidate selection stage can introduce samples with redundant information in Dc since no effort was made to compare the informativity of the elements. To alleviate this issue, we must first model the interactions between the elements of Dc and construct Ds based on the relative importance of individual elements. To construct a representative subset Ds with |Ds| = k, k iterative steps are required and at step i the probability of an element in Dc \\ D(i−1)s belonging to Ds is computed according to:\np(s_i = d \mid D_c, D_s^{(i-1)}) = \frac{f(d, D_c, D_s^{(i-1)})}{\sum_{d' \in D_c \setminus D_s^{(i-1)}} f(d', D_c, D_s^{(i-1)})} \quad \forall d \in D_c \setminus D_s^{(i-1)}, \qquad (3)\nwhere D_s^{(i-1)} is the constructed subset at iteration i − 1 and f is a positive function. The key to avoiding samples with redundant information in D_s^{(k)} lies in the fact that for each element added to Ds, its selection is conditioned on both Dc and all elements in D_s^{(i-1)}. We further propose a method that samples q elements from Cat(p1, . . . , pm) in a single pass for efficient training. Specifically, instead of sampling q times from the categorical distribution, we can sample the selection mask for element j from Ber(q ∗ pj). In this routine, the probability of element j being selected is q ∗ pj, which is very close to the original distribution. Algorithm 1 details the entire procedure. The inference complexity depends heavily on the choice of the function f. If f considers the pairwise interactions between all candidate elements and the selected elements, the inference complexity is O(n) + O(k2d/q), where n, d, k correspond to |D|, |Dc| and |Ds| respectively.
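The two-stage procedure above (Eq. 2 for candidate selection, plus the categorical steps of Algorithm 1) can be sketched in plain Python. Note the assumptions: the learned scoring networks ρ and f are stood in for by caller-supplied functions, and the set representation r(D) is replaced by a simple mean over a set of scalars — this is an illustrative sketch, not the paper's actual architecture.

```python
import math
import random

def relaxed_bernoulli(pi, tau=0.05):
    """Continuous relaxation of Bernoulli sampling (Eq. 2); pi must lie in (0, 1)."""
    u = random.uniform(1e-6, 1 - 1e-6)
    logit = math.log(pi / (1 - pi)) + math.log(u / (1 - u))
    return 1.0 / (1.0 + math.exp(-logit / tau))  # sigmoid with temperature tau

def subset_select(D, k, q, rho, f, threshold=0.5):
    """Two-stage selection in the spirit of Algorithm 1, on a set of scalars."""
    rep = sum(D) / len(D)  # stand-in for r(D): mean-pooled set representation
    # Stage 1: candidate selection via (relaxed) Bernoulli gates.
    Dc = [d for d in D if relaxed_bernoulli(rho(d, rep)) > threshold]
    # Stage 2: iterative categorical selection of q elements at a time,
    # conditioned on the candidates and the already-selected subset.
    Ds = []
    while len(Ds) < k and len(Ds) < len(Dc):
        C = [w for w in Dc if w not in Ds]
        scores = [f(w, Dc, Ds) for w in C]          # f must be positive
        total = sum(scores)
        probs = [s / total for s in scores]
        picks = random.choices(range(len(C)), weights=probs,
                               k=min(q, k - len(Ds)))
        for i in set(picks):                        # deduplicate within a pass
            Ds.append(C[i])
    return Ds
```

Any positive scoring function works for `f`; in the paper it would be a learned set network, here any toy function (e.g. `lambda w, Dc, Ds: 1.0 + abs(w)`) suffices to exercise the control flow.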
In our experiments, for the choice of the function f, we utilize either a Set Transformer (Lee et al., 2018) or DeepSets (Zaheer et al., 2017) to model the pairwise interactions between the elements of a given set.\n3.5 CONSTRAINING THE SIZE OF Dc\nFor computational efficiency, we may wish to restrict the size of Dc to save cost when constructing Ds. We adopt the idea of the Information Bottleneck and constrain the distribution of Z for Dc. Specifically,\nEp(D)[Ep(Ds|D)[`(., Ds)] + βKL[p(Z|D)||r(Z)]] (4)\nwhere r(Z) is a sparse prior. In our experiments, we set the parameter of the Bernoulli sparse prior r(Z) to either 0.1 or 0.01 for different levels of sparsity, and β is set to 1.0 or 0.001." }, { "heading": "3.6 TASKS", "text": "We now present four tasks to which the described subset selection method is applied.\nSet Reconstruction Given Ds and a network pθ(Y |X,Ds) parameterized by θ, the task is to reconstruct D = (X,Y ). The objective function for this task is given as:\nEp(D)[Ep(Ds|D)[− log pθ(Y |X,Ds)] + βKL[p(Z|D)||r(Z)]] (5)\nMinimizing this objective ensures that we learn a compact subset (Ds) most representative of D, and Ds can then be used for other tasks. We implement pθ(Y |X,Ds) as an Attentive Neural Process (ANP) (Kim et al., 2019b). An ANP takes as input a context (Ds in this case) and predicts a distribution over the elements in the original set D. It mimics the behaviour of a Gaussian Process but with reduced inference complexity. The complete model is depicted in Figure 3a. Experimental results for this task can be found in Section 4.1.\nSet Classification/Prediction We can also opt to train the network to predict a single target yD for the set D. For instance, the target could be the class of an image (classification) or the statistics of the set (regression). Here, pθ(yD|Ds) is a neural network that predicts the target yD.
A set in this task may be the features from a single example, such as an image; experimental results can be found in Section 4.2. The model for this task is depicted in Figure 3b. The objective function for this task is given as:\nEp(D)[Ep(Ds|D)[− log pθ(yD|Ds)] + βKL[p(Z|D)||r(Z)]] (6)\nModel # Pixels Storage mAUC\nFull Image All 38804 114KB 0.9157\nRS 500 5KB 0.8471\nSSS(rec) 500 5KB 0.8921\nSSS(MC) 500 5*5KB 0.9132\nSSS(ours) 500 5KB 0.9093\nTable 1: CelebA Attributes Classification.\nDataset Distillation: Instance Selection For this task, we are given a dataset D = {D1, . . . , Dn} where each Di is a set of data points sampled from the entire dataset. Using CelebA as an illustrative example, some Di may consist of |Di| randomly sampled faces from the whole dataset. The goal is to construct Ds for each Di ∈ D. We describe a model capable of taking as input Di ∈ D to perform a task such as the reconstruction of all elements in the given dataset.\nFor a single dataset Di ∈ D, we apply the subset construction method already described to obtain a Ds that can be used to reconstruct all the elements in Di. In essence, Di is distilled into a new dataset Ds with k < |Di| elements. The task then is to reconstruct the entire set Di back conditioned only\n#Instances 2 5 10 15 20 30\nFPS 6.50 4.51 3.07 2.75 2.71 2.29\nRandom 3.73 1.16 0.90 0.38 0.39 0.20\nSSS(ours) 2.53 1.02 0.59 0.33 0.24 0.17\nTable 2: FID score for varying numbers of selected instances.\n#Instances 1 2 5 10\nFPS 0.432 0.501 0.598 0.636\nRandom 0.444 0.525 0.618 0.663\nSSS(ours) 0.475 0.545 0.625 0.664\nTable 3: Accuracy on miniImagenet\non Ds. As a first step, we represent Ds as a unique representative vector c for each element in the dataset, akin to the statistics network used in the Neural Statistician (Edwards & Storkey, 2016) model.
Specifically, to reconstruct an element di ∈ Di given Ds, c is computed by applying a stochastic cross-attention mechanism on Ds, where the stochasticity is supplied by a query α which is computed using di. To obtain varying styles in the generated images, we additionally learn a latent variable w used to perturb c, and both are combined to obtain a new element x. The graphical model for this process is depicted in Figure 3c. Additionally, to ensure that c is properly learnt, we add an informativity loss by reconstructing c from the samples generated from the given dataset. The objective for the model depicted in Figure 3c for a single dataset D is:\nL(θ, φ, ψ) = ∑ di∈D [Eqφ(wi|di)[pθ(di|wi, ci)]− KL[qφ(wi|di)||pψ(w)]\n−KL[qφ(αi|di)||pψ(α)]− KL[qφ(ci|Ds, αi)||pφ(c)]] (7)\nwhere pψ(·) are priors on their respective latent variables and the qφ(·)’s are implemented with neural networks. All priors are chosen to be Gaussian with zero mean and unit variance. This objective is combined with the informativity loss on all samples in Di. It is important to note that c is computed using only Di for every element in D. In addition to Equation 7 and the informativity loss, the model is optimized together with the subset selection model already described. When the model is fully optimized, it is applied to the instance selection task on the given dataset. In summary, the purpose of the generative model introduced here is to train the subset selection module for the instance selection task. Experimental results for this task can be found in Section 4.3.\nDataset Distillation: Classification Finally, in the dataset distillation task, we consider the problem of selecting prototypes to be used for few-shot classification. Here, we adopt Prototypical Networks (Snell et al., 2017) and apply the subset selection model to the task of selecting representative prototypes from each class to be used for classifying new instances.
By learning to select the prototypes, we can remove outliers that would otherwise change the class prediction boundaries in the classification task. The complete graphical model for this task is given in Figure 3d, where again Ds corresponds to the selected prototypes and x∗ and y∗ correspond to the query and class label, respectively. Experimental results for this task can be found in Section 4.3." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present our experimental results. Model architectures and training hyperparameters are specified in Appendix C." }, { "heading": "4.1 FEATURE SELECTION EXPERIMENTS", "text": "Function Reconstruction - Approximation Our first experiment is on 1D function reconstruction. Suppose that we have a function f : [a, b]→ R. We first construct a set of data points of that function: D = {(x1, y1 = f(x1)), (x2, y2 = f(x2)), . . . , (xn, yn = f(xn))} where (x1, x2, . . . , xn) are uniformly distributed along the x-axis within the interval [a, b]. Now if we have a family of functions (f (1), f (2), . . . , f (N)), this will lead to a family of sets (D(1), D(2), . . . , D(N)). We train our model, which consists of the subset selection model p(Ds|D) and a task network p(Y |X,Ds) (e.g. an ANP), on this dataset and report the reconstruction loss, which is the negative log-likelihood.\nWe compare SSS with Random Select (RS), which randomly selects a subset of D and uses an ANP to reconstruct the set, and Learning to Sample (LTS) (Dovrat et al., 2019), which samples k elements and uses an ANP to reconstruct the set. Figure 4a shows the performance (reconstruction loss) of our model (SSS) and the baselines.\nSSS outperforms Random Select (RS), verifying that the subset selection model p(Ds|D) can learn a meaningful distribution for the selected elements.
Our model also outperforms the Learning to Sample (LTS) baseline.\nThrough the visualization of the selected points in Figure 5, we can see that our model tends to pick out more points (presented as red dots) in the drifting parts of the curve, which is reasonable since these parts are harder to reconstruct. The other two baselines sometimes fail to do that, which leads to inaccurate reconstructions.\nImage Reconstruction Given an image, we learn to select a core subset of pixels that best reconstructs the original image. Here, x is 2-dimensional and y is 3-dimensional for RGB images. An ANP is then used to reconstruct the remaining pixels from a set of context elements (the selected subset in our case). We conduct this experiment on the CelebA dataset (Liu et al., 2018). Figure 4b shows that our model significantly outperforms ANP with RS (as in the original ANP paper) and the LTS baseline. Figure 6 shows reconstruction samples of our model, which are visually better than the reconstructions of the baselines for the same number of pixels." }, { "heading": "4.2 CLASSIFICATION/REGRESSION", "text": "In this subsection, we validate our model on the prediction task. The goal is to learn to select a subset for a target task such as classification or regression. We again use the CelebA dataset, but this time the selected pixels are used to give predictions for 40 attributes of a celebrity’s face (in a multi-task learning setting). For our proposed model, only the selected pixels are used for prediction (the other pixels’ values are set to zeros). Table 1 shows that using only 500 pixels (∼1.3% of the total pixels in an image), we can achieve a mean AUC of 0.9093 (99.3% of the accuracy obtained with the full image). Figure 4c shows the classification performance (in terms of mean AUC) versus the number of pixels selected.
The AUC with selected pixels learned by our SSS is significantly higher than that of the random pixels baseline, showing the effectiveness of our subset selection method. We also include another baseline, namely SSS(rec). This is our stochastic subset selection model trained for reconstruction, but then later used for classification. Our model outperforms this variant, showing the effectiveness of training with the target task. Note that LTS cannot be applied to this experimental setup because, during training, the generated virtual points cannot be converted back to an image in matrix form (due to the virtual coordinates); thus we cannot train the LTS model with CNN-based classification on the target task.\nAblation Study Since our method is stochastic, the predictive distribution can be written as Ep(Ds|D) [pθ(yD|Ds)], and we can use Monte Carlo sampling to get the prediction in practice. However, throughout the experiment section, we only reported the result with one sampled subset, since it gives the best reduction in memory and computational cost. This can be seen as MC sampling with one sample. We compare it against another variant: SSS(MC) with MC sampling (5 samples). It should be noted that by doing MC sampling with 5 samples, the computational cost (inference) is increased by 5 times, and the memory requirement can be increased by up to 5 times too. Table 1 shows that our model achieves comparable performance with that variant, thus showing that it can achieve good performance on target tasks while reducing memory and computation requirements." }, { "heading": "4.3 DATASET DISTILLATION", "text": "Instance Selection We present results on the instance selection task applied to a whole dataset. In this task, we use the CelebA dataset since it has an imbalance both in terms of gender and race. A dataset is constructed by sampling 200 random images from the full dataset.
In this experiment, we seek to select only a few (5–30) representative images from these generated datasets. On this task, our subset selection module is trained via the procedure detailed in Section 3.6 on instance selection. To evaluate the effectiveness of the SSS model, we evaluate it in terms of the diversity of the selected subset using the Fréchet Inception Distance (FID score) (Heusel et al., 2017), which measures the similarity and diversity between two datasets. We compare our model with a baseline that randomly samples instances from the full dataset. Additionally, we compare our method with the Farthest Point Sampling (FPS) algorithm, which selects k points from a given set by computing distances in a metric space between all elements and selecting those elements that are furthest from each other. FPS in general seeks to obtain wide coverage over a given set and hence is a suitable baseline. The results of this experiment are presented in Table 4, where our selection method achieves a lower FID score compared to FPS and Random Sampling. Additionally, given that the dataset is highly imbalanced, FPS performs worst since, by selecting the furthest elements in the given set, it cannot capture the true distribution of the whole dataset, even when compared with Random Sampling. Also, for small sample selection, our method outperforms FPS and Random Sampling significantly, since our method is able to model the interactions within the full dataset and hence can select the most representative subset.\nClassification We use the miniImageNet dataset (Vinyals et al., 2016) and go from a 20-shot classification task to a 1-, 2-, 5-, or 10-shot classification task. We again compare with Random Sampling and FPS, applying them, as well as SSS, to reduce the number of shots.
The results for this experiment are shown in Table 3, where it can be observed that SSS learns to select more representative prototypes compared to the other methods, especially in the low-shot problems where the choice of prototypes matters more. All models were trained for 300 epochs and the best model was picked using a validation set." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have proposed a stochastic subset selection method to reduce the size of an arbitrary set while preserving performance on a target task. Our selection method utilizes a Bernoulli mask to perform candidate selection, and a stack of Categorical distributions to iteratively select a core subset from the candidate set. As a result, the selection process takes the dependencies among the set’s members into account. Hence, it can select a compact set that avoids samples with redundant information. By using the compact subset in place of the original set for a target task, we can save memory, communication and computational cost. We hope that this can facilitate the use of machine learning algorithms in resource-limited systems such as mobile and embedded devices." }, { "heading": "A APPENDIX", "text": "Organization This supplementary file is organized as follows. We provide the full pseudo-code for the Greedy Training Algorithm. We then show some visualizations of our method for feature selection (on both the 1D function and CelebA datasets) and report the results of multiple runs of the instance selection experiment, as well as its visualizations. Qualitative results for Instance Selection as applied to the few-shot classification task are provided, together with model specifications.\nA.1 GREEDY TRAINING ALGORITHM\nAlgorithm 2 shows our greedy training algorithm with stochastic gradient descent. The idea of the greedy training algorithm is to train the auto-regressive model to select the best next q elements from the candidate set to minimize the target loss on the selected samples.
By doing this, we do not have to run the auto-regressive model k/q times during training, thus reducing the computational cost.\nAlgorithm 2 Greedy Training Algorithm Input k(max subset size)\nq(# elements selected at each iteration) p(D) (distribution of sets) α (learning rate) a target task with loss function `(·, ·)\nOutput trained model with converged θ and φ 1: θ, φ← initialization 2: while not converged do 3: Sample a minibatch with m sets D(1), D(2), . . . , D(m) from p(D) 4: D(j)c ∼ p(D(j)c |D(j)) for j = 1 . . .m 5: i ∼ random sample from (0, . . . , k − q) 6: I(j) ∼ random i-element subset of D(j)c for j = 1 . . .m 7: Q(j) ∼ select a q-element subset from D(j)c \\ I(j) (with the auto-regressive model) 8: θ ← θ − α∇θ 1m ∑m j=1 `(·, I (j) ∪Q(j)), φ← φ− α∇φ 1m ∑m j=1 `(·, I (j) ∪Q(j))\nB INSTANCE SELECTION SAMPLES\nIn this section, we show more examples from our 1D and CelebA experiments of how the models select the set elements for the target task.\nB.0.1 1D FUNCTION - RECONSTRUCTION\nFigure 8 shows reconstruction samples of our model on the 1D function dataset, which are visibly better than those of Learning to Sample (LTS) or Random Subset (RS). Since RS selects the set elements randomly, it can leave out important parts of the 1D curve, leading to inaccurate reconstructions. LTS also selects an insufficient number of set elements in some parts of the curves, resulting in suboptimal reconstructions.\nB.1 CELEBA\nFigure 9 shows the selected pixels of our model for both the classification and reconstruction tasks. For the attribute classification task, the model tends to select pixels mainly from the face, since the task is to classify characteristics of the person. For reconstruction, the selected pixels are more evenly distributed, since the background also contributes significantly to the reconstruction loss.\nB.2 DATASET DISTILLATION: INSTANCE SELECTION\nIn Table 4, we present the full results for the Instance Selection model on the CelebA dataset.
For these experiments, we construct a set by randomly sampling 200 face images from the full dataset. To evaluate the model, we create multiple such datasets and run the baselines (Random Sampling\nand FPS) and SSS on the same datasets. The FID metric is then computed on the instances and averaged over all the randomly constructed datasets. For FPS, we use the open-source implementation in https://github.com/rusty1s/pytorch_cluster. Further, we provide qualitative results on a single dataset in Figure 10, where we show how our model picks 5 instances from the full set of 200 face images.\nB.3 DATASET DISTILLATION: CLASSIFICATION\nIn Figure 11 we provide visualizations for the instance selection problem as applied to the few-shot classification task. Here, we go from a 20-shot to a 1-shot classification problem, where the prototype is selected from the support set using SSS." }, { "heading": "C MODEL SPECIFICATIONS", "text": "SSS consists of r(D), ρ(di, r(D)) and f(d,Dc, D(i−1)s). We describe these models in this section.\nFor all experiments, r(D) is implemented as DeepSets. This means that we take the mean of all the samples in a set to obtain the set representation.\nρ(di, r(D)) is implemented as a neural network with the following specifications: there are 3 Linear layers, each followed by a ReLU activation. Also, all inputs are projected into feature space using 3 Linear layers, each followed by a ReLU activation.\nIn the set classification task, f(d,Dc, D(i−1)s) is implemented as a Set Transformer network. All other experiments use DeepSets." } ]
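To make the mean-pooled DeepSets representation r(D) described above concrete, the following sketch applies an element-wise feature map and then mean-pools. The hand-set weights, dimensions, and the single linear-plus-ReLU layer (in place of the learned 3-layer networks) are illustrative assumptions only.

```python
def linear_relu(x, W, b):
    """One Linear layer followed by a ReLU activation."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def deepsets_repr(D, phi):
    """r(D): apply phi to every element of the set, then mean-pool.
    Mean pooling makes the representation permutation-invariant."""
    feats = [phi(d) for d in D]
    dim = len(feats[0])
    return [sum(f[j] for f in feats) / len(feats) for j in range(dim)]
```

Because the pooling is a mean, permuting the input set leaves the representation unchanged (up to floating-point ordering), which is the property the paper relies on for r(D).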
2020
null
SP:874a441d5c5c7582a1d548bc5d0c635ed032434f
[ "This paper studies the dynamic regret of online multiple mirror descent, which is online mirror descent with M repeated steps on each of T sequential loss functions. The authors show three bounds for the dynamic regret of OMMD, which generalizes OMGD [Zhang et al. '17]: C_T (the path length of the minimizer sequence), S_T (the sum of squared segment lengths), and G_T (the squared dual gradient norm of the points played).", "This work derives a new upper bound on the dynamic regret for online convex optimization in the restricted setting where the comparison sequence is made up of the minimizers x*_1,...,x*_T of the loss sequence. There are three main parameters that control regret in this case: the path length C_T = ||x*_2-x*_1||+...+||x*_T-x*_{T-1}||; the squared path length S_T = ||x*_2-x*_1||^2+...+||x*_T-x*_{T-1}||^2; and the sum G_T of squared loss gradient norms evaluated at x*_1,...,x*_T." ]
We study the problem of online convex optimization, where a learner makes sequential decisions to minimize an accumulation of strongly convex costs over time. The quality of decisions is given in terms of the dynamic regret, which measures the performance of the learner relative to a sequence of dynamic minimizers. Prior works on gradient descent and mirror descent have shown that the dynamic regret can be upper bounded using the path length, which depends on the differences between successive minimizers, and an upper bound using the squared path length has also been shown when multiple gradient queries are allowed per round. However, they all require the cost functions to be Lipschitz continuous, which imposes a strong requirement, especially when the cost functions are also strongly convex. In this work, we consider Online Multiple Mirror Descent (OMMD), which is based on mirror descent but uses multiple mirror descent steps per online round. Without requiring the cost functions to be Lipschitz continuous, we derive two upper bounds on the dynamic regret based on the path length and squared path length. We further derive a third upper bound that relies on the gradients of the cost functions, which can be much smaller than the path length or squared path length, especially when the cost functions are smooth but fluctuate over time. Thus, we show that the dynamic regret of OMMD scales linearly with the minimum among the path length, squared path length, and sum of squared gradients. Our experimental results further show substantial improvement on the dynamic regret compared with existing alternatives.
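The "multiple mirror descent steps per round" idea in the abstract can be illustrated in the Euclidean special case, where the mirror map is ½‖·‖² and each mirror descent step reduces to a plain gradient step. The moving quadratic costs and the step size below are illustrative assumptions, not the paper's setup.

```python
def ommd_euclidean(grads, x0, M, eta):
    """Online multiple (mirror) descent with the Euclidean mirror map:
    at round t, play x_t, observe cost f_t, then take M gradient steps on f_t."""
    x = x0
    decisions = []
    for g in grads:          # one revealed cost gradient per online round
        decisions.append(x)  # commit the decision before observing f_t
        for _ in range(M):   # M gradient queries on the same cost
            x -= eta * g(x)
    return decisions
```

On strongly convex costs such as f_t(x) = (x − c_t)², each inner step contracts the distance to the round's minimizer by a constant factor, so after M steps the played points closely track a slowly drifting minimizer sequence — the regime in which path-length-style regret bounds are small.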
[]
[ { "authors": [ "Heinz H Bauschke", "Jonathan M Borwein" ], "title": "Joint and separate convexity of the bregman distance", "venue": "Studies in Computational Mathematics,", "year": 2001 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Operations Research Letters,", "year": 2003 }, { "authors": [ "Omar Besbes", "Yonatan Gur", "Assaf Zeevi" ], "title": "Non-stationary stochastic optimization", "venue": "Operations Research,", "year": 2015 }, { "authors": [ "Nicolo Cesa-Bianchi", "Gábor Lugosi" ], "title": "Prediction, Learning, and Games", "venue": "Cambridge university press,", "year": 2006 }, { "authors": [ "Koby Crammer", "Ofer Dekel", "Joseph Keshet", "Shai Shalev-Shwartz", "Yoram Singer" ], "title": "Online passive-aggressive algorithms", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "John C Duchi" ], "title": "Introductory lectures on stochastic optimization", "venue": "The mathematics of data,", "year": 2018 }, { "authors": [ "John C Duchi", "Shai Shalev-Shwartz", "Yoram Singer", "Ambuj Tewari" ], "title": "Composite objective mirror descent", "venue": "In Proceedings of the Conference on Learning Theory,", "year": 2010 }, { "authors": [ "Eric C Hall", "Rebecca M Willett" ], "title": "Online convex optimization in dynamic environments", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2015 }, { "authors": [ "Elad Hazan", "Satyen Kale" ], "title": "Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Elad Hazan", "Adam Tauman Kalai", "Satyen Kale", "Amit Agarwal" ], "title": "Logarithmic regret algorithms for online convex optimization", "venue": "In Proceedings of the Conference on Learning Theory,", "year": 2006 }, { "authors": [ "Ali Jadbabaie", "Alexander 
Rakhlin", "Shahin Shahrampour", "Karthik Sridharan" ], "title": "Online optimization:Competing with dynamic comparators", "venue": "In Proceedings of the International Conference on Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "URL http: //www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf", "year": 2009 }, { "authors": [ "Minghong Lin", "Adam Wierman", "Lachlan LH Andrew", "Eno Thereska" ], "title": "Dynamic right-sizing for power-proportional data centers", "venue": "IEEE/ACM Transactions on Networking,", "year": 2012 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Aryan Mokhtari", "Shahin Shahrampour", "Ali Jadbabaie", "Alejandro Ribeiro" ], "title": "Online optimization in dynamic environments: Improved regret rates for strongly convex problems", "venue": "In Proceedings of the IEEE Conference on Decision and Control,", "year": 2016 }, { "authors": [ "Arkadii Nemirovsky", "David Yudin" ], "title": "Problem Complexity and Method Efficiency in Optimization", "venue": null, "year": 1983 }, { "authors": [ "Sasha Rakhlin", "Karthik Sridharan" ], "title": "Optimization, learning, and games with predictable sequences", "venue": "In Proceedings of the International Conference on Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Shahin Shahrampour", "Ali Jadbabaie" ], "title": "Distributed online optimization in dynamic environments using mirror descent", "venue": 
"IEEE Transactions on Automatic Control,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz" ], "title": "Online learning and online convex optimization", "venue": "Foundations and Trends in Machine Learning,", "year": 2012 }, { "authors": [ "Shai Shalev-Shwartz", "Yoram Singer" ], "title": "Logarithmic Regret Algorithms for Strongly Convex Repeated Games", "venue": "The Hebrew University,", "year": 2007 }, { "authors": [ "Ming Shi", "Xiaojun Lin", "Sonia Fahmy", "Dong-Hoon Shin" ], "title": "Competitive online convex optimization with switching costs and ramp constraints", "venue": "In Proceedings of the IEEE Conference on Computer Communications,", "year": 2018 }, { "authors": [ "Tianbao Yang", "Lijun Zhang", "Rong Jin", "Jinfeng Yi" ], "title": "Tracking slowly moving clairvoyant: Optimal dynamic regret of online learning with true and noisy gradient", "venue": "In Proceedings of International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jianjun Yuan", "Andrew Lamperski" ], "title": "Online control basis selection by a regularized actor critic algorithm", "venue": "In Proceedings of the IEEE American Control Conference,", "year": 2017 }, { "authors": [ "Lijun Zhang", "Tianbao Yang", "Jinfeng Yi", "Rong Jin", "Zhi-Hua Zhou" ], "title": "Improved dynamic regret for non-degenerate functions", "venue": "In Proceedings of the International Conference on Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Martin Zinkevich" ], "title": "Online convex programming and generalized infinitesimal gradient ascent", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Online optimization refers to the design of sequential decisions where system parameters and cost functions vary with time. It has applications to various classes of problems, such as object tracking (Shahrampour & Jadbabaie, 2017), networking (Shi et al., 2018), cloud computing (Lin et al., 2012), and classification (Crammer et al., 2006). It is also an important tool in the development of algorithms for reinforcement learning (Yuan & Lamperski, 2017) and deep learning (Mnih et al., 2015).\nIn this work, we consider online convex optimization, which can be formulated as a discrete-time sequential learning process as follows. At each round t, the learner first makes a decision xt ∈ X , where X is a convex set representing the solution space. The learner then receives a convex cost function ft(x) : X → R and suffers the corresponding cost of ft(xt) associated with the submitted decision. The goal of the online learner is to minimize the total accrued cost over a finite number of rounds, denoted by T . For performance evaluation, prior studies on online learning often focus on the static regret, defined as the difference between the learner’s accumulated cost and that of an optimal fixed offline decision, which is made in hindsight with knowledge of ft(·) for all t:\nRegsT = T∑ t=1 ft(xt)−min x∈X T∑ t=1 ft(x).\nA successful online algorithm closes the gap between the online decisions and the offline counterpart when normalized by T , i.e., sustaining sublinear static regret in T . In the literature, there are various online algorithms (Zinkevich, 2003; Cesa-Bianchi & Lugosi, 2006; Hazan et al., 2006; Duchi et al., 2010; Shalev-Shwartz, 2012) that guarantee a sublinear bound on the static regret.\nHowever, algorithms that guarantee performance close to that of a static decision may still perform poorly in dynamic settings. 
Consequently, the static regret fails to accurately reflect the quality of decisions in many practical scenarios. Therefore, the dynamic regret has become a popular metric in recent works (Besbes et al., 2015; Mokhtari et al., 2016; Yang et al., 2016; Zhang et al., 2017), which allows a dynamic sequence of comparison targets and is defined by

Reg^d_T = ∑_{t=1}^{T} f_t(x_t) − ∑_{t=1}^{T} f_t(x*_t),

where x*_t = argmin_{x∈X} f_t(x) is a minimizer of the cost at round t.

It is well-known that the online optimization problem may be intractable in a dynamic setting, due to arbitrary fluctuation in the cost functions. Hence, achieving a sublinear bound on the dynamic regret may be impossible. However, it is possible to upper bound the dynamic regret in terms of certain regularity measures. One of the measures to represent regularity is the path length, defined by

C_T = ∑_{t=2}^{T} ‖x*_t − x*_{t−1}‖, (1)

which illustrates the accumulative variation in the minimizer sequence. For instance, the dynamic regret of online gradient descent for convex cost functions can be bounded by O(√T(1 + C_T)) (Zinkevich, 2003).¹ For strongly convex functions, the dynamic regret of online gradient descent can be reduced to O(C_T) (Mokhtari et al., 2016). When the cost functions are smooth and strongly convex, by allowing the learner to make multiple queries to the gradient of the cost functions, the regret bound can be further improved to O(min(C_T, S_T)), where S_T represents the squared path length, defined by

S_T = ∑_{t=2}^{T} ‖x*_t − x*_{t−1}‖², (2)

which can be smaller than the path length when the distance between successive minimizers is small. All the aforementioned studies require the cost functions to be Lipschitz continuous. However, there are many commonly used cost functions, e.g., the quadratic function, that do not meet the Lipschitz condition. 
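The regularity measures in equations 1 and 2 are straightforward to compute for any minimizer sequence; the sketch below (with an assumed, slowly drifting sequence, not data from the paper) illustrates how the squared path length S_T can be far smaller than the path length C_T when successive minimizers are close.

```python
# Path length C_T (eq. 1) and squared path length S_T (eq. 2) for a minimizer
# sequence x*_1, ..., x*_T; the drifting sequence here is an assumed toy example.
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def path_lengths(minimizers):
    C = S = 0.0
    for prev, cur in zip(minimizers, minimizers[1:]):
        d = norm([a - b for a, b in zip(cur, prev)])
        C += d       # accumulates ||x*_t - x*_{t-1}||
        S += d * d   # accumulates ||x*_t - x*_{t-1}||^2
    return C, S

# Slowly drifting minimizers: 100 steps of size 0.01, so S_T = 0.01 * C_T.
xs = [(0.01 * t, 0.0) for t in range(101)]
C_T, S_T = path_lengths(xs)  # C_T = 1.0, S_T = 0.01
```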
In addition, the above works rely on measuring distances using Euclidean norms, which hinders the projection step in the gradient descent update for some constraint sets, e.g., the probability simplex (Duchi, 2018).

Besides gradient descent, mirror descent is another well-known technique of online convex optimization (Hall & Willett, 2015; Jadbabaie et al., 2015). Mirror descent uses the Bregman divergence, which generalizes the Euclidean norm used in the projection step of gradient descent, thus acquiring expanded applicability to a broader range of problems. In addition, the Bregman divergence is only mildly dependent on the dimension of the decision variables (Beck & Teboulle, 2003; Nemirovsky & Yudin, 1983), so that mirror descent is optimal among first-order methods when the decision variables have high dimensions (Duchi et al., 2010). In this work we focus on the mirror descent approach.

In previous works on online mirror descent, the learner queries the gradient of each cost function only once and performs one step of mirror descent to update its decision (Hall & Willett, 2015; Shahrampour & Jadbabaie, 2017). In this case, the dynamic regret has an upper bound of order O(√T(1 + C_T)), which is the same as that of online gradient descent in (Zinkevich, 2003). In this work, we investigate whether it is possible to improve the dynamic regret when the learner performs multiple mirror descent steps in each online round, while relaxing the Lipschitz continuity condition on the cost functions.

To this end, we analyze the performance of the Online Multiple Mirror Descent (OMMD) algorithm, which uses multiple steps of mirror descent per online round. When the cost functions are smooth and strongly convex, we show that the upper bound on the dynamic regret can be reduced from O(√T(1 + C_T)) to O(min(C_T, S_T, G_T)), where G_T represents the sum of squared gradients, i.e.,

G_T = ∑_{t=1}^{T} ‖∇f_t(x_t)‖²_*, (3)

where ‖·‖_* denotes the dual norm. The sum of squared gradients G_T can be smaller than both the path length and the squared path length, especially when the cost functions fluctuate drastically over time. In contrast to the aforementioned works, our analysis does not require the cost functions to be Lipschitz continuous. Furthermore, our numerical experiments suggest substantially reduced dynamic regret compared with the best known alternatives, including single-step dynamic mirror descent (Hall & Willett, 2015), online multiple gradient descent (Zhang et al., 2017), and online gradient descent (Zinkevich, 2003).

¹A more general definition of the dynamic regret was introduced in (Zinkevich, 2003), which allows comparison against an arbitrary sequence {u_t}, t = 1, . . . , T. We note that the regret bounds developed in (Zinkevich, 2003) also hold for the specific case of u_t = x*_t." }, { "heading": "2 ONLINE MULTIPLE MIRROR DESCENT", "text": "In this section, we describe OMMD and discuss how the learner can improve the dynamic regret by performing multiple mirror descent steps per round. Before delving into the details, we proceed by stating several definitions and standard assumptions." }, { "heading": "2.1 PRELIMINARIES", "text": "Definition 1: The Bregman divergence with respect to the regularization function r(·) is defined as

D_r(x, y) = r(x) − r(y) − 〈∇r(y), x − y〉.

The Bregman divergence is a general distance-measuring function, which contains the squared Euclidean distance and the Kullback-Leibler divergence as two special cases.

Using the Bregman divergence, a generalized definition of strong convexity is given in (Shalev-Shwartz & Singer, 2007).

Definition 2: A convex function f(·) is λ-strongly convex with respect to a convex and differentiable function r(·) if

f(y) + 〈∇f(y), x − y〉 + λ D_r(x, y) ≤ f(x), ∀x, y ∈ X.

Following many prior studies on mirror descent, we assume that the cost functions are λ-strongly convex, where the above generalized strong convexity definition is used. 
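Definition 1 can be checked directly for the two special cases mentioned: r(x) = ½‖x‖² yields the squared Euclidean distance, and the negative entropy yields the KL divergence on the simplex. A small sketch with toy points (assumed for illustration):

```python
import math

def bregman(r, grad_r, x, y):
    # D_r(x, y) = r(x) - r(y) - <grad r(y), x - y>   (Definition 1)
    return r(x) - r(y) - sum(g * (a - b) for g, a, b in zip(grad_r(y), x, y))

# r(x) = (1/2)||x||^2  ->  D_r(x, y) = (1/2)||x - y||^2
sq = lambda v: 0.5 * sum(c * c for c in v)
sq_grad = lambda v: list(v)

# r(x) = sum_j x_j log x_j (negative entropy) -> KL divergence on the simplex
negent = lambda v: sum(c * math.log(c) for c in v)
negent_grad = lambda v: [math.log(c) + 1.0 for c in v]

x, y = [0.2, 0.8], [0.5, 0.5]       # assumed toy points on the 2-simplex
d_euc = bregman(sq, sq_grad, x, y)  # = 0.5 * (0.3^2 + 0.3^2) = 0.09
d_kl = bregman(negent, negent_grad, x, y)  # = sum_j x_j log(x_j / y_j)
```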
We further assume that the cost functions are L-smooth, and the regularization function r(·) is Lr-smooth and 1-strongly convex with respect to some norm (refer to App. A for definitions). We note that these are standard assumptions commonly used in the literature after the group of studies began by (Hazan et al., 2006; Shalev-Shwartz & Singer, 2007), to provide stronger regret bounds by constraining the curvature of cost functions.\nWe further make a standard assumption that the Bregman divergence is Lipschitz continuous as follows:\n|Dr(x, z)−Dr(y, z)| ≤ K‖x− y‖, ∀x, y, z ∈ X , where K is a positive constant. We note that this condition is much milder than the condition of Lipschitz continuous cost functions required in (Zhang et al., 2017; Mokhtari et al., 2016; Hall & Willett, 2015). There is a notable weakness in such bounds. Since the sequence of cost functions are revealed to the learner, the learner has no control over it. If these cost functions happen to not meet the Lipschitz condition, earlier analyses that require this condition become inapplicable. In this work, we do not require the cost functions to be Lipschitz continuous. Instead, we move the Lipschitz continuity condition from the cost functions to the Bregman divergence to broaden the application of our work. The main benefit of this is that the regularization function and the corresponding Bregman divergence is within the control of the learner. The learner can carefully design this regularization function to satisfy the Lipschitz continuity of the associated Bregman divergence with a small factor. For example, in the particular case of the KL divergence, which is obtained by the choosing negative entropy as the regularization function, on the set X = {x| ∑d i=1 xi = 1;xi ≥ 1 D}, the constant K is of O(logD). 
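The Lipschitz condition on the Bregman divergence can be probed numerically for the KL example on a truncated simplex {x : ∑_j x_j = 1, x_j ≥ 1/D}. The constant 2 log D + 1 used below is a crude bound of the stated O(log D) order, and the dimension and truncation level are assumed for illustration.

```python
import math, random

def kl(x, z):
    # Bregman divergence of the negative entropy = KL divergence on the simplex
    return sum(a * math.log(a / b) for a, b in zip(x, z))

def random_point(d, floor):
    # a random point on the truncated simplex {x : sum x_j = 1, x_j >= floor}
    w = [random.random() for _ in range(d)]
    s = sum(w)
    free = 1.0 - d * floor
    return [floor + free * c / s for c in w]

random.seed(0)
d, D = 20, 100           # dimension d and truncation x_j >= 1/D (assumed)
floor = 1.0 / D
K = 2 * math.log(D) + 1  # crude Lipschitz constant of order O(log D)
ok = True
for _ in range(200):
    x, y, z = (random_point(d, floor) for _ in range(3))
    gap = abs(kl(x, z) - kl(y, z))
    l1 = sum(abs(a - b) for a, b in zip(x, y))
    ok = ok and gap <= K * l1 + 1e-9  # |D_r(x,z) - D_r(y,z)| <= K ||x - y||_1
```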
Other examples of many widely used Bregman divergences that satisfy this condition are given in (Bauschke & Borwein, 2001).\nAlgorithm 1 Online Multiple Mirror Descent Input: Arbitrary initialization of x1 ∈ X ; step size α; time horizon T . Output: Sequence of decisions {xt : 1 ≤ t ≤ T}. 1: for t = 1, 2, . . . , T do 2: submit xt ∈ X and receive ft(·) 3: set y1t = xt 4: for i = 1, 2, . . . ,M do 5: yi+1t = argminy∈X {〈∇ft(yit), y〉+ 1αDr(y, y i t)}\n6: end for 7: set xt+1 = yM+1t 8: end for" }, { "heading": "2.2 ONLINE CONVEX OPTIMIZATION WITH OMMD", "text": "We consider online optimization over a finite number of rounds, denoted by T . At the beginning of every round t, the learner submits a decision represented by xt, which is taken from a convex and compact set X . Then, an adversary selects a function ft(·) and the learner suffers the corresponding cost ft(xt). The learner then updates its decision in the next round. With standard mirror descent, this is given by\nxt+1 = argmin x∈X\n{〈∇ft(xt), x〉+ 1\nα Dr(x, xt)} (4)\nwhere α is a fixed step size, andDr(·, ·) is the Bregman divergence corresponding to the regularization function r(·). The update in equation 4 suggests that the learner aims to stay close to the current decision xt as measured by the Bregman divergence, while taking a step in a direction close to the negative gradient to reduce the current cost at round t.\nOMMD uses mirror descent in its core as the optimization workhorse. However, in contrast to classical online optimization methods, where the learner queries the gradient of each cost function only once, OMMD is designed to take advantage of the curvature of cost functions by allowing the learner to make multiple queries to the gradient in each round. This is especially important when the successive cost functions have similar curvatures. In particular, in order to track x∗t+1 the learner needs to access the gradient of the cost function, i.e.,∇ft+1(·). 
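Algorithm 1 admits a compact implementation. The sketch below (illustrative, not the authors' code) instantiates OMMD with the negative-entropy regularizer on the probability simplex, where each inner mirror descent step has a closed multiplicative form; the toy quadratic costs, step size, and number of steps M are assumptions.

```python
import math

def md_step(y, grad, alpha):
    # One mirror descent step with the negative-entropy regularizer on the
    # simplex: argmin_y <g, y> + (1/alpha) * KL(y, y_old) has this
    # multiplicative closed form.
    w = [c * math.exp(-alpha * g) for c, g in zip(y, grad)]
    s = sum(w)
    return [c / s for c in w]

def ommd(grad_oracles, x1, alpha, M):
    # Algorithm 1: submit x_t, observe f_t, take M mirror descent steps.
    x, decisions = list(x1), []
    for grad_f in grad_oracles:
        decisions.append(x)
        y = x
        for _ in range(M):
            y = md_step(y, grad_f(y), alpha)
        x = y  # x_{t+1} = y_t^{M+1}
    return decisions

# Assumed toy costs f_t(x) = ||x - b_t||^2 on the 3-simplex (illustrative).
targets = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]
oracles = [lambda x, b=b: [2 * (c - bc) for c, bc in zip(x, b)] for b in targets]
xs = ommd(oracles, x1=[1.0 / 3] * 3, alpha=0.2, M=5)
```

With multiple inner steps, the submitted decision at round t + 1 lies much closer to the round-t minimizer, which is exactly the mechanism the analysis below quantifies.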
Unfortunately, this information is not available until the end of round t + 1. However, if the successive functions have similar curvatures, the gradient of f_t(·) is a reasonably accurate estimate of the gradient of f_{t+1}(·). In this case, every time the learner queries the gradient of f_t(·), it finds a point that is likely to be closer to the minimizer of f_{t+1}(·). Hence, it may benefit the learner to perform multiple mirror descent steps in each round.

Thus, the learner generates a series of decisions, represented by y¹_t, y²_t, . . . , y^{M+1}_t, via the following updates:

y¹_t = x_t, y^{i+1}_t = argmin_{y∈X} {〈∇f_t(y^i_t), y〉 + (1/α) D_r(y, y^i_t)}, i = 1, 2, . . . , M. (5)

Then, by setting x_{t+1} = y^{M+1}_t, the learner proceeds to the next round, and the procedure continues. Note that M is independent of T.

Applying multiple steps of mirror descent can reveal more information about the sequence of minimizers. It can reduce the dynamic regret, but only if the series of decisions in equation 5 helps decrease the distance to the minimizer x*_{t+1}. Therefore, quantifying the benefit of OMMD over standard mirror descent requires careful analysis of the impact of the fluctuation of f_t(·) over time. To this end, we provide an analysis to bound the dynamic regret of OMMD in the next section." }, { "heading": "3 THEORETICAL RESULTS", "text": "The following lemma paves the way for the proposed analysis of the dynamic regret of OMMD. It bounds the distance of the learner's future decision from the current optimal solution after a single step of mirror descent.

Lemma 1 Assume that f_t(·) is λ-strongly convex with respect to a differentiable function r(·), and is L-smooth. Single-step mirror descent with a fixed step size α ≤ 1/L guarantees the following:

D_r(x*_t, x_{t+1}) ≤ β D_r(x*_t, x_t),

where x*_t is the unique minimizer of f_t(·), and β = 1 − 2αλ/(1 + αλ).

Lemma 1 is proved in App. B in the supplementary material.

Remark 1. 
Lemma 1 states that a mirror descent step reduces the distance (measured by the Bregman divergence) of the learner's decision to the current minimizer. This generalizes the results in (Mokhtari et al., 2016; Zhang et al., 2017), where similar bounds were derived for online gradient descent when the distance was measured in Euclidean norms. In particular, those results correspond to the special choice of r(x) = ‖x‖²₂, which reduces the Bregman divergence to the squared Euclidean distance, i.e., D_r(x, y) = ‖x − y‖². Lemma 1 indicates that the distance between the next decision x_{t+1} and the minimizer x*_t is strictly smaller than the distance between the current decision x_t and the minimizer at round t. This implies that if the minimizers of the functions f_t(·) and f_{t+1}(·), which are x*_t and x*_{t+1} respectively, are not far from each other, applying mirror descent multiple times enables the online learner to more accurately track the sequence of optimal solutions x*_t.

The succeeding theorems provide three separate upper bounds on the dynamic regret of OMMD, based on the path length C_T (as defined in equation 1), the squared path length S_T (as defined in equation 2), and the sum of squared gradients G_T (as defined in equation 3).

Theorem 2 Assume that r(·) is L_r-smooth and 1-strongly convex with respect to some norm ‖·‖, and that the cost functions are L-smooth and λ-strongly convex with respect to r(·). Let x_t be the sequence of decisions generated by OMMD with a fixed step size 1/(2λ) < α ≤ 1/L and M ≥ ⌈(1/2 + 1/(2αλ)) log L_r⌉ mirror descent steps per round. The dynamic regret satisfies the following bound:

∑_{t=1}^{T} [f_t(x_t) − f_t(x*_t)] ≤ (Kλ/(2αλ − 1)) · ((1 + √(L_r β^M))/(1 − √(L_r β^M))) · (C_T + ‖x*_1 − x_1‖),

where β is the shrinking factor derived in Lemma 1, and K is the Lipschitz constant associated with D_r(·, ·).

The proof of Theorem 2 is given in App. C in the supplementary material.

Remark 2. 
It has been shown in (Hall & Willett, 2015) that single-step mirror descent guarantees an upper bound of O(√T(1 + C_T)) on the dynamic regret for convex cost functions. With that bound, a sublinear path length is not sufficient to guarantee sublinear dynamic regret. In contrast, Theorem 2 implies that OMMD reduces the upper bound to O(C_T) when the cost functions are strongly convex and smooth, which implies that a sublinear path length is sufficient to yield sublinear dynamic regret.

Remark 3. The range of M where the bound in Theorem 2 holds is usually wide. For example, it is M ≥ 3 and M ≥ 5 for the two experiments shown in Section 4.

Theorem 3 Under the same convexity and smoothness conditions stated in Theorem 2, let x_t be the sequence of decisions generated by OMMD with a fixed step size α ≤ 1/L and M ≥ ⌈(1/2 + 1/(2αλ)) log 2L_r⌉ mirror descent steps per round. For any arbitrary positive constant θ, the dynamic regret is upper bounded by

∑_{t=1}^{T} [f_t(x_t) − f_t(x*_t)] ≤ ∑_{t=1}^{T} ‖∇f_t(x*_t)‖²_*/(2θ) + ((L L_r + θ)/(1 − 2L_r β^M)) · ((S_T + ‖x*_1 − x_1‖²)/2).

Theorem 3 is proved in App. D in the supplementary material.

Since the gradient at x*_t is zero if x*_t is in the relative interior of the feasibility set X, i.e., ‖∇f_t(x*_t)‖ = 0, the above theorem can be simplified to the following corollary.

Corollary 4 If x*_t belongs to the relative interior of the feasibility set X for all t, the dynamic regret bound in Theorem 3 is of order O(S_T).

When the cost functions drift slowly, the distances between successive minimizers are small. Hence, the squared path length S_T, which relies on the squares of those distances, can be significantly smaller than the path length C_T. In this case, Theorem 3 and Corollary 4 can provide a tighter regret bound than Theorem 2.

Theorem 5 Under the same convexity and smoothness conditions stated in Theorem 2, let x_t be the sequence of decisions generated by OMMD with a fixed step size α > 1/(2λ). 
The following bound holds on the dynamic regret:

∑_{t=1}^{T} [f_t(x_t) − f_t(x*_t)] ≤ (α²λ/(4αλ − 2)) · G_T.

The proof of Theorem 5 is given in App. E in the supplementary material.

Remark 4. Interestingly, Theorem 5 implies that sublinear dynamic regret can be achieved when the gradients of the cost functions shrink over time. For instance, if ‖∇f_t(x)‖_* = O(1/t^γ) for some γ > 0, Theorem 5 guarantees O(T^{1−2γ}) dynamic regret. This is especially important when the cost functions decrease while the minimizers fluctuate. In this scenario, the path length C_T and squared path length S_T may grow linearly, whereas diminishing gradients ensure sublinear G_T.

Theorem 2, Corollary 4, and Theorem 5, respectively, state that the dynamic regret of OMMD is upper bounded linearly by the path length C_T, the squared path length S_T, and the sum of squared gradients G_T. This immediately leads to the following result.

Corollary 6 Under the same convexity and smoothness conditions stated in Theorem 2, the dynamic regret of OMMD with suitably chosen α and M has an upper bound of O(min(C_T, S_T, G_T)).

Remark 5. We note that (Mokhtari et al., 2016) and (Zhang et al., 2017) provide upper bounds of O(C_T) and O(min(C_T, S_T)), respectively, on the dynamic regret of online gradient descent with single and multiple gradient queries, while (Hall & Willett, 2015) presents an upper bound of O(√T(1 + C_T)) on the dynamic regret of online single-step mirror descent. Corollary 6 shows that OMMD can improve the dynamic regret bound to O(min(C_T, S_T, G_T)). Furthermore, in contrast to these studies, our analysis does not require the cost functions to be Lipschitz continuous.

The quantities C_T, S_T, and G_T represent distinct aspects of an online learning problem and are not generally comparable. The following example demonstrates the benefit of having multiple upper bounds and taking their minimum. 
Consider a sequence of quadratic programming problems of the form f_t(x) = ‖A_t x − b_t‖² over the d-dimensional probability simplex. Assume that for any t ≥ 1, we have the parameter sequence

A_t = diag(1/t^{p₁}, 0, 0, . . . , 0) if t is odd, and A_t = diag(0, 1/t^{p₁}, 0, . . . , 0) if t is even,

and b_t = [1/t^{p₂}, 1/t^{p₂}, . . . , 1/t^{p₂}]ᵀ, where p₁ and p₂ are positive constants such that p₂ ≤ p₁. In this setting, we observe that C_T = O(T) and G_T = O(T^{1−p₁−p₂}). Thus, G_T can be considerably smaller than C_T. On the other hand, it is also possible that C_T is smaller than G_T in other cases. For example, let A_t = diag(1/2, 0, . . . , 0) on odd rounds and A_t = diag(0, −1/2, . . . , −1/2) on even rounds, and let b_t be the all-ones vector for all t. In this case, we observe that C_T = O(1), while the sum of squared gradients scales linearly with time, i.e., G_T = O(T). Thus, neither C_T nor G_T alone can provide a small regret bound for all cases. Similar examples can be found in comparison between S_T and G_T but are omitted for brevity." }, { "heading": "4 EXPERIMENTS", "text": "We investigate the performance of OMMD via numerical experiments in two different learning scenarios (with further experiments presented in App. G in the supplementary material). First, we consider a ridge regression problem on the CIFAR-10 dataset (Krizhevsky, 2009). Then, we study a case of online convex optimization where the difference between successive minimizers diminishes as time progresses. We compare OMMD with the following alternatives: Online Gradient Descent (OGD) (Zinkevich, 2003), Online Multiple Gradient Descent (OMGD) (Zhang et al., 2017), and Dynamic Mirror Descent (DMD) (Hall & Willett, 2015).

In the first experiment, we consider multi-class classification with ridge regression. In this task, the learner observes a sequence of labeled examples (ω, z), where ω ∈ R^d, and the label z, denoting the class of the data example, is drawn from a discrete space Z = {1, 2, . . . , c}. 
We use the CIFAR-10 image dataset, which contains 5 × 10⁴ data samples. Each data sample ω is a color image of size 32 × 32 pixels that can be represented by a 3072-dimensional vector, i.e., d = 3072. Data samples correspond to color images of objects, including airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Hence, there are c = 10 different classes. For ridge regression, the cost function associated with the batch of data samples at round t, i.e., (ω_{1,t}, z_{1,t}), . . . , (ω_{b,t}, z_{b,t}), is given by

f(x, (ω_t, z_t)) = ‖ω_tᵀ x − z_t‖²₂,

where x is the optimization variable, which is constrained by the set X = {x ∈ R^d₊ : ‖x‖₁ = 1}, and (ω_t, z_t) compactly represents the batch of data samples at round t, i.e., ω_t = [ω_{1,t}, ω_{2,t}, . . . , ω_{b,t}]ᵀ and z_t = [z_{1,t}, z_{2,t}, . . . , z_{b,t}]ᵀ. The goal of the learner is to classify streaming images online by tracking the unknown optimal parameter x*_t. We use the negative entropy regularization function, i.e., r(x) = ∑_{j=1}^{d} x_j log(x_j), which is strongly convex with respect to the l1-norm. Then, the mirror descent update in equation 4 leads to the following closed-form update:

y^{i+1}_{t,j} = y^i_{t,j} exp(−α [∇f_t(y^i_t)]_j) / ∑_{j=1}^{d} y^i_{t,j} exp(−α [∇f_t(y^i_t)]_j), (6)

where y^i_{t,j} and [∇f_t(y^i_t)]_j denote the j-th components of y^i_t and ∇f_t(y^i_t), respectively. The proof of the above closed-form update is given in App. F in the supplementary material. In our experiment, we set the batch size to 20 data samples per online round, and set α = 0.1.

In Fig. 1, we compare the performance of OMMD with DMD, OGD, and OMGD in terms of the dynamic regret. We see that the methods based on mirror descent perform better than those based on gradient descent, as generally expected. Furthermore, OMMD with M = 10 can reduce the dynamic regret by up to 30% in comparison with DMD. The dynamic regret associated with all algorithms grows linearly with the number of rounds. 
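Since CIFAR-10 is not reproduced here, the sketch below runs one round of the ridge-regression update on small synthetic data (dimensions, batch size, and data are assumed stand-ins): it evaluates the batch cost f(x) = ‖Wx − z‖², forms its gradient, and applies the multiplicative update of equation 6.

```python
import math, random

def ridge_cost_grad(x, W, z):
    # f(x) = ||W x - z||^2 for a batch: rows of W are features, z the labels;
    # gradient is 2 W^T (W x - z).
    r = [sum(wi * xi for wi, xi in zip(row, x)) - zi for row, zi in zip(W, z)]
    cost = sum(ri * ri for ri in r)
    grad = [2 * sum(ri * row[j] for ri, row in zip(r, W)) for j in range(len(x))]
    return cost, grad

def entropic_update(x, grad, alpha):
    # equation 6: multiplicative (exponentiated-gradient) mirror descent step
    w = [xi * math.exp(-alpha * gi) for xi, gi in zip(x, grad)]
    s = sum(w)
    return [wi / s for wi in w]

random.seed(1)
d, b, alpha = 8, 4, 0.1     # small synthetic stand-in for the CIFAR-10 batches
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(b)]
z = [random.random() for _ in range(b)]
x = [1.0 / d] * d           # uniform initial point on the simplex
c0, g = ridge_cost_grad(x, W, z)
x = entropic_update(x, g, alpha)
c1, _ = ridge_cost_grad(x, W, z)  # typically decreases for a small step
```

The update keeps the iterate strictly positive and on the probability simplex without any explicit projection, which is the practical appeal of the entropic regularizer here.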
This is because the sequence of minimizers x∗t depend on batches of samples that are independent over time, so that they do not converge. We note that this is common for online optimization in dynamic settings where steady fluctuation in the environment results in linear dynamic regret.\nNext, we study the performance of OMMD in solving a sequence of quadratic programming problems of the form ft(x1, x2) = ρ‖x1 − at‖2 + ‖x2 − bt‖2, where ρ is a positive constant, at and bt are time-variant vectors, and the decision variable are x1 ∈ Rd1 , x2 ∈ Rd2 , such that d1 + d2 = d. In our experiment, we set ρ = 10, d1 = 500, and d = 1000. We assume that bt is time-invariant and for all rounds t we have bt = 2, while at satisfies the recursive formula at+1 = at + 1/ √ t with initial value a1 = −1.5. We further set the step size α = 0.03. We use the same regularization function and constraint set as in the previous experiment.\nFrom Fig. 2, we observe that the performance advantage of OMMD is even more pronounced. As time progresses and the difference between the successive cost functions becomes less significant, the difference between the minimizers decreases. In this case, OMMD can significantly improve the performance of online optimization by reducing the gap between the learner’s decisions and the minimizers sequence. In particular, compared with DMD, OMMD with M = 10 reduces the dynamic regret up to 80% after 2500 rounds." }, { "heading": "5 RELATED WORKS", "text": "The problem of online convex optimization has been extensively studied in the literature since the seminal work of (Zinkevich, 2003). Most prior works study various online algorithms that guarantee a sublinear bound on the static regret (Zinkevich, 2003; Cesa-Bianchi & Lugosi, 2006; Hazan et al., 2006; Duchi et al., 2010; Shalev-Shwartz, 2012). Here we review the most relevant works with a focus on the dynamic regret." 
}, { "heading": "5.1 DYNAMIC REGRET OF ONLINE GRADIENT DESCENT", "text": "Dynamic regret was first introduced in (Zinkevich, 2003) for the analysis of online gradient descent, where anO( √ TCT ) bound on the dynamic regret was derived for convex functions. When the learner\nhas knowledge of the path length beforehand, the dynamic regret can be upper bounded byO( √ TCT )\n(Yang et al., 2016). For strongly convex cost functions, the upper bound on the dynamic regret can be reduced to O(CT ) (Mokhtari et al., 2016). The above works make only a single query to the gradient of the cost functions in every round. By allowing the learner to make multiple gradient queries per online round, the regret bound can be improved to O(min(CT , ST )) when the cost functions are smooth and strongly convex (Zhang et al., 2017). The analysis in all aforementioned studies requires the cost functions to be Lipschitz continuous. However, many commonly used cost functions do not satisfy this condition over an unbounded feasible set, e.g., the quadratic function, and even when the feasible set is bounded the Lipschitz factor can be excessively large, especially when the cost functions are strongly convex. Therefore, Lipschitz continuity of the cost functions is not assumed in our analysis. Instead, we move this condition from the cost functions to the Bregman divergence, which the learner can control and design. In addition, the above works rely on measuring distances using Euclidean norms, while the updates with Euclidean distance are challenging for some constraint sets, e.g., probability simplex (Duchi, 2018). It is known that gradient descent does not perform as well as mirror descent, especially when the input dimension is high (Nemirovsky & Yudin, 1983; Beck & Teboulle, 2003)." 
}, { "heading": "5.2 DYNAMIC REGRET OF ONLINE MIRROR DESCENT", "text": "The dynamic regret of online single-step mirror descent was studied in (Hall & Willett, 2015), where an upper bound of O( √ T (1 + CT )) was derived for convex cost functions. To take advantage of smoothness in cost functions, an adaptive algorithm based on optimistic mirror descent (Rakhlin & Sridharan, 2013) was proposed in (Jadbabaie et al., 2015), which contains two steps of mirror descent per online round. However, different from our work, in that variant the learner is allowed to make only a single query about the gradient. The algorithm further requires some prior prediction of the gradient in each round, which is used in the second mirror descent step. The dynamic regret bound was given in terms of a combination of the path length CT , deviation between the predictions and the actual gradients DT , and functional variation FT = ∑T t=1 maxx∈X |ft(x)− ft−1(x)|. 2 Unfortunately, to achieve this bound, the algorithm requires the design of a time-varying step size that depends on the optimal solution in the previous step, which prevents direct numerical comparison with OMMD. Therefore, in Section 4 we have experimented only with the method of (Hall & Willett, 2015).\nAll aforementioned works make only a single query to the gradient of the cost functions in every online round. In contrast, in this work, we allow the learner to make multiple gradient queries per round. The learner then uses this information to update its decision via multiple steps of mirror descent. In this way, we show the dynamic regret can be upper bounded linearly by the minimum among the path length, squared path length, and sum squared gradients. Furthermore, as opposed to the aforementioned works, our analysis does not require the cost functions to be Lipschitz continuous.\nFinally, there is also recent work in the literature on distributed online mirror descent (Shahrampour & Jadbabaie, 2017). 
As expected, it is more challenging to achieve a performance guarantee in distributed optimization. We focus on centralized online convex optimization in this work.\n2We note that the regret bounds derived in (Jadbabaie et al., 2015) are under the same definition as (Zinkevich, 2003)." }, { "heading": "6 CONCLUSION", "text": "We have studied online convex optimization in dynamic settings. By applying the mirror descent step multiple times in each round, we show that the upper bound on the dynamic regret can be reduced significantly from O( √ T (1 + CT )) to O(min(CT , ST , GT )) when the cost functions are strongly convex and smooth. In contrast to prior studies (Hall & Willett, 2015; Zhang et al., 2017; Mokhtari et al., 2016), our analysis does not require the cost functions to be Lipschitz continuous. Numerical experiments with the CIFAR-10 dataset, sequential quadratic programming, and additional examples show a substantial improvement in the dynamic regret compared with existing alternatives." }, { "heading": "A ADDITIONAL DEFINITIONS", "text": "Definition 3: A function f(·) is L-smooth if there exists a positive constant L such that\n\nf(y) ≤ f(x) + 〈∇f(x), y − x〉+ (L/2)‖y − x‖2, ∀x, y ∈ X .\n\nDefinition 4: A convex function f(·) is λ-strongly convex with respect to some norm ‖ · ‖ if there exists a positive constant λ such that\n\nf(y) + 〈∇f(y), x− y〉+ (λ/2)‖x− y‖2 ≤ f(x), ∀x, y ∈ X .\n\nDefinition 5: A function f(·) is Lipschitz continuous with factor G if for all x and y in X , the following holds:\n\n|f(x)− f(y)| ≤ G‖x− y‖, ∀x, y ∈ X ." }, { "heading": "B PROOF OF LEMMA 1", "text": "Consider the single-step mirror descent update:\n\nx = argminy∈X { ft(x′) + 〈∇ft(x′), y − x′〉+ (1/α)Dr(y, x′) } .
(7)\n\nStrong convexity of the above minimization objective implies\n\nft(x′) + 〈∇ft(x′), x− x′〉+ (1/α)Dr(x, x′) ≤ ft(x′) + 〈∇ft(x′), y − x′〉+ (1/α)Dr(y, x′)− (1/α)Dr(y, x), ∀y ∈ X . (8)\n\nFurthermore, from the smoothness condition, we have\n\nft(x) ≤ ft(x′) + 〈∇ft(x′), x− x′〉+ (L/2)‖x− x′‖2. (9)\n\nSubstituting equation 9 into equation 8, and setting y = x∗t , we obtain\n\nft(x)− (L/2)‖x− x′‖2 + (1/α)Dr(x, x′) ≤ ft(x′) + 〈∇ft(x′), x∗t − x′〉+ (1/α)Dr(x∗t , x′)− (1/α)Dr(x∗t , x). (10)\n\nSince α ≤ 1/L, and the regularization function r(·) is 1-strongly convex, we have\n\n(1/α)Dr(x, x′) ≥ LDr(x, x′) ≥ (L/2)‖x− x′‖2. (11)\n\nNext, we exploit the strong convexity of the cost function, i.e.,\n\nft(x′) + 〈∇ft(x′), x∗t − x′〉 ≤ ft(x∗t )− λDr(x∗t , x′). (12)\n\nCombining equation 10, equation 11, and equation 12, we obtain\n\nft(x) ≤ ft(x∗t )− λDr(x∗t , x′) + (1/α)Dr(x∗t , x′)− (1/α)Dr(x∗t , x). (13)\n\nNext, we use the result of (Hazan & Kale, 2014), which states that for every λ-strongly convex function ft(·), the following bound holds:\n\nft(x)− ft(x∗t ) ≥ λDr(x∗t , x), (14)\n\nwhere x∗t = argminx∈X ft(x). Combining the above with equation 13, we obtain\n\nDr(x∗t , x) ≤ βDr(x∗t , x′), (15)\n\nwhere β = 1− 2λα/(1 + λα)." }, { "heading": "C PROOF OF THEOREM 2", "text": "C.1 KEY LEMMAS\n\nThe following two lemmas pave the way for our regret analysis leading to Theorem 2. Lemma 7 presents an alternative form for the mirror descent update.\n\nLemma 7 Suppose there exists zt+1 that satisfies ∇r(zt+1) = ∇r(xt) − α∇ft(xt), for some strongly convex function r(·) and step size α. Then, the following updates are equivalent:\n\nxt+1 = argminx∈X Dr(x, zt+1), (16)\n\nxt+1 = argminx∈X { 〈∇ft(xt), x〉+ (1/α)Dr(x, xt) } . (17)\n\nProof.
We begin by expanding equation 16 as follows:\n\nxt+1 = argminx∈X {r(x)− r(zt+1)− 〈∇r(zt+1), x− zt+1〉}\n= argminx∈X {r(x)− 〈∇r(zt+1), x〉}\n= argminx∈X {r(x)− 〈∇r(xt)− α∇ft(xt), x〉}\n= argminx∈X {α〈∇ft(xt), x〉+ r(x)− r(xt)− 〈∇r(xt), x− xt〉}\n= argminx∈X {〈∇ft(xt), x〉+ (1/α)Dr(x, xt)}. (18)\n\nThus, the update in equation 16 is equivalent to equation 17.\n\nLemma 8 Under the same convexity and smoothness conditions stated in Theorem 2, let xt be the sequence of decisions generated by OMMD. Then, the following bound holds:\n\n‖xt+1 − x∗t ‖ ≤ √(LrβM )‖xt − x∗t ‖, (19)\n\nwhere Lr is the smoothness factor of the regularization function r(·), and β is the shrinking factor obtained in Lemma 1.\n\nProof. Using the result of Lemma 1, OMMD with M mirror descent steps guarantees\n\nDr(x∗t , xt+1) ≤ βMDr(x∗t , xt). (20)\n\nSince the regularization function r(·) is 1-strongly convex, we have\n\n‖x∗t − xt+1‖2/2 ≤ r(x∗t )− r(xt+1)− 〈∇r(xt+1), x∗t − xt+1〉. (21)\n\nNext, we exploit the smoothness condition of the regularization function r(·), i.e.,\n\nr(x∗t )− r(xt)− 〈∇r(xt), x∗t − xt〉 ≤ (Lr/2)‖x∗t − xt‖2. (22)\n\nCombining the above with equation 20 and equation 21, and using the definition of the Bregman divergence, we obtain\n\n‖xt+1 − x∗t ‖2 ≤ LrβM‖xt − x∗t ‖2. (23)\n\nTaking the square root on both sides of equation 23 completes the proof.\n\nC.2 PROOF OF THE THEOREM\n\nNow, we are ready to present the proof of Theorem 2. In this proof, we will use the following properties of the Bregman divergence.\n\n(a) By direct substitution, the following equality holds for any x, y, z ∈ X :\n\n〈∇r(z)−∇r(y), x− y〉 = Dr(x, y)−Dr(x, z) +Dr(y, z). (24)\n\n(b) If x = argminx′∈X Dr(x′, z), i.e., x is the Bregman projection of z onto the set X , then for any arbitrary point y ∈ X , we have\n\nDr(y, z) ≥ Dr(y, x) +Dr(x, z).
(25)\n\nTo bound the dynamic regret, we begin by using the strong convexity of the cost function ft(·), i.e.,\n\nft(xt)− ft(x∗t ) ≤ 〈∇ft(xt), xt − x∗t 〉 − λDr(x∗t , xt)\n≤ (1/α)〈∇r(xt)−∇r(zt+1), xt − x∗t 〉 − λDr(x∗t , xt)\n≤ (1/α)(Dr(x∗t , xt)−Dr(x∗t , zt+1) +Dr(xt, zt+1))− λDr(x∗t , xt)\n≤ (1/α)(Dr(x∗t , xt)−Dr(x∗t , xt+1)−Dr(xt+1, zt+1) +Dr(xt, zt+1))− λDr(x∗t , xt)\n≤ (1/α− λ)Dr(x∗t , xt) + (1/α)(Dr(xt, zt+1)−Dr(xt+1, zt+1))\n≤ (1/α− λ)((ft(xt)− ft(x∗t ))/λ) + (1/α)(Dr(xt, zt+1)−Dr(xt+1, zt+1)), (26)\n\nwhere in the second line we have used the alternative mirror descent update stated in Lemma 7, i.e., ∇ft(xt) = (1/α)(∇r(xt) − ∇r(zt+1)). To obtain the third line, we have utilized the Bregman divergence property in equation 24. We have used the Bregman projection property in equation 25 in the fourth line. By omitting some negative terms, and using equation 14, we obtain the right-hand side of equation 26.\n\nThus, if α > 1/(2λ), we have\n\nft(xt)− ft(x∗t ) ≤ (λ/(2αλ− 1))(Dr(xt, zt+1)−Dr(xt+1, zt+1))\n(a)≤ (λK/(2αλ− 1))‖xt+1 − xt‖\n≤ (λK/(2αλ− 1))(‖xt+1 − x∗t ‖+ ‖xt − x∗t ‖)\n(b)≤ (λK/(2αλ− 1))(1 + √(LrβM ))‖xt − x∗t ‖, (27)\n\nwhere we have used the Lipschitz continuity of the Bregman divergence to obtain inequality (a), and we have applied Lemma 8 to obtain inequality (b). Summing equation 27 over time, we have\n\nRegdT = ∑T t=1 ft(xt)− ft(x∗t ) ≤ (λK/(2αλ− 1))(1 + √(LrβM )) ∑T t=1 ‖xt − x∗t ‖. (28)\n\nNow, we proceed to bound ∑T t=1 ‖xt − x∗t ‖ as follows:\n\n∑T t=1 ‖xt − x∗t ‖ = ‖x1 − x∗1‖+ ∑T t=2 ‖xt − x∗t ‖\n≤ ‖x1 − x∗1‖+ ∑T t=2 (‖xt − x∗t−1‖+ ‖x∗t−1 − x∗t ‖)\n(a)≤ ‖x1 − x∗1‖+ ∑T t=2 √(LrβM )‖xt−1 − x∗t−1‖+ ∑T t=2 ‖x∗t − x∗t−1‖, (29)\n\nwhere we used the result of Lemma 8 to obtain inequality (a). If M ≥ ⌈(1/2 + 1/(2αλ)) logLr⌉, we have\n\nβM = (1− 2αλ/(1 + αλ))M ≤ exp(−2Mαλ/(1 + αλ)) < 1/Lr, (30)\n\nwhich implies LrβM < 1. Therefore, by combining equation 29 and equation 30, we have\n\n∑T t=1 ‖xt − x∗t ‖ ≤ ‖x1 − x∗1‖/(1− √(LrβM )) + (∑T t=2 ‖x∗t − x∗t−1‖)/(1− √(LrβM )).
(31)\n\nFinally, substituting equation 31 into equation 28 completes the proof." }, { "heading": "D PROOF OF THEOREM 3", "text": "In order to bound the dynamic regret, we begin with the smoothness condition of the cost function ft(·), i.e.,\n\nft(xt)− ft(x∗t ) ≤ 〈∇ft(x∗t ), xt − x∗t 〉+ (L/2)‖xt − x∗t ‖2\n≤ ‖∇ft(x∗t )‖∗‖xt − x∗t ‖+ (L/2)‖xt − x∗t ‖2. (32)\n\nNext, we use the fact\n\n‖∇ft(x∗t )‖∗‖xt − x∗t ‖ ≤ ‖∇ft(x∗t )‖2∗/(2θ) + (θ/2)‖xt − x∗t ‖2, (33)\n\nwhich holds for any positive constant θ > 0. Thus, we have\n\nft(xt)− ft(x∗t ) ≤ ‖∇ft(x∗t )‖2∗/(2θ) + ((L+ θ)/2)‖xt − x∗t ‖2. (34)\n\nSumming equation 34 over time, we obtain\n\nRegdT = ∑T t=1 ft(xt)− ft(x∗t ) ≤ ∑T t=1 ‖∇ft(x∗t )‖2∗/(2θ) + ((L+ θ)/2) ∑T t=1 ‖xt − x∗t ‖2. (35)\n\nNow, we proceed by bounding ∑T t=1 ‖xt − x∗t ‖2 as follows:\n\n∑T t=1 ‖xt − x∗t ‖2 = ‖x1 − x∗1‖2 + ∑T t=2 ‖xt − x∗t−1 + x∗t−1 − x∗t ‖2\n≤ ‖x1 − x∗1‖2 + ∑T t=2 (2‖xt − x∗t−1‖2 + 2‖x∗t−1 − x∗t ‖2)\n≤ ‖x1 − x∗1‖2 + 2βMLr ∑T t=1 ‖xt − x∗t ‖2 + 2 ∑T t=2 ‖x∗t−1 − x∗t ‖2, (36)\n\nwhere the last step uses Lemma 8, i.e., ‖xt − x∗t−1‖2 ≤ LrβM‖xt−1 − x∗t−1‖2. We note that if M ≥ ⌈(1/2 + 1/(2αλ)) log 2Lr⌉, then 2βMLr < 1. Therefore, from equation 36 we can obtain\n\n∑T t=1 ‖xt − x∗t ‖2 ≤ ‖x1 − x∗1‖2/(1− 2βMLr) + (2/(1− 2βMLr)) ∑T t=2 ‖x∗t − x∗t−1‖2. (37)\n\nSubstituting equation 37 into equation 35 completes the proof."
}, { "heading": "E PROOF OF THEOREM 5", "text": "The proof of Theorem 5 initially follows the first half of the proof of Theorem 2, which is repeated here for completeness.\n\nTo analyze the dynamic regret, we first use the strong convexity of the cost function ft(·), i.e.,\n\nft(xt)− ft(x∗t ) ≤ 〈∇ft(xt), xt − x∗t 〉 − λDr(x∗t , xt)\n≤ (1/α)〈∇r(xt)−∇r(zt+1), xt − x∗t 〉 − λDr(x∗t , xt)\n≤ (1/α)(Dr(x∗t , xt)−Dr(x∗t , zt+1) +Dr(xt, zt+1))− λDr(x∗t , xt)\n≤ (1/α)(Dr(x∗t , xt)−Dr(x∗t , xt+1)−Dr(xt+1, zt+1) +Dr(xt, zt+1))− λDr(x∗t , xt)\n≤ (1/α− λ)Dr(x∗t , xt) + (1/α)(Dr(xt, zt+1)−Dr(xt+1, zt+1))\n≤ (1/α− λ)((ft(xt)− ft(x∗t ))/λ) + (1/α)(Dr(xt, zt+1)−Dr(xt+1, zt+1)), (38)\n\nwhere in the second line we have used the alternative mirror descent update stated in Lemma 7, i.e., ∇ft(xt) = (1/α)(∇r(xt) − ∇r(zt+1)). To obtain the third line, we have utilized the Bregman divergence property in equation 24. We have used the Bregman projection property in equation 25 in the fourth line. By omitting some negative terms, and using equation 14, we obtain the right-hand side of equation 38.\n\nTherefore, if α > 1/(2λ), we have\n\nft(xt)− ft(x∗t ) ≤ (λ/(2αλ− 1))(Dr(xt, zt+1)−Dr(xt+1, zt+1)). (39)\n\nNow we continue to bound Dr(xt, zt+1). By the definition of the Bregman divergence, we have\n\nDr(xt, zt+1) +Dr(zt+1, xt) = 〈∇r(xt)−∇r(zt+1), xt − zt+1〉\n= 〈α∇ft(xt), xt − zt+1〉\n≤ ‖α∇ft(xt)‖∗‖xt − zt+1‖\n≤ (α2/2)‖∇ft(xt)‖2∗ + ‖xt − zt+1‖2/2. (40)\n\nThe strong convexity of the regularization function implies\n\n‖xt − zt+1‖2/2 ≤ r(zt+1)− r(xt)− 〈∇r(xt), zt+1 − xt〉 = Dr(zt+1, xt). (41)\n\nCombining the above with equation 40, we obtain\n\nDr(xt, zt+1) ≤ (α2/2)‖∇ft(xt)‖2∗. (42)\n\nSubstituting equation 42 into equation 39, dropping the nonnegative term Dr(xt+1, zt+1), and summing over time, we have\n\nRegdT = ∑T t=1 ft(xt)− ft(x∗t ) ≤ (α2λ/(4αλ− 2)) ∑T t=1 ‖∇ft(xt)‖2∗.
(43)" }, { "heading": "F CLOSED-FORM UPDATE FOR MIRROR DESCENT", "text": "In this section, we derive the closed-form mirror descent update in equation 6.\n\nLet r(y) = ∑d j=1 yj log(yj) be the negative entropy. Then, we have\n\nDr(y, yit) = ∑d j=1 [ yj log(yj)− yit,j log(yit,j)− (log(yit,j) + 1)(yj − yit,j) ] = ∑d j=1 yj log(yj/yit,j)− 〈1, y − yit〉 = DKL(y, yit), (44)\n\nwhere yit,j denotes the j-th component of the decision vector yit, DKL(y, yit) represents the KL divergence between y and yit, and the inner-product term vanishes on the probability simplex.\n\nNow consider the update in equation 5, which can be written as follows:\n\nminimizey∈X 〈∇ft(yit), y〉+ (1/α) ∑d j=1 yj log(yj/yit,j) (45)\nsubject to 〈1, y〉 = 1, y ≥ 0.\n\nThe Lagrangian of the above problem is given by\n\nL(y, λ, γ) = 〈∇ft(yit), y〉+ ∑d j=1 [ (1/α)yj log(yj/yit,j) + λyj − γjyj ]− λ, (46)\n\nwhere λ ∈ R and γ ∈ Rd+ are the Lagrange multipliers corresponding to the constraints. Next, we take the derivative with respect to y to obtain\n\n∂L(y, λ, γ)/∂yj = ∇ft(yit)j + (1/α) log(yj) + 1/α− (1/α) log(yit,j) + λ− γj . (47)\n\nSetting the above to zero results in the following closed-form update:\n\nyi+1t,j = yit,j exp(−α∇ft(yit)j) / ∑d j=1 yit,j exp(−α∇ft(yit)j). (48)" }, { "heading": "G ADDITIONAL EXPERIMENTS", "text": "In this section, we present additional experiments to study the performance of OMMD. In the first experiment, we use the MNIST dataset. In the second experiment, we consider a switching problem where the cost function switches between two quadratic functions after a specific number of rounds.\n\nFirst, we consider the well-known MNIST digits dataset, where every data sample ω is an image of size 28 × 28 pixels that can be represented by a 784-dimensional vector, i.e., d = 784. Each sample corresponds to one of the digits in {0, 1, . . . , 9}, and thus there are c = 10 different classes.
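Since the experiments in this section run mirror descent on the probability simplex, the closed-form update in equation 48 is easy to sketch in code. The following is a minimal illustration, not the exact experimental implementation; the function names, the toy cost used below, and the parameter values are illustrative choices:

```python
import numpy as np

def eg_step(y, grad, alpha):
    # One mirror descent step with the negative-entropy regularizer:
    # a multiplicative (exponentiated-gradient) update followed by
    # normalization back onto the probability simplex, cf. equation 48.
    w = y * np.exp(-alpha * grad)
    return w / w.sum()

def ommd_round(x, grad_fn, alpha, M):
    # OMMD applies M mirror descent steps within a single online round,
    # querying the gradient once per step.
    for _ in range(M):
        x = eg_step(x, grad_fn(x), alpha)
    return x
```

On the toy cost f(x) = ‖x − e1‖²/2 (gradient x − e1), repeated steps concentrate the mass on the first coordinate while every iterate remains a valid probability vector.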
The goal of the learner is to classify streaming digit images in an online fashion.\n\nWe consider a robust regression problem, where the cost function for the batch of data samples at time t is given by\n\nf(x, (ωt, zt)) = ‖ωTt x− zt‖21,\n\nwhere x is the optimization variable belonging to the constraint set X = {x : x ∈ Rd+, ‖x‖1 = 1}. We use the negative entropy regularization function, i.e., r(x) = ∑d i=1 xi log(xi), which is strongly convex with respect to the l1-norm. We set the step size α = 0.1 and use a batch size of 20 data examples per round.\n\nFrom Fig. 3, we again observe that OMMD consistently outperforms the other alternatives. In particular, compared with DMD, applying M = 10 steps of mirror descent can reduce the dynamic regret by up to 20%. We also see that the dynamic regret grows linearly with the number of rounds, which is a natural consequence of the steady fluctuation in the sequence of dynamic minimizers x∗t , as explained before.\n\nNext, we consider the case where the cost function switches between two functions. Both functions are of the quadratic form ft(x) = ‖Atx− bt‖22, where At ∈ Rd×d and bt ∈ Rd. In particular, we assume that the parameter At is chosen between\n\nA(1)t = diag(1/tp1 , . . . , 1/tp1 , 0, . . . , 0), whose first d1 diagonal entries equal 1/tp1 and whose remaining d2 entries are zero, and A(2)t = diag(0, . . . , 0, 1/tp1 , . . . , 1/tp1 ), with the roles of the two blocks reversed,\n\nsuch that d1 + d2 = d, and bt = [1/tp2 , . . . , 1/tp2 ]′. Therefore, at each round the cost function is either f(1)t (x) = ‖A(1)t x− bt‖22 or f(2)t (x) = ‖A(2)t x− bt‖22. We assume that the cost function switches between f(1)t (·) and f(2)t (·) every τ rounds. In our experiment, we set d1 = 10, d = 1000, p1 = 0.9, and p2 = 0.1. We further set the switching period τ = 10 and the parameter α = 0.02. The dynamic regret roughly reflects the accumulated mismatch error over time.\n\nIn Fig. 4, we compare the performance of OMMD with that of other alternatives in terms of the dynamic regret.
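For concreteness, the switching-cost construction described above can be sketched as follows; the function name and the use of an explicit dense diagonal matrix are illustrative choices rather than the exact experimental code:

```python
import numpy as np

def switching_cost_params(t, d1=10, d=1000, p1=0.9, p2=0.1, tau=10):
    # Return (A_t, b_t) for the switching quadratic f_t(x) = ||A_t x - b_t||^2.
    # Every tau rounds the active diagonal block alternates between the first
    # d1 coordinates and the remaining d - d1 coordinates; rounds start at t = 1.
    diag = np.zeros(d)
    if (t // tau) % 2 == 0:
        diag[:d1] = 1.0 / t**p1   # first block active: f_t = f^(1)_t
    else:
        diag[d1:] = 1.0 / t**p1   # second block active: f_t = f^(2)_t
    b = np.full(d, 1.0 / t**p2)
    return np.diag(diag), b
```

At round t the learner would evaluate ft(x) = ‖Atx − bt‖²₂ and its gradient 2Aᵀt(Atx − bt) from the returned pair.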
OMMD with M = 10 nearly halves the dynamic regret of DMD after 300 rounds. Furthermore, the benefit of applying multiple steps of mirror descent can be significant even for smaller values of M ." } ]
2020