
In many problems involving parameter spaces, the conventional gradient is not always sufficient or efficient. In many optimization problems, such as supervised learning and source separation, it is more efficient to use the natural gradient when implementing the learning rule.

A recurring problem in optimization is the minimization of a quantity such as a cost function or the mutual information. The usual strategy is to follow the gradient of that quantity over the parameter space in order to locate the global minimum. This strategy is often suboptimal: when the conventional gradient is used, learning can be slow or oscillate near local minima.

A substantial improvement is to use the natural gradient of the parameter space. The natural gradient is the gradient that represents the true steepest direction of the function on the underlying parameter manifold. This is accomplished by considering a different metric tensor for the parameter space of the problem.

In conventional Euclidean space, we usually define orthonormal coordinates, so that the squared length of a small vector $dw$ is $|dw|^2 = \sum_i (dw_i)^2$, where the $dw_i$ are the components of $dw$, i.e. its projections onto each axis. This is because, in Euclidean space, the metric tensor is $g_{ij} = \delta_{ij}$, which gives rise to the familiar inner product formula

$\langle u, v \rangle = \sum_i u_i v_i$,

and length formula $|dw|^2 = \sum_i (dw_i)^2$, where i = 1, 2, 3.
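For instance, for the vector $dw = (1, 2, 2)$ (values chosen purely for illustration), this gives $|dw|^2 = 1^2 + 2^2 + 2^2 = 9$, i.e. $|dw| = 3$.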

In a more general (possibly curved) n-dimensional Riemannian space, the metric tensor differs. Usually, we consider an n-dimensional parameter space of vectors $w$, where, for example, $w$ may be the weight vector of a neural network. In Euclidean space, the length of a small increment $dw$ would be given by the formula mentioned above. For a curved manifold S, though, the length formula is:

$|dw|^2 = \sum_{i,j=1}^{n} g_{ij}(w)\, dw_i\, dw_j$, with i, j ranging from 1 to n, where $G = (g_{ij})$ is the Riemannian metric tensor.
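As a minimal numerical sketch of the two length formulas, the snippet below compares the Euclidean squared length of a small increment with its squared length under an arbitrary symmetric positive-definite metric $G$ (the particular numbers are made up for illustration):

```python
import numpy as np

# Small increment in a 3-dimensional parameter space
dw = np.array([0.01, -0.02, 0.03])

# Euclidean case: the metric is the identity, so |dw|^2 = sum_i dw_i^2
euclidean_sq_length = dw @ dw

# Riemannian case: |dw|^2 = sum_{i,j} g_ij dw_i dw_j = dw^T G dw
# G is an arbitrary symmetric positive-definite matrix, chosen only for illustration.
G = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])
riemannian_sq_length = dw @ G @ dw

print(euclidean_sq_length)   # ~0.0014
print(riemannian_sq_length)  # ~0.00286 under this particular metric
```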

Suppose we have also defined a function L(w) on S that we want to minimize (assume, for example, that L(w) is the error function of a neural network model). In order to minimize the function efficiently, we need to follow the steepest slope of the function so as to reach the minimum faster. Suppose we move by a small increment $dw$ of fixed squared length $|dw|^2 = \varepsilon^2$. The steepest descent direction is the $dw$ that minimizes $L(w + dw)$ under this constraint.
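A brief sketch of the standard argument (cf. [1]): linearize $L(w + dw) \approx L(w) + \nabla L(w)^{\top} dw$ and minimize the linear term subject to the constraint $dw^{\top} G\, dw = \varepsilon^2$, using a Lagrange multiplier $\lambda$:

$\frac{\partial}{\partial\, dw}\left[\nabla L(w)^{\top} dw - \lambda\, dw^{\top} G\, dw\right] = 0 \;\Longrightarrow\; \nabla L(w) = 2\lambda\, G\, dw \;\Longrightarrow\; dw \propto G^{-1}\nabla L(w).$

Choosing the sign that decreases L gives a steepest descent step of the form $dw = -\eta\, G^{-1}\nabla L(w)$ for a small step size $\eta$.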


It can be proved [1] that the steepest descent direction is $-\tilde{\nabla} L(w)$, where $\tilde{\nabla} L(w)$ is the natural gradient

$\tilde{\nabla} L(w) = G^{-1}(w)\, \nabla L(w)$,

and $G^{-1} = (g^{ij})$ is the inverse of the metric tensor $G$ (which, being a metric, is symmetric, so $G^{-1}$ is symmetric as well). In Euclidean space, $G$ is the identity and the natural gradient reduces to the conventional gradient.
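As a minimal sketch in code, a natural gradient descent step simply replaces the ordinary gradient with $G^{-1}\nabla L$. The quadratic loss and the fixed metric below are made-up placeholders; in a real model, $G$ would typically be, for example, the Fisher information matrix evaluated at $w$:

```python
import numpy as np

def natural_gradient_step(w, grad_L, G, lr=0.1):
    """One natural gradient step: w <- w - lr * G^{-1} grad_L(w)."""
    nat_grad = np.linalg.solve(G, grad_L(w))  # G^{-1} @ grad, without forming the inverse explicitly
    return w - lr * nat_grad

# Toy example (all quantities made up for illustration):
# loss L(w) = 0.5 * w^T A w, whose gradient is A w
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
grad_L = lambda w: A @ w

# A fixed symmetric positive-definite metric; with G = I this reduces to ordinary gradient descent.
G = np.array([[4.0, 0.0],
              [0.0, 1.0]])

w = np.array([1.0, -1.0])
for _ in range(50):
    w = natural_gradient_step(w, grad_L, G)
print(w)  # approaches the minimizer w = (0, 0)
```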

Natural gradient in the matrix space

References

[1] Amari, S.-I. (1998). Natural gradient works efficiently in learning. Neural Computation, 10(2), 251–276.
