Optimal Learning Rate from the Hessian: Examples

How does it work in practice?

See https://medium.com/@ranjeettate/learning-rate-in-gradient-descent-and-second-derivatives-632137dad3b5 for an introduction

If D is the number of variables over which the loss function is to be minimized, then calculating the Hessian is O(D²), and inverting a matrix is nearly O(D³), roughly two orders in D more expensive than simply calculating the gradient, which is O(D). Is there any value in doing this? Let’s look at some examples.

Another motivation for looking at examples is that most people (myself included) have an intuition that this is wrong: If the Hessian is not diagonal, then the best step is not in the “direction” of the gradient. How can it be that taking the steepest step is not locally the best thing to do to descend a hillside? How can it be that the gradient doesn’t define the best direction?

We’ll need to calculate the gradient of a function and then do various tensor calculations. The package ‘numdifftools’ has well-documented Gradient and Hessian classes, and ‘tensorflow’ has all the tensor operations we will need, plus numpy and matplotlib as usual.
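For reference, here is a minimal set of imports matching the snippets below (note that the snippets use the TensorFlow 1.x API; in TensorFlow 2.x the matrix inverse lives at tf.linalg.inv):

import numpy as np
import numdifftools as nd
import tensorflow as tf
import matplotlib.pyplot as plt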

Let’s start with a simple paraboloid of revolution:

def f(x):
    return x[0]**2 + x[1]**2

Here is its contour plot:

[Figure: contour plot of the paraboloid of revolution]
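A minimal matplotlib sketch that reproduces such a contour plot (the grid range of ±4 and the number of levels are my own choices):

x = np.linspace(-4, 4, 200)
y = np.linspace(-4, 4, 200)
X, Y = np.meshgrid(x, y)
Z = f([X, Y])                        # f works elementwise on arrays
cs = plt.contour(X, Y, Z, levels=20)
plt.colorbar(cs)
plt.show()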

Here is the code for using nd.Gradient:

point = [-3, -3]
dfoo = nd.Gradient(f)
grad_foo = tf.reshape(tf.Variable(dfoo(point)), shape = [2,])

Result: The minimum is at the origin (0, 0). At the starting point (-3, -3), the gradient is (-6, -6), so the negative gradient is (6, 6); a full step of that size lands on (3, 3), and repeating it would oscillate indefinitely between (-3, -3) and (3, 3). At least the negative gradient points in the correct direction, towards the minimum. Since the learning rate is a free parameter, any value between 0 and 1 will converge, and a rate of exactly 0.5 reaches the minimum in a single step.
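To see that behaviour concretely, here is a small sketch of plain gradient descent on this paraboloid (the learning rate and iteration count are arbitrary choices; set lr = 1.0 to watch the oscillation):

lr = 0.4
x = np.array([-3.0, -3.0])
dfoo = nd.Gradient(f)
for i in range(10):
    x = x - lr * dfoo(x)          # standard gradient-descent update
    print(i, x, f(x))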

This is the main part of the code for calculating the optimal step using the Hessian:

# gradient of f at the point chosen above
dfoo = nd.Gradient(f)
grad_foo = tf.reshape(tf.Variable(dfoo(point)), shape = [2,1])

# Hessian (matrix of mixed partial derivatives) of f
hess_foo = tf.Variable(nd.Hessian(f)(point))

# inverse of the Hessian (tf.matrix_inverse is TF 1.x; in TF 2.x use tf.linalg.inv)
inv_hess_foo = tf.matrix_inverse(hess_foo, adjoint=False, name=None)

# optimal vector change in the variables, note the '-'
optimal_delta = -tf.matmul(inv_hess_foo, grad_foo)

# reshape optimal_delta and the gradient for printing
optimal_delta = tf.reshape(optimal_delta, shape = [1,2])
grad_foo = tf.reshape(grad_foo, shape = [2,])

The results (which can be checked by hand) are

gradient: 
[-6. -6.]
Hessian:
[[2. 0.]
[0. 2.]]
inverse Hessian:
[[ 0.5 0.]
[0. 0.5]]
optimal vector change in variables:
[3 3]

So for this case at least, using the Hessian nails the minimum in one step! Note, though, that this “Hessian” step points in the same direction as the negative gradient; only the scale differs. So we’ve gained something in terms of scaling, but not much more.
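The same calculation can be cross-checked without TensorFlow. Here is a pure-numpy sketch of the Newton step (the helper name newton_step is mine, not from the original code):

def newton_step(f, point):
    # optimal step = -(inverse Hessian) . gradient
    grad = nd.Gradient(f)(point)
    hess = nd.Hessian(f)(point)
    return -np.linalg.solve(hess, grad)   # solve() avoids forming the inverse explicitly

print(newton_step(f, [-3, -3]))           # approximately [3. 3.]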

Now let’s squish the paraboloid and add a cross term:

def f(x):
    return x[0]**2 - x[0] * x[1]/2. + (x[1]/3.)**2

and its contour plot

[Figure: contour plot of the “Paranoid paraboloid”]

Note that the minimum is still at the origin, but already we see that the negative gradient at the point (-3, -3), namely (4.5, -0.83), doesn’t even “point” towards it! So our intuition about the gradient being the correct direction is not right.

Calculating the Hessian step, we get:

gradient: 
[-4.5 0.83333333]
Hessian:
[[ 2. -0.5 ]
[-0.5 0.22222222]]
inverse Hessian:
[[ 1.14285714 2.57142857]
[ 2.57142857 10.28571429]]
optimal vector change in variables:
[3.0 3.0]

Again, using the Hessian nails the minimum in one step! Just to make the point again: in this case the gradient and the ideal step are not in the same “direction”.
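Applying the newton_step sketch from above to this squished paraboloid makes the point numerically: the gradient and the Newton step are not parallel, yet the step still lands exactly on the minimum.

point = [-3, -3]
print(nd.Gradient(f)(point))     # approximately [-4.5, 0.833]
print(newton_step(f, point))     # approximately [3. 3.], not parallel to the gradient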

There are lots of physics optimization problems that minimize times or distances along paths on curved surfaces, and here we are, loosely speaking, trying to descend the curved surface representing the function. The situation is similar to great circles on the surface of the earth: the shortest path between two points on the same latitude (other than the equator) does not run along the line of latitude (which follows the gradient of the longitude).

So far so good. But now, let’s try it on a more complicated function, similar to the function whose contour map we showed at the beginning of this article.

The 2D function we will try is:

def f(x):
    return np.sin(x[0]) ** 2 + np.sin(10 + x[1] * x[0]/2.) * np.cos(x[0])

and its contour map is

[Figure: contour map of the function above]

Let’s zoom in near the obvious central minimum:

[Figure: zooming in near the central minimum, around (3.25, 2.5)]

Let’s do gradient descent with a scalar learning rate of 0.5, starting from the point (3.1, 1.3):

[Figure: gradient descent with learning rate 0.5, starting from (3.1, 1.3)]

So it is sort of in the well, bouncing around a bit since the learning rate is too high.

Now, what about our “magical” Hessian ideal step? Starting at the same point as before, (3.1, 1.3), the Hessian ideal step is (-0.11, -0.44), which is completely in the wrong direction! That step increases the value of the function to 0.97. While the steepest step may not be the best step to take (as we saw with the squished paraboloid), it certainly cannot be the case that stepping uphill is best! Perhaps this method should be called “gradient indecent”.
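This is easy to check numerically; a short sketch using the step quoted above:

p0 = np.array([3.1, 1.3])
step = np.array([-0.11, -0.44])   # the Hessian 'ideal' step reported above
print(f(p0), f(p0 + step))        # the value rises to about 0.97 instead of falling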

So what went wrong? Let’s take a look at the Hessian. Our derivation assumed that the extremum we were looking for (where the gradient vanishes) is a minimum; in other words, we implicitly assumed that the Hessian is positive definite (the slope increases in every direction). But at the point (3.1, 1.3), the Hessian is

Hessian: 
[[ 1.20250029 -1.007762 ]
[-1.007762 -1.2574737 ]]

This has a negative determinant, implying (in 2D) that one eigenvalue is positive and one is negative, so the Newton step is heading towards a saddle point rather than a minimum.
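A quick eigenvalue check on the Hessian printed above confirms this:

H = np.array([[ 1.20250029, -1.007762  ],
              [-1.007762,   -1.2574737 ]])
print(np.linalg.det(H))       # negative
print(np.linalg.eigvals(H))   # one positive and one negative eigenvalue: a saddle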

Let’s look at a contour plot of the determinant of the Hessian:

[Figure: contour plot of det(Hessian(f))]

Comparing to the plot of the function itself

[Figure: contour plot of f]

we see that the region around the central minimum with a positive determinant is much smaller than what we perceive as the valley itself. The point (3.1, 1.3) we started at is squarely in the middle of a negative-determinant region, quite close to a saddle point at about (3, 1) and to the surrounding peaks.
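Checking the sign of det(Hessian) at a candidate starting point is cheap; a sketch using numdifftools (the two points are the ones used in this article):

hess = nd.Hessian(f)
for p in [(3.1, 1.3), (3.1, 1.8)]:
    print(p, np.linalg.det(hess(p)))   # negative at (3.1, 1.3), positive at (3.1, 1.8)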

Another way to see what goes wrong is to look at the vector field of optimal Hessian steps:

[Figure: the Hessian-step vector field of f]

The vectors at the starting point (3.1, 1.3) all point away from the central minimum; they are separated from it by a contour where the length of the vector field is zero, so the integral curves cannot reach the desired minimum.

What can we do?

Let’s do gradient descent with a scalar learning rate of 0.5, starting from the point (3.1, 1.8):

[Figure: gradient descent from a point with det(Hessian) > 0]

So it is sort of in the well, but bouncing around a lot and possibly climbing out of the valley.

How does the Hessian step do, starting from the same point (3.1, 1.8)?

[Figure: Hessian descent from a point with det(Hessian) > 0]

It has pretty much converged by the third iteration.
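For reference, here is a minimal sketch of such a Hessian-descent loop, reusing the newton_step helper from earlier (this applies the full Newton update with no extra learning-rate factor; the iteration count is arbitrary):

x = np.array([3.1, 1.8])
for i in range(5):
    x = x + newton_step(f, x)     # full Newton update
    print(i, x, f(x))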

So it “works”, but does it work? Is this already part of Gradient Descent or another optimization technique?

Is it worth trying to figure out practical implementations?

