Hessian matrix
In mathematics, the Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants".
Definitions and properties
Suppose f : ℝ^{n} → ℝ is a function taking as input a vector x ∈ ℝ^{n} and outputting a scalar f(x) ∈ ℝ. If all second partial derivatives of f exist and are continuous over the domain of the function, then the Hessian matrix H of f is a square n×n matrix, usually defined and arranged as follows:

$$\mathbf{H}_f = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[1ex] \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$

or, by stating an equation for the coefficients using indices i and j,

$$\left(\mathbf{H}_f\right)_{i,j} = \frac{\partial^2 f}{\partial x_i\,\partial x_j}.$$
The Hessian matrix is a symmetric matrix, since the hypothesis of continuity of the second derivatives implies that the order of differentiation does not matter (Schwarz's theorem).
The determinant of the Hessian matrix is called the Hessian determinant.^{[1]}
The Hessian matrix of a function f is the Jacobian matrix of the gradient of the function: H(f(x)) = J(∇f(x)).
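These definitions can be checked numerically. The following Python sketch (the function, helper name, and step size are illustrative choices, not part of any standard API) approximates the Hessian by central finite differences and confirms its symmetry:

```python
import numpy as np

def hessian_fd(f, x, h=1e-5):
    """Approximate the Hessian of a scalar function f at x
    by central finite differences (a numerical sketch)."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            # central difference for d^2 f / (dx_i dx_j)
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

# Example: f(x, y) = x^2 y + y^3 has Hessian [[2y, 2x], [2x, 6y]].
f = lambda v: v[0]**2 * v[1] + v[1]**3
x0 = np.array([1.0, 2.0])
H = hessian_fd(f, x0)
print(np.round(H, 3))       # approximately [[4., 2.], [2., 12.]]
print(np.allclose(H, H.T))  # symmetry (Schwarz's theorem): True
```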
Applications
Inflection points
If f is a homogeneous polynomial in three variables, the equation f = 0 is the implicit equation of a plane projective curve. The inflection points of the curve are exactly the nonsingular points where the Hessian determinant is zero. Since the Hessian determinant of a cubic is itself a polynomial of degree 3, it follows by Bézout's theorem that a cubic plane curve has at most 3 × 3 = 9 inflection points.
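As a concrete illustration, the following SymPy sketch (the Fermat cubic is an example chosen here, not one discussed above) computes the Hessian determinant of a homogeneous cubic; the inflection points are the common zeros of the curve and this determinant:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**3 + y**3 + z**3        # Fermat cubic, a homogeneous polynomial
H = sp.hessian(f, (x, y, z))  # matrix of second partial derivatives
det_H = sp.factor(H.det())
print(det_H)                   # 216*x*y*z, again homogeneous of degree 3
# Inflection points solve f = 0 and det_H = 0 simultaneously;
# Bezout's theorem bounds the number of solutions by 3 * 3 = 9.
```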
Second-derivative test
The Hessian matrix of a convex function is positive semidefinite. Refining this property allows us to test whether a critical point x is a local maximum, local minimum, or a saddle point, as follows:
If the Hessian is positive-definite at x, then f attains an isolated local minimum at x. If the Hessian is negative-definite at x, then f attains an isolated local maximum at x. If the Hessian has both positive and negative eigenvalues, then x is a saddle point for f. Otherwise the test is inconclusive. This implies that at a local minimum the Hessian is positive-semidefinite, and at a local maximum the Hessian is negative-semidefinite.
Note that for positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). However, more can be said from the point of view of Morse theory.
The second-derivative test for functions of one and two variables is simple. In one variable, the Hessian contains just one second derivative; if it is positive, then x is a local minimum, and if it is negative, then x is a local maximum; if it is zero, then the test is inconclusive. In two variables, the determinant can be used, because the determinant is the product of the eigenvalues. If it is positive, then the eigenvalues are both positive or both negative, and the sign of a diagonal entry such as f_{xx} (or of the trace) then distinguishes a local minimum from a local maximum. If it is negative, then the two eigenvalues have different signs and x is a saddle point. If it is zero, then the second-derivative test is inconclusive.
Equivalently, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost) minors (determinants of submatrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization—the case in which the number of constraints is zero. Specifically, the sufficient condition for a minimum is that all of these principal minors be positive, while the sufficient condition for a maximum is that the minors alternate in sign, with the 1×1 minor being negative.
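The eigenvalue form of the test is straightforward to mechanize. A minimal sketch, assuming the Hessian at the critical point has already been computed (the helper name and tolerance are illustrative):

```python
import numpy as np

def classify_critical_point(H, tol=1e-10):
    """Second-derivative test at a critical point, given the Hessian H there.
    tol guards against treating tiny eigenvalues as nonzero."""
    eig = np.linalg.eigvalsh(H)           # H is symmetric, so eigvalsh applies
    if np.all(eig > tol):
        return "isolated local minimum"   # positive definite
    if np.all(eig < -tol):
        return "isolated local maximum"   # negative definite
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"             # indefinite
    return "inconclusive"                 # semidefinite but not definite

# f(x, y) = x^2 - y^2 has a critical point at the origin, with Hessian:
print(classify_critical_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point
```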
Critical points
If the gradient (the vector of the partial derivatives) of a function f is zero at some point x, then f has a critical point (or stationary point) at x. The determinant of the Hessian at x is called, in some contexts, a discriminant. If this determinant is zero then x is called a degenerate critical point of f, or a non-Morse critical point of f. Otherwise it is non-degenerate, and called a Morse critical point of f.
The Hessian matrix plays an important role in Morse theory and catastrophe theory, because its kernel and eigenvalues allow classification of the critical points.^{[2]}^{[3]}^{[4]}
Use in optimization
Hessian matrices are used in large-scale optimization problems within Newton-type methods because they are the coefficient of the quadratic term of a local Taylor expansion of a function. That is,

$$y = f(\mathbf{x} + \Delta\mathbf{x}) \approx f(\mathbf{x}) + \nabla f(\mathbf{x})^{\mathsf T}\, \Delta\mathbf{x} + \tfrac{1}{2}\, \Delta\mathbf{x}^{\mathsf T}\, \mathbf{H}(\mathbf{x})\, \Delta\mathbf{x},$$
where ∇f is the gradient (∂f/∂x_{1}, ..., ∂f/∂x_{n}). Computing and storing the full Hessian matrix takes Θ(n^{2}) memory, which is infeasible for high-dimensional functions such as the loss functions of neural nets, conditional random fields, and other statistical models with large numbers of parameters. For such situations, truncated-Newton and quasi-Newton algorithms have been developed. The latter family of algorithms use approximations to the Hessian; one of the most popular quasi-Newton algorithms is BFGS.^{[5]}
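For instance, SciPy's generic minimize interface exposes BFGS; a minimal run on its stock Rosenbrock test function (the starting point is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Quasi-Newton optimization: BFGS maintains an approximation to the
# (inverse) Hessian built from gradient differences, never forming
# the true n x n matrix explicitly.
x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, method='BFGS', jac=rosen_der)
print(res.x)  # approximately [1., 1.], the global minimum
```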
Such approximations may use the fact that an optimization algorithm uses the Hessian only as a linear operator H(v), and proceed by first noticing that the Hessian also appears in the local expansion of the gradient:

$$\nabla f(\mathbf{x} + \Delta\mathbf{x}) = \nabla f(\mathbf{x}) + \mathbf{H}(\mathbf{x})\, \Delta\mathbf{x} + \mathcal{O}\!\left(\|\Delta\mathbf{x}\|^2\right)$$

Letting Δx = rv for some scalar r, this gives

$$\nabla f(\mathbf{x} + r\mathbf{v}) = \nabla f(\mathbf{x}) + r\, \mathbf{H}(\mathbf{x})\, \mathbf{v} + \mathcal{O}(r^2),$$

i.e.,

$$\mathbf{H}(\mathbf{x})\, \mathbf{v} = \frac{1}{r}\left[\nabla f(\mathbf{x} + r\mathbf{v}) - \nabla f(\mathbf{x})\right] + \mathcal{O}(r),$$
so if the gradient is already computed, the approximate Hessian-vector product can be computed by a linear (in the size of the gradient) number of scalar operations. (While simple to program, this approximation scheme is not numerically stable, since r has to be made small to prevent error due to the O(r) term, but decreasing it loses precision in the first term.^{[6]})
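A minimal sketch of this gradient-difference approximation (the test function and names are illustrative, not from any particular library):

```python
import numpy as np

def hvp(grad, x, v, r=1e-6):
    """Approximate the Hessian-vector product H(x) v from two gradient calls:
    H(x) v ~ (grad(x + r v) - grad(x)) / r.
    The O(r) truncation error versus cancellation for small r is exactly
    the instability discussed above."""
    return (grad(x + r * v) - grad(x)) / r

# Example: f(x) = x0^2 * x1 + x1^3, with gradient written analytically.
grad = lambda x: np.array([2 * x[0] * x[1], x[0]**2 + 3 * x[1]**2])
x0 = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])
print(hvp(grad, x0, v))  # approximately [4., 2.], the first column of H
```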
Other applications
The Hessian matrix is commonly used to express image processing operators in image processing and computer vision (see the Laplacian of Gaussian (LoG) blob detector, the determinant of Hessian (DoH) blob detector, and scale space).
Generalizations
Bordered Hessian
A bordered Hessian is used for the second-derivative test in certain constrained optimization problems. Given the function f considered previously, but adding a constraint function g such that g(x) = c, the bordered Hessian is the Hessian of the Lagrange function Λ(x, λ) = f(x) + λ[g(x) − c]:^{[7]}

$$\mathbf{H}(\Lambda) = \begin{bmatrix} 0 & \dfrac{\partial g}{\partial x_1} & \cdots & \dfrac{\partial g}{\partial x_n} \\[1ex] \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_1^2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_1\, \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial g}{\partial x_n} & \dfrac{\partial^2 \Lambda}{\partial x_n\, \partial x_1} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_n^2} \end{bmatrix}$$
If there are, say, m constraints then the zero in the upperleft corner is an m × m block of zeros, and there are m border rows at the top and m border columns at the left.
The above rules stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian cannot apply here, since a bordered Hessian can be neither negative-definite nor positive-definite: if z is any vector whose sole nonzero entry is its first, then z^{T}Hz = 0.
The second derivative test consists here of sign restrictions of the determinants of a certain set of n – m submatrices of the bordered Hessian.^{[8]} Intuitively, one can think of the m constraints as reducing the problem to one with n – m free variables. (For example, the maximization of f(x_{1},x_{2},x_{3}) subject to the constraint x_{1}+x_{2}+x_{3} = 1 can be reduced to the maximization of f(x_{1},x_{2},1–x_{1}–x_{2}) without constraint.)
Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upperleftjustified submatrices) of the bordered Hessian, for which the first 2m leading principal minors are neglected, the smallest minor consisting of the truncated first 2m+1 rows and columns, the next consisting of the truncated first 2m+2 rows and columns, and so on, with the last being the entire bordered Hessian; if 2m+1 is larger than n+m, then the smallest leading principal minor is the Hessian itself.^{[9]} There are thus n–m minors to consider, each evaluated at the specific point being considered as a candidate maximum or minimum. A sufficient condition for a local maximum is that these minors alternate in sign with the smallest one having the sign of (–1)^{m+1}. A sufficient condition for a local minimum is that all of these minors have the sign of (–1)^{m}. (In the unconstrained case of m=0 these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite respectively).
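As an illustration, consider maximizing f(x, y) = xy subject to x + y = 2, so that n = 2 and m = 1 (a worked example chosen here, not taken from the text). There is n − m = 1 minor to check, namely the determinant of the full bordered Hessian:

```python
import numpy as np

# At the candidate point (1, 1) with multiplier lambda = 1, the Lagrangian
# xy + lambda*(2 - x - y) has second partials [[0, 1], [1, 0]] and the
# constraint gradient is (1, 1), giving the bordered Hessian:
Hb = np.array([[0.0, 1.0, 1.0],   # zero block and constraint-gradient border
               [1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0]])

# The smallest minor uses the first 2m + 1 = 3 rows and columns, i.e. the
# whole matrix here. A local maximum requires its sign to be (-1)^(m+1).
m = 1
minor = np.linalg.det(Hb)
print(minor, np.sign(minor) == (-1) ** (m + 1))  # ~2.0 True -> local maximum
```

This agrees with direct substitution: on the line x + y = 2, the product xy = x(2 − x) is maximized at x = 1.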
Vector-valued functions
If f is instead a vector field f : ℝ^{n} → ℝ^{m}, i.e.

$$\mathbf{f}(\mathbf{x}) = \left(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\right),$$

then the collection of second partial derivatives is not an n×n matrix, but rather a third-order tensor. This can be thought of as an array of m Hessian matrices, one for each component of f:

$$\mathbf{H}(\mathbf{f}) = \left(\mathbf{H}(f_1), \mathbf{H}(f_2), \ldots, \mathbf{H}(f_m)\right).$$
This tensor degenerates to the usual Hessian matrix when m = 1.
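A small SymPy sketch (the two-component map is an example chosen here) that builds this array of Hessians componentwise:

```python
import sympy as sp

x, y = sp.symbols('x y')
# f : R^2 -> R^2, so the second derivatives form an array of two Hessians.
components = [x**2 * y, sp.sin(x) + y**3]
hessians = [sp.hessian(fk, (x, y)) for fk in components]
for k, Hk in enumerate(hessians):
    print(f"H(f_{k + 1}) =")
    sp.pprint(Hk)
# For m = 1 the list collapses to the single ordinary Hessian matrix.
```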
Generalization to the complex case
In the context of several complex variables, the Hessian may be generalized. Suppose f : ℂ^{n} → ℂ, and write f(z_{1}, ..., z_{n}). Then one may generalize the Hessian to

$$\left(\mathbf{H}_f\right)_{i,j} = \frac{\partial^2 f}{\partial z_i\, \partial \bar{z}_j}.$$

Note that if f satisfies the n-dimensional Cauchy–Riemann conditions, then the complex Hessian matrix is identically zero.
Generalizations to Riemannian manifolds
Let (M, g) be a Riemannian manifold and ∇ its Levi-Civita connection. Let f : M → ℝ be a smooth function. We may define the Hessian tensor

$$\operatorname{Hess}(f) \in \Gamma\!\left(T^*M \otimes T^*M\right) \quad \text{by} \quad \operatorname{Hess}(f) := \nabla \nabla f = \nabla df,$$

where we have taken advantage of the first covariant derivative of a function being the same as its ordinary derivative. Choosing local coordinates {x^{i}} we obtain the local expression for the Hessian as

$$\operatorname{Hess}(f) = \nabla_i\, \partial_j f \; dx^i \otimes dx^j = \left(\frac{\partial^2 f}{\partial x^i\, \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k}\right) dx^i \otimes dx^j,$$

where Γ^{k}_{ij} are the Christoffel symbols of the connection. Other equivalent forms for the Hessian are given by

$$\operatorname{Hess}(f)(X, Y) = \langle \nabla_X \operatorname{grad} f,\, Y \rangle \quad \text{and} \quad \operatorname{Hess}(f)(X, Y) = X(Yf) - df(\nabla_X Y).$$
See also
 The determinant of the Hessian matrix is a covariant; see Invariant of a binary form
 Polarization identity, useful for rapid calculations involving Hessians.
 Jacobian matrix
 Hessian equations
Notes
 ^ Binmore, Ken; Davies, Joan (2007). Calculus Concepts and Methods. Cambridge University Press. p. 190. ISBN 9780521775410. OCLC 717598615.
 ^ Callahan, James J. (2010). Advanced Calculus: A Geometric View. Springer Science & Business Media. p. 248. ISBN 9781441973320.
 ^ Casciaro, B.; Fortunato, D.; Francaviglia, M.; Masiello, A., eds. (2011). Recent Developments in General Relativity. Springer Science & Business Media. p. 178. ISBN 9788847021136.
 ^ Castrigiano, Domenico P. L.; Hayes, Sandra A. (2004). Catastrophe Theory. Westview Press. p. 18. ISBN 9780813341262.
 ^ Nocedal, Jorge; Wright, Stephen (2000). Numerical Optimization. Springer Verlag. ISBN 9780387987934.
 ^ Pearlmutter, Barak A. (1994). "Fast exact multiplication by the Hessian" (PDF). Neural Computation. 6 (1): 147–160. doi:10.1162/neco.1994.6.1.147.
 ^ Hallam, Arne (October 7, 2004). "Econ 500: Quantitative Methods in Economic Analysis I" (PDF). Iowa State.
 ^ Neudecker, Heinz; Magnus, Jan R. (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley & Sons. p. 136. ISBN 9780471915164.
 ^ Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (Third ed.). McGrawHill. p. 386. ISBN 9780070108134.
Further reading
 Lewis, David W. (1991). Matrix Theory. Singapore: World Scientific. ISBN 9789810206895.
 Magnus, Jan R.; Neudecker, Heinz (1999). "The Second Differential". Matrix Differential Calculus: With Applications in Statistics and Econometrics (Revised ed.). New York: Wiley. pp. 99–115. ISBN 047198633X.