


The bisection method works very robustly but often very slowly. A faster approach takes advantage of the derivative of the function to point toward the root. Newton's method (also called the Newton-Raphson method) follows such an approach. Given the value of f(x) at a point x that we believe is close to the root, we can ask: what is an estimate of the distance h to the root? That is, we seek h such that

    f(x + h) = 0.
If h is small, we can approximate this with a truncation of the Taylor expansion of the function about the point x,

    f(x + h) ≈ f(x) + h f'(x) + (h^2 / 2) f''(x) = 0,

which assumes that the function is well behaved. For small h the quadratic term can be ignored, and solving for h gives

    h ≈ -f(x) / f'(x).
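As a concrete illustration of this single step (the test function f(x) = x^2 - 2 and the guess x = 1.5 are assumptions for the example, not from the text):

```python
# One Newton step h = -f(x)/f'(x) for f(x) = x**2 - 2, whose positive
# root is sqrt(2) = 1.41421...  Function and guess are illustrative.
def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

x = 1.5                    # a guess believed to be close to the root
h = -f(x) / fprime(x)      # estimated distance to the root
print(x + h)               # 1.4166666666666667, much closer to sqrt(2)
```

A single step already reduces the error from about 0.086 to about 0.0025.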
Since this approximation will not usually lead immediately to the root, we can create an iterative formula:

    x_{n+1} = x_n - f(x_n) / f'(x_n).
This formula can provide the root very quickly, provided the initial guess is close enough to the root and the derivative f'(x) does not vanish near it.
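Iterating this formula takes only a few lines of code. The sketch below assumes a tolerance on |f(x)| and an iteration cap; these stopping choices, and the test function, are illustrative and not from the text:

```python
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x_n)| < tol."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)          # the Newton update
    raise RuntimeError("no convergence within max_iter steps")

# Example: the positive root of f(x) = x**2 - 2 (an illustrative choice)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x=1.5)
print(root)    # close to sqrt(2) = 1.41421356...
```

Starting from x = 1.5 the iteration reaches machine precision in a handful of steps, illustrating the rapid convergence.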
It is best to use the analytical formula for the derivative if it is known.

Discrete Newton (or Secant) Method

If an analytical derivative is not available, then a numerical derivative can substitute for it. Using a two-point difference, with the two most recent iterates supplying the points,

    f'(x_n) ≈ [f(x_n) - f(x_{n-1})] / (x_n - x_{n-1}),

leads to the iterative formula

    x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / [f(x_n) - f(x_{n-1})].
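A minimal sketch of this two-point iteration (the test function, starting values, and stopping rule are illustrative assumptions, not from the text):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration; needs two initial values x0 and x1."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        # x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("no convergence within max_iter steps")

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)    # close to sqrt(2)
```

Note that the function value at each point is computed once and reused, so each iteration costs only one new function evaluation.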
This formula is called the discrete Newton method or the secant method. For this method you need two initial values.

Newton's Caveats

Beware of the numerical errors that can enter the Newton methods, especially the discrete version. If the derivative is large, the function is varying rapidly and nonlinearly, so the extrapolation to the root may be far off the mark. (It also means that the first-order truncation of the Taylor expansion is invalid.) In the two-point derivative calculation, if the two function values being differenced are both large, their difference may be small and therefore strongly sensitive to the accuracy of the low-order bits. We know from Chapter 2: Tech: Floating Point that care must be taken when dealing with the precision of floating-point numbers.

References & Web Resources

Most introductory numerical computing books cover the Newton methods. See, for example,
Latest update: Dec. 15, 2003


