The Newton-Raphson Method is a powerful and widely used numerical technique for finding the roots of equations. It is particularly useful for solving non-linear equations and is one of the most efficient root-finding algorithms employed in scientific and engineering fields. Starting from an initial guess, the method refines the approximation by following the function's tangent line, converging rapidly to a root under suitable conditions.
The Newton-Raphson iteration formula is derived as follows:
Let x0 be an approximate root of f(x) = 0, and let x1 = x0 + h be the exact root, so that f(x1) = 0.
⇒ f(x0 + h) = 0 … (1)
Expanding equation (1) using Taylor's theorem, we get:
f(x0) + hf'(x0) + (h²/2!)f''(x0) + … = 0
Neglecting the terms in h² and higher (h is assumed small):
f(x0) + hf'(x0) ≈ 0
⇒ h = -f(x0)/f'(x0)
Therefore, x1 = x0 - f(x0)/f'(x0)
Then x1 is a better approximation than x0.
Similarly, the successive approximations x2, x3, …, xn+1 are given by
xn+1 = xn - f(xn)/f'(xn)
This is called the Newton-Raphson formula.
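As a quick sanity check of the formula, here is a one-step Python sketch; the choice f(x) = x² - 2 with x0 = 1.5 is our own illustration, not part of the derivation:

x0 = 1.5                       # initial guess for sqrt(2)
h = -(x0**2 - 2) / (2 * x0)    # h = -f(x0)/f'(x0) with f(x) = x^2 - 2
x1 = x0 + h                    # x1 = x0 - f(x0)/f'(x0)
print(x1)                      # 1.41666..., close to sqrt(2) = 1.41421...

A single step reduces the error from about 0.086 to about 0.0025, a first hint of the quadratic convergence discussed below.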
Geometrically, each step of the Newton-Raphson Method draws the tangent to the curve y = f(x) at the current approximation; the point where this tangent intersects the x-axis is the next approximation of the root.
The rate of convergence of the Newton-Raphson Method is quadratic near a simple root, meaning the number of correct digits roughly doubles with each iteration. However, convergence is not guaranteed if:
- the derivative f'(x) is zero or very small near an approximation (the tangent is nearly horizontal),
- the initial guess x0 is too far from the root, or
- the function is not differentiable near the root.
A simple failure case is sketched below.
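As a hedged illustration of the first failure condition, this short Python sketch (the function choice is ours) starts at a point where the derivative vanishes:

def f(x):
    return x**2 - 4    # roots at x = 2 and x = -2

def df(x):
    return 2 * x       # f'(0) = 0: the tangent at x = 0 is horizontal

x0 = 0.0
try:
    x1 = x0 - f(x0) / df(x0)   # the horizontal tangent never meets the x-axis
except ZeroDivisionError:
    print("Newton-Raphson fails: f'(x0) = 0 at x0 =", x0)

In exact arithmetic the update is undefined; in practice, any x0 where f'(x0) is merely close to zero sends the next iterate far from the root.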
Example 1: Find the cube root of 12 using the Newton-Raphson method, assuming x0 = 2.5.
Solution: To find the b-th root of a, take f(x) = x^b - a, so f'(x) = bx^(b-1) and the Newton-Raphson formula reduces to
xn+1 = (1/b)[(b - 1)xn + a/xn^(b-1)]
From the given, a = 12, b = 3.
Let x0 be the approximate cube root of 12, i.e., x0 = 2.5.
So, x1 = (⅓)[2x0 + 12/x0²]
= (⅓)[2(2.5) + 12/(2.5)²]
= (⅓)[5 + 12/6.25]
= (⅓)(5 + 1.92)
= 6.92/3
= 2.306
Now,
x2 = (⅓)[2x1 + 12/x1²]
= (⅓)[2(2.306) + 12/(2.306)²]
= (⅓)[4.612 + 12/5.3176]
= (⅓)[4.612 + 2.256]
= 6.868/3
= 2.289
Therefore, the approximate cube root of 12 is 2.289.
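The hand computation can be checked numerically. This is a minimal Python sketch of the same iteration (cube_root_newton is an illustrative name, not a library function):

def cube_root_newton(a, x, iterations):
    # Iterate x_{n+1} = (1/3)(2x_n + a/x_n^2), the Newton-Raphson
    # update for f(x) = x^3 - a
    for _ in range(iterations):
        x = (2 * x + a / x**2) / 3
    return x

print(cube_root_newton(12, 2.5, 2))   # 2.2896, vs. 12**(1/3) = 2.2894...

Keeping full precision instead of truncating each iterate gives about 2.2896 after two steps; the exact cube root is 12^(1/3) ≈ 2.2894.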
Example 2: Find a real root of the equation -4x + cos x + 2 = 0 by the Newton-Raphson method, correct up to four decimal places, assuming x0 = 0.5.
Solution: Given equation: -4x + cos x + 2 = 0
x0 = 0.5
Let f(x) = -4x + cos x + 2
f’(x) = -4 – sin x
Now,
f(0) = -4(0) + cos 0 + 2 = 1 + 2 = 3 > 0
f(1) = -4(1) + cos 1 + 2 = -4 + 0.5403 + 2 = -1.4597 < 0
Thus, a root lies between 0 and 1.
Let us find the first approximation.
x1 = x0 – f(x0)/f’(x0)
= 0.5 - [-4(0.5) + cos 0.5 + 2]/[-4 - sin 0.5]
= 0.5 - [(-2 + 2 + cos 0.5)/(-4 - sin 0.5)]
= 0.5 - [cos 0.5/(-4 - sin 0.5)]
= 0.5 - [0.8775/(-4 - 0.4794)]
= 0.5 - (0.8775/-4.4794)
= 0.5 + 0.1959
= 0.6959
The Newton-Raphson Method can be implemented in various programming languages to automate root-finding. The C++, Python, and C programs below each find a root of f(x) = x² - 4 (namely x = 2) starting from the initial guess x0 = 3.
C++ implementation:

#include <iostream>
#include <cmath>
using namespace std;

// f(x) = x^2 - 4, whose positive root is x = 2
double func(double x) { return x * x - 4; }

// f'(x) = 2x
double derivFunc(double x) { return 2 * x; }

void newtonRaphson(double x) {
    double h = func(x) / derivFunc(x);
    // Repeat x = x - f(x)/f'(x) until the step size drops below the tolerance
    while (fabs(h) >= 0.0001) {
        h = func(x) / derivFunc(x);
        x = x - h;
    }
    cout << "The root is: " << x << endl;
}

int main() {
    double x0 = 3;   // initial guess
    newtonRaphson(x0);
    return 0;
}
Python implementation:

def f(x):
    # f(x) = x**2 - 4, whose positive root is x = 2
    return x**2 - 4

def df(x):
    # f'(x) = 2x
    return 2 * x

def newton_raphson(x):
    # Repeat x = x - f(x)/f'(x) until the step size drops below the tolerance
    while abs(f(x) / df(x)) >= 0.0001:
        x = x - f(x) / df(x)
    return x

x0 = 3   # initial guess
root = newton_raphson(x0)
print("The root is:", root)
C implementation:

#include <stdio.h>
#include <math.h>

/* f(x) = x*x - 4, whose positive root is x = 2 */
double func(double x) { return x * x - 4; }

/* f'(x) = 2x */
double derivFunc(double x) { return 2 * x; }

void newtonRaphson(double x) {
    double h = func(x) / derivFunc(x);
    /* Repeat x = x - f(x)/f'(x) until the step size drops below the tolerance */
    while (fabs(h) >= 0.0001) {
        h = func(x) / derivFunc(x);
        x = x - h;
    }
    printf("The root is: %lf\n", x);
}

int main(void) {
    double x0 = 3;   /* initial guess */
    newtonRaphson(x0);
    return 0;
}
The Newton-Raphson method is used for finding the roots of equations numerically. It is widely applied in engineering, physics, and computational mathematics.
It is best used when a good initial guess is available and the function is differentiable near the root; under those conditions, convergence is typically very fast.
Iteration stops when a convergence criterion is met, commonly |f(x)| < tolerance (the function value is close enough to zero) or |xn+1 - xn| < tolerance (the change between iterations is negligible); the implementations above use the step-size test.
The Bisection Method and the Secant Method are common alternatives. Bisection is more robust, since it is guaranteed to converge on any interval where f changes sign, but it converges only linearly. The Secant Method avoids derivatives by approximating f'(x) from the two most recent iterates, at the cost of somewhat slower convergence than Newton-Raphson.
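For comparison, here is a minimal Python sketch of the Secant Method on the same test problem; secant and its arguments are our own naming:

def secant(f, x0, x1, tol=1e-4, max_iter=50):
    # Approximate f'(x) by the slope through the two most recent points
    for _ in range(max_iter):
        slope = (f(x1) - f(x0)) / (x1 - x0)
        x0, x1 = x1, x1 - f(x1) / slope   # secant update
        if abs(x1 - x0) < tol:
            return x1
    raise RuntimeError("did not converge")

print(secant(lambda x: x**2 - 4, 3.0, 2.5))   # ~2.0

Unlike Newton-Raphson, it needs two starting points but no derivative, and its order of convergence (about 1.618) sits between bisection's linear rate and Newton-Raphson's quadratic rate.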