A Class of Iterative Methods for Solving Nonlinear Equations with Optimal Fourth-order Convergence

In this paper we construct a new third-order iterative method for solving nonlinear equations with simple roots by using the inverse function theorem. We then derive a class of optimal fourth-order methods, requiring one function and two first-derivative evaluations per full cycle, obtained by improving an existing third-order method with the help of a weight function. Some physical examples are given to illustrate the efficiency and performance of our methods.


Introduction
Nonlinear equations play an important role in science and engineering. Finding an analytic solution is not always possible, so numerical methods are used in such situations. The classical Newton's method is the best-known iterative method for solving nonlinear equations. To improve the local order of convergence and the efficiency index, many modified third-order methods have been presented in the literature; for details we refer to [16, 9, 1, 15, 5] and the references therein.
A well-known third-order variant of Newton's method, given in (1.1), was obtained from Newton's theorem (1.2). Many authors have used this idea, approximating the integral ∫_{x}^{x_n} f′(t) dt by different quadrature rules; for more detail see [12, 13, 14, 8, 9, 15, 5] and the references therein. If we approximate the integral in (1.2) by (1.5), we recover the formula (1.1). Homeier [9] then applied Newton's theorem (1.2) to the inverse function x = f^{−1}(y) = g(y) instead of y = f(x), so that the method (1.4) takes a new form; this method is again third-order. We now state the following definitions.

Definition 1.1. Let f(x) be a real function with a simple root α and let x_n be a sequence of real numbers converging to α. The order of convergence m is given by

lim_{n→∞} |x_{n+1} − α| / |x_n − α|^m = ζ ≠ 0,

where ζ is the asymptotic error constant and m ∈ ℝ⁺.

Definition 1.2. Let β be the number of function evaluations of the method per iteration. The efficiency of the method is measured by the efficiency index [17, 11], defined as µ^{1/β}, where µ is the order of the method.
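Homeier's inverse-function variant described above can be written compactly in code. The following is a minimal Python sketch (function names and tolerances are our own choices, not from the paper) of the form y_n = x_n − f(x_n)/(2f′(x_n)), x_{n+1} = x_n − (f(x_n)/2)(1/f′(x_n) + 1/f′(y_n)) commonly attributed to Homeier [9]:

```python
# Homeier's third-order scheme [9], sketched for illustration;
# f and fp (its first derivative) are user-supplied callables.
def homeier(f, fp, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / (2 * fp(x))                     # half Newton step
        x = x - (fx / 2) * (1 / fp(x) + 1 / fp(y))   # average of inverse slopes
    return x

# Usage: root of f(x) = x**3 - 2 near x0 = 1
root = homeier(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

For f(x) = x³ − 2 with x₀ = 1 this converges to 2^{1/3} in a handful of iterations, consistent with third-order convergence.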
Kung and Traub [10] presented a conjecture on the optimality of iterative methods, giving 2^{n−1} as the optimal order for a method using n evaluations per iteration. This means that Newton's iteration, with two evaluations per iteration, is optimal, with efficiency index 2^{1/2} ≈ 1.414. Taking this optimality concept into account, many authors have tried to build iterative methods of optimal higher order of convergence.
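Definition 1.2 and the Kung–Traub bound are easy to tabulate. The snippet below (illustrative only) computes the efficiency index µ^{1/β} for the orders and evaluation counts discussed in this paper:

```python
# Efficiency index mu**(1/beta): order mu achieved with beta evaluations.
def efficiency_index(mu, beta):
    return mu ** (1.0 / beta)

print(efficiency_index(2, 2))  # Newton: 2**(1/2), about 1.4142 (optimal for 2 evals)
print(efficiency_index(3, 3))  # third-order with 3 evals: 3**(1/3), about 1.4422
print(efficiency_index(4, 3))  # optimal fourth-order: 4**(1/3), about 1.5874
```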
This paper is organized as follows: in Section 2, we describe the new third-order iterative method based on the inverse function theorem. In the next section we optimize the method of Chun et al. [4] with the help of a weight function. Finally, in the last section we give some physical examples and compare the performance of our new methods with some well-known methods.

Development of the method and convergence analysis
In this section we use the concept of the inverse function to derive variants of the bisectrix Newton's method. In the formula (1.1) the function y = f(x) has been used; here we use the inverse function x = f^{−1}(y) = g(y) instead. Setting y = f(x) = 0, we obtain the method (2.2). We now prove that the order of convergence of this method is also three.

Proof. Let e_n = x_n − α be the error in the n-th iterate and c_h = f^{(h)}(α)/(h! f′(α)), h = 1, 2, 3, .... We provide the Taylor series expansion of each term involved in (2.2). Expanding f(x_n) and f′(x_n) about the simple root α and combining the resulting series, it can easily be found that (2.7) holds. Taking (2.7) into consideration, we then expand f′(y_n) about the root. Substituting these expansions into (2.2) yields a third-order error equation, which confirms the result.
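The theoretical order proved above can also be checked numerically with the computational order of convergence (COC), estimated from three successive error ratios. Below is a small sketch (our own helper, not part of the paper), demonstrated on Newton's method, whose COC should be close to 2:

```python
import math

# Computational order of convergence (COC) from successive iterates:
# rho = log(e_n / e_{n-1}) / log(e_{n-1} / e_{n-2}), with e_k = |x_{k+1} - x_k|.
def coc(xs):
    # needs at least four iterates approaching the root
    e = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton iterates for f(x) = x**2 - 2 starting from x0 = 1.5
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
print(coc(xs))  # close to 2, matching Newton's quadratic order
```

The same check applied to the method (2.2) should give a value close to 3.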

Optimal fourth-order iterative method
Using the circle-of-curvature concept, Chun et al. [4] constructed a third-order iterative method defined by (3.1). The order of this method is three, with three evaluations (one derivative and two function) per full iteration. Clearly its efficiency index (3^{1/3} ≈ 1.4422) is not optimal (the optimal value is 4^{1/3} ≈ 1.5874). We now make use of the weight-function approach to build our optimal class based on (3.1) by a simple change in its first step. Thus we consider (3.2), where G(t) is a real-valued weight function with t = f′(y_n)/f′(x_n) and a is a real constant. The weight function should be chosen so that the order of convergence reaches the optimal level four without using more function evaluations. The following theorem indicates under what conditions on the weight function and the constant a in (3.2) the order of convergence reaches the optimal level four.

Proof. Using (2.4) and (2.5) and a = 2/3 in the first step of (3.2), we obtain (3.5). Furthermore, by virtue of (3.6) and (3.3), we attain (3.7). Finally, using (3.7) in (3.2), we arrive at the general error equation, which reveals the fourth-order convergence. This proves the theorem.
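The specific weight function G used in the theorem is not reproduced in this excerpt. As a known representative of the same family (a = 2/3, one function and two derivative evaluations, t = f′(y_n)/f′(x_n)), the classical Jarratt fourth-order scheme can be sketched as follows:

```python
import math

# Classical Jarratt fourth-order method: one function and two derivative
# evaluations per step, with the same first step y = x - (2/3) f(x)/f'(x).
def jarratt(f, fp, x0, tol=1e-14, max_iter=25):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        fpx = fp(x)
        y = x - (2.0 / 3.0) * fx / fpx
        fpy = fp(y)
        # weight (3 f'(y) + f'(x)) / (2 (3 f'(y) - f'(x))) applied to the Newton step
        x = x - 0.5 * (3 * fpy + fpx) / (3 * fpy - fpx) * fx / fpx
    return x

# Usage: root of f(x) = cos(x) - x near x0 = 1
root = jarratt(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.0)
```

With three evaluations per iteration this reaches order four, i.e. the optimal efficiency index 4^{1/3}.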
It is obvious that our novel class of iterations requires three evaluations per iteration, i.e., two first-derivative and one function evaluation. Thus our new methods are optimal. Now, by choosing appropriate weight functions in (3.2), we can generate a number of optimal two-step iterative methods; here we give one of them.

Numerical examples

Example 4.1 [2]. Consider Planck's radiation law, where λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, k is Boltzmann's constant, h is Planck's constant and c is the speed of light. This formula calculates the energy density within an isothermal blackbody. We want to find the wavelength λ which maximizes the energy density ϕ(λ). For the maximum of ϕ(λ), it can easily be seen that

(ch/λkT) e^{ch/λkT} / (e^{ch/λkT} − 1) = 5.
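As a quick numerical check of this condition (a sketch, separate from the comparison tables), the substitution x = ch/λkT reduces it to e^{−x} = 1 − x/5, whose nonzero root can be located with a plain Newton iteration:

```python
import math

# Condition for the maximum of Planck's energy density after the
# substitution x = ch/(lambda*k*T): e^{-x} = 1 - x/5.
f1 = lambda x: math.exp(-x) - 1 + x / 5
df1 = lambda x: -math.exp(-x) + 1 / 5

x = 5.0                 # initial guess near the nonzero root
for _ in range(10):     # plain Newton steps
    x -= f1(x) / df1(x)
print(x)                # about 4.96511, i.e. lambda_max = ch/(4.96511 kT)
```

The recovered value x ≈ 4.96511 is the constant in Wien's displacement law.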
Let x = ch/λkT; then the condition above becomes

e^{−x} = 1 − x/5. (4.2)

This equation can be rewritten as f_1(x) = e^{−x} − 1 + x/5 = 0. Our aim is to find the root of the equation f_1(x) = 0. Clearly zero is one of its roots, but it is not of interest here. If we take x = 5, then the R.H.S. of (4.2) becomes zero while the L.H.S. is e^{−5} ≈ 6.74 × 10^{−3}, which implies that one root of f_1(x) = 0 lies near 5. We therefore compare some well-known methods with our methods using the initial guess 5; the results are given in Table 1.

Example 4.2. The governing equation can be rewritten as f_2(x) = 0. An engineer has estimated the depth to be x = 2.5. Here we find the root of f_2(x) = 0 with initial guess 2.5 and compare some well-known methods with our methods; the results are given in Table 2.

Example 4.3 [6]. The vertical stress σ_z generated at a point in an elastic continuum under the edge of a strip footing supporting a uniform pressure q is given by Boussinesq's formula. A scientist wishes to estimate the value of x at which the vertical stress σ_z will be 25 percent of the footing stress q.
Initially it is estimated that x = 0.4. Setting σ_z equal to 25 percent of the footing stress q, the above formula can be rewritten as f_3(x) = 0. Now we find the root of f_3(x) = 0 with initial guess 0.4 and compare some well-known methods with our methods; the results are given in Table 3.

Table 3. Errors occurring in the estimates of the root of the function f_3 by the methods described, with initial guess x_0 = 0.4.

Conclusion
In this paper we have presented a new third-order method and a class of optimal fourth-order iterative methods for solving nonlinear equations with simple roots. The third-order method is obtained by using the inverse function theorem, and the class of optimal fourth-order methods is obtained with the help of a weight function applied to the existing third-order method, without any additional function evaluations. Three physical examples are given to illustrate the superior performance of our methods by comparing them with some well-known existing third- and fourth-order iterative methods.