A New Algorithm for Spectral Conjugate Gradient in Nonlinear Optimization

Nonlinear conjugate gradient (CJG) algorithms are widely used to solve large-scale unconstrained optimization problems. Because of their low memory requirements and global convergence properties, they appear in a wide variety of fields, and the method has recently undergone many investigations and modifications aimed at improving it. Optimization is also significant in daily life: whatever we do, we strive for the best outcome, such as the highest profit, the lowest loss, the shortest route, or the shortest time, which correspond to minima and maxima in mathematics, and one family of methods for such problems is spectral gradient descent. For multidimensional unconstrained objective functions, the spectral conjugate gradient (SCJG) approach is a powerful tool. In this study, we describe a new SCJG technique and quantify its performance. Under standard assumptions, we establish the descent condition, a sufficient descent theorem, the conjugacy condition, and global convergence using the strong Wolfe and Powell line search. Numerical results and graphs were produced using benchmark functions, which are often used as classical test functions, to demonstrate the efficiency of the proposed approach. According to the numerical results, the proposed method is more efficient than some existing techniques. In addition, we show how the new method may be used to improve solutions and outcomes.


Introduction
Gradient-based procedures are among the most efficient algorithms for unconstrained optimization, owing to their easy implementation, convergence properties, and ability to handle a wide range of unconstrained multi-objective optimization problems. The CJG approach is widely used for optimization because of its fast convergence rate, small memory footprint, and simple iterations [1]. The unconstrained optimization problem can be stated as
$$\min f(x), \qquad x \in \mathbb{R}^{n}, \tag{1}$$

where $\mathbb{R}^{n}$ is an $n$-dimensional Euclidean space and $f$ is a continuously differentiable function. The CJG technique creates a sequence of iterates [2]:

$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$

where $x_k$ is the current iteration point, $\alpha_k > 0$ is a step length, and $d_k$ is the search direction. The first search direction is usually the negative of the gradient, which is the steepest descent direction [3], i.e., $d_0 = -g_0$, and subsequent directions are defined recursively by

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \tag{3}$$

in which $g_k = \nabla f(x_k)$. Different choices of the parameter $\beta_k$ result in various conjugate gradient algorithms.
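As an illustration only, the iteration scheme (2)-(3) can be sketched in a few lines of Python. This minimal sketch assumes a quadratic test function, so that the step length can be computed exactly, and uses the Fletcher-Reeves choice of $\beta_k$ as a placeholder; it is not the method proposed in this paper.

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-8, max_iter=100):
    """Sketch of iterations (2)-(3) for f(x) = 0.5 x^T A x - b^T x,
    using the exact step length available in the quadratic case."""
    x = x0.astype(float)
    g = A @ x - b                          # g_k = grad f(x_k)
    d = -g                                 # d_0 = -g_0, steepest descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)     # exact line search step length
        x = x + alpha * d                  # eq. (2)
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves beta_k (placeholder choice)
        d = -g_new + beta * d              # eq. (3)
        g = g_new
    return x

x_min = cg_quadratic(np.array([[3.0, 1.0], [1.0, 2.0]]), np.array([1.0, 1.0]), np.zeros(2))
```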
Some well-known formulas are the following:

$$\beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k}, \quad \beta_k^{FR} = \frac{\|g_{k+1}\|^{2}}{\|g_k\|^{2}}, \quad \beta_k^{PR} = \frac{g_{k+1}^{T} y_k}{\|g_k\|^{2}}, \quad \beta_k^{LS} = -\frac{g_{k+1}^{T} y_k}{d_k^{T} g_k}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^{2}}{d_k^{T} y_k}, \quad \beta_k^{CD} = -\frac{\|g_{k+1}\|^{2}}{d_k^{T} g_k},$$

where $y_k = g_{k+1} - g_k$. In the above methods, HS is due to Hestenes and Stiefel [4], FR to Fletcher and Reeves [5], PR to Polak and Ribière [6], LS to Liu and Storey [7], DY to Dai and Yuan [8], and CD is the Conjugate Descent method [9]; see also the hybrid method of Zhang, L. [10], Ahmed A. Mustafa [11], and Ahmed A. Mustafa and Salah G. Shareef [12]. Many researchers have examined the convergence of the CJG method under various line searches; some use an exact line search (ELS) to derive the step size, while others employ the strong Wolfe line search condition (SWL), described as follows:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^{T} d_k, \tag{4}$$
$$\left|g(x_k + \alpha_k d_k)^{T} d_k\right| \le -\sigma\, g_k^{T} d_k, \tag{5}$$

with $0 < \delta < \sigma < 1$. The spectral CJG technique (SCJG), initially proposed by Barzilai and Borwein [13], is another well-known method that may be used to solve problem (1). The search direction $d_{k+1}$ is determined by

$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k d_k,$$

where $\theta_{k+1}$ is the spectral gradient parameter. The SCJG method often outperforms the basic CJG method. Under suitable assumptions, many researchers, such as Andrei, Jiang, and Raydan [14,15,16], have proposed spectral conjugate gradient algorithms and demonstrated their advantages over many traditional methods.
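For concreteness, the β formulas and the strong Wolfe test (4)-(5) above can be written as a short Python sketch; the function names and the parameter values δ = 10⁻⁴ and σ = 0.1 are illustrative assumptions, not prescriptions from this paper.

```python
import numpy as np

# Some classical beta formulas; here g_new = g_{k+1}, g = g_k, d = d_k, and y = g_new - g.
def beta_fr(g_new, g, d): return (g_new @ g_new) / (g @ g)             # Fletcher-Reeves
def beta_pr(g_new, g, d): return (g_new @ (g_new - g)) / (g @ g)       # Polak-Ribiere
def beta_hs(g_new, g, d): y = g_new - g; return (g_new @ y) / (d @ y)  # Hestenes-Stiefel
def beta_cd(g_new, g, d): return -(g_new @ g_new) / (d @ g)            # Conjugate Descent

def satisfies_strong_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.1):
    """Return True if the trial step alpha satisfies conditions (4) and (5)."""
    g = grad(x)
    armijo    = f(x + alpha * d) <= f(x) + delta * alpha * (g @ d)   # condition (4)
    curvature = abs(grad(x + alpha * d) @ d) <= -sigma * (g @ d)     # condition (5)
    return armijo and curvature
```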

Derivation of the New Method with Its Algorithm

Derivation of the New Formula
Many spectral conjugate gradient formulas have been proposed; see, for example, [17,18].
In this section we derive a new spectral parameter. The SCJG search direction normally takes the form

$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k d_k. \tag{7}$$

In 2015, Shuo-Wang proposed a formula for calculating the value of $\beta_k$; see [21]. The following conjugacy condition was established by Dai and Liao [22]:

$$d_{k+1}^{T} y_k = -t\, g_{k+1}^{T} s_k, \qquad t > 0,$$

where $s_k = x_{k+1} - x_k$.
Dividing both sides of the above equation, we obtain (10). Substituting (10) into (9), we obtain the new search direction (11).
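The new spectral parameter and search direction (9)-(11) are stated above only by reference, so the following Python sketch merely illustrates how a spectral direction of the general form (7) is assembled from a spectral parameter θ and a conjugacy parameter β. The Barzilai-Borwein θ and Hestenes-Stiefel β used as placeholders are classical choices, not the new formulas derived in this section.

```python
import numpy as np

def spectral_direction(g_new, g, d, s, theta_fn, beta_fn):
    """Assemble d_{k+1} = -theta_{k+1} g_{k+1} + beta_k d_k, the general form (7).
    theta_fn and beta_fn stand in for formulas such as (9)-(11)."""
    y = g_new - g                          # y_k = g_{k+1} - g_k
    theta = theta_fn(g_new, g, d, s, y)    # spectral parameter theta_{k+1}
    beta = beta_fn(g_new, g, d, s, y)      # conjugacy parameter beta_k
    return -theta * g_new + beta * d

# Classical placeholder choices, for illustration only (not the new method):
bb_theta = lambda g_new, g, d, s, y: (s @ s) / (s @ y)      # Barzilai-Borwein spectral step
hs_beta  = lambda g_new, g, d, s, y: (g_new @ y) / (d @ y)  # Hestenes-Stiefel beta
```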

Convergence Analysis
The descent condition, sufficient descent property, conjugacy condition, and global convergence of the new algorithm are developed in this section. To this end, we make the following assumptions on the objective function: (1) $f$ is bounded below on the level set $S = \{x \in \mathbb{R}^{n} : f(x) \le f(x_0)\}$, where $x_0$ is the starting point, and $f$ is continuously differentiable in a neighborhood $N$ of the level set $S$. (2) The gradient $g(x)$ is Lipschitz continuous; hence, for any $x, y \in N$ there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L\|x - y\|$. Theorem 3.1: Consider a CJG method with the search direction given by (11); then the descent condition $g_k^{T} d_k < 0$ holds for all $k$ with both exact and inexact line searches.
Proof: If $k = 0$, then $g_0^{T} d_0 = -\|g_0\|^{2} < 0$, so the descent condition holds. Assume now that $k \ge 1$; we prove that the search direction (11) is a descent direction at $x_{k+1}$. Multiplying both sides of equation (11) by $g_{k+1}^{T}$, and multiplying and dividing the last two terms by suitable quantities, and then using the strong Wolfe condition (5), we obtain a bound of the form $g_{k+1}^{T} d_{k+1} \le -c\,\|g_{k+1}\|^{2}$ for some constant $c > 0$. Therefore the descent condition holds for all $k$.
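In computations, the descent property established in Theorem 3.1 can also be monitored numerically at every iteration. The following is a small illustrative sketch; the constant c used in the sufficient-descent variant is an assumed example value, not a quantity prescribed by the paper.

```python
import numpy as np

def is_descent(g, d):
    """Descent condition g_k^T d_k < 0 (Theorem 3.1)."""
    return g @ d < 0.0

def is_sufficient_descent(g, d, c=1e-4):
    """Sufficient descent condition g_k^T d_k <= -c ||g_k||^2 (c is illustrative)."""
    return g @ d <= -c * (g @ g)

# The first direction always satisfies both, since g_0^T d_0 = -||g_0||^2.
g0 = np.array([1.0, -2.0, 0.5])
assert is_descent(g0, -g0) and is_sufficient_descent(g0, -g0)
```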

Theorem 3.2:
Suppose that the search direction is given by (11) and that the step size $\alpha_k$ satisfies the strong Wolfe conditions (4) and (5). For the proof, see [23]. From this result, we can obtain the following convergence theorem for the conjugate gradient method.
Theorem 3.5: Suppose that assumptions (1) and (2) hold, and consider a conjugate gradient method with the search direction (11), in which the step size $\alpha_k$ is obtained by the strong Wolfe conditions (4) and (5). Then $\liminf_{k \to \infty} \|g_k\| = 0$. Proof: Rewriting (7) and (8) and squaring the resulting equation, we obtain (15). Dividing both sides of equation (15) by $(g_{k+1}^{T} d_{k+1})^{2}$, and using the strong Wolfe condition together with the relations established above, the result follows from Theorem 3.2.

Numerical Results
This section is devoted to testing the implementation of the new search direction. We compare the new conjugate gradient search direction with the Conjugate Descent algorithm (CD) and the modified conjugate gradient algorithm (MCD). Table 2 and Figure 1 show the rate of improvement of the new algorithm over the standard algorithms (CD) and (MCD); the numerical results of the new method are better than those of the standard algorithms. Taking the (NOIS) and (NOFS) totals of the standard algorithm (CD) as 100%, the new algorithm improves on (CD) by 16.251% in (NOIS) and 26.1619% in (NOFS); taking the standard algorithm (MCD) as 100%, the new algorithm improves on (MCD) by 87.88% in (NOIS) and 86.2854% in (NOFS). Overall, the new algorithm gives an improvement of 54.1446% compared with the classical algorithms (CD) and (MCD). Figure 2 shows the comparison between the new algorithm and the classical algorithms (CD) and (MCD) in terms of the total number of iterations (NOIS) and the total number of function evaluations (NOFS).
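One plausible way to reproduce the improvement percentages quoted above is to take each baseline's total as 100% and report the relative reduction achieved by the new algorithm. The sketch below uses hypothetical totals as placeholders, since the values of Table 2 are not reproduced here; the improvement formula itself is an assumed reading of the reported rates.

```python
def improvement_rate(baseline_total, new_total):
    """Relative improvement (%) of the new algorithm, with the baseline taken as 100%."""
    return 100.0 * (baseline_total - new_total) / baseline_total

# Hypothetical totals for illustration only; the actual values are in Table 2.
totals = {
    "NOIS": {"CD": 1200, "MCD": 8300, "New": 1005},
    "NOFS": {"CD": 2500, "MCD": 13500, "New": 1846},
}

for metric, row in totals.items():
    for baseline in ("CD", "MCD"):
        rate = improvement_rate(row[baseline], row["New"])
        print(f"{metric}: new vs {baseline} improvement = {rate:.2f}%")
```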

Conclusion
In this paper, we proposed a new spectral conjugate gradient method that possesses global convergence properties. Numerical results have shown that the new algorithm performs better than (CD) and (MCD). In future work, the same approach can be used to propose further new spectral conjugate gradient methods for unconstrained optimization.

Abbreviations
CJG conjugate gradient, SCJG spectral conjugate gradient, Min minimum, $g_k = \nabla f(x_k)$ gradient, $\beta_k$ a parameter with different formulas, ELS exact line search, MCD modified conjugate gradient, SWL strong Wolfe line search, NOIS number of iterations, NOFS number of function evaluations.