Katta Mallaiah and Veladi Srinivas

In the present paper, we establish two common fixed point theorems, each yielding a unique common fixed point, under a new contractive condition for four self-mappings in an S-metric space. First, we prove a common fixed point theorem using the weaker conditions of compatible mappings of type-(E) and subsequentially continuous mappings. In the second theorem, we use another set of weaker conditions, namely sub-compatible and subsequentially continuous mappings, which are weaker than occasionally weakly compatible mappings. Moreover, it is observed that the mappings in these two theorems are subsequentially continuous, but they are neither continuous nor reciprocally continuous. These two results extend and generalize the existing results of [7] and [9] in the S-metric space. Furthermore, we provide suitable examples to justify our results.

K. Deva and S. Mohanaselvi

A picture fuzzy set is a more powerful tool for dealing with uncertainty in the given information than a fuzzy set or an intuitionistic fuzzy set, and it has active applications in decision-making. The aim of this study is to develop a new possibility measure for ranking picture fuzzy numbers; some of its basic properties are then proved. The proposed measure produces the same ranking order as the score function in the literature, while providing additional information for the relative comparison of picture fuzzy numbers. A picture fuzzy multi-attribute decision-making problem is solved based on the possibility matrix generated by the proposed measure after the information is aggregated using the picture fuzzy Einstein weighted averaging aggregation operator. To demonstrate the importance of the proposed method, a picture fuzzy multi-attribute decision-making strategy is presented along with an application to selecting a suitable alternative. The superiority of the proposed method and the limitations of existing methods are discussed with the help of a comparative study. Finally, a numerical example and a comparative analysis are provided to illustrate the practicality and feasibility of the proposed method.
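As a minimal sketch of the objects involved: a picture fuzzy number (PFN) carries positive, neutral and negative membership degrees, and one common score function in the literature is s = μ − ν. The values and the score choice below are illustrative assumptions, not the paper's possibility measure.

```python
# A PFN is (mu, eta, nu) with mu, eta, nu >= 0 and mu + eta + nu <= 1.
# The score s = mu - nu is one common ranking device in the literature;
# the paper's possibility measure refines such rankings.

def is_valid_pfn(mu, eta, nu):
    return min(mu, eta, nu) >= 0.0 and mu + eta + nu <= 1.0

def score(pfn):
    mu, eta, nu = pfn
    return mu - nu

p1 = (0.6, 0.1, 0.2)
p2 = (0.5, 0.2, 0.2)
assert is_valid_pfn(*p1) and is_valid_pfn(*p2)
ranked = sorted([p1, p2], key=score, reverse=True)  # p1 ranks first
```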

Arash Pourkia

Braid groups and their representations are a central object of study, not only in low-dimensional topology but also in many other branches of mathematics and theoretical physics. The Burau representation of the Artin braid group, which has two versions, reduced and unreduced, has been the focus of extensive research since its discovery in the 1930s. It remains one of the most important representations of the braid group, partly because of its connection to the Alexander polynomial, one of the first and most useful invariants for knots and links. In the present work, we show that interesting representations of the braid group can be obtained by a simple and intuitive approach in which we analyse the path of the strands in a braid and encode the over-crossings, under-crossings and no-crossings into parameters. More precisely, at each crossing where one strand crosses over another, we assign a parameter t to the top strand and a parameter b to the bottom strand. The parameter t is a relative weight given to the over-crossing strand relative to the strand it crosses, which determines the position of t in the matrix representation; similarly, b is a relative weight given to the under-crossing strand, which determines the position of b in the matrix representation. We show that this simple path-analysing approach leads to an interesting elementary representation. Next, we show that, following the same intuitive approach and introducing only one additional parameter, we can greatly improve the representation to one with a much smaller kernel. This more general representation includes the unreduced Burau representation as a special case. Our new path-analysing approach has the advantage of capturing the fundamental interactions of the strands in a braid by a very simple and intuitive method: we follow each strand in the braid and record its history as it interacts with other strands via over-crossings, under-crossings or no-crossings. This leads directly to the desired representations.
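A small computational sketch of the special case mentioned above: the unreduced Burau representation, where the over-strand carries the weight t and the under-strand the weight 1 (i.e. b = 1). The generator matrices act as the identity except on the two crossing strands, and they satisfy the braid relation, which we verify symbolically for three strands.

```python
# Unreduced Burau representation of the 3-strand braid group B_3.
# At a crossing of strands i, i+1 the over-strand row gets (1-t, t)
# and the under-strand row gets (1, 0); all other strands are fixed.
import sympy as sp

t = sp.symbols('t')

def burau_gen(i, n):
    """Unreduced Burau matrix of the generator sigma_i (0-based) in B_n."""
    M = sp.eye(n)
    M[i, i], M[i, i + 1] = 1 - t, t      # over-strand row
    M[i + 1, i], M[i + 1, i + 1] = 1, 0  # under-strand row
    return M

s1, s2 = burau_gen(0, 3), burau_gen(1, 3)
# The braid relation sigma1 sigma2 sigma1 = sigma2 sigma1 sigma2 holds:
lhs, rhs = s1 * s2 * s1, s2 * s1 * s2
assert (lhs - rhs).expand() == sp.zeros(3, 3)
```

Setting t = 1 recovers a permutation representation, which is one way to see that the parameters record strictly more information than the underlying permutation of strands.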

Vishally Sharma and A. Parthiban

An assignment of integers to the vertices of a graph subject to certain constraints is called a vertex labeling of the graph. Different types of graph labeling techniques are used in coding theory, cryptography, radar, missile guidance, X-ray crystallography, etc. A divisor cordial labeling (DCL) of a graph is a bijection from its node set to the set {1, 2, ..., |V|} such that, if each edge is allotted the label 1 when one endpoint label divides the other and 0 otherwise, then the absolute difference between the number of edges labeled 1 and the number of edges labeled 0 does not exceed 1. A graph that admits a DCL is called a divisor cordial graph (DCG). A complete graph is a graph in which any 2 nodes are adjacent, and the lily graph is formed by joining complete bipartite graphs and paths sharing a common node. In this paper, we propose an interesting conjecture concerning the DCL of the lily graph, besides discussing certain general results concerning the DCL of complete-graph-related graphs. We also prove that the lily graph admits a DCL for every order. Further, we establish the DCL of some related graphs in the context of graph operations such as duplication of a node by an edge, duplication of a node by a node, extension of a node by a node, switching of a node, the degree splitting graph, and the barycentric subdivision of the given graph.
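The divisor cordial condition is easy to check mechanically. The sketch below verifies it for a small path graph with an illustrative labeling (the graph and labels are examples of ours, not graphs from the paper).

```python
# Divisor cordial check: label vertices bijectively with 1..n, give each
# edge 1 if one endpoint label divides the other and 0 otherwise, and
# require |e(1) - e(0)| <= 1.

def is_divisor_cordial(labels, edges):
    e1 = sum(1 for u, v in edges
             if labels[u] % labels[v] == 0 or labels[v] % labels[u] == 0)
    e0 = len(edges) - e1
    return abs(e1 - e0) <= 1

# Path 0-1-2-3 with labels 2, 1, 3, 4: edges (2,1) and (1,3) get 1,
# edge (3,4) gets 0, so e(1)=2, e(0)=1 and the condition holds.
labels = [2, 1, 3, 4]
edges = [(0, 1), (1, 2), (2, 3)]
assert is_divisor_cordial(labels, edges)
```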

Endang Rusyaman, Kankan Parmikanti, Diah Chaerani and Khoirunnisa Rohadatul Aisy Muslihin

Lubricating oil is still a primary need for people dealing with machines. The key property of lubricating oil is viscosity, which is closely related to surface tension. Fluid viscosity measures the friction within the fluid, while surface tension is the tendency of the fluid surface to contract due to attractive forces between the molecules (cohesion). We want to know how, and to what extent, the viscosity and surface tension of lubricating oil are related. This paper discusses the analysis of a model in the form of an exponential fractional differential equation that states the relationship between the surface tension and viscosity of lubricating oil. The Modified Homotopy Perturbation Method (MHPM) is used to determine the solution of the fractional differential equation. This study indicates a relationship between viscosity and surface tension in the form of a fractional differential equation whose solution is guaranteed to exist and to be unique. From the analysis of the solution function, both analytically and geometrically, supported by empirical data, it can be concluded that there is a strong exponential relationship between viscosity and surface tension in lubricating oil.
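An exponential relationship of this kind can be fitted empirically by a log-linear least squares fit, μ = a·exp(b·σ). The sketch below uses synthetic data; the constants a, b and the data values are illustrative assumptions, not the paper's fractional-differential-equation model.

```python
# Log-linear least squares fit of mu = a * exp(b * sigma):
# taking logs gives log(mu) = log(a) + b * sigma, a straight line.
import math

sigma = [30.0, 32.0, 34.0, 36.0]               # surface tension values
mu = [2.0 * math.exp(0.1 * s) for s in sigma]  # exact exponential data

n = len(sigma)
x, y = sigma, [math.log(m) for m in mu]
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = math.exp(ybar - b * xbar)
assert abs(b - 0.1) < 1e-9 and abs(a - 2.0) < 1e-9  # recovers a, b
```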

Thakhani Ravele, Caston Sigauke and Lordwell Jhamba

Solar power poses challenges to the management of grid energy due to its intermittency. For optimal integration of solar power into the electricity grid, it is important to have accurate forecasts. This study presents a comparative analysis of the semi-parametric extremal mixture (SPEM), generalised additive extreme value (GAEV), quantile regression via the asymmetric Laplace distribution (QR-ALD), additive quantile regression (AQR-1), additive quantile regression with a temperature variable (AQR-2) and penalised cubic regression smoothing spline (benchmark) models for probabilistic forecasting of hourly global horizontal irradiance (GHI) at extremely high quantile levels (0.95, 0.97, 0.99, 0.999 and 0.9999). The data are from the University of Venda radiometric station in South Africa and cover the period 1 January 2020 to 31 December 2020. Empirical results showed that AQR-2 is the best-fitting model and gives the most accurate predictions at the 0.95, 0.97, 0.99 and 0.999 quantiles, while at the 0.9999 quantile the GAEV model gives the most accurate predictions. Based on these results, it is recommended that the AQR-2 and GAEV models be used for predicting extremely high quantiles of hourly GHI in South Africa. The predictions from this study are valuable to power utility decision-makers and system operators when making high-risk decisions and designing regulatory frameworks that require high security levels. To the best of our knowledge, this is the first study to conduct a comparative analysis of the proposed models using South African solar irradiance data.
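Quantile forecasts such as these are typically scored with the pinball (quantile) loss, which penalises over- and under-prediction asymmetrically according to the quantile level. A minimal sketch (the numbers are illustrative, not GHI data):

```python
# Pinball loss for a forecast q of observation y at quantile level tau:
# tau * (y - q) if the forecast is too low, (1 - tau) * (q - y) if too high.

def pinball(y, q, tau):
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

# At tau = 0.95, under-prediction costs 19x more than over-prediction
# of the same size, pushing the forecaster towards the upper tail:
assert abs(pinball(10.0, 9.0, 0.95) - 0.95) < 1e-12
assert abs(pinball(9.0, 10.0, 0.95) - 0.05) < 1e-12
```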

Siham Rabee, Ramadan Hamed, Ragaa Kassem and Mahmoud Rashwaan

The calibration estimation approach is a widely used method for increasing the precision of estimates of population parameters. It works by modifying the design weights as little as possible, minimizing a given distance function between the design weights and the calibrated weights subject to a set of constraints related to specified auxiliary variables. This paper proposes a goal programming approach for generalized calibration estimation, in which multiple study variables are considered by incorporating multiple auxiliary variables. Almost all the calibration estimation literature proposes calibrated estimators for the population mean of only one study variable; to the best of the researchers' knowledge, no study has considered the calibration estimation approach for multiple study variables. According to the correlation structure between the study variables, estimation of the calibrated weights is formulated in two different models. The theory of the proposed approach is presented and the calibrated weights are estimated. A simulation study is conducted to evaluate the performance of the proposed approach in different scenarios compared with some existing calibration estimators. The simulation results for the four generated populations show that the proposed approach is more flexible and efficient than the classical methods.
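For context, the classical single-auxiliary case has a closed form: under the chi-square distance, the calibrated weights are a linear adjustment of the design weights chosen so that the weighted auxiliary total matches the known population total. The sketch below shows that baseline (illustrative data; the paper's goal-programming, multi-variable formulation is not reproduced).

```python
# Linear (chi-square distance) calibration with one auxiliary variable x:
# w_i = d_i * (1 + lambda * x_i), with lambda chosen so that
# sum(w_i * x_i) equals the known population total X_total.

def calibrate(d, x, X_total):
    tx = sum(di * xi for di, xi in zip(d, x))        # HT estimate of X
    txx = sum(di * xi * xi for di, xi in zip(d, x))
    lam = (X_total - tx) / txx
    return [di * (1 + lam * xi) for di, xi in zip(d, x)]

d = [10.0, 10.0, 10.0]   # design weights
x = [1.0, 2.0, 3.0]      # auxiliary variable
w = calibrate(d, x, X_total=66.0)  # true total exceeds the HT estimate 60
# The calibration constraint is met exactly:
assert abs(sum(wi * xi for wi, xi in zip(w, x)) - 66.0) < 1e-9
```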

Suvimol Phanyaem

The Cumulative Sum (CUSUM) chart is widely used and has many applications in fields such as finance, medicine, engineering, and others. In real applications, there are many situations in which the observations of a random process are serially correlated, such as hospital admissions in the medical field, share prices in economics, or daily rainfall in environmental science. The standard measure used to evaluate the performance of control charts is the Average Run Length (ARL). The primary goals of this paper are to derive an explicit formula and to develop a numerical integral equation for the ARL of the CUSUM chart when the observations follow a seasonal autoregressive model with an exogenous variable, SARX(P,r)_{L}, with exponential white noise. A Fredholm integral equation is used to derive the explicit formula for the ARL, and numerical methods, including the midpoint rule, the trapezoidal rule, Simpson's rule, and the Gaussian rule, are used to approximate the numerical integral equation for the ARL. The uniqueness of the solution is guaranteed by Banach's fixed point theorem. In addition, the proposed explicit formula was compared with the numerical methods in terms of the absolute percentage difference, to verify the accuracy of the ARL results, and in terms of computational (CPU) time. The results indicate that the ARL from the explicit formula is close to that of the numerical integral equation, with an absolute percentage difference of less than 1%, showing excellent agreement between the explicit formula and the numerical integral equation solutions. An important conclusion of this study is that the explicit formula outperformed the numerical integral equation methods in terms of CPU time. Consequently, the proposed explicit formula and the numerical integral equation are alternative methods for finding the ARL of the CUSUM control chart and should be useful in fields such as biology, engineering, physics, medicine, and the social sciences.
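Besides explicit formulas and integral equations, the ARL can always be cross-checked by simulation. The sketch below estimates the ARL of an upper CUSUM chart for i.i.d. exponential observations by Monte Carlo; the reference value k, control limit h and rate are illustrative assumptions, and the SARX structure of the paper is not reproduced.

```python
# Monte Carlo ARL for an upper CUSUM chart on exponential observations:
# C_t = max(0, C_{t-1} + X_t - k); the run length is the first t with C_t > h.
import random

def cusum_run_length(rate, k, h, rng, max_steps=100000):
    c, t = 0.0, 0
    while t < max_steps:
        t += 1
        c = max(0.0, c + rng.expovariate(rate) - k)
        if c > h:
            return t
    return max_steps

rng = random.Random(1)
runs = [cusum_run_length(1.0, 1.5, 3.0, rng) for _ in range(2000)]
arl = sum(runs) / len(runs)   # estimated in-control ARL
assert arl > 1.0
```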

Abdul Hadi Bhatti and Sharmila Binti Karim

Numerical methods are regularly developed to obtain better approximate solutions of ordinary differential equations (ODEs). The best approximate solution of an ODE is obtained by reducing the error between the approximate solution and the exact solution. To improve the error accuracy, representations by Wang Ball curves are proposed through the investigation of their control points using the Least Squares Method (LSM). The control points of the Wang Ball curves are calculated by minimizing the residual function with the LSM, where the residual error is measured by the sum of squares of the residual function in the Wang Ball curve's control points. The approximate solution of the ODE is then obtained from the determined control points. Two numerical examples, an initial value problem (IVP) and a boundary value problem (BVP), are presented to demonstrate the proposed method in terms of error. The results of the numerical examples show that the proposed method improves the error accuracy compared with an existing study based on Bézier curves. Finally, a convergence analysis is conducted for the proposed method on a two-point boundary value problem.
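The underlying idea, choosing curve coefficients by least-squares minimization of the ODE residual, can be sketched with an ordinary polynomial basis in place of the Wang Ball basis. Here we solve the test IVP y' = y, y(0) = 1 on [0, 1] (our example, not one from the paper) and compare with the exact solution e^t.

```python
# Least-squares collocation: trial y(t) = c0 + c1 t + c2 t^2 + c3 t^3.
# Each row of A encodes the residual y'(t_j) - y(t_j), which should be 0;
# the initial condition y(0) = 1 is appended as a heavily weighted row.
import numpy as np

ts = np.linspace(0.0, 1.0, 20)                     # collocation points
A = np.column_stack([-np.ones_like(ts),            # d/dt(1)   - 1
                     1 - ts,                       # d/dt(t)   - t
                     2 * ts - ts**2,               # d/dt(t^2) - t^2
                     3 * ts**2 - ts**3])           # d/dt(t^3) - t^3
b = np.zeros_like(ts)
A = np.vstack([A, [1e3, 0.0, 0.0, 0.0]])           # y(0) = 1, weight 1e3
b = np.append(b, 1e3)
c, *_ = np.linalg.lstsq(A, b, rcond=None)
y1 = c @ np.ones(4)                                # approximate y(1)
assert abs(y1 - np.e) < 0.1                        # close to e = 2.718...
```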

Abimibola Victoria Oladugba and Brenda Mbouamba Yankam

Variance dispersion graphs (VDGs) and fraction of design space (FDS) graphs are two graphical methods that effectively describe and evaluate the points of best and worst prediction capability of a design using the scaled prediction variance properties. These graphs are often utilized as an alternative to single-valued criteria such as D- and E-optimality when the latter fail to describe the true nature of designs. In this paper, the VDGs and FDS graphs of third-order orthogonal uniform composite designs (OUCD_{4}) and orthogonal array composite designs (OACD_{4}) are studied using the scaled prediction variance properties in the spherical region for 2 to 7 factors, throughout the design region and over a fraction of the design space. Single-valued criteria such as D-, A- and G-optimality are also studied. The results show that the OUCD_{4} is more optimal than the OACD_{4} in terms of D-, A- and G-optimality. The OUCD_{4} is also shown to possess a more stable and uniform scaled prediction variance throughout the design region and over a fraction of the design space than the OACD_{4}, although the stability of both designs deteriorates slightly towards the extremes.
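As a reminder of what a single-valued criterion computes: the D-criterion scores a design by the determinant of the scaled information matrix. The toy comparison below (a 2^2 factorial versus the same design with a duplicated run, our illustration, not the composite designs of the paper) shows the orthogonal design scoring higher.

```python
# D-criterion value |X'X / n|^(1/p) for a model matrix X (n runs, p terms).
import numpy as np

def d_value(X):
    n, p = X.shape
    return np.linalg.det(X.T @ X / n) ** (1.0 / p)

# First-order model (intercept, x1, x2) on a full 2^2 factorial:
full = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
# Same size, but one run duplicated (loses orthogonality):
worse = np.array([[1, -1, -1], [1, -1, -1], [1, 1, -1], [1, 1, 1]], float)
assert d_value(full) > d_value(worse)   # the orthogonal design is D-better
```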

Noura S. Mohamed, Moshira A. Ismail and Sanaa A. Ismail

Finite mixture models have been used in many fields of statistical analysis, such as pattern recognition, clustering and survival analysis, and have been extensively applied in scientific areas such as marketing, economics, medicine, genetics and the social sciences. Introducing mixtures of new generalized lifetime distributions that exhibit important hazard shapes is a major field of research aimed at fitting and analyzing a wider variety of data sets. The main objective of this article is to present a full mathematical study of the properties of a new finite mixture of the three-parameter Weibull extension model, considered as a generalization of the standard Weibull distribution. The proposed mixture model exhibits a bathtub-shaped hazard rate, among other shapes important in reliability applications. We analytically prove the identifiability of the new mixture and investigate its mathematical properties and hazard rate function. Maximum likelihood estimation of the model parameters is considered. The Kolmogorov-Smirnov test statistic is used to fit the proposed model to two famous data sets from mechanical engineering, the Aarset data set and the Meeker and Escobar data set. Results show that the two-component version of the proposed mixture is a superior fit compared with various one-component and two-component lifetime distributions. The new mixture is thus a significant statistical tool for studying lifetime data sets in numerous fields of study.
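The bathtub shape can be seen already in a plain two-component Weibull mixture: the hazard h(t) = f(t)/S(t) of a mixture of a decreasing-hazard component (shape < 1) and an increasing-hazard component (shape > 1) is high early, dips, then rises. The parameters below are illustrative; the paper's three-parameter Weibull extension is not reproduced.

```python
# Hazard rate of a two-component Weibull mixture, h(t) = f(t) / S(t).
import math

def weib_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-(t / lam) ** k)

def weib_sf(t, k, lam):
    return math.exp(-(t / lam) ** k)

def mix_hazard(t, p, k1, l1, k2, l2):
    f = p * weib_pdf(t, k1, l1) + (1 - p) * weib_pdf(t, k2, l2)
    s = p * weib_sf(t, k1, l1) + (1 - p) * weib_sf(t, k2, l2)
    return f / s

# shape 0.5 (decreasing hazard) mixed with shape 3 (increasing hazard):
h = [mix_hazard(t, 0.5, 0.5, 1.0, 3.0, 2.0) for t in (0.1, 1.0, 3.0)]
assert h[0] > h[1] and h[2] > h[1]   # high early, dip, rise: bathtub
```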

Saimir Tola, Alfred Daci and Gentian Zavalani

This paper presents numerical simulations and comparisons between different approaches to elastic thin rods. Elastic rods are ideal for modeling the stretching, bending, and twisting deformations of long and thin elastic materials. The static solution of Kirchhoff's equations [2] is produced using the ODE45 solver, where the Kirchhoff and reference-system equations are integrated together. We compare this with formulations based on Euler's elastica theory [1], which determines the deformed centerline of the rod by solving a boundary-value problem, and on the Discrete Elastic Rod method with the Bishop frame (DER) [5,6], which is based on discrete differential geometry: it starts from a discrete energy formulation and obtains the forces and equations of motion by differentiating the energies. Instead of discretizing smooth equations, DER solves discrete equations and obeys geometrical exactness. In DER, torsion is measured as the difference of angles between the material frame and the Bishop frame of the rod, so that no additional degree of freedom is needed to represent the torsional behavior. We found excellent agreement between our Kirchhoff-based solution and the numerical results obtained by the other methods. Our numerical results include a simulation of the rod under the action of a terminal moment and illustrations of gravity effects.
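The discrete-differential-geometry flavour of DER can be illustrated with its simplest ingredient: a bending energy for a polyline rod built from the turning angles between consecutive edges. This is a minimal sketch of ours, without the twist term, the Bishop frame, or the dynamics of the full method.

```python
# Discrete bending energy of a polyline rod: at each interior vertex,
# the turning angle between the incoming and outgoing edges plays the
# role of curvature, and the energy accumulates its square.
import math

def bending_energy(pts):
    E = 0.0
    for i in range(1, len(pts) - 1):
        e0 = [b - a for a, b in zip(pts[i - 1], pts[i])]
        e1 = [b - a for a, b in zip(pts[i], pts[i + 1])]
        dot = sum(x * y for x, y in zip(e0, e1))
        n0 = math.sqrt(sum(x * x for x in e0))
        n1 = math.sqrt(sum(x * x for x in e1))
        phi = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
        E += phi ** 2   # per-vertex contribution ~ curvature squared
    return E

straight = [(float(i), 0.0, 0.0) for i in range(5)]
bent = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 2.0, 0.0)]
assert bending_energy(straight) == 0.0   # a straight rod stores no energy
assert bending_energy(bent) > 0.0
```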

Bhuwaneshwar Kumar Gupt, Mankupar Swer, Md. Irphan Ahamed, B. K. Singh and Kh. Herachandra Singh

In this paper, the problem of optimum stratification of heteroscedastic populations in stratified sampling is considered for a known allocation under simple random sampling with and without replacement (SRSWR and SRSWOR) designs. The allocation used is one of the model-based allocations proposed by Gupt [1,2] under a superpopulation model considered by Hanurav [3], Rao [4], and Gupt and Rao [5], which was modified by the author (Gupt [1,2]) to a more general form. The problem of finding optimum boundary points of stratification (OBPS) considered here is based on an auxiliary variable that is highly correlated with the study variable. Equations giving the OBPS are derived by minimizing the variance of the estimator of the population mean. Since these equations are implicit and difficult to solve, some methods of finding approximately optimum boundary points of stratification (AOBPS) are also obtained as solutions of the equations giving the OBPS. In deriving the equations and the AOBPS methods, basic statistical definitions, tools of calculus, analytic functions and tools of algebra are used. The efficiencies of the proposed methods of stratification are examined in a few generated populations and one live population. All the proposed methods are found to be efficient and suitable for practical application. Although the proposed methods are obtained under a heteroscedastic superpopulation model with level of heteroscedasticity one, they show robustness in empirical investigations at varied levels of heteroscedasticity. The stratification methods proposed here are new in that they are derived for an allocation, under the superpopulation model, that has not previously been used in the construction of strata in stratified sampling. The proposed methods may be of interest to researchers amid the active theoretical research in stratified sampling and, given their high efficiency, may provide a practically feasible solution in the planning of socio-economic surveys.
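For orientation, the best-known approximate stratification rule on an auxiliary variable is the classical Dalenius-Hodges cumulative-√f rule, sketched below on illustrative frequency data. It is a standard baseline only; the paper's model-based allocation and its OBPS equations are not reproduced.

```python
# Dalenius-Hodges cum-sqrt(f) rule: accumulate sqrt(frequency) over the
# class intervals of the auxiliary variable and cut the cumulative scale
# into equal parts; the cut points give approximate stratum boundaries.
import math

def cum_sqrt_f_boundaries(freqs, upper_limits, n_strata):
    """freqs[i] = frequency of the interval ending at upper_limits[i]."""
    cum, total = [], 0.0
    for f in freqs:
        total += math.sqrt(f)
        cum.append(total)
    step = total / n_strata
    bounds, target = [], step
    for c, b in zip(cum, upper_limits):
        if c >= target and len(bounds) < n_strata - 1:
            bounds.append(b)
            target += step
    return bounds

freqs = [10, 40, 30, 10, 5, 5]
upper_limits = [10, 20, 30, 40, 50, 60]
bounds = cum_sqrt_f_boundaries(freqs, upper_limits, 2)  # one boundary
assert len(bounds) == 1 and 10 <= bounds[0] <= 60
```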

M. F. Zairul Fuaad, N. Razali, H. Hishamuddin and A. Jedi

The accuracy and efficiency of water tank system problems can be assessed by comparing the symmetrized Implicit Midpoint Rule (IMR) with the plain IMR. Static and dynamic analyses are part of a mathematical model that uses energy conservation to generate a nonlinear ordinary differential equation. Static analysis provides optimal working points, while dynamic analysis gives an overview of the system behaviour. The procedure is tested on two water tank designs, namely cylindrical and rectangular tanks, with two different sets of parameters. Results show that the two-step symmetrized IMR applied to the proposed mathematical model is precise and efficient and can be used for the design of appropriate controls. The cylindrical water tank model empties the tank in the fastest time. Across the various water tank models, the approach shows an increase in accuracy and efficiency over the range of parameters used for practical model applications. The numerical results show that the two-step symmetrized IMR provides better stability, accuracy and efficiency for fixed step sizes compared with other numerical methods.
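The basic building block, one implicit midpoint step, can be sketched on a Torricelli-type draining law h' = −k√h for the water level h, with the implicit stage solved by fixed-point iteration. The constants k, h0 and the step size are illustrative; the paper's full tank model and its symmetrized two-step variant are not reproduced.

```python
# Implicit midpoint rule for h' = f(h):
#   h_{n+1} = h_n + dt * f((h_n + h_{n+1}) / 2),
# solved here by fixed-point iteration from an explicit Euler predictor.
import math

def f(h, k=0.5):
    return -k * math.sqrt(max(h, 0.0))   # Torricelli draining law

def imr_step(h, dt):
    h_next = h + dt * f(h)               # predictor
    for _ in range(50):                  # fixed-point iteration
        h_next = h + dt * f(0.5 * (h + h_next))
    return h_next

levels = [4.0]
for _ in range(10):
    levels.append(imr_step(levels[-1], 0.1))
# The tank level decreases monotonically while draining:
assert all(b < a for a, b in zip(levels, levels[1:]))
```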

Habti Abeida

Absolutely continuous non-singular complex elliptically symmetric distributions (referred to as nonsingular CES distributions) have been extensively studied in various applications under the assumption of a nonsingular scatter matrix, for which probability density functions (p.d.f.s) exist. These p.d.f.s, however, cannot be used to characterize CES distributions with a singular scatter matrix (referred to as singular CES distributions). This paper presents a generalization of the singular real elliptically symmetric (RES) distributions studied by Díaz-García et al. to singular CES distributions. An explicit expression for the p.d.f. of a multivariate non-circular complex random vector with a singular CES distribution is derived. The stochastic representation of the singular non-circular CES (NC-CES) distributions and of quadratic forms in an NC-CES random vector are proved. As special cases, explicit expressions for the p.d.f.s of multivariate complex random vectors with singular non-circular complex normal (NC-CN) and singular non-circular complex compound-Gaussian (NC-CCG) distributions are also derived. Some useful properties of singular NC-CES distributions and their conditional distributions are established. Based on these results, the p.d.f.s of the non-circular complex t-distribution, K-distribution, and generalized Gaussian distribution under singularity are presented. These general results degenerate to those of singular circular CES (C-CES) distributions when the pseudo-scatter matrix is equal to the zero matrix. Finally, the results are applied to the problem of estimating the parameters of a complex-valued non-circular multivariate linear model in the presence of either singular NC-CES or C-CES distributed noise by proposing widely linear estimators.
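The singular case is easy to visualise through a stochastic representation: if z = A w with A an n×r factor and w a full-rank complex normal vector, the scatter matrix Σ = A Aᴴ has rank r < n, so no p.d.f. exists with respect to Lebesgue measure on Cⁿ. The sketch below (circular Gaussian case, illustrative dimensions) confirms the rank deficiency empirically.

```python
# Sampling a singular circular complex normal vector via z = A w and
# checking that the sample scatter matrix has rank r < n.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 4, 2, 1000
A = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
w = (rng.standard_normal((r, m)) + 1j * rng.standard_normal((r, m))) / np.sqrt(2)
z = A @ w                       # m samples of the singular CN vector
S_hat = (z @ z.conj().T) / m    # sample scatter matrix, Sigma = A A^H
assert np.linalg.matrix_rank(S_hat) == r   # singular: rank r < n
```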

H. Priya and B. Srutha Keerthi

The aim of this paper is to obtain the first and second Hankel determinants. We make use of a few lemmas based on the Caratheodory class of analytic functions. We introduce a new Sakaguchi-type class of univalent functions, estimate sharp bounds for the initial coefficients, and use the Bessel function expansion. We also discuss the coefficient bounds for the second Hankel determinant. The results are obtained for functions of Sakaguchi kind, and they explore successive Hankel determinants. Various technologies, such as wired, optical or other electromagnetic systems, are used for the transmission of data from one device to another; filters play an important role in this process because they can remove distorted signals. By using different parameter values for functions belonging to the Sakaguchi class, low-pass and high-pass filters can be designed from the coefficient estimates.
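To fix notation: for a normalized analytic function f(z) = z + a₂z² + a₃z³ + ..., the second Hankel determinant is H₂(2) = a₂a₄ − a₃². The sketch below computes it for the classical Koebe function k(z) = z/(1−z)², whose coefficients are aₙ = n; the paper's Sakaguchi-type class is not reproduced.

```python
# Second Hankel determinant H2(2) = a2*a4 - a3^2 from Taylor coefficients,
# illustrated on the Koebe function z/(1-z)^2 with a_n = n.
import sympy as sp

z = sp.symbols('z')
k = z / (1 - z) ** 2
coeffs = sp.series(k, z, 0, 5).removeO().as_poly(z).all_coeffs()[::-1]
a2, a3, a4 = coeffs[2], coeffs[3], coeffs[4]
assert (a2, a3, a4) == (2, 3, 4)
assert a2 * a4 - a3 ** 2 == -1   # H2(2) for the Koebe function
```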

Ugah Tobias Ejiofor, Mba Emmanuel Ikechukwu, Eze Micheal Chinonso, Arum Kingsley Chinedu, Mba Ifeoma Christy, Urama Chinasa and Comfort Njideka Ekene-Okafor

It is not uncommon to find an outlier in the response variable in linear regression. Such a deviant value needs to be detected and scrutinized to find out why it is not in agreement with its fitted value. Srikantan [1] developed a test statistic for detecting the presence of an outlier in the response variable in a multiple linear regression model. Approximate critical values of this test statistic, obtained from the first-order Bonferroni upper bound, are available; the exact critical values are not, and as a result, tests carried out on the basis of the approximate critical values may not be very accurate. In this paper, we obtain more accurate and precise critical values of this test statistic for large sample sizes (herein called asymptotic critical values) to improve tests that use these critical values. The procedure uses the exact probability density function of the test statistic to obtain its asymptotic critical values, which we then compare with the approximate critical values. An application to simulation results for linear regression models is used to examine the power of the test statistic. The asymptotic critical values obtained were found to be more accurate and precise, and the test performed better under them: the power of the test statistic was higher when the asymptotic critical values were used.
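To illustrate the first-order Bonferroni idea mentioned above: with n candidate outliers, each is tested at level α/n so the overall level is at most α. The sketch below uses a standard normal reference for simplicity; Srikantan's statistic has its own exact distribution, which is what the paper's asymptotic critical values are based on.

```python
# Bonferroni-adjusted two-sided critical value at overall level alpha
# with n candidate points, using the standard normal quantile computed
# by bisection on the CDF (no external libraries needed).
import math

def normal_ppf(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bonferroni_cutoff(alpha, n):
    return normal_ppf(1 - alpha / (2 * n))   # two-sided, n comparisons

c10, c100 = bonferroni_cutoff(0.05, 10), bonferroni_cutoff(0.05, 100)
# The cutoff grows with the number of candidate outliers:
assert c100 > c10 > 1.96
```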

Ivana Mala, Vaclav Sladek and Diana Bilkova

Normality tests are used in statistical analysis to determine whether a normal distribution is acceptable as a model for the data analysed. A wide range of available tests employs different properties of the normal distribution to compare empirical and theoretical distributions. In the present paper, we perform a Monte Carlo simulation to analyse test power. We compare commonly known and applied tests (standard and robust versions of the Jarque-Bera test, the Lilliefors test, the chi-square goodness-of-fit test, the Shapiro-Francia test, the Cramer-von Mises goodness-of-fit test, the Shapiro-Wilk test, the D'Agostino test, and the Anderson-Darling test) with a test based on robust L-moments: a Jarque-Bera-type test in which the moment characteristics of skewness and kurtosis are replaced with their robust versions, L-skewness and L-kurtosis. Distributions with heavy tails (lognormal, Weibull, loglogistic and Student) are used to draw random samples, to show the performance of the tests when applied to data with outliers. Properties are analysed from small samples of 10 observations up to large samples of 200 observations. Our results concerning the properties of the classical tests are in line with the conclusions of other recent articles. We concentrate on the properties of the test based on L-moments. This normality test is comparable to well-performing and reliable tests; however, it is outperformed by the most powerful Shapiro-Wilk and Shapiro-Francia tests. It works well for the (symmetric) Student distribution, comparably to the most frequently used Jarque-Bera tests. As expected, the test is robust to the presence of outliers in comparison with sensitive tests based on product moments or correlations. The test turns out to be very universally reliable.
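The L-moment ratios that replace skewness and kurtosis in the robust Jarque-Bera-type test are computed from order statistics. The sketch below implements the standard unbiased sample estimators (Hosking's b-weights) and checks that a symmetric sample has zero L-skewness.

```python
# Sample L-moment ratios: L-skewness tau3 = l3/l2, L-kurtosis tau4 = l4/l2,
# built from the probability-weighted moments b_0..b_3 of the sorted data.

def l_moment_ratios(data):
    x = sorted(data)
    n = len(x)
    b = [0.0] * 4
    for i, xi in enumerate(x, start=1):
        b[0] += xi
        b[1] += xi * (i - 1) / (n - 1)
        b[2] += xi * (i - 1) * (i - 2) / ((n - 1) * (n - 2))
        b[3] += xi * (i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3))
    b = [bi / n for bi in b]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l3 / l2, l4 / l2

tau3, tau4 = l_moment_ratios([1, 2, 3, 4, 5, 6, 7])
assert abs(tau3) < 1e-9   # symmetric sample: zero L-skewness
```

Because each order statistic enters linearly, a single extreme observation shifts these ratios far less than it shifts the cube- and fourth-power terms of classical skewness and kurtosis, which is the source of the robustness discussed above.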

B. M. Cerna Maguiña, Dik D. Lujerio Garcia and Héctor F. Maguiña

In this work, using basic tools of functional analysis, we obtain a technique that allows us to derive important results related to quadratic equations in two variables representing a natural number, and to differential equations. We show the possible ways to write an even number ending in six as the sum of two odd numbers, and we establish conditions for those odd numbers to be prime. Also, making use of a suitable linear functional, we obtain representations of natural numbers of a certain form in order to find positive integer solutions of a quadratic equation representing a given natural number ending in one. Finally, we show with three examples the use of the proposed technique to solve some ordinary and partial linear differential equations. We believe that the third corollary of the first result of this investigation can help in proving the strong Goldbach conjecture.

Betty Subartini, Ira Sumiati, Sukono, Riaman and Ibrahim Mohammed Sulaiman

At present, three numerical methods have mainly been used in the literature to solve fractional-order chaotic systems: frequency domain approximation, the predictor-corrector approach and the Adomian decomposition method (ADM). ADM is an efficient approach capable of dealing with both linear and nonlinear problems in the time domain, and numerical solution methods are a critical issue both in theoretical research and in applications of fractional-order systems. In this work, the solution is decomposed into an infinite series, an integral transformation is applied to the differential equation, and the resulting series converges to the exact solution. The aim of this study is to combine the Adomian decomposition approach with different integral transformations, including the Laplace, Sumudu, Natural, Elzaki, Mohand, and Kashuri-Fundo transforms. The study's key finding is that the combined methods yield good results when solving fractional ordinary differential equations; our main contribution is to show that the combined numerical methods considered produce excellent numerical performance. The proposed combined method therefore has practical implications for solving fractional-order differential equations in the sciences and social sciences, such as finding analytical and numerical solutions for secure communication systems, biological systems, financial risk models, physical phenomena, neuron models and engineering applications.
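The decomposition idea can be seen on the simplest possible test problem: for y' = y, y(0) = 1, each Adomian term is the integral of the previous one, giving tᵏ/k!, and the partial sums converge to eᵗ. This integer-order example is ours for illustration; the paper combines the recursion with integral transforms and fractional operators.

```python
# Adomian-style series for y' = y, y(0) = 1:
# y_0 = 1, y_{k+1}(t) = integral_0^t y_k(s) ds = t^(k+1)/(k+1)!,
# so the partial sums are the Taylor polynomials of exp(t).
import math

def adomian_terms(n_terms, t):
    return [t ** k / math.factorial(k) for k in range(n_terms)]

t = 1.0
partial = sum(adomian_terms(12, t))
assert abs(partial - math.exp(t)) < 1e-7   # 12 terms already very close
```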

Ni Made Ayu Astari Badung, Adji Achmad Rinaldo Fernandes and Waego Hadi Nugroho

This study aims to compare distance measures (Euclidean, Manhattan, and Mahalanobis distance) and linkage methods (average, single, and complete linkage) in cluster analysis integrated with multiple discriminant analysis for Home Ownership Credit bank consumers in Indonesia. The data are secondary data from the 5C assessment of bank consumers in Indonesia, containing notes on the 5C assessment as well as 3 credit collectability classes (current, special mention, and substandard) for Home Ownership Credit customers. The population comprises all Home Ownership Credit customers at all banks in Indonesia. The sampling technique used was purposive random sampling, with a sample of 300 customers drawn from customer data at three bank branches in Indonesia. This is a quantitative study using cluster analysis integrated with multiple discriminant analysis. The best method for classifying Home Ownership Credit bank customers based on the 5C assessment variables is the integrated cluster analysis with multiple discriminant analysis based on the Mahalanobis distance with 2 clusters, namely a high cluster and a low cluster. The novelty of this study lies in using an integrated cluster analysis with multiple discriminant analysis to compare distance and linkage measures, applied to Home Ownership Credit bank customers in Indonesia.
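The three distance measures compared above differ only in how they weight the coordinate differences; in particular, Mahalanobis distance accounts for the covariance structure of the variables and reduces to Euclidean distance when the covariance is the identity. A minimal sketch:

```python
# Euclidean, Manhattan and Mahalanobis distances between two points.
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def manhattan(a, b):
    return float(np.sum(np.abs(a - b)))

def mahalanobis(a, b, cov):
    d = a - b
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

a, b = np.array([1.0, 2.0]), np.array([4.0, 6.0])
assert euclidean(a, b) == 5.0
assert manhattan(a, b) == 7.0
# With identity covariance, Mahalanobis equals Euclidean:
assert abs(mahalanobis(a, b, np.eye(2)) - euclidean(a, b)) < 1e-12
```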

Erlinda Citra Lucki Efendi, Adji Achmad Rinaldo Fernandes and Maria Bernadetha Theresia Mitakda

This study aims to estimate nonparametric truncated spline path functions of linear, quadratic, and cubic orders with one and two knot points and to determine the best model for the variables that affect timely payment of House Ownership Credit (HOC). In addition, this study tests hypotheses to determine the variables that have a significant effect on punctuality in paying HOC. The data used in this study are primary data. The variables are service quality and lifestyle as exogenous variables, willingness to pay as a mediating variable, and paying on time as the endogenous variable. The analysis uses a nonparametric path model implemented in R. The results show that the best model is the nonparametric truncated spline linear path model with 2 knot points, which has the smallest GCV value, 25.9059, and an R^{2} of 96.96%. In addition, the hypothesis tests on the estimated functions show significant effects for the relationships between service quality and willingness to pay, service quality and paying on time, lifestyle and willingness to pay, and lifestyle and paying on time. The novelty of this research is modeling and hypothesis testing for a development of nonparametric regression, namely nonparametric truncated spline paths of linear, quadratic and cubic orders.
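The building block of the model above is the linear truncated spline basis: f(x) = β₀ + β₁x + β₂(x − k₁)₊ + β₃(x − k₂)₊, a piecewise line whose slope changes at the knots. The sketch below fits it by least squares on exact synthetic data (data and knots illustrative, and Python is used in place of the paper's R implementation).

```python
# Linear truncated spline with two knots, fitted by ordinary least squares.
import numpy as np

def trunc_basis(x, knots):
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

x = np.linspace(0, 10, 50)
# piecewise-linear truth with slope changes at the knots 3 and 7:
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 3, 0) - 1.5 * np.maximum(x - 7, 0)
B = trunc_basis(x, knots=[3.0, 7.0])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
# The coefficients of the generating function are recovered exactly:
assert np.allclose(beta, [1.0, 0.5, 2.0, -1.5], atol=1e-8)
```

In practice the knot positions (and the order) are selected by minimizing a criterion such as the GCV value reported in the abstract.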

Jetsada Singthongchai, Noppakun Thongmual and Nirun Nitisuk

This research concerns estimating the parameters of the simple linear regression model. Regression models are applied for prediction in many fields. The ordinary least squares (OLS) approach and the maximum likelihood (ML) approach are employed for estimating the parameters of the simple linear regression model when its assumptions are not violated. This research is interested in the simple linear regression model when the assumptions are violated. The Simple Averaging (SA) approach is an alternative for estimating the parameters in this situation. We improved the SA approach based on the median, which we call the improved Simple Averaging (ISA) approach. To compare the two approaches, the ISA approach is evaluated against the SA approach under the Root Mean Square Error (RMSE), which reflects the accuracy of prediction in simple linear regression. Using sample data, the results showed that the ISA approach is better than the SA approach, since the RMSE of the ISA approach is less than that of the SA approach. Our study therefore suggests the ISA approach for estimating the parameters of the simple linear regression model, because it is more accurate than the SA approach and simplifies the estimation. Hence, the ISA approach is an alternative for estimating the parameters of the simple linear regression model when the assumptions are not satisfied.
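
The exact SA/ISA formulas are not reproduced here; as a hedged sketch of the general idea (OLS versus a median-based slope estimator, with hypothetical data and names), one might compare:

```python
import statistics

def ols_fit(xs, ys):
    # ordinary least squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def median_slope_fit(xs, ys):
    # median of pairwise slopes, then median intercept: a robust, median-based
    # estimator in the spirit of the ISA idea (illustrative only)
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    b = statistics.median(slopes)
    a = statistics.median([y - b * x for x, y in zip(xs, ys)])
    return a, b

def rmse(xs, ys, a, b):
    # root mean square error of the fitted line, the comparison criterion above
    return (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

xs = [0, 1, 2, 3, 4, 5]
ys = [2 * x + 1 for x in xs]           # exact line y = 2x + 1
a_ols, b_ols = ols_fit(xs, ys)
a_med, b_med = median_slope_fit(xs, ys)
```

On clean data both estimators recover the line; the median-based variant is the one that stays stable when outliers violate the error assumptions.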

B. M. Cerna Maguiña and Janet Mamani Ramos

Although several articles study quadratic equations in two variables, they do so in a general way. We focus on natural numbers ending in one, because the other cases can be studied similarly. We have given the subject a different approach, which is why our bibliographic citations are few. In this work, using basic tools of functional analysis, we obtain some results on the integer solutions of quadratic polynomials in two variables that represent a given natural number. To determine whether a natural number ending in one is prime, we must solve equations (i), (ii), (iii). If these equations have no integer solution, then the number P is prime. The advantage of this technique is that, to determine whether a natural number P is prime, it is not necessary to know the prime numbers less than or equal to the square root of P. The objective of this work was to reduce the number of possibilities assumed by the integer variables in equations (i), (ii), (iii), respectively. Although this objective was achieved, we believe that the lower bounds for the sums of the solutions of equations (i), (ii), (iii) were not optimal, since in our recent research we have obtained lower bounds that further reduce the domains of the integer variables solving equations (i), (ii), (iii), respectively. We will present those results in a future article. The methodology used was deductive and inductive. We would have liked to have a supercomputer to construct or identify prime numbers of many millions of digits, but this is not possible, since we do not have the support of our respective authorities. We believe that the contribution of this work to number theory is the construction of linear functionals for the study of integer solutions of quadratic polynomials in two variables that represent a given natural number.
Large prime numbers can be used to encode any type of information safely, and the scheme shown in this article could be useful for this process.

Malik Saad Al-Muhja, Habibulla Akhadkulov and Nazihah Ahmad

Approximation Theory is a branch of analysis and applied mathematics in which the approximation process is required to preserve certain shape properties on a finite interval, such as convexity on all or part of the interval. The (co)convex and unconstrained polynomial (COCUNP) approximation is one of the key estimation problems of approximation theory that Kopotun has raised over the last ten years. Numerous studies have been conducted on modern methods of weighted approximation to construct the best degree of approximation. In developing COCUNP, a novel technique, the Lebesgue–Stieltjes integral technique, is used to resolve certain disadvantages, such as Riemann-integrable functions not having a degree of best approximation in the norm space. To achieve the main goal, a Derivation of a New Degree (DOND) of the best COCUNP approximation was constructed. The theoretical results revealed that, in general, the new degrees of best approximation yield smaller errors compared to the existing literature for the same estimates. In conclusion, this study has successfully developed DOND for the best (co)convex polynomial (COCP) weighted approximation.

Magi P M, Sr. Magie Jose and Anjaly Kishore

Let G be a simple graph of order n and let S(G) be the Seidel matrix of G, defined entrywise by s_{ij} = -1 if the vertices v_i and v_j are adjacent, s_{ij} = 1 if the vertices v_i and v_j are distinct and not adjacent, and s_{ij} = 0 if i = j. Let D(G) be the diagonal matrix of vertex degrees, where d_i denotes the degree of the vertex v_i of G. The Seidel Laplacian matrix of a graph G is defined as SL(G) = D_S(G) - S(G), and the Seidel signless Laplacian matrix of G is defined as SL^{+}(G) = D_S(G) + S(G), where D_S(G) is the diagonal matrix with entries n - 1 - 2d_i. The zero-divisor graph of a commutative ring R, denoted by Γ(R), is a simple undirected graph with all non-zero zero-divisors as vertices, in which two distinct vertices x and y are adjacent if and only if xy = 0. In this paper, we find the Seidel polynomial and Seidel Laplacian polynomial of the join of two regular graphs using the concepts of the Schur complement and the coronal of a square matrix. We also describe the computation of the Seidel Laplacian and Seidel signless Laplacian eigenvalues of the join of more than two regular graphs using the well-known Fiedler's lemma, and apply these results to describe these eigenvalues for the zero-divisor graph on Z_n. Further, we find the Seidel Laplacian and Seidel signless Laplacian spectrum of the zero-divisor graph of Z_n for some values of n that are products of powers of distinct primes. We also prove that 0 is a simple Seidel Laplacian eigenvalue of Γ(Z_n), for any n.
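
A small numerical sketch of the row-sum property behind the zero Seidel Laplacian eigenvalue (using the Seidel matrix and Seidel Laplacian definitions as commonly stated in the literature; a sketch, not the paper's computation):

```python
def seidel_laplacian(n, edges):
    # Seidel matrix: s_ij = -1 if adjacent, 1 if distinct and non-adjacent, 0 if i = j
    E = {frozenset(e) for e in edges}
    deg = [sum(1 for j in range(n) if frozenset((i, j)) in E) for i in range(n)]
    S = [[0 if i == j else (-1 if frozenset((i, j)) in E else 1)
          for j in range(n)] for i in range(n)]
    # SL = D_S - S with D_S = diag(n - 1 - 2*d_i); each row of S sums to
    # n - 1 - 2*d_i, so every row of SL sums to 0 and the all-ones vector
    # is a Seidel Laplacian eigenvector for the eigenvalue 0
    return [[(n - 1 - 2 * deg[i] if i == j else 0) - S[i][j]
             for j in range(n)] for i in range(n)]

SL = seidel_laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])   # the 4-cycle
row_sums = [sum(row) for row in SL]
```

Simplicity of the eigenvalue 0 (as claimed for zero-divisor graphs) requires further spectral analysis; the sketch only verifies that 0 is always in the Seidel Laplacian spectrum.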

Sidite Duraj, Eriola Sila and Elida Hoxha

The study of fixed points in metric spaces plays a crucial role in the development of Functional Analysis. The field evolves by generalizing the metric space or improving the contractive conditions. Recently, the partial rectangular metric space and its topology have been the center of study for many researchers, who have defined open and closed balls, equivalent Cauchy sequences, Cauchy sequences, and convergent sequences, which are used as tools in many of the achieved results. In this paper, two facts about equivalent Cauchy sequences in a partial rectangular metric space are established by using an ultra-altering distance function. Furthermore, some results on Cauchy sequences in a partial rectangular metric space are highlighted. It is proved that, under some conditions, equivalent Cauchy sequences are Cauchy sequences in a partial rectangular metric space. Some fixed point results are obtained as applications of our new conditions on Cauchy sequences and equivalent Cauchy sequences in a partial rectangular metric space for orbitally continuous functions. Some examples are given to illustrate the obtained results.

Sameen Ahmed Khan

The main aim of this article is to start with an expository introduction to the trigonometric ratios and then proceed to the latest results in the field. Historically, the exact ratios were obtained using geometric constructions. The geometric methods have their own limitations arising from certain theorems. In view of these limitations, we focus on the powerful techniques of the theory of equations in deriving the exact trigonometric ratios using surds. Cubic and higher-order equations naturally arise while deriving the exact trigonometric ratios. These equations are best expressed using the expansions of the cosine and sine of multiple angles via the Chebyshev polynomials of the first and second kind, respectively. So, we briefly present the essential properties of the Chebyshev polynomials. The equations lead to the question of reduced polynomials, which is addressed using Euler's totient function. So, we describe the techniques from the theory of equations and reduced polynomials. The trigonometric ratios of certain rational angles (when measured in degrees) give rise to rational values. We discuss these along with the related theorems. This is a frontline area of research connecting trigonometry and number theory. Results from number theory and the theory of equations are presented wherever required.
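
The multiple-angle identity cos(nθ) = T_n(cos θ) that underlies these equations, together with a classical surd such as cos 36° = (1 + √5)/4, can be checked with a short script (an illustrative sketch, not the article's derivations):

```python
import math

def chebyshev_T(n, x):
    # first-kind Chebyshev polynomial via T0 = 1, T1 = x, Tn = 2x*T(n-1) - T(n-2)
    t_prev, t_curr = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

# cos(5 * 12°) = T_5(cos 12°) = cos 60° = 1/2
lhs = chebyshev_T(5, math.cos(math.radians(12)))
rhs = math.cos(math.radians(60))
# the exact surd for cos 36°
surd_err = abs(math.cos(math.radians(36)) - (1 + math.sqrt(5)) / 4)
```

Inverting such identities, i.e. solving T_n(x) = c for x in surds, is exactly where the cubic and higher-order equations of the article arise.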

Vladislav V. Lyubimov

The aim of this paper is to obtain three types of expressions for calculating the probability of palindromic digit combinations occurring in a finite, equally likely string of zeros and ones. When calculating the probability of palindromic digit combinations, the classical definition of probability is applied. The main results of the paper are formulated as three theorems. Moreover, consequences of these theorems and typical examples of calculating the probability of palindromic digit combinations in a binary-code data string are considered. All formulated theorems and their consequences are accompanied by proofs. The numerical results of the paper can be used in the analysis of numerical computer data written as a binary code string in BIN-format files. It should also be noted that the combinatorial expressions described in the article for calculating the number of palindromic digit combinations in the binary number system can be used in number theory and in various branches of computer science. Developing these results to obtain an expression for the number of palindromic digit combinations contained in two-dimensional data arrays is also of immediate theoretical and practical interest. However, those results are not presented in this work; they may be considered in subsequent publications.
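Under the classical definition of probability, a length-n binary palindrome is determined by its first ⌈n/2⌉ bits, so among the 2^n equally likely strings the probability is 2^⌈n/2⌉ / 2^n = 2^(−⌊n/2⌋). A brute-force check of this standard count (an illustrative sketch, not the paper's three theorems):

```python
from itertools import product

def palindrome_probability(n):
    # exhaustive count of palindromes among all 2**n equally likely bit strings
    count = sum(1 for bits in product("01", repeat=n) if bits == bits[::-1])
    return count / 2 ** n

# agreement with 2^(-floor(n/2)) for small string lengths
checks = [palindrome_probability(n) == 2.0 ** -(n // 2) for n in range(1, 11)]
```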

Mardeen Sh. Taher and Salah G. Shareef

The conjugate gradient method remains popular among researchers focused on solving large-scale unconstrained optimization problems and nonlinear equations, because the method avoids the computation and storage of certain matrices, so its memory requirements are very small. In this work, a modified Perry conjugate gradient method that achieves global convergence under standard assumptions is presented and analyzed. The idea of the new method is based on the Perry method, using the equation introduced by Powell in 1978. The weak Wolfe–Powell conditions are used for the line search, and under this line search and suitable conditions we prove both the descent and sufficient descent conditions. In particular, numerical results show that the new conjugate gradient method is more effective and competitive compared to standard conjugate gradient methods, including the Hestenes–Stiefel (HS) method, the Perry method, and the Dai–Yuan (DY) method. The comparison is carried out on a group of standard test problems of various dimensions from the CUTEst test library, and the comparative performance of the methods is evaluated by the total number of iterations and the total number of function evaluations.
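
The modified Perry formula itself is not reproduced here; as a hedged illustration of the conjugate gradient template on its classical linear special case (solving Ax = b for a symmetric positive definite A, where the HS and Fletcher–Reeves beta formulas coincide):

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    # classical linear CG; note that only vectors are stored, no matrix factorizations
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A*x for the start point x = 0
    d = r[:]                      # first search direction: steepest descent
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(d[i] * Ad[i] for i in range(n))   # exact line search
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        d = [r[i] + (rs_new / rs) * d[i] for i in range(n)]  # beta = rs_new / rs
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Nonlinear variants such as Perry's replace the exact step by a Wolfe–Powell line search and modify the beta update, which is the subject of the paper.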

Ridhwan Reyaz, Ahmad Qushairi Mohamad, Yeaou Jiann Lim, Muhammad Saqib, Zaiton Mat Isa and Sharidan Shafie

Studies on Casson fluid are essential to developments in the manufacturing and engineering fields, where it is widely used. Meanwhile, the fractional derivative has become known as a constructive tool that can be beneficial in the future. In this study, a fractional-derivative formulation of Casson fluid flow is investigated. A fractional Casson fluid model with the effect of thermal radiation is derived together with the momentum and energy equations. The Caputo definition of the fractional derivative is used in the mathematical formulation. Casson fluid with constant wall temperature over an oscillating plate in the presence of thermal radiation is considered. Solutions were obtained using the Laplace transform and are presented in the form of the Wright function. Graphical analysis of the velocity and temperature profiles was conducted with variations in parameter values such as the fractional parameter, Grashof number, Prandtl number and radiation parameter. Numerical computations were carried out to investigate the behaviour of the skin friction and Nusselt number. It is found that when the fractional parameter is increased, the velocity and temperature profiles also increase. The presence of the fractional parameter in both velocity and temperature profiles reveals the transition of both profiles from an unsteady state to a steady state, providing a new perspective on Casson fluid flow. An increase in both profiles is also observed when the thermal radiation parameter is increased. The present results are validated against published results and are found to be in agreement with them.

Nurfarah Zulkifli and Nor Muhainiah Mohd Ali

Let G be a finite group. The probability that two elements, one chosen at random from a subgroup H and one from G, have coprime orders, that is, the greatest common divisor (gcd) of their orders equals one, is called the relative coprime probability. Meanwhile, the relative coprime graph is defined as the graph whose vertices are the elements of the group, in which two distinct vertices are adjacent if and only if their orders are coprime and at least one of them lies in the subgroup. This research focuses on determining the relative coprime probability and graph for cyclic subgroups of some nonabelian groups of small order, together with the associated graph properties, by referring to the definitions and theorems given by previous researchers. Various results on the relative coprime probability for nonabelian groups of small order are obtained. As for the relative coprime graph, the results show that the domination number for each group is one, whereas the number of edges and the independence number vary from group to group. The graphs that can be formed are star graphs, planar graphs or graphs containing complete subgraphs, depending on the order of the subgroup of the group.
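
A brute-force sketch of the relative coprime probability for one small nonabelian group, S3 with the cyclic subgroup generated by a transposition (the helper names are hypothetical; the research covers further groups and their graphs):

```python
from math import gcd
from itertools import permutations

def compose(p, q):
    # composition of permutations given as tuples of images: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    # order of a permutation: smallest k with p^k equal to the identity
    ident = tuple(range(len(p)))
    q, k = p, 1
    while q != ident:
        q, k = compose(p, q), k + 1
    return k

def relative_coprime_probability(H, G):
    # probability that gcd(|h|, |g|) = 1 for h chosen from H and g from G
    hits = sum(1 for h in H for g in G if gcd(order(h), order(g)) == 1)
    return hits / (len(H) * len(G))

S3 = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]        # cyclic subgroup generated by a transposition
prob = relative_coprime_probability(H, S3)
```

Here the identity pairs with all six elements and the transposition pairs with the four elements of order 1 or 3, giving 9 of 12 pairs.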

Domenico P.L. Castrigiano

Unbounded (and bounded) Toeplitz operators (TO) with rational symbols are analysed in detail, showing that they are densely defined, closed, and have finite-dimensional kernels and deficiency spaces. The latter spaces, as well as the domains, ranges, spectral and Fredholm points, are determined. In particular, in the symmetric case, i.e., for a real rational symbol, the deficiency spaces and indices are explicitly available. The concluding section gives a brief overview of the research on unbounded TO in order to locate the present contribution. Regarding properties of unbounded TO in general, it furnishes some new results, recalling the close relationship to Wiener–Hopf operators and, in the case of semiboundedness, to singular operators of Hilbert transformation type. Specific symbols considered in the literature admit further analysis. Some conclusions are drawn for semibounded integrable and real square-integrable symbols. There is an approach to semibounded TO which starts from closable semibounded forms related to a Toeplitz matrix. The Friedrichs extension of the TO associated with such a form is studied. Finally, analytic TO and Toeplitz-like operators, which in general differ from the TO treated here, are briefly examined.

K. Kumara Swamy, Swatmaram, Bipan Hazarika and P. Sumati Kumari

It has been a century since the Banach fixed point theorem was established, and the result is, in some ways, the progenitor of the field. It therefore seems essential to revisit fixed point theorems, which are numerous and prevalent in mathematics, as we will demonstrate. Fixed point theorems appear in advanced mathematics, economics, micro-structures, geometry, dynamics, computational mathematics, and differential equations. The purpose of generalizing the metric space is to broaden and extrapolate the paradigm of the concept. The characteristic of such a space, in essence, is to comprehend the topological features of three points rather than two points, via the perimeter of a triangle, whereas a metric indicates the distance between two points. The class of these generalized spaces is significantly larger than the class of metric spaces. Hence, we utilise this generalized space in order to obtain common tripled fixed points for three mappings using rational-type contractions in this setting. Recently, Khomadram et al. developed coupled fixed point theorems in such spaces via rational-type contractions. The main aim of our paper is to broaden and extrapolate the paradigm of Khomadram's results into tripled fixed point theorems. Examples are offered to support our findings.

Nik Nur Amiza Nik Ismail, Azwani Alias and Fatimah Noor Harun

Internal solitary waves have been documented in several parts of the world. This paper looks at the effects of variable topography and rotation on the evolution of internal waves of depression. Here, the wave is considered to be propagating in a two-layer fluid system, with the background topography assumed to be either rapidly or slowly varying. The appropriate mathematical model to describe this situation is therefore the variable-coefficient Ostrovsky equation. In particular, the study is interested in the transition of the internal solitary wave of depression when there is a polarity change under the influence of background rotation. Numerical results using the pseudospectral method show that, over time, an internal solitary wave of elevation transforms into an internal solitary wave of depression as it propagates down a decreasing slope and changes its polarity. However, if background rotation is considered, the internal solitary waves decompose and form a wave packet, whose envelope amplitude decreases slowly due to the decreasing bottom surface. The numerical solutions show that the combined effect of variable topography and rotation when passing through the critical point affects the features and speed of the travelling solitary waves.

Robert Reynolds and Allan Stauffer

Carl Johan Malmsten (1846) and David Bierens de Haan (1847) published works containing some interesting integrals. While no formal derivations of the integrals in Bierens de Haan's book Nouvelles tables d'intégrales définies are available in the current literature, deriving and evaluating such formulae is useful in all aspects of science and engineering wherever they are used. Formulae in the book of Bierens de Haan are used in connection with certain potential problems, where there is the need to determine the vector potential of two parallel, infinitely long, tubular rectangular conductors carrying currents in opposite directions. In the current work we supply formal derivations for some of these integrals, along with deriving some special cases as new integrals, in order to expand upon the book of Bierens de Haan and to aid potential research where these formulae are applicable. Updating a book of integrals is always a useful exercise, as it keeps the volume accurate and more useful for potential readers and researchers. Formal derivations are also useful as they help in verifying the correctness of integrals in such volumes. The definite integral we derive in this work is given by (1) in terms of the Lerch function, where the parameters a, k, m, and p are general complex numbers subject to their restrictions. This formal derivation is then used to derive the correct version of a definite integral transform along with new formulae. Some of the results in this work are new.

P Jamsheena and A V Chithra

Let A be a commutative ring with unity. The essential ideal graph of A is the graph whose vertex set consists of all nonzero proper ideals of A, with two vertices I and J adjacent whenever I + J is an essential ideal. An essential ideal of a ring A is an ideal of A having nonzero intersection with every other nonzero ideal of A. The set Max(A) contains all the maximal ideals of A, and the Jacobson radical of A, J(A), is the intersection of all maximal ideals of A. The comaximal ideal graph of A is the simple graph whose vertices are the proper ideals of A not contained in J(A), with vertices I and J joined by an edge whenever I + J = A. In this paper, we study the structural properties of the essential ideal graph using ring-theoretic concepts. We obtain a characterization for the essential ideal graph to be isomorphic to the comaximal ideal graph. Moreover, we derive its structure theorem and determine graph parameters such as the clique number, chromatic number and independence number. Also, we characterize the perfectness of the essential ideal graph and determine when it is split and claw-free, Eulerian and Hamiltonian. In addition, we show that the finite essential ideal graph of any non-local ring is isomorphic to a suitable comaximal ideal graph.

Adeniji A. A., Noufe H. A., Mkolesia A. C. and Shatalov M. Y.

Predator-prey models are the building blocks of ecosystems, as biomasses are grown out of their resource masses. Different relationships exist between these models as different interacting species compete, undergo metamorphosis and migrate strategically in search of resources to sustain their struggle to exist. To investigate these assumptions numerically, ordinary differential equations are formulated, and a variety of methods are used to obtain and compare approximate solutions against exact solutions, although most numerical methods often require heavy, time-consuming computations. In this paper, the traditional differential transform method (DTM) is implemented to obtain numerical approximate solutions to prey-predator models. The solution obtained with DTM converges only locally, within a small domain. The multi-step differential transform method (MSDTM) is a technique that improves DTM by enlarging the interval of convergence of the series expansion. One-predator-one-prey and two-predator-one-prey models are considered, with a quadratic term representing other food sources. The numerical and graphical results show that plain DTM diverges. The advantage of the new algorithm is that the obtained series solution converges over wide time regions; the solutions obtained from DTM and MSDTM are compared with solutions obtained using the classical fourth-order Runge-Kutta method. The results demonstrate that MSDTM computes quickly, is reliable and gives good results compared to the solutions obtained using the classical Runge-Kutta method.
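
A minimal MSDTM sketch on the scalar test equation y' = y, whose differential transform is the recurrence (k+1)·Y[k+1] = Y[k] (an illustrative toy; the paper's prey-predator systems are handled analogously with vector recurrences):

```python
import math

def dtm_step(y0, h, terms=8):
    # differential transform of y' = y on one subinterval: (k+1)*Y[k+1] = Y[k]
    Y = [y0]
    for k in range(terms - 1):
        Y.append(Y[k] / (k + 1))
    return sum(Y[k] * h ** k for k in range(terms))   # evaluate the series at t = h

def msdtm(y0, t_end, steps, terms=8):
    # multi-step DTM: restart the truncated series at every subinterval endpoint,
    # which enlarges the interval of convergence compared with one global series
    y, h = y0, t_end / steps
    for _ in range(steps):
        y = dtm_step(y, h, terms)
    return y

one_window = dtm_step(1.0, 2.0)      # plain DTM over the whole interval [0, 2]
multi_step = msdtm(1.0, 2.0, 20)     # the same series restarted on 20 subintervals
exact = math.exp(2.0)
```

Even on this convergent toy problem the restarted version is far more accurate, which mirrors the paper's observation that MSDTM extends the usable time region.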

Prapart Pue-on

In this manuscript, the fractional residual power series (FRPS) method is employed to solve a system of linear fractional Fredholm integro-differential equations. The significant role of this system in various fields has attracted the attention of researchers for a decade. The fractional derivative here is defined in the Caputo sense. The proposed method relies on the generalized Taylor series expansion and on the fact that the fractional derivative of a constant is zero. The process starts by constructing a residual function, supposing a finite-order approximate power series solution that satisfies the initial conditions. Then, using suitable conditions, the residual functions are converted into a linear system for the power series coefficients. Solving the linear system yields the coefficients of the fractional power series solution. Finally, by substituting these coefficients into the assumed form of the solution, the approximate fractional power series solutions are derived. This technique has the advantage of being applicable directly to the problem while spending less time on computation. It is not only easy to implement, but also provides productive results after a few iterations. Some problems with known solutions emphasize the procedure's simplicity and reliability. Moreover, the exact solutions obtained demonstrate the efficiency and accuracy of the presented method.
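
The "residual function → linear system for the coefficients" step can be sketched on a hypothetical integer-order toy problem, y'(x) = 1 − x/3 + ∫₀¹ x·t·y(t) dt with y(0) = 0, whose exact solution is y(x) = x (this example and all names are illustrative assumptions, not the paper's fractional systems):

```python
from fractions import Fraction as F

def solve_linear(A, b):
    # Gauss-Jordan elimination in exact rational arithmetic
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# With y = sum a_k x^k and a_0 = 0 fixed by the initial condition, the residual
#   R(x) = y'(x) - (1 - x/3) - x * sum_k a_k/(k+2)
# must vanish coefficient by coefficient; truncating at N terms gives a linear
# system in a_1..a_N (column j holds the unknown a_{j+1}).
N = 4
A = [[F(0)] * N for _ in range(N)]
b = [F(0)] * N
for m in range(N):
    A[m][m] = F(m + 1)            # (m+1)*a_{m+1} contributed by y'
    if m == 1:                    # the integral term feeds only the x^1 coefficient
        for j in range(N):
            A[1][j] -= F(1, j + 3)
b[0], b[1] = F(1), F(-1, 3)
coeffs = solve_linear(A, b)        # recovers a_1 = 1, a_2 = a_3 = a_4 = 0
```

In the fractional case the same construction runs over generalized Taylor terms, using the vanishing of the Caputo derivative of constants.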

Mahmoud Riad Mahmoud, Moshera A. M. Ahmad and Badiaa S. Kh. Mohamed

The Lomax distribution (or Pareto type II) was first introduced by K. S. Lomax in 1954. It can be readily applied to a wide range of situations, including the analysis of business failure lifetime data, economics and actuarial science, income and wealth inequality, city sizes, engineering, and lifetime and reliability modeling. In his pioneering paper, Shannon (1948) defined the notion of entropy as a mathematical measure of information, sometimes called Shannon entropy in his honor. He laid the groundwork for a new branch of mathematics in which the notion of entropy plays a fundamental role across different areas of application such as statistics, information theory, financial analysis, and data compression. Ebrahimi and Pellerey [14] introduced the residual entropy function, because the entropy should not be applied to a system that has survived for some units of time; the residual entropy is therefore used to measure ageing and to characterize, classify and order lifetime distributions. In this paper, the estimation of the entropy and residual entropy of a two-parameter Lomax distribution under a generalized Type-II hybrid censoring scheme is introduced. The maximum likelihood estimate of the entropy is provided and the Bayes estimate of the residual entropy is obtained. Simulation studies assessing the performance of the estimates for different sample sizes are described, and finally conclusions are discussed.
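
For the ordinary (uncensored) Lomax(α, λ) distribution with density f(x) = (α/λ)(1 + x/λ)^−(α+1), the Shannon entropy has the closed form ln(λ/α) + 1 + 1/α, which a Monte Carlo check confirms (a sketch with hypothetical parameter values; the paper's censored-sample estimators are more involved):

```python
import math, random

def lomax_entropy(alpha, lam):
    # closed-form differential entropy of Lomax(alpha, lam):
    # H = ln(lam/alpha) + 1 + 1/alpha, since ln(1 + X/lam) ~ Exponential(alpha)
    return math.log(lam / alpha) + 1 + 1 / alpha

def lomax_entropy_mc(alpha, lam, n=200000, seed=1):
    # Monte Carlo estimate of -E[log f(X)] via inverse-transform sampling
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = lam * (u ** (-1 / alpha) - 1)     # inverse CDF of the Lomax distribution
        log_f = math.log(alpha / lam) - (alpha + 1) * math.log(1 + x / lam)
        total -= log_f
    return total / n

closed = lomax_entropy(2.0, 1.0)
estimate = lomax_entropy_mc(2.0, 1.0)
```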

Dilcu Barnes and Saeed Maghsoodloo

This paper focuses on the renewal function, which is simply the mathematical expectation of the number of renewals in a stochastic process. Renewal functions are important and have various applications in many fields. However, obtaining an analytical expression for the renewal function may be very complicated and even impossible, so researchers have focused on developing approximation methods. The purpose of this paper is to explore the renewal functions for non-negligible repair for the most common underlying reliability distributions, using the first four raw moments of the failure and repair distributions. This article gives the approximate number of cycles, number of failures and the resulting availability for particular distributions, assuming the Mean Time to Repair is not negligible and that the Time to Restore (repair) has a probability density function denoted r(t). In this setting, the expected numbers of failures and cycles and the resulting availability were obtained by taking the Laplace transforms of the corresponding renewal functions. An approximation method for obtaining the expected number of cycles, number of failures and availability using raw moments of the failure and repair distributions is provided. Results show that the method produces very accurate results, especially for large values of time t.
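
A Monte Carlo sketch of the alternating failure/repair process described above, using exponential failure and repair times with hypothetical parameter values (the paper's moment-based Laplace-transform approximation is analytical; this only illustrates the quantities involved):

```python
import random

def simulate_availability(mttf, mttr, t_end, seed=7):
    # alternate exponential up-times and repair-times until t_end;
    # returns (fraction of time up, number of failures observed)
    rng = random.Random(seed)
    t, up_time, failures = 0.0, 0.0, 0
    while t < t_end:
        life = rng.expovariate(1 / mttf)
        if t + life >= t_end:
            up_time += t_end - t
            break
        up_time += life
        t += life
        failures += 1
        t += rng.expovariate(1 / mttr)   # repair (Time to Restore) is not negligible

    return up_time / t_end, failures

avail, n_failures = simulate_availability(10.0, 1.0, 100000.0)
steady_state = 10.0 / (10.0 + 1.0)       # long-run availability MTTF/(MTTF + MTTR)
```

For large t the simulated availability approaches MTTF/(MTTF + MTTR) and the failure count approaches t/(MTTF + MTTR), the limits the renewal-function approximations target.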

Brenda Mbouamba Yankam and Abimibola Victoria Oladugba

Experimenters often evaluate the steadiness and consistency of designs over the region of interest by means of their prediction variance capabilities, using the variance dispersion graph and the fraction of design space graph. These two graphs effectively describe the prediction variance capabilities of a design in the region of interest. However, the prediction variance capabilities of third-order response surface designs have not been studied in the literature. In this paper, the prediction variance capabilities of two third-order response surface designs, termed augmented orthogonal uniform composite designs and orthogonal array composite designs, are examined in the cuboidal region for 3≤k≤7 with center points. The prediction variance capabilities are evaluated using the variance dispersion graph and the fraction of design space graph. Also, the D-, E-, G- and T-optimality criteria are used to evaluate these designs in terms of single-value criteria. The results show that the augmented orthogonal uniform composite designs have better prediction variance capabilities in the cuboidal region in terms of the variance dispersion graphs for 3 and 4 factors. The augmented orthogonal uniform composite designs also have better prediction variance capabilities for 3≤k≤7 compared to the orthogonal array composite designs in terms of the fraction of design space graph, and they are shown to be superior in terms of the D-, E-, G- and T-optimality criteria. This shows that the prediction variance capabilities of third-order response surface designs can be clearly visualized by means of the variance dispersion graph and the fraction of design space graph, which should be considered alongside single-value criteria even though the latter indicate some degree of design performance.
The augmented orthogonal uniform composite design should therefore often be preferred in experimentation over the orthogonal array composite design, since it performs better.

Veronika Starodub, Ruslan V. Skuratovskii and Sergii S. Podpriatov

We research triangle cubics and conics in classical geometry with elements of projective geometry. In recent years, N.J. Wildberger has actively dealt with this topic from an algebraic perspective, and triangle conics were also studied in detail by H.M. Cundy and C.F. Parry. The main task of the article is the development of a method for creating curves that pass through triangle centers. During the research, it was noticed that some triangle centers in distinct triangles coincide; the simplest example is that the incenter of a base triangle is the orthocenter of its excentral triangle. This is the key to the algorithm: we can match points belonging to one curve (the base curve) with corresponding points of another triangle, and thereby obtain a new, fascinating geometrical object. In the course of the research, a number of new triangle conics and cubics are derived and their properties in Euclidean space are considered. In addition, corollaries of the obtained theorems in projective geometry are discussed, which shows that all of the discovered results can be transferred to the projective plane. It is well known that many modern cryptosystems are naturally based on elliptic curves, and we investigate the class of curves applicable in cryptography.

Fitriani, Indah Emilia Wijayanti, Budi Surodjo, Sri Wahyuni and Ahmad Faisol

Let R be a ring, K and M be R-modules, L a uniserial R-module, and X a submodule of L. The triple (K,L,M) is said to be X-sub-exact at L if the sequence K→X→M is exact. Let σ(K,L,M) be the set of all submodules Y of L such that (K,L,M) is Y-sub-exact. The sub-exact sequence is a generalization of an exact sequence. We collect all triples (K,L,M) such that (K,L,M) is an X-sub-exact sequence, where X is a maximal element of σ(K,L,M). In a uniserial module, all submodules are comparable under inclusion, so we can find the maximal element of σ(K,L,M). In this paper, we prove that the set σ(K,L,M) forms a category, which we denote by C_{L}. Furthermore, we prove that C_{Y} is a full subcategory of C_{L}, for every submodule Y of L. Next, we show that if L is a uniserial module, then C_{L} is a pre-additive category. Every morphism in C_{L} has a kernel under some conditions. Since a factor module of L is not a submodule of L, a morphism in the category C_{L} need not have a cokernel, so C_{L} is not an abelian category. Moreover, we investigate monic X-sub-exact and epic X-sub-exact sequences. We prove that the triple (K,L,M) is monic X-sub-exact if and only if the corresponding triple of Z-modules is a monic sub-exact sequence, for all R-modules N, and that the triple (K,L,M) is epic X-sub-exact if and only if the corresponding triple of Z-modules is a monic sub-exact sequence, for all R-modules N.

Tserenbat Oirov Gereltuya Terbish and Nyamsuren Dorj

The paper focuses on the estimation of the force of mortality of the lifetime distribution. We use a third-order B-spline function to construct the logarithm of the force of mortality. The number of knots, their locations, and the B-spline coefficients are estimated from a sample of observations by the maximum likelihood method. The B-spline parameter estimates obtained by maximum likelihood are evaluated with a modified chi-squared goodness-of-fit statistic. An algorithm was developed to carry out a sequential procedure for the modified chi-squared goodness-of-fit test, and Matlab code implementing the algorithm was written. Within this evaluation, the number of knots in the model was significantly reduced. The developed method was used to explain the mortality rate of women aged 0 to 69 among the Mongolian population in 2019 and to estimate the life expectancy of Mongolians. The results of this experiment provided an excellent estimate of the force of mortality. Constructing a mortality rate estimate makes it possible to determine mortality trends and the force of mortality. Here, the force of mortality is further used to construct a survival function, a lifetime distribution function, and a lifetime probability density function. The method can also be used in financial market models and in models that estimate the useful life of equipment.
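
The last step, passing from an estimated force of mortality to the survival, distribution, and density functions, uses only the standard identities S(t) = exp(-∫h(u)du), F = 1 - S, f = h·S. A numerical sketch with a constant hazard (an illustration only, not the paper's B-spline estimate):

```python
import numpy as np

def lifetime_functions(t, h):
    """From a force of mortality h sampled on a grid t, build the survival
    function S(t) = exp(-integral of h), the CDF F = 1 - S and the density
    f = h * S, using a cumulative trapezoidal rule."""
    H = np.concatenate(([0.0], np.cumsum(np.diff(t) * (h[1:] + h[:-1]) / 2.0)))
    S = np.exp(-H)
    return S, 1.0 - S, h * S

# sanity check with a constant hazard h(t) = 0.5, where S(t) = exp(-0.5 t)
t = np.linspace(0.0, 4.0, 401)
S, F, f = lifetime_functions(t, np.full_like(t, 0.5))
```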

Robert Reynolds and Allan Stauffer

The aim of this paper is to provide a table of definite integrals which includes both known and new integrals. This work is important because we provide a formal derivation for integrals in [7] not currently present in the literature, along with new integrals. By deriving new integrals we hope to expand the current list of integral formulae, which could assist in research where applicable. The authors apply their contour integral method [9] to an integral in [8] to achieve a new integral formula in terms of the Lerch function. In the present work, the authors provide a formal derivation for an interesting exponential Fourier transform and express it in terms of the Lerch function. The exponential Fourier transform has many real-world applications, for example in electrical engineering, in the study of electrical transients [10], and in civil engineering, in the stress analysis of boundary loads on soil [11]. The definite integral derived in this work is given by (1), subject to conditions on the variables. This formal derivation is then used to derive the correct version of a definite integral transform along with new formulae. Some of the results in this work are new.

Kittisak Tinpun

Let S be a semigroup and let G be a subset of S. G is a generating set of S, denoted by S = ⟨G⟩, if every element of S can be expressed as a product of elements of G. The rank of S is the minimal size, that is, the minimal cardinality, of a generating set of S: rank(S) = min{|G| : ⟨G⟩ = S}. In the last twenty years, the rank of semigroups has been studied worldwide by many researchers. This led to a new notion of rank: the relative rank of S modulo U, the minimal size of a subset A of S such that U ∪ A generates S, i.e. rank(S : U) = min{|A| : ⟨U ∪ A⟩ = S}. Such a set A is called a generating set of S modulo U. The idea of the relative rank generalizes the concept of the rank of a semigroup and was first introduced by Howie, Ruskuc and Higgins in 1998. Let X be a finite chain and let Y be a subchain of X. We consider the semigroup of full transformations on X under composition of functions, and the set of all transformations from X to Y, the so-called transformation semigroup with restricted range Y, first introduced and studied by Symons in 1975. Many results on full transformation semigroups have been extended to transformation semigroups with restricted range. In this paper, we focus on the relative rank of this semigroup and of the semigroup of all orientation-preserving transformations in it. In Section 2.1, we determine the relative rank modulo the semigroup of all order-preserving or order-reversing transformations. In Section 2.2, we describe the results on the relative rank of the orientation-preserving transformations modulo this semigroup. In Section 2.3, we determine the relative rank modulo the semigroup of all orientation-preserving or orientation-reversing transformations. Moreover, we obtain that the two relative ranks in question are equal.
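
For intuition on the (ordinary) rank, a brute-force check on the tiny case of the full transformation semigroup on a 3-element set is easy (our own illustration, not the relative-rank computations of the paper): no two transformations generate the whole semigroup, while a classical triple, the two generators of the symmetric group plus an idempotent of defect 1, does; hence its rank is 3.

```python
from itertools import product, combinations

n = 3
compose = lambda f, g: tuple(f[g[x]] for x in range(n))

def generated(gens):
    """Subsemigroup generated by gens: closure under composition."""
    closure, frontier = set(gens), list(gens)
    while frontier:
        f = frontier.pop()
        for g in list(closure):
            for h in (compose(f, g), compose(g, f)):
                if h not in closure:
                    closure.add(h)
                    frontier.append(h)
    return closure

T3 = set(product(range(n), repeat=n))       # all 27 maps {0,1,2} -> {0,1,2}
triple = {(1, 2, 0), (1, 0, 2), (0, 0, 2)}  # 3-cycle, transposition, idempotent
no_pair_generates = not any(generated(p) == T3 for p in combinations(T3, 2))
```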

Bhagwan Dass Vijay Prakash Tomar Krishan Kumar and Vikas Ranga

The concept of fuzzy sets presented by Zadeh has achieved enormous success in numerous fields. Uncertainty is ubiquitous in the real world, and entropy is an important tool for dealing with uncertainty and fuzziness. In this article, we propose a new measure of directed divergence on fuzzy sets. Extensions of fuzzy sets, and fuzzy sets integrated with other theories, have been applied by several researchers. To establish the validity of the measure, the required axioms are proved, and the properties of the proposed measure are discussed. In the real world, multicriteria decision making is a very practical method with a wide range of uses: it allows us to find the best choice among the given criteria. In recent years, many researchers have applied fuzzy directed divergence extensively to multicriteria decision making, and some have applied parameterized hesitant fuzzy soft set theory to decision making. Using the proposed measure, we investigate the multiple criteria decision-making problem under a fuzzy environment and give a suitable decision-making method. An application of the introduced measure to a decision-making problem is given, together with a numerical example. In the fuzzy multicriteria problem, the analysis is illustrated by an example of the newly defined approach concerning a student's admission preference for a postgraduate science course.
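
The paper's new divergence measure is not reproduced here, but the general shape of a fuzzy directed divergence can be illustrated with the classical Bhandari-Pal measure (a textbook example, not the authors' proposal): it vanishes exactly when the two membership vectors agree and grows as they diverge.

```python
import math

def fuzzy_directed_divergence(mu_a, mu_b, eps=1e-12):
    """Bhandari-Pal style directed divergence between fuzzy sets A and B
    given by membership vectors mu_a, mu_b with values in (0, 1)."""
    total = 0.0
    for a, b in zip(mu_a, mu_b):
        a = min(max(a, eps), 1.0 - eps)  # clamp away from 0 and 1
        b = min(max(b, eps), 1.0 - eps)
        total += a * math.log(a / b) + (1.0 - a) * math.log((1.0 - a) / (1.0 - b))
    return total

A = [0.7, 0.2, 0.9, 0.4]
B = [0.6, 0.3, 0.8, 0.5]
d_ab = fuzzy_directed_divergence(A, B)  # positive, since A differs from B
```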

Bhuwaneshwar Kumar Gupt F. Lalthlamuanpuii and Md. Irphan Ahamed

In survey planning, situations sometimes arise that call for cluster sampling, because of the spatial relationship between elements of the population, the physical features of the land over which the elements are dispersed, or the unavailability of a reliable list of elements. At the same time, techniques and strategies are required to ensure the precision of the sample in representing the parent population. Although several theoretical and practical works have been done in cluster sampling, stratified sampling, and stratified cluster sampling, the problem of stratified cluster sampling for a study variable based on an auxiliary variable, which is required in practice, has so far never been approached. For the first time, this paper deals with the problem of optimum stratification of a population of clusters in cluster sampling with clusters of equal size, for a characteristic y under study, based on a highly correlated concomitant variable x, for allocation proportional to stratum cluster totals under a superpopulation model. Equations giving optimum strata boundaries (OSB) for dividing the population, in which the sampling unit is a cluster, are obtained by minimising the sampling variance of the estimator of the population mean. As the equations are implicit in nature, a few methods of finding approximately optimum strata boundaries (AOSB) are deduced from the equations giving OSB. In deriving the equations, mathematical tools of calculus and algebra are used in addition to the statistical method of finding the conditional expectation of the variance. All the proposed methods of stratification are empirically examined using live data, the population of villages in the Lunglei and Serchhip districts of Mizoram State, India, and are found to stratify the population efficiently. The proposed methods may provide a practically feasible solution in planning socio-economic surveys.
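
The model-based boundary equations of the paper are not reproduced here, but the flavour of approximately optimum stratification can be conveyed with the classical Dalenius-Hodges cum-√f rule (a generic textbook approximation, not the authors' method): boundaries are placed so as to split the cumulative square root of the frequency density into equal parts.

```python
import numpy as np

def cum_sqrt_f_boundaries(x, n_strata, n_bins=50):
    """Dalenius-Hodges cum-sqrt(f) rule: choose strata boundaries that split
    the cumulative sqrt of the frequency density into equal parts."""
    freq, edges = np.histogram(x, bins=n_bins)
    cum = np.cumsum(np.sqrt(freq))
    return [edges[np.searchsorted(cum, cum[-1] * k / n_strata) + 1]
            for k in range(1, n_strata)]

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, 10_000)
b2 = cum_sqrt_f_boundaries(x, 2)  # single boundary near 0.5 for uniform data
```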

Piyatida Phanthuna and Yupaporn Areepong

A modified exponentially weighted moving average (EWMA) scheme, expanded from the EWMA chart, is an instrument for the immediate detection of small shift sizes. The objective of this research is to derive the average run length (ARL) as an explicit formula for a modified EWMA control chart applied to observations from a seasonal autoregressive model of order p (SAR(p)_{L}) with exponential residuals. A numerical integral equation method is used to approximate the ARL in order to check the accuracy of the explicit formulas. The results of the two methods show that their ARL solutions are close, and the percentage of absolute relative change (ARC) obtained is less than 0.002. Furthermore, the modified EWMA chart with the SAR(p)_{L} model is tested for shift detection when the design parameters, c among them, are changed; the ARL and relative mean index (RMI) results are found to improve as these parameters increase. In addition, the performance of the modified EWMA control chart is compared with that of the EWMA scheme, and the results favour the modified EWMA chart for small shifts. Finally, the explicit formula can be applied to various real-world data; for example, two data sets on information and communication technology are used to validate the technique and demonstrate its capability.
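
For orientation, the run-length behaviour that ARL formulas summarize can be reproduced by simulation for the classical EWMA recursion Z_t = λX_t + (1-λ)Z_{t-1} (plain EWMA with normal observations, not the modified chart or the SAR(p)_{L} exponential residuals of the paper; the chart constants below are arbitrary choices):

```python
import math
import random

def ewma_arl(shift, lam=0.1, L=2.7, n_runs=500, max_n=5000, seed=1):
    """Monte Carlo average run length of a two-sided EWMA chart
    Z_t = lam*X_t + (1-lam)*Z_{t-1}, Z_0 = 0, for X_t ~ N(shift, 1),
    with asymptotic control limits +/- L*sqrt(lam/(2-lam))."""
    rng = random.Random(seed)
    limit = L * math.sqrt(lam / (2.0 - lam))
    total = 0
    for _ in range(n_runs):
        z, t = 0.0, 0
        while t < max_n:
            t += 1
            z = lam * rng.gauss(shift, 1.0) + (1.0 - lam) * z
            if abs(z) > limit:
                break
        total += t
    return total / n_runs

arl_in = ewma_arl(0.0)   # in-control ARL: long runs before a false alarm
arl_out = ewma_arl(1.0)  # ARL after a one-sigma mean shift: short runs
```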

Shahira Shafie and Abdul Malek Yaakob

Networked rule bases in a fuzzy system, known as a fuzzy network, carry multiple stages of development in decision-making processes that involve uncertainty in the data used across various fields. A fuzzy network promotes transparency in multicriteria decision making (MCDM), whereby the criteria are divided into cost and benefit subsystems to ensure good assessment performance. By considering hesitant fuzzy sets (HFS), which permit a set of possible values to represent the membership degree of an element, we develop a novel approach that applies a fuzzy network and the maximizing deviation method to solving MCDM problems. The fuzzy network addresses transparency in the formulation, and the maximizing deviation method can recover weight information in MCDM problems whether it is partially known or fully unknown. The proposed method is applied to a case study of stock evaluation with opinions from several decision makers, and its performance is compared using Spearman's rho correlation.
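
The core idea of the maximizing deviation method, that criteria on which the alternatives differ more should carry more weight, can be sketched for crisp scores (a simplification for illustration; the paper works with hesitant fuzzy values):

```python
def max_deviation_weights(scores):
    """Maximizing-deviation criterion weights: the weight of criterion j is
    proportional to the total pairwise deviation of alternative scores on j,
    so criteria that discriminate more between alternatives weigh more."""
    m, n = len(scores), len(scores[0])  # m alternatives, n criteria
    dev = [sum(abs(scores[i][j] - scores[k][j])
               for i in range(m) for k in range(m))
           for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]

# criterion 1 separates the alternatives, criterion 2 does not
S = [[0.2, 0.5], [0.8, 0.5], [0.5, 0.5]]
w = max_deviation_weights(S)  # all weight goes to the discriminating criterion
```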

Sharmila Karim and Haslinda Ibrahim

Permutation is an interesting subject that is still being explored today and is widely applied in many areas. This paper presents the use of factorial numbers for generating starter sets, where starter sets are used for listing permutations. Previously, starter sets were generated from their permutations by exchange-based and cycling-based methods; in the new algorithm, this process is replaced by factorial numbers. The underlying theory is that there is a fixed number of distinct starter sets. Every permutation has a decimal rank, starting from zero, for lexicographic-order permutation only. A decimal number is converted to a factorial number, and the factorial number is then mapped to its corresponding starter set. After that, the half wing of butterfly representation is presented. The advantage of using factorial numbers is the avoidance of a recursive call function for starter set generation; in other words, any starter set can be generated by calling any decimal number. This new algorithm is still at an early stage and under development for the generation of the half wing of butterfly representation. The case n=5 is demonstrated for the new algorithm for lexicographic-order permutation. In conclusion, this new development is only applicable for generating starter sets in lexicographic order, because factorial numbers are applicable to lexicographic-order permutation.

Yik-Siong Pang Nor Aishah Ahad and Sharipah Soaad Syed Yahaya

Multivariate outliers can exist in two forms, casewise and cellwise. Collected data typically contain an unknown proportion and unknown types of outliers, which can jeopardize location estimation and affect research findings. In cases where the two coexist in the same data set, the traditional distance-based trimmed mean and coordinate-wise trimmed mean are unable to estimate location well: the distance-based trimmed mean suffers from leftover cellwise outliers after trimming, whereas the coordinate-wise trimmed mean is affected by extra casewise outliers. Thus, this paper proposes a new robust multivariate location estimator, known as the α-distance-based trimmed median, to deal with both types of outliers simultaneously in a data set. Simulated data were used to illustrate the feasibility of the new procedure by comparing it with the classical mean, the classical median, and the α-distance-based trimmed mean. Undeniably, the classical mean performed best on clean data, but the opposite on contaminated data. Meanwhile, the classical median outperformed the distance-based trimmed mean when dealing with both casewise and cellwise outliers, but was still affected by the combined outliers' effect. Based on the simulation results, the proposed estimator yields better location estimates on contaminated data than the other three estimators considered in this paper. Thus, the proposed estimator can mitigate the issue of outliers and provide better location estimation.
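
For contrast, a generic Mahalanobis-distance-based trimmed mean (one of the benchmark-style estimators, not the proposed α-distance-based trimmed median) can be sketched as follows; it handles the casewise outliers in this toy example well:

```python
import numpy as np

def distance_trimmed_mean(X, alpha=0.1):
    """Trim the ceil(alpha*n) rows farthest (in Mahalanobis distance) from
    the coordinate-wise median, then average the remaining rows."""
    X = np.asarray(X, dtype=float)
    diff = X - np.median(X, axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)  # squared distances
    keep = np.argsort(d2)[: len(X) - int(np.ceil(alpha * len(X)))]
    return X[keep].mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # clean data centred at the origin
X[:5] += 10.0                  # five casewise outliers
est = distance_trimmed_mean(X) # stays close to the true centre (0, 0, 0)
```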

Mans L Mananohas Charles E Mongi Dolfie Pandara Chriestie E J C Montolalu and Muhammad P M Mo'o

The weight enumerator of a code is a homogeneous polynomial that provides a lot of information about the code; research on the weight enumerator is therefore very important for the development of a code. In this study, we focus on the code . Let be the weight enumerator of the code . Fujii and Oura showed that is generated by and . Indeed, we show that is an element of the polynomial ring . We know that the weight enumerator of every self-dual doubly-even (Type II) code is generated by and . Recall that is a Type II code; thus, is an element of the polynomial ring and . One of the motivations of this research is to investigate the connection between these two polynomial rings in representing . Let and be the coefficients of the polynomials that represent as an element of and , respectively. We find that is an element of the polynomial . In addition, we show that there is no weight enumerator of a Type II code generated by and that can be written uniquely as an isobaric polynomial in five homogeneous polynomial elements of degrees 8, 24, 24, 24, 24.
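
The specific codes of the paper are not reproduced here, but the notion of a weight enumerator can be illustrated with the smallest Type II code, the extended Hamming [8,4] code, whose enumerator is x^8 + 14x^4y^4 + y^8 (a standard example, computed by brute force below):

```python
from itertools import product
from collections import Counter

# generator matrix of the extended Hamming [8,4] code over GF(2)
G = [(1, 0, 0, 0, 0, 1, 1, 1),
     (0, 1, 0, 0, 1, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 0, 1),
     (0, 0, 0, 1, 1, 1, 1, 0)]

def weight_distribution(G):
    """Hamming weights of all codewords spanned by the rows of G over GF(2);
    the weight enumerator reads the distribution off as coefficients."""
    n = len(G[0])
    dist = Counter()
    for coeffs in product((0, 1), repeat=len(G)):
        word = [sum(c * row[j] for c, row in zip(coeffs, G)) % 2 for j in range(n)]
        dist[sum(word)] += 1
    return dist

W = weight_distribution(G)  # {0: 1, 4: 14, 8: 1}, i.e. x^8 + 14 x^4 y^4 + y^8
```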

Alaa Hassan Noreldeen Wageeda M. M. and O. H. Fathy

Polynomial algebra is essential in commutative algebra, since it can serve as a fundamental model for differentiation. For module differentials and Loday's differential commutative graded algebra, simplicial homology for polynomial algebra was defined. In this article, the definitions of the simplicial, the cyclic, and the dihedral homology of pure algebra are presented. The simplicial and the cyclic homology are defined for the algebra of polynomials and of Laurent polynomials, and the long exact sequences of both cyclic homology and simplicial homology are presented. The Morita invariance property of cyclic homology is established. A relation representing the relationship between dihedral and cyclic (co)homology of polynomial algebra is introduced, and a corresponding relation defining the relationship between dihedral and cyclic (co)homology of Laurent polynomial algebra is examined. Furthermore, the Morita invariance property of dihedral homology in polynomial algebra is investigated, as is the Morita property of dihedral homology in Laurent polynomials. For the dihedral homology, the long exact sequence of the relevant short exact sequence is obtained, and the long exact sequence of the short exact sequence arising from the reflexive (co)homology of polynomial algebra is obtained as well. Studying polynomial algebra may also help in calculations related to COVID-19 vaccines.

Hussein Eledum and Hytham Hussein Awadallah

In the multiple linear regression model, the problem of multicollinearity may occur together with autocorrelation; several methods of estimation have therefore been developed to deal with this case, and Two-Stage Ridge Regression (TR) is one of them. This article's main objective is to run a Monte Carlo simulation to investigate the impact of both problems, multicollinearity and autocorrelation, on the performance of TR in the multiple linear regression model. The simulation is carried out under different levels of multicollinearity and different values of the autocorrelation coefficient, taking into account different sample sizes. Some new properties of the TR method, including its expectation, variance, and mean square error, are derived. The study also develops some techniques to estimate the biasing parameter for TR by modifying popular techniques used in ridge regression (RR). Mean square error is used as the basis for evaluation and comparison. The empirical findings from the simulations reveal that the TR estimator performs better than RR, and the values of the biasing parameter under TR are always less than those under RR. This paper contributes to the existing literature by developing new estimation methods to overcome the presence of mixed problems in a linear regression model and by studying their properties.
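
The ordinary ridge estimator that TR builds on, b(k) = (X'X + kI)^(-1)X'y, is easy to sketch. The toy example below (our own, with arbitrary data; it is not the two-stage procedure and ignores autocorrelation) shows how a positive biasing parameter k stabilizes the coefficients under severe multicollinearity:

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator b(k) = (X'X + kI)^(-1) X'y; k = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(3)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)     # severe multicollinearity
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.5, size=200)  # true coefficients (1, 1)

b_ols = ridge(X, y, 0.0)    # unstable: large variance split between x1 and x2
b_ridge = ridge(X, y, 1.0)  # shrunk toward the balanced solution near (1, 1)
```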

Md. Irphan Ahamed Bhuwaneshwar Kumar Gupt and Manoshi Phukon

In stratified sampling, ever since Dalenius [1] took up the problem of optimum stratification, research in the area has been progressing in various perspectives and dimensions to date. Among the multifaceted developments, a few noteworthy ones are the consideration of different sample selection methods and allocations, study-variable-based stratification, auxiliary-variable-based stratification, superpopulation models, the extension to two study variables for a single auxiliary variable, and the extension to two stratification variables for a single study variable. However, with regard to the optimum stratification of heteroscedastic populations (live populations are generally heteroscedastic), it was Gupt and Ahamed [2,3] who considered the problem for a few allocations under a heteroscedastic regression superpopulation (HRS) model. As a sequel to that work, this paper considers the problem of optimum stratification for an objective variable y based on a concomitant variable x under the HRS model, for an allocation proposed by Gupt [4,5] and termed the Generalised Auxiliary Variable Optimum Allocation (GAVOA). Methods of stratification in the form of equations, and approximate solutions to these equations, which stratify populations at optimum strata boundaries (OSB) and approximately optimum strata boundaries (AOSB) respectively, are obtained. Mathematical analysis is used in minimizing the sampling variance of the estimator of the population mean and in deriving all the proposed methods of stratification. The proposed equations divide heteroscedastic populations, whether symmetrical, moderately skewed, or highly skewed, at OSB, but the equations are implicit in nature and not easy to solve. Therefore, a few methods of finding AOSB are deduced from the equations through analytically justified steps of approximation.
The methods may provide practically feasible solutions in survey planning for stratifying heteroscedastic populations of any level of heteroscedasticity, and the work may contribute, to some extent, theoretically to the research area. The methods are empirically examined on a few generated heteroscedastic data sets of varied shapes, with assumed levels of heteroscedasticity, and are found to perform with high efficiency. The proposed methods of stratification are restricted to the particular allocation used.

Pradeep Shende and Arvind Kumar Sinha

Data are being generated at an exponential pace with the advancement of information technology, and such data often contain uncertain and vague information. The rough set approximation is a way to find information in a data set under uncertainty and to classify the objects of the data set. This work presents a mathematical approach to evaluating the uncertainties of data sets and its application to data reduction. We extend the multi-granulation variable precision rough set in the context of uncertainty optimization, developing an uncertainty optimization-based multi-granular rough set (UOMGRS) to minimize the uncertainties in the data set more effectively. Using UOMGRS, we find the most informative attributes in the feature space. It is desirable to minimize the rough set boundary region using the attributes with the highest approximation quality; thus we group the attributes whose relative quality of approximation is maximal, so as to maximize the positive region and minimize the uncertain region. We compare UOMGRS with the single-granulation rough set (SGRS) and the multi-granular rough set (MGRS). With our proposed method, only an average of 62% of the attributes is required for approximation, whereas SGRS and MGRS need an average of at least 72% of the attributes in the data set to approximate the concepts in the data set. Our proposed method requires less data for the classification of objects and helps minimize the uncertainties in the data set more efficiently.
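
The boundary region being minimized is the gap between the classical Pawlak lower and upper approximations, which can be sketched in a few lines (a single-granulation, crisp illustration only; the attribute names and objects are invented):

```python
from collections import defaultdict

def approximations(objects, attrs, concept):
    """Pawlak rough-set lower/upper approximation of a concept (a set of
    object names) w.r.t. the indiscernibility relation induced by attrs.
    objects: dict mapping object name -> dict of attribute values."""
    classes = defaultdict(set)  # equivalence classes of indiscernible objects
    for name, row in objects.items():
        classes[tuple(row[a] for a in attrs)].add(name)
    lower, upper = set(), set()
    for block in classes.values():
        if block <= concept:   # certainly inside the concept
            lower |= block
        if block & concept:    # possibly inside the concept
            upper |= block
    return lower, upper

data = {
    'u1': {'colour': 'red',  'size': 'big'},
    'u2': {'colour': 'red',  'size': 'big'},
    'u3': {'colour': 'blue', 'size': 'big'},
    'u4': {'colour': 'blue', 'size': 'small'},
}
low, up = approximations(data, ['colour'], {'u1', 'u2', 'u3'})
# the boundary region up - low is the uncertain region to be minimized
```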

Samsul Arifin Hanni Garminia and Pudji Astuti

Not long ago, Ghorbani and Nazemian [2015] introduced the concept of valuation dimension, which measures how much a ring differs from being a valuation ring. They showed that every Artinian ring has finite valuation dimension, and further that any commutative ring with finite valuation dimension is semiperfect; however, there is a semiperfect ring which has infinite valuation dimension. Given those facts, it is of interest to further investigate properties of rings that have finite valuation dimension. In this article we give conditions that are necessary and sufficient for a Noetherian ring to have finite valuation dimension. In particular, we prove that a Noetherian ring has finite valuation dimension if and only if it is Artinian or valuation. In view of the fact that a ring of finite valuation dimension must be semiperfect, our investigation is confined to semiperfect Noetherian rings. Furthermore, as a semiperfect commutative ring is a finite product of local rings, the inquiry is divided into two cases: the case where the examined ring is local, and the case where it is a product of at least two local rings. First, every local Noetherian ring has finite valuation dimension if and only if it is Artinian or valuation. Second, any Noetherian ring that is a product of two or more local rings is shown to have finite valuation dimension if and only if it is Artinian.

Robert Reynolds and Allan Stauffer

It is always useful to improve the catalogue of definite integrals available in tables. In this paper we use our previous work on Lobachevsky integrals to derive entries in the tables of Bierens de Haan and Prudnikov, featuring errata and new integral formulae for interested readers. In this work we derive a definite integral, given by (1), in terms of the Lerch function. The importance of this work lies in the derivation of known and new results not presently found in the current literature. We use our contour integral method, apply it to an integral in Prudnikov, and derive a closed-form solution in terms of a special function. The advantage of using a special function is the added benefit of analytic continuation, which widens the range of computation of the parameters. Special functions have significance in mathematical analysis, functional analysis, geometry, physics, and other applications. They are used in the solutions of differential equations and integrals of elementary functions, and they are linked to the theory of Lie groups and Lie algebras, as well as certain topics in mathematical physics.

Ali Naji Shaker

Various boundary element techniques have been used to obtain solutions of the eigenvalue problem for partial differential equations. A number of mathematical concepts related to the eigenvalue problem are discussed in this paper. Initially, we study basic approaches such as the Dirichlet distribution, the Dirichlet process, and the mixed Dirichlet model. Four different eigenvalue problems are summarized, viz. the Dirichlet eigenvalue problem, the Neumann eigenvalue problem, the mixed Dirichlet-Neumann eigenvalue problem, and the periodic eigenvalue problem. The Dirichlet eigenvalue problem is analyzed briefly for three different cases of the value of λ. We present the result for the multinomial, whose prior is the Dirichlet distribution. The eigenvalue results for the ordinary differential equation are extrapolated, and the basic calculations for λ, which follow an iterative method, are also carried out.
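
For the one-dimensional model problem -u'' = λu, the four boundary-condition types above give the textbook spectra (standard material, included only for orientation; not taken from the paper):

```latex
\begin{align*}
\text{Dirichlet: } & u(0)=u(\pi)=0, & \lambda_n &= n^2, & u_n &= \sin nx,\ n \ge 1,\\
\text{Neumann: } & u'(0)=u'(\pi)=0, & \lambda_n &= n^2, & u_n &= \cos nx,\ n \ge 0,\\
\text{Mixed: } & u(0)=u'(\pi)=0, & \lambda_n &= \bigl(n+\tfrac12\bigr)^2, & u_n &= \sin\bigl(n+\tfrac12\bigr)x,\ n \ge 0,\\
\text{Periodic: } & u(0)=u(2\pi),\ u'(0)=u'(2\pi), & \lambda_n &= n^2, & u_n &= e^{\pm i n x},\ n \ge 0.
\end{align*}
```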

Jamila Jawdat and Ayat Kamal

This paper deals with Quasi-Chebyshevity in the Bochner function spaces L^{p}(μ, X), where X is a Banach space. For W a nonempty closed subset of X and x ∊ X, an element w_{0} in W is called a "best approximation" to x from W if ||x − w_{0}|| ≤ ||x − w|| for all w in W. All best approximation points of x from W form a set usually denoted by P_{W}(x). The set W is called "proximinal" in X if P_{W}(x) is nonempty for each x in X. Further, W is said to be "Quasi-Chebyshev" in X whenever, for each x in X, the set P_{W}(x) is nonempty and compact in X. This subject has been studied in general Banach spaces by several authors, and some results have been obtained. In this work, we study Quasi-Chebyshevity in the Bochner L^{p}-spaces. The main result in this paper is that, given a Quasi-Chebyshev subspace W in X, L^{p}(μ, W) is Quasi-Chebyshev in L^{p}(μ, X) if and only if L^{1}(μ, W) is Quasi-Chebyshev in L^{1}(μ, X). As a consequence, one gets that if W is reflexive in X and X satisfies the sequential KK-property, then L^{p}(μ, W) is Quasi-Chebyshev in L^{p}(μ, X).

Faiz Zulkifli Zulkifley Mohamed Nor Afzalina Azmee and Rozaimah Zainal Abidin

Ordinal regression is used to model an ordinal response variable as a function of several explanatory variables. The most commonly used model for ordinal regression is the proportional odds model (POM). The classical technique for estimating the unknown parameters of this model is the maximum likelihood (ML) estimator; however, this method is not suitable for problems with extreme observations. A robust regression method is needed to handle extreme points in the data. This study proposes the Huber M-estimator as a robust method to estimate the parameters of the POM with a logistic link function and polytomous explanatory variables. The study assesses the performance of the ML estimator and the proposed robust method through an extensive Monte Carlo simulation study conducted using the statistical software R. The measures for comparison are bias, RMSE, and Lipsitz's goodness-of-fit test. Various sample sizes, percentages of contamination, and residual standard deviations are considered in the simulation study. Preliminary results show that the Huber estimates provide the best results for parameter estimation and overall model fitting. Huber's estimator reached a 50% breakdown point for data containing extreme points that are quite far from most points. In addition, extreme points that are only about twice as far away as most points have no major impact on the ML estimates. This means that ML and Huber may yield the same results if the model's residual values are between -2 and 2. This situation may also occur for data with a percentage of contamination below 5%.
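
The Huber weighting that powers such M-estimators is easiest to see in the simplest setting, a location estimate via iteratively reweighted averaging (a generic sketch, not the POM fitting procedure of the paper; the tuning constant 1.345 is the usual default):

```python
import statistics

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location: observations within c*s of the current
    estimate get full weight, others are down-weighted in proportion to
    their distance; s is a fixed MAD-based scale."""
    mu = statistics.median(x)
    s = statistics.median([abs(v - mu) for v in x]) / 0.6745 or 1.0
    for _ in range(max_iter):
        w = [1.0 if abs(v - mu) <= c * s else c * s / abs(v - mu) for v in x]
        new = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
        if abs(new - mu) < tol:
            return new
        mu = new
    return mu

data = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 50.0]  # one extreme point
mu_hat = huber_location(data)  # stays near 2, unlike the plain mean
```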

S. Nasrin R. N. Mondal and M. M. Alam

A Riga plate is a spanwise array of electrodes and permanent magnets that forms a plane surface and produces electromagnetohydrodynamic fluid behaviour; it is mostly used in industrial processes involving fluid flow. In cases where the external application of a magnetic or electric field is required, better flow is obtained by involving a Riga plate. The Riga plate acts as an agent to reduce skin friction and enhance heat transfer; it also diminishes turbulence effects, making efficient flow control possible and increasing the performance of the machine. The unsteady Couette flow with Hall and ion-slip current effects between two Riga plates has therefore been investigated numerically. The numerical solutions are acquired using the explicit finite difference method, and results have been obtained for several values of the dimensionless parameters, such as the pressure gradient parameter, the Hall and ion-slip parameters, the modified Hartmann number, the Prandtl number, and the Eckert number. In this article, the influence of the modified Hartmann number on the flow profiles is immense owing to the Riga plate. Expressions for the skin friction and the Nusselt number have been computed, and the effects of the relevant parameters on the various distributions have been sketched and presented graphically.
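
As a minimal illustration of the explicit finite-difference approach (plane Couette diffusion only; none of the Hall, ion-slip, or Riga-plate force terms of the paper are included), the scheme below marches du/dt = ν d²u/dy² to its steady linear profile:

```python
def couette_profile(n=21, nu=1.0, steps=5000):
    """Explicit finite differences for du/dt = nu * d2u/dy2 between a fixed
    plate (u = 0 at y = 0) and a moving plate (u = 1 at y = 1); the steady
    state is the linear Couette profile u(y) = y."""
    dy = 1.0 / (n - 1)
    dt = 0.4 * dy * dy / nu        # r = nu*dt/dy^2 = 0.4 <= 0.5, so stable
    r = nu * dt / (dy * dy)
    u = [0.0] * n
    u[-1] = 1.0
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                      for i in range(1, n - 1)] + [u[-1]]
    return u

u = couette_profile()  # converges to u[i] = i*dy, the linear profile
```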

Christoph Fuhrmann Hanns-Ludwig Harney Klaus Harney and Andreas Müller

The present article derives the minimal number N of observations needed to approximate a Bayesian posterior distribution by a Gaussian. The derivation is based on an invariance requirement for the likelihood. This requirement is defined by a Lie group that leaves the likelihood unchanged when applied both to the observation(s) and to the parameter to be estimated; it leads, in turn, to a class of specific priors. In general, the criterion for the Gaussian approximation is found to depend on (i) the Fisher information related to the likelihood, and (ii) the lowest non-vanishing order in the Taylor expansion of the Kullback-Leibler distance between the likelihood and the likelihood taken at the maximum-likelihood estimator given by the observations. Two examples are presented, widespread in various statistical analyses. In the first one, a chi-squared distribution, both the observations and the parameter are defined over the whole real axis. In the other one, the binomial distribution, the observation is a binary number, while the parameter is defined on a finite interval of the real axis. Analytic expressions for the required minimal N are given in both cases. The necessary N is an order of magnitude larger for the chi-squared model (continuous observations) than for the binomial model (binary observations). The difference is traced back to symmetry properties of the likelihood function. We see considerable practical interest in our results, since the normal distribution is the basis of the parametric methods of applied statistics widely used in diverse areas of research (education, medicine, physics, astronomy, etc.). An analytical criterion for whether the normal distribution is applicable appears relevant for practitioners in these fields.

Reza Pakyari

The Geometric Extreme Exponential (GEE) distribution is one of the statistical models that can be useful in fitting and describing lifetime data. In this paper, the problem of estimating the reliability R = P(Y < X), when X and Y are independent GEE random variables with a common scale parameter but different shape parameters, is considered. The probability R = P(Y < X) is also known as the stress-strength reliability parameter and describes the case where a component of strength X is subjected to stress Y. The reliability R = P(Y < X) has applications in engineering, finance, and the biomedical sciences. We present the maximum likelihood estimator of R and study its asymptotic behavior. We first study the asymptotic distribution of the maximum likelihood estimators of the GEE parameters, and prove that the maximum likelihood estimators, and hence the estimator of the reliability R, have asymptotic normal distributions. A bootstrap confidence interval for R is also presented. Monte Carlo simulations are performed to assess the performance of the proposed estimation method and the validity of the confidence interval. We find that the performance of the maximum likelihood estimator and of the bootstrap confidence interval is satisfactory even for small sample sizes. An analysis of a dataset is given for illustrative purposes.
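
For the GEE model, R must be estimated, but the meaning of R = P(Y < X) is easy to sketch with a Monte Carlo check against a case that has a closed form, namely independent exponentials, where R = rate_Y / (rate_X + rate_Y) (a stand-in distribution for illustration only, not the GEE model of the paper):

```python
import random

def mc_reliability(rate_x, rate_y, n=200_000, seed=7):
    """Monte Carlo estimate of the stress-strength reliability R = P(Y < X)
    for independent exponential X (strength) and Y (stress)."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(rate_y) < rng.expovariate(rate_x)
               for _ in range(n))
    return hits / n

r_mc = mc_reliability(1.0, 3.0)
r_exact = 3.0 / (1.0 + 3.0)  # closed form for exponentials: 0.75
```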

Samsul Arifin, Hanni Garminia and Pudji Astuti

In this article, we present methods for calculating the uniserial dimension of modules that are finitely generated over a discrete valuation domain (DVD). The notion of the uniserial dimension of a module over a commutative ring, which measures how far the module deviates from being uniserial, was recently proposed by Nazemian et al. They showed that if R is a Noetherian commutative ring, then every finitely generated module over R has uniserial dimension. Ghorbani and Nazemian have shown that R is a Noetherian (resp. Artinian) ring if and only if the ring R × R has (resp. finite) valuation dimension. Finitely generated modules over a valuation domain are further examined from here. However, since this setting remains too broad, further research into the uniserial dimension of modules finitely generated over a DVD is needed. In the case of a DVD R, a finitely generated module over R can, as is well known, be decomposed into the direct sum of a torsion module and a free module. Therefore, we first present methods for determining the uniserial dimension of a primary module, followed by methods for a general finitely generated module. The major finding of this work is that the uniserial dimension of such a module is a function of the elementary divisors of its torsion part and the rank of its free part.

Hermansah, Dedi Rosadi, Abdurakhman and Herni Utami

In this research, we propose a Nonlinear Auto-Regressive network with exogenous inputs (NARX) model with a different approach, namely the determination of the main input variables using stepwise regression and of the exogenous input using a deterministic seasonal dummy. There are two approaches to constructing a deterministic seasonal dummy, namely the binary and the sine-cosine dummy variables. The number of neurons in the hidden layer is approximately half the number of input variables plus one. Furthermore, the resilient backpropagation learning algorithm and the hyperbolic tangent activation function were used to train each network. Three ensemble operators are used, namely mean, median, and mode, to address the overfitting problem and the weakness of a single NARX model. Furthermore, we provide an empirical study using actual data, where forecasting accuracy is measured by the Mean Absolute Percent Error (MAPE). The empirical study results show that the NARX model with binary dummy exogenous input is the most accurate for trend and seasonal data patterns with multiplicative properties. For trend and seasonal data patterns with additive properties, the NARX model with sine-cosine dummy exogenous input is more accurate, except when the NARX model uses the mean ensemble operator. For trend and non-seasonal data patterns, the most accurate NARX model is obtained using the mean ensemble operator. This research also shows that the median and mode ensemble operators, which are rarely used, are more accurate than the mean ensemble operator for data with trend and seasonal patterns. The median ensemble operator requires the least average computation time, followed by the mode ensemble operator. Moreover, the accuracy of all of our proposed NARX models consistently outperforms the exponential smoothing method and the ARIMA method.
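The two measurement ingredients named above, MAPE and the mean/median/mode ensemble operators, can be sketched as follows (the NARX training itself is omitted; the member forecasts are hypothetical numbers):

```python
import numpy as np

def mape(actual, forecast):
    # Mean Absolute Percent Error, in percent.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def ensemble(member_forecasts, operator="mean"):
    # Combine the forecasts of several trained networks point by point.
    f = np.asarray(member_forecasts, float)   # shape: (n_members, horizon)
    if operator == "mean":
        return f.mean(axis=0)
    if operator == "median":
        return np.median(f, axis=0)
    if operator == "mode":
        # Mode of continuous forecasts: round, then take the most frequent value.
        out = []
        for col in np.round(f, 1).T:
            vals, counts = np.unique(col, return_counts=True)
            out.append(vals[np.argmax(counts)])
        return np.array(out)
    raise ValueError(operator)

members = [[10.0, 20.0], [10.0, 22.0], [13.0, 22.0]]   # three member forecasts
combined = ensemble(members, "mode")                   # -> [10.0, 22.0]
err = mape([10.0, 20.0], combined)                     # -> 5.0
```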

M. Fariz Fadillah Mardianto, Gunardi and Herni Utami

The Fourier series is a function often used in mathematics and statistics, especially for modeling. Here, a Fourier series can be constructed as an estimator in nonparametric regression. Nonparametric regression uses not only cross-sectional data, but also longitudinal data. Some nonparametric regression estimators have been developed for the longitudinal data case, such as the kernel and spline estimators. In this study, we develop an inference analysis related to the Fourier series estimator in nonparametric regression for longitudinal data. Nonparametric regression based on the Fourier series is capable of modeling data relationships with fluctuation or oscillation patterns, represented by sine and cosine functions. For point estimation analysis, Penalized Weighted Least Squares (PWLS) is used to determine an estimator for the parameter vector in nonparametric regression. In contrast to previous studies, PWLS is used to obtain a smooth estimator. The result is an estimator of the nonparametric regression curve for longitudinal data based on the Fourier series approach. In addition, this study also investigates the asymptotic properties of the nonparametric regression curve estimators using the Fourier series approach for longitudinal data, especially linearity and consistency. Some case studies based on previous research, as well as a new case study, are given to confirm that the Fourier series estimator in nonparametric regression performs well in longitudinal data modeling. This study is important for developing further statistical inference, such as interval estimation and hypothesis testing, related to nonparametric regression with the Fourier series estimator for longitudinal data.
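A minimal sketch of a truncated Fourier series estimator fitted by ordinary least squares (the paper's PWLS adds a smoothness penalty and longitudinal weighting, both omitted here; the signal is a hypothetical oscillating curve):

```python
import numpy as np

def fourier_design(t, K):
    # Design matrix of a truncated Fourier series with K harmonics:
    # columns 1, cos(kt), sin(kt) for k = 1..K.
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

t = np.linspace(0.0, 2.0 * np.pi, 200)
y = 1.0 + 2.0 * np.cos(t) + 0.5 * np.sin(2.0 * t)   # oscillating signal

# Least-squares fit of the Fourier coefficients.
beta, *_ = np.linalg.lstsq(fourier_design(t, 2), y, rcond=None)
fit = fourier_design(t, 2) @ beta
```

Because the signal lies exactly in the span of the basis, the coefficients (1, 2, 0, 0, 0.5) are recovered; with noisy longitudinal data, the penalized weighted version of this fit is what the paper analyzes.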

Jewgeni H. Dshalalow, Kizza Nandyose and Ryan T. White

This paper deals with a class of antagonistic stochastic games of three players A, B, and C, of whom the first two are active players and the third is a passive player. The active players exchange hostile attacks of random magnitudes at random times with each other and also with player C. Player C does not respond to any attacks (which are regarded as collateral damage). Two sustainability thresholds M and T are set so that when the total damages to players A and B cross M and T, respectively, the underlying player is ruined. At some point (the ruin time), one of the two active players will be ruined. Player C's damages are sustainable and are partially rebuilt. Of interest are the ruin time and the status of all three players at the ruin time, as well as at any time t prior to it. We obtain an analytic formula for the joint distribution of the named processes and demonstrate its closed form in various analytic and computational examples. In some situations pertaining to stock option trading, stock prices (player C) can fluctuate. In this case, it is of interest to predict the first time when an underlying stock price drops, or significantly drops, so that the trader can exercise the call option prior to the drop and before maturity T. Player A monitors the prices at observation times, assigning damage 0 to itself if the stock price appreciates or does not change, and a positive integer if the price drops. The observation times are themselves damages to player B with threshold T. The "ruin" time is when threshold M is crossed (i.e., there is a big price drop or a series of drops) or when the maturity T expires, whichever comes first. Thus a prior action is needed and its time is predicted. We illustrate the applicability of the game on a number of other practical models, including queueing systems with vacations and (N,T)-policy.

Arvind Kumar Sinha and Srikumar Panda

The main objective of the paper is to study the three-dimensional fractional Fourier-Mellin transform (3DFRFMT), its basic properties and its applicability, mainly in radar systems, the reconstruction of grayscale images, the detection of the human face, etc. The fractional Fourier transform alone is based on the time-frequency distribution, whereas the fractional Mellin transform alone is based on scale-covariant transformation. Each transform can detect behavior within a definite range. The fractional Fourier transform is applicable for controlling the range of shift, whereas the fractional Mellin transform is used to manage the range of rotation and scaling of the function. So, combining both transformations, we get an elegant expression for the 3DFRFMT, which can be used in several fields. The paper introduces the concept of the three-dimensional fractional Fourier-Mellin transform and its applications. The modulation property is among the most useful concepts of an integral transform in signal systems, radar technology, pattern recognition, and many other areas. Parseval's identity corresponds to the conservation of energy in the universe. Thus we establish the modulation theorem, Parseval's theorem, the scaling theorem, and the analytic theorem for the three-dimensional fractional Fourier-Mellin transform. We also give some examples of the three-dimensional fractional Fourier-Mellin transform of some functions. Finally, we provide applications of the three-dimensional fractional Fourier-Mellin transform to solving homogeneous and non-homogeneous Mboctara partial differential equations, which we can apply with advantage to different types of problems in signal processing systems. The transform is beneficial in maritime strategy as a correlator to control movements in any specific three-dimensional space. The concept is a powerful tool for dealing with problems of any information system. After obtaining the generalization, we can explore many more ideas in applying three-dimensional fractional Fourier-Mellin transformations to many real-world problems.

S. A. Ojobor and A. Obihia

The aim of this paper is to solve numerically the Cauchy problems of nonlinear partial differential equations (PDEs) using a modified variational iteration approach. The standard variational iteration method (VIM) is first studied before modifying it, using the standard Adomian polynomials to decompose the nonlinear terms of the PDE, to attain the new iterative scheme called the modified variational iteration method (MVIM). The VIM was used to solve the nonlinear parabolic partial differential equation iteratively to obtain some results. Also, the modified VIM was used to solve the nonlinear PDEs with the aid of Maple 18 software. The results show that the new MVIM scheme encourages rapid convergence for the problem under consideration. From the results, it is observed that the MVIM converges to the exact result faster than the VIM, though both of them attained a maximum error of order 10^{-9}. The resulting numerical evidence is competitive with the standard VIM in terms of convergence, accuracy and effectiveness. The results obtained show that the modified VIM is a better approximant of the above nonlinear equation than the traditional VIM. On the basis of the analysis and computation, we strongly advocate that the modified VIM, with finite Adomian polynomials as decomposers of nonlinear terms in partial differential equations and other mathematical equations, be encouraged as a numerical method.

Zahari Md Rodzi, Abd Ghafur Ahmad, Norul Fadhilah Ismail and Nur Lina Abdullah

The hesitant fuzzy set (HFS) is an extension of the fuzzy set (FS) in which the membership degree of a given element, called a hesitant fuzzy element (HFE), is defined as a set of possible values. A large number of studies concentrate on HFE and HFS measures, not just because of their crucial importance in theoretical studies, but also because they are required for almost any application field. The score function of an HFE is a useful method for converting data into a single value. Moreover, the score function provides a much easier way to determine each alternative's ranking order for multi-criteria decision-making (MCDM). This study introduces a new hesitant degree of HFE and the z-score function of HFE, which consists of the z-arithmetic mean, z-geometric mean, and z-harmonic mean. The z-score function is developed with four main bases: the hesitant degree of HFE, the deviation value of HFE, the importance of the hesitant degree of HFE, α, and the importance of the deviation value of HFE, β. These three proposed scores are compared with the existing score functions to demonstrate the proposed z-score function's flexibility. An algorithm based on the z-score function was developed to provide a solution to MCDM. An example with secondary data on supplier selection for automotive companies is used to prove the algorithm's capability in ranking order for MCDM.
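For concreteness, the hesitant degree of an HFE and the three classical means that underlie the z-score family can be computed as below; the α/β weighting of hesitancy and deviation that defines the proposed z-score itself is not reproduced here:

```python
import math

def hesitant_degree(h):
    # Number of possible membership values in the HFE.
    return len(h)

def arithmetic_mean(h):
    return sum(h) / len(h)

def geometric_mean(h):
    return math.prod(h) ** (1.0 / len(h))

def harmonic_mean(h):
    return len(h) / sum(1.0 / v for v in h)

h = [0.2, 0.4]   # an HFE with two possible membership degrees
scores = (arithmetic_mean(h), geometric_mean(h), harmonic_mean(h))
# AM = 0.3, GM ~ 0.2828, HM ~ 0.2667; always AM >= GM >= HM.
```

The AM-GM-HM ordering illustrates why the three z-means can rank the same HFE differently, which is the flexibility the paper exploits.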

Nazrina Aziz, Zahirah Hasim and Zakiyah Zain

Acceptance sampling is an important technique in quality assurance; its main goal is to achieve the most accurate decision in accepting a lot using minimum resources. In practice, this often translates into minimizing the required sample sizes for the inspection, while satisfying the maximum allowable risks of consumer and producer. Numerous sampling plans have been developed over the past decades, the most recent being the incorporation of grouping to enable simultaneous inspection in two-sided chain sampling, which considers information from preceding and succeeding samples. This combination offers improved decision accuracy with reduced inspection resources. To date, the two-sided group chain sampling plan (TSGCh) based on truncated lifetime has only been explored for the Pareto distribution of the 2^{nd} kind. This article introduces a TSGCh sampling plan for products whose lifetime follows the generalized exponential distribution. It focuses on minimizing the consumer's risk and operates with three acceptance criteria. The equations derived from the set conditions, involving the generalized exponential and binomial distributions, are solved mathematically to develop this sampling plan. Its performance is measured by the probability of lot acceptance and the minimum number of groups. A comparison with the established new two-sided group chain (NTSGCh) plan indicates that the proposed TSGCh sampling plan performs better in terms of sample size requirement and consumers' protection. Thus, this new acceptance sampling plan can reduce the inspection time, resources and costs via a smaller sample size (number of groups), while providing the desired consumers' protection.

Sardar G. Amen, Ali F. Jameel and Abdul Malek Yaakob

The Bezier curve is a parametric curve used in computer graphics and related areas. This curve, connected to the Bernstein polynomials, is named after Pierre Bézier, who used it in the 1960s to design the curves of Renault's cars. There has recently been considerable focus on finding reliable and more effective approximate methods for solving different mathematical problems with differential equations. Fuzzy differential equations (known as FDEs) are used extensively in various scientific analyses and engineering applications. They appear because of incomplete information in their mathematical models and in their parameters under uncertainty. This article discusses the use of Bezier curves for solving elevated-order fuzzy initial value problems (FIVPs) in the form of ordinary differential equations. A Bezier curve approach is analyzed and updated with the concepts and properties of fuzzy set theory for solving fuzzy linear problems. The control points of the Bezier curve are obtained by minimizing the residual function based on the least-squares method. Numerical examples involving second and third order linear FIVPs are presented and compared with the exact solutions, in the form of tables and two-dimensional plots, to show the capability of the method. These findings show that the proposed method is exceptionally viable and straightforward to apply.

Iskandar Shah Mohd Zawawi, Zarina Bibi Ibrahim and Khairil Iskandar Othman

Block methods that approximate the solution at several points in block form are commonly used to solve higher order differential equations. Inspired by the literature and ongoing research in this field, this paper explores a new derivation of a block backward differentiation formula that employs an independent parameter to provide sufficient accuracy when solving second order ordinary differential equations directly. The use of three backward steps and five independent parameters is considered adequate for generating the variable coefficients of the formulas. The order of the method is determined to ascertain that only one parameter remains in the derived formula. This independent parameter retains the favorable convergence properties, although its values affect the zero stability and truncation error. The method is able to compute the approximate solutions at two points concurrently. Another advantage of the method is that it solves second order problems directly, without recourse to the technique of reducing them to a system of first order equations. The purpose of the error analysis is to observe the effect of the independent parameter on the accuracy, in the sense that with certain appropriate values of the parameter, the accuracy is improved. The performance of the method is tested on some initial value problems, and the numerical results confirm that the maximum error and average error obtained by the proposed method are smaller at certain step sizes compared to other conventional direct methods.

V. I. Struchenkov and D. A. Karpov

Being a continuation of the paper published in Mathematics and Statistics, vol. 7, No. 5, 2019, this article describes the algorithm for the first stage of spline approximation with an unknown number of spline elements and constraints on its parameters. Such problems arise in the computer-aided design of road routes and other linear structures. In this article we consider the problem of approximating a discrete sequence of points on a plane by a spline consisting of line segments conjugated by circular arcs. This problem occurs when designing the longitudinal profile of new and reconstructed railways and highways. At the first stage, using a special dynamic programming algorithm, the number of elements of the spline and the approximate values of its parameters that satisfy all the constraints are determined. At the second stage, this result is used as an initial approximation for optimizing the spline parameters using a special nonlinear programming algorithm. The dynamic programming algorithm is practically the same as in the earlier article, with significant simplifications due to the absence of clothoids when connecting straight lines and curves. The need for the second stage is due to the fact that when designing new roads, it is impossible to implement dynamic programming alone, because of the need to take into account the relationship of spline elements in fills and in cuts, if fills are to be constructed from the soil of cuts. The nonlinear programming algorithm is based on constructing a basis in the null spaces of the matrices of active constraints and adjusting this basis when the set of active constraints changes in the iterative process. This allows finding the direction of descent and solving the problem of excluding constraints from the active set without solving systems of linear equations in general, or by solving linear systems of low dimension. As an objective function, instead of the traditionally used sum of squares of the deviations of the approximated points from the spline, the article proposes other functions that take into account the specifics of the particular design task.

Shih Yu Chang and Hsiao-Chun Wu

In linear algebra, the trace of a square matrix is defined as the sum of the elements on the main diagonal. The trace of a matrix is the sum of its eigenvalues (counted with multiplicities), and it is invariant under change of basis. This characterization can be used to define the trace of a tensor in general. Trace inequalities are mathematical relations between different multivariate trace functionals involving linear operators. These relations are straightforward equalities if the involved linear operators commute; however, they can be difficult to prove when non-commuting linear operators are involved. Given two Hermitian tensors H_{1} and H_{2} that do not commute, does there exist a method to transform one of the two tensors such that they commute, without completely destroying the structure of the original tensor? The spectral pinching method is a tool to resolve this problem. In this work, we apply the spectral pinching method to prove several trace inequalities that extend the Araki-Lieb-Thirring (ALT) inequality, the Golden-Thompson (GT) inequality and the logarithmic trace inequality to arbitrarily many tensors. Our approach relies on complex interpolation theory as well as asymptotic spectral pinching, providing a transparent mechanism to treat generic tensor multivariate trace inequalities. As an example application of our tensor extension of the Golden-Thompson inequality, we give a tail bound for the independent sum of tensors. Such a bound plays a fundamental role in high-dimensional probability and statistical data analysis.
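In the matrix (order-2 tensor) special case, both the eigenvalue characterization of the trace and the Golden-Thompson inequality tr e^{A+B} <= tr(e^A e^B) can be checked numerically; a sketch with randomly generated Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    # A random complex matrix symmetrized into a Hermitian one.
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def expm_h(H):
    # Matrix exponential of a Hermitian matrix via its eigendecomposition.
    w, U = np.linalg.eigh(H)
    return (U * np.exp(w)) @ U.conj().T

A, B = random_hermitian(4), random_hermitian(4)

# Trace equals the sum of eigenvalues (counted with multiplicity).
trace_A = np.trace(A).real
eig_sum = np.linalg.eigvalsh(A).sum()

# Golden-Thompson: tr exp(A+B) <= tr(exp(A) exp(B)) for Hermitian A, B.
lhs = np.trace(expm_h(A + B)).real
rhs = np.trace(expm_h(A) @ expm_h(B)).real
```

For generic non-commuting A and B the inequality is strict; equality would hold only if A and B commuted, which is exactly the situation the spectral pinching method engineers in the tensor setting.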

Terry E. Moschandreou

The Clay Mathematics Institute problem on the Navier-Stokes equations, the breakdown of smooth solutions, is examined here on an arbitrary cube subset of three-dimensional space with periodic boundary conditions. The incompressible Navier-Stokes equations are presented in a new and conventionally different way here, by naturally reducing them to an operator form which is then further analyzed. It is shown that a reduction to a general 2D N-S system decoupled from a 1D non-linear partial differential equation is possible to obtain. This is executed using integration over n-dimensional compact intervals which allows decoupling. The operator form is considered in a physical geometric vorticity case, and in a more general case. In the general case, the solution is revealed to have smooth solutions which exhibit finite-time blowup on a fine measure-zero set, and using the Prékopa-Leindler and Gagliardo-Nirenberg inequalities it is shown that for any non-zero-measure set in the form of a cube subset of 3D there is no finite-time blowup for the starred velocity for large dimension of the cube and small d. In particular, vortices are shown to exist, and it is shown that zero is in the attractor of the 3D Navier-Stokes equations.

Sharmeen Binti Syazwan Lai, Nur Huda Nabihan Binti Md Shahri, Mazni Binti Mohamad, Hezlin Aryani Binti Abdul Rahman and Adzhar Bin Rambli

An imbalanced data problem occurs in the absence of a good class distribution between classes. Imbalanced data will cause the classifier to be biased toward the majority class, as the standard classification algorithms are based on the assumption that the training set is balanced. Therefore, it is crucial to find a classifier that can deal with imbalanced data for any given classification task. The aim of this research is to find the best method among AdaBoost, XGBoost, and logistic regression for dealing with imbalanced simulated datasets and real datasets. The performances of these three methods on both simulated and real imbalanced datasets are compared using five performance measures, namely sensitivity, specificity, precision, F1-score, and g-mean. The results on the simulated datasets show that logistic regression performs better than AdaBoost and XGBoost on highly imbalanced datasets, whereas on the real imbalanced datasets, AdaBoost and logistic regression demonstrated similarly good performance. All methods seem to perform well on datasets that are not severely imbalanced. Compared to AdaBoost and XGBoost, logistic regression is found to predict better for datasets with severe imbalance ratios. However, all three methods perform poorly for data with a 5% minority and a sample size of n = 100. In this study, it is found that different methods perform best for data with different minority percentages.
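The five performance measures are all functions of the confusion matrix; a small sketch with hypothetical counts for a classifier biased toward the majority class:

```python
import math

def metrics(tp, fp, tn, fn):
    # Standard confusion-matrix measures for a binary classifier.
    sensitivity = tp / (tp + fn)                 # recall on the minority class
    specificity = tn / (tn + fp)                 # recall on the majority class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    g_mean = math.sqrt(sensitivity * specificity)
    return sensitivity, specificity, precision, f1, g_mean

# Hypothetical counts: the bias shows up as high specificity, low sensitivity.
sens, spec, prec, f1, g = metrics(tp=20, fp=10, tn=90, fn=30)
```

Accuracy alone would look acceptable here (110/150), which is why the study relies on sensitivity, F1 and g-mean instead.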

Adeyeye Oluwaseun and Omar Zurni

Some of the issues relating to the human immunodeficiency virus (HIV) epidemic can be expressed as a system of nonlinear first order ordinary differential equations. This includes modelling the spread of the HIV virus in infecting the CD4+T cells that help the human immune system fight diseases. However, real-life differential equation models usually fail to have an exact solution, which is also the case with the nonlinear model considered in this article. Thus, an approximate method, known as the block method, is developed to solve the system of first order nonlinear differential equations. To develop the block method, a linear block approach was adopted, and the basic properties required to classify the method as convergent were investigated. The block method was found to be convergent, which ascertained its usability for the solution of the model. The solution obtained from the newly developed method in this article was compared to previous methods that have been adopted to solve the same model. In order to have a justifiable basis of comparison, two step-length values were substituted to obtain a one-step and a two-step block method. The results show the newly developed block method obtaining accurate results in comparison to previous studies. Hence, this article has introduced a new method suitable for the direct solution of first order differential equation models without the need to simplify to a system of linear algebraic equations. Likewise, its convergence properties and accuracy also give the block method an edge over existing methods.

Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi and Nasrul Hamidin

High ground-level ozone (GLO) concentrations adversely affect human health, vegetation as well as the ecosystem. Therefore, continuous monitoring of GLO trends is a good practice to address issues related to air quality based on high concentrations of GLO. The purpose of this study is to introduce stationary and non-stationary models of extreme GLO. The method is applied to 25 selected stations in Peninsular Malaysia. The maximum daily GLO concentration data over 8 hours from year 2000 to 2016 are used. The parameters of this distribution are estimated using maximum likelihood estimation. A comparison between the stationary (constant) model and the non-stationary (linear and cyclic) models is performed using the likelihood ratio test (LRT). The LRT is based on the deviance statistic: a value large compared to the chi-square distribution provides significant evidence for the non-stationary model, whether with a linear trend or a cyclic trend. The best-fitting model among the selected models is identified by Akaike's Information Criterion. The results show that the 25 stations conform to the non-stationary model, either linear or cyclic: 14 stations show significant improvement with a linear trend in the location parameter, while 11 stations follow the cyclic model. This study is important to identify the trends of the ozone phenomenon for better air quality risk management.
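The LRT step can be sketched as follows: the deviance is twice the gain in maximized log-likelihood, referred to a chi-squared distribution with one degree of freedom per extra parameter (the log-likelihood values below are hypothetical, not from the study):

```python
from scipy.stats import chi2

def lrt(loglik_simple, loglik_complex, extra_params):
    # Deviance statistic for nested models; a large D favors the complex model.
    d = 2.0 * (loglik_complex - loglik_simple)
    p_value = chi2.sf(d, df=extra_params)
    return d, p_value

# Hypothetical fit: a linear trend in the location parameter adds 1 parameter.
d, p = lrt(loglik_simple=-104.2, loglik_complex=-100.9, extra_params=1)
significant = p < 0.05        # evidence for the non-stationary model
```

With these numbers the deviance is 6.6 and p is about 0.01, so the stationary model would be rejected in favor of the linear-trend model.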

Rawaa Ibrahim Esa, Rasha H. Ibraheem and Ali F. Jameel

There has recently been considerable focus on finding reliable and more effective numerical methods for solving different mathematical problems with integral equations. The Runge-Kutta methods in numerical analysis are a family of iterative methods, both implicit and explicit, with different orders of accuracy, used in temporal discretization and modified here for the numerical solution of integral equations. Fuzzy integral equations (known as FIEs) are used extensively in many scientific analyses and engineering applications. They appear because of incomplete information in their mathematical models and in their parameters under a fuzzy domain. In this paper, the sixth order Runge-Kutta method is used to solve second-kind fuzzy Volterra integral equations numerically. The proposed method is reformulated and updated for solving fuzzy second-kind Volterra integral equations in general form by using the properties and descriptions of fuzzy set theory. Furthermore, based on the parametric form of fuzzy numbers, a fuzzy Volterra integral equation transforms into two crisp integral equations of the second kind under fuzzy properties. We apply our modified method to a specific example with a linear fuzzy Volterra integral equation to illustrate the strength and accuracy of this process. A comparison of the evaluated numerical results with the exact solution for each fuzzy level set is displayed in the form of tables and figures. These results indicate that the proposed approach is remarkably feasible and easy to use.

Hafed H. Saleh, Azmi A. and Ali F. Jameel

There has recently been considerable focus on finding reliable and more effective approximate methods for solving biological mathematical models in the form of differential equations. One of the well-known approximate or semi-analytical methods for solving linear and nonlinear ordinary as well as partial differential equations within various fields of mathematics is the Variational Iteration Method (VIM). This paper looks at the use of fuzzy differential equations in human immunodeficiency virus (HIV) infection modeling. The main advantage of the method lies in its flexibility and ability to solve nonlinear equations easily. VIM is introduced to provide approximate solutions for a linear ordinary differential equation system, namely the fuzzy HIV infection model. The model describes the uncertain level of immune cells and the intrinsic viral load intensity of the immune system, which trigger fuzziness in patients infected by HIV. The immune cells concerned are CD4+ T-cells and cytotoxic T-lymphocytes (CTLs). The dynamics of the immune cell level and viral burden are analyzed and compared across three classes of patients with low, moderate and high immune systems. A modification and formulation of the VIM in the fuzzy domain, based on the properties of fuzzy set theory, are presented. A model was established in this regard, accompanied by plots that demonstrate the reliability and simplicity of the method. The numerical results of the model indicate that this approach is effective and easy to use in the fuzzy domain.

E. N. Sinyukova, S. V. Drahanyuk and O. O. Chepok

All-round development of the everyday logic of students should be considered one of the most important tasks of general secondary education as a whole, and of general secondary mathematics education in particular. We discuss the problem of organizing, at teacher-training institutions of higher education, the expedient training of future math teachers for institutions of general secondary education. The main goal is to ensure their ability, throughout all their future professional activities, to take the necessary part in forming the everyday logic of their pupils. The authors believe that the vocational educational program for training future secondary school math teachers must contain a separate course of mathematical logic comprising at least 90 training hours (3 ECTS credits). Although the content of such a course cannot be independent of the general level of organization of mathematics education in the corresponding country, it ought to be a subject of discussion for the international mathematics community and for managers in the sphere of higher mathematics education. Simultaneously, the role, the place, and the expedient structure of such a course in the corresponding training programs should be under discussion. The article represents the authors' point of view on the problems indicated above. The research has a qualitative character as a whole; only some of its conclusions have statistical corroboration.

Siham Rabee, Ramadan Hamed, Ragaa Kassem and Mahmoud Rashwaan

Calibration estimation is one of the most important ways to improve the precision of survey estimates. It is a method in which the design weights are modified as little as possible, by minimizing a given distance measure to obtain calibrated weights respecting a set of constraints related to suitable auxiliary information. This paper proposes a new approach to Multivariate Calibration Estimation (MCE) of the population mean of a study variable under a stratified random sampling scheme using two auxiliary variables. Almost all literature on calibration estimation has used the Lagrange multiplier technique to estimate the calibrated weights. While the Lagrange multiplier technique requires all equations included in the model to be differentiable functions, non-differentiable functions may be encountered in some cases. Hence, it is essential to look for another technique that provides more flexibility in dealing with the problem. Accordingly, in this paper, a goal programming (GP) approach is newly suggested as a different approach to MCE. The theory of the proposed calibration estimation is presented and the calibrated weights are estimated. A comparison study is conducted using actual and generated data to evaluate the performance of the proposed approach for the multivariate calibration estimator against other existing calibration estimators. The results of this study prove that the proposed GP approach to MCE is more flexible and efficient compared to other calibration estimation methods of the population mean.
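For comparison, the classical chi-square-distance calibration solved with the Lagrange multiplier technique (the approach the paper replaces by goal programming) has a closed form, w = d + d ⊙ X(XᵀDX)⁻¹(t_x − Xᵀd); a sketch with hypothetical design weights and auxiliary totals:

```python
import numpy as np

def calibrate(d, X, t_x):
    # Minimize sum((w_i - d_i)^2 / d_i) subject to X.T @ w == t_x.
    # Closed-form Lagrange solution: w = d + diag(d) X lambda,
    # where (X.T diag(d) X) lambda = t_x - X.T d.
    D = np.diag(d)
    lam = np.linalg.solve(X.T @ D @ X, t_x - X.T @ d)
    return d + d * (X @ lam)

rng = np.random.default_rng(0)
n = 30
d = np.full(n, 10.0)                                      # design weights
X = np.column_stack([np.ones(n), rng.uniform(1, 5, n)])   # two auxiliary variables
t_x = np.array([320.0, 950.0])                            # known population totals
w = calibrate(d, X, t_x)
# The calibrated weights reproduce the auxiliary totals exactly.
```

The GP formulation of the paper replaces this differentiable quadratic distance with goal constraints, which is what allows non-differentiable distance measures.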

Luthfatul Amaliana Ani Budi Astuti and Nur Silviyah Rahmi

Per capita expenditure of an area is a welfare indicator of the community. It is also a reflection of the economic capacity to meet basic needs. Bali is the second richest province in Indonesia. This study aims to model the per capita expenditure of Bali at the sub-district level using the Spatial-EBLUP (SEBLUP) approach in SAE. Small area estimation (SAE) modeling is an indirect estimation approach capable of increasing the effectiveness of sample sizes and minimizing variance. The heterogeneity of an area is influenced by the surrounding areas: everything is related to everything else, but closer things are more influential than distant ones. Therefore, the spatial effect can be included in the random effect of a small area model, which is then called the SEBLUP model. The selection of a spatial weights matrix is very important in spatial data modeling, since it represents the neighborhood relationship of each spatial observation unit. A SEBLUP model needs a spatial weights matrix, which can be based on distance (radial distance and power distance), contiguity (queen), or a combination of distance and contiguity (radial distance and queen contiguity). The application of the SEBLUP approach to per capita expenditure in Bali shows that the SEBLUP model with the radial-distance spatial weights matrix is the best model, with the smallest ARMSE. South Denpasar Sub-district is the most prosperous sub-district, with the highest per capita expenditure in Bali, while Abang Sub-district has the lowest per capita expenditure.
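A radial-distance weights matrix of the kind the study selects can be sketched as follows; the coordinates and radius here are made-up illustrations, and real applications would use sub-district centroids.

```python
import numpy as np

# Radial-distance spatial weights: units i and j are neighbours (weight 1)
# when their distance is below a radius r, then rows are standardised to
# sum to one, as is conventional for spatial weights matrices.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
r = 1.5

diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
W = ((dist < r) & (dist > 0)).astype(float)   # binary neighbour indicator

row = W.sum(1, keepdims=True)
W_std = np.divide(W, row, out=np.zeros_like(W), where=row > 0)  # row-standardised
```

The last point, far from the others, keeps a zero row, which is how isolated units appear in such matrices.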

A. Torres-Hernandez and F. Brambila-Paz

In this paper an approximation to the zeros of the Riemann zeta function is obtained for the first time using a fractional iterative method which originates from a unique feature of fractional calculus. This iterative method, valid for one and several variables, uses the property that the fractional derivative of a constant is not always zero. This allows us to construct a fractional iterative method to find the zeros of functions in which it is possible to avoid expressions involving hypergeometric functions, Mittag-Leffler functions or infinite series. Furthermore, we can find multiple zeros of a function using a single initial condition. This partially solves the intrinsic problem of iterative methods, for which it is in general necessary to provide N initial conditions to find N solutions. Consequently, the method is suitable for approximating nontrivial zeros of the Riemann zeta function when the absolute value of its imaginary part tends to infinity. Some examples of its implementation are presented, and finally 53 different values near the zeros of the Riemann zeta function are shown.
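The key property can be illustrated on a toy problem rather than the zeta function itself: in the Riemann-Liouville sense, D^a[x^n] = Γ(n+1)/Γ(n+1-a) x^(n-a), so the derivative of a constant (n = 0) is nonzero for non-integer a. A Newton-type iteration built on this derivative is sketched below for f(x) = x² - 2 on x > 0; this is our simplified stand-in, not the authors' algorithm for zeta.

```python
from math import gamma

def frac_deriv_poly(coeffs, a, x):
    """Riemann-Liouville D^a of sum(c_n * x^n), term by term, for x > 0."""
    return sum(c * gamma(n + 1) / gamma(n + 1 - a) * x ** (n - a)
               for n, c in enumerate(coeffs))

def fractional_newton(coeffs, a, x0, steps=60):
    """Newton-like iteration x <- x - f(x) / D^a f(x)."""
    f = lambda x: sum(c * x ** n for n, c in enumerate(coeffs))
    x = x0
    for _ in range(steps):
        x = x - f(x) / frac_deriv_poly(coeffs, a, x)
    return x

# f(x) = x^2 - 2, coefficients for x^0, x^1, x^2; order a = 0.9
root = fractional_newton([-2.0, 0.0, 1.0], a=0.9, x0=1.0)
```

With a = 1 this reduces to ordinary Newton; for fractional a the zeros of f remain fixed points, which is what makes the iteration usable.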

Wichayaporn Jantanan Anusorn Simuen Winita Yonthanthum and Ronnason Chinram

Ideal theory plays an important role in the study of many algebraic structures, for example, rings, semigroups and semirings. The algebraic structure of a Г-semigroup is a generalization of the classical semigroup, and many results on semigroups have been extended to Г-semigroups; results in the ideal theory of Г-semigroups have been widely investigated. In this paper, we first focus on some novel ideals of Г-semigroups. In Section 2, we define almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups by using the ideas of interior Г-ideals and almost Г-ideals of Г-semigroups. Every almost interior Г-ideal of a Г-semigroup S is clearly a weakly almost interior Г-ideal of S, but the converse is not true in general. The notions of both almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups generalize the notion of an interior Г-ideal of a Г-semigroup S. We investigate basic properties of both almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups. The notion of fuzzy sets was introduced by Zadeh in 1965. A fuzzy set is an extension of the classical notion of a set: fuzzy sets are somewhat like sets whose elements have degrees of membership. In the remainder of this paper, we focus on some novelties of fuzzy ideals in Г-semigroups. In Section 3, we introduce fuzzy almost interior Г-ideals and fuzzy weakly almost interior Г-ideals of Г-semigroups and investigate their properties. Finally, we give some relationships between almost interior Г-ideals [weakly almost interior Г-ideals] and fuzzy almost interior Г-ideals [fuzzy weakly almost interior Г-ideals] of Г-semigroups.

Nurfa Risha and Muhammad Farchani Rosyid

We studied isometric stochastic flows of a Stratonovich stochastic differential equation on spheres, i.e., on the standard sphere and the Gromoll-Meyer exotic sphere. In this case, the two spheres are homeomorphic but not diffeomorphic. The standard sphere can be constructed as a quotient manifold with respect to an action of S^{3}, whereas the Gromoll-Meyer exotic sphere is a quotient manifold with respect to a different action of S^{3}. The corresponding continuous-time stochastic process and its properties on the Gromoll-Meyer exotic sphere can be obtained by constructing a homeomorphism h. The stochastic flow can be regarded as the same stochastic flow on S^{7}, but viewed in the Gromoll-Meyer differential structure. The flow on the standard sphere and the corresponding flow constructed in this paper have the same regularities. There is no difference between the stochastic flow's appearance on S^{7} viewed in the standard differential structure and the appearance of the same stochastic flow viewed in the Gromoll-Meyer differential structure. Furthermore, since the inverse mapping h^{-1} is differentiable, the pull-back of the Riemannian metric tensor G on the standard sphere is also differentiable. This implies, for instance, that the Fokker-Planck equation associated with the stochastic flow and the Fokker-Planck equation associated with the stochastic differential equation have the same regularities provided that the function β is C^{1}-differentiable. Therefore both differential structures on S^{7} give the same description of the dynamics of the distribution function of the stochastic process under study on seven-spheres.

Jayanta Biswas Pritam Kayal and Debabrata Samanta

Non-Negative Matrix Factorization (NMF) is utilized in many important applications. This paper presents the development of an efficient low-rank approximate NMF algorithm for feature extraction related to text mining and spectral data analysis. NMF can also be used for clustering. NMF factorizes a positive matrix A into two positive matrices W and H such that A≈WH. The proposal uses the k-means clustering algorithm to determine the centroid of each cluster and assigns the centroid coordinates of each cluster as one column of the W matrix, so the initial choice of W is positive. The H matrix is determined with a gradient descent algorithm based on thin QR optimization. The performance of the proposed NMF algorithm is illustrated with comparative results. The accurate choice of the initial positive W matrix reduces approximation error, and the use of the thin QR algorithm in combination with the gradient descent approach provides a rapid convergence rate for NMF. The proposed algorithm is implemented with randomly generated matrices in the MATLAB environment. The number of significant singular values of the generated matrix is selected as the number of clusters. The error and convergence rate of the proposed algorithm are compared with those of current algorithms. Since the accurate measurement of execution time for an individual program run is not possible in MATLAB, the average execution time over 200 iterations is calculated with an increasing iteration count of the proposed algorithm, and the comparative results are presented.
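The initialisation idea can be sketched as follows: seed W with k-means centroids, then obtain H from a least-squares step routed through a thin QR factorisation of W. The tiny hand-rolled k-means and the clip-to-nonnegative rule below are our simplifications for illustration, not the paper's exact gradient descent scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((30, 20))      # positive data matrix
k = 4                         # factorisation rank / number of clusters

# -- tiny k-means on the columns of A --
idx = rng.choice(A.shape[1], k, replace=False)
C = A[:, idx].copy()                          # one centroid per cluster
for _ in range(20):
    d2 = ((A.T[:, None, :] - C.T[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)                     # nearest centroid per column
    for j in range(k):
        if (labels == j).any():
            C[:, j] = A[:, labels == j].mean(1)

W = np.maximum(C, 1e-9)                       # positive initial W from centroids

# -- H via thin QR: least-squares solve of W H ~= A, clipped to stay >= 0 --
Q, R = np.linalg.qr(W)                        # thin QR: Q is 30x4, R is 4x4
H = np.maximum(np.linalg.solve(R, Q.T @ A), 0.0)

err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```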

Alanazi Talal Abdulrahman Randa Alharbi Osama Alamri Dalia Alnagar and Bader Alruwaili

A supersaturated design (SSD) is an important method based on factorial designs in which the number of factors exceeds the number of experimental runs. The analysis of supersaturated designs is challenging because the design matrix has a complicated structure. Identification of the variables comprising the active factors plays an essential role when a supersaturated design is used to analyse the data. A variable selection technique to screen active effects in SSDs and regression analysis are applied to our case study. This study set out to examine statistically the actual reasons for the spread of electronic games in Saudi society. An online survey provided quantitative data from 200 participants. Respondents were randomly divided into two conditions (Yes+, No-) and asked to respond to one of two sets of causes of electronic games. The responses were analysed using the contrast method with supersaturated designs and regression methods in the SPSS software to determine the actual causes that led to the spread of electronic games. The findings indicated that parents' constant preoccupation, which leads some of them to resort to such games in order to keep their children occupied, insufficient awareness among parents of the dangers of these games, and excessive pampering are the factors that led to the spread of electronic games in Saudi society. On this basis, it is recommended that Saudi government professionals develop an operational plan to study these causes and take action. No recent studies address the external environmental aspects that could influence gaming among individuals; hence further research is required in this field.

Rejula Mercy. J and S. Elizabeth Amudhini Stephen

Springs are important machine members often used to exert force, absorb energy and provide flexibility. In mechanical systems, wherever flexibility or a relatively large load under the given circumstances is required, some form of spring is used. In this paper, non-traditional optimization algorithms, namely the Ant Lion Optimizer, Grey Wolf Optimizer, Dragonfly Algorithm, Firefly Algorithm, Flower Pollination Algorithm, Whale Optimization Algorithm, Cat Swarm Optimization, Bat Algorithm, Particle Swarm Optimization and Gravitational Search Algorithm, are applied to obtain the global optimal solution of the closed coil helical spring design problem. The problem has three design variables, eight inequality constraints and three bounds. The objective function U is the volume of the closed coil helical spring, to be minimized subject to the constraints. The design variables considered are the wire diameter d, the mean coil diameter D and the number of active coils N of the spring. The proposed methods are tested and their performance is evaluated. Ten non-traditional optimization methods are used to find the minimum volume, and the problem is computed in the MATLAB environment. The experimental results show that Particle Swarm Optimization (PSO) outperforms the other methods: it gives better results in terms of consistency, computation time and the minimum volume of the closed coil helical spring. Compared to the other optimization methods, PSO has advantages such as simplicity and efficiency. In the future, PSO could be extended to solve other mechanical element design problems.
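A minimal PSO sketch on a coil-spring problem is shown below. We use the widely circulated four-constraint tension/compression spring benchmark with a penalty function; the paper's eight-constraint formulation, bounds and tuning will differ, so this is only a structural illustration of the algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
lo = np.array([0.05, 0.25, 2.0])      # bounds on d, D, N
hi = np.array([2.0, 1.3, 15.0])

def penalised(v):
    """Spring volume/weight (N+2)*D*d^2 plus a quadratic penalty on the
    classic four inequality constraints of the benchmark."""
    d, D, N = v
    g = [1 - D**3 * N / (71785 * d**4),
         (4*D**2 - d*D) / (12566 * (D*d**3 - d**4)) + 1/(5108 * d**2) - 1,
         1 - 140.45 * d / (D**2 * N),
         (d + D) / 1.5 - 1]
    return (N + 2) * D * d**2 + 1e5 * sum(max(0.0, gi)**2 for gi in g)

P = 40
x = lo + (hi - lo) * rng.random((P, 3))
vel = np.zeros((P, 3))
pbest, pval = x.copy(), np.array([penalised(p) for p in x])
gbest = pbest[pval.argmin()].copy()
init_best = pval.min()

for _ in range(300):
    r1, r2 = rng.random((2, P, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + vel, lo, hi)                 # keep particles in bounds
    val = np.array([penalised(p) for p in x])
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmin()].copy()

best_volume = penalised(gbest)
```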

Adeyinka Solomon Ogunsanya Waheed Babatunde Yahya Taiwo Mobolaji Adegoke Christiana Iluno Oluwaseun R. Aderele and Matthew Iwada Ekum

In this work, a three-parameter Weibull Inverse Rayleigh (WIR) distribution is proposed. The new WIR distribution is an extension of the one-parameter Inverse Rayleigh distribution that incorporates a transformation of the Weibull distribution and the Log-logistic quantile function. Statistical properties such as the quantile function, order statistics, the monotone likelihood ratio property, the hazard and reverse hazard functions, moments, skewness, kurtosis, and a linear representation of the newly proposed distribution are studied theoretically. The maximum likelihood estimators cannot be derived in explicit form, so we employed the iterative Newton-Raphson procedure to obtain them. The Bayes estimators of the scale and shape parameters of the WIR distribution under the squared error, Linex and Entropy loss functions are provided. The Bayes estimators cannot be obtained explicitly either; hence we adopted a numerical approximation method known as Lindley's approximation to obtain them. Simulation procedures were adopted to assess the effectiveness of the different estimators. Applications of the new WIR distribution were demonstrated on three real-life data sets. Further results showed that the new WIR distribution performs credibly well when compared with five related existing skewed distributions, and it was observed that the derived Bayesian estimates perform better than those of the classical method.

Nurul Sima Mohamad Shariff and Waznatul Widad Mohamad Ishak

The retirement savings decision concerns individual judgment on savings planning and preparation for retirement. Several factors may affect this decision. Among them are demographic factors and other determinants such as financial knowledge and management, future expectations, social influences and risk tolerance. This study therefore aims to examine the impact of such factors on the retirement savings decision. Furthermore, it also discusses the retirement savings decision among Malaysians in different age groups. The data were collected through a survey strategy using a set of questionnaires. The questions were divided into several sections covering the demographic profile, Likert-scale questions on the factors, and the retirement savings decisions. The sampling technique used in this study is random sampling, with 385 respondents. Several statistical procedures are utilized, namely the reliability test, the Kruskal-Wallis H test and the ordered probit model. This study found that age, financial knowledge and management, future expectations and social influences were the significant determinants of the retirement savings decision in Malaysia.

Harliza Mohd Hanif Daud Mohamad and Rosma Mohd Dom

The complexity of a method has been discussed in the decision-making area, since complexity may impose disadvantages such as loss of information and a high degree of uncertainty. However, there is no empirical justification for determining the complexity level of a method. This paper focuses on introducing a way of measuring the complexity of decision-making methods. In the computational area, there is an established method of measuring complexity, namely Big-O notation, and this paper adopts it for determining the complexity level of decision-making methods. However, Big-O has rarely been applied to decision-making methods, and applying it directly may not differentiate the complexity levels of two different decision-making methods. Hence, this paper introduces a Relative Complexity Index (RCI) to address this problem, and its basic properties are discussed. After the introduction of the Relative Complexity Index, the method is applied to the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS).

Zahari Md Rodzi Abd Ghafur Ahmad Nur Sa’aidah Ismail Wan Normila Mohamad and Sarahiza Mohmad

A dual hesitant fuzzy set (DHFS) consists of two parts: a membership hesitant function and a non-membership hesitant function. This set supports more exemplary and flexible assignment of degrees for each element in the domain and can address two types of hesitancy in this situation. It can be considered a powerful tool for expressing uncertain information in the decision-making process. Z-score functions, namely the z-arithmetic mean, z-geometric mean and z-harmonic mean, are proposed on five important bases: the hesitant degree of a dual hesitant fuzzy element (DHFE), the DHFE deviation degree, the parameter α (the importance of the hesitant degree), the parameter β (the importance of the deviation degree) and the parameter ϑ (the importance of membership (positive view) or non-membership (negative view)). A comparison of the z-score with the existing score function was made to show some of the latter's drawbacks. Next, the z-score function is applied to solve multi-criteria decision making (MCDM) problems. To illustrate the proposed method's effectiveness, an example of MCDM, specifically in pattern recognition, is shown.

Oluremi Davies Ogun

The contents of this paper apply to research in the fields of economics, statistics, the physical and life sciences, other social sciences, accounting and finance, business management, and core and applied mathematics. First, I discuss the misconception, and the implications thereof, inherent in the conventional practice of entering interest rates as natural or untransformed series in data analysis, most especially in regression models. The trends and variabilities of both transformed and untransformed interest rate series were shown to be similar, thereby enhancing the likelihood of similar performances in regressions. By extension, therefore, the indicated conventional practice unnecessarily and unjustifiably precludes elasticity inference on the coefficients of interest rates, amounting to procedural inefficiency, as an independent computation of elasticity becomes the only available option. Percentages are not the equivalent of percentage changes, and only series in growth terms, hence percentage changes, should be spared log transformation. Secondly, the paper stresses the imperative of avoiding unwieldy and theory-incongruent expressions in post-preliminary data analysis, by flagging the idea that regression models, in particular of the growth varieties, should as much as practicable sync with the dictates of modern time series econometrics in the specification of final equations.

R. Sivaraman

One of the greatest mathematicians of all time, Gottfried Leibniz, introduced an amusing triangular array of numbers called Leibniz's harmonic triangle, similar to Pascal's triangle but with different properties. I introduce the entries of Leibniz's triangle through Beta integrals. In this paper, I prove that the Beta integral formulation is exactly the same as the entries obtained through Pascal's triangle. The Beta integral formulation lets us establish several significant properties of Leibniz's triangle in a quite elegant way. I show that the sum of alternating terms in any row of Leibniz's triangle is either zero or a harmonic number. A separate section of this paper is devoted to proving interesting results regarding centralized Leibniz's triangle numbers, including a closed expression, the asymptotic behavior of successive centralized Leibniz's triangle numbers, the connection between centralized Leibniz's triangle numbers and Catalan numbers as well as centralized binomial coefficients, and the convergence of series whose terms are centralized Leibniz's triangle numbers. All the results discussed in this section are new and proved for the first time. Finally, I prove two exceedingly important theorems, namely the Infinite Hockey Stick theorem and the Infinite Triangle Sum theorem. Though these two theorems were known in the literature, proving them through the Beta integral formulation is quite new and makes the proofs short and elegant. Thus, by a simple re-formulation of the entries of Leibniz's triangle through Beta integrals, I prove existing as well as new theorems in a much more compact way. These ideas will throw new light upon understanding the fabulous Leibniz number triangle.
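The Beta-integral formulation referred to above rests on a classical identity: writing L(n,r) for the (n,r) entry of Leibniz's harmonic triangle (our labelling, not necessarily the paper's), the entry equals a Beta integral,

```latex
L(n,r) \;=\; \frac{1}{(n+1)\binom{n}{r}}
       \;=\; \frac{r!\,(n-r)!}{(n+1)!}
       \;=\; B(r+1,\,n-r+1)
       \;=\; \int_{0}^{1} x^{r}(1-x)^{n-r}\,dx ,
```

since B(a,b) = Γ(a)Γ(b)/Γ(a+b) and the factorials telescope to 1/((n+1)·C(n,r)). Row and column identities of the triangle then follow by integrating finite sums of such integrands.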

Anne M. Fernando Ana Vivas Barber and Sunmi Lee

Understanding the dynamics of malaria can help in reducing the impact of the disease. Previous research showed that including animals in the human transmission model, or 'zooprophylaxis', is effective in reducing transmission of malaria in the human population. This model studies Plasmodium vivax malaria and has variables for the animal population and mosquito attraction to animals. The existing time-independent malaria population ODE model is extended to a time-dependent model, and the differences are explored. We introduce a seasonal mosquito population, a Gaussian profile based on data, as a variant of the previous models. The seasonal reproduction number is found using the next generation matrix, and endemic and stability analyses are carried out using dynamical systems theory. The model includes short- and long-term human incubation periods, sensitivity analysis is performed on the parameters, and all simulations cover a three-year period. Simulations show, for each year, larger peaks in the infected populations and in the seasonal reproduction number during the summer months, and we analyze which parameters have more influence on the model and on the seasonal reproduction number. The analysis provides conditions for the disease-free equilibrium (DFE); the system is found to be locally asymptotically stable around the DFE when R_{0}<1, and furthermore we establish the uniqueness of the endemic equilibrium point. The sensitivity analysis shows that the model was not as sensitive to the exact values of the long- or short-term incubation periods as it was to the average number of contacts between host and mosquito or the rate of disease progression for mosquitoes. This model shows that the inclusion of a variable mosquito population informs how domestic animals in the human population can be used more effectively as a method of reducing the transmission of malaria.
The most relevant contribution of this work is including the time evolution of the mosquito population, and simulations show how this feature affects human infection dynamics. An analytical expression for the endemic equilibrium point is provided so that future work can establish conditions under which an epidemic may be prevented.
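A toy version of the seasonal idea can be sketched with a Ross-Macdonald-style host-vector system in which the mosquito-to-human ratio m(t) follows a Gaussian profile peaking in summer. This is not the paper's model (which has animal hosts and incubation periods); every parameter value below is invented for illustration.

```python
import math

def m_ratio(t, peak=10.0, centre=200.0, width=40.0):
    """Mosquitoes per host, a Gaussian profile peaking near day `centre`."""
    return peak * math.exp(-(((t % 365.0) - centre) ** 2) / (2.0 * width ** 2))

a, b, c = 0.3, 0.3, 0.3   # biting rate and the two transmission probabilities
r, g = 0.02, 0.1          # host recovery rate and vector mortality rate
x, y = 0.01, 0.0          # infected fractions: hosts (x), vectors (y)
dt = 0.05                 # Euler time step in days
steps = 3 * 365 * 20      # three-year horizon at dt = 0.05

xs = []
for k in range(steps):
    t = k * dt
    dx = a * b * m_ratio(t) * y * (1.0 - x) - r * x   # new host infections
    dy = a * c * x * (1.0 - y) - g * y                # new vector infections
    x, y = x + dt * dx, y + dt * dy
    xs.append(x)
```

With the chosen step size the infected fractions stay in [0, 1], and the seasonal forcing produces the yearly summer peaks the paper describes.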

Kuntida Kawinwit Akapak Charoenloedmongkhon and Sanoe Koonprasert

Integral equations are essential tools in various areas of applied mathematics, and computational approaches to solving them are important in scientific research. The Haar wavelet collocation method (HWCM) with operational matrices of integration is one famous method which has been applied to solve systems of linear integral equations. In this paper, an approximate analytical method based on the HWCM is applied to a system of diffusion-convection partial differential equations with initial and boundary conditions. This system models the enzymatic glucose fuel cell, with the chemical reaction rate given by the Morrison equation. The enzymatic glucose fuel cell model describes the concentrations of glucose and hydrogen ions that can be converted into energy. During the process, the model reduces to a linear integral equation system involving computational Haar matrices, which can be computed by HWCM code written in Maple. Illustrative examples are provided to demonstrate the precision and effectiveness of the proposed method, and the results are shown as numerical solutions for the glucose and hydrogen ion concentrations.
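The basic building block behind HWCM is the Haar matrix of size m = 2^J evaluated at collocation points. A normalised version can be built recursively as sketched below; HWCM additionally needs the operational matrix of integration, which we omit here.

```python
import numpy as np

def haar_matrix(m):
    """Orthonormal Haar matrix of size m = 2^J, built by the standard
    recursion H_{2n} = [H_n kron (1,1); I_n kron (1,-1)] / sqrt(2)."""
    H = np.array([[1.0]])
    while H.shape[0] < m:
        n = H.shape[0]
        top = np.kron(H, [1.0, 1.0])          # scaling part
        bot = np.kron(np.eye(n), [1.0, -1.0]) # wavelet (detail) part
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

H8 = haar_matrix(8)
```

Orthonormality (H Hᵀ = I) is what makes the collocation systems in HWCM well conditioned.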

Juan Carlos Ferrando

If T is a (densely defined) self-adjoint operator acting on a complex Hilbert space H and I stands for the identity operator, we introduce the delta function operator at T. When T is a bounded operator, it is an operator-valued distribution. If T is unbounded, it is a more general object that still retains some properties of distributions. We provide an explicit representation of this operator in some particular cases, derive various operative formulas involving it, and give several applications of its usage in Spectral Theory as well as in Quantum Mechanics.

Ida Kurnia Waliyanti Indah Emilia Wijayanti and M. Farchani Rosyid

A Jordan ring is an example of a non-associative ring. We can construct a Jordan ring from an associative ring by defining the Jordan product. In this paper, we discuss the properties of non-associative rings by studying the properties of Jordan rings. All of the ideals of a non-associative ring R are non-associative, except the ideal generated by the associators in R. Hence, a quotient ring can be constructed modulo the ideal generated by the associators in R. The fundamental homomorphism theorem for rings can be applied to non-associative rings; by a little modification, the corresponding isomorphism can be established. Furthermore, we define a module over a non-associative ring and investigate its properties, and we give some examples of such modules. We show that if M is a module over a non-associative ring R, then M is also a module over the corresponding quotient ring if the ideal concerned is contained in the annihilator of R. Moreover, we define the tensor product of modules over a non-associative ring. The tensor product of modules over a non-associative ring is commutative and associative up to isomorphism, but not element by element.
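The construction from an associative ring can be checked numerically: on 2×2 real matrices, the Jordan product a∘b = (ab + ba)/2 is commutative and satisfies the Jordan identity (a∘b)∘(a∘a) = a∘(b∘(a∘a)), yet fails associativity. The matrices below are chosen so the failure is visible.

```python
import numpy as np

a = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent upper-triangular matrix
b = a.T                                  # its transpose
c = np.diag([1.0, 0.0])                  # a projection

def jordan(x, y):
    """Jordan product on an associative matrix algebra."""
    return (x @ y + y @ x) / 2.0

commutative = np.allclose(jordan(a, b), jordan(b, a))
jordan_identity = np.allclose(jordan(jordan(a, b), jordan(a, a)),
                              jordan(a, jordan(b, jordan(a, a))))
associative = np.allclose(jordan(jordan(a, b), c), jordan(a, jordan(b, c)))
```

Here (a∘b)∘c and a∘(b∘c) differ, so `associative` is False while the other two checks pass.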

Jackel Vui Lung Chew Jumat Sulaiman and Andang Sunarto

A porous medium equation is a nonlinear parabolic partial differential equation that describes many physical phenomena. Solutions of the porous medium equation are important for the investigation of nonlinear processes involving fluid flow, heat transfer, diffusion of gas particles and population dynamics. As part of the development of a family of efficient iterative methods to solve the porous medium equation, the Half-Sweep technique has been adopted. Prior works in the literature on the successful application of Half-Sweep to several types of mathematical problems are the underlying motivation of this research. This work aims to solve the one-dimensional porous medium equation efficiently by incorporating the Half-Sweep technique in the formulation of an unconditionally stable implicit finite difference scheme. The notable property of Half-Sweep is its ability to secure low computational complexity in computing numerical solutions. This work applies the Half-Sweep finite difference scheme to the general porous medium equation, up to the formulation of a nonlinear approximation function. The Newton method is used to linearize the formulated Half-Sweep finite difference approximation, so that the linear system can be constructed in matrix form. Then the Successive Over Relaxation method with a single parameter is applied to solve the generated linear system efficiently at each time step. To evaluate the efficiency of the developed method, termed the Half-Sweep Newton Successive Over Relaxation (HSNSOR) method, criteria such as the number of iterations, the program execution time and the magnitude of the absolute errors were investigated.
According to the numerical results, the solutions obtained by HSNSOR are as accurate as those of the Half-Sweep Newton Gauss-Seidel (HSNGS) method, which belongs to the same family of Half-Sweep iterations, and of the benchmark Newton Gauss-Seidel (NGS) method. The improvement produced by HSNSOR is significant: it requires fewer iterations and a shorter program execution time than the HSNGS and NGS methods.
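The SOR kernel that sits inside HSNSOR can be sketched on its own, applied here to a generic diagonally dominant tridiagonal system such as arises from an implicit finite difference step; the Half-Sweep and Newton layers are omitted, and the relaxation factor is a made-up choice.

```python
import numpy as np

# Diagonally dominant tridiagonal system (a stand-in for one linearised
# time step of an implicit scheme).
n = 32
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)

omega = 1.2                 # over-relaxation factor (illustrative value)
x = np.zeros(n)
for sweep in range(200):
    for i in range(n):      # Gauss-Seidel-style in-place update, relaxed
        s = A[i] @ x - A[i, i] * x[i]
        x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    if np.linalg.norm(A @ x - b) < 1e-10:
        break
iters = sweep + 1
```

With omega = 1 this is exactly Gauss-Seidel, which is the sense in which HSNSOR generalises HSNGS.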

Jamal Salah

In 1859, Bernhard Riemann, a German mathematician, presented a paper to the Berlin Academy that would change mathematics forever. The mystery of prime numbers was the focus. At the core of the presentation was a conjecture that Riemann did not prove, one that to this day baffles mathematicians. If the Riemann hypothesis holds true, it could change the way we do business, because prime numbers are the key element of banking and e-commerce security. It would also have a significant influence on the cutting edge of science, affecting quantum mechanics, chaos theory and the future of computation. In this article, we look at some well-known results on the Riemann zeta function in a different light. We explore the proofs of the zeta integral representation, analytic continuation and the first functional equation. Initially, we observe the omission of a logically undefined term in the integral representation of the zeta function by means of the Gamma function. We therefore propound some modifications in order to reasonably justify the location of the non-trivial zeros on the critical line Re(s) = 1/2, by assuming that ζ(s) and ζ(1-s) vanish simultaneously. Consequently, we conditionally prove the Riemann Hypothesis.

Ftameh Khaled and Pah Chin Hee

It is widely recognized that the theory of quadratic stochastic operators frequently arises due to its enormous contribution as a source of analysis for the investigation of dynamical properties and for modeling in diverse domains. In this paper, we construct a class of quadratic stochastic operators, called mixing quadratic stochastic operators, generated by a geometric distribution on an infinite state space. We also study the regularity of such operators by investigating the limit behavior for each case of the parameter. Some non-regular cases are proved for a new definition of mixing operators by using the shifting definition, where the new parameters satisfy the shifted conditions. A mixing quadratic stochastic operator was established on 3-partitions of the state space and considered for a special case of the parameter Ɛ. We found that the mixing quadratic stochastic operator is a regular transformation for some values of the parameter and non-regular for others, and that the trajectories converge to one of the fixed points. Stability and instability of the fixed points were investigated by finding the eigenvalues of the Jacobian matrix at these fixed points. We approximate the parameter Ɛ by another parameter, for which we established the regularity of the quadratic stochastic operators under certain inequalities. We conclude this paper by comparing with previous studies, where we found that some such quadratic stochastic operators are non-regular.

Norshakila Abd Rasid Zarina Bibi Ibrahim Zanariah Abdul Majid and Fudziah Ismail

This paper proposes a new alternative approach to the implicit diagonal block backward differentiation formula (BBDF) for solving linear and nonlinear first-order stiff ordinary differential equations (ODEs). We generate the solver by manipulating the number of back values to achieve the highest order possible using an interpolation procedure. The algorithm is developed and implemented in C++. The numerical integrator approximates several solution points concurrently, together with off-step points, in a block scheme over a non-overlapping solution interval at a single iteration. The lower triangular matrix form of the implicit diagonal yields fewer differentiation coefficients and ultimately reduces the execution time when running the code. We choose two intermediate points as off-step points appropriately, which is proven to guarantee the method's zero stability. The off-step points help to increase the accuracy by optimizing the local truncation error. The proposed solver satisfies the theoretical consistency and zero-stability requirements, leading to a convergent multistep method of third algebraic order. We used well-known standard linear and nonlinear stiff IVP problems from the literature for validation, measuring the algorithm's accuracy and processor time efficiency. The performance metrics are validated by comparison with a proven solver, and the output shows that the alternative method is better than the existing one.
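For readers unfamiliar with BDF machinery, a plain (non-block) BDF2 integrator with a scalar Newton solve per step is sketched below on a stiff test problem. This is the standard implicit-BDF scaffolding the proposed diagonal block BBDF builds on, not the proposed method itself; the test problem and step size are our choices.

```python
import math

def f(t, y):
    # Stiff Prothero-Robinson-type problem with exact solution y = cos t
    return -1000.0 * (y - math.cos(t)) - math.sin(t)

def df(t, y):
    return -1000.0          # df/dy, used by Newton

def bdf2(y0, h, steps):
    t, ys = 0.0, [y0]
    for n in range(steps):
        if n == 0:          # bootstrap the second point with backward Euler
            g  = lambda z: z - ys[-1] - h * f(t + h, z)
            dg = lambda z: 1 - h * df(t + h, z)
        else:               # BDF2: y_{n+1} = (4y_n - y_{n-1})/3 + (2h/3) f
            a = (4 * ys[-1] - ys[-2]) / 3.0
            g  = lambda z: z - a - (2 * h / 3) * f(t + h, z)
            dg = lambda z: 1 - (2 * h / 3) * df(t + h, z)
        z = ys[-1]
        for _ in range(8):  # Newton iterations on the implicit equation
            z = z - g(z) / dg(z)
        ys.append(z)
        t += h
    return ys

ys = bdf2(1.0, 0.01, 100)               # integrate to t = 1
err = abs(ys[-1] - math.cos(1.0))       # compare with the exact solution
```

A block method advances several such solution points per iteration instead of one, which is where the paper's efficiency gains come from.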

Samingun Handoyo Ying-Ping Chen Gugus Irianto and Agus Widodo

The aim of this research is to find the best performance of both the logistic regression and linear discriminant classifiers when various constants are added to their threshold values. The performance tools used for evaluating the classifier models are the confusion matrix, precision-recall, the F1 score and the receiver operating characteristic (ROC) curve. The audit-risk data set is used for the implementation of the proposed method. Data screening and dimension reduction using principal component analysis (PCA) are the first steps, conducted before the data are divided into training and testing sets. After the training process for obtaining the classifier model parameters has been completed, the performance measures are calculated only on the testing set, where various constants are added to the threshold value of both classifier models. The logistic regression classifier achieves its best performance of 94% precision-recall, 91.7% F1-score, and 0.906 area under the curve (AUC) when the added constant lies in the interval between 0.002 and 0.018. On the other hand, the linear discriminant classifier performs best when the threshold value is 0.035, with a precision-recall of 94%, an F1-score of 91.7%, and an AUC of 0.846.
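The evaluation loop described above can be sketched directly: sweep an additive constant on the decision threshold and recompute precision, recall and F1 each time. The toy scores below stand in for the audit-risk model's predicted probabilities; only the mechanics of the threshold sweep match the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.integers(0, 2, 200)                                   # true labels
scores = np.clip(y * 0.6 + rng.normal(0.3, 0.2, 200), 0, 1)   # fake probabilities

def prf1(y_true, y_pred):
    """Precision, recall and F1 from a binary confusion matrix."""
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

results = {}
for delta in np.arange(-0.1, 0.11, 0.02):    # constants added to the threshold
    thr = 0.5 + delta
    results[round(thr, 2)] = prf1(y, (scores >= thr).astype(int))

best_thr = max(results, key=lambda k: results[k][2])   # threshold with top F1
```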

Dwi Sulistyaningsih, Eko Andy Purnomo and Purnomo

This study investigates the errors made by students, and their various causal factors, in solving trigonometry problems involving the sine and cosine rules. Samples were taken randomly from high school students. Data were collected in two ways, namely a written test based on Polya's strategy and interviews with students who made mistakes. Students' errors were analyzed with the Newman concept. The results show that all types of errors occurred, with a distribution of 3.83, 19.15, 24.74, 24.89 and 27.39% for reading errors (RE), comprehension errors (CE), transformation errors (TE), process skill errors (PSE), and encoding errors (EE), respectively. The RE, CE, TE, PSE, and EE are marked, respectively, by errors in reading symbols or important information; misunderstanding information and not understanding what is known and what is asked; the inability to translate problems into mathematical models, together with the incorrect use of signs in arithmetic operations; inaccuracy in the answering process and a lack of understanding of fraction operations; and the inability to deduce answers. An anomaly occurred in that students with medium trigonometry achievement made more mistakes than students with low achievement.

Evgjeni Xhafaj, Daniela Halidini Qendraj, Alban Xhafaj and Etleva Halidini

The study explores the factors that affect the use of Google Classroom in Albanian universities, using the methodological developments of the partial least squares structural equation modelling (PLS-SEM) technique. This technique is used because it allows flexibility in modelling the relationships between constructs (or factors) and in exploring theoretical concepts. An alternative model is introduced by extending the Unified Theory of Acceptance and Use of Technology (UTAUT2) and integrating new relations between constructs. Our data come from a study of 528 students from 4 Albanian universities during the year 2020. Using Importance-Performance Matrix Analysis (IPMA), our analysis suggests that Habit is the construct with the greatest importance in determining Behavioral Intention towards Google Classroom, whereas Behavioral Intention has the greatest importance for Use Behavior of Google Classroom. The results of the study show that Habit and Hedonic Motivation have the greater impact on the Behavioral Intention to use Google Classroom. Additionally, we find that all constructs of the alternative model have an important influence on Behavioral Intention towards Google Classroom and explain 65.3 per cent of its variance. This study will help Higher Education Institutions assess the factors that influence the use of Google Classroom, so that this platform can be used as a support tool in the future.

Viktor Pandra, Badrun Kartowagiran and Sugiman

The aims of this research are: 1) producing a valid and reliable test instrument of mathematics skill for elementary school, and 2) finding out the characteristics of this test instrument. The instrument development in this research uses a modified version of the development model of Wilson, Oriondo and Antonio. The testing sample in this research is 160 students in each grade. The results are: 1) the Aiken's V validity index is 0.979 in grade IV and 0.988 in grade V, and the instrument reliability coefficients in grades IV and V are 0.883 and 0.954; 2) the model fit shows the instrument suits the 1PL model, i.e., parameter b (difficulty level). The item parameter analysis for grades IV and V shows that all items are in the good category, lying between -2 and 2. This indicates that all items are accepted and are reliable for measuring the development of the mathematics skills of elementary school students.
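The Aiken's V index reported above is computed as V = Σ(r − lo) / (n·(c − 1)), where r is each rater's score, lo the lowest possible rating, c the number of rating categories and n the number of raters. A minimal Python sketch with made-up ratings (not the study's data):

```python
def aiken_v(ratings, lo, hi):
    # V = sum(r - lo) / (n * (c - 1)); for a scale lo..hi, c - 1 == hi - lo.
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))
```

Values close to 1 (as with the 0.979 and 0.988 reported) indicate strong rater agreement that the items are content-valid.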

A. A. Dahalan and J. Sulaiman

Computational techniques have become a significant area of study in physics and engineering. The finite difference method was the first method used to evaluate such problems numerically. In 2002, a computational approach, an explicit finite difference technique, was used to solve the fuzzy partial differential equation (FPDE) based on the Seikkala derivative. This article investigates the application of an iterative technique, in particular the Two Parameter Alternating Group Explicit (TAGE) method, to solve the finite difference approximation of the fuzzy heat equation. The article broadens the use of the TAGE iterative technique to fuzzy problems owing to the reliability of the approach. The development and execution of the TAGE technique for the full-sweep (FS) and half-sweep (HS) schemes are also presented. The idea of the HS scheme is to reduce the computational complexity of the iterative methods by nearly, or more than, half. Additionally, numerical outcomes from the solution of two experimental problems are included and compared with the Alternating Group Explicit (AGE) approach to clarify their feasibility. In conclusion, the family of TAGE techniques has been used to solve the linear system arising from a one-dimensional fuzzy diffusion (1D-FD) discretization using a finite difference scheme. The findings suggest that the HSTAGE approach is superior in terms of iteration count, execution time, and Hausdorff distance relative to the FSTAGE and AGE approaches. The number of iterations for the HSTAGE approach decreases by approximately 71.60-72.95%, while its execution time is 74.05-86.42% better. Since TAGE is ideal for concurrent processing, this is seen as its key benefit, as it consists of sets of independent tasks that can be performed at the same time. The suggested technique is expected to be useful for further exploration in solving multi-dimensional FPDEs.

Restituto M. Llagas Jr.

Studying mathematics comprises acquiring a positive disposition toward mathematics and seeing mathematics as an effective way of looking at real-life situations. This study aimed to correlate the disposition to mathematics of prospective Filipino teachers with some teacher-related variables. The participants were prospective Filipino teachers at the University of Northern Philippines (UNP) and at the Divine Word College of Vigan (DWCV). Two sets of instruments were utilized in the study: the self-report questionnaire and the Mathematics Dispositional Functioning Inventory developed by Beyers [1]. Frequency and percentage, weighted mean, and chi-square were utilized for data analysis. Results show that the overall disposition to mathematics of the participants is "Positive". The cognitive, affective, and conative aspects each received a positive disposition; however, some items show an uncertain disposition to mathematics. The participants' profile variables have no significant relationship with their cognitive and conative disposition to mathematics. A training plan was conceptualized to provide information on the results of the study, to enhance awareness and understanding of dispositions, to equip participants with appropriate methods for solving mathematical problems, and to provide enrichment activities that foster a positive disposition to mathematics and consequently improve prospective teachers' and students' performance. Teachers are influential in developing students' effective ways of learning, doing, and thinking about mathematics, and understanding how attitudes are learned helps establish the association between a teacher's disposition and students' attitude and performance. Thus, fostering dispositions to mathematics through training improves prospective Filipino teachers' and students' performance.

A. A. Aminu, S. E. Olowo, I. M. Sulaiman, N. Abu Bakar and M. Mamat

Max-plus algebra is a discrete algebraic system developed on the operations max (⊕) and plus (⊗), where the max and plus operations play the roles of addition and multiplication in conventional algebra. This algebraic structure is a semi-ring whose elements are the real numbers together with ε = -∞ and e = 0. On the other hand, the synchronized discrete event problem is a problem in which an event is scheduled to meet a deadline. There are two aspects of this problem: the events run simultaneously, and the lengthiest event completes at the deadline. A recent survey on max-plus linear algebra shows that the operations max (⊕) and plus (⊗) play a significant role in the modeling of human activities. However, numerous studies have shown that there is very limited literature on the application of max-plus algebra to real-life problems. This idea motivates the basic algebraic results and techniques of this research. This paper proposes the discrepancy method of max-plus for solving m×n systems of linear equations with m ≤ n, and further shows that an n×n linear system of equations has either a unique solution, infinitely many solutions or no solution, while an m×n linear system with m < n has either infinitely many solutions or no solution. Also, the proposed concept is extended to the job-shop problem in a synchronized event. The results obtained show that the method is very efficient for solving such systems of linear equations and is also applicable to job-shop problems.
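The basic max-plus operations can be sketched directly: matrix-vector "multiplication" replaces × with + and + with max, which is how synchronized event timings (e.g., job-shop start times) propagate. A minimal illustration, with a made-up 2×2 system rather than any example from the paper:

```python
NEG_INF = float('-inf')  # epsilon, the max-plus additive identity

def maxplus_matvec(A, x):
    # (A (x) x)_i = max_j (A[i][j] + x[j]): plus plays multiplication,
    # max plays addition, and -inf annihilates as 0 does in ordinary algebra.
    return [max(aij + xj for aij, xj in zip(row, x)) for row in A]
```

Iterating x_{k+1} = A ⊗ x_k with A holding task durations gives the earliest start times of successive event cycles.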

Pakwan Riyapan, Sherif Eneye Shuaib, Arthit Intarasit and Khanchit Chuarkham

Epidemic models are essential in understanding the transmission dynamics of diseases. These models are often formulated using differential equations, and a variety of methods, including approximate, exact and purely numerical ones, are used to find their solutions. However, most of these methods are computationally intensive or require symbolic computations. This article presents the Differential Transformation Method (DTM) and Multi-Step Differential Transformation Method (MSDTM) for finding approximate series solutions of an SVIR rotavirus epidemic model. The SVIR model is formulated using nonlinear first-order ordinary differential equations, where S, V, I and R are the susceptible, vaccinated, infected and recovered compartments. We begin by discussing the theoretical background and the mathematical operations of the DTM and MSDTM. Next, the DTM and MSDTM are applied to compute the solutions of the SVIR rotavirus epidemic model. Lastly, to investigate the efficiency and reliability of both methods, solutions obtained from the DTM and MSDTM are compared with solutions from the fourth-order Runge-Kutta (RK4) method. The solutions from the DTM and MSDTM are in good agreement with the solutions from the RK4 method, and the comparison shows that the MSDTM is more efficient and converges more closely to the RK4 solution than the DTM. The advantage of the DTM and MSDTM over other methods is that they do not require a perturbation parameter to work and do not generate secular terms. Therefore, both methods are well suited to solving epidemic models of this type.
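The flavor of the DTM/MSDTM can be shown on a simpler SIR-type system (an illustrative stand-in, not the paper's SVIR model): the differential transform turns s' = -βsi, i' = βsi - γi, r' = γi into algebraic recurrences for the Taylor coefficients, with products becoming convolutions; the multi-step variant restarts the series on each subinterval:

```python
def dtm_sir(beta, gamma, s0, i0, r0, K):
    # Differential transform coefficients up to order K:
    # (k+1) X(k+1) = transform of the right-hand side at order k,
    # with the product s*i transformed as a Cauchy convolution.
    S, I, R = [s0], [i0], [r0]
    for k in range(K):
        conv = sum(S[j] * I[k - j] for j in range(k + 1))
        S.append(-beta * conv / (k + 1))
        I.append((beta * conv - gamma * I[k]) / (k + 1))
        R.append(gamma * I[k] / (k + 1))
    return S, I, R

def eval_series(C, t):
    # Evaluate the truncated Taylor series sum_k C[k] t^k.
    return sum(c * t**k for k, c in enumerate(C))

def msdtm_sir(beta, gamma, s0, i0, r0, K, t_end, steps):
    # Multi-step DTM: rebuild the series on each subinterval of width h,
    # using the end of one subinterval as the next initial condition.
    h = t_end / steps
    s, i, r = s0, i0, r0
    for _ in range(steps):
        S, I, R = dtm_sir(beta, gamma, s, i, r, K)
        s, i, r = eval_series(S, h), eval_series(I, h), eval_series(R, h)
    return s, i, r
```

Because the transformed right-hand sides cancel term by term, the total population s + i + r is conserved by the series at every order, which gives a quick sanity check on the recurrences.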

B. K. Buzdov

When cooling living biological tissue (an active, non-inert medium), cryomedicine uses cryo-instruments with various forms of cooling surface. Cryo-instruments are located on the surface of the biological tissue or completely penetrate into it. With a decrease in the temperature of the cooling surface, an unsteady temperature field appears in the tissue, which in the general case depends on three spatial coordinates and time. To date, there are a large number of scientific publications that consider mathematical models of the cryodestruction of biological tissue. However, in the overwhelming majority of them, the Pennes equation (or some modification of it) is taken as the basis of the mathematical model, in which the dependence of the heat sources of biological tissue on the desired temperature field is linear. This character of the dependence does not allow one to describe the actually observed spatial localization of heat. In addition, Pennes' model does not take into account the fact that the freezing of the intercellular fluid occurs much earlier than the freezing of the intracellular fluid, and the heat corresponding to these two processes is released at different times. In the proposed work, a new mathematical model of the cooling and freezing of living biological tissue is built for a flat rectangular applicator located on its surface. The model takes into account the above features, is a three-dimensional boundary-value problem of Stefan type with nonlinear heat sources of a special form, and has applications in cryosurgery. A method is proposed for the numerical study of the problem posed, based on locally one-dimensional difference schemes without explicitly separating the boundary of the influence of cold and the boundaries of the phase transition. The method was previously successfully tested by the author in solving other two-dimensional problems arising in cryomedicine.

J. Uma Maheswari, A. Anbarasan and M. Ravichandran

In complex valued metric spaces, common fixed point theorems satisfying rational contraction mappings have been proved. Within contraction mapping theory, several researchers have demonstrated fixed-point theorems, common fixed-point theorems and coupled fixed-point theorems using complex valued metric spaces. In b-metric spaces, the fixed point theorem was proved by the principle of contraction mapping. The notion of complex valued b-metric spaces generalizes complex valued metric spaces, and there the fixed point theorem was explained using rational contractions. A metric space in which the symmetry condition d(x, y) = d(y, x) is dropped is called a quasi-metric space; thus every metric space is a special kind of quasi-metric space. Quasi-metric spaces have been discussed by many researchers. Banach introduced the theory of contraction mappings and proved the fixed point theorem in metric spaces. We now introduce the new notion of complex quasi b-metric spaces involving rational type contractions, and prove unique fixed point theorems with continuous as well as non-continuous functions, illustrated with examples.

Vipin Verma and Mannu Arya

Many researchers have worked on recurrence relations, an important topic not only in mathematics but also in physics, economics and various applications in computer science. There are many useful results on recurrence relation sequences, but a main problem remains: to find any term of a recurrence relation sequence, we need to find all previous terms of the sequence. Many important theorems have been obtained on recurrence relations. In this paper we give a special identity for the generalized kth order recurrence relation. These identities are very useful for finding any term of a recurrence relation sequence of any order. We define a special formula by which any term of a recurrence relation sequence can be found directly, without computing all previous terms. The well-known relation between the coefficients of a recurrence relation and the roots of its characteristic polynomial for second order relations is extended here to recurrence relations of all higher orders, valid whenever the roots are distinct. In this sense the paper generalizes the relation between the coefficients of a recurrence relation and the roots of its characteristic polynomial.

Theorem. Let C_1 and C_2 be arbitrary real numbers and suppose the equation

x^2 - C_1 x - C_2 = 0 (1)

has distinct roots X_1 and X_2. Then the sequence {a_n} is a solution of the recurrence relation

a_n = C_1 a_{n-1} + C_2 a_{n-2} (2)

for n = 0, 1, 2, … if and only if a_n = β_1 X_1^n + β_2 X_2^n, where β_1 and β_2 are arbitrary constants.

Proof. First suppose that a_n = β_1 X_1^n + β_2 X_2^n; we shall prove that {a_n} is a solution of recurrence relation (2). Since X_1 and X_2 are roots of equation (1), both satisfy X_i^2 = C_1 X_i + C_2. Consider C_1 a_{n-1} + C_2 a_{n-2} = β_1 X_1^{n-2}(C_1 X_1 + C_2) + β_2 X_2^{n-2}(C_1 X_2 + C_2) = β_1 X_1^n + β_2 X_2^n = a_n. So the sequence is a solution of the recurrence relation.

Now we prove the second part of the theorem. Let {a_n} be a solution of (2) with initial terms a_0 and a_1, and set

β_1 + β_2 = a_0 (3)
β_1 X_1 + β_2 X_2 = a_1. (4)

Multiplying (3) by X_1 and subtracting from (4), we have β_2 = (a_1 - a_0 X_1)/(X_2 - X_1), and similarly β_1 = (a_0 X_2 - a_1)/(X_2 - X_1). Since the roots are distinct, non-trivial values of β_1 and β_2 are well defined, and the result holds.

Example. Let {a_n} be the sequence with a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} for n ≥ 3 and a_0 = 0, a_1 = 1, a_2 = 2. Find a_10 for this sequence. Solution: the characteristic polynomial of this sequence is x^3 - 6x^2 + 11x - 6 = 0. Solving this equation, the roots are 1, 2 and 3, so by the third-order form of the theorem a_n = β_1 · 1^n + β_2 · 2^n + β_3 · 3^n (7). Using a_0 = 0, a_1 = 1, a_2 = 2 in (7) we have β_1 + β_2 + β_3 = 0 (8), β_1 + 2β_2 + 3β_3 = 1 (9), β_1 + 4β_2 + 9β_3 = 2 (10). Solving (8), (9) and (10) gives β_1 = -3/2, β_2 = 2, β_3 = -1/2, so a_n = -3/2 + 2 · 2^n - (1/2) · 3^n. Now putting n = 10, we have a_10 = -27478.

Recurrence relations are a very useful topic of mathematics, and many real-life problems may be solved by them, but there is a major difficulty: to find the 100th term of a sequence we ordinarily need to find all previous 99 terms. The theorem above removes this difficulty: if the coefficients of the recurrence relation of a given sequence satisfy its conditions, then any term of the sequence can be found directly, without finding all previous terms.
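The worked example can be checked mechanically: the closed form from the roots 1, 2, 3 must reproduce the same values as iterating the recurrence itself. A short Python sketch of both routes:

```python
def recurrence_term(n):
    # Iterate a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} with a_0=0, a_1=1, a_2=2.
    a = [0, 1, 2]
    for _ in range(3, n + 1):
        a.append(6 * a[-1] - 11 * a[-2] + 6 * a[-3])
    return a[n]

def closed_form(n):
    # a_n = -3/2 + 2*2^n - (1/2)*3^n, from the distinct roots 1, 2, 3.
    return round(-1.5 + 2 * 2**n - 0.5 * 3**n)
```

The closed form delivers a_10 in one evaluation, which is exactly the practical advantage the theorem claims over term-by-term iteration.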

Nik Muhammad Farhan Hakim Nik Badrul Alam, Nazirah Ramli and Norhuda Mohammed

Fuzzy time series is a powerful tool for forecasting time series data under uncertainty. Fuzzy time series was first formulated with fuzzy sets and then generalized by intuitionistic fuzzy sets. Intuitionistic fuzzy sets consider the degree of hesitation, in which the degree of non-membership is incorporated. In this paper, a fuzzy time series forecasting model based on intuitionistic fuzzy sets, via delegation of the hesitancy degree to the major grade in the de-i-fuzzification approach, was developed. The proposed model was implemented on the data of student enrollments at the University of Alabama. The forecasted output was obtained using the fuzzy logical relationships of the output, and its performance was compared with the fuzzy time series forecasting model based on fuzzy sets using the mean square error, root mean square error, mean absolute error, and mean absolute percentage error. The results showed that the forecasting model based on fuzzy sets induced from intuitionistic fuzzy sets performs better than the fuzzy time series forecasting model based on fuzzy sets.

Auni Aslah Mat Daud

An important part of the study of epidemic models is the local stability analysis of the equilibrium points. The linear algebra method commonly employed is the well-known Routh-Hurwitz criteria, which give necessary and sufficient conditions for all of the roots of the characteristic polynomial to be negative or have negative real parts. To date, there are no epidemic models in the literature which employ the Liénard-Chipart criteria. This note recommends an alternative linear algebra method, namely the Liénard-Chipart criteria, to significantly simplify the local stability analysis of epidemic models. Although the Routh-Hurwitz criteria are a correct method for local stability analysis, the Liénard-Chipart criteria have advantages over them. Using the Liénard-Chipart criteria, only about half of the Hurwitz determinant inequalities are required, with the remaining conditions of each set concerning only the signs of alternate coefficients of the characteristic polynomial. The Liénard-Chipart criteria are especially useful for polynomials with symbolic coefficients, as the determinants usually become significantly more complicated than the original coefficients as the degree of the polynomial increases. The Liénard-Chipart and Routh-Hurwitz criteria have similar performance for systems of dimension five or less. Theoretically, for systems of dimension higher than five, verifying the Liénard-Chipart criteria should be much easier than verifying the Routh-Hurwitz criteria, and the advantage of the Liénard-Chipart criteria becomes clear. Examples of local stability analysis using the Liénard-Chipart criteria for two recently proposed models are demonstrated to show the advantages of the simplified Liénard-Chipart criteria over the Routh-Hurwitz criteria.
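The two criteria can be compared numerically. The sketch below builds the Hurwitz matrix of p(s) = a0·s^n + … + an (assuming a0 > 0) and checks Routh-Hurwitz (all leading principal minors positive) against one of the equivalent Liénard-Chipart condition sets (positivity of the alternate coefficients a_n, a_{n−2}, … plus the alternate minors Δ_{n−1}, Δ_{n−3}, …; the exact set used is an assumption here, as several equivalent sets exist):

```python
def hurwitz_matrix(a):
    # H[i][j] = a_{2(j+1)-(i+1)} (1-based), zero when the index is out of range.
    n = len(a) - 1
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)
            if 0 <= k <= n:
                H[i][j] = float(a[k])
    return H

def det(M):
    # Determinant by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def leading_minors(H):
    return [det([row[:m] for row in H[:m]]) for m in range(1, len(H) + 1)]

def routh_hurwitz_stable(a):
    # All n leading principal minors of the Hurwitz matrix must be positive.
    return all(m > 0 for m in leading_minors(hurwitz_matrix(a)))

def lienard_chipart_stable(a):
    # Alternate coefficients positive, plus only every other Hurwitz minor.
    n = len(a) - 1
    minors = leading_minors(hurwitz_matrix(a))
    coeff_ok = all(a[k] > 0 for k in range(n, -1, -2))
    minor_ok = all(minors[m - 1] > 0 for m in range(n - 1, 0, -2))
    return coeff_ok and minor_ok
```

For λ³ + 6λ² + 11λ + 6 (roots −1, −2, −3) both tests pass, but Liénard-Chipart evaluates only one determinant (Δ2) instead of three, which is the saving the note emphasizes.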

Muhammad Ammar Shafi, Mohd Saifullah Rusiman and Siti Nabilah Syuhada Abdullah

The colon and rectum form the final portion of the digestive tube in the human body. Colorectal cancer (CRC) occurs due to bacteria produced from undigested food in the body. However, the factors and symptoms needed to predict the tumor size of colorectal cancer are still ambiguous. The problem with using linear regression arises from the use of uncertain and imprecise data. Since fuzzy set theory can deal with data that are not precise point values (uncertain data), this study applied the latest fuzzy linear regression to predict the tumor size of CRC. In addition, the parameters, errors and explanations for both models are included. Furthermore, secondary data of 180 colorectal cancer patients who received treatment in a general hospital, with twenty-five independent variables of different combinations of variable types, were considered to find the best model to predict the tumor size of CRC. Two models, fuzzy linear regression (FLR) and fuzzy linear regression with symmetric parameter (FLRWSP), were compared to find the best model for predicting the tumor size of colorectal cancer using two statistical error measures. FLRWSP was found to be the best model, with the least mean square error (MSE) and root mean square error (RMSE), following the methodology stated.

Navya Pratyusha M, Rajyalakshmi K, Apparao B V and Charankumar G

Pittsburgh Sleep Quality Index (PSQI) scoring (Buysse et al. 1989) is a powerful method to measure the sleep quality index based on the scores of various factors, namely duration of sleep, sleep disturbance, sleep latency, day dysfunction due to sleepiness, sleep efficiency, need of medication to sleep, and overall sleep quality. We focus mainly on smartphone usage at bedtime and its impact on the quality of sleep. Many studies have shown that the usage of smartphones at bedtime affects sleep quality, health and productivity. In the present study, we collected data randomly from middle-aged adults and observed the relation between gender and the quality of sleep using the phi coefficient. It is clearly observed that as we move from males to females, we move negatively from good sleep quality to poor sleep quality, indicating that males have poorer sleep quality than females. We also performed an analysis of variance to test whether there is an association between smartphone usage at bedtime and its impact on the quality of sleep.
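The phi coefficient used above measures association in a 2×2 table (here gender × good/poor sleep quality) as φ = (ad − bc) / √((a+b)(c+d)(a+c)(b+d)). A minimal Python sketch with made-up counts, not the study's data:

```python
import math

def phi_coefficient(a, b, c, d):
    # 2x2 contingency table [[a, b], [c, d]]:
    # phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)), in [-1, 1].
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0
```

A value near ±1 indicates a strong association between the row and column categories; the sign depends on how the categories are coded, which is why the direction of the gender effect must be read against the coding used.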

Leontiev V. L.

The algorithm of the generalized Fourier method associated with the use of orthogonal splines is presented using the example of an initial boundary value problem for a region with a curvilinear boundary. It is shown that the sequence of finite Fourier series formed by the method's algorithm converges at each moment of time to the exact solution of the problem, an infinite Fourier series. The structure of these finite Fourier series is similar to that of the partial sums of the infinite Fourier series. As the number of grid nodes increases in the region under consideration with a curvilinear boundary, the approximate eigenvalues and eigenfunctions of the boundary value problem converge to the exact eigenvalues and eigenfunctions, and the finite Fourier series approach the exact solution of the initial boundary value problem. The method provides arbitrarily accurate approximate analytical solutions to the problem, similar in structure to the exact solution, and therefore belongs to the group of analytical methods for constructing solutions in the form of orthogonal series. The theoretical results are confirmed by the results of solving a test problem for which both the exact solution and the analytical solutions of the discrete problems for any number of grid nodes are known. The solution of the test problem confirms the findings of the theoretical study of the convergence of the proposed method, and the proposed algorithm of the method of separation of variables associated with orthogonal splines yields approximate analytical solutions of the initial boundary value problem in the form of finite Fourier series with any desired accuracy. For any number of grid nodes, the method leads to a generalized finite Fourier series which corresponds with high accuracy to the partial sum of the Fourier series of the exact solution of the problem.

I M Sulaiman, M Mamat, M Y Waziri, U A Yakubu and M Malik

The Conjugate Gradient (CG) method is among the most prominent iterative mathematical techniques for the optimization of both linear and non-linear systems, owing to its simplicity, low memory requirement, low computational cost, and global convergence properties. However, some of the classical CG methods have drawbacks, including weak global convergence and poor numerical performance in terms of both the number of iterations and the CPU time. To overcome these drawbacks, researchers have proposed new variants of the CG parameters with efficient numerical results and nice convergence properties. The variants of the CG method include the scaled CG method, hybrid CG method, spectral CG method, three-term CG method, and many more. The hybrid conjugate gradient algorithm is among the most efficient variants in the class of conjugate gradient methods mentioned above. Interesting features of the hybrid modifications include inheriting the nice convergence properties and efficient numerical performance of the existing CG methods. In this paper, we propose a new hybrid CG algorithm that inherits the features of the Rivaie et al. (RMIL*) and Dai (RMIL+) conjugate gradient methods. The proposed algorithm generates a descent direction under the strong Wolfe line search conditions. Preliminary results on some benchmark problems show that the proposed method is efficient and promising.
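The structure of such a CG iteration can be sketched on a small quadratic test problem. The β formula below is the RMIL parameter β = gₖᵀ(gₖ − gₖ₋₁)/‖dₖ₋₁‖²; the max(0, ·) truncation is an illustrative safeguard in the spirit of the RMIL+ modification, not necessarily the paper's exact hybrid rule, and the exact line search applies only because the objective is quadratic:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_rmil(A, b, x0, tol=1e-10, max_iter=200):
    # Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite),
    # i.e. solve A x = b, with an RMIL-type conjugate gradient direction.
    x = list(x0)
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]  # gradient A x - b
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 < tol:
            break
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)               # exact line search (quadratic)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        # RMIL parameter, truncated to be nonnegative (assumed safeguard).
        beta = max(0.0, dot(g_new, [a - c for a, c in zip(g_new, g)]) / dot(d, d))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

With exact line search the new gradient is orthogonal to the previous direction, so each new direction satisfies gᵀd = −‖g‖² < 0 and is therefore a descent direction, mirroring the descent property the abstract claims under strong Wolfe conditions.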

Deepshikha Deka, Bhanita Das, Bhupen K Baruah and Bhupen Baruah

The research, development and extensive use of generalized forms of distributions for the analysis and modeling of applied sciences research data have been growing tremendously. The Weibull and Fréchet distributions are widely discussed for reliability and survival analysis using experimental data from the physical, chemical, environmental and engineering sciences. Both distributions are applicable to extreme value theory as well as to small and large data sets. Recently, researchers have developed several probability distributions to model experimental data, as these parent models are not adequate to fit some experiments. Modified forms of the Weibull and Fréchet distributions provide more flexible distributions for modeling experimental data. This article introduces a generalized form of the Weibull distribution, known as the Fréchet-Weibull Distribution (FWD), obtained by using the T-X family, which yields a more flexible distribution for modeling experimental data. The pdf and cdf, together with the survival function S(t), the hazard rate function h(t), the asymptotic behaviour of the pdf and survival function, and the possible shapes of the pdf, cdf, S(t) and h(t) of the FWD, are studied, and the parameters are estimated using the maximum likelihood method (MLM). Some statistical properties of the FWD, such as the mode, moments, skewness, kurtosis, variation, quantile function, moment generating function, characteristic function and entropies, are investigated. Finally, the FWD is applied to two sets of observations from mechanical engineering, showing the superiority of the FWD over other related distributions. This study provides a useful tool for the analysis and modeling of datasets in the mechanical engineering sciences and other related fields.

S. Padmashini and S. Pethanachi Selvam

Domination in graphs is to dominate a graph G by a set of vertices D (a subset of the vertex set V of G) such that each vertex in G is either in D or adjacent to a vertex in D. D is called a perfect dominating set if each vertex v not in D is adjacent to exactly one vertex of D. We consider a subset C which consists of both vertices and edges of the graph G. Then C is said to be a corporate dominating set if every vertex v not in C is adjacent to exactly one vertex of C, where C is built from the set P, consisting of all vertices in the vertex set of an edge induced subgraph G[E_1] (E_1 a subset of E) such that at most one vertex is common to any two open neighborhoods of different vertices in V(G[E_1]), and the set Q, consisting of all vertices in a vertex set V_1, a subset of V, such that no vertex is common to any two open neighborhoods of different vertices in V_1. The corporate domination number of G is the minimum cardinality of elements in C. In this paper, we determine the exact value of the corporate domination number for the Cartesian product of the cycle and the path.
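The perfect-domination condition underlying these definitions is easy to check computationally: every vertex outside the set must have exactly one neighbour inside it. A small Python sketch (the 4-cycle example is illustrative, not from the paper):

```python
def is_perfect_dominating_set(adj, D):
    # adj: dict mapping each vertex to its set of neighbours.
    # D is perfect dominating iff every vertex outside D has exactly
    # one neighbour inside D.
    D = set(D)
    return all(len(adj[v] & D) == 1 for v in adj if v not in D)
```

On the cycle C4 with vertices 0-1-2-3-0, the pair {0, 1} is a perfect dominating set, while {0, 2} is not, since vertex 1 then has two dominating neighbours.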

Oluwaseun Adeyeye and Zurni Omar

Various algorithms have been proposed for developing block methods, where the most widely adopted are the numerical integration and collocation approaches. However, there is another conventional approach, the Taylor series approach, although at inception it was utilised for the development of linear multistep methods for first order differential equations. This article explores the adoption of this approach through the modification of the conventional Taylor series approach. A new methodology is then presented for developing block methods, a more accurate method for solving second order ordinary differential equations, coined the Modified Taylor Series (MTS) Approach. A further step is taken by presenting a generalised form of the MTS Approach that produces any k-step block method for solving second order ordinary differential equations. The computational complexity of this generalised approach is calculated, and the result shows that the generalised algorithm involves less computational burden, and hence is suitable for adoption when developing block methods for solving second order ordinary differential equations. In summary, an alternate and easy-to-adopt approach to developing k-step block methods for solving second order ODEs with fewer computations is introduced in this article, with the developed block methods being suitable for solving second order differential equations directly.

Jasmine Lee Jia Min and Syafrina Abdul Halim

Increased flood risk is recognized as one of the most significant threats in most parts of the world, resulting in severe flooding events which have caused significant property and human life losses. As the number of extreme flash flood events observed in Klang Valley, Malaysia has increased recently, this paper focuses on modelling extreme daily rainfall over 30 years, from 1975 to 2005, in Klang Valley using the generalized extreme value (GEV) distribution. A cyclic covariate is introduced in the distribution because of the seasonal rainfall variation in the series. One stationary model (GEV) and three nonstationary models (NSGEV1, NSGEV2, and NSGEV3) are constructed to assess the impact of cyclic covariates on the extreme daily rainfall events. The best GEV model is selected using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the likelihood ratio test (LRT). The return level is then computed using the selected fitted GEV model. Results indicate that the NSGEV3 model, with a cyclic covariate trend in the location and scale parameters, provides a better fit to the extreme rainfall data. The results show the capability of the nonstationary GEV with cyclic covariates in capturing extreme rainfall events. The findings would be useful for engineering design and flood risk management purposes.
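Once GEV parameters (location μ, scale σ, shape ξ) are fitted, the T-year return level is the quantile z with exceedance probability p = 1/T: z = μ − (σ/ξ)[1 − (−ln(1−p))^(−ξ)] for ξ ≠ 0, reducing to the Gumbel formula z = μ − σ·ln(−ln(1−p)) as ξ → 0. A sketch with made-up parameter values, not the fitted Klang Valley estimates:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    # T-year return level of a GEV(mu, sigma, xi) distribution,
    # i.e. the quantile exceeded with probability p = 1/T per year.
    p = 1.0 / T
    y = -math.log(1.0 - p)
    if abs(xi) < 1e-9:                       # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))
```

Longer return periods give higher return levels, and for a nonstationary fit the same formula would be evaluated with the covariate-dependent μ(t) and σ(t).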

Nurhaida, Subanar, Abdurakhman and Agus Maman Abadi

This article deals with the problem of detecting abrupt changes in time series based on the Change Point Model (CPM) framework. We propose a fuzzification in a Fuzzy Time Series (FTS) model to eliminate the trend in a contaminated dependent series. The independent residuals are then used as input to the CPM method. In simulating an abrupt change, an ARIMA(1,1,1) model and its variance are considered. The abrupt change is modelled as an AO (Additive Outlier) type of outlier. The minimum weight, or break size, of the abrupt change is defined based on the ARIMA variance formulated in this article. The percentage of uncorrelated residuals obtained by the FTS model and the percentage of correct detections of the proposed procedure are shown by simulation. The proposed detection algorithm is applied to detect abrupt changes in monthly tourism series from the literature, namely in Taiwan and in Bali. The first series shows a slowly increasing trend with one abrupt change, while the second exhibits not only a slowly increasing trend but also a strong seasonal pattern with two abrupt changes. For comparison, we detect the changes in the empirical examples with an existing automatic detection procedure, the tso package in R. For the first example, the results show that both detection procedures give exactly the same location for the single change point, which the package recognises as an AO type of outlier. The abrupt change is related to the period of the SARS outbreak in Taiwan. In the second example, the proposed procedure locates 4 change points which form two locations of change, i.e., the first two change points are within 2 time points of each other, as are the last two. The locations are close to the times of the Bali bombing events. Meanwhile, the automatic procedure recognizes only one AO outlier in the series.

Kusno

The formulation of developable patches is beneficial for modeling plate-metal sheets in metal-based industrial objects. Meanwhile, installing developable patches on a frame of such items and making a hole in the objects' surface still require practical techniques. For these reasons, this research aims to introduce some methods for fitting a curve segment, cutting developable patches, and adjusting their formulas. Using these methods, one can design various profile shapes of rubber filler installed on a frame of the objects and create a fissure or hole in the patches' surface. The steps are as follows. First, we define the planes containing the patches' generatrices that are orthogonal to the boundary curves. We then fit Hermite and Bézier curves, by arranging some control point data on these planes, to model the rubber filler shapes. Second, we numerically evaluate a method for cutting the patches with a plane and adjusting the patches' form by modifying their formula from a linear interpolation form into a combination of curve and vector forms. As a result, we can present equations and procedures for plotting the required curves, cutting surfaces, and modifying the extensible or narrowable shape of Hermite patches. These methods offer some advantages and contribute to designing the surfaces of metal-sheet-based objects, especially modeling various forms of rubber filler profiles installed on a frame of the objects and making hole shapes in the plate-metal sheets.
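The curve-fitting step rests on evaluating Bézier curves from control point data; a minimal sketch of de Casteljau's evaluation algorithm (the control polygon below is hypothetical, not from the paper):

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by de Casteljau's algorithm.

    ctrl is a list of (x, y) control points; works for any degree.
    """
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points.
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control polygon for a filler-profile cross-section.
profile = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
mid = bezier_point(profile, 0.5)
```

The curve interpolates the first and last control points, which is what allows the profile to be attached exactly to a frame's boundary curve.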

Viliam Ďuriš

Various problems in the real world can be viewed as a Constraint Satisfaction Problem (CSP) based on several mathematical principles. This paper is a guideline for complete automation of the Timetable Problem (TTP) formulated as a CSP, which we are able to solve algorithmically, so the advantage is the possibility of solving the problem on a computer. The theory presents fundamental concepts and characteristics of CSPs along with an overview of the basic algorithms used for their solution, formulates the TTP as a CSP, and delineates the basic properties and requirements to be met by the timetable. The theory in our paper is mostly based on the work of Jeavons, Cohen, Gyssens, Cooper, and Koubarakis, on the basis of which we have constructed a computer program that verifies the validity and functionality of the constraint satisfaction method for solving the Timetable Problem. The solution of the TTP, which is characterized by its basic characteristics and requirements, was implemented in a program by a tree-based search algorithm, and our main contribution is an algorithmic verification of the abilities and reliability of constraints when solving a TTP by means of constraints. The created program was also used to verify the time complexity of the algorithmic solution.
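The tree-based search described can be sketched as a small backtracking CSP solver; the toy timetable instance below (three lessons, two slots, two clash constraints) is made up for illustration and is not the paper's program:

```python
def solve_csp(variables, domains, conflicts):
    """Backtracking (tree) search for a constraint satisfaction problem.

    conflicts is a set of variable pairs that must not share a value,
    e.g. two lessons taught by the same teacher cannot share a slot.
    """
    assignment = {}

    def consistent(var, value):
        # A value is allowed if no conflicting partner already holds it.
        return all(assignment.get(other) != value
                   for a, b in conflicts
                   for other in ((b,) if a == var else (a,) if b == var else ()))

    def backtrack(i):
        if i == len(variables):
            return True
        var = variables[i]
        for value in domains[var]:
            if consistent(var, value):
                assignment[var] = value
                if backtrack(i + 1):
                    return True
                del assignment[var]     # undo and try the next value
        return False

    return assignment if backtrack(0) else None

# Toy timetable: three lessons, two slots; L1 clashes with L2 and L3.
sol = solve_csp(["L1", "L2", "L3"],
                {"L1": [1, 2], "L2": [1, 2], "L3": [1, 2]},
                {("L1", "L2"), ("L1", "L3")})
```

Real timetables add many more constraint types, but they enter the solver only through the consistency check, which is what makes the CSP formulation attractive.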

Suparman, Abdellah Salhi and Mohd Saifullah Rusiman

The moving average (MA) model is a time series model often used for pattern forecasting and recognition. It contains a noise term that is often assumed to have a Gaussian distribution. However, in various applications, the noise often does not have this distribution. This paper suggests using Laplacian noise in the MA model instead. Gaussian and Laplacian noises were also compared to ascertain the right noise for the model. Moreover, the Bayesian method was used to estimate the parameters, such as the order and coefficients of the model, as well as the noise variance. The posterior distribution has a complex form because the parameters are concerned with a combination of spaces of different dimensions. Therefore, to overcome this problem, the reversible jump Markov Chain Monte Carlo (MCMC) algorithm is adopted. A simulation study was conducted to evaluate its performance. Having verified that it works properly, the algorithm was applied to model human heart rate data. The results showed that the MCMC algorithm can estimate the parameters of the MA model developed using Laplace-distributed noise. Moreover, compared with Gaussian noise, the Laplacian noise resulted in a higher-order model and produced a smaller variance.
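An MA(q) process with Laplace innovations, as studied here, can be simulated directly; a minimal sketch (coefficients and sample size are illustrative, and the innovations are scaled so the Gaussian and Laplacian variants have equal variance):

```python
import math
import random

def simulate_ma(coeffs, n, noise="laplace", scale=1.0, seed=0):
    """Simulate an MA(q) process x_t = e_t + sum_i coeffs[i] * e_{t-1-i}."""
    rng = random.Random(seed)

    def innovation():
        if noise == "laplace":
            # Inverse-CDF sampling of Laplace(0, b) with b = scale/sqrt(2),
            # so the innovation variance equals scale**2 in both cases.
            u = rng.random() - 0.5
            b = scale / math.sqrt(2)
            return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return rng.gauss(0.0, scale)    # Gaussian innovations

    q = len(coeffs)
    e = [innovation() for _ in range(n + q)]
    return [e[t] + sum(c * e[t - 1 - i] for i, c in enumerate(coeffs))
            for t in range(q, n + q)]

# Illustrative MA(1) with theta = 0.6 and Laplace innovations.
x = simulate_ma([0.6], 500, noise="laplace", seed=42)
```

Matching the innovation variances is what makes the heavier tails of the Laplace case visible as a modelling difference rather than a scale difference.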

Wilhemina Adoma Pels, Atinuke Olusola Adebanji and Sampson Twumasi-Ankrah

The study focused on the Generalized Pareto Distribution (GPD) under the Peaks Over Threshold (POT) approach. Twenty-one estimation methods were considered for extreme value modeling and their performances were compared. Our goal is to identify the best method under various conditions by means of a systematic simulation study. Some estimators that were not originally created under the POT framework (NON-POT) were also compared concurrently with the ones under the POT framework. The simulation results under varying shape parameters showed the Zhang estimator as "best" in performance for NON-POT in estimating both the shape and scale parameters for heavy-tailed cases. In the POT framework, the Zhang estimator again performed "best" in estimating very heavy tails for the shape and very short tails for the scale, regardless of the value of the scale parameter. When varying the sample size under the NON-POT framework, the Zhang estimator performed "best" for heavy tails, while in the POT framework the Pickands estimator was "best" at estimating the shape parameter for large sample sizes and the Zhang estimator for small sample sizes.
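Of the estimators compared, the Pickands estimator has a particularly simple closed form based on three order statistics; a minimal sketch (the sanity checks use exact Pareto and exponential quantiles, for which the estimator recovers the true shape):

```python
import math

def pickands_estimator(data, k):
    """Pickands (1975) estimator of the GPD shape (tail index) xi.

    Uses the k-th, 2k-th and 4k-th largest observations; requires 4k <= n.
    """
    xs = sorted(data, reverse=True)
    if 4 * k > len(xs):
        raise ValueError("need 4k <= sample size")
    x_k, x_2k, x_4k = xs[k - 1], xs[2 * k - 1], xs[4 * k - 1]
    return math.log((x_k - x_2k) / (x_2k - x_4k)) / math.log(2)

# Sanity data: exact Pareto quantiles (true xi = 1) and exact
# exponential quantiles (true xi = 0).
n, k = 99, 10
pareto = [(n + 1) / i for i in range(1, n + 1)]
expo = [-math.log(i / (n + 1)) for i in range(1, n + 1)]
xi_hat = pickands_estimator(pareto, k)
```

On these idealized samples the order-statistic spacings have ratio exactly 2 and 1 respectively, so the estimator returns the true shapes without simulation noise.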

Yakhshiboev M. U.

The case of one-dimensional and multidimensional non-convolutional integral operators in Lebesgue spaces is considered in this paper. The convergence in norm and almost everywhere of non-convolutional integral operators in Lebesgue spaces has been insufficiently studied. The kernels of non-convolutional integral operators need not have a monotone majorant, so the well-known results on the almost-everywhere convergence of convolutional averages are not applicable here. The kernels of non-convolutional integral operators take into account different behaviors near the origin and at infinity (which is important in applications) and cover, as a particular case, the situation of convolutional integral operators. We are interested in the behavior of the averages of a function as the averaging parameter tends to zero. Theorems on almost-everywhere convergence in the case of one-dimensional and multidimensional non-convolutional integral operators in Lebesgue spaces are proved. The theorems proved are more general ones (including for convolutional integral operators) and cover a wide class of kernels.

Vladimir A. Skorokhodov

The problem of reachability on graphs with restrictions is studied. Such restrictions mean that only those paths that satisfy certain conditions are valid paths on the graph. Because of this, for classical optimization problems one has to consider only a subset of feasible paths on the graph, which significantly complicates their solution. Reachability constraints arise naturally in various applied problems, for example, in the problem of navigation in telecommunication networks with areas of strong signal attenuation, or when modeling technological processes in which there is a condition on the order of actions or the compatibility of operations. General concepts of a graph with non-standard reachability and of a valid path on it are introduced. It is shown that classical graphs, as well as graphs with restrictions on passing through selected arc subsets, are special cases of graphs with non-standard reachability. A general approach to solving the shortest path problem on a graph with non-standard reachability is developed. This approach consists in constructing an auxiliary graph and reducing the shortest path problem on a graph with non-standard reachability to a similar problem on the auxiliary graph. A theorem on the correspondence between the paths of the original and auxiliary graphs is proved.
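The reduction described, building an auxiliary graph and running an ordinary shortest-path algorithm on it, can be sketched for one simple restriction (at most one "restricted" arc per path); the network below is a made-up example, not one from the paper:

```python
import heapq

def constrained_shortest_path(edges, source, target, max_restricted=1):
    """Shortest path using at most max_restricted 'restricted' arcs.

    Dijkstra runs on an auxiliary graph whose nodes are pairs
    (vertex, restricted arcs used so far), reducing the non-standard
    reachability problem to an ordinary shortest-path problem.
    """
    graph = {}
    for u, v, w, restricted in edges:
        graph.setdefault(u, []).append((v, w, restricted))
    pq = [(0, source, 0)]               # (cost, vertex, restricted-used)
    best = {}
    while pq:
        cost, u, used = heapq.heappop(pq)
        if (u, used) in best:
            continue
        best[(u, used)] = cost
        if u == target:
            return cost
        for v, w, restricted in graph.get(u, []):
            nxt = used + (1 if restricted else 0)
            if nxt <= max_restricted and (v, nxt) not in best:
                heapq.heappush(pq, (cost + w, v, nxt))
    return None

# Toy network: the cheapest path A-B-C uses two restricted arcs,
# so under the restriction a longer path must be taken.
edges = [("A", "B", 1, True), ("B", "C", 1, True),
         ("A", "D", 2, False), ("D", "C", 2, False),
         ("B", "C", 5, False)]
dist = constrained_shortest_path(edges, "A", "C", max_restricted=1)
```

The auxiliary graph here has at most |V|·(max_restricted+1) nodes, illustrating the general trade-off of the reduction: standard algorithms apply, at the cost of a larger state space.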

E. N. Sinyukova and O. L. Chepok

It is well known that the concepts of a geodesic line and a geodesic mapping are among the most fundamental concepts of the classical theory of Riemannian spaces. In geometry, the concept of a Riemannian space was formed as a generalization of the concept of a smooth surface in three-dimensional Euclidean space. It has turned out to be possible to extend to Riemannian spaces the concept of a geodesic point of a curve and to represent a geodesic line of a Riemannian space as a curve that consists exclusively of geodesic points. This fact has allowed understanding not only the local but also the global character of the basic equations of the theory of geodesic mappings of Riemannian spaces, which were originally obtained as a result of local investigations. An example of a global solution of the so-called new form of the basic equations in the theory of geodesic mappings of Riemannian spaces is built in the article. A sphere, considered as a subset of Euclidean space, forms its topological background. The investigations are based on the concept of an equidistant Riemannian space. They are carried out in the atlas that consists of two charts obtained with the help of a stereographic projection.

Adejumo T. Joel, Omonijo D. Ojo, Owolabi A. Timothy, Okegbade A. Ibukun, Odukoya A. Jonathan and Ayedun C. Ayedun

Over the years, non-parametric test statistics have been the usual solution for data that do not follow a normal distribution. However, giving a statistical interpretation used to be a great challenge for some researchers. Hence, to overcome these hurdles, another test statistic, the rank transformation test statistic, was proposed so as to close the gap between parametric and non-parametric test statistics. The purpose of this study is to compare the conclusions of rank transformation test statistics with their equivalent non-parametric test statistics in both one- and two-sample problems using real-life data. In this study, the (2018/2019) Post Unified Tertiary Matriculation Examination (UTME) results of prospective students of Ladoke Akintola University of Technology (LAUTECH) Ogbomoso across all faculties of the institution were used for the analysis. The data were subjected to non-parametric test statistics, namely the asymptotic Wilcoxon sign test and the Wilcoxon rank-sum test (both asymptotic and exact-distribution versions), using the Statistical Package for the Social Sciences (SPSS). In the same vein, R statistical programming codes were written for the rank transformation test statistics. Their p-values were extracted and compared with each other with respect to the pre-selected alpha level (α) = 0.05. Results in both cases revealed that there is a significant difference in the median of the scores across all faculties, since the p-values are less than the preselected alpha level of 0.05. Therefore, the rank transformation test statistic is recommended as an alternative to non-parametric tests in both one-sample and two-sample problems.
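The rank transformation idea, replacing observations by their joint ranks and then applying the ordinary two-sample t statistic, can be sketched as follows (the scores below are hypothetical, not the UTME data):

```python
def ranks(values):
    """Midranks of values (ties share the average rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_transform_t(x, y):
    """Two-sample pooled t statistic computed on the joint ranks.

    Large |t| indicates a location difference, much as the Wilcoxon
    rank-sum test does; this is the rank-transform idea of Conover.
    """
    rxy = ranks(list(x) + list(y))
    rx, ry = rxy[:len(x)], rxy[len(x):]
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    sx = sum((v - mx) ** 2 for v in rx) / (len(rx) - 1)
    sy = sum((v - my) ** 2 for v in ry) / (len(ry) - 1)
    sp = ((len(rx) - 1) * sx + (len(ry) - 1) * sy) / (len(rx) + len(ry) - 2)
    return (mx - my) / (sp * (1 / len(rx) + 1 / len(ry))) ** 0.5

# Hypothetical exam scores for two faculties.
t_stat = rank_transform_t([55, 61, 58, 70, 66], [40, 45, 43, 50, 39])
```

Because the t-test runs on ranks rather than raw scores, the usual parametric machinery (and its familiar interpretation) applies while the distributional assumptions are relaxed.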

Retno Ayu Cahyoningtyas, Solimun and Adji Achmad Rinaldo Fernandes

The purpose of this research is to develop structural modeling with metric and nonmetric measurement scales. This study also compares the level of efficiency between the first-order and second-order models. The application of structural modeling in agriculture concerns the satisfaction of farmers in East Java. The data used in this study are perception data obtained by distributing questionnaires to farmers in East Java Province in 2020. The respondents in this study came from 155 districts in East Java Province. The sampling technique chosen is probability sampling, specifically proportional area random sampling. The results show that the first-order model is better than the second-order model because it has the lowest MSE value and the highest R². The path analyses for the first-order and second-order models produce the same result: there is a significant positive effect of the gratitude variable on the farmer satisfaction variable. That is, the more gratitude felt by farmers, the greater the satisfaction of East Java farmers. On the other hand, the test results showed that demographic variables did not significantly influence the gratitude variable.

Priya Arora and V. P. Tomar

Background: Measuring information and removing uncertainty are essential to human thinking and to many real-world objectives. Information is useful and beneficial if it is free from uncertainty and fuzziness. Shannon was the first to coin the term entropy as a measure of uncertainty, and he gave an expression for entropy based on a probability distribution. Zadeh used Shannon's idea to develop the concept of fuzzy sets. Later on, Atanassov generalized the concept of a fuzzy set and developed intuitionistic fuzzy sets. Purpose: Sometimes we do not have complete information about a fuzzy set or an intuitionistic fuzzy set. Only partial information is known, i.e., either only a few values of the membership or non-membership function are known, or a relationship between them is known, or some inequalities governing these parameters are known. Kapur measured the partial information given by a fuzzy set. In this paper, we attempt to quantify the partial information given by intuitionistic fuzzy sets, considering all the cases. Methodologies: We analyze some well-known definitions and axioms used in the field of fuzzy theory. Principal Results: We have devised methods to measure the incomplete information given about intuitionistic fuzzy sets. Major Conclusions: By devising methods of measuring partial information about an IFS, we can use this information to get an idea about the given set and use it wisely to make a good decision.

Jonathan Kwaku Afriyie, Sampson Twumasi-Ankrah, Kwasi Baah Gyamfi, Doris Arthur and Wilhemina Adoma Pels

Unit root tests for stationarity are relevant in almost every practical time series analysis. Deciding which unit root test to use is a topic of active interest. In this study, we compare the performance of the three commonly used unit root tests in time series, namely the Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests. Based on the literature, these unit root tests sometimes disagree in selecting the appropriate order of integration for a given series. Therefore, the decision to use a unit root test relies essentially on the judgment of the researcher. If we wish to remove this subjective decision, we have to locate an objective basis that unmistakably characterizes which test is the most appropriate for a particular time series type. Thus, this study seeks to unravel this problem by providing a guide on which unit root test to utilize when there is a disagreement between them. A simulation study of eight (8) univariate time series models with eight (8) different sample sizes, three (3) differencing orders, and nine (9) different parameter values was performed. It was observed from the results that the performance of the three tests improved as the sample size increased. Based on a comparison of overall performance, the KPSS was the "best" unit root test to use when there is disagreement.
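The intuition behind the ADF test can be seen in the simplest no-constant Dickey-Fuller regression, where Δy_t is regressed on y_{t-1} and the t statistic of the slope is examined; a minimal sketch on simulated unit-root and stationary series (this is the plain DF form without lag augmentation, not the full ADF used in practice):

```python
import random

def dickey_fuller_t(y):
    """t statistic of the no-constant DF regression dy_t = rho*y_{t-1} + e_t.

    Strongly negative values reject a unit root; values near zero do not.
    """
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, dy)) / sxx           # OLS slope
    resid = [d - rho * v for v, d in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)          # residual variance
    return rho / (s2 / sxx) ** 0.5

rng = random.Random(1)
walk, ar1 = [0.0], [0.0]
for _ in range(500):
    walk.append(walk[-1] + rng.gauss(0, 1))        # random walk: unit root
    ar1.append(0.5 * ar1[-1] + rng.gauss(0, 1))    # stationary AR(1)
t_walk, t_ar1 = dickey_fuller_t(walk), dickey_fuller_t(ar1)
```

The stationary series produces a strongly negative statistic while the random walk does not, which is exactly the discrimination the three compared tests formalize (KPSS with the reversed null hypothesis of stationarity).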

Jirapud Limthanakul and Nopparat Pochai

Chloride is a well-known chemical compound that is very useful in industry and agriculture. Chloride can be transformed into hypochlorite, chlorite, chlorate and perchlorate, and chloride and its derivatives are not dangerous if used at optimal levels. Groundwater contaminated with chloride and its derivatives affects human health; for example, drinking water in which the chloride concentration exceeds 250 mg/L can cause heart problems and contribute to high blood pressure. To address this problem, we use mathematical models to explain groundwater contamination with chloride and its derivatives. A transient groundwater flow model provides the hydraulic head of the groundwater, that is, the groundwater level. Next, we obtain the velocity and direction of flow by feeding the result of the first model into the second model, a groundwater velocity model that provides the x- and z-direction velocity components. The computed velocities are then plugged into the last model, a groundwater contamination dispersion model, to approximate the chloride, hypochlorite, chlorite, chlorate and perchlorate concentrations in the groundwater. The proposed explicit finite difference techniques are used to approximate the model solutions. An explicit method was used to solve the hydraulic head model, a forward-space scheme describes the groundwater velocity model, and a forward-time central-space scheme is used to predict the transient groundwater contamination models. The simulations can be used to indicate when each simulated zone becomes a hazardous zone or a protection zone.
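The forward-time explicit idea used for the dispersion model can be sketched in one dimension for a single solute (grid, velocity and dispersion values below are illustrative; the paper's models are two-dimensional and more elaborate):

```python
def advect_diffuse(c0, u, D, dx, dt, steps):
    """Explicit forward-time scheme for dc/dt + u*dc/dx = D*d2c/dx2.

    Upwind advection (u > 0) and central diffusion; boundary values held
    fixed. Stable only for small enough dt (u*dt/dx <= 1, 2*D*dt/dx**2 <= 1).
    """
    c = list(c0)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            adv = -u * (c[i] - c[i - 1]) / dx                   # upwind
            dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + dif)
        c = new
    return c

# Chloride pulse (mg/L, illustrative) released mid-column on a 21-node grid.
c0 = [0.0] * 21
c0[10] = 250.0
c = advect_diffuse(c0, u=0.5, D=0.1, dx=1.0, dt=0.5, steps=20)
```

The pulse drifts downstream at the groundwater velocity while spreading, which is the mechanism by which a simulated zone eventually crosses the 250 mg/L hazard threshold or falls back below it.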

Aleksandr Bochkov, Dmitrii Pervukhin, Aleksandr Grafov and Veronika Nikitina

The quality of construction of Lorenz curves depends on the features of the information used. As a rule, the information is represented by a sample of values of the studied indicator, which is checked for unevenness. The economic indicators of income and cost, and the features of their samples, are considered. A feature of the cost indicator is highlighted: the presence of a clot in the sample of its values (a concentration of values on a small segment of the entire sample range). It is shown that the established order of constructing empirical laws based on such samples does not give the desired effect when constructing Lorenz curves, due to the loss of informative content of the sample in the places of the clot. The purpose of this article is to improve the quality of the Lorenz curve by increasing the informative content of a sample with a clot through a clustering procedure applied when constructing the empirical law. A step-by-step clustering procedure for dividing the entire range of the sample into intervals to construct an empirical distribution law is proposed, which is an element of the novelty of this study. A specific example shows how to improve the quality of a constructed Lorenz curve using this procedure. In addition, it is shown that Lorenz curves for economic indicators can be constructed directly on the basis of the empirical distribution law while taking its features into account.
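Constructing a Lorenz curve from a raw sample is itself straightforward, sorting the values and accumulating shares; a minimal sketch (the sample below, with a clot of nearly equal costs, is made up for illustration):

```python
def lorenz_curve(values):
    """Lorenz curve points (p_i, L_i): cumulative population share versus
    cumulative share of the total indicator, from the sorted sample."""
    xs = sorted(values)
    total = sum(xs)
    n = len(xs)
    pts = [(0.0, 0.0)]
    run = 0.0
    for i, v in enumerate(xs, 1):
        run += v
        pts.append((i / n, run / total))
    return pts

# Sample with a 'clot' of nearly equal costs plus a few large values.
curve = lorenz_curve([10, 10, 11, 10, 12, 50, 80])
```

The curve always runs from (0, 0) to (1, 1) and lies below the diagonal; the article's point is that when an empirical distribution law is built first, the interval choice inside the clot determines how faithfully this shape is reproduced.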

Shams A. Ahmed and Mohamed Elbadri

The Newell-Whitehead-Segel (NWS) equation has been used to describe many natural phenomena arising in fluid mechanics and has hence acquired much attention. Past studies gave importance to obtaining numerical or analytical solutions of this kind of equation by employing methods like the Modified Homotopy Analysis Transform Method (MHATM), the Adomian Decomposition Method (ADM), the Homotopy Analysis Sumudu Transform Method (HASTM), the Fractional Complex Transform coupled with He's polynomials (FCT-HPM) and the Fractional Residual Power Series Method (FRPSM). This research aims to demonstrate an efficient analytical method, the Sumudu Decomposition Method (SDM), for the study of analytical and numerical solutions of the NWS equation of fractional order. The coupling of the Adomian decomposition method with the Sumudu transform method simplifies the calculation. From the numerical results obtained, it is evident that SDM is easy to execute and offers more accurate results for the NWS equation than other methods such as FCT-HPM and FRPSM. Therefore, the coupling of the Adomian decomposition technique with the Sumudu transform method is easy to apply and, when applied to nonlinear differential equations of fractional order, yields accurate results.

Temitope Olu Ogunlade, Oluwatayo Michael Ogunmiloro, Segun Nathaniel Ogunyebi, Grace Ebunoluwa Fatoyinbo, Joshua Otonritse Okoro, Opeyemi Roselyn Akindutire, Omobolaji Yusuf Halid and Adenike Oluwafunmilola Olubiyi

This work concerns a deterministic and stochastic model describing the transmission of typhoid fever infection in a human host community, where the vaccination of susceptible births and immigrants as well as the screening and treatment of carriers and infected individuals are considered in the model build-up. The well-posedness of the deterministic model and the computation of its basic reproduction number R_{typ} are obtained and analysed. The deterministic model is further transformed into a stochastic model, where the drift and diffusion parts of the model are obtained, and the existence and uniqueness of the stochastic model are discussed. Numerical simulations involving the model parameters of R_{typ} showed that the vaccination of susceptible births and incoming immigrants, as well as the screening and treatment of carriers and infected humans, are effective in bringing the threshold (R_{typ} ≈ 0.7944) below 1. The results of other simulations suggest that more health policies should be implemented, as a low R_{typ} may not be guaranteed because vaccination wanes over time. In addition, numerical simulations of the stochastic model equations describing the sub-populations of human individuals in the total human host community are carried out using the computational software MATLAB.
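The paper's compartmental model includes vaccination, carrier and treatment classes; as a reduced sketch of the same threshold mechanism, here is an Euler integration of the classic SIR system showing an outbreak dying out when the reproduction number is below 1 (all parameter values are illustrative, not the paper's):

```python
def sir_euler(beta, gamma, s0, i0, r0, dt, steps):
    """Euler integration of the classic SIR system, a reduced sketch of the
    typhoid model (which adds carrier, vaccinated and treated classes)."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i                     # incidence term
        s, i, r = (s - dt * new_inf,
                   i + dt * (new_inf - gamma * i),
                   r + dt * gamma * i)
    return s, i, r

# Reproduction number beta/gamma = 0.4 < 1: the outbreak dies out.
s, i, r = sir_euler(beta=0.2, gamma=0.5, s0=0.99, i0=0.01, r0=0.0,
                    dt=0.1, steps=2000)
```

When the reproduction number is below 1, the infected fraction decays to zero with only a small depletion of susceptibles, the same qualitative behaviour the abstract reports for R_{typ} ≈ 0.7944.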

Chatarina Enny Murwaningtyas, Sri Haryatmi Kartiko, Gunardi and Herry Pribawanto Suryawan

This paper deals with Indonesian option pricing using mixed fractional Brownian motion to model the underlying stock price. There has been research on Indonesian option pricing using Brownian motion, and another study states that logarithmic returns of the Jakarta composite index have long-range dependence. Motivated by the fact that there is long-range dependence in the logarithmic returns of Indonesian stock prices, we use mixed fractional Brownian motion to model the logarithmic returns of stock prices. The Indonesian option differs from other options in its exercise time: the option can be exercised at maturity or at any time before maturity with a profit of less than ten percent of the strike price, and it will be exercised automatically if the stock price hits a barrier price. Therefore, the mathematical model is unique, and we apply the method of partial differential equations to study it. An implicit finite difference scheme has been developed to solve the partial differential equation used to obtain Indonesian option prices. We study the stability and convergence of the implicit finite difference scheme and also present several examples of numerical solutions. Based on theoretical analysis and the numerical solutions, the scheme proposed in this paper is efficient and reliable.
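The paper's implicit scheme targets the mixed-fBm pricing PDE; as a sketch of the core building block only, here is one backward-Euler step of the model heat equation u_t = a·u_xx solved with the Thomas (tridiagonal) algorithm, stable even at time steps that would blow up an explicit scheme:

```python
def implicit_heat_step(u, a, dx, dt):
    """One backward-Euler step of u_t = a*u_xx with fixed (Dirichlet) ends.

    Solves the tridiagonal system (I - dt*a*Lap) u_new = u_old by the
    Thomas algorithm; unconditionally stable, unlike the explicit scheme.
    """
    r = a * dt / dx ** 2
    lower, diag, upper = -r, 1 + 2 * r, -r      # interior-node coefficients
    rhs = u[1:-1]
    rhs[0] += r * u[0]
    rhs[-1] += r * u[-1]
    m = len(rhs)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0] = upper / diag                        # forward sweep
    dp[0] = rhs[0] / diag
    for i in range(1, m):
        denom = diag - lower * cp[i - 1]
        cp[i] = upper / denom
        dp[i] = (rhs[i] - lower * dp[i - 1]) / denom
    x = [0.0] * m                               # back substitution
    x[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return [u[0]] + x + [u[-1]]

# Mesh ratio r = 4, far beyond the explicit stability limit of 1/2.
u = [0.0] * 11
u[5] = 1.0
for _ in range(10):
    u = implicit_heat_step(u, a=1.0, dx=0.1, dt=0.04)
```

The tridiagonal solve costs only O(n) per step, which is why implicit schemes remain practical for pricing grids despite requiring a linear system at every time level.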

Piyali Mallick and Lakshmi Narayan De

In this work, we propose a stochastic inventory model for situations in which delay in payment is acceptable. Most inventory models on this topic suppose that the supplier offers the retailer a fixed delay period, within which the retailer can sell the goods, accumulate revenue and earn interest. They also assume that the trade credit period is independent of the order quantity. Few investigators have developed EOQ models under permissible delay in payments where the trade credit is connected with the order quantity: when the order quantity is less than the quantity at which delay in payment is permitted, payment for the items must be made immediately; otherwise, the fixed credit period applies. However, all these models were completely deterministic in nature. In reality, this trade credit period cannot be fixed; if it were, the retailer would not be interested in buying a quantity higher than the fixed quantity at which delay in payment is permitted. To reflect this situation, we assume that the trade credit period is not static but fluctuates with the order quantity. The demand during any scheduling period follows a probability distribution. We calculate the total variable cost per unit time. The optimal ordering policy of the scheme can be found with the aid of three theorems (proofs are provided). An algorithm to determine the best ordering rule with the assistance of these theorems is established, and numerical instances are provided for clarification. A sensitivity investigation of all the parameters of the model is presented and discussed. Some previously published results are special cases of the results obtained in this paper.

R. Sivaraman

Computing the day of the week for a given date in any century has been a great quest among astronomers and mathematicians for a long time. In recent centuries, thanks to the efforts of some great mathematicians, we now know methods of accomplishing this task. In doing so, people have developed various methods, some of which are very concise and compact but come with little accessible explanation. The chief purpose of this paper is to address this issue. Also, almost all known calculations involve either the use of tables or some pre-determined codes usually assigned to months, years or centuries. In this paper, I establish a mathematical proof of the determination of the day of any given date which is applicable to any number of years, even back to BCE times. I provide a detailed mathematical derivation of the month codes, which are the key factors in determining the day of any given date. Though the procedures for determining the day of a given date are quite well known, the way in which they were arrived at is not so well known; this paper goes into great detail in that respect. To be precise, I explain the formula obtained by the German mathematician Zeller in detail and try to simplify it further, reducing its complexity while remaining as effective as the original formula. The treatment of leap years and other astronomical facts is clearly presented to aid the derivation of the compact form of Zeller's formula. Some special cases and illustrations are provided wherever necessary to clarify the computations, for a better understanding of the concepts.
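Zeller's formula discussed in the paper can be stated compactly in code; the January/February shift to months 13 and 14 of the previous year is exactly the leap-year device the derivation explains:

```python
def zeller_day(day, month, year):
    """Day of the week by Zeller's congruence (Gregorian calendar).

    January and February are counted as months 13 and 14 of the previous
    year, so the leap day falls at the end of the counting year.
    """
    if month < 3:
        month += 12
        year -= 1
    K, J = year % 100, year // 100        # year of century, zero-based century
    h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

# 1 January 2000 fell on a Saturday.
d = zeller_day(1, 1, 2000)
```

The floor term (13(m+1))//5 is the "month code" in closed form: it encodes the irregular 30/31-day month lengths as a linear pattern once the year starts in March.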

Hani Syahida Zulkafli, George Streftaris and Gavin J. Gibson

Hypoglycaemia is a condition in which blood sugar levels in the body are too low. It is usually a side effect of insulin treatment in diabetic patients. Symptoms of hypoglycaemia vary not only between individuals but also within individuals, making it difficult for patients to recognize their hypoglycaemia episodes. Because the symptoms are not exclusive to hypoglycaemia, it is very important for patients to be able to identify that they are having a hypoglycaemia episode. Consistency models are statistical models that quantify the consistency of individual symptoms reported during hypoglycaemia. Because there are variants of the consistency model, it is important to identify which model best fits the data. The aim of this paper is to assess and verify these models. We developed an assessment method based on stochastic latent residuals and performed posterior predictive checking as the model verification. It was found that a grouped-symptom consistency model with a multiplicative form of symptom propensity and episode intensity threshold fits the data better and has more reliable predictive ability than the other models. This model can be used to assist patients and medical practitioners in quantifying patients' symptom-reporting capability, hence promoting awareness of their hypoglycaemia episodes so that corrective actions can be taken quickly.

Edy Nurfalah, Irvana Arofah, Ika Yuniwati, Andi Haslinah and Dwi Retno Lestari

This work is a research and development study of two-tier multiple-choice diagnostic test instruments on calculus material. The purposes of this study are: 1) to obtain the construction of a two-tier multiple-choice diagnostic test based on content and construct validity, and 2) to obtain the quality of the two-tier multiple-choice diagnostic tests based on their reliability values. The method used focuses on the construction of diagnostic tests. The development research was adapted from the Retnawati development model. The research produced the following results. 1) The construction of the two-tier multiple-choice diagnostic test, based on content and construct validity, showed that the test is valid. The content validity is evidenced by the average validity index (V): the two-tier multiple-choice diagnostic test instrument obtained an average validity index of 0.9333 and the interview guideline instrument a validity index of 0.7556, both of which approach the value 1. For construct validity, three dominant factors were obtained based on the scree plot, corresponding to the number of factors in the calculus material examined in this study. 2) The quality of the compiled two-tier diagnostic test instruments was established based on the reliability values obtained.
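The "validity index (V)" reported is, assuming the usual Aiken's V (the abstract does not name the index), computed from expert ratings as the proportion of the maximum possible agreement; a minimal sketch with hypothetical ratings, not the study's data:

```python
def aiken_v(ratings, lo, hi):
    """Aiken's content-validity index V for one item.

    ratings are expert scores on the scale lo..hi;
    V = sum(r - lo) / (n * (hi - lo)), so V = 1 means every
    expert gave the item the top score.
    """
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Hypothetical: five experts rate an item 4, 5, 5, 4, 5 on a 1-5 scale.
v = aiken_v([4, 5, 5, 4, 5], 1, 5)
```

Values near 1, like the 0.9333 reported for the test instrument, indicate near-unanimous top ratings from the expert panel.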

N. A. Abdul Rahman

Fuzzy delay differential equations have always been a powerful way to model real-life problems and have been developed throughout the last decade. Many types of fuzzy derivatives have been considered, including the recently introduced concept of strongly generalized differentiability. However, under this interpretation, very few methods have been introduced, obstructing the further development of fuzzy delay differential equations. This paper aims to provide solutions for fuzzy nonlinear delay differential equations, with the derivatives interpreted using the concept of strongly generalized differentiability. Under this interpretation, the calculations lead to two cases, i.e. two solutions, one of which is decreasing in diameter. To this end, a method resulting from the elegant combination of the fuzzy Sumudu transform and the Adomian decomposition method is used; it is termed the fuzzy Sumudu decomposition method. A detailed procedure for solving fuzzy nonlinear delay differential equations with the mentioned type of derivative is constructed. A numerical example is provided afterwards to demonstrate the applicability of the method. It is shown that the solution is not unique, which is in accord with the concept of strongly generalized differentiability. The two solutions can later be chosen by the researcher with regard to the characteristics of the problem. Finally, conclusions are drawn.

Andy Liew Pik Hern Aini Janteng and Rashidah Omar

Let S be the class of functions which are analytic, normalized and univalent in the unit disk. The main subclasses of S are starlike functions, convex functions, close-to-convex functions, quasiconvex functions, starlike functions with respect to (w.r.t.) symmetric points and convex functions w.r.t. symmetric points, the last denoted by K_{S}. In the recent past, many mathematicians have studied the Hankel determinant for numerous classes of functions contained in S. The qth Hankel determinant is defined for such classes; one instance is the widely familiar, so-called Fekete-Szegő functional, which has been discussed since the 1930s. Mathematicians still take great interest in it, especially in altered versions of it. Indeed, many papers explore the determinants H_{2}(2) and H_{3}(1). The explicit form of the functional H_{3}(1) contains H_{2}(k) for k from 1 to 3. Exceptionally, one of these determinants has not been discussed much yet. In this article, we deal with this Hankel determinant. Since it consists of coefficients of the function f belonging to the considered classes and K_{S}, we may find its bounds for these classes. Likewise, sharp results for these classes and K_{S} with a_{2} = 0 are obtained.
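The formulas in this abstract were lost in extraction; the standard definition of the qth Hankel determinant used throughout this literature (a supplied reference form, not quoted from the paper) is

```latex
H_q(n) \;=\;
\begin{vmatrix}
a_n & a_{n+1} & \cdots & a_{n+q-1}\\
a_{n+1} & a_{n+2} & \cdots & a_{n+q}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n+q-1} & a_{n+q} & \cdots & a_{n+2q-2}
\end{vmatrix},
\qquad a_1 = 1,
```

so that, in particular, $H_2(1) = a_3 - a_2^2$ (the Fekete-Szegő functional with $\mu = 1$) and $H_2(2) = a_2 a_4 - a_3^2$.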

Siti Hajar Khairuddin Mohd Hilmi Hasan and Manzoor Ahmed Hashmani

Fuzzy C-Means (FCM) is one of the most widely used techniques for fuzzy clustering and has proven robust and efficient in various applications. Image segmentation, stock market analysis and web analytics are examples of popular applications which use FCM. One limitation of FCM is that it only produces a Gaussian membership function (MF). The literature shows that different types of membership functions may perform better than others depending on the data used. This means that, by offering only the Gaussian membership function, FCM limits the capability of fuzzy systems to produce accurate outcomes. Hence, this paper presents a method to generate another popular shape of MF, the trapezoidal shape (trapMF), from FCM to allow it more flexibility in producing outputs. The construction of the trapMF uses the mathematical theory of Gaussian distributions, confidence intervals and inflection points. The cluster centers or means (μ) and standard deviations (σ) from the Gaussian output are used to determine the four trapezoidal parameters: lower limit a, upper limit d, lower support limit b, and upper support limit c, with the assistance of the function trapmf() in the Matlab fuzzy toolbox. The result shows that the mathematical theory of Gaussian distributions can be applied to generate a trapMF from FCM.
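A minimal Python sketch of one plausible mapping from the FCM Gaussian parameters (μ, σ) to the four trapezoidal parameters, assuming the shoulders b and c come from the Gaussian inflection points μ ± σ and the feet a and d from a k-sigma confidence interval; the paper's exact construction may differ.

```python
def trap_params(mu, sigma, k=3.0):
    # Feet a, d from a k-sigma confidence interval; shoulders b, c
    # from the Gaussian's inflection points at mu +/- sigma (assumed).
    return mu - k * sigma, mu - sigma, mu + sigma, mu + k * sigma

def trapmf(x, a, b, c, d):
    # Standard trapezoidal membership function (same shape as MATLAB's trapmf).
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)
```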

Ali F Jameel Sardar G Amen Azizan Saaban Noraziah H Man and Fathilah M Alipiah

Delay differential equations (known as DDEs) are broadly used in many scientific research areas and engineering applications. They arise because the rate of change in their mathematical models depends not only on the present state but also on certain past states. In this work, we propose an algorithm of an approximate method to solve linear fuzzy delay differential equations using the homotopy perturbation method with the double parametric form of fuzzy numbers. A detailed algorithm of the approach to fuzzification and defuzzification analysis is provided. In the initial conditions of the proposed problem there are uncertainties represented by triangular fuzzy numbers. A double parametric form of fuzzy numbers is defined and applied for the first time in this topic for the present analysis. The method is notable for its simplicity and its ability to handle delay differential equations without complicated Adomian polynomials or restrictive nonlinearity assumptions. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To show the features of the proposed method, a numerical example involving a first-order fuzzy delay differential equation is illustrated. The findings indicate that the suggested approach is very successful and simple to implement.
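For context, the double parametric form referred to above is commonly written as follows for a triangular fuzzy number (a, b, c); this is the form found in the literature on the topic, supplied here as a reference rather than quoted from the paper:

```latex
\tilde{u}(r) = \big[\underline{u}(r),\,\overline{u}(r)\big]
             = \big[a + (b-a)r,\; c - (c-b)r\big], \qquad r \in [0,1],
```

and introducing a second parameter $\beta \in [0,1]$ collapses the interval into the single crisp expression

```latex
\tilde{u}(r,\beta) = \beta\big(\overline{u}(r) - \underline{u}(r)\big) + \underline{u}(r),
```

so that $\beta = 0$ and $\beta = 1$ recover the lower and upper bounds respectively.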

O. S. Deepa

Product reliability has become a critical issue in the global business market. Generally, acceptance sampling guarantees the quality of the product. In an acceptance sampling plan, increasing the sample size may reduce the customer's risk of accepting bad lots and the producer's risk of rejecting good lots to a certain level, but it increases the cost of inspection. Hence, truncation of the life test time may be introduced to reduce the cost of inspection. The Modified Average Sample Number (MASN) for the Improved Double Sampling Plan (IDSP) based on truncated life tests is considered for the popular exponentiated family, namely the exponentiated gamma, exponentiated Lomax and exponentiated Weibull distributions. The modified ASN creates a bandwidth for the average sample number which is very useful for the consumer and producer. The interval for the average sample number gives the consumer a choice between a maximum and a minimum sample size, which is of much benefit without any loss for the producer. The probability of acceptance and the average sample number based on the modified double sampling plan are computed at the lower and upper limits for the exponentiated family. Optimal parameters of the IDSP under various exponentiated families with different shape parameters are computed. The proposed plan is compared with traditional double sampling and modified double sampling using the gamma, Weibull and Birnbaum-Saunders distributions, and the comparison shows that the proposed plan with respect to the exponentiated family performs better than all the other plans. Tables are provided for all distributions, and a comparative study of the tables based on the proposed exponentiated family and earlier existing plans is also done.
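As background for the ASN quantity discussed above, a classical double sampling plan (n1, c1, n2, c2) draws a second sample only when the first is inconclusive, so the average sample number at lot quality p is n1 plus n2 times the probability of an inconclusive first sample. This sketch shows that textbook formula, not the authors' modified (interval) version.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def asn_double(n1, c1, n2, c2, p):
    # ASN(p) = n1 + n2 * P(second sample needed); the second sample
    # is drawn when the first sample shows c1 < d1 <= c2 defectives.
    p_second = sum(binom_pmf(d, n1, p) for d in range(c1 + 1, c2 + 1))
    return n1 + n2 * p_second
```

Truncating the life test shortens inspection time; the MASN of the paper turns this single number into a band between lower and upper limits.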

Nor Syahmina Kamarudin and Syahida Che Dzul-Kifli

The dynamics of a multidimensional dynamical system may sometimes be inherited from the dynamics of its classical dynamical system. In the multidimensional case, we introduce a new map, an action on a space X induced by a continuous map. We then look at how topological transitivity of f affects the k-type transitivity of the induced action. To verify this, we look specifically at spaces called 1-step shifts of finite type over two symbols, equipped with the shift map. We apply some topological theories to prove that the action on 1-step shifts of finite type over two symbols induced by the shift map is k-type transitive for every k whenever the shift map is topologically transitive. We found a counterexample which shows that not all such maps are k-type transitive for every k. However, we have also found some sufficient conditions for k-type transitivity for every k. In conclusion, the map on 1-step shifts of finite type over two symbols induced by the shift map is k-type transitive for every k whenever the shift map either is topologically transitive or satisfies the sufficient conditions. This study helps to develop the study of k-chaotic behaviours of actions on multidimensional dynamical systems and their applications in symbolic dynamics.

Ali F Jameel Akram H. Shather N.R. Anakira A. K. Alomari and Azizan Saaban

This research focuses on approximate solutions of second-order fuzzy differential equations with fuzzy initial conditions using two different methods depending on the properties of fuzzy set theory. The methods, based on the optimal homotopy asymptotic method (OHAM) and the homotopy analysis method (HAM), are implemented and analyzed to obtain the approximate solution of a second-order nonlinear fuzzy differential equation. The topological concept of homotopy is used in both methods to produce a convergent series solution for the proposed problem. In contrast to perturbative approaches, these methods do not rely upon small or large parameters, so we can easily monitor the convergence of the approximation series. Furthermore, these techniques do not require any discretization or linearization, unlike numerical methods, and thus involve fewer calculations; they can solve high-order problems directly without reducing them to a first-order system of equations. The obtained results for the proposed problem are presented, followed by a comparative study of the two implemented methods. The validity and applicability of the methods in the fuzzy domain are illustrated by a numerical example. Finally, the convergence and accuracy of the proposed methods for the provided example are presented through error estimates between the approximate and exact solutions, displayed in the form of tables and figures.
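For reference, the convergence-control mechanism mentioned above rests on Liao's zeroth-order deformation equation, the standard starting point of the HAM; this is the textbook form, supplied for context rather than quoted from the paper:

```latex
(1-q)\,\mathcal{L}\big[\varphi(t;q) - u_0(t)\big]
  \;=\; q\,\hbar\,H(t)\,\mathcal{N}\big[\varphi(t;q)\big],
\qquad q \in [0,1],
```

where $\mathcal{L}$ is an auxiliary linear operator, $u_0$ an initial guess, $\hbar \neq 0$ the convergence-control parameter, $H(t)$ an auxiliary function and $\mathcal{N}$ the nonlinear operator; as $q$ moves from 0 to 1, $\varphi$ deforms from $u_0$ to the solution. Expanding $\varphi$ in powers of $q$ yields the convergent series solution, and tuning $\hbar$ is what lets one monitor convergence without small or large parameters.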

Sirasak Sasiwannapong Saowanit Sukparungsee Piyapatr Busababodhin and Yupaporn Areepong

The control chart is an important tool in multivariate statistical process control (MSPC), used for monitoring, controlling, and improving a process. In this paper, we propose six types of copula combinations for use on a Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. Observations from an exponential distribution, with dependence measured via Kendall's tau for moderate and strong positive and negative dependence among the observations, were generated using Monte Carlo simulation to measure the Average Run Length (ARL) as the performance metric, which should be sufficiently large when the process is in control on a MEWMA control chart. In this study, we assess the performance of the MEWMA control chart based on copula combinations by using these Monte Carlo simulations. The results show that the out-of-control (ARL_{1}) values were smaller in almost all cases. The performance of the Farlie-Gumbel-Morgenstern×Ali-Mikhail-Haq copula combination was superior to the others for all shifts with strong positive dependence among the observations. Moreover, when the magnitudes of the shift were very large, the performance metric values for observations with moderate and strong positive and negative dependence followed the same pattern.

Diah Ayu Widyastuti Adji Achmad Rinaldo Fernandes Henny Pramoedyo Nurjannah and Solimun

Regression analysis has three approaches to estimating the regression curve, namely parametric, nonparametric, and semiparametric approaches. Several studies have discussed modeling with the three approaches in cross-sectional data, where observations are assumed to be independent of each other. In this study, we propose a new method for estimating parametric, nonparametric, and semiparametric regression curves in spatial data. In spatial data, each observation point has coordinates that indicate its position, so observations are assumed to have different variances. The model developed in this research accommodates the influence of the predictor variables on the response variable globally for all observations, while adding the coordinates of each observation point locally. Based on the Mean Square Error (MSE) as the criterion for selecting the best model, modeling with the nonparametric approach produces the smallest MSE value, so these application data are modeled more precisely by the nonparametric truncated spline approach. There are eight possible models formed in this research, and the nonparametric model is better than the parametric model because its MSE value is smaller. In the semiparametric regression model that is formed, the variable X_{2} is a parametric component while X_{1} and X_{3} are nonparametric components (Model 2). The regression curve estimate with the nonparametric approach tends to be more efficient than Model 2, because the linearity assumption test shows that the relationships of all the predictor variables to the response variable are non-linear. Thus, in this study, spatial data with non-linear relationships between the predictors and the response tend to be better modeled with a nonparametric approach.
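The truncated spline mentioned above is built on the truncated power basis; this small sketch constructs that basis for one predictor value, as a generic illustration of the technique rather than the authors' spatial estimator.

```python
def truncated_power_basis(x, knots, degree=1):
    # Basis functions of a degree-d truncated polynomial spline:
    # 1, x, ..., x^d, (x - k1)_+^d, ..., (x - kK)_+^d,
    # where (u)_+ = max(0, u) switches each knot term on past its knot.
    basis = [x**j for j in range(degree + 1)]
    basis += [max(0.0, x - k)**degree for k in knots]
    return basis
```

Stacking these rows over all observations gives the design matrix of the nonparametric component, fitted by least squares; the spatial variant additionally feeds in each observation's coordinates.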

Habshah Midi and Jama Mohamed

The support vector regression (SVR) model is currently a very popular non-parametric method for estimating linear and non-linear relationships between response and predictor variables. However, there is a possibility of selecting vertical outliers as support vectors, which can unduly affect the regression estimates. Outliers from abnormal data points may result in bad predictions. In addition, when both vertical outliers and high leverage points are present in the data, the problem is further complicated. In this paper, we introduce a modified robust SVR technique for the simultaneous presence of these two problems. Three types of SVR models, i.e. eps-regression (ε-SVR), nu-regression (ν-SVR) and bound-constraint eps-regression (ε-BSVR), with eight different kernel functions, are integrated into the newly proposed algorithm. Based on 10-fold cross-validation and some model performance measures, the best model with a suitable kernel function is selected. To make the selected model robust, we developed a new double SVR (DSVR) technique based on fixed parameters, which can be used to detect and downweight influential observations or anomalous points in the data set. The effectiveness of the proposed technique is verified using a simulation study and some well-known contaminated data sets.
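The ε-SVR variant named above is built on Vapnik's ε-insensitive loss, sketched here as general background on why small residuals cost nothing while outliers still pull on the fit (which is what motivates the robust reweighting).

```python
def eps_insensitive(residual, eps=0.1):
    # Vapnik's eps-insensitive loss used by eps-SVR: residuals inside
    # the eps-tube cost nothing; outside the tube the cost grows
    # linearly, so a gross vertical outlier still influences the fit.
    return max(0.0, abs(residual) - eps)
```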

Luthfatul Amaliana Solimun Adji Achmad Rinaldo Fernandes and Nurjannah

WarpPLS analysis has three algorithms, namely the outer model parameter estimation algorithm, the inner model algorithm, and the hypothesis testing algorithm, which offers several choices of resampling method, namely Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding. The purpose of this study is to apply WarpPLS analysis by comparing the six resampling methods based on the relative efficiency of their parameter estimates. This study uses secondary data from a questionnaire with one formative variable and two reflective variables. Secondary data for the Infrastructure Service Satisfaction Index (IKLI) were obtained from the Study Report on Regional Development Planning for Economic Growth and the Malang City Gini Index in 2018, while secondary data for the Social Capital Index (IMS) and the Community Development Index (IPMas) were obtained from the Research Report on Regional Performance Indicators, the Human Development Index and the Poverty Rate of Malang City in 2018. The results indicate that, based on the two criteria used, namely relative efficiency and measures of goodness of fit, the Jackknife resampling method is the most efficient, followed by the Stable1, Bootstrap, Stable3, Stable2, and Blindfolding methods.
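As background for the comparison above, the jackknife estimates the sampling variability of a statistic from its leave-one-out replicates; the sketch below shows the generic jackknife standard error, not WarpPLS's internal implementation.

```python
def jackknife_se(data, estimator):
    # Leave-one-out jackknife standard error of an arbitrary estimator:
    # recompute the statistic n times, each time dropping one observation.
    n = len(data)
    reps = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    mean_rep = sum(reps) / n
    var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in reps)
    return var ** 0.5

def mean(xs):
    return sum(xs) / len(xs)
```

For the sample mean this reproduces the classical formula s/sqrt(n), which is a handy sanity check when comparing resampling methods by relative efficiency.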

Azumah Karim Ananda Omutokoh Kube and Bashiru Imoro Ibn Saeed

Global temperature change is an important indicator of climate change. Climate time series data are characterized by trend, seasonal/cyclical and irregular components, and the importance of adequately modeling these components cannot be overemphasized. In this paper, we propose an approach to modeling temperature data using a semiparametric additive generalized linear model. We derive a penalized maximum likelihood estimate of the additive components of the semiparametric generalized linear model, that is, of the regression coefficients and smooth functions. Statistical modeling was conducted on a real temperature time series data set. The study provides indications of the gain from semiparametric modeling in situations where a signal can be additively decomposed into trend, cyclical and irregular components. Thus, we recommend semiparametric additive penalized models as an option for fitting time series data sets, modeling the different components with different functions to adequately explain the relations inherent in the data.
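Purely as an illustration of the additive decomposition idea (the paper itself uses penalized maximum likelihood with smooth functions, not moving averages), a centered moving average separates a rough trend from the remaining cyclical-plus-irregular signal:

```python
def moving_average_trend(y, window):
    # Centered moving-average trend estimate; the remainder y - trend
    # holds the seasonal/cyclical and irregular parts of the series.
    half = window // 2
    trend = []
    for t in range(len(y)):
        lo, hi = max(0, t - half), min(len(y), t + half + 1)
        trend.append(sum(y[lo:hi]) / (hi - lo))
    return trend
```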

S. Al-Ahmad I. M. Sulaiman M. Mamat and L. G. Puspa

The differential transform method (DTM) is among the famous mathematical approaches for obtaining solutions of differential equations, owing to its simplicity and efficient numerical performance. However, the major drawback of the DTM is that it yields a truncated series solution, which is often a good approximation to the true solution only in a specified region. In this study, a modification of the DTM scheme, known as the MDTM, is proposed for obtaining an accurate approximation of second-order ordinary differential equations. The scheme, whose procedure is designed via the DTM, the Laplace transform and finally Padé approximation, gives a good approximation to the true solution of the equations in a large region. The proposed approach overcomes the difficulty encountered using the classical DTM and thus can serve as an alternative approach for obtaining the solutions of these problems. Preliminary results are presented based on some examples which illustrate the strength and application of the defined scheme. All the obtained results correspond to the exact solutions.
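The final Padé step converts a truncated series into a rational function valid in a larger region. This sketch computes the simplest case, the [1/1] approximant from the first three Maclaurin coefficients, as a generic illustration of that step (the paper's examples may use higher orders).

```python
def pade_1_1(c0, c1, c2):
    # [1/1] Pade approximant (a0 + a1*x) / (1 + b1*x) matched to the
    # series c0 + c1*x + c2*x^2 through order x^2 (requires c1 != 0).
    # Matching coefficients of (1 + b1*x)(c0 + c1*x + c2*x^2) gives:
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return a0, a1, b1
```

For exp(x), whose series starts 1 + x + x^2/2, this yields (1 + x/2)/(1 - x/2), which stays accurate well beyond where the bare quadratic truncation does.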

Noraishikin Zulkarnain Noorhelyna Razali Nuryazmin Ahmat Zainuri Haliza Othman and Alias Jedi

Mathematics is one of the major subjects that every engineering student needs to learn. However, students may have different views of and interests in mathematics subjects because of their different levels of thinking. To foster engineering students' appreciation of the applications of mathematics in engineering courses and help them apply and enhance their mathematical knowledge, the Fundamental Engineering Unit at the Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), organised the first 'Mathematics Day' on Thursday, May 4, 2017. Twelve students participated in a competition in which they created posters on the mathematical or statistical applications in their final year projects. The competition was judged by academic assessors, industry representatives and UKM alumni. This study examines the mathematical elements and applications in the students' posters, and reviews the relevance of the elements and topics of the Engineering Mathematics course in the posters. Reports from students who attended the competition are also analysed to determine the effectiveness of the activity. The student reports are interpreted using a descriptive statistical method, and the results indicate that the students had a positive reaction to the activity.

Amit Kumar Rana

Fuzzy set theory is a very useful technique for increasing the effectiveness and efficiency of forecasting. Conventional time series methods are not applicable when the variables of the time series are word variables, i.e. variables with linguistic terms. As India and most Asian countries have agriculture-based economies with much smaller farm holdings than their American, Australian and European counterparts, it is particularly important for these countries to have an approximate idea of future crop production. This will not only help in planning future policies but will also be a great help to farmers and agro-based companies in their future management. For small-area production, soft computing is an important and effective tool for predicting production, as agricultural production involves a high degree of uncertainty in many parameters. In the present study, 21 years of agricultural crop yield data are used and a comparative analysis of forecasts is done with three fuzzy models. The robustness of the models is tested on real agricultural farm production data for the wheat crop of G.B. Pant University of Agriculture and Technology, Pantnagar, India. As soft computing techniques involve the uncertainty of the system under study, it becomes ever more important for forecasting models to be accurate in their predictions. The efficiency of the three models is examined on the basis of statistical errors; the models are judged on the basis of the mean square error and the average percentage error. The results concern small-area production prediction and should encourage the prediction of large-scale production.
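The two judging criteria named above can be computed as follows; this is a generic sketch of the standard error metrics, assuming "average percentage error" means the mean absolute percentage error, which the abstract does not spell out.

```python
def mse(actual, forecast):
    # Mean square error of the forecasts.
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def average_percentage_error(actual, forecast):
    # Mean absolute percentage error, in percent (assumes actual != 0).
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast)
    )
```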

Mahesh Puri Goswami and Naveen Jha

In this article, we investigate the bicomplex triple Laplace transform in the framework of a bicomplexified frequency domain with a Region of Convergence (ROC), which is a generalization of the complex triple Laplace transform. Bicomplex numbers are pairs of complex numbers forming a commutative ring with unity and zero-divisors; they describe physical phenomena in four-dimensional space and provide a large class of frequency domains. We also derive some basic properties and an inversion theorem for the triple Laplace transform in bicomplex space. In this technique, we use the idempotent representation of bicomplex numbers, which plays a vital role in proving our results. Consequently, the obtained results are highly applicable in the fields of quantum mechanics, signal processing, electric circuit theory, control engineering, and the solution of differential equations. An application of the bicomplex triple Laplace transform is discussed in finding the solution of a third-order partial differential equation of a bicomplex-valued function.
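For orientation, the classical (complex) triple Laplace transform that the paper generalizes is conventionally defined by

```latex
F(p,q,s) \;=\; \int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty}
e^{-(px + qy + sz)}\, f(x,y,z)\; dx\, dy\, dz ,
```

with the bicomplex version allowing the frequency variables p, q, s to range over bicomplex values, split via the idempotent representation into a pair of complex-valued transforms. This definition is supplied as standard background, not quoted from the paper.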

Solimun Adji Achmad Rinaldo Fernandes and Retno Ayu Cahyoningtyas

Nonlinear principal component analysis is used for data that have mixed scales. This study uses a formative measurement model combining metric and nonmetric data scales. The variable used in this study is the demographic variable. This study aims to obtain the principal components of the latent demographic variable and to identify the strongest indicators forming it, with mixed scales, using samples of students of Brawijaya University based on predetermined indicators. The data used in this study are primary data collected through questionnaires distributed to the research respondents, who are active students of Brawijaya University Malang. The method used is nonlinear principal component analysis. Nine indicators are specified in this study, namely gender, region of origin, father's occupation, mother's occupation, type of residence, father's last education, mother's last education, parents' monthly income, and students' monthly allowance. The results show that the latent demographic variable for the sample of Brawijaya University students can be obtained by calculating its component scores. The nine indicators formed in PC1 (X_{1}) captured 19.49% of the variability, while the remaining 80.51% is not captured by this PC. Among these indicators, the strongest in forming the latent demographic variable are region of origin (I_{2}) and type of residence (I_{5}).

Abdeslam Serroukh and Khudhayr A. Rashedi

The aim of this paper is to address the problem of variance break detection in time series in the wavelet domain. The maximal overlap discrete wavelet transform (MODWT) decomposes the series variance across scales into components known as the wavelet variances. We introduce a test statistic based on all-scale wavelet coefficients that allows detecting a break in the homogeneity of the variance of a series through changes in the mean of the wavelet variances. The statistic makes use of the traditional CUSUM (cumulative sum) test designed to detect a break in the mean, constructed here using cumulative sums of the squares of the wavelet coefficients. Under moment and mixing conditions, the test statistic satisfies the functional central limit theorem (FCLT) for a broad class of time series models. The overall performance of our test statistic is compared to the traditional Inclan [8] test statistic. The effectiveness of our statistic is supported by good performance in simulations, where it is as reliable as the traditional statistic. Our method provides a nonparametric test procedure that can be applied to a large class of linear and non-linear models. We illustrate the practical use of our test procedure with the quarterly percentage changes in the American personal savings data set over the period 1970-2016. Both statistics detect a break in the variance in the second quarter of 2001.
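The centered CUSUM of squares underlying both statistics can be sketched as follows; this is the generic Inclan-Tiao-style statistic applied to a coefficient sequence, not the authors' exact multi-scale construction.

```python
def cusum_break_stat(w):
    # Centered CUSUM of squared coefficients: D_k = S_k/S_n - k/n,
    # where S_k is the cumulative sum of squares. A large max |D_k|
    # signals a break in the variance of the sequence.
    sq = [x * x for x in w]
    total = sum(sq)
    stat, running = 0.0, 0.0
    for k, v in enumerate(sq, 1):
        running += v
        stat = max(stat, abs(running / total - k / len(sq)))
    return stat
```

A homogeneous-variance sequence keeps S_k/S_n close to k/n, so the statistic stays near zero; a variance break bends the cumulative path away from the diagonal.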

Iryna Halushchak Zoriana Novosad Yurii Tsizhma and Andriy Zagorodnyuk

In this paper, we extend complex polynomial dynamics to a set of multisets endowed with some ring operations (the metric ring of multisets associated with supersymmetric polynomials of infinitely many variables). Some new properties of the ring of multisets are established and a homomorphism to a function ring is constructed. Using complex homomorphisms on the ring of multisets, we propose a method for investigating polynomial dynamics over this ring by reducing them to a finite number of scalar-valued polynomial dynamics. An estimate of the number of such scalar-valued polynomial dynamics is established. As an important example, we consider an analogue of the logistic map, defined on a subring of multisets consisting of positive numbers in the interval [0, 1]. A possible application to the study of natural market development in a competitive environment is proposed. In particular, it is shown that using the multiset approach, we can build a model that takes into account credit debt and reinvestment. Some numerical examples of logistic maps for different growth-rate multisets [r] are considered. Note that the growth rate [r] may contain both "positive" and "negative" components, and the examples demonstrate the influence of these components on the dynamics.
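Through complex homomorphisms, the multiset dynamics reduce to finitely many scalar logistic maps; a minimal sketch of that reduced picture, representing the multisets [x] and [r] simply as lists of components (an assumption made here for illustration), is:

```python
def logistic_step(x, r):
    # One step of the logistic map applied componentwise: each component
    # of the state multiset [x] evolves with its matching growth rate in [r].
    return [ri * xi * (1.0 - xi) for xi, ri in zip(x, r)]

def iterate(x, r, n):
    for _ in range(n):
        x = logistic_step(x, r)
    return x
```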

Girija K. P. Devadas Nayak C Sabitha D’Souza and Pradeep G. Bhat

Graph labeling is an assignment of integers to the vertices or the edges, or both, subject to certain conditions. In the literature we find several labelings, such as graceful, harmonious, binary, friendly, cordial, ternary and many more. A friendly labeling is a binary mapping under which the numbers of vertices labeled 1 and 0 differ by at most one. To each edge an induced label is assigned; the function f is a cordial labeling of G if, in addition, the numbers of edges labeled 1 and 0 differ by at most one. The friendly index set of a graph, denoted FI(G), collects the edge-label differences as f runs over all friendly labelings of G. A ternary vertex labeling assigns one of three labels to each vertex. In this article, we extend the concept of ternary vertex labeling to 3-vertex friendly labeling and define the 3-vertex friendly index set of graphs, obtained as f runs over all 3-vertex friendly labelings. To achieve this, the vertices are partitioned into three classes whose sizes differ pairwise by at most one, and each edge is labeled accordingly. In this paper, we study the 3-vertex friendly index sets of some standard graphs such as the complete graph K_{n}, the path P_{n}, the wheel graph W_{n}, the complete bipartite graph K_{m,n} and the cycle with parallel chords PC_{n}.
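For small graphs, the (binary) friendly index set can be found by brute force; this sketch assumes the common induced edge label f(u) + f(v) mod 2 (equivalently |f(u) - f(v)|), since the abstract's own edge-label formula was lost in extraction.

```python
from itertools import product

def friendly_index_set(n, edges):
    # Brute-force FI(G): over all friendly 0/1 labelings f
    # (|v_f(1) - v_f(0)| <= 1), collect |e_f(1) - e_f(0)| using
    # the induced edge label f(u) + f(v) (mod 2).
    fi = set()
    for f in product((0, 1), repeat=n):
        ones = sum(f)
        if abs(ones - (n - ones)) > 1:
            continue  # not friendly
        e1 = sum((f[u] + f[v]) % 2 for u, v in edges)
        fi.add(abs(e1 - (len(edges) - e1)))
    return fi

# the cycle C4
fi_c4 = friendly_index_set(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```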

Mahmoud M. El-Borai and Khairia El-Said El-Nadi

Some singular integral evolution equations with a wide class of closed operators are studied in Banach spaces. The considered integral equations are investigated without assuming the existence of the resolvent of the closed operators. Some non-linear singular evolution equations are also studied. An abstract parabolic transform is constructed to study the solutions of the considered ill-posed problems. Applications to fractional evolution equations and Hilfer fractional evolution equations are given. All the results can be applied to general singular integro-differential equations. The Fourier transform plays an important role in constructing solutions of the Cauchy problems for parabolic and hyperbolic partial differential equations; this means that the Fourier transform is suitable, but only under conditions on the characteristic forms of the partial differential operators. The Laplace transform likewise plays an important role in studying the Cauchy problem for abstract differential equations in Banach spaces, but in this case we need the existence of the resolvent of the considered abstract operators. This note is devoted to exploring the Cauchy problem for general singular integro-partial differential equations without conditions on the characteristic forms, and also to studying general singular integral evolution equations. Our approach is based on applying the new parabolic transform, which generalizes the methods developed within the regularization theory of ill-posed problems.

Z. R. Rakhmonov A. Khaydarov and J. E. Urunbaev

Mathematical models of nonlinear cross-diffusion are described by a system of nonlinear parabolic partial differential equations coupled with nonlinear boundary conditions. Explicit analytical solutions of such nonlinearly coupled systems of partial differential equations rarely exist, and thus several numerical methods have been applied to obtain approximate solutions. In this paper, based on a self-similar analysis and the method of standard equations, the qualitative properties of a nonlinear cross-diffusion system with nonlocal boundary conditions are studied. We construct various self-similar solutions of the cross-diffusion problem for the case of slow diffusion. It is proved that, for certain values of the numerical parameters of the nonlinear cross-diffusion system of parabolic equations coupled via nonlinear boundary conditions, global solutions in time may fail to exist. Based on a self-similar analysis and the comparison principle, the critical exponent of Fujita type and the critical exponent of global solvability are established. Using the comparison theorem, upper bounds for global solutions and lower bounds for blow-up solutions are obtained.

Viliam Ďuriš and Timotej Šumný

The accuracy of geometric construction is one of the important characteristics of mathematics and mathematical skill. However, in geometric constructions there is often a problem of accuracy. On the other hand, so-called 'optical accuracy' appears, which means that the construction is accurate with respect to the drawing pad used. Such 'optically accurate' constructions are called approximate constructions because they do not achieve exact accuracy, but the best possible approximation occurs. Geometric problems correspond to algebraic equations in two ways. The first method is based on the construction of algebraic expressions, which are transformed into an equation. The second method is based on the methods of analytic geometry, where geometric objects and points are expressed directly by equations that describe their properties in a coordinate system. In either case, we obtain an equation whose solution in the algebraic sense corresponds to the geometric solution. The paper provides a methodology for solving some specific problems in geometry by means of algebraic geometry, related to cubic and biquadratic equations. It thus focuses on approximate geometric constructions, which have had a significant historical impact on the development of mathematics precisely because these problems are not solvable with compass and ruler. This type of geometric problem has a strong position and practical justification in the area of technology. The contribution of our work lies in approaching solutions of geometric problems leading to higher-degree algebraic equations, whose importance for the development of mathematics is undeniable. Since approximate constructions, and methods of solution resulting from them, are not common, the content of the paper is significant.
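A classic instance of the situation described above is doubling the cube, which leads to the cubic x^3 = 2, unsolvable by compass and ruler; an approximate construction corresponds to a numerical root-finding step such as Newton's method, sketched here as a generic illustration rather than a method from the paper.

```python
def newton_cbrt2(x0=1.0, tol=1e-12, max_iter=50):
    # Newton's method for f(x) = x^3 - 2: x <- x - f(x)/f'(x).
    # The limit is the cube root of 2, the edge of the doubled cube.
    x = x0
    for _ in range(max_iter):
        step = (x**3 - 2.0) / (3.0 * x**2)
        x -= step
        if abs(step) < tol:
            break
    return x
```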

R. Sivaraman

A huge amount of literature has been written and published about the Golden Ratio, but not many have heard about its generalized version, the Metallic Ratios, which are introduced in this paper; the methods of deriving them are discussed in detail. This will help further exploration of the universe of real numbers. In mathematics, sequences play a vital role in understanding the complexities of any given problem which exhibits a pattern. For example, population growth, the radioactive decay of a substance and the lifetime of an object all follow a sequence called a geometric progression; in fact, the spread of the recent novel coronavirus (COVID-19) is said to follow a geometric progression with common ratio approximately between 2 and 3. Almost all branches of science use sequences: for instance, genetic engineers use DNA sequences, electrical engineers use the Morse-Thue sequence, and the list goes on. Among the vast number of sequences used for scientific investigation, one of the most famous and familiar is the Fibonacci sequence, named after the Italian mathematician Leonardo Fibonacci through his book "Liber Abaci", published in 1202. In this paper, I introduce sequences resembling the Fibonacci sequence and generalize them to identify a general class of numbers called the Metallic Ratios.
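The standard route to the metallic ratios, which the sketch below illustrates, is the positive root of x^2 = kx + 1: k = 1 gives the golden ratio, k = 2 the silver ratio, k = 3 the bronze ratio, and each also arises as the limit of consecutive-term ratios of the generalized Fibonacci recurrence F(m) = k*F(m-1) + F(m-2).

```python
def metallic_ratio(k):
    # Positive root of x^2 = k*x + 1.
    return (k + (k * k + 4) ** 0.5) / 2

def generalized_fibonacci_ratio(k, n=40):
    # Ratio of consecutive terms of F(m) = k*F(m-1) + F(m-2),
    # starting from 0, 1; converges to the k-th metallic ratio.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, k * b + a
    return b / a
```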

Savita Rathee and Priyanka Gupta

In the late sixties, Furi and Vignoli proved fixed point results for α-condensing mappings on bounded complete metric spaces. Bugajewski generalized the results to "weakly F-contractive mappings" on topological spaces (TS). Bugajewski and Kasprzak proved several fixed point results for "weakly F-contractive mappings" using the approach of lower (upper) semi-continuous functions. After that, by modifying the concept of "weakly F-contractive mappings", coupled fixed point results were proved by Cho, Shah and Hussain on topological spaces. On different spaces, common coupled fixed point results were discussed by Liu, Zhou and Damjanovic, by Nashine and Shatanawi, and by many other authors. In this work, we prove common coupled fixed point theorems by adopting the modified definition of a weakly F-contractive mapping r : T→T, where T is a topological space. After that, we extend the result of Cho, Shah and Hussain for Banach spaces to common coupled quasi-solutions enriched with a relevant transitive binary relation. We also give an example in support of the proved result. Our results extend and generalize several existing results in the literature.

Waego Hadi Nugroho, Ni Wayan Surya Wardhani, Adji Achmad Rinaldo Fernandes and Solimun

Robust regression analysis is used when there are outliers in a regression model. Outliers cause the data to deviate from normality. The most commonly used parameter estimation method is Ordinary Least Squares (OLS); however, outliers bias the least-squares estimator, so handling of outliers is required. One regression method suited to outliers is robust regression, and a robust method that can be used is M-estimation. Using Tukey's bisquare weight function, the robust M-estimation method can estimate the parameters of a model, for example for malnutrition data in East Java Province from 2012 to 2017. This study aims to compare the robust M-estimation method and the OLS method on data at several different significance levels, namely 1%, 5%, and 10%. The predictor variables used in this study were the percentage of poor society, population density, and the number of health facilities. R^{2} is used to compare the OLS method and the robust M-estimation method. The results show that robust regression is the best method for handling a model when there are outliers in the data. This is supported by almost all of the R^2 values on each data set, for which M-estimation has a higher value than the OLS method.
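
As a sketch of the technique named in the abstract (the data here are synthetic, not the East Java malnutrition data), robust M-estimation with Tukey's bisquare weight function is typically computed by iteratively reweighted least squares (IRLS):

```python
import numpy as np

def tukey_bisquare_weights(r, c=4.685):
    """Tukey's bisquare weight function on standardized residuals r."""
    w = np.zeros_like(r)
    inside = np.abs(r) <= c
    w[inside] = (1 - (r[inside] / c) ** 2) ** 2
    return w

def m_estimate(X, y, n_iter=50):
    """Robust M-estimation by IRLS. X includes an intercept column;
    the residual scale is re-estimated each pass via the MAD."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS start
    for _ in range(n_iter):
        resid = y - X @ beta
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        if scale == 0:
            break
        w = tukey_bisquare_weights(resid / scale)
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

# toy data: y = 1 + 2x with a few gross outliers injected
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1 + 2 * x + rng.normal(0, 0.3, 50)
y[::10] += 25
X = np.column_stack([np.ones_like(x), x])
print(m_estimate(X, y))          # close to [1, 2] despite the outliers
```

The bisquare weights drive the influence of gross outliers to zero, which is why the fit recovers the clean-data coefficients where OLS would not.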

Gwang Hui Kim

The present work continues the study of the superstability and solution of the Pexider type functional equation, which is the mixed functional equation represented by a sum of the sine, cosine, tangent, hyperbolic trigonometric, and exponential functions. The stability of the cosine (d'Alembert) functional equation and the Wilson equation was researched by many authors: Baker [7], Badora [5], Kannappan [14], Kim ([16, 19]), and Fassi et al. [11]. The stability of sine type equations was researched by Cholewa [10] and Kim ([18], [20]). The stability of the difference type equation for the above equation was studied by Kim ([21], [22]). In this paper, we investigate the superstability of the sine functional equation and the Wilson equation from the Pexider type difference functional equation, which is the mixed equation represented by the sine, cosine, tangent, hyperbolic trigonometric, and exponential functions. We additionally obtain that the Wilson equation and the cosine functional equation in the obtained results can be represented by the composition of a homomorphism. Here, the domain (G, +) of the functions is a noncommutative semigroup (or 2-divisible Abelian group), and A is a unital commutative normed algebra with unit 1_A. The obtained results can be applied and extended to the stability of difference type functional equations consisting of the (hyperbolic) secant, cosecant, and logarithmic functions.

Yousef Al-Qudah, Faisal Yousafzai, Mohammed M. Khalaf and Mohammad Almousa

The main motivation behind this paper is to study some structural properties of a non-associative structure, as such structures have not attracted much attention compared to associative ones. In this paper, we introduce the concept of an ordered A^{*}G^{**}-groupoid and show that this class is more general than that of an ordered AG-groupoid with left identity. We also define the generated left (right) ideals in an ordered A^{*}G^{**}-groupoid and characterize a (2; 2)-regular ordered A^{*}G^{**}-groupoid in terms of these ideals. We then study the structural properties of an ordered A^{*}G^{**}-groupoid in terms of its semilattices, its (2; 2)-regular class, and generated commutative monoids. Subsequently, we compare -fuzzy left/right ideals of an ordered AG-groupoid, and respective examples are provided. Relations between the -fuzzy idempotent subsets of an ordered A^{*}G^{**}-groupoid and its -fuzzy bi-ideals are discussed. As an application of our results, we obtain characterizations of a (2; 2)-regular ordered A^{*}G^{**}-groupoid in terms of semilattices and -fuzzy left (right) ideals. These concepts will help in verifying the existing characterizations and in achieving new and more general results in future work.

Abdishukurova Guzal, Narmanov Abdigappar and Sharipov Xurshid

The concepts of differential invariant and invariant differentiation are key in modern geometry [1]-[10]. In the Erlangen program [3], Felix Klein proposed a unified approach to the description of various geometries. According to this program, one of the main problems of geometry is to construct invariants of geometric objects with respect to the action of the group defining the geometry. This approach is largely based on the ideas of Sophus Lie, who introduced continuous groups of transformations, now known as Lie groups, into geometry. In particular, when considering classification and equivalence problems in differential geometry, differential invariants with respect to the action of Lie groups should be considered. In this case, the equivalence problem for geometric objects reduces to finding a complete system of scalar differential invariants. The interpretation of a k-th order differential invariant as a function on the space of k-jets of sections of the corresponding bundle makes it possible to operate with such invariants efficiently, and by invariant differentiation new differential invariants can be obtained. Differential invariants with respect to a certain Lie group generate differential equations for which this group is a symmetry group. This allows one to apply the well-known integration methods to such equations, in particular the Lie-Bianchi theorem [4]. Depending on the type of geometry, the orders of the first nontrivial differential invariants can differ. For example, in the space R^{3} equipped with the Euclidean metric, the complete system of differential invariants of a curve consists of its curvature and torsion, which are second and third order invariants, respectively. Note that scalar differential invariants are the only type of invariants whose components do not change under a change of coordinates. For this reason, scalar differential invariants are effectively used in solving equivalence problems.
In this paper, differential invariants of a Lie group of one-parameter transformations of the space of two independent and three dependent variables are studied. A method for constructing an invariant differential operator is shown. The obtained results are applied to finding differential invariants of surfaces.

V. I. Struchenkov and D. A. Karpov

The article discusses the solution of applied problems for which the dynamic programming method, developed by R. Bellman in the middle of the last century, was previously proposed. Currently, dynamic programming algorithms are successfully used to solve applied problems, but as the dimension of the task increases, reducing the computation time remains relevant. This is especially important when designing systems in which dynamic programming is embedded in a computational cycle that is repeated many times. Therefore, the article analyzes various possibilities for increasing the speed of the dynamic programming algorithm. For some problems, using the Bellman optimality principle, recurrence formulas were obtained for calculating the optimal trajectory without any step-by-step analysis of the set of options for constructing it. It is shown that many applied problems, when using dynamic programming, in addition to allowing the rejection of unpromising paths leading to a given state, also allow the rejection of hopeless states themselves. The article proposes a new algorithm for implementing R. Bellman's principle for solving such problems and establishes the conditions for its applicability. The results of solving two-parameter problems of various dimensions presented in the article showed that the exclusion of hopeless states can reduce the computation time by a factor of 10 or more.
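
A minimal sketch of the idea on a generic layered shortest-path model (not the authors' two-parameter algorithm): besides the Bellman rejection of non-optimal paths into a state, any state whose accumulated cost already exceeds a cheap feasible upper bound is rejected as hopeless. This pruning is safe when all transition costs are nonnegative, since then every prefix of the optimal path costs no more than the bound:

```python
def greedy_upper_bound(costs):
    """Cheap feasible path (always take the locally cheapest transition);
    its total cost is a valid incumbent bound for nonnegative costs."""
    state, total = 0, 0.0
    for stage in costs:
        nxt = min(range(len(stage[state])), key=lambda j: stage[state][j])
        total += stage[state][nxt]
        state = nxt
    return total

def dp_with_pruning(costs, bound):
    """Forward DP over a layered state graph. Besides the classic Bellman
    rejection of non-optimal paths into a state, states whose accumulated
    cost already exceeds the incumbent bound are dropped as hopeless."""
    best = {0: 0.0}                              # start in state 0
    for stage in costs:
        nxt = {}
        for i, acc in best.items():
            for j, c in enumerate(stage[i]):
                cand = acc + c
                if cand > bound:                 # hopeless state: drop it
                    continue
                if cand < nxt.get(j, float("inf")):
                    nxt[j] = cand                # Bellman principle
        best = nxt
    return min(best.values())

# costs[t][i][j]: cost of moving from state i at stage t to state j
costs = [[[4, 1, 9], [2, 7, 3], [5, 5, 5]],
         [[2, 8, 6], [3, 1, 4], [7, 2, 9]],
         [[5, 2, 7], [6, 4, 1], [3, 8, 2]]]
print(dp_with_pruning(costs, greedy_upper_bound(costs)))   # 3.0
```

Hopeless states are never expanded, so the per-stage state dictionaries stay small; that is the source of the speed-up the article quantifies.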

Nik Muhammad Farhan Hakim Nik Badrul Alam, Ajab Bai Akbarally and Silvestru Sever Dragomir

Hermite-Hadamard type inequalities related to convex functions are widely studied in functional analysis. Researchers have refined convex functions as quasi-convex, h-convex, log-convex, m-convex, (a,m)-convex and many more, and Hermite-Hadamard type inequalities have subsequently been obtained for these refined convex functions. In this paper, we first review the Hermite-Hadamard type inequality for both convex functions and log-convex functions. Then, the definition of composite convex function and the Hermite-Hadamard type inequalities for composite convex functions are reviewed. Motivated by these works, we make a refinement to obtain the definition of composite log-convex functions, namely the composite-^{-1} log-convex function. Some examples related to this definition, such as GG-convexity and HG-convexity, are given. We also define k-composite log-convexity and k-composite-^{-1} log-convexity. We then prove a lemma and obtain some Hermite-Hadamard type inequalities for composite log-convex functions. Two corollaries are also proved using the theorem obtained: the first by applying the exponential function and the second by applying the properties of k-composite log-convexity. An application for GG-convex functions is also given, in which we compare the inequalities obtained in this paper with those obtained in previous studies. The inequalities can be applied in calculating geometric means in statistics and other fields.
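
For reference, the classical Hermite-Hadamard inequality that these refinements build on states that for a convex function f on [a, b],

```latex
f\!\left(\frac{a+b}{2}\right)\;\le\;\frac{1}{b-a}\int_a^b f(x)\,dx\;\le\;\frac{f(a)+f(b)}{2}.
```

Each refined notion of convexity reviewed in the abstract (log-convexity, composite convexity, and so on) yields a corresponding refinement of these two bounds.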

Leonid N. Yasnitsky and Sergey L. Gladkiy

One of the main problems in modern mathematical modeling is obtaining high-precision solutions of boundary value problems. This study proposes a new approach that combines methods of artificial intelligence with a classical analytical method. The analytical method of fictitious canonic regions is proposed as the basis for obtaining reliable solutions of boundary value problems. The novelty of the approach is the application of artificial intelligence methods, namely genetic algorithms, to select the optimal location of the fictitious canonic regions, ensuring maximum accuracy. A general genetic algorithm has been developed to solve the problem of determining the global minimum for the choice and location of fictitious canonic regions. For this genetic algorithm, several variants of the crossover and mutation functions are proposed. The approach is applied to solve two test boundary value problems: a stationary heat conduction problem and an elasticity theory problem. The results showed the effectiveness of the proposed approach: the genetic algorithm required no more than a hundred generations to achieve high-precision solutions. Moreover, the error in solving the stationary heat conduction problem was so insignificant that this solution can be considered exact. Thus, the study showed that the proposed approach, combining the analytical method of fictitious canonic regions with genetic optimization algorithms, allows complex boundary value problems to be solved with high accuracy. This approach can be used in the mathematical modeling of safety-critical structures, where the accuracy and reliability of the results is the main criterion for evaluating the solution.
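
A minimal real-coded genetic algorithm of the kind described, shown on generic function minimization rather than the authors' fictitious-region placement problem (population size, mutation rate, and the test function are illustrative assumptions):

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=100,
                     mutation_rate=0.2, seed=1):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, and elitism (the two best survive unchanged)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=f)
        nxt = scored[:2]                             # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(scored, 3), key=f)   # tournament selection
            p2 = min(rng.sample(scored, 3), key=f)
            t = rng.random()
            child = [t * a + (1 - t) * b for a, b in zip(p1, p2)]  # crossover
            if rng.random() < mutation_rate:         # Gaussian mutation
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# demo: minimize a shifted sphere function with optimum at (1, -2)
best = genetic_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                        [(-5, 5), (-5, 5)])
print(best)
```

In the paper's setting, an individual would encode the choice and placement of fictitious canonic regions and the fitness would be the boundary-residual error of the analytical solution.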
Further development of this approach will make it possible to solve more complicated 3D problems with high accuracy, as well as problems of other types, for example thermal elasticity, which are of great importance in the design of engineering structures.

Ni Wayan Surya Wardhani, Waego Hadi Nugroho, Adji Achmad Rinaldo Fernandes and Solimun

WANT-E is a tool created to purify methane gas from organic waste, intended as a substitute renewable gas fuel. Because the WANT-E product is new, research on the public interest in WANT-E products is necessary. This study uses primary data obtained from questionnaires, with variables based on the Theory of Planned Behavior (TPB), namely behavioral attitudes, subjective norms, perceived behavioral control, and behavioral interest, distributed to the community of Cibeber Village, Cikalong Subdistrict, Tasikmalaya Regency, who use LPG gas cylinders or stoves; the sampling technique was the judgment sampling method. The analysis used is SEM with the WarpPLS approach, to determine the relationships between variables. The analysis found positive relationships between behavioral attitudes and subjective norms, behavioral attitudes and perceived behavioral control, subjective norms and behavioral interest, and perceived behavioral control and behavioral interest. An indirect effect was also obtained, with subjective norms and perceived behavioral control acting as mediators between behavioral attitudes and behavioral interest.

Artykbaev Abdullaaziz and Nurbayev Abdurashid Ravshanovich

This article discusses geometric quantities associated with the concept of a surface and the indicatrix of a surface in four-dimensional Galileo space. Here, a second-order curve in the plane serves as the surface indicatrix. It is shown that, with the help of motions of Galileo space, the second-order curve can be brought to canonical form. Motion in Galileo space differs radically from motion in Euclidean space: Galilean motions include parallel translation, rotation about an axis, and sliding, and sliding produces a deformation in the Euclidean sense. The surface indicatrix is deformed by a Galilean motion, and when the indicatrix is deformed, the surface is deformed. In classifying the points of a three-dimensional surface in four-dimensional Galileo space, the classification of the indicatrix of the surface at the point was used. This reveals the cyclic behavior of surface points, unlike in Euclidean geometry. The geometric characteristics of surface curves were determined using the indicatrix, and it is determined what geometrical meaning the identified properties have in the Euclidean sense. It is shown that a Galilean motion produces a surface deformation in the Euclidean sense; that this is a deformation of the surface is indicated by the fact that the Gaussian curvature remains unchanged.

M. Khalifa Saad, R. A. Abdel-Baky, F. Alharbi and A. Aloufi

In the theory of space curves, the helix is the most elementary and interesting topic. A helix, moreover, attracts the attention of natural scientists as well as mathematicians because of its various applications, for example DNA, carbon nanotubes, screws, and springs. There are also many applications of helix curves or helical structures in science, such as fractal geometry, and in the fields of computer-aided design and computer graphics. Helices can be used for tool path description, the simulation of kinematic motion, or the design of highways. The problem of determining a parametric representation of the position vector of an arbitrary space curve from its intrinsic equations is still open in the Euclidean space E^{3} and in Minkowski space. In this paper, we introduce some characterizations of a non-null slant helix which has a spacelike or timelike axis in Minkowski space. We use vector differential equations established by means of the Frenet equations in Minkowski space, and we investigate some differential geometric properties of these curves according to these vector differential equations. Besides, we illustrate some examples to confirm our findings.

Narmanov Abdigappar and Parmonov Hamid

The problem of integrating the equations of mechanics is a most important task of mathematics and mechanics. Before Poincare's book "Curves Defined by Differential Equations", integration tasks were considered analytical problems of finding formulas for solutions of the equations of motion. After the appearance of this book, it became clear that integration problems are related to the behavior of the trajectories as a whole. This, of course, stimulated methods of the qualitative theory of differential equations. At present, the main method for this problem has become the symmetry method. Newton used the ideas of symmetry for the problem of central motion. Further, Lagrange revealed that the classical integrals of the problem of gravitating bodies are associated with the invariance of the equations of motion with respect to the Galileo group. Emmy Noether showed that each integral of the equations of motion corresponds to a group of transformations preserving the action. The phase flow of the Hamiltonian system in which a first integral serves as the Hamiltonian translates solutions of the original equations into solutions. The Liouville theorem on the integrability of Hamilton's equations was built on this idea: it states that the phase flows of involutive integrals generate an Abelian group of symmetries. Hamiltonian methods have become increasingly important in the study of the equations of continuum mechanics, including fluids, plasmas and elastic media. In this paper we consider the Hamiltonian system which describes the motion of a particle attracted to a fixed point with a force varying as the inverse cube of the distance from the point. We are concerned with just one aspect of this problem, namely questions on the symmetry groups and Hamiltonian symmetries.
Hamiltonian symmetries of this system are found, and it is proven that the Hamiltonian symmetry group of the considered problem contains a two-dimensional Abelian Lie group. A singular foliation generated by infinitesimal symmetries, invariant under the phase flow of the system, is also constructed. In the present paper, smoothness is understood as smoothness of class C^{∞}.

Jae Won Lee, Dae Ho Jin and Chul Woo Lee

Jin [1] defined an ()-type connection on semi-Riemannian manifolds. The semi-symmetric non-metric connection and the non-metric ∅-symmetric connection are two important examples of this connection, with () = (1; 0) and () = (0; 1), respectively. In semi-Riemannian geometry there is little literature on lightlike geometry, so we develop new theory beyond the non-degenerate submanifolds of semi-Riemannian geometry. The goal of this paper is to study a characterization of a (Lie) recurrent lightlike hypersurface M of an indefinite Kaehler manifold with an ()-type connection when the characteristic vector field is tangent to M. In the special case that an indefinite Kaehler manifold of constant holomorphic sectional curvature is an indefinite complex space form, we investigate a lightlike hypersurface of an indefinite complex space form with an ()-type connection when the characteristic vector field is tangent to M. Moreover, we show that the total space, the complex space form, is characterized by the screen conformal lightlike hypersurface with an ()-type connection. With a semi-symmetric non-metric connection, we show that an indefinite complex space form is flat.

Mohammad Almousa

Many different problems in mathematics, physics, and engineering can be expressed in the form of integral equations. Among these are diffraction problems, population growth, heat transfer, particle transport problems, electrical engineering, elasticity, control, elastic waves, diffusion problems, quantum mechanics, heat radiation, electrostatics, and contact problems. Therefore, the solutions obtained by mathematical methods play an important role in these fields. The two most basic types of integral equations are the Fredholm (FIEs) and Volterra (VIEs) equations. In many instances, ordinary and partial differential equations can be converted into Fredholm and Volterra integral equations, which can be solved more effectively. We aim through this research to present an improved Adomian decomposition method based on modified Bernstein polynomials (ADM-MBP) to solve nonlinear integral equations of the second kind. We introduce an efficient method, constructed on modified Bernstein polynomials, and the formulation is developed to solve nonlinear Fredholm and Volterra integral equations of the second kind. The method is tested on some examples of nonlinear integral equations; Maple software was used to obtain the solutions of these examples. The results demonstrate the reliability of the proposed method. Generally, the proposed method is very convenient for finding the solutions of Fredholm and Volterra integral equations of the second kind.
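
As a sketch of the plain Adomian decomposition that the paper's ADM-MBP improves upon (this is the textbook scheme, not the modified-Bernstein variant), consider the nonlinear Volterra equation u(x) = x + ∫₀ˣ u(t)² dt, whose exact solution is tan x. The Adomian polynomials A_n are generated by the standard parameter trick, here with SymPy:

```python
import sympy as sp

x, t, lam = sp.symbols("x t lam")

def adomian_volterra(f, nonlinearity, n_terms=4):
    """Adomian decomposition for u(x) = f(x) + Int_0^x N(u(t)) dt.
    The Adomian polynomials are A_n = (1/n!) d^n/dlam^n N(sum lam^k u_k)
    evaluated at lam = 0; then u_{n+1}(x) = Int_0^x A_n(t) dt."""
    u = [f]
    for n in range(n_terms - 1):
        expansion = sum(lam ** k * u[k] for k in range(len(u)))
        A_n = sp.diff(nonlinearity(expansion), lam, n) / sp.factorial(n)
        A_n = A_n.subs(lam, 0)
        u.append(sp.integrate(A_n.subs(x, t), (t, 0, x)))
    return u

# u(x) = x + Int_0^x u(t)^2 dt  has the exact solution u = tan(x)
terms = adomian_volterra(x, lambda u: u ** 2)
print(terms)          # [x, x**3/3, 2*x**5/15, 17*x**7/315]
```

The partial sum x + x³/3 + 2x⁵/15 + 17x⁷/315 reproduces the Maclaurin series of tan x, which is how the convergence of the decomposition can be checked on this example.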

Moustafa Omar Ahmed Abu-Shawiesh, Muhammad Riaz and Qurat-Ul-Ain Khaliq

In this study, a robust control chart based on the modified trimmed standard deviation (MTSD), namely MTSD-TCC, is proposed as an alternative to Tukey's control chart (TCC). The performance of the proposed chart and the competing Tukey's control chart (TCC) is measured using run length properties such as the average run length (ARL), the standard deviation of the run length (SDRL), and the median run length (MDRL). The study covers both normal and contaminated cases. We observed that the proposed robust control chart (MTSD-TCC) is quite efficient at detecting process shifts. It is also evident from the simulation results that the proposed MTSD-TCC chart offers superior detection ability at different trimming levels compared to Tukey's control chart (TCC) under contaminated process setups. As a result, the proposed robust control chart (MTSD-TCC) is recommended for process monitoring. A numerical application using real-life data is provided to illustrate the implementation of the proposed chart, and it supports the results of the simulation study to some extent.
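
For orientation, the classical Tukey's control chart (TCC) that the proposed MTSD-TCC competes with sets its limits from the phase-I quartiles; the data and the constant k = 1.5 below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tukey_control_limits(data, k=1.5):
    """Control limits of the classical Tukey control chart (TCC):
    LCL = Q1 - k*IQR,  UCL = Q3 + k*IQR."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# hypothetical phase-I (in-control) measurements
phase1 = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9])
lcl, ucl = tukey_control_limits(phase1)

# monitor new observations: points outside [LCL, UCL] signal a shift
out_of_control = [v for v in [10.0, 12.5, 9.9] if not lcl <= v <= ucl]
print(lcl, ucl, out_of_control)
```

Because the limits are built from quartiles rather than the mean and standard deviation, the chart is already somewhat outlier-resistant; the paper's MTSD variant replaces the spread estimate with a modified trimmed standard deviation to push that robustness further.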

Anuradha and Seema Mehra

In 2016, Muralisankar and Jeyabal introduced the concept of ε-compatible maps and studied the set of common fixed points. They generalized the Banach, Kannan, Reich, and Bianchini type contractions to obtain some common fixed point theorems for ε-compatible mappings that do not require suitable containment of the ranges of the given mappings, in the setting of metric spaces. Motivated by this new concept of mappings, we establish a new approach to some common fixed point theorems via ε-compatible maps in the context of a complete partial metric space endowed with a directed graph G=(V,E). Building on the remarkable work of Jachymski in 2008, we extend the results obtained by Muralisankar and Jeyabal in 2016. In 2008, Jachymski obtained some important fixed point results introduced by Ran and Reurings (2004) using the language of graph theory instead of partial order and gave an interesting approach in this direction; since then, his work has been considered a reference in this domain. Sometimes there are mappings which do not satisfy the contractive condition on the whole set M (say) but can be made contractive on some subset of M, and this can be done by including a graph, as shown in our Example 2.6, which is provided to substantiate the validity of our results.

Zahari Md Rodzi and Abd Ghafur Ahmad

In this paper, by combining hesitant fuzzy soft sets (HFSSs) with fuzzy parameterization, we introduce a new hybrid model, fuzzy parameterized hesitant fuzzy soft sets (FPHFSSs). The benefit of this theory is that the degree of importance of the parameters is provided to HFSSs directly by the decision makers. In addition, all the information is represented in a single set during the decision-making process. We then study its basic operations, such as AND, OR, complement, union, and intersection. Basic properties of FPHFSSs, such as associativity, distributivity, and De Morgan's laws, are proven. Next, in order to solve the multi-criteria decision-making (MCDM) problem, we present arithmetic-mean and geometric-mean scores incorporating the hesitant degree of FPHFSSs in TOPSIS. This algorithm avoids existing approaches that either add elements to the shorter hesitant fuzzy element to make it match the length of another, or duplicate its elements to obtain two sequences of the same length; such approaches break the original data structure and modify the data. Finally, to demonstrate the efficacy and viability of our process, we compare our algorithm with existing methods.
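
For contrast with the fuzzy variant developed in the paper, classical crisp TOPSIS ranks alternatives by their closeness to the ideal solution; the decision matrix and weights below are hypothetical:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical TOPSIS on a crisp decision matrix (rows = alternatives,
    columns = criteria). benefit[j] is True for benefit criteria."""
    M = matrix / np.linalg.norm(matrix, axis=0)       # vector normalization
    V = M * weights                                   # weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # closeness coefficient

scores = topsis(np.array([[7.0, 9.0, 9.0],
                          [8.0, 7.0, 8.0],
                          [9.0, 6.0, 8.0]]),
                np.array([0.5, 0.3, 0.2]),
                np.array([True, True, True]))
print(scores, scores.argmax())
```

The paper's method replaces the crisp entries with picture-style fuzzy information aggregated beforehand, and compares alternatives through the proposed possibility matrix rather than raw distances.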

Solimun and Adji Achmad Rinaldo Fernandes

The use of regression analysis has not been able to deal with the problems of complex relationships involving several response variables and the presence of intervening endogenous variables. An analysis that can handle these problems is path analysis. Path analysis rests on several assumptions, one of which is the assumption of residual normality. If the residual normality assumption is not met, parameter estimation can produce estimators that are biased, have large variance, and are not consistent. The problem of unmet residual normality can be overcome by using resampling. Therefore, in this study a simulation was conducted applying resampling with the blindfold method to conditions in which the normality assumption is not met, with various resampling sizes in the path analysis. Based on the simulation results, different levels of closeness are consistent at different resampling sizes: at a low level of closeness, results are consistent at a resampling size of 1000; at a moderate level, at a resampling size of 500; and at a high level of closeness, at a resampling size of 1400.

Yona Eka Pratiwi, Kusbudiono, Abduh Riski and Alfian Futuhul Hadi

Increasingly rapid industrial development has resulted in increasingly intense competition between industries. Companies are required to maximize performance in various fields, especially by meeting customer demand with the agreed timeliness. Scheduling is the allocation of resources over time to produce a collection of jobs. PT. Bella Agung Citra Mandiri is a manufacturing company engaged in making spring beds. The work stations in the company consist of five stages: spring assembly (ram per) with three machines, spring clamping with one machine, mattress firing with two machines, mattress sewing with three machines, and packing with one machine. The problem model solved in this study is Hybrid Flowshop Scheduling, and the optimization method used is the metaheuristic Migrating Birds Optimization. To avoid the problems faced by the company, scheduling is needed that minimizes the makespan while taking into account the number of parallel machines. The results of this study are schedules for 16 jobs and for 46 jobs. The decrease in makespan for 16 jobs saves 26 minutes 39 seconds, while for 46 jobs it saves 3 hours 31 minutes 39 seconds.
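
A sketch of how a makespan is evaluated in a hybrid flow shop with parallel machines per stage; this is a simple FIFO list-scheduling rule with hypothetical processing times, not the Migrating Birds Optimization itself, which searches over such evaluations:

```python
def hybrid_flowshop_makespan(jobs, machines_per_stage):
    """Evaluate the makespan of a hybrid flow shop: at every stage each job
    goes to the machine that frees up first, in order of job readiness.
    jobs[i][s] = processing time of job i at stage s."""
    free = [[0.0] * m for m in machines_per_stage]   # machine-ready times
    ready = [0.0] * len(jobs)                        # job-ready times
    for s in range(len(machines_per_stage)):
        for i in sorted(range(len(jobs)), key=lambda i: ready[i]):
            k = min(range(len(free[s])), key=lambda k: free[s][k])
            start = max(ready[i], free[s][k])
            free[s][k] = ready[i] = start + jobs[i][s]
    return max(ready)

# 3 jobs, 2 stages; stage 1 has two parallel machines, stage 2 has one
jobs = [(3, 2), (2, 4), (4, 1)]
print(hybrid_flowshop_makespan(jobs, [2, 1]))        # 9.0
```

A metaheuristic such as Migrating Birds Optimization would permute the job order (and machine assignments) and keep the permutation with the smallest makespan returned by an evaluator like this one.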

Muhammad Asim Khan and Norhashidah Hj. Mohd Ali

The fractional diffusion equation is an important mathematical model for describing anomalous diffusion phenomena in transport processes. A high-order compact iterative scheme is formulated for solving the two-dimensional time-fractional sub-diffusion equation. The spatial derivative is evaluated using the Crank-Nicolson scheme with a fourth-order compact approximation, and the Caputo derivative is used for the time-fractional derivative, to obtain a discrete implicit scheme. The order of convergence of the proposed method is established. Numerical examples are provided to verify the high-order accuracy of the solutions of the proposed scheme.

Razira Aniza Roslan, Chin Su Na and Darmesah Gabda

The standard maximum likelihood method performs poorly in GEV parameter estimation for small samples. This study aims to explore Generalized Extreme Value (GEV) parameter estimation using several methods, focusing on small sample sizes of an extreme event. We conducted a simulation study to illustrate the performance of different methods, namely maximum likelihood estimation (MLE), the probability weighted moment method (PWM), and the penalized maximum likelihood method (PMLE), in estimating the GEV parameters. Based on the simulation results, we then applied the superior method to modelling the annual maximum stream flow in Sabah. The simulation study shows that the PMLE gives better estimates than MLE and PWM, as it has smaller bias and root mean square error (RMSE). As an application, we then compute the estimated return level of river flow in Sabah.
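
A minimal example of the MLE step and the return-level computation using SciPy on synthetic annual maxima (the sample and its parameters are illustrative; note that SciPy's shape parameter c equals minus the usual GEV shape ξ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# synthetic annual-maximum sample; c = -0.1 in SciPy's convention
# corresponds to a Frechet-type tail with xi = +0.1
sample = stats.genextreme.rvs(-0.1, loc=30.0, scale=5.0, size=50,
                              random_state=rng)

# maximum likelihood fit of the three GEV parameters
c_hat, loc_hat, scale_hat = stats.genextreme.fit(sample)

# T-year return level = quantile at non-exceedance probability 1 - 1/T
T = 100
return_level = stats.genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat,
                                    scale=scale_hat)
print(c_hat, loc_hat, scale_hat, return_level)
```

The penalized variant studied in the paper adds a penalty on the shape parameter to the log-likelihood before maximization, which stabilizes exactly this fit when the sample is small.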

Khadizah Ghazali, Jumat Sulaiman, Yosza Dasril and Darmesah Gabda

In this paper, we propose an alternative way to find the Newton direction in solving large-scale unconstrained optimization problems in which the Hessian defining the Newton direction is an arrowhead matrix. The alternative approach is a two-point Explicit Group Gauss-Seidel (2EGGS) block iterative method. To check the validity of our proposed Newton direction, we combined the Newton method with 2EGGS iteration for solving unconstrained optimization problems and compared it with the Newton method with point Gauss-Seidel (GS) iteration and the Newton method with point Jacobi iteration. The numerical experiments are carried out using three different artificial test problems whose Hessians have the form of an arrowhead matrix. In conclusion, the numerical results showed that our proposed method is superior to the reference methods in terms of the number of inner iterations and the execution time.
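
For illustration, the Newton direction d solves the linear system H d = -g with the arrowhead Hessian H; below, a point Gauss-Seidel iteration (the simpler stand-in for the 2EGGS block scheme) is applied to a hypothetical diagonally dominant arrowhead system A x = b:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Point Gauss-Seidel iteration for A x = b; converges for
    diagonally dominant A such as the arrowhead example below."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]        # off-diagonal part of row i
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# arrowhead matrix: diagonal plus one dense last row and column
n = 6
A = np.diag(np.full(n, 4.0))
A[-1, :] = 0.5
A[:, -1] = 0.5
A[-1, -1] = 4.0
b = np.ones(n)
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))
```

The 2EGGS method of the paper updates unknowns two at a time as explicit groups, which reduces the number of inner sweeps relative to this point-wise update.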

Mohd Saifullah Rusiman, Siti Nasuha Md Nor, Suparman and Siti Noor Asyikin Mohd Razali

This paper focuses on the application of robust methods in a multiple linear regression (MLR) model of diabetes data. The objectives of this study are to identify the significant variables that affect diabetes using the MLR model with and without robust methods, and to measure the performance of the MLR model with and without robust methods. Robust methods are used in order to overcome the outlier problem in the data. Three robust methods are used in this study: the least quartile difference (LQD), median absolute deviation (MAD), and least trimmed squares (LTS) estimators. The results show that multiple linear regression with the LTS estimator is the best model, since it has the lowest mean square error (MSE) and mean absolute error (MAE). In conclusion, plasma glucose concentration in an oral glucose tolerance test is positively affected by body mass index, diastolic blood pressure, triceps skin fold thickness, diabetes pedigree function, age, and yes/no for diabetes according to WHO criteria, while it is negatively affected by the number of pregnancies. This finding can be used as a guideline for medical doctors in the early prevention of type 2 diabetes.
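
A sketch of the LTS estimator found best in the paper, using random elemental starts plus concentration steps in the spirit of FAST-LTS; the data are synthetic, and the trimming fraction and number of restarts are illustrative assumptions:

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=200, seed=0):
    """Least trimmed squares sketch: fit OLS on random p-point subsets,
    then apply concentration steps (refit on the h smallest squared
    residuals) and keep the fit with the best trimmed sum of squares."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = h or (n + p + 1) // 2
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        for _ in range(10):                       # concentration (C-) steps
            r2 = (y - X @ beta) ** 2
            keep = np.argsort(r2)[:h]
            beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    return best_beta

# toy data: y = 1 + 2x with 20% gross outliers
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
y = 1 + 2 * x + rng.normal(0, 0.2, 60)
y[:12] = 40.0
X = np.column_stack([np.ones_like(x), x])
print(lts_fit(X, y))                              # near [1, 2]
```

Because the objective sums only the h smallest squared residuals, up to n - h gross outliers have no influence on the final fit, which is the property the paper's MSE/MAE comparison rewards.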

]]>Nur Hanim Mohd Salleh Husna Hasan and Fariza Yunus

Studies of extreme temperature have been carried out around the world to raise awareness and give societies the opportunity to make the necessary arrangements. In the present paper, a first-order Markov chain model was applied to estimate the probability of extreme temperature based on the heat wave scales provided by the Malaysian Meteorological Department. Daily maximum temperature data over a 24-year period (1994-2017) for 17 meteorological stations in Malaysia were assigned to the four heat wave scales: monitoring, alert level, heat wave and emergency. The analysis indicated that most of the stations had three categories of heat wave scales. Only the Chuping station had four categories, while the Bayan Lepas, Kuala Terengganu, Kota Bharu and Kota Kinabalu stations had two. The limiting probabilities obtained at each station showed a similar trend, with the highest proportion of daily maximum temperatures occurring in the monitoring scale, followed by the alert level. This trend is apparent when the daily maximum temperature data reveal that Malaysia experiences two consecutive days of temperature below 35℃.
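
The limiting probabilities mentioned above are the stationary distribution of the fitted transition matrix. A minimal numpy sketch with a hypothetical three-state daily transition matrix (monitoring, alert level, heat wave); the matrix entries are illustrative, not the paper's estimates:

```python
import numpy as np

def limiting_probabilities(P, tol=1e-12, max_iter=100000):
    """Stationary (limiting) distribution of a regular Markov chain,
    found by iterating pi <- pi P from the uniform distribution."""
    P = np.asarray(P, dtype=float)
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.max(np.abs(nxt - pi)) < tol:
            break
        pi = nxt
    return pi

# hypothetical daily transitions between heat wave scales
P = [[0.90, 0.08, 0.02],
     [0.60, 0.30, 0.10],
     [0.50, 0.30, 0.20]]
pi = limiting_probabilities(P)
```

In practice each row of P is estimated from the observed day-to-day category counts at a station, and pi then gives the long-run proportion of days in each scale.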

]]>Puguh Wahyu Prasetyo Indah Emilia Wijayanti Halina France-Jackson and Joe Repka

In the development of the theory of radicals of rings, there are two kinds of radical constructions: the lower radical construction and the upper radical construction. In fact, the class π of all prime rings forms a special class, and the upper radical class determined by π is a radical class called the prime radical. An upper radical class generated by a special class of rings is called a special radical class. On the other hand, the class of all semiprime rings is a weakly special class of rings. Moreover, a special class of modules can be constructed from a given special class of rings. This motivates the question of how to construct a weakly special class of modules from a given weakly special class of rings. This research is qualitative: the results are derived from fundamental axioms and properties of radical classes of rings, especially special and weakly special radical classes. In this paper, we introduce the notion of a weakly special class of modules, a generalization of the notion of a special class of modules based on the definition of semiprime modules. Furthermore, some properties and examples of weakly special classes of modules are given. The main results of this work are the definition of a weakly special class of modules and its properties.

]]>Suparman

A piecewise constant model is often applied to model data in many fields, and various noise models can be incorporated into it. This paper proposes a piecewise constant model with gamma multiplicative noise and a method to estimate the parameters of the model. The estimation is done in a Bayesian framework. A prior distribution for the model parameters is chosen, and it is multiplied by the likelihood function of the data to build a posterior distribution for the parameters. Because the number of model segments is itself a parameter, the form of the posterior distribution is too complex for a Bayes estimator to be calculated easily. A reversible jump Markov chain Monte Carlo (MCMC) method is therefore used to find the Bayes estimator of the model parameters. The result of this paper is the development of the piecewise constant model and the method to estimate its parameters. An advantage of this method is that it estimates all the parameters of the piecewise constant model simultaneously.

]]>Che Haziqah Che Hussin Ahmad Izani Md Ismail Adem Kilicman and Amirah Azmi

This paper aims to propose and investigate the application of the Multistep Modified Reduced Differential Transform Method (MMRDTM) for solving the nonlinear Korteweg-de Vries (KdV) equation. The proposed technique has the advantage of producing an analytical approximation in a fast-converging sequence with a reduced number of calculated terms. The MMRDTM is a modification of the reduced differential transform method (RDTM) in which the nonlinear term is replaced by related Adomian polynomials, after which a multistep approach is adopted. Consequently, the obtained approximations not only involve a smaller number of calculated terms for the nonlinear KdV equation, but also converge rapidly over a broad time frame. We provide three examples to illustrate the advantages of the proposed method in obtaining approximate solutions of the KdV equation. To depict the solutions and show the validity and precision of the MMRDTM, graphical outputs are included.

]]>Bahtiar Jamili Zaini and Shamshuritawati Sharif

Bivariate data consist of 2 random variables obtained from the same population. The relationship between the 2 variables can be measured by a correlation coefficient. A correlation coefficient computed from sample data is used to measure the strength and direction of the linear relationship between 2 variables. However, classical correlation coefficients are inadequate in the presence of outliers. Therefore, this study focuses on the performance of different correlation coefficients under contaminated bivariate data in determining the strength of the relationship. We compared the performance of 5 types of correlation: the classical Pearson, Spearman and Kendall's tau correlations, and the robust median and median absolute deviation correlations. Results show that when there is no contamination in the data, all 5 correlation methods indicate a strong relationship between the 2 random variables. However, under data contamination, the median absolute deviation correlation still indicates a strong relationship compared to the other methods.
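
The contamination effect described above is easy to reproduce. A numpy sketch comparing Pearson correlation with one common MAD-based robust correlation construction (the paper's exact estimator may differ; the contamination scheme here is illustrative):

```python
import numpy as np

def mad(a):
    """Median absolute deviation (unscaled)."""
    return np.median(np.abs(a - np.median(a)))

def mad_correlation(x, y):
    """Robust correlation built from MADs of the robustly standardized
    sum and difference of x and y (one standard construction)."""
    u = (x - np.median(x)) / mad(x) + (y - np.median(y)) / mad(y)
    v = (x - np.median(x)) / mad(x) - (y - np.median(y)) / mad(y)
    return (mad(u) ** 2 - mad(v) ** 2) / (mad(u) ** 2 + mad(v) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = x + 0.1 * rng.normal(size=200)       # strong linear relationship
x_bad = x.copy()
x_bad[:10] += 50.0                        # contaminate 5% of the data

pearson_clean = np.corrcoef(x, y)[0, 1]
pearson_bad = np.corrcoef(x_bad, y)[0, 1]
robust_bad = mad_correlation(x_bad, y)
```

With 5% gross contamination the Pearson coefficient collapses while the MAD-based estimate stays close to the clean value, since medians and MADs are insensitive to a small fraction of outliers.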

]]>Gautam Choudhury Akhil Goswami Anjana Begum and Hemanta Kumar Sarmah

The single server queue with two types of heterogeneous services and generalized vacation for an unreliable server has been extended to include several types of generalizations, to which attention has been paid by several researchers. One of the most important results for such models is the “Stochastic Decomposition Result”, which allows the system behaviour to be analyzed by considering separately the distribution of the system (queue) size with no vacation and the additional system (queue) size due to vacation. Our intention is to develop a unified approach to establish the stochastic decomposition result for two types of general heterogeneous service queues with generalized vacations for an unreliable server with delayed repair, including several types of generalizations. Our results are based on the embedded Markov chain technique, a powerful and popular method widely used in applied probability, especially in queueing theory. The fundamental idea behind this method is to simplify the description of the state from a two dimensional to a one dimensional state space. Finally, the derived results are shown to include several generalizations of existing well-known results for vacation models, which may lead to remarkable simplification when solving similar types of complex models.

]]>Inessa I. Pavlyuk and Sergey V. Sudoplatov

Approximations of syntactic and semantic objects play an important role in various fields of mathematics. They make it possible to create theories and structures in one given class by means of others, usually simpler ones. For instance, in certain situations, infinite objects can be approximated by finite or strongly minimal ones. Thus, complicated objects can be assembled from simplified ones. Among these objects, Abelian groups, their first order theories, connections and dynamics are of interest. Theories of Abelian groups are characterized by Szmielew invariants, leading to the study and description of approximations in terms of these invariants. In the paper we apply a general approach for approximating theories to the class of theories of Abelian groups, which characterizes the approximability of a theory of Abelian groups by a given family of theories of Abelian groups in terms of Szmielew invariants and their limits. We describe some forms of approximations for theories of Abelian groups. In particular, approximations of theories of Abelian groups by theories of finite ones are characterized. In addition, we describe approximations by quasi-cyclic and torsion-free Abelian groups and their combinations with respect to given families of prime numbers. Approximations and closures of families of theories with respect to standard Abelian groups for various sets of prime numbers are also described.

]]>Supawan Yena and Nopparat Pochai

Nitrogen is emitted extensively by industrial companies, increasing nitrogen compounds such as ammonia, nitrate, and nitrite in soil and water as a result of nitrogen cycle reactions. Groundwater contamination with nitrates and nitrites impacts human health, and mathematical models can describe it. A hydraulic head model provides the hydraulic head of the groundwater. A groundwater velocity model provides the x- and y-direction velocity components. A groundwater contamination distribution model provides the nitrogen, nitrate and nitrite concentrations. Finite difference techniques are used to approximate the model solutions: an alternating direction explicit method for the hydraulic head model, a centered-space scheme for the groundwater velocity model, and a forward-time centered-space scheme for the contaminant transport models. We simulate different circumstances to explain the pollution of leachate water underground, paying attention to toxic nitrogen, ammonia, nitrate and nitrite blended in the water.
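
The forward-time centered-space (FTCS) scheme named above can be sketched in one dimension for the advection-diffusion transport of a contaminant; the grid spacing, time step, velocity and dispersion coefficient below are illustrative values chosen to satisfy the explicit stability bound, not the paper's parameters:

```python
import numpy as np

def ftcs_step(c, u, D, dt, dx):
    """One explicit forward-time centered-space step of the 1-D
    advection-diffusion equation c_t + u c_x = D c_xx.
    Boundary values are held fixed here for simplicity."""
    cn = c.copy()
    cn[1:-1] = (c[1:-1]
                - u * dt / (2 * dx) * (c[2:] - c[:-2])       # advection
                + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2]))  # diffusion
    return cn

# illustrative parameters; explicit stability needs D*dt/dx**2 <= 1/2
dx, dt, u, D = 0.1, 0.004, 0.5, 0.5
c = np.zeros(51)
c[25] = 1.0                     # initial contaminant pulse mid-domain
for _ in range(100):
    c = ftcs_step(c, u, D, dt, dx)
```

After 100 steps the pulse has both spread (diffusion) and drifted downstream (advection), which is the qualitative behaviour the contamination distribution models above are built to capture.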

]]>Mohammed M. B. Adam M. B. Zulkafli H. S. and Ali N.

This paper proposes three different statistics to represent the magnitude of the observations in each class when estimating statistical measures from a frequency table for continuous data. Existing frequency tables use the midpoint as the magnitude of the observations in each class, which results in an error called grouping error. The midpoint rests on the assumption that the observations in each class are uniformly distributed and concentrated around it, which is not always valid. In this research, frequency tables constructed using the three proposed statistics, the arithmetic mean, the median, and the midrange, and using the midpoint are respectively named Method 1, Method 2, Method 3, and the Existing method. The four methods are compared using the root-mean-squared error (RMSE) in simulation studies based on three distributions: normal, uniform, and exponential. The simulation results are validated using real data, the Glasgow weather data. The findings indicate that using the arithmetic mean to represent the magnitude of the observations in each class of the frequency table leads to minimal error relative to the other statistics. It is followed by the median for data simulated from the normal and exponential distributions, and by the midrange for data simulated from the uniform distribution. Meanwhile, in choosing the appropriate number of classes for constructing the frequency tables, among the seven different rules considered, the Freedman-Diaconis rule is recommended.
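
The four class representatives can be compared directly when the grouped data are used to estimate the sample mean. A numpy sketch (equal-width binning and 7 classes are arbitrary illustrative choices, not the paper's simulation design); note that for the mean specifically, weighting class means recovers the exact sample mean, so the grouping error of Method 1 vanishes there:

```python
import numpy as np

def grouped_mean(data, n_classes, rep="midpoint"):
    """Estimate the mean from an equal-width frequency table, representing
    each class by the chosen statistic of its members (or by its midpoint)."""
    edges = np.linspace(data.min(), data.max(), n_classes + 1)
    which = np.clip(np.digitize(data, edges) - 1, 0, n_classes - 1)
    total = 0.0
    for k in range(n_classes):
        members = data[which == k]
        if members.size == 0:
            continue
        if rep == "midpoint":                       # Existing method
            r = (edges[k] + edges[k + 1]) / 2
        elif rep == "mean":                         # Method 1
            r = members.mean()
        elif rep == "median":                       # Method 2
            r = np.median(members)
        else:                                       # Method 3: midrange
            r = (members.min() + members.max()) / 2
        total += members.size * r
    return total / len(data)

rng = np.random.default_rng(0)
data = rng.exponential(size=1000)       # skewed data, where midpoint bias shows
err = {m: abs(grouped_mean(data, 7, m) - data.mean())
       for m in ("midpoint", "mean", "median", "midrange")}
```

On skewed (exponential) data the midpoint systematically overshoots, while the member-based statistics track the true within-class location.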

]]>Ludwik Byszewski Denis Blackmore Alexander A. Balinsky Anatolij K. Prykarpatski and Mirosław Luśtyk

As a first step, we provide a precise mathematical framework for the class of control problems with delays (which we refer to as the control problem) under investigation in a Banach space setting, followed by careful definitions of the key properties to be analyzed, such as solvability and complete controllability. Then, we recast the control problem in a reduced form that is especially amenable to the innovative analytical approach that we employ. We then study in depth the solvability and completeness of the (reduced) nonlinearly perturbed linear control problem with delay parameters. The main tool in our approach is a Borsuk-Ulam type fixed point theorem, used to analyze the topological structure of a suitably reduced control problem solution, with a focus on estimating the dimension of the corresponding solution set and proving its completeness. Next, we investigate its analytical solvability under some special, mildly restrictive conditions imposed on the linear control and nonlinear functional perturbation. Then, we describe a novel computational projection-based discretization scheme of our own devising for obtaining accurate approximate solutions of the control problem, along with useful error estimates. The scheme effectively reduces the infinite-dimensional problem to a sequence of solvable finite-dimensional matrix-valued tasks. Finally, we include an application of the scheme to a special degenerate case of the problem, wherein the Banach-Steinhaus theorem is brought to bear in the estimation process.

]]>Fausto Galetto

The need to pool p-values arises in both practical (science and engineering applications) and theoretical (statistical) settings. The p-value (sometimes written p value) is a probability used as a statistical decision quantity: in practical applications, it is used to decide whether an experimenter should believe that the collected data confirm or disconfirm a hypothesis about the “reality” of a phenomenon. It is a real number, the realization of a uniformly distributed random variable, related to the data provided by the measurement of a phenomenon. Almost all statistical software provides p-values when statistical hypotheses are tested, e.g. in analysis of variance and regression methods. Combining the p-values from various samples is crucial, because the number of degrees of freedom (df) of the samples we want to combine influences our decision: forgetting this can have dangerous consequences. One way of pooling p-values is provided by a formula of Fisher; unfortunately, this method does not consider the number of degrees of freedom. We show other ways of doing this and prove that theory is more important than any formula that does not consider the phenomenon on which we have to decide: the distribution of the random variables is fundamental in order to pool data from various samples. Managers, professors and scholars should remember Deming's profound knowledge and Juran's ideas; profound knowledge means “understanding variation (type of variation)” in any process, production or managerial; not understanding variation causes cost of poor quality (more than 80% of sales value) and prevents real improvement.
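
Fisher's formula, the baseline the abstract criticizes, combines k independent p-values via X = -2 Σ ln p_i, which under the null follows a chi-square distribution with 2k degrees of freedom. Since 2k is always even, the chi-square survival function has a closed form and no statistical library is needed:

```python
import math

def fisher_pooled_p(pvalues):
    """Fisher's method for independent p-values: X = -2*sum(ln p_i) is
    chi-square with df = 2k under H0. For even df the survival function
    is exp(-x/2) * sum_{i<k} (x/2)^i / i!, so we evaluate it directly."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
```

As the abstract stresses, this formula uses only the p-values themselves: two samples with very different degrees of freedom contribute identically, which is precisely the limitation the paper discusses.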

]]>Anton Epifanov

This paper contains the results of an analysis of the laws of functioning of discrete dynamical systems whose mathematical models, via the apparatus of geometric images of automata, are numerical sequences interpreted as sequences of second coordinates of points of the geometric images of the automata. The geometric images of the laws of functioning of an automaton are thus reduced to numerical sequences and numerical graphs. The problem of estimating the complexity of the structure of such sequences is considered. To analyze the structure of sequences, recurrence forms are used that characterize the relative positions of elements in the sequence. The parameters of the recurrence forms considered characterize the lengths of the initial segments of sequences determined by recurrence forms of fixed orders, the number of changes of recurrence forms required to determine the entire sequence, the places where the recurrence forms change, etc. All these parameters are systematized into a special spectrum of dynamic parameters used for the recurrent determination of sequences, which serves as a means of constructing complexity estimates for sequences. The paper also analyzes recurrent sequences (for example, the Fibonacci numbers), using characteristic sequences to analyze their properties. The properties of sequences defining approximations of fundamental mathematical constants (the number e, π, the golden ratio, the Euler constant, the Catalan constant, values of the Riemann zeta function, etc.) are studied. Complexity estimates are constructed for characteristic sequences that distinguish numbers with specific properties in the natural series, as well as for characteristic sequences that reflect combinations of properties of numbers.

]]>Leontiev V. L.

The problem of approximating a surface given by the values of a function of two arguments at a finite number of points of a certain region is, in the classical formulation, reduced to solving a system of algebraic equations with dense or banded matrices. In the case of complex surfaces, such a problem requires a significant number of arithmetic operations and significant computer time. A curvilinear boundary of a domain of general type does not allow classical orthogonal polynomials or trigonometric functions to be used for this problem. This paper is devoted to the application of orthogonal splines to the construction of function approximations in the form of finite Fourier series. Orthogonal functions with compact support make it possible to construct such approximations in regions with arbitrary boundary geometry in multidimensional cases. A comparison of the fields of application of classical orthogonal polynomials, trigonometric functions and orthogonal splines in approximation problems is carried out, and the advantages of orthogonal splines in multidimensional problems are shown. The function approximation problem is formulated in variational form, a system of equations for the coefficients of the linear approximation with a diagonal matrix is formed, and expressions for the Fourier coefficients and approximations in the form of a finite Fourier series are written. Examples of approximations are considered, and the efficiency of orthogonal splines is shown. The development of this direction, associated with the use of other orthogonal splines, is discussed.

]]>Supawan Yena and Nopparat Pochai

Leachate contamination in a landfill causes pollution that flows down into the groundwater. There are many methods to measure groundwater quality, and mathematical models are often used to describe groundwater flow. In this research, the effect of landfill construction on groundwater quality around a rural area is considered. Three mathematical models are combined. The first is a two-dimensional groundwater flow model, which provides the hydraulic head of the groundwater. The second is a velocity potential model, which provides the groundwater flow velocity. The third is a two-dimensional vertically averaged groundwater pollution dispersion model, which provides the groundwater pollutant concentration. A forward-time scheme, combined with centered-, forward- and backward-space differences at the boundaries, is used to approximate the hydraulic head and the flow velocity in the x- and y-directions. The approximated groundwater flow velocity is then used as input to the two-dimensional vertically averaged groundwater pollution dispersion model, and the same forward-time centered-space technique, with forward and backward differences at the boundaries, is used to approximate the groundwater pollutant concentration. The proposed explicit forward-time centered-space finite difference techniques give approximate solutions in good agreement for the groundwater flow model, the velocity potential model and the groundwater pollution dispersion model.

]]>Jindrich Klufa

The entrance examination tests at the Faculty of International Relations at the University of Economics in Prague were shortened from 50 questions to 40 questions for time reasons. These are multiple choice question tests, which are suitable for entrance examinations at the University of Economics in Prague: the tests are objective, and the results can be evaluated quite easily and quickly for a large number of students. On the other hand, a student can obtain a certain number of points in the test purely by guessing the right answers. The shortening of the tests from 50 questions to 40 questions has a negative influence on the probability distribution of the number of points in the tests (under the assumption of a random choice of answers). Therefore, this paper suggests a solution to this problem. A comparison of these three ways of accepting applicants to study at the Faculty of International Relations at the University of Economics, from a probability point of view, is performed in the present paper. The results of this paper show that there has been a significant improvement in the probability distribution of the number of points in the tests. The obtained conclusions can be used for the admission process at the Faculty of International Relations in coming years.
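
Under the random-guessing assumption, the number of points follows a binomial distribution, so the probability of clearing any threshold by pure guessing can be computed exactly. A stdlib sketch; the scoring (one point per question) and the number of answer options per question (5) are assumptions for illustration, since the abstract does not state them:

```python
from math import comb

def p_at_least(n_questions, n_choices, threshold):
    """Probability of scoring at least `threshold` points purely by
    guessing, modelling the score as Binomial(n_questions, 1/n_choices)."""
    p = 1.0 / n_choices
    return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
               for k in range(threshold, n_questions + 1))

# hypothetical comparison: chance of getting at least half right by guessing
guess_40 = p_at_least(40, 5, 20)   # shortened 40-question test
guess_50 = p_at_least(50, 5, 25)   # original 50-question test
```

The shorter test gives the guesser a strictly better chance of reaching the same relative threshold, which is the probabilistic downside of the shortening that the paper analyzes.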

]]>Georgia Irina Oros and Alina Alb Lupas

In this paper, we define the differential-integral operator I^{m}, where S^{m} is the Sălăgean differential operator and L^{m} is the Libera integral operator. By using the operator I^{m}, a class of univalent functions is defined and several differential subordinations are studied. Even if the use of linear operators and the introduction of new classes of functions in which subordinations are studied is a well-known process, the results are new and could be of interest to young researchers because of the new approach derived from mixing a differential operator and an integral one. By using this differential-integral operator, we obtain new sufficient conditions for the functions from some classes to be univalent. For the newly introduced class of functions, we show that it is a class of convex functions and we prove some inclusion relations depending on the parameters of the class. Also, we show that this class has as a subclass the class of functions with bounded rotation, studied earlier by many authors cited in the paper. Using the method of subordination chains, some differential subordinations in their special Briot-Bouquet form are obtained for the differential-integral operator introduced in the paper. The best dominant of the Briot-Bouquet differential subordination is also given. As a consequence, sufficient conditions for univalence are stated in two criteria. An example is also included, showing how the operator is used in obtaining Briot-Bouquet differential subordinations and the best dominant.

Mostafa Ftouhi Mohammed Barmaki and Driss Gretete

The class of amenable groups plays an important role in many areas of mathematics, such as ergodic theory, harmonic analysis, representation theory, dynamical systems, geometric group theory, probability theory and statistics. The class of amenable groups contains in particular all finite groups, all abelian groups and, more generally, all solvable groups. It is closed under the operations of taking subgroups, quotients, extensions, and inductive limits. In 1959, Harry Kesten proved that there is a relation between amenability and the estimates of symmetric random walks on finitely generated groups. In this article we study the classification of compactly generated locally compact groups according to the return probability to the origin. Our aim is to compare several geometric classes of groups in order to better understand the geometry of such groups by referring to the behavior of random walks on them; the central tool in this comparison is the return probability on locally compact groups, and we introduce several classes of groups in order to characterize their geometry. As results, we have found inclusion relationships between the defined classes and have given counterexamples for the reciprocal inclusions.

]]>Zainidin Eshkuvatov Massamdi Kommuji Rakhmatullo Aloev Nik Mohd Asri Nik Long and Mirzoali Khudoyberganov

Hypersingular integral equations (HSIEs) of the first kind on the interval [-1, 1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain, are considered. Truncated series of Chebyshev polynomials of the third and fourth kinds are used to find semi-bounded (unbounded on the left and bounded on the right, and vice versa) solutions of HSIEs of the first kind. Exact evaluation of the singular and hypersingular integrals of the Chebyshev polynomials of the third and fourth kinds with the corresponding weights allows us to obtain highly accurate approximate solutions. The Gauss-Chebyshev quadrature formula is extended to integrals with regular kernels. Three examples are provided to verify the validity and accuracy of the proposed method. The numerical examples reveal that the approximate solutions are exact when the solution of the HSIE is of polynomial form with the corresponding weight.
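
The basis functions used above, the Chebyshev polynomials of the third kind V_n and fourth kind W_n, share the standard Chebyshev recurrence and differ only in the first-degree term. A stdlib sketch (just the basis evaluation, not the paper's collocation scheme):

```python
def chebyshev_VW(n, x):
    """Chebyshev polynomials of the third (V_n) and fourth (W_n) kinds,
    via the shared recurrence P_{k+1} = 2x P_k - P_{k-1} with
    V_0 = W_0 = 1, V_1 = 2x - 1, W_1 = 2x + 1.
    For x = cos(t): V_n = cos((n+1/2)t)/cos(t/2), W_n = sin((n+1/2)t)/sin(t/2),
    which matches the semi-bounded weights (1±x)^(±1/2) on [-1, 1]."""
    V = [1.0, 2 * x - 1]
    W = [1.0, 2 * x + 1]
    for k in range(1, n):
        V.append(2 * x * V[k] - V[k - 1])
        W.append(2 * x * W[k] - W[k - 1])
    return V[n], W[n]
```

The trigonometric forms in the docstring are what make these bases natural for solutions that are bounded at one endpoint of [-1, 1] and unbounded at the other.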

]]>Nurazlina Abdul Rashid Norashikin Nasaruddin Kartini Kassim and Amirah Hazwani Abdul Rahim

Classification studies are widely applied in many areas of research. In our study, we use classification analysis to explore approaches for tackling the classification problem for a large number of measures using partial least squares discriminant analysis (PLS-DA) and decision trees (DT). The performance of both methods was compared using a sample of breast tissue data from the University of Wisconsin Hospital. PLS-DA and DT predict the diagnosis of breast tissues (M = malignant, B = benign). A total of 699 patient diagnoses (458 benign and 241 malignant) are used in this study. The performance of PLS-DA and DT has been evaluated based on the misclassification error and accuracy rate. The results show that PLS-DA can be considered a good and reliable technique when dealing with a large dataset for the classification task, with good prediction accuracy.

]]>Nurul Shazwani Mohamed Sharifah Kartini Said Husain and Faridah Yunos

Given two algebras A and B, if B lies in the Zariski closure of the orbit of A, we say that A degenerates to B, and we denote this by A → B. Degenerations (or contractions) have been widely applied from a range of physical and mathematical points of view. The most well-known application-oriented example of degeneration is the limiting process from quantum mechanics to classical mechanics, which corresponds to the contraction of the Heisenberg algebras to the abelian ones of the same dimension. Research on degenerations of Lie, Leibniz and other classes of algebras is very active. Throughout the paper we deal with the mathematical background of abstract algebraic structures. The present paper is devoted to the degenerations of low-dimensional nilpotent Leibniz algebras over the field of complex numbers. In particular, we focus on the classification of three-dimensional nilpotent Leibniz algebras. A list of invariance arguments is provided and their dimensions are calculated in order to find the possible degenerations between each pair of algebras. We show that for each possible degeneration there exists a construction of a parameterized basis depending on a parameter. We prove the non-degeneration cases for the mentioned classes of algebras by providing reasons to reject the degenerations. As a result, we give a complete list of degenerations and non-degenerations of low-dimensional complex nilpotent Leibniz algebras. In future research, from this result one can find the rigidity and irreducible components.

]]>Busyra Latif Mat Salim Selamat Ainnur Nasreen Rosli Alifah Ilyana Yusoff and Nur Munirah Hasan

The Newell-Whitehead-Segel (NWS) equation is a nonlinear partial differential equation used in modeling various phenomena arising in fluid mechanics. In recent years, various methods have been used to solve the NWS equation, such as the Adomian Decomposition method (ADM), Homotopy Perturbation method (HPM), New Iterative method (NIM), Laplace Adomian Decomposition method (LADM) and Reduced Differential Transform method (RDTM). In this study, the NWS equation is solved approximately using the Semi-Analytical Iterative method (SAIM) to determine the accuracy and effectiveness of this method. Comparisons of the results obtained by SAIM with the exact solution and with existing results obtained by other methods such as ADM, LADM, NIM and RDTM reveal the accuracy and effectiveness of the method: the solution obtained by SAIM is close to the exact solution, and the error function is close to zero compared with the other methods mentioned above. The results have been computed using Maple 17. For future use, SAIM is accurate, reliable, and easier to apply to nonlinear problems, since it is simple, straightforward and derivative-free, does not require calculating multiple integrals, and demands less computational work.

]]>Patricia Abelairas-Etxebarria and Inma Astorkiza

The Exploratory Data Analysis introduced by Tukey [19] has been used in much research in many areas, especially in the social sciences. This technique searches for behavioral patterns of the variables of a study, establishing hypotheses with the least possible structure. In recent times, however, the inclusion of the spatial perspective in this type of analysis has been revealed as essential because, in many analyses, the observations are spatially autocorrelated and/or present spatial heterogeneity. The presence of these spatial effects makes it necessary to include spatial statistics and spatial tools in the Exploratory Data Analysis. Exploratory Spatial Data Analysis includes a set of techniques that describe and visualize those spatial effects: spatial dependence and spatial heterogeneity. It describes and visualizes spatial distributions, identifies outliers, finds distribution patterns, clusters and hot spots, suggests spatial regimes or other forms of spatial heterogeneity, and is being used increasingly. With the objective of reviewing the latest applications of this technique, this paper first presents the tools used in Exploratory Spatial Data Analysis and then reviews its latest applications, focused particularly on different areas of the social sciences. As a conclusion, the growing interest in the use of this spatial technique to analyze different aspects of the social sciences, including the spatial dimension, should be noted.

]]>Agung Prabowo Agus Sugandha Agustini Tripena Mustafa Mamat Sukono and Ruly Budiono

Linear regression is widely used in various fields. Research on linear regression commonly uses the OLS and ML methods to estimate its parameters. The OLS and ML methods require many assumptions to hold, and it is frequently found that, when an assumption is not fulfilled, the two methods cannot be used successfully. This paper proposes a new method that does not require any such assumptions, called SAM (Simple Averaging Method), for estimating the parameters of the simple linear regression model. Three new theorems are formulated to simplify the estimation of the parameters of the simple linear regression model with SAM. Using the same data, parameter estimation for the simple linear regression model is conducted with SAM, and the result shows that the obtained regression parameters are not very different. To measure the accuracy of both methods, the errors made by each method are compared using the root mean square error (RMSE) and mean absolute error (MAE). Comparing the values of RMSE and MAE for both methods shows that the SAM method may be used to estimate the parameters of the regression equation. The advantage of SAM is that it is free of all the assumptions required by regression, such as the assumption of normally distributed errors.
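
The comparison criteria named above, RMSE and MAE, have standard definitions; a minimal stdlib sketch (the SAM estimator itself is not reproduced, since the abstract does not specify its formulas):

```python
import math

def rmse(actual, predicted):
    """Root mean square error between observed and fitted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mae(actual, predicted):
    """Mean absolute error between observed and fitted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

Both are computed on the residuals of each fitted model; the method with the smaller values is preferred, and RMSE penalizes large residuals more heavily than MAE.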

]]>Jirapud Limthanakul and Nopparat Pochai

A source of contaminated groundwater is the disposal of waste material in a landfill. For many people in rural areas the primary source of drinking water is well water, which may be contaminated by groundwater from landfills. In this research, a two-dimensional mathematical model for long-term measurement of groundwater pollution around a landfill is proposed. The model combines two sub-models. The first is a transient two-dimensional groundwater flow model that provides the hydraulic head of the groundwater. The second is a transient two-dimensional advection-diffusion equation that provides the groundwater pollutant concentration. Explicit finite difference techniques are proposed to approximate the hydraulic head and the pollutant concentration. The simulations can be used to indicate when each simulated zone becomes a hazardous zone or a protection zone.
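As a much-simplified sketch of the second sub-model (not the paper's scheme: constant velocity, periodic boundaries, and all parameter values are illustrative assumptions), an explicit finite-difference update for a 2D advection-diffusion equation can look like this:

```python
import numpy as np

# Illustrative parameters: diffusion D, velocities u, v, grid spacing
# dx = dy, and a time step small enough for stability of the explicit scheme.
D, u, v = 0.1, 0.05, 0.05
dx = dy = 1.0
dt = 0.5  # satisfies dt <= dx^2 / (4*D) for the diffusive part

nx, ny, steps = 21, 21, 50
c = np.zeros((nx, ny))
c[10, 10] = 1.0  # initial pollutant pulse at the landfill location

for _ in range(steps):
    # Central differences for diffusion, upwind differences for advection
    # (valid for u, v > 0); np.roll gives periodic boundaries for brevity.
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    adv_x = (c - np.roll(c, 1, 0)) / dx
    adv_y = (c - np.roll(c, 1, 1)) / dy
    c = c + dt * (D * lap - u * adv_x - v * adv_y)
```

With these boundaries the scheme conserves the total pollutant mass, so the pulse spreads and drifts without being created or destroyed.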

Nor Asmaa Alyaa Nor Azlan Effendi Mohamad Mohd Rizal Salleh Oyong Novareza Dani Yuniawan Muhamad Arfauz A Rahman Adi Saptari and Mohd Amri Sulaiman

The purpose of this review paper is to set out an augmentation approach and exemplify the distribution of augmentation works on the Simplex method. The augmentation approach is classified into three forms: addition, substitution and integration. The diversity study shows that the substitution approach has the highest usage frequency, at about 45.2% of the total. It is followed by the addition approach, which makes up 32.3% of the usage frequency, and the integration approach, at about 22.6%, the smallest share of the overall usage frequency. Since integration has the lowest usage percentage, the paper looks ahead to a future study of the integration approach that can be performed from the observed distribution of augmentation works across the Simplex method's computation stages. A theme screening is then conducted with a set of criteria and themes to arrive at a proposal for a new integration approach for augmentation of the Simplex method.

Arif Rahman Oke Oktavianty Ratih Ardia Sari Wifqi Azlia and Lavestya Dina Anggreni

Some research requires data homogeneity. Wide dispersion of the data can push research in an absurd direction, while outliers create an unrealistic impression of homogeneity. Research can reject extreme data as outliers and estimate a trimmed arithmetic mean, but when the data dispersion is very wide, outlier identification fails. This study evaluates the confidence interval and compares it with the acceptance tolerance. Three types of invalidity in data gathering are considered: outliers, too wide a dispersion, and a distorted central tendency.
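The trimmed arithmetic mean mentioned above can be sketched as follows; the sample values are hypothetical and only illustrate how one extreme observation distorts the plain mean:

```python
import numpy as np

def trimmed_mean(data, prop):
    """Arithmetic mean after removing a proportion `prop` of the
    smallest and of the largest observations (symmetric trimming)."""
    x = np.sort(np.asarray(data, dtype=float))
    k = int(prop * len(x))
    return float(x[k:len(x) - k].mean()) if k > 0 else float(x.mean())

# A single extreme value distorts the ordinary mean but not the trimmed one.
sample = [1.0, 2.0, 3.0, 4.0, 100.0]
plain = float(np.mean(sample))          # pulled up by the outlier
robust = trimmed_mean(sample, 0.2)      # mean of the middle values [2, 3, 4]
```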

Zahari Md Rodzi and Abd Ghafur Ahmad

The purpose of this work is to present a new theory, namely fuzzy parameterized dual hesitant fuzzy soft sets (FPDHFSSs). This theory extends the existing dual hesitant fuzzy soft set by assigning a respective weight to each element of the set of parameters. We also introduce the basic operations of FPDHFSSs, such as the intersection, union, addition and product operations. We then propose score functions for FPDHFSSs, determined from the average mean, the geometric mean and a fractional score. These score functions are split into membership and non-membership elements, from which a distance measure for FPDHFSSs is introduced. The proposed distance has been applied in TOPSIS, enabling decision-making problems in a dual hesitant fuzzy soft set environment to be solved.
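The fuzzy machinery above feeds into TOPSIS; for readers unfamiliar with it, a minimal crisp TOPSIS sketch (the decision matrix and weights below are hypothetical, and the paper's fuzzy distance replaces the Euclidean one used here) is:

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    decision: (m alternatives x n criteria) matrix of crisp scores
    weights:  criterion weights summing to 1
    benefit:  True for benefit criteria, False for cost criteria
    """
    d = np.asarray(decision, dtype=float)
    # Vector-normalize each column, then apply the weights.
    r = d / np.sqrt((d ** 2).sum(axis=0))
    v = r * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    s_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    s_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return s_minus / (s_plus + s_minus)  # higher = better

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 7]],
                [0.5, 0.3, 0.2], [True, True, True])
best = int(np.argmax(scores))
```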

Alec John Villamar Marionne Gayagoy Flerida Matalang and Karen Joy Catacutan

This study aimed to determine the usefulness of Mathematics subjects in the accounting courses of the Bachelor of Science in Accountancy. The Mathematics subjects, which include College Algebra, Mathematics of Investment, Business Calculus and Quantitative Techniques, were evaluated through their Course Learning Objectives, while their usefulness for the accounting courses, which include Financial Accounting, Advanced Accounting, Cost Accounting, Management Advisory Services, Auditing and Taxation, was evaluated by the students. Descriptive research was employed among all 5th-year BS-Accountancy students who had completed all the accounting subjects in the Accountancy Program and passed the different Mathematics subjects prerequisite to their courses. A survey questionnaire was used to gather data. Using descriptive statistics, the results showed that Mathematics of Investment is the most useful subject in the different accounting courses, particularly in Financial Accounting, Advanced Accounting and Auditing. Further, mean ratings showed that several skills acquired in the Mathematics subjects are useful in accounting courses, with the use of the fundamental operations being the most useful skill in all accounting subjects.

Rafid S. A. Alshkaki

Differential equations are used for modelling in many disciplines, including engineering, chemistry, physics, biology, and economics, and hence can be used to understand and determine the underlying probabilistic behavior of phenomena through their probability distributions. This paper uses a simple form of differential equation, namely the linear form, to determine the probability distributions of some of the most important and popular members of the subclass of discrete distributions used in real life: the Poisson, binomial, negative binomial, and logarithmic series distributions. A class of power series distributions inflated at a finite number of points, which contains the Poisson, binomial, negative binomial, and logarithmic series distributions as members, is defined, and some of its characteristic properties are given, along with a characterization of the 3-point inflated versions of these four distributions through a linear differential equation for their probability generating functions. Further, some previously known results are shown to be special cases of our results.
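As a standard textbook illustration of the kind of linear differential equation involved (not a result of the paper), the probability generating functions of two of the four listed distributions satisfy:

```latex
G_{\mathrm{Poisson}}(s) = e^{\lambda(s-1)}
  \quad\Longrightarrow\quad G'(s) = \lambda\, G(s),
\qquad
G_{\mathrm{Binomial}}(s) = (1-p+ps)^{n}
  \quad\Longrightarrow\quad (1-p+ps)\, G'(s) = n p\, G(s).
```

In both cases the PGF satisfies a first-order linear differential equation, which is the type of characterization the paper exploits.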

Hasibun Naher Humayra Shafia Md. Emran Ali and Gour Chandra Paul

In this article, the nonlinear partial fractional differential equation, namely the KdV equation, is revisited with the help of the modified Riemann–Liouville fractional derivative. The equation is transformed into a nonlinear ordinary differential equation by using the fractional complex transformation. The goal of this paper is to construct new analytical solutions of the space- and time-fractional nonlinear KdV equation through the extended (G'/G)-expansion method. The work produces abundant exact solutions in hyperbolic, trigonometric, rational, exponential, and complex forms, which are new and more general than existing results in the literature. The newly generated solutions show that the executed method is a well-organized and competent mathematical tool for investigating a class of nonlinear evolution equations of fractional order.

Llesh Lleshaj and Alban Korbi

This study analyzes 20 different countries that are the states of origin of foreign investors who have invested in Albania (this sample represents 95% of FDI (Foreign Direct Investment) stocks, 2007-2014). The analysis technique used is a gravity model of FDI stocks in Albania. The main independent variables in this analysis are GDP, the level of business taxes, the difference in GDP per capita, economic similarity, etc. The result of this study is that the level of FDI stocks in Albania is lower than its potential compared with the average FDI stock in the states of the Balkan region.

Anuradha Seema Mehra and Said Broumi

Motivated by the concepts of fuzzy metric and m-metric spaces, we introduce the notion of a non-Archimedean fuzzy m-metric space, which is an extension of a partial fuzzy metric space. We present some examples in support of this new notion. Its topological structure and some of its properties are specified simultaneously. At the end, some fixed point results are also provided.

Igor Sinitsyn and Vladimir Sinitsyn

Analytical methods of the mathematical statistics of random vectors and matrices based on parametrization of the distributions are widely used. These methods permit the design of practically simple software when definite information about the analytical properties of the distributions under study is available. The main difficulty in practical applications of methods based on parametrization of the distributions is the rapid increase of the number of equations for the moments, the semi-invariants, or the coefficients of truncated orthogonal expansions with the dimension of the (generally extended) state vector and the maximal order of the moments involved; the number of equations for the parameters becomes exceedingly large in such cases. For structural parametrization and/or approximation of the probability densities of random vectors we apply ellipsoidal densities, i.e. densities whose surfaces of equal probability level are similar concentric ellipsoids (ellipses for two-dimensional vectors, ellipsoids for three-dimensional vectors, hyperellipsoids for vectors of dimension greater than three). In particular, a normal distribution in any finite-dimensional space has an ellipsoidal structure. The distinctive characteristic of such distributions is that their probability densities are functions of a positive definite quadratic form (x - m)^T C^{-1} (x - m), where m is the expectation of the random vector and C is some positive definite matrix. The ellipsoidal approximation method (EAM) drastically reduces the number of parameters required for a given number of probabilistic moments. Basic foundations of the EAM and of the ellipsoidal linearization method (ELM), and their applications to problems of mathematical statistics and to ellipsoidal distributions with invariant measure in populational Volterra differential stochastic nonlinear systems, are considered.

Aripov M. Mukimov A. and Mirzayev B.

We study the asymptotic behavior, for large time, of solutions of the Cauchy problem for a nonlinear parabolic equation with double nonlinearity, describing the diffusion of heat with nonlinear heat absorption at the critical value of the parameter β. For the numerical computations, the established long-time asymptotics of the solution was used as the initial approximation. Numerical experiments and visualization were carried out for the one- and two-dimensional cases.

Emil V. Veitsman

This paper aims to find a connection between i-dimensional spaces (i = 0,…,n) and the long-range j-dimensional attractive forces (j = 0,…,m) creating these spaces. The connection is fundamental and unrelated to any processes going on in the spaces being studied. A theorem is formulated and strictly proved showing in which cases the long-range attractive forces can form real spaces of different dimensions (i = 0,…,n). The existence of the attraction between masses is defined by the divergence of the vector of interaction between masses. Weakly anisotropic real spaces are studied by rotating an ellipsoid for (3ζ)D-cases when its eccentricity ε << 1. Such spaces cannot be in equilibrium, and the time of their existence is substantially limited: the greater the anisotropy, the shorter the lifetime of such a substance.

Taehan Bae and Maral Mazjini

Recent studies on correlated Poisson processes show that backward simulation methods are computationally efficient and incorporate flexible and extremal correlation structures in a multivariate risk system. These methods rely on the fact that the past arrival times of a Poisson process, given the number of events over a time interval [0, T], are the order statistics of uniform random variables on [0, T]. In this paper, we discuss an extension of the backward methods to a correlated negative binomial Lévy process, which is an appealing model for over-dispersed count data such as operational losses. To obtain the conditional uniformity for the negative binomial Lévy process, we consider a particular setting in which the time interval is partitioned into equally spaced sub-intervals of unit length and the terminal time T is set to be the number of sub-intervals. Under this setting, the resulting joint probability of the increment series, conditional on the number of events over [0, T], say l, is uniform for any point in the support of a (T, l)-simplex lattice. Based on this result, we establish a backward simulation method similar to that for the Poisson process. Both the conditionally independent and conditionally dependent cases are discussed, with illustrations of the corresponding time correlation patterns.
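The conditional-uniformity fact underlying backward simulation can be sketched for the plain Poisson case (the negative binomial extension is the paper's contribution and is not reproduced here; the rate and horizon below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def backward_poisson_arrivals(lam, T, rng):
    """Simulate Poisson arrival times on [0, T] 'backward':
    first draw the total count N(T), then place the arrival times
    as the order statistics of N(T) uniforms on [0, T]."""
    n = rng.poisson(lam * T)
    return np.sort(rng.uniform(0.0, T, size=n))

times = backward_poisson_arrivals(lam=2.0, T=10.0, rng=rng)
```

This contrasts with forward simulation, which would accumulate exponential inter-arrival gaps until the horizon T is exceeded.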

Mykola Bokalo and Olha Sus

In this paper, we consider the initial-value problem for parabolic variational inequalities (subdifferential inclusions) with Volterra-type operators. We prove the existence and uniqueness of the solution, and estimates of the solution are obtained. The results are achieved using Banach's fixed point theorem (the contraction mapping principle). The motivation for this work comes from the evolutionary variational inequalities arising in the study of frictionless contact problems for linear viscoelastic materials with long-term memory. Such problems also have applications in constructing different models of injection molding processes.

Benjamin Kedem, Lemeng Pan, Paul J. Smith and Chen Wang

It is shown how to estimate any threshold probability from data below or even far below the threshold through repeated fusion of the data with externally generated random samples. This is referred to as repeated out of sample fusion (ROSF). A comparison of the approach with peaks-over-threshold (POT) across different tail types shows that ROSF provides more precise point and interval estimates based on moderately large samples.

Sh. A. Dildabayev and G. K. Zakir'yanova

Until now, the question of constructing fundamental solutions for the two-dimensional statics of an elastic body with arbitrary anisotropy has remained open. Within the scope of the BEM, the question of calculating stresses at boundary points and at points located close to the boundary of the region also remains topical. In this work, fundamental solutions of the static problem for an elastic plane with arbitrary anisotropic properties are obtained as sums of residues of a function of a complex variable. Estimates of the fundamental solutions and their derivatives are presented in closed form. In the distribution space, regular representations for the Somigliana formulas and the stress calculation formulas are obtained. The numerical implementation of the BIE method in direct formulation has been realized in the standard way. The test results for a circular hole in an anisotropic plane of the rhombic system show close agreement with the boundary values of displacements and stresses, including at nodes placed close to the boundary. The results of an analysis of the stress-strain state in the vicinity of rectangular mining chambers located deep below the surface are presented in tables and pictures of isolines.

D. A. Karpov and V. I. Struchenkov

This article deals with the problem of approximating plane curves defined by a sequence of points by a spline of a given type. This task arises when developing methods for computer-aided design of linear structures: railways and roads, trenches for laying pipelines, canals, etc. Its fundamental differences from the problems considered in the theory of splines and its applications are as follows: the spline elements are of various types (straight line segments and circular arcs joined by clothoids); the boundaries of the elements, and even their number, are unknown; and there are inequality constraints on the parameters of the elements. Continuity of the curve, the tangent, and the curvature is ensured. Clothoids are absent if curvature continuity is not required, for example, when designing pipelines. These features of the task do not allow using the standard achievements of the theory of splines and nonlinear programming. We cannot recognize the individual elements of the desired spline from a given sequence of points, and therefore cannot select them separately; the spline must be sought as a whole. The article presents a mathematical model and a new algorithm for solving the problem using dynamic programming.
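The paper's elements (lines, arcs, clothoids with constraints) are far richer, but the idea of finding the number and boundaries of elements "as a whole" by dynamic programming can be illustrated with the classic segmented least-squares problem, where every element is a straight line and a per-segment penalty controls the number of pieces (all data below are synthetic):

```python
import numpy as np

def fit_cost(x, y, i, j):
    """Least-squares error of one straight line through points i..j (inclusive)."""
    xs, ys = x[i:j + 1], y[i:j + 1]
    if len(xs) < 2:
        return 0.0
    a, b = np.polyfit(xs, ys, 1)
    return float(np.sum((ys - (a * xs + b)) ** 2))

def segmented_least_squares(x, y, penalty):
    """DP over breakpoints: opt[j] = min_i opt[i] + cost(i, j-1) + penalty.
    The number and boundaries of the segments emerge from the DP itself."""
    n = len(x)
    opt = [0.0] * (n + 1)  # opt[j] = best total cost for the first j points
    for j in range(1, n + 1):
        opt[j] = min(opt[i] + fit_cost(x, y, i, j - 1) + penalty
                     for i in range(j))
    return opt[n]

# Data lying exactly on two line pieces (slope +2, then slope -2):
# the optimal solution uses two segments, so the cost is ~2 * penalty.
x = np.arange(10, dtype=float)
y = np.where(x < 5, 2 * x, 18 - 2 * x)
cost = segmented_least_squares(x, y, penalty=1.0)
```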

Afif Shihabuddin Norhaslinda Ali and Mohd Bakri Adam

The air pollution index (API) is a common tool used to describe air quality in the environment. A high level of API indicates a greater level of air pollution, which has an adverse impact on human health. A statistical model for high levels of API is important for forecasting the level of API so that the public can be warned. In this study, extremes of API are modelled using the Generalized Pareto Distribution (GPD). Since the values of API are determined by the values of five pollutants, namely sulphur dioxide, nitrogen dioxide, carbon monoxide, ozone and suspended particulate matter, API data exhibit non-stationarity. The standard method for modelling non-stationary extremes using the GPD is to fix a high constant threshold and incorporate covariate models in the GPD parameters for data above the threshold to account for the non-stationarity. However, a constant threshold might be high enough under certain covariate conditions for the GPD approximation to be valid, but not under others, which violates the asymptotic basis of the GPD model. A new method for threshold selection in non-stationary extremes modelling using regression trees is proposed and applied to the API data. The regression tree is used to partition the API data into stationary groups with similar covariate conditions; a high threshold can then be applied within each group. The study shows that the model for extremes of API using the tree-based threshold gives a good fit and provides an alternative to the model based on the standard method.
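A minimal sketch of fitting a GPD to threshold excesses, using method-of-moments estimates rather than the likelihood-based fits typically used in extreme value studies (the data are synthetic, not API values; exponential excesses correspond to the GPD boundary case shape = 0):

```python
import numpy as np

def gpd_fit_moments(excesses):
    """Method-of-moments estimates of the GPD shape (xi) and scale (sigma)
    from threshold excesses; valid when the variance exists (xi < 1/2).
    Uses mean = sigma/(1-xi) and var = sigma^2/((1-xi)^2 (1-2 xi))."""
    x = np.asarray(excesses, dtype=float)
    m, s2 = x.mean(), x.var(ddof=1)
    ratio = m * m / s2
    xi = 0.5 * (1.0 - ratio)
    sigma = 0.5 * m * (ratio + 1.0)
    return xi, sigma

rng = np.random.default_rng(0)
excesses = rng.exponential(scale=2.0, size=5000)  # illustrative excesses
xi_hat, sigma_hat = gpd_fit_moments(excesses)
```

In the tree-based approach described above, a fit like this would be performed within each covariate group identified by the regression tree.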

Norin Rahayu Shamsuddin and Nor Idayu Mahat

Clustering with heterogeneous variables in a dataset is undoubtedly a challenging process, owing to the different scales in the data. This paper uses the SimMultiCorrData package in R to generate artificial datasets for clustering. Constructing artificial datasets with various distributions helps to mimic the nature of real datasets. Our experiments show that the clusterability of a dataset is influenced by various factors, such as overlapping clusters, noise, sub-clusters, and unbalanced cluster sizes.

F. Z. Che Rose M. T. Ismail and N. A. K. Rosili

The existence of outliers in financial time series may affect the estimation of economic indicators. Detecting outliers in a structural time series framework using the indicator saturation approach is the main interest of this study. The reference model used is the local level model. We apply Monte Carlo simulations to assess the performance of impulse indicator saturation for detecting additive outliers in the reference model. It is found that the significance level α = 0.001 (tiny) outperforms the other target sizes in detecting additive outliers of various sizes. Further, we apply impulse indicator saturation to detect outliers in the FTSE Bursa Malaysia Emas (FBMEMAS) index, identifying 14 outliers corresponding to several economic and financial events.

Ling, A. S. C. Darmesah, G. Chong, K. P. and Ho, C. M.

Losses caused by cocoa black pod disease around the world have exceeded $400 million, owing to inaccurate forecasting of disease incidence, which leads to inappropriate spraying timing. The weekly incidence of cocoa black pod disease is affected by external factors such as climatic variables. To overcome the inaccuracy of spraying timing, disease incidence forecasts should consider influencing external factors such as temperature, rainfall and relative humidity. The objective of this study is to develop an Autoregressive Integrated Moving Average with external variables (ARIMAX) model, which accounts for the effects of the climatic influencing factors, to forecast the weekly incidence of cocoa black pod disease. With respect to the performance measures, the proposed ARIMAX model is found to improve on the traditional Autoregressive Integrated Moving Average (ARIMA) model. The forecasting results can be beneficial, especially for developing a decision support system to determine the right timing of actions to control cocoa black pod disease.
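The abstract does not give the ARIMAX specification, but the core idea of regressing a series on its own past plus an exogenous climatic covariate can be sketched with a simplified ARX model fitted by least squares; the series, the rainfall covariate, and all coefficients below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic weekly series: incidence driven by its own past and by rainfall.
n = 300
rain = rng.gamma(2.0, 10.0, size=n)        # hypothetical exogenous covariate
y = np.zeros(n)
for t in range(1, n):
    y[t] = 5.0 + 0.6 * y[t - 1] + 0.05 * rain[t] + rng.normal(0, 1)

# Fit y[t] ~ const + y[t-1] + rain[t] by ordinary least squares.
X = np.column_stack([np.ones(n - 1), y[:-1], rain[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
const, ar1, beta_rain = coef

# One-step-ahead forecast from the last observation and next week's rainfall.
next_rain = 20.0
forecast = const + ar1 * y[-1] + beta_rain * next_rain
```

A full ARIMAX model would additionally difference the series and include moving-average error terms.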

Nurazlina Abdul Rashid Wan Siti Esah Che Hussain Abd Razak Ahmad and Fatihah Norazami Abdullah

Classification methods are fundamental techniques designed to find mathematical models that can recognize the membership of each object in its proper class on the basis of a set of measurements. Classifying objects into groups when the number of variables in an experiment is large can cause misclassification problems. This study explores approaches for tackling the classification problem with a large number of independent variables using parametric methods, namely PLS-DA and PCA+LDA. Data are generated with a data simulator, Azure Machine Learning (AML) Studio, through a custom R module. The performance of PLS-DA was analysed and compared with the PCA+LDA model using different numbers of variables (p) and different sample sizes (n), with performance evaluated by the minimum misclassification rate. The results demonstrate that PLS-DA performs better than PCA+LDA for large sample sizes. PLS-DA can be considered a good and reliable technique when dealing with large datasets for classification tasks.
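A minimal from-scratch sketch of the PCA+LDA pipeline on synthetic two-class data (dimensions, component count, and class separation are all illustrative assumptions, and the study's PLS-DA comparator is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic two-class data with many variables (p) and moderate n.
n, p = 200, 50
X0 = rng.normal(0.0, 1.0, (n, p))
X1 = rng.normal(0.6, 1.0, (n, p))  # class 1 shifted on every variable
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# PCA: project onto the leading principal components via SVD.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T  # keep 5 components

# Fisher LDA on the reduced scores (two classes).
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                # discriminant direction
threshold = 0.5 * (m0 @ w + m1 @ w)
pred = (Z @ w > threshold).astype(int)
error_rate = float(np.mean(pred != y))
```

The reduction step is what makes LDA usable when p is large relative to n, since the within-class scatter matrix of the raw variables may be singular.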

Noor Hidayah Mohd Zaki Aqilah Nadirah Saliman Nur Atikah Abdullah Nur Su Ain Abu Hussain and Norani Amit

A queuing system's efficiency can be measured through the underlying concepts of queue models: arrival and service time distributions, queue disciplines and queue behaviour. The main aim of this study is to compare the behaviour of a queuing system at check-in counters using the Queuing Theory Model and the Fuzzy Queuing Model. The Queuing Theory Model gives performance measures as single values, while the Fuzzy Queuing Model gives ranges of values. The Dong, Shah and Wong (DSW) algorithm is used to define the membership functions of the performance measures in the Fuzzy Queuing Model. Based on observation, problems often occur when customers are required to wait in the queue for a long time, indicating that the service system is inefficient. Data on the relevant variables, such as arrival time in the queue (server) and service time, were collected. Results show that the performance measures of the Queuing Theory Model lie within the ranges of the computed performance measures of the Fuzzy Queuing Model. Hence, the results obtained from the Fuzzy Queuing Model are consistent for measuring the queuing performance of an airline company, in order to solve the waiting line problem and improve the quality of the services provided by the airline.
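A DSW-style computation applies interval arithmetic on the alpha-cuts of fuzzy inputs. A minimal sketch for the M/M/1 mean queue length L = λ/(μ − λ), with hypothetical triangular fuzzy arrival and service rates (the paper's actual check-in data and model are not reproduced):

```python
import numpy as np

def mm1_L(lam, mu):
    """Mean number in an M/M/1 system: L = lam / (mu - lam), for lam < mu."""
    return lam / (mu - lam)

def tri_cut(a, b, c, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

# Hypothetical triangular fuzzy arrival and service rates.
lam_tfn, mu_tfn = (2.0, 3.0, 4.0), (8.0, 10.0, 12.0)

# DSW-style computation: at each alpha level, evaluate the measure at the
# interval endpoint combinations and keep the min and max.
cuts = []
for alpha in np.linspace(0.0, 1.0, 11):
    lam_lo, lam_hi = tri_cut(*lam_tfn, alpha)
    mu_lo, mu_hi = tri_cut(*mu_tfn, alpha)
    vals = [mm1_L(l, m) for l in (lam_lo, lam_hi) for m in (mu_lo, mu_hi)]
    cuts.append((alpha, min(vals), max(vals)))
```

At alpha = 1 the interval collapses to the crisp value λ/(μ − λ) = 3/7, which is how the crisp queuing-theory measure ends up inside the fuzzy range.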

Zakiah I. Kalantan and Faten Alrewely

Mixture distributions have received considerable attention in real-life applications. This paper presents a finite Laplace mixture model with two components. We discuss the model's properties and derive parameter estimates using the method of moments and maximum likelihood estimation. We study the relationship between the parameters and the shape of the proposed distribution. A simulation study examines the effectiveness of the parameter estimation for the Laplace mixture distribution.
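A minimal sketch of sampling from a two-component Laplace mixture, the kind of data such a simulation study would estimate from (all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_laplace_mixture(n, w, loc, scale, rng):
    """Draw n samples from a two-component Laplace mixture:
    with probability w from Laplace(loc[0], scale[0]),
    otherwise from Laplace(loc[1], scale[1])."""
    comp = rng.random(n) < w
    return np.where(comp,
                    rng.laplace(loc[0], scale[0], n),
                    rng.laplace(loc[1], scale[1], n))

# Hypothetical parameter values for illustration.
x = sample_laplace_mixture(10000, w=0.4, loc=(-2.0, 3.0), scale=(1.0, 0.5), rng=rng)
mean_theory = 0.4 * (-2.0) + 0.6 * 3.0  # mixture mean = w*mu1 + (1-w)*mu2
```

Matching sample moments such as this mean to their theoretical expressions is the starting point of the method-of-moments estimation mentioned above.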

Hafizah Bahaludin Mimi Hafizah Abdullah Lam Weng Siew and Lam Weng Hoe

In recent years, there has been growing interest in financial networks. A financial network helps to visualize the complex relationships between stocks traded in the market. This paper investigates the stock market network of Bursa Malaysia during the 2008 global financial crisis. The financial network is based on the top hundred companies listed on Bursa Malaysia. A minimal spanning tree (MST) is employed to construct the financial network, using cross-correlations as input. The impact of the global financial crisis on the companies is evaluated using centrality measures such as degree, betweenness, closeness and eigenvector centrality. The results indicate that there were some changes in the linkages between securities after the financial crisis, which can have a significant effect on investment decision-making.
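A minimal sketch of the correlation-to-MST construction, using the correlation distance d = sqrt(2(1 − ρ)) commonly used in this literature and Prim's algorithm; the returns are random placeholders for the hundred Bursa Malaysia stocks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily returns for 6 stocks (the paper uses 100 companies).
returns = rng.normal(0.0, 0.01, (250, 6))
corr = np.corrcoef(returns.T)
dist = np.sqrt(2.0 * (1.0 - corr))  # correlation distance

# Prim's algorithm: grow the MST one cheapest edge at a time.
n = dist.shape[0]
in_tree = {0}
edges = []
while len(in_tree) < n:
    best = min((dist[i, j], i, j)
               for i in in_tree for j in range(n) if j not in in_tree)
    edges.append((best[1], best[2]))
    in_tree.add(best[2])
```

The centrality measures mentioned above would then be computed on the resulting tree of n − 1 edges.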

Mihail Cocos

The Fundamental Theorem of Riemannian geometry states that on a Riemannian manifold there exists a unique symmetric connection compatible with the metric tensor. There are numerous examples of connections that, even locally, do not admit any compatible metric. A very important class of symmetric connections in the tangent bundle of certain manifolds (the affinely flat ones) are those for which the curvature tensor vanishes; such connections are locally metric. S. S. Chern conjectured that the Euler characteristic of an affinely flat manifold is zero. A possible proof of this long outstanding conjecture would be to verify that the space of locally metric connections is path connected. To do so, one needs practical criteria for the metrizability of a connection. In this paper, we give necessary and sufficient conditions for a connection in a plane bundle over a surface to be locally metric. These conditions are easily verified using any local frame. As a global result, we also give a necessary condition for two connections to be metrically equivalent in terms of their Euler classes.

Zurab Kvatadze and Beqnu Pharjiani

On the probability space (Ω, F, P) we consider a given two-component stationary (in the narrow sense) sequence, whose first component is the controlling sequence and whose second component consists of observations of some random variable; these observations are used to construct kernel estimates of Rosenblatt-Parzen type for the unknown density of that variable. The cases of conditional independence and chain dependence of the observations are considered. Upper bounds are established for the mathematical expectation of the squared deviation of the obtained estimates from the true density.
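The Rosenblatt-Parzen estimator itself is standard; a minimal sketch with a Gaussian kernel on i.i.d. data (the paper studies dependent observations, and the bandwidth below is an arbitrary choice):

```python
import numpy as np

def kde_gaussian(x_grid, data, h):
    """Rosenblatt-Parzen kernel density estimate with a Gaussian kernel:
    f_hat(x) = (1 / (n h)) * sum_i K((x - X_i) / h)."""
    u = (x_grid[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 500)
grid = np.linspace(-5.0, 5.0, 401)
f_hat = kde_gaussian(grid, data, h=0.4)
```

The quantity bounded in the paper is the expected squared deviation of such an estimate from the true density, under dependence structures driven by the controlling sequence.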

Roselaine Neves Machado and Luiz Guerreiro Lopes

There are many simultaneous iterative methods for approximating complex polynomial zeros, from more traditional numerical algorithms, such as the well-known third-order Ehrlich-Aberth method, to more recent ones. In this paper, we present a new family of combined iterative methods for the simultaneous determination of the simple complex zeros of a polynomial, which uses the Ehrlich iteration together with a correction based on King's family of iterative methods for nonlinear equations. The use of King's correction increases the convergence order of the basic method from three to six. Numerical examples are given to illustrate the convergence behaviour and effectiveness of the proposed sixth-order Ehrlich-like family of combined iterative methods for the simultaneous approximation of simple complex polynomial zeros.
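A minimal sketch of the basic (uncorrected) Ehrlich-Aberth iteration, which the paper accelerates with King's correction; the initialization and the test polynomial are illustrative choices:

```python
import numpy as np

def ehrlich_aberth(coeffs, iters=100):
    """Basic Ehrlich-Aberth simultaneous iteration for all zeros of a
    polynomial given by its coefficients (highest degree first)."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = len(coeffs) - 1
    # Distinct starting points spread on a circle (a common initialization).
    z = 0.9 * np.exp(2j * np.pi * (np.arange(n) + 0.5) / n)
    for _ in range(iters):
        w = p(z) / dp(z)  # Newton corrections
        for i in range(n):
            # Aberth deflation term: repulsion from the other approximations.
            s = np.sum(1.0 / (z[i] - np.delete(z, i)))
            z[i] = z[i] - w[i] / (1.0 - w[i] * s)
    return z

# Zeros of z^3 - 1: the three cube roots of unity.
roots = ehrlich_aberth([1.0, 0.0, 0.0, -1.0])
```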

Norazaliza Mohd Jamil

Pipelines transporting water are usually made of polymer materials. Chlorine is added to the water system as an oxidizing agent to prevent the spread of disease. However, exposure to a chlorinated environment can lead to polymer pipe degradation and crack formation, ultimately resulting in complete failure of the pipes. To save labor, time and operating cost in predicting the failure time of a polymer pipe, we focus on modeling and simulation. A current kinetic model for the corrosion process of polymers due to the action of chlorine is extensively analyzed from the mathematical point of view. Using nondimensionalization, the number of parameters in the original governing equations of the kinetic model is reduced. The dimensionless set of differential equations is then solved numerically by the Runge-Kutta method. Two sets of simulations are performed, for low and high chlorine concentrations, and some essential characteristics of both cases are captured. This approach provides better predictive capability and increases our understanding of the corrosion process.
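The paper's kinetic model is not reproduced in the abstract; as a hedged stand-in, here is the classical fourth-order Runge-Kutta scheme applied to a generic first-order dimensionless decay equation c' = -k c (the rate k and the interpretation as consumption of a species under chlorine attack are illustrative assumptions):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative dimensionless kinetics (not the paper's model): first-order
# decay of a concentration c under chemical attack, c' = -k c.
k, h, steps = 1.5, 0.01, 200   # integrate over t in [0, 2]
c = 1.0
for i in range(steps):
    c = rk4_step(lambda t, y: -k * y, i * h, c, h)

exact = math.exp(-k * steps * h)  # analytic solution c(2) = e^(-3)
```

The fourth-order accuracy of RK4 makes the numerical value agree with the analytic solution to many digits at this step size.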

Reem Allogmany Fudziah Ismail and Zarina Bibi Ibrahim

In this paper, we present an implicit two-point block method for solving directly general second-order ordinary differential equations (ODEs). The method incorporates the first and second derivatives of f(x, y, y'), which are the third and fourth derivatives of the solution. The method is derived using a Hermite interpolating polynomial as the basis function. The accuracy of the two-point block method is compared with that of several existing methods of order almost equal to or higher than that of the new method. Numerical results demonstrate the accuracy and efficiency of the new method, and an application of the method is discussed.

A. Artykbaev and B. M. Sultanov

A linear transformation of the plane is considered whose matrix belongs to the Heisenberg group. The transformation matrix is neither symmetric nor orthogonal, but its determinant is one. The class of second-order curves that are mapped to each other by the transformation under consideration is studied, and invariants of the curves in this class are proved. In particular, the conservation of the product of the semi-axes of curves in this class is proved, as well as the equality of the areas of the ellipses in the class under consideration. The obtained invariants of second-order curves are then applied to the second-order curve that is the indicatrix of a surface. In conclusion, a theorem is obtained proving that the total curvature of a surface in Euclidean space is invariant under the class of transformations considered, which are deformations.
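The abstract does not specify the realization of the group; in its standard realization (a textbook fact, not taken from the paper), the Heisenberg group consists of upper unitriangular matrices, each of which clearly has determinant one:

```latex
H = \left\{
  \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}
  \;:\; a, b, c \in \mathbb{R}
\right\},
\qquad
\det \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} = 1 .
```

The determinant-one property is what makes area-type quantities, such as the product of the semi-axes of an ellipse, natural candidates for invariants.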

Abdussakir

The concept of the topological index of a graph continues to diversify as researchers introduce new topological indices. Research on topological indices, which initially examined only graphs related to chemical structures, has begun to examine graphs in general. On the other hand, more and more concepts of graphs obtained from algebraic structures are being introduced; thus, studying the topological indices of a graph obtained from an algebraic structure such as a group is very interesting. One such concept is the subgroup graph, introduced by Anderson et al. in 2012, and until now there has been no research on the topological indices of the subgroup graph of the symmetric group. This article examines several topological indices of the subgroup graphs of the symmetric group for trivial normal subgroups. It focuses on determining formulae for various Zagreb indices, such as the first and second Zagreb indices and co-indices, the reduced second Zagreb index, and the first and second multiplicative Zagreb indices, as well as several eccentricity-based topological indices, such as the first and second Zagreb eccentricity indices and the eccentric connectivity, connective eccentricity, eccentric distance sum and adjacent eccentric distance sum indices of these graphs.
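For readers new to these indices, the two most basic ones have simple definitions: the first Zagreb index M1 sums deg(v)^2 over vertices, and the second Zagreb index M2 sums deg(u)*deg(v) over edges. A minimal sketch on a toy graph (the paper derives closed formulae for subgroup graphs instead of computing them directly):

```python
def zagreb_indices(adj):
    """First and second Zagreb indices of an undirected graph given as
    an adjacency dict {vertex: list of neighbours}."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    m1 = sum(d * d for d in deg.values())
    # Each undirected edge {u, v} is counted once via the u < v condition.
    m2 = sum(deg[u] * deg[v]
             for u in adj for v in adj[u] if u < v)
    return m1, m2

# Example: the triangle K3 (every vertex has degree 2).
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
m1, m2 = zagreb_indices(k3)  # M1 = 3 * 2^2 = 12, M2 = 3 * (2 * 2) = 12
```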

Lucy Twumwaah Afriyie Bashiru I. I. Saeed and Abukari Alhassan

Statistical surveys are conducted to estimate population parameters when there are reasons preventing use of the total population. In practice, there are two different survey strategies (simple and complex survey designs), and the choice of strategy depends on several factors, including the characteristics of the population and the nature of the research questions. When a complex survey design is used, standard statistical methods that do not take the complex nature of the design into account may lead to inaccurate estimates. In Ghana, living standard surveys are conducted using a complex survey design involving stratification, clustering and estimation of survey weights. In this study, bootstrap resampling methods are used to explore the effect of the complex survey design on the analysis of the child labour prevalence rate. The relative efficiency of the complex survey design approach was determined using the design effect (deff). Data from the Ghana Living Standard Survey Round 6 (GLSS 6), conducted by the Ghana Statistical Service in 2012, were used for the analysis, with children aged 5-17 years as the target population. The results of the simulation study show that relatively efficient estimates are obtained when the characteristics of the complex survey design are considered in the analysis; ignoring them could lead to unrealistic estimates.

]]>Aloev R. D. Eshkuvatov Z. K. Khudoyberganov M. U. and Nematova D. E.

In this paper, we propose a systematic approach to designing and investigating the adequacy of computational models for a mixed dissipative boundary value problem posed for symmetric t-hyperbolic systems. We consider a two-dimensional linear hyperbolic system with variable coefficients and a lower-order term in the dissipative boundary conditions. We construct a difference splitting scheme for the numerical calculation of stable solutions of this system. A discrete analogue of the Lyapunov function is constructed for the numerical verification of the stability of solutions of the considered problem. An a priori estimate is obtained for the discrete analogue of the Lyapunov function; this estimate allows us to assert the exponential stability of the numerical solution. A theorem on the exponential stability of the solution of the boundary value problem for the linear hyperbolic system, and on the stability of the difference splitting scheme in Sobolev spaces, is proved. These stability theorems give us the opportunity to prove the convergence of the numerical solution.

]]>Renz Jimwel S. Mina and Jerico B. Bacani

Numerous studies have been devoted to finding the solutions, in the set of non-negative integers, of Diophantine equations of type (1), where the values p and q are fixed. In this paper, we also deal with a more generalized form, namely equations of type (2), where n is a positive integer. We present results that guarantee the non-existence of solutions of such Diophantine equations in the set of positive integers. We use the concepts of the Legendre symbol and the Jacobi symbol, which have also been used in the study of other types of Diophantine equations. Here, we assume that one of the exponents is odd. With these results, solving Diophantine equations of this type becomes relatively easier compared with the previous works of several authors. Moreover, we can extend the results by considering the Diophantine equations (3) in the set of positive integers.
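For readers unfamiliar with the Jacobi symbol used above, the standard algorithm based on quadratic reciprocity and the supplementary law for 2 can be sketched as follows (a generic textbook implementation, not the paper's notation):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by the standard
    quadratic-reciprocity algorithm; returns 1, -1, or 0 (if gcd(a, n) > 1)."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

print(jacobi(2, 3), jacobi(2, 15), jacobi(5, 21))  # -1 1 1
```

For instance, (2/15) = (2/3)(2/5) = (-1)(-1) = 1, in agreement with the output.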

]]>Nurulkamal Masseran Lai Hoi Yee Muhammad Aslam Mohd Safari and Kamarulzaman Ibrahim

Poverty is an important issue that needs to be addressed by all countries. Poverty concerns the group of people earning a low income (the lower tail of the income distribution). In Malaysia, low-income earners are classified as the B40 group. This study aims to describe the behavior of the low-income distribution using a power law model. For this purpose, an inverse Pareto model was applied to describe the lower-tail data of Malaysian household income. A robust and efficient estimator, called the probability integral transform statistic estimator, was utilized for estimating the shape parameter of the inverse Pareto distribution. Based on the fitted inverse Pareto model, not all households in the B40 group follow the power law behavior. However, the power law was able to provide a good description of the part of the B40 group below the poverty line. Based on the inverse Pareto model, the parametric Lorenz curve and the Gini index were derived to provide a robust measure of the income inequality of poor households in Malaysia.

]]>Xin Yi Kh'ng Su Yean Teh and Hock Lye Koh

Low-lying atoll islands that depend heavily on fresh groundwater for survival are particularly vulnerable to sea level rise (SLR), which calls for appropriate climate action (SDG 13). As the sea level rises, the associated increase in surface seawater inundation and subsurface saltwater intrusion will reduce the availability of fresh groundwater due to permanent salinization of groundwater and corresponding thinning of the freshwater lens. This paper provides scientific insights into how freshwater lenses in atoll islands respond to SLR. Simulations of saturated-unsaturated variable-density groundwater flow with salt transport are performed with the groundwater flow and solute transport model SUTRA (Saturated-Unsaturated Transport) developed by the U.S. Geological Survey. Model simulations and statistical analyses suggest that freshwater lens thickness depends mainly on the groundwater recharge rate, island size and aquifer hydraulic conductivity. The impact of various geo-hydrologic parameters on fresh groundwater sustainability is then analyzed to explore the feasibility of increasing groundwater recharge through rainwater harvesting as a mitigation measure. The implication for the achievement of sustainable clean water and sanitation for all (SDG 6) is also discussed.

]]>Shahryar Sorooshian and Yasaman Parsia

Multi Attribute Decision Making (MADM) is an asset for providing solutions to today's complex issues and problems. The main source of information in many MADMs is a panel of experts. However, in some cases the panel may lack the knowledge needed to rank or weight one or a few particular criteria, so the decision maker needs an additional source of information to complete the decision-making process. Hence, the WSM (Weighted Sum Method), one of the most popular MADM techniques, is selected; and as the primary aim of this article, a modified version of the WSM is proposed as a solution for multiple-criteria decision makers in cases where another source of information is needed to rank or weight particular criteria. The modified WSM is presented in five stages. Its validity and feasibility are tested and verified in a numerical example. Additionally, following this article, future research could use the same approach to modify other MADMs to deal with two or more sources of information.

]]>Swasti Maharani Toto Nusantara Abdur Rahman As'ari and Abdul Qohar

Critical thinking is a skill needed in education. Critical thinking has two main components, i.e. ability and critical thinking disposition. The purpose of this research is to describe the critical thinking disposition of mathematics education students, especially the analyticity and systematicity components, when solving non-routine problems (problems that are illogical or incomplete). This research is a qualitative descriptive study. The stages in this study are as follows: first, students are given three non-routine questions; second, the researchers directly observe and record the subjects while they work on the problems; third, the subjects are interviewed about their resolution of the non-routine problems; fourth, conclusions are drawn by describing the critical thinking disposition of prospective mathematics teachers, especially the analyticity and systematicity components. The results show that the critical thinking disposition of first-year college students majoring in mathematics education is still low. They did not analyze the problems and answers well, did not write their answers in order, and lacked focus when solving non-routine problems. They do not yet have a strong sense of the irregularities in a problem. It is highly recommended that further research develop ways to improve students' critical thinking disposition.

]]>Beshimov R. B. and Zhuraev R. M.

In this paper, we study some topological properties of connected topological groups. From a logical point of view, the concept of a topological group arises as a simple combination of the concepts of a group and a topological space: on the same set G, a multiplication operation and a topological structure are specified simultaneously.

]]>Ivy Barley Gabriel Asare Okyere Henry Man’tieebe Kpamma James Baah Achamfour David Kweku and Godfred Zaachi

Economic trade amongst the various West African economies can lead to mutual gains or losses. It is therefore important to assess the effect that dependence amongst these countries can have on their economies. The linear correlation coefficient is normally used as a measure of dependence between random variables. However, it has limitations when used for economic variables, like stock markets, that do not follow an elliptical distribution. Copulas, however, are scale-free methods of constructing dependence structures amongst the stock markets, even in cases of data perturbations. The aim of this study is to assess the impact of data perturbations on copula models. The maximum likelihood method was used to estimate the parameters of the Archimedean copulas. The Clayton, Joe, Frank and Gumbel copulas were estimated. The Gumbel copula was the most robust copula in all the cases of data perturbations.

]]>Ruggero Ferro

An analogy with how life evolves in a town one is moving into may help us to understand what could be meant by discovery, insight and invention in mathematics. The relevant key common features of these two environments (life in another town and mathematics) are: 1) the mental abilities involved in dealing with the situation; 2) the realization that anything observed is contingent; 3) the discovery, via insight, of the motivations for what has been done and of their influence up to the present; 4) the need to understand the motivations and manners of realization of what was done in order to continue the development; 5) the continuous evolution of needs and requirements, which opens new problems that demand insight and invention for their solutions; 6) the fact that not every solution meets the goals and requirements with the same short-range and long-range convenience, so that a preventive evaluation according to criteria to be established is convenient, though a conclusive evaluation can be done only afterward. These observations justify support for a dynamic attitude toward mathematics, and the rejection of the attitude claiming that everything must be the way it is, according to a priori mental evidence that is unduly assumed.

]]>Young Whan Lee and Gwang Hui Kim

In 2001, Maksa and Páles [12] introduced a new type of stability, hyperstability, for a class of linear functional equations. Riedel and Sahoo [14] generalized a Logarithm-type functional equation associated with the distance between probability distributions. Elfen et al. [7] obtained the solution of this functional equation on a semigroup G. The aim of this paper is to investigate the hyperstability and the Hyers-Ulam stability of the Logarithm-type functional equation considered by Elfen et al. Namely, if f approximately satisfies the equation, then there exists a solution of the equation within an ε-bound of the given approximative function f.

]]>Ana Vivas-Barber and Sunmi Lee

Influenza infection shows a wide range of severity, and it is well known that a significant proportion of individuals are asymptomatic or experience mild infections. It is also widely accepted that influenza transmission dynamics depend on age distributions. A previous model for influenza transmission dynamics includes the standard Susceptible-Infected-Recovered (SIR) classes together with a quarantine (Q) class and an asymptomatic (A) class. In this work, we extend that model to an integro-partial differential model by including age structure. We establish the existence of an endemic steady-state distribution and give its explicit expression. Then, an analytic expression for the basic reproduction number is obtained. Furthermore, we prove the local and global stability of the disease-free equilibrium. Some numerical simulations of the basic reproduction number have been carried out using age-dependent influenza parameter values. This study can inform effective interventions and the implementation of age-dependent countermeasures.

]]>Barbara Abraham-Shrauner

Exact traveling (solitary) wave solutions of nonlinear partial differential equations (NLPDEs) are analyzed for third-order nonlinear evolution equations. These equations have indeterminate homogeneous balance and therefore cannot be solved by the Power Index Method (PIM). Some evolution equations are linearizable, and their solutions are transferred from those of a linear PDE. For other evolution equations, transforming to an NLPDE that has a homogeneous balance gives rise to possible solutions by the PIM. The solutions for evolution equations that are not linearizable are developed here.

]]>Cem Onat and Mahmut Daskin

The excess air coefficient (λ) is the most important parameter characterizing combustion efficiency. Conventionally, λ is measured with a flue gas analyzer, which is expensive. Estimating λ from flame images is attractive from the perspective of combustion control because it reduces the structural dead time of the combustion process. Besides, estimation systems, unlike conventional analyzers, can be used continuously in a closed-loop control system. This paper presents a basic λ prediction system based on a neural network for a small-scale nut coal burner equipped with a CCD camera. The proposed estimation system has two inputs. The first input is the stack gas temperature, measured directly in the flue. To choose the second input, eleven different matrix parameters were evaluated together with flue gas temperature values using matrix-based multiple linear regression analysis. These analyses showed that the trace of the image matrix obtained from the flame image provides higher accuracy than the other matrix parameters. The instantaneous trace value of the image matrix is then filtered to remove high-frequency dynamics by means of a low-pass filter. Experimental data of the inputs and λ are synchronously matched by a neural network. The trained network reached an accuracy of R = 0.984. The results show that the proposed estimation system, using the flame image with the assistance of the stack gas temperature, can be preferred in combustion control systems.

]]>Pierpaolo Angelini

I realized that it is possible to construct an original and well-organized theory of multiple random quantities by accepting the principles of the theory of concordance into the domain of subjective probability. A very important point relevant to such a construction is consequently treated in this paper by showing that a coherent prevision of a bivariate random quantity coincides with the notion of -product of two vectors while a coherent prevision of a quadruple random quantity coincides with the notion of -product of two affine tensors. Metric properties of the notion of -product mathematically characterize both the notion of coherent prevision of a generic bivariate random quantity and the notion of coherent prevision of a generic quadruple random quantity. Coherent previsions of bivariate and quadruple random quantities can be used in order to obtain fundamental metric expressions of bivariate and quadruple random quantities.

]]>Medhat Edous and Omar Eidous

This paper proposes an approximation to the standard normal distribution function. The introduced approximation formula is very simple and has very acceptable accuracy. Comparing the proposed approximation with other existing approximations shows that it has a simple, easily computable formula and gives good accuracy, with a maximum absolute error equal to 0.000444.
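The paper's formula itself is not reproduced in this abstract. Purely to illustrate how such an approximation is assessed, the classical logistic approximation Φ(x) ≈ 1/(1 + e^(-1.702x)) (not the formula proposed in the paper) can be compared against the exact CDF; its maximum absolute error, roughly 0.0095, is about twenty times larger than the 0.000444 reported above:

```python
import math

def phi_exact(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_logistic(x):
    """Classical logistic approximation (not the paper's formula)."""
    return 1.0 / (1.0 + math.exp(-1.702 * x))

# Maximum absolute error over a fine grid on [-5, 5]
grid = [i / 1000.0 for i in range(-5000, 5001)]
max_err = max(abs(phi_exact(x) - phi_logistic(x)) for x in grid)
print(max_err)  # on the order of 1e-2, far above the paper's 0.000444
```

The same grid-search procedure would yield the 0.000444 figure when applied to the formula proposed in the paper.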

]]>Samuel Bertrand Liyimbeme Mouchili

Since Galois rings are the generalization of Galois fields, the question we try to answer is: how can one move from the discrete logarithm in Galois fields to one in Galois rings? The concept of the discrete logarithm in Galois rings is slightly different from the one in Galois fields: here, the discrete logarithm of an element is a tuple, which is not the case in Galois fields. However, thanks to the multiplicative representation of elements in Galois rings, each element can be uniquely represented in the form ; where k is a nonnegative integer and is a generator of the Galois ring (the definition of a generator in a Galois ring is given later on). The tuple is then called the discrete logarithm of . The notion of generators in Galois rings comes from the one in group theory. Knowledge of the generators in multiplicative groups also allows us to determine the generators in Galois rings ; p is a prime number and m is a nonnegative integer greater than or equal to two. These new concepts of discrete logarithms and generators in Galois rings help to securely share common information and to perform ElGamal encryption in Galois rings.

]]>Md. Jahurul Islam Md. Shahidul Islam and Md. Shafiqul Islam

In this paper, we discuss Hausdorff measure and Hausdorff dimension. We also discuss iterated function systems (IFS) of the generalized Cantor sets and higher dimensional fractals such as the square fractal, the Menger sponge and the Sierpinski tetrahedron and show the Hausdorff measures and Hausdorff dimensions of the invariant sets for IFS of these fractals.
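For self-similar sets of the kind listed above, the Hausdorff dimension coincides with the similarity dimension log N / log s when the IFS consists of N contraction maps of ratio 1/s and satisfies the open set condition. A quick sketch (the function name and dictionary are illustrative, not the paper's notation):

```python
import math

def similarity_dimension(n_pieces, scale_factor):
    """Hausdorff (= similarity) dimension of a self-similar set made of
    n_pieces copies scaled by 1/scale_factor, under the open set condition."""
    return math.log(n_pieces) / math.log(scale_factor)

examples = {
    "middle-third Cantor set": similarity_dimension(2, 3),   # log 2 / log 3
    "Menger sponge":           similarity_dimension(20, 3),  # log 20 / log 3
    "Sierpinski tetrahedron":  similarity_dimension(4, 2),   # = 2 exactly
}
for name, dim in examples.items():
    print(f"{name}: {dim:.4f}")
```

The Sierpinski tetrahedron (4 copies at ratio 1/2) has dimension exactly 2, while the Menger sponge (20 copies at ratio 1/3) has dimension log 20 / log 3 ≈ 2.7268.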

]]>Marian Anton and Landon Renzullo

The field of computational topology is evolving rapidly, and new algorithms are updated and released at a rapid pace. A good reference for currently available open-source libraries with peer-reviewed publications can be found in [7]. In this paper we examine the descriptive potential of a combinatorial data structure known as a Generating Set in constructing the boundary maps of a simplicial complex. By refining the approach of [1] to generating these maps, we provide algorithms that allow relations among simplices to be easily accounted for. In this way we explicitly generate each face of a complex only once, even if the face is shared among multiple simplices. The result is a useful interface for constructing complexes with many relations and for extending our algorithms to ∆-complexes. Once we efficiently retrieve the representatives of "living" simplices, i.e., of those that have not been related away, the construction of the boundary maps scales well with the number of relations and provides a simpler alternative to JavaPlex [8]. We note that the generating data of a complex is equivalent in information to its incidence matrix, and we provide efficient algorithms for converting from an incidence matrix to a Generating Set.

]]>Chun P.B Ibrahim A.A and Kamoh N.M

The use of the adjacency matrix of a graph as a generator matrix for some classes of binary codes has been reported and studied. This paper concerns the utilization, in the area of code generation and analysis, of the stable variety of regular Cayley graphs of odd order that has been studied for efficient interconnection networks. The use of a succession scheme in the construction of this stable variety of regular Cayley graphs is considered. We enumerate the adjacency matrices of the regular Cayley graphs so constructed, which are of odd order (2m+1) for m ≥ 3 as in [1]. We then show that the matrices are cyclic and can be used to generate cyclic codes of odd length.
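The key step, that a cyclic (circulant) matrix generates a cyclic code, can be illustrated with a small binary example; the first row below is an arbitrary odd-length vector chosen for illustration, not an adjacency row from [1]:

```python
def circulant(first_row):
    """Matrix whose rows are the successive cyclic shifts of first_row."""
    n = len(first_row)
    return [first_row[-i:] + first_row[:-i] for i in range(n)]

def cyclic_shift(word):
    return word[-1:] + word[:-1]

# Illustrative first row of odd length 7 (coefficients of 1 + x + x^3)
rows = circulant([1, 1, 0, 1, 0, 0, 0])

# Codewords are all GF(2) combinations of the rows
codewords = set()
for mask in range(2 ** len(rows)):
    w = [0] * len(rows)
    for i, row in enumerate(rows):
        if mask >> i & 1:
            w = [(a + b) % 2 for a, b in zip(w, row)]
    codewords.add(tuple(w))

# A code generated by a circulant matrix is closed under cyclic shifts
closed = all(tuple(cyclic_shift(list(w))) in codewords for w in codewords)
print(len(codewords), closed)
```

Here the row span is the [7, 4] binary cyclic code generated by 1 + x + x^3 (16 codewords), and the closure check confirms that shifting any codeword yields another codeword.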

]]>Pokutnyi Oleksandr

Sufficient conditions for the existence of solutions of a weakly linear perturbed boundary value problem are obtained in the so-called resonance (critical) case. An iterative process for finding solutions is presented. Necessary and sufficient conditions for the existence of solutions, bounded solutions, generalized solutions and quasi-solutions are obtained.

]]>Siloko, I. U. Ishiekwene, C. C. and Oyegue, F. O.

The bivariate kernel density estimator is fundamental in data smoothing methods, especially for data exploration and visualization, due to the ease of graphical interpretation of its results. The crucial factor determining its performance is the bandwidth. We present new methods for bandwidth selection in bivariate kernel density estimation based on the principle of the gradient method and compare the results with the biased cross-validation method. The results show that the new methods are reliable and provide improved choices of the smoothing parameter. The asymptotic mean integrated squared error is used as the measure of performance of the new methods.

]]>Nora Dörmann

Let X_{i}, i ≥ 1, describe the lifetimes of items with finite mean μ = E (X_{i}) which are successively placed in service. In order to estimate the replacement rate ^{1}/_{μ} or related quantities, the random variables X_{i} are usually assumed to be independent and identically distributed. It is shown that nonparametric estimation of the replacement rate and of other reciprocal functions of renewal theory is possible using a delta method with weakened requirements on the global growth of f, which also allows dependent observations and respects the unboundedness of the analyzed reciprocal functions. Results on the moments and on corresponding simulations are also included.

Yulia Koroleva

The paper deals with the study of a Stokes-Brinkman system with varying viscosity that describes the fluid flow along an ensemble of partially porous cylindrical particles using the cell approach. Existence and uniqueness of the solution of the system are proved for an arbitrary varying viscosity. Some uniform estimates on the velocity of the flow are derived. Moreover, an auxiliary weighted Friedrichs inequality is proved for the solution of the considered system. A numerical illustration of the obtained results is given.

]]>C. Filosa J. H. M. Thije Boonkkamp and W. L. IJzerman

Ray tracing is a technique used in geometric optics for calculating the light distribution at the target of an optical system. Monte Carlo (MC) ray tracing is very common in non-imaging optics. We propose a new ray tracing method that employs the phase space of the source and the target of the system. The new method gives a more accurate target distribution than classical MC ray tracing and requires less computational time. It is tested for two-dimensional optical systems. The results for the paraboloid reflector are provided.

]]>Gülistan Kaya Gök

Let M_{2,m} be a free metabelian nilpotent Lie algebra of rank 2 and nilpotency class m-1. It is shown that M_{2,m} admits a minimal presentation whose set of defining relators consists of certain types of basic commutators of length at most m.

Dhouha Mejri Mohamed Limam and Claus Weihs

Combining methods from Statistical Process Control (SPC) in order to benefit from the efficiency of more than one method has recently been challenged. One of the reasons is that real-life problems change over time, and a small improvement can lead to a very big profit. Ensemble methods from the data mining domain have recently shown their effectiveness when used with SPC. The first combined control chart based on a dynamic ensemble method, called the dynamic weighted control chart, is designed especially for monitoring concept drift in online processes. This article presents a new model for combining more than two control charts based on ensemble methods as well as classification error rates to optimize shift identification and control. This method can be applied to offline and online processes. It is based on a three-step learning model: first, a preprocessing step prepares the data for classification; second, an ensemble method based on Dynamic Weighted Majority (DWM) is applied to aggregate the decisions of the different charts at the end of each batch; finally, shifts are identified based on the misclassification error rates of DWM. The dynamic ensemble control chart model benefits from the knowledge from both classification and control to give more precise information about the process. Experiments have shown that the latter is better than the use of individual charts and identifies the variable responsible for the out-of-control signal.

]]>V. A. Meshkoff

The «Shnoll effect» has been demonstrated in histogram studies of a wide variety of processes. This paper examines the effect mainly for the examples of radioactive decay and chemical reactions. S.E. Shnoll supposed that the observed processes are caused by unknown cosmophysical effects. In this article, we suggest not only a qualitative explanation of the effect, but also a mathematical model of it. This allows us to obtain some quantitative estimates and to optimize the process of observation and data handling. To this end, we developed a quantitative method for estimating the «similarity of histograms» that allows the use of standard computer programs. Since the «Shnoll effect» is not currently recognized by the scientific community, we suppose that the use of a mathematical model and adequate methods of data handling will allow this problem to be resolved unambiguously.

]]>Huriye Kadakal Mahir Kadakal and İmdat İşcan

In this article, by using an integral identity together with the Hölder and power-mean integral inequalities and Hermite-Hadamard's inequality, we establish several new inequalities for n-times differentiable s-convex and s-concave functions in the second sense.

]]>Otakar Kříž

An algorithm SP (= Symptom Proximity) is suggested for solving a discrete diagnostic problem. It is based on a probabilistic approach to decision-making under uncertainty; however, it does not use knowledge integration from marginal distributions.

]]>Hisham Mahdi and Fadwa Nasser

The purpose of this paper is to investigate the concepts of minimal and maximal regular open sets and their relations with minimal and maximal open sets. We study several properties of these concepts in a semi-regular space. It is mainly shown that if X is a semi-regular space, then m_{i}O(X) = m_{i}RO(X). We introduce and study a new type of sets, called minimal regular generalized closed sets. A type of topological space of special interest, called an rT_{min} space, is studied and some of its basic properties are obtained.

Peter Kopanov and Miroslav Marinov

We examine the properties of a cumulative distribution function related to the Bernoulli process. Results from the paper ^{[1]} are shown and new ones are included. Most of them concern the behaviour of the probability density function (derivative) of the given distribution.

Fiaz Hussain and Saima Zainab

In this paper, we establish strong convergence and Δ-convergence theorems for the class of generalized non-expansive multi-valued maps in a CAT(0) space. Our work extends and improves some recent results announced in the current literature.

]]>L. Rob Verdooren

The most popular designs for fitting a second-order polynomial model are the central composite designs of Box and Wilson [2] and the designs of Box and Behnken [1]. For k = 2, 4, 6 and 8 factors, the uniform shell designs of Doehlert [4] require fewer experimental runs than the central composite or Box-Behnken designs. In analytical chemistry the Doehlert designs are widely used. The uniform shell designs are based on a regular simplex, i.e., the geometric figure formed by k + 1 equally spaced points in a k-dimensional space; an equilateral triangle is a two-dimensional regular simplex. The shell designs are used for fitting a response surface to k independent factors over a spherical region. Doehlert (1930-1999) proposed in 1970 the design for k = 2 factors: starting from an equilateral triangle with sides of length 1, he constructed a regular hexagon with a centre point at (0, 0). The n = 7 experimental points are (1, 0), (0.5, 0.866), (0, 0), (-0.5, 0.866), (-1, 0), (-0.5, -0.866) and (0.5, -0.866). The 6 outer points lie on a circle with radius 1 and centre (0, 0). This Doehlert design has an equally spaced distribution of points over the experimental region, a so-called uniform space filler, where the distances between neighboring experiments are equal. Response surface designs are usually applied by scaling the coded factor ranges to the ranges of the experimental factors; the first factor covers the interval [-1, +1] and the second factor covers the interval [-0.866, +0.866]. The Doehlert design for four factors needs only 21 trials. Doehlert and Klee [5] show how to rotate the uniform shell designs to minimize the number of levels of the factors. Most of the rotated uniform shell designs have no more than five levels of any factor, whereas the central composite design has five levels of every factor. The D-optimality determinant criterion of the variance matrix of Doehlert designs is compared with that of central composite designs and Box-Behnken designs; see Rasch et al. [6].
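The geometry described above is easy to verify numerically: the six outer Doehlert points form a regular hexagon, so each lies (to the three-decimal precision quoted) at distance 1 from the centre and from its nearest neighbours. A short check:

```python
import math

# The seven Doehlert points for k = 2, as listed above
points = [(1, 0), (0.5, 0.866), (0, 0), (-0.5, 0.866),
          (-1, 0), (-0.5, -0.866), (0.5, -0.866)]

outer = [p for p in points if p != (0, 0)]

# All six outer points lie (to 3 decimals) on the unit circle
radii = [math.hypot(x, y) for x, y in outer]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Every outer point's nearest neighbour is at distance ~1, the hallmark
# of the uniform (equally spaced) shell
nearest = [min(dist(p, q) for q in outer if q != p) for p in outer]

print(all(abs(r - 1) < 2e-3 for r in radii),
      all(abs(d - 1) < 2e-3 for d in nearest))  # True True
```

The tolerance 2e-3 absorbs the rounding of √3/2 to 0.866 in the published coordinates.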

]]>Katia Vigo Ingar and Maria José Ferreira da Silva

The objective of this article is to present part of a doctoral thesis that deals with an extension of Duval's study of apprehensions in the graphic register of a two-variable function. This is highly relevant to the teaching and learning of the differential calculus of two variables, since the information that the graph of this type of function may provide is important for building knowledge of two-variable functions and for its applications. For graphic representation and knowledge building, we rely on the CAS Mathematica, given that its dynamism allows performing operations in the graphic register. Because of this, we ask ourselves: how do apprehensions take place in the CAS graphic register of two-variable functions? Our research is qualitative and exploratory, since the proposed object of study has been little studied. We believe the interaction of apprehensions in the CAS graphic register allows students to conjecture properties of two-variable functions when, for instance, a student applies those notions to optimization problems.

]]>Emrah Hanifi Firat

In economies that are open to foreign markets, the numerical value of the currency as a macroeconomic variable is of great importance, especially where the mutual dependency among economies is concerned. In terms of political economy, the targeted level of the currency has vital importance, especially in economies characterized by export-driven growth and in economies that struggle not to disrupt their macroeconomic design. Considering that each time series has a structure that is sensitive to its own internal dynamics (sometimes expressed as the time series components), these dynamics provide us with coordinates for estimation and may largely eliminate the dependency on external variables. This is exactly what has been done in this study. First of all, non-linear time series analyses are examined in terms of linearity tests, and the linearity tests are applied to all parities and for different time periods. Then, SETAR modelling, the subject of the study, is applied in order to explain the non-linear pattern in detail. The SETAR modelling process and the associated statistical analyses are applied to the relevant parities for separate time periods. The SETAR model, which belongs to the TAR family of models, performs better than many other linear and non-linear models. A secondary purpose of this study is to show that the performance of the SETAR model is superior to that of the other models on the observed values of the parities.
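As a schematic illustration of the model class (not of the fitted parities), a two-regime SETAR(1) process switches its AR coefficients according to the lagged value of the series itself; the threshold and coefficients below are invented for the sketch:

```python
import random
random.seed(0)

def simulate_setar(n, threshold=0.0, lo=(0.5, 0.6), hi=(-0.3, -0.4), sigma=0.1):
    """Two-regime SETAR(1): y_t = c + phi * y_{t-1} + eps_t, where
    (c, phi) = lo if y_{t-1} <= threshold, else hi. All values illustrative."""
    y = [0.0]
    for _ in range(n - 1):
        c, phi = lo if y[-1] <= threshold else hi
        y.append(c + phi * y[-1] + random.gauss(0.0, sigma))
    return y

series = simulate_setar(500)
share_upper = sum(v > 0.0 for v in series) / len(series)
print(len(series), 0.0 < share_upper < 1.0)  # both regimes are visited
```

With these coefficients the deterministic skeleton oscillates across the threshold, so the simulated series spends time in both regimes, which is the regime-switching behaviour that linearity tests are designed to detect.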

]]>Iftikhar I. M. Naqash

The inequitable distribution of investment funds among governorates and the autonomy of the Kurdistan Region, with its own investment policy, are often mentioned as the main causes of the huge regional differences in social development in Iraq. In this paper, the differences in social development among the 18 Iraqi governorates are analyzed using two different methods. In the first, 12 indicators of education, health and economic level are given equal weights, and the distance between every governorate and the governorate with the maximum standardized score for each individual indicator is combined into a Composite Regional Social Development Index (CRSDI_{equal}). In the second, unequal weights are given to each indicator depending on the indicator's loading in the first principal component, which identifies the weight of that indicator in a Composite Regional Social Development Index (CRSDI_{unequal}). Both methods result in the same ranking of the 18 governorates with respect to their social development level.

Victor Chulaevsky

Exponential decay of eigenfunctions and of their correlators is shown to occur in two Anderson models on the lattice of arbitrary dimension, with summable decay of infinite-range correlations of the random potential. For the proof, we check the applicability of the Fractional Moment Method.

Öznur Kulak and A. Turan Gürkanlı

Let ω_{1}, ω_{2} be slowly increasing functions and let ω_{3} be a weight function on ℝ^{n}. In Section 2 we define a bilinear multiplier from L(p_{1}, q_{1}, ω_{1}dμ) (ℝ^{n}) × L(p_{2}, q_{2}, ω_{2}dμ) (ℝ^{n}) to L(p_{3}, q_{3}, ω_{3}dμ) (ℝ^{n}) by a bounded operator B_{m}, where 1 ≤ p_{1}, p_{2}, p_{3}, q_{1}, q_{2}, q_{3} < ∞ and m(ξ,η) is a bounded, measurable function on ℝ^{n} × ℝ^{n}. We denote the space of bilinear multipliers of this type by BM (L(p_{1}, q_{1}, ω_{1}dμ) × L(p_{2}, q_{2}, ω_{2}dμ), L(p_{3}, q_{3}, ω_{3}dμ)) and study the basic properties of this space. We give methods for constructing examples of bilinear multipliers. Similarly, in Section 3, using variable exponent Lorentz spaces, we define bilinear multipliers from L(p_{1}(x), q_{1}(x)) × L(p_{2}(x), q_{2}(x)) to L(p_{3}(x), q_{3}(x)) and discuss basic properties of the space of bilinear multipliers BM (L(p_{1}(x), q_{1}(x)) × L(p_{2}(x), q_{2}(x)), L(p_{3}(x), q_{3}(x))).

Fahid Al Eibood and Omar Eidous

This paper considers a parametric model for grouped data collected via the line transect technique. The weighted exponential model is studied and investigated when the data are assumed to be grouped into intervals. The maximum likelihood method is adopted for the purpose of estimation. The resulting estimator of population abundance is compared with the corresponding estimator developed for ungrouped data, using the real stakes data of Laake.

Quay van der Hoff and Temple H. Fay

In this article, a new predator-prey model having predator saturation is proposed. The model resembles a classical Rosenzweig-MacArthur type model, but comes with an added function, the population saturation function of the predator. This function of the predator population is a factor in the predator fertility term in the model. Consequently the model behaves better than the Rosenzweig-MacArthur model, since all solutions are bounded within the population quadrant. An invariant region arises where the Poincaré-Bendixson theorem can be applied. In most cases there is but a single critical point, either an attracting spiral point suggesting a stable population pair or an unstable node, resulting in a unique limit cycle. The model is fully described and an analysis of the stability of the critical points is provided. The robustness of the model is demonstrated based on the classification of Gunawardena [8].

Nitaya Jantakoon

Weather and rainfall are the key atmospheric variables that impact crops. Extreme rainfall or drought at critical periods of a crop's development can have dramatic influences on productivity and yields. An analysis of the effect of rainfall is needed to evaluate crop production in Northeastern Thailand. Two operations were performed in the Fuzzy Logic model: the fuzzification operation and the defuzzification operation. The model's predicted outputs were compared with the actual rainfall data. Simulation results reveal that the predicted results are in good agreement with the measured data. The prediction error and Root Mean Square Error (RMSE) were calculated, and on the basis of the results obtained, it can be suggested that the fuzzy methodology is capable of efficiently handling scattered data.
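The fuzzification/defuzzification pipeline mentioned in this abstract can be sketched generically; the membership functions, rule outputs and the humidity input below are invented for illustration and are not the paper's model.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_rainfall(humidity):
    """Fuzzify one input, apply three toy rules, defuzzify by centroid."""
    # Fuzzification: degree of membership in three illustrative sets.
    low = tri(humidity, 0, 25, 50)
    med = tri(humidity, 25, 50, 75)
    high = tri(humidity, 50, 75, 100)
    # Each rule maps a fuzzy set to a representative rainfall level (mm).
    levels = [(low, 10.0), (med, 60.0), (high, 120.0)]
    # Defuzzification: weighted average (centroid of singleton outputs).
    num = sum(w * v for w, v in levels)
    den = sum(w for w, _ in levels)
    return num / den if den else 0.0
```

A full Mamdani-style system would aggregate clipped output sets before taking the centroid; the singleton version above keeps the two operations visible in a few lines.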

Simona Gozzo and Venera Tomaselli

This paper proposes an innovative methodological approach to measuring sociometric status in small groups of pupils. Although sociometric status is usually measured with indirect data collected by interview, in this study it is analysed by direct observation. This method is specifically suitable when the target population consists of pre-school children, whose cognitive competence is not as well developed as their relational abilities. Hence, the indicators constructed are more reliable than measures derived from the subjective perception of interviewed pupils. Network Analysis methods allow for the definition of sociometric status by means of regular equivalence. Employing lambda sets and cliques, we then identify further roles within distinctive small groups. The results show that sociometric status can be revealed by regular equivalence. Besides, the Network Analysis approach allows for the observation of further relational skills, not strictly associated with traditional social roles, detectable only through lambda sets and cliques.

Helmut Vorkauf

A parsimonious and robust new method, based on information theory, for analyzing multidimensional contingency tables is presented. It swiftly reveals the important relations between dependent and independent variables and detects confounding effects in a straightforward manner. The method, in its simplicity, could replace logistic regression and log-linear analysis, which, in dealing with their limitations and defects, have grown complicated and convoluted.

Gülistan Kaya Gök

Let M_{2,m,3} be a free solvable nilpotent Lie algebra of rank 2 and nilpotency class m - 1. We show that M_{2,m,3} admits a minimal presentation whose set of defining relators consists of certain types of basic commutators using techniques in Gröbner-Shirshov basis theory.

Adepoju K.A., Shittu O.I. and Chukwu A.U.

The classical Fisher-Snedecor test, which compares several population means, depends on underlying assumptions that include independence of the populations, constant variance and absence of outliers, among others. Arguably, the chief source of violation of these assumptions is the outlier, which leads to unequal variances across the populations and consequently to the failure of the classical F-test to take the correct decision on the null hypothesis. A series of robust tests have been proposed to ameliorate these lapses, with some degree of inaccuracy and limitation in terms of inflating the Type I error and the power over different combinations of parameters at various sample sizes, while still using the conventional F-table. This study focuses on developing a robust F-test, called the exponentiated F-test, obtained by introducing one shape parameter into the conventional F-distribution, capable of yielding decisions in ANOVA that are robust to the presence of outliers. The performance of the robust F-test was compared with the existing F-tests in the literature using the power of the test. Real-life and simulated data were used to illustrate the applicability and efficiency of the proposed distribution over the existing ones. Experimental data with balanced and unbalanced designs and population sizes k=3 and k=5 were simulated with 10000 replications, and varying degrees of outliers were injected randomly. The results obtained indicate that the proposed exponentiated F-test is uniformly more powerful than the conventional F-tests for analysis of variance in the presence of outliers and is therefore recommended for use by researchers.

Sarbjit Singh

This paper presents an inventory model for perishable items with constant demand, for which the holding cost increases with time. The items considered in the model are deteriorating items with a constant rate of deterioration θ. In the majority of earlier studies the holding cost has been considered constant, which is not true in most practical situations, as insurance costs, record-keeping costs and even the cost of keeping items in cold storage increase with time. In this paper a time-dependent linear holding cost is considered, so that the holding cost for the items increases with time. An approximate optimal solution is obtained, and the results are illustrated with the help of numerical examples.
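A model of this general shape can be explored numerically. The sketch below, with entirely hypothetical parameter values, uses the standard inventory level I(t) = (d/θ)(e^{θ(T−t)} − 1) for a deteriorating item with constant demand d, a linear holding cost rate h0 + h1·t, and a grid search for the cost-minimising cycle length T; it illustrates the structure, not the paper's exact formulation.

```python
import math

def avg_cost(T, d=100.0, theta=0.05, A=50.0, h0=0.5, h1=0.1, c=2.0, n=1000):
    """Average cost per unit time over one replenishment cycle of length T."""
    # Inventory level for a deteriorating item with constant demand d.
    I = lambda t: (d / theta) * (math.exp(theta * (T - t)) - 1.0)
    dt = T / n
    # Midpoint Riemann sum of the time-varying holding cost h0 + h1*t.
    holding = sum((h0 + h1 * (i + 0.5) * dt) * I((i + 0.5) * dt)
                  for i in range(n)) * dt
    # Units lost to deterioration: initial stock minus demand served.
    deteriorated = (d / theta) * (math.exp(theta * T) - 1.0) - d * T
    return (A + holding + c * deteriorated) / T  # A is the ordering cost

# Crude grid search for the cost-minimising cycle length.
best_T = min((0.05 * k for k in range(1, 200)), key=avg_cost)
```

The ordering cost pushes toward long cycles while holding and deterioration costs push toward short ones, so the average cost has an interior minimum that the grid search locates approximately.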

O.V. Troshkin

2D flows of an ideal incompressible fluid are treated in a rectangle. If analytic (representable by power series in the coordinates), the stationary flows are uniquely determined by the inflow vorticity. When vortices of spectral origin are excluded, such flows prove to be stable.

Xiao Liu

Bridges et al. (1999) identified the key features of a constructive proof of the implicit function theorem, including some applications to physics and mechanics. For mixtures of logistic distributions such information is lacking, although a special instance of the implicit function theorem prevails therein. The theorem is needed to see that the ridgeline function, which carries information about the topography and critical points of a general logistic mixture problem, is well-defined [2]. In this paper, we express the implicit function theorem and related constructive techniques in their multivariate extension and propose analogs of Bridges and colleagues' results for the multivariate logistic mixture setting. In particular, techniques such as the inverse of Lagrange's mean value theorem [4] allow one to prove that the key concept of a logistic ridgeline function is well-defined in proper vicinities of its arguments.

Alexander D. Bruno

Here we present a method for computing asymptotic expansions of solutions to algebraic and differential equations and survey some of its applications. The method is based on the ideas and algorithms of Power Geometry. Power Geometry has applications in Algebraic Geometry, Differential Algebra, Nonstandard Analysis, Microlocal Analysis, Group Analysis, Tropical/Idempotent Mathematics and so on. We also discuss a connection between Power Geometry and Idempotent Mathematics.

Sibanee Sahu and Sarat Kumar Acharya

The paper is concerned with the study of the change point problem in the inter-arrival times and service times of single server queues. Maximum likelihood estimators of the parameters are derived. A test statistic is developed and its properties are studied.

Omar Eidous and Samar Al-Salman

This paper presents a one-term approximation to the cumulative normal distribution function. The maximum absolute error of the proposed approximation is 0.0018, smaller than the 0.003 of Polya's approximation. Comparisons between the proposed approximation and the various one-term approximations stated in the literature are given.
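The classical baseline quoted here can be checked numerically. Polya's one-term approximation is Φ(x) ≈ ½(1 + sign(x)·√(1 − e^{−2x²/π})), with maximum absolute error of about 0.003; the paper's own 0.0018-error formula is not reproduced here.

```python
import math

def polya_cdf(x):
    """Polya's one-term approximation to the standard normal CDF."""
    s = math.sqrt(1.0 - math.exp(-2.0 * x * x / math.pi))
    return 0.5 * (1.0 + s) if x >= 0 else 0.5 * (1.0 - s)

def phi(x):
    """Exact standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Scan a fine grid on [-4, 4] for the worst-case absolute error.
max_err = max(abs(polya_cdf(k / 1000.0) - phi(k / 1000.0))
              for k in range(-4000, 4001))
```

The scan confirms the order of magnitude of Polya's error, which is the benchmark the proposed one-term formula improves on.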

Adepoju K.A., Chukwu A.U. and Shittu O.I.

We propose the Kumaraswamy-F (KUMAF) distribution, which is a generalization of the conventional Fisher-Snedecor F-distribution. The new distribution can be used even when one or more of the regular assumptions are violated. It is obtained by adding two shape parameters to the continuous F-distribution commonly used to test the null hypothesis in the analysis of variance (ANOVA). Statistical properties of the proposed distribution, such as the moments, the moment generating function and the asymptotic behavior, among others, were investigated. The method of maximum likelihood is used to estimate the model parameters, and the observed information matrix is derived. The distribution is found to be more flexible and robust to the regular assumptions of the conventional F-distribution. In future research, the flexibility of this distribution as well as its robustness will be examined using a real data set. The new distribution is recommended for use in most applications where the assumptions underlying the use of the conventional F-distribution for one-way analysis of variance, such as homogeneity of variance or normality, are violated, probably as a result of the presence of outlier(s). It is instructive to note that the new distribution preserves the originality of the data without transformation.

Arash Pourkia

First, referring to our previous work, 'Hopf cyclic cohomology in braided monoidal categories', we relax the restriction that the ambient category C be symmetric. We let C be non-symmetric but assume only the restriction ψ^{2} = id on the braid map corresponding to the Hopf algebra H, which is the main player in the theory. We define a family of examples of such desired braided Hopf algebras, H, living in the category of anyonic vector spaces. Next, on one hand, we prove that these anyonic Hopf algebras are the enveloping (Hopf) algebras of particular quantum Lie algebras, which we construct. On the other hand, we show that, analogous to the non-super and the super case, the well-known relationship between the periodic Hopf cyclic cohomology of an enveloping (super) algebra and the (super) Lie algebra homology also holds for these particular quantum Lie algebras.

Robert Erdahl and Viacheslav Grishukhin

By a Voronoi parallelotope P(a) we mean a parallelotope determined by inequalities that are linear in the normal vectors p, with a non-negative quadratic form a(p) as right-hand side. For a positive form a, it was studied by Voronoi in his famous memoir. For a set of vectors P, we call its dual a set of vectors P^{*} such that ⟨p, q⟩ ∈ {0, ±1} for all p ∈ P and q ∈ P^{*}. We prove that the Minkowski sum of an irreducible Voronoi parallelotope P(a) and a segment z(u) is a Voronoi parallelotope if and only if u = we, where w > 0 and e is a vector of the dual of the set of normal vectors of all facets of P(a). Then the segment z(u) is described by the same set of inequalities with wa_{e}(p) as right-hand side, and P(a) + z(u) = P(a + wa_{e}). A similar assertion is true for the Minkowski sum of a reducible Voronoi parallelotope with a segment.

Bhatt Milind B.

Independence of a suitable function of order statistics, linear relations for conditional expectations, recurrence relations between expectations of functions of order statistics, distributional properties of the exponential distribution, record values, lower record statistics, products of order statistics, the Lorenz curve, etc., are the various approaches available in the literature for the characterization of the power function distribution. In this research note a different, path-breaking approach to the characterization of the power function distribution through the expectation of a function of order statistics is given, providing a method to characterize the power function distribution which requires only an arbitrary non-constant function.

Sergey Krylov

The paper presents meta-mathematical prerequisites for basic concepts of the rigorous science called mathematics. These concepts explore a very simple idea: the hypothesis that all surrounding physical processes are basically algorithmic processes, both understandable ones and partially or fully incomprehensible ones. Mathematics is very successful in studying, formally describing and utilizing such processes, because mathematics is based on similar algorithmic ideas, methods, and structures. These facts allow us to formulate more precisely useful mathematical (meta-scientific) concepts concerning some well-known scientific problems in various rigorous theories, including the theory of "object calculus", the theory of automatic cognition, the theory of biological evolution, the theory of heterogeneous electronic systems, the theory of logics in various chemical transformations, the basic architecture of completely programmable universal (multi-purpose) synthesizers-analyzers for various objects, and so on.

Imdat Iscan, Mustafa Aydin and Sema Dikmenoglu

In this paper, we establish some estimates, involving the Euler Beta function and the Hypergeometric function of the integral for the class of functions whose certain powers of the absolute value are harmonically convex.

Beshimov R.B. and Mamadaliev N.K.

In the paper it is proved that if a covariant functor F : Comp → Comp is weakly normal, then for any infinite Tychonoff space X the following inequalities hold: d( (X)) ≤ d(X), d( (X)) ≤ d(X), wd( (X)) ≤ wd(X), wd( (X)) ≤ wd(X).

Molete Mokhele and Caston Sigauke

Electricity demand exhibits a large degree of randomness in South Africa, particularly in summer. Its description requires a detailed analysis using statistical methodologies, in particular stochastic processes. The paper presents a Markov chain analysis of peak electricity demand. The data used are from South Africa's power utility company Eskom, for the period 2000 to 2011. This modelling approach is important to decision makers in the electricity sector, particularly in scheduling maintenance and refurbishment of power plants. The randomness effect is attributable to meteorological factors and major electricity appliance usage. Aggregated data on daily electricity peak demand are used to develop the transition probability matrices, steady-state probabilities, mean return times and first passage times. Such analysis is important to Eskom and other energy companies in planning load-shifting, load analysis and scheduling of electricity, particularly during the peak period in summer.
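The quantities named in this abstract (transition matrix, steady-state probabilities, mean return times) can be illustrated on a toy chain; the three demand states and transition probabilities below are invented, not Eskom data.

```python
# Hypothetical 3-state peak-demand chain (states: low, medium, high).
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

def steady_state(P, iters=500):
    """Stationary distribution via power iteration: pi <- pi @ P."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

pi = steady_state(P)
# For an ergodic chain, the mean return time to state i is 1 / pi[i].
mean_return = [1.0 / p for p in pi]
```

Power iteration suffices here because the chain is regular; for larger chains one would solve pi·P = pi directly as a linear system.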

Akram H. Begmatov, M.E. Muminov and Z.H. Ochilov

We study a new problem of reconstructing a function in a strip from its given integrals, with a known weight function, along polygonal lines. We obtain two simple inversion formulas for the solution of the problem. We prove uniqueness and existence theorems for solutions, obtain stability estimates for a solution to the problem in Sobolev spaces, and thus show its weak ill-posedness. Then we consider integral geometry problems with perturbation. The uniqueness theorems are proved and stability estimates of solutions in Sobolev spaces are obtained.

]]>Victor Chulaevsky

We study the regularity of the conditional distribution of the empiric mean of a finite sample of IID random variables with a bounded common probability density, conditional on the sample "fluctuations", and extend a prior result, proved for strictly positive smooth densities, to a larger class of smooth densities vanishing at one or more points of their support.

Beshimov R.B., Mamadaliev N.K. and Mukhamadiev F.G.

In the paper the local density and the local weak density of topological spaces are investigated. It is proved that for stratifiable spaces the local density and the local weak density coincide, and that these cardinal numbers are preserved under open mappings and are inverse invariants of the class of closed irreducible mappings. Moreover, it is shown that the functor of probability measures with finite supports preserves the local density of compacta.

Ye-zhi Xiao and Sha Fu

This study proposes a grey-correlation multi-attribute decision-making method based on intuitionistic trapezoidal fuzzy numbers to address decision problems in which the attribute weights depend on the alternatives' statuses and the attribute values are given as intuitionistic trapezoidal fuzzy numbers. Firstly, this paper gives the definition of intuitionistic trapezoidal fuzzy numbers and the distance formula. Then, the grey-correlation coefficient for intuitionistic trapezoidal fuzzy numbers is obtained through grey-correlation analysis, and the correlation degree between the options is calculated from the correlation coefficient. With that, the options are ranked based on these values to identify the optimal option. Finally, the analysis of examples demonstrates the feasibility and effectiveness of the proposed method.

Sha Fu

This paper takes the time weights and attribute weights in different periods into consideration and proposes a dynamic triangular-fuzzy-number multi-attribute decision-making method to solve decision problems with triangular fuzzy numbers as attribute values. The method utilizes the characteristics of triangular fuzzy numbers to establish a correlation model between the evaluation schemes and the positive and negative ideal schemes, and obtains a comprehensive ranking of the evaluation schemes, thus acquiring the decision-making result. Finally, the paper demonstrates the feasibility and validity of the proposed method through an instance analysis.

Wun-Yi Shu (許文郁)

In the late 1990s, observations of type Ia supernovae led to the astounding discovery that the universe is expanding at an accelerating rate. The explanation of this anomalous acceleration has been one of the great problems in physics since that discovery. We propose cosmological models that can simply and elegantly explain the cosmic acceleration via the geometric structure of the spacetime continuum, without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. In this geometry, the three fundamental physical dimensions (length, time, and mass) are related in a new kind of relativity. There are four conspicuous features of these models: 1) the speed of light and the gravitational "constant" are not constant, but vary with the evolution of the universe, 2) time has no beginning and no end; i.e., there is neither a big bang nor a big crunch singularity, 3) the spatial section of the universe is a 3-sphere, and 4) in the process of evolution, the universe experiences phases of both acceleration and deceleration. One of these models is selected and tested against current cosmological observations, and is found to fit the redshift-luminosity distance data quite well.

]]>Karwan H. F. Jwamer and Hawsar Ali HR

This paper deals with the behavior of the solution and the asymptotic behavior of the eigenvalues of a fourth order boundary value problem (1) with boundary conditions, where the coefficients are real valued functions, ρ(χ) = 1, and λ is a spectral parameter, under certain assumptions on the coefficients.

Zul Amry and Adam Baharum

The main purpose of this study is to find the Bayesian forecast of the ARMA model under Jeffreys' prior assumption with a quadratic loss function. The point forecast model is obtained based on the mean of the marginal conditional posterior predictive distribution in mathematical expression. Furthermore, the point forecast model of Bayesian forecasting is compared to traditional forecasting. The simulation shows that the forecast accuracy of Bayesian forecasting is better than that of traditional forecasting, and the descriptive statistics of Bayesian forecasting are closer to the true values than those of traditional forecasting.

Robert Bruner, Khairia Mira, Laura Stanley and Victor Snaith

Let p be a prime. We calculate the connective unitary K-theory of the smash product of two copies of the classifying space for the cyclic group of order p, using a Künneth formula short exact sequence. As a corollary, using the Bott exact sequence and the mod 2 Hurewicz homomorphism, we calculate the connective orthogonal K-theory of the smash product of two copies of the classifying space for the cyclic group of order two.

V. Amarendra Babu and T. Anitha

We introduce the concept of vague implicative LI-ideals of lattice implication algebras and discuss some of their properties. We study the relationship between v-implicative filters, vague ILI-ideals and ILI-ideals. An extension property of vague implicative LI-ideals is established.

Victor Chulaevsky

We prove an optimized estimate for the regularity of the conditional distribution of the empiric mean of a finite sample of IID random variables, conditional on the sample "fluctuations". Prior results, based on bounds in probability, provided a Hölder-type regularity of the conditional distribution. We establish Lipschitz regularity, using bounds in expectation. The new estimate, extending a well-known property of Gaussian IID samples, is a crucial ingredient of the Multi-Scale Analysis of multi-particle Anderson-type random Hamiltonians in a Euclidean space. In particular, the Hölder regularity of the multi-particle eigenvalue distribution, sufficient for the localization analysis of N-particle lattice Hamiltonians with N ≥ 3, needs to be replaced by Lipschitz regularity for similar Hamiltonians in the Euclidean space.

Ali M. Mosammam

The Kalman filter is a recursive estimator and plays a fundamental role in statistics for filtering, prediction and smoothing. The key element in any recursive estimator is the estimate of the current state, x_k, at time k, based on observations up to and including observation k; the Kalman filter enables the estimate of the state to be updated as new observations become available. In this paper we derive the Kalman filter as simply as possible.
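The recursive predict/update structure described above is easiest to see in the scalar case. The sketch below tracks a constant true value through noisy observations; the process and measurement variances (q, r) are illustrative choices, not values from the paper.

```python
import random

def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, noisy observations zs."""
    x, p = x0, p0
    for z in zs:
        p = p + q                # predict: state variance grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the innovation z - x
        p = (1.0 - k) * p        # posterior variance shrinks
    return x, p

random.seed(0)
true_value = 1.5
zs = [true_value + random.gauss(0.0, 0.5) for _ in range(500)]
est, var = kalman_1d(zs)
```

With a small q the gain settles near √(q/r), so the filter behaves like an exponentially weighted average whose window adapts automatically to the stated noise levels.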

Kwadwo Agyei Nyantakyi, B.L. Peiris and L.H.P. Gunaratne

Change-point analysis is a powerful tool for determining whether a change has taken place. In this paper we study structural changes in the Conditional Quantile Polynomial Distributed Lag (QPDL) model using change-point analysis, employing both the Binary Segmentation (BinSeg) and Cumulative Sum (Cusum) methods. We study the structural changes in both correctly specified and misspecified QPDL models. As an economic application we consider the production of rubber and its price returns. We observe that both the Cusum and BinSeg methods correctly detected the structural changes for both the correctly specified and the misspecified QPDL model. The Cusum method gave the exact positions where the structural changes occurred, and BinSeg gave approximate positions. Both methods were able to detect the shift in time for both the mean and the variance of the misspecified QPDL model; hence both methods are suitable for assessing structural stability in QPDL models. The practical impact is that changes made to the data, knowingly or unknowingly, can be detected, along with the times at which these changes took effect. We further observed that both methods are powerful tools that characterize the changes well, control the overall error rate, are robust to outliers, and are flexible and simple to use.
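The Cusum idea used in this abstract can be shown in its simplest form: for a single mean shift, the cumulative sum of deviations from the overall mean peaks (in absolute value) at the change point. The synthetic series below is invented for illustration.

```python
def cusum_change_point(xs):
    """Locate a single mean shift as the argmax of |CUSUM| deviations."""
    n = len(xs)
    mean = sum(xs) / n
    s, best_k, best_val = 0.0, 0, -1.0
    for k, x in enumerate(xs, start=1):
        s += x - mean              # cumulative deviation from overall mean
        if abs(s) > best_val:
            best_val, best_k = abs(s), k
    return best_k                  # 1-based length of the first segment

# Synthetic series: mean 0 for 50 points, then mean 3 for 50 points.
series = [0.0] * 50 + [3.0] * 50
cp = cusum_change_point(series)
```

Production-grade change-point tools add significance testing (e.g. by bootstrapping the CUSUM range) and recurse on the two segments, which is essentially what binary segmentation does.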

Mohammad Shafique, Fatima Abbas and Atif Nazir

The two-dimensional stagnation flows of Newtonian fluids towards a shrinking sheet have been solved numerically using the SOR iterative procedure. Similarity transformations have been used to reduce the highly nonlinear partial differential equations to ordinary differential equations. The results have been calculated on three different grid sizes to check their accuracy. The problem relates to flow towards a shrinking sheet for certain parameter values and to flow towards a stretching sheet for others. The numerical results for Newtonian fluids are found to be in good agreement with those obtained previously.

Sorokin O.S.

The K-theoretical aspect of commutative morphic rings is established, using the arithmetical properties of morphic rings in order to obtain a ring of all Smith normal forms of matrices over the morphic ring. The internal structure and basic properties of such rings are discussed, as well as their presentations by Witt vectors. In the case of commutative von Neumann regular rings, the famous Grothendieck group K_{0}(R) admits an alternative description.

A. C. Paul and S. Chakraborty

Let U be a non-zero σ-square closed Lie ideal of a 2-torsion free σ-prime Τ-ring M satisfying the condition aαbβc = aβbαc for all a, b, c ∈ M and α, β ∈ Τ, and let d be a derivation of M such that dσ = σd. We prove here that (i) if d acts as a homomorphism on U, then d = 0 or U ⊆ Z(M), where Z(M) is the centre of M; and (ii) if d acts as an anti-homomorphism on U, then d = 0 or U⊆ Z(M).

Mehsin Jabel Atteya

The main purpose of this paper is to study and investigate some results concerning generalized Jordan derivations and generalized derivations on a semiprime ring R, where D is an additive mapping on R acting as a left centralizer.

R. M. Dzhabarzadeh

In this paper we present two criteria for the existence of common eigenvalues of several operator pencils having discrete spectra. One of the criteria is proved by using analogs of the resultant for several operator pencils; the proof of the other criterion requires the results of multiparameter spectral theory. In both cases the number of operator pencils is finite, and the pencils act, generally speaking, in different Hilbert spaces.

Mehsin Jabel Atteya and Dalal Ibraheem Rasen

The main purpose of this paper is to study and investigate skew-commuting and skew-centralizing derivations d and g on a noncommutative prime ring and on a semiprime ring R; we obtain d(R) = 0 (resp. g(R) = 0).

K.N.S. Yadava, Shruti and J. Pandey

Some models have been proposed for the projection of the future size of a population, for the short and long terms, under stability conditions with a changed regime of the fertility schedule. The main aim of this paper is to examine the size of the population if fertility is curtailed to the replacement level, especially in developing countries. The models have been illustrated using a set of real and hypothetical data consistent with the current demographic scenario of India. It was found that the proposed models are extended forms of models developed by previous researchers, and the projected population is more or less consistent with them.

Behnam Talaee

Let R be a ring and M an R-module. A module N ∈ [M] is called M-small if N ≪ K for some K ∈ [M]. The torsion theory cogenerated by M-small modules is introduced and investigated in [9]. Also, as a generalization of M-small modules, -M-small modules are studied in [6]. In this paper we introduce M-delta (briefly M-D) modules and investigate the torsion theory cogenerated by such modules. We obtain some equivalent conditions for when N is equal to its torsion submodule cogenerated by M-D modules. In particular we show that D(N;A) = 0 for all A ∈ [M] iff N = ReD[M](N). Some other important properties of this kind of module are obtained.

N. Azimi and M. Amirabadi

A non-nilpotent finite group all of whose proper subgroups are nilpotent (equivalently, a finite group without non-nilpotent proper subgroups) is called a Schmidt group. O.Yu. Schmidt (1924) studied such groups and proved that they are solvable. More recently, Zarrin generalized Schmidt's theorem and proved that every finite group with fewer than 22 non-nilpotent subgroups is solvable. In this paper, we show that every locally graded group with fewer than 22 non-nilpotent subgroups is solvable.

J. Venetis

In this paper, the author obtains an analytic exact form of the unit step function, which is also known as the Heaviside function and constitutes a fundamental concept of Operational Calculus. In particular, this function is equivalently expressed in closed form as the sum of two inverse trigonometric functions. The novelty of this work is that the proposed exact representation is not given in terms of non-elementary special functions, e.g. the Dirac delta function or the error function, and is neither the limit of a function nor the limit of a sequence of functions with pointwise or uniform convergence. Therefore it may be much more appropriate for the computational procedures that are embedded in Operational Calculus techniques.
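Whether or not it coincides with the paper's representation, one classical identity of exactly this flavor follows from arctan(x) + arctan(1/x) = (π/2)·sign(x): for every x ≠ 0, H(x) = 1/2 + (1/π)(arctan x + arctan(1/x)), an exact closed form built from two inverse trigonometric functions.

```python
import math

def heaviside(x):
    """Unit step via two inverse tangents; exact for all x != 0."""
    return 0.5 + (math.atan(x) + math.atan(1.0 / x)) / math.pi
```

The two arctangents cancel to ±π/2 exactly in real arithmetic, so floating-point evaluation is accurate to machine precision away from 0; the formula is undefined at x = 0, as is common for closed forms of the step function.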

S.V.S. Girija, A.V. Dattatreya Rao and G.V.L.N. Srihari

In this paper, a new discrete circular model, the Wrapped Binomial model is constructed by applying the method of wrapping a discrete linear model. The characteristic function of the Wrapped Binomial Distribution is also derived and the population characteristics are studied.
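The wrapping construction named in this abstract is mechanical: probabilities of a linear discrete distribution are folded onto m equally spaced points of the circle by summing over congruence classes. A sketch for a Binomial(n, p) base distribution (parameter values illustrative):

```python
from math import comb

def wrapped_binomial_pmf(n, p, m):
    """PMF of Binomial(n, p) wrapped onto the m circle points 2*pi*j/m."""
    probs = [0.0] * m
    for k in range(n + 1):
        # Mass at k lands on the circle point with index k mod m.
        probs[k % m] += comb(n, k) * p ** k * (1.0 - p) ** (n - k)
    return probs

pmf = wrapped_binomial_pmf(n=10, p=0.3, m=4)
```

The same loop wraps any distribution supported on the non-negative integers; only the linear PMF inside the sum changes.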

Erhan Piskin

In this work, we consider the initial boundary value problem for the Kirchhoff-type equations with damping and source terms in a bounded domain. We prove the blow up of the solution with positive initial energy by using the technique of [26] with a modification in the energy functional due to the different nature of problems. This improves earlier results in the literature [3, 9, 13, 21].

Swapnil Srivastava and Punish Kumar

In this paper, we define the concept of the p-map and study some of its properties. Using this map, we show that p(G) is a subgroup of G and that S = {x : p(x) = e} is a right transversal (with identity) of p(G) in G, which becomes a group by using the p-map and some additional conditions. Finally, we show that G is an extension of p(G).

Bhatt Milind B.

Normally the mass of a root has a uniform distribution, but some follow a different uniform-type law called the generalized uniform distribution (GUD). A characterization result based on the expectation of a function of order statistics is obtained for the generalized uniform distribution. Applications are given for illustrative purposes.

Chris Gilbert Waltzek

This paper builds on Goldbach’s weak conjecture, showing that all primes to infinity are composed of 3 smaller primes, suggesting that the modern definition of a prime number may be incomplete, requiring revision. The results indicate that prime numbers should include 1 as a prime number and 2 as a composite number, adding a new dimension to the most fundamental of all integers.

Uri Itai

Generalization of subdivision schemes refining points to schemes refining more complex geometric objects has become popular in recent years. In this paper we generalize corner-cutting schemes in order to refine curves while taking into account the geometry of the curves. We provide conditions guaranteeing that these schemes are well defined and converge to limits with continuous tangents.

Sergey V. Sudoplatov

We apply a general approach for distributions of binary isolating and semi-isolating formulas to the class of strongly minimal theories. To this end we introduce and use the notion of forcing of infinity. Structures associated with binary formulas in strongly minimal theories, containing compositions and Boolean combinations, are characterized: a list of basic structural properties for these structures, including the forcing of infinity, is presented, and it is shown that structures satisfying this list of properties are realized in strongly minimal theories.

Rustamova S. Mastura

In this paper, a formula for calculating the Martinelli-Bochner integral of functions from L^{p} in the half-space is obtained.

K. P. Samanta, B. C. Chandra and C. S. Bera

In this paper we obtain some novel generating functions of a modification of the Gegenbauer polynomials by utilizing L. Weisner's group-theoretic method. By giving suitable interpretations to both the index (n) and the parameter (λ) of the polynomial under consideration, we obtain, in Section 2, a set of infinitesimal operators, known as raising and lowering operators, which generate a four-dimensional Lie algebra. Finally, in Section 3, we derive a novel generating function of the modified Gegenbauer polynomials, which in turn yields a number of new and known results on generating functions.

Iryna Dubovets’ka, Oleksandr Masyutka and Mikhail Moklyachuk

The spectral theory of isotropic random fields in Euclidean space developed by M. I. Yadrenko is exploited to solve the problem of optimal linear estimation of a functional depending on unknown values of a random field ζ(j, x), j ∈ Z, x ∈ S_{n}, that is periodically correlated (cyclostationary with period T) with respect to time and isotropic on the sphere S_{n} in the Euclidean space E_{n}. Estimates are based on observations of the field ζ(j, x) + θ(j, x) at points (j, x), j = 0,−1,−2, . . . , x ∈ S_{n}, where θ(j, x) is a random field uncorrelated with ζ(j, x), periodically correlated with respect to time and isotropic on the sphere S_{n}. Formulas for computing the value of the mean-square error and the spectral characteristic of the optimal linear estimate of the functional Aζ are obtained. The least favorable spectral densities and the minimax (robust) spectral characteristics of the optimal estimates of the functional Aζ are determined for some special classes of spectral densities.

Chii-Huei Yu and Bing-Huei Chen

This paper uses the mathematical software Maple as an auxiliary tool to study two types of multiple integrals. We obtain the infinite series forms of these two types of multiple integrals by using binomial series and the integration term-by-term theorem. In addition, we propose some examples to demonstrate the calculations in practice. The research methods adopted in this study involve finding solutions through manual calculations and verifying these solutions by using Maple.

Dais George

Heavy-tailed distributions have wide applications in life-time contexts, especially in reliability and risk modeling. We therefore consider the problem of estimating the reliability R = P(X > Y) for small samples, when X and Y are two independent but not identically distributed random variables belonging to the family of heavy-tailed distributions, using a robust estimator, namely the harmonic moment estimator. Extensive simulation studies are carried out to study the performance of this estimator, and its relative efficiency with respect to the well-known Hill estimator is examined. We obtain the sampling distribution of the parameters of the distribution as well as that of the estimator of R, which helps us to study the properties of the estimators. We also find the asymptotic confidence intervals of R, whose performance is studied with respect to average width and coverage probability through simulations.

Amit Choudhury

Of all statistical distributions, the standard normal is perhaps the most popular and widely used. Its use often involves computing the area under its probability curve. Unlike many other statistical distributions, there is no closed-form theoretical expression for this area in the case of the normal distribution; consequently it has to be approximated. While there are a number of highly complex but accurate algorithms, some simple ones have also been proposed in the literature. Even though the simple ones may not be very accurate, they are nevertheless useful, as accuracy has to be gauged vis-à-vis simplicity. In this short paper, we present another simple approximation formula for the cumulative distribution function of the standard normal distribution. The new formula is fairly good when judged vis-à-vis its simplicity.
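The paper's own formula is not reproduced in this abstract; as a generic illustration of the simplicity-versus-accuracy trade-off it discusses, the sketch below checks a well-known one-line logistic approximation to the standard normal CDF Φ(x) against the exact value computed via the error function.

```python
from math import erf, exp, sqrt

def phi_exact(x):
    # exact standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi_logistic(x):
    # classical one-line logistic approximation (not the paper's formula)
    return 1.0 / (1.0 + exp(-1.702 * x))

# maximum absolute error on a grid over [-4, 4]
max_err = max(abs(phi_exact(i / 100.0) - phi_logistic(i / 100.0))
              for i in range(-400, 401))
```

The maximum absolute error of this particular one-liner stays below 0.01 everywhere, which is the kind of accuracy-for-simplicity balance such simple formulas aim at.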

Alexander A. Butov

The optimal control problem for the intensity of observation events of a random walk process is considered for the case of a counting Poisson process, in semimartingale terms. The loss function comprises a linear function of the intensity as the cost of observations and the expected value of the quadratic form of the estimation errors as the cost of an error. An analogous result for the problem of the optimal intensity of stochastic approximation is also presented.

R. Subba Rao, A. Naga Durgamamba and R. R. L. Kantam

In this paper, a hybrid group acceptance sampling plan is introduced for a truncated life test when the lifetimes of the items follow a size-biased Lomax model. The minimum number of testers and the acceptance number are obtained when the consumer's risk, the test termination time and the group size are pre-specified. The operating characteristic values and the minimum ratios of the true mean life to the specified mean life for a given producer's risk are also derived. The results are discussed through an example, and a comparative study of the proposed sampling plan with an existing sampling plan is elaborated.

S. V. S. Girija, A. J. V. Radhika and A. V. Dattatreya Rao

Offsetting, one of the available techniques for constructing circular models, has not received much attention, in particular for the construction of arc models. Here, making use of the method of offsetting on bivariate distributions, l-arc models are constructed. The method of transforming a bivariate linear random variable to its directional component is called offsetting, and the resulting distribution of the directional component is called the offset distribution, which is a univariate circular model. By employing the concept of arc models, we obtain the Offset Semicircular Cauchy model. We also obtain arc models directly by applying offsetting to bivariate linear models such as the Bivariate Beta and Bivariate Exponential models. These arc models arise in natural phenomena. Some of the newly proposed semicircular/arc models are bimodal, and the population characteristics of the offset semicircular and arc models are studied.

Said Broumi, Florentin Smarandache and Pabitra Kumar Maji

S. Broumi and F. Smarandache introduced the concept of the intuitionistic neutrosophic soft set as an extension of soft set theory. In this paper we apply the concept of the intuitionistic neutrosophic soft set to ring theory. The notion of an intuitionistic neutrosophic soft set over a ring (INSSOR for short) is introduced and its basic properties are investigated. The intersection, union, AND, and OR operations over a ring (INSSOR) are also defined. Finally, we define the product of two intuitionistic neutrosophic soft sets over a ring.

V. Amarendra Babu, M. Srinivasa Reddy and P. V. Srinivasa Rao

A partial semiring is a structure possessing an infinitary partial addition and a binary multiplication, subject to a set of axioms. The partial functions under disjoint-domain sums and functional composition form a partial semiring. In this paper we introduce the notions of an (R, S)-partial bi-semimodule and an (R, S)-homomorphism of (R, S)-partial bi-semimodules, and extend the results on partial semimodules over partial semirings by P. V. Srinivasa Rao [8] to (R, S)-partial bi-semimodules.

B. Satyanarayana, R. Durga Prasad and L. Krishna

The notion of BCK-algebras was initiated by Imai and Iseki in 1966 as a generalization of both classical and non-classical positional calculus. In 1999, Huang and Chen introduced the notion of n-fold positive implicative ideals in BCK-algebras. In 2011, Satyanarayana and Durga Prasad introduced foldness of intuitionistic fuzzy positive implicative ideals in BCK-algebras. In this paper, we introduce the notions of n-fold positive implicative ideals, n-fold positive implicative Artinian (shortly, PI^{n}-Artinian) and n-fold positive implicative Noetherian (shortly, PI^{n}-Noetherian) BCK-algebras and study some of their properties.

Rachid Assel, Mouez Dimassi and Claudio Fernandez

The main purpose of this note is to study spectral properties of the Stark magnetic Hamiltonian H(μ, ϵ) on the Hilbert space L^{2}(R^{2}). We show that if the potential V satisfies some mild regularity conditions and decays sufficiently at infinity, then the operator H(μ, ϵ) has at most a finite number of eigenvalues.

Maddalena Cavicchioli

We present various closed-form formulae for the spectral density of multivariate and univariate ARMA models subject to Markov switching, and describe some new properties of these densities. Many examples and numerical applications are proposed to illustrate the behaviour of the spectral density. This turns out to be useful for investigating various concepts of stationarity via spectral analysis.

Maksym Luz and Mikhail Moklyachuk

The problem of optimal estimation of linear functionals depending on the unknown values of a stochastic process ξ(t), t ∈ R, with stationary n-th increments, from observations of the process at points t < 0, is considered. Formulas for calculating the mean square error and the spectral characteristic of the optimal linear estimates of the functionals are derived in the case where the spectral density of the process is exactly known. Formulas that determine the least favorable spectral densities and the minimax (robust) spectral characteristic of the optimal linear estimates of the functionals are proposed in the case where the spectral density of the process is not exactly known, but a set of admissible spectral densities is given.

Zhenmin Chen

Checking whether or not the population distribution from which a random sample is drawn is a specified distribution is a popular topic in statistical analysis; such a problem is usually called a goodness-of-fit test. Numerous research papers have been published in this area. The purpose of this short paper is to provide a goodness-of-fit test statistic which works for many kinds of censored data formed by order statistics. This is an extension of the work presented in Chen and Ye (2009). The method can be used for censored samples that are commonly used in reliability analysis, including left-censored, right-censored and doubly censored data.

Said Broumi and Florentin Smarandache

Hesitancy is the most common problem in decision making, for which the hesitant fuzzy set can be considered a useful tool, allowing several possible degrees of membership of an element to a set. Recently, another suitable tool was defined by Zhiming Zhang [1], called interval-valued intuitionistic hesitant fuzzy sets, which deals with uncertainty and vagueness and is more powerful than hesitant fuzzy sets. In this paper, four new operations are introduced on interval-valued intuitionistic hesitant fuzzy sets and several important properties are also studied.

Md. Jalilul Islam Mondal and Tapan Kumar Roy

The aim of this paper is to solve multi-criteria decision-making problems for a selected project using intuitionistic fuzzy soft matrices based on generalized t-norm and t-conorm operators. We use the concept of level operators of intuitionistic fuzzy sets [K. T. Atanassov, On Intuitionistic Fuzzy Sets Theory, Springer-Verlag, 2012] to define the intuitionistic fuzzy soft level matrix. Finally, we give an application to a decision-making problem using t-norm and t-conorm operators.

Chien-Wei Chang, Yen-Huang Hsu and H. T. Liu

Let p, q, T be positive real numbers, B = {x ∈ R^{n} : }, ∂B = {x ∈ R^{n} : }, x^{∗} ∈ B, and let △ be the Laplace operator in R^{n}. In this paper, the following initial boundary value problem with localized reaction term is studied: , where u_{0} ≥ 0. The existence of a unique classical solution is established. When x^{∗} = 0, a quenching criterion is given. Moreover, the rate of change of the solution at the quenching point near the quenching time is studied.

J. Madhusudan Rao and P. Sumati Kumari

This paper proves the existence of periodic and fixed points for self-maps satisfying some contractive conditions in a symmetric space. We also prove coincidence and fixed point results, without any continuity requirement, under a slightly more general form of Sehgal's contractive condition, together with a suitable example.

V. Kaviyarasu

This paper studies the design of a new attribute sampling plan based on the Quick Switching Conditional Repetitive Group Sampling System (QSCRGSS)-3, indexed through the Average Outgoing Quality (AOQ), the Average Outgoing Quality Limit (AOQL) and its Operating Ratio (OR). Tables and numerical illustrations are provided for the various parameters of the newly developed plan.

Alexander G. Gein and Mikhail P. Shushpanov

We construct a system of 11 defining relations for the 3-generated free modular lattice and prove that this system is minimal. Systems of defining relations for lattices close to modular ones are also studied.

R.K. Saxena

The object of this article is to investigate the solutions of the one-dimensional linear fractional diffusion equations defined by (2.1) and (4.1). The solutions are obtained in closed and elegant forms in terms of the H-function and generalized Mittag-Leffler functions, which are suitable for numerical computation. The derived results include the results for the one-dimensional linear fractional telegraph equation due to Orsingher and Beghin [1], and results recently derived by Saxena, Mathai and Haubold [2].

Marian Matłoka

We consider and study a new class of convex functions called (h_{1},h_{2})-preinvex functions on the co-ordinates. Some Hermite-Hadamard inequalities for (h_{1},h_{2})-preinvex functions on the co-ordinates and their variant forms are derived. Some of our theorems are new, and others generalize some results of Dragomir and Latif.

Chii-Huei Yu

This paper takes the mathematical software Maple as an auxiliary tool to study four types of integrals. We obtain the Fourier series expansions of these four types of integrals by using the integration term-by-term theorem. In addition, we provide two examples to demonstrate the calculations in practice. The research methods adopted in this study involve finding solutions through manual calculations and verifying these solutions by using Maple.

Alberto Cavicchioli, Friedrich Hegenbarth, Yurij V. Muranov and Fulvia Spaggiari

In this paper we describe some relations between various structure sets which arise naturally for a Browder-Livesay filtration of a closed topological manifold. We use the algebraic surgery theory of Ranicki for realizing the surgery groups and natural maps on the spectrum level. We also obtain new relations between Browder-Quinn surgery obstruction groups and structure sets. Finally we illustrate several examples and applications.

R.F. Al Subaie and M.A. Mourou

We consider a singular differential operator Δ on the half line which generalizes the Bessel operator. Using harmonic analysis tools corresponding to Δ, we construct and investigate a new continuous wavelet transform on [0,∞[ tied to Δ. We apply this wavelet transform to invert an intertwining operator between Δ and the second derivative operator d^{2}/dx^{2}.

Hakan Ozturk

The main interest of the present paper is to study α-cosymplectic manifolds satisfying certain tensor conditions. In particular, we consider α-cosymplectic manifolds with flatness conditions. We prove that there cannot exist ϕ-projectively flat α-cosymplectic manifolds with zero scalar curvature in dimensions greater than three. Furthermore, we work with special weakly Ricci-symmetric α-cosymplectic manifolds. We conclude the paper with an example of an α-cosymplectic manifold.

Sergey Akimov and Olga Kvan

The article addresses the problem of deciding whether a sample from a general population is discrete or continuous. A definition of discreteness is given; the main task of the research is to determine the continuity or discreteness of unknown data. We first consider the existing methodology, which searches for repeated occurrences of individual values in the sample under test. This procedure is described mathematically, and its basic disadvantage is noted: its results are very difficult to interpret. This motivates the task of creating an algorithm that determines whether data are continuous or discrete. The new algorithm is also based on searching for matches in the data array, but it additionally uses the changes between successive values. To this end, the array is sorted from the minimum value to the maximum, and the concept of a "step" is introduced as the minimum change between two values in a discrete series. An iterative method is proposed for detecting matches in the array and identifying identical changes between neighbouring values. This yields three key quantities that characterize continuity or discreteness. It has been found empirically that the sensitivity of each of these quantities depends on the number of observations in the array. We also identify factors, depending on the number of values in the data, that help to attribute the data array to a continuous or discrete distribution.
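A minimal sketch of the procedure described above, where the concrete choices (tolerances, the exact match criterion, the grid test) are our assumptions rather than the authors' specification: the sample is sorted, consecutive differences are taken, the "step" is the smallest positive difference, and matches and step-multiples are counted.

```python
def discreteness_stats(data, tol=1e-9):
    # Sort the array from minimum to maximum, take consecutive
    # differences, define the "step" as the smallest positive
    # difference, then count (a) exact repeats and (b) differences
    # that are integer multiples of the step.
    xs = sorted(data)
    diffs = [b - a for a, b in zip(xs, xs[1:])]
    repeats = sum(1 for d in diffs if d < tol)          # matching values
    positive = [d for d in diffs if d >= tol]
    step = min(positive) if positive else 0.0           # the "step"
    on_grid = sum(1 for d in positive
                  if abs(d / step - round(d / step)) < 1e-6)
    return repeats, step, on_grid

repeats, step, on_grid = discreteness_stats([1, 2, 2, 3, 5, 7])
```

For a discrete series almost all differences land on multiples of the step; for a continuous sample, repeats are rare and the differences do not align with any common step.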

Parivash Shams Derakhsh and Parisa Shams Derakhsh

The most famous classical variational principle is the so-called Brachistochrone problem. In this work, the homotopy perturbation method (HPM) is applied to the Brachistochrone problem, which arises in variational problems. The results reveal the efficiency and accuracy of the proposed method. The homotopy perturbation method yields solutions in convergent series form with easy computation.

M. R. Rismanchian, S. Sedghi, N. Shobkolaei and K. P. R. Rao

In this paper, we define the concept of an almost generalized (S; T)-contractive condition and prove some common fixed point results for four mappings satisfying the almost generalized (S; T)-contractive condition in partially ordered fuzzy metric spaces.

Tuncay Tunc

In this study, we construct a sequence of new positive linear operators in two variables by using the Szasz-Mirakyan and Bernstein operators, and investigate its approximation properties.

Harun-Or-Roshid, Md. Nur Alam, M. F. Hoque and M. Ali Akbar

In this paper, we propose a new extended (G'/G)-expansion method to construct exact traveling wave solutions for nonlinear evolution equations. To check the validity and effectiveness of our method, we apply it to the (2+1)-dimensional typical breaking soliton equation. The results we obtain are more general and successfully recover most of the previously known solutions found by other sophisticated methods; many of these solutions are found for the first time. Moreover, our method is direct, concise, elementary, effective and applicable to many other nonlinear evolution equations.

Boris Shekhtman

It is well known that the following properties of a matrix are equivalent: a matrix is non-derogatory if and only if it is cyclic, if and only if it is simple, and if and only if it is 1-regular. In this article we attempt to extend these properties to a sequence of commuting matrices and examine the relations between them.

Rovshan A. Bandaliev

In this paper a two-weight criterion is proved for the multidimensional Hardy-type operator and its dual operator acting from weighted Lebesgue spaces into weighted Musielak-Orlicz spaces; that is, we establish an integral-type necessary and sufficient condition on the weights which provides the boundedness of these operators. As an application we prove the boundedness of the multidimensional geometric mean operator in weighted Musielak-Orlicz spaces. In particular, the obtained results imply the boundedness of the multidimensional Hardy operator and its dual operator acting from the usual weighted Lebesgue spaces into weighted variable Lebesgue spaces.

Kevin K. H. Cheung

A classical result due to Steinitz states that a graph is isomorphic to the graph of some 3-dimensional polytope P if and only if it is planar and 3-connected. If a graph G is isomorphic to the graph of a 3-dimensional polytope inscribed in a sphere, it is said to be of inscribable type. The problem of determining which graphs are of inscribable type dates back to 1832 and was open until Rivin proved a characterization in terms of the existence of a strictly feasible solution to a system of linear equations and inequalities which we call sys(G), which, surprisingly, also appears in the context of the Traveling Salesman Problem. Using such a characterization, various classes of graphs of inscribable type can be described. Dillencourt and Smith gave a characterization of 3-connected 3-regular planar graphs that are of inscribable type and a linear-time algorithm for recognizing such graphs. In this paper, their results are generalized to r-edge-connected r-regular graphs for odd r ≥ 3 in the context of the existence of strictly feasible solutions to sys(G). An answer to an open question raised by D. Eppstein concerning the inscribability of 4-regular graphs is also given.

Victor Chulaevsky

We propose a new probabilistic approach to the analysis of the decay of the Green's functions and the eigenfunctions of Anderson Hamiltonians on countable graphs. Our method is close in spirit to the Fractional Moment Method (FMM), but we show how the use of fractional moments can be avoided, so that exponential decay of the Green's functions can be established in some models where the fractional moments diverge due to low regularity of the random potential. We elucidate the exceptional role of the Hölder continuity condition, usual in the FMM, in terms of Cramér's condition in the large deviations problem for a suitably constructed rigorous path expansion.

Parivash Shams Derakhsh and Jafar Biazar

In this paper we develop a framework for a necessary condition for the existence of noise terms for systems of partial differential and integral equations treated with the homotopy perturbation method (HPM). We show that the noise terms are conditional and are generated for inhomogeneous equations when specific criteria are satisfied. To illustrate the capability and reliability of this method, we numerically test our approach on a variety of systems of inhomogeneous problems.

Md. Nur Alam, M. Ali Akbar and Harun-Or-Roshid

Exact solutions of nonlinear evolution equations (NLEEs) play a very important role in revealing the inner mechanism of complex physical phenomena. In this paper, the new generalized (G'/G)-expansion method is used to construct new exact traveling wave solutions for some nonlinear evolution equations arising in mathematical physics, namely the (3+1)-dimensional Zakharov-Kuznetsov equation and the Burgers equation. As a result, the traveling wave solutions are expressed in terms of hyperbolic, trigonometric and rational functions. This method is very easy, direct, concise and simple to implement compared with other existing methods, and presents wider applicability for handling nonlinear wave equations. Moreover, this procedure reduces the large volume of calculations.

N. A. Aliev, O. H. Asadova and A. M. Aliev

In this paper the solution of a first-order mixed complex boundary value problem is considered. The principal part of the problem with respect to the space variables involves the Cauchy-Riemann operator. We first use the Laplace transformation to introduce a spectral problem, and then investigate the corresponding Fredholm property. The spectral problem here differs from classical boundary value problems: the boundary conditions are nonlocal, global and, in general, linear. Finally, we find an asymptotic expansion for the solution of the spectral problem, which depends on an unknown complex parameter; with the help of this asymptotic expansion we prove the existence and uniqueness of the solution of the mixed problem.

İmdat İşcan

In this paper, some new integral inequalities of Hermite-Hadamard type related to the geometrically convex functions are established and some applications to special means of positive real numbers are also given.

B. S. Trivedi and M. N. Patel

In this paper, we are concerned with situations where the value two is sometimes erroneously reported as one, with probability α, in relation to the size-biased generalized negative binomial distribution (SBGNBD). We obtain the maximum likelihood estimator and the Bayes estimator under the general entropy loss function. A simulation study is carried out to assess the performance of the maximum likelihood and Bayes estimators, and a comparison between them is made.

Divo Dharma Silalahi, Putri Aulia Wahyuningsih and Fahri Arief Siregar

The most popular nonparametric density estimate is the kernel density estimate. This estimate depends on the choice of bandwidth, which governs the optimization in the kernel optimality process. We propose the Epanechnikov kernel, which is the optimal kernel with respect to the AMISE. Replicate samples were obtained by a bootstrap mechanism to provide information about the sampling distribution; the resampled data were then used in an Epanechnikov kernel simulation to estimate the optimal solution. The study was simulated using oil content (%) data at various periods after pollination, the oil contents (%) being obtained by extraction of oil palm mesocarp. The results show that the Epanechnikov kernel using resampled data from the bootstrap can be used for nonparametric optimization cases such as the oil content (%) of oil palm mesocarp.
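The two ingredients named above, bootstrap resampling and an Epanechnikov kernel density estimate, can be combined in a minimal sketch; the synthetic data, bandwidth and evaluation point below are placeholder assumptions, not the paper's oil-content measurements.

```python
import random

def epanechnikov(u):
    # Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) on [-1, 1], else 0;
    # AMISE-optimal among second-order kernels
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kde(x, sample, h):
    # kernel density estimate at x with bandwidth h
    return sum(epanechnikov((x - xi) / h) for xi in sample) / (len(sample) * h)

random.seed(0)
data = [random.gauss(30.0, 2.0) for _ in range(50)]   # stand-in for oil content (%)
resample = [random.choice(data) for _ in data]        # one bootstrap replicate
density_at_mode = kde(30.0, resample, h=1.5)
```

Repeating the resample-then-estimate step many times gives the bootstrap distribution of the density estimate, from which its sampling variability can be read off.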

R. K. Saxena

In this paper, we derive the solutions of the fractional master equation defined by (2.1) and the fractional diffusion equation defined by (3.3). The method followed in deriving the solutions is that of Laplace and Fourier transforms. The solutions are obtained in neat and compact forms in terms of the generalized Mittag-Leffler function and Fox's H-function. The results established are of a general character and include some known results as special cases.

Sergey Gurov

Point and interval estimates are proposed and validated for the probability of an event that has never been observed in a series of Bernoulli trials (a 0-event). In this case, the classical statistical methods yield a zero point estimate, which is often unacceptable in practice; nonzero point and interval probability estimates for a 0-event are therefore proposed and validated. A classification of samples by size for the case of a 0-event is also proposed.
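The abstract does not reproduce the proposed estimates. Two classical nonzero alternatives for the 0-event case, the Laplace "rule of succession" point estimate and the exact one-sided upper confidence bound (the "rule of three"), can be sketched as follows; these are standard textbook estimates, not necessarily the author's.

```python
def laplace_point_estimate(n):
    # Laplace rule of succession: 0 successes in n trials -> 1/(n+2)
    return 1.0 / (n + 2)

def upper_bound_zero_events(n, conf=0.95):
    # exact one-sided upper confidence bound for p when no events
    # occurred in n trials: solve (1 - p)^n = 1 - conf for p
    return 1.0 - (1.0 - conf) ** (1.0 / n)

p_hat = laplace_point_estimate(100)       # nonzero point estimate, 1/102
p_upper = upper_bound_zero_events(100)    # for conf=0.95, approximately 3/n
```

Both estimates are nonzero for every finite sample size, which is exactly the practical deficiency of the classical zero point estimate that the paper addresses.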

Guy Jumarie

In order to convince the sceptical reader, we herein give another proof of the fact that the Leibniz rule for fractional derivatives applies whenever we are dealing with non-differentiable functions, as they occur, for instance, in problems involving fractal space-time.

Md. Jalilul Islam Mondal and Tapan Kumar Roy

The purpose of this paper is to put forward the notion of intuitionistic fuzzy soft matrix theory and some basic results. In this paper, we define intuitionistic fuzzy soft matrices and introduce some new weighted operators, together with their properties, proofs and examples, which make theoretical studies in intuitionistic fuzzy soft matrix theory more functional. Moreover, we give an example of using the weighted arithmetic mean for a decision-making problem.

M.S. Pukhta

Dewan and Ahuja [3] recently proved an inequality, valid for every real or complex number, for polynomials having all their zeros in K ≤ 1. In this paper, we improve this result and obtain a new inequality for the polar derivative of a polynomial.

Majid Mirmiran

Necessary and sufficient conditions in terms of lower cut sets are given for the strong insertion of a Baire-one function between two comparable real-valued functions on the topological spaces that sets are

Erhan Pişkin

We study a system of nonlinear integro-differential equations with strong and weak damping terms in a bounded domain with initial and Dirichlet boundary conditions. We prove the existence of global solutions by using the potential well method, and obtain the energy decay estimate by applying a lemma of Komornik [3].

K. P. R. Rao, G. N. V. Kishore and P. R. Sobhana Babu

In this paper, we obtain a unique common fixed point theorem for four self mappings satisfying Meir-Keeler type contractive condition in partial metric spaces, which is slightly different from the result of Aydi and Karapinar [5].

M.S. Pukhta

Let p(z) be a polynomial of degree n which does not vanish in , k ≧ 1; then for 1 ≦ R ≦ k, Bidkham and Dewan [J. Math. Anal. Appl. 166 (1992), 191-193] proved an inequality for such polynomials. In this paper we present several interesting generalizations and a refinement of this result, which include some results due to Malik, Govil and others. We also present a refinement of some other results.

Mehdi Delkhosh

In many applied sciences, various self-adjoint differential equations arise whose solution methods are very complex; usually, numerical methods are used to solve them. Leighton et al. investigated oscillation properties of solutions of fourth-order self-adjoint differential equations under specific conditions. In this paper, we use a new method for solving a class of fourth-order self-adjoint differential equations: applying a change of variable in the equation, we obtain an analytical solution of the equation under a specific condition. Since the method yields an analytical solution, it is not necessary to resort to numerical methods to solve the problem.

Majid Mirmiran

A necessary and sufficient condition in terms of lower cut sets is given for the insertion of a Baire-one function between two comparable real-valued functions on the topological spaces that are .

Jinxia Ma and Rand R. Wilcox

The paper considers the problem of testing the hypothesis that J≧2 dependent groups have equal population measures of location when using a robust estimator and there are missing values. For J = 2, methods have been studied based on trimmed means. But the methods are not readily extended to the case J > 2. Here, two alternative test statistics were considered, one of which performed poorly in some situations. The one method that performed well in simulations is based on a very simple test statistic with the null distribution approximated via a basic bootstrap technique. The method uses all of the available data to estimate each of the marginal (population) trimmed means. Other robust measures of location were considered, for which imputation methods have been derived, but in simulations the actual Type I error probability was estimated to be substantially less than the nominal level, even when there are no missing values.
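The two ingredients named above, a marginal trimmed mean and a basic bootstrap of its distribution, can be sketched minimally; the trimming proportion, synthetic sample and replicate count are illustrative assumptions, and the paper's missing-value handling is omitted.

```python
import random

def trimmed_mean(xs, prop=0.2):
    # 20% trimmed mean: discard the lowest and highest prop-fraction
    # of the ordered sample, then average the remainder
    xs = sorted(xs)
    g = int(prop * len(xs))
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(40)] + [25.0]  # one gross outlier
# basic bootstrap: resample with replacement, recompute the trimmed mean
boot = [trimmed_mean([random.choice(data) for _ in data]) for _ in range(500)]
```

The bootstrap distribution of the trimmed mean stays near zero because the outlier is trimmed away in essentially every replicate, which is the robustness property the paper exploits.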
