<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0"> 
<channel>
<title><![CDATA[Mathematics and Statistics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/jour_info.php?id=34]]></link>
<description><![CDATA[Mathematics and Statistics, owned and published by Horizon Research Publishing Co. Ltd, is an international peer-reviewed journal that publishes original and high-quality research papers in all areas of mathematics and statistics. As an important academic exchange platform, it enables scientists and researchers to follow the most up-to-date academic trends and to find valuable primary sources for reference.]]></description>
<language>en-us</language>
<pubDate>Tue, 14 Apr 2026 17:57:32 +0000</pubDate>
<lastBuildDate>Tue, 14 Apr 2026 17:57:32 +0000</lastBuildDate>
<generator>ZWWY RSS Generator</generator>
<item>
<title><![CDATA[Hybrid Fuzzy–Neutrosophic Weibull Stress–Strength Reliability Model with Simulation and Real Data Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15841]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;April&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Naser Odat&nbsp; &nbsp;</p><p>This work fills an important gap in conventional stress-strength reliability analysis, which usually assumes precise parameter values and thus neglects the uncertainty present in practical engineering systems. To remedy this deficiency, a hybrid fuzzy-neutrosophic Weibull stress-strength reliability model is proposed, which considers interval-valued scale parameters to model the parameter uncertainty and introduces a sensitivity parameter <img src=image/13444143_01.gif> for graded safety analysis. Thus, the proposed model provides interval-valued reliability estimates <img src=image/13444143_02.gif> that capture the uncertainty. With the help of jute fiber strength data and Monte Carlo simulation, the findings reveal that for a strict safety requirement <img src=image/13444143_01.gif> = 0.001, the fuzzy reliability is only 11.9%, whereas the classical value is 53.5%; the classical approach may thus overestimate reliability by a factor of about 4.5. Additionally, the neutrosophic extension enables uncertainty-based reliability intervals, e.g., [0.0925, 0.1486] when <img src=image/13444143_01.gif> = 0.001, where the width of the interval (0.0561) quantifies the parameter uncertainty. Furthermore, a new three-tier decision strategy is proposed to interpret the interval estimates and provide accept, reject, and retest decisions. The main contribution of this work is the formulation of a new unified reliability model that encompasses both fuzzy graded evaluation and neutrosophic parameter uncertainty, providing a more realistic and practical tool for engineers in safety-critical applications, e.g., medical devices and aerospace engineering. Furthermore, simulation results validate the statistical consistency and efficiency of the new reliability model, demonstrating a decrease in mean squared error (MSE) of up to 95% as the sample size increases.</p>]]></description>
<pubDate>April 2026</pubDate>
</item>
<item>
<title><![CDATA[Fuzzy Semigroups via Semigroups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15840]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;April&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Anjeza Krakulli&nbsp; &nbsp;and Elton Pasku&nbsp; &nbsp;</p><p>The theory of fuzzy semigroups is a branch of mathematics that arose in the early 1990s as an effort to characterise the properties of semigroups by means of the properties of their fuzzy subsystems, which include fuzzy subsemigroups and their analogues: fuzzy one (resp. two)-sided ideals, fuzzy quasi-ideals, fuzzy bi-ideals, etc. More precisely, a fuzzy subsemigroup of a given semigroup <img src=image/13443713_01.gif> is just a ∧-prehomomorphism <img src=image/13443713_02.gif> of <img src=image/13443713_01.gif> to ([0, 1],∧). Variations of this, which correspond to the other previously mentioned fuzzy subsystems, can be obtained by imposing certain properties on f. From the work of Kuroki, Mordeson, Malik, and many of their descendants, it turns out that fuzzy subsystems play a role in the structure theory of semigroups similar to that played by their non-fuzzy analogues. The aim of the present paper is to show that this similarity is not coincidental. As a first step towards this, we prove that there is a 1-1 correspondence between fuzzy subsemigroups of <img src=image/13443713_03.gif> and subsemigroups of a certain type of <img src=image/13443713_04.gif>. Restricted to fuzzy one-sided ideals, this correspondence identifies the above fuzzy subsystems with their analogues in <img src=image/13443713_04.gif>. Using these identifications, we prove that the characterisation of the regularity of semigroups in terms of fuzzy one-sided ideals and fuzzy quasi-ideals can be obtained as a consequence of the corresponding non-fuzzy analogue. 
In a further step, we give new characterisations of semilattices of left simple semigroups in terms of left simple fuzzy subsemigroups, and of completely regular semigroups in terms of completely simple fuzzy subsemigroups. Both left simple fuzzy subsemigroups and completely simple fuzzy subsemigroups are defined here for the first time, and the corresponding characterisations generalise well-known characterisations of the corresponding semigroups.</p>]]></description>
<pubDate>April 2026</pubDate>
</item>
<item>
<title><![CDATA[New Results Supporting the Validity of the Lindelöf Hypothesis Using the Maynard–Guth Hamiltonian]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15839]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;April&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Zeraoulia Rafik&nbsp; &nbsp;Simeon Casanova Trujillo&nbsp; &nbsp;and Alvaro Humberto Salas&nbsp; &nbsp;</p><p>This paper develops a quantum–spectral model for prime counting in almost-short intervals, motivated by the Maynard–Guth estimate and the long-standing Riemann and Lindelöf hypotheses. We refine a perturbed Schrödinger-type Hamiltonian whose potential encodes both the main term and the exponentially decaying error term of the prime number theorem in these intervals, and we analyze its uncertainty relation, symmetry/self-adjoint structure, and spectral behavior. Numerically, we compute eigenvalues and eigenfunctions using finite-difference and finite-element discretizations on large domains and compare the unfolded spacing statistics with random-matrix predictions; we further probe Lindelöf-type boundedness by evaluating <img src=image/13439829_01.gif> at arguments derived from the computed spectrum. The resulting spectrum is real, exhibits symmetry and band structure, and shows level repulsion compatible with a Hilbert–Pólya operator candidate, while the zeta evaluations remain bounded (and frequently close to 1) in the tested ranges. The main contribution is an explicit bridge from Maynard–Guth prime-density/error terms to a concrete Hamiltonian framework that yields testable spectral and thermodynamic predictions. Limitations include finite discretization ranges and the use of conjectural inputs in parts of the prime-gap discussion; no claim of a proof is made, but the results provide quantitative guidance for future rigorous and computational work.</p>]]></description>
<pubDate>April 2026</pubDate>
</item>
<item>
<title><![CDATA[On Properties Preserved via Types and Tuples of Relations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15811]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Temurboy E. Rajabov&nbsp; &nbsp;and Sergey V. Sudoplatov&nbsp; &nbsp;</p><p>In mathematics, many natural properties influence, or result from, various types of preservation of certain attributes. This applies to geometric, algebraic, logical, dynamic, probabilistic and other mathematical objects that allow us to reveal and classify relationships of the surrounding reality, define new structures and build new significant systems that make it possible to solve theoretical and applied problems. Among these constructions, the fundamental ones are substructures, quotients, filtered products of structures, their combinations, combinations with respect to given families of predicates and equivalence relations, semantic and syntactic generic constructions, etc. Various kinds of preservation of properties were used in the classification of countable models of complete theories and algebras of binary formulae. We study general possibilities of preservation of properties via types producing natural classes of relations. We apply a general approach and characterize properties of reflexivity, non-reflexivity, irreflexivity, symmetry, asymmetry, antisymmetry, transitivity, non-transitivity, and linearity in terms of preservation via appropriate types, together with the properties of orders, preorders and equivalence relations. In terms of preservation, we prove characterizations for precomplete, complete, pre-dense, and dense binary relations, together with linear and dense orders. In addition, we characterize the existence of contours and the projectivity property. Natural generalizations of these properties, axiomatizing spherical orders, are also characterized, forming a description of spherical orders by their preservations. 
A series of open problems that naturally arise under the consideration of various kinds of type-preservation is posed.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[An Approximate Analytical Series Solution for Multi-Dimensional Fractional Navier-Stokes Equations: A Hybrid Iterative Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15810]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Keerthika V&nbsp; &nbsp;S. P. Geetha&nbsp; &nbsp;and R. Prahalatha&nbsp; &nbsp;</p><p>The fractional Navier-Stokes (FNS) equations, as a generalization of the classical model, offer a powerful approach for describing complex fluid flow phenomena governed by fractional dynamics. However, the nonlinear structure and fractional operators in these equations pose substantial challenges in obtaining analytical or approximate solutions. This study introduces a novel hybrid analytical iterative scheme, termed the Natural Transform Iterative Algorithm (NTIA), developed for efficiently solving multidimensional time-fractional Navier–Stokes equations defined in the Caputo sense. The proposed method integrates the Natural transform, a Laplace-type integral transform adept at handling fractional derivatives, with the new algorithm of the Daftardar-Gejji and Jafari method (DGJM) to construct rapidly convergent series solutions. Unlike conventional methods, NTIA avoids linearization, discretization, and perturbation assumptions, thereby reducing computational complexity while maintaining analytical accuracy. The algorithm's performance is demonstrated through several benchmark examples involving one-, two-, and three-dimensional FNS equations. Graphical and numerical comparisons with known closed-form solutions confirm the high accuracy, stability, and convergence rate of the NTIA across varying fractional orders. The method effectively captures steady-state and transient fluid behaviors while smoothly approaching classical results as the fractional order tends to unity. The findings highlight the reliability and robustness of NTIA as an analytical tool for nonlinear fractional PDEs. 
Its computational efficiency and adaptability make it a promising approach for solving a broad spectrum of problems in fluid mechanics, diffusion phenomena, and applied fractional dynamics.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Adaptive Wild Bootstrap Confidence Intervals Using Skewness-Calibrated Two-Point Asymmetric Bernoulli Weights in Linear Regression]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15809]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Elmanani Simamora&nbsp; &nbsp;Abil Mansyur&nbsp; &nbsp;Mukhtar&nbsp; &nbsp;Muliawan Firdaus&nbsp; &nbsp;and Rizki Habibi&nbsp; &nbsp;</p><p>In the two-point asymmetric Bernoulli scheme, a logistic function is used to calibrate the probability directly against residual skewness. A basic linear regression model with right-skewed errors and a multiple model with left-skewed errors are then subjected to Monte Carlo evaluations. Three methods are used to create confidence intervals: the percentile method, the bias-corrected and accelerated method, and the bootstrap-t method. Simulation results show that two-point asymmetric Bernoulli weights can produce shorter intervals than classical weights while maintaining coverage probabilities at the nominal level. The bias-corrected and accelerated method gives the best balance between speed and precision. The percentile method provides shorter intervals, and bootstrap-t usually offers more stable coverage. The simulation results show that the two-point asymmetric Bernoulli method is a more effective way to resample data, as it is more flexible and dependable, mainly when the data exhibit heteroscedasticity and asymmetric residual distributions. The simplicity of the two-point weights is an advantage, as it allows the method to circumvent the problems associated with the standard bootstrap method. Comparative studies on real data show that the generalised linear model's interval length is wider and less stable than that of the two-point asymmetric Bernoulli.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Fusion Sampling Validation in Data Partitioning for Machine Learning]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15808]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Christopher Godwin Udomboso&nbsp; &nbsp;Caston Sigauke&nbsp; &nbsp;and Ini Adinya&nbsp; &nbsp;</p><p>Effective data partitioning is known to be crucial in machine learning. Traditional cross-validation methods like K-fold Cross-Validation (KFCV) enhance model robustness but often compromise generalisation assessment due to high computational demands and extensive data shuffling. To address these issues, Simple Random Sampling (SRS) can be integrated; however, despite generally providing representative samples, SRS can result in non-representative sets with imbalanced data. The study introduces a hybrid model, Fusion Sampling Validation (FSV), combining SRS and KFCV to optimise data partitioning. FSV aims to minimise biases and merge the simplicity of SRS with the accuracy of KFCV. The study used three datasets of 10,000, 50,000, and 100,000 samples, generated with a normal distribution (mean 0, variance 1) and initialised with seed 42. KFCV was performed with five folds and ten repetitions, incorporating a scaling factor to ensure robust performance estimation and generalisation capability. FSV integrated a weighted factor to further enhance performance and generalisation. Evaluations focused on mean estimates (ME), variance estimates (VE), mean squared error (MSE), bias, the rate of convergence for mean estimates (ROC ME), and the rate of convergence for variance estimates (ROC VE). The results indicated that FSV consistently outperformed SRS and KFCV, with ME values of 0.000863, VE of 0.949644, MSE of 0.952127, bias of 0.016288, ROC ME of 0.005199, and ROC VE of 0.007137. These results demonstrate the superior accuracy (reduced error and bias) and reliability of FSV (stable performance with increasing sample size and repeated trials). 
Moreover, since FSV reduces the computational demands of repeated KFCV while preserving representativeness, it becomes particularly well suited to environments with constrained resources and large-scale datasets. Hence, FSV offers a practical solution for improving model evaluation where both efficiency and accuracy are critical.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[A Depiction of L-Fuzzy Congruence Kernel of Pseudo Complemented Semilattice in View of Computer Science and Logic]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15807]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>E. S. Rama Ravi Kumar&nbsp; &nbsp;K. K. Viswanathan&nbsp; &nbsp;Ch. Baby Rani&nbsp; &nbsp;J. Venkateswara Rao&nbsp; &nbsp;M. N. Srinivas&nbsp; &nbsp;and H. Niranjan&nbsp; &nbsp;</p><p>This paper investigates L-fuzzy congruence kernels in pseudo-complemented semilattices (PCS) and their relevance to computer science and logic. L-fuzzy congruences enable flexible approximate reasoning and support the structural analysis of fuzzy algebraic systems, providing a nuanced approach to modeling uncertainty. In applications, such as relational databases, these congruences model imprecise and graded relationships between data entities, while in logical frameworks, they extend binary truth values to multi-valued semantics. We define an L-fuzzy relation on a PCS, introduce the corresponding L-fuzzy kernel, and extend it formally to an L-fuzzy congruence kernel. Necessary and sufficient conditions are established for an L-fuzzy subset of a PCS to be compatible with such a kernel via an L-fuzzy relation. Additionally, the concept of a kernel ideal is developed and algebraically characterized within a PCS using a defined relation, establishing a clear correspondence between fuzzy congruences and ideal-based structures. These constructions contribute to the structural understanding of fuzzy algebraic systems and offer practical tools for classifying and interpreting fuzzy relational models under uncertainty. The results bridge theoretical fuzzy algebra with computational applications, supporting the handling of ambiguity in intelligent systems, fuzzy databases, and logical frameworks where partial truth and graded membership are essential for robust modeling.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Evaluation of Variance Normalized ANOVA-Simultaneous Component Analysis (VN-ASCA) through Controlled Simulations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15691]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Harriet Achiaa Dwamena&nbsp; &nbsp;N. K. Frempong&nbsp; &nbsp;and A. O. Adebanji&nbsp; &nbsp;</p><p>ANOVA-Simultaneous Component Analysis (ASCA) is a powerful approach for analyzing multi-factorial experimental designs with multivariate responses. However, standard ASCA methods assume homogeneous variance across experimental groups, an assumption often violated in real-world data, particularly in biological and chemical studies. This paper introduces Variance Normalized ASCA (VN-ASCA), a novel extension that addresses heterogeneous variance through a weighted least squares framework. We provide a comprehensive mathematical derivation of the VN-ASCA methodology, including its weight calculation strategies, permutation-based significance testing, and variance partitioning approach. The method incorporates robust options for weight estimation, regularization for numerical stability, and automatic effect selection for enhanced interpretability. Practical implementation guidelines for parameter tuning are provided, and the method's performance is validated through both controlled simulations and application to real agricultural data from maize trials in Ghana. The experimental results demonstrate that VN-ASCA provides improved prediction accuracy in the presence of heterogeneous variance while maintaining interpretability. The application to multi-environment maize trials in Ghana (50 plots, 100 traits, 4 locations) demonstrates practical utility, with adaptive VN-ASCA showing measurable improvements in root mean squared error and substantially stronger statistical evidence for environmental effects. These findings suggest that accounting for variance heterogeneity can enhance the detection and interpretation of factorial effects in complex multivariate datasets.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[A Multi-fuzzy Set Theoretic Framework for Unanimity Measures]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15690]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Priyanka P&nbsp; &nbsp;Sabu Sebastian&nbsp; &nbsp;Ramakrishnan T V&nbsp; &nbsp;Bijumon R&nbsp; &nbsp;Rejeesh C John&nbsp; &nbsp;Haseena C&nbsp; &nbsp;and Gafoor I&nbsp; &nbsp;</p><p>This paper introduces and explores a range of distance measures defined on multi-fuzzy sets, emphasizing both their mathematical foundations and applicability to real-world decision environments. Classical distance metrics such as Minkowski, Hamming, and Euclidean measures are extended to the multi-fuzzy context, and their behaviour is analysed at both the set and element levels. The proposed formulations are rigorously analyzed at both the set level and element level to capture variations in structure and similarity more precisely. The study further examines how these measures are affected when multi-fuzzy sets are transformed via crisp functions or adjusted using fuzzy weight matrices. The Minkowski distance in the original multi-fuzzy sets dominates or bounds the corresponding distance in the multi-fuzzy weighted sets via fuzzy matrix transformation. In addition to these classical extensions, this paper introduces new deviation-based and normalised measures aimed at quantifying unanimity and consensus within group decision-making processes. By extending classical statistical notions such as mean, variance, and standard deviation into the multi-fuzzy domain, the authors develop refined methods for assessing agreement among individual judgments. These are further strengthened through the use of weighted criteria to reflect varying importance. A numerical case study is provided to demonstrate the practical effectiveness of the proposed approach in real-world consensus evaluation. 
By improving the accuracy of collective decision-making models, the research contributes to transparent, equitable, and evidence-based decision support systems in fields such as education, healthcare, and policy analysis. The study is primarily theoretical and validated through a limited case study; future work may involve empirical validation across larger datasets or multi–fuzzy–neutrosophic extensions.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Design-Consistent Variance Estimation in Multistage Complex Surveys: A Simulation and MICS-Based Comparative Study]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15689]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Ali Satty&nbsp; &nbsp;and Zakariya M. S. Mohammed&nbsp; &nbsp;</p><p>Ignoring survey design features such as clustering, stratification, and unequal weighting can lead to underestimated standard errors (SEs) and misleading inference in regression models. This study compares three design-consistent variance estimators, Taylor linearization (TL), Fay's balanced repeated replication (BRR), and the Rao–Wu–Yue (RWY) bootstrap, using Monte Carlo simulations based on the two-stage stratified structure of UNICEF's Multiple Indicator Cluster Surveys (MICS). Nine scenarios combine intraclass correlation (ICC = 0.01–0.10) with weight variability (<img src=image/13444009_01.gif>=0.2–1.0) to assess 95% coverage, SE calibration, and confidence interval (CI) width. Coverage was generally near nominal when clustering was weak to moderate (ICC ≤ 0.05), with mild under-coverage (about 90%) at ICC = 0.10 across methods. SEs were well calibrated (SE-ratios ≈ 0.93–1.03). CI width was driven primarily by weight heterogeneity, increasing markedly with larger <img src=image/13444009_01.gif>, whereas ICC had a smaller impact. In an application to 2018–2019 MICS data on childhood diarrhea, point estimates (odds ratios) were identical across methods; the BRR and RWY bootstrap yielded slightly wider, more conservative CIs. Overall, TL is most efficient under moderate design effects, while replication methods offer greater robustness when clustering and weight dispersion are high, providing practical guidance for MICS-type analyses.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Arithmetic of Points on Elliptic Curve <img src=image/13443798_01.gif> over <img src=image/13443798_02.gif>-adic Field <img src=image/13443798_03.gif> modulo <img src=image/13443798_04.gif> in Projective Coordinates]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15688]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>T. Sai Tejaswini&nbsp; &nbsp;and P. Anuradha Kameswari&nbsp; &nbsp;</p><p>Elliptic Curve Cryptosystems (ECC) have emerged as a powerful alternative to traditional public-key cryptosystems, offering equivalent security with significantly smaller key sizes. The efficiency of ECC, in terms of minimizing encryption time and enhancing computational performance, is strongly influenced by the number of point addition and doubling computations required in elliptic curve arithmetic. In this context, the study of the arithmetic of points on elliptic curves plays a crucial role. This study emphasizes computations in projective coordinates of points on elliptic curves defined over the <img src=image/13443798_05.gif>-adic field <img src=image/13443798_06.gif>. A comparative study shows that arithmetic in projective coordinates reduces the number of operations required, thereby enhancing the efficiency relative to the affine coordinate system. The coordinate-level <img src=image/13443798_05.gif>-adic expansions of the arithmetic may be obtained by applying <img src=image/13443798_05.gif>-adic expansion techniques to the arithmetic of points on the elliptic curve <img src=image/13443798_07.gif> in projective coordinates for <img src=image/13443798_08.gif> = 2, 3, ... In this paper, coordinate-level <img src=image/13443798_05.gif>-adic expansions of the arithmetic of points on <img src=image/13443798_09.gif> in projective coordinates are formulated; an algorithm for the computations is given, illustrating the step-by-step process for computing point addition and doubling in <img src=image/13443798_09.gif>; and its efficiency over the arithmetic of points on <img src=image/13443798_09.gif> in affine coordinates is described. 
This provides a systematic framework for performing elliptic curve arithmetic efficiently.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Contractive Conditions for Fixed Points in Complete Neutrosophic Fuzzy Metric Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15687]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Dritan Gerbeti&nbsp; &nbsp;Puneetha&nbsp; &nbsp;Kastriot Zoto&nbsp; &nbsp;Hawa Ibnouf Osman Ibnouf&nbsp; &nbsp;and K. Dinesh&nbsp; &nbsp;</p><p>Fixed point theory constitutes a fundamental pillar of nonlinear analysis and has found extensive applications in mathematical modeling, optimization, computer science, and engineering. Classical results such as Banach's contraction principle have been generalized to various settings, including fuzzy metric spaces, cone metric spaces, and modular metric spaces. However, these frameworks often prove inadequate for modeling uncertainty involving indeterminacy and inconsistency. To address this limitation, neutrosophic fuzzy metric spaces (NFMSs) provide a powerful mathematical structure by integrating fuzzy distance measures with neutrosophic logic. In this paper, we establish several new fixed point theorems for single-valued mappings in complete neutrosophic fuzzy metric spaces under different generalized contractive conditions. The proposed contractions extend classical Banach-type, nonlinear, and rational-type contractions by incorporating neutrosophic fuzzy control functions. Using iterative techniques and properties of t-norms, we prove the existence and uniqueness of fixed points and demonstrate the convergence of the associated Picard iterative sequences. The obtained results significantly generalize and unify several existing fixed point theorems in fuzzy metric spaces, cone metric spaces, and modular metric spaces. An illustrative example is provided to validate the applicability of the main results. The primary contribution of this work lies in enriching the theoretical foundation of neutrosophic fuzzy analysis and offering a unified approach to handling uncertainty, vagueness, and inconsistency within fixed point theory. 
Although this study is mainly theoretical, the results have potential implications for optimization theory, decision-making models, and computational intelligence systems operating under indeterminate or conflicting information. Future research may focus on extending these results to multivalued mappings and their applications in real-world neutrosophic models.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Error Analysis of the Galerkin Finite Element Method for the Gray-Scott Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15686]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Osama T. Al-Bairaqdar&nbsp; &nbsp;and Younis A. Sabawi&nbsp; &nbsp;</p><p>This paper proposes an extensive finite element analysis of the one-dimensional Gray-Scott reaction-diffusion model (GSRDM), which is a fundamental framework for examining pattern formation in chemical and biological phenomena. A fully discrete numerical strategy was constructed utilizing the Galerkin finite element method (GFEM) for spatial discretization and the backward Euler (BE) scheme for temporal discretization. The nonlinear term is meticulously treated using a fully discrete formulation, preserving its authentic characteristics. The stability and convergence analysis of the discrete formulation is rigorously investigated by employing the error splitting and elliptic projection techniques with a special treatment for the nonlinear reaction terms. Numerical investigations employing a MATLAB script validated the predicted convergence rates and affirmed the precision of the proposed techniques. The impact of space and time-step refinements is examined comprehensively, supported by exact solutions and norm-based error analysis. A comparison with referenced works is discussed to demonstrate the effectiveness of the proposed scheme. This research offers a robust and adaptable framework for further studies on nonlinear reaction-diffusion systems.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Neumann Problem for a Smooth Bounded Domain in the Heisenberg Group <img src=image/13443204_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15685]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Apeksha&nbsp; &nbsp;and Mukund Madahav Mishra&nbsp; &nbsp;</p><p>Boundary value problems arise in the study of differential equations and play a fundamental role in diverse scientific, medical, and engineering disciplines, such as drug diffusion processes in medicine, neuroscience, environmental studies, modelling in economics and finance, and simulations for computer graphics. Consequently, their study is essential in real-world applications. Two boundary value problems, namely the Dirichlet and Neumann problems associated with the Laplace equation, are of substantial significance in the discipline of partial differential equations. The Dirichlet problem involves finding a harmonic function within a domain, subject to the condition that its values coincide with a given continuous function on the boundary. On the other hand, the Neumann problem demands a harmonic function whose normal derivative equals a specified function on the boundary of the domain. These problems acquire increased significance when the regularity of the associated differential operator is degraded. The Heisenberg group, a non-abelian, non-compact Lie group, is a natural setting in which to study these boundary value problems, as it is the simplest example with these properties that carries a subelliptic Laplace-like operator, the Kohn-Laplacian. In 1977, Gaveau was the first to discuss the Dirichlet problem for the Kohn–Laplacian on Heisenberg groups. Later, Jerison extended this work by deriving estimates for the Dirichlet problem in a smooth domain D, along with the regularity of the solution. 
The Neumann problem for the Kohn-Laplacian on the Koranyi ball in the Heisenberg group was initially addressed by Kumar, Dubey and Mishra in 2016, and was later generalized by Pandey and Mishra to certain gauge balls in H-type groups. We further generalize the existence and uniqueness results for the Neumann problem for the Kohn-Laplacian to bounded domains in the Heisenberg group with smooth boundary containing no characteristic points. We establish certain estimates of the derivatives of the fundamental solution and obtain a necessary and sufficient condition for the solvability of the interior Neumann problem on such domains.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[An Extended Construction of Hopfian Free-torsion Abelian Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15684]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Abderrahim Bouzendaga&nbsp; &nbsp;and Seddik Abdelalim&nbsp; &nbsp;</p><p>The study of Hopficity in Abelian groups has been largely motivated by the fundamental results of Baumslag, who proved that torsion groups are always Hopfian regardless of their cardinality, but left several questions open concerning torsion-free groups. Later, Corner addressed some of these questions by providing counterexamples showing that a direct sum of two Hopfian groups can be non-Hopfian, and that having an automorphism group of order two does not guarantee Hopficity. These results highlighted the need for new constructions to explore Hopficity in torsion-free Abelian groups. Our work introduces a new approach based on divisibility techniques, contributing to the understanding of torsion-free groups with respect to the Hopficity property and providing new insights into their structural properties and implications within group theory. Our analysis also demonstrates how divisibility properties, as well as the introduction of totally invariant subgroups and homomorphisms, can be used to establish Hopficity in specific Abelian groups, particularly torsion-free ones. To achieve this, we start with a group defined as an infinite direct sum of cyclic groups; then we construct a specific subgroup generated by two particular families of elements; and finally we show that this group is Hopfian through results from the theory of divisible subgroups.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[On the Summatory Function of <img src=image/13443653_24.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15683]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Sinyavsky O. V.&nbsp; &nbsp;</p><p>An asymptotic formula is derived for the summatory function <img src=image/13443653_02.gif>, where <img src=image/13443653_03.gif> denotes the total number of prime factors of <img src=image/13443653_16.gif> counted with multiplicity, and <img src=image/13443653_04.gif> is a multiplicative arithmetical function satisfying <img src=image/13443653_05.gif> for primes <img src=image/13443653_06.gif> and non-negative integers <img src=image/13443653_07.gif>, where <img src=image/13443653_08.gif> and <img src=image/13443653_01.gif> for <img src=image/13443653_09.gif>. The study builds on a rich history in analytic number theory, including classical results by Dirichlet on the divisor function <img src=image/13443653_10.gif>, and refinements using zeta-function estimates, as well as probabilistic approaches like the Erdős–Kac theorem extended to <img src=image/13443653_03.gif> (distinct primes) over <img src=image/13443653_11.gif>-free and <img src=image/13443653_11.gif>-full numbers. However, prior research has largely overlooked the multiplicity in <img src=image/13443653_03.gif> and its twisting by broad classes of multiplicative functions beyond divisors, particularly for square-full integers. The analysis covers three distinct cases: when <img src=image/13443653_04.gif> belongs to the subclass <img src=image/13443653_12.gif> (where <img src=image/13443653_13.gif> for all primes <img src=image/13443653_06.gif>); when <img src=image/13443653_04.gif> is in the broader class <img src=image/13443653_15.gif> but not in <img src=image/13443653_12.gif>; and when <img src=image/13443653_16.gif> is square-full with <img src=image/13443653_14.gif>. 
Examples of such functions include the number of non-isomorphic Abelian groups of order <img src=image/13443653_16.gif>, the number of square-full divisors of <img src=image/13443653_16.gif>, the divisor function <img src=image/13443653_17.gif>, and the <img src=image/13443653_07.gif>-fold divisor function <img src=image/13443653_18.gif>. The results are obtained using Dirichlet series <img src=image/13443653_19.gif>, which admit an Euler product decomposition due to multiplicativity, enabling analytic continuation via differentiation with respect to an auxiliary parameter <img src=image/13443653_20.gif>, contour integration, and estimates for the Riemann zeta function. The following results were obtained: for case (i), <img src=image/13443653_21.gif>; for case (ii), <img src=image/13443653_22.gif>; and for square-full <img src=image/13443653_16.gif>, <img src=image/13443653_23.gif>. The work is theoretical in nature. The results of this study can be applied in further research in number theory, group theory, and discrete mathematics, with potential applications in algorithmic number theory (e.g., efficient computation of group orders) and cryptographic protocols relying on prime factorizations.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[A Comparative Study of Adomian–Kamal Decomposition and Euler-Based Methods for Solving the Fractional Abel Differential Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15682]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2026<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;14&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Muhamad Deni Johansyah&nbsp; &nbsp;Endang Rusyaman&nbsp; &nbsp;Alit Kartiwa&nbsp; &nbsp;Badrulfalah&nbsp; &nbsp;Salma Az-Zahra&nbsp; &nbsp;Hanifah Al Affian&nbsp; &nbsp;Asep K. Supriatna&nbsp; &nbsp;Aceng Sambas&nbsp; &nbsp;and Sundarapandian Vaidyanathan&nbsp; &nbsp;</p><p>This study presents the development and application of two semi-analytical methods—namely the Adomian–Laplace method and the Adomian–Kamal method—for solving the Fractional Abel Differential Equation (FADE). Both approaches integrate the Adomian Decomposition Method (ADM) with distinct integral transforms to enhance accuracy and computational efficiency. The Adomian–Laplace method combines ADM with the Laplace Transform (LT), while the Adomian–Kamal method incorporates the Kamal Integral Transform (KIT), enabling improved handling of the non-local and long-memory characteristics inherent in fractional-order systems. Additionally, a fractional extension of the classical Euler method is implemented for comparative purposes. The methods are evaluated through two case studies, where approximate solutions are compared to exact solutions for various fractional orders α. Graphical analyses demonstrate that both semi-analytical methods yield results that perfectly overlap with exact solutions, indicating high accuracy and convergence. In contrast, the fractional Euler method exhibits reduced accuracy at lower fractional orders due to its limited ability to capture memory effects. The findings highlight the superior performance and reliability of the Adomian–Kamal and Adomian–Laplace approaches for solving nonlinear FADEs, offering a robust framework for analytical and semi-analytical modeling in physics, engineering, and applied sciences.</p>]]></description>
<pubDate>Feb 2026</pubDate>
</item>
<item>
<title><![CDATA[Beyond Gaussian Processes: Pearson Type VII Processes for Robust Bayesian Regression]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15638]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Abir Bilal Al Khabori&nbsp; &nbsp;and Moh'd Taleb Alodat&nbsp; &nbsp;</p><p>We propose a Bayesian framework for regression modeling that uses Pearson Type VII Processes (P7Ps) as an adaptable generalization of Gaussian Processes (GPs). The P7P is a scale mixture of Normal and Gamma distributions. It has flexible heavy-tailed properties that enhance its robustness against outliers and render it suitable for modelling heavy-tailed data. The P7P framework extends GPs by providing explicit control over tail behavior and variance modulation via the shape and scale parameters. These parameters dictate the weight of the tails and the overall dispersion of the process, facilitating a seamless transition between Gaussian and heavy-tailed regimes, thus improving modelling flexibility and robustness. We derive the predictive distribution for new data points and use the Laplace approximation for efficient posterior inference under the Pearson Type VII (P7) prior, which ensures scalability while maintaining analytical tractability. The Pearson Type VII Process for regression (P7PR) outperforms Gaussian Process for regression (GPR) in terms of resilience, adaptability and predictive accuracy, especially in challenging scenarios with heavy-tailed noise and outliers, as evidenced using comparative analysis based on Mean Square Error (MSE), Mean Absolute Error (MAE) and predictive log-likelihood (PLL), using both simulated and real-world datasets. These results validate the benefits of heavy-tailed priors in modelling non-Gaussian distributions and illustrate the resilience of P7PR as a Bayesian substitute for conventional GPR. 
Although the proposed P7PR framework exhibits robustness and strong predictive accuracy, its computational cost increases remarkably as the size of the datasets, the complexity of the model, and the posterior space expand. Consequently, in order to preserve scalability without compromising performance, practical implementations may necessitate moderate model sizes or approximate inference methods.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[A Novel Approach for Modeling Non-Local Systems with Scale-Dependent Fluctuations and Material Heterogeneities]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15637]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Sahib A. Sachit&nbsp; &nbsp;Mohammed A. Hussein&nbsp; &nbsp;and Hassan K. Jassim&nbsp; &nbsp;</p><p>This study makes a novel contribution to the field of fractional calculus through the introduction of the Sachit Hussein Jassim (SHJ) fractional derivative. This is a newly defined operator with a non-singular, smooth kernel that provides a unified two-sided formulation for temporal and spatial variables. Methodologically, the paper develops a rigorous mathematical formulation and theoretical foundation for the SHJ derivative, establishing proofs for existence, uniqueness, and transform compatibility. The analytical properties of the operator are examined through the use of Laplace, Sumudu, and Jafari transforms, thereby confirming its integration within established analytical techniques. In order to demonstrate its applicability, the Laplace–Adomian Decomposition Method (LADM) is employed to obtain approximate solutions for fractional partial differential equations incorporating the SHJ derivative. Examples of such equations include Burgers' equation and heat-like models. The numerical findings demonstrate that the SHJ derivative attains considerably higher accuracy and faster convergence in comparison to the Caputo–Fabrizio and Atangana–Baleanu formulations. Quantitative comparisons and graphical analyses reveal minimal numerical errors and enhanced stability across varying fractional orders, thereby emphasising the robustness of the new operator in modelling nonlocal and multiscale phenomena. The study concludes that the SHJ derivative constitutes a substantial advancement in the theory and application of fractional derivatives, providing a reliable analytical and computational tool for simulating nonlocal dynamics, material heterogeneities, and complex diffusion processes. 
The implications of these findings extend to a variety of disciplines, including applied mathematics, physics, and engineering, where nonlocal modelling is imperative. Future work should explore multidimensional extensions and numerical optimisation, with a view to broadening the potential impact across scientific and industrial domains.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[An Efficient Haar Wavelet–Finite Difference Scheme for Solving Heat-Type Parabolic Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15636]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Rawa M. Hammad Amin&nbsp; &nbsp;and Younis A. Sabawi&nbsp; &nbsp;</p><p>This paper presents a detailed numerical analysis of a particular class of heat equations. To approximate the solutions, we develop a hybrid computational framework that combines the Haar wavelet collocation method with finite difference techniques. In the proposed approach, finite difference schemes are first applied to discretize the time variable, thereby reducing the original problem to a semi-discrete system. Subsequently, Haar wavelets defined on a uniform spatial grid are employed to approximate the spatial derivatives, allowing for efficient representation of both smooth and non-smooth solution profiles. Within the framework of Sobolev spaces, rigorous proofs of stability and convergence are provided, ensuring the reliability of the method from a theoretical standpoint. To validate the practical performance of the proposed scheme, extensive numerical experiments are conducted across a variety of test problems. The accuracy of the method is quantitatively assessed using both the <img src=image/13443061_01.gif> and <img src=image/13443061_02.gif> error norms. The results consistently demonstrate that the method delivers high accuracy while maintaining strong computational efficiency. Furthermore, the computed solutions exhibit excellent agreement with exact analytical solutions, thereby confirming the robustness and effectiveness of the hybrid scheme for solving heat equations. In addition, the findings of this research make several contributions to the field of numerical analysis of partial differential equations.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[A Static Index Number and Out Degree (In Degree) Domination Number of Total Directed Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15635]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>S. Mahesh Priya&nbsp; &nbsp;and M. Kamaraj&nbsp; &nbsp;</p><p>The primary objective of this research is to introduce and analyse the concept of the static index number in the context of directed and total directed graphs. This study focuses on exploring the structural properties of two newly defined components, the engender edge set and the sturdy set, which play a crucial role in characterizing connectivity within directed graphs. A subdigraph is said to be minimally connected if it is connected and the removal of any single edge disconnects it. A set <img src=image/13443264_01.gif> of minimum cardinality is said to be an engender edge set of <img src=image/13443264_05.gif> if (i) <img src=image/13443264_02.gif> and (ii) <img src=image/13443264_03.gif> is a minimally connected subdigraph. A set of edges <img src=image/13443264_04.gif> is called a sturdy set of <img src=image/13443264_05.gif>, and the minimum cardinality of a sturdy set is called the static index number of a directed graph <img src=image/13443264_05.gif>, denoted by <img src=image/13443264_06.gif>. This parameter serves as a novel measure of structural stability and minimal connectivity in directed graphs. In this work, we establish several theoretical results and derive bounds for the static index number of both directed and total directed graphs. Furthermore, we examine its relationship with other graph-theoretic parameters, particularly the out-degree and in-degree domination numbers. The findings provide deeper insight into the interplay between domination and minimal connectivity in directed networks and contribute to the broader understanding of graph invariants in digraph theory.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Generalized Bihari-Type Inequality via <img src=image/13443173_01.gif>-Hilfer Fractional Operators and Applications to Nonlocal Cauchy-Type Systems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15634]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Liqiang Chen&nbsp; &nbsp;and Norazrizal Aswad Abdul Rahman&nbsp; &nbsp;</p><p>This paper develops a generalized Bihari-type integral inequality within the framework of <img src=image/13443173_02.gif>-Hilfer fractional operators and demonstrates its applicability to the analysis of nonlocal Cauchy-type fractional differential systems. The proposed inequality extends the classical Bihari inequality to fractional integrals defined with respect to an increasing function <img src=image/13443173_02.gif>, thereby encompassing the Riemann–Liouville and Caputo cases as special instances. This generalization offers a unified perspective on several well-known inequalities and provides a powerful tool for investigating nonlinear fractional systems with memory and hereditary properties. Building on this theoretical foundation, we study a class of nonlinear coupled nonlocal Cauchy-type systems governed by <img src=image/13443173_02.gif>-Hilfer derivatives. Using fixed point theory, we establish rigorous existence and uniqueness results and further demonstrate that the solutions exhibit Mittag-Leffler–Ulam–Hyers (ML-UH) stability. Specifically, we prove that small perturbations in system parameters or initial data lead to deviations that remain uniformly bounded by explicit Mittag-Leffler functions, ensuring solution robustness. Numerical simulations are provided to illustrate the theoretical findings, clearly visualizing solution trajectories and their stability under perturbations, thereby confirming the accuracy of the stability estimates. 
The main contribution of this work lies in extending the Bihari inequality to the <img src=image/13443173_02.gif>-Hilfer setting and applying it to stability analysis of nonlocal fractional problems—a topic that, to the best of our knowledge, has not been fully addressed in the literature. This study introduces a novel methodological tool for fractional calculus and highlights its significance in analyzing qualitative properties of fractional-order systems. The results also offer potential applications in engineering, physics, and control theory, where fractional models are increasingly used to describe anomalous diffusion, viscoelasticity, and long-memory processes. The limitations of this work primarily concern the Lipschitz assumptions imposed on the nonlinear terms and the restriction to deterministic systems. Future research may extend the analysis to more general nonlinearities of Osgood type, impulsive or stochastic fractional systems, or to tempered or distributed-order fractional operators. Overall, this research strengthens the theoretical foundation for studying nonlocal fractional systems and provides practical criteria for ensuring stability in complex dynamical models.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[On the Prediction Variance Performances of the Latin Hypercube Design in Some Response Surface Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15633]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Rita Efeizomor&nbsp; &nbsp;Hope Mbachu&nbsp; &nbsp;Julius Nwanya&nbsp; &nbsp;Stephen Ihekuna&nbsp; &nbsp;and Emenike Chukwu&nbsp; &nbsp;</p><p>In space-filling designs, Latin Hypercube Design (LHD) is a common choice of experimental design strategy for computer experiments. Prediction variance describes the error involved with making a prediction using a response surface model. This study provides a theoretical bound on the prediction variance of LHD, which is useful for analyzing the efficiency of space-filling design in computer experiments. The LHDs examined in this study have one-dimensional uniformity such that, for each input variable, its range is divided into the same number of equally-spaced intervals as the number of observations. This is done for factors <img src=image/13442845_01.gif> within the second- and third-order response surface models. The G-optimality and I-optimality criteria, together with Fraction of Design Space (FDS) plots, were employed to assess the predictive capabilities and accuracy of these LHDs. The findings revealed that LHDs performed better under third-order models when evaluated using the G-optimality criterion, while using the I-optimality criterion, LHDs performed better in second-order models. The FDS plots further indicate that as the number of factors increases, the prediction errors across the models become approximately similar. The results highlight LHDs' versatility in handling high-dimensional problems, which underscores practical interest in LHDs for engineering simulations, reinforcing their role in minimizing prediction variance.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[On the Character Table of the Multiplicative Group of a Finite Nearfield]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15545]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>P. Djagba&nbsp; &nbsp;A. L. Prins&nbsp; &nbsp;and M. Tounkara&nbsp; &nbsp;</p><p>This study investigates the representation theory of the multiplicative group of a finite nearfield, with a specific focus on the group <img src=image/13440431_01.gif>, arising from Dickson nearfields. Nearfields, generalizations of fields lacking one of the two distributive laws, pose interesting challenges in algebraic structures due to their noncommutativity and asymmetry. Their multiplicative groups, particularly those derived from Dickson nearfields, have a metacyclic structure that allows for intricate analysis using representation theory. The purpose of this research is to construct and analyze the character tables of these multiplicative groups. We explore foundational concepts such as Schur's lemma, Maschke's theorem, and orthogonality relations, and apply them within the context of finite nearfields. A combination of theoretical and computational methodologies is employed. Specifically, we make use of the Magma algebra system to build and verify character tables for <img src=image/13440431_01.gif> for small values of <img src=image/13440431_02.gif> and <img src=image/13440431_03.gif>, including detailed derivations for <img src=image/13440431_04.gif> and <img src=image/13440431_05.gif>. The principal results include explicit character tables, classifications of conjugacy classes, degrees of irreducible representations, and a generalization framework for any pair (<img src=image/13440431_06.gif>). The research demonstrates that the multiplicative groups <img src=image/13440431_01.gif> are not only metacyclic but also nilpotent and solvable under specific conditions. It further establishes structural conditions that determine the conjugacy class partitioning and representation degrees. 
Our conclusions highlight the deep connection between nearfield algebraic properties and finite group representation theory. This work contributes to the theoretical development of group representations in non-classical algebraic settings and opens doors to applications in geometry, coding theory, and cryptography. Practically, the use of Magma facilitates automated representation analysis, and socially, this research promotes a broader understanding of non-traditional algebraic systems. While the current study is computationally verified for small instances, future work will aim to optimize algorithms for handling larger <img src=image/13440431_01.gif> structures, enhancing scalability and applicability.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Generalized Lambda Distribution-Based Change Point Detection Technique: A Comparative Study with Pruned Exact Linear Time (PELT) Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15544]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Basnayake Ralalage Pavithra Malki Basnayake&nbsp; &nbsp;and Nishani Vasana Chandrasekara&nbsp; &nbsp;</p><p>Identifying structural changes in time series data is essential for accurate modeling, forecasting, and informed decision-making in many fields. Classical change point detection methods often assume specific distributional forms, such as normality, which can limit their effectiveness when dealing with real-world data exhibiting skewness or heavy tails. This study mitigates that limitation by introducing a change point analysis (CPA) approach based on the Generalized Lambda Distribution (GLD), which is known for its flexibility in modeling diverse data distributions. The primary objective is to develop and evaluate a GLD-based CPA (GLD-CPA) method and compare its performance with the widely applied Pruned Exact Linear Time (PELT) algorithm. Simulation results demonstrated that the GLD-CPA approach consistently achieved higher accuracy in detecting all true change points across all sample sizes, notably outperforming the PELT method, which frequently failed to identify multiple change points. The proposed methodology is broadly applicable to real-world time series data, particularly those exhibiting non-normal characteristics. To illustrate the utility and effectiveness of the method, currency exchange rate series were selected. The findings highlight the advantage of adopting flexible distributional modeling in CPA, offering improved detection capabilities in both simulated and empirical settings. 
The proposed GLD-CPA method is a reliable framework for analyzing time series data characterized by complex distributional structures, offering practical utility for both applied and theoretical investigations.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Estimation of Population Mean Using Auxiliary Attribute Information: A Comparative Study]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15543]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Manpreet Singh&nbsp; &nbsp;and Sarbjit Singh Brar&nbsp; &nbsp;</p><p>This study presents a comparative analysis of a weighted estimator under post-stratified sampling and traditional estimators including ratio, product, exponential-ratio, exponential-product, and regression estimators in the presence of auxiliary attribute information. Although these estimators have been individually examined in the literature, their comparative performance under simple random sampling with post-stratification remains unexplored. The study derives approximate expressions for the bias and mean square error (MSE) of all considered estimators and identifies conditions under which the weighted post-stratified estimator demonstrates superior efficiency. Theoretical results confirm that using auxiliary attributes at the estimation stage allows the weighted post-stratified estimator to attain lower MSE than traditional estimators. Furthermore, the study proposes a generalized Searls-type estimator, deriving expressions for its bias, minimum MSE, and optimality conditions. General criteria are also established to evaluate and compare various Searls-type estimators. A simulation study conducted in R for both the weighted post-stratified estimator and the Searls-form post-stratified estimator supports the theoretical findings. Overall, the results advocate the use of weighted estimators when auxiliary attribute information is available, highlighting their applicability in diverse fields such as health, economics, education, and agriculture.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Bayesian and Frequentist Estimation of the Entropy for the Symmetric Double Pareto Distribution Using Random Sampling]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15542]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Mohammed Obeidat&nbsp; &nbsp;Farah Telfah&nbsp; &nbsp;Mariam Setaboha&nbsp; &nbsp;and Mohammad Al-Talib&nbsp; &nbsp;</p><p>Entropy is a useful measure in several fields; therefore, estimation of entropy has a wide range of applications. This research aims to estimate the entropy of the symmetric double Pareto (SDP) distribution under different sampling schemes, namely simple random sampling, ranked set sampling (RSS), double ranked set sampling (DRSS), and systematic ranked set sampling (SRSS). We adopt both maximum likelihood and Bayesian estimation methods. To deal with the fact that the support of the SDP distribution depends on its parameter, a highly efficient Bayesian approach is proposed and implemented through Markov Chain Monte Carlo methods. Both point and interval estimation are studied. The proposed Bayesian method is compared with maximum likelihood and bootstrapping methods. A comprehensive simulation study is conducted, and the performance of the proposed estimation methods is assessed through the mean square error and coverage probability. It was observed that the proposed Bayesian method outperforms the maximum likelihood and bootstrapping methods for small and moderate sample sizes; the discussed methods have similar performance for larger sample sizes. In terms of the sampling schemes, it was observed that DRSS is the best for small <img src=image/13441419_01.gif>, while SRSS has better performance as <img src=image/13441419_01.gif> increases. The results were confirmed by applying the proposed methods to a real data example.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Adaptive Keep-Ratio Selection in Partial-Sample Regression via Nested Cross-Validation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15541]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Aarhus M. Dela Cruz&nbsp; &nbsp;</p><p>Selecting the right neighborhood size is crucial for nonparametric regression, especially when data conditions shift. We study two automated rules for Partial-Sample Regression (PSR), which predicts by averaging only the most relevant observations. The first rule, Fit-Max, minimizes training error, while the second, Risk-Min, minimizes cross-validated error. To avoid biased estimates, both are embedded in a nested cross-validation design that separates tuning from final testing. On synthetic data with regime changes, we find a clear U-shaped error curve with an optimum when keeping only 5–7% of the data, cutting mean squared error by more than 45% compared with a fixed 50% rule. Fit-Max and Risk-Min perform almost identically out of sample, with a small bootstrap edge for Fit-Max. On benchmark datasets, adaptive PSR consistently reduced error relative to the fixed rule, with gains ranging from about 17% in moderate settings to over 80% in more complex cases. Compared with classical methods, PSR achieved accuracy competitive with <img src=image/13443152_01.gif>-nearest neighbors and Nadaraya–Watson kernel regression. These findings establish adaptive keep-ratio selection as a simple, reproducible, and effective strategy for relevance-based regression.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[A Brief Review on Fuzzy Fractional Differential Equations and Their Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15540]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Mohammed Rabih&nbsp; &nbsp;</p><p>Differential equations are a very important topic for continuous system modelling as well as for theoretical purposes. For modelling real-life situations, scientists and engineers use several variations, one of which is the fractional differential equation. For several reasons, such as measurement difficulty and environmental noise, the data for real-life modelling are sometimes imprecise in nature. Fuzzy set theory is one of the suitable frameworks for dealing with such uncertainty. When a fuzzy set is associated with a fractional differential equation, a fuzzy fractional differential equation (FFDE) arises. Fuzzy fractional differential equations have garnered significant attention due to their ability to model processes with uncertainty and memory dependence in various scientific and theoretical contexts. The concepts are also applied in various engineering sciences and technical applications for modelling real-life problems. This review paper provides a comprehensive summary of FFDEs, covering key theoretical advancements, solution methodologies, and practical applications. The study examines previously published materials and conducts a comparative analysis of several components. The different crisp and fuzzy fractional derivatives are also compared with respect to different components. The various forms of past FFDEs are described, along with their theoretical and practical applications. Finally, open tasks and probable future research directions in this emerging field are outlined. </p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Performance of Conditional Autoregressive Models and Their Implementation for Disease Mapping]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15539]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Jajang&nbsp; &nbsp;Budi Pratikno&nbsp; &nbsp;Mashuri&nbsp; &nbsp;Novita Eka Chandra&nbsp; &nbsp;Zulfa Fadhilah&nbsp; &nbsp;and Naura Nahda Nadhifa&nbsp; &nbsp;</p><p>Conditional Autoregressive (CAR) models have been widely used in various disciplines, including epidemiological studies. The application of the CAR model in epidemiological studies is often associated with the relative risk of an infectious disease, and this relative risk can be estimated using CAR models. Here, we evaluate four commonly used CAR models: the Intrinsic CAR, the Besag-York-Mollié CAR (BYM CAR), the BYM-modified CAR (BYM2 CAR), and the Leroux CAR (LCAR). To estimate CAR model parameters, Bayesian inference and the Integrated Nested Laplace Approximation (INLA) concept are used. To compare the models, we used 50 datasets simulated for each sample size (n), ranging from 10 to 100. The results of the study showed that, of the four models compared, the best model was BYM2. This model was then used to model the number of dengue hemorrhagic fever (DHF) cases in Central Java Province in 2024. The research findings indicate the necessity of controlling population density, optimizing the role of medical personnel, and preparing for increased rainfall to curb the spread of dengue fever. Comprehensive detection and control measures through medical facilities are also required. Meanwhile, based on the coefficient of the altitude variable in the model, altitude has a positive influence on the number of DHF cases; because this conclusion conflicts with the medical perspective, data verification and further study of this variable are required.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Applications of Inner Product and Hilbert Spaces in Machine Learning with Data Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15538]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Md. Abdul Mannan&nbsp; &nbsp;Md. Amanat Ullah&nbsp; &nbsp;Md. Amzad Hossain&nbsp; &nbsp;Siful Islam&nbsp; &nbsp;Md. Shafikul Islam&nbsp; &nbsp;Mohammad Makchudul Alam&nbsp; &nbsp;Md. Atiqur Rahman&nbsp; &nbsp;Sahib Jada Eyakub Khan&nbsp; &nbsp;and Muzibur Rahman Mozumder&nbsp; &nbsp;</p><p>This study presents a comprehensive exploration of inner product spaces and their completion into Hilbert spaces, examining their foundational roles in both pure mathematics and modern machine learning. Inner product spaces introduce geometric notions such as orthogonality, angle, and norm, while Hilbert spaces, being complete inner product spaces, extend these ideas to infinite-dimensional settings. This paper develops key theoretical concepts including the Cauchy–Schwarz inequality, Bessel's inequality, Parseval's identity, the polarization identity, and orthogonal projections. The discussion further explores the functional enrichment that Hilbert spaces provide over normed and Banach spaces, particularly in contexts requiring convergence and projection-based optimization. The practical relevance of Hilbert spaces is demonstrated through their role in machine learning algorithms such as Support Vector Machines (SVM), Principal Component Analysis (PCA), and kernel methods using Reproducing Kernel Hilbert Spaces (RKHS). Numerical simulations and MATLAB visualizations are employed to aid understanding and demonstrate the application of inner product theory in data-driven tasks such as classification and dimensionality reduction. The results show how the geometry of Hilbert spaces naturally supports core operations in machine learning, making them indispensable in theoretical development and algorithmic design.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Summary Goodness-of-Fit Statistics for Binary Generalized Linear Models with Noncanonical Probit, Log-Log and Complementary Log-Log Links]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15537]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Xuelu Sun&nbsp; &nbsp;Stephen J. Quinn&nbsp; &nbsp;and Sunil Bhar&nbsp; &nbsp;</p><p>Logistic regression is a popular and widely used method in applied research. However, other noncanonical link models can be more appropriate when the logistic model does not suit the characteristics of the response variable. Goodness-of-fit statistics play a crucial role in evaluating model adequacy by providing quantitative measures of how closely predicted probabilities align with observed outcomes. Despite their importance, limited research has focused on assessing goodness-of-fit under noncanonical links. In this paper, we conducted a simulation study to compare the performance of the Hosmer-Lemeshow (<img src=image/13442801_01.gif>) statistic, the normalized unweighted sum of squares (<img src=image/13442801_02.gif>) statistic, and the Hjort-Hosmer (<img src=image/13442801_03.gif>) statistic in generalized linear models with noncanonical links, specifically probit, log-log and complementary log-log. The simulation results show that all three statistics maintained the expected Type I error rate of 5% under correctly specified models. In scenarios with model misspecification, the <img src=image/13442801_02.gif> (34.8%) and <img src=image/13442801_03.gif> (33.0%) statistics generally achieved higher overall rejection rates than the <img src=image/13442801_01.gif> statistic (27.8%). However, in some scenarios, the <img src=image/13442801_01.gif> statistic outperformed both the <img src=image/13442801_02.gif> and <img src=image/13442801_03.gif> statistics, suggesting that all three statistics have a role in assessing model adequacy.</p>]]></description>
<pubDate>Dec 2025</pubDate>
</item>
<item>
<title><![CDATA[Independent Domination Number of Certain Graphs and Their Degree Splitting Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15516]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>M. Sridevi&nbsp; &nbsp;N. Srinivasan&nbsp; &nbsp;and Parthiban A&nbsp; &nbsp;</p><p>The concept of domination in graphs is a fundamental topic in the field of graph theory. The sources of domination can be seen in a wide range of real-world situations, such as radio broadcasting, computer communication networks, school bus routing, electrical power networks, influence in social networks, surveying, resource allocation, and even transporting hazardous materials. A dominating set (DS) of a finite, simple, connected, and undirected graph <img src=image/13441473_01.gif>, or simply <img src=image/13441473_02.gif>, with a non-empty node set <img src=image/13441473_03.gif> and line set <img src=image/13441473_04.gif>, is a set <img src=image/13441473_05.gif> such that every node <img src=image/13441473_06.gif> is adjacent to a node <img src=image/13441473_07.gif>. The domination number (DN) of <img src=image/13441473_02.gif>, represented by <img src=image/13441473_08.gif>, is the cardinality of a minimum DS. Similarly, a set <img src=image/13441473_09.gif> of nodes is called an independent set (IS) if no two nodes in <img src=image/13441473_10.gif> are adjacent to each other. An independent dominating set (IDS) of <img src=image/13441473_02.gif> is a set <img src=image/13441473_11.gif> that is both dominating and independent in <img src=image/13441473_02.gif>. The independent domination number (IDN) of <img src=image/13441473_02.gif>, represented by <img src=image/13441473_12.gif>, is the cardinality of a minimum IDS. Further, let <img src=image/13441473_02.gif> be a graph with node set <img src=image/13441473_13.gif> where each <img src=image/13441473_14.gif> is a set of nodes having at least two nodes of the same degree and <img src=image/13441473_15.gif>. 
The degree splitting graph (DSG) of <img src=image/13441473_02.gif>, denoted by <img src=image/13441473_16.gif>, is formed from <img src=image/13441473_02.gif> by adding nodes <img src=image/13441473_17.gif> and joining <img src=image/13441473_17.gif> to each node of <img src=image/13441473_14.gif> for <img src=image/13441473_18.gif>. In this paper, we derive the IDN of certain graphs such as the lotus graph, the line graph of the sunlet graph, the butterfly graph, the shipping graph, the ladder graph, the fan graph, the double fan graph, and their degree splitting graphs.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[The Identification of Influential Groups in Linear Regression Models via an Influence Matrix Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15515]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Tobias Ejiofor Ugah&nbsp; &nbsp;Anichebe Gregory Emeka&nbsp; &nbsp;Ezeora Nnamdi Johnson&nbsp; &nbsp;Caroline Ngozi Asogwa&nbsp; &nbsp;Adaora Angela Obayi&nbsp; &nbsp;Uchenna Charity Onwuamaeze&nbsp; &nbsp;Emmanuel Ikechukwu Mba&nbsp; &nbsp;Ifeoma Christy Mba&nbsp; &nbsp;Egbo Mary Nkechinyere&nbsp; &nbsp;and Comfort Njideka Ekene-Okafor&nbsp; &nbsp;</p><p>Cook's distance is a prominent diagnostic tool for measuring influence in linear regression diagnostics. Many authors have studied it, with the main focus on its use for detecting a single influential observation in linear regression. In this work, we propose a standardized version of it and extend the single-case form to flag influential subsets. The proposed method uses the diagonal and off-diagonal elements of a normalized influence matrix <img src=image/13441459_01.gif>. The main diagonal elements of <img src=image/13441459_01.gif> consist of the standardized univariate Cook statistics, and the off-diagonal elements consist of useful statistics that can detect influential subsets. A scattergram of the off-diagonal components of <img src=image/13441459_01.gif> is drawn, and lower and upper bounds are imposed on it. These bounds are the principal device for detecting influential subsets. A notable advantage of the approach is that it facilitates the identification of influential subsets that would be lost if only the main diagonal entries of <img src=image/13441459_01.gif> were explored. The method is effective and computationally simple to apply, especially where more complex, computationally intensive methods are not easy to implement. 
Analysis of well-known real-life data sets in linear regression diagnostics is used to illustrate the application and usefulness of the proposed method.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Analytical Study of Priority-Feedback Queue Network Model Comprising of Three Parallel Servers within Stochastic Conditions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15514]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Preeti&nbsp; &nbsp;Deepak Gupta&nbsp; &nbsp;and Vandana Saini&nbsp; &nbsp;</p><p>This study presents a comprehensive analytical investigation of a four-server queueing system, modeled as a priority feedback queue network operating under stochastic conditions. The system architecture comprises a primary server that incorporates a priority mechanism and is connected to three parallel servers. Customers, divided into low- and high-priority groups, arrive at the primary server according to a Poisson process. Service times at all servers are assumed to follow exponential distributions. After completion of service at any server, a customer either exits the system or re-enters for further service with a server-dependent feedback probability, which reflects a realistic service satisfaction mechanism. By applying generating function techniques and solving the associated differential-difference equations, this analysis derives explicit expressions for the time-independent state probabilities and the steady-state performance metrics of the network, including server utilization, average queue lengths, waiting times, and throughput. Extensive numerical simulations are carried out to validate the theoretical results and to examine the impact of different parameters on various performance indicators, providing information that system administrators can use to reduce congestion. This study yields actionable insights into the optimal allocation of resources, aiming to minimize delays and maximize throughput under varying stochastic conditions. 
Furthermore, the behavioral analysis of the model underscores the notable effects of priority and feedback mechanisms on the system performance measures and provides a foundation for optimization strategies in multi-class service environments. The findings have significant implications for the design and optimization of complex queueing networks in telecommunications, manufacturing systems, and other service-oriented industries where feedback and priority-based processing are prevalent.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Jacobi Identity for the Generalized Cross Product in <img src=image/13441863_01.gif>: A Theoretical and Computational Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15513]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Ricardo Velezmoro-León&nbsp; &nbsp;Robert Ipanaqué-Chero&nbsp; &nbsp;Luis A. Fiestas-Llenque&nbsp; &nbsp;and Rolando E. Ipanaqué-Silva&nbsp; &nbsp;</p><p>The classical cross product in three-dimensional Euclidean space <img src=image/13441863_02.gif> satisfies the Jacobi identity, a fundamental property linked to Lie algebra structures. However, a transparent and rigorous generalization of this identity to higher dimensions has remained elusive. In this paper, we address this gap by investigating the antisymmetrization of nested cross products involving <img src=image/13441863_03.gif> vectors in <img src=image/13441863_04.gif>. We aim to establish a generalized Jacobi identity valid for all dimensions <img src=image/13441863_05.gif>. Using symbolic computation via the Wolfram Language, we identified consistent algebraic patterns suggesting such a generalization. Building upon these observations, we developed a theoretical framework based on multilinear algebra, permutation groups, and antisymmetrization operators to prove the identity rigorously. Our methodology combines computational experimentation with classical mathematical theory, resulting in a comprehensive analysis. The principal result states that a specific linear combination of cross products, determined by <img src=image/13441863_06.gif>-shuffles, always vanishes in <img src=image/13441863_04.gif>. This provides a natural extension of the classical Jacobi identity, with deep combinatorial and algebraic significance. The main conclusions highlight the strong interplay between computation and abstract algebra in discovering new mathematical properties. 
This study contributes to the field by expanding the understanding of antisymmetric operations in higher-dimensional spaces and illustrating a robust methodology that bridges symbolic computation and formal proof. Limitations of this work include its current restriction to Euclidean spaces; extensions to more general manifolds or non-Euclidean settings are left for future research. Although the immediate practical and social implications are limited, the theoretical advances presented here lay the groundwork for further exploration in differential geometry, theoretical physics, and computational mathematics. Our findings emphasize the importance of modern computational tools in advancing pure mathematical research.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Rough Approximations of Soft Semigroups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15512]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Rukchart Prasertpong&nbsp; &nbsp;and Aiyared Iampan&nbsp; &nbsp;</p><p>In the context of classical soft set theory, soft sets are primarily used to find optimal parameters and the best alternative data. A soft set can be approximated by a rough set model as either an exact or an inexact soft set. In this context, lower and upper rough approximations of soft sets serve as tools for verification. To study this concept in algebraic structures, this paper introduces suitable lower and upper rough approximations within soft ideal theory in semigroups, leading to a new rough set model. A corresponding example is provided to illustrate the proposed concepts. Fundamental properties of the model are explored through operations on soft sets. Moreover, we prove that the lower and upper rough approximations of various types of soft semigroup structures—including uni-soft semigroups, uni-soft left ideals, uni-soft right ideals, uni-soft quasi-ideals, int-soft semigroups, int-soft left ideals, int-soft right ideals, int-soft quasi-ideals, soft-covered semigroups, soft-covered left ideals, soft-covered right ideals, soft-covered quasi-ideals, soft anti-covered semigroups, soft anti-covered left ideals, soft anti-covered right ideals, and soft anti-covered quasi-ideals—preserve their respective structural properties. Overall, this study provides new results for soft sets in semigroups within the framework of rough set theory. This research is anticipated to exert a significant impact on the study of algebra and decision-making processes, specifically when the parametric domain is endowed with a semigroup operation.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Advancements in Robust Least Squares Approximation Techniques: A Comparative Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15511]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Aishah Basha&nbsp; &nbsp;</p><p>This study presents a comprehensive comparative analysis of recent advancements in robust least squares approximation techniques, with emphasis on their theoretical foundations, computational methods, and practical applications. While traditional least squares methods remain widely used for data fitting, signal processing, and estimation tasks, they are highly sensitive to noise and outliers, reducing effectiveness in real-world settings. To address these limitations, robust approaches have emerged, including penalised least squares, polynomial discrete penalised least squares, meshfree moving least squares (MLS), fuzzy least squares, and augmented MLS methods. The purpose of this research is to evaluate these techniques in terms of robustness, accuracy, and computational efficiency across various domains, including finance, engineering, geospatial analysis, and stochastic modelling. Methodologically, the study integrates theoretical insights, case studies, and a controlled numerical illustration with added noise, outliers, and irregular sampling, evaluated using Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and a robustness index (RI). Results show that each method offers distinct advantages. MLS techniques perform well with irregular data, fuzzy least squares are suited for uncertain environments, and penalised approaches balance complexity with stability. In quantitative evaluation, augmented MLS achieved the lowest error and highest robustness index, while runtime analysis revealed trade-offs between accuracy and computational cost. Contributions include a unified framework for comparing methods, explicit clarification of modelling assumptions, and discussion of orthogonal and generalised polynomial systems for enhanced stability. 
Limitations include the restricted scope of benchmarks and case studies, which may reduce generalisability. Nonetheless, the findings provide actionable guidance for practitioners facing noisy, uncertain, or irregular data, and highlight opportunities for hybrid and adaptive robust least squares methods in emerging fields.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Research on Estimating Unknown Functions in a Class of Integral Inequalities Involving Discontinuous Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15510]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Liqiang Chen&nbsp; &nbsp;and Norazrizal Aswad Abdul Rahman&nbsp; &nbsp;</p><p>Integral inequalities serve as fundamental analytical tools for investigating the qualitative behavior of solutions to differential, integral, and integro-differential equations. Classical inequalities such as those of Gronwall, Bellman, and Bihari have been widely used to establish uniqueness, boundedness, and stability results. However, their applicability is limited in the presence of discontinuities, impulsive effects, and nonlinear terms of power type. To address these limitations, this paper develops a new class of integral inequalities specifically designed for discontinuous functions. The proposed framework unifies delayed arguments, impulsive jumps, and power-function nonlinearities within a single inequality structure. By combining interval partitioning, piecewise analysis, and mathematical induction, together with extensions of the Gronwall–Bellman–Bihari approach, we derive explicit maximal bounds for unknown functions that satisfy these inequalities. These bounds generalize many existing results as special cases, while maintaining concise and computationally convenient forms. The explicit estimates enhance the theoretical tractability of discontinuous systems and offer practical applicability in complex settings. To demonstrate the effectiveness of the proposed results, we apply them to impulsive differential equations with nonlinear growth and delayed feedback. The example shows that the new inequalities guarantee boundedness even under strong nonlinear effects and abrupt impulses, thus confirming the robustness of the framework. The results highlight potential applications in control theory, stability analysis, and dynamical systems involving memory, delays, or impulsive disturbances. 
The contributions of this study are threefold: (i) it establishes a unified inequality framework that simultaneously incorporates delay, impulse, and nonlinear structures; (ii) it provides constructive explicit bounds that improve upon existing results; and (iii) it lays a foundation for extending the analysis to fractional-order, stochastic, or multidimensional systems. Overall, this research advances the qualitative theory of differential equations and enriches the available tools for analyzing discontinuous dynamical models.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[A Mixed Integer Optimization Framework for Significant Variable Selection in Linear Regression with Multicollinearity Control]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15509]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Samah Abdellatif El-Danasoury&nbsp; &nbsp;Mahmoud Rashwan&nbsp; &nbsp;and Nadia Makary Girgis&nbsp; &nbsp;</p><p>Variable selection in linear regression is a crucial process for selecting the most important variables that contribute to building an efficient model. This process primarily aims to improve model performance, reduce complexity, and enhance interpretability. Different methods have been proposed for variable selection; some are classical methods, and others are mathematical programming methods, which are preferred over classical methods because we can target various constructive objectives within the same model. This paper aims to propose, develop, test, and evaluate a mathematical programming model that selects a subset of explanatory variables in order to obtain a statistically significant linear regression model (LRM) with non-collinear variables. To construct an efficient LRM that is valid for interpretation, the LRM assumptions have to be satisfied. The proposed mathematical programming model will result in a significant linear regression model with significant variables by minimizing the Sum of Squares of Errors (SSE) and satisfying the significance of the overall LRM, ensuring no multicollinearity, as well as other assumptions considered as mathematical constraints in Chung's model (linearity and individual variables' significance), which aims to enhance the mathematical programming model. The proposed mathematical programming model is compared to Chung's mathematical programming model and the classical stepwise method for variable selection. 
Using simulated data and applying the three methods, the results show that the suggested model is more suitable for a small number of variables, especially in small and moderate sample sizes, with respect to overall model significance, whereas adding a no-multicollinearity constraint improves the model's performance in selecting the appropriate variables regardless of the number of variables and across different sample sizes.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[A Lagrange-Based Hybrid Block Method with Single Off-Step Point for Differential Equations within a Fuzzy Framework]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15400]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Siti Syazwani Sazali&nbsp; &nbsp;Nurul Huda Abdul Aziz&nbsp; &nbsp;Muhammad Zaini Ahmad&nbsp; &nbsp;and Lee Khai Chien&nbsp; &nbsp;</p><p>This paper presents a novel and efficient two-step hybrid block scheme incorporating a single off-step point for solving differential equations with fuzzy numbers. Addressing the demand for robust and precise solvers designed specifically for fuzzy systems, the proposed approach leverages the adaptability of a two-step scheme enhanced by an additional evaluation at an off-step point to improve both accuracy and computational performance. The formulation of the 2HBM1OP algorithm is achieved through constructing a Lagrange interpolation polynomial that includes both step and off-step nodes, enabling the approximation of the fuzzy solution at multiple points within each computational block. A thorough convergence study confirms that the method's consistency and zero-stability conditions are satisfied for fuzzy equations. Numerical tests involving benchmark fuzzy differential equations demonstrate the approach's effectiveness in achieving accurate results with efficient computational effort relative to exact solutions. This work contributes to the advancement of numerical techniques in fuzzy system analysis by providing a rigorously developed and practically applicable method that tackles the unique challenges inherent in fuzzy initial value problems. The 2HBM1OP scheme bridges theoretical fuzzy concepts with numerical computation and lays the groundwork for future exploration of higher-order block methods in fuzzy settings.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Rough Injective G-Modules]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15399]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>S. Sangeetha&nbsp; &nbsp;Shakeela Sathish&nbsp; &nbsp;B. Muthu Deepika&nbsp; &nbsp;and B. Srirekha&nbsp; &nbsp;</p><p>The concept of a G-module serves as a fundamental link between group theory and module theory, offering a powerful algebraic framework for analyzing group actions on modules. In this paper, we extend the classical theory of G-modules by incorporating the mathematical foundation of rough set theory, which is well-suited for modeling uncertainty, vagueness, and indiscernibility. To this end, we introduce and formalize the notion of rough G-modules—algebraic structures where the module operations and group actions are subject to approximation, based on lower and upper bounds. Also, we define and investigate rough injective G-modules, which generalize the classical concept of injectivity under rough approximations. We examine their structural properties, provide characterizations, and explore their behavior under homomorphisms and extensions. The paper establishes key embedding theorems and extension criteria, ensuring the compatibility of injectivity with rough approximation operators. The practical relevance of these theoretical developments is demonstrated through an application to data access control systems, where user permissions and role hierarchies naturally exhibit rough and uncertain relationships. Our results offer a novel algebraic framework for reasoning about access privileges in the presence of partial or uncertain information, thus highlighting the potential of rough G-module theory in real-world computational settings.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[A New Approach towards Rough Lattice Using Rough Relation with a Condition]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15398]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>B. Srirekha&nbsp; &nbsp;Shakeela Sathish&nbsp; &nbsp;B. Muthu Deepika&nbsp; &nbsp;and S. Sangeetha&nbsp; &nbsp;</p><p>The concept of rough relations plays a significant role in rough set theory, introduced by Zdzisław Pawlak in 1981. This study employs the rough membership function as a key tool to represent and analyze rough relations over a universe of discourse. A specific condition is applied to these relations and examined through ordered pairs, allowing systematic evaluation of approximations and their behavior. The work investigates the algebraic properties of rough relations within a lattice-theoretic framework, particularly on distributive lattices equipped with a complementary operation. This structure provides a clear interpretation of the approximation process and the relationship between elements. The existence and behavior of upper approximations are illustrated through examples, with emphasis on how granularity refines approximation boundaries and improves the classification of indiscernible objects. Key theoretical results demonstrate that in a rough lattice, if two elements are ordered, then their meet and join operations preserve non-negativity under the rough membership function, reflecting algebraic consistency. This property extends to complementary elements, ensuring that their mutual relationships also maintain non-negative rough membership values. Additionally, when comparing two rough lattices over the same universe, inclusion between their approximation sets leads to a monotonic increase in rough membership values, indicating order preservation. In distributive lattices, specific inequalities involving elements and their complements reinforce internal consistency between the algebraic and rough structures. 
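For readers unfamiliar with the rough membership function, it can be computed directly from a partition of the universe into indiscernibility classes: mu_X(x) = |[x] ∩ X| / |[x]|. The minimal sketch below (the universe, partition, and sets are invented for illustration, not taken from the paper) also checks a basic related property: enlarging the target set never lowers membership.

```python
def rough_membership(x, target, partition):
    # mu_X(x) = |[x] ∩ X| / |[x]|, where [x] is the equivalence class of x.
    block = next(b for b in partition if x in b)
    return len(block & target) / len(block)

# A made-up universe {1..6} partitioned into indiscernibility classes.
partition = [{1, 2, 3}, {4, 5}, {6}]
X, Y = {2, 4}, {2, 3, 4, 6}          # X is a subset of Y
for x in range(1, 7):
    # Monotonicity: X ⊆ Y implies mu_X(x) <= mu_Y(x).
    assert rough_membership(x, X, partition) <= rough_membership(x, Y, partition)
```

Values strictly between 0 and 1 mark elements in the boundary region, which is exactly where the granularity of the partition determines how sharply objects can be classified.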
Within the upper approximation space, element ordering is symmetrically reflected through complementation, supporting the duality principle. The study also confirms the transitivity of the rough membership function: if one element is roughly related to a second, and that second to a third, then the first is also roughly related to the third—highlighting logical coherence. Overall, these findings advance the theoretical understanding of rough lattice structures and underscore their importance in modeling uncertain and incomplete information, with applications in logic programming, data mining, and formal concept analysis.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[A Fixed Point Theorem on Metric-like Spaces of Hyperbolic Type]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15397]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Silvja Çobani&nbsp; &nbsp;and Elida Hoxha&nbsp; &nbsp;</p><p>The metric-like space was first introduced by Hitzler in 2000 as a generalization of the metric space which allows for non-zero self-distances. Since then, these spaces have become a focal point of research with regard not only to the investigation of fixed points of contractive mappings, but also to introducing new generalizations of this type of spaces. In this paper, we introduce and investigate a new type of metric-like spaces, referred to as metric-like spaces of hyperbolic type, a generalization of metrically convex metric-like spaces. This space maintains the dislocated properties of the metric-like space while satisfying a hyperbolic convexity criterion inspired by Kirk's 1972 formulation. Motivated by the foundational work of Banach and its subsequent extensions to the metric-like space, we establish a new fixed point result that expands the applicability of weak contractive conditions in the metric-like spaces of hyperbolic type. More precisely, we derive a fixed point theorem for multivalued mappings satisfying a generalized <img src=image/13442551_01.gif> weak-type contractive condition. Our findings demonstrate that significant results in the literature can also be extended to the metric-like spaces of hyperbolic type. Moreover, an illustrative example is provided in the end to demonstrate the validity of the proposed theorem. This work contributes to the growing body of fixed point theory by extending and generalizing several existing results in the literature.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Binary Huff Curves over Non-Local Rings: The Yak Protocol and Blockchain]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15396]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>EL MEHDI Badre&nbsp; &nbsp;SAMIR Kourtite&nbsp; &nbsp;SLIMANE Sidouna&nbsp; &nbsp;M’Hammed Ziane&nbsp; &nbsp;and ABDELALI Lahrech&nbsp; &nbsp;</p><p>Motivated by the growing need for secure and efficient cryptographic solutions in blockchain technology, this study explores the cryptographic potential of binary Huff curves defined over the non-local ring <img src=image/13442212_01.gif>, introducing novel group structures for advanced blockchain applications. The research aims to enhance the security and efficiency of cryptographic primitives by leveraging the algebraic properties of these curves, particularly for resource-constrained devices in blockchain ecosystems. We establish a bijection between the Huff curve <img src=image/13442212_02.gif> and the product <img src=image/13442212_03.gif>, enabling an efficient group law that increases the complexity of the discrete logarithm problem (DLP). Methodologically, we define arithmetic operations in <img src=image/13442212_04.gif>, prove the bijection, and derive addition formulas for the curve. These results are applied to adapt the Yak key exchange protocol, enhancing its resistance to DLP-based attacks through the non-local ring's structure. Principal findings demonstrate that <img src=image/13442212_02.gif> achieves approximately <img src=image/13442212_05.gif> group order, doubling the DLP security to <img src=image/13442212_06.gif>-bit compared to <img src=image/13442212_06.gif>/2-bit for standard curves over <img src=image/13442212_07.gif>, with computational efficiency suitable for Internet of Things (IoT) devices. The study contributes to cryptography by proposing a robust framework for blockchain transaction security and secure data management, notably in multi-party computation and zero-knowledge proofs. 
Key conclusions highlight the curves' potential to secure blockchain validators and IoT nodes, as exemplified in supply chain applications. Novel aspects include the non-local ring's algebraic constraints and the Yak protocol's adaptation for blockchain. Limitations include the need for practical implementation and benchmarking against curves like secp256k1. Practical implications involve improved transaction security and data privacy in blockchain, while social implications include enabling secure, decentralized systems for healthcare and supply chain tracking. Future research should validate performance in real-world blockchain environments and assess resistance to side-channel attacks.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Oscillatory Behavior of Certain Class of Mixed Nonlinear Sub-Elliptic Equations in the Heisenberg Group]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15395]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>S. Balamani&nbsp; &nbsp;S. Priyadharshini&nbsp; &nbsp;K. K. Viswanathan&nbsp; &nbsp;and V. Sadhasivam&nbsp; &nbsp;</p><p>This paper investigates the oscillatory behavior of solutions to a class of nonlinear differential equations on the Heisenberg group, where the dynamics are governed by the interplay between superlinear and sublinear terms. The Heisenberg group, as a fundamental example of a non-commutative Lie group with sub-Riemannian geometry, offers a natural and rich framework for analyzing such equations, which frequently arise in mathematical physics and geometric analysis. To establish oscillation criteria, we employ a combination of the Riccati transformation and the integral averaging method. The Riccati technique enables the reformulation of the original equation into a form suitable for qualitative analysis, while the integral averaging approach helps derive sufficient conditions by capturing the average behavior of the nonlinear terms and coefficients over certain intervals. This dual approach allows us to rigorously examine how the interaction between superlinear and sublinear growth influences the oscillatory nature of solutions. We derive new sufficient conditions that guarantee the oscillation of all solutions under appropriate structural assumptions. These results generalize and extend classical oscillation criteria to sub-elliptic equations on the Heisenberg group. To demonstrate the applicability and sharpness of our results, several illustrative examples are presented. The findings contribute to the broader understanding of oscillation theory in non-Euclidean spaces and pave the way for future research in nonlinear analysis on Lie groups.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Improvements of Some Normal Distribution Function Approximations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15394]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mohammed Obeidat&nbsp; &nbsp;Rema Al-Jamal&nbsp; &nbsp;and Ahmad Hanandeh&nbsp; &nbsp;</p><p>The Gaussian distribution and its related functions are widely used in statistics, engineering, and physics, including hypothesis testing, quality control, and digital communications. As the standard normal cumulative distribution function (CDF) lacks a closed-form expression, accurate approximations are essential for practical applications. This study presents new, highly accurate improvements to existing approximations of the standard normal distribution function while preserving simplicity. The proposed methodology refines current formulas by introducing additional parameters and systematically optimizing them using R-based iterative procedures guided by the Kolmogorov-Smirnov statistic to minimize maximum and mean absolute errors. Evaluations across a dense grid confirm the precision and robustness of the proposed models. Results demonstrate significant improvements, with one enhancement reducing the maximum error by a factor of 4.86 compared to previous models, while other formulas show notable accuracy gains. The improved models remain straightforward to implement, making them suitable for real-time engineering and applied statistical computations. A practical application is demonstrated by calculating the bit error rate (BER) in Binary Phase Shift Keying (BPSK) systems under Gaussian noise, where the proposed approximations closely replicate the exact BER curve, confirming their effectiveness in telecommunications system analysis. These improvements provide engineers and scientists with precise, efficient tools, supporting modeling, risk assessment, and system reliability in applications requiring accurate Gaussian evaluations.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Refining Wage Predictions with Machine Learning and Bayesian Optimization]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15393]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Blerina Boçi&nbsp; &nbsp;and Aurora Simoni&nbsp; &nbsp;</p><p>Machine learning (ML) methods are essential in predictive modeling, where they use historical data to build algorithms capable of forecasting future outcomes. Hyperparameter optimization is crucial for selecting the best model configuration for a specific problem, aiming to minimize prediction error and improve performance. This research examined the performance of three machine learning regression models: support vector regression (SVR), extreme gradient boosting (XGBoost), and random forest (RF). Their effectiveness was measured using evaluation indicators, including mean squared error (MSE), mean absolute error (MAE), coefficient of determination (<img src=image/13442252_01.gif>), and adjusted <img src=image/13442252_01.gif>. Bayesian Optimization (BO) was applied to identify the optimal hyperparameters for the SVR, XGBoost, and RF models to enhance their predictive capabilities. The models were tested on a dataset from the Albanian Institute of Statistics (INSTAT), which included the average gross monthly wage per employee by occupation group. To ensure robustness and avoid temporal leakage, we used time-aware cross-validation (TimeSeriesSplit) for model validation, which preserves the chronological structure of the dataset and better reflects real-world forecasting scenarios. In addition, we applied bootstrapped confidence intervals to all evaluation metrics on the test set, offering a more reliable assessment of model performance. These methodological choices enhance the statistical credibility of the results. 
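The rationale for time-aware validation is that each fold may train only on observations that precede its test block. A simplified, stdlib-only sketch of the expanding-window splitting that scikit-learn's TimeSeriesSplit performs (fold sizing here is illustrative and not identical to the library's):

```python
def time_series_splits(n_samples, n_splits):
    # Expanding-window splits: fold k trains on everything before its
    # test block, so no future observation leaks into training.
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, min((k + 1) * fold, n_samples)))
        yield train, test

for train, test in time_series_splits(12, 3):
    assert max(train) < min(test)     # chronology preserved in every fold
```

Unlike shuffled k-fold, every test index lies strictly after every training index, which is what prevents the temporal leakage mentioned above.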
Among the evaluated models, the Bayesian optimized SVR using the EI acquisition function delivered the highest predictive accuracy, achieving an <img src=image/13442252_01.gif> value of 0.9955, an adjusted <img src=image/13442252_01.gif> of 0.9936, and low error metrics (MAE = 0.0416, MSE = 0.0042). By employing BO for hyperparameter tuning, the SVR model demonstrated exceptional accuracy in predicting average gross monthly wages, showcasing its effectiveness in handling the dataset. These findings suggest that SVR, when optimized using BO, is a powerful tool for wage prediction tasks. Despite the limited number of features, this study demonstrates how Bayesian Optimization can still offer valuable improvements in model accuracy, especially in constrained real-world settings. However, further research is needed to determine whether these results generalize to other datasets and domains.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Some Path and Star Related Discrete Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15392]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Lekha Bijukumar (née Lekha S)&nbsp; &nbsp;</p><p>This study investigates the existence of discrete and strong discrete labeling for several classes of path and star-related graphs, contributing to the broader field of graph labeling theory. A graph is said to admit a discrete labeling if its vertices can be assigned binary labels (0 and 1), such that each edge receives a label determined by the exclusive OR (XOR) of the labels of its incident vertices, subject to specified constraints. We first establish that the super subdivision of a path <img src=image/13441705_01.gif> by the complete bipartite graph <img src=image/13441705_02.gif> is discrete, demonstrating that replacing each edge of <img src=image/13441705_01.gif> with <img src=image/13441705_02.gif> preserves the discrete labeling property. Additionally, we prove that the square of a path <img src=image/13441705_01.gif>, obtained by connecting vertices at a distance of at most two, also admits a discrete labeling. Further, we examine the shadow graph <img src=image/13441705_04.gif>, constructed by duplicating <img src=image/13441705_01.gif> and connecting each vertex to its copy's neighbors, and show that it is also discrete. Another variant, the graph <img src=image/13441705_03.gif> formed by switching one pendant vertex in <img src=image/13441705_01.gif>, is similarly proven to support discrete labeling. Beyond paths, we explore star-related graphs, confirming that the shadow graph of a star <img src=image/13441705_05.gif> and the shadow graph of a bi-star <img src=image/13441705_06.gif> both permit discrete labeling. Extending the concept, we introduce strong discrete labeling, which imposes a stricter requirement by adding one more constraint to discrete labeling. 
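The XOR edge rule described above is concrete enough to compute by hand; a toy stdlib sketch (the path and labels are illustrative, and the paper's additional discreteness constraints are not reproduced here):

```python
def xor_edge_labels(vertex_labels, edges):
    # Each edge (u, v) receives the XOR of its endpoints' binary labels.
    return {(u, v): vertex_labels[u] ^ vertex_labels[v] for u, v in edges}

# A path on four vertices with alternating vertex labels 0, 1, 0, 1:
labels = {0: 0, 1: 1, 2: 0, 3: 1}
edges = [(0, 1), (1, 2), (2, 3)]
edge_labels = xor_edge_labels(labels, edges)   # every edge gets label 1
```

An edge is labelled 1 exactly when its endpoints carry different vertex labels; the paper's constraints then restrict how the 0- and 1-labelled edges may be distributed.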
Under this stronger condition, we prove that the star graph <img src=image/13441705_07.gif> admits a strong discrete labeling if and only if m is even, highlighting a parity-based constraint absent in standard discrete labeling. These findings expand the known families of graphs with discrete and strong discrete labeling, offering new insights into structural properties that facilitate such labeling. Potential applications include coding theory, network design, and algorithm optimization, while open questions remain regarding the extensibility of these results to other graph operations and more complex graph classes.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Restricted Detour <img src=image/13441700_01.gif>-Distance and Restricted Detour <img src=image/13441700_01.gif>-Index of Vertex Identified and Edge Introducing Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15391]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mohammed Rahman Ahmed&nbsp; &nbsp;and Gashaw Aziz MohammedSaleh&nbsp; &nbsp;</p><p>To establish a mathematical foundation for the extensive research areas of Quantitative Structure–Activity Relationship and Quantitative Structure–Property Relationship, the concept of chemical graph theory has been introduced. In this framework, topological indices are defined as numerical descriptors that quantitatively represent the structural features of molecular graphs. A wide variety of topological indices have been systematically developed and extensively employed as predictive tools in Quantitative Structure–Activity Relationship and Quantitative Structure–Property Relationship modeling studies. Given the importance of indices in chemical graph theory, we introduce a novel distance between two vertices <img src=image/13441700_02.gif> and <img src=image/13441700_03.gif>, called the restricted detour <img src=image/13441700_04.gif>-distance, in order to define the restricted detour <img src=image/13441700_04.gif>-index. The restricted detour path <img src=image/13441700_05.gif> between two vertices <img src=image/13441700_02.gif> and <img src=image/13441700_03.gif> in a connected graph <img src=image/13441700_10.gif> is the longest <img src=image/13441700_06.gif> path <img src=image/13441700_11.gif> in the graph such that <img src=image/13441700_07.gif>. The <img src=image/13441700_04.gif>-distance is <img src=image/13441700_08.gif>, where the minimum is taken over all <img src=image/13441700_06.gif> paths and <img src=image/13441700_09.gif> is the length of the path <img src=image/13441700_11.gif>. 
The restricted detour <img src=image/13441700_04.gif>-distance and restricted detour <img src=image/13441700_04.gif>-index are two significant concepts in graph theory that provide deeper insights into the structural properties of graphs, particularly in relation to path lengths and network analysis. This paper formulates expressions for the restricted detour <img src=image/13441700_04.gif>-index for two structurally important graph operations: vertex identification and edge introducing. Our results extend known formulas and provide a unified framework for analyzing these operations, which has both theoretical implications and practical applications in network topology and mathematical chemistry. In addition to finding the restricted detour <img src=image/13441700_04.gif>-index of vertex identified and edge introducing graphs, we derive the restricted detour <img src=image/13441700_04.gif>-index of some special graphs, such as cycle, wheel, thorn path, thorn rod, and caterpillar graphs, and examine related properties.</p>]]></description>
<pubDate>Oct 2025</pubDate>
</item>
<item>
<title><![CDATA[Six Sigma-based Control Chart for Range under Inverse Half Logistic Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15318]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>R. Sathiya&nbsp; &nbsp;and M. Sakthi&nbsp; &nbsp;</p><p>Statistical Process Control (SPC) is a critical methodology in quality assurance that focuses on the continuous monitoring and regulation of manufacturing and service processes. It involves comparing the output of a process to established standards and initiating corrective actions whenever deviations occur. Beyond detecting variations, SPC also assesses whether a process is capable of consistently producing outputs that meet predefined quality requirements. Over the years, the development of SPC has been significantly influenced by foundational contributions. Purpose of the Research: This research addresses a significant limitation in conventional control chart design - its reliance on the assumption of normality. The primary objective is to develop a more robust Six Sigma-based control chart capable of handling skewed data distributions. Specifically, the study proposes a control chart tailored for data following an Inverse Half Logistic Distribution (IHLD), a type of skewed distribution commonly observed in practical quality control settings. The model is designed using Bowley's Coefficient of Skewness (BCS) and Kelly's Coefficient of Skewness (KCS), two established measures of distributional asymmetry. Methodologies: The methodology involves constructing a control chart for process range data, incorporating BCS and KCS to adjust control limits based on the skewness of the data. This approach departs from traditional methods that use the mean and standard deviation, which can be misleading in the presence of skewed distributions. The IHLD is used as the underlying distribution model due to its suitability for representing asymmetrical data. 
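Both skewness measures named above have simple quantile formulas: Bowley's coefficient is (Q3 + Q1 - 2Q2)/(Q3 - Q1) and Kelly's is (P90 + P10 - 2P50)/(P90 - P10). A minimal sketch using Python's statistics module (illustrative only; the paper's control-limit construction for the IHLD is not reproduced):

```python
import statistics

def bowley_skewness(data):
    # (Q3 + Q1 - 2*Q2) / (Q3 - Q1): quartile-based, robust to outliers.
    q1, q2, q3 = statistics.quantiles(data, n=4)
    return (q3 + q1 - 2 * q2) / (q3 - q1)

def kelly_skewness(data):
    # (P90 + P10 - 2*P50) / (P90 - P10): decile-based analogue.
    d = statistics.quantiles(data, n=10)   # nine cut points P10..P90
    p10, p50, p90 = d[0], d[4], d[8]
    return (p90 + p10 - 2 * p50) / (p90 - p10)
```

Both coefficients are 0 for symmetric data and positive for right-skewed data, which is why they can steer control limits where mean-and-sigma limits mislead.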
The proposed framework is validated through simulation and statistical analysis to assess its performance in detecting process shifts compared to conventional control charts. Major Conclusions: The study concludes that traditional SPC tools are insufficient when applied to skewed or non-normally distributed data. By utilizing BCS and KCS, control charts can be customized to reflect the true nature of the process distribution, thereby improving monitoring accuracy. This approach offers a statistically sound solution for extending Six Sigma principles to a wider range of data environments.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[Comparative Analysis of Unknown Parameters of the Normal Distribution under Right-Censoring Using MLE and Bayesian Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15317]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nurmukhamedova Nargiza&nbsp; &nbsp;Berdimuradov Mirkamol&nbsp; &nbsp;and Murodov Sardor&nbsp; &nbsp;</p><p>Right-censored data frequently occur in survival analysis, reliability engineering, and economic modeling, where the exact value of some observations is unknown due to time limitations or censoring mechanisms. This paper aims to conduct a comparative analysis of parameter estimation for the Normal distribution under right-censoring conditions using two approaches: Maximum Likelihood Estimation (MLE) and Bayesian inference. In the Bayesian framework, both conjugate priors and numerical techniques such as Monte Carlo integration and Gibbs sampling are employed. For MLE, the Expectation-Maximization (EM) algorithm is used to iteratively estimate the mean and variance from incomplete observations. The study is structured around four censoring scenarios, including fixed and random censoring, with either the mean or the variance assumed unknown. Simulated datasets under each scenario are analyzed to assess the accuracy, stability, and efficiency of the two methods. The performance is evaluated in terms of Mean Squared Error (MSE), 95% confidence or highest posterior density (HPD) intervals, and robustness to increasing censoring levels. Results show that Bayesian methods yield more stable and lower-error estimates in small samples and highly censored data, while MLE performs competitively or better when sample size increases, particularly under complex censoring patterns. The study contributes to statistical methodology by highlighting when and why each method is preferable under practical constraints. The implications are relevant for practitioners in applied statistics, actuarial modeling, and data science dealing with incomplete observations. 
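To make the EM step concrete, here is a minimal stdlib sketch for the simplest of the four scenarios: unknown mean, known variance, fixed right-censoring at c. The E-step replaces each censored value by E[X | X > c] = mu + sigma*phi(a)/(1 - Phi(a)) with a = (c - mu)/sigma, and the M-step averages the completed data. The data are invented for illustration; this is not the authors' code.

```python
import math

def phi(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def em_censored_mean(observed, n_censored, c, sigma, mu=0.0, iters=200):
    # EM for the mean of N(mu, sigma^2) with known sigma when n_censored
    # observations are right-censored at c (only "X > c" is known).
    n = len(observed) + n_censored
    for _ in range(iters):
        a = (c - mu) / sigma
        # E-step: conditional mean of a censored value, E[X | X > c].
        imputed = mu + sigma * phi(a) / (1.0 - Phi(a))
        # M-step: update mu using observed values plus imputed ones.
        mu = (sum(observed) + n_censored * imputed) / n
    return mu

# Invented sample: five observed values plus three values censored at 6.0.
mu_hat = em_censored_mean([4.2, 4.8, 5.1, 5.5, 5.9], 3, 6.0, 1.0)
```

Because the imputed values always exceed c, the EM estimate lies above the naive mean of the observed values, correcting the downward bias that ignoring censoring would introduce.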
Though limited to the Normal model, the findings suggest broader applicability to other distributions and future work may include Gamma extensions, adaptive priors, or hybrid Bayesian EM frameworks. This work also serves as a practical reference for selecting estimation strategies based on data characteristics, sample size, and censoring structure.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[Generalized Composition of Cycle and Path Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15316]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Akhil B.&nbsp; &nbsp;Roy John&nbsp; &nbsp;Manju V. N.&nbsp; &nbsp;and Athira Chandran&nbsp; &nbsp;</p><p>In this article, we explore the significance of graph operations, which provide a versatile framework for modeling, analyzing, and solving complex problems involving relationships and connections between entities. These operations are fundamental across various fields and play a vital role in advancing research, technology, and decision-making. Commonly used graph operations include composition, tensor product, and Cartesian product. In addition to these, several new variations have been introduced in recent literature. We begin by presenting these operations and then consider their generalizations, which can be approached either parametrically or structurally—that is, by generalizing the graph structure itself or the parameters involved in the operations. In particular, this article focuses on the generalized composition, known as the <img src=image/13441704_01.gif>-composition, of cycle and path-related graphs. This generalization is based on the parameter distance between vertices. Since the standard composition of graphs does not necessarily preserve connectedness, we examine results related to the connectedness of graphs under this generalized composition. For suitable values of <img src=image/13441704_01.gif>, we derive graphs resulting from the generalized composition of cycles and paths. These resulting graphs may be isomorphic, non-isomorphic, or consist of a single connected component. Furthermore, we identify a class of graphs for which the commutativity property holds under this generalized composition for certain values of <img src=image/13441704_01.gif>.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[Commutativity of Prime Rings with <img src=image/13442052_01.gif>-generalized Derivations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15315]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ahlam Fallatah&nbsp; &nbsp;Faiza Shujat&nbsp; &nbsp;and Abu Zaid Ansari&nbsp; &nbsp;</p><p>An iconic example of mathematical unification is the role of derivations and their generalizations, which integrate multiple fields of study and create a useful framework for examining problems of structural importance. The theory of rings, algebras, and extended algebraic structures with derivations has undergone several important theoretical advances. A classical and long-standing problem in the study of derivations is that the existence of a nonzero centralizing derivation on a prime ring can force the ring to be commutative. The current research is motivated by the study of <img src=image/13442052_03.gif>-generalized derivations. Our objective is to establish commutativity theorems by investigating some differential identities related to generalized derivations <img src=image/13442052_03.gif> in the dense ideal of <img src=image/13442052_02.gif>. We advance the understanding of how these mappings interact with the underlying algebraic structures (proper subsets of rings and extended rings) by exploring the idea of <img src=image/13442052_03.gif>-generalized derivation. Considering <img src=image/13442052_04.gif>, and because every <img src=image/13442052_03.gif>-generalized derivation reduces to an ordinary generalized derivation for <img src=image/13442052_05.gif>, our results apply directly to many studies in the context of generalized derivation.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[On Achromatic Pseudoachromatic Number of Cellular Neural Networks]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15314]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>T. Kalaiselvi&nbsp; &nbsp;Yegnanarayanan Venkataraman&nbsp; &nbsp;Mark Sepanski&nbsp; &nbsp;and Rajermani Thinakaran&nbsp; &nbsp;</p><p>This work is motivated by the fact that networks can be visualized as graphs, with the vertices of a graph as processing nodes and the edges as communication links. Graphs are widely used as a visualization tool for characterizing networks in terms of parallel interconnection architectures, data routing schemes, computational workload, etc. Inspired by the recent success of computing graph-based indices for cellular neural networks and 3-layered or 4-layered probabilistic neural networks, a new attempt is made to identify the hidden symmetric structures in each of these networks and to find the precise values of the pseudoachromatic and achromatic numbers of the above-mentioned networks, as these yield vital insight into the topological and structural attributes of neural networks. A cellular neural network of dimension <img src=image/13439635_01.gif>, denoted by <img src=image/13439635_02.gif>, is the strong product of two path graphs <img src=image/13439635_03.gif> and <img src=image/13439635_04.gif>. We probed the pseudoachromatic and achromatic numbers of <img src=image/13439635_05.gif> and derived tight lower and upper bounds on these parameters for <img src=image/13439635_05.gif>. We also explained the theoretical and practical importance of these coloring parameters and raised some open problems.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[Investigating the Laplace Variational Iteration Method for Solving Nonlinear Differential Equations Arising in Digital Systems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15313]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Dahiru Abdurrahman&nbsp; &nbsp;Maheshwar Pathak&nbsp; &nbsp;and Pratibha Joshi&nbsp; &nbsp;</p><p>The Laplace Variational Iteration Method (LVIM) discussed in this study is a powerful hybrid analytical approach that combines the strengths of the Variational Iteration Method (VIM) with the Laplace transform. Nonlinear differential equations arise frequently in modelling real-world situations across a variety of fields, including digital signal processing, control systems, and electronic circuit analysis, where finding accurate and efficient solutions is crucial for analyzing system dynamics, ensuring stability, and optimizing performance. Traditional methods are often unable to produce exact or near-exact solutions for these nonlinear systems, particularly when the initial or boundary conditions are complex. The main objective of this research is to utilize LVIM to tackle specific nonlinear differential equations that are commonly encountered in digital systems. By incorporating the Laplace transform into the variational iteration methodology, this method streamlines the solution process and boosts convergence. The results are showcased through a mix of graphical plots and tables, highlighting the accuracy, computational efficiency, and robustness of the considered approach. The results show that LVIM offers a dependable and straightforward way to manage nonlinearities without resorting to complex techniques such as linearization or perturbation. While the method shows impressive performance, it is important to note that its effectiveness can hinge on the smoothness of the initial functions and the availability of Laplace transforms. This work adds to the expanding field of semi-analytical methods and provides practical insights for engineers and applied mathematicians engaged in cutting-edge digital technologies.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[Existence and Uniqueness of the Maximum Likelihood Estimators of the Topp-Leone Distribution Parameters]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15312]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ahmad Zghoul&nbsp; &nbsp;and Mahmoud Smadi&nbsp; &nbsp;</p><p>The Topp-Leone (TL) distribution is a bounded probability model with a flexible hazard rate that can assume increasing, decreasing, or bathtub-shaped forms. This flexibility makes the TL distribution useful in reliability analysis and lifetime data modeling. Despite its applicability and attractive theoretical features, maximum likelihood estimation (MLE) of its parameters must be handled with care because the support of the distribution depends on the scale parameter. This dependency violates the classical regularity conditions that ensure the existence and uniqueness of the MLEs. Thus, before applying MLE, we must verify that the estimators exist and are unique. This article investigates these properties for the TL distribution parameters. We show that the MLEs do not exist for a single observation but prove their existence and uniqueness when the sample contains at least two distinct values. After establishing the existence and uniqueness of the MLEs, we assess their performance through simulations. The simulation studies confirm the consistency of the estimators: both bias and mean squared error (MSE) decline as the sample size increases. However, the shape parameter estimator shows increasing bias for larger true parameter values. In addition to the simulation studies, we fit the TL distribution to a real-world dataset on aircraft window glass strength and confirm model adequacy using the Anderson-Darling goodness-of-fit test.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[<img src=image/13441462_01.gif>-Fibonacci Cordial Labeling to Create New <img src=image/13441462_01.gif>-Fibonacci Cordial Families]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15311]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>R. Charishma&nbsp; &nbsp;and P. Nageswari&nbsp; &nbsp;</p><p>Cordial labeling is a pivotal concept in graph theory, involving assigning labels to graph elements to satisfy specific balance conditions. While traditional cordial labelings use binary or integer labels, this research introduces <img src=image/13441462_02.gif>-Fibonacci Cordial labeling, a novel approach integrating Fibonacci sequences, a cornerstone of combinatorial mathematics, into graph labeling. This research investigates the <img src=image/13441462_02.gif>-Fibonacci Cordial labeling of the path, cycle, Petersen graph, <img src=image/13441462_03.gif>-pan graph, and Bistar graph. The Fibonacci numbers are determined by a linear recurrence relation that outputs an endless list of integers <img src=image/13441462_04.gif> = 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … for <img src=image/13441462_05.gif> = 0, 1, 2, 3, …, 12, … An injective function <img src=image/13441462_06.gif> from the vertices of <img src=image/13441462_10.gif> to <img src=image/13441462_07.gif>, where <img src=image/13441462_07.gif> denotes the <img src=image/13441462_08.gif> Fibonacci number (<img src=image/13441462_09.gif>=1, 1, 2,…,n), is said to be a <img src=image/13441462_02.gif>-Fibonacci Cordial labeling of <img src=image/13441462_10.gif> if the induced function g* from the edges of <img src=image/13441462_10.gif> to {0,1}, which labels an edge 0 if the maximum label among its two end vertices is even and 1 otherwise, satisfies the condition that the numbers of edges labeled 0 and 1 differ by at most 1. A graph that admits a <img src=image/13441462_02.gif>-Fibonacci Cordial labeling is called a <img src=image/13441462_02.gif>-Fibonacci Cordial graph. 
The research demonstrates that paths, cycles, and Bistar graphs successfully admit <img src=image/13441462_02.gif>-Fibonacci Cordial labeling. The Petersen graph and pan graph require tailored labeling strategies due to their complex symmetries but are shown to comply with the balance condition. Fibonacci sequences introduce unique structural constraints, enabling novel insights into graph connectivity and label distribution. This paper examines the <img src=image/13441462_02.gif>-Fibonacci cordial labeling of various graphs. It provides explicit labeling schemes for key graph classes, serving as a foundation for future research in cryptographic algorithms, network design, and error-correcting codes.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[Investigating the Use of M-Polynomial for Calculating Topological Indices in Carbazole Dendrimers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15310]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>A. Venkatesan&nbsp; &nbsp;R. Binthiya&nbsp; &nbsp;B. Roopa&nbsp; &nbsp;and A. Jeslet Kani Bala&nbsp; &nbsp;</p><p>Graph theory plays an essential role in analysing the structural characteristics of molecular structures and network systems. Among the various degree-based topological indices, the Hyper Zagreb index, Redefined Zagreb index, and Reciprocal Randic index are particularly important for evaluating molecular graphs. We investigate the M-polynomial as a tool for calculating the Hyper Zagreb index, Redefined Zagreb index, and Reciprocal Randic index. Carbazole dendrimers are highly branched macromolecular structures consisting of a central carbazole unit with multiple dendritic branches extending from it. We successfully determined the M-polynomials for Carbazole dendrimers. Furthermore, by leveraging these M-polynomials, we compute various degree-based topological indices. Additionally, we obtained the topological index values of Carbazole dendrimers for generations <img src=image/13441196_01.gif> using the M-polynomial. The M-polynomial is first determined as follows: <img src=image/13441196_02.gif>, where <img src=image/13441196_03.gif>, and <img src=image/13441196_04.gif> is the total number of edges <img src=image/13441196_05.gif>, where <img src=image/13441196_06.gif>. The theory of Quantitative Structure-Property Relationships (QSPR) is founded on the principle that a compound's physicochemical properties are intrinsically linked to its molecular structure. The results of this study provide a thorough framework for examining dendritic structures using M-polynomials, which adds to the expanding collection of work on molecular graph theory. These findings might help develop prediction models for QSPR research, especially for comprehending the physicochemical characteristics of dendritic molecules.</p>]]></description>
<pubDate>Aug 2025</pubDate>
</item>
<item>
<title><![CDATA[On Energy of Prime Coprime Graph for Dihedral Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15193]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Mamika Ujianita Romdhini&nbsp; &nbsp;Abdurahim&nbsp; &nbsp;Faisal Al-Sharqi&nbsp; &nbsp;Syamsul Bahri&nbsp; &nbsp;and Andika Ellena Saufika Hakim Maharani&nbsp; &nbsp;</p><p>The study of graphs associated with algebraic structures has become an influential approach in understanding both the combinatorial and algebraic properties of groups. In this paper, we focus on a special class of graphs known as prime coprime graphs constructed from the dihedral groups <img src=image/13440904_01.gif>, a family of non-abelian finite groups of order <img src=image/13440904_02.gif>. The prime coprime graph, denoted <img src=image/13440904_03.gif>, is defined with the vertex set consisting of the non-identity elements of <img src=image/13440904_01.gif>, where two distinct vertices are adjacent whenever the greatest common divisor of their orders is a prime number. The primary aim of this research is to examine the spectral properties of <img src=image/13440904_03.gif>, with a focus on the graph energy, defined as the sum of the absolute values of the eigenvalues of its adjacency matrix. We construct the adjacency matrix of <img src=image/13440904_03.gif> for cases where <img src=image/13440904_04.gif> is either a power of a prime or a product of two distinct primes. Through direct computation and spectral analysis, we determine the eigenvalues and calculate the energy of the graph. Additionally, we explore the relationship between the energy and the spectral radius. Our findings provide new insights into the spectral behavior of non-commutative algebraic graphs and contribute to the broader theory of graph energies. While the study is restricted to specific cases of <img src=image/13440904_04.gif>, it opens pathways for generalizations to other classes of groups. 
These results may have further implications in algebraic combinatorics, coding theory, and the spectral classification of finite groups.</p>]]></description>
<pubDate>Jun 2025</pubDate>
</item>
<item>
<title><![CDATA[Application of EWMA Control Chart for Analyzing Changes in SAR(P)<sub>L</sub> Model with Quadratic Trend]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15192]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Dollaporn Polyeam&nbsp; &nbsp;and Suvimol Phanyaem&nbsp; &nbsp;</p><p>The Exponentially Weighted Moving Average (EWMA) control chart is widely applied in various fields, such as finance, medicine, engineering, and others. In real-world applications such as hospital admissions, share prices, and daily rainfall, data often exhibit both seasonal autocorrelation patterns and quadratic trend characteristics. Therefore, the application of EWMA control charts for detecting changes in processes offers significant advantages. The average run length (ARL) is commonly used as the standard criterion for measuring the efficiency of control charts. Hence, the accurate calculation of the average run length with minimal processing time is an essential element of this research. This paper focuses on proposing an explicit formula for the ARL of the EWMA chart when the observations follow a seasonal autoregressive model of order P (SAR(P)<sub>L</sub>) with quadratic trend. The proof for deriving the explicit formula of the ARL uses Fredholm's integral equation method. The application of Banach's Fixed Point Theorem provides a guarantee of solution uniqueness. Additionally, the performance of the proposed explicit formula is compared with the approximate ARL derived from numerical integral equation methods, consisting of the Midpoint rule, the Trapezoidal rule, Simpson's rule, and the Gaussian rule. The efficiency of the explicit formula of the ARL is evaluated using two criteria: absolute percentage difference and computational (CPU) time. The results obtained indicate that the ARL from the explicit formula is close to that of the numerical integral equation methods, with an absolute percentage difference of less than 0.001. 
The proposed explicit formula has a minimal CPU time of about 0.001 seconds, while the Midpoint and Trapezoidal rules take 2 - 3 seconds. Simpson's and the Gaussian rules require the longest times, approximately 9 - 10 seconds. A key finding of this study is that the explicit formula performs better than the numerical integral equation methods in terms of CPU time. As a result, both the proposed explicit formula and the numerical integral equation methods have emerged as viable alternatives for determining the ARL of the EWMA control chart.</p>]]></description>
<pubDate>Jun 2025</pubDate>
</item>
<item>
<title><![CDATA[Crank-Nicolson Finite Element Method for the Time Fractional Stochastic Wave Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15104]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nwankwo Jude Chukwuyem&nbsp; &nbsp;Njoseh Ignatius Nkonyeasua&nbsp; &nbsp;and Joshua Sarduana Apanapudor&nbsp; &nbsp;</p><p>The Crank-Nicolson Finite Element Method (CNFEM) provides a robust and efficient framework for solving time-fractional stochastic wave equations, which are essential in modeling dynamic systems influenced by both memory effects and random perturbations. These equations often arise in fields such as geophysics, engineering, and financial mathematics, where processes exhibit fractional-order characteristics and stochastic influences. In our approach, we construct a numerical scheme that integrates the Crank-Nicolson method with the finite element technique to achieve an accurate approximation of the solution. The time-fractional derivative is managed using the Caputo definition, ensuring the incorporation of non-local temporal effects that characterize fractional processes. The stochastic component is introduced through white noise perturbations, which are essential for modeling real-world uncertainties. By leveraging CNFEM, we ensure stability in time discretization while offering flexibility in spatial discretization, making it particularly useful for handling complex and irregular computational domains. A rigorous analysis of the proposed numerical scheme is conducted to examine its convergence and stability. The scheme is shown to be unconditionally stable, meaning it does not impose restrictive conditions on the time step or spatial mesh size, thereby enhancing computational efficiency. The numerical implementation of our method is carried out using MAPLE 18, a powerful symbolic and numerical computation tool, which aids in performing high-precision calculations and symbolic manipulations. 
By applying CNFEM to the time-fractional stochastic wave equation, our study provides a reliable and efficient numerical strategy for simulating complex dynamical systems. The results demonstrate the method's capability in capturing both the fractional-order memory effects and stochastic behaviors inherent in these equations, making it a valuable tool for researchers in applied mathematics and computational science.</p>]]></description>
<pubDate>Jun 2025</pubDate>
</item>
<item>
<title><![CDATA[Cubic <img src=image/13441027_01.gif>-subalgebras and Its Properties]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15103]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Haliz J. Othman&nbsp; &nbsp;and Alias B. Khalaf&nbsp; &nbsp;</p><p>This article proposes the notion of cubic <img src=image/13441027_02.gif>-subalgebras with the aid of cubic sets, known as <img src=image/13441027_03.gif>-cubic <img src=image/13441027_02.gif>-subalgebras, where <img src=image/13441027_04.gif>. This paper investigates the fundamental properties and relationships between cubic sets and cubic <img src=image/13441027_02.gif>-subalgebras, offering a classification framework and analyzing their significance within the broader context of algebraic systems. Additionally, we provide examples and describe the structure of cubic <img src=image/13441027_02.gif>-subalgebras. We specify the conditions that are both necessary and sufficient for a cubic <img src=image/13441027_02.gif>-subalgebra in order for its crisp subsets to be <img src=image/13441027_02.gif>-subalgebras. We prove that the <img src=image/13441027_13.gif>-union of any collection of <img src=image/13441027_05.gif>-cubic <img src=image/13441027_02.gif>-subalgebras is a <img src=image/13441027_05.gif>-cubic <img src=image/13441027_02.gif>-subalgebra, while the intersection of type <img src=image/13441027_14.gif> and the <img src=image/13441027_15.gif>-union of <img src=image/13441027_05.gif>-cubic <img src=image/13441027_02.gif>-subalgebras may not be an <img src=image/13441027_05.gif>-cubic <img src=image/13441027_02.gif>-subalgebra. Also, we prove that the subset <img src=image/13441027_06.gif> is a <img src=image/13441027_02.gif>-subalgebra under certain conditions. 
Moreover, some examples in <img src=image/13441027_07.gif> with the usual subtraction are given to clarify the notions of the lower <img src=image/13441027_08.gif>-level and the lower <img src=image/13441027_09.gif>-level of a cubic set, and we show that they are <img src=image/13441027_02.gif>-subalgebras whenever the corresponding cubic set is an <img src=image/13441027_05.gif>-cubic <img src=image/13441027_02.gif>-subalgebra. Finally, we prove that a cubic set <img src=image/13441027_10.gif> is a commutative cubic <img src=image/13441027_02.gif>-subalgebra of <img src=image/13441027_11.gif> if and only if it is an <img src=image/13441027_05.gif>-cubic <img src=image/13441027_02.gif>-subalgebra when <img src=image/13441027_12.gif> is a commutative <img src=image/13441027_02.gif>-algebra.</p>]]></description>
<pubDate>Jun 2025</pubDate>
</item>
<item>
<title><![CDATA[Edge Metric and Edge Metric Topology on Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15102]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Roy John&nbsp; &nbsp;Athira Chandran&nbsp; &nbsp;and G. Suresh Singh&nbsp; &nbsp;</p><p>The method for generating a topology on an edge set of a graph using a distance function on the graph's trail involves the conceptualization of a graph as a topological space. This approach employs a distance function defined over the graph's trail, which is a set of edges that form paths between vertices, to induce a topology on the edge set. The first step of the method is to define a distance metric that measures the "closeness" or "similarity" between different edges based on their positions within the graph's trails. This distance function is typically defined in terms of the structural properties of the graph, such as the number of common vertices, the shortest path distance, or the traversal distance between edges. This provides a powerful tool for analysing the structure and behaviour of graphs from a topological perspective. By considering how edges relate to each other through common trails, this method allows for a deeper understanding of the graph's geometric and connectivity properties. Once the distance function is defined, it is used to generate a family of open sets on the edge set by determining which sets of edges are close to one another. This leads to the generation of a topology that satisfies the standard properties of topological spaces, including the existence of open sets, the union and intersection of these sets, and the presence of limit points. In this article, we present a topology on an edge set of a graph by using a distance function on the trail of a graph. We also describe the graphs that produce discrete topology and indiscrete topology. Topologies produced by union, join and corona graphs are also discussed.</p>]]></description>
<pubDate>Jun 2025</pubDate>
</item>
<item>
<title><![CDATA[Degree Based Matrices of Co-prime Order Graph of Finite Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15101]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Manjeet Saini&nbsp; &nbsp;Vijeta Kumari&nbsp; &nbsp;Pankaj Rana&nbsp; &nbsp;Amit Sehgal&nbsp; &nbsp;and Dalip Singh&nbsp; &nbsp;</p><p>The co-prime order graph of a group is a simple finite graph whose vertex set is the group itself, and there is an edge between a vertex <img src=image/13440510_01.gif> and another vertex <img src=image/13440510_02.gif> whenever <img src=image/13440510_03.gif> or prime. Spectral graph theory is the study of the properties of a graph in relation to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix, Laplacian matrix, signless Laplacian matrix, normalized Laplacian matrix, maximum degree matrix, minimum degree matrix, etc. In this article, we have extended the study from the Laplacian matrix spectrum to various degree-based matrices, such as the signless Laplacian matrix, normalized Laplacian matrix, and maximum degree and minimum degree matrices, of the co-prime order graph of the finite groups <img src=image/13440510_04.gif>, <img src=image/13440510_05.gif>, and have also provided some general results for any finite group. The Laplacian, signless Laplacian, and normalized Laplacian matrices are based on the adjacency matrix and the degree matrix. The maximum degree matrix is based on the maximum of the degrees of the two vertices in each pair, whereas the minimum degree matrix is based on the minimum of the degrees of the two vertices in each pair.</p>]]></description>
<pubDate>Jun 2025</pubDate>
</item>
<item>
<title><![CDATA[Irreducibility of Polynomials up to Fourth Degree with Some Prescribed Coefficients]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15035]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Pradeep Maan&nbsp; &nbsp;Deepak&nbsp; &nbsp;Sangeeta Malik&nbsp; &nbsp;and Amit Sehgal&nbsp; &nbsp;</p><p>This paper investigates the irreducibility properties of small-degree polynomials with specific pre-assigned coefficients over the field of rational numbers. Focusing on polynomials of degrees up to four, we analyze families of polynomials where some coefficients are fixed, such as <img src=image/13440866_14.gif>, <img src=image/13440866_15.gif>, etc. We establish the irreducibility of some polynomials using combinatorial techniques and derive irreducibility conditions for others, as follows: <img src=image/13440866_01.gif> is irreducible over <img src=image/13440866_02.gif> if and only if <img src=image/13440866_03.gif>, <img src=image/13440866_04.gif> is irreducible over <img src=image/13440866_02.gif> except for <img src=image/13440866_05.gif> and <img src=image/13440866_06.gif>, <img src=image/13440866_07.gif> is irreducible over <img src=image/13440866_02.gif> if <img src=image/13440866_03.gif>, <img src=image/13440866_08.gif> is irreducible over <img src=image/13440866_16.gif> when <img src=image/13440866_09.gif> is any prime with <img src=image/13440866_10.gif>, and <img src=image/13440866_11.gif>, <img src=image/13440866_12.gif> and <img src=image/13440866_13.gif> are irreducible over <img src=image/13440866_02.gif>. In addition to existing tools, the above-mentioned results establish the irreducibility of small-degree polynomials at first sight. Moreover, through our results, we extend the applicability of classical irreducibility criteria and provide new classes of irreducible polynomials. The combinatorial and analytical techniques employed offer a foundation for further research into higher-degree polynomials and their irreducibility over other fields.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Statistical Framework for Bivariate Point Processes: Conditional Intensity and Parameter Estimation Techniques]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15034]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Andi Kresna Jaya&nbsp; &nbsp;Nurtiti Sunusi&nbsp; &nbsp;and Erna Tri Herdiani&nbsp; &nbsp;</p><p>The point process is a model suitable for describing the number of events that occur randomly in a given interval through its intensity function. The conditional intensity of the bivariate point process can be seen more specifically in two separate groups. The purpose of this study is to construct the conditional intensity for the homogeneous bivariate point process and to estimate the parameters using the maximum likelihood method. The conditional intensity construction is obtained from the ratio of the event-time probability density function to the event-time survival function. Estimation of the conditional intensity parameter is carried out by constructing a likelihood function as the probability of one event in a small interval multiplied by the probability of no event in the remaining observation time. The results show that the conditional intensity of a bivariate point process depends on the number of events <img src=image/13440812_01.gif> and the observation time interval <img src=image/13440812_02.gif>. The model was applied to two datasets, namely the number of active Covid-19 cases in Indonesia and the number of earthquakes on Sulawesi Island. The application to both the Covid-19 and earthquake datasets reveals that the conditional intensities of type-1 and type-2 events are directly proportional to the average frequency of events and inversely proportional to the observation time.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Antimagic Labeling of Digraphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15033]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ancy Dsouza&nbsp; &nbsp;Saumya Y M&nbsp; &nbsp;and Kumudakshi&nbsp; &nbsp;</p><p>A directed graph D can be labeled antimagically by assigning different integers to its arcs, ensuring that the computed vertex weights are distinct. Antimagic labeling exists in a digraph D with <img src=image/13440650_01.gif> arcs and <img src=image/13440650_02.gif> vertices if it is possible to uniquely match each arc to an integer from 1 to h such that, for every vertex, the difference between the totals of the labels on incoming and outgoing arcs is unique. A directed graph that allows such antimagic labeling is referred to as an antimagic digraph. There are countless methods to create an antimagic digraph. These constructions are essential in fields such as network theory, coding theory, and combinatorial optimization. The subset sum problem is a well-recognized problem in both computer science and combinatorics. Such problems play a decisive role in graph labeling, especially in the creation and evaluation of specific types of labels. In this paper, we connect the idea of subset sum problems with wheel digraphs <img src=image/13440650_03.gif> to show that they are antimagic. The Cartesian product of directed graphs is a fundamental concept in graph theory that holds both theoretical and practical importance. Applications of Cartesian products include network design and analysis, parallel computing, graph decomposition and construction, game theory, and decision-making. Additionally, in this paper, we have developed antimagic digraphs from the Cartesian product of directed paths <img src=image/13440650_04.gif> and <img src=image/13440650_05.gif> by traversing the directed path <img src=image/13440650_06.gif> in alternating and unidirectional ways.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Structural Properties and Characteristic Polynomials of Cubic Power Graph of Dihedral Group]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=15032]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mamika Ujianita Romdhini&nbsp; &nbsp;Pankaj Rana&nbsp; &nbsp;Amit Sehgal&nbsp; &nbsp;and Pooja Bhatia&nbsp; &nbsp;</p><p>The cubic power graph <img src=image/13440420_04.gif> of the dihedral group <img src=image/13440420_01.gif> of order <img src=image/13440420_02.gif> with identity element <img src=image/13440420_03.gif> is a finite, simple, undirected graph in which two distinct vertices <img src=image/13440420_05.gif> are adjacent if and only if <img src=image/13440420_06.gif> or <img src=image/13440420_07.gif> for some <img src=image/13440420_08.gif> with <img src=image/13440420_09.gif>. In this research, conditions on <img src=image/13440420_12.gif> under which <img src=image/13440420_04.gif> is planar, and conditions under which it is Hamiltonian, are determined. It is also shown that <img src=image/13440420_04.gif> is not an Eulerian graph. A polynomial that counts how many different ways there are to color a graph with a specific number of colors is called a chromatic polynomial. We have calculated the chromatic polynomial of <img src=image/13440420_04.gif> when <img src=image/13440420_10.gif>. The chromatic polynomial of the almost complete graph obtained by deleting <img src=image/13440420_11.gif> non-adjacent edges is also derived. Spectral graph theory is a branch of mathematics that examines a graph's characteristics in relation to the characteristic polynomials, eigenvalues, and eigenvectors of related matrices, such as its adjacency matrix, Laplacian matrix or degree-based matrices. Characteristic polynomials of degree-based matrices of <img src=image/13440420_04.gif>, such as the Laplacian matrix, signless Laplacian matrix, maximum matrix, minimum matrix and greatest common divisor degree matrix, are computed when <img src=image/13440420_10.gif>.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Advanced Theorems on Double Integral Transforms and Their Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14944]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>A. K. Awasthi&nbsp; &nbsp;Lukman Ahmed&nbsp; &nbsp;and Ruby Kumari&nbsp; &nbsp;</p><p>This article delves into the realm of double integral transforms (DIT), focusing on the pivotal role played by Fox's H-Function in their theoretical framework. The DIT, denoted as <img src=image/13439778_01.gif>, is intricately defined through Fox's H-Function and expressed as a Mellin-Barnes-type contour integral with various parameters and conditions. The study emphasizes chain properties connecting the DIT, presenting a concise representation as <img src=image/13439778_02.gif>. Three theorems are established, leveraging the power series expansions of special functions and transforms such as Laplace transforms, Hankel transforms, and specific transforms by Pathak and Narain. These theorems are proven analytically and their results are verified with the help of examples. The Fox H-Function is a powerful mathematical tool, which is explored for its significance in the analysis of the DIT. Being a generalization of the hypergeometric series, the H-function finds widespread applications across mathematics, physics, and engineering. Detailed proofs substantiate the three theorems, illustrating the manipulation of the double integral transform under specific conditions. The application of these theorems extends to the evaluation of known and novel integrals involving the product of the H-Function and other mathematical functions. This comprehensive exploration unveils the indispensable role of Fox's H-Function in the theoretical landscape of double integral transforms and their applications.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Second Order Unstable Type Difference Equations with Deviating Arguments: New Asymptotic Results]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14943]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Sudha B&nbsp; &nbsp;Srinivasan K&nbsp; &nbsp;and Thandapani E&nbsp; &nbsp;</p><p>Differential equations with deviating arguments serve as a fundamental cornerstone in mathematical modeling, offering a robust framework for characterizing the dynamics of various systems across multiple disciplines. Meanwhile, oscillatory theorems are essential in analyzing the intrinsic vibrational patterns within dynamic systems, providing critical insights into their stability and periodicity. This study focuses on examining the behavior of half-linear second-order difference equations of an unstable type when their arguments are altered from the form <img src=image/13440270_01.gif>. Utilizing the summation averaging method alongside the generalized Riccati transformations, we derive new monotonicity properties of non-oscillatory solutions, enabling us to establish conditions that eliminate specific types of non-oscillatory behavior. These findings lead to novel oscillation criteria applicable to second-order difference equations with both advanced and delayed arguments. A key application of these results is the analysis of the oscillatory nature of difference equations arising in the Thomas–Fermi (T–F) model, a fundamental equation in physics. The T–F equation represents the simplest formulation for modeling the screened electrostatic Coulomb potential around a highly charged nucleus and its surrounding electron cloud. Beyond atomic physics, this equation finds broad applicability across numerous physical domains. The examples presented at the conclusion of this work illustrate the enhanced results achieved, demonstrating improvements over previously established findings.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Comparison on Contraction Conditions and Conjecture in G-Metric Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14942]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>K. Kumara Swamy&nbsp; &nbsp;and S. Saravanan&nbsp; &nbsp;</p><p>Fixed point theory is an important discipline in mathematics because its results are utilized to investigate the existence of solutions to problems in applied sciences and engineering. Theorems in fixed point theory assure that a self-map on a metric space has an invariant point, which can be obtained as the limit of an iterative scheme generated by repeatedly applying the contraction mapping to an initial point in the metric space. Many fixed-point results have been widely generalized over the years in various directions by introducing new metric spaces and new contraction mappings. The results of fixed point theory can be found in geometry, computational algorithms, economics, fluid dynamics, micro-structures, nonlinear sciences, medical fields and optimization theory. Recently, many mathematicians have developed various contractions in G-metric spaces. In this paper, a brief comparison is made of the G-contractions developed by Mohanta, Vats et al. and Phaneendra et al. From this comparison, we find that some additional terms need to be added, and the restricted range of the constant in the condition needs to be changed, to obtain a generalized contraction condition. Finally, a conjecture is made based on the comparison of these existing G-contractions. It is observed that the conjecture generalizes the results established by Mohanta, Vats et al. and Phaneendra et al.</p>]]></description>
<pubDate>Apr 2025</pubDate>
</item>
<item>
<title><![CDATA[Infinite Integrals with Spheroidal Functions: A Comprehensive Exploration Utilizing Fox's H-Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14883]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>A.K. Awasthi&nbsp; &nbsp;Ruby Kumari&nbsp; &nbsp;and Lukman Ahmed&nbsp; &nbsp;</p><p>This study investigates six infinite integrals, focusing on their evaluation through the integration of spheroidal functions and Fox's H-function, as introduced by C. Fox. The research aims to expand the analytical approaches available for handling complex integrals involving special functions. By presenting a generalized framework, the study encapsulates several unique cases, including integrals incorporating spheroidal wave functions, Mathieu functions, and the multivariate H-function. A compact representation of the multivariate H-function is introduced, enhancing the mathematical toolkit for subsequent research. Methodologically, this work employs advanced mathematical techniques, including Fox's H-function, contour integration, and established properties of Bessel functions and spheroidal functions, leading to intricate expressions with broad applications. The derived results are applicable to a diverse range of integrals, encompassing modified Bessel functions, spheroidal functions, and parameters that extend the relevance of this analysis to various fields. This research not only emphasizes novel aspects of Fox's H-function but also addresses the limitations in previous frameworks by providing a more flexible and inclusive approach. Potential implications of this work include new insights for mathematical and physical domains that rely on complex integral evaluations, while also presenting practical and theoretical contributions that could inspire further exploration of Fox's H-function applications.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Parity Weighted Totally Antimagic Total Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14882]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>G. Suthakaran&nbsp; &nbsp;and R. Jeyabalan&nbsp; &nbsp;</p><p>This manuscript presents a new development in graph labeling by exploring parity-weighted totally antimagic total (TAT) labeling. A parity-weighted TAT labeling is a TAT labeling in which all vertex and edge weights are either even or odd. If all vertex weights are even (or odd) and all edge weights are odd (or even), the labeling is defined as a strong (SPAT) or weak (WPAT) parity-weighted TAT labeling, respectively. The primary objective of this study is to determine whether specific graph families (namely, friendship graphs, generalized friendship graphs, fan graphs, and wheel graphs) admit SPAT or WPAT labeling. Key findings of the research identify conditions under which SPAT and WPAT labelings are possible, demonstrating the uniqueness of certain graph families in admitting these schemes. The results provide a foundation for further exploration of parity-constrained labeling in graph theory. The primary aim is not just to characterize these graphs but also to extend the theoretical understanding of antimagic total labeling under parity constraints. The work addresses the problem of characterizing and understanding parity-constrained antimagic total labeling schemes, which are not well explored in the existing literature. This includes identifying graph families that admit these schemes and understanding their structural and theoretical implications. Constructive labeling techniques are employed to establish the existence of these labelings.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[A Study on Laceability Partition Dimension of A Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14881]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Manjula M&nbsp; &nbsp;Leena N Shenoy&nbsp; &nbsp;and Deepthy D&nbsp; &nbsp;</p><p>Hamiltonian laceability is specifically studied for bipartite graphs. A bipartite graph is called Hamiltonian laceable if, for every pair of distinct vertices <img src=image/13439539_08.gif> and <img src=image/13439539_09.gif>, there is a Hamiltonian path between them. A graph <img src=image/13439539_01.gif> with <img src=image/13439539_02.gif> is random Hamiltonian laceable if there exists a <img src=image/13439539_03.gif> Hamiltonian path <img src=image/13439539_04.gif>. In this paper, we present the concept of decomposing a graph into induced subgraphs such that each subgraph is random Hamiltonian laceable. This kind of partitioning can be obtained in many ways, and the least number of partitions of the vertex set such that the subgraph induced by each partition is random Hamiltonian laceable gives the laceability partition dimension of a connected graph <img src=image/13439539_05.gif>. Here, we examine the laceability partition dimension, denoted by <img src=image/13439539_06.gif>, for any simple, connected, non-bipartite graph G. Obviously, each of these induced subgraphs ensures that there is always a path between any two vertices, which assists in routing and connectivity strategies in networking. It also reduces redundant connections in networks. This technique, in fact, sheds light on the NP-complete problem of finding a Hamiltonian cycle in a graph.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[A Modified Population Mean Estimator for Sample Surveys with Nonresponse Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14880]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Napattchan Dansawad&nbsp; &nbsp;</p><p>Challenges in data collection often emerge due to constraints such as limited time, labor, and budget, especially when dealing with large population sizes. These limitations make gathering information from every individual impractical, prompting researchers to adopt survey methodologies that focus on selecting representative samples. While this approach can streamline data collection, it introduces potential sources of error. Analyzing data from these samples can sometimes yield inaccurate statistical values, particularly when issues like incomplete sampling or non-responses in key variables arise. Such challenges can significantly impact the reliability of study findings. To tackle this issue, this paper introduces a new estimator for the population mean within the context of sample surveys, leveraging sub-sampling techniques. The proposed method is designed to handle scenarios where both study and auxiliary variables experience non-response, a common challenge in survey research. The paper also delves into the new estimator's mathematical properties, such as bias, mean squared error (MSE), and minimum MSE (MMSE), evaluating its efficiency using the percent absolute relative biases (PARBs) and the percent relative efficiencies (PREs) criteria. The study employs three real-world datasets to validate the proposed estimator's effectiveness. It also conducts theoretical analyses and empirical studies to compare the new estimator's performance against existing methods. The results consistently demonstrate that the new estimator provides superior accuracy and reliability, outperforming existing estimators under similar conditions. 
These findings highlight the potential of the proposed approach to improve data accuracy in survey research, especially in cases plagued by non-response issues.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Reduced Second Zagreb Index and Bounds of Some Graph Operations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14879]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>K. Rengalakshmi&nbsp; &nbsp;and S. Pethanachi Selvam&nbsp; &nbsp;</p><p>To link mathematics with the vast field of QSAR (quantitative structure-activity relationship) and QSPR (quantitative structure-property relationship) research, the concept of chemical graph theory is introduced. Topological indices are numerical values or descriptors that encode the structural properties of a molecular graph. Numerous topological indices have been created and applied as tools in QSAR/QSPR research to date. Among those indices, the reduced second Zagreb index (<img src=image/13440163_01.gif>) has been established in recent times. Combining two graphs results in a new graph; for example, the lexicographic product of a cycle on n vertices with the path on two vertices yields a closed fence graph, and that of a path on n vertices with the path on two vertices yields a fence graph, whose indices can be easily computed by our obtained results. In this article, we compute the <img src=image/13440163_01.gif> index for the join product, lexicographic product, and tensor product of any two simply connected graphs in terms of the first and second Zagreb indices and the cardinalities of the vertex and edge sets of the graphs involved. For this, we use the degree of a vertex in the newly created graph that results from an operation, as well as the vertex and edge set cardinalities of the graphs involved in the process. In terms of maximum and minimum degree, we additionally establish certain lower and upper bounds for the aforementioned products. We further state the necessary and sufficient condition for equality in these bounds. Furthermore, we deduce bounds on the <img src=image/13440163_01.gif> index for the aforementioned products of certain graphical structures, such as paths and cycles, and verify the index for a closed fence graph for application purposes. In this way, various operations can be performed to obtain different chemical structures that exist in everyday life. The structural and chemical characteristics of the resulting structures, captured by the graph invariant, can be used in drug delivery and pharmaceutical research.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Total 2 - Out Degree Equitable Domination Number for Distinct Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14800]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>T. Sindhuja&nbsp; &nbsp;V. Maheswari&nbsp; &nbsp;and V. Balaji&nbsp; &nbsp;</p><p>For any graph G, &quot;the out degree of u with respect to a dominating set D is denoted and defined as <img src=image/13438994_01.gif>&quot;. A subset D of V is referred to as a total dominating set if D is a dominating set and the induced subgraph <img src=image/13438994_02.gif> contains no isolated vertices. The total domination number, denoted by <img src=image/13438994_03.gif>, is the number of vertices of a minimal such dominating set. Based on the above concepts of out degree and total dominating set, we introduce a new domination parameter called the total 2-out degree equitable domination (2-ODED) number. A subset D of V is referred to as a total 2-ODED set if D is a dominating set, the induced subgraph <img src=image/13438994_02.gif> contains no isolated vertices, and the property <img src=image/13438994_04.gif> holds, where <img src=image/13438994_01.gif> for any two vertices <img src=image/13438994_05.gif>. The minimum number of vertices of such a dominating set is known as the total 2-ODED number. This paper investigates the proposed domination number for some general graphs, such as the complete graph, star graph, path graph, cycle graph, double fan graph, bi-star, fan graph, helm graph, crown graph, and triangular snake graph, which are explained with examples. Finally, we discuss an application of the total 2-out degree equitable domination (2-ODED) number in real life.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Robust L1-norm Estimation for Exploratory Factor Analysis Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14799]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Nagwa M. Albehery&nbsp; &nbsp;Hend A. Auda&nbsp; &nbsp;and Esraa A. H. Othman&nbsp; &nbsp;</p><p>Factor analysis is a statistical technique used to understand the underlying structure of complex datasets by identifying latent variables that explain the correlations among multiple variables. Traditional factor analysis methods are often sensitive to outliers and noise, leading to unreliable estimates, so robust factor analysis is crucial for accurately identifying latent variables in datasets that include significant noise and outliers. The Minimum Covariance Determinant (MCD) estimator has been widely used for robust factor analysis; it provides a robust estimate of the covariance matrix by minimizing the determinant of the covariance over a subset of the data, effectively reducing the influence of outliers. In this paper, we propose a novel robust factor analysis method using L<sub>1</sub>-norm Exploratory Factor Analysis (L<sub>1</sub>-norm EFA). L<sub>1</sub>-norm EFA refines the factor estimation by minimizing the L<sub>1</sub>-norm of the residuals in the EFA model to reduce the impact of outliers using a linear programming technique. A comprehensive analysis of L<sub>1</sub>-norm EFA is provided, focusing on its effectiveness in improving the stability and accuracy of factor analysis estimates in the presence of outliers. A simulation study is conducted to compare L<sub>1</sub>-norm EFA with MCD and to evaluate their performance and robustness in diverse data scenarios. Empirical results illustrate significant improvements in robustness compared to traditional factor analysis methods. Our method offers a practical solution for analysts dealing with noisy, outlier-prone datasets and a more resilient framework for factor analysis.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Lower Bound for The Second Hyper-Zagreb Index of Trees with A Given Roman Domination Number]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14720]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Waqar Ali&nbsp; &nbsp;Mohamad Nazri Husin&nbsp; &nbsp;Muhammad Faisal Nadeem&nbsp; &nbsp;and Muqaddas Jabin&nbsp; &nbsp;</p><p>Graph theory plays a crucial role in understanding the structural properties of molecular and network systems. One of the significant topological indices used in this domain is the second Hyper-Zagreb index (<img src=image/13439505_01.gif>), which is computed by summing the degrees of adjacent vertices <img src=image/13439505_02.gif> and <img src=image/13439505_03.gif> in a molecular graph and squaring the result. This index provides valuable insights into the graph’s complexity and has applications in chemistry, physics, and network analysis. Another important concept in graph theory is the Roman dominating number (RDN), defined as a function <img src=image/13439505_04.gif>: <img src=image/13439505_05.gif>, where <img src=image/13439505_06.gif> is the set of vertices. The RDN must satisfy the condition that for every vertex <img src=image/13439505_03.gif> with <img src=image/13439505_07.gif>, there exists an adjacent vertex <img src=image/13439505_02.gif> with <img src=image/13439505_08.gif>, ensuring that all vertices are strategically covered. The RDN, denoted by <img src=image/13439505_09.gif>, is the minimum total weight assigned by the RDN across all vertices and is critical for optimizing network security, resource allocation, and fault tolerance in various systems. This paper aims to bridge the gap between these two areas by establishing a lower bound on the <img src=image/13439505_10.gif> characterized by <img src=image/13439505_11.gif> vertices and their corresponding <img src=image/13439505_12.gif>. 
Our findings reveal new insights into the interplay between these graph parameters, offering enhanced tools for precise analysis in molecular chemistry and theoretical network sciences. The derived bounds have significant implications for improving the design and resilience of complex systems, particularly in scenarios where efficient resource deployment and stability are paramount. Future research may extend these methods to broader classes of graphs, thereby further expanding their applicability in real-world contexts.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Parameters Estimation of the Gompertz-Makeham Process in Non-Homogeneous Poisson Processes: Using Modified Maximum Likelihood Estimation and Artificial Intelligence Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14719]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2025<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;13&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Adel S. Hussain&nbsp; &nbsp;Kawar B. Mahmood&nbsp; &nbsp;Ismat M. Ibrahim&nbsp; &nbsp;Ali F. Jameel&nbsp; &nbsp;Sundas Nawaz&nbsp; &nbsp;and Mohammad A. Tashtoush&nbsp; &nbsp;</p><p>In this paper, we study the rate of occurrence of the non-homogeneous Poisson process by introducing the Gompertz-Makeham distribution as a rate of occurrence, known as the Gompertz-Makeham Process (GMP). To estimate parameters of this process, we propose the Maximum Likelihood Estimator (MLE) and introduce a modification to address its limitations in finding accurate estimators. The modified method, referred to as the Modified Maximum Likelihood Estimator (MMLE), employs an intelligent algorithm for the likelihood function to improve its performance. We compare the results of MMLE with another intelligent method, Particle Swarm Optimization (PSO), to identify the most effective estimator for the rate of occurrence of the proposed Gompertz-Makeham process. Additionally, this paper includes a simulation study of the process and presents a practical application. By utilizing the MMLE and PSO algorithms, we seek to provide accurate parameter estimation for the Gompertz-Makeham process, thereby enhancing its applicability in diverse domains such as mortality modeling, reliability analysis, and disease progression studies. The comparative analysis between MMLE and PSO offers valuable insights into the performance and effectiveness of intelligent algorithms in estimating the rate of occurrence for NHPP processes. 
As a real data application, the methods are applied to the operating periods, in days, between two successive stops of the raw materials factory of the General Company for Northern Cement / Badoush Cement Factory, estimating the rate of factory stops for the period from 1<sup>st</sup> April 2020 to 1<sup>st</sup> January 2022.</p>]]></description>
<pubDate>Feb 2025</pubDate>
</item>
<item>
<title><![CDATA[Exploring the Connections of Finite Group Automata, Group Machines and Group Machine Recognizer: Analyzing Their Characteristics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14695]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Sathiyasorubini G&nbsp; &nbsp;and Venkatesan R&nbsp; &nbsp;</p><p>This paper presents a comprehensive examination of the underlying structures and behaviours of finite group automata and group machines, delving into their intricate relationships and properties. Researchers have developed a comprehensive structure for finite group automata applicable to any finite group. Their work utilizes state complexity (SC) and accepting state complexity (ASC) as key metrics. They have also computed syntactic and quotient complexities (SNC and QC) specifically for cyclic groups. While the primary emphasis lies on cyclic groups, their versatile methodology establishes a foundation for broadening these complexity analyses to encompass other categories of finite groups. Building upon existing research on finite group automata, this study investigates the structures of group machines, group machine recognizers, and their properties, including strong connectivity, cyclicality, perfection, bideterminism, and permutation behaviours. Our analysis reveals the interconnected nature of group machines and group machine recognizers, shedding light on their distinct characteristics. A key finding of this research is that all group machines are finite group automata, although the converse is not always true. By differentiating these structures based on their properties, we can effectively handle machines according to their characteristics. This study contributes to the advancement of research in this field by providing a thorough understanding of the fundamental structures and behaviours of finite group automata and group machines, ultimately enriching the theoretical foundations of computational models.</p>]]></description>
<pubDate>Nov 2024</pubDate>
</item>
<item>
<title><![CDATA[Left Noetherian Ternary Semigroup and Its Direct Product]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14694]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Abin Sam Tharakan&nbsp; &nbsp;and G Sheeja&nbsp; &nbsp;</p><p>A ternary semigroup <img src=image/13439152_01.gif> is termed strongly left Noetherian if every left congruence on <img src=image/13439152_01.gif> can be generated by a finite set. This work examines the fundamental characteristics of left Noetherian ternary semigroups, considering the relationships between the semigroups and their substructures with respect to the left Noetherian condition. Additionally, the study investigates whether this property is retained in the direct product of such semigroups. This work offers a comprehensive characterization of strongly left Noetherian ternary semigroups. It is established that the homomorphic image of a Noetherian ternary semigroup retains the Noetherian property, and an alternative approach is introduced for demonstrating that a ternary semigroup is strongly left Noetherian. The relationship between ternary semigroups and their subsemigroups is explored, showing that a ternary semigroup is strongly left Noetherian if at least one of its subsemigroups is left Noetherian. A necessary characterization for inverse ternary semigroups is also presented. In addition, the strongly left Noetherian property is examined for the direct product of ternary semigroups, identifying conditions that preserve this property. A necessary and sufficient condition is established for the direct product of an infinite ternary semigroup and a ternary monoid to be strongly left Noetherian, along with findings on the left Noetherian property in direct products of various ternary semigroups.</p>]]></description>
<pubDate>Nov 2024</pubDate>
</item>
<item>
<title><![CDATA[Equitable Domination in Fuzzy Digraphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14591]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>G. B. Priyanka&nbsp; &nbsp;P.Xavier&nbsp; &nbsp;and J.Catherine Grace John&nbsp; &nbsp;</p><p>Fuzzy digraphs, an extension of directed graphs enhanced with fuzzy set theory, provide a robust framework for representing uncertainty in networks with relationships of varying strengths. In fuzzy digraphs, nodes represent entities, and directed edges are assigned fuzzy values to measure the degree of connection between them. This approach is particularly useful for modeling uncertainty in network applications like communication networks, transportation systems, and social networks, where relationships may be imprecise, uncertain, or dynamic. Equitable domination in a fuzzy graph involves selecting a dominating set where the membership values between dominated and dominating nodes are balanced. The concept of an equitable dominating set is beneficial in situations where a balanced or equitable distribution is required, as it expands on the idea of a traditional dominating set by including a fairness criterion. By providing a sophisticated tool for assessing and creating networks and systems, this idea closes the gap between dominance and equitability. In this article, we extend the equitable domination concept to weighted graphs and apply it in more complex scenarios where vertices have varying levels of importance or weight. We study the ideas of fuzzy equitable domination in fuzzy digraphs. We define the fuzzy equitable domination number and study the properties of minimal fuzzy equitable dominating sets and fuzzy equitable independent sets.</p>]]></description>
<pubDate>Nov 2024</pubDate>
</item>
<item>
<title><![CDATA[Super-convergence and Stability Analysis of the Finite Element Orthogonal Collocation Method for Time-Fractional Telegraph Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14590]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ebimene James Mamadu&nbsp; &nbsp;Henrietta Ify Ojarikre&nbsp; &nbsp;Daniel Chinedu Iweobodo&nbsp; &nbsp;Jude Chukwuyem Nwankwo&nbsp; &nbsp;Ebikonbo-Owei Anthony Mamadu&nbsp; &nbsp;Jonathan Tsetimi&nbsp; &nbsp;and Ignatius Nkonyeasua Njoseh&nbsp; &nbsp;</p><p>Super-convergence and stability analysis are essential components for ensuring the efficiency, reliability, and accuracy of numerical approximations to iterative methods and differential equations. Stability analysis highlights the mechanism of error control over iterations to ensure the reliability and long-term accuracy of a simulation. Super-convergence analysis identifies conditions under which numerical solutions converge faster than expected, enabling enhanced accuracy. Together, these analyses guarantee the effective development of complex numerical algorithms, serving as the basis for error control and estimation. They involve complex simulations and are thus relevant in fields such as physics, engineering, and weather modeling. Thus, this paper is an extension of the Finite Element Orthogonal Collocation Method (FEOCM) by Mamadu et al. (2023) for the numerical approximation of the time-fractional telegraph equation with Mamadu-Njoseh polynomials as basis functions, where relevant numerical simulations were carried out. The present study offers a comprehensive super-convergence and stability analysis of solutions for the time-fractional telegraph equation via FEOCM. Here, the L<sub>2</sub>-norm, H<sup>1</sup>-norm, interpolation theory, and Cauchy-Schwarz inequality are employed as optimal estimators to propose relevant theorems for the analysis of stability and super-convergence of solutions. The analysis shows that the solutions of the fully discretized FEOCM scheme are unconditionally stable and exhibit super-convergence, with the optimal error estimated as <img src=image/13438864_01.gif>.</p>]]></description>
<pubDate>Nov 2024</pubDate>
</item>
<item>
<title><![CDATA[Bipolar Neutrosophic Transportation Problem in Symmetric Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14589]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Kanchana M.&nbsp; &nbsp;and Kavitha K.&nbsp; &nbsp;</p><p>The breadth of transportation problems (TP) makes them applicable to real-world scenarios. Real-world issues are often unforeseen, making it impossible to estimate a specific cost. Fuzzy and intuitionistic fuzzy sets resolve uncertainty, but have severe limitations. To address these challenges, bipolar neutrosophic sets (BNS) generalize fuzzy sets, crisp sets, and intuitionistic fuzzy sets, effectively handling ambiguous, unpredictable, and insufficient information in real-world scenarios. Using BNS provides a more dependable, precise, and trustworthy procedure than conventional methods. In this paper, we use a symmetric graph network to find the shortest path for a bipolar neutrosophic transportation problem. The approach is utilized to address bipolar neutrosophic transportation network issues together with single-valued neutrosophic network problems. This integration improves transportation problem-solving capability, providing more precision and reliability. The novel technique serves a variety of businesses, including logistics and supply chain management. By delivering accurate solutions, BNS assists decision-makers in optimizing transportation networks, reducing costs, and increasing efficiency. Our findings show that BNS has the ability to address real-world transportation difficulties by providing a helpful tool for managing uncertainty and complexity, hence contributing to more dependable systems. This study thus supports the design of more effective and efficient transportation networks, increasing operational effectiveness.</p>]]></description>
<pubDate>Nov 2024</pubDate>
</item>
<item>
<title><![CDATA[Extending Godunova-Levin Interval-Valued Functions to Stochastic Processes: New Hermite-Hadamard and Jensen-Type Inequalities]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14558]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Oualid Rholam&nbsp; &nbsp;Yassine Laarichi&nbsp; &nbsp;Mariem Elkaf&nbsp; &nbsp;and Amal Aloui&nbsp; &nbsp;</p><p>This paper attempts to broaden the scope of interval-valued functions by proposing the concept of Godunova-Levin interval-valued functions as stochastic processes. We present a novel framework for interval-valued harmonical (h1, h2)-Godunova-Levin stochastic processes. This approach seeks to address the inherent uncertainty and variability in real-world phenomena by establishing a solid mathematical foundation for interval-valued functions in stochastic settings. The fundamental goal of this study is to obtain fresh estimates for interval Hermite-Hadamard and Jensen-type inequalities in the setting of these stochastic processes. We obtain important results using sophisticated stochastic analysis and interval arithmetic approaches, which not only generalize existing inequalities but also provide a deeper understanding of the behavior of interval-valued functions under stochastic effects. The findings of this study have the potential to improve the applicability of interval-valued functions in a variety of stochastic scenarios, including financial modeling, engineering, and decision-making under uncertainty. Furthermore, the theoretical advances discussed here contribute to the larger subject of stochastic processes, opening up new opportunities for research and application. However, the assumptions underpinning interval-valued functions and stochastic processes may limit the applicability of the presented approaches. Future studies could investigate relaxing these assumptions and applying the suggested framework to more complex stochastic systems.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[Solutions of Fuzzy Fractional Boundary Value Problems: A Novel Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14557]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>V. Padmapriya&nbsp; &nbsp;and M. Kaliyappan&nbsp; &nbsp;</p><p>In this paper, the solution of a fractional boundary value problem with fuzzy boundary conditions is investigated. The fuzzy fractional boundary value problem is decomposed into collections of classical fractional boundary value problems. Then the Adomian decomposition method is applied to solve these classical fractional boundary value problems. The collection of all solutions to these fractional boundary value problems provides the solution to the fuzzy fractional boundary value problem. The solution to this problem is expressed in terms of a fuzzy collection of crisp real functions. The boundary value problem is satisfied by every real function in the solution set. The degree of membership of each function is determined by the minimum membership degree among its associated fuzzy boundary values. It can be demonstrated that if the corresponding fractional problem has a unique solution, then the fuzzy fractional problem also has one. It is shown that when triangular fuzzy numbers are assigned to the boundary values, the resulting solution at a specific time also takes the form of a triangular fuzzy number. An example is provided to illustrate the suggested method.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[Elevating Data Entropy and Efficiency through Symmetric Cryptographic technique Based on Finite State Machine]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14556]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>A Yasmin&nbsp; &nbsp;R Venkatesan&nbsp; &nbsp;and K Gaverchand&nbsp; &nbsp;</p><p>Globally, cryptosystems employ sophisticated techniques for encrypting and decrypting sensitive information, relying heavily on secure secret keys. The generation of a randomized, secure 256-bit secret key is of paramount importance for maintaining data accuracy, confidentiality and fortification against a spectrum of security threats, making it challenging for potential hackers to anticipate key sequences. This proposed study endeavors to produce highly randomized ciphertext (CT), reduce time complexity, and assure the efficiency and security of the cryptosystem. The key generation methodology involves considering the recipient's credentials and employing a codebook to strategically generate the secret key. The encryption process incorporates a finite state machine (FSM) to introduce randomness into the CT. Implementation is conducted in Python, with a meticulous evaluation of time complexity and a comparison with existing cryptographic methods to assess efficiency. The National Institute of Standards and Technology (NIST) randomness test evaluates the proposed scheme, yielding a mean p-value of 0.6, surpassing the expected value. Additionally, key space analysis is conducted to thwart unauthorized access through brute force attacks, revealing the computational challenges inherent in the evaluation of 2<sup>256</sup> possibilities. The security analysis examines the strength and vulnerabilities of the model under various attacks, thereby bolstering the model's resilience against potential threats. Aggregate results robustly validate the elevated security and efficacy of the proposed model when contrasted with its antecedents.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[g-inverses in Ternary Semiring]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14555]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Pandiselvi T.&nbsp; &nbsp;and Anbalagan S.&nbsp; &nbsp;</p><p>Ternary algebraic systems represent a natural extension of algebraic structures, providing a greater grasp of their features and avenues for further development. Multiplicative semigroups over a field are non-regular, meaning that the regularity equation <img src=image/13438533_01.gif> is not always solvable. When <img src=image/13438533_02.gif> exists, the element is said to be regular. The regularity requirement is a linear condition that solves linear equations, which makes regular rings significant in many areas of mathematics, particularly in matrix theory. The current state of generalized inverses encompasses many different mathematical fields, including semigroups, operator theory, c<sup>∗</sup>-algebras, matrix theory, and semirings. Applications for them can be found in many fields, including robotics, graphics, cryptography, coding theory, Markov chains, linear estimation, and differential and difference equations. For elements of a ternary semiring, the existence of the generalized inverse is examined. The most general 1-inverse and 1-2 inverse are found for an element over a regular ternary semiring. In this article, we looked into the properties and characterization of the g-inverse in ternary semirings and some fascinating characteristics of the left and right cosets in partially ordered ternary semirings. Mainly, we investigated the g-inverses using principal ideals (left and right cosets) and found some results in ordered ternary semirings and ternary semirings.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[Certain Subclass of Analytic Functions Defined By q−analogue Differential Operator]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14554]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>G. Sujatha&nbsp; &nbsp;K. K. Viswanathan&nbsp; &nbsp;B. Venkateswarlu&nbsp; &nbsp;H. Niranjan&nbsp; &nbsp;and P. Thirupathi Reddy&nbsp; &nbsp;</p><p>The quantum (or q-) calculus is a vital area of study in the field of traditional mathematical analysis. This paper explores the innovative use of the q-derivative concept to develop specific differential operators, extending the class of Salagean operators to include univalent functions. By leveraging this new operator, we define a novel subclass of analytic functions within the open unit disc <img src=image/13438104_01.gif>. Our primary objective is to establish a subclass of uniformly starlike functions corresponding to uniformly convex functions through the q-analog of the generalized differential operator. This research delves deeply into the intricate properties of this newly defined class of functions. We systematically analyze various aspects, such as coefficient estimates, which provide critical insights into the behavior of the functions within this class. Additionally, we examine neighborhoods, elucidating the local behavior and interaction of these functions within the region. Our study of partial sums offers a detailed understanding of the series representations and their properties. Furthermore, we investigate integral means inequalities, which are essential in understanding these functions' average growth and value distribution. The radii of close-to-convexity and starlikeness are also rigorously evaluated, shedding light on the geometric properties that characterize the boundaries within which these functions maintain their specific starlike or convex nature.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[A New Test for Equality of Two Covariance Matrices in High-Dimensional Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14553]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Saowapa Chaipitak&nbsp; &nbsp;and Boonyarit Choopradit&nbsp; &nbsp;</p><p>High-dimensional data, characterized by datasets with many variables (dimensions) relative to the number of observations, is growing in prominence owing to advances in data collection and storage capabilities. This data type is widespread across various fields. While it presents unique challenges and opportunities, robust statistical approaches are crucial for effectively leveraging the potential of high-dimensional data. In multivariate statistical analysis, the likelihood ratio test (LRT) is frequently used for evaluating the equality of two covariance matrices. Nevertheless, the LRT is undefined in high-dimensional data contexts. This paper aims to introduce a new test for ascertaining the equality of two covariance matrices in high-dimensional data, especially for datasets that follow a multivariate normal distribution. The test (<img src=image/13437835_01.gif>) proposed in this study utilizes consistent estimators in quadratic and symmetric bilinear forms. As the dimension and sample sizes approach infinity, the asymptotic null distribution of the test converges to the standard normal distribution. A simulation study was conducted to assess the performance of the proposed test compared to three existing tests, proposed by Schott in 2007, Srivastava and Yanagihara in 2010, and Li and Chen in 2012. The focus was on type I error rates and the test's power under spherical and Toeplitz covariance matrix structures. The simulation results demonstrate that <img src=image/13437835_01.gif> outperforms the tests proposed by Srivastava and Yanagihara, as well as Li and Chen, in all scenarios evaluated. Moreover, it performs comparably to Schott's test. 
Additionally, <img src=image/13437835_01.gif> demonstrates remarkable stability even when faced with alterations in the covariance matrix structure. This robust performance of <img src=image/13437835_01.gif> suggests that it can serve as a reliable tool for statistical inference in multivariate analyses, especially in high-dimensional contexts. For practitioners, the use of the proposed test could mean more accurate decision-making in scientific research and policymaking, where precision and reliability are paramount.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[Exploring Prism Graphs with Fractional Domination Parameters]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14451]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>G. Uma&nbsp; &nbsp;S. Amutha&nbsp; &nbsp;N. Anbazhagan&nbsp; &nbsp;and B. Koushick&nbsp; &nbsp;</p><p>In this paper, we consider the prism graph of the Cartesian product <img src=image/13437788_01.gif> as <img src=image/13437788_02.gif>. Our primary objectives are to explore the concept of the fractional domination number in prism graphs by determining the bounds and relations of the fractional domination number and other parameters in prism graphs. This leads to significant improvements in resource allocation. To attain our objectives, we compute the bounds based on the definitions of the fractional domination number and other parameters. When comparing these parameters in prism graphs, we investigate how the fractional domination number relates to other parameters, such as the fractional mixed domination number, domination number, independence domination number, and vertex independence number of prism graphs. Our main findings include the domination chain of the prism graph, and also insights into how the bounds of the fractional domination number and the fractional mixed domination number change when a vertex or edge is added or removed from the prism graph, which is crucial for analyzing how changes in prism graphs affect the bounds of fractional domination-related parameters.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[On the Large-sample Size Critical Values of the Maximum Absolute Internally Studentized Residuals]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14450]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Tobias Ejiofor Ugah&nbsp; &nbsp;Kingsley Chinedu Arum&nbsp; &nbsp;Charity Uchenna Onwuamaeze&nbsp; &nbsp;Everestus Okafor Ossai&nbsp; &nbsp;Nnaemeka Martin Eze&nbsp; &nbsp;Emmanuel Ikechukwu Mba&nbsp; &nbsp;Caroline Ngozi Asogwa&nbsp; &nbsp;Angela Obayi Adaora&nbsp; &nbsp;Ifeoma Christy Mba&nbsp; &nbsp;Oluchukwu Chukwuemeka Asogwa&nbsp; &nbsp;Ikenna Emmanuel Chimezie&nbsp; &nbsp;and Comfort Njideka Ekene-Okafor&nbsp; &nbsp;</p><p>The maximum absolute internally studentized residual is a routine diagnostic measure for identifying a single outlying observation in the response variable in linear regression models. However, due to the formidable nature of the probability density function of this statistic, exact critical values are difficult to compute. The Bonferroni inequality and intensive simulations are the only tools for determining its critical values as a means for detecting a single outlying observation in a linear regression model. In this paper, we present a straightforward alternative technique for obtaining asymptotic critical values of this statistic. The technique can be applied to any linear regression model and is convenient for routine use. The asymptotic distribution of this statistic is derived and used in obtaining the upper bounds for its critical values. It is shown that the proposed technique does not depend on the number of independent variables or the number of regression parameters in the model. Thus, the computational cumbersomeness and tedium imposed by the complexity of the distribution of this statistic and by the use of the Bonferroni inequality are circumvented. The main advantages of the proposed procedure are its computational simplicity and its efficiency in handling large datasets in high dimensions. 
The asymptotic critical values of this statistic obtained by the proposed method are almost identical to those obtained by other authors, even though the techniques and principles employed in this work are entirely different from those employed by them.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[Dual Level Strategy Integrating 2-D CA and DCT Technique for Enhanced Data Protection]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14449]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>K. Gaverchand&nbsp; &nbsp;R. Venkatesan&nbsp; &nbsp;and A. Yasmin&nbsp; &nbsp;</p><p>In the contemporary era, safeguarding vast amounts of sensitive data and ensuring secure communication are paramount concerns, especially given the prevalence of insecure networks. The fields of cryptography and steganography have emerged as pivotal tools in addressing these challenges, attracting significant scientific attention due to their established effectiveness. This paper proposes a sophisticated double-layered security system for protecting textual data, initially utilizing 2-D cellular automata (CA) to establish robust encryption protocols. The incorporation of the Von Neumann neighborhood of 2-D CA enhances the generation of secure cryptographic keys, leveraging its complex and chaotic behavior to improve randomness and unpredictability. Subsequently, to enhance security, the discrete cosine transform (DCT) technique is integrated, facilitating discreet embedding of encrypted data into the least significant bit (LSB) of DCT coefficients. An exhaustive range of tests and evaluations, encompassing analysis of key space, complexity, performance metrics, avalanche effect, security attacks, peak signal-to-noise ratio (PSNR) and mean square error (MSE), substantiates the effectiveness and resilience of this multifaceted approach. The findings reveal a significant average avalanche effect of 76%, indicating resilience against cryptographic attacks. Furthermore, superior preservation of image quality is observed during encryption, as evidenced by improved PSNR and MSE values compared to alternative techniques. The overall analyses validate the proposed method as a proficient solution for secure data encryption in cyberspace.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[On One Property of a Conditional Full Angle]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14448]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Anvarjon Sharipov&nbsp; &nbsp;Fayzulla Topvoldiyev&nbsp; &nbsp;and Zokirkhuja Usmonkhujaev&nbsp; &nbsp;</p><p>One of the main directions of modern differential geometry and topology is the so-called geometry "in the large", a field that studies geometric objects as a whole. One of the first results in this field is the theorem, proved by Cauchy in 1813, that two closed convex polyhedra composed of correspondingly congruent faces are congruent. The problems of restoring polyhedra and surfaces from given geometric characteristics also belong to geometry "in the large". As a geometric characteristic, one may take the external curvature, the internal curvature, the area of the spherical image or of the cylindrical image of a polyhedron or surface, or any other value related to a geometric object. Geometric methods related to the theory of polyhedra are widely used not only for polyhedra, but also in the general theory of surfaces. Some surfaces are formed as limit states of polyhedra, so an analogy of some properties of polyhedra can be made for surfaces. Therefore, studying the properties of polyhedra helps to study such properties of surfaces. This article studies the properties of polyhedra isometric on sections. In particular, it introduces the concept of a conditional full angle given at a vertex, which is important for the restoration of isometric polyhedra in terms of their external curvature. It is known that the concept of isometry on sections depends on the given direction. The concept of a conditional full angle is generalized for trihedral angles that have edges perpendicular to this direction (a special case) and for those that do not have a support plane perpendicular to this direction. 
The monotonicity of this conditional full angle for a trihedral angle is proved in the general case.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[On the Number of Monochromatic Triples Associated with Binary Equations over Coloured Algebraic Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14447]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Melvin Varghese&nbsp; &nbsp;and G. Sheeja&nbsp; &nbsp;</p><p>Schur&apos;s Theorem on integer colouring states that colouring integers using finitely many colours yields at least one monochromatic solution to the equation <img src=image/13437836_01.gif>. An extension of Schur&apos;s theorem on integer lattices is explored by Vishal Balaji, Andrew Lott and Alex Rice. Schur tried to prove Fermat&apos;s Last Theorem by proving non-existence of the solution to the equation <img src=image/13437836_02.gif> for prime <img src=image/13437836_03.gif>. But he in fact proved that &quot;for every integer <img src=image/13437836_04.gif>, there exists <img src=image/13437836_05.gif> such that for any prime <img src=image/13437836_06.gif>, the congruence <img src=image/13437836_02.gif> has a solution&quot; where he failed to prove Fermat&apos;s Last Theorem in this route of attack. This demonstrates that Fermat&apos;s Last Theorem does not hold in the finite field <img src=image/13437836_07.gif> for any sufficiently large prime <img src=image/13437836_03.gif>. We investigate Schur&apos;s Theorem on integer colouring and the corresponding theoretical framework in algebraic groups, and we classify colourings that yield a monochromatic solution to <img src=image/13437836_08.gif> (not all are equal). We use combinatorial tools like bijective counting and Pigeonhole principle to arrive at Theorem 3.9. Our methods include Principle of Inclusion-Exclusion formula to prove the principal result Theorem 3.20. We have used Python language to implement algorithms developed during the research to showcase Schur triples associated to the group <img src=image/13437836_09.gif> and some given colouring maps. We illustrated various groups and its colouring properties. 
We found bounds, in terms of certain parameters involving special subgroups of the group, for Schur triples <img src=image/13437836_10.gif> such that <img src=image/13437836_11.gif> and <img src=image/13437836_12.gif> receive the same colour when an algebraic group is coloured using finitely many colours. We also establish a connection between proper vertex colouring and group colouring via Cayley graphs of semigroups. Our study offers new combinatorial perspectives on colouring problems in finite algebraic groups. The results support the development of new algorithms for Cayley graph colourings associated with finite semigroups, and inform combinatorial studies of colouring problems arising in network theory.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[On a Variant Weibull-Weibull Distribution: Theory and Properties]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14446]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>S. O. Ezeah&nbsp; &nbsp;A. A. Adekola&nbsp; &nbsp;O. O. Fabelurin&nbsp; &nbsp;and T. O. Obilade&nbsp; &nbsp;</p><p>In general, distribution theory plays a crucial role in modeling various real-life phenomena, making it a fundamental tool in statistical analysis and decision-making. Over the years, extensive research has been conducted on different statistical distributions and estimation techniques. While the literature abounds with information regarding well-known distributions, there is always room for exploring new variants that can better capture the characteristics of complex phenomena. In this paper, we contribute to the field of distribution theory by introducing a novel probability distribution called the Weibull-Weibull distribution. The Weibull-Weibull distribution is derived by compounding two Weibull distributions, and it offers a flexible framework for modeling phenomena that exhibit a complex interplay of factors. By combining the strength of the Weibull distribution with itself, we are able to capture a wider range of shapes and behaviour, providing more accurate representations of real-world occurrences. To facilitate the practical application of the Weibull-Weibull distribution, we employ the maximum likelihood estimation (MLE) approach to estimate its shape and scale parameters. The MLE method is a widely used statistical technique that allows us to determine the most likely values of the parameters based on observed data. By applying this estimation method to the Weibull-Weibull distribution, we enable researchers and practitioners to effectively utilize this new distribution in their analyses and modeling efforts. Furthermore, we delve into a comprehensive study of the statistical theory and properties of the Weibull-Weibull distribution. 
We investigate its moments, cumulative distribution function, probability density function, and other key measures. Through rigorous analysis, we establish the theoretical foundations of the Weibull-Weibull distribution and provide insights into its behaviour and characteristics. This comprehensive examination equips researchers with a solid understanding of the distribution, enabling them to make informed decisions and interpretations when working with real-life data. In conclusion, our research introduces the Weibull-Weibull distribution as a valuable addition to the existing repertoire of statistical distributions. By leveraging the power of compounding two Weibull distributions, we provide a flexible and robust framework for modeling complex phenomena. With the use of maximum likelihood estimation and the given Chernoff bound, practitioners are able to estimate the distribution's parameters as well as analyze its tails accurately. Our extensive statistical analysis further enhances the understanding of the Weibull-Weibull distribution, facilitating its practical application in a wide range of fields, including reliability analysis, survival analysis, and risk assessment.</p>]]></description>
<pubDate>Sep 2024</pubDate>
</item>
<item>
<title><![CDATA[New Classes of Tests for The Pareto Distribution Based on A Conditional Expectation Characterisation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14432]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>T. Nombebe&nbsp; &nbsp;J.S. Allison&nbsp; &nbsp;L. Santana&nbsp; &nbsp;and I.J.H. Visagie&nbsp; &nbsp;</p><p>There are relatively few goodness-of-fit tests specifically developed for the Pareto distribution when compared to other well-known distributions like the normal or exponential distributions. This is the case even though there are a host of practical applications where it would be required to first check the assumption that the data were realised from a Pareto distribution. We propose and investigate new goodness-of-fit tests for the Pareto Type I distribution based on a specific conditional expectation that characterises the Pareto distribution. Currently, the literature contains no other tests for the Pareto distribution based on conditional expectation. We conduct a thorough Monte Carlo power study in order to assess the finite sample performance of the newly developed tests using various estimation methods. The results from the simulation study show that the newly proposed tests are competitive in terms of power performance when compared to some existing tests. They also show that the majority of tests produce their highest powers when the unknown shape parameter is estimated by the method of moments. A practical example, where we consider the annual salaries of English Premier League football players for two consecutive seasons, is also included to illustrate the use of the newly proposed tests. We find that the salaries in the 2021–2022 season can be adequately modelled with the Pareto distribution, but not the salaries for the 2022–2023 season.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Hyers-Ulam Stability of the Hexic-Quadratic-Additive Mixed-Type Functional Equation in Non-Archimedean Normed Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14431]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Koushika Dhevi S&nbsp; &nbsp;and Sangeetha S&nbsp; &nbsp;</p><p>Functional equations are important and exciting concepts in mathematics. They make it possible to investigate fundamental algebraic operations and create fascinating solutions. The concept of functional equations drives further creative methods and techniques for resolving problems in information theory, finance, geometry, wireless sensor networks, and other domains, drawing on tools from geometry, algebra, analysis, and so on. In recent decades, several authors in many domains have studied various types of stability. Many authors have studied the stability of various functional equations in great detail, with the classical (Archimedean) case revealing many fascinating results. Recently, researchers have used non-Archimedean normed spaces (NANS) to study the analogous stability results for various functional equations. In this research, we examine the Hyers-Ulam stability of the hexic-quadratic-additive mixed-type functional equation <img src=image/13437416_01.gif> where <img src=image/13437416_02.gif> is fixed such that <img src=image/13437416_03.gif> and <img src=image/13437416_04.gif> in NANS, and also provide some suitable counterexamples.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Homogeneous Spaces and Induced Transformation Groups of S-Topological Transformation Group]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14430]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>C. Rajapandiyan&nbsp; &nbsp;and V. Visalakshi&nbsp; &nbsp;</p><p>This paper explores the homogeneous spaces and induced transformation groups of an S-topological transformation group. An S-topological transformation group is a structure constructed by concatenating a topological group with a topological space through a semi totally continuous action. It is shown that any map from a topological group to the quotient group of a finite Hausdorff topological group by the isotropy group is surjective, continuous and open, and it is proven that any map from the quotient group of a finite Hausdorff topological group by the isotropy group to the homogeneous space is both an H-isomorphism and semi totally continuous. Furthermore, an equivariant map between homogeneous spaces has been established, and the partial order relation on the family of all Hausdorff homogeneous spaces for a compact Hausdorff topological group is discussed. Subsequently, an induced S-topological transformation group is constructed by an induced H-action. For any compact subgroup K of a topological group H, it is verified that any map from the topological space Y to the orbit space of the K-action is continuous and a K-map. For any H-space, K-map, and induced S-topological transformation group, it is proved that there is a unique semi totally continuous H-map. Additionally, it is shown that for a topological group, a subgroup K of the topological group, and a K-space, there exist a unique H-space and a unique injective K-map; it is also established that for an H-space and a semi totally continuous K-map, there exists a unique semi totally continuous H-map. 
Finally, it is demonstrated that for a finite Hausdorff topological group, a finite Frechet space and an M-space, any map from the orbit space of the M-action to <img src=image/13437593_01.gif> is semi totally continuous, for subgroups M and N of the topological group.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Incident Vertex Pi Coloring of Graph Families: Fan, Book, Gear, Windmill, Dutch Windmill and Crown Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14429]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Sunil B. Thakare&nbsp; &nbsp;Archana Bhange&nbsp; &nbsp;and H. R. Bhapkar&nbsp; &nbsp;</p><p>In graph theory, the notion of graph coloring plays an important role and has several applications in the fields of science and engineering. Since the concept of map coloring was first proposed, many researchers have invented a wide range of graph coloring techniques, among which are vertex coloring, edge coloring, total coloring, perfect coloring, list coloring, acyclic coloring, strong coloring, radio coloring, and rank coloring; these are some of the important graph coloring methods that color the graph's vertices, edges, and regions under certain conditions. One such coloring method is Incident Vertex PI coloring. This is a coloring function from the set of pairs of incident vertices of every edge of a graph to the power set of colors. This method ensures that all vertices are properly colored, with the additional condition that the ordered pairs of vertices for all edges of the graph receive distinct colors. Many types of graphs are defined in graph theory. In this paper, we discuss the Incident Vertex PI Coloring numbers for a class of graph families: the Fan graph, Book graph, Gear graph, Windmill graph, Dutch Windmill graph and Crown graph.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[An Extension of The Hesitant Fuzzy Weight Averaging Operator-VIKOR Method under Hesitant Fuzzy Sets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14428]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Rafi Raza&nbsp; &nbsp;Ahmad Termimi Ab Ghani&nbsp; &nbsp;and Lazim Abdullah&nbsp; &nbsp;</p><p>The hesitant fuzzy set (HFS) is an innovative approach to decision-making under uncertainty. This study addresses the aggregated operation of the HFS decision matrix. The introduction of induced VIKOR procedures, various extensions of HFS aggregation operators, and essential approaches for multi-criteria decision-making (MCDM) are presented. This technique uses the HFWA aggregation operator to rank alternatives and identify the compromise solution that comes closest to the ideal solution. To achieve this, we developed the hesitant fuzzy weight averaging VIKOR (HFWA-VIKOR) model as a novel technique. By combining the hesitant fuzzy elements, the HFWA aggregation operator creates aggregated values that are expressed as a single value. The primary advantage of the HFWA-VIKOR model lies in its initial step of aggregating the hesitant fuzzy elements. This results in an initial hesitant fuzzy decision matrix, which provides much more detailed information for decision-making and, through the use of the induced HFWA operator, represents the complex attitudinal nature of the decision-makers. The multi-criteria location selection problem is then solved using the combined HFWA-VIKOR technique, and the outcomes are presented in an easy-to-understand way owing to the aggregation operators. A numerical example is also solved with this new method, which identifies the best alternative. Within the scope of our research, MCDM under hesitant fuzzy sets with the HFWA-VIKOR method has been applied, and the results reveal the best alternative. These results indicate good potential for the stated objectives. This technique may also be used in other studies or applications. 
Further research in this area may provide a more developed technique for this application.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[On the Jaulent-Miodek System for Fluid Mechanics Using Combination of Adomian Decomposition and Padé Techniques]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14427]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ahmed J. Sabali&nbsp; &nbsp;Saad A. Manaa&nbsp; &nbsp;and Fadhil H. Easif&nbsp; &nbsp;</p><p>Solving nonlinear partial differential equations (PDEs) is crucial in various scientific and engineering domains. The Adomian Decomposition Method (ADM) has emerged as a promising technique for tackling such problems. However, its effectiveness diminishes over extended time intervals due to divergence issues. This limitation hampers its practical applicability in real-world scenarios where stable and accurate numerical solutions are essential. To address the divergence problem associated with ADM, this research explores the combination of the Adomian Decomposition Method (ADM) with the Padé technique, a method known for its accuracy and efficiency. The purpose of this combination is to mitigate ADM's shortcomings, particularly when dealing with extended time intervals. Experimental analysis was conducted across varying time intervals to compare the performance of the combined technique with traditional ADM. Mathematica software was used to perform all calculations, including the creation of tables and figures. Results from the experiments demonstrate the superiority of the combined technique in producing accurate results regardless of the time interval used. Furthermore, the combined method improves accuracy and ensures result stability over long time intervals, creating new possibilities for its use in scientific and engineering fields. This research contributes to the field by offering a solution to the divergence issue associated with ADM, thereby enhancing its applicability in solving nonlinear PDEs. 
While acknowledging limitations such as reliance on numerical simulations, the study highlights the practical implications of its findings, including more accurate predictions and modeling in complex systems, with potential social implications in decision-making and problem-solving contexts.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Viscosity Approximation Methods for Generalized Modification of the System of Equilibrium Problem and Fixed Point Problems of an Infinite Family of Nonexpansive Mappings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14328]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Prashant Patel&nbsp; &nbsp;and Rahul Shukla&nbsp; &nbsp;</p><p>Fixed points (FP) of infinite families of nonexpansive mappings find diverse applications across various disciplines. In economics, they help to find stable prices and quantities in markets. In game theory, fixed points help to find Nash equilibria. In computer science, fixed points are used to understand program meanings and help in making better algorithms for tasks like data analysis, checking models, and improving compilers. Solutions to equilibrium problems have practical uses in various areas. For instance, in physics, these solutions assist in analyzing systems at rest or in motion. In engineering, they aid in designing structures that can withstand forces without collapsing, ensuring safety and stability in construction projects. The main aim of the article is to present the concept of generalized modification of the system of equilibrium problems (GMSEP) for an infinite family of nonexpansive mappings. In this paper, we study viscosity approximation methods and present a new algorithm to find a common element of the fixed point of an infinite family of nonexpansive mappings and the set of solutions of generalized modification of the system of equilibrium problem in the setting of Hilbert spaces. Under some conditions, we prove that the sequence generated by the algorithm converges strongly to this common solution.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[On Zagreb Energy of Certain Classes of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14327]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S. Sripriya&nbsp; &nbsp;and A. Anuradha&nbsp; &nbsp;</p><p>The energy of a graph G is the sum of the absolute values of the eigenvalues of its adjacency matrix. Given a simple connected graph G, its first (second) Zagreb matrix is constructed by including the sum (product) of the degrees of each pair of adjacent vertices of G. Computation of the sum of the absolute eigenvalues of these matrices yields the corresponding Zagreb energies. In this paper, the first and second Zagreb energies of certain families of graphs have been computed and a criterion to discern the nature of a graph G based on its energies is obtained. The paper focuses on the comparative analysis of first and second Zagreb energies for regular graphs such as cycle graphs, bipartite and tripartite graphs. Our findings reveal that the second Zagreb energy is always greater than the first Zagreb energy for all complete bipartite graphs of even order greater than or equal to 4. We have also established that the same holds for complete tripartite graphs. Furthermore, we illustrate that the two Zagreb energies coincide for the complete bipartite graph with equal partite sets if and only if the graph is of order 2. Additionally, we provide a criterion leading to an infinite set of non-isomorphic Zagreb equi-energetic graphs for all r>1 within partite graphs. The computations of the two Zagreb energies for graph operations like the t-splitting graph and the t-shadow graph are also illustrated. The first and second Zagreb energies for some specific graphs, along with bounds on the Zagreb energies for wheel graphs, are also discussed.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Some New Oscillation Criteria for Euler-Bernoulli Beam Equations with Damping Term]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14326]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S. Priyadharshini&nbsp; &nbsp;V. Sadhasivam&nbsp; &nbsp;and K. K. Viswanathan&nbsp; &nbsp;</p><p>The main objective of this study is to investigate some new oscillation criteria for Euler-Bernoulli beam equations with damping term by using the integral average method and Riccati technique. Philos introduced a new integral operator, which is the main tool in this paper. Our plan of action is to reduce the multidimensional problems to an ordinary differential problem by using Jensen's inequality, the stated assumptions, and integration by parts with boundary conditions. With hinged, sliding and hinged-sliding end boundary conditions, several new sufficient conditions are established. The results improve and generalize those given in some previous papers, as can be seen from the examples given at the end of this paper. The majority of engineering constructions, ships, support buildings, airplanes, and rotor blades all use beams as structural elements. It is presumed that these elements are only subjected to static loads; yet, dynamic loads induce vibrations, which affect the stress and strain values. These mechanical phenomena also result in noise, instability, and the potential for resonance, which increases deflections and the risk of failure. We analyze the spatial force load <img src=image/13437139_01.gif> in the equations of a damped Euler-Bernoulli beam, derived from the equation for the measured velocity or final-time displacement. Usually, internal damping determines the nature of this term.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Spatial Autoregressive Model with Mixture of Gaussian Distribution for the Random Effect: Formulation, Estimation and Application]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14325]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Prem Antony J&nbsp; &nbsp;and Edwin Prabakaran T&nbsp; &nbsp;</p><p>Spatial econometrics is pivotal in understanding spatial dependencies across diverse fields like urban economics, environmental economics, and disease spread. This study highlights the significance of spatial grouping for data management and pattern detection, particularly in epidemiological analysis and policy planning. The Spatial Autoregressive random effect (SAR-RE) model is a classical model for analysing datasets with repeated observations across units over time, particularly when these units are situated in a spatial context. The mixture effect models account for the presence of different sub-groups within the overall population, each of which has a unique response pattern. In this paper, the proposed methodology integrates the SAR-RE model into a mixture framework, allowing for the consideration of diverse spatial patterns and class-specific coefficients. By incorporating class-specific coefficients, the model accommodates heterogeneous spatial structures within the data, providing a more nuanced understanding of spatial dependencies. The spatial autoregressive model, along with the assumption that the random effect follows a mixture of Gaussian distributions, is developed to analyse panel data with spatial dependency and unobserved heterogeneity. The parameters of the model are estimated using the Limited-Memory BFGS (L-BFGS) quasi-Newton method-based EM algorithm for good convergence of the estimates. The classification of subjects into different latent classes is carried out based on their posterior probabilities. The model is applied to state-wise COVID-19 confirmed rates, revealing insightful patterns. 
The analysis employs the estimated model for the interpretation and comprehensive understanding of spatially dependent panel data with unobserved heterogeneity. The results of the empirical study show that the proposed model outperforms the existing model based on performance metrics criteria.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[Construction of Bivariate Transmuted Frechet Distribution with its Properties]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14324]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Hayfa Abdul Jawad Saieed&nbsp; &nbsp;Mhasen Saleh Altalib&nbsp; &nbsp;Safwan Nathem Rashed&nbsp; &nbsp;and Manaf Hazim Ahmed&nbsp; &nbsp;</p><p>In multivariate data modeling, the statistical analyst may wish to construct a multivariate distribution with correlated variables. For this reason, there is a need to generalize univariate distributions, but this generalization is not easy. Many methods have been presented for the construction of continuous multivariate families from univariate distributions. Some of these methods are based on a single baseline, while others are based on more than one baseline, so that their variables are dependent. Some authors have been interested in expanding a univariate transmuted family to the multivariate case. Some suggestions were made about the extension of the univariate quadratic transmuted (QT) family to bivariate ones, and another modification was made to this family by replacing the (c.d.f.) with an exponentiated (c.d.f.). Another construction of a bivariate family is based on the probability distribution of paired order statistics for a sample of size two drawn from a quadratic ranked transmuted (QRT) margin, and this bivariate family allows for positive and negative dependence between variables. Another family proposed an extension of a univariate mixture of standard continuous uniform distributions with decreasing densities to the bivariate case. Our proposed (CT<sub>2</sub>) family reduces to the bivariate quadratic transmuted (QT<sub>2</sub>) family if the cubic transmutation parameters equal zero. The (CT<sub>2</sub>) family can be used for modeling positively and negatively correlated variables. 
Some statistical properties of the (CT<sub>2</sub>) family have been studied, comprising the joint, marginal and conditional (c.d.f., p.d.f.), the joint, marginal and conditional moments, data generation and dependence coefficients. It is seen that the (joint, marginal and conditional) moments depend on the raw moments of the baseline variables and of the largest order statistics of samples of sizes 2 and 3. The Egyptian bivariate economic data are fitted by (CT<sub>2</sub>Fr), (FGMFr), (T<sub>2</sub>Fr) and (DSASFr). The (CT<sub>2</sub>Fr) provides the best fit, having the smallest (AIC) and (BIC) criteria.</p>]]></description>
<pubDate>Jul 2024</pubDate>
</item>
<item>
<title><![CDATA[The Locating and Local Locating Domination of Prism Family Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14301]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Jebisha Esther S&nbsp; &nbsp;and Veninstine Vivik J&nbsp; &nbsp;</p><p>In the fields of combinatorics and graph theory, prism graphs are very important. They provide insights into the structural features of many real-world networks and act as a model for them. In graph theory, the study of dominating sets is essential for a variety of applications, including social network research and network design. A dominating set in a graph G is a subset D of the vertices V with the property that each vertex w belonging to V − D is adjacent to at least one vertex in D. Determining the minimum cardinal number of dominating sets, locating dominating sets, and local locating dominating sets is of critical importance in such fields as network design and social network analysis. In this paper, we determine these minimum cardinal bounds for families of prism graphs. The study adds to the basic understanding of graph theory by methodically disentangling the intricate relationships between dominating sets in prism graphs. The exploration of the lowest cardinal value of locating dominating sets yields solutions to optimisation issues in network design. In this work, we determine the upper bounds of locating domination and local locating domination for the prism, antiprism, crossed prism and circulant ladder prism graphs.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[On Use of Entropy Function for Validating Differential Calculus Results]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14300]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Omdutt Sharma&nbsp; &nbsp;Surender Kumar&nbsp; &nbsp;Naveen Kumar&nbsp; &nbsp;and Pratiksha Tiwari&nbsp; &nbsp;</p><p>Rolle's Theorem (RT) and Lagrange's Mean-value Theorem (LMVT) are significant for pure and applied mathematics, and they have applications in various other fields such as management and physics. RT is significant in finding the maximum height of a projectile trajectory; in information theory, the entropy function (measure) is used to quantify the uncertainty of information. RT is used to analyze the graphs of annual performance in any field. Since information is necessary to analyze any performance, and in information theory the entropy measure is a significant tool for quantizing uncertainty, the concepts of RT and LMVT can be used in information theory to minimize or maximize uncertainty, vagueness or noise. In this manuscript, the concepts of differential calculus, i.e., RT and LMVT, are used for validation of the entropy function. In this paper, the characteristics of differential calculus applied to the information entropy function have been discussed. It has been shown that the entropy function satisfies RT and LMVT. It also describes the conditions under which Rolle's Theorem becomes the necessary and sufficient condition for an entropy function. Theorems are proved relating the concepts of differential calculus to information theory, showing that by using an existing entropy function some new entropies can be derived.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[A Numerical Study of Newell-Whitehead-Segel Type Equations Using Fourth Order Cubic B-spline Collocation Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14299]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Maheshwar Pathak&nbsp; &nbsp;Rachna Bhatia&nbsp; &nbsp;Pratibha Joshi&nbsp; &nbsp;and Ramesh Chand Mittal&nbsp; &nbsp;</p><p>Newell-Whitehead-Segel (NWS) type equations arise in solid-state physics, optics, dispersion, convection systems, mathematical biology, quantum mechanics, plasma physics and oil pollution in the ocean environment. The extensive applications of such equations draw the attention of scientists toward their numerical solutions. In this work, we propose a fourth order numerical method based on cubic B-spline functions for the numerical solution of nonlinear NWS type equations. The Crank Nicolson finite difference scheme is used to discretize the equation and quasi-linearization is used to linearize the nonlinear term. As a result, we get a system of linear equations, which we solve using the Gauss elimination method. Stability analysis has been carried out by a thorough Fourier series analysis and stability conditions have been obtained. The scheme has been applied to five numerical problems having quadratic, cubic and fourth order nonlinear terms. The effectiveness and robustness of the proposed technique have been demonstrated by comparing the obtained numerical results with the exact solutions and with numerical results obtained by other existing methods. A comparison of the numerical results obtained using the proposed technique with exact solutions shows excellent agreement. Graphs of numerical solutions have been drawn at different times and also compared with the graphs of the exact solutions. The comparative analysis shows that the proposed scheme outperformed other methods in terms of accuracy and produced good results.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[Confidence Intervals for the Parameter of the Juchez Distribution and Their Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14298]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Patchanok Srisuradetchai&nbsp; &nbsp;and Wararit Panichkitkosolkul&nbsp; &nbsp;</p><p>This paper presents four types of confidence intervals (CIs) for parameter estimation of the Juchez distribution, a robust model in the domain of lifetime data analysis. The likelihood-based, Wald-type, bootstrap-t, and bias-corrected and accelerated (BCa) bootstrap confidence intervals are proposed and evaluated through simulation studies and application to real datasets. The effectiveness of these methods is assessed in terms of the empirical coverage probability (CP) and average length (AL) of the confidence intervals, providing an understanding of their performance under various conditions. Additionally, we derive the Wald-type CI formula in explicit form, making it readily calculable. The results show that when the sample size is small, such as 10, 20, or 30, the bootstrap-t and BCa bootstrap methods produce CPs less than 0.95. However, as sample sizes increase, the CPs of all methods tend to converge towards the nominal level of 0.95. The parameter values also affect the CP. At low values of the parameter, the CPs are quite close to the ideal, with both the Wald-type and likelihood-based methods achieving a CP of approximately 0.95. However, at higher parameter values with small sample sizes, the CPs for the bootstrap-t and BCa bootstrap methods tend to have lower coverage.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[Partial Product-Exponential Method of Estimation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14236]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Gagandeep Kaur&nbsp; &nbsp;and Sarbjit Singh Brar&nbsp; &nbsp;</p><p>This research introduces the Partial Product-Exponential Method of Estimation, which utilizes partial auxiliary information for estimating the population mean in simple random sampling without replacement. The method proposes novel estimators tailored for situations where only partial auxiliary information is available, particularly when it demonstrates a negative correlation with the study variable within sub-populations. The paper evaluates the performance of the suggested method in two cases: when sub-population weights are known and when they are unknown. Approximate expressions for bias and variance, up to the first order, are derived for the suggested estimators. A comprehensive comparative analysis concludes that the proposed estimators are more efficient than existing estimators, such as the mean per unit estimator, the partial product estimator, and the weighted post-stratified estimator, under specific conditions. In particular, the proposed estimators outperform the corresponding existing methods when certain conditions hold, demonstrating superiority in both the known and unknown weight cases. Furthermore, a simulation study using R software validates the theoretical findings for normal and non-normal populations. The study showcases the practical utility of the proposed estimators, emphasizing their superiority over existing counterparts in real-world applications. In particular, the proposed estimators offer increased accuracy and efficiency in estimating the population mean, enhancing the reliability of sample survey results.
In summary, the Partial Product-Exponential Method of Estimation is a valuable addition to the domain of sample survey methodology, addressing the challenge of partial auxiliary information. The suggested methods demonstrate advantages in efficiency and accuracy, highlighting their potential for enhanced estimation accuracy in a variety of practical sample survey applications.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[Coneighbor Graphs and Related Topologies]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14235]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nechirvan B. Ibrahim&nbsp; &nbsp;and Alias B. Khalaf&nbsp; &nbsp;</p><p>The primary aim of this paper is to establish and analyze certain topological structures linked with a specified graph <img src=image/13436824_01.gif>. In a graph <img src=image/13436824_01.gif>, a vertex u is considered a neighbor of another vertex v if there exists an edge uv in <img src=image/13436824_01.gif>. Furthermore, we define two vertices (or edges) in <img src=image/13436824_01.gif> as coneighbors if they share identical sets of neighboring vertices (or edges). The topology under consideration arises from the collections of coneighbor vertices and the collections of coneighbor edges within the graph. It is proved that the coneighbor topology of every non-coneighbor graph is homeomorphic to the included point topology, and that this space is quasi-discrete if and only if the graph contains at least one coneighbor set of vertices; examples of coneighbor topologies of special graphs, such as path, cycle, and bipartite graphs, are presented as quasi-discrete spaces. Moreover, several topological properties of the coneighbor space are presented. We prove that the coneighbor topological space associated with a graph <img src=image/13436824_01.gif> always has dimension one and satisfies the T<sub>1/2</sub> axiom. Also, the family of θ-open sets is determined in this space, and it is proved that the space is almost compact whenever the family of coneighbor sets is finite. Finally, we examine some graphs in which the coneighbor space fulfills other topological concepts such as connectedness, compactness and countable compactness.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[Sharp Bounds on Vertex N-magic Total Labeling Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14234]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>R. Nishanthini&nbsp; &nbsp;and R. Jeyabalan&nbsp; &nbsp;</p><p>A vertex N-magic total labeling is a bijective function that maps the vertices and edges of a graph G onto the successive integers from 1 to p + q. The labeling exhibits two distinct properties: first, the count of distinct magic constants k<sub>i</sub> for i belonging to the set {1, 2, ...,N} equals N; secondly, the magic constants k<sub>i</sub> must be arranged in strictly ascending order. In the present context, the constant N is employed to represent the number of different vertex degrees. The term “magic constant values k<sub>i</sub>” for i ∈ {1, 2, ...,N} refers to specific numbers that exhibit unique and interesting properties and are employed in the context of this investigation. By summing the weights of each vertex in V(G), we obtain a magic constant k<sub>i</sub> for i ∈ {1, 2, ...,N}. Within the scope of this study, we discuss the sharp bounds of vertex N-magic total labeling graphs. In terms of the magic constants k<sub>i</sub> for i ∈ {1, 2, ...,N}, we also derive the requirement for vertex N-magic total labeling of trees. We investigate the possibility of vertex N-magic total labeling in graphs with varying vertex degrees.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[On Questions Concerning Finite Prime Distance Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14233]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Ram Dayal&nbsp; &nbsp;A. Parthiban&nbsp; &nbsp;and P. Selvaraju&nbsp; &nbsp;</p><p>Graph labeling is an allocation of labels (mostly integers) to the nodes, the lines, or both of a graph G<sub>α</sub> subject to a few conditions. The field of graph theory, specifically graph labeling, plays a vital role in various fields. To name a few, graph labeling is utilized in coding, x−ray crystallography, radar, astronomy, circuit design, communication network addressing, and database management. It can also be applied to network security, network addressing, the channel assignment process, and social networks. A graph G<sub>β</sub> is a prime distance graph (PDG) if its nodes can be assigned distinct integers such that for any two adjacent nodes, the positive difference of their labels is a prime number. A complete characterization of prime distance graphs is an open problem of high interest, and this paper contributes partially towards it. More specifically, Laison et al. raised the following questions. (1) Is there a family of graphs which are PDGs if and only if Goldbach’s Conjecture is true? (2) What other families of graphs are PDGs? In this paper, these questions are partially answered; we also show that certain families of graphs admit prime distance labeling (PDL) if and only if the Twin Prime Conjecture holds, besides establishing PDL of some special graphs.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[Computational Solution and Analysis of Fuzzy Nonlinear HIV Infection Model via New Multistage Fuzzy Variational Iteration Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14191]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Hafed. H. Saleh&nbsp; &nbsp;Amirah Azmi&nbsp; &nbsp;and Ali. F. Jameel&nbsp; &nbsp;</p><p>In order to obtain sufficiently accurate solutions of fuzzy differential equations (FDEs), reliable and efficient approximation methods are necessary. Approximate numerical methods cannot directly solve fuzzy HIV models. Meanwhile, approximate analytical methods can potentially provide more straightforward solutions without the need for extensive numerical computations or linearization and discretization techniques, which may be challenging to apply to fuzzy models. One significant advantage of approximate analytical methods is their ability to provide insights into solution accuracy without requiring an exact solution for comparison, which may not be readily available. In this work, the fuzzy nonlinear HIV infection model is analyzed and solved using a new fuzzy form of an approximate analytical method. Fuzzy set theory combined with the properties of the standard fuzzy variational iteration method (FVIM) is utilized to produce a new formulation, denoted the multistage fuzzy variational iteration method (MFVIM), to solve the fuzzy nonlinear HIV infection model. MFVIM offers an effective means of attaining convergence in the series solution presented as a polynomial function. This approach enables efficient solutions to diverse mathematical challenges. The solution methodology relies on converting the fuzzy differential equations into systems of ordinary differential equations, utilizing the parametric form of their r-level representations, and considering the approximate solution of the system in a sequence of intervals. Subsequently, the equivalent classical systems are solved by applying FVIM algorithms in each subinterval.
Also, the existence and uniqueness of the solution of the proposed problem are analyzed, along with a fuzzy optimal control analysis. Tabular and graphical representations of the MFVIM solutions of the proposed models are presented and analyzed in comparison with a numerical method and FVIM. The new method performs better than the numerical method, with a simple implementation, for solving the fuzzy nonlinear HIV infection model associated with FIVPs. The ability to better comprehend the behavior of the system under investigation can enable researchers and scientists to work on models incorporating systems with long memories and ill-defined notions, and to make more effective design and decision-making choices.</p>]]></description>
<pubDate>May 2024</pubDate>
</item>
<item>
<title><![CDATA[Integral Graph Spectrum and Energy of Interconnected Balanced Multi-star Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14142]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>B. I. Andrew&nbsp; &nbsp;and A. Anuradha&nbsp; &nbsp;</p><p>A balanced multi-star graph <img src=image/13436175_01.gif> is a specialized type of graph formed by connecting the apex vertices of star graphs to create a cohesive structure known as a clique. These graphs comprise r star graphs, where each star graph has an apex vertex connected to n pendant vertices. Balanced multi-star graphs offer benefits in scenarios requiring equal distances between peripheral nodes, such as sensor networks, distributed computing, traffic engineering, telecommunications, supply chain management, and power distribution. The integral graph spectrum derived from the adjacency matrix of balanced multi-star graphs holds significance across various domains. It aids in network analysis to understand connectivity patterns, facilitates efficient computation of structural properties through graph algorithms, and enables graph partitioning and community detection. Spectral graph theory assists in identifying connectivity patterns in network visualization, supports modeling biological networks in biomedical research, aids in generating personalized recommendations in recommendation systems, and contributes to graph-based segmentation and scene analysis tasks in image processing. This paper aims to characterize the integral graph spectrum of balanced multi-star graphs <img src=image/13436175_01.gif> by focusing on spectral parameters of double-star graphs (r=2), triple-star graphs (r=3), and quadruple-star graphs (r=4). This spectrum serves as an important tool across disciplines, providing insights into graph structure and facilitating tasks ranging from network analysis to computational biology and image processing.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Variations of Rigidity for Abelian Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14141]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Inessa I. Pavlyuk&nbsp; &nbsp;and Sergey V. Sudoplatov&nbsp; &nbsp;</p><p>A series of basic characteristics of structures and of elementary theories reflects their complexity and richness. Among these characteristics, four kinds of degrees of rigidity and the index of rigidity are considered as measures of how far a given structure is from a rigid one, both with respect to the automorphism group and to the definable closure, for some or any subset of the universe of a given finite cardinality. Thus, a natural question arises on the classification of model-theoretic objects with respect to rigidity characteristics. We apply a general approach for studying rigidity values and the related classification to abelian groups and their theories. We describe the possible degrees and indexes of rigidity for finite abelian groups and for standard infinite abelian groups. This description is based on general considerations of rigidity and its application to finite structures, as well as on their specifics for abelian groups, including Szmielew invariants, combinatorial formulas for cardinalities of orbits, links with dimensions, and their combinations. It shows how the rigidity characteristics of infinite abelian groups relate to those of finite ones. Some applications to non-standard abelian groups are discussed.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Emerging Frameworks: 2-Multiplicative Metric and Normed Linear Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14140]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>B. Surender Reddy&nbsp; &nbsp;S. Vijayabalaji&nbsp; &nbsp;N. Thillaigovindan&nbsp; &nbsp;and K. Punniyamoorthy&nbsp; &nbsp;</p><p>This study advances the understanding of 2-multiplicative (product) metric spaces and normed linear spaces (NDLS) beyond what is already known. Addressing a gap in existing research, our main aim is to thoroughly explore the natural properties of 2-multiplicative NDLS. Using a careful approach that examines continuity, compactness, and convergence properties, our research obtains results that point out the special features of these spaces and show the connections between their algebraic and topological sides. The importance of our findings goes beyond theory, affecting practical uses and encouraging collaboration across different fields. Our research builds a strong base in mathematical analysis, giving useful insights for making nuanced decisions. Acknowledging some limitations of our study opens the door for future improvements, creating promising paths for further exploration. In practical terms, what we learn from this thorough study not only informs but also changes how decisions are made in mathematical analysis. In the research community, our work deepens appreciation of the connection between algebraic and topological spaces, sparking curiosity and inspiring future research. In essence, this research acts as a guiding light, showcasing the unique features of 2-multiplicative NDLS and paving the way for a deeper understanding of mathematical structures and their flexible uses in both theory and practice.
Furthermore, our exploration motivates future researchers to dive into the details of 2-multiplicative NDLS, expanding their knowledge and exploring broader implications in the field of mathematical analysis.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[A Class of Efficient Shrinkage Estimators for Modelling the Reliability of Burr XII Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14139]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Zuhair A. Al-Hemyari&nbsp; &nbsp;Alaa Khlaif Jiheel&nbsp; &nbsp;and Iman Jalil Atewi&nbsp; &nbsp;</p><p>For the purpose of modelling the reliability of the Burr XII distribution, a family of shrinkage estimators is proposed for any parameter of any distribution when a prior guess value <img src=image/13436156_01.gif> of <img src=image/13436156_02.gif> is available from the past. In addition, two sub-models of shrinkage-type estimators for estimating the reliability and parameters of the Burr XII distribution, using two types of shrinkage weight functions with a preliminary test of the hypothesis <img src=image/13436156_03.gif> against the alternative <img src=image/13436156_04.gif>, have been proposed and studied. The criteria for studying the properties of the two sub-models of reliability estimators, namely the bias, bias ratio, mean squared error, and relative efficiency, were derived and computed numerically for each sub-model, because the expressions for the Burr XII distribution are complicated and contain many complex functions. The numerical results showed the usefulness of the proposed two sub-models of reliability estimators of the Burr XII distribution relative to the classical estimators, for both shrinkage functions, when the a priori guess value <img src=image/13436156_01.gif> is close to the true value of <img src=image/13436156_02.gif>. In addition, the comparison between the proposed two sub-models of the shrinkag</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Product Signed Domination in Probabilistic Neural Networks]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14138]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>T. M. Velammal&nbsp; &nbsp;A. Nagarajan&nbsp; &nbsp;and K. Palani&nbsp; &nbsp;</p><p>Domination plays a very important role in graph theory. It has many applications in various fields such as communication, social science, and engineering. Let <img src=image/13436148_01.gif> be a simple graph. A function <img src=image/13436148_02.gif> is said to be a product signed dominating function if each vertex <img src=image/13436148_03.gif> in <img src=image/13436148_04.gif> satisfies the condition <img src=image/13436148_05.gif>, where <img src=image/13436148_06.gif> denotes the closed neighborhood of <img src=image/13436148_03.gif>. The weight <img src=image/13436148_07.gif> of a function <img src=image/13436148_10.gif> is defined as <img src=image/13436148_08.gif>. The product signed domination number of a graph <img src=image/13436148_11.gif> is the minimum positive weight of a product signed dominating function and is denoted by <img src=image/13436148_09.gif>. A product signed dominating function assigns 1 or -1 to the nodes of the graph. This variation of the dominating function has applications in social networks of people or organizations. The Probabilistic Neural Network (PNN) was first proposed by Specht. It is a classifier that maps input patterns into a number of class levels and estimates the probability of a sample belonging to a learned class. This paper studies the existence of product signed dominating functions in probabilistic neural networks and calculates the exact values of the product signed domination numbers of three-layered and four-layered probabilistic neural networks.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Forecasts with SPR Model Using Bootstrap-Reversible Jump MCMC]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14137]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Suparman&nbsp; &nbsp;Eviana Hikamudin&nbsp; &nbsp;Hery Suharna&nbsp; &nbsp;Aryanti&nbsp; &nbsp;In Hi Abdullah&nbsp; &nbsp;and Rina Heryani&nbsp; &nbsp;</p><p>Polynomial regression (PR) is a stochastic model that has been widely used for forecasting in various fields. Stationary stochastic models play a very important role in forecasting. Generally, PR model parameter estimation methods have been developed for non-stationary PR models. This article aims to develop an algorithm to estimate the parameters of a stationary polynomial regression (SPR) model. The SPR model parameters are estimated using the Bayesian method. The Bayes estimator cannot be determined analytically because the posterior distribution of the SPR model parameters has a complex structure. The complexity of the posterior distribution is caused by the SPR model parameters having a variable-dimensional space. Therefore, this article uses the reversible jump MCMC algorithm, which is suitable for estimating the parameters of variable-dimensional models. Applying the reversible jump MCMC algorithm to big data requires many iterations. To reduce the number of iterations, the reversible jump MCMC algorithm is combined with the Bootstrap algorithm via the resampling method. The performance of the Bootstrap-reversible jump MCMC algorithm is validated using two simulated data sets. The findings show that the Bootstrap-reversible jump MCMC algorithm can estimate the SPR model parameters well. These findings contribute to the development of SPR models and SPR model parameter estimation methods, as well as to big data modeling. Further research can be done by replacing the Gaussian noise in SPR with non-Gaussian noise.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Decision Making with Parametric Reduction and Graphical Representation of Neutrosophic Soft Set]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14136]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Sonali Priyadarsini&nbsp; &nbsp;Ajay Vikram Singh&nbsp; &nbsp;and Said Broumi&nbsp; &nbsp;</p><p>The neutrosophic soft set is one of the most significant mathematical approaches for describing uncertainty, and it has a multitude of practical applications in the realm of decision making. On the other hand, the decision-making process is often made more difficult and complex because such situations contain criteria that are less significant or redundant. In neutrosophic soft set-based decision-making problems, parameter reduction is an efficient method for cutting down on redundant and superfluous factors, and it does so without damaging the decision-makers' ability to make decisions. In this work, a parametric reduction strategy is proposed. This approach lessens the difficulties associated with decision making while maintaining the existing order of available options. Because the decision sequence is maintained while the reduction process is streamlined, this tactic results in an experience that is both less difficult and more convenient. This article demonstrates the applicability of the method by outlining a real-world decision-making dilemma and providing a solution for it. The article also discusses a novel method for dealing with neutrosophic soft graphs by merging graph theory with neutrosophic soft set theory. A graphical depiction of a neutrosophic soft set is provided, alongside an explanation of neutrosophic graphs and neutrosophic soft set graphs.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Recursive Estimation of the Multidimensional Distribution Function Using Bernstein Polynomial]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14015]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>D. A. N. Njamen&nbsp; &nbsp;B. Baldagaï&nbsp; &nbsp;G. T. Nguefack&nbsp; &nbsp;and A. Y. Nana&nbsp; &nbsp;</p><p>The recursive method known as the stochastic approximation method can be used, among other things, for constructing recursive nonparametric estimators. Its aim is to ease the updating of the estimator when moving from a sample of size n to n + 1. Some authors have used it to estimate density and distribution functions, as well as univariate regression, using Bernstein polynomials. In this paper, we propose a nonparametric approach to multidimensional recursive estimators of the distribution function using Bernstein polynomials and the stochastic approximation method. We determine an asymptotic expression for the first two moments of our estimator of the distribution function, and then give some of its properties, such as the first- and second-order moments, the bias, the mean square error (MSE), and the integrated mean square error (IMSE). We also determine the optimal choice of parameters for which the MSE is minimal. Numerical simulations are carried out and show that, under certain conditions, the estimator obtained converges to the usual laws and is faster than other methods in the case of the distribution function. However, there is still a lot of work to be done on this issue. This includes studying the convergence properties of the proposed estimator; estimating the recursive regression function; developing a new estimator of a regression function based on Bernstein polynomials using the semi-recursive estimation method; and constructing new recursive estimators of the distribution, density, and regression functions when the variables are dependent.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Applications of Onto Functions in Cryptography]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14014]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>K Krishna Sowmya&nbsp; &nbsp;and V Srinivas&nbsp; &nbsp;</p><p>The concept of onto functions plays a very important role in the theory of analysis and has rich applications in many engineering and scientific techniques. In this paper, we propose a new application in the field of cryptography by using onto functions on algebraic structures such as rings and fields to obtain a strong encryption technique. A new symmetric cryptographic system based on Hill ciphers is developed using onto functions with two keys, primary and secondary, to enhance security. This is the first cryptographic algorithm developed using onto functions; it ensures strong security for the system while maintaining the simplicity of the existing Hill cipher. The concept of using two keys is also novel in symmetric key cryptography. The use of onto functions in the encryption technique gives the algorithm a high level of security, which is discussed through different examples. The original Hill cipher is obsolete in present-day technology and serves a pedagogical purpose, whereas the newly proposed algorithm can be safely used with present-day technology. The algorithm's vulnerability to different types of attacks and the cardinality of its key spaces are also discussed.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[A Pivotal Operation on Triangular Fuzzy Number for Solving Fuzzy Nonlinear Programming Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14013]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>D. Bharathi&nbsp; &nbsp;and A. Saraswathi&nbsp; &nbsp;</p><p>Fuzzy nonlinear programming plays a vital role in decision-making where uncertainties and nonlinearity significantly impact outcomes. Real-world situations often involve imprecise or vague information. Fuzzy nonlinear programming allows for the representation of uncertainty through fuzzy sets, enabling more accurate modeling of real-world complexities. Many optimization problems exhibit nonlinear relationships among variables. Fuzzy nonlinear programming addresses these complex relationships, providing solutions that linear programming methods cannot accommodate. This research article addresses Fuzzy Non-Linear Programming Problems (FNLPP) in an environment of triangular fuzzy numbers and proposes a method based on a pivotal operation with the aid of Wolfe's technique. Fuzzy nonlinear programming is an area of study that deals with optimization problems in which the objective function and constraints involve fuzzy numbers, which represent uncertainty or vagueness in real-world data. We claim that the proposed method is easier to understand and apply than existing methods for solving similar problems that arise in real-life situations. To demonstrate the effectiveness of the method, the authors have solved a numerical example and provided illustrations in the paper. The proposed method aims to address such complexities and solve these problems more efficiently.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Convergence of Spectral-Grid Method for Burgers Equation with Initial-Boundary Conditions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=14012]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Chori Normurodov&nbsp; &nbsp;Akbar Toyirov&nbsp; &nbsp;Shakhnoza Ziyakulova&nbsp; &nbsp;and K. K. Viswanathan&nbsp; &nbsp;</p><p>In this study, the initial-boundary value problem for the Burgers equation is solved with a theoretical substantiation of the spectral-grid method. Using the theory of Green's functions, an operator equation of the second kind with the corresponding initial-boundary conditions is obtained for the continuous problem. To solve the differential problem approximately, the spectral-grid method is used: a grid is introduced on the integration interval, and approximate solutions of the differential problem on each grid element are represented as finite series in Chebyshev polynomials of the first kind. At the internal nodes of the grid, continuity of the approximate solution and its first derivative is required; the corresponding boundary conditions are satisfied at the boundary nodes. A discrete analogue of the operator equation of the second kind is obtained using the spectral-grid method. Convergence theorems for the spectral-grid method are proven and estimates of the method's convergence rate are obtained. To discretize the Burgers equation in time on the interval [0,T], a grid with a uniform step <img src=image/13435533_01.gif> is introduced, i.e. <img src=image/13435533_02.gif>, where <img src=image/13435533_03.gif> is a given number. Numerical calculations have been carried out at sufficiently small values of the viscosity, which cannot be reached by other numerical methods. The high accuracy and efficiency of the spectral-grid method in solving the initial-boundary value problem for the Burgers equation are demonstrated.</p>]]></description>
<pubDate>Mar 2024</pubDate>
</item>
<item>
<title><![CDATA[Mixture of Ailamujia and Size Biased Ailamujia Distributions: Estimation and Application]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13990]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Bader Alruwaili&nbsp; &nbsp;</p><p>In this article, we introduce a new model, a mixture of the Ailamujia and size biased Ailamujia distributions. We present and discuss some statistical properties of this mixture, such as moments, skewness, and kurtosis. We also provide graphical and numerical results to illustrate the behavior of the proposed mixture and its properties, as well as reliability analysis results for the mixture. The parameters of the mixture are estimated using the maximum likelihood method. The usefulness of the proposed mixture is illustrated with a real-life dataset: we fit the Ailamujia distribution, the size biased Ailamujia distribution, and the proposed mixture to the data and compare them under different criteria. The results show that the proposed mixture fits the dataset better than either the Ailamujia distribution or the size biased Ailamujia distribution alone.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[The ARCH Model for Analyzing and Forecasting Temperature Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13989]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Ali Sadig Mohommed Bager&nbsp; &nbsp;</p><p>The chaotic nature of the Earth's atmosphere and the significant impact of weather on various fields necessitate accurate weather forecasting. Time series analysis plays a crucial role in predicting future values based on past data. The Autoregressive Conditional Heteroscedasticity (ARCH) model is widely used for forecasting, especially in the field of temperature analysis. This study focuses on the ARCH model for analyzing and forecasting temperature changes. The ARCH model is selected for its ability to capture the regular variations in the predictability of meteorological variables. The methodology section explains the ARCH model and the statistical tests used, such as the heteroscedasticity (ARCH) test, the Jarque-Bera test, and the Augmented Dickey-Fuller (ADF) test. A sample study is conducted on monthly average temperature data from Athenry, Ireland, over a period of four years. The study utilizes the ARCH model to calculate the volatility of the temperature series and assesses the model's performance using goodness-of-fit measures and predictive accuracy. The results show that the ARCH model successfully predicts temperature changes for three years, as indicated by the forecasted temperature series. The statistical performance of the ARCH model is evaluated using in-sample and out-of-sample analyses, demonstrating its effectiveness in capturing temperature variations. The study highlights the importance of time series forecasting and the significant impact of the ARCH model in temperature analysis.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Moments of Gaussian Distributions for Small and Large Sample Sizes Revisited]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13988]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Florian Heiser&nbsp; &nbsp;and E W Knapp&nbsp; &nbsp;</p><p>Central moments of statistical samples provide coarse-grained information on the width, symmetry and shape of the underlying probability distribution. They need appropriate corrections to fulfill two conditions: (1) yielding correct limiting values for large samples; (2) yielding these values also when averaged over many samples of the same size. We provide correct expressions for unbiased central moments up to the fourth order and an unbiased expression for the kurtosis, which is generally available only in a biased form. We have verified the derived general expressions by applying them to the Gaussian probability distribution (GPD), and we show how unbiased central moments and kurtosis behave for finite samples. For this purpose, we evaluated precise distributions of all four moments for finite samples of the GPD. These distributions are based on up to 3.2*10<sup>8</sup> randomly generated samples of specific sizes. For large samples, these moment distributions become Gaussians whose second moments decay with the inverse sample size. We parameterized the corresponding decay laws. Based on these moment distributions, we demonstrate how p-values can be computed to compare the mean and variance evaluated from a sample with the corresponding expected values. We also show how one can use p-values for the third moment to investigate the symmetry, and for the fourth moment to investigate the shape, of the underlying probability distribution, certifying or ruling out a Gaussian distribution. All of this adds new power to the use of statistical moments. Finally, we apply the evaluation of p-values to a dataset of the percentage of people aged 65 and above in the 50 states of the USA.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Functional Continuum Regression Approach to Wavelet Transformation Data in a Non-Invasive Glucose Measurement Calibration Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13987]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Ismah Ismah&nbsp; &nbsp;Erfiani&nbsp; &nbsp;Aji Hamim Wigena&nbsp; &nbsp;and Bagus Sartono&nbsp; &nbsp;</p><p>Functional data have a large-dimensional structure and are a rich source of information, but analyzing them can be problematic. Functional continuum regression is an alternative method for calibration modeling with functional data. This study aimed to determine the robustness of functional continuum regression in overcoming multicollinearity, or cases where the number of independent variables exceeds the number of observations, with functional data. The research method is the application of functional continuum regression to the wavelet transform of non-invasive blood glucose measurements in a calibration model, with comparisons against non-functional methods, namely principal component regression, partial least squares regression and least squares regression, and against a functional method, namely functional regression. For all five methods, the root mean square error of prediction (RMSEP), the correlation between the observed data and the estimated observations, and the mean absolute error (MAE) were obtained. The results indicate that reduction methods such as functional continuum regression, principal component regression and partial least squares regression are superior when multicollinearity occurs or the number of independent variables exceeds the number of observations. For functional data analysis, the application of functional continuum regression is better because it does not eliminate data patterns. Thus, functional continuum regression is an effective approach for analyzing calibration models, which generally involve functional data together with problems such as multicollinearity or more independent variables than observations.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[On The Existence of MDS Matrices over <img src=image/13435040_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13860]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Defita&nbsp; &nbsp;Intan Muchtadi-Alamsyah&nbsp; &nbsp;and Aleams Barra&nbsp; &nbsp;</p><p>An MDS (maximum distance separable) matrix is a square matrix all of whose submatrices are non-singular. MDS matrices are used in the encryption and decryption processes of some cryptographic systems. The matrix used in decryption is the inverse of the matrix used in encryption; therefore, choosing a matrix whose inverse is easy to find is more efficient. Orthogonal and involutory matrices are two kinds of matrices whose inverses are easy to find. On the other hand, in terms of storage memory, a circulant matrix is more advantageous than a general square matrix. In 2019, Cauchois and Loidreau proved that there are no involutory circulant MDS matrices of order 2m for m≥2 over a field of prime characteristic p≥2. In 2022, Adhiguna et al. stated that there is no orthogonal circulant MDS matrix of even order, or of order divisible by p > 2, over a field with characteristic p. This research concerns the existence of MDS matrices over the ring <img src=image/13435040_02.gif>, that is, the ring <img src=image/13435040_03.gif> where <img src=image/13435040_04.gif> and q is a power of p. This paper shows that there are no involutory circulant MDS matrices and no orthogonal circulant MDS matrices of certain orders over <img src=image/13435040_02.gif>. Furthermore, we generalize these results to the ring <img src=image/13435040_05.gif>.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Derivation and Evaluation of Monte Carlo Estimators of the Scattering Equation Using the Ward BRDF and Different Sample Allocation Strategies]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13859]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Carlos Lopez Garces&nbsp; &nbsp;and Nayeong Kong&nbsp; &nbsp;</p><p>This paper investigates three distinct Monte Carlo estimators derived from the research of Sbert et al. These estimators are specifically tailored to the scattering equation using the Ward Bidirectional Reflectance Distribution Function (BRDF) integrated with a designed cosine-weighted environment map. We have two goals in this paper: first, to bridge the gap between theoretical foundations and practical applicability by understanding how these estimators can be seamlessly integrated as extensions to the acclaimed PBRT renderer; and second, to measure their real-world performance. We aim to validate our methodology by comparing rendered images, with varying convergence rates and deviations, against the results of Sbert et al. This validation ensures the robustness and reliability of our approaches. We analyze the analytical structure of these estimators to derive their precise form. We then implement the three estimators as extensions to the PBRT renderer, subjecting them to a numerical evaluation. We further evaluate the estimator set and sampling strategy by utilizing another pair of incident radiance functions and BRDFs. The final step is to generate rendered images from the implementation to verify the results observed by Sbert et al. and extend them with this new pair of functions.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[On Intuitionistic Hesitancy Fuzzy Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13858]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Sunil M.P.&nbsp; &nbsp;and J. Suresh Kumar&nbsp; &nbsp;</p><p>A graph is a basic representation of the relationship between vertices and edges, suitable when the relationships are normal and straightforward. But most real-life situations are rather complex, calling for advanced developments in graph theory. The concept of a fuzzy graph addresses uncertainty to a certain extent. However, situations arise when we must address complex hesitant situations, such as making major decisions regarding the merging of companies. The Intuitionistic fuzzy graph (IFG) and Hesitancy fuzzy graph (HFG) were developed to resolve this uncertainty, but they also fall short in resolving problems related to hesitant situations. In this paper, we present the concepts of IFG and HFG, which serve as the foundation for introducing, defining and analysing the Intuitionistic hesitancy fuzzy graph (IHFG). We explore concepts such as λ-strong, δ-strong and ρ-strong IHFGs. We also make a detailed comparative study of the Cartesian product and composition of HFGs and IHFGs, establishing essential theorems on the properties of such products. We prove that the Cartesian product and composition of two strong HFGs need not be a strong HFG, whereas the Cartesian product and composition of two strong IHFGs is a strong IHFG. We also prove that if the Cartesian product of two IHFGs is strong, then at least one of the IHFGs must be strong, and likewise for the composition. IHFG models provide exact and accurate results for making apt decisions in problems involving hesitant situations.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Complex Neutrosophic Fuzzy Set]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13857]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>V. Kalaiyarasan&nbsp; &nbsp;and K. Muthunagai&nbsp; &nbsp;</p><p>The complex number system is an extension of the real number system that came into existence during attempts to find solutions for cubic equations. A set characterized by a membership (characteristic) function, which assigns to each object a grade of membership ranging between zero and one, is called a fuzzy set. A newer development of fuzzy systems is the complex fuzzy system, in which the membership function is complex-valued and its range is represented by the unit disk. The fuzzy similarity measure helps us find the closeness among fuzzy sets. Due to its wide range of applications in various fields, Fuzzy Multi Criteria Decision Making (FMCDM) has gained importance in fuzzy set theory. This research contribution combines complex fuzzy sets, fuzzy similarity measures and Fuzzy Multi Criteria Decision Making. In this article, we introduce and investigate the complex neutrosophic fuzzy set, which involves complex-valued neutrosophic components. We discuss two real-life examples: one on selecting the seed variety that gives the maximum yield and profit in a short period of time, and another on choosing the best company to invest in. A similarity measure between complex neutrosophic fuzzy sets is used to make the decision.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[A New Robust Interval Estimation for the Median of An Exponential Population When Some of the Observations are Extreme Values]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13856]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Faris Muslim Al-Athari&nbsp; &nbsp;</p><p>Obtaining accurate interval estimates for the median of an exponential population when some of the observations are extreme values is an important issue for researchers in reliability applications and survival analysis. In this research paper, a new method is proposed for obtaining a robust confidence interval as a substitute for the known ordinary (classical) confidence interval when there are extreme values in the sample. The proposed method simply replaces the sample mean with a constant multiple of the sample median and adjusts the upper percentile point of the chi-square in the ordinary confidence interval formula. Further, the performance of the proposed method is evaluated and compared with the ordinary one by Monte Carlo simulation based on 100,000 trials for each sample size, with 5% and 10% extreme values. The results show that, under the contaminated exponential distribution, the proposed method always performs better than the ordinary method, in the sense of having a simulated confidence probability quite close to the nominal confidence level, with shorter width and smaller standard error. The application of the proposed method to real-life data is presented and compared with the simulation results.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Communications to the Pseudo-Additive Probability Measure and the Induced Probability Measure Realized by <img src=image/13492447_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13855]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Dhurata Valera&nbsp; &nbsp;Bederiana Shyti&nbsp; &nbsp;and Silvana Paralloj&nbsp; &nbsp;</p><p>The theory of pseudo-additive measures is studied by analyzing and evaluating significant results. That the system of pseudo-arithmetic operations (SPAO) <img src=image/13492447_02.gif> is a system generated by the generator <img src=image/13492447_20.gif> follows directly from results of Rybárik and Pap, while <img src=image/13492447_03.gif> is a further development of <img src=image/13492447_05.gif>. Using the meaning of entropy as a logarithmic measure in information theory, we present through examples the relation between <img src=image/13492447_06.gif> and the entropy realized by the <img src=image/13492447_07.gif>, i.e. a <img src=image/13492447_08.gif>. The paper studies the construction of relationships between entropy and <img src=image/13492447_09.gif> supported by <img src=image/13492447_03.gif>, and the connection with the Shannon entropy. For the pseudo-additive probability measure <img src=image/13492447_10.gif>, using <img src=image/13492447_07.gif>, as well as in the system <img src=image/13492447_02.gif> generated by <img src=image/13492447_21.gif>, the problem of modifying this measure by <img src=image/13492447_03.gif> is addressed. The modifications of the Pseudo-Additive Probability Measure <img src=image/13492447_11.gif> and the Induced Probability Measure <img src=image/13492447_12.gif> supported by <img src=image/13492447_13.gif> are presented, showing the relationships between the two modifications of the Pseudo-Additive Probability Measure (PAPM) <img src=image/13492447_14.gif> and the Induced Probability Measure (IPM) <img src=image/13492447_15.gif>. Further, the Bi-Pseudo-Integral for <img src=image/13492447_16.gif> and the Lebesgue integral are related to each other.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Other New Versions of Generalized Neutrosophic Connectedness and Compactness and Their Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13854]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Alaa. M. F. AL. Jumaili&nbsp; &nbsp;</p><p>The concepts of neutrosophic connectedness and compactness between neutrosophic sets find extensive applications in various fields, including sensor networks, physics, mechanical engineering, robotics and data analysis involving numerous variables. Neutrosophic set theory also plays a pivotal role in addressing complex problems in engineering, environmental science, economics, and advanced mathematical disciplines. Hence, this paper aims to extend the classical definitions of neutrosophic connectedness and compactness within neutrosophic topological spaces. We introduce new classes of neutrosophic connectedness and compactness, specifically neutrosophic δ-β-connectedness and neutrosophic δ-β-compactness, defined using a generalized neutrosophic open set known as "neutrosophic δ-β-open sets". We explore several essential properties and characterizations of these spaces and introduce new notions of neutrosophic covers, which lead to the concept of neutrosophic compact spaces. Additionally, we present characterizations related to neutrosophic δ-β-separated sets. A noteworthy feature of these concepts is their ability to model intricate connectedness networks and facilitate optimal solutions for problems involving a multitude of variables, each with degrees of acceptance, rejection, and indeterminacy. We provide relevant examples to illustrate our main findings.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Some New Kind of Contra Continuous Functions in Nano Ideal Topological Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13853]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>S. Manicka Vinayagam&nbsp; &nbsp;L. Meenakshi Sundaram&nbsp; &nbsp;and C. Devamanoharan&nbsp; &nbsp;</p><p>The main objective of this paper is to introduce a new type of contra continuous function, namely <img src=image/13435054_01.gif>, based on the concepts of the <img src=image/13435054_02.gif> set and the <img src=image/13435054_03.gif> function in Nano Ideal Topological Spaces. Contra continuity is an alteration of continuity that requires inverse images of open sets to be closed rather than open. We compare the <img src=image/13435054_01.gif> function with the <img src=image/13435054_04.gif> function and establish the independence of the <img src=image/13435054_01.gif> and <img src=image/13435054_05.gif> functions by providing suitable counterexamples. Fundamental properties of <img src=image/13435054_06.gif> with <img src=image/13435054_07.gif> and <img src=image/13435054_08.gif> are investigated. We study the behaviour of <img src=image/13435054_09.gif> with <img src=image/13435054_06.gif>. We define the <img src=image/13435054_10.gif> space and describe its relation to the <img src=image/13435054_11.gif> space and the <img src=image/13435054_12.gif> space. Characterizations of <img src=image/13435054_06.gif> based on the <img src=image/13435054_13.gif> space, the <img src=image/13435054_12.gif> space and the graph function <img src=image/13435054_15.gif> are explored. Like continuity, <img src=image/13435054_16.gif> preserves the property that it maps <img src=image/13435054_16.gif> and <img src=image/13435054_17.gif> sets to the same type of sets in the co-domain. We define the <img src=image/13435054_18.gif> space and describe its behaviour with respect to <img src=image/13435054_06.gif>. We also introduce <img src=image/13435054_19.gif> functions with an example, discuss their relation to <img src=image/13435054_06.gif> and analyse their basic properties. Compositions of functions under <img src=image/13435054_01.gif>, <img src=image/13435054_19.gif> and <img src=image/13435054_04.gif> are examined.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[Homomorphism of <img src=image/13434765_01.gif> Neutrosophic Fuzzy Subgroup over a Finite Group]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13852]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2024<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;12&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>V Dhanya&nbsp; &nbsp;M Selvarathi&nbsp; &nbsp;and M Ambika&nbsp; &nbsp;</p><p>Neutrosophic fuzzy sets are an extension of fuzzy sets. Fuzzy sets can only handle vague information; they cannot deal with incomplete and inconsistent information. Neutrosophic fuzzy sets and their combinations, however, are one technique for handling incomplete and inconsistent information. Neutrosophic fuzzy set theory provides the groundwork for a whole group of new mathematical theories and subsumes both the traditional and fuzzy counterparts. Accordingly, the area of neutrosophic fuzzy sets is being developed intensively, with the goals of strengthening the foundations of the theory, creating new applications, and enhancing its practicality in a range of real-life scenarios. Neutrosophic fuzzy sets are characterized by three components: truth (<img src=image/13434765_02.gif>), indeterminacy (<img src=image/13434765_03.gif>), and falsity (<img src=image/13434765_04.gif>). In this paper, we examine the idea of the homomorphism of implication-based (<img src=image/13434765_05.gif>) neutrosophic fuzzy subgroups over a finite group. We define <img src=image/13434765_05.gif> neutrosophic fuzzy subgroups over a finite group and <img src=image/13434765_05.gif> neutrosophic fuzzy normal subgroups over a finite group. Finally, we demonstrate some basic properties of the homomorphism of <img src=image/13434765_05.gif> neutrosophic fuzzy subgroups over a finite group.</p>]]></description>
<pubDate>Jan 2024</pubDate>
</item>
<item>
<title><![CDATA[On Nash Equilibrium Solutions for Rough Differential Games]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13806]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Abd El-Monem A. Megahed&nbsp; &nbsp;Mohamed R. Zeen El Deen&nbsp; &nbsp;and Asmaa A. Ahmed&nbsp; &nbsp;</p><p>The purpose of this paper is to investigate the Nash equilibrium concept for differential games when there is uncertainty in the information available to the players. Our study examines the problem of uncertainty in player information during the game using the "rough sets" concept, which is widely used for many such problems. Furthermore, we explore the possible alliance between continuous differential games and the rough programming approach. Our primary aim is to ascertain the Nash equilibrium for a differential game in situations where the players have uncertain information, so that the control they exert is rough and the trajectory of the system state is rough as well. We derive the necessary and sufficient conditions for the open-loop Nash equilibrium of the rough differential game. Additionally, we use the expected value operator and the trust measure of a rough interval to convert the rough problem into a crisp problem, allowing us to calculate the expected Nash equilibrium strategies and α-trust Nash equilibrium strategies for the game. Finally, a numerical example is given that outlines the steps involved in producing the rough interval of the Nash equilibrium and the system state trajectory for the rough differential game. Moreover, this example demonstrates how to obtain each crisp problem from a rough one and then determine its Nash equilibrium and the corresponding state trajectory.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Development and Isometry of Surfaces in Galilean Space G3]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13805]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>B.M. Sultanov&nbsp; &nbsp;A. Kurudirek&nbsp; &nbsp;and Sh.Sh. Ismoilov&nbsp; &nbsp;</p><p>The study of the geometry of semi-Euclidean spaces is currently a pressing problem in geometry. In the singular parts of pseudo-Euclidean spaces, a geometry associated with a degenerate metric appears; a special case of this geometry is Galilean geometry. The basic concepts of the geometry of Galilean space are given in the monograph by A. Artykbaev, where differential geometry "in the small" is studied, the first and second fundamental forms of surfaces and the geometric characteristics of surfaces are determined, and the derivational equations of surfaces and analogues of the Peterson-Codazzi and Gauss formulas are calculated. This paper studies the development and isometry of surfaces in Galilean space. The isometry of surfaces in Galilean space is divided into three types: semi-isometry, isometry and complete isometry; this separation is due to the degeneracy of the Galilean metric. The existence of a development of a surface projecting uniquely onto a plane in general position is proved, as well as conditions for isometric and completely isometric surfaces of Galilean space. We present conditions, associated with the analogue of the Christoffel symbols, that provide isometries of surfaces in Galilean space. An example of isometric, but not completely isometric, surfaces in G3 is given. The concept of surface development is generalized to Galilean space, and a development of the surface is obtained that projects uniquely onto a plane in general position. In addition, the Gaussian curvature of the surface is shown to be completely defined by the Christoffel symbols.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Limit Theorems for Functionals of Random Convex Hulls in a Unit Disk]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13804]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Isakjan Khamdamov&nbsp; &nbsp;and Azam Imomov&nbsp; &nbsp;</p><p>In this article, we study functionals of the convex hull generated by independent observations of two-dimensional random points. The points are given in polar coordinates, their components are independent of each other, the angular coordinate is uniformly distributed, and the tail of the distribution of the radial coordinate is a regularly varying function near the unit circle bounding the supporting disk. By approximating the binomial point process with an inhomogeneous Poisson one, it becomes possible to study the asymptotic properties of the main functionals of the convex hull. Using the independence of increments of Poisson processes, we find asymptotic expressions for the means and variances of the main functionals of the convex hull. Uniform boundedness of exponential moments is proved for the same functionals in the case when the convex hull is generated by an inhomogeneous Poisson point process inside the disk. The same independence of increments allows us to express the area of the convex hull as a sum of independent identically distributed random variables, with which we prove the central limit theorem for the number of vertices and the area of the convex hull. From the results obtained, we conclude that if the tail of the distribution near the boundary is heavier, then many sample points lie near the support boundary; hence the convex hull has many vertices, while the area between the convex hull and the circle, as well as the difference between the perimeter of the convex hull and the circumference, becomes negligible.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
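The setting above (uniform angle, radial law concentrating near the unit circle) is easy to experiment with numerically. The following sketch is illustrative only and is not the paper's construction: it draws points in polar coordinates with a hypothetical radial law 1 − U^β (β is an assumed tuning knob, not a parameter from the paper), builds the convex hull with Andrew's monotone chain, and counts its vertices.

```python
import math
import random

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def sample_disk_point(rng, beta=2.0):
    """Polar sampling: uniform angle; radial coordinate 1 - U**beta, an assumed
    law whose mass concentrates near the unit circle for beta > 1."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = 1.0 - rng.random() ** beta
    return (r * math.cos(theta), r * math.sin(theta))

rng = random.Random(0)
pts = [sample_disk_point(rng) for _ in range(2000)]
print(len(convex_hull(pts)))  # number of hull vertices for this sample
```

Making the radial tail heavier near r = 1 (larger β here) visibly increases the vertex count, in line with the abstract's conclusion.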
<item>
<title><![CDATA[On The Metric Dimension for The Line Graphs of Hammer and Triangular Benzene Structures]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13803]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>R. Nithya Raj&nbsp; &nbsp;R. Sundara Rajan&nbsp; &nbsp;Haewon Byeon&nbsp; &nbsp;CT. Nagaraj&nbsp; &nbsp;and G. Kokila&nbsp; &nbsp;</p><p>The metric dimension of a chemical graph is a fundamental parameter in the study of molecular structures and their properties. It is a numerical measure of the smallest set of atoms required to uniquely determine the location of every other atom within the molecule. We explore the concept of metric dimension in chemical graphs, discussing its theoretical foundations and its applications in fields such as navigation, network theory, drug design, optimization, pattern recognition, and related areas including computational chemistry and materials science. Understanding the metric dimension of chemical graphs enables the identification of crucial atoms or bonds that significantly affect the properties and behavior of molecules, aiding the design of more effective drugs, catalysts, and materials. Finding the metric dimension of an arbitrary graph is a computational challenge classified as NP-complete. A set of nodes <img src=image/13434254_01.gif> is regarded as a locating set if, for every pair of nodes <img src=image/13434254_02.gif> and <img src=image/13434254_03.gif> in the graph <img src=image/13434254_04.gif>, there is at least one node <img src=image/13434254_05.gif> in <img src=image/13434254_06.gif> such that the distance between <img src=image/13434254_02.gif> and <img src=image/13434254_05.gif> differs from the distance between <img src=image/13434254_03.gif> and <img src=image/13434254_05.gif>. 
The <img src=image/13434254_07.gif> metric dimension, denoted <img src=image/13434254_08.gif>, is the minimum size of a locating set for <img src=image/13434254_04.gif>. The primary objective of this work is to prove that, for <img src=image/13434254_09.gif>, the metric dimensions of the line graphs of the Hammer and triangular benzene structures are 2 and 3, respectively. We also establish the existence of a constant metric dimension for this class of line graphs, which includes the Hammer and triangular benzene structures.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
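For intuition, the locating-set definition above can be checked by brute force on small graphs (the exhaustive subset search is exponential, consistent with the NP-completeness the abstract mentions). A minimal sketch, not the paper's method, using a cycle graph as the example:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Shortest-path distances from vertex s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k for which some k-subset W is a locating set: for every pair
    u != v there is w in W with d(u, w) != d(v, w)."""
    nodes = sorted(adj)
    d = {v: bfs_dist(adj, v) for v in nodes}
    for k in range(1, len(nodes) + 1):
        for W in combinations(nodes, k):
            # distance vector of each vertex to the candidate locating set
            sigs = {tuple(d[w][v] for w in W) for v in nodes}
            if len(sigs) == len(nodes):  # all vectors distinct => W resolves G
                return k
    return len(nodes)

# Cycle C6: the metric dimension of any cycle C_n (n >= 3) is 2
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(metric_dimension(adj))  # 2
```

A single landmark fails on the cycle because two vertices sit at equal distance on either side of it; any two non-antipodal landmarks resolve all vertices.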
<item>
<title><![CDATA[The Number of Games to Win by Two Points]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13802]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Nahathai Rerkruthairat&nbsp; &nbsp;and Noppadon Wichitsongkram&nbsp; &nbsp;</p><p>Draws or ties sometimes occur in sports. Tiebreakers are forms of competition that break ties and decide the winner when a draw or a tie occurs. Depending on the type of tiebreaker, some end the competition sooner and some later. In this article, we are interested in calculating the expectation and variance of the number of games that will continue after a draw under tiebreakers that require players to win by two points. We focus on three win-by-two formats used in many popular sports, such as tennis, volleyball, and racquetball. By calculating the expected number of games, we can compare how many games each type of tiebreaker will approximately take to end the match. In these kinds of sports, the rules for gaining each point are usually the same. This means that there are the same finite states that the players or teams can reach at each point, and each possible state depends only on the previous state. Since a Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event, we can apply Markov chains to solve these problems.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
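As an illustration of the Markov-chain approach described above, consider the simplest win-by-two situation: after a tie, player A wins each point independently with probability p. First-step analysis on the three transient states (tied, A ahead by one, B ahead by one) gives a closed form for the expected number of further points. The sketch below is an illustrative reconstruction, not the paper's model, and checks the formula against simulation.

```python
import random

def expected_points_after_deuce(p):
    """First-step analysis for win-by-two from a tie.
    States: T (tied), A (A ahead by 1), B (B ahead by 1); absorb at lead 2.
    E_T = 1 + p*E_A + (1-p)*E_B,  E_A = 1 + (1-p)*E_T,  E_B = 1 + p*E_T,
    which solves to E_T = 2 / (1 - 2*p*(1-p))."""
    return 2.0 / (1.0 - 2.0 * p * (1.0 - p))

def simulate(p, trials=100_000, seed=1):
    """Monte Carlo estimate of the same expectation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        lead, n = 0, 0
        while abs(lead) < 2:
            lead += 1 if rng.random() < p else -1
            n += 1
        total += n
    return total / trials

print(expected_points_after_deuce(0.5))  # 4.0
print(simulate(0.5))                     # close to 4.0
```

At p = 0.5 each pair of points ends the tiebreaker with probability 1/2, so the duration is 2 × Geometric(1/2) with mean 4, matching the formula.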
<item>
<title><![CDATA[Generalized Half-Logistic Distribution Using Linear Regression Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13801]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ahmed Al-Adilee&nbsp; &nbsp;and Wasan Al-Shemmari&nbsp; &nbsp;</p><p>In this study, the generalized half-logistic distribution (GHLD) was expanded by replacing the shape parameter with a linear model, denoted by <img src=image/13434377_01.gif>. This model involves a vector of explanatory variables <img src=image/13434377_02.gif>, where <img src=image/13434377_03.gif>, together with a vector <img src=image/13434377_04.gif> of coefficients for those explanatory variables. The linear model represents several explanatory variables whose coefficients describe their effects. The proposed distribution is denoted briefly by LM-GHLD. After deriving the pdf and cdf of the LM-GHLD, many mathematical and statistical characteristics were investigated, such as the survival function, the hazard function, the moments, the moment generating function, quantiles, the Rényi entropy, and the order statistic function. The unknown parameters of the proposed distribution were estimated with a non-Bayesian method, maximum likelihood estimation (MLE). An important part of the study is a simulation, carried out by generating samples of different sizes. A goodness-of-fit measure was applied to real data sets to compare the classical distribution (GHLD) and the proposed distribution (LM-GHLD), enabling us to determine which distribution fits better. Finally, we provide some conclusions and summarize our findings.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Adomian Decomposition Method for Solving Fuzzy Hilfer Fractional Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13662]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>V. Padmapriya&nbsp; &nbsp;and M. Kaliyappan&nbsp; &nbsp;</p><p>The field of fractional calculus is concerned with differentiation and integration of arbitrary order, a concept that arises in various domains of science and engineering. The Caputo and Riemann-Liouville fractional definitions are the most familiar. Recently, Hilfer related the Caputo and Riemann-Liouville derivatives by a general formula; this connection is referred to as the Hilfer or generalized Riemann-Liouville derivative. The Hilfer fractional derivative serves as an intermediary between the Riemann-Liouville and Caputo fractional derivatives, providing a means of interpolation, and its parameters provide additional degrees of freedom. The Adomian decomposition method (ADM) is widely regarded as a highly effective mathematical technique for solving both linear and nonlinear differential equations; it provides an analytical solution in the form of a series. Motivated by the growing number of real-life applications of fractional calculus, the objective of this work is to explore the solutions of Hilfer fractional differential equations in a fuzzy sense using the ADM. The efficiency and accuracy of the proposed method are demonstrated on numerical examples, and graphical representations are provided to visualize the behavior of the solutions. The results show that as the number of terms in the series increases, the numerical results approach the exact solutions.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
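The series character of ADM described above can be seen on the simplest integer-order test problem y' = y, y(0) = 1 — a classical illustration, not the paper's fuzzy Hilfer setting. Each Adomian component is the integral of the previous one, so u_k(t) = t^k / k! and the partial sums are the Taylor series of the exact solution e^t.

```python
import math

def adm_linear_ode(n_terms=12, t=1.0):
    """Adomian decomposition for the test problem y' = y, y(0) = 1.
    Components satisfy u_{k+1}(t) = integral_0^t u_k(s) ds with u_0 = y(0),
    so u_k(t) = t**k / k! and the partial sum approximates exp(t)."""
    total, u = 0.0, 1.0          # u holds u_k evaluated at t
    for k in range(n_terms):
        total += u
        u = u * t / (k + 1)      # integrating t^k/k! gives t^(k+1)/(k+1)!
    return total

print(adm_linear_ode())          # approx e = 2.71828...
```

With 12 terms the partial sum at t = 1 already agrees with e to about nine decimal places, illustrating the convergence behavior the abstract reports.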
<item>
<title><![CDATA[Σ-uniserial Modules and Their Properties]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13661]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ayazul Hasan&nbsp; &nbsp;and Jules Clement Mba&nbsp; &nbsp;</p><p>The close association between abelian group theory and the theory of modules has been extensively studied in the literature. In fact, the theory of abelian groups is one of the principal motives for new research in module theory. As is well known, module theory proceeds by generalizing the theory of abelian groups, providing novel viewpoints on various structures for torsion abelian groups. The theory of torsion abelian groups is significant as it generates natural problems in QTAG-module theory. The notion of a QTAG (torsion abelian group like) module is one of the most important tools in module theory; its importance lies in the fact that this module can be applied to generalize torsion abelian groups accurately. Significant work on QTAG-modules has been produced by many authors, concentrating on establishing when torsion abelian groups are actually QTAG-modules. Two rather natural problems arise in connection with Σ-uniserial modules, namely: the QTAG-module M is Σ-uniserial if and only if all N-high submodules of M are Σ-uniserial, for some basic submodule N of M; and M is not a Σ-uniserial module if and only if it contains a proper (ω + 1)-projective submodule. The current work explores these two problems for QTAG-modules. Some related concepts and problems are also considered. Our overall aim is to review the relationship between aspects of group theory in the form of torsion abelian groups and the theory of modules in the form of QTAG-modules.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[A New Wavelet-based Galerkin Method of Weighted Residual Function for The Numerical Solution of One-dimensional Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13660]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Iweobodo D. C.&nbsp; &nbsp;Njoseh I. N.&nbsp; &nbsp;and Apanapudor J. S.&nbsp; &nbsp;</p><p>In this paper, we developed a new wavelet-based Galerkin method of weighted residual function. To achieve this, we considered the wavelet transform as it relates to orthogonal polynomials, developed new wavelets using the Mamadu-Njoseh polynomials, and formulated a basis function with the newly developed wavelets. We considered the method of implementing solutions with the newly developed wavelet-based Galerkin method of weighted residual function and applied it to obtain approximate solutions of some one-dimensional differential equations with Dirichlet boundary conditions. The results obtained from the newly developed method were compared with the exact solutions and with results from the classical Finite Difference Method (FDM) in the literature. It was observed that the newly developed method demonstrated high efficiency in providing approximate solutions to differential equations. The study revealed that the method converges at a good pace to the exact solution and confirmed the accuracy and effectiveness of its solutions. We used the MAPLE 18 software for all computations in this work.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[An Optimal Approach to Identify the Importance of Variables in Machine Learning Using Cuckoo Search Algorithm]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13659]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Asep Rusyana&nbsp; &nbsp;Aji Hamim Wigena&nbsp; &nbsp;I Made Sumertajaya&nbsp; &nbsp;and Bagus Sartono&nbsp; &nbsp;</p><p>Different machine learning algorithms may produce different orderings of variable importance measures even when they use an identical dataset. These differing measures make it difficult to conclude which predictor variables are the most important. Therefore, there is a need to unify those scores into a single ordering so that the analyst can draw a conclusive decision more easily. This research applied the Cuckoo Search algorithm to unify those orderings into a single one. A simulation study was conducted to verify that the approach works well in several data circumstances. We implemented the algorithm to identify the importance of variables whose mutual correlations are low, moderate, and high. The results show that the proposed variable importance measure performs best when applied to predictors that are independent of each other, and it is generally more accurate than the variable importance measures of the individual machine learning methods. The algorithm was also applied to identify the proposed important-variable measure for recognizing food insecurity in households in Indonesia. The proposed variable importance has good accuracy, and the accuracy is higher when the number of variables is greater than ten.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Identifying and Estimating Seasonal Moving Average Models by Mathematical Programming]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13658]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Rasha A. Farghali&nbsp; &nbsp;Hemat M. Abd-Elgaber&nbsp; &nbsp;and Essam A. Ahmed&nbsp; &nbsp;</p><p>In this paper, a novel method is presented for simultaneously identifying and estimating Seasonal Moving Average (SMA) models, a special case of the Seasonal Autoregressive Integrated Moving Average (SARIMA) models introduced by Box and Jenkins. To accomplish this, we utilize a mixed-integer nonlinear programming (MINLP) model, which falls within the class of optimization problems involving integer and continuous decision variables, as well as nonlinear objective functions and/or constraints. The advantage of employing MINLP lies in its ability to provide a more flexible representation of real-world problems. The aim of employing the MINLP is to identify and estimate the appropriate SMA model, specifically determining whether it is multiplicative or non-multiplicative. To evaluate the effectiveness of the proposed MINLP approach, we conducted both a simulation study and real-world applications. In the simulation study, we generated 1000 time series datasets from each of twelve SMA models, comprising six multiplicative and six non-multiplicative SMA models of different orders. Additionally, we examined the effectiveness of MINLP through two real-world applications: carbon dioxide levels data and college enrollment data. The results obtained from both the simulation study and the real-world applications consistently demonstrate the effectiveness of MINLP in accurately identifying the appropriate SMA model. These findings support the applicability and reliability of the proposed method in practical scenarios. 
Overall, our research contributes to the field of time series analysis by providing a new approach for identifying and estimating SMA models using MINLP, paving the way for improved forecasting and decision-making in various domains.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Some Properties of Cyclic and Dihedral Homology for Schemes]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13657]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Samar A. A. Quota&nbsp; &nbsp;Faten. R. Kara&nbsp; &nbsp;O. H. Fathy&nbsp; &nbsp;and W. M. Mahmoud&nbsp; &nbsp;</p><p>A scheme is a type of mathematical construction that extends the concept of an algebraic variety in a number of ways, including accounting for multiplicities and being defined over any commutative ring. In this article, we study some properties of cyclic and dihedral homology theory for schemes. We study the long exact sequence of cyclic homology of a scheme and prove some results. We introduce and study Morita equivalence in the cyclic homology of schemes and prove the main relation between the trace map and the inclusion map. Our goal is to explain product structures on the cyclic homology groups <img src=image/13433629_01.gif>. In particular, we show <img src=image/13433629_02.gif> for an algebra. We give the relations between the dihedral homology <img src=image/13433629_03.gif> and the cyclic homology <img src=image/13433629_14.gif> of schemes, namely: <img src=image/13433629_04.gif>. We explain the trace map and inclusion map of cyclic homology for a scheme algebra, which take the form <img src=image/13433629_05.gif> and <img src=image/13433629_06.gif>. For the shuffle map <img src=image/13433629_07.gif>, we obtain the long exact sequence of cyclic homology for a scheme: <img src=image/13433629_08.gif>. We give the long exact sequence of dihedral homology for a scheme: <img src=image/13433629_09.gif>. For any three <img src=image/13433629_10.gif> and <img src=image/13433629_11.gif> algebras, we write the next long exact sequence as a commutative diagram: <img src=image/13433629_12.gif>. For all <img src=image/13433629_10.gif> and <img src=image/13433629_11.gif> schemes, we give the long exact sequence of dihedral homology as: <img src=image/13433629_13.gif>.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Some Convergence Properties of a Random Closed Set Sequence]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13656]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Bourakadi Ahssaine&nbsp; &nbsp;Baraka Achraf Chakir&nbsp; &nbsp;and Khalifi Hamid&nbsp; &nbsp;</p><p>In this article, we discuss the properties of the probability law "T", called the capacity functional, and of the closely related functionals "Q" and "C" pertaining to random closed sets. We are interested in "T", the most widely used functional in random set theory. We establish that "T" takes values in the interval [0,1], and we prove, through probabilistic techniques, that it is increasing with respect to inclusion and sub-additive. Moreover, we explore the various types of convergence of a sequence of random closed sets, such as weak convergence, strong convergence (almost surely in the sense of Hausdorff), convergence in the sense of Painlevé-Kuratowski and Wijsman-Mosco, as well as convergence in probability. In the second part of our work, we prove a new corollary stating that strong convergence in the sense of Hausdorff implies convergence in probability of a sequence of random closed sets at infinity. Our proof involves the definition of the mathematical expectation of a discrete variable and the indicator variable, a random variable that takes two possible values, 0 or 1.</p>]]></description>
<pubDate>Nov 2023</pubDate>
</item>
<item>
<title><![CDATA[Properties and Applications of Klongdee Distribution in Actuarial Science]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13644]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Adisak Moumeesri&nbsp; &nbsp;and Weenakorn Ieosanurak&nbsp; &nbsp;</p><p>We introduce a novel continuous distribution known as the Klongdee distribution, a combination of the exponential distribution with parameter <img src=image/13434312_01.gif> and the gamma distribution with parameters <img src=image/13434312_02.gif>. We thoroughly examine various statistical properties that provide insight into the distribution. These properties encompass measures such as the cumulative distribution function, moments about the origin, and the moment-generating function, along with skewness, kurtosis, the coefficient of variation, and reliability measures. Furthermore, we explore parameter estimation using nonlinear least squares methods. The numerical results compare the unweighted and weighted least squares (UWLS and WLS) methods, maximum likelihood estimation (MLE), and the method of moments (MOM). Based on our findings, MLE demonstrates superior performance compared to the other estimation methods. Moreover, we demonstrate the application of this distribution in an actuarial context, specifically in the analysis of collective risk models using a mixed Poisson framework. By incorporating the proposed distribution into the mixed Poisson model and analyzing a real-life dataset, we find that the Poisson-Klongdee model outperforms alternative models. In particular, the Poisson-Klongdee model has proven to be a valuable tool for mitigating the problem of overcharges.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
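The mixed Poisson framework mentioned above can be sketched generically: draw a random mean Λ from a mixing law, then draw N | Λ ~ Poisson(Λ). The sketch below uses a gamma mixing law as a stand-in, since the Klongdee density itself is defined in the paper (gamma mixing yields the classical negative binomial); the parameter values are illustrative assumptions.

```python
import math
import random

def mixed_poisson_sample(rng, shape=2.0, scale=1.5):
    """Generic mixed Poisson draw: Lambda ~ mixing law, then N | Lambda ~ Poisson.
    Gamma mixing is an assumed stand-in for the paper's Klongdee mixing law."""
    lam = rng.gammavariate(shape, scale)
    # Knuth's product-of-uniforms Poisson sampler
    n, acc, thresh = 0, rng.random(), math.exp(-lam)
    while acc > thresh:
        acc *= rng.random()
        n += 1
    return n

rng = random.Random(7)
draws = [mixed_poisson_sample(rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)
print(mean)  # E[N] = E[Lambda] = shape*scale = 3.0 in expectation
```

Because E[N] = E[Λ] and Var(N) = E[Λ] + Var(Λ), any non-degenerate mixing law produces overdispersion relative to a plain Poisson, which is the usual motivation for mixed Poisson claim-count models.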
<item>
<title><![CDATA[Convergence of the Jordan Neutrosophic Ideal in Neutrosophic Normed Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13643]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>R. Muthuraj&nbsp; &nbsp;K. Nachammal&nbsp; &nbsp;M. Jeyaraman&nbsp; &nbsp;and D. Poovaragavan&nbsp; &nbsp;</p><p>In the context of the neutrosophic norm, this paper explores the challenge of constructing precise sequence spaces whose elements' convergence is a generalised form of Cauchy convergence. This has proven to be a crucial tool, opening the door to applications in the theory of functions and the law of large numbers. Numerous authors, including those who investigated the Euler totient matrix operator, have studied the strategy of building new sequence spaces defined as the domain of matrix operators. Recently, the Jordan totient function <img src=image/13433793_04.gif> generalised the Euler totient function <img src=image/13433793_05.gif>. In the context of neutrosophic normed spaces, we establish some sequence spaces, specifically <img src=image/13433793_01.gif>, <img src=image/13433793_02.gif> and <img src=image/13433793_03.gif>, as the domain of the triangular Jordan totient matrix operator, and investigate the ideal convergence of these sequences. These concepts build on the sort of convergence that Fast and Steinhaus presented as more general than ordinary convergence, namely statistical convergence; the form used here, due to Kostyrko et al., is known as ideal convergence. The Jordan totient operator, an infinite matrix operator, is used to arrive at a finite limit. We also construct a number of inclusion relations between the spaces as we discuss various topological and algebraic properties.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[Resolution of Linear Systems Using Interval Arithmetic and Cholesky Decomposition]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13642]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Benhari Mohamed amine&nbsp; &nbsp;and Kaicer Mohammed&nbsp; &nbsp;</p><p>This article presents an innovative approach to solving linear systems with interval coefficients efficiently. The use of intervals allows the uncertainty and measurement errors inherent in many practical applications to be considered. We focus on the solution algorithm based on the Cholesky decomposition applied to positive symmetric matrices and illustrate its efficiency by applying it to the Leontief economic model. First, we use Sylvester's criterion to check whether a symmetric matrix is positive, which is an essential condition for the Cholesky decomposition to be applicable. It guarantees the validity of our solution algorithm and avoids undesirable errors. Using theoretical analyses and numerical simulations, we show that our algorithm based on the Cholesky decomposition performs remarkably well in terms of accuracy. To evaluate our method in concrete terms, we apply it to the Leontief economic model. This model is widely used to analyze the economic interdependencies between different sectors of an economy. By considering the uncertainty in the coefficients, our approach offers a more realistic and reliable solution to the Leontief model. The results obtained demonstrate the relevance and effectiveness of our algorithm for solving linear systems with interval coefficients, as well as its successful application to the Leontief model. These advances are crucial for fields such as economics, engineering, and the social sciences, where data uncertainty can greatly affect the results of analyses. In summary, this article highlights the importance of interval arithmetic and Cholesky's method in solving linear systems with interval coefficients. 
Applying these tools to the Leontief model can help analysts better understand the impact of uncertainty and make informed decisions in a variety of fields, including economics and engineering.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
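The pipeline described above — Sylvester's criterion to certify positive definiteness, followed by a Cholesky solve — can be sketched for point-valued coefficients; extending the arithmetic to intervals, as the paper does, is omitted here, and the 2×2 system below is an illustrative example rather than the Leontief application.

```python
def minor_det(M, k):
    """Determinant of the leading k x k principal submatrix via Gaussian
    elimination with partial pivoting."""
    A = [row[:k] for row in M[:k]]
    det = 1.0
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        if abs(A[piv][i]) < 1e-12:
            return 0.0
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
    return det

def is_positive_definite(M):
    """Sylvester's criterion: all leading principal minors strictly positive."""
    return all(minor_det(M, k) > 0 for k in range(1, len(M) + 1))

def cholesky_solve(M, b):
    """Solve M x = b via M = L L^T, then forward and back substitution."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((M[i][i] - s) ** 0.5 if i == j
                       else (M[i][j] - s) / L[j][j])
    y = [0.0] * n
    for i in range(n):                       # forward: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):             # back: L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

M = [[4.0, 2.0], [2.0, 3.0]]
assert is_positive_definite(M)               # Sylvester check before factoring
print(cholesky_solve(M, [8.0, 7.0]))         # approx [1.25, 1.5]
```

Running Sylvester's check first, as the abstract advocates, avoids the square root of a negative pivot that a blind Cholesky factorization would hit on an indefinite matrix.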
<item>
<title><![CDATA[On the Problem of Solution of Non-Linear (Exponential) Diophantine Equation <img src=image/13431086_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13641]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Sudhanshu Aggarwal&nbsp; &nbsp;Shahida A. T.&nbsp; &nbsp;Ekta Pandey&nbsp; &nbsp;and Aakansha Vyas&nbsp; &nbsp;</p><p>Diophantine equations are of great importance in research. Algebraic equations with integer coefficients that are solved in integers are Diophantine equations. There is no universal method for tackling Diophantine equations, so researchers are keenly interested in developing new methods for solving them. When handling any such equation, three issues arise: whether the problem is solvable; if solvable, the possible number of solutions; and, lastly, finding the complete set of solutions. Fermat's equation and Pell's equation are the best-known Diophantine equations. Diophantine equations are used in algebra, coordinate geometry, group theory, linear algebra, trigonometry, and cryptography, and they can even be used to determine the number of rational points on a circle. In the present manuscript, the authors examine the existence of solutions of the non-linear (exponential) Diophantine equation <img src=image/13431086_02.gif>, where <img src=image/13431086_03.gif> are non-negative integers and <img src=image/13431086_04.gif> are primes such that <img src=image/13431086_05.gif> has the form <img src=image/13431086_06.gif> for a natural number n. The authors then discuss in detail some corollaries as special cases of the equation <img src=image/13431086_02.gif>. The results of the present manuscript show that the equation of the study is not satisfied by any non-negative integer values of the unknowns <img src=image/13431086_07.gif> and <img src=image/13431086_08.gif>. 
The methodology of this paper suggests a new way of solving Diophantine equations for academicians, researchers, and others interested in the field.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[Neutrosophic Generalized Pareto Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13640]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Nahed I. Eassa&nbsp; &nbsp;Hegazy M. Zaher&nbsp; &nbsp;and Noura A. T. Abu El-Magd&nbsp; &nbsp;</p><p>The purpose of this paper is to present a neutrosophic form of the generalized Pareto distribution (NGPD), which is more flexible than the existing classical distribution and handles indeterminate, incomplete, and imprecise data in a flexible manner. The NGPD is obtained as a generalization of the neutrosophic Pareto distribution, and the paper introduces its special cases, such as the neutrosophic Lomax distribution. The mathematical properties of the proposed distributions, such as the mean, variance, and moment generating function, are derived. Additionally, reliability properties, including the survival and hazard rate functions, are analyzed. Furthermore, a neutrosophic random variable for the Pareto distribution is presented, and its use is recommended when data in interval form follow a Pareto distribution and carry some sort of indeterminacy. This research addresses statistical problems involving inaccurate and vague data. The proposed NGPD model is widely used in finance to model low-probability events, so it is applied to a real-world data set to model the public debt in Egypt while dealing with neutrosophic scale and shape parameters; finally, the conclusions are discussed.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[Aspects of Algebraic Structure of Rough Sets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13499]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>S. Sangeetha&nbsp; &nbsp;and Shakeela Sathish&nbsp; &nbsp;</p><p>Rough sets are extensions of classical sets characterized by vagueness and imprecision. The main idea of rough set theory is to use incomplete information to approximate imprecise or uncertain concepts, or to treat ambiguous phenomena and problems based on observation and measurement. In the Pawlak rough set model, equivalence relations are a key concept, and equivalence classes are the foundation for lower and upper approximations. Developing an algebraic structure for rough sets allows set-theoretic properties to be studied in detail. Several researchers have studied rough sets from an algebraic perspective, and a number of structures have been developed in recent years, including rough semigroups, rough groups, rough rings, rough modules, and rough vector spaces. The purpose of this study is to demonstrate the usefulness of rough set theory in group theory. Several papers have investigated roughness in algebraic structures by substituting an algebraic structure for the universe set. In this paper, rough groups are defined using upper and lower approximations of rough sets from a finite universe instead of considering the whole universe. We consider a finite universe <img src=image/13433180_01.gif> together with a relation <img src=image/13433180_02.gif> which classifies the universe into equivalence classes, and we identify all rough sets with respect to this relation. The upper and lower approximated sets, taken separately, form a rough group equivalence relation (<img src=image/13433180_03.gif>) which partitions the group (<img src=image/13433180_04.gif>) into equivalence classes. 
In this paper, the rough group approximation space (<img src=image/13433180_05.gif>) is defined along with upper and lower approximations, and properties of subsets of <img src=image/13433180_06.gif> with respect to rough group equivalence relations are illustrated.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[MID-units in Right Duo-seminearrings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13498]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>S. Senthil&nbsp; &nbsp;and R. Perumal&nbsp; &nbsp;</p><p>In this paper, we focus on a subclass of duo-seminearrings called right duo-seminearrings, and on the algebraic properties and peculiarities of mid-units within this class. As a logical extension of the concept of mid-identities in semirings, the concept of mid-units in right duo-seminearrings is introduced. Mid-units are elements with both left and right invertibility, making them essential for understanding the structure and behaviour of right duo-seminearrings. In particular, we examine the interaction between idempotents and seminearring mid-units. We also investigate regular right duo-seminearrings that are semilattices of subseminearrings with mid-units, and we establish necessary and sufficient conditions for a duo-seminearring to have a mid-unit. The aim of this work is an extensive study of the algebraic structure of right duo-seminearrings, with the major objective of further developing their theory in order to identify special structures of right duo-seminearrings. Throughout the research, rigorous proofs are provided to support the theoretical developments and ensure the validity of the findings. Concrete examples are also presented to illustrate the concepts and facilitate a better understanding of the algebraic structures associated with duo-seminearrings and mid-units. These examples serve as valuable tools for researchers and practitioners interested in applications of right duo-seminearrings and mid-units in their respective fields. 
Due to their applicability in domains such as computer science, cryptography, and coding theory, duo-seminearrings, which generalise both semirings and duo-rings, have received substantial attention in algebraic research.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[Generalization of Riemann-Liouville Fractional Operators in Bicomplex Space and Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13497]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mahesh Puri Goswami&nbsp; &nbsp;and Raj Kumar&nbsp; &nbsp;</p><p>In this article, we generalize the Riemann-Liouville fractional differential and integral operators so that they can be applied to functions of a bicomplex variable. For this purpose, we consider the bicomplex Cauchy integral formula and some contours in bicomplex space, and we illustrate these operators through examples. We also establish some significant properties of these operators, including the bicomplex analytic behavior of generalized bicomplex functions through Pochhammer contours, the law of exponents, the generalized Leibniz rule along with a description of the region of convergence, and the generalized chain rule for Riemann-Liouville fractional operators of bicomplex order. As an application, we construct fractional Maxwell's type equations in vacuum and source-free domains equipped with the Riemann-Liouville derivative operator. To this end, we define bicomplex grad, div, and curl operators with the help of the newly defined operators. The advantage of this fractional construction of Maxwell's equations is that it may be used to build fractional non-local electronics in bicomplex space. By considering bicomplex vector fields for the respective domains, we reduce the number of these fractional Maxwell's type equations by half, which makes it easier to extract electric and magnetic fields from the bicomplex vector fields.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[Jacobson Graph of Matrix Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13496]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Siti Humaira&nbsp; &nbsp;Pudji Astuti&nbsp; &nbsp;Intan Muchtadi Alamsyah&nbsp; &nbsp;and Edy Tri Baskoro&nbsp; &nbsp;</p><p>Some researchers have studied properties of the Jacobson graph of commutative rings. In this study, we extend these results by examining the Jacobson graph of a non-commutative ring with identity, focusing on the case of matrix rings. Initially, we update the definition of the Jacobson graph of non-commutative rings as a directed graph; we then find that the Jacobson graph of matrix rings is undirected. By viewing a matrix as a linear transformation, we can classify matrices based on rank. The main result is that the ordering of matrix ranks is proportional to the ordering of the degrees of the matrices as vertices of the graph, so that one can identify the maximum and minimum degrees in this graph. Sequentially, we describe the graph properties starting from the Jacobson graph of matrices over fields, then extending to the Jacobson graph of matrices over local commutative rings and the Jacobson graph of matrices over non-local rings. In the end, we also give different results on the Jacobson graph of triangular matrices. The main contribution of this paper is to relate aspects of linear algebra, in the form of matrix rings, to combinatorics, in the form of the diameter and vertex degrees of this graph.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[The Generalized Inverse of Picture Fuzzy Matrices]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13495]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>V.Kamalakannan&nbsp; &nbsp;P.Murugadas&nbsp; &nbsp;and M.Kavitha&nbsp; &nbsp;</p><p>The generalized inverse is crucial in matrix theory. In many applications, such as control systems, robotics, and signal processing, the generalized inverse of matrices is critical. The generalized inverse of a picture fuzzy matrix is critical to solving a variety of real-world problems. Because of their ability to handle uncertain and imprecise medical data, applications of the generalized inverse of picture fuzzy matrices have gained significant attention in the medical field. Numerous researchers have investigated generalized inverses of fuzzy matrices and intuitionistic fuzzy matrices. The picture fuzzy set is an effective mathematical model for dealing with uncertain real-world issues. The picture fuzzy matrix is a generalization of the classical fuzzy matrix and the intuitionistic fuzzy matrix. In this research, a method for determining the generalized inverse (g-inverse) of a picture fuzzy matrix is implemented. In addition, the concept of a standard basis for picture fuzzy vectors is established. A few results related to the g-inverse of a picture fuzzy matrix are presented with relevant examples. An algorithm for evaluating the generalized inverse of a picture fuzzy matrix is provided. This study concludes with an application of the g-inverse of a picture fuzzy matrix.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[A Joint Chance Constrained Programming with Bivariate Dagum Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13494]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Khalid M. El-khabeary&nbsp; &nbsp;Afaf El-Dash&nbsp; &nbsp;Nada M. Hafez&nbsp; &nbsp;and Samah M. Abo-El-hadid&nbsp; &nbsp;</p><p>Joint chance-constrained programming (JCCP) is regarded as one of the most useful stochastic programming techniques. It is well suited to solving uncertain real-world problems, especially in economics and the social sciences, where some of the model parameters are positively dependent random variables that follow well-known probability distributions. In this paper, we consider a linear JCCP problem in which some right-hand side random parameters are dependent and follow Dagum distributions. Firstly, we derive a bivariate Dagum distribution with seven parameters whose marginals follow the three-parameter Dagum distribution; this proposed bivariate Dagum distribution is based on the Farlie-Gumbel-Morgenstern copula (as presented in theorem (2.1)). Secondly, the proposed bivariate distribution is used in the context of the JCCP technique to transform a linear JCCP model into an exact equivalent deterministic nonlinear programming model through theorem (3.1). Thirdly, through theorem (3.2), we prove that the obtained exact equivalent deterministic nonlinear programming model is convex, hence any nonlinear programming method can be used to solve it and find the global optimal solution. Finally, in order to demonstrate how to convert a linear JCCP model into an equivalent deterministic nonlinear programming model and solve it using the cutting plane method, a numerical example is included.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[A Study on Tripled Fixed Point Results in G<sub>JS</sub>-Metric Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13493]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>D. Srilatha&nbsp; &nbsp;and V. Kiran&nbsp; &nbsp;</p><p>The generalization of metric spaces is an evergreen topic of interest to many researchers. To generalize a metric space, researchers have proposed various methods, such as weakening one condition in the definition of a metric or combining the notion of one metric space with that of one or more other metric spaces. Recently, S<sup>JS</sup>–metric space and S<sub>b</sub>-metric space have been introduced by combining the notion of S-metric space with JS-metric space and b-metric space respectively. Similarly, G<sub>b</sub>–metric space has been introduced as a generalization of G–metric space using the b–metric. This notion motivated the present study. The purpose of this article is to introduce G<sub>JS</sub>-metric space and to present some fixed point theorems in G<sub>JS</sub>-metric space. In introducing the idea of G<sub>JS</sub>-metric space, we combine the notions of two metric spaces, namely G-metric space and JS-metric space. We begin with some basic definitions needed to introduce the G<sub>JS</sub>–metric and then proceed with the necessary standard topological concepts of G<sub>JS</sub>-metric space. Using these topological concepts, we establish some specific and principal results on G<sub>JS</sub>-metric space. Further, by providing suitable examples wherever required, we demonstrate the mutual independence of G-metric space, G<sub>JS</sub>-metric space and JS-metric space. Finally, we establish conditions for the existence of a tripled fixed point and verify its uniqueness by considering various cases on G<sub>JS</sub>-metric space.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[Study of Intuitionistic Fuzzy Super Matrices and Its Application in Decision Making]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13492]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Siddharth Shah&nbsp; &nbsp;Rudraharsh Tewary&nbsp; &nbsp;Manoj Sahni&nbsp; &nbsp;Ritu Sahni&nbsp; &nbsp;Ernesto Leon Castro&nbsp; &nbsp;and Jose Merigo Lindahl&nbsp; &nbsp;</p><p>Recent developments in fuzzy theory have been of great use in providing a framework for understanding situations involving decision-making. However, these tools have limitations, such as the fact that multi-attribute decision-making problems cannot be described in a single matrix. Fuzzy and intuitionistic fuzzy matrices are important tools for these types of problems since they can help to solve them. We present a new super matrix theory in the intuitionistic fuzzy environment in order to overcome these restrictions. This theory can readily cope with problems that involve numerous attributes while addressing both belongingness and non-belongingness criteria. It thereby introduces a fresh perspective that enables us to generalize our findings and arrive at sounder conclusions. For the theoretical development, we define several kinds of intuitionistic fuzzy super matrices and present a number of essential algebraic operations to make the theory more applicable to real-world situations. One multi-criteria decision-making problem based on super matrix theory is discussed here to validate and illustrate the applicability of the established findings. In addition, we suggest a general multi-criteria decision-making algorithm that makes use of intuitionistic fuzzy super matrix theory. This algorithm is more dynamic than both intuitionistic fuzzy matrix and fuzzy super matrix theories, and it can be applied to the resolution of a wide range of issues. 
The proposed theory is validated using a real-world example that demonstrates its importance.</p>]]></description>
<pubDate>Sep 2023</pubDate>
</item>
<item>
<title><![CDATA[NE-Nil Clean Rings and Their Generalization]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13441]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Renas T. M.Salim&nbsp; &nbsp;and Nazar H. Shuker&nbsp; &nbsp;</p><p>This article presents the concept of an NE-nil clean ring, which is a generalization of the strongly nil clean ring. A ring R is called NE-nil clean if, for every a in R, there exists a<sub>1</sub> in R such that aa<sub>1</sub> = <img src=image/13433325_01.gif> with a − a<sub>1</sub> = q and a<sub>1</sub>q = qa<sub>1</sub>, where q is nilpotent and <img src=image/13433325_01.gif> is idempotent. This article&apos;s aim is to introduce this new type of ring, the NE-nil clean ring, and provide its fundamental properties. We also establish the relationship between NE-nil clean rings and 2-Boolean rings. Additionally, we demonstrate that the Jacobson radical <img src=image/13433325_02.gif> and the right singular ideal <img src=image/13433325_03.gif> over an NE-nil clean ring are nil ideals. Among other results, we prove that every strongly nil clean ring and every weak * nil clean ring is NE-nil clean, and we establish that strongly 2-nil clean rings and NE-nil clean rings are equivalent. Furthermore, we introduce and investigate the NT-nil clean ring, that is, a ring in which for every a in R there exists a<sub>1</sub> in R such that aa<sub>1</sub> = t with a − a<sub>1</sub> = q and a<sub>1</sub>q = qa<sub>1</sub>, where t is tripotent and q is nilpotent, and we show that these rings are a generalization of NE-nil clean rings. We provide the basic properties of these rings and explore their relationship with NE-nil clean and Zhou rings.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[A New Procedure for Multiple Outliers Detection in Linear Regression]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13440]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ugah Tobias Ejiofor&nbsp; &nbsp;Arum Kingsley Chinedu&nbsp; &nbsp;Charity Uchenna Onwuamaeze&nbsp; &nbsp;Everestus Okafor Ossai&nbsp; &nbsp;Henrrietta Ebele Oranye&nbsp; &nbsp;Nnaemeka Martin Eze&nbsp; &nbsp;Mba Emmanuel Ikechukwu&nbsp; &nbsp;Ifeoma Christy Mba&nbsp; &nbsp;Comfort Njideka Ekene-Okafor&nbsp; &nbsp;Asogwa Oluchukwu Chukwuemeka&nbsp; &nbsp;and Nkechi Grace Okoacha&nbsp; &nbsp;</p><p>In this paper, a simple asymptotic test statistic for identifying multiple outliers in linear regression is proposed. Sequential methods of multiple outlier detection test for the presence of a single outlier each time the procedure is applied. That is, the most extreme outlying observation (the observation with the largest absolute internally studentized residual from the original fit of the model to the entire data set) is tested first. If the test identifies this observation as an outlier, the observation is deleted and the model is refitted to the remaining (reduced) observations. The observation with the next largest absolute internally studentized residual from the reduced sample is then tested, and so on. This procedure of deleting observations and recomputing studentized residuals continues until the null hypothesis of no outliers fails to be rejected. In this work, however, our procedure computes and uses only one set of internally studentized residuals, obtained from fitting the model to the original data, throughout the entire test; hence the cycle of deleting an observation, refitting the model to the reduced data, and recomputing the absolute internally studentized residuals at each stage is avoided. 
The test statistic is incorporated into a procedure that entails a sequential application of a function of the internally studentized residuals. The procedure is a straightforward multistage method and is based on a result giving large-sample properties of the internally studentized residuals. Approximate critical values of the test statistic are obtained via approximations based on the Bonferroni inequality, since exact values are not available. The new test statistic is very simple to compute and is efficient and effective in large data sets, where more complex methods are difficult to apply because of their enormous computational demands. The results of the simulation study and numerical examples clearly show that the proposed test statistic is very successful in identifying outlying observations.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Overtrees and Their Chromatic Polynomials]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13439]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Iakov M. Erusalimskiy&nbsp; &nbsp;and Vladimir A. Skorokhodov&nbsp; &nbsp;</p><p>In this paper, graphs called overtrees are introduced and studied. These are connected graphs that contain a single simple cycle; in terms of the number of edges, they are the connected graphs immediately following trees. An overtree can be obtained from a tree by adding an edge connecting two non-adjacent vertices. The same class of graphs can also be defined as the class of graphs obtained from trees by replacing one vertex of the tree with a simple cycle. The main characteristics of an overtree are <img src=image/13433172_02.gif>, the number of vertices, and <img src=image/13433172_03.gif>, the number of vertices of its simple cycle (<img src=image/13433172_01.gif>). A formula for the chromatic polynomial of an overtree is obtained, determined by the characteristics <img src=image/13433172_02.gif> and <img src=image/13433172_03.gif> only. As a consequence, a formula is obtained for the chromatic function of a graph built from a tree by replacing some of its vertices (possibly all) with simple cycles of arbitrary length. It follows from these formulas that any overtree with an even-length cycle can be colored with two colors, and any overtree with an odd-length cycle with three colors. The same is true for graphs obtained from trees by replacing some vertices with simple cycles.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Fractional Differential Equations and Matrix Bicomplex Two-parameter Mittag-Leffler Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13438]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>A. Thirumalai&nbsp; &nbsp;K. Muthunagai&nbsp; &nbsp;and M. Kaliyappan&nbsp; &nbsp;</p><p>The skew field of Quaternions is the best-known extension of the field of Complex numbers. The beauty of the Quaternions is that they form a skew field, but the handicap is the loss of commutativity. Thus the four-dimensional algebra of Bicomplex numbers, which contains the Complex numbers as a subalgebra and preserves commutativity, came into existence by considering two imaginary units. The conventional calculus is generalized using Fractional calculus, which extends derivatives of integer order to fractional order. Due to their vast applications in various disciplines of Science and Engineering, Mittag-Leffler functions have become prominent. Our contribution here combines all three streams mentioned above. In our research findings, bicomplex two-parameter Mittag-Leffler functions have been obtained as the solutions of a set of linear fractional differential equations in bicomplex space. A block diagonal form of a square matrix <img src=image/13431569_01.gif> is a diagonal matrix whose principal diagonal elements are square matrices <img src=image/13431569_02.gif>, the diagonal elements of each <img src=image/13431569_02.gif> lying along the diagonal of <img src=image/13431569_01.gif>. A Jordan block is an upper triangular matrix with <img src=image/13431569_03.gif> on the principal diagonal, 1s just above the principal diagonal, and all other entries 0. A Jordan canonical form is a block diagonal matrix in which each block is a Jordan block. The minimal polynomial of a matrix <img src=image/13431569_01.gif> is the monic polynomial of least degree annihilating <img src=image/13431569_01.gif>. 
Using the methods of the minimal polynomial and the Jordan canonical form, we have computed matrix Mittag-Leffler functions. The solutions obtained for the numerical examples have been visualized and interpreted using MATLAB.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[To Enhance New Interval Arithmetic Operations in Solving Linear Programming Problem Using Interval-valued Trapezoidal Neutrosophic Numbers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13437]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S Sinika&nbsp; &nbsp;and G Ramesh&nbsp; &nbsp;</p><p>In real-life scenarios, indeterminacy arises everywhere, in fields including physics, mathematics, economics, philosophy, and the social sciences. It occurs whenever prediction is difficult: when no predetermined outcome is available, or when the outcome is not fixed and multiple outcomes are possible. Overcoming indeterminacy is a prominent duty for anyone seeking a confusion-free society, and the neutrosophic concept came into force to analyze indeterminacy explicitly. By contrast, a fuzzy set assigns only a membership grade, and an intuitionistic fuzzy set allocates membership and non-membership grades to elements. Decision-makers can use neutrosophic settings to model uncertainty and ambiguity in complex systems for flexible analysis, and a neutrosophic environment with interval numbers allows such situations to be handled efficiently. Hence we utilize interval-valued trapezoidal neutrosophic numbers for greater flexibility; a trapezoidal number together with interval truth, interval indeterminacy, and interval falsity are the parameters of these neutrosophic numbers. De-neutrosophication into crisp numbers reintroduces vagueness in real-life circumstances, so our primary goal is to develop a new de-neutrosophication strategy that yields an interval number instead of a crisp number. This paper provides an overview of de-neutrosophication and a new ranking technique based on interval numbers, together with some extended neutrosophic linear programming theorems. Further, an interval version of the simplex method and the Robust Two-Step Method (RTSM) are used to solve an interval-valued trapezoidal neutrosophic linear programming problem. 
Finally, this paper highlights the limitations and advantages of the proposed technique for improving problem-solving in a wide range of fields.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Self-Adjoint Operators in Bilinear Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13436]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Sabarinsyah&nbsp; &nbsp;Hanni Garminia&nbsp; &nbsp;Pudji Astuti&nbsp; &nbsp;and Zelvin Mutiara Leastari&nbsp; &nbsp;</p><p>In this research, a bilinear form is regarded as an extension of the inner product, since a symmetric bilinear form is equivalent to an inner product over the field of real numbers. Concepts in bilinear space, such as the orthogonality of two vectors, the orthogonal complement of a subspace, the adjoint of a linear operator, and closed subspaces, are defined as extensions of the corresponding concepts in inner product spaces. In the context of a cap subspace, we identify necessary and sufficient conditions for a linear operator in a Hilbert space to be continuous. These results open up the opportunity to introduce the concept of pseudo-continuous linear mappings in bilinear spaces. We obtain the result that the space of pseudo-continuous linear mappings in bilinear spaces is related to the space of linear mappings that admit an adjoint. We also obtain the result that the structure of bounded linear operators on Hilbert spaces can be extended to pseudo-continuous operator structures in bilinear spaces. In this study, we generalize the properties of self-adjoint operators on infinite-dimensional inner product spaces to bilinear spaces, including closure under addition and scalar multiplication, commutative properties, properties of inverse operators, properties of the zero operator, properties of polynomial operators over real fields, and the orthogonality of eigenspaces belonging to different eigenvalues.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Hybrid Correlation Coefficient of Spearman with MM-Estimator]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13435]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Siti Hajar binti Abu Bakar&nbsp; &nbsp;Muhamad Safiih Bin Lola&nbsp; &nbsp;Anton Abdulbasah Kamil&nbsp; &nbsp;Nurul Hila Zainuddin&nbsp; &nbsp;and Mohd Tajuddin Abdullah&nbsp; &nbsp;</p><p>The Spearman rho nonparametric correlation coefficient is widely used to measure the strength and degree of association between two variables. However, outliers in the data can skew the results, leading to inaccurate conclusions, as the Spearman correlation coefficient is sensitive to outliers. Thus, a robust approach is used to construct a model that is highly resistant to data contamination. The robustness of an estimator is measured by the breakdown point, the smallest fraction of outliers in a sample that can affect the estimator entirely. To overcome this problem, the aim of this study is two-fold. Firstly, a robust Spearman correlation coefficient model based on the MM-estimator, called the MM-Spearman correlation coefficient, is proposed. Secondly, the performance of the proposed model is tested by Monte Carlo simulation and by contaminated air pollution data from Kuala Terengganu, Terengganu, Malaysia, with the data contaminated with 10% to 50% outliers. The properties of the MM-Spearman correlation coefficient were evaluated by statistical measures such as the standard error, mean squared error, root mean squared error and bias. The MM-Spearman correlation coefficient model outperformed the classical model, producing significantly smaller standard error, mean squared error, and root mean squared error values. The robustness of the model was likewise evaluated using the breakdown point. 
The hybrid MM-Spearman correlation coefficient model demonstrated high robustness and efficiently handled data contamination of up to 50%. A limitation of the study is that the model can only overcome data contamination up to a maximum of 50%. Despite this limitation, the proposed model provides accurate and efficient results, enabling management authorities to make sound decisions without being affected by contaminated data. The MM-Spearman correlation coefficient model thus provides a valuable tool for researchers and decision-makers, allowing them to analyze data with a high degree of accuracy and robustness, even in the presence of outliers.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Alternative Algebra for Multiplication and Inverse of Interval Number]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13434]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Mashadi&nbsp; &nbsp;Abdul Hadi&nbsp; &nbsp;and Sukono&nbsp; &nbsp;</p><p>Recently, many forms of interval arithmetic have been proposed. Some are defined only for nonnegative interval numbers, whereas others cover all forms of intervals. However, the many arithmetic forms that have been provided differ little from one another, particularly for addition and subtraction. For multiplication, division, or inversion, many types of operations are offered, but the problem remains how to determine the inverse of an interval number. Many alternatives have been offered to determine the inverse of an interval number <img src=image/13431526_01.gif>, but they work only for certain cases, and in many cases we have <img src=image/13431526_02.gif> which is not equal to the interval number <img src=image/13431526_03.gif>. Based on these conditions, this article analyzes the issues with several existing interval algebras and, based on the analysis, proposes an alternative to determine the form of multiplication and inverse of an interval number. The construction begins by defining the positivity of an interval number with mid-point <img src=image/13431526_04.gif>, and then algebraic operations, especially multiplication, are constructed. From the multiplication operation, we can construct the inverse form of an interval number <img src=image/13431526_01.gif>. Furthermore, it is proven that for interval numbers <img src=image/13431526_05.gif> where <img src=image/13431526_06.gif>, there is an interval number <img src=image/13431526_07.gif> such that <img src=image/13431526_08.gif> holds.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Optimal Stochastic Allocation in Multivariate Stratified Sampling]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13433]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Mahfouz Maha I.&nbsp; &nbsp;Rashwan Mahmoud M.&nbsp; &nbsp;and Khadr Zeinab A.&nbsp; &nbsp;</p><p>Optimal allocation of a stratified sample is obtained either by minimizing the variance of the sample estimate for a fixed total cost of the survey, or by minimizing the total cost of the survey for a fixed precision of the estimate. In practice, the survey cost and the variance of the estimate move in opposite directions; that is, minimizing either one increases the other. Moreover, due to uncertainty in the population data, the variances as well as the costs should be treated as random variables. In this paper, a multivariate optimal stochastic compromise allocation is proposed using a multi-objective mathematical programming model that simultaneously minimizes both the total cost of the survey and the individual variances of the overall stratified mean of each characteristic of interest. The proposed stochastic programming model is solved using the chance-constrained programming technique. The proportional increase in the variance of the estimator under the optimum variance and under the optimum cost is set as a constraint, upper-bounded by a pre-determined quantity. A simulation-based comparative study is conducted to assess the performance of the proposed allocation against other optimal allocation techniques. Based on the comparison criteria, the findings show that the suggested model produces the most efficient estimators with the highest precision, and an efficient allocation of the sample size to the strata that accounts for differences in the strata sizes and the variation within strata.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Existence, Uniqueness, and Stability Results for Fractional Differential Equations with Lacunary Interpolation by the Spline Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13432]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ridha G. Karem&nbsp; &nbsp;Karwan H. F. Jwamer&nbsp; &nbsp;and Faraidun K. Hamasalh&nbsp; &nbsp;</p><p>Although there are theoretical conclusions about the existence, uniqueness, and properties of solutions to ordinary and partial differential equations, usually only the simplest particular problems can be solved explicitly, especially when nonlinear terms are involved, so approximations are typically developed. The main goal of this paper is to investigate and improve approximate solutions, as well as new solution techniques proposed here for the first time, for the fractional-order initial value problem (1) using lacunary interpolation with a fractional-degree spline function. From a practical standpoint, the numerical solution of these differential equations is crucial because only a small portion of such equations can be solved analytically. For fractional differential equations that are sensitive to the initial conditions, we provide a fractional spline approach. The polynomial-coefficient spline interpolation is constructed using the Caputo fractional integral and derivative. For the given spline function, a stability analysis is completed after error bounds are investigated. The numerical justification of the suggested technique uses three cases. The outcomes demonstrate how effective the fractional spline technique is in interpolating coefficients with fractional polynomials. Finally, to demonstrate the effectiveness and correctness of the suggested strategy, general-purpose programs are created in MATLAB and applied to a number of instructive cases.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Some Results on Sequences in Banach Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13346]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>B. M. Cerna Maguina&nbsp; &nbsp;and Miguel A. Tarazona Giraldo&nbsp; &nbsp;</p><p>In this work, we prove in a very particular way the Dvoretzky-Rogers, Schur, and Orlicz theorems, and Theorem 14.2, in the versions presented in the text [3]. Our proofs of these theorems consist in establishing an appropriate link between the object of study and the relation affirming that, for any <img src=image/13427827_01.gif> real numbers <img src=image/13427827_02.gif>, there exists a unique real number <img src=image/13427827_03.gif> such that <img src=image/13427827_04.gif>. Once this link is established, we use the definition of weak or strong convergence together with the Hahn-Banach theorem to obtain the desired results. The relation <img src=image/13427827_05.gif> is obtained by decomposing the Hilbert space <img src=image/13427827_06.gif> as the direct sum of a closed subspace and its orthogonal complement. Since the dimension of the space <img src=image/13427827_06.gif> is finite, any linear functional defined on <img src=image/13427827_06.gif> is continuous, which guarantees that the kernel of that linear functional is closed in <img src=image/13427827_06.gif>. Therefore, the space <img src=image/13427827_06.gif> decomposes as the direct sum of the kernel of the continuous linear functional <img src=image/13427827_10.gif> and its orthogonal complement, that is: <img src=image/13427827_07.gif>, where the dimension of ker <img src=image/13427827_08.gif> and the dimension of <img src=image/13427827_09.gif>.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[A New Type of Single Server Queue Operating in A Multi-level Environment with Customer Impatience]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13345]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Akshaya Ramesh&nbsp; &nbsp;and S. Udayabaskaran&nbsp; &nbsp;</p><p>A new type of single-server queue is considered, in which the server asks for an assignment in a multi-level environment and customers develop impatience during the assignment process. The environment has N levels, and the server is assigned to operate in one of these levels with level-dependent arrival and service rates. Customers arrive at the system at all times, and the system has an infinite buffer. The assignment is done by a random switch, which can initiate an assignment process only if at least one customer is in the system. A server working in any level of the environment reports to the random switch after serving the last customer in that level. Customers are not flushed out at any time. The random switch initiates an assignment process immediately at the epoch of arrival of a customer to the system. The assignment time is random, and during the assignment period customers are permitted to join the system. Once the assignment process starts, each customer waiting in the buffer starts a random impatience timer and leaves the system if the timer expires before the server assignment is made. For this model, steady-state probabilities are found and a performance analysis is carried out.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[A Piecewise Linear Collocation with Closed Newton Cotes Scheme for Solving Second Kind Fredholm Integral Equation (FIE) via Half-Sweep SOR Iteration]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13344]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nor Syahida Mohamad&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;Azali Saudi&nbsp; &nbsp;and Nur Farah Azira Zainal&nbsp; &nbsp;</p><p>In this paper, an efficient and reliable algorithm is established to solve the second kind of FIE based on a lower-order piecewise polynomial and a lower-order quadrature method, namely the Half-Sweep Composite Trapezoidal (HSCT) rule, which is used to discretize any integral term. Furthermore, building on the complexity-reduction benefit of the half-sweep iteration concept presented in previous studies based on the cell-centered approach, this paper derives an HSCT piecewise linear collocation approximation equation generated from the discretization of the proposed problem, considering a distribution of node points of vertex-centered type. Using half-sweep collocation node points over the linear collocation approximation equation, we construct a system of HSCT linear collocation approximation equations whose coefficient matrix is large-scale and dense. Furthermore, to attain the piecewise linear collocation solution of this linear system, we consider the efficient Half-Sweep Successive Over-Relaxation (HSSOR) iterative method. Several numerical experiments with the proposed iterative method were carried out on three test examples, and the results, based on three parameters, namely iteration count, execution time, and maximum absolute error, were recorded and compared against two other iterative methods, namely Full-Sweep Gauss-Seidel (FSGS) and Half-Sweep Gauss-Seidel (HSGS).</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[A New Conjugate Gradient Algorithm for Minimization Problems Based on the Modified Conjugacy Condition]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13343]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Dlovan Haji Omar&nbsp; &nbsp;Salah Gazi Shareef&nbsp; &nbsp;and Bayda Ghanim Fathi&nbsp; &nbsp;</p><p>Optimization refers to the process of finding the best possible solution to a problem within a given set of constraints. It involves maximizing or minimizing a specific objective function while adhering to specific constraints. Optimization is used in various fields, including mathematics, engineering, economics, computer science, and data science, among others. The objective function can be a simple equation, a complex algorithm, or a mathematical model that describes a system or process. Various optimization techniques are available, including linear programming, nonlinear programming, genetic algorithms, simulated annealing, and particle swarm optimization, among others; these techniques use different algorithms to search for the optimal solution to a problem. This paper concerns unconstrained optimization, whose main goal is to minimize an objective function of real variables with no restrictions on their values. In this study, based on a modified conjugacy condition, we offer a new conjugate gradient (CG) approach for nonlinear unconstrained optimization problems. The new method satisfies the descent condition and the sufficient descent condition. We compare the numerical results of the new method with the Hestenes-Stiefel (HS) method. Our new method is quite effective in terms of the number of iterations (NOI) and the number of function evaluations (NOF), as demonstrated by numerical results on certain well-known nonlinear test functions.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Properties of Classes of Analytic Functions of Fractional Order]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13342]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>K R Karthikeyan&nbsp; &nbsp;and Senguttuvan Alagiriswamy&nbsp; &nbsp;</p><p>The study of univalent function theory is vast and complicated, so simplifying assumptions are necessary. In view of the Riemann mapping theorem, the most apt simplification is to replace an analytic function defined on an arbitrary domain with an analytic function defined on the unit disc and having a Taylor series expansion of the form <img src=image/13430763_01.gif>. The powers of the series are usually integers, so the prerequisite results likewise support the study of analytic functions whose series expansions have integer powers. The main deviation presented here is that we define a subclass of analytic functions using a Taylor series whose powers are non-integers. To make this study more comprehensive, the Janowski function, which maps the unit disc onto a right half-plane, is used in conjunction with two primary tools, namely subordination and the Hadamard product. Motivated by the well-known class of λ-convex functions, we define a fractional differential operator which is a convex combination of two analytic functions. Using this fractional differential operator, we introduce and study a new class of analytic functions involving a conic region impacted by the Janowski function. Necessary and sufficient conditions, coefficient estimates, and growth and distortion bounds are obtained for the defined function class. Since studies of subclasses of analytic functions with fractional powers are rare, we point out several closely related studies by various authors. The superordinate function, however, is a familiar function with many applications.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Kumaraswamy Generalized Half-Logistic Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13341]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Wasan AL Shemmari&nbsp; &nbsp;and Ahmed AL Adilee&nbsp; &nbsp;</p><p>Statistical distributions play an essential part in the process of interpreting experimental data; nevertheless, choosing a distribution appropriate for the available data is not an easy task. Extending a known family of distributions to construct a new one is a long-honored technique. We suggest a new distribution named the Kumaraswamy generalized half-logistic distribution (KW-GHLD), obtained by adding two parameters to the existing model to raise its ability to fit complex data sets. Many mathematical and statistical properties are investigated, such as the survival function, the hazard function, the moments, the moment generating function, the incomplete moments, the Renyi entropy, the stochastic ordering, the probability-weighted moments, the order statistics, and the quantile function. The maximum likelihood method is utilized to estimate the unknown parameters of the KW-GHLD. We study the efficacy of the proposed distribution by applying it to a real data set, assessed with goodness-of-fit measures (AIC, BIC, CAIC, and QHIC), and by comparing the outcomes with those obtained from the original distribution (GHLD); the proposed distribution produced the best outcomes, which allowed us to determine that it is effective. Finally, we present several conclusions related to our findings.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Performance Analysis of a Markovian Model for Two Heterogeneous Servers Accompanied by Retrial, Impatience, Vacation and Additional Server]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13340]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>G. Vinitha&nbsp; &nbsp;P. Godhandaraman&nbsp; &nbsp;and V. Poongothai&nbsp; &nbsp;</p><p>A Markovian retrial queueing model with two heterogeneous servers accompanied by an additional server, impatience behavior, and vacation is presented in this research paper. An arriving customer who finds an accessible server gets immediate service. Otherwise, if both servers are engaged, an entering customer joins the orbit to retry and obtain service after some random time. If any customer in the orbit finds that the waiting time is longer than expected, they may leave without receiving service. We consider two servers with different service rates that provide service on a "First Come, First Served" basis. When the number of customers in orbit grows, an additional server is instantly activated to reduce the queue size. After the orbit becomes empty, the server goes for maintenance activity. A practical application is given to justify our model. The proposed model is formulated as a birth-death process, with the governing equations obtained from the Chapman-Kolmogorov equations. Finally, we solve the equations using a recursive approach and derive the performance indices to improve quality and efficiency.</p>]]></description>
<pubDate>Jul 2023</pubDate>
</item>
<item>
<title><![CDATA[Product Properties for Generalized Pairwise Lindelöf Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13283]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Zabidin Salleh&nbsp; &nbsp;Muzafar Nurillaev&nbsp; &nbsp;and Che Mohd Imran Che Taib&nbsp; &nbsp;</p><p>In topological spaces, although compactness is preserved under products, Lindelöfness is not, unless one or more factors are assumed to satisfy additional conditions. Similar results hold for bitopological spaces; that is, the property of pairwise Lindelöf bitopological spaces is not preserved under products unless one or more factors are assumed to satisfy additional conditions, for instance, being <img src=image/13492372_01.gif>-spaces. The Cartesian product of arbitrarily many bitopological spaces was defined by Datta in 1972. Since then, many researchers have studied product bitopological spaces, each with their own motivations and directions. In this paper, we study finite products of pairwise nearly Lindelöf, pairwise almost Lindelöf, and pairwise weakly Lindelöf spaces. We show, by providing counterexamples, that none of these generalized pairwise Lindelöf properties is preserved under products. Furthermore, we give some necessary conditions for these three kinds of bitopological spaces to be preserved under a finite product: one or more of the spaces has to be a <img src=image/13492372_01.gif>-space, or the product has to be a pairwise weak <img src=image/13492372_01.gif>-space. Another interesting result is that the projection of the product of these generalized pairwise Lindelöf spaces with a <img src=image/13492372_01.gif>-space is a closed map.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Isomorphism Criteria for A Subclass of Filiform Leibniz Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13282]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>I.S. Rakhimov&nbsp; &nbsp;</p><p>In this paper, we propose three isomorphism criteria for a subclass of finite-dimensional Leibniz algebras. Isomorphism Criterion 1 has been given earlier (see [5]). We introduce notation for new structure constants and, using this new notation, state Isomorphism Criterion 2. To formulate Isomorphism Criterion 3, we introduce the needed "semi-invariant functions". We prove that these three isomorphism criteria are equivalent. Isomorphism Criterion 3 is convenient for finding the invariant functions that represent isomorphism classes. The proof of the isomorphism criteria in the general case is computational and is based on the hypothetical convolution identities given in [11]. Therefore, we give details in the ten-dimensional case.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Monte Carlo Algorithms for the Solution of Quasi-Linear Dirichlet Boundary Value Problems of Elliptical Type]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13281]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Abdujabar Rasulov&nbsp; &nbsp;</p><p>The application of Monte Carlo methods in various fields is constantly growing due to increases in computer capabilities. Increasing speed and memory, and the wide availability of multiprocessor computers, allow us to solve many problems using the &quot;method of statistical sampling&quot;, better known as the Monte Carlo method. Monte Carlo methods are known to have particular strengths. These include: algorithmic simplicity with a strong analogy to the underlying physical processes; the ability to solve complex realistic problems that include sophisticated geometry and many physical processes; the ability to solve problems of high dimension; the ability to obtain point solutions or evaluate linear functionals of the solution; error estimates that can be obtained empirically for all types of problems in a parallel way; and ease of efficient parallel implementation. A shortcoming of the method is the slow rate of convergence of the error, namely <img src=image/13492367_01.gif>) where <img src=image/13492367_02.gif> is the number of numerical experiments or realizations of the random variable. In this paper, we propose Monte Carlo algorithms for the solution of the interior Dirichlet boundary value problem (BVP) for the Helmholtz operator with a polynomial nonlinearity on the right-hand side. The statistical algorithm is justified, the complexity of the proposed algorithms is investigated, and ways of decreasing the computational work are considered.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Statistical Convergence on Intuitionistic Fuzzy Normed Spaces over Non-Archimedean Fields]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13280]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>N. Saranya&nbsp; &nbsp;and K. Suja&nbsp; &nbsp;</p><p>This paper aims to explore the fundamental properties of statistically convergent sequences within non-Archimedean fields. In pure mathematics, statistical convergence plays a fundamental role; the idea is an extension of the concept of convergence. Statistical convergence has been discussed in various fields of mathematics, namely ergodic theory, fuzzy set theory, approximation theory, measure theory, probability theory, trigonometric series, number theory, and Banach spaces, where problems have been resolved using the concept of statistical convergence. Summability theory and functional analysis are two disciplines that rely heavily on statistical convergence. The study of analysis over non-Archimedean fields is called non-Archimedean analysis. The objective of this paper is to extend the concepts of statistical convergence and statistically Cauchy sequences to non-Archimedean intuitionistic fuzzy normed spaces, and to obtain some relevant results about them. This article proves that some properties of statistically convergent sequences, which do not hold classically, are true over a non-Archimedean field. Furthermore, in these spaces we define statistical completeness and statistical continuity and establish some fundamental facts. Throughout this paper, <img src=image/13431116_01.gif> denotes a complete, non-trivially valued, non-Archimedean field.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Small Area Estimation of Illiteracy Rates based on Beta-Binomial Model using Hierarchical Likelihood Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13279]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Etis Sunandi&nbsp; &nbsp;Khairil Anwar Notodiputro&nbsp; &nbsp;Indahwati&nbsp; &nbsp;and Agus M Soleh&nbsp; &nbsp;</p><p>Small Area Estimation (SAE) is a statistical method used to estimate parameters in sub-populations with small samples. This study aims to develop a Beta-Binomial model for SAE with a Hierarchical Likelihood (HL) approach; the resulting model is called the SAE-BB-HL model. The research begins by deriving formulas for estimating the model parameters analytically. Goodness of fit is assessed with the Mean Square Error of Prediction (MSEP) and bias. This study used simulation data as well as data from the National Socio-Economic Survey (SUSENAS) and Village Potential (PODES) of Bengkulu Province for 2021, collected by Statistics Indonesia (BPS). The simulation study evaluates the SAE-BB-HL model, while the application study predicts the illiteracy rate per sub-district in Bengkulu Province. The simulation study results show that the parameter estimates of the random area distribution are very close to the actual parameters, and that the bias and MSEP of the HL proportion estimates are lower than those of the direct estimates. In addition, the results show that the SAE-BB-HL model can improve the accuracy and precision of proportion estimation. Applying the SAE-BB-HL model to real data shows that the predicted illiteracy rate tends to be higher than the direct estimate.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Upper Bound for Partition Dimension of Comb Product of a Wheel Graph and Tree]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13278]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Faisal&nbsp; &nbsp;and Andreas Martin&nbsp; &nbsp;</p><p>The concept of partition dimension in graph theory was first introduced by Chartrand et al. [1] as a variation of metric dimension. Since then, numerous studies have attempted to determine the partition dimensions of various types of graphs. However, for many types of graphs the partition dimension remains unknown, as determining the partition dimension of a general graph is an NP-complete problem. In this study, we aim to determine the partition dimension of a specific graph, namely the comb product of a wheel and a tree. One approach to finding the partition dimension of a graph is to determine its upper and lower bounds. In this article, we propose an upper bound for the partition dimension of the comb product using number representations in certain bases. We divide the problem into two cases based on the path graph. For the first case, the comb product with a path of a single vertex, Tomescu et al. [2] have already provided an upper bound. In the other case, we utilize the bijection property of a number system on the number copy of the tree to find an upper bound. Our results show that the partition dimension in the second case has a smaller upper bound than the general upper bound proposed by Chartrand et al. [1].</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Numerical Approximation of Volterra Integro-Differential Equations of the Second Kind Using Boole's Quadrature Rule Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13277]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Muhammad Ashraf Darus&nbsp; &nbsp;Nurul Huda Abdul Aziz&nbsp; &nbsp;Deraman F.&nbsp; &nbsp;Asi Salina&nbsp; &nbsp;M. S. Anuar&nbsp; &nbsp;and Zakaria H. L.&nbsp; &nbsp;</p><p>This article presents the numerical approximation of Volterra integro-differential equations (VIDEs) of the second kind using quadrature rules in a modified block method. A new implementation of the block method, which considers the closest point to approximate the two solutions <img src=image/13492370_01.gif> and <img src=image/13492370_02.gif> concurrently, is taken into account. This method has the advantage of reducing the total number of steps and function evaluations compared with the classical multistep method. Quadrature techniques consisting of the trapezoidal rule, Simpson's 1/3 rule, Simpson's 3/8 rule, and Boole's quadrature rule are used to approximate the integral part of the kernel function <img src=image/13492370_03.gif> for <img src=image/13492370_04.gif> in the case of <img src=image/13492370_05.gif>. The analysis of the order, error constant, consistency, and convergence of the proposed method for VIDEs is also presented. The stability analysis is derived by applying the specified linear test equation to both approximate solutions until the stability polynomial is obtained. To validate the efficiency of the developed method, some numerical results are presented and compared with an existing method. It is shown that the modified block method gives better accuracy and efficiency in terms of maximum error and the number of steps and function calls.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Bounded Autocatalytic Set and Its Basic Properties]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13276]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Sumarni Abu Bakar&nbsp; &nbsp;Noor Syamsiah Mohd Noor&nbsp; &nbsp;Tahir Ahmad&nbsp; &nbsp;and Siti Salwana Mamat&nbsp; &nbsp;</p><p>The Autocatalytic Set (ACS) is one of the areas of study that can be modelled using graph theory. An Autocatalytic Set (ACS) is defined as a graph in which there is at least one incoming link for every node. Past research on ACS has addressed many applications, including the modelling of complex systems through the integration of ACS with fuzzy theory. Recently, a restricted form of ACS known as the Weak Autocatalytic Set (WACS) was established and used to solve multi-criteria decision-making (MCDM) problems in which the related graph is transitive and involves non-cyclic triads. However, in real-world scenarios there exist MCDM problems in which the related graph is intransitive and involves cyclic triads. This creates a limitation in using WACS to solve decision-making problems over cyclic triads. This paper introduces another class of ACS known as the Bounded Autocatalytic Set (BACS). The concept of BACS provides the ability to represent a relation between one criterion and each other criterion, where the graph involves cyclic triads. Here, the definition of BACS is formed and introduced for the first time, and its basic properties related to edges, paths, and cycles are established and presented in the form of theorems and propositions.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[A Time Truncated New Group Chain Sampling Plan Based on Log-Logistic Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13275]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nazrina Aziz&nbsp; &nbsp;Seu Wen Fei&nbsp; &nbsp;Waqar Hafeez&nbsp; &nbsp;Shazlyn Milleana Shaharudin&nbsp; &nbsp;and Javid Shabbir&nbsp; &nbsp;</p><p>Acceptance sampling is a technique for ensuring that both producers and consumers are satisfied with a product's quality. This paper proposes a new group chain sampling plan (NGChSP) using the Log-logistic distribution when the life test is truncated at a predetermined time. The minimum number of groups, <img src=image/13492294_01.gif>, and the probability of lot acceptance, <img src=image/13492294_02.gif>, are determined by satisfying the consumer's risk, <img src=image/13492294_03.gif>, under the specified design parameters. This paper shows that the minimum number of groups, <img src=image/13492294_01.gif>, decreases when the values of design parameters such as <img src=image/13492294_04.gif> and <img src=image/13492294_05.gif> increase. With the same design parameters, the minimum <img src=image/13492294_01.gif> increases when the shape parameter increases. Moreover, <img src=image/13492294_02.gif> increases as the shape parameter and the minimum <img src=image/13492294_01.gif> increase. An illustrative example for the NGChSP is provided. The findings suggest that as the test time termination constant decreases, the minimum <img src=image/13492294_01.gif> increases. Furthermore, as the mean ratio, <img src=image/13492294_06.gif>, increases, <img src=image/13492294_02.gif> increases as well. In comparison to the GChSP, the NGChSP requires a smaller number of groups, indicating that using the NGChSP for inspection will contribute to lower inspection time and costs. The NGChSP also provides a higher probability of lot acceptance than the GChSP. This paper concludes that the NGChSP performs better than the GChSP. Therefore, the NGChSP is better equipped for lot inspection in the manufacturing industry.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Strong Form of Nano Ideal Set in Nano Ideal Topological Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13274]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>S. Manicka Vinayagam&nbsp; &nbsp;L. Meenakshi Sundaram&nbsp; &nbsp;and C. Devamanoharan&nbsp; &nbsp;</p><p>The purpose of this article is to define and analyse a new type of strongly open set, namely <img src=image/13430742_01.gif> (<img src=image/13430742_02.gif>), in nano ideal topological spaces and compare it with other existing sets in nano ideal topology. Here the authors use the lower approximation, upper approximation and boundary region to define the nano topology. To emphasize the inclusive relationship of this particular nano ideal set with other familiar nano ideal sets such as <img src=image/13430742_03.gif>, <img src=image/13430742_04.gif>, <img src=image/13430742_05.gif> and <img src=image/13430742_06.gif>, some counterexamples are provided. We have also established the independence of the <img src=image/13430742_01.gif> set from both the <img src=image/13430742_07.gif> set and the <img src=image/13430742_08.gif> set in nano ideal topological spaces. In addition, <img src=image/13430742_09.gif>, <img src=image/13430742_10.gif> <img src=image/13430742_11.gif>, <img src=image/13430742_12.gif> are introduced and investigated with their basic results and fundamental properties. The exterior operator plays a vital role in topological spaces. Unlike the interior operator, the exterior operator varies in some cases; for example, it reverses inclusions when it comes to the subset property in topological spaces. In the next section, we have defined <img src=image/13430742_13.gif> and analysed some of its basic properties. We have also introduced <img src=image/13430742_14.gif> and discussed the relationship between <img src=image/13430742_14.gif> and <img src=image/13430742_09.gif>. The paper finally concludes with the definition of <img src=image/13430742_15.gif> and describes the relationship of <img src=image/13430742_15.gif> with <img src=image/13430742_10.gif>, <img src=image/13430742_09.gif> and <img src=image/13430742_14.gif>.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[3-Equitable and Prime Labeling of Some Classes of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13134]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Sangeeta&nbsp; &nbsp;A. Parthiban&nbsp; &nbsp;and P. Selvaraju&nbsp; &nbsp;</p><p>Researchers have constructed a model to transform &quot;word motion problems into an algorithmic form&quot; in order to be processed by an intelligent tutoring system (ITS). This process has the following steps. Step 1: categorizing the characteristics of motion problems; Step 2: suggesting a model for the categories. &quot;In order to solve all categories of problems, graph theory including backward and forward chaining techniques of artificial intelligence can be utilized&quot;. The adoption of graph theory into motion problems has shown evidence that the model solves almost all motion problems. Graph labeling is a subfield of graph theory which has become an area of interest due to its diversified applications. Formally, if the nodes are labeled under some constraint, the resulting labeling is known as a vertex labeling, and it is an edge labeling if the labels are assigned to edges under some conditions. Graph labeling is nowadays one of the rapidly growing areas in applied mathematics and has shown its presence in almost every field. The known applications are in Computer Science, Physics, Chemistry, Radar, Coding Theory, Connectomics, Sociology, X-ray Crystallography, Astronomy, etc. &quot;For a graph G(V,E) and k > 0, give node labels from {0, 1, . . . , k − 1} such that, when the edge labels are induced by the absolute value of the difference of the node labels, the count of nodes labeled with i and the count of nodes labeled with j differ by at most one, and the number of lines labeled with i and with j differ by at most 1. A graph G with such an allocation of labels is k-equitable, and the labeling becomes a 3-equitable labeling when k = 3&quot;. In this paper, the existence and non-existence of 3-equitable labeling of certain graphs are established.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Some Convergence Results for the Strong Versions of Order-integrals in Lattice Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13133]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Mimoza Shkembi&nbsp; &nbsp;Stela Ceno&nbsp; &nbsp;and John Shkembi&nbsp; &nbsp;</p><p>Integration in Riesz spaces has received significant attention in recent papers. The existing body of literature provides comprehensive analyses of the concepts related to order-type integrals for functions defined in ordered vector spaces and Banach lattices, as indicated by the studies covered in [3], [4], [5], [7], [8], [9], and [10]. In our work on strongly order-McShane (Henstock-Kurzweil) equiintegration, we have drawn upon the earlier works of Candeloro and Sambucini [6], as well as Boccuto et al. [1-2], who have conducted investigations in the field of order-type integrals, and we have expanded upon their research to develop our own findings. This paper focuses on studying the (o)-McShane integral in ordered spaces, where we emphasize the important fact that investigating the (o)-McShane integral is essential in addition to the (o)-Henstock integral. We highlight that (o)-McShane integration in Banach lattices has richer properties and is more convenient compared to the (o)-Henstock integral. The (o)-convergence properties exhibited by ordered McShane integrals are prominently featured in our study. By using (o)-convergence, we have obtained valuable results related to the (o)-McShane integral. We arrive at the same results in Banach lattices as for McShane (Henstock-Kurzweil) norm-integrals, and we demonstrate that the (o)-McShane integral opens up a wide field of study where results similar to those of Henstock integration can be obtained. The outcomes demonstrate the benefits of utilizing this integration technique in ordered spaces, with potentially significant implications for diverse areas of mathematics and related fields.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Maximum Likelihood Estimation of the Weighted Mixture Generalized Gamma Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13132]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Wikanda Phaphan&nbsp; &nbsp;Teerawat Simmachan&nbsp; &nbsp;and Ibrahim Abdullahi&nbsp; &nbsp;</p><p>The three-parameter weighted mixture generalized gamma (WMGG) distribution was developed from the four-parameter mixture generalized gamma (MGG) distribution because parameter estimation for the MGG distribution faced a problem: the estimate of the weight parameter p fell outside the interval [0, 1]. A previous study proposed the maximum likelihood estimators (MLEs) of the WMGG distribution. However, those MLEs were written as nonlinear equations, and iterative methods were needed to solve them numerically. The three parameters λ, β, and α were estimated by the quasi-Newton method. Nevertheless, this method performed well only for the parameter λ. This motivated the main objective of this work: to further improve the parameter estimation of the WMGG distribution. This article develops two maximum likelihood estimation methods, the expectation-maximization (EM) algorithm and the simulated annealing algorithm, for the three parameters of the WMGG distribution. These two methods were compared to the previous study's quasi-Newton method. The Monte Carlo simulation technique was employed to assess the algorithms' performance. Sample sizes ranged from small to large: 10, 30, 50, and 100. The simulation was repeated 10,000 times in each scenario. The assessment criteria were the mean square error (MSE) and bias. The results revealed that the EM algorithm outperformed the other methods, while the quasi-Newton method had the lowest efficiency.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Existence and Uniqueness of Polyhedra with Given Values of the Conditional Curvature at the Vertices]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13131]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Anvarjon Sharipov&nbsp; &nbsp;and Mukhamedali Keunimjaev&nbsp; &nbsp;</p><p>The theory of polyhedra and the geometric methods associated with it are not only interesting in their own right but also have a wide outlet in the general theory of surfaces. Certainly, it is only sometimes possible to obtain the corresponding theorem on surfaces from a theorem on polyhedra by passing to the limit. Still, theorems on polyhedra give directions for searching for related theorems on surfaces. In the case of polyhedra, the elementary-geometric basis of more general results is revealed. In the present paper, we study polyhedra of a particular class, i.e., without edges and reference planes perpendicular to a given direction. This work is a logical continuation of the authors' work in which an invariant of convex polyhedra isometric on sections was found. The concept of isometry of surfaces and the concept of isometry on sections of surfaces differ from each other: there are examples of isometric surfaces that are not isometric on sections and examples of non-isometric surfaces that are isometric on sections. However, the two concepts have a non-empty intersection, i.e., some surfaces are both isometric and isometric on sections. In this paper, we prove the positive definiteness of the invariant found. Further, conditional external curvature is introduced for "basic" sets: open faces, edges, and vertices. It is proved that the conditional curvature of the polyhedral angle considered possesses monotonicity and positive definiteness. At the end of the article, the problem of the existence and uniqueness of convex polyhedra with given values of the conditional curvatures at the vertices is solved.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Solution Analysis of Riccati's Fractional Differential Equations Using the ADM-Laplace Transformation and the ADM-Kashuri-Fundo Transformation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13130]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Muhamad Deni Johansyah&nbsp; &nbsp;Asep Kuswandi Supriatna&nbsp; &nbsp;Endang Rusyaman&nbsp; &nbsp;Salma Az-Zahra&nbsp; &nbsp;Eddy Djauhari&nbsp; &nbsp;and Aceng Sambas&nbsp; &nbsp;</p><p>Fractional differential equations (FDEs) are differential equations that involve fractional derivatives. Unlike ordinary derivatives, fractional derivatives are defined by fractional powers of the differentiation operator. FDEs can arise in a variety of contexts, including physics, engineering, biology, and finance. They are typically more complex than ordinary differential equations, and their solutions may exhibit unusual properties such as long-range memory, non-locality, and power-law behavior. The solution of the Riccati Fractional Differential Equation (RFDE) is generally challenging due to its nonlinearity and the presence of the fractional power term. The fractional derivative operators in the RFDE are non-local and involve an integral over a certain range of the independent variable. This non-local nature of the fractional derivatives can make the RFDE harder to handle compared to ordinary differential equations. In this paper, we have examined the Riccati Fractional Differential Equation (RFDE) using the combined Adomian Decomposition Method and Laplace Transform (ADM-LT). Furthermore, we have compared it with the combined Adomian Decomposition Method and Kashuri-Fundo Transform (ADM-KFT). It is shown that the ADM-LT is equivalent to the ADM-KFT algorithm for solving the Riccati equation. In addition, we have added a new theorem on the relationship between the Kashuri-Fundo inverse and the inverse Laplace Transform. The main finding of our study is that the ADM-LT shows good agreement between the numerical simulation and the exact solution.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Approximation Method Using DP Ball Curves for Solving Ordinary Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13129]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Abdul Hadi Bhatti&nbsp; &nbsp;and Sharmila Binti Karim&nbsp; &nbsp;</p><p>Many researchers have developed numerical methods for solving ordinary differential equations (ODEs) approximately. Scholars have evolved these approximation methods by developing algorithms to improve the accuracy, in terms of error, of the approximate solution. Polynomials and piecewise polynomials in the form of Bézier curves, Bernstein polynomials, etc., are frequently used to represent the approximate solution of ODEs. To minimize the error between the exact and approximate solutions of ODEs, the DP Ball curve (DPBC) with the least squares method (LSM) is proposed to improve the accuracy of approximate solutions of initial value problems (IVPs). This paper explores the use of the control points of the DPBC with error reduction by minimizing the residual function. The residual function is minimized by constructing an objective function as the sum of squares of the residual function and seeking the least residual error. Then, by solving the constrained optimization problem, we obtain the best control points of the DPBC. Two strategies are employed: investigating the DPBC's control points through error reduction with the LSM, and computing the optimal control points through degree raising of the DPBC for the best approximate solution of ODEs. Substituting the values of the control points back into the DPBC yields the best approximate solution. Moreover, the convergence of the proposed method for IVPs is successfully analyzed in this study. The error accuracy of the proposed method is also compared with existing studies. Numerous numerical examples of first, second, and third orders are presented to illustrate the efficiency of the proposed method in terms of error. The results of the numerical examples show that the error accuracy is considerably improved.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Historical Review of Existing Sequences and the Representation of the Wing Sequence]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13128]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Maizon Mohd Darus&nbsp; &nbsp;Haslinda Ibrahim&nbsp; &nbsp;and Sharmila Karim&nbsp; &nbsp;</p><p>A sequence is simply an ordered list of numbers, and sequences occur very often in mathematics. The Fibonacci, Lucas, Perrin, Catalan, and Motzkin sequences are a few that have drawn academics' attention over the years. These sequences have arisen from different perspectives. By investigating the construction of each sequence, these sequences can be classified into three groups: those that arise from nature, those constructed from other existing sequences, and those generated from a geometric representation. This outcome may assist researchers in adding a new number sequence to the family of sequences. Our observation of the geometric representation of the Motzkin sequence shows that a new sequence can be constructed, namely the Wing sequence. Therefore, we demonstrate the iterations of the Wing sequence for 3≤n≤5. The wings are constructed by classifying them into (n-1) classes and determining the first and second points, which then provides (n-2) wings in each class. This technique constructs (n-1)(n-2) wings for each n. The iterations may provide a basic technique for researchers to construct a sequence using geometric representation. The observation of geometric representations can develop people's thinking skills and increase their visual abilities. Hence, the study of geometric representation may lead to new lines of research that go beyond sequences alone.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[The Form of σ-Algebra on Probability Hilbert Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13127]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Bernadhita Herindri Samodera Utami&nbsp; &nbsp;Mustofa Usman&nbsp; &nbsp;Warsono&nbsp; &nbsp;and Fitriani&nbsp; &nbsp;</p><p>Measure theory is used as the basis for probability theory. One of the most useful measure-theoretic concepts for statistics and probability theory is the concept of distance. The concept of distance introduced in an inner product space is closely related to the order relation on each sequence of elements. In statistics, random variables can be seen as a sequence that can be an object of study, including the partial ordering relation, expectation value, convergence, and infimum and supremum. This study aims to obtain the properties of a partial ordering relation which are useful for forming probability Hilbert spaces, more specifically the σ-algebra. Whereas for ordinary sets the σ-algebra uses the concepts of intersection and union of sets, in a probability Hilbert space the σ-algebra uses the concepts of partial ordering relation, lattice, and indicator lattice. This research is quantitative research with a method of proof to generalize the concept of the ordering of elements. The novelty of this research is to find the associative properties of the lattice in the probability Hilbert space, as described in Corollary 1. Furthermore, based on the definition of absolute value in the probability Hilbert space, we derive the properties of addition and subtraction of absolute values and find their relationship with the lattice, as stated in Proposition 1. In the probability Hilbert space, the convergence property of random variables also applies, which results in the lattice convergence stated in Proposition 2. Finally, it can be shown that the set of indicators in the probability Hilbert space forms the σ-algebra, as stated in Proposition 3. This study also makes use of a dataset of the shares of 42 energy companies in Indonesia in 2022, plotting the data against the probability density functions of the Normal, Log-Normal, and Cauchy distributions.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Steiner Antipodal Number of Graphs Obtained from Some Graph Operations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13126]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>R. Gurusamy&nbsp; &nbsp;A. Meena Kumari&nbsp; &nbsp;and R. Rathajeyalakshmi&nbsp; &nbsp;</p><p>The Steiner p-antipodal graph <img src=image/13425880_01.gif> of a connected graph G has the same vertex set as G, and p vertices are adjacent to each other in <img src=image/13425880_01.gif> whenever they are p-antipodal in G. If G has more than one component, then p vertices are adjacent to each other in <img src=image/13425880_01.gif> if at least one of them is from a different component. A K<sub>p</sub> is drawn on the p-antipodal vertices in <img src=image/13425880_01.gif>. The Steiner antipodal number <img src=image/13425880_02.gif> of a graph G is the smallest natural number p such that the Steiner p-antipodal graph of G is complete. In this article, the Steiner antipodal number has been determined for the generalized corona of graphs, and for each natural number p≥2 we construct many non-isomorphic graphs of order p having Steiner antipodal number p. Also, for any pair of natural numbers l,m ≥ 3 with l ≤ m, there is a graph whose Steiner antipodal number is l and whose line graph has Steiner antipodal number m. For every natural number p≥1, there is a graph G whose complement <img src=image/13425880_03.gif> has Steiner antipodal number p.</p>]]></description>
<pubDate>May 2023</pubDate>
</item>
<item>
<title><![CDATA[Cartesian Product of Quadratic Residue Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13117]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Shakila Banu P.&nbsp; &nbsp;and Suganthi T.&nbsp; &nbsp;</p><p>Rezaei [7] introduced the quadratic residue graph: a simple graph G is a quadratic residue graph modulo n if its vertex set is a reduced residue system modulo n such that two distinct vertices a and b are adjacent whenever <img src=image/13429576_01.gif>(mod n). This motivated the present article, in which we introduce the cartesian product F of quadratic residue graphs <img src=image/13429576_01.gif>, where m and n are either prime or composite and G<sub>m</sub> and H<sub>n</sub> are quadratic residue graphs, respectively. The present work suggests and evaluates the regular graphs that are produced from the graph F and its adjacency matrix. In addition, we define and examine their generating matrices with the help of the adjacency matrix of F. Also, in this article, we define three linear codes derived from the graph F; the parameters of the codes are denoted [N, k, d], where N denotes the length, k denotes the dimension, which is taken from the number of vertices, and d denotes the distance, which is taken from the minimum degree. Moreover, we introduce an encoding and decoding algorithm for the graph using binary bits, which is illustrated with a suitable example. Finally, we test the error-correction capability of the code by using the sphere packing bound.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Sensitivity Equation for Competitive Model: Derivation, Numerical Realization and Parameter Estimation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13116]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Julan HERNADI&nbsp; &nbsp;Ceriawan H. SANTOSO&nbsp; &nbsp;and Iwan T. R. YANTO&nbsp; &nbsp;</p><p>Ecological systems can be quite complex, consisting of an interconnected system of plants and animals, predators and prey, flowering plants, seed dispersers, insects, parasites, pollinators, and so on. When the existence of one species affects the survival of other species and vice versa, one can derive a competitive model in the form of a system of differential equations. A competitive model involves a number of parameters which grows in proportion to the number of interacting species. The resistance of a state variable to tiny disturbances of some parameter is referred to as sensitivity. The competitive model of size N consists of N parameters for intrinsic growth, N parameters for carrying capacity, N<sup>2</sup> −N parameters for species interaction, and N parameters for initial conditions. As a result, there will be N<sup>2</sup>(N + 2) distinct values of sensitivity. The purpose of this paper is to derive a general formulation of the sensitivity equations of a dynamical system and then apply it to the competitive model. This study also encompasses the formulation of some algorithms and their implementation for solving the sensitivity equations numerically. Finally, the sensitivity functions are employed as qualitative instruments in the optimal design of measurement for parameter estimation through a series of numerical experiments. The results of this study are the ordinary and the generalized sensitivity functions for interacting species. Based on the numerical experiments, each group of data provides different information about the existing parameters.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Actuarial Measures, Estimation and Applications of Sine Burr III Loss Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13115]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>John Abonongo&nbsp; &nbsp;Ivivi J. Mwaniki&nbsp; &nbsp;and Jane A. Aduda&nbsp; &nbsp;</p><p>The usefulness of heavy-tailed distributions for modeling insurance loss data is arguably an important subject for actuaries. Appropriate use of trigonometric functions allows a good understanding of the mathematical properties, limits over-parameterization, and gives better applicability in modeling different datasets. Thus, the proposed method ensures that no additional parameters are introduced in the bid to make a distribution from the F-Loss family of distributions flexible. The purpose of this paper is to improve the flexibility of the F-Loss family of distributions without introducing any additional parameters and to develop heavy-tailed distributions with fewer parameters that give a better parametric fit to a given dataset than other existing distributions. In this paper, a new heavy-tailed distribution known as the sine Burr III Loss distribution is proposed using the sine F-Loss generator. This distribution is flexible and able to model varying shapes of the hazard rate compared with the traditional Burr III distribution. The densities exhibit different kinds of decreasing and right-skewed shapes. The hazard rate functions show different kinds of decreasing, increasing, constant-decreasing, and upside-down bathtub shapes. The statistical properties and actuarial measures are studied. The skewness is always positive, and the kurtosis is increasing. The numerical values of the actuarial measures show that increasing confidence levels are associated with increasing VaR, TVaR, and TV. The maximum likelihood estimators are studied, and simulations are carried out to ascertain the behavior of the estimators. It is observed that the estimators are consistent. The usefulness of the proposed distribution is demonstrated with two insurance loss datasets and compared with other known classical heavy-tailed distributions. The results show that the proposed distribution provides the best parametric fit for the two insurance loss datasets. Insurance practitioners can employ the proposed models in modeling insurance losses since they are flexible.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Inequalities for Forgotten Index of Duplication and Double Duplication of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13114]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Kalpana R&nbsp; &nbsp;and Shobana L&nbsp; &nbsp;</p><p>Molecular descriptors play an important part in mathematical chemistry, in investigating quantitative structure-property and quantitative structure-activity relationships. A topological descriptor, also called a molecular descriptor, is a mathematical formula applied to a graph that characterizes its molecular structure. In mathematical models in medicine, a chemical compound is represented as an undirected graph, where each vertex represents an atom and each edge indicates a chemical bond between these atoms. The Wiener index, introduced by Harold Wiener in 1947, is the first topological index to be used in chemistry. It was used to compare the boiling points of some alkane isomers. There are various topological indices applied in chemistry. Among them, our interest is in the Forgotten index, a degree-based topological index introduced by Furtula and Gutman in 2015 [2], defined as <img src=image/13430486_01.gif> where d<sub>u</sub> is the degree of vertex u in G. Mathematicians and chemists have studied several general properties of the Forgotten index, which may help the chemical and pharmaceutical industry obtain significant details by quantitative methods rather than by experiments. Vaidya et al. (2009) proposed the concept of duplication of a vertex by an edge and duplication of an edge by a vertex of graphs. Shobana et al. proposed the double duplication of graphs (2017) [6]. Only connected, simple, undirected and finite graphs are considered throughout this article. Also, some inequalities are obtained by comparing the duplication and double duplication of graphs using the Forgotten index, which can also be used by chemists to generate new drugs in the future.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
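As background to the abstract above: in Furtula and Gutman's standard formulation, the Forgotten index of a graph is the sum of the cubes of its vertex degrees. A minimal Python sketch (the helper name `forgotten_index` and the plain adjacency-list input are illustrative, not from the paper):

```python
# Forgotten index F(G) = sum of d(u)^3 over all vertices u of G,
# following Furtula and Gutman's standard definition.
def forgotten_index(adjacency):
    """adjacency: dict mapping each vertex to its set of neighbours."""
    return sum(len(neighbours) ** 3 for neighbours in adjacency.values())

# Example: the complete graph K4, where every vertex has degree 3,
# so F(K4) = 4 * 3^3 = 108.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(forgotten_index(k4))  # 108
```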
<item>
<title><![CDATA[A Glimpse of Nonparametric Single and Double Residual Bootstrap Method with Outliers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13113]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Nor Iza Anuar Razak&nbsp; &nbsp;and Zamira Hasanah Zamzuri&nbsp; &nbsp;</p><p>The significance of a model is affected by outliers, which can reduce the effectiveness of structural equation modeling (SEM). Here we describe and investigate the behavior of the nonparametric single and double residual bootstrap (DRB) methods in the presence of outliers when applied to SEM. Our study also intends to shorten the computational time of the standard double bootstrap by using an alternative double bootstrap approach. We demonstrate our proposed method through a series of Monte Carlo experiments on clean Gaussian data and contaminated data. The simulation studies varied the sample sizes and effect sizes, with 10% contamination in the Y direction. The performance of the proposed method is evaluated using standard measurements and the construction of confidence intervals. The reasonably close parameter and bootstrap estimates suggest that the nonparametric single and double residual bootstrap is an excellent method. The DRB method showed a robust declining pattern for standard measurement estimates and shorter confidence intervals compared to the single residual bootstrap method in both normal and contaminated data. The double bootstrap method takes twice as long as the single bootstrap method to compute; the DRB method is thus straightforward but demands slightly more computational time while giving better prediction approximation. This study offers additional perspectives to fellow researchers considering using the nonparametric single and alternative DRB methods with contaminated data.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Binary Response on Logistics Regression Model and Its Simulation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13112]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Budi Pratikno&nbsp; &nbsp;Fifthany Marchelina Napitupulu&nbsp; &nbsp;Jajang&nbsp; &nbsp;Agustini Tripena Br. Sb&nbsp; &nbsp;and Mashuri&nbsp; &nbsp;</p><p>This research determined the binary response model in logistic regression (LR) and its application. Firstly, we select some eligible factors (predictors, X<sub>i</sub>, i=1,2,3,4) to be involved in the model, namely age (X<sub>1</sub>), sex (X<sub>2</sub>), treatment (X<sub>3</sub>), and nutrition (X<sub>4</sub>), with the response (Y) being the case of tuberculosis (TB). Using the stepwise selection model and odds ratio (OR) interpretation, we have three suspected significant predictors (X<sub>1</sub>, X<sub>3</sub>, and X<sub>4</sub>), but we choose only two of the significant predictors, namely X<sub>3</sub> and X<sub>4</sub>. Therefore, the logistic regression model is written as <img src=image/13430380_01.gif>. To test the goodness of fit of the model, we used the deviance test (p-value <img src=image/13430380_02.gif> 0.08). Due to this p-value, we then used a significance level of 0.08 (close to 0.05) for obtaining the significant model. For a more detailed interpretation, we note that the OR of age (X<sub>1</sub>), one of the three suspected significant predictors (X<sub>1</sub>, X<sub>3</sub>, and X<sub>4</sub>), is close to one (<img src=image/13430380_02.gif> 1), so it is an independent (not significant) predictor. We therefore concluded that the significant predictors are only treatment (X<sub>3</sub>) and nutrition (X<sub>4</sub>). Thus, the linear form of the logistic regression model is given as <img src=image/13430486_03.gif> So, we note that TB depends only on clinical treatment and the provision of nutrition.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
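For context on the odds ratio (OR) interpretation used in the abstract above: in logistic regression the OR of a predictor is the exponential of its coefficient, so an OR close to one signals a predictor with little effect on the odds, which is how the abstract rules out age. A minimal sketch with made-up illustrative coefficients (not the paper's fitted values):

```python
import math

# In logistic regression, log(p / (1 - p)) = b0 + b1*x1 + ... ;
# the odds ratio for predictor xi is exp(bi).  An OR near 1 means
# the predictor barely changes the odds (the "not significant"
# case the abstract describes for age).
def odds_ratio(coefficient):
    return math.exp(coefficient)

# Illustrative coefficients only (not the paper's estimates):
coefficients = {"treatment": 1.2, "nutrition": 0.9, "age": 0.01}
for name, b in coefficients.items():
    print(name, round(odds_ratio(b), 3))
```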
<item>
<title><![CDATA[Investigation on Isotropic Bezier Sweeping Surface "IBSS" with Bishop Frame]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13111]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>W. M. Mahmoud&nbsp; &nbsp;M. A. Soliman&nbsp; &nbsp;and Esraa. M. Mohamed&nbsp; &nbsp;</p><p>This research studies the sweeping surface generated by the motion of a straight line (the profile curve) when the plane containing it moves through space in the direction of the normal to a cubic Bezier curve (the spine curve). In geometric modeling, sweeping is an essential and useful tool with applications, especially in geometric design. The idea depends on choosing a geometrical object, here the straight line, called the generator, and sweeping it along a cubic Bezier curve (the spine curve), called the trajectory; carried out in an isotropic space, this produces Isotropic Bezier Sweeping Surfaces (IBSS). This study discusses IBSS with the Bishop frame. We studied a special case of the sweeping surface, namely the cylindrical surface resulting from a path curve that is a straight line. We calculated the first and second fundamental forms of this surface. The parametric description of the Weingarten Isotropic Bezier Sweeping Surfaces (IBSS) is also calculated in terms of the Gaussian and mean curvatures, and Mathematica 3D visualizations were used to illustrate these curvatures. Finally, we characterized new associated surfaces according to the Bishop frame on IBSS, such as minimal and developable isotropic Bezier sweeping surfaces.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
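The spine curve in the abstract above is a cubic Bezier curve. As general background (not the paper's isotropic-space construction), such a curve can be evaluated from its four control points via the standard Bernstein form:

```python
# Cubic Bezier curve B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1
#                         + 3(1-t) t^2 P2 + t^3 P3,
# evaluated componentwise for control points in R^3.
def bezier3(p0, p1, p2, p3, t):
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Endpoints are interpolated: B(0) = P0 and B(1) = P3.
print(bezier3((0, 0, 0), (1, 2, 0), (2, 2, 1), (3, 0, 1), 0.5))
# (1.5, 1.5, 0.5)
```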
<item>
<title><![CDATA[Numerical Solution of Linear and Nonlinear Second Order Initial Value Problems Using Three-Step Generalized Off-Step Hybrid Block Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=13110]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Kamarun Hizam Mansor&nbsp; &nbsp;Oluwaseun Adeyeye&nbsp; &nbsp;and Zurni Omar&nbsp; &nbsp;</p><p>The numerical solution of second-order initial value problems (IVPs) has garnered a lot of attention in the literature, with recent studies developing new methods with better accuracy than previously existing approaches. This led to the introduction of hybrid block methods, a class of block methods capable of directly solving second-order IVPs without reduction to a system of first-order IVPs. Their hybrid characteristic is the addition of off-step points in the derivation of the block method, which has shown remarkable improvement in accuracy. This article proposes a new three-step hybrid block method with three generalized off-step points for the direct solution of second-order IVPs. To derive the method, a power series is adopted as an approximate solution and is interpolated at the initial point and one off-step point, while its second derivative is collocated at all points in the interval to obtain the main continuous scheme. The analysis shows that the developed method is of order 7, zero-stable, consistent, and hence convergent. The numerical results affirm that the new method performs better than the existing methods it is compared with, in terms of error accuracy, when solving the same IVPs of second-order ordinary differential equations.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Toeplitz Determinant For Error Starlike & Error Convex Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12955]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>D Kavitha&nbsp; &nbsp;K Dhanalakshmi&nbsp; &nbsp;and K Anitha&nbsp; &nbsp;</p><p>The normalised error function was coined and analyzed in 2018 [13]. The concept of the normalised error function discussed in [13] motivated us to find new results on the Toeplitz determinant for subclasses of analytic univalent functions associated with the error function. Following the history of the error function in geometric function theory, Ramachandran et al. [13] derived the coefficient estimates, followed by the Fekete-Szegő problem, for the normalised subclasses of starlike and convex functions associated with the error function. Finding coefficient estimates is one of the most provoking problems in geometric function theory. Currently, researchers are concentrating on special functions connected with univalent functions. Based on these concepts, the present paper deals with the supremum and infimum of the Toeplitz determinant for starlike and convex functions in terms of the error function with the convolution product, using the concept of subordination. Also, we derive sharp bounds for the probability distribution associated with error starlike and error convex functions.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[m-Continuity and Fixed Points in <img src=image/13430302_01.gif>-Complete G-Metric Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12954]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Banoth Madanlal Naik&nbsp; &nbsp;and V.Naga Raju&nbsp; &nbsp;</p><p>The fixed point technique can be considered one of the most powerful tools for solving problems that occur in several fields like physics, chemistry, computer science, economics, and other subbranches of mathematics. Banach [3] gave the first result in the field of metric fixed point theory, which guarantees the existence and uniqueness of a fixed point in a complete metric space. Thereafter, many mathematicians replaced the notion of metric space and the Banach contractive condition with various generalized metric spaces and different contractions to prove fixed point theorems. One such generalized metric space, called G-metric space, was proposed in [6]. Abhijit Pant and R. P. Pant [1] introduced a new type of contraction and obtained some results in metric spaces in 2017. The purpose of this paper is to define <img src=image/13430302_02.gif>-complete G-metric spaces and study three metric fixed point results for such spaces. In the first two fixed point results, we use a weaker form of continuity, called m-continuity, and new types of contractive conditions, while in the third result a simulation function is used. The results we obtained improve, extend and generalize some results in [1] and [2] in the existing literature. In addition, we give examples to validate our results.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[On <img src=image/13430241_01.gif>-coloring and <img src=image/13430241_02.gif>-coloring of Windmill Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12953]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Rubul Moran&nbsp;&nbsp;Niranjan Bora&nbsp; &nbsp;and Surashmi Bhattacharyya&nbsp; &nbsp;</p><p>The windmill graph <img src=image/13430241_03.gif> is the graph formed by joining a common vertex to every vertex of m copies of the complete graph K<sub>r</sub>. T-coloring of a graph is a map h defined on the set of vertices in such a way that for any edge <img src=image/13430241_04.gif> does not belong to a finite set T of non-negative integers. Strong T-Coloring (ST-coloring) is a particular case of T-coloring and is defined as the map: <img src=image/13430241_05.gif>, for which <img src=image/13430241_06.gif> and <img src=image/13430241_07.gif> for any two distinct edges <img src=image/13430241_08.gif>. Application of T and ST-coloring of graph naturally arises in the modeling of different scientific problems. Frequency assignment problem (FAP) is one of the well known problems in the field of telecommunication, which can be modeled using the concept of T and ST-coloring of graphs. In this paper, we will consider two special types of T-sets. The first one is <img src=image/13430241_09.gif>-initial set, introduced by Cozzens and Roberts, which is of the form <img src=image/13430241_10.gif> where S is any arbitrary set that doesn’t contain any multiple of <img src=image/13430241_11.gif> The second one is λ-multiple of q set, introduced by Raychaudhuri, which is of the form <img src=image/13430241_12.gif>, where S is a subset of the set <img src=image/13430241_13.gif>. We will discuss some parameters related to these two types of colorings viz. T-chromatic number, T-span, T-edge span on the basis of the two T-sets. 
We will also deduce some generalized results of ST-coloring of any graph based on any T-set, and with the help of these results we will obtain the ST-chromatic number and bounds for the ST-span and ST-edge span of windmill graphs.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[P-dist Based Regularized Twin Support Vector Machine on Imbalanced Binary Dataset]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12952]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Sai Lakshmi B.&nbsp; &nbsp;and G. Gajendran&nbsp; &nbsp;</p><p>Data classification is a significant task in the field of machine learning. The support vector machine is one of the prominent algorithms in classification. The Twin support vector machine is an algorithm evolved from the support vector machine which has gained popularity owing to its better generalization ability. The Twin support vector machine attains quick training speed by explicitly exploring a pair of non-parallel hyperplanes for imbalanced data. In a Twin support vector machine, choosing numerical values for hyperparameters is challenging. Hyperparameter tuning is a prime factor that enhances the performance of a model. However, randomly chosen hyperparameters in the Twin support vector machine are unreliable. This paper proposes a novel p-dist-based regularized Twin support vector machine for imbalanced binary classification problems. Pairwise distances such as the Jaccard and Correlation distances are considered for tuning the hyperparameters. The proposed work has been analyzed on many publicly available real-world benchmark datasets for both linear and non-linear cases. The performance of the p-dist-based regularized Twin support vector machine is computationally tested and compared with existing models. The outcome of the proposed model is validated using quality metrics such as Accuracy, F-mean, G-mean, and elapsed time. Ultimately, the results exhibit better performance with less computational time in comparison to several existing methods.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Performance Analysis of A Single Server Queue Operating in A Random Environment - A Novel Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12951]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Akshaya Ramesh&nbsp; &nbsp;and S. Udayabaskaran&nbsp; &nbsp;</p><p>In this paper, we consider a single server queueing system operating in a random environment subject to disaster, repair and customer impatience. The random environment resides in any one of N + 1 phases 0, 1, 2, · · · , N. The queueing system resides in phase k, k = 1, 2, · · · , N for a random interval of time, and the sojourn period ends at the occurrence of a disaster. The sojourn period is exponentially distributed with mean <img src=image/13429819_01.gif>. At the end of the sojourn period, all customers in the system are washed out, the server goes for repair/set-up, and the system moves to phase 0. During the repair time, customers join the system, become impatient and leave the system. The impatience time is exponentially distributed with mean <img src=image/13429819_02.gif>. Immediately after the repair, the server is ready to offer service in phase k with probability <img src=image/13429819_03.gif>, k = 1, 2, · · · , N. In the k-th level of the environment, customers arrive according to a Poisson process with rate <img src=image/13429819_04.gif> and the service time is exponential with mean <img src=image/13429819_05.gif>. Explicit expressions for time-dependent state probabilities are found and the corresponding steady-state probabilities are deduced. Some new performance measures are also obtained. Choosing arbitrary values of the parameters subject to the stability condition, the behaviour of the system is examined. For the chosen values of the parameters, the performance measures indicated that the system did not exhibit much deviation due to the presence of several phases of the environment.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Exponential-Inverse Exponential[Weibull]: A New Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12950]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mahmoud Riad Mahmoud&nbsp; &nbsp;Azza E. Ismail&nbsp; &nbsp;and Moshera A. M. Ahmad&nbsp; &nbsp;</p><p>Statistical distributions play a major role in analyzing experimental data, and finding an appropriate one for the data at hand is not an easy task. Extending a known family of distributions to construct a new one is a time-honored technique in this regard. The T-X[Y] methodology is utilized to construct a new distribution as described in this study. The T-inverse exponential family of distributions, previously introduced by the same authors, is used to examine the exponential-inverse exponential[Weibull] distribution (Exp-IE[Weibull]). Several fundamental properties are explored, including the survival function, hazard function, quantile function, median, skewness, kurtosis, moments, Shannon's entropy, and order statistics. The distribution exhibits a wide range of shapes with varying skewness and assumes most possible forms of the hazard rate function. The unknown parameters of the Exp-IE[Weibull] distribution are estimated via the maximum likelihood method for complete and type II censored samples. We performed two applications on real data: the first is the vinyl chloride data described in [1], and the second is the cancer patients data described in [2]. The significance of the Exp-IE[Weibull] model in relation to alternative distributions (Fréchet, Weibull-exponential, logistic-exponential, logistic modified Weibull, Weibull-Lomax [log-logistic] and inverse power logistic exponential) is demonstrated. On the applied real data, the new distribution (Exp-IE[Weibull]) achieved better results under the AIC and BIC criteria compared to the other listed distributions.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Flows Local Control in Resource Networks with A Low Resource]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12949]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Vladimir A. Skorokhodov&nbsp; &nbsp;and Iakov M. Erusalimskiy&nbsp; &nbsp;</p><p>The flow control problem in resource networks consists in finding a set of vertices, and capacities for the arcs going out of these vertices, such that the limit state of the resource network <img src=image/13429563_01.gif> is the closest to the given state <img src=image/13429563_02.gif>. This problem naturally divides into two subproblems. The first is the "local" subproblem, which consists in determining the capacities of the arcs going out of the vertices of a given subset <img src=image/13429563_03.gif> (hereinafter, the set <img src=image/13429563_03.gif> is called the set of controlled vertices). The second is the "global" subproblem, which consists in finding the optimal set of controlled vertices <img src=image/13429563_03.gif> consisting of at most s elements. The paper is devoted to the study of the possibility of local flow control in resource networks. Methods for solving the local subproblem for regular resource networks with a low resource allocation are proposed. Conditions for the unreachability of the limit state <img src=image/13429563_01.gif> that coincides with the given state <img src=image/13429563_02.gif> are obtained. Three cases are considered for the distribution of controlled vertices in a resource network. In each of the considered cases, it is shown that if the condition of unreachability of the limit state is not satisfied, then there is a set of capacity values for the arcs going out of the controlled vertices for which the limit state <img src=image/13429563_01.gif> coincides with the state <img src=image/13429563_02.gif>.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Comparing The Forecasting Accuracy Metrics of Support Vector Regression and ARIMA Algorithms for Non-Stationary Time Process]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12948]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Youness Jouilil&nbsp; &nbsp;and Driss Mentagui&nbsp; &nbsp;</p><p>Univariate time series forecasting is a crucial machine learning issue across many fields, notably sentiment analysis, economics, medicine, agriculture, and finance. In this working paper, we compare Support Vector Regression (SVR) with the traditional Autoregressive Integrated Moving Average (ARIMA) algorithm in terms of forecasting through a real case study. The dataset used in this investigation was extracted from the World Bank. The target time series is the American Foreign direct investment, net outflows (% of GDP), which covers 50 years of data from 1972 to 2021. For analytical and comparison purposes, all the computations were done using the R programming language for Windows 10. The statistical findings revealed that, in short-term prediction, the forecast errors of both algorithms decrease significantly. Comparatively, the analysis conducted in this investigation demonstrates that machine learning algorithms, especially SVR, perform better than ARIMA in short-term forecasting, since SVR's error measures are the lowest. Thus, we highly recommend that future research compare advanced machine learning algorithms, especially recurrent neural network algorithms, with classical algorithms, especially the ARIMA approach, in order to choose the best algorithm in terms of results and predictive performance.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Multivariate Hotelling-<img src=image/13428791_01.gif> Control Chart for Neutrosophic Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12947]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Saritha M.B&nbsp; &nbsp;and R. Varadharajan&nbsp; &nbsp;</p><p>Industries are consistently confronted with a myriad of challenges, the most significant of which is the requirement to increase product quality while simultaneously minimising manufacturing costs. Statistical Process Control (SPC) provides quality control charts as one of its primary methods for achieving this goal. For monitoring the quality characteristics of a process, the control chart is the most popular and widely used statistical analysis tool. It is necessary to use multivariate control charts if the quality of a process is connected with more than one characteristic. The Hotelling-<img src=image/13428791_02.gif> chart is one of the most familiar multivariate control charts. It is used for simultaneously monitoring the process mean and determining whether or not the process mean vector for two or more variables is under control. However, this is applicable only when the data are accurate, determined, and exact. As a result, when the data are vague or ambiguous, the utility of the conventional Hotelling-<img src=image/13428791_02.gif> control chart is limited. Within the scope of this research, we propose a neutrosophic Hotelling-<img src=image/13428791_02.gif> control chart as a solution to the issue described above. The performance of the proposed chart is evaluated using simulation at various degrees of shift in the process average, with the neutrosophic alarm rate serving as the performance measure. To further investigate the applicability of the suggested chart in practice, we use a real-world example taken from the chemical sector.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[A Rotated Similarity Reduction Approach with Half-Sweep Successive Over-Relaxation Iteration for Solving Two-Dimensional Unsteady Convection-Diffusion Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12946]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Nur Afza Mat Ali&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;Azali Saudi&nbsp; &nbsp;and Nor Syahida Mohamad&nbsp; &nbsp;</p><p>In this paper, we transformed a two-dimensional unsteady convection-diffusion equation into a two-dimensional steady convection-diffusion equation using the similarity transformation technique. This technique can be easily applied to linear or nonlinear problems and is capable of reducing the size of the computational work, since its main idea is to eliminate at least one independent variable. The corresponding similarity equation is then solved numerically using an effective numerical technique, namely a new five-point rotated similarity finite difference scheme via half-sweep successive over-relaxation iteration. This work compared the performance of the proposed method with Gauss-Seidel and successive over-relaxation with the full-sweep concept. Numerical tests were carried out to evaluate the performance of the proposed method using a C simulation. The results revealed that the five-point rotated similarity finite difference scheme via half-sweep successive over-relaxation iteration is superior in terms of iteration number and computational time compared to all the other methods, while in terms of accuracy all three iterative methods are comparable.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Estimation of the Location Parameter of Cauchy Distribution Using Some Variations of the Ranked Set Sampling Technique]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12945]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Arwa Salem Maabreh&nbsp; &nbsp;and Mohammad Fraiwan Al-Saleh&nbsp; &nbsp;</p><p>It is well known that ranked set sampling (RSS) technique and its variations, when applicable, are more efficient for estimating the population mean than the usual random sampling techniques. Despite the fascinating applications of Cauchy distribution, it has many unusual properties. For example: its moments either don’t exist or exist but are infinite, and its minimal sufficient statistics are just the order statistics. Given that the shape of the Cauchy distribution is similar to the normal one, it would be advantageous to carry out some statistical studies to focus on estimating its parameters; in particular the location parameter which is the median. In this paper, the estimation of the location parameter of the Cauchy distribution using RSS and some of its variations; namely, Double RSS, Median RSS, Multistage RSS, and Steady-State RSS are considered. The estimators are compared with each other and with their counterparts using simple random sampling (SRS). The findings show that RSS or any of its variations, being evaluated in this study, is more efficient in estimating the location parameter compared to SRS. The comparison among the RSS variations reveals that the steady-state RSS is more efficient than other RSS variations. Moreover, to overcome some of the challenges of Cauchy distribution, such as the non-existence of moments, a truncated Cauchy distribution is used. For this distribution, all moments are finite as well as the moments of order statistics. Results show that RSS and Median RSS outperform the SRS in estimating the location parameter, even with the truncated version of Cauchy. Overall, the work of this paper identifies other advantages of RSS techniques.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Reliability Evaluation of Linear or Circular Consecutive k-out-of-n: F System Using Dynamic Bayesian Network]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12944]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>R Sakthivel&nbsp; &nbsp;and G Vijayalakshmi&nbsp; &nbsp;</p><p>In the field of reliability theory, one of the most significant topics is determining the reliability of a complex system from the reliabilities of its individual components. The consecutive k-out-of-n:F system is used in telephone networks, photographing in nuclear accelerators, spacecraft relay stations, telecommunication systems consisting of relay stations connecting transmitter and receiver, microwave relay stations, the design of integrated circuits, vacuum systems in accelerators, oil pipeline systems, and computing networks. The reliability estimation of the consecutive k-out-of-n:F system is studied because it plays an important role in many physical systems. Dynamic Bayesian networks are graphical models for time-varying probabilistic inference and causal analysis under system uncertainty. A dynamic Bayesian network is built for the proposed system since time is measured continuously. The consecutive k-out-of-n:F system fails when k consecutive components fail; otherwise, the system works. The contributions are the dynamic Bayesian network construction of the proposed system and the reliability analysis of the linear and circular consecutive k-out-of-n:F systems. Furthermore, the dynamic Bayesian network-based reliability is shown to be significantly higher than the reliability achieved by Malinowski, Preuss and Gao, Liu, Wang, Peng and Amirian, Khodadadi, Chatrabgoun. The dynamic Bayesian network-based reliabilities of the linear and circular consecutive k-out-of-n:F systems are also compared.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[Convergence Analysis of Space Discretization of Time Fractional Telegraph Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12943]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ebimene James Mamadu&nbsp; &nbsp;Henrietta Ify Ojarikre&nbsp; &nbsp;and Ignatius Nkonyeasua Njoseh&nbsp; &nbsp;</p><p>The role of fractional differential equations in the advancement of science and technology cannot be overemphasized. The time fractional telegraph equation (TFTE) is a hyperbolic partial differential equation (HPDE) with applications in frequency transmission lines such as the telegraph wire, radio frequency, wire radio antennas, and telephone lines, among others. Consequently, numerical procedures (such as the finite element method, the H<sup>1</sup>-Galerkin mixed finite element method, and the finite difference method, among others) have become essential tools for obtaining approximate solutions of these HPDEs. It is also essential for these numerical techniques to converge to a given analytic solution at a certain rate. The Ritz projection is often used in the analysis of stability, error estimation, convergence, and superconvergence of many numerical procedures. Hence, this paper offers a rigorous and comprehensive convergence analysis of the space-discretized time-fractional telegraph equation. To this effect, we define a temporal mesh on [0,T] with a finite element space in the Mamadu-Njoseh polynomial space, φ<sub>m-1</sub>, of degree ≤m-1. An interpolation operator (also of a polynomial space) is introduced along with the fractional Ritz projection to prove the convergence theorem. Essentially, we employ both the fractional Ritz projection and the interpolation technique, with a superclose estimate in the L<sub>2</sub>-norm between them, to avoid a difficult Ritz operator construction and achieve the convergence of the method.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[MTSClust with Handling Missing Data Using VAR-Moving Average Imputation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12942]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Embay Rohaeti&nbsp; &nbsp;I Made Sumertajaya&nbsp; &nbsp;Aji Hamim Wigena&nbsp; &nbsp;and Kusman Sadik&nbsp; &nbsp;</p><p>Modeling and forecasting multivariate time series (MTS) data with multiple objects can be challenging, especially if the data exhibit volatility and missing values. Several studies on inflation data have been proposed, but these studies either did not use MTS data or did not consider missing data. This study aims to develop an approach that can obtain general models and forecasts for MTS data with volatility and missing data. We propose the Vector Autoregressive Moving Average Imputation Method - Multivariate Time Series Clustering (VAR-IMMA - MTSClust) to group the objects into clusters. The clusters can then be used to obtain general models and forecasts. This study consists of three stages. The first stage is the imputation simulation stage, where 10%, 20%, and 30% of the MTS data were randomly removed and imputed using the original VAR-IM and the proposed VAR-IMMA. The second stage is the clustering stage, where six clustering methods, i.e., K-means Euclidean, K-means Manhattan, K-means DTW, PAM Euclidean, PAM Manhattan, and PAM DTW, were applied to both the complete data and the imputed data from the first stage. The third stage is the modeling and forecasting stage, where the clusters from the second stage are used to obtain general models and forecasts for each cluster. The simulations were performed 1000 times and evaluated using RMSE, RMSSTD, R-squared, ARI, and balanced accuracy. The results showed that VAR-IMMA could increase the imputation accuracy by 10% in 50% of cases and even more in another 25% of cases. This increase in imputation accuracy proved beneficial in the second stage, where clustering on the imputed data formed clusters that remained similar to those of the complete data despite the missing values. K-means Euclidean and PAM Euclidean were two of the best methods. Finally, the use of VAR-IMMA and PAM Euclidean on inflation rate data with missing values was illustrated. The imputed clusters have an ARI score of 0.57 and a balanced accuracy of 92%, leading to models and forecasts similar to those obtained from the complete data.</p>]]></description>
<pubDate>Mar 2023</pubDate>
</item>
<item>
<title><![CDATA[A Facet Defining of the Dicycle Polytope]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12920]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Mamane Souleye Ibrahim&nbsp; &nbsp;and Oumarou Abdou Arbi&nbsp; &nbsp;</p><p>In this paper, we consider the polytope <img src=image/13429471_01.gif> of all elementary dicycles of a digraph <img src=image/13429471_02.gif>. The dicycle problem, in graph theory and combinatorial optimization, has been extensively studied in the literature via polyhedral approaches. Cutting-plane and branch-and-cut algorithms are therefore indispensable for solving such a combinatorial optimization problem exactly. For this purpose, we introduce a new family of valid inequalities, called <img src=image/13429471_03.gif> alternating 3-arc path inequalities, for the polytope of elementary dicycles <img src=image/13429471_01.gif>. These inequalities can be used in cutting-plane and branch-and-cut algorithms to construct strengthened relaxations of a linear formulation of the dicycle problem. To prove the facetness of <img src=image/13429471_03.gif> alternating 3-arc path inequalities, in contrast to the usual approach, which basically consists of determining the affine subspace of a linear description of the considered polytope, we resort to constructive algorithms. Given the set of arcs of the digraph <img src=image/13429471_02.gif>, the devised algorithms are based on the fact that, starting from a first elementary dicycle, all other dicycles are iteratively generated by replacing some arcs of previously generated dicycles with others, so that the current elementary dicycle contains an arc that does not belong to any previously generated dicycle. These algorithms generate dicycles with affinely independent incidence vectors that satisfy <img src=image/13429471_03.gif> alternating 3-arc path inequalities with equality. It can easily be verified that all these algorithms are polynomial from a time-complexity point of view.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Brachistochrone Curve Representation via Transition Curve]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12919]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Rabiatul Adawiah Fadzar&nbsp; &nbsp;and Md Yushalify Misro&nbsp; &nbsp;</p><p>The brachistochrone curve is the optimal curve giving the fastest descent path for an object sliding frictionlessly under a uniform gravitational field. In this paper, the brachistochrone curve is reconstructed using two different basis functions, namely the Bézier curve and the trigonometric Bézier curve with shape parameters. The brachistochrone curve between two points is approximated via a C-shape transition curve. The travel time and curvature are evaluated and compared for each curve. This research reveals that the trigonometric Bézier curve provides the closest approximation of the brachistochrone curve in terms of travel time estimation, and that the shape parameters of the trigonometric Bézier curve provide better shape adjustability than the Bézier curve.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[A Note on External Direct Products of BP-algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12918]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Chatsuda Chanmanee&nbsp; &nbsp;Rukchart Prasertpong&nbsp; &nbsp;Pongpun Julatha&nbsp; &nbsp;U. V. Kalyani&nbsp; &nbsp;T. Eswarlal&nbsp; &nbsp;and Aiyared Iampan&nbsp; &nbsp;</p><p>The notion of BP-algebras, which is related to several classes of algebras, was introduced by Ahn and Han [2] in 2013 and has been examined by several researchers. The concept of the direct product (DP) [21] was initially developed for groups, where some of its features were given, and was then applied to other algebraic structures. Lingcong and Endam [16] examined the idea of the DP of (0-commutative) B-algebras and B-homomorphisms in 2016 and discovered several related features, one of which is that the DP of two B-algebras is a B-algebra. Later on, the concept of the DP of B-algebras was extended to finite families of B-algebras, and some of the connected issues were researched. In this work, the external direct product (EDP), a generalization of the DP, is established, and the results of the EDP for certain subsets of BP-algebras are determined. In addition, we define the weak direct product (WDP) of BP-algebras. In light of the EDP of BP-algebras, we conclude by presenting numerous essential theorems of (anti-)BP-homomorphisms.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[New Results on Face Magic Mean Labeling of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12917]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>S. Vani Shree&nbsp; &nbsp;and S. Dhanalakshmi&nbsp; &nbsp;</p><p>In the mid-1960s, a conjecture by Kotzig and Ringel and a study by Rosa sparked interest in graph labeling. Our primary objective is to examine some types of graphs which admit Face Magic Mean Labeling (FMML). A bijection <img src=image/13429259_01.gif> is called a (1,0,0) F-Face magic mean labeling [FMML] of <img src=image/13429259_02.gif> if the induced face labeling <img src=image/13429259_03.gif> <img src=image/13429259_04.gif> A bijection <img src=image/13429259_05.gif> is called a (1,1,0) F-Face magic mean labeling [FMML] of <img src=image/13429259_02.gif> if the induced face labeling <img src=image/13429259_06.gif> <img src=image/13429259_07.gif> In this paper, the (1,0,0) F-FMML of Ladder graphs, the Tortoise graph, and the Middle graph of a path graph is investigated. Also, (1,0,0) and (1,1,0) F-FMML are verified for the Ortho Chain Square Cactus graph, the Para Chain Square Cactus graph, and some snake-related graphs such as Triangular snake graphs and Quadrilateral snake graphs. For a wide range of applications, including the creation of good codes, synch-set codes, missile guidance codes, and convolutional codes with optimal autocorrelation characteristics, labeled graphs serve as valuable mathematical models. They help in developing the most efficient non-standard integer encodings; labeled graphs have also been used to identify ambiguities in the access protocols of communication networks, in database management, to identify the best circuit layouts, etc.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[A New Quasi-Newton Method with PCG Method for Nonlinear Optimization Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12916]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Bayda Ghanim Fathi&nbsp; &nbsp;and Alaa Luqman Ibrahim&nbsp; &nbsp;</p><p>The major stationary iterative method used to solve nonlinear optimization problems is the quasi-Newton (QN) method. Symmetric Rank-One (SR1) is a method in the quasi-Newton family. This algorithm converges towards the true Hessian quickly and has computational advantages for sparse or partially separable problems [1]. Thus, investigating the efficiency of the SR1 algorithm is significant. However, the matrix generated by the SR1 update is not guaranteed to remain positive definite, and the denominator may vanish or become zero. To overcome these drawbacks of the SR1 method and achieve better performance than the standard SR1 method, in this work we derive a new vector <img src=image/13429423_01.gif> depending on the Barzilai-Borwein step size to obtain a new SR1 method. This updating formula is then combined with the preconditioned conjugate gradient (PCG) method. With the aid of an inexact line search procedure under the strong Wolfe conditions, the new SR1 method is proposed and its performance is evaluated in comparison to the conventional SR1 method. It is proven that the updated matrix of the new SR1 method, <img src=image/13429423_02.gif>, is symmetric and positive definite, given that <img src=image/13429423_03.gif> is initialized to the identity matrix. In this study, the proposed method solved 13 problems effectively in terms of the number of iterations (NI) and the number of function evaluations (NF). Regarding NF, the new SR1 method also outperformed the classic SR1 method. The proposed method is shown to be more efficient than the original method in solving relatively large-scale problems (5,000 variables). The numerical results show that the proposed method is significantly faster, effective, and suitable for solving large-dimensional nonlinear problems.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Adaptive Step Size Stochastic Runge-Kutta Method of Order 1.5(1.0) for Stochastic Differential Equations (SDEs)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12915]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Noor Julailah Abd Mutalib&nbsp; &nbsp;Norhayati Rosli&nbsp; &nbsp;and Noor Amalina Nisa Ariffin&nbsp; &nbsp;</p><p>Stiff stochastic differential equations (SDEs) have solutions with sharp turning points that require a very small step size to capture their behavior. Since the step size must be as small as possible, implementing a fixed step size method results in high computational cost. Therefore, a variable step size method is needed, in which the step size used can be more flexible. This paper is devoted to the development of an embedded stochastic Runge-Kutta (SRK) pair method for SDEs. The proposed method is an adaptive step size SRK method. The method is constructed by embedding an SRK method of order 1.0 into an SRK method of order 1.5 of convergence. The embedding technique is suitable for adaptive step size implementation, since an error estimate can be obtained at each step. Numerical experiments are performed to demonstrate the efficiency of the method. The results show that the adaptive step size SRK method of order 1.5(1.0) gives the smallest global error compared to the fixed step size SRK4, Euler, and Milstein methods. Hence, this method is reliable in approximating the solution of SDEs.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Construction of the <img src=image/13492345_01.gif> Graph of Mathieu Group <img src=image/13492345_02.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12914]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Suzila Mohd Kasim&nbsp; &nbsp;Shaharuddin Cik Soh&nbsp; &nbsp;and Siti Nor Aini Mohd Aslam&nbsp; &nbsp;</p><p>Suppose that <img src=image/13492345_03.gif> is a group and <img src=image/13492345_04.gif> is a subset of <img src=image/13492345_03.gif>. Then, the <img src=image/13492345_05.gif> graph of a group <img src=image/13492345_03.gif>, denoted by <img src=image/13492345_06.gif>, is the simple undirected graph in which two distinct vertices <img src=image/13492345_07.gif> are connected by an edge if and only if both vertices satisfy <img src=image/13492345_08.gif>. The main contribution of this paper is to construct the <img src=image/13492345_05.gif> graph using the elements of the Mathieu group <img src=image/13492345_09.gif>. Additionally, <img src=image/13492345_06.gif> is proven to be a connected graph. Finally, an open problem is highlighted for future research.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Half-sweep Modified SOR Approximation of A Two-dimensional Nonlinear Parabolic Partial Differential Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12913]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Jackel Vui Lung Chew&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;Andang Sunarto&nbsp; &nbsp;and Zurina Patrick&nbsp; &nbsp;</p><p>The sole subject of this numerical analysis is the half-sweep modified successive over-relaxation (HSMSOR) approach, which takes the form of an iterative formula. This study numerically solves a class of two-dimensional nonlinear parabolic partial differential equations subject to Dirichlet boundary conditions using an implicit-type finite difference scheme. Computational cost is optimized by converting the traditional implicit finite difference approximation into the half-sweep finite difference approximation. The implementation requires inner-outer iteration cycles, the second-order Newton method, and a linearization technique. The HSMSOR method is used to approximate the linearized system of equations in the inner iteration cycle, while the problem's numerical solutions are obtained via the outer iteration cycle. The study examines the local truncation error as well as the stability and convergence of the method. Results from three initial-boundary value problems show that the proposed method has competitive computational costs compared to the existing method.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[On the Performance of Bayesian Generalized Dissimilarity Model Estimator]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12912]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Evellin Dewi Lusiana&nbsp; &nbsp;Suci Astutik&nbsp; &nbsp;Nurjannah&nbsp; &nbsp;and Abu Bakar Sambah&nbsp; &nbsp;</p><p>The Generalized Dissimilarity Model (GDM) is an extension of the Generalized Linear Model (GLM) used to describe and estimate biological pairwise dissimilarities, which follow a binomial process, in response to environmental gradients. Improvements have been made to accommodate the uncertainty in the GDM by applying resampling schemes such as the Bayesian Bootstrap (BBGDM). Because there is an ecological assumption in the GDM, it is reasonable to use a proper Bayesian approach rather than a resampling method to obtain better modelling and inference results. Similar to other GLM techniques, the GDM employs a link function, such as the logit link commonly used for the binomial regression model. Using this link, a Bayesian approach to the GDM framework, called Bayesian GDM (BGDM), can be constructed. In this paper, we aim to evaluate the performance of the BGDM estimators relative to BBGDM. Our study reveals that the BGDM estimator outperforms that of BBGDM, especially in terms of unbiasedness and efficiency. However, the BGDM estimators fail to meet the consistency property. Moreover, the application of the BGDM to a real case study indicates that its inferential abilities are superior to those of the preceding model.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[An Effective Spectral Approach to Solving Fractal Differential Equations of Variable Order Based on the Non-singular Kernel Derivative]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12889]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>M. Basim&nbsp; &nbsp;N. Senu&nbsp; &nbsp;A. Ahmadian&nbsp; &nbsp;Z. B. Ibrahim&nbsp; &nbsp;and S. Salahshour&nbsp; &nbsp;</p><p>A new class of differential operators has been discovered using fractional and variable-order fractal Atangana-Baleanu derivatives, which has inspired the development of a new class of differential equations. Physical phenomena with variable memory and fractal variable dimension can be described using these operators. The primary goal of this study is to use an operational matrix based on shifted Legendre polynomials to obtain numerical solutions for this new class of differential equations, which helps us solve the problem by transforming it into an algebraic system of equations. This method is employed to solve two forms of fractal fractional differential equations: linear and non-linear. The suggested strategy is contrasted with the mixture of two-step Lagrange polynomials, the predictor-corrector algorithm, and the fundamental theorem of fractional calculus, using numerical examples to demonstrate its accuracy and simplicity. An estimation error is used to compare the results of the suggested method with the exact solutions of the problems. The proposed approach could apply to a wider class of biological systems, such as mathematical modelling of infectious disease dynamics, and to other important areas of study, such as economics, finance, and engineering. We are confident that this paper will open many new avenues of investigation for modelling real-world system problems.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[A Formal Solution of Quadruple Series Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12888]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>A. K. Awasthi&nbsp; &nbsp;Rachna&nbsp; &nbsp;and Rohit&nbsp; &nbsp;</p><p>The significance of series equations in pure and applied mathematics cannot be overstated. Series play an important role in virtually every subject of mathematics, and series solutions play a major role in the solution of mixed boundary value problems. Dual, triple, and quadruple series equations are useful in finding the solution of four-part boundary value problems of electrostatics, elasticity, and other fields of mathematical physics. Cooke devised a method for solving quadruple series equations involving Fourier-Bessel series and obtained the solution using operator theory. Several workers have devoted considerable attention to the solutions of various equations involving, for instance, trigonometric series, Fourier-Bessel series, Fourier-Legendre series, Dini series, series of Jacobi and Laguerre polynomials, and series equations involving Bateman K-functions. Indeed, many of these problems arise in the investigation of certain classes of mixed boundary value problems in potential theory. There has been less work on quadruple series equations involving various polynomials and functions. In light of the significance of quadruple series solutions, the proposed work examines quadruple series equations that involve the product of r generalised Bateman K-functions. The solution is formal, and no attempt has been made to justify the many limiting processes encountered.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[On the Performance of Full Information Maximum Likelihood in SEM Missing Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12887]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Amal HMIMOU&nbsp; &nbsp;M'barek IAOUSSE&nbsp; &nbsp;Soumaia HMIMOU&nbsp; &nbsp;Hanaa HACHIMI&nbsp; &nbsp;and Youssfi EL KETTANI&nbsp; &nbsp;</p><p>Missing data is a real problem in all fields of statistical modeling, particularly in structural equation modeling, a set of statistical techniques used to estimate models with latent concepts. In this research paper, the techniques used to handle missing data in structural equation models are investigated. To clarify this, the mechanisms of missing data are presented based on the probability distribution. This presentation recognizes three mechanisms: missing completely at random, missing at random, and missing not at random. Ignoring missing data in the statistical analysis may mislead the estimation and generate biased estimates. Many techniques are used to remedy this problem; in the present paper, we consider three of them, namely listwise deletion, pairwise deletion, and full information maximum likelihood. To investigate the power of each of these methods in structural equation models, a simulation study is conducted. Furthermore, the correlation between the exogenous latent variables is examined to extend previous studies. We simulated a structural model with three latent variables, each measured by three observed variables. Three sample sizes (700, 1000, 1500) are examined, each under three missing rates (2%, 10%, 15%) for two specified mechanisms. In addition, for each sample, a hundred further samples were generated and investigated using the same case design. The examination criterion is the parameter bias calculated for each case design. The results illustrate, as theoretically expected, the following: (1) the non-convergence of pairwise deletion, (2) a huge loss of information when using listwise deletion, and (3) the relative superiority of full information maximum likelihood over listwise deletion when using parameter bias as a criterion, particularly for the correlation between the exogenous latent variables. This superiority appears chiefly for larger sample sizes, where the multivariate normal distribution holds.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Some Results of Generalized Weighted Norlund-Euler-<img src=image/13428701_01.gif> Statistical Convergence in Non-Archimedean Fields]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12886]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Muthu Meena Lakshmanan E&nbsp; &nbsp;and Suja K&nbsp; &nbsp;</p><p>Non-Archimedean analysis is the study of fields that satisfy the stronger triangle inequality, also known as the ultrametric property. The theory of summability has many uses throughout analysis and applied mathematics. Summability methods originated with the study of convergent and divergent series by Euler, Gauss, Cauchy, and Abel. There are a good number of special summability methods in classical analysis, such as those of Abel, Borel, Euler, Taylor, Norlund, and Hausdorff. The Norlund, Euler, Taylor, and weighted mean methods in non-Archimedean analysis have been investigated in detail by Natarajan and Srinivasan. Schoenberg developed some basic properties of statistical convergence, studied the concept as a summability method, and introduced the relationship between summability theory and statistical convergence. The concept of weighted statistical convergence and its relations to statistical summability were developed by Karakaya and Chishti. Srinivasan introduced some summability methods, namely the y-method, the Norlund method, and the weighted mean method, in p-adic fields. The main objective of this work is to explore some important results on statistical convergence and its related concepts in non-Archimedean fields using summability methods. In this article, Norlund-Euler-<img src=image/13428701_02.gif> statistical convergence and generalized weighted summability using the Norlund-Euler-<img src=image/13428701_02.gif> method in an ultrametric field are defined. The relation between Norlund-Euler-<img src=image/13428701_02.gif> statistical convergence and statistical Norlund-Euler-<img src=image/13428701_02.gif> summability is extended to non-Archimedean fields. The notion of Norlund-Euler-<img src=image/13428701_02.gif> statistical convergence and inclusion results for Norlund-Euler statistically convergent sequences are characterized. Further, the relation between Norlund-Euler-<img src=image/13428701_02.gif> statistical convergence of orders α and β is established.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Two New Preconditioned Conjugate Gradient Methods for Minimization Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12765]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Hussein Ageel Khatab&nbsp; &nbsp;and Salah Gazi Shareef&nbsp; &nbsp;</p><p>When applied to general functions, the conjugate gradient and Quasi-Newton methods each have particular advantages and disadvantages. Conjugate gradient (CG) techniques are a class of unconstrained optimization algorithms with strong local and global convergence properties and minimal memory requirements. Quasi-Newton methods are reliable and efficient on a wide range of problems: they converge faster than the conjugate gradient method and require fewer function evaluations, but they have the disadvantage of requiring substantially more storage, and on ill-conditioned problems they may require many iterations. A new class of methods has been developed, termed preconditioned conjugate gradient (PCG) methods, which combines the conjugate gradient and Quasi-Newton approaches. In this work, two new preconditioned conjugate gradient algorithms, New PCG1 and New PCG2, are proposed to solve nonlinear unconstrained optimization problems. New PCG1 combines the Hestenes-Stiefel (HS) conjugate gradient method with a new self-scaling symmetric rank-one (SR1) update, and New PCG2 combines the Hestenes-Stiefel (HS) conjugate gradient method with a new self-scaling Davidon-Fletcher-Powell (DFP) update. Both algorithms use the strong Wolfe line search condition. Numerical comparisons show that the new algorithms outperform standard preconditioned conjugate gradient algorithms.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
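The abstract above describes combining a CG iteration with a Quasi-Newton-style preconditioner. As a minimal illustrative sketch (a generic preconditioned CG on a linear system with a simple Jacobi diagonal preconditioner, not the paper's New PCG1/New PCG2 updates; all names here are my own):

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=100):
    """Generic preconditioned conjugate gradient for a symmetric
    positive definite matrix A. M_inv is a callable applying an
    approximation of A^{-1} (the preconditioner) to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()                  # search direction
    for _ in range(max_iter):
        rz = r @ z
        Ap = A @ p
        alpha = rz / (p @ Ap)     # exact line search for quadratics
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        beta = (r @ z) / rz       # Fletcher-Reeves-style update
        p = z + beta * p
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = preconditioned_cg(A, b, lambda r: r / d)  # Jacobi preconditioner
```

For a nonlinear objective, as in the paper, the preconditioner would instead be a self-scaling Quasi-Newton matrix updated each iteration.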
<item>
<title><![CDATA[A Simple Approach for Explicit Solution of The Neutron Diffusion Kinetic System]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12764]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Hind K. Al-Jeaid&nbsp; &nbsp;</p><p>This paper introduces a new approach to directly solve a system of two coupled partial differential equations (PDEs) subjected to physical conditions describing the diffusion kinetic problem with one delayed neutron precursor concentration in Cartesian geometry. In the literature, many difficulties arise when dealing with the current model using various numerical/analytical approaches. Normally, mathematicians search for simple but effective methods to solve their physical models. This work aims to introduce a new approach to directly solve the model under investigation. The present approach transforms the given PDEs into a system of linear ordinary differential equations (ODEs). The solution of this system of ODEs is obtained by a simple analytical procedure. In addition, the solution of the original system of PDEs is determined in explicit form. The main advantage of the current approach is that it avoids the natural transformations, such as the Laplace transform, used in the literature. It also gives the solution in a direct manner; hence, the massive computational work of other numerical/analytical approaches is avoided. The proposed method is therefore effective and simpler than those previously published in the literature. Moreover, the proposed approach can be further extended and applied to solve other kinds of diffusion kinetic problems.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[The Locating Chromatic Number for Certain Operation of Origami Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12763]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Asmiati&nbsp; &nbsp;Agus Irawan&nbsp; &nbsp;Aang Nuryaman&nbsp; &nbsp;and Kurnia Muludi&nbsp; &nbsp;</p><p>The locating chromatic number, introduced by Chartrand et al. in 2002, is the marriage of the partition dimension and graph coloring. The locating chromatic number depends on the minimum number of colors used in the locating coloring and the distinct color codes of the vertices of the graph. There is no general algorithm or theorem for determining the locating chromatic number of an arbitrary graph; it must be determined separately for each graph class or graph operation. This research develops the theory by focusing on how far the locating chromatic number of a graph increases when certain operations are applied to it. The locating chromatic number of the origami graph has already been obtained; the next question of interest is the locating chromatic number for a certain operation on origami graphs, which this paper addresses. The method used in this study is to determine upper and lower bounds on the locating chromatic number for this operation on origami graphs. The result obtained is that the locating chromatic number of origami graphs increases by one color under the operation.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[ANOVA Assisted Variable Selection in High-dimensional Multicategory Response Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12762]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Demudu Naganaidu&nbsp; &nbsp;and Zarina Mohd Khalid&nbsp; &nbsp;</p><p>Multinomial logistic regression is preferred in the classification of multicategory response data for its ease of interpretation and its ability to identify the input variables associated with each category. However, identifying important input variables in high-dimensional data poses several challenges, as the majority of variables are unnecessary for discriminating among the categories. Frequently used techniques for identifying important input variables in high-dimensional data include regularisation techniques such as the Least Absolute Shrinkage and Selection Operator (LASSO) and sure independence screening (SIS), or combinations of both. In this paper, we propose using ANOVA to assist SIS in variable screening for high-dimensional data when the response variable is multicategorical. The new approach is straightforward and computationally efficient. Simulated data with and without correlation are generated for numerical studies to illustrate the methodology, and the results of applying the methods to real data are presented. In conclusion, ANOVA performance is comparable with SIS in variable selection for uncorrelated input variables, and the combination of ANOVA and SIS performs better for correlated input variables.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
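One plausible way ANOVA can screen variables in the setting above is to compute a per-feature one-way F statistic across the response categories and keep the highest-scoring features. A toy sketch under my own simulated setup (not the authors' exact procedure or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional setup: 3 response categories, 50 features,
# only feature 0 has a category-dependent mean.
y = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 50))
X[:, 0] += y * 1.5  # shift feature 0 by category

def anova_f(x, y):
    """One-way ANOVA F statistic of one feature x against categories y."""
    groups = [x[y == g] for g in np.unique(y)]
    n, k = len(x), len(groups)
    grand = x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

scores = np.array([anova_f(X[:, j], y) for j in range(X.shape[1])])
top = int(np.argmax(scores))  # screening would retain the top-ranked features
```

In practice the retained subset would then be passed to a multinomial logistic regression or combined with SIS rankings, as the abstract describes.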
<item>
<title><![CDATA[A New Bivariate Odd Generalized Exponential Gompertz Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12761]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Mervat Mahdy&nbsp; &nbsp;Eman Fathy&nbsp; &nbsp;and Dina S. Eltelbany&nbsp; &nbsp;</p><p>The objective of this study is to present a novel bivariate distribution, which we denote the bivariate odd generalized exponential Gompertz (BOGE-G) distribution. Well-known models included in this one are the Gompertz, generalized exponential, odd generalized exponential, and odd generalized exponential Gompertz distributions. The model introduced here is of Marshall-Olkin type [16]. The marginals of the new bivariate distribution have the odd generalized exponential Gompertz distribution proposed by [7]. Closed forms exist for both the joint probability density function and the joint cumulative distribution function. Properties of this distribution that are discussed include the bivariate moment generating function, marginal moment generating functions, conditional distribution, joint reliability function, marginal hazard rate functions, joint mean waiting time, and joint reversed hazard rate function. The maximum likelihood approach is used to estimate the model parameters. To demonstrate empirically the significance and adaptability of the new model in fitting and evaluating real lifespan data, two sets of real data are studied using the new bivariate distribution. A simulation study was conducted using the software Mathcad to evaluate the bias and mean square error (MSE) of the maximum likelihood estimates. We found that the bias and MSE decrease as the sample size increases.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Even Vertex <img src=image/13429275_01.gif>-Graceful Labeling on Rough Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12760]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>R. Nithya&nbsp; &nbsp;and K. Anitha&nbsp; &nbsp;</p><p>Rough set theory is the study of sets of objects with imprecise knowledge and vague information. The diagrammatic representation of this type of information may be handled through graphs for better decision making. Tong He and K. Shi introduced the construction of rough graphs in 2006, followed by the notion of the edge rough graph. They constructed rough graphs through set approximations called upper and lower approximations. He et al. developed the concept of the weighted rough graph with weighted attributes. Labelling is the process of making a graph more meaningful: integers are assigned to the vertices of a graph so that distinct weights are obtained for the edges. The weight of an edge captures the degree of relationship between its vertices. In this paper we consider the rough graph constructed through rough membership values and envisage a novel type of labeling, called Even vertex <img src=image/13429275_02.gif>-graceful labeling, as the weight value for edges. In a rough graph, the weight of an edge identifies the consistent attribute even though the information system is imprecise. We have investigated this labeling for some special graphs such as the rough path graph, rough cycle graph, rough comb graph, rough ladder graph and rough star graph. This Even vertex <img src=image/13429275_02.gif>-graceful labeling will be useful in the feature extraction process, and it leads to graph mining.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[A New Methodology on Rough Lattice Using Granular Concepts]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12759]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>B. Srirekha&nbsp; &nbsp;Shakeela Sathish&nbsp; &nbsp;and P. Devaki&nbsp; &nbsp;</p><p>Rough set theory plays a vital role in the mathematical treatment of knowledge representation problems, and a rough algebraic structure was defined by Pawlak. Lattice theory has many applications in mathematics and computer science; for instance, the principle of the ordered set has been analyzed in logic programming for crypto-protocols. Iwinski extended the lattice approach to rough set theory, whereas Chakraborty established an algebraic structure based on a rough lattice depending on an indiscernibility relation. Granularity refers to piecewise knowledge, grouping similar elements; the universe is partitioned by an indiscernibility relation to form granules. This structure was framed to describe rough set theory and to study its corresponding rough approximation space. The analysis of the reduction of granules from the information table is object-oriented. An ordered pair of distributive lattices emphasizes the congruence class to define its projection. This projection of the distributive lattice is analyzed via a lemma stating that the largest and the smallest elements are trivial ordered sets of an index. A rough approximation space was examined to incorporate the upper approximation and analyzed under various possibilities. The Cartesian product of the distributive lattice was investigated. A lattice homomorphism was examined together with an equivalence relation and its conditions. Hence the approximation space is closed under union and intersection in the upper approximation. The lower approximation in different subsets of the distributive lattice was studied. The generalized lower and upper approximations were established to verify some of the results and their properties.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Raise Estimation: An Alternative Approach in The Presence of Problematic Multicollinearity]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12758]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Jinse Jacob&nbsp; &nbsp;and R. Varadharajan&nbsp; &nbsp;</p><p>When adopting the Ordinary Least Squares (OLS) method to compute regression coefficients, the results become unreliable when two or more predictor variables are linearly related to one another. The increased variance of the OLS estimator lengthens the confidence intervals of the estimates and causes test procedures to potentially generate misleading results. Additionally, it is difficult to determine the marginal contribution of the associated predictors, since the estimates depend on the other predictor variables included in the model. Ridge Regression (RR) is a popular alternative to consider in this scenario; however, it impairs the standard approach to statistical testing. The Raise Method (RM) is a technique that was developed to combat multicollinearity while maintaining statistical inference. In this work, we offer a novel approach for determining the raise parameter, because the traditional one is a function of the actual coefficients, which limits the use of the Raise Method in real-world circumstances. Using simulations, the suggested method was compared to Ordinary Least Squares and Ridge Regression in terms of predictive capacity, stability of coefficients, and probability of obtaining unacceptable coefficients at different levels of sample size, linear dependence, and residual variance. According to the findings, the technique we designed turns out to be quite effective. Finally, a practical application is discussed.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
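The OLS instability under multicollinearity described above, and the stabilising effect of a regularized alternative, can be seen numerically. A sketch comparing OLS with ridge regression on nearly collinear simulated data (the raise estimator itself is not reproduced here; the data and the ridge constant `k` are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS: solves the normal equations X'X b = X'y; the near-singular
# X'X inflates the variance of the coefficient estimates.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: adds k*I to X'X, shrinking and stabilising the estimates.
k = 1.0
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
```

Both estimators fit the data almost equally well; they differ mainly in how the fit is split between the two collinear coefficients, which is exactly the instability the abstract targets.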
<item>
<title><![CDATA[Developing Average Run Length for Monitoring Changes in the Mean on the Presence of Long Memory under Seasonal Fractionally Integrated MAX Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12757]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Wilasinee Peerajit&nbsp; &nbsp;</p><p>The cumulative sum (CUSUM) control chart can sensitively detect small-to-moderate shifts in the process mean. The average run length (ARL) is a popular measure used to determine the performance of a control chart. Recently, several researchers investigated the performance of processes on a CUSUM control chart by evaluating the ARL using either Monte Carlo simulation or Markov chain methods. As these methods only yield approximate results, we developed solutions for the exact ARL by using explicit formulas based on an integral equation (IE) for studying the performance of a CUSUM control chart running a long-memory process with exponential white noise. The long-memory process observations are derived from a seasonal fractionally integrated MAX model while focusing on X. The existence and uniqueness of the solution for calculating the ARL via explicit formulas were proved by using Banach's fixed-point theorem. The accuracy percentage of the explicit formulas against the approximate ARL obtained via the numerical IE method was greater than 99%, which indicates excellent agreement between the two methods. An important conclusion of this study is that the proposed solution for the ARL using explicit formulas could sensitively detect changes in the process mean on a CUSUM control chart in this situation. Finally, an illustrative case study is provided to show the efficacy of the proposed explicit formulas with processes involving real data.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
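The Monte Carlo ARL evaluation that the paper's explicit formulas are benchmarked against can be sketched for a simple one-sided CUSUM with i.i.d. exponential observations (not the seasonal long-memory model of the paper; the chart parameters `k` and `h` and the shift size are my own illustrative choices):

```python
import random

def cusum_run_length(rng, mu0=1.0, shift=0.0, k=0.5, h=5.0, max_n=100000):
    """Run an upper CUSUM on exponential observations with mean mu0+shift
    until the statistic exceeds h; return the run length."""
    s = 0.0
    for n in range(1, max_n + 1):
        x = rng.expovariate(1.0 / (mu0 + shift))
        s = max(0.0, s + x - (mu0 + k))   # CUSUM recursion
        if s > h:
            return n
    return max_n

def monte_carlo_arl(shift, reps=2000, seed=7):
    """Average run length estimated by averaging simulated run lengths."""
    rng = random.Random(seed)
    return sum(cusum_run_length(rng, shift=shift) for _ in range(reps)) / reps

arl_in_control = monte_carlo_arl(0.0)   # no shift: long runs expected
arl_shifted = monte_carlo_arl(1.0)      # upward mean shift: quick detection
</n```

The paper's contribution is to replace this approximate averaging with exact explicit formulas derived from an integral equation.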
<item>
<title><![CDATA[Multiplication and Inverse Operations in Parametric Form of Triangular Fuzzy Number]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12756]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Mashadi&nbsp; &nbsp;Yuliana Safitri&nbsp; &nbsp;and Sukono&nbsp; &nbsp;</p><p>Many authors have given arithmetic for triangular fuzzy numbers, and for addition and subtraction there is not much difference among them. The differences occur for the multiplication, division, and inverse operations. Several authors define the inverse of a triangular fuzzy number in parametric form. However, it does not always yield <img src=image/13429431_01.gif>, because an inverse that yields the unique identity cannot be determined uniquely. We are therefore unable to directly determine the inverse of a matrix of triangular fuzzy numbers. Thus, problems using a matrix <img src=image/13429431_02.gif> of triangular fuzzy numbers cannot be solved directly by determining <img src=image/13429431_03.gif>. In addition, various authors have tried, with various methods, to determine <img src=image/13429431_03.gif>, but still do not produce <img src=image/13429431_04.gif>. Consequently, the solution of a fully fuzzy linear system will be incompatible, which is why different authors obtain different solutions for the same fully fuzzy linear system. This paper proposes an alternative method to determine the inverse of a triangular fuzzy number in parametric form. It begins with the construction of a midpoint <img src=image/13429431_05.gif> for any triangular fuzzy number <img src=image/13429431_06.gif>, or in parametric form <img src=image/13429431_07.gif>. Then the multiplication is constructed so as to obtain a unique inverse which produces <img src=image/13429431_08.gif>. The multiplication, division, and inverse so defined are proven to satisfy various algebraic properties. Therefore, whether a triangular fuzzy number or a matrix of triangular fuzzy numbers is used, the method can be applied easily and directly to produce a unique inverse. At the end of this paper, we give examples of calculating the inverse of a parametric triangular fuzzy number for various cases. It is expected that the reader can easily extend this to the case of a fuzzy matrix of triangular fuzzy numbers.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
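The failure that motivates the paper, namely that the standard parametric inverse does not return the exact identity, is easy to see with ordinary alpha-cut interval arithmetic. This sketch uses the standard interval operations for positive numbers, not the authors' midpoint-based construction:

```python
def tfn_parametric(a, b, c, r):
    """Alpha-cut (parametric) form of the triangular fuzzy number (a, b, c):
    the interval [a + (b-a)r, c - (c-b)r] for r in [0, 1]."""
    return (a + (b - a) * r, c - (c - b) * r)

def mul(u, v):
    """Interval multiplication for positive intervals."""
    return (u[0] * v[0], u[1] * v[1])

def inv(u):
    """Standard interval inverse for a positive interval."""
    return (1.0 / u[1], 1.0 / u[0])

A = lambda r: tfn_parametric(2.0, 3.0, 4.0, r)

prod_at_0 = mul(A(0.0), inv(A(0.0)))   # support level: (0.5, 2.0), not (1, 1)
prod_at_1 = mul(A(1.0), inv(A(1.0)))   # core level: exactly (1, 1)
```

At r = 1 the product collapses to the crisp identity, but at lower cuts the product interval widens around 1, so A multiplied by its standard inverse is not the crisp number 1. This is precisely the gap the paper's midpoint construction is designed to close.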
<item>
<title><![CDATA[Inclusion Results of a Generalized Mittag-Leffler-Type Poisson Distribution in the k-Uniformly Janowski Starlike and the k-Janowski Convex Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12755]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Jamal Salah&nbsp; &nbsp;Hameed Ur Rehman&nbsp; &nbsp;and Iman Al Buwaiqi&nbsp; &nbsp;</p><p>Due to the Mittag-Leffler function's crucial contribution to solving fractional integral and differential equations, academics have begun to pay more attention to this function. The Mittag-Leffler function naturally appears in the solutions of fractional-order differential and integral equations, particularly in studies of fractional generalizations of kinetic equations, random walks, Levy flights, super-diffusive transport, and complex systems. For example, certain properties of the Mittag-Leffler functions and generalized Mittag-Leffler functions can be found in [4,5]. We consider an additional generalization in this study, <img src=image/13429345_01.gif>, given by Prabhakar [6,7]. We normalize the latter to deduce <img src=image/13429345_02.gif> in order to explore the inclusion results in well-known classes of analytic functions, namely <img src=image/13429345_03.gif> and <img src=image/13429345_04.gif>, the <img src=image/13429345_05.gif>-uniformly Janowski starlike and k-Janowski convex functions, respectively. Recently, research on the theory of univalent functions has emphasized the crucial role of implementing distributions of random variables such as the negative binomial distribution, the geometric distribution, and the hypergeometric distribution; in this study, the focus is on the Poisson distribution associated with the convolution (Hadamard product), which is applied to define and explore the inclusion results of the following: <img src=image/13429345_06.gif> and the integral operator <img src=image/13429345_07.gif>. Furthermore, some results for special cases will also be investigated.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Linear Stability of Double-sided Symmetric Thin Liquid Film by Integral-theory]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12754]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Ibrahim S. Hamad&nbsp; &nbsp;</p><p>The integral theory approach is used to explore the stability and dynamics of a free double-sided symmetric thin liquid film. For a Newtonian liquid with constant density and viscosity, the flow in a thinning liquid layer is analyzed in two dimensions. To construct the equations governing such a flow, the Navier-Stokes equations are used with proper boundary conditions of zero shear stress together with zero normal stress on the bounding free surfaces, in dimensionless variables. The resulting equations, a non-linear evolution system for the layer thickness, the local flow rate, and the unknown functions, are then analyzed by linear stability analysis, and the normal mode method is applied to these equations to reveal the critical condition. The characteristic equation for the growth rate and wave number is analyzed using MATLAB to show the regions of stable and unstable films. As a result of our research, we demonstrate that the free, double-sided thin liquid film is unstable.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Development of Nonparametric Structural Equation Modeling on Simulation Data Using Exponential Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12753]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2023<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;11&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Tamara Rezti Syafriana&nbsp; &nbsp;Solimun&nbsp; &nbsp;Ni Wayan Surya Wardhani&nbsp; &nbsp;Atiek Iriany&nbsp; &nbsp;and Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;</p><p>Objective: This study aims to develop nonparametric SEM analysis on simulation data using the exponential function. Methodology: This study uses simulation data, an experimental approach that imitates the behavior of a system using a computer with appropriate software. The study uses nonparametric structural equation modeling (SEM) analysis, with the exponential function as the functional form. Results: The results show that, with the simulation data, all relationships between the variables, which have both formative and reflective indicators, are significant. Testing the direct effect of Y2 on Y3 produces a structural coefficient of 0.255 with a p-value <0.001, which is significant. The structural coefficient is positive, indicating that the relationship between the two is positive: the higher Y2, the higher Y3. The measurement model gives a coefficient of determination of 0.91, meaning that 91% of the variability of the variables Y1, Y2, and Y3 can be explained by the variable X1, while the remaining 9% is explained by other variables not included in the model. Novelty: This study uses simulation data that is made highly complex in order to analyze several related system structures at one time, uses large amounts of data to approximate real conditions and obtain comprehensive results, and meets the criteria of nonparametric SEM analysis using the exponential function.</p>]]></description>
<pubDate>Jan 2023</pubDate>
</item>
<item>
<title><![CDATA[Construction of Rough Graph through Rough Membership Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12740]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>R. Aruna Devi&nbsp; &nbsp;and K. Anitha&nbsp; &nbsp;</p><p>The rough membership function defines the degree of relationship between the conditional and decision attributes of an information system. It is defined by <img src=image/13429145_01.gif>, where <img src=image/13429145_07.gif> is a subset of <img src=image/13429145_08.gif> under the relation <img src=image/13429145_09.gif>, and <img src=image/13429145_08.gif> is the universe of discourse. It can be expressed in different forms, such as cardinality form and probabilistic form. In cardinality form, it is expressed as <img src=image/13429145_02.gif>, whereas in probabilistic form it can be written as <img src=image/13429145_03.gif>, where <img src=image/13429145_04.gif> is the equivalence class of <img src=image/13429145_10.gif> with respect to <img src=image/13429145_09.gif>. This membership function is used to measure the degree of uncertainty. In this paper we introduce the concept of a graphical representation of rough sets. The rough graph was introduced by He Tong in 2006. We propose a novel method for the construction of a rough graph through the rough membership function <img src=image/13429145_05.gif>: there is an edge between two vertices if <img src=image/13429145_06.gif>. The rough graph is constructed for an information system, with the objects considered as vertices. The rough path, rough cycle, and rough ladder graphs are introduced in this paper. We develop operations on rough graphs and also extend their properties.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
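The cardinality form of the rough membership function described above can be computed directly from the equivalence classes of an information system. A toy sketch with a single conditional attribute (object names and data are hypothetical):

```python
from collections import defaultdict

# Toy information system: each object has one conditional attribute value.
attributes = {"o1": "a", "o2": "a", "o3": "b", "o4": "b", "o5": "b"}
X = {"o1", "o3", "o4"}   # target set, e.g. a decision class

# Equivalence classes of the indiscernibility relation R:
# objects with equal attribute values are indiscernible.
classes = defaultdict(set)
for obj, val in attributes.items():
    classes[val].add(obj)

def rough_membership(x):
    """mu_X(x) = |[x]_R intersect X| / |[x]_R|  (cardinality form)."""
    eq = classes[attributes[x]]
    return len(eq & X) / len(eq)

mu = {obj: rough_membership(obj) for obj in attributes}
```

Here the class {o1, o2} gives membership 1/2 and the class {o3, o4, o5} gives 2/3; in the paper's construction, such values would then decide which pairs of object-vertices are joined by an edge.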
<item>
<title><![CDATA[Central Automorphisms in n-abelian Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12739]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Rugare Kwashira&nbsp; &nbsp;</p><p>The study of Aut(G), the group of automorphisms of G, has been undertaken by various authors. One way to facilitate this study is to investigate the structure of Aut<sub>c</sub>(G), the subgroup of central automorphisms. For some classes of groups, algebraic properties like solvability, nilpotency, being abelian, and nilpotency relative to an automorphism can be deduced through the study of the subgroups Aut<sub>c</sub>(G) and Aut<sub>c∗</sub>(G), where Aut<sub>c∗</sub>(G) is the group of central automorphisms that fix Z(G) point-wise. For instance [6], if Aut<sub>c</sub>(G) = Aut(G), then G is nilpotent of class 2, and if G is f-nilpotent for <img src=image/13428941_01.gif> Aut<sub>c∗</sub>(G), then for a group G the notions of relative nilpotency and nilpotency coincide [8]. The group is abelian only if G is identity nilpotent [8]. For an arbitrary group G, the subgroups Aut<sub>c</sub>(G) and Aut<sub>c∗</sub>(G) can be trivial, but in the case when G is a p-group, Aut<sub>c</sub>(G) is non-trivial and the structure of Aut<sub>c∗</sub>(G) has been described [4]. The study of the influence of types of subgroups on the structure of G is a powerful technique; thus, one can investigate the influence of maximal invariant subgroups of G on the structure of Aut<sub>c∗</sub>(G). We shall consider a class of finite, non-commutative, n-abelian groups that are not necessarily p-groups. Here, n = 2l + 1 is a positive integer and l is an odd integer. The purpose of this paper is to explicitly describe the central automorphisms of G = G<sub>l</sub> that fix the center element-wise, and consequently the algebraic structure of Aut<sub>c∗</sub>(G). For this goal, we will study the invariant normal subgroups M of G such that <img src=image/13428941_02.gif> and M is maximal in G. It suffices to study Hom(G/M,Z(G)), the group of homomorphisms from the quotient G/M to the center Z(G). We explore the central automorphism group of pullbacks involving groups of the form G<sub>l</sub>. We extend our study to central automorphisms in the class of groups G<sub>l</sub> in which the mapping <img src=image/13428941_03.gif> is an automorphism. For such groups, Aut<sub>c∗</sub>(G) can be described through Hom(G/M,Z(G)), where M is a normal, maximal subgroup of G such that the quotient group G/M is abelian. We show that Hom<img src=image/13428941_04.gif> and that Aut<sub>c∗</sub>(G) is isomorphic to the cyclic group of order a prime p. The class of groups studied in our paper falls under a bigger class of groups with the special characterization that their non-normal subgroups are contranormal. The results of this paper can be generalized to this bigger class of groups.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Some Fixed Point Results in Bicomplex Valued Metric Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12738]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Duduka Venkatesh&nbsp; &nbsp;and V. Naga Raju&nbsp; &nbsp;</p><p>Fixed points are also called invariant points. Invariant point theorems are essential tools for solving problems arising in different branches of mathematical analysis. In the present paper, we establish three unique common invariant point theorems using two self-mappings, four self-mappings, and six self-mappings in the bicomplex valued metric space. In the first theorem, we prove a common invariant point theorem for four self-mappings using weaker conditions such as weak compatibility, a generalized contraction, and the <img src=image/13428832_01.gif> property. In the second theorem, we prove a common invariant point theorem for six self-mappings using an inclusion relation, a generalized contraction, weakly compatible maps, and commuting maps. Further, in the third theorem, we obtain a common coupled invariant point for two self-mappings using different contractions in the bicomplex valued metric space. These results extend and generalize the results of [11] to the bicomplex valued metric space. Moreover, we provide an example which supports the results.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[A Study on Intuitionistic Fuzzy Critical Path Problems Through Centroid Based Ranking Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12737]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>T. Yogashanthi&nbsp; &nbsp;Shakeela Sathish&nbsp; &nbsp;and K. Ganesan&nbsp; &nbsp;</p><p>In this study, an intuitionistic fuzzy version of the critical path method is proposed to solve networking problems with uncertain activity durations. An intuitionistic fuzzy set [1] is an extension of a fuzzy set [2]; unlike a fuzzy set, it captures the degree of membership, the degree of non-membership, and the degree of hesitancy, which helps the decision maker adopt the best among the worst cases. Trapezoidal and triangular intuitionistic fuzzy numbers are utilized to describe the uncertain activity or task durations of the project network. These numbers are converted into their corresponding parametric forms; by applying the proposed intuitionistic fuzzy arithmetic operations and a new ranking method based on the parametric form of intuitionistic fuzzy numbers, the intuitionistic fuzzy critical path, together with a vagueness-reduced intuitionistic fuzzy completion duration of the project, is obtained. The validity of the proposed method is checked by comparing the obtained results with those available in the literature.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Transparency Order and Cross-Correlation Analysis of Boolean Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12736]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Mayasar Ahmad Dar&nbsp; &nbsp;Hiral Raja&nbsp; &nbsp;Afshan Butt&nbsp; &nbsp;and Deepmala Sharma&nbsp; &nbsp;</p><p>Transparency order is a cryptographically significant property that characterizes the resistance of S-boxes to differential power analysis attacks: an S-box with low transparency order is more resistant to these attacks. Until now, few attempts have been made to examine theoretically the transparency order and its relationship with other cryptographic properties; all constructions associated with transparency order rely on search algorithms. In this paper, we discuss a new interpretation of bent functions in terms of their transparency order. Using the concept of vector concatenation and correlation characteristics, we find the transparency order of Boolean functions. The notion of complementary transparency order is given; for a pair of Boolean functions, we interpret complementary transparency order through their Walsh-Hadamard transforms. We establish a relationship between transparency order and cross-correlation for a pair of Boolean functions, and a relationship between transparency order and <img src=image/13428361_01.gif>−variable decomposition bent functions. We generalize the bounds on the sum-of-squares of autocorrelation in terms of the transparency order of Boolean functions using Walsh-Hadamard spectra. Further, the transparency order of a function fulfilling the propagation criterion with respect to a linear subspace is evaluated.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Asset Allocation in Indonesian Stocks Using Portfolio Robust]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12735]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Abdurakhman&nbsp; &nbsp;</p><p>The mean-variance portfolio has several weaknesses. It does not accommodate parameter uncertainty, tends to be sensitive to changes in parameter input, and tends to be unreliable under extreme observations. Moreover, it cannot accommodate changes in investor preferences in light of the evidence of abnormal and asymmetric asset return distributions. To overcome these three weaknesses, we can use the robust mean-variance portfolio, which is based on parameter uncertainty. However, the robust mean-variance portfolio does not include skewness in its optimization. Therefore, in this paper, we use the robust mean-variance-skewness portfolio, which includes skewness in its optimization, so it can be used when the return data are asymmetrically skewed and contain extreme values. An empirical study of robust mean-variance and robust mean-variance-skewness portfolios has been conducted on four banking stocks in Indonesia, i.e., AGRS.JK, BTPN.JK, BBNI.JK, and BBCA.JK. The data used in this study are the daily closing prices of the companies' stocks for the period January 2, 2020 – January 2, 2022 (489 days), obtained from Yahoo! Finance. From the results of the data analysis, it can be concluded that the variance still plays an important role in determining the allocation weights of a portfolio, while a large value of skewness leads to the allocation of the same weight for each stock in a portfolio.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Maximum Likelihood Estimation in the Inverse Weibull Distribution with Type II Censored Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12734]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Fatima A. Alshaikh&nbsp; &nbsp;and Ayman Baklizi&nbsp; &nbsp;</p><p>We consider maximum likelihood estimation for the parameters, and certain functions of the parameters, of the Inverse Weibull (IW) distribution based on type II censored data. The functions under consideration are the Mean Residual Life (MRL), which is very important in reliability studies, and the Tail Value at Risk (TVaR), an important measure of risk in actuarial studies. We investigated the performance of the MLEs of the parameters and derived functions under various experimental conditions using simulation techniques. The performance criteria are the bias and the mean squared error of the estimators. Recommendations on the use of the MLE in this model are given. We found that the parameter estimators are almost unbiased, while the MRL and TVaR estimators are asymptotically unbiased. Moreover, the mean squared error of all estimators decreased for larger sample sizes and increased when the censoring proportion was increased for a fixed sample size. The conclusion is that the maximum likelihood method works well for the parameters and for derived functions of the parameters such as the MRL and TVaR. Two examples on real data sets are presented to illustrate the application of the methods used in this paper: the first concerns survival times of pigs, while the other concerns fire losses.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Estimation of Nonparametric Path Fourier Series and Truncated Spline Ensemble Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12733]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Atiek Iriany&nbsp; &nbsp;and Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;</p><p>To ascertain whether there is a causal connection between exogenous and endogenous factors, one method is to perform path analysis. The shape of the model is determined by the linearity assumption: path analysis is parametric if the linearity assumption holds, whereas nonparametric path analysis is used if the non-linear form is unknown and there is no knowledge of the data pattern. This study's goal was to estimate the nonparametric path function using a combination of truncated spline and Fourier series methods. The findings demonstrated that the Fourier series and truncated spline can be employed in nonparametric path analysis only in cases where the linearity assumption is violated. Then, using the Ordinary Least Squares (OLS) approach, the estimator of nonparametric-regression-based path analysis was obtained, delivering an estimation result that is not unique because it makes use of a nonparametric approach. This paper can serve as reference material, especially for analysis in statistics, and it is hoped that it can be applied in various fields. Further research can develop this work with other models.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Signal Modeling with IG Noise and Parameter Estimation Based on RJMCMC]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12732]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Akhmad Fauzy&nbsp; &nbsp;Suparman&nbsp; &nbsp;and Epha Diana Supandi&nbsp; &nbsp;</p><p>Piecewise constant (PC) is a stochastic model that can be applied in various fields such as engineering and ecology. The stochastic model contains a noise term, and the accuracy of the stochastic model in modeling a signal is influenced by the type of noise. This paper proposes inverse-gamma noise in the PC model and a procedure for estimating the model parameters. The model parameters are estimated using the Bayesian approach. Because the model parameters have a variable-dimension space, the Bayesian estimator cannot be determined analytically; it is therefore calculated using the reversible jump Markov Chain Monte Carlo (RJMCMC) algorithm. The performance of the RJMCMC algorithm is validated using synthetic data. The finding is a new PC model in which the noise has an inverse-gamma distribution. In addition, this paper also proposes a parameter estimation procedure for the model based on RJMCMC. The simulation study shows that the model parameter estimators generated by this algorithm are close to the true parameter values. This paper concludes that inverse-gamma noise can be used as an alternative noise in the PC model, and that RJMCMC is a valid algorithm that can estimate the PC model parameters when the noise has an inverse-gamma distribution. The novelty of this paper is the development of a new stochastic model and a procedure for estimating its parameters. In application, the findings have the potential to improve the fit of the stochastic model to the signal.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Ruin Probability for Some Mixed Linear Exponential Family in Classical Risk Process]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12604]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Khanchit Chuarkham&nbsp; &nbsp;Arthit Intarasit&nbsp; &nbsp;and Pakwan Riyapan&nbsp; &nbsp;</p><p>This article presents the probability of ruin for the classical risk process in which the density function of claims satisfies a mixed linear exponential family. This can be defined as <img src=image/13428362_01.gif>, where <img src=image/13428362_02.gif>, <img src=image/13428362_03.gif>, <img src=image/13428362_20.gif> is a positive integer with <img src=image/13428362_04.gif> with <img src=image/13428362_05.gif>, <img src=image/13428362_06.gif>, <img src=image/13428362_07.gif>, and <img src=image/13428362_08.gif> is the canonical parameter. The main results are as follows: the ordinary differential equation for the probability of ruin in the general case, derived using the chain rule and mathematical induction, is given in Theorem 2.2; the ordinary differential equation for some mixed linear exponential family when <img src=image/13428362_09.gif>, <img src=image/13428362_10.gif>, <img src=image/13428362_11.gif>, <img src=image/13428362_12.gif>, <img src=image/13428362_13.gif>, <img src=image/13428362_14.gif> is demonstrated in Theorem 2.3; and an explicit solution for the probability of ruin when the mixed linear exponential family satisfies the conditions <img src=image/13428362_15.gif>, <img src=image/13428362_16.gif>, <img src=image/13428362_10.gif> with <img src=image/13428362_17.gif>, and <img src=image/13428362_13.gif> is indicated in Theorem 2.4. 
Finally, we use MATLAB to generate numerical simulations for the probability of ruin in a risk process in which the number of claims follows a Poisson process and the density function of claims satisfies a mixed linear exponential family and a gamma distribution under the conditions of Theorem 2.4 with the parameters <img src=image/13428362_18.gif>=1 and <img src=image/13428362_19.gif>=0.2. The numerical results reveal that the relative frequency of ruin and the ruin probability satisfy the Lundberg inequality, which is a necessary condition for the ruin probability. In addition, the absolute values of their differences are small, confirming that the main results are correct.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Bipolar Soft Limit Points in Bipolar Soft Generalized Topological Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12603]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Hind Y. Saleh&nbsp; &nbsp;Baravan A. Asaad&nbsp; &nbsp;and Ramadhan A. Mohammed&nbsp; &nbsp;</p><p>The concept of soft set theory can be used as a mathematical tool for dealing with problems that contain uncertainty. A new mixed mathematical model called the bipolar soft set is created by merging soft sets and bipolarity, which gives the concept of a binary model of grading. A bipolar soft set is characterized by two soft sets, one of which provides positive information and the other negative. Bipolar soft generalized topology is a generalization of bipolar soft topology. The importance of limit points in all branches of mathematics cannot be ignored; the limit point is one of the most significant and fundamental concepts in topology. On this basis, the concept of the derived set is required to establish and develop several properties. Accordingly, the limit point in bipolar soft generalized topology is defined. In this paper, we present the notion of bipolar soft generalized limit points and explain the relation between the bipolar soft generalized derived set and the bipolar soft generalized closure. In addition, we discuss some structures of a bipolar soft generalized topological space, such as the <img src=image/13428354_01.gif>-interior point, <img src=image/13428354_01.gif>-exterior point, <img src=image/13428354_01.gif>-boundary point, <img src=image/13428354_01.gif>-neighborhood point, and basis on <img src=image/13428354_02.gif>. Finally, we give comparisons among these concepts of bipolar soft generalized topological spaces (<img src=image/13428354_02.gif>) by using bipolar soft points (<img src=image/13428354_03.gif>). Each concept introduced in this paper is explained with clear examples.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Nonparametric REML-like Estimation in Linear Mixed Models with Uncorrelated Homoscedastic Errors]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12602]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>E.-P. Ndong Nguéma&nbsp; &nbsp;and Betrand Fesuh Nono&nbsp; &nbsp;</p><p>Restricted Maximum Likelihood (REML) is nowadays the most recommended approach for fitting a Linear Mixed Model (LMM). Yet, like ML, REML suffers from the drawback that it performs such a fitting by assuming normality for both the random effects and the residual errors, a dubious assumption for many real data sets. There have been several attempts at justifying the use of the REML likelihood equations outside of the Gaussian world, with varying degrees of success. Recently, a new fitting methodology, code-named 3S, was presented for LMMs with the only added assumption (to the basic ones) that the residual errors are uncorrelated and homoscedastic. Specifically, the 3S-A1 variant was designed and then shown, for Gaussian LMMs, to differ only slightly from ML estimation. In this article, using the same 3S framework, we develop another iterative nonparametric estimation methodology, code-named 3S-A1.RE, for the kind of LMMs just mentioned. We show that if the LMM is, indeed, Gaussian with i.i.d. residual errors, then the set of estimating equations defining any 3S-A1.RE iterative procedure is equivalent to the set of REML equations, while additionally enforcing the nonnegativity constraints on all variance estimates, as well as positive semi-definiteness of all covariance matrices. In numerical tests on some simulated and real-world clustered and longitudinal data sets, our new methods proved highly competitive when compared to the traditional REML in the R statistical software.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Enacting Alternating Least Square Algorithm to Estimate Model Fit of Sem Generalized Structured Component Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12601]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Cylvia Nissa Steffani&nbsp; &nbsp;and Gunardi&nbsp; &nbsp;</p><p>Structural Equation Modeling (SEM) is a statistical modeling technique that combines three methods, namely factor analysis, path analysis, and regression analysis, to test a theoretical model in social science, psychology, and management. Covariance-based SEM is a parametric SEM that must meet several parametric assumptions, such as multivariate normally distributed data, large sample sizes, and independent observations; variance-based SEM, namely the Generalized Structured Component Analysis (GSCA) method, was developed to overcome these limitations of covariance SEM. This study aims to apply the GSCA method to data on factors that are expected to affect the level of behavioral intention towards online food delivery services, and to examine the significance of the mediating variable in the structural relationship. The results of hypothesis testing at the 95% confidence level showed that the quality of convenience motivation, prior online purchase experience, and attitude towards online food delivery services had a significant effect on behavioral intentions towards online food delivery services. The fit value of 0.523 indicates that the model is able to explain around 52.3% of the variation in the data. Furthermore, the hedonic motivation variable has a significant effect on convenience motivation, and the post-usage usefulness and prior online purchase experience variables significantly affected attitudes towards online food delivery services. The proposed model using GSCA achieves a much better result (good fit) compared with the previous model using Confirmatory Factor Analysis (CFA), which achieved only a marginal fit.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Iterative Algorithms for Solving the Partial Eigenvalue Problem for Symmetric Interval Matrixes]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12600]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Alimzhan A. Ibragimov&nbsp; &nbsp;and Dilafruz N. Khamroeva&nbsp; &nbsp;</p><p>In this paper, we consider iterative methods for solving a partial eigenvalue problem for real symmetric interval matrices. Such matrices arise in modeling many technical problems in which much of the data is subject to limited variation or uncertainty. In modeling many applied problems, when some parameter values fluctuate with a known amplitude, it is advisable to use interval methods. The algorithms we propose are built on the basis of the power method and its modification, the so-called method of scalar products, for solving a partial eigenvalue problem for an interval symmetric matrix. These methods have not yet been studied in detail and have not been justified for interval matrices. In the developed algorithms, boundary matrices are first determined by the Deif theorem, and then a partial eigenvalue problem is solved. We also study the convergence of the power method for the boundary matrices of a given interval symmetric matrix. The results of the computational experiments show that the interval eigenvalues obtained by the proposed algorithms are in good agreement with the results obtained by other researchers, and in some cases are even better. The numerical results are compared by the number of iterations and the width of the interval solution.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Binomial-Geometric Mixture and Its Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12599]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Hussein Eledum&nbsp; &nbsp;and Alaa R. El-Alosey&nbsp; &nbsp;</p><p>A mixture distribution is a combination of two or more probability distributions; it can be obtained from different distribution families or from the same family with different parameters. The underlying distributions may be discrete or continuous, so the resulting mixture probability function may be a mass or a density function. In the last few years, there has been great interest in developing mixture distributions based on the binomial distribution. This paper uses the probability generating function method to develop a new two-parameter discrete distribution called the binomial-geometric (BG) distribution, a mixture of a binomial distribution whose number of trials (parameter <img src=image/13428702_01.gif>) follows a geometric distribution. The quantile function, moments, moment generating function, Shannon entropy, order statistics, stress-strength reliability, and simulation of random samples are some of the statistical properties of the BG distribution that are explored. The model's parameters are estimated using the maximum likelihood method. To examine the accuracy of the point estimates of the BG distribution parameters, a Monte Carlo simulation is performed under different scenarios. Finally, the BG distribution is fitted to two real lifetime count data sets from the medical field. The proposed BG distribution is overdispersed, right-skewed, and can accommodate a constant hazard rate function. It is appropriate for modelling overdispersed, right-skewed real-life count data sets and can serve as an alternative to the negative binomial and geometric distributions.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[On the Generalized Quadratic-Quartic Cauchy Functional Equation and its Stability over Non-Archimedean Normed Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12598]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>A. Ramachandran&nbsp; &nbsp;and S. Sangeetha&nbsp; &nbsp;</p><p>Functional equations play an important and interesting role in mathematics: they involve simple algebraic manipulations through which one can arrive at interesting solutions. The theory of functional equations is used in the development of other areas such as analysis, algebra, and geometry, and its methods and techniques are applied to problems in information theory, finance, geometry, and wireless sensor networks. In recent decades, various types of stability of functional equations, such as HUS (Hyers-Ulam stability), HURS (Hyers-Ulam-Rassias stability), and generalized HUS, have been discussed by many authors for different types of functional equations, including mixed types, in various spaces. The stability problem for different functional equations has been widely studied, and many interesting results have been proved in the classical (Archimedean) case. In recent years, analogous results for the stability of these functional equations have been investigated in non-Archimedean spaces. The aim of this study is to investigate the HUS of a mixed-type general Quadratic-Quartic Cauchy functional equation in a non-Archimedean normed space. In this article, we prove the generalized HUS for the following Quadratic-Quartic Cauchy functional equation over a non-Archimedean normed space.<img src=image/13428783_01.gif></p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Step, Ramp, Delta, and Differentiable Activation Functions Obtained Using Percolation Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12597]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>David S. McLachlan&nbsp; &nbsp;and Godfrey Sauti&nbsp; &nbsp;</p><p>This paper presents two new analytical equations, the Two Exponent Phenomenological Percolation Equation (TEPPE) and the Single Exponent Phenomenological Percolation Equation (SEPPE) which, for the proper choice of parameters, approximate the widely used Heaviside Step Function. The plots of the equations presented in the figures in this paper show some, but by no means all, of the step, ramp, delta, and differentiable activation functions that can be obtained using the percolation equations. By adjusting the parameters these equations can give linear, concave, and convex ramp functions, which are basic signals in systems used in engineering and management. The equations are also Analytic Activation Functions, the form or nature of which can be varied by changing the parameters. Differentiating these functions gives delta functions, the height and width of which depend on the parameters used. The TEPPE and SEPPE and their derivatives are presented in terms of the conductivity (<img src=image/13428023_01.gif>) owing to their original use in describing the electrical properties of binary composites, but are applicable to other percolative phenomena. The plots in the figures presented are used to show the response <img src=image/13428023_02.gif> (composite conductivity) for the parameters <img src=image/13428023_03.gif> (higher conductivity component of the composite), <img src=image/13428023_04.gif> (lower conductivity component of the composite) and <img src=image/13428023_05.gif>, the volume fraction of the higher conductivity component in the composite. 
The additional parameters are the critical volume fraction, <img src=image/13428023_06.gif>, which determines the position of the step or delta function on the <img src=image/13428023_07.gif> axis and one or two exponents <img src=image/13428023_08.gif>, and <img src=image/13428023_09.gif>.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Modified Mathematical Models in Biology by the Means of Caputo Derivative of a Function with Respect to Another Exponential Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12596]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Jamal Salah&nbsp; &nbsp;Maryam Al Hashmi&nbsp; &nbsp;Hameed Ur Rehman&nbsp; &nbsp;and Khaled Al Mashrafi&nbsp; &nbsp;</p><p>In this article, the researcher considered some well-known mathematical models of ordinary differential equations applied in biology such as the bacterial growth, the natural FC solution models for vegetables, the biological phospholipids pathway, the glucose absorption by the body and the spread of epidemics. The ordinary differential equations for each model are fractionalized by the means of Caputo derivative of a function with respect to certain exponential function. In each model, we embed the concept fractionalization associated with a chosen exponential function in order to modify the given model. Consequently, various propositions are evoked by hypothetically allowing some modifications in several mathematical models of biology. The results are further visualized by providing the graphs of Mittag-Leffler function on various parameters. The graphs' analysis explored the behavior of the solution for every modified model. In this study, the solutions of the modified models are all of the Mittag–Leffler form while all original models are solved by the means of exponential function. Slight changes of the behavior of the solutions are due to the assumptions and the change of parameters.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Bootstrap-t Confidence Interval on Local Polynomial Regression Prediction]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12595]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Abil Mansyur&nbsp; &nbsp;and Elmanani Simamora&nbsp; &nbsp;</p><p>In local polynomial regression, prediction confidence interval estimation using standard theory gives coverage probability close to the exact coverage probability. However, if the normality assumption is not met, the bootstrap method makes it possible to proceed. The bootstrap works by resampling: the sample data is treated as the population, and there is no need to know whether the distribution of the sample data is normal. Indiscriminate selection of the smoothing parameters can make the scatterplot results from local polynomial regression rough and can even lead to misleading statistical conclusions; it is necessary to choose optimal smoothing parameters to obtain local polynomial regression predictions that are neither overfitting nor underfitting. We offer two new algorithms based on the nested bootstrap resampling method to determine the bootstrap-t confidence interval in predicting local polynomial regression. Both algorithms incorporate the search for optimal smoothing parameters. The first algorithm draws paired and residual bootstrap samples, while the second draws bootstrap samples based on residuals only. The first algorithm provides a scatterplot and reasonable coverage probability for relatively large sample data. In contrast, the second algorithm is more powerful for every data size, including relatively small sample sizes. The mean of the bootstrap-t confidence interval coverage probability shows that the second algorithm for second-degree local polynomial regression is better than the other three. 
However, the larger the sample data size gives, the closer the closer the average coverage probability of the two algorithms is to the nominal coverage probability.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Parameter Estimation for Weibull Burr Type X Model with Right Censored Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12594]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Amna R. Ashour&nbsp; &nbsp;Noor A. Ibrahim&nbsp; &nbsp;Mundher A. Khaleel&nbsp; &nbsp;and Pelumi E. Oguntunde&nbsp; &nbsp;</p><p>Past studies have considered generalizing statistical distributions, aiming to make them more flexible and better suited to describing real-world phenomena. In this study, we explore the Weibull Burr Type X distribution, which extends the Burr Type X distribution using the Weibull generator. In particular, the performance of the maximum likelihood estimators of its parameters under right censoring was examined and compared. A Monte Carlo simulation study was used to compare the estimators' bias and root mean square error across varying sample sizes and censoring percentages. We illustrated the usefulness and potential of the Weibull Burr Type X distribution on a right-censored dataset and compared the fit of this model with that of its sub-models on a real-world dataset. The results showed that the Weibull Burr Type X distribution provides a better fit than the other competing models, indicating that the distribution is flexible and competitive. The Weibull Burr Type X distribution exhibits unimodal and decreasing shapes; the extra parameter varies the model's tail weight and introduces skewness. We introduce this model as an alternative to existing models for modelling right-censored data in various research fields and areas of study.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Qualitative Analysis of Food-Web Model through Diffusion-Driven Instability]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12593]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Chetan Swarup&nbsp; &nbsp;</p><p>Many food webs exist in the ecosystem, and their survival depends directly on the growth rate of the primary prey, which balances the entire ecosystem. The spatiotemporal dynamics of a three-species food web are proposed and analyzed in this paper, where the intermediate predator's predation term follows Holling Type IV and the top predator's follows Holling Type II. To begin, we examine the system's stability using linear stability analysis: we first obtain the set of equilibrium solutions and then use the Jacobian method to investigate stability at the biologically feasible equilibrium point. We investigate the random movement of species in the presence of diffusion, establish conditions for system stability, and derive the Turing instability condition. The Turing instability condition for the spatial food-web system is then calculated. Finally, numerical simulations are used to validate the findings. We discovered several intriguing spatial patterns (spot, stripe, and mixed patterns) that help us understand the dynamics of real-world food webs. The Turing instability analysis applied to this complex food-web system is therefore especially relevant experimentally, as the associated consequences can be researched and applied to a wide range of mathematical, ecological, and biological models.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Mathematical Model for 100- and 200-meter Olympic Games Running Championship Time Records]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12592]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Agung Prabowo&nbsp; &nbsp;and Ngadiman&nbsp; &nbsp;</p><p>The 100- and 200-meter running championships for both males and females were first held at the 1948 London Olympic Games. The time records set by the champions have been continuously improved, so running championships are intended not only to win gold medals but also to set new world records. Secondary data in the form of the championships' time records were used to formulate the mathematical models and determine the minimum (fastest) time limits. This research used the time records of the male and female 100- and 200-meter gold medallists from the Olympic Games held from 1948 to 2020. The mathematical model for the men's 100-meter championship was more appropriately formulated using a logarithmic regression equation, whereas the time records for the women's 100 meters and for both the men's and women's 200 meters were modelled with simple linear regression. The Olympic record for the men's 100 meters still belongs to Usain Bolt (9.63 seconds). Under the assumption that the time records are normally distributed, this record could be improved to 9.53 seconds; an analysis using a box-plot diagram suggests the fastest possible time is 9.42 seconds. Similar conclusions were obtained for the women's 100 meters and the men's and women's 200 meters: the currently held records can still be broken in the future.</p>]]></description>
<pubDate>Nov 2022</pubDate>
</item>
<item>
<title><![CDATA[Geographically Weighted Negative Binomial Regression Modeling using Adaptive Kernel on the Number of Maternal Deaths during Childbirth]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12557]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Fahimah Fauwziyah&nbsp; &nbsp;Suci Astutik&nbsp; &nbsp;and Henny Pramoedyo&nbsp; &nbsp;</p><p>The standard model for count data is Poisson regression. In practice, however, most count data are overdispersed, meaning that the response variable has greater variance than its mean; Poisson regression then cannot be used, because overdispersion leads to inaccurate parameter estimators. One of the most widely used methods to overcome overdispersion is Negative Binomial regression. If spatial effects such as spatial heterogeneity are incorporated into the Negative Binomial model, the appropriate method of analysis is Geographically Weighted Negative Binomial Regression (GWNBR). The GWNBR model requires a spatial weighting matrix. In this study, three weighting functions were used, namely the Adaptive Gaussian Kernel, the Adaptive Bisquare Kernel, and the Adaptive Tricube Kernel. A model was formed with each of the three weighting functions, and the best model was selected based on the smallest AIC. The count data used in this study are maternal deaths during childbirth in West Java Province, which has the highest number of such cases in Indonesia. The results of the analysis show that, based on the smallest AIC, the best model of maternal deaths during childbirth in West Java is the GWNBR model with the Adaptive Gaussian Kernel weight. The best model yielded three groups of regions based on the predictor variables that had a significant effect.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Characterization of a Class of Generalised Core-satellite Graphs Using Average Degree]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12556]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Malathy V&nbsp; &nbsp;and Kalyani Desikan&nbsp; &nbsp;</p><p>Network equilibrium models differ significantly across supply chain networks, traffic networks, and e-waste flow networks. The idea of network equilibrium is strongly perceived when determining the tuner sets of a graph (network). Tuner sets are subsets of vertices of a graph G whose degrees are lower than the average degree d(G) of G and that can compensate for, or balance, the presence of vertices whose degrees are greater than d(G). The generalised core-satellite graph <img src=image/13428150_01.gif> comprises <img src=image/13428150_02.gif> copies of <img src=image/13428150_03.gif> (the satellites) meeting in K<sub>c</sub> (the core), and it belongs to the family of graphs of diameter two. It has a central core of vertices connected to a few satellites, where the satellite cliques need not be identical and can be of different sizes. Properties such as the hierarchical structure of large real-world networks are competently modeled using core-satellite graphs [1, 2, 5]. This family of graphs exhibits properties similar to scale-free networks, as it possesses anomalous vertex connectivity in which a small fraction of vertices (the core) is densely connected. Because these graphs possess such a structural property, interesting results are obtained when their tuner sets are determined. In this paper, we consider the graph <img src=image/13428150_04.gif>, with p > q, a subclass of the generalized core-satellite graph that is the join of η copies of the clique K<sub>q</sub> and γ copies of the clique K<sub>p</sub> with the core K<sub>1</sub>. We obtain the tuner set for this subclass and establish the relation between the Top T(G) and the cardinality of the tuner set <img src=image/13428150_05.gif> through necessary and sufficient conditions. We analyze and characterize these graphs and obtain some interesting results while examining the existence of tuner sets.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Anti-hesitant Fuzzy Subalgebras, Ideals and Deductive Systems of Hilbert Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12555]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Aiyared Iampan&nbsp; &nbsp;S. Yamunadevi&nbsp; &nbsp;P. Maragatha Meenakshi&nbsp; &nbsp;and N. Rajesh&nbsp; &nbsp;</p><p>The Hilbert algebra, one of several algebraic structures, was first described by Diego in 1966 [7] and has since been extensively studied by other mathematicians. Torra [18] was the first to suggest the idea of hesitant fuzzy sets (HFSs) in 2010: an HFS, a generalization of the fuzzy sets defined by Zadeh [20] in 1965, is a function from a reference set to the power set of the unit interval. The significance of hesitant fuzzy subalgebras, ideals, and filters in the study of various logical algebras aroused our interest in applying these concepts to Hilbert algebras. In this paper, the concepts of HFSs are applied to subalgebras (SAs), ideals (IDs), and deductive systems (DSs) of Hilbert algebras in terms of anti-types. We call them anti-hesitant fuzzy subalgebras (AHFSAs), anti-hesitant fuzzy ideals (AHFIDs), and anti-hesitant fuzzy deductive systems (AHFDSs). The relationships between AHFSAs, AHFIDs, and AHFDSs and their lower and strong level subsets are provided. As a result of the study, we found the following generalization: every AHFID of a Hilbert algebra Ω is both an AHFSA and an AHFDS of Ω. We also study and find the conditions for the complement of an HFS to be an AHFSA, an AHFID, and an AHFDS. In addition, the relationships between the complements of AHFSAs, AHFIDs, and AHFDSs and their upper and strong level subsets are also provided.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[On a Weak Solution of a Fractional-order Temporal Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12554]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Iqbal M. Batiha&nbsp; &nbsp;Zainouba Chebana&nbsp; &nbsp;Taki-Eddine Oussaeif&nbsp; &nbsp;Adel Ouannas&nbsp; &nbsp;and Iqbal H. Jebril&nbsp; &nbsp;</p><p>Several real-world phenomena emerging in engineering and science can be described successfully by models built from fractional-order partial differential equations. The exact, analytical, semi-analytical or even numerical solutions of these models should be examined and investigated by distinguishing between their solvability and non-solvability. In this paper, we aim to establish sufficient conditions for the existence and uniqueness of solutions for a class of initial-boundary value problems with a Dirichlet condition. The results of this research paper are established for this class of fractional-order partial differential equations by a method based on the Lax–Milgram theorem, whose construction relies on properties of the symmetric part of the bilinear form. The Lax–Milgram theorem is a mathematical scheme that can be used to examine the existence and uniqueness of weak solutions of fractional-order partial differential equations. The equations are formulated here in terms of the Caputo fractional-order derivative operator, whose inverse is the Riemann–Liouville fractional-order integral operator. The results of this paper will be supportive for mathematical analysts and researchers when a fractional-order partial differential equation is handled in terms of finding its exact, analytical, semi-analytical or numerical solution.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Nano <img src=image/13428437_01.gif>-connectedness and Strongly Nano <img src=image/13428437_01.gif>-connectedness in Nano Topological Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12553]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>S. Gunavathy&nbsp; &nbsp;R. Alagar&nbsp; &nbsp;Aiyared Iampan&nbsp; &nbsp;and Vediyappan Govindan&nbsp; &nbsp;</p><p>This article&apos;s goals are to propose a new class of spaces termed &quot;nano-ideal topological spaces&quot; and to examine how they relate to conventional topological spaces. To determine their relationships in these spaces, we construct certain closed sets and provide their fundamental characteristics and properties. Additionally, we investigate two notions of ideal connectivity in nano topological spaces: in particular, we define <img src=image/13428437_02.gif>-connectedness and strong <img src=image/13428437_02.gif>-connectedness of nano topological spaces in terms of an arbitrary ideal <img src=image/13428437_02.gif> and obtain certain features of such spaces. This study also illustrates a novel kind of nano topological space called the nano-<img src=image/13428437_03.gif>-topological space, defines the relationships between the various classes of open sets, and discusses how they may be characterised; some of their characterizations are then established. The lower and upper approximations are used to define nano topological spaces, and nano <img src=image/13428437_03.gif>-open sets, nano semi-open sets, and nano pre-open sets were also introduced as weak variants of nano open sets. Continuity, the fundamental notion of topology, is also introduced in nano topological spaces. Moreover, we introduce the notion of nano <img src=image/13428437_04.gif>-continuity between nano topological spaces and investigate several properties of this type of near-nano continuity. Finally, we present two examples as applications in nano topological spaces.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Finite Domination Type for Monoid Presentations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12552]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Elton Pasku&nbsp; &nbsp;and Anjeza Krakulli&nbsp; &nbsp;</p><p>In [5], Squier, Otto and Kobayashi explored a homotopical property of monoids called finite derivation type (FDT) and proved that FDT is a necessary condition for a finitely presented monoid to have a finite canonical presentation. In a later development [2], Kobayashi proved that the property <img src=image/13428295_01.gif> is equivalent to what is called in [2] finite domination type. It was indicated at the end of [2] that there are <img src=image/13428295_01.gif> monoids that are not even finitely generated and, as a consequence, are not of FDT. This indication inspired us to look for a property of monoids that encapsulates both FDT and finite domination type. This is realized in the current paper by extending the notion of finite domination from monoids to rewriting systems. To achieve this, we build on the approach of Isbell [1], who defined the notion of the dominion of a subcategory <img src=image/13428295_02.gif> of a category <img src=image/13428295_03.gif> and characterized that dominion in terms of zigzags in <img src=image/13428295_03.gif> over <img src=image/13428295_02.gif>. 
The reason we follow this approach is that to every rewriting system <img src=image/13428295_04.gif> which gives a monoid <img src=image/13428295_06.gif>, there is always an associated category <img src=image/13428295_05.gif> that contains three types of information at the same time: (i) all the possible ways in which the elements of <img src=image/13428295_06.gif> are written as words with letters from <img src=image/13428295_07.gif>; (ii) all the possible ways one can transform a word with letters from <img src=image/13428295_07.gif> into another one representing the same element of <img src=image/13428295_06.gif> by using rewriting rules from <img src=image/13428295_08.gif>, each such way being in fact a path in the reduction graph of <img src=image/13428295_04.gif>; and (iii) all the possible ways that two parallel paths of the reduction graph are linked to each other by a series of compositions of whiskerings of other parallel paths. The category <img src=image/13428295_05.gif> turns out to have the advantage that it can &quot;measure&quot; the extent to which a set <img src=image/13428295_09.gif> of parallel paths is sufficient to express any pair of parallel paths by composing whiskers from <img src=image/13428295_09.gif>. The gadget used to measure this is the Isbell dominion of the whisker category <img src=image/13428295_10.gif> generated by <img src=image/13428295_09.gif> over <img src=image/13428295_05.gif>. We then define the monoid <img src=image/13428295_06.gif> given by <img src=image/13428295_04.gif> to be of finite domination type (FDOT) if both <img src=image/13428295_07.gif> and <img src=image/13428295_08.gif> are finite and there is a finite set <img src=image/13428295_09.gif> of morphisms such that <img src=image/13428295_11.gif> is exactly <img src=image/13428295_05.gif>. 
The first main result of our paper is that, like FDT, FDOT is an invariant of the monoid presentation; the second is that FDT implies FDOT, while it remains open whether the converse is true. The importance of FDOT lies in the fact that it not only generalizes FDT, but is defined in a way that has much in common with <img src=image/13428295_01.gif>, giving hope that FDOT is the right tool to put FDT and <img src=image/13428295_01.gif> into the same framework.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[An Approach to the Evaluation of the Overall Performance with Efficiency Measurements by Means of an Efficiency Evaluation Chain Using DEA and Fuzzy DEA]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12551]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Blerta (Kristo) Nazarko&nbsp; &nbsp;and Ditila Ekmekçiu&nbsp; &nbsp;</p><p>In this paper, we give an approach to performance evaluation and efficiency measurement in which the gathered real data form a "time series" over a certain period of time (t). Using the basic models of the DEA (Data Envelopment Analysis) method jointly with Fuzzy DEA models, the impact of the variable factors on the DMUs' (Decision Making Units) inefficiencies, performance evaluation and ranking is studied by treating the period t both as a discrete variable and as a unique "moment". This is determined by the objectives given by the CSAM (Connoisseur-study-analysis model) viewpoint, in order to form the most realistic tableau of the DMUs' performance evaluation by means of an efficiency evaluation chain. For the evaluation of performance as a competitive process, 17 countries from the macroeconomic and financial environment are included (countries from the Western Balkans, the EU, and other countries). The study, a knowledge analysis, is also developed as an economic optimization portfolio problem in two steps. First step (time as a discrete variable): evaluation of the DMUs' performance by measuring and analysing the efficiency value in the economic environment, with defined goals and criteria, where the concept of the "deviation" elasticity coefficient (the difference between Ef-VRS and Ef-CRS) is also included at the efficiency levels for the inefficient DMUs during the period t. Second step (the period of time as a unique "moment"): operated using the Fuzzy DEA model approach (α-cut) and the coordination of both steps with the DMUs' performance ranking. 
In addition, the study investigates the impact of the correlative relations between the variables on the DMUs' efficiency values during the evaluation of their performance. The results gained through the methodology followed in this study give a more realistic tableau in the study of performance evaluation and the connoisseur analysis of the DMUs' efficiency value, based on the real data gathered during the period (t) ϵ (2015-2019).</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Numerical Solution of the Two-Dimensional Elasticity Problem in Strains]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12550]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Khaldjigitov Abduvali&nbsp; &nbsp;and Djumayozov Umidjon&nbsp; &nbsp;</p><p>Usually, the boundary value problems of the theory of elasticity are formulated with respect to displacements and reduce to the well-known Lame equations. Strains and stresses can then be calculated from the displacements obtained as the solution of Lame's equations. Also known are the Beltrami–Michell equations, which make it possible to formulate the boundary value problem of the theory of elasticity with respect to stresses. Currently, boundary value problems of elasticity theory in stresses are studied in most detail in the two-dimensional case and are usually solved numerically with the introduction of the Airy stress function; the direct solution of boundary value problems of elasticity theory with respect to stresses requires further research. This work, similarly to the boundary value problem in stresses, is devoted to the formulation and numerical solution of boundary value problems of the theory of elasticity with respect to strains. The proposed boundary value problem consists of six Beltrami–Michell-type equations depending on the strains and three equilibrium equations expressed with respect to the strains. As boundary conditions, in addition to the usual conditions on surface forces, three additional conditions based on the equilibrium equations are introduced. The boundary value problem is considered in detail for a rectangular region. A discrete analogue of the boundary value problem is constructed by the finite difference method, and the convergence of the difference schemes and an iterative method for their solution are studied. Software for solving boundary value problems of elasticity theory in strains has been developed in the C++ environment. 
A number of boundary value problems on the deformation of a rectangular plate are solved numerically under various boundary conditions. The reliability of the obtained results is substantiated by comparing the numerical results with the exact solution, as well as with the known solutions of the plate tension problems under parabolic and uniformly distributed edge loads.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Fuzzy Norm on Fuzzy <img src=image/13427754_01.gif>-Normed Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12549]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mashadi&nbsp; &nbsp;Abdul Hadi&nbsp; &nbsp;and Sukono&nbsp; &nbsp;</p><p>In various articles, the fuzzy <img src=image/13427754_06.gif>-normed space concept for <img src=image/13427754_02.gif> is constructed from a fuzzy normed space using the intuitionistic approach or the <img src=image/13427754_03.gif>-norm approach. However, fuzzy normed spaces can also be approached using fuzzy points. This paper shows that a fuzzy <img src=image/13427754_06.gif>-normed space for <img src=image/13427754_02.gif> can be constructed from a fuzzy normed space using the fuzzy point approach to fuzzy sets. Furthermore, for <img src=image/13427754_07.gif>, we also discuss how to construct a fuzzy (<img src=image/13427754_05.gif>)-normed space from a fuzzy <img src=image/13427754_06.gif>-normed space using the fuzzy point approach. The method is as follows: from a fuzzy normed space, we construct a norm function that satisfies the properties of a fuzzy <img src=image/13427754_06.gif>-norm, so that a fuzzy <img src=image/13427754_06.gif>-normed space is derived; conversely, from a fuzzy <img src=image/13427754_06.gif>-normed space, we construct a norm function that satisfies the properties of a fuzzy (<img src=image/13427754_05.gif>)-norm, so that a fuzzy (<img src=image/13427754_05.gif>)-normed space is obtained. Finally, we obtain two new theorems stating that a fuzzy <img src=image/13427754_06.gif>-normed space can always be constructed from any fuzzy normed space, and a fuzzy (<img src=image/13427754_05.gif>)-normed space for <img src=image/13427754_07.gif> can always be constructed from a fuzzy <img src=image/13427754_06.gif>-normed space, using fuzzy points of fuzzy sets.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Empirical Power and Type I Error of Covariate Adjusted Nonparametric Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12548]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Jiabu Ye&nbsp; &nbsp;and Dejian Lai&nbsp; &nbsp;</p><p>In clinical trials, practitioners collect baseline covariates for enrolled patients prior to treatment assignment. In recent guidance from the Food and Drug Administration and the European Medicines Agency, regulators encourage practitioners to utilize baseline information at the analysis stage to improve efficiency. However, the current guidance focuses on linear or non-linear modelling approaches; nonparametric statistical methods are not its focus. In this article, we conducted simulations of several covariate-adjusted nonparametric statistical tests. The Wilcoxon rank sum test is a widely used method for comparing non-normally distributed response variables between two groups, but its original form does not take into account the possible effect of covariates. We investigated the empirical power and the type I error of Wilcoxon-type test statistics under various settings of covariate adjustment commonly encountered in clinical trials. In addition to Wilcoxon-type test statistics, we also compared the simulation results with more advanced nonparametric test statistics such as the aligned rank test and the Jaeckel, Hettmansperger-McKean test. The simulation results show that, when there is covariate imbalance, applying the Wilcoxon rank sum test without adjusting for the covariates becomes problematic. The survey of covariate adjustments for the various tests under investigation gives brief guidance to trial practitioners in real practice, particularly those whose baseline covariates are not well balanced.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Henstock - Kurzweil Integral for Banach Valued Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12422]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>T. G. Thange&nbsp; &nbsp;and S. S. Gangane&nbsp; &nbsp;</p><p>In this paper, we study the Henstock - Kurzweil integral, a generalized Riemann integral. The Henstock - Kurzweil integral is a natural extension of the Riemann integral. We define the Henstock - Kurzweil integral of a Banach space valued function with respect to a function of bounded variation, which extends the real valued Henstock - Kurzweil integral with respect to an increasing function. We investigate elementary properties of the Henstock - Kurzweil integral of a Banach space valued function with respect to a function of bounded variation. We prove the convergence theorems and the Saks - Henstock lemma for the Henstock - Kurzweil integral of Banach valued functions with respect to a function of bounded variation. Equi-integrability with respect to a Banach space valued function is defined, and an equi-integrability theorem for the Henstock - Kurzweil integral of a Banach space valued function with respect to a function of bounded variation is proved. Finally, the Bochner Henstock - Kurzweil integral of a Banach valued function with respect to a function of bounded variation is defined, and the relation between the Bochner Henstock - Kurzweil integral and the Henstock - Kurzweil integral is exhibited.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Mathematical Analysis of Dynamic Models of Suspension Bridges with Delayed Damping]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12421]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Akbar B. Aliyev&nbsp; &nbsp;and Yeter M. Farhadova&nbsp; &nbsp;</p><p>Suspension bridges are a type of construction in which the deck is suspended from a series of suspension cables by vertical hangers. The first modern examples of this design began to appear in the early 1800s. Modern suspension bridges are lightweight, aesthetically pleasing and can span longer distances than any other bridge form. Many papers have been devoted to the modelling of suspension bridges; for instance, Lazer and McKenna studied the problem of nonlinear oscillation in a suspension bridge. They introduced a (one-dimensional) mathematical model for the bridge that takes into account the fact that the coupling provided by the stays connecting the main cable to the deck of the roadbed is fundamentally nonlinear. That is, they obtained a system of semilinear hyperbolic equations, where the first equation describes the vibration of the roadbed in the vertical plane and the second describes that of the main cable from which the roadbed is suspended by the tie cables. Recently, interest in this field has been increasing at a high rate. In this paper, we investigate some mathematical models of suspension bridges with a strong delay in the linear aerodynamic resistance force. We establish the exponential decay of the solution for the corresponding homogeneous system and prove the existence of an absorbing set as well as a bounded attractor.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Solutions of Nonlinear Fractional Differential Equations with Nondifferentiable Terms]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12420]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Monica Botros&nbsp; &nbsp;E.A.A.Ziada&nbsp; &nbsp;and I.L. EL-Kalla&nbsp; &nbsp;</p><p>In this research, we employ a newly developed strategy based on a modified version of the Adomian decomposition method (ADM) to solve nonlinear fractional differential equations (FDE) with both differentiable and nondifferentiable terms. FDE have attracted the interest of many researchers, owing to the development of both the theory and the applications of fractional calculus. Fractional differential equations can be used to model various fields of science and engineering such as fluid flows, viscoelasticity, electrochemistry, control, electromagnetics, and many others. Several fractional derivative definitions have been presented, including the Riemann–Liouville, Caputo, and Caputo–Fabrizio fractional derivatives. In this technique, we only need to calculate the first Adomian polynomial, avoiding the hurdles posed by the remaining polynomials of the nondifferentiable nonlinear terms. Furthermore, the proposed technique is easy to programme and produces the desired output with minimal work and time on the same processor. When compared to the exact solution, this method has the advantage of reducing calculation steps while producing accurate results. The supporting evidence shows that the modified Adomian decomposition method has an advantage over the traditional one, which is demonstrated very clearly on nonlinear fractional differential equations. Computational examples involving difficult problems are used to prove the new algorithm's efficiency. The results show that the modified ADM is powerful and converges faster than the original method. Convergence analysis is discussed, and uniqueness is also established.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Some Results on Theory of Numbers, Partial Differential Equations and Numerical Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12419]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>B. M. Cerna Maguiña&nbsp; &nbsp;Dik D. Lujerio Garcia&nbsp; &nbsp;Carlos Reyes Pareja&nbsp; &nbsp;and Torres Dominguez Cinthia&nbsp; &nbsp;</p><p>In this article, given a number <img src=image/13427314_01.gif> that ends in one and assuming that there are integer solutions <img src=image/13427314_02.gif> for the equations <img src=image/13427314_03.gif> or <img src=image/13427314_04.gif> or <img src=image/13427314_05.gif>, we use the straight line passing through the center of gravity of the triangle bounded by the vertices <img src=image/13427314_06.gif>. Considering A ≥ 25, we manage to divide the domain of the curve <img src=image/13427314_03.gif> into two disjoint subsets and, using Theorem (2.2) of this article, we find the subset where the integer solution of the equation <img src=image/13427314_03.gif> lies. A similar process is carried out when <img src=image/13427314_05.gif>, in case P is of the form <img src=image/13427314_04.gif> or <img src=image/13427314_07.gif>. These curves are different, and to obtain a process similar to the one carried out previously, we proceed according to Observation 2.2. Our results minimize the number of operations to perform when the problem is implemented computationally. Furthermore, we obtain some conditions for finding the solution of the equations <img src=image/13427314_08.gif> <img src=image/13427314_09.gif>, where <img src=image/13427314_10.gif> is of class <img src=image/13427314_11.gif>, <img src=image/13427314_12.gif> and <img src=image/13427314_13.gif> is a bounded open domain of <img src=image/13427314_14.gif> with piecewise smooth boundary <img src=image/13427314_15.gif>. 
All the operations carried out to find the solutions assume that they exist, and we have found the conditions that <img src=image/13427314_16.gif> must satisfy for the coefficients <img src=image/13427314_17.gif>. We finish by finding an optimal domain for the real solution of a given polynomial of degree five. The process carried out on this polynomial can also be used to reduce the degree of a given polynomial and thus obtain information about its roots.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[The Exact Solutions of the Space and Time Fractional Telegraph Equations by the Double Sadik Transform Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12418]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Prapart Pue-on&nbsp; &nbsp;</p><p>The double integral transform is a robust tool for handling scientific and engineering problems. Besides its simplicity of use and straightforward application, its ability to reduce a problem to an algebraic equation that can be easily solved is a substantial advantage. Among the several integral transforms, the double Sadik transform is acknowledged to be one of the most frequently used in solving differential and integral equations. This work investigates this generalized double integral transform. The double Sadik transform of partial fractional derivatives in the Caputo sense is derived, and the double Sadik transform method is introduced. The method is applied to solve initial boundary value problems for linear space- and time-fractional telegraph equations. Moreover, the suggested strategy can be extended to non-linear problems via an iterative method and a decomposition concept. Some problems with known solutions are evaluated at relatively minimal computational cost. The results are represented using the Mittag-Leffler function and cover the solution of the classical telegraph equation. The obtained exact solutions not only show the accuracy and efficiency of the technique, but also reveal its reliability when compared to those obtained using other methods.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[On <img src=image/13427444_01.gif>-Ideal Statistically Convergent of Double Sequences in n-Normed Spaces over Non-Archimedean Fields]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12417]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>R. Sakthipriya&nbsp; &nbsp;and K. Suja&nbsp; &nbsp;</p><p>The main aim of this work is to investigate some important properties of statistically convergent sequences in non-Archimedean fields. Statistical convergence has been discussed in various fields of mathematics, namely approximation theory, measure theory, probability theory, trigonometric series, number theory, etc. The concept of summability over valued fields is a significant area of mathematics with many applications in analytic continuation, quantum mechanics, probability theory, Fourier analysis, approximation theory, and fixed point theory. The theory of statistical convergence plays a notable role in summability theory and functional analysis. The purpose of this work is to provide certain characterizations of <img src=image/13427444_01.gif> ideal statistically convergent sequences and <img src=image/13427444_01.gif> ideal statistically Cauchy sequences in n-normed spaces and to establish the relevant results in non-Archimedean fields. The <img src=image/13427444_01.gif> ideal statistically convergent sequence and <img src=image/13427444_01.gif> ideal statistically Cauchy sequence are defined, and a few related theorems are proved in the field <img src=image/13427444_02.gif>. The results of this work are extended to establish the statistical convergence of double sequences in n-normed spaces, and some new results are proved. The main concept in this work is <img src=image/13427444_01.gif> ideal statistical convergence of double sequences in an n-normed space over a complete, non-trivially valued, non-Archimedean field. Throughout this article, <img src=image/13427444_02.gif> denotes such a field.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Mathematical Analysis of Priority Bi-serial Queue Network Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12416]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Deepak Gupta&nbsp; &nbsp;Aarti Saini&nbsp; &nbsp;and A.K.Tripathi&nbsp; &nbsp;</p><p>One of the most comprehensive theories of stochastic models is queueing theory. Through innovative analytical research with broad applicability, advanced theoretical models are being developed. In the present research, we investigate a queueing network model with low- and high-priority users and different server transition probabilities. The two service channels used in this study, <img src=image/13428051_01.gif> and <img src=image/13428051_02.gif>, are connected to the same server, <img src=image/13428051_03.gif>. Customers with low and high priorities are received by the server <img src=image/13428051_04.gif>. The objective of the research is to design a model that helps minimize congestion in different systems. The Poisson distribution is used to characterize both the arrival and service patterns. The system functions in a stochastic domain. The differential difference equations have been established, and the consistency of the behaviour of the system has been examined. The generating function approach, the laws of calculus, and statistical formulae are used to assess the model's performance. Numerical analyses and graphical presentations are used to show the model's outcomes. This model can be applied in a number of real situations, including administration, manufacturing, hospitals, banking systems, etc. In such situations, the present study is quite beneficial for understanding the system and redesigning it.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Using Clustering Methods to Detect the Revealed Preferences of Moroccans towards the Electric Vehicles: Latent Class Analysis (LCA) and K-Modes Algorithm (K-MA)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12415]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Taoufiq El Harrouti&nbsp; &nbsp;Mourad Azhari&nbsp; &nbsp;Hajar Deqqaq&nbsp; &nbsp;Abdellah Abouabdellah&nbsp; &nbsp;Sanaa El Aidi&nbsp; &nbsp;and Habiba Chaoui&nbsp; &nbsp;</p><p>Latent Class Analysis (LCA) and the K-Modes Algorithm (K-MA) are two unsupervised machine learning techniques. These methods aim to group individuals on the basis of their shared traits. They are used with categorical data and can detect people's opinions toward green forms of transportation, especially Electric Vehicles (EV) as an alternative to conventional internal combustion engine vehicles. The LCA approach discovers group profiles (clusters) based on observed variables, whereas the K-MA technique is an adaptation of the k-means algorithm for categorical variables. In this study, we apply these two methods to identify Moroccans' preferences for the electrification of their means of transportation. Both algorithms are able to divide the analyzed sample into two groups, with the first group being more interested in EV. The second group consists of individuals who are less concerned about ecologically sustainable transportation. In addition, we conclude that the LCA algorithm performs well and is superior to the K-MA, its discrimination power (65% vs 35%) being greater than that of the K-MA (52% vs 48%).</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Asymptotically Minimax Goodness-of-fit Testing for Single-index Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12414]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Jean-Philippe Tchiekre&nbsp; &nbsp;Christophe Pouet&nbsp; &nbsp;and Armel Fabrice E. Yodé&nbsp; &nbsp;</p><p>In the context of the nonparametric multivariate regression model, we are interested in goodness-of-fit testing for single-index models. These models are dimension reduction models and are therefore useful in multidimensional nonparametric statistics because of the well-known phenomenon called the curse of dimensionality. Fan and Li [5] proposed the first consistent test for goodness-of-fit testing of the single-index model by using the nonparametric kernel estimation method and a central limit theorem for degenerate <img src=image/13427297_05.gif>-statistics of order higher than two. Since then, the minimax properties of this test have not been investigated. Following this work, we use the asymptotic minimax approach. We are interested in finding the asymptotic minimax rate of testing <img src=image/13427297_01.gif>, which gives the minimal distance between the null and alternative hypotheses such that successful testing is possible. We propose a test procedure of level <img src=image/13427297_02.gif> which can tend to zero as the sample size tends to infinity. We establish the minimax asymptotic properties of our test procedure by showing that it reaches the asymptotic minimax rate <img src=image/13427297_01.gif> for dimension <img src=image/13427297_03.gif>, and that no test of level <img src=image/13427297_02.gif> reaches this rate for <img src=image/13427297_04.gif>. Because of its minimax asymptotic properties, our test is able to distinguish the null hypothesis from the closest possible alternative. 
These results were made possible by a large deviation result that we established for a degenerate U-statistic of order two appearing in our decision variable.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Accuracy Improvement of Block Backward Differentiation Formulas for Solving Stiff Ordinary Differential Equations Using Modified Versions of Euler's Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12413]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Nurfaezah Mohd Husin&nbsp; &nbsp;Iskandar Shah Mohd Zawawi&nbsp; &nbsp;Nooraini Zainuddin&nbsp; &nbsp;and Zarina Bibi Ibrahim&nbsp; &nbsp;</p><p>In this study, the fully implicit 2-point block backward differentiation formulas (BBDF) method has been successfully utilized for solving stiff ordinary differential equations (ODEs), taking into account the use of new starting methods, namely the modified Euler's method (MEM), improved modified Euler's method (IMEM), and new Euler's method (NEM). The reason for proposing the BBDF is that the method has been proven useful for stiff ODEs due to its A-stability properties. Furthermore, the method is able to approximate the solutions at two points simultaneously at each step. The proposed method is implemented through Newton's iteration procedure, which involves the calculation of the Jacobian matrix. The accuracy of the method is evaluated based on its performance in solving linear and non-linear initial value problems (IVPs) of first-order stiff ODEs with transient and steady-state solutions. Some comparisons are made with the conventional BBDF approach to indicate the reliability of the proposed method. Numerical results indicate that not only does the classical Euler's method provide accurate solutions for the BBDF, but the various modified versions of Euler's method also improve the accuracy of the BBDF in terms of absolute error at certain step sizes and stages of iteration.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Towards a Model for Simulating Collision of Multiple Water Droplets Flow Down a Leaf Surface]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12412]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Moa'ath N. Oqielat&nbsp; &nbsp;</p><p>In the current article, a physics-based mathematical model is presented to generate realistic trajectories of water droplets across the Frangipani leaf surface; the model can be applied to any other kind of leaf. This is the first in a series of two articles; in the second article, we will study the collision between the droplet and the liquid streak. The model has many applications in different scientific and engineering fields, such as modelling pesticide movement on leaf surfaces and modelling absorption and nutrition systems. The leaf surface consists of a triangular mesh structure constructed using techniques such as the well-known EasyMesh method. The leaf surface is fitted using techniques such as finite element methods and the Clough-Tocher method, applied to a set of 3D real-world data points collected by a laser scanner, and the motion of the droplet on each triangle is calculated using a derived equation of motion. The motion of the droplet is affected by different forces, such as gravity and drag. Simulations of the model were carried out in Matlab, and the results are realistic and capture the droplet motion very well.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Stochastic-Fractal Analysis Modeling of Salts Precipitation from Aqueous Solution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12411]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Isela J. Reyna-Rosas&nbsp; &nbsp;Josué F. Pérez-Sánchez&nbsp; &nbsp;Edgardo Suárez-Domínguez&nbsp; &nbsp;Alejandra Hernández-Alvarado&nbsp; &nbsp;Susana Gonzalez-Santana&nbsp; &nbsp;and F. Izquierdo-Kulich&nbsp; &nbsp;</p><p>Electrolytes are of interest because thin plate coatings are normally obtained from aqueous solutions. The properties of the surface are important because characteristics such as resistance and durability depend on them. To understand the phenomenological processes, it is better to analyze simpler systems such as the precipitation of sodium chloride. In this paper, a model is proposed to predict the temporal behavior of the fractal dimension of the patterns formed by salt precipitation through solvent evaporation on a scattering surface; for fractal box counting, the ImageJ software was used. The model was obtained by applying stochastic methods and fractal geometry, describing the internal fluctuations caused by precipitation and dissolution on the mesoscopic scale of solid crystalline particles. By fitting the proposed model to the experimental data, it is possible to estimate the velocity constants related to the microscopic precipitation processes of the particles that form the pattern. The model was validated and used to study the precipitation of carbonate salts and sodium chloride, obtaining predictions consistent with the physicochemical properties of these salts. From the fit of the proposed models to the observed experimental data, the values of the velocity constants of the precipitation and dissolution processes were also estimated.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Analysis of Heterogeneous Feedback Queue Model in Stochastic and in Fuzzy Environment Using L-R Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12410]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Vandana Saini&nbsp; &nbsp;Deepak Gupta&nbsp; &nbsp;and A.K. Tripathi&nbsp; &nbsp;</p><p>In this paper, we analyse a feedback queue network in stochastic and fuzzy environments. We consider a model with three heterogeneous servers which are attached to a common server at the start. At the initial stage, all queue performance measures are obtained in the steady state, that is, in a stochastic environment. After that, the work is extended to a fuzzy environment, because in practice the characteristics of the system are not exact but uncertain in nature. In the present work, we use the probability generating function technique, triangular fuzzy numbers, and classical formulae for the calculation of all queue characteristics, together with the L-R method to calculate the queue characteristics in a fuzzy environment.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[A String of Disjoint Job Block with Processing Time Associated with Probability in Two-Stage Weighted Open Shop Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12409]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Jatinder Pal Kaur&nbsp; &nbsp;Deepak Gupta&nbsp; &nbsp;Adesh kumar Tripathi&nbsp; &nbsp;and Renuka&nbsp; &nbsp;</p><p>The open-shop scheduling problem (OSSP) is a well-known topic with wide industrial applications and belongs to one of the vital issues in the field of engineering. This paper deals with a two-stage open shop scheduling problem in which the processing times of jobs are associated with probabilities. The concept of a string of two disjoint job blocks is considered, in which the first block covers the jobs with a fixed route and the second block covers the jobs with an arbitrary path. Further, weights of jobs are also introduced due to their applicability and relative importance in the real world. The objective of this study is to propose a heuristic which, on execution, provides an optimal or near-optimal schedule to minimize the makespan. Several numerical illustrations are produced in MATLAB 2018a to demonstrate the effectiveness of the proposed approach, and to confirm its performance, the results are compared with the existing methods developed by Johnson and Palmer.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[The Initialization of Flexible K-Medoids Partitioning Method Using a Combination of Deviation and Sum of Variable Values]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12408]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Kariyam&nbsp; &nbsp;Abdurakhman&nbsp; &nbsp;Subanar&nbsp; &nbsp;and Herni Utami&nbsp; &nbsp;</p><p>This research proposes a new algorithm for clustering datasets using the Flexible K-Medoids Partitioning Method. The procedure is divided into two phases: selecting the initial medoids and partitioning the dataset. The initial medoids are selected based on the block representation of a combination of the sum and deviation of the variable values. The relative positions of the objects will be separated when the sums of the values of the p variables differ, even though the objects have the same variance. The objects are selected flexibly from each block as the initial medoids to construct the initial groups. This process ensures that any identical objects will be in the same group. Candidates for the final medoids are determined randomly by selecting objects from each initial group. The final medoids are then identified as the combination of objects that produces the minimum total deviation within the clusters. The proposed method overcomes the empty groups that may arise in the simple and fast k-medoids algorithm. In addition, it overcomes the placement of identical objects in different groups that may occur in the initialization of the simple k-medoids algorithm. Furthermore, artificial data and six real datasets, namely iris, ionosphere, soybean small, primary tumor, heart disease case 1, and zoo, were used to evaluate this method, and the results were compared with other algorithms based on the performance of the initial and final groups. The experimental results showed that the proposed method ensures that no initial groups are empty. 
For the real datasets, the adjusted Rand index and clustering accuracy of the final groups of the new algorithm outperform those of the other methods.</p>]]></description>
<pubDate>Sep 2022</pubDate>
</item>
<item>
<title><![CDATA[Evolution Equations of Pseudo Spherical Images for Timelike Curves in Minkowski 3-Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12342]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>H. S. Abdel-Aziz&nbsp; &nbsp;H. Serry&nbsp; &nbsp;and M. Khalifa Saad&nbsp; &nbsp;</p><p>The pseudo spherical images of non-lightlike curves in Minkowski geometry are curves on the unit pseudo sphere that are intimately related to the curvatures of the original curves. These images are obtained by means of the Frenet-Serret frame vector fields associated with the curves. This classical topic is a well-known concept in the Lorentzian geometry of curves. In this paper, we introduce the pseudo spherical images of a timelike curve in Minkowski 3-space. The main purpose of this work is to obtain the time evolution equations of the orthonormal frame and curvatures of these images, using the compatibility conditions for the evolutions. Finally, the theoretical results obtained through this study are given in some important theorems and illustrated by two computational examples with the corresponding graphs.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[2-Odd Labeling of Graphs Using Certain Number Theoretic Concepts and Graph Operations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12341]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ajaz Ahmad Pir&nbsp; &nbsp;Tabasum Mushtaq&nbsp; &nbsp;and A. Parthiban&nbsp; &nbsp;</p><p>Graph theory plays a significant role in a variety of real-world systems. Graph concepts such as labeling and coloring are used to depict a variety of processes and relationships in material, social, biological, physical, and information systems. Specifically, graph labeling is used in communication network addressing, fault-tolerant system design, automatic channel allocation, etc. A 2-odd labeling assigns distinct integers to the nodes of <img src=image/13427595_01.png> in such a manner that the positive difference of adjacent nodes is either 2 or an odd integer, <img src=image/13427595_02.png>, <img src=image/13427595_03.png>. So, <img src=image/13427595_04.png> is a 2-odd graph if and only if it admits a 2-odd labeling. Studying important modifications of a given graph through various graph operations is interesting and challenging. These operations mainly modify the underlying graph's structure, so understanding the complex operations that can be performed on a graph or a set of graphs is essential. The motivation behind this article is to apply the concept of 2-odd labeling to graphs generated by various graph operations. Further, certain results on 2-odd labeling are derived using well-known number-theoretic concepts such as the twin prime conjecture and Goldbach's conjecture, and a few interesting applications of graph labeling and graph coloring are recalled.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Numerical Solution of Nonlinear Fredholm Integral Equations Using Half-Sweep Newton-PKSOR Iteration]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12340]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Labiyana Hanif Ali&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;Azali Saudi&nbsp; &nbsp;and Xu Ming Ming&nbsp; &nbsp;</p><p>This paper is concerned with producing an efficient numerical method for solving nonlinear Fredholm integral equations using the Half-Sweep Newton-PKSOR (HSNPKSOR) iteration. Numerical methods for solving nonlinear equations usually require immense amounts of computation. By implementing a half-sweep approach, we attempt to reduce this complexity and produce a more efficient method. For this purpose, the steps of the solution process are discussed, beginning with the derivation of the half-sweep approximation equation for nonlinear Fredholm integral equations using a quadrature scheme. The generated approximation equation is then used to develop a nonlinear system. Following that, the HSNPKSOR iterative method is formulated to solve nonlinear Fredholm integral equations. To verify the performance of the proposed method, the experimental results were compared with the Full-Sweep Newton-KSOR (FSNKSOR), Half-Sweep Newton-KSOR (HSNKSOR), and Full-Sweep Newton-PKSOR (FSNPKSOR) methods using three parameters: number of iterations, iteration time, and maximum absolute error. Several examples are used in this study to illustrate the efficiency of the tested methods. Based on the numerical experiments, the results show that the HSNPKSOR method is effective in solving nonlinear Fredholm integral equations, mainly in terms of iteration time, compared to the rest of the tested methods.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Dynamics of Nonlinear Operator Generated by Lebesgue Quadratic Stochastic Operator with Exponential Measure]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12339]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nur Zatul Akmar Hamzah&nbsp; &nbsp;Siti Nurlaili Karim&nbsp; &nbsp;Mathuri Selvarajoo&nbsp; &nbsp;and Noor Azida Sahabudin&nbsp; &nbsp;</p><p>The quadratic stochastic operator (QSO) is a branch of nonlinear operator studies initiated by Bernstein in 1924 through his work on population genetics. The study of QSO is still ongoing due to the incomplete understanding of the trajectory behavior of such operators under certain conditions and measures. In this paper, we introduce and investigate a class of QSO named the Lebesgue QSO, which gets its name from the Lebesgue measure used to define its probability measure. The broad definition of the Lebesgue QSO allows the construction of a new measure as its family of probability measures. We construct a class of Lebesgue QSO with exponential measure generated by a 3-partition with three different parameters defined on the continual state space <img src=image/13427304_01.png>. We also present the dynamics of such QSO by describing the fixed points and periodic points of the system of equations generated by the defined QSO using a functional analysis approach. The investigation concludes with the regularity of the operator: such a Lebesgue QSO is either regular or nonregular depending on the parameters and the defined measurable partitions. The results of this research allow us to define a new family of functions for the probability measure of the Lebesgue QSO and to compare their dynamics with those of the existing Lebesgue QSO.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[A Study on Sylow Theorems for Finding out Possible Subgroups of a Group in Different Types of Order]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12338]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Md. Abdul Mannan&nbsp; &nbsp;Md. Amanat Ullah&nbsp; &nbsp;Uttam Kumar Dey&nbsp; &nbsp;and Mohammad Alauddin&nbsp; &nbsp;</p><p>This paper presents a study of the Sylow theorems across different algebraic structures: groups, the order of a group, and subgroups, together with the associated notions of the automorphism groups of the dihedral groups, split extensions of groups, and vector spaces arising from the varying properties of real and complex numbers. The Sylow theorems are applied here in their generalized form. We discuss the possible subgroups of a group for different types of order, which gives practical insight into applications of the Sylow theorems. In algebraic structures, we deal with the operations of addition and multiplication; in order structures, with relations such as greater than and less than. It is through the study of the Sylow theorems that we realize the importance of definitions such as exact sequences and split extensions of groups, Sylow p-subgroups, and semi-direct products. It has therefore been found necessary and convenient to study these structures in detail: once a given situation is shown to satisfy the basic axioms of a structure, the already known properties of that structure apply immediately. Finally, we find the possible subgroups of a group for different types of order in both the abelian and non-abelian cases.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[The Power and Its Graph Simulations on Discrete and Continuous Distributions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12337]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Budi Pratikno&nbsp; &nbsp;Nailatul Azizah&nbsp; &nbsp;and Avita Nur Azizah&nbsp; &nbsp;</p><p>We determined the power function and its graph simulations for discrete (binomial and Poisson) and continuous (Chi-square) distributions. There are four important steps in the research methodology, summarized as follows: (1) determine the sufficient statistics (if possible), (2) construct the rejection region (the UMPT test is sometimes used), (3) derive the formula of the power, and (4) plot the graphs using the data (in simulation). The power formulas and their curves are then produced using <img src=image/13427687_01.png> code. The results showed that, in the discrete illustration (binomial distribution), the power depends on the number of trials <img src=image/13427687_02.png> and the bound of the rejection region. The power curve is sigmoid (an <img src=image/13427687_03.png>-curve) and tends to zero when the shape parameter (<img src=image/13427687_04.png>) is greater than 0.4; it decreases (starting from <img src=image/13427687_04.png> = 0.2) as the parameter theta increases. In the Poisson case, the power curve is not an <img src=image/13427687_03.png>-curve, and it depends only on the shape parameter <img src=image/13427687_05.png>. We note that the power of the Poisson test quickly approaches one for <img src=image/13427687_02.png> greater than 2 and <img src=image/13427687_05.png> less than 10. In this case, however, the size of the Poisson test is greater than 0.05, so the test is not reasonable even though the power is close to one; we must therefore choose the maximum power together with the minimum size. For the Chi-square distribution, the graphs of the power and size functions depend on the rejection region boundary (<img src=image/13427687_06.png>). Here, we note that the skewness of the <img src=image/13427687_03.png>-curve is positive as <img src=image/13427687_06.png> increases. Similarly, the size also depends on <img src=image/13427687_06.png> (and a constant), and it decreases as <img src=image/13427687_06.png> increases. We also note that the power quickly approaches one for large degrees of freedom (<img src=image/13427687_07.png>).</p>]]></description>
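The Poisson part of the power analysis above is easy to sketch numerically. A hedged illustration, where the rejection boundary k = 5 and the parameter grid are our own choices rather than the paper's:

```python
from math import exp, factorial

# Power of a one-sided Poisson test with rejection region {X at least k}:
# power(lmbda) = P(reject) as a function of the parameter lmbda.
def poisson_tail(lmbda, k):
    head = sum(exp(-lmbda) * lmbda ** x / factorial(x) for x in range(k))
    return 1 - head

k = 5  # rejection boundary, an illustrative assumption
for lam in [1, 2, 4, 8]:
    print(lam, round(poisson_tail(lam, k), 4))
```

As the abstract observes, the power climbs toward one quickly as the parameter grows, so the size at the null value must be examined alongside the power.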
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Generalized Biased Estimator for Beta Regression Model: Simulation and Application]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12336]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Rasha A. Farghali&nbsp; &nbsp;and Samah M. Abo-El-Hadid&nbsp; &nbsp;</p><p>The beta regression model is used for modeling proportions measured on a continuous scale; its parameters are estimated by the maximum likelihood method. Classical regression models, such as the linear regression model, and nonlinear regression models, such as logistic regression, are not suitable for such situations. As in the linear regression model, the independent variables are assumed to be uncorrelated; if this assumption is not met, multicollinearity appears. The multicollinearity problem means that there is a near dependency between the independent variables. Biased estimators are commonly used to correct the multicollinearity problem. In this study, we propose a generalized biased estimator for correcting multicollinearity in beta regression, namely the generalized beta ridge regression estimator (GBRRE). The performance of the proposed generalized biased estimator is evaluated theoretically via the matrix and scalar mean squared errors, and practically using a Monte Carlo simulation study. The simulation results show that the optimal shrinkage estimator is K1 and the worst one is K2. The proposed generalized estimator is also applied to a real data set of pre-university education students in Egypt during the academic year 2018/2019, and the application results agree with the simulation results. Finally, based on the results of the simulation study and the application, the suggested generalized biased estimator performs better than the maximum likelihood estimators.</p>]]></description>
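The ridge-type shrinkage that GBRRE generalizes can be illustrated on a deliberately collinear design. This is a sketch with invented toy data and the textbook ridge formula, not the paper's beta-regression estimator:

```python
import numpy as np

# Near-dependent columns produce the multicollinearity discussed above.
rng = np.random.default_rng(0)
n, p = 100, 3
z = rng.normal(size=n)
X = np.column_stack([z + 0.01 * rng.normal(size=n) for _ in range(p)])
beta = np.array([1.0, 1.0, 1.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

def ridge(X, y, k):
    # (X'X + kI)^(-1) X'y ; k = 0 recovers ordinary least squares.
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

ols, shrunk = ridge(X, y, 0.0), ridge(X, y, 1.0)
print(np.round(ols, 2), np.round(shrunk, 2))
```

The shrinkage constant k plays the role of the K1/K2 estimators compared in the simulation study; larger k trades bias for a smaller coefficient norm.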
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Analysis of Limiting Ratios of Special Sequences]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12335]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>A. Dinesh Kumar&nbsp; &nbsp;and R. Sivaraman&nbsp; &nbsp;</p><p>In this paper, we determine the limit of the ratio of the (n+1)th term to the nth term of famous sequences in mathematics: the Fibonacci sequence, Fibonacci-like sequences, Pell's sequence, generalized Fibonacci sequences, the Padovan sequence, generalized Padovan sequences, the Narayana sequence, generalized Narayana sequences, generalized recurrence relations of Fibonacci type, polygonal numbers, the Catalan sequence, Cayley numbers, harmonic numbers, and partition numbers. We define this ratio as the limiting ratio of the corresponding sequence. Sixteen different classes of special sequences are considered in this paper, and we determine the limiting ratio of each. In particular, we show that the limiting ratio of the Fibonacci sequence and of Fibonacci-like sequences is the fascinating real number called the Golden Ratio, approximately 1.618. We show that the limiting ratio of Pell's sequence is the real number called the Silver Ratio, and that the limiting ratios of generalized Fibonacci sequences are the metallic ratios. We also obtain the limiting ratios of the Padovan and generalized Padovan sequences. The limiting ratio of the Narayana sequence is the super Golden Ratio, approximately 1.4655, and the limiting ratios of generalized Narayana sequences are the super metallic ratios. We further show that the limiting ratio of the generalized recurrence relation of Fibonacci type is 2, that of the polygonal numbers and harmonic numbers is 1, and that of the famous Catalan sequence and the Cayley numbers is 4. Finally, assuming Rademacher's formula, we show that the limiting ratio of the partition numbers is the natural logarithmic base e. We prove fourteen theorems to derive the limiting ratios of these well-known sequences. From these limiting ratio values, we can understand the asymptotic behavior of the terms of all these amusing sequences of numbers, and the values can also be applied in many counting and practical problems.</p>]]></description>
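The limiting ratios named above are easy to check numerically. A minimal sketch, where the recurrences and seeds are the standard ones and the helper name is ours:

```python
# Estimate the limiting ratio a(n+1)/a(n) by iterating each recurrence.
def limiting_ratio(step, seed, n=60):
    seq = list(seed)
    for _ in range(n):
        seq.append(step(seq))
    return seq[-1] / seq[-2]

fib = limiting_ratio(lambda s: s[-1] + s[-2], [1, 1])          # Golden Ratio, about 1.6180
pell = limiting_ratio(lambda s: 2 * s[-1] + s[-2], [1, 2])     # Silver Ratio, about 2.4142
narayana = limiting_ratio(lambda s: s[-1] + s[-3], [1, 1, 1])  # super Golden Ratio, about 1.4656
print(round(fib, 4), round(pell, 4), round(narayana, 4))
```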
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[A New Ranking Approach for Solving Fuzzy Transportation Problem with Pentagonal Fuzzy Number]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12334]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>V. Vidhya&nbsp; &nbsp;and K. Ganesan&nbsp; &nbsp;</p><p>In any decision-making process, imprecision is a significant issue. Various tools and approaches have been created to deal with the ambiguous environment of collective decision-making; fuzzy set theory is one of the most recent approaches for coping with imprecision. The Fuzzy Transportation Problem (FTP) is a well-known network-structured linear programming problem that arises in a variety of situations and has received much attention recently. Many authors have defined and solved the fuzzy transportation problem with frequently utilized fuzzy numbers such as triangular or trapezoidal fuzzy numbers. Real-world problems, however, usually involve more than four variables; to handle them, the pentagonal fuzzy number is applied. This article proposes an approach to solving transportation problems whose parameters are pentagonal fuzzy numbers, without requiring an initial feasible solution. An algorithm based on the core and spread method and an extended MODI method is developed to determine the optimal solution to the problem. The proposed process is based on an approximation method and gives a more efficient result. An illustrative example is used to validate the model. As a result, the proposed methodology is both simpler and more computationally efficient than existing approaches.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Subset Intersection Group Testing Strategy]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12333]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Maheswaran Srinivasan&nbsp; &nbsp;</p><p>Often, items can be classified as defective or non-defective, and the objective is to identify all the defective items, if any, in the population. Group testing deals with identifying all such defective items using a minimum number of tests. This paper proposes a probabilistic group testing scheme, the Subset Intersection Group Testing Strategy. The proposed algorithm divides the whole population, if it tests positive, into rows and columns and tests all the rows and columns individually. Under this strategy, the number of group tests is one when no defective is found, or 1+r+c, where r and c denote the numbers of rows and columns, when at least one defective is found. The proposed algorithms are validated using simulation for different combinations of group size and the incidence probability p of an item being defective, and implications are drawn. The results indicate that the average number of total tests required is small when p is small and increases considerably as p increases; therefore, for small values of p, the proposed strategy is more effective. An upper bound on the number of tests under this strategy is also estimated for various scenarios.</p>]]></description>
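The row-and-column scheme described above can be sketched as follows; the grid shape, function names, and incidence probability are illustrative assumptions:

```python
import random

def group_test(items):
    """One pooled test: positive if any item in the pool is defective."""
    return any(items)

def subset_intersection_tests(population, r, c):
    # Stage 1: a single test on the whole population.
    if not group_test(population):
        return 1, []
    # Stage 2: arrange the items in an r x c grid and test every row and column;
    # defectives lie at intersections of positive rows and positive columns.
    grid = [population[i * c:(i + 1) * c] for i in range(r)]
    pos_rows = [i for i in range(r) if group_test(grid[i])]
    pos_cols = [j for j in range(c) if group_test([grid[i][j] for i in range(r)])]
    suspects = [(i, j) for i in pos_rows for j in pos_cols]
    return 1 + r + c, suspects

random.seed(1)
population = [random.random() > 0.98 for _ in range(16)]  # p = 0.02
tests, suspects = subset_intersection_tests(population, 4, 4)
print(tests, suspects)
```

The test count is exactly 1 for an all-clear population and 1 + r + c otherwise, matching the counts stated in the abstract.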
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Construction and Selection of Double Inspection Single Sampling Plan for an Independent Process Using Bivariate Poisson Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12267]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>D. Senthilkumar&nbsp; &nbsp;and P. Sabarish&nbsp; &nbsp;</p><p>Many sampling concepts are used in production industries for inspecting samples and analysing the performance of a population; sampling plans also reduce production errors and help deliver error-free products. In this study, the construction and selection of Double Inspection with reference to the Single Sampling Plan (DISSP) by attributes is investigated using the bivariate Poisson distribution. The DISSP methodology is based on two quality characteristics of the same sample, and the plan parameters (n, C<sub>1</sub>, C<sub>2</sub>) are determined from the operating characteristics under the conventional two-point condition on the planning table parameters (AQL and LQL). The plan is designed around selected quality requirements and risks so that manufacturers can easily determine the required sample size and the corresponding acceptance criteria. The efficiency of the plan is compared with that of an existing single sampling plan, and a numerical example illustrates the operating tables. The study also shows the advantages of the proposed double inspection sampling plan through the performance curves: Operating Characteristic, Average Outgoing Quality, and Average Total Inspection.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Bayesian Model Averaging in Modeling of State Specific Failure Rates in HIV/AIDS Progression]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12215]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nahashon Mwirigi&nbsp; &nbsp;Prof. Richard Simwa&nbsp; &nbsp;Dr. Mary Wainaina&nbsp; &nbsp;and Dr. Stanley Sewe&nbsp; &nbsp;</p><p>In modeling HIV/AIDS progression, we carried out a comprehensive investigation into the risk factors for state-specific failure rates to identify the influential covariates using the Bayesian Model Averaging (BMA) method. BMA provides a posterior probability, via Markov Chain Monte Carlo (MCMC), for each variable that belongs to the model. It accounts for model uncertainty by averaging all plausible models, using their posterior probabilities as weights for model-averaged predictions and estimates of the required parameters. Patients' age and gender, among other covariates, have been found to strongly influence the state-specific failure rates, but the impact of each factor had not been quantified. This paper evaluates and quantifies the contribution of the patient's age and gender, the CD4 cell count during any two consecutive visits, and state movement on the state-specific failure rates for patients transiting to the same, a better, or a worse state. We implemented the method in R Studio using the BMS and BMA packages. For patients transiting to the same state, state movement had a comparatively large coefficient with a posterior inclusion probability (PIP) of 0.8788 (87.88%) and was hence the most critical variable, followed by the second-observation CD4 cell count with a PIP of 0.1416 (14.16%); age and gender came last with PIPs of 0.0556 (5.56%) and 0.0510 (5.10%), respectively. For patients transiting to a better state, the patients' age group dominated with a PIP of 0.9969 (99.69%), followed by patients' gender with a PIP of 0.0608 (6.08%); the CD4 cell count at the second observation had the lowest PIP of 0.0399 (3.99%). For patients transiting to a worse disease state, the CD4 cell count at the second observation proved to be the most important, with a PIP of 0.6179 (61.79%), followed by state movement with a PIP of 0.2599 (25.99%); patients' gender trailed with a PIP of 0.0467 (4.67%).</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Double Duplication of Special Classes of Cycle Related Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12214]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>R.Kuppan&nbsp; &nbsp;and L.Shobana&nbsp; &nbsp;</p><p>Let <img src=image/13427481_01.gif> be a simple, finite, connected plane graph with vertex set <img src=image/13427481_02.gif>, edge set <img src=image/13427481_03.gif> and face set <img src=image/13427481_04.gif>. Martin Baca [1] defined a connected plane graph <img src=image/13427481_01.gif> with vertex set <img src=image/13427481_02.gif>, edge set <img src=image/13427481_03.gif> and face set <img src=image/13427481_04.gif> to be <img src=image/13427481_05.gif> face antimagic if there exist positive integers <img src=image/13427481_06.gif> and <img src=image/13427481_07.gif> and a bijection <img src=image/13427481_08.gif>: <img src=image/13427481_09.gif> such that the induced mapping <img src=image/13427481_10.gif>: <img src=image/13427481_11.gif>, where for a face <img src=image/13427481_12.gif>, <img src=image/13427481_13.gif> is the sum of all <img src=image/13427481_14.gif> over all edges <img src=image/13427481_15.gif> surrounding <img src=image/13427481_12.gif>, is also a bijection. This paper proves the existence of face antimagic labelings for the double duplication of all vertices by edges of the gear graph <img src=image/13427481_16.gif> for <img src=image/13427481_17.gif>, the grid graph <img src=image/13427481_18.gif> for <img src=image/13427481_19.gif>, where <img src=image/13427481_20.gif> is even, the prism graph <img src=image/13427481_21.gif> for <img src=image/13427481_22.gif>, and the double duplication of all vertices by edges of a strong face of the triangular snake graph <img src=image/13427481_23.gif> for <img src=image/13427481_17.gif>. The <img src=image/13427481_05.gif> face antimagic labeling for the double duplication of special graphs can be used to encrypt and decrypt messages, a real-time application. In [3], we used the <img src=image/13427481_05.gif> face antimagic labeling of a strong face of the duplication of all vertices by edges of a tree <img src=image/13427481_23.gif> for <img src=image/13427481_24.gif> to encrypt and decrypt thirteen secret numbers; this can be extended to the double duplication of graphs to encode and decode numbers, which in turn can be used in military bases, ATMs and so on.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Uncertainty Optimization-Based Rough Set for Incomplete Information Systems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12213]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Arvind Kumar Sinha&nbsp; &nbsp;and Pradeep Shende&nbsp; &nbsp;</p><p>Often the information in the surrounding world is incomplete, and such incomplete information gives rise to uncertainties. Pawlak's rough set model is an approach to approximation under uncertainty. It uses a tolerance relation to obtain a single granulation of the incomplete information system for approximation. In this work, we extend the single granulation rough set for incomplete information systems to an uncertainty optimization-based rough set (UOBRS). The proposed approach minimizes uncertainty using multiple tolerance relations. We list the properties of the UOBRS for incomplete information systems and compare the UOBRS with the classical single granulation rough set (SGRS) and the multi-granular rough set (MGRS). We also list the basic properties of the uncertainty optimization-based MGRS (UOBMGRS). We introduce an application of the UOBRS to attribute subset selection for incomplete information systems, using the measure of approximation quality to assess the uncertainties of the attributes. Comparing the approximation quality of the attributes under UOBRS with that under SGRS and MGRS, we obtain higher approximation quality with fewer attributes using UOBRS. The proposed method is a novel approach to dealing with incomplete information systems for more effective dataset analysis.</p>]]></description>
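The tolerance-relation approximations underlying these quality comparisons can be sketched on a toy incomplete table; the data, the '*' missing-value convention, and the accuracy ratio below are our illustrative choices, not the paper's UOBRS construction:

```python
# Toy incomplete information system: two attributes per object, '*' = missing.
rows = {
    0: ("a", "x"), 1: ("a", "*"), 2: ("b", "x"), 3: ("b", "y"), 4: ("*", "y"),
}

def tolerant(u, v):
    # Objects tolerate each other if they agree wherever both values are known.
    return all(p == q or p == "*" or q == "*" for p, q in zip(rows[u], rows[v]))

def neighbourhood(u):
    return {v for v in rows if tolerant(u, v)}

target = {0, 1, 2}
lower = {u for u in rows if neighbourhood(u).issubset(target)}
upper = {u for u in rows if neighbourhood(u).intersection(target)}
quality = len(lower) / len(upper)  # accuracy of approximation
print(lower, upper, quality)
```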
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Perfect Codes in the Spanning and Induced Subgraphs of the Unity Product Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12212]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Mohammad Hassan Mudaber&nbsp; &nbsp;Nor Haniza Sarmin&nbsp; &nbsp;and Ibrahim Gambo&nbsp; &nbsp;</p><p>The unity product graph of a ring <img src=image/13426687_01.gif> is the graph obtained by taking the set of unit elements of <img src=image/13426687_01.gif> as the vertex set. Two distinct vertices <img src=image/13426687_02.gif> and <img src=image/13426687_03.gif> are joined by an edge if and only if <img src=image/13426687_04.gif>. The subgraphs of a unity product graph obtained by vertex and edge deletions are called its induced and spanning subgraphs, respectively. A subset <img src=image/13426687_05.gif> of the vertex set of an induced (spanning) subgraph of a unity product graph is called a perfect code if the closed neighbourhood of <img src=image/13426687_06.gif>, <img src=image/13426687_07.gif>, forms a partition of the vertex set as <img src=image/13426687_06.gif> runs through <img src=image/13426687_05.gif>. In this paper, we determine the perfect codes in the induced and spanning subgraphs of the unity product graphs associated with some commutative rings <img src=image/13426687_01.gif> with identity. As a result, we characterize the rings <img src=image/13426687_01.gif> for which the spanning subgraphs admit a perfect code whose order equals the cardinality of the vertex set. In addition, we establish sharp lower and upper bounds on the order of <img src=image/13426687_05.gif> for it to be a perfect code admitted by the induced and spanning subgraphs of the unity product graphs.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Cluster Analysis on Various Cluster Validity Indexes with Average Linkage Method and Euclidean Distance (Study on Compliant Paying Behavior of Bank X Customers in Indonesia 2021)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12211]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Solimun Solimun&nbsp; &nbsp;and Adji Achmad Rinaldo Fernades&nbsp; &nbsp;</p><p>This study examines the differences among various cluster validity indexes in grouping credit customers at Bank X, Malang City, Indonesia, using the average linkage method with Euclidean distance. The study uses primary data on the variables service quality, environment, mode, willingness to pay, and compliant paying behavior, obtained through a Likert-scale questionnaire distributed by purposive sampling to 100 respondents. The data are then cluster-analyzed with the average linkage method and Euclidean distance under various cluster validity indexes: Silhouette, Krzanowski-Lai, Dunn, Gap, Davies-Bouldin, C index, Global Silhouette, and Goodman-Kruskal. The analysis uses R software. The results show that the Krzanowski-Lai, Dunn, Gap, Global Silhouette, and Goodman-Kruskal indices yield the same cluster members, as do the Silhouette and Davies-Bouldin indices. The best cluster indexes are the Silhouette and Davies-Bouldin indexes. All validity indices produce the same between- and within-cluster variance. The novelty of this study is the simultaneous comparison of eight validity indices: Silhouette, Krzanowski-Lai, Dunn, Gap, Davies-Bouldin, C index, Global Silhouette, and Goodman-Kruskal.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Analysis of IBFS for Transportation Problem by Using Various Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12210]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S. K. Sharma&nbsp; &nbsp;and Keshav Goel&nbsp; &nbsp;</p><p>The supply, demand, and transportation costs in a transportation problem cannot be handled directly by every existing method. In the literature, various methods have been proposed for calculating transportation cost. In this paper, we compare various methods for measuring the optimal cost; the objective is to obtain the Initial Basic Feasible Solution (IBFS) of real-life problems by various methods, including AMM (Arithmetic Mean Method) and ASM (Assigning Shortest Minimax Method). The IBFS is one of the most important steps in analyzing the optimal cost of a transportation problem. Transportation costs are analyzed in many applications of the transportation problem, such as image registration and warping, reflector design, seismic tomography, and reflection seismology. A transportation problem (TP) is used to find the best way to supply products produced at several sources (origins) to various destinations; its main objective is to fulfil all destination requirements at the lowest possible cost. All transport companies look forward to adopting new approaches for minimizing cost, and a necessary and sufficient condition for the transportation problem to have a feasible solution is therefore essential. A numerical example is solved by the different approaches to obtain the IBFS.</p>]]></description>
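One standard IBFS construction, the least-cost method, can be sketched as follows. The 3x3 instance is invented, and the AMM/ASM variants compared in the paper follow the same allocate-and-cross-out pattern with different cell-selection rules:

```python
# Least-cost method: repeatedly allocate as much as possible to the
# cheapest remaining cell until all supply and demand are exhausted.
def least_cost_ibfs(cost, supply, demand):
    supply, demand = supply[:], demand[:]
    alloc = {}
    cells = sorted((cost[i][j], i, j) for i in range(len(supply))
                   for j in range(len(demand)))
    for c, i, j in cells:
        q = min(supply[i], demand[j])
        if q > 0:
            alloc[(i, j)] = q
            supply[i] -= q
            demand[j] -= q
    return alloc, sum(cost[i][j] * q for (i, j), q in alloc.items())

cost = [[4, 6, 8], [5, 8, 7], [6, 5, 9]]  # made-up balanced instance
supply = [120, 80, 100]
demand = [110, 90, 100]
alloc, total = least_cost_ibfs(cost, supply, demand)
print(alloc, total)  # an initial (not necessarily optimal) cost
```

The resulting allocation is only a starting point; an optimality method such as MODI would then be applied to it.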
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Advancement of Generalized Method of Moment Estimation (GMM) For Spatial Dynamic Panel Simultaneous Equations Models with Fixed Time Effect]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12209]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Dwi Endah Kusrini&nbsp; &nbsp;Setiawan&nbsp; &nbsp;Heri Kuswanto&nbsp; &nbsp;and Budi Nurani Ruchjana&nbsp; &nbsp;</p><p>This research paper aims to formulate and estimate spatial dynamic panel simultaneous equations models (SDPS) with a fixed time effect that potentially exhibit heteroscedasticity. In the model formed, the individual effect is not eliminated but placed in the error term to accommodate heteroscedasticity. GMM with the two-stage least squares (2SLS) method for a single equation is deliberately chosen as the estimation method for the SDPS model because it can eliminate heterogeneity in the model. The effectiveness of the estimator is assessed by the RMSE (Root Mean Square Error) and the mean and standard deviation (SD) of the estimation bias, using 100 Monte Carlo simulations with different parameter pairs and different pairs of N and T. The simulations show that changes in the parameter scenario have little effect on the mean and SD of the bias. The SDPS model shows that consistency of the estimated parameter values is achieved more easily as T increases, and changes in N and T indicate that the greater N and T are, the smaller the RMSE value tends to be.</p>]]></description>
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Solving Lorenz System by Using Lower Order Symmetrized Runge-Kutta Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12208]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>N. Adan&nbsp; &nbsp;N. Razali&nbsp; &nbsp;N. A. Zainuri&nbsp; &nbsp;N. A. Ismail&nbsp; &nbsp;A. Gorgey&nbsp; &nbsp;and N. I. Hamdan&nbsp; &nbsp;</p><p>Runge-Kutta methods are widely used numerical methods for solving the nonlinear Lorenz system. This study focuses on solving the Lorenz equations with the classical parameter values using the lower-order symmetrized Runge-Kutta methods: the Implicit Midpoint Rule (IMR) and the Implicit Trapezoidal Rule (ITR). We show the construction of the symmetrized methods and present numerical experiments based on the two methods without symmetrization, and with one- and two-step active symmetrization, in a constant step size setting. For the numerical experiments, we use MATLAB to solve the Lorenz system and plot its graphical solutions. Comparing the oscillatory behaviour of the solutions, it appears that IMR and two-step active IMR turn out to be chaotic while the rest turn out to be non-chaotic. We also compare the accuracy and efficiency of the methods; the results show that IMR performs better than its symmetrizers, while two-step active ITR performs better than ITR and one-step active ITR. Based on these results, we conclude that different implicit numerical methods with different steps of active symmetrization can significantly impact the solutions of the nonlinear Lorenz system. Since most studies of the Lorenz system are based on explicit time schemes, we hope this study motivates other researchers to analyze the Lorenz equations further using Runge-Kutta methods based on implicit time schemes.</p>]]></description>
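A minimal sketch of the Implicit Midpoint Rule on the Lorenz system with the classical parameters (sigma = 10, rho = 28, beta = 8/3); the fixed-point inner solver and step size are our assumptions, and the paper's MATLAB experiments and symmetrizers are not reproduced here:

```python
def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def imr_step(u, h, iters=50):
    # Solve u_next = u + h * f((u + u_next) / 2) by fixed-point iteration.
    v = u
    for _ in range(iters):
        mid = tuple((a + b) / 2 for a, b in zip(u, v))
        f = lorenz(mid)
        v = tuple(a + h * fa for a, fa in zip(u, f))
    return v

u = (1.0, 1.0, 1.0)
h = 0.01  # constant step size, an illustrative choice
for _ in range(1000):
    u = imr_step(u, h)
print(u)  # state after 10 time units, on the Lorenz attractor
```

A Newton solver would replace the fixed-point loop in production code; the fixed-point version is kept here for brevity and works for this small step size.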
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Markowitz Random Set and Its Application to the Paris Stock Market Prices]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12207]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ahssaine Bourakadi&nbsp; &nbsp;Naima Soukher&nbsp; &nbsp;Baraka Achraf Chakir&nbsp; &nbsp;and Driss Mentagui&nbsp; &nbsp;</p><p>In this paper, we combine random set theory and portfolio theory through the estimation of the lower bound of the Markowitz random set, based on the mean-variance analysis of asset portfolios, which represents the efficient frontier of a portfolio. Several Markowitz optimization approaches exist; the best known and most used in modern portfolio theory are Markowitz's approach, the Markowitz-Sharpe approach, and the Markowitz-Perold approach, and these methods are generally based on minimizing the variance of the portfolio return. The method used in this paper is completely different from those above, because it is based on the theory of random sets, which gives us the mathematical structure and the graph of the Markowitz set. The graphical representation of the Markowitz set delimits the investment region: this region, called the investment zone, contains the stocks from which a rational investor can choose. Mathematical and statistical estimation techniques are used to find the explicit form of the Markowitz random set and to study its elements as a function of the signs of the estimated parameters. Finally, we apply the results to the returns of a portfolio composed of 200 assets from the Paris stock market. The results of this simulation indicate which stocks to recommend to investors in order to optimize their choices: those located above the hyperbola that represents the Markowitz set.</p>]]></description>
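As a toy view of the mean-variance machinery behind the Markowitz set, the following computes the minimum-variance point of a two-asset frontier, the left tip of the hyperbola mentioned above; the volatilities and correlation are invented, and this textbook formula is not the paper's random-set estimator:

```python
# Closed-form minimum-variance weight for a two-asset portfolio.
def min_variance_weight(s1, s2, corr):
    cov = corr * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)

w = min_variance_weight(0.20, 0.30, 0.25)  # invented volatilities and correlation
print(round(w, 4))  # weight on asset 1; the remainder goes to asset 2
```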
<pubDate>Jul 2022</pubDate>
</item>
<item>
<title><![CDATA[Form Invariance - An Alternative Answer to the Measurement Problem of Item Response Theory]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12179]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Henrik Bernshausen&nbsp; &nbsp;Christoph Fuhrmann&nbsp; &nbsp;Hanns-Ludwig Harney&nbsp; &nbsp;Klaus Harney&nbsp; &nbsp;and Andreas Muller&nbsp; &nbsp;</p><p>The measurement problem of item response theory is the question of how to assign ability parameters <img src=image/13426621_01.gif> to persons and difficulty parameters <img src=image/13426621_02.gif> to items such that the comparison of abilities is independent of the specific set of difficulties <img src=image/13426621_02.gif>. Correspondingly, the comparison of difficulties <img src=image/13426621_02.gif> should be independent of the specific set of abilities <img src=image/13426621_01.gif>. These requirements are called specific objectivity. They are the basis of the Rasch model, which measures <img src=image/13426621_01.gif> and <img src=image/13426621_02.gif> on one and the same scale. The present paper asks the different question of how to assign ability parameters <img src=image/13426621_01.gif> to persons in such a way that the comparison of abilities is independent of the position on the scale where the measurement takes place. Correspondingly, the comparison of difficulties <img src=image/13426621_02.gif> should also be independent of the position on the scale where the calibration of difficulties takes place. Again, <img src=image/13426621_01.gif> and <img src=image/13426621_02.gif> are measured on one and the same scale. These requirements are called form invariance. They lead to an item response function (IRF) different from that of the Rasch model. It integrates information from <img src=image/13426621_01.gif> and <img src=image/13426621_02.gif> beyond the mere score dependence and also shows specific objectivity (in a generalized mathematical form). 
The properties of the form-invariant item response function are compared to those of the Rasch model and related to previous work by Warm, Jaynes, and Samejima. Moreover, several numerical examples of its use are provided.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[The <img src=image/13426661_01.gif>-prime Radicals in Posets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12178]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>J.Catherine Grace John&nbsp; &nbsp;and B.Elavarasan&nbsp; &nbsp;</p><p>A relation is a mathematical tool for describing relationships between sets. Relationships are common in databases and scheduling applications. Science and engineering aim to help humans make better decisions. To make these choices, we must first understand human expectations, the outcomes of various options, and the degree of confidence. From all of these data, partial orders are generated. Partial order and lattice theory are now widely used in several fields of engineering and computer science. To mention a few, they are used in cloud computing (vector clocks, global predicate detection), concurrency theory (pomsets, occurrence nets), programming language semantics (fixed-point semantics), and data mining (concept analysis). Other theoretical disciplines benefit from them as well, such as combinatorics, number theory, and group theory. Partially ordered sets emerge naturally when dealing with multidimensional systems of qualitative ordinal variables in social science, especially in ranking, prioritising, and assessment problems. As an alternative to standard techniques, partial order theory and partially ordered sets can be used to generate composite indicators for evaluating well-being, quality of life, and multidimensional poverty. They can be applied in multi-criteria analysis or for decision-making purposes in the study of individual and social preferences, including in social choice theory. They are also valuable in social network analysis, where they may be utilized to explore network topologies and dynamics mathematically. The Hasse diagram method, for example, produces a partial order with multiple incomparabilities (lack of order) between pairs of items. 
This is a common problem in ranking studies, and it can often be avoided by combining object attributes so as to obtain a complete order. However, such a combination introduces subjectivity and bias into the rating process. This work discusses the notion of a <img src=image/13426661_01.gif>-prime radical of a partially ordered set with respect to an ideal. We investigate the concept of <img src=image/13426661_01.gif>-primary ideals in posets and characterise <img src=image/13426661_01.gif>-primary ideals in relation to <img src=image/13426661_01.gif>-prime radicals. In addition, the <img src=image/13426661_01.gif>-primary decomposition of an ideal is constructed.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Simulation-Based Assessment of the Effectiveness of Tests for Stationarity]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12177]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Vasile-Alexandru Suchar&nbsp; &nbsp;and Luis Gustavo Nardin&nbsp; &nbsp;</p><p>Non-stationarity can arise from many sources, and it affects the analysis of a wide range of systems in various fields. There is a large set of statistical tests for checking specific departures from stationarity. This study uses Monte Carlo simulations over artificially generated time series data to assess the effectiveness of 16 statistical tests in detecting the real state of a wide variety of time series (i.e., stationary or non-stationary) and in identifying their source of non-stationarity, if applicable. Our results show that these tests have low statistical power outside their scope of operation. Our results also corroborate previous studies showing that there are effective individual statistical tests for detecting stationary time series, but no effective individual tests for detecting non-stationary time series. For example, Dickey-Fuller (DF) family tests are effective in detecting stationary time series or non-stationary time series with a positive unit root, but fail to detect a negative unit root as well as trends and breaks in the mean, variance, and autocorrelation. Stationarity and change point detection tests usually misclassify stationary time series as non-stationary. The Breusch-Godfrey (BG) serial correlation test, the ARCH test for homoscedasticity, and the structural change (SC) tests can help to identify the source of non-stationarity to some extent. This outcome reinforces the current practice of running several tests to determine the real state of a time series, highlighting the importance of selecting complementary statistical tests to correctly identify the source of non-stationarity.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Statistical Inference of Modified Kies Exponential Distribution Using Censored Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12176]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Fathy H. Riad&nbsp; &nbsp;</p><p>This paper deals with obtaining interval and point estimates for the Modified Kies exponential distribution in the case of progressive first failure (PFF) censored data. It uses two approaches, classical and non-classical methods of estimation, including the highest posterior density (HPD). We obtained the maximum likelihood estimates of the parameters and the log-likelihood function, using maximum likelihood estimation as the classical approach. We calculated confidence intervals for the parameters as well as bootstrap confidence intervals. We employed the posterior distribution for Bayesian estimation (BE) under a symmetric loss function, using MCMC via the Metropolis-Hastings (M-H) algorithm. Simulation results are used to illustrate the estimation methods. We used various censoring schemes and various sample sizes to determine whether the sample size affects the estimation measures, and different confidence intervals to determine the best and shortest intervals. The major findings of the paper are summarized in the conclusion section.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[A Bounded Maximal Function Operator and Its Acting on <img src=image/13425593_01.gif> Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12175]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Raghad S.Shamsah&nbsp; &nbsp;</p><p>With a novel operator known as the Spherical Scaling Wavelet Projection Operator, this study proposes new strategies for achieving almost everywhere convergence of the scaling wavelet expansions of <img src=image/13425593_01.gif> functions with respect to <img src=image/13425593_02.gif> under generic hypotheses. The hypotheses of the results are based on three types of conditions: conditions on the space of the function f, the kind of wavelet functions (spherical), and wavelet conditions. The results show that in the case of <img src=image/13425593_03.gif>, and under the assumption that the scaling wavelet function <img src=image/13425593_04.gif> of a given multiresolution analysis is a spherical wavelet with 0-regularity, almost everywhere convergence of <img src=image/13425593_01.gif> expansions is achieved under a new kind of partial sums operator. We examine some properties of spherical scaling wavelet functions, such as rapid decay and boundedness. After estimating the bounds of spherical scaling wavelet expansions, we examine the boundedness of this operator. The results are established on the almost everywhere convergence of wavelet expansions of functions in the space <img src=image/13425593_01.gif>. Several techniques are used to achieve this convergence; for example, the boundedness of the spherical Hardy-Littlewood maximal operator is obtained using the maximal inequality and Riesz basis function conditions. The convergence of general wavelet expansions is demonstrated using the spherical scaling wavelet function and several of its fundamental features. In fact, the partial sums in these expansions are dominated in magnitude by the maximal function operator, which may be applied to establish convergence. 
The convergence here may be obtained by assuming minimal regularity for a spherical scaling wavelet function <img src=image/13425593_05.gif>. The focus of this research is on recent advances in convergence theory for the partial sums operators of spherical wavelet expansions. The employment of scaling wavelet basis functions defined on <img src=image/13425593_06.gif> is regarded as key to solving convergence problems that occur in spaces of dimension <img src=image/13425593_06.gif>.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[On Subclasses of Uniformly Convex Spirallike Function Associated with Poisson Distribution Series]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12174]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>K.Marimuthu&nbsp; &nbsp;and J.Uma&nbsp; &nbsp;</p><p>Geometric Function Theory is one of the major areas of mathematics, highlighting the significance of geometric ideas and problems in complex analysis. Recently, univalent functions have received particular attention and are used to construct linear operators that preserve the class of univalent functions and some of its subclasses. Similar attention has been given to distribution series. Many authors have studied certain subclasses of univalent and bi-univalent functions connected with distribution series such as the Pascal, Binomial, Poisson, Mittag-Leffler-type Poisson, Geometric, Exponential, Borel, Generalized, and Generalized discrete probability distributions, to name a few. Some important results on uniformly convex spirallike functions (UCF) and uniformly spirallike functions (USF) related to such distribution series are also of interest. The main aim of the present investigation is to obtain necessary and sufficient conditions for Poisson distribution series to belong to the classes <img src=image/13427077_01.gif> and <img src=image/13427077_02.gif>. The inclusion properties associated with Poisson distribution series are also taken up for study in this article. Proofs of some inequalities on integral functions connected to the Poisson distribution series are also discussed. Further, some corollaries and results that follow from the theorems are analysed.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Comparison of Non-Preemptive Priority Queuing Performance Using Fuzzy Queuing Model and Intuitionistic Fuzzy Queuing Model with Different Service Rates]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12173]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>S. Aarthi&nbsp; &nbsp;and M. Shanmugasundari&nbsp; &nbsp;</p><p>This study provides non-preemptive priority fuzzy and intuitionistic fuzzy queuing models with unequal service rates. Non-preemptive priority queues are appropriate for performance evaluations of industrial, supply chain, stock management, workstation, data exchange, and telecommunications equipment. The parameters of non-preemptive priority queues may be fuzzy due to unpredictable causes. The primary goal of this research is to compare the performance of a non-preemptive queuing model under fuzzy queuing theory and under intuitionistic fuzzy queuing theory. The performance metrics in the fuzzy queuing theory model are given as a range of values, whereas the intuitionistic fuzzy queuing theory model offers a multitude of values. Both the arrival rate and the service rate are triangular and intuitionistic triangular fuzzy numbers in this case. An analysis is provided to identify the quality metrics using a developed methodology that works with the fuzzy values directly, without converting them into crisp values; to demonstrate the viability of the suggested method, two numerical problems are solved.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Exact Run Length Computation on EWMA Control Chart for Stationary Moving Average Process with Exogenous Variables]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12172]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Wannaphon Suriyakat&nbsp; &nbsp;and Kanita Petcharat&nbsp; &nbsp;</p><p>The exponentially weighted moving average (EWMA) control chart is a popular tool used to monitor and identify slight unnatural variations in manufacturing, industrial, and service processes. In general, control charts operate under the assumption that observations of the quality characteristic of interest are normally distributed, but this assumption is difficult to maintain in practice. In such situations, process data are often correlated, such as stock prices in economics or air pollution data in environmental studies. The characteristics and performance of a control chart are measured by the average run length (ARL). In this article, we present a new explicit formula of the ARL for the EWMA control chart based on the MAX(q,r) process. The proposed explicit formula of the ARL for the MAX(q,r) process is proved using the Fredholm integral equation technique. Moreover, ARL values are also assessed using the numerical integral equation method based on the Gaussian, midpoint, and trapezoidal rules. Banach's fixed point theorem guarantees the existence and uniqueness of the solution. Furthermore, the accuracy of the proposed explicit formula is assessed in terms of the absolute percentage relative error compared with the numerical integral equation method. The results show that the explicit formula's ARL values are similar to those obtained using the numerical integral equation method; the absolute percentage relative errors are less than 0.0001 percent. As a result, the essential conclusion is that the explicit formula outperforms the numerical method in computational time. 
Consequently, the proposed explicit formula and the numerical integral equation are alternative approaches for computing ARL values of the EWMA control chart. They can be applied in various fields, including economics, the environment, biology, and engineering.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Likelihood and Bayesian Inference in the Lomax Distribution under Progressive Censoring]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12171]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>A. Baklizi&nbsp; &nbsp;A. Saadati Nik&nbsp; &nbsp;and A. Asgharzadeh&nbsp; &nbsp;</p><p>The Lomax distribution has been used as a statistical model in several fields, especially for business failure data and reliability engineering. Accurate parameter estimation is very important because it is the basis for most inferences from this model. In this paper, we study this problem in detail. We develop several point and interval estimators for the parameters of this model, assuming the data are type II progressively censored. Specifically, we derive the maximum likelihood estimator and the associated Wald interval. Bayesian point and interval estimators are also considered. Since they cannot be obtained in closed form, we use a Markov chain Monte Carlo technique, the Metropolis-Hastings algorithm, to obtain approximate Bayes estimators and credible intervals. Lindley's asymptotic approximation to the Bayes estimator is obtained for the present problem. Moreover, we obtain the least squares and the weighted least squares estimators for the parameters of the Lomax model. Simulation techniques are used to investigate and compare the performance of the various estimators and intervals developed in this paper. We found that Lindley's approximation to the Bayes estimator has the smallest mean squared error among all estimators, and that the Bayes interval obtained using Metropolis-Hastings has better overall performance than the Wald interval in terms of coverage probabilities and expected interval lengths. Therefore, Bayesian techniques are recommended for inference in this model. An example of real data on total rain volume is given to illustrate the application of the methods developed in this paper.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[A Descent Conjugate Gradient Method With Global Convergence Properties for Non-Linear Optimization]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12170]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Salah Gazi Shareef&nbsp; &nbsp;</p><p>Iterative methods such as the conjugate gradient method are well-known methods for solving non-linear unconstrained minimization problems, partly because of their capacity to handle large-scale unconstrained optimization problems rapidly, and partly due to their algebraic representation and implementation in computer programs. The conjugate gradient method has wide applications in many fields, such as machine learning and neural networks. Fletcher and Reeves [1] extended the approach to nonlinear problems in 1964; theirs is considered the first nonlinear conjugate gradient technique. Since then, many other conjugate gradient methods have been proposed. In this work, we propose a new conjugate gradient coefficient for finding the minimum of non-linear unconstrained optimization problems, based on the Hestenes-Stiefel parameter. Section one contains the derivation of the new method. In section two, we verify the descent and sufficient descent conditions. In section three, we study the global convergence of the proposed method. In the fourth section, we give some numerical results using known test functions and compare the new method with the Hestenes-Stiefel method to demonstrate the effectiveness of the suggested method. Finally, we give conclusions.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Simulation Study of Bayesian Hurdle Poisson Regression on the Number of Deaths from Chronic Filariasis in Indonesia]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12169]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nur Kamilah Sa'diyah&nbsp; &nbsp;Ani Budi Astuti&nbsp; &nbsp;and Maria Bernadetha T. Mitakda&nbsp; &nbsp;</p><p>One regression model for explaining the relationship between predictors and a count-valued response variable is Poisson regression. In Poisson data with many zero values, the resulting overdispersion can be overcome with the Hurdle Poisson model. The Bayesian method is well suited for estimating parameters from small sample sizes for all distributions. Since the response variable of the original data does not follow a Poisson distribution, the parameters are estimated by the Bayesian method. The performance of Bayesian Hurdle Poisson regression is examined with simulation data over various sample sizes and overdispersion levels generated from the parameters of the original data, showing that the Bayesian Hurdle Poisson regression model proposed in this study is suitable for large sample sizes or for varying levels of overdispersion due to <img src=image/13426825_01.gif> or <img src=image/13426825_02.gif>, because a normal distribution is used as the prior. Even though the response variable of the simulation data is generated with a Poisson distribution, it still does not follow a Poisson distribution, in accordance with the original data. The parameters estimated from the simulation data are similar to those estimated from the original data (both the MLE Hurdle Poisson regression parameter estimator and the Bayesian Hurdle Poisson regression parameter estimator). This indicates that the simulation scenario is appropriate.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Effect of Parameter Estimation on the Performance of Shewhart <img src=image/13425522_01.gif>-joint Chart Looked at in Terms of the Run Length Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12040]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ugwu Samson O.&nbsp; &nbsp;Nduka Uchenna C.&nbsp; &nbsp;Eze Nnaemeka M.&nbsp; &nbsp;Odoh Paschal N.&nbsp; &nbsp;and Ugwu Gibson C.&nbsp; &nbsp;</p><p>Using spread charts to monitor process variation and thereafter using the <img src=image/13425522_02.gif>-chart to monitor the process mean is a common practice, as is applying these charts independently using estimated 3-sigma limits. Recently, some authors considered the application of the <img src=image/13425522_02.gif>- and R-charts together as a single charting scheme, the <img src=image/13425522_01.gif>-chart, when both standards are known (Case KK), when only the mean standard is known (Case KU), and when both standards are unknown (Case UU). The average run length (ARL) performance criterion was used. However, because of the skewed nature of the run length (RL) distribution, many authors have frowned on the use of the ARL as a sole performance measure and encouraged use of the percentiles of the RL distribution instead. Therefore, in this work the cdfs of the RLs of the chart under the cases mentioned are derived, and the percentiles are used to examine the chart for Case KU; the yet-to-be-considered case of the chart, Case UK, where only the process variance is known, is included for comparison. These are the contributions to the existing literature. The <img src=image/13425522_01.gif>-chart performed better in Case KU than in Case UK, and the unconditional in-control median run length described the behavior of the chart better than the in-control ARL.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Some Results on Number Theory and Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12039]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>B. M. Cerna Maguiña&nbsp; &nbsp;Dik D. Lujerio Garcia&nbsp; &nbsp;Héctor F. Maguiña&nbsp; &nbsp;and Miguel A. Tarazona Giraldo&nbsp; &nbsp;</p><p>In this work, we obtain bounds for the sum of the integer solutions of quadratic polynomials of two variables of the form <img src=image/13423078_01.gif>, where <img src=image/13423078_02.gif> is a given natural number that ends in one. This allows us to decide the primality of a natural number <img src=image/13423078_02.gif> that ends in one. We also obtain some results on twin prime numbers. In addition, we use special linear functionals defined on a real Hilbert space of dimension <img src=image/13423078_03.gif>, in which the relation <img src=image/13423078_04.gif> is obtained, where <img src=image/13423078_05.gif> is a real number for <img src=image/13423078_06.gif>. When <img src=image/13423078_07.gif> or <img src=image/13423078_08.gif>, we address Fermat's Last Theorem and the equation <img src=image/13423078_09.gif>, proving that both equations have no positive integer solutions. For <img src=image/13423078_10.gif>, the Cauchy-Schwarz theorem and Young's inequality are proved in an original way.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[The Non-Abelian Tensor Square Graph Associated to a Symmetric Group and its Perfect Code]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12038]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Athirah Zulkarnain&nbsp; &nbsp;Hazzirah Izzati Mat Hassim&nbsp; &nbsp;Nor Haniza Sarmin&nbsp; &nbsp;and Ahmad Erfanian&nbsp; &nbsp;</p><p>A graph is formed by a set of vertices and a set of edges. A graph can be associated with a group by using the group's properties to define its vertices and edges. The set of vertices of the graph comprises the elements of the group, while the set of edges is determined by properties of and requirements on the group. The non-abelian tensor square graph of a group G is defined with vertex set the set of non-tensor-centre elements of G. Two distinct vertices are then connected by an edge if and only if the non-abelian tensor square of these two elements is not equal to the identity of the non-abelian tensor square. This study investigates the non-abelian tensor square graph for the symmetric group of order six. In addition, some properties of this group's non-abelian tensor square graph are computed, including the diameter, the domination number, and the chromatic number. The perfect code of the non-abelian tensor square graph for the symmetric group of order six is also found in this paper.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Weibull Distribution as the Choice Model for State-Specific Failure Rates in HIV/AIDS Progression]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12006]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nahashon Mwirigi&nbsp; &nbsp;Stanley Sewe&nbsp; &nbsp;Mary Wainaina&nbsp; &nbsp;and Richard Simwa&nbsp; &nbsp;</p><p>This study considered the problem of selecting the best single model for state-specific failure rates in HIV/AIDS progression for patients on antiretroviral therapy, with age and gender as risk factors, using the exponential, two-parameter Weibull, and three-parameter Weibull distributions. CD4 count changes between any two consecutive visits, the mean waiting time (μ), and the transition rates (λ) for remaining in the same state or transiting to a better or a worse state were analyzed. Various model selection criteria, namely the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Log-Likelihood (LL), were used in each specific disease state. The Maximum Likelihood Estimation (MLE) method was applied to obtain the parameters of the distributions used. Plots of the state-specific transition rates (λ) depicted constant, increasing, decreasing, and unimodal trends. The three-parameter Weibull distribution was the best for male patients and for patients aged 40-69 years transiting through states 1-2, 3-4, and 4-5, and 1-2, 3-4, and 5-6, respectively, and for male patients, female patients, and patients aged 40-69 remaining in the same state. The two-parameter Weibull distribution was the best for female patients and for patients aged 20-39 years transiting through states 1-2, 2-3, 4-5, and 1-2, 2-3, 3-4, respectively. The exponential distribution proved inferior to the other two distributions.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[The Radii of Starlikeness for Concave Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12005]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Munirah Rossdy&nbsp; &nbsp;Rashidah Omar&nbsp; &nbsp;and Shaharuddin Cik Soh&nbsp; &nbsp;</p><p>Let <img src=image/13426735_02.gif> denote the functions' class that is normalized, analytic, as well as univalent in the unit disc given by <img src=image/13426735_01.gif>. Convex, starlike, as well as close-to-convex functions resemble the main subclasses of <img src=image/13426735_02.gif>, expressed by <img src=image/13426735_03.gif>, as well as <img src=image/13426735_04.gif>, accordingly. Many mathematicians have recently studied radius problems for various classes of functions contained in <img src=image/13426735_02.gif>. The determination of the univalence radius, starlikeness, and convexity for specific special functions in <img src=image/13426735_02.gif> is a relatively new topic in geometric function theory. The problem of determining the radius has been initiated since the 1920s. Mathematicians are still very interested in this, particularly when it comes to certain special functions in <img src=image/13426735_02.gif>. Indeed, many papers investigate the radius of starlikeness for numerous functions. With respect to the open unit disc <img src=image/13426735_05.gif> and class <img src=image/13426735_02.gif>, the class of concave functions <img src=image/13426735_06.gif>, known as <img src=image/13426735_07.gif>, is defined. It is identified as a normalised analytic function <img src=image/13426735_08.gif>, which meets the requirement of having the opening angle of <img src=image/13426735_09.gif> at <img src=image/13426735_10.gif>. A univalent function <img src=image/13426735_11.gif> is known as concave provided that <img src=image/13426735_12.gif> is concave. In other words, we have that <img src=image/13426735_13.gif> is also convex. 
There is no literature to date on determining the radius of starlikeness for concave univalent functions related to certain rational functions, the lune, the cardioid, and the exponential function. Hence, by employing the subordination method, we present new findings on several radii of starlikeness for different subclasses of starlike functions within the class of concave univalent functions <img src=image/13426735_07.gif>.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Comparison between The Discrimination Frequency of Two Queueing Systems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12004]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Said Taoufiki&nbsp; &nbsp;and Jamal El Achky&nbsp; &nbsp;</p><p>Each of us has had the experience of being overtaken in a queue by another, less demanding customer, and each of us has ended up behind a demanding customer and had to wait a long time. The discrimination phenomena at play here are overtaking and heavy workloads: two phenomena that accompany queues and have a great impact on customer satisfaction. Recently, authors have turned to measuring queueing fairness based on the idea that a customer may feel anger towards the queueing system, even without a long wait, if he has had one of these two experiences. We have found that this type of approach is more in line with the studies provided by sociologists and psychologists. The discrimination frequencies in a queue have been studied for certain single-server models, but for the multi-server case there is only one study, of a two-server Markovian queue. In this article, we generalize this last study and demonstrate that the result found in the two-server case remains valid by comparing the discrimination frequencies of two Markovian queueing systems with several servers.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Traumatic Systolic Blood Pressure Modeling: A Spectral Gaussian Process Regression Approach with Robust Sample Covariates]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12003]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>David Kwamena Mensah&nbsp; &nbsp;Michael Arthur Ofori&nbsp; &nbsp;and Nathaniel Howard&nbsp; &nbsp;</p><p>Physiological vital signs acquired during traumatic events are informative about the dynamics of the trauma and their relationship with other features such as sample-specific covariates. Non-time-dependent covariates may introduce extra challenges in Gaussian Process (<img src=image/13425527_01.gif>) regression, as its main predictors are functions of time. In this regard, the paper introduces the use of orthogonalized Gnanadesikan-Kettering covariates for handling such predictors within the Gaussian process regression framework. Spectral Bayesian <img src=image/13425527_01.gif> regression is usually based on symmetric spectral frequencies, and this may be too restrictive in some applications, especially the modeling of physiological vital signs. This paper builds on a fast non-standard variational Bayes method, using a modified Van der Waerden sparse spectral approximation that allows uncertainty in the covariance function hyperparameters to be handled in a standard way. This allows easy extension of Bayesian methods to complex models where non-time-dependent predictors are available and the relationship between the smoothness of the trend and the covariates is of interest. The utility of the methods is illustrated using both simulations and real traumatic systolic blood pressure time series data.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Parameter Estimation for Additive Hazard Model Recurrent Event Using Counting Process Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12002]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Triastuti Wuryandari&nbsp; &nbsp;Gunardi&nbsp; &nbsp;and Danardono&nbsp; &nbsp;</p><p>The Cox regression model is widely used for survival data analysis. The Cox model requires proportional hazards. If the proportional hazards assumption is doubtful, the additive hazard model can be used, in which the covariates act additively on the baseline hazard function. If the survival time is observed more than once for one individual during the observation period, the data are called recurrent events. The additive hazard model measures the effect of a covariate as an absolute risk difference, while the proportional hazards model measures it as a relative hazard ratio. The estimation of risk coefficients in the additive hazard model mimics the multiplicative hazard model, using partial likelihood methods. The derivation of these estimators, outlined in the technical notes, is based on the counting process approach, first developed by Aalen in 1975, which combines elements of stochastic integration, martingale theory, and counting process theory. The method is applied to study the effect of supplementation on infant growth and development. Based on the results, the factors that affect infant growth and development are gender, treatment, and mother's education.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Pricing of A European Call Option in Stochastic Volatility Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12001]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Said Taoufiki&nbsp; &nbsp;and Driss Gretete&nbsp; &nbsp;</p><p>Volatility occupies a strategic place in the financial markets. In this context of crisis, with large market movements, traders have been forced to turn to volatility trading for the potential gain it provides. The Black-Scholes formula for the value of a European option on an underlying depends on a few parameters which are more or less easy to calculate, except for the realized volatility at maturity, which poses a problem because there is neither a single value nor an established way to calculate it. In this article, we exploit the martingale pricing method to find the expected present value of a given asset relative to a risk-neutral probability measure. We consider a bond-stock market that evolves according to the dynamics of the Black-Scholes model, with a risk-free interest rate varying with time. Our methodology has led us to interesting formulas, derived by exact calculation, giving the present value of the volatility realized over the period to maturity for a European option in a stochastic volatility model.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[On Generalized Bent and Negabent Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=12000]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Deepmala Sharma&nbsp; &nbsp;and Sampada Tiwari&nbsp; &nbsp;</p><p>In recent years, generalized bent functions have gained a lot of attention in research, as they have many applications in various fields such as combinatorial design, sequence design theory, cryptography, CDMA communication, etc. A deep and broad study of generalized bent functions and their properties has been carried out in the literature. Kumar et al. [11] first introduced the concept of the generalized bent function, and many researchers have since studied the properties and characterizations of generalized bent functions. In [2], the authors introduced the concept of generalized (<img src=image/13426756_03.gif>-ary) negabent functions and studied some of their properties. In this paper, we study the generalized (<img src=image/13426756_03.gif>-ary) bent functions <img src=image/13426756_01.gif>, where <img src=image/13426756_02.gif> is the ring of integers modulo <img src=image/13426756_03.gif>, <img src=image/13426756_04.gif> is the vector space of dimension <img src=image/13426756_05.gif> over <img src=image/13426756_02.gif>, and <img src=image/13426756_03.gif>≥2 is any positive integer. We discuss several properties of generalized (<img src=image/13426756_03.gif>-ary) bent functions with respect to their nega-Hadamard transform. We also study the relation between generalized nega-Hadamard transforms and generalized nega-autocorrelations. Furthermore, we prove necessary and sufficient conditions for the bentness and negabentness of generalized (<img src=image/13426756_03.gif>-ary) bent functions generated by the secondary construction for <img src=image/13426756_04.gif>, where <img src=image/13426756_06.gif>.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Three-Point Block Algorithm for Approximating Duffing Type Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11999]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Ahmad Fadly Nurullah Rasedee&nbsp; &nbsp;Mohammad Hasan Abdul Sathar&nbsp; &nbsp;Najwa Najib&nbsp; &nbsp;Nurhidaya Mohamad Jan&nbsp; &nbsp;Siti Munirah Mohd&nbsp; &nbsp;and Siti Nor Aini Mohd Aslam&nbsp; &nbsp;</p><p>The current study establishes a new numerical method for solving Duffing type differential equations. Duffing type differential equations are often linked to damping issues in physical systems, which can be found in control process problems. The proposed method is developed using a three-point block method in backward difference form, which offers an accurate approximation of Duffing type differential equations at less computational cost. Applying an Adams-like predictor-corrector formulation, the three-point block method is programmed with a recursive relationship between explicit and implicit coefficients to reduce computational cost. By establishing this recursive relationship, we obtain a corrector algorithm expressed in terms of the predictor, which eliminates any undesired redundancy in the calculation of the corrector. The proposed method thus allows a more efficient solution without any significant loss of accuracy. Four types of Duffing differential equations are selected to test the viability of the method. Numerical results show the efficiency of the three-point block method compared against conventional and more established methods. The outcome of this research is a new method for successfully solving Duffing type differential equations and other ordinary differential equations found in science and engineering. An added advantage of the three-point block method is its adaptability to parallel programming.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[On Invariants of Surfaces with Isometric on Sections]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11998]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Sharipov Anvarjon Soliyevich&nbsp; &nbsp;and Topvoldiyev Fayzulla Foziljonovich&nbsp; &nbsp;</p><p>In one direction of classical differential geometry, the properties of geometric objects are studied over their entire range; this is called geometry "in the large". Many problems of geometry "in the large" concern the existence and uniqueness of surfaces with given characteristics. Geometric features can be intrinsic curvature, extrinsic or Gaussian curvature, and other features associated with the surface. The existence of a polyhedron with given curvatures at the vertices, or with a given development, is also a problem of geometry "in the large". Therefore, the problem of finding invariants of polyhedra of a certain class, and the solution of the problem of the existence and uniqueness of polyhedra with given values of the invariant, are relevant. This work is devoted to finding invariants of surfaces isometric on sections. In particular, we study the expansion properties of convex polyhedra that preserve isometry on sections. For such polyhedra, an invariant associated with the vertex of a convex polyhedral angle is found. Using this invariant, we can consider the question of restoring a convex polyhedron with given values of conditional curvature at the vertices. Isometry on sections differs from isometry of surfaces: the isometry of surfaces does not imply isometry on sections, and vice versa. One of the invariants of surfaces isometric on sections is the area of the cylindrical image. This paper presents the properties of the area of a cylindrical image.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[(<img src=image/13426382_01.gif>)-Anti-Intuitionistic Fuzzy Soft b-Ideals in BCK/BCI-Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11997]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Aiyared Iampan&nbsp; &nbsp;M. Balamurugan&nbsp; &nbsp;and V. Govindan&nbsp; &nbsp;</p><p>Among the many algebraic structures, algebras of logic form an essential class. BCK-algebras and BCI-algebras are two classes of logical algebras. They were introduced by Imai and Iséki [6, 7] in 1966 and have been extensively investigated by many researchers. The concept of fuzzy soft sets was introduced in [17] to generalize standard soft sets [21]. The concept of intuitionistic fuzzy soft sets was introduced by Maji et al. [18], based on a combination of the intuitionistic fuzzy set [2] and soft set models. Section 1 discusses the origins and importance of the studies in this article. Section 2 reviews the definitions of a BCK/BCI-algebra, a soft set, a fuzzy soft set, and an intuitionistic fuzzy soft set, and presents the essential properties of BCK/BCI-algebras to be applied in the next section. In Section 3, the concept of an anti-intuitionistic fuzzy soft b-ideal (AIFSBI) in BCK/BCI-algebras is discussed, and essential properties are provided. A set of conditions is provided for an AIFSBI to be an anti-intuitionistic fuzzy soft ideal (AIFSI). The definition of quasi-coincidence of an intuitionistic fuzzy soft point with an intuitionistic fuzzy soft set (IFSS) is considered in a more general form. In Section 4, the concepts of an (<img src=image/13426382_01.gif>)-AFSBI and an (<img src=image/13426382_01.gif>)-AIFSBI of <img src=image/13426382_02.gif> are introduced, and some characterizations of (<img src=image/13426382_01.gif>)-AIFSBIs are discussed using the concept of an AIFSBI with thresholds. Finally, conditions are given for a (<img src=image/13426382_01.gif>)-AIFSBI to be a (∈,∈)-AIFSBI.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Half-Space Model Problem for Navier-Lamé Equations with Surface Tension]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11996]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Sri Maryani&nbsp; &nbsp;Bambang H Guswanto&nbsp; &nbsp;and Hendra Gunawan&nbsp; &nbsp;</p><p>Recently, we have seen growing use of partial differential equations (PDEs), especially in the area of fluid dynamics. The classical approach to the analysis of PDEs dominated the early nineteenth century. For PDEs, the fundamental theoretical question is whether the model problem, consisting of the equation and its associated side conditions, is well-posed. There are many ways to investigate whether a model problem is well-posed. For this reason, in this paper we consider the <img src=image/13426244_01.gif>-boundedness of the solution operator families for the Navier-Lamé equation, taking into account surface tension in a bounded domain of <img src=image/13426244_02.gif>-dimensional Euclidean space (<img src=image/13426244_02.gif>≥ 2), as one way to study well-posedness. We investigate the <img src=image/13426244_01.gif>-boundedness in the half-space domain case. The <img src=image/13426244_01.gif>-boundedness implies not only the generation of an analytic semigroup but also maximal <img src=image/13426244_03.gif> regularity for the initial boundary value problem, by way of Weis's operator-valued Fourier multiplier theorem for the time-dependent problem. It is known that the maximal <img src=image/13426244_03.gif> regularity class is a powerful tool for proving the well-posedness of model problems. This result can be used in further research, for example, to analyze the boundedness of the solution operators of the model problem in the bent-half-space or general domain case.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Half-Sweep Refinement of SOR Iterative Method via Linear Rational Finite Difference Approximation for Second-Order Linear Fredholm Integro-Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11995]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Ming-Ming Xu&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;and Nur Afza Mat Ali&nbsp; &nbsp;</p><p>The numerical solutions of second-order linear Fredholm integro-differential equations have been considered and discussed based on several discretization schemes. In this paper, new schemes are derived from a hybrid of the three-point half-sweep linear rational finite difference (3HSLRFD) approach and the half-sweep composite trapezoidal (HSCT) approach. The main advantage of the established schemes is that they discretize the differential terms and the integral term of second-order linear Fredholm integro-differential equations into algebraic equations and generate the corresponding linear system. Furthermore, the half-sweep (HS) concept is combined with the refinement of the successive over-relaxation (RSOR) iterative method to create the new half-sweep refinement of successive over-relaxation (HSRSOR) iterative method, which is implemented to obtain the numerical solution of the system of linear algebraic equations. Apart from that, the classical or full-sweep Gauss-Seidel (FSGS) and full-sweep successive over-relaxation (FSSOR) iterative methods are presented, serving as control methods in this paper. In the end, we employ the FSGS, FSRSOR, and HSRSOR methods to obtain numerical solutions of three examples and make a detailed comparison in terms of the number of iterations, elapsed time, and maximum absolute error. Numerical results demonstrate that the FSRSOR and HSRSOR methods require fewer iterations and less elapsed time, and are more accurate, than FSGS. In addition, HSRSOR is the most effective of the three methods. To sum up, this paper demonstrates the applicability and superiority of the new HSRSOR method based on the 3HSLRFD-HSCT schemes.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[On Some Properties of Fabulous Fraction Tree]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11994]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>A. Dinesh Kumar&nbsp; &nbsp;and R. Sivaraman&nbsp; &nbsp;</p><p>Among the several properties that real numbers possess, this paper deals with an exciting arrangement of the positive rational numbers in the form of a tree, in which every number has two branches, to the left and right, from the root number. This tree contains every positive rational number and hence has infinitely many entries. We call this tree the "Fraction Tree". We formally introduce the Fraction Tree and discuss several fascinating properties, including a proof of the one-to-one correspondence between the natural numbers and the entries of the Fraction Tree. We provide the connection between the entries of the Fraction Tree and the Fibonacci numbers through some specified paths, and we relate the terms of the Fraction Tree to continued fractions. Five interesting theorems related to the entries of the Fraction Tree are proved in this paper. The simple rule used to construct the Fraction Tree enables us to prove many mathematical properties; in this sense, one can witness the simplicity and beauty of making deep mathematics through simple and elegant formulations. The Fraction Tree discussed in this paper, technically called the Stern-Brocot tree, has had profound applications in science, as diverse as clock manufacturing in the early days. In particular, Brocot used the entries of the Fraction Tree to decide the gear ratios of the mechanical clocks used several decades ago. A simple construction rule provides us with a mathematical structure rich in properties and applications. This is the real beauty and charm of mathematics.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[The Relative (Co)homology Theory through Operator Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11993]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>M. Kozae&nbsp; &nbsp;Samar A. Abo Quota&nbsp; &nbsp;and Alaa H. N.&nbsp; &nbsp;</p><p>This paper introduces a new idea concerning unital involutive Banach algebras and their closed subsets. The paper aims to study the cohomology theory of operator algebras. We study the Banach algebra as an applied example of an operator algebra, and the Banach algebra will be denoted by <img src=image/13426521_01.gif>. The definitions of the cyclic, simplicial, and dihedral cohomology groups of <img src=image/13426521_01.gif> are introduced. We present the definition of the <img src=image/13426521_14.gif>-relative dihedral cohomology group, given by <img src=image/13426521_02.gif>, and we show that the relation between the dihedral and <img src=image/13426521_14.gif>-relative dihedral cohomology groups <img src=image/13426521_11.gif> <img src=image/13426521_12.gif> <img src=image/13426521_13.gif> can be obtained from the sequence <img src=image/13426521_03.gif>. Among the principal results is the study of some theorems in the relative dihedral cohomology of Banach algebras, such as a Connes-Tsygan exact sequence, since the relation between the relative Banach dihedral and cyclic cohomology groups (<img src=image/13426521_04.gif> and <img src=image/13426521_05.gif>) of <img src=image/13426521_01.gif> is proved via the sequence <img src=image/13426521_06.gif>. We also study and prove some basic notions in the relative cohomology of Banach algebras with unity and establish their properties. In particular, we show the Morita invariance theorem in the relative case with maps <img src=image/13426521_07.gif> and <img src=image/13426521_08.gif>, and prove the Connes-Tsygan exact sequence that relates the relative cyclic and dihedral (co)homology of <img src=image/13426521_01.gif>. 
We prove the Mayer-Vietoris sequence of <img src=image/13426521_04.gif> in a new form in the Banach B-relative dihedral cohomology: <img src=image/13426521_09.gif> <img src=image/13426521_10.gif>. It should be borne in mind that the study of the cohomology theory of operator algebras here is connected with studying the spread of COVID-19.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Three-Dimensional Control Charts for Regulating Processes Described by a Two-Dimensional Normal Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11992]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Kamola Saxibovna Ablazova&nbsp; &nbsp;</p><p>In the statistical management of processes, the stability of the technological process is determined in the initial phase from the available samples. If the process is not stable, it is brought into a statistically controlled state by eliminating possible causes; at this stage, simple Shewhart control charts are used. In practice, various methods bring the process to a stable state (ISO standards and the standards of various states). After the process has become stable, the control chart limits are found for further management, and the process is then managed with the help of new samples. The article considers a process modeled by a two-dimensional normal distribution. New control charts are constructed to check the normality and correlation of the components of the two-dimensional random variable. The process is regulated using these charts, preserving the shape of the density of the individual components of the normal vector and the linearity of the relationship between these components. When constructing the control charts, a Kolmogorov-Smirnov-type goodness-of-fit criterion and the Fisher criterion for the strength of the linear coupling of the components were used. A concrete example shows the introduction of these charts in production. The results can be used in the initial phase of regulation and during control checks of the process under study. We used these control charts to assess and control the quality of products coming from the machine that produces the sleeves. The article presents statistical methods for analyzing problems in factory practice and solutions for eliminating them.</p>]]></description>
<pubDate>May 2022</pubDate>
</item>
<item>
<title><![CDATA[Data Encryption Using Face Antimagic Labeling and Hill Cipher]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11989]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>B. Vasuki&nbsp; &nbsp;L. Shobana&nbsp; &nbsp;and B. Roopa&nbsp; &nbsp;</p><p>An approach to encrypting and decrypting messages is obtained by relating the concepts of graph labeling and cryptography. Among the various types of labelings given in [3], our interest is in face antimagic labeling, introduced by Mirka Miller in 2003 [1]. Baca [2] defines a connected plane graph <img src=image/13426646_01.gif> with edge set <img src=image/13426646_02.gif> and face set <img src=image/13426646_03.gif> as <img src=image/13426646_04.gif> face antimagic if there exist positive integers <img src=image/13426646_05.gif> and <img src=image/13426646_06.gif> and a bijection <img src=image/13426646_07.gif> such that the induced mapping <img src=image/13426646_08.gif>, where for a face <img src=image/13426646_09.gif>, <img src=image/13426646_10.gif> is the sum of all <img src=image/13426646_11.gif> over all edges <img src=image/13426646_12.gif> surrounding <img src=image/13426646_09.gif>, is also a bijection. In cryptography there are many cryptosystems, such as the affine cipher, the Hill cipher, RSA, the knapsack cryptosystem, and so on. Among these, the Hill cipher is chosen for our encryption and decryption. In the Hill cipher [8], plaintext letters are grouped into two-letter blocks, with a dummy letter X inserted at the end if needed to make all blocks the same length, and each letter is then replaced by its ordinal number. 
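The two-letter-block Hill cipher step outlined here can be sketched in a few lines of Python. This is a minimal illustration using a common textbook key matrix (3 3; 2 5), not the key or the face-antimagic key material used in the paper:

```python
# Classical 2x2 Hill cipher over the 26-letter alphabet (illustrative key only).
KEY = [[3, 3], [2, 5]]          # det = 9, gcd(9, 26) = 1, so the key is invertible mod 26
KEY_INV = [[15, 17], [20, 9]]   # inverse of KEY modulo 26 (3 * [[5,-3],[-2,3]] mod 26)

def _apply(key, block):
    # Multiply a 2x2 key matrix by a 2-letter block, reducing modulo 26.
    a, b = block
    return [(key[0][0] * a + key[0][1] * b) % 26,
            (key[1][0] * a + key[1][1] * b) % 26]

def encrypt(plaintext):
    nums = [ord(c) - ord('A') for c in plaintext.upper() if c.isalpha()]
    if len(nums) % 2:                       # pad with a dummy 'X', as in the abstract
        nums.append(ord('X') - ord('A'))
    out = []
    for i in range(0, len(nums), 2):
        out.extend(_apply(KEY, nums[i:i + 2]))
    return ''.join(chr(n + ord('A')) for n in out)

def decrypt(ciphertext):
    nums = [ord(c) - ord('A') for c in ciphertext]
    out = []
    for i in range(0, len(nums), 2):
        out.extend(_apply(KEY_INV, nums[i:i + 2]))
    return ''.join(chr(n + ord('A')) for n in out)
```

With this key, encrypting "HELP" gives "HIAT"; decryption inverts it exactly because the key's determinant, 9, is coprime to 26.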
Each plaintext block <img src=image/13426646_13.gif> is then replaced by a numeric ciphertext block <img src=image/13426646_14.gif>, where <img src=image/13426646_15.gif> and <img src=image/13426646_16.gif> are different linear combinations of <img src=image/13426646_17.gif> and <img src=image/13426646_18.gif> modulo 26: <img src=image/13426646_19.gif> (mod 26) and <img src=image/13426646_20.gif> (mod 26), with the condition that <img src=image/13426646_21.gif> is one. Each number is then translated back into a letter, yielding the ciphertext. In this paper, face antimagic labeling on the double duplication of graphs, together with the Hill cipher, is used to encrypt and decrypt messages.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Principal Canonical Correlation Analysis with Missing Data in Small Samples]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11988]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Toru Ogura&nbsp; &nbsp;and Shin-ichi Tsukada&nbsp; &nbsp;</p><p>Missing data occur in various fields, such as clinical trials and social science. Canonical correlation analysis, often used to analyze the correlation between two random vectors, cannot be performed on a dataset with missing data. Canonical correlation coefficients (CCCs) can, however, be calculated from a covariance matrix. When the covariance matrix can be estimated by excluding (complete-case and available-case analyses) or imputing (multivariate imputation by chained equations, k-nearest neighbor (kNN), and iterative robust model-based imputation) missing data, CCCs are estimated from this covariance matrix. CCCs are biased even when all data are observed, and CCCs estimated from a covariance matrix based on a dataset with missing data are usually even larger than the population CCCs. The purpose is to bring the CCCs estimated from the dataset with missing data close to the population CCCs. The procedure involves three steps. First, principal component analysis is performed on the covariance matrix from the dataset with missing data to obtain the eigenvectors. Second, the covariance matrix is transformed using the first to fourth eigenvectors. Finally, the CCCs are calculated from the transformed covariance matrix. CCCs derived using this procedure are called principal CCCs (PCCCs), and simulation studies and numerical examples confirmed the effectiveness of PCCCs estimated from datasets with missing data. In many of the simulation results, the bias and root-mean-squared error of the PCCC estimated from the missing data based on kNN were the smallest. 
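The final step of the procedure, computing CCCs from a covariance matrix, can be sketched as follows. This is a minimal illustration of the standard CCC-from-covariance computation only; the paper's eigenvector-based transformation of the covariance matrix (steps one and two) and the kNN imputation are not reproduced here:

```python
import numpy as np

def canonical_correlations(S, p):
    """CCCs of (x, y) from their joint covariance matrix S, where x has p components."""
    # Partition S into the within- and between-vector blocks.
    Sxx, Sxy = S[:p, :p], S[:p, p:]
    Syx, Syy = S[p:, :p], S[p:, p:]
    # The eigenvalues of Sxx^{-1} Sxy Syy^{-1} Syx are the squared CCCs.
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Syx)
    eig = np.sort(np.linalg.eigvals(M).real)[::-1]
    return np.sqrt(np.clip(eig, 0.0, None))   # clip guards against tiny negative round-off
```

For a bivariate covariance matrix with unit variances and covariance 0.5, the single CCC is exactly 0.5.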
In the numerical example results, the first PCCC estimated from the missing data based on kNN is close to the first CCC estimated from the dataset comprising all observation data when the correlation between the two vectors is low. Therefore, PCCCs based on kNN are recommended.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[The Non-Trivial Zeros of The Riemann Zeta Function through Taylor Series Expansion and Incomplete Gamma Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11987]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Jamal Salah&nbsp; &nbsp;Hameed Ur Rehman&nbsp; &nbsp;and Iman Al- Buwaiqi&nbsp; &nbsp;</p><p>The Riemann zeta <img src=image/13425982_01.gif> function <img src=image/13425982_02.gif> is valid for all complex numbers <img src=image/13425982_03.gif> in the half-plane <img src=image/13425982_04.gif> > 1. Euler and Riemann found that the function equals zero for all negative even integers −2, −4, −6, ... (commonly known as the trivial zeros) and has an infinite number of zeros in the critical strip of complex numbers between the lines <img src=image/13425982_04.gif> = 0 and <img src=image/13425982_04.gif> = 1. Moreover, it was well known to Riemann that all non-trivial zeros exhibit symmetry with respect to the critical line <img src=image/13425982_05.gif>. As a result, Riemann conjectured that all of the non-trivial zeros lie on the critical line; this conjecture is known as the Riemann hypothesis. The Riemann zeta function plays a momentous part in number theory and has applications in applied statistics, probability theory and physics. It is closely related to one of the most challenging unsolved problems in mathematics (the Riemann hypothesis), which was listed as the 8th of Hilbert's 23 problems. The function is useful in number theory for investigating the anomalous behavior of the prime numbers. If the hypothesis is proven correct, the sequential order of the prime numbers would be far better understood. Numerous approaches have been applied towards the solution of this problem, including numerical and geometrical approaches, the Taylor series of the Riemann zeta function, and the asymptotic properties of its coefficients. 
Although around 10<sup>13</sup> non-trivial zeros have been verified to lie on the critical line, we cannot assume that the Riemann Hypothesis (RH) is necessarily true unless a lucid proof is provided. Indeed, there are differing viewpoints not only on the Riemann Hypothesis's validity, but also on certain basic conclusions; see for example [16], in which the author justifies the location of the non-trivial zeros subject to the simultaneous occurrence of <img src=image/13425982_06.gif>, while omitting the impact of an indeterminate form <img src=image/13425982_07.gif> that appears in Riemann's approach. In this study, we also consider the simultaneous occurrence of <img src=image/13425982_06.gif>, but we adopt an element-wise approach to the Taylor series by expanding <img src=image/13425982_08.gif> for all <img src=image/13425982_09.gif> = 1, 2, 3, ... at the real parts of the non-trivial zeta zeros lying in the critical strip. For <img src=image/13425982_10.gif> a non-trivial zero of <img src=image/13425982_11.gif>, we first expand each term <img src=image/13425982_08.gif> at <img src=image/13425982_12.gif> and then at <img src=image/13425982_13.gif>. We then evoke the simultaneous occurrence of the non-trivial zeta function zeros <img src=image/13425982_06.gif> on the critical strip by means of different representations of the zeta function. Consequently, we conclude that the Riemann Hypothesis is likely to be true.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[On The Unconditional Run Length Distribution and Percentiles for The <img src=image/13425757_02.gif>-chart When The In-control Process Parameter Is Estimated]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11986]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ugwu Samson. O&nbsp; &nbsp;Uchenna Nduka .C&nbsp; &nbsp;Ezra Precious .N&nbsp; &nbsp;Ugwu Gibson .C&nbsp; &nbsp;Odoh Paschal .N&nbsp; &nbsp;and Nwafor Cynthia. N&nbsp; &nbsp;</p><p>It is well known that the median is a better measure of location for skewed distributions. The run-length (RL) distribution is skewed; hence, the median run length measures chart performance better than the average run length. Some authors have advocated examining the entire set of percentiles of the RL distribution when assessing chart performance. Such works already exist for the Shewhart <img src=image/13425757_01.gif>−chart, the CUSUM chart, the CUSUM and EWMA charts, Hotelling's chi-square chart, and the two simple Shewhart multivariate non-parametric charts. Similar work on the one- and two-sided <img src=image/13425757_02.gif>-chart is lacking in the literature; this work fills that gap. Therefore, a detailed comparative study of the one-sided upper and the two-sided <img src=image/13425757_02.gif>-control charts for some m reference samples at fixed sample size and false alarm rate is carried out here using the information from the unconditional RL cdf curve and its percentiles (mainly the median). The order of the RL cdf curves of the one-sided upper <img src=image/13425757_02.gif>-chart is independent of the state of the process, unlike that of the two-sided chart. The one-sided upper chart outperformed the two-sided one both in the in-control state and in detecting positive shifts. The two-sided <img src=image/13425757_02.gif>-chart is more sensitive to incremental shifts than to decremental shifts.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Some Inequalities for <img src=image/13426606_01.gif>-times Differentiable Strongly Convex Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11985]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Duygu Dönmez Demir&nbsp; &nbsp;and Gülsüm Şanal&nbsp; &nbsp;</p><p>The theory of inequalities is in continuous development and has become an effective and powerful tool for solving many problems in various branches of mathematics. Convex functions are closely related to the theory of inequalities, and many important inequalities result from applications of convex functions. Recently, there have been efforts to extend results obtained for convex functions to strongly convex functions. In our previous studies, the perturbed trapezoid inequality obtained for convex functions was extended to <img src=image/13426606_01.gif>-times differentiable functions. This study deals with some general identities introduced for <img src=image/13426606_01.gif>-times differentiable strongly convex functions. In addition, new inequalities related to the general perturbed trapezoid inequality are constructed. These inequalities are obtained for classes of functions whose <img src=image/13426606_01.gif><sup> th</sup> derivatives are strongly convex in absolute value. It is seen that the new classes of strongly convex functions reduce to those obtained for convex functions under certain conditions. The upper bounds obtained for strongly convex functions are concluded to be better than those obtained for convex functions.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Modified Profile Likelihood Estimation in the Lomax Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11984]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Maisoun Sewailem&nbsp; &nbsp;and Ayman Baklizi&nbsp; &nbsp;</p><p>In this paper, we consider improving maximum likelihood inference for the scale parameter of the Lomax distribution. The improvement uses modifications of the maximum likelihood estimator derived from the Barndorff-Nielsen modification of the profile likelihood function. We apply these modifications to obtain improved estimators for the scale parameter of the Lomax distribution in the presence of a nuisance shape parameter. Due to the complicated expression for Barndorff-Nielsen's modification, several approximations to it are considered in this paper, including the modification based on the empirical covariances and the approximation based on suitably derived approximate ancillary statistics. We obtained the approximations for the Lomax profile likelihood function and the corresponding modified maximum likelihood estimators. They are not available in simple closed forms and are obtained numerically as roots of some complicated likelihood equations. Comparisons between the maximum profile likelihood estimator and the modified profile likelihood estimators in terms of their biases and mean squared errors were carried out using simulation techniques. We found that the approximation based on the empirical covariances has the best performance according to the criteria used. Therefore, we recommend using this modified version of the maximum likelihood estimator for the Lomax scale parameter, especially for small sample sizes with heavy censoring, which is quite common in industrial life testing experiments and reliability studies. An example based on real data is given to illustrate the methods considered in this paper.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Fractional Variational Orthogonal Collocation Method for the Solution of Fractional Fredholm Integro-Differential Equation Using Mamadu-Njoseh Polynomials]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11983]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Jonathan Tsetimi&nbsp; &nbsp;and Ebimene James Mamadu&nbsp; &nbsp;</p><p>The use of orthogonal polynomials as basis functions, via a suitable approximation scheme, for the solution of many problems in science and technology has been increasing. In many numerical schemes, convergence depends solely on the nature of the basis function adopted. The Mamadu-Njoseh polynomials are orthogonal polynomials developed in 2016 with reference to the weight function <img src=image/13426333_01.gif>, which bear the same convergence rate as the Chebyshev polynomials. In this paper, the fractional variational orthogonal collocation method (FVOCM) is proposed for the solution of the fractional Fredholm integro-differential equation using Mamadu-Njoseh polynomials (MNP) as basis functions. The proposed method is an elegant mixture of the variational iteration method (VIM) and the orthogonal collocation method (OCM). The VIM is one of the popular methods available to researchers for seeking solutions to both linear and nonlinear differential problems, requiring neither linearization nor perturbation to arrive at the required solution. Collocating at the roots of orthogonal polynomials gives rise to the OCM. In the proposed method, the VIM is initiated to generate the required approximations, producing the series <img src=image/13426333_02.gif>, which is collocated orthogonally to determine the unknown parameters. The numerical results show that the method yields a highly accurate and reliable approximation with a high convergence rate. We also present the existence and uniqueness of the solution of the method. All computations in this research were performed with MAPLE 18.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Solution of 1<sup>st</sup> Order Stiff Ordinary Differential Equations Using Feed Forward Neural Network and Bayesian Regularization Algorithm]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11982]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Rootvesh Mehta&nbsp; &nbsp;Sandeep Malhotra&nbsp; &nbsp;Dhiren Pandit&nbsp; &nbsp;and Manoj Sahni&nbsp; &nbsp;</p><p>A stiff equation is a differential equation for which certain numerical methods are unstable unless the step length is taken to be extraordinarily small. A stiff differential equation includes terms that can cause rapid variation in the solution, so when integrating it numerically the requisite step length can be incredibly small. The phenomenon of stiffness is observed when the step size must be unacceptably small even in a region where the solution curve is very smooth, for example where it straightens out to approach a line with slope almost zero. A lot of work on solving stiff ordinary differential equations (ODEs) has been done by researchers using the many numerical methods that currently exist, and extensive research has compared their rates of convergence, numbers of computations, accuracy, and capability to solve certain types of test problems. In the present work, a Feed Forward Neural Network (FFNN) and Bayesian regularization algorithm-based method is implemented to solve first-order stiff ordinary differential equations and systems of ordinary differential equations. Using the proposed method, the problems are solved for various time steps and comparisons are made with available analytical solutions and other existing methods. A problem is simulated using the proposed FFNN model, and good accuracy is achieved with less computational effort and time. The outcomes suggest that artificial neural network methods are promising for solving various types of stiff differential equations in the near future.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[A Branch and Bound Algorithm to Solve Travelling Salesman Problem (TSP) with Uncertain Parameters]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11981]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>S. Dhanasekar&nbsp; &nbsp;Saroj Kumar Dash&nbsp; &nbsp;and Neena Uthaman&nbsp; &nbsp;</p><p>Computational complexity theory is at the core of theoretical computer science and mathematics. It is usually concerned with the classification of computational problems into P and NP according to their inherent difficulty. The Travelling Salesman Problem (TSP) is one of the most discussed problems in combinatorial mathematics. The main objective of the TSP is to find a Hamiltonian cycle of minimum cost or time. Many algorithms exist to solve it, but since none of them is efficient, many researchers are still working to produce efficient algorithms. If the description of the parameters is vague, fuzzy notions, which include a membership value, are applied to model the parameters. Still, this modeling does not give an exact representation of the vagueness. The intuitionistic fuzzy set, which includes a non-membership value along with the membership value in its domain, is therefore applied to model the parameters. When the decision variables in the TSP (the cost, time or distance) are modeled as intuitionistic fuzzy numbers, the TSP is called the intuitionistic fuzzy TSP (InFTSP). We develop an intuitionistic fuzzified version of Little's branch and bound method to solve the intuitionistic fuzzy TSP. This method is effective because it involves only simple arithmetic operations on intuitionistic fuzzy numbers and the ranking of intuitionistic fuzzy numbers. Ordering of intuitionistic fuzzy numbers is vital in optimization problems since it is equivalent to the ordering of alternatives. In this article, we use the weighted arithmetic mean method to order the fuzzy numbers. 
The weighted arithmetic mean method satisfies the linearity property, which is a very important characteristic of a ranking function. Numerical examples are solved to validate the given algorithm and the results are discussed.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Weighted Least Squares Estimation for AR(1) Model With Incomplete Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11980]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mohamed Khalifa Ahmed Issa&nbsp; &nbsp;</p><p>Time series forecasting is the main objective in many real-life applications such as weather prediction, analysis of natural phenomena, and financial or economic analysis. In real-life data analysis, missing data are a problem the researcher commonly faces because of human error, technical damage, catastrophic natural phenomena, etc. When one or more observations are missing, it becomes important to estimate the model as well as the missing values, which leads to a better understanding of the data and more accurate prediction. Different time series require different techniques to obtain better estimates of those missing values. Traditionally, missing values are simply replaced by mean or mode imputation, deleted, or handled by other methods that are not adequate for addressing missing values, as they can cause bias. Autoregressive models are among the most popular models used for estimating time-series data; they forecast future values in terms of previous ones. The first-order autoregressive, AR(1), model is the one in which the current value depends on the immediately preceding value, so estimating the parameters of AR(1) with missing observations is an important topic in time series analysis. Many approaches have been developed to address estimation problems in time series, such as ordinary least squares (OLS) and Yule-Walker (YW). Therefore, a method is proposed to estimate the parameter of the model by weighted least squares, and the properties of the (WLS) estimator are investigated. 
Moreover, a comparison between these methods for the AR(1) model with missing observations is conducted through a Monte Carlo simulation at various sample sizes and different proportions of missing observations, in terms of mean square error (MSE) and mean absolute error (MAE). The results of the simulation study indicate that the (WLS) estimator is the preferable method of estimation. Real time-series data with missing observations are also analyzed.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Introduction to Applied Algebra: Book Review of Chapter 8-Linear Equations (System of Linear Equations)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11979]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Elvis Adam Alhassan&nbsp; &nbsp;Kaiyu Tian&nbsp; &nbsp;and Adjabui Michael&nbsp; &nbsp;</p><p>This chapter review presents two ideas and techniques for solving systems of linear equations in a simple, straightforward manner, to enable the student as well as the instructor to follow it independently with very little guidance. The focus is on using simpler and easier approaches, namely determinants and elementary row operations, to solve systems of linear equations. We found the solution sets of a few systems of linear equations using determinants: each variable, in the order in which it appears, is given by the ratio of the determinant of the matrix formed by replacing the corresponding column of the coefficient matrix with the right-hand-side vector to the determinant of the coefficient matrix. Similarly, we used the three types of elementary row operations, namely row swap, scalar multiplication, and row sum, to find the solution sets of systems of linear equations by reducing them through row echelon form to reduced row echelon form. Technical forms of systems of linear equations were used to illustrate the two approaches. In each approach we started by finding the coefficient matrix of the system of linear equations.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[On Tensor Product and Colorability of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11841]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Veninstine Vivik J&nbsp; &nbsp;Sheeba Merlin G&nbsp; &nbsp;P. Xavier&nbsp; &nbsp;and Nila Prem JL&nbsp; &nbsp;</p><p>The graph coloring problem (GCP) plays a vital role in the allotment of resources, resulting in proper utilization and savings of labor, space, time, and cost. The GCP for a graph <img src=image/13426130_1.gif> consists of assigning colors to its nodes such that adjacent nodes are allotted different colors, the smallest number of colors required being its chromatic number <img src=image/13426130_2.gif>. This work considers the tensor product of two graphs, which yields a more complex graph and motivates the study of such complexity. Load balancing on such complex networks is a hefty task, and among the various methods in graph theory, coloring is a comparatively simple tool for unveiling intricate, challenging networks. Further, node coloring helps classify the nodes of any network into the least number of classes, so coloring is applied to balance the allocations in such complex networks. We construct the tensor products of a path with the wheel and helm graphs, and of a cycle with the sunlet and closed helm graphs, and study their structure. Coloring is then applied to the nodes of the resulting graphs to determine their optimal bounds. Hence we obtain the chromatic numbers of the tensor products <img src=image/13426130_3.gif>, <img src=image/13426130_4.gif>, <img src=image/13426130_5.gif> and <img src=image/13426130_6.gif>.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Application of the Fast Expansion Method in Space-Related Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11840]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mikhail Ivanovich Popov&nbsp; &nbsp;Aleksey Vasilyevich Skrypnikov&nbsp; &nbsp;Vyacheslav Gennadievich Kozlov&nbsp; &nbsp;Alexey Viktorovich Chernyshov&nbsp; &nbsp;Alexander Danilovich Chernyshov&nbsp; &nbsp;Sergey Yurievich Sablin&nbsp; &nbsp;Vladimir Valentinovich Nikitin&nbsp; &nbsp;and Roman Alexandrovich Druzhinin&nbsp; &nbsp;</p><p>In the paper, numerical and approximate analytical solutions are obtained for the problem of the motion of a spacecraft from a starting point to a final point in a given time. Both the unpowered and powered portions of the flight are considered. For the numerical solution, a finite-difference scheme of second-order accuracy is constructed. The space-related problem considered in the study is essentially nonlinear, which necessitates the use of trigonometric interpolation methods, replacing the calculation of the Fourier coefficients by integral formulas with the solution of an interpolation system. One of the simplest options for trigonometric sine interpolation on a semi-closed segment [–a, a), where the right end is not included in the general system of interpolation points, is considered. In order to maintain the orthogonality conditions for the sines, an even number 2M of calculation points is distributed uniformly over the segment. The sine interpolation theorem is proved and a compact formula is given for calculating the interpolation coefficients. A general theory of fast sine expansion is given. It is shown that in this case the Fourier coefficients decrease much faster with increasing index than the Fourier coefficients in the classical case. 
This property allows reducing the number of terms taken into account in the Fourier series, as well as the amount of computation, and increasing the accuracy of calculations. The obtained solutions are analyzed and compared with the exact solution of a test problem. For the same calculation error, the computer time required by the fast expansion method is hundreds of times less than that required by the classical finite-difference method.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Generalized Family of Group Chain Sampling Plans Using Minimum Angle Method (MAM)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11839]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mohd Azri Pawan Teh&nbsp; &nbsp;Nazrina Aziz&nbsp; &nbsp;and Zakiyah Zain&nbsp; &nbsp;</p><p>This research develops a generalized family of group chain sampling plans using the minimum angle method (MAM). The MAM is a method whereby both the producer's and the consumer's risks are considered when designing the sampling plans. Three sampling plans are nested under the family of group chain acceptance sampling: group chain sampling plans (GChSP-1), new two-sided group chain sampling plans (NTSGChSP-1), and two-sided group chain sampling plans (TSGChSP-1). The methodology uses random values of the fraction defective for both producer and consumer, and the optimal number of groups, <img src=image/13492016_1.gif>, is obtained using the Scilab software. The findings reveal that some of the design parameters yield the <img src=image/13492016_1.gif> corresponding to the smallest angle, <img src=image/13492016_2.gif>, while some of the values fail to yield the <img src=image/13492016_1.gif>. The <img src=image/13492016_1.gif> obtained in this research guarantees that the producer's and the consumer's risks of receiving defective items are at most 10%.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[New Group Chain Sampling Plan (NGChSP-1) for Generalized Exponential Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11838]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Nazrina Aziz&nbsp; &nbsp;Tan Jia Xin&nbsp; &nbsp;Zakiyah Zain&nbsp; &nbsp;and Mohd Azri Pawan Teh&nbsp; &nbsp;</p><p>Acceptance criteria are the conditions imposed on any sampling plan to determine whether a lot is accepted or rejected. The group chain sampling plan (GChSP-1) was constructed with 5 acceptance criteria; the modified group chain sampling plan (MGChSP-1) was derived with 3 acceptance criteria; later, the new group chain sampling plan (NGChSP-1) was introduced with 4 acceptance criteria, balancing the acceptance criteria between the GChSP-1 and the MGChSP-1. Producers favor a sampling plan with more acceptance criteria because it reduces the probability of rejecting a good lot (producer's risk), whereas consumers may prefer a sampling plan with fewer acceptance criteria because it reduces the probability of accepting a bad lot (consumer's risk). This disparity in acceptance criteria creates a conflict between the two main stakeholders in acceptance sampling. In the literature, numerous methods are available for developing sampling plans. To date, the NGChSP-1 has been developed using the minimum angle method. In this paper, the NGChSP-1 is constructed with the minimizing-consumer's-risk method for the generalized exponential distribution, where mean product lifetime is used as the quality parameter. Six phases are involved in developing the NGChSP-1 for different design parameters. Results show that the minimum number of groups decreases as the values of the design parameters increase. The performance comparison shows that the NGChSP-1 is a better sampling plan than the GChSP-1 because it has a smaller number of groups and a lower probability of lot acceptance than the GChSP-1. 
The NGChSP-1 should therefore offer a better alternative to industrial practitioners in sectors involving product life testing.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Reversible Jump MCMC Algorithm for Transformed Laplacian AR: Application in Modeling CO<sub>2</sub> Emission Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11837]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Suparman&nbsp; &nbsp;Hery Suharna&nbsp; &nbsp;Mahyudin Ritonga&nbsp; &nbsp;Fitriana Ibrahim&nbsp; &nbsp;Tedy Machmud&nbsp; &nbsp;Mohd Saifullah Rusiman&nbsp; &nbsp;Yahya Hairun&nbsp; &nbsp;and Idrus Alhaddad&nbsp; &nbsp;</p><p>The autoregressive (AR) model is applied to model various types of data. For confidential data, obfuscation of the data is very important to protect it from being known by unauthorized parties. This paper aims at data modeling with transformations in the AR model. In this AR model, the noise has a Laplace distribution. The AR model parameters include the order, the coefficients, and the variance of the noise. Estimation of the AR model parameters is carried out in a Bayesian framework using the reversible jump Markov Chain Monte Carlo (MCMC) algorithm. This paper shows that the posterior distribution of the AR model parameters has a complicated form, so the Bayes estimator cannot be determined analytically. Bayes estimators for the AR model parameters are therefore calculated using the reversible jump MCMC algorithm. The algorithm was validated through a simulation study: it accurately estimates the parameters of the transformed AR model with Laplacian noise and produces an AR model that satisfies the stationarity conditions. The novelty of this paper is the use of transformations in the Laplacian AR model to secure research data when the research results are published in a scientific journal. As an example application, the Laplacian AR model was used to model CO<sub>2</sub> emission data. The results of this paper can be applied to modeling and forecasting confidential data in various sectors.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[A New Algorithm for Spectral Conjugate Gradient in Nonlinear Optimization]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11836]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ahmed Anwer Mustafa&nbsp; &nbsp;</p><p>Nonlinear conjugate gradient (CG) algorithms have been used to solve large-scale unconstrained optimization problems. Because of their minimal memory requirements and global convergence properties, they are widely used in a variety of fields, and the approach has lately undergone many investigations and modifications for its enhancement. The idea behind the conjugate gradient is significant in our daily lives: whatever we do, we strive for the best outcomes, such as the highest profit, the lowest loss, the shortest road, or the shortest time, which are referred to as minima and maxima in mathematics, and one route to them is spectral gradient descent. For multidimensional unconstrained objective functions, the spectral conjugate gradient (SCG) approach is a strong tool. In this study, we describe a new SCG technique whose performance is quantified. Based on suitable assumptions, we established the descent condition, the sufficient descent theorem, the conjugacy condition, and global convergence using the strong Wolfe-Powell line search. Numerical data and graphs were produced using benchmark functions, which are widely used classical test functions, to demonstrate the efficacy of the recommended approach. According to the numerical results, the suggested strategy is more efficient than some existing techniques. In addition, we show how the new method may be utilized to improve solutions and outcomes.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
<item>
<title><![CDATA[Estimating Weibull Parameters Using Maximum Likelihood Estimation and Ordinary Least Squares: Simulation Study and Application on Meteorological Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11835]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Nawal Adlina Mohd Ikbal&nbsp; &nbsp;Syafrina Abdul Halim&nbsp; &nbsp;and Norhaslinda Ali&nbsp; &nbsp;</p><p>Inefficient estimation of distribution parameters for the current climate will lead to misleading results for the future climate. Maximum likelihood estimation (MLE) is widely used to estimate the parameters. However, MLE does not perform well for small sample sizes. Hence, the objective of this study is to compare the efficiency of MLE with ordinary least squares (OLS) through a simulation study and a real-data application to wind speed data, based on the model selection criteria Akaike information criterion (AIC) and Bayesian information criterion (BIC). The Anderson-Darling (AD) test is also performed to validate the proposed distribution. In summary, OLS is better than MLE when dealing with small sample sizes and when estimating the shape parameter, while MLE is capable of estimating the scale parameter. Both methods perform well at large sample sizes.</p>]]></description>
<pubDate>Mar 2022</pubDate>
</item>
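The two estimators compared in the abstract above can be sketched for the two-parameter Weibull using only stdlib Python. The profile-likelihood bisection for the MLE and the median-rank plotting positions for the OLS (Weibull plot) regression are standard textbook choices, assumed here rather than taken from the paper:

```python
import math
import random

random.seed(11)

# Simulate a Weibull(shape=2.0, scale=1.5) sample by inversion.
shape_true, scale_true, n = 2.0, 1.5, 300
data = [scale_true * (-math.log(1.0 - random.random())) ** (1.0 / shape_true)
        for _ in range(n)]

def weibull_mle(x):
    # Profile likelihood: solve g(k) = 0 for the shape k by bisection,
    # where g(k) = sum(xi^k ln xi)/sum(xi^k) - 1/k - mean(ln xi).
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)
    def g(k):
        num = sum(v ** k * math.log(v) for v in x)
        den = sum(v ** k for v in x)
        return num / den - 1.0 / k - mean_log
    lo, hi = 0.05, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, lam

def weibull_ols(x):
    # Weibull plot: regress ln(-ln(1 - F_i)) on ln(x_(i)), with
    # median-rank plotting positions F_i = (i - 0.3)/(m + 0.4).
    xs = sorted(x)
    m = len(xs)
    pts = [(math.log(xs[i]),
            math.log(-math.log(1.0 - (i + 0.7) / (m + 0.4))))
           for i in range(m)]
    mx = sum(p[0] for p in pts) / m
    my = sum(p[1] for p in pts) / m
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    k = sxy / sxx                 # slope estimates the shape
    lam = math.exp(mx - my / k)   # intercept is -k ln(scale)
    return k, lam

k_mle, lam_mle = weibull_mle(data)
k_ols, lam_ols = weibull_ols(data)
```

Repeating this over many replications at several sample sizes would reproduce the kind of comparison the abstract describes.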
<item>
<title><![CDATA[Limit Theorems for The Sums of Random Variables in A Special Form]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11799]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Azam A. Imomov&nbsp;&nbsp;and Zuhriddin A. Nazarov&nbsp;&nbsp;</p><p>In this paper, we consider some functionals of sums of independent identically distributed random variables. Such sums are important in probabilistic models and stochastic branching systems, and in view of these applications we are interested in whether the law of large numbers and the central limit theorem hold for them. The main hypotheses of the paper are the existence of second-order moments of the variables and the fulfillment of the Lindeberg condition. The object of study consists of specially constructed random variables formed from these sums. In total, six sums of a special form, not previously studied, are examined. The purpose of the paper is to determine whether these sums satisfy the law of large numbers and the central limit theorem. The main result is that the law of large numbers and the conclusions of the classical limit theorem hold in several cases. The results are of theoretical importance; the central limit theorem analogues proved here are applications of the Lindeberg theorem. They can be applied to determining the fluctuations of branching systems with immigration as well as the asymptotic state of autoregression processes, and they can also be used in practical lessons on probability theory. The results of the paper will be a useful guide for young researchers, and the theorems proved here can be used in probability theory, stochastic branching systems, and other practical problems.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
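The Lindeberg-type central limit behaviour discussed above can be illustrated numerically: for iid summands with finite second moments, the standardized sums should place about 95% of their mass within 1.96 standard deviations. A small stdlib sketch (the uniform summands and sample sizes are illustrative choices, not the special sums studied in the paper):

```python
import math
import random

random.seed(3)

def standardized_sum(n):
    # Sum of n iid Uniform(0, 1) variables, centered and scaled:
    # the sum has mean n/2 and variance n/12.
    s = sum(random.random() for _ in range(n))
    return (s - n / 2.0) / math.sqrt(n / 12.0)

reps, n = 5000, 50
zs = [standardized_sum(n) for _ in range(reps)]
coverage = sum(1 for z in zs if 1.96 > abs(z)) / reps
```

The empirical coverage should be close to the normal value 0.95, as the central limit theorem predicts.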
<item>
<title><![CDATA[Fuzzy EOQ Model for Time Varying Deterioration and Exponential Time Dependent Demand Rate under Inflation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11798]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>K.Geetha&nbsp;&nbsp;and S.P.Reshma&nbsp;&nbsp;</p><p>In this study, we discuss a fuzzy EOQ model for deteriorating products with time-varying deterioration and an exponential time-dependent demand rate under inflation. Shortages are not allowed in this fuzzy EOQ model, and the impact of inflation is investigated. An inventory model is used to determine whether the order quantity is more than or equal to a predetermined quantity for deteriorating items. The optimal solution of the model is derived by using a truncated Taylor series approximation to obtain a closed-form optimal solution. The cost of deterioration, the cost of ordering, the cost of holding, and the time taken to settle the delay in account are represented by triangular fuzzy numbers, which are used to estimate the optimal order quantity and cycle duration. Furthermore, we use the graded mean integration method and the signed distance approach to defuzzify these values. To validate our model, numerical examples are discussed for all cases with the help of a sensitivity analysis for different parameters. Finally, it is established that a higher decay rate results in a shorter optimal cycle time as well as a higher overall relevant cost. The presented model can be used with demand as a quadratic function of time, stock-level and time-dependent demand, selling price, and other variables.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Newton-PKSOR with Quadrature Scheme in Solving Nonlinear Fredholm Integral Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11797]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Labiyana Hanif Ali&nbsp;&nbsp;Jumat Sulaiman&nbsp;&nbsp;and Azali Saudi&nbsp;&nbsp;</p><p>In this study, we apply the Newton method with a new version of KSOR, called PKSOR, to form NPKSOR for solving nonlinear Fredholm integral equations of the second kind. PKSOR updates the KSOR method with two relaxation parameters. The properties of KSOR help enlarge the solution domain, so the relaxation parameter can take the value <img src=image/13425283_01.gif>. With PKSOR, the relaxation parameter of KSOR <img src=image/13425283_02.gif> is split into two different relaxation parameters <img src=image/13425283_03.gif> and <img src=image/13425283_04.gif>, which results in a lower number of iterations compared with the KSOR method. By combining the Newton method with PKSOR, we aim to form a more efficient method for solving nonlinear Fredholm integral equations. The discretization in this study is done using a first-order quadrature scheme to develop a nonlinear system. We formulate the solution of the nonlinear system by reducing it to a linear system and then solving it with iterative methods to obtain an approximate solution. Furthermore, we compare the results of the proposed method with the NKSOR and NGS methods on three examples. Based on our findings, the NPKSOR method is more efficient than the NKSOR and NGS methods. By implementing the NPKSOR method, we can boost the convergence rate of the iteration by considering two relaxation parameters, resulting in fewer iterations and lower computational time.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
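The relaxation sweeps underlying KSOR and PKSOR extend the classical successive over-relaxation (SOR) iteration applied to the linearized systems. A minimal SOR sketch on a small symmetric positive definite system (the matrix, the relaxation parameter 1.1, and the sweep count are illustrative assumptions; the paper's PKSOR uses two relaxation parameters and a modified update):

```python
def sor_solve(a, b, omega=1.1, sweeps=100):
    # Classical SOR: for each i, mix the old value with the
    # Gauss-Seidel update using the relaxation parameter omega.
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            # Entries updated earlier in this sweep are used in place.
            sigma = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / a[i][i]
    return x

A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = sor_solve(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
```

In an NPKSOR-style scheme, each Newton step would produce such a linear system, solved by the relaxation sweep instead of a direct method.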
<item>
<title><![CDATA[Modelling of Cointegration with Student's T-errors]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11783]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Nimitha John&nbsp; &nbsp;and Balakrishna Narayana&nbsp; &nbsp;</p><p>Two or more non-stationary time series are said to be cointegrated if a certain linear combination of them becomes stationary. Identification of cointegrating relationships among the relevant time series helps researchers develop efficient forecasting methods. The classical approach to analyzing such series is to express the cointegrated time series in the form of error correction models with Gaussian errors. However, the modeling and analysis of cointegration in the presence of non-normal errors need to be developed, as most real time series in finance and economics deviate from the assumption of normality. This paper focuses on modeling a bivariate cointegration with Student's t-distributed errors. The cointegrating vector obtained from the error correction equation is estimated using the method of maximum likelihood. A unit root test for a first-order non-stationary process with Student's t-errors is also defined. The resulting estimators are used to construct procedures for testing the unit root and the cointegration associated with two time series. The likelihood equations are solved using numerical approaches because the estimating equations do not have an explicit solution. A simulation study is carried out to illustrate the finite-sample properties of the model, and the experiments show that the estimates perform reasonably well. The applicability of the model is illustrated by analyzing time series of Bombay Stock Exchange indices and crude oil prices, and the proposed model is found to be a good fit for these data sets.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Expectation-Maximization Algorithm Estimation Method in Automated Model Selection Procedure for Seemingly Unrelated Regression Equations Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11782]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Nur Azulia Kamarudin&nbsp; &nbsp;Suzilah Ismail&nbsp; &nbsp;and Norhayati Yusof&nbsp; &nbsp;</p><p>Model selection is the process of choosing a model from a set of possible models. A model's ability to generalise means it can fit both current and future data. Despite the emergence of numerous procedures for selecting models automatically, there has been a lack of studies on procedures for selecting multiple-equation models, particularly seemingly unrelated regression equations (SURE) models. Hence, this study concentrates on an automated model selection procedure for the SURE model that integrates the expectation-maximization (EM) algorithm estimation method, named SURE(EM)-Autometrics. This extended procedure originates from Autometrics, which is applicable only to a single equation. To assess the performance of SURE(EM)-Autometrics, a simulation analysis was conducted under two strengths of correlation among equations and two levels of significance for a two-equation model with up to 18 variables in the initial general unrestricted model (GUM). Three econometric models were utilised as a testbed for the true specification search. The results were divided into four categories, where a tight significance level of 1% contributed a high percentage of models in which all equations contained variables precisely matching the true specifications. Then, an empirical comparison of four model selection techniques was conducted using water quality index (WQI) data. System selection, which selects all equations in the model simultaneously, proved more efficient than single-equation selection. SURE(EM)-Autometrics dominated the comparison, ranking at the top for most of the error measures. Hence, the integration of EM algorithm estimation is appropriate for improving the performance of automated model selection procedures for multiple-equation models.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[The Power of Test of Jennrich Statistic with Robust Methods in Testing the Equality of Correlation Matrices]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11781]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Bahtiar Jamili Zaini&nbsp; &nbsp;and Shamshuritawati Md Sharif&nbsp; &nbsp;</p><p>The Jennrich statistic is a method that can be used to test the equality of two or more independent correlation matrices. However, it becomes problematic in the presence of outliers, which can lead to invalid results. When outliers exist in the data, the Jennrich statistic inflates the Type I error and reduces the power of the test. To overcome the presence of outliers, this study suggests robust methods as an alternative and integrates the robust estimators into the Jennrich statistic, which can improve the performance of tests of correlation matrix hypotheses under outlier problems. Accordingly, this study proposes three statistical tests, namely the Js-statistic, the Jm-statistic, and the Jmad-statistic, that can be used to test the equality of two or more correlation matrices. The performance of the proposed methods is assessed using the power of the test. The results show that the Jm-statistic and the Jmad-statistic can overcome outlier problems in the Jennrich statistic when testing the correlation matrix hypothesis. The Jmad-statistic is also superior in testing the correlation matrix hypothesis for different sample sizes, especially those involving 10% outliers.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Solving Multi-Response Problem Using Goal Programming Approach and Quantile Regression]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11780]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Sara Abdel Baset&nbsp; &nbsp;Ramadan Hamed&nbsp; &nbsp;Maha El-Ashram&nbsp; &nbsp;and Zakaria Abdel Samea&nbsp; &nbsp;</p><p>Response surface methodology (RSM) is a group of mathematical and statistical techniques helpful for improving, developing, and optimizing processes. It also has important uses in the design, development, and formulation of new products, as well as in the enhancement of existing products. RSM is used to discover response functions that satisfy all quality diagnostics simultaneously. Most applications have more than one response, so the main problem is multi-response optimization (MRO). The classical methods used to solve the MRO problem do not guarantee optimal designs and solutions; besides, they take a long time and depend on the researcher's judgment. Some researchers have therefore used goal programming-based methods, but these still do not guarantee an optimal solution. This study aims to form a goal programming model derived from a chance-constrained approach using quantile regression to deal with outliers and non-normal errors. The model describes the relationship between the responses and the control variables at distinct points of the conditional response distribution, and it also accounts for uncertainty. An illustrative example and a simulation study are presented for the suggested model.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Some Properties of BP-Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11779]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Ahmed Talip Hussein&nbsp; &nbsp;and Emad Allawi Shallal&nbsp; &nbsp;</p><p>Y. Imai and K. Iseki [4], and K. Iseki [5], introduced types of abstract algebras called BCK-algebras and BCI-algebras. It is known that the class of BCK-algebras is a proper subclass of the class of BCI-algebras. Q. P. Hu [2] and X. Li [3] introduced a wider class of abstract algebras, the BCH-algebras, and showed that the class of BCI-algebras is a proper subclass of the class of BCH-algebras. Moreover, J. Neggers and H. S. Kim [9] introduced the notion of d-algebras, another generalization of BCK-algebras, and investigated the relations between d-algebras and BCK-algebras. They considered various topologies in the study of lattices but did not discuss making the binary operation of a d-algebra continuous. Topological notions are well known and have been made precise by numerous mathematicians, and topological algebraic structures have been investigated by several authors. We introduce a Tb-algebra, obtain several properties of this structure and its most significant features, and arrive at a new class of spaces called BP-spaces, for which we obtain the following results. Let <img src=image/13425357_01.gif> be <img src=image/13425357_02.gif> B-space and let <img src=image/13425357_03.gif> be periodic proportional. Then <img src=image/13425357_04.gif> is a compact set in <img src=image/13425357_01.gif> and <img src=image/13425357_04.gif> = <img src=image/13425357_05.gif>, <img src=image/13425357_06.gif>. 
Also, if <img src=image/13425357_07.gif> is invariant under <img src=image/13425357_08.gif>, then <img src=image/13425357_09.gif>, <img src=image/13425357_10.gif> and <img src=image/13425357_11.gif> are invariant under <img src=image/13425357_12.gif> for every Q in <img src=image/13425357_13.gif> if <img src=image/13425357_14.gif> is also. If the function <img src=image/13425357_14.gif> is closed (one-to-one), then <img src=image/13425357_15.gif>, (<img src=image/13425357_16.gif>) is invariant under <img src=image/13425357_12.gif>, and the set of interior points of <img src=image/13425357_18.gif> is invariant under <img src=image/13425357_12.gif> if the function <img src=image/13425357_14.gif> is open and <img src=image/13425357_17.gif>.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Solving Differential Equations of Fractional Order Using Combined Adomian Decomposition Method with Kamal Integral Transformation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11778]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Muhamad Deni Johansyah&nbsp; &nbsp;Asep K Supriatna&nbsp; &nbsp;Endang Rusyaman&nbsp; &nbsp;and Jumadil Saputra&nbsp; &nbsp;</p><p>A differential equation is an equation that involves derivatives of the dependent variable with respect to one or more independent variables. The derivative represents a rate of change, and the differential equation expresses a relationship between a changing quantity and the change in another quantity. The Adomian decomposition method is one of the iterative methods that can be used to solve differential equations, whether of integer or fractional order, linear or nonlinear, ordinary or partial. This method can be combined with integral transformations, such as the Laplace, Sumudu, Natural, Elzaki, Mohand, Kashuri-Fundo, and Kamal transformations. The main objective of this research is to solve differential equations of fractional order using a combination of the Adomian decomposition method and the Kamal integral transformation, and the solution obtained with this combined method is investigated. The main finding of our study is that the combined method is very accurate in solving differential equations of fractional order. The present results are original and new for solving differential equations of fractional order, and an illustrative example is solved to show the efficiency of the proposed method.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Fuzzy Number – A New Hypothesis and Solution of Fuzzy Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11777]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Vijay C. Makwana&nbsp; &nbsp;Vijay. P. Soni&nbsp; &nbsp;Nayan I. Patel&nbsp; &nbsp;and Manoj Sahni&nbsp; &nbsp;</p><p>In this paper, a new hypothesis of fuzzy numbers is proposed which is more precise and direct. The proposed approach treats a fuzzy number as an equivalence class on the set of real numbers R, with its algebraic structure and properties, along with a theoretical study and computational results. The newly defined hypothesis provides a well-structured summary that offers both deeper knowledge of the theory of fuzzy numbers and an extensive view of its algebra. We define a field of the newly defined fuzzy numbers, which opens a new direction for fuzzy mathematics. It is shown that, by using the newly defined fuzzy number and its membership function, we are able to solve fuzzy equations in an uncertain environment. We illustrate solutions of fuzzy linear and quadratic equations using the new fuzzy number; this can be extended to higher-order polynomial equations in the future. Linear fuzzy equations have numerous applications in science and engineering, and this new methodology allows iterative methods for systems of fuzzy linear equations to be developed in a simple and natural way. This is an innovative and purposeful study of fuzzy numbers, in which the newly defined fuzzy number replaces the ordinary fuzzy number.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[A Griffith Crack at the Interface of an Isotropic and Orthotropic Half Space Bonded Together]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11776]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>A. K. Awasthi&nbsp; &nbsp;Rachna&nbsp; &nbsp;and Harpreet Kaur&nbsp; &nbsp;</p><p>Over the past 53 years, many efforts have contributed to developing and demonstrating the properties of reinforced composite materials. The ever-increasing use of composite materials in engineering structures requires proper analysis of the mechanical response of these structures. In the proposed work, we obtain the exact form of the stress components and displacement components for a Griffith crack at the interface of an isotropic and an orthotropic half-space bonded together. These components were previously evaluated in the vicinity of the crack tips using the Fourier transform method; here they are evaluated with the help of Fredholm integral equations. Following the problem of Lowengrub and Sneddon, the problem is reduced to dual integral equations. The solution of these equations, through the method of Srivastava and Lowengrub, is reduced to coupled Fredholm integral equations, and further to decoupled Fredholm integral equations of the second kind. The physical interest in fracture design criteria lies in the stress and crack-opening displacement components. In the end, the stress components and displacement components can be easily calculated in exact form.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Outcomes of Common Fixed Point Theorems in S-metric Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11661]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Katta.Mallaiah&nbsp; &nbsp;and Veladi Srinivas&nbsp; &nbsp;</p><p>In the present paper, we establish two unique common fixed point theorems with a new contractive condition for four self-mappings in the S-metric space. First, we establish a common fixed point theorem using weaker conditions such as compatible mappings of type (E) and subsequentially continuous mappings. In the next theorem, we use another set of weaker conditions, sub-compatible and subsequentially continuous mappings, which are weaker than occasionally weakly compatible mappings. Moreover, it is observed that the mappings in these two theorems are subsequentially continuous but neither continuous nor reciprocally continuous. These two results extend and generalize the existing results of [7] and [9] in the S-metric space. Furthermore, we provide suitable examples to justify our outcomes.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[An Approach to Solve Multi Attribute Decision-making Problem Based on the New Possibility Measure of Picture Fuzzy Numbers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11660]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>K. Deva&nbsp; &nbsp;and S. Mohanaselvi&nbsp; &nbsp;</p><p>A picture fuzzy set is a more powerful tool for dealing with uncertainty in the given information than fuzzy sets and intuitionistic fuzzy sets, and it has active applications in decision-making. The aim of this study is to develop a new possibility measure for ranking picture fuzzy numbers, and some of its basic properties are then proved. The proposed measure yields the same ranking order as the score function in the literature, while providing additional information for the relative comparison of picture fuzzy numbers. A picture fuzzy multi-attribute decision-making problem is solved based on the possibility matrix generated by the proposed method after aggregation with the picture fuzzy Einstein weighted averaging operator. To verify the importance of the proposed method, a picture fuzzy multi-attribute decision-making strategy is presented along with an application to selecting a suitable alternative. The superiority of the proposed method and the limitations of existing methods are discussed with the help of a comparative study. Finally, a numerical example and a comparative analysis are provided to illustrate the practicality and feasibility of the proposed method.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[A Basic <img src=image/13425240_01.gif> Dimensional Representation of Artin Braid Group <img src=image/13425240_02.gif>, and a General Burau Representation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11659]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Arash Pourkia&nbsp; &nbsp;</p><p>Braid groups and their representations are a center of study not only in low-dimensional topology but also in many other branches of mathematics and theoretical physics. The Burau representation of the Artin braid group, which has two versions, reduced and unreduced, has been the focus of extensive study and research since its discovery in the 1930s. It remains one of the most important representations of the braid group, partly because of its connections to the Alexander polynomial, which is one of the first and most useful invariants for knots and links. In the present work, we show that interesting representations of the braid group can be obtained using a simple and intuitive approach, in which we analyse the path of the strands in a braid and encode the over-crossings, under-crossings, and no-crossings by parameters. More precisely, at each crossing where, for example, strand <img src=image/13425240_03.gif> crosses over strand <img src=image/13425240_04.gif>, we assign t to the top strand and b to the bottom strand. We consider the parameter t as a relative weight given to strand <img src=image/13425240_03.gif> relative to <img src=image/13425240_04.gif>, hence the position <img src=image/13425240_05.gif> for t in the matrix representation. Similarly, the parameter b is a relative weight given to strand <img src=image/13425240_04.gif> relative to <img src=image/13425240_03.gif>, hence the position <img src=image/13425240_06.gif> for b in the matrix representation. We show that this simple path-analysing approach leads to an interesting simple representation. 
Next, we show that, following the same intuitive approach and introducing only one additional parameter, we can greatly improve the representation to one with a much smaller kernel. This more general representation includes the unreduced Burau representation as a special case. Our new path-analysing approach has the advantage of applying a very simple and intuitive method that captures the fundamental interactions of the strands in a braid: we intuitively follow each strand and record its history as it interacts with other strands via over-crossings, under-crossings, or no-crossings. This leads directly to the desired representations.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[On Recent Advances in Divisor Cordial Labeling of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11658]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Vishally Sharma&nbsp; &nbsp;and A. Parthiban&nbsp; &nbsp;</p><p>An assignment of intergers to the vertices of a graph <img src=image/13424638_01.gif> subject to certain constraints is called a vertex labeling of <img src=image/13424638_01.gif>. Different types of graph labeling techniques are used in the field of coding theory, cryptography, radar, missile guidance, <img src=image/13424638_02.gif>-ray crystallography etc. A DCL of <img src=image/13424638_01.gif> is a bijective function <img src=image/13424638_03.gif> from node set <img src=image/13424638_04.gif> of <img src=image/13424638_01.gif> to <img src=image/13424638_05.gif> such that for each edge <img src=image/13424638_06.gif>, we allot 1 if <img src=image/13424638_07.gif> divides <img src=image/13424638_08.gif> or <img src=image/13424638_08.gif> divides <img src=image/13424638_07.gif> & 0 otherwise, then the absolute difference between the number of edges having 1 & the number of edges having 0 do not exceed 1, i.e., <img src=image/13424638_09.gif>. If <img src=image/13424638_01.gif> permits a DCL, then it is called a DCG. A complete graph <img src=image/13424638_10.gif>, is a graph on <img src=image/13424638_11.gif> nodes in which any 2 nodes are adjacent and lilly graph <img src=image/13424638_12.gif> is formed by <img src=image/13424638_13.gif> joining <img src=image/13424638_14.gif>, <img src=image/13424638_15.gif> sharing a common node. i.e., <img src=image/13424638_16.gif>, where <img src=image/13424638_17.gif> is a complete bipartite graph & <img src=image/13424638_18.gif> is a path on <img src=image/13424638_11.gif> nodes. 
In this paper, we propose a conjecture concerning the DCL of a given <img src=image/13424638_01.gif> and discuss certain general results concerning the DCL of complete graph <img src=image/13424638_10.gif>-related graphs. We also prove that <img src=image/13424638_12.gif> admits a DCL for all <img src=image/13424638_15.gif>. Further, we establish the DCL of some <img src=image/13424638_12.gif>-related graphs in the context of graph operations such as duplication of a node by an edge, duplication of a node by a node, extension of a node by a node, switching of a node, the degree splitting graph, and the barycentric subdivision of the given <img src=image/13424638_01.gif>.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Viscosity Analysis of Lubricating Oil Through the Solution of Exponential Fractional Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11657]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Endang Rusyaman&nbsp; &nbsp;Kankan Parmikanti&nbsp; &nbsp;Diah Chaerani&nbsp; &nbsp;and Khoirunnisa Rohadatul Aisy Muslihin&nbsp; &nbsp;</p><p>Lubricating oil is still a primary need for people dealing with machines. An important property of lubricating oil is its viscosity, which is closely related to surface tension. Fluid viscosity is a measure of friction in the fluid, while surface tension is the tendency of the fluid to stretch due to attractive forces between its molecules (cohesion). We want to know how, and to what extent, the viscosity and surface tension of lubricating oil are related. This paper discusses the analysis of a model in the form of an exponential fractional differential equation that states the relationship between the surface tension and viscosity of lubricating oil. The Modified Homotopy Perturbation Method (MHPM) is used to determine the solution of the fractional differential equation. This study indicates a relationship between viscosity and surface tension in the form of a fractional differential equation for which the existence and uniqueness of the solution are guaranteed. From the analysis of the solution function, both analytically and geometrically, supported by empirical data, it can be concluded that there is a strong exponential relationship between viscosity and surface tension in lubricating oil.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Estimation of Extreme Quantiles of Global Horizontal Irradiance: A Comparative Analysis Using an Extremal Mixture Model and a Generalised Additive Extreme Value Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11656]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Thakhani Ravele&nbsp; &nbsp;Caston Sigauke&nbsp; &nbsp;and Lordwell Jhamba&nbsp; &nbsp;</p><p>Solar power poses challenges to the management of grid energy due to its intermittency. For optimal integration of solar power into the electricity grid, it is important to have accurate forecasts. This study presents a comparative analysis of semi-parametric extremal mixture (SPEM), generalised additive extreme value (GAEV) or quantile regression via asymmetric Laplace distribution (QR-ALD), additive quantile regression (AQR-1), additive quantile regression with a temperature variable (AQR-2) and penalised cubic regression smoothing spline (benchmark) models for probabilistic forecasting of hourly global horizontal irradiance (GHI) at extremely high quantiles (<img src=image/13425194_01.gif> = 0.95, 0.97, 0.99, 0.999 and 0.9999). The data used are from the University of Venda radiometric station in South Africa and cover the period 1 January 2020 to 31 December 2020. Empirical results from the study showed that AQR-2 is the best-fitting model and gives the most accurate prediction of quantiles at <img src=image/13425194_01.gif> = 0.95, 0.97, 0.99 and 0.999, while at the 0.9999-quantile the GAEV model has the most accurate predictions. Based on these results, it is recommended that the AQR-2 and GAEV models be used for predicting extremely high quantiles of hourly GHI in South Africa. The predictions from this study are valuable to power utility decision-makers and system operators when making high-risk decisions and regulatory frameworks that require high security levels. To the best of our knowledge, this is the first study to conduct a comparative analysis of the proposed models using South African solar irradiance data.
</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[A Goal Programming Approach for Generalized Calibration Weights Estimation in Stratified Random Sampling]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11655]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Siham Rabee&nbsp; &nbsp;Ramadan Hamed&nbsp; &nbsp;Ragaa Kassem&nbsp; &nbsp;and Mahmoud Rashwaan&nbsp; &nbsp;</p><p>The calibration estimation approach is a widely used method for increasing the precision of estimates of population parameters. It works by modifying the design weights as little as possible, minimizing a given distance function between the design weights and the calibrated weights subject to a set of constraints related to specified auxiliary variables. This paper proposes a goal programming approach for generalized calibration estimation, in which multiple study variables are considered by incorporating multiple auxiliary variables. Almost all of the calibration estimation literature proposes calibrated estimators for the population mean of only one study variable; to the researchers' knowledge, no study has considered the calibration estimation approach for multiple study variables. According to the correlation structure between the study variables, the estimation of the calibrated weights is formulated in two different models. The theory of the proposed approach is presented and the calibrated weights are estimated. A simulation study is conducted to evaluate the performance of the proposed approach in different scenarios compared with some existing calibration estimators. The simulation results for the four generated populations show that the proposed approach is more flexible and efficient than the classical methods.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Explicit Formulas and Numerical Integral Equation of ARL for SARX(P,r)<sub>L</sub> Model Based on CUSUM Chart]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11654]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Suvimol Phanyaem&nbsp; &nbsp;</p><p>The Cumulative Sum (CUSUM) chart is widely used and has many applications in fields such as finance, medicine, and engineering. In real applications, there are many situations in which the observations of random processes are serially correlated, such as hospital admissions in medicine, share prices in economics, or daily rainfall in environmental science. The characteristic commonly used to evaluate the performance of control charts is the Average Run Length (ARL). The primary goals of this paper are to derive an explicit formula and to develop a numerical integral equation for the ARL of the CUSUM chart when the observations follow a seasonal autoregressive model with an exogenous variable, SARX(P,r)<sub>L</sub>, with exponential white noise. A Fredholm integral equation is used to derive the explicit formula of the ARL, and numerical methods, including the midpoint rule, the trapezoidal rule, Simpson's rule, and the Gaussian rule, are used to approximate the numerical integral equation of the ARL. The uniqueness of the solution is guaranteed by Banach's fixed point theorem. In addition, the proposed explicit formula was compared with the numerical methods in terms of the absolute percentage difference, to verify the accuracy of the ARL results, and in terms of computational (CPU) time. The results indicate that the ARL from the explicit formula is close to that of the numerical integral equation, with an absolute percentage difference of less than 1%, showing excellent agreement between the explicit formula and the numerical integral equation solutions. 
An important conclusion of this study is that the explicit formula outperforms the numerical integral equation methods in terms of CPU time. Consequently, the proposed explicit formula and the numerical integral equation are alternative methods for finding the ARL of the CUSUM control chart and would be of use in fields such as biology, engineering, physics, medicine, and the social sciences.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Solving Ordinary Differential Equations (ODEs) Using Least Square Method Based on Wang Ball Curves]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11653]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Abdul Hadi Bhatti&nbsp; &nbsp;and Sharmila Binti Karim&nbsp; &nbsp;</p><p>Numerical methods are regularly developed to obtain better approximate solutions of ordinary differential equations (ODEs). The best approximate solution of an ODE is obtained by reducing the error between the approximate and exact solutions. To improve the error accuracy, representations based on Wang Ball curves are proposed, with their control points determined by the Least Square Method (LSM). The control points of the Wang Ball curves are calculated by minimizing the residual function using LSM, where the residual error is measured by the sum of squares of the residual function evaluated at the Wang Ball curve's control points. The approximate solution of the ODE is then obtained from the determined control points. Two numerical examples, an initial value problem (IVP) and a boundary value problem (BVP), are presented to demonstrate the proposed method in terms of error. The results of the numerical examples show that the proposed method improves the error accuracy compared to an existing study based on Bézier curves. A convergence analysis of the proposed method is also conducted on a two-point boundary value problem.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Prediction Variance Properties of Third-Order Response Surface Designs in the Hypersphere]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11652]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Abimibola Victoria Oladugba&nbsp; &nbsp;and Brenda Mbouamba Yankam&nbsp; &nbsp;</p><p>Variance dispersion graphs (VDGs) and fraction of design space (FDS) graphs are two graphical methods that effectively describe and evaluate the points of best and worst prediction capability of a design using its scaled prediction variance properties. These graphs are often utilized as an alternative to single-value criteria such as D- and E-optimality when those criteria fail to describe the true nature of a design. In this paper, the VDGs and FDS graphs of third-order orthogonal uniform composite designs (OUCD<sub>4</sub>) and orthogonal array composite designs (OACD<sub>4</sub>) for 2 to 7 factors are studied using the scaled prediction variance properties in the spherical region, both throughout the design region and over a fraction of the design space. Single-value criteria such as D-, A- and G-optimality are also studied. The results show that the OUCD<sub>4</sub> is more optimal than the OACD<sub>4</sub> in terms of D-, A- and G-optimality. The OUCD<sub>4</sub> was also shown to possess a more stable and uniform scaled prediction variance throughout the design region and over a fraction of the design space than the OACD<sub>4</sub>, although the stability of both designs slightly deteriorated towards the extremes.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Study of the New Finite Mixture of Weibull Extension Model: Identifiability, Properties and Estimation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11651]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Noura S. Mohamed&nbsp; &nbsp;Moshira A. Ismail&nbsp; &nbsp;and Sanaa A. Ismail&nbsp; &nbsp;</p><p>Finite mixture models have been used in many fields of statistical analysis, such as pattern recognition, clustering and survival analysis, and have been extensively applied in scientific areas such as marketing, economics, medicine, genetics and the social sciences. Introducing mixtures of new generalized lifetime distributions that exhibit important hazard shapes is a major field of research, aiming at fitting and analyzing a wider variety of data sets. The main objective of this article is to present a full mathematical study of the properties of a new finite mixture of the three-parameter Weibull extension model, considered as a generalization of the standard Weibull distribution. The proposed mixture model exhibits a bathtub-shaped hazard rate, among other shapes important in reliability applications. We analytically prove the identifiability of the new mixture and investigate its mathematical properties and hazard rate function. Maximum likelihood estimation of the model parameters is considered. The Kolmogorov-Smirnov test statistic is used to fit two well-known data sets from mechanical engineering to the proposed model: the Aarset data set and the Meeker and Escobar data set. Results show that the two-component version of the proposed mixture provides a superior fit compared to various one-component and two-component lifetime distributions. The proposed mixture is thus a significant statistical tool for studying lifetime data sets in numerous fields of study.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[A Simulation of an Elastic Filament Using Kirchhoff Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11650]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Saimir Tola&nbsp; &nbsp;Alfred Daci&nbsp; &nbsp;and Gentian Zavalani&nbsp; &nbsp;</p><p>This paper presents numerical simulations and comparisons between different approaches to modeling elastic thin rods. Elastic rods are ideal for modeling the stretching, bending, and twisting deformations of long and thin elastic materials. The static solution of Kirchhoff's equations [2] is produced using the ODE45 solver, where the Kirchhoff and reference-system equations are combined simultaneously. We compare against formulations based on Euler's elastica theory [1], which determines the deformed centerline of the rod by solving a boundary-value problem, and on the Discrete Elastic Rod (DER) method using the Bishop frame [5,6], which is based on discrete differential geometry: it starts with a discrete energy formulation and obtains the forces and equations of motion by taking derivatives of the energies. Instead of discretizing smooth equations, DER solves discrete equations and obeys geometrical exactness. Using DER, we measure torsion as the difference of angles between the material frame and the Bishop frame of the rod, so that no additional degree of freedom is needed to represent the torsional behavior. We found excellent agreement between our Kirchhoff-based solution and the numerical results obtained by the other methods. Our numerical results include a simulation of the rod under the action of a terminal moment and illustrations of gravity effects.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Stratification Methods for an Auxiliary Variable Model-Based Allocation under a Superpopulation Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11649]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Bhuwaneshwar Kumar Gupt&nbsp; &nbsp;Mankupar Swer&nbsp; &nbsp;Md. Irphan Ahamed&nbsp; &nbsp;B. K. Singh&nbsp; &nbsp;and Kh. Herachandra Singh&nbsp; &nbsp;</p><p>In this paper, the problem of optimum stratification of heteroscedastic populations in stratified sampling is considered for a known allocation under Simple Random Sampling With and Without Replacement (SRSWR & SRSWOR) designs. The allocation used is one of the model-based allocations proposed by Gupt [1,2] under a superpopulation model considered by Hanurav [3], Rao [4], and Gupt and Rao [5], which was modified by the author (Gupt [1,2]) to a more general form. The problem of finding optimum boundary points of stratification (OBPS) considered here is based on an auxiliary variable that is highly correlated with the study variable. Equations giving the OBPS are derived by minimizing the variance of the estimator of the population mean. Since these equations are implicit and difficult to solve, some methods of finding approximately optimum boundary points of stratification (AOBPS) are also obtained as solutions of the equations giving the OBPS. In deriving the equations giving the OBPS and the methods of finding the AOBPS, basic statistical definitions, tools of calculus, analytic functions and tools of algebra are used. To examine their efficiencies, the proposed stratification methods are tested on a few generated populations and a live population. All the proposed methods are found to be efficient and suitable for practical applications. 
Although the proposed methods are obtained under a heteroscedastic superpopulation model with a level of heteroscedasticity of one, they show robustness in empirical investigations across varied levels of heteroscedasticity. The stratification methods proposed here are new in that they are derived for an allocation, under the superpopulation model, that has not previously been used in the construction of strata in stratified sampling. The proposed methods may be of interest to researchers amidst the vigorously progressing theoretical research in the area of stratified sampling. Moreover, given the high efficiencies the methods exhibit, the work may provide a practically feasible solution for the planning of socio-economic surveys.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Accuracy and Efficiency of Symmetrized Implicit Midpoint Rule for Solving the Water Tank System Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11648]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2022<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;10&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>M. F. Zairul Fuaad&nbsp; &nbsp;N. Razali&nbsp; &nbsp;H. Hishamuddin&nbsp; &nbsp;and A. Jedi&nbsp; &nbsp;</p><p>The accuracy and efficiency of solving water tank system problems can be determined by comparing the Symmetrized Implicit Midpoint Rule (Symmetrized IMR) with the IMR. Static and dynamic analyses are part of a mathematical model that uses energy conservation to generate a nonlinear ordinary differential equation. Static analysis provides optimal working points, while dynamic analysis provides an overview of the system behaviour. This procedure is tested on two water tank designs, namely cylindrical and rectangular tanks, with two different parameters. Results show that the two-step Symmetrized IMR applied to the proposed mathematical model is precise and efficient and can be used for the design of appropriate controls. The cylindrical water tank model empties the tank in the shortest time. Across the various water tank models, the approach shows an increase in accuracy and efficiency over the range of parameters used in practical model applications. The numerical results show that the two-step Symmetrized IMR provides better stability, accuracy and efficiency for fixed step sizes compared with other numerical methods.</p>]]></description>
<pubDate>Jan 2022</pubDate>
</item>
<item>
<title><![CDATA[Singular Non-circular Complex Elliptically Symmetric Distributions: New Results and Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11575]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Habti Abeida&nbsp; &nbsp;</p><p>Absolutely continuous non-singular complex elliptically symmetric distributions (referred to as nonsingular CES distributions) have been extensively studied in various applications under the assumption of nonsingularity of the scatter matrix, for which the probability density functions (p.d.f.'s) exist. These p.d.f.'s, however, cannot be used to characterize CES distributions with a singular scatter matrix (referred to as singular CES distributions). This paper presents a generalization of the singular real elliptically symmetric (RES) distributions studied by Díaz-García et al. to singular CES distributions. An explicit expression for the p.d.f. of a multivariate non-circular complex random vector with a singular CES distribution is derived. The stochastic representation of singular non-circular CES (NC-CES) distributions and of quadratic forms in an NC-CES random vector are established. As special cases, explicit expressions for the p.d.f.'s of multivariate complex random vectors with singular non-circular complex normal (NC-CN) and singular non-circular complex Compound-Gaussian (NC-CCG) distributions are also derived. Some useful properties of singular NC-CES distributions and their conditional distributions are derived as well. Based on these results, the p.d.f.'s of the non-circular complex t-distribution, K-distribution, and generalized Gaussian distribution under singularity are presented. These general results reduce to those of singular circular CES (C-CES) distributions when the pseudo-scatter matrix is equal to the zero matrix. 
Finally, these results are applied to the problem of estimating the parameters of a complex-valued non-circular multivariate linear model in the presence of either singular NC-CES or C-CES distributed noise terms by proposing widely linear estimators.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Properties of Sakaguchi Kind Functions Associated with Bessel Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11574]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>H. Priya&nbsp; &nbsp;and B. Srutha Keerthi&nbsp; &nbsp;</p><p>The aim of this paper is to obtain the first and second Hankel determinants. We make use of a few lemmas based on Caratheodory's class of analytic functions. We establish a new Sakaguchi class <img src=image/13424969_01.gif> of univalent functions, and we estimate the sharp bounds for the initial coefficients <img src=image/13424969_02.gif> and <img src=image/13424969_03.gif> using the Bessel function expansion. We also discuss the coefficient <img src=image/13424969_04.gif> for the second Hankel determinant. The results are obtained for functions of Sakaguchi kind, and they explore successive stages of Hankel determinants. Various technologies, such as wire, optical or other electromagnetic systems, are used for the transmission of data from one device to another; filters play an important role in this process, as they can remove distorted signals. By using different parameter values for functions belonging to the Sakaguchi class, low-pass and high-pass filters can be designed by means of the coefficient estimates.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[An Asymptotic Test for A Single Outlier in Linear Regression Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11573]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ugah Tobias Ejiofor&nbsp; &nbsp;Mba Emmanuel Ikechukwu&nbsp; &nbsp;Eze Micheal Chinonso&nbsp; &nbsp;Arum Kingsley Chinedu&nbsp; &nbsp;Mba Ifeoma Christy&nbsp; &nbsp;Urama Chinasa&nbsp; &nbsp;and Comfort Njideka Ekene-Okafor&nbsp; &nbsp;</p><p>It is not uncommon to find an outlier in the response variable in linear regression. Such a deviant value needs to be detected and scrutinized to find out why it is not in agreement with its fitted value. Srikantan [1] developed a test statistic for detecting the presence of an outlier in the response variable in a multiple linear regression model. Approximate critical values of this test statistic are available, obtained from the first-order Bonferroni upper bound. The exact critical values are not available, and as a result, tests carried out on the basis of these approximate critical values may not be very accurate. In this paper, we obtain more accurate and precise critical values of this test statistic for large sample sizes (herein called asymptotic critical values) to improve the tests that use them. The procedure involves using the exact probability density function of the test statistic to obtain its asymptotic critical values, which we then compare with the approximate critical values. An application to simulation results for linear regression models is used to examine the power of the test statistic. The asymptotic critical values obtained were found to be more accurate and precise, and the power performance of the test statistic was found to be better when the asymptotic critical values were used.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Power Comparisons of Normality Tests Based on L-moments and Classical Tests]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11572]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ivana Mala&nbsp; &nbsp;Vaclav Sladek&nbsp; &nbsp;and Diana Bilkova&nbsp; &nbsp;</p><p>Normality tests are used in statistical analysis to determine whether a normal distribution is acceptable as a model for the data analysed. A wide range of available tests employs different properties of the normal distribution to compare empirical and theoretical distributions. In the present paper, we perform a Monte Carlo simulation to analyse test power. We compare commonly known and applied tests (standard and robust versions of the Jarque-Bera test, the Lilliefors test, the chi-square goodness-of-fit test, the Shapiro-Francia test, the Cramer-von Mises goodness-of-fit test, the Shapiro-Wilk test, the D'Agostino test, and the Anderson-Darling test) to a test based on robust L-moments, in which the moment characteristics of skewness and kurtosis in a Jarque-Bera-type test are replaced with their robust versions, L-skewness and L-kurtosis. Distributions with heavy tails (lognormal, Weibull, loglogistic and Student) are used to draw random samples to show the performance of the tests when applied to data with outliers. Sample sizes from small (10 observations) up to large samples of 200 observations are analysed. Our results concerning the properties of the classical tests are in line with the conclusions of other recent articles. We concentrate on the properties of the test based on L-moments. This normality test is comparable to well-performing and reliable tests; however, it is outperformed by the most powerful Shapiro-Wilk and Shapiro-Francia tests. It works well for the Student (symmetric) distribution, comparably with the most frequently used Jarque-Bera tests. As expected, the test is robust to the presence of outliers, in contrast to sensitive tests based on product moments or correlations. 
Overall, the test turns out to be reliable across a wide range of settings.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Some Results on Number Theory and Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11571]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>B. M. Cerna Maguiña&nbsp; &nbsp;Dik D. Lujerio Garcia&nbsp; &nbsp;and Héctor F. Maguiña&nbsp; &nbsp;</p><p>In this work, using the basic tools of functional analysis, we develop a technique that yields important results related to quadratic equations in two variables representing a natural number, and to differential equations. We show the possible ways to write an even number ending in six as the sum of two odd numbers, and we establish conditions for those odd numbers to be prime. Also, making use of a suitable linear functional <img src=image/13423281_01.gif>, we obtain representations of natural numbers of the form <img src=image/13423281_02.gif> in order to find positive integer solutions of the quadratic equation <img src=image/13423281_03.gif>, where <img src=image/13423281_04.gif> is a given natural number ending in one. Finally, we show with three examples how the proposed technique can be used to solve some ordinary and partial linear differential equations. We believe that the third corollary of the first result of this investigation can help in proving the strong Goldbach conjecture.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Combined Adomian Decomposition Method with Integral Transform]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11570]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Betty Subartini&nbsp; &nbsp;Ira Sumiati&nbsp; &nbsp;Sukono&nbsp; &nbsp;Riaman&nbsp; &nbsp;and Ibrahim Mohammed Sulaiman&nbsp; &nbsp;</p><p>At present, three numerical solution methods have mainly been used to solve fractional-order chaotic systems in the literature: frequency domain approximation, the predictor–corrector approach, and the Adomian decomposition method (ADM). Based on the literature, ADM is capable of dealing with linear and nonlinear problems in the time domain and is among the most efficient approaches for solving linear and nonlinear equations. Numerical solution methods remain a critical problem in theoretical research on, and in applications of, fractional-order systems. In this work, the solution is decomposed into an infinite series that converges to the exact solution, and an integral transformation is applied to the differential equation. The aim of this study is to combine the Adomian decomposition approach with different integral transforms, including the Laplace, Sumudu, Natural, Elzaki, Mohand, and Kashuri-Fundo transforms. The study's key finding is that employing the combined method to solve fractional ordinary differential equations yields good results: the combined numerical methods considered produce excellent numerical performance. 
Therefore, the proposed combined method has practical implications for solving fractional-order differential equations in the sciences and social sciences, such as finding analytical and numerical solutions for secure communication systems, biological systems, financial risk models, physical phenomena, neuron models and engineering applications.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Comparison of Distance and Linkage in Integrated Cluster Analysis with Multiple Discriminant Analysis on Home Ownership Credit Bank in Indonesia]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11569]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ni Made Ayu Astari Badung&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;and Waego Hadi Nugroho&nbsp; &nbsp;</p><p>This study aims to compare distance measures (Euclidean, Manhattan, and Mahalanobis distance) and linkage methods (average, single, and complete linkage) in cluster analysis integrated with Multiple Discriminant Analysis on Home Ownership Credit Bank consumers in Indonesia. The data used are secondary data from the 5C assessment of bank consumers in Indonesia, containing notes on the 5C assessment as well as 3 credit collectability classes (current, special mention, and substandard) for Home Ownership Credit customers. The population in this study was all Home Ownership Credit customers across all banks in Indonesia. The sampling technique used was purposive random sampling, with a sample of 300 customers drawn from customer data at three bank branches in Indonesia. This is a quantitative study using cluster analysis integrated with multiple discriminant analysis. The best method for classifying Home Ownership Credit Bank customers based on the 5C assessment variables is integrated cluster analysis with Multiple Discriminant Analysis based on the Mahalanobis distance with 2 clusters, namely a high cluster and a low cluster. The novelty of this study lies in the use of cluster analysis integrated with Multiple Discriminant Analysis to compare distance and linkage measures, and in its objects, Home Ownership Credit Bank customers in Indonesia.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Modeling of Path Nonparametric Truncated Spline Linear, Quadratic, and Cubic in Model on Time Paying Bank Credit]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11568]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Erlinda Citra Lucki Efendi&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;and Maria Bernadetha Theresia Mitakda&nbsp; &nbsp;</p><p>This study aims to estimate nonparametric truncated spline path functions of linear, quadratic, and cubic orders at one and two knot points and to determine the best model for the variables that affect the timely payment of House Ownership Credit (HOC). In addition, this study tests hypotheses to determine the variables that have a significant effect on punctuality in paying HOC. The data used in this study are primary data. The variables used are service quality and lifestyle as exogenous variables, willingness to pay as a mediating variable, and on-time payment as an endogenous variable. The data were analyzed with nonparametric path analysis using R software. The results showed that the best model was the linear nonparametric truncated spline path model with 2 knot points. This model has the smallest GCV value of 25.9059 and an R<sup>2</sup> value of 96.96%. In addition, hypothesis testing on the function estimates shows a significant effect for the relationship between service quality and willingness to pay, between service quality and on-time payment, between lifestyle and willingness to pay, and between lifestyle and on-time payment. The novelty of this research is to model and test hypotheses for a development of nonparametric regression, namely nonparametric truncated spline paths of linear, quadratic, and cubic orders.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[An Improved Simple Averaging Approach for Estimating Parameters in Simple Linear Regression Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11567]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Jetsada Singthongchai&nbsp; &nbsp;Noppakun Thongmual&nbsp; &nbsp;and Nirun Nitisuk&nbsp; &nbsp;</p><p>This research concerns estimating parameters in the simple linear regression model. Regression models are applied for prediction in many fields. The Ordinary Least Squares (OLS) and Maximum Likelihood (ML) approaches are employed for estimating parameters in the simple linear regression model when its assumptions are not violated. This research is interested in the simple linear regression model when the assumptions are violated. The Simple Averaging (SA) approach is an alternative for estimating parameters when the assumptions cannot be relied upon. We improved the SA approach based on the median, which we call the improved Simple Averaging (ISA) approach. To compare the two approaches, the ISA approach was evaluated against the SA approach using the Root Mean Square Error (RMSE), which reflects predictive accuracy in simple linear regression. Using sample data, the results showed that the ISA approach outperforms the SA approach, since the RMSE of the ISA approach is smaller than that of the SA approach. Our study therefore suggests the ISA approach for estimating parameters in simple linear regression, because it is more accurate than the SA approach and simplifies the estimation of the parameters. Hence, the ISA approach is an alternative for estimating parameters in the simple linear regression model when the assumptions do not hold.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Some Results on Integer Solutions of Quadratic Polynomials in Two Variables]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11453]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>B. M. Cerna Maguiña&nbsp; &nbsp;and Janet Mamani Ramos&nbsp; &nbsp;</p><p>Although several articles study quadratic equations in two variables, they do so in a general way. We focus on the study of natural numbers ending in one, because the other cases can be studied similarly. We have given the subject a different approach, which is why our bibliographic citations are few. In this work, using basic tools of functional analysis, we achieve some results in the study of integer solutions of quadratic polynomials in two variables that represent a given natural number. To determine whether a natural number ending in one is prime, we must solve the equations (i) <img src=image/13422919_01.gif>, (ii) <img src=image/13422919_02.gif>, (iii) <img src=image/13422919_03.gif>. If these equations have no integer solution, then the number P is prime. The advantage of this technique is that, to determine whether a natural number P is prime, it is not necessary to know the prime numbers less than or equal to the square root of P. The objective of this work was to reduce the number of possibilities assumed by the integer variables <img src=image/13422919_04.gif> in equations (i), (ii), (iii), respectively. Although this objective was achieved, we believe that the lower limits for the sums of the solutions of equations (i), (ii), (iii) were not optimal, since in our recent research we have obtained lower limits that further reduce the domain of the integer variables <img src=image/13422919_04.gif> solving equations (i), (ii), (iii), respectively. We will show those results in a future article. The methodology used was deductive and inductive. We would have liked to have a supercomputer to construct or identify prime numbers of many millions of digits, but this is not possible, since we do not have the support of our respective authorities. We believe that the contribution of this work to number theory is the creation of linear functionals for the study of integer solutions of quadratic polynomials in two variables that represent a given natural number. Large prime numbers can be used to encode any type of information safely, and the scheme shown in this article could be useful for that process.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Derivation of New Degrees for Best COCUNP Weighted Approximation: II]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11452]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Malik Saad Al-Muhja&nbsp; &nbsp;Habibulla Akhadkulov&nbsp; &nbsp;and Nazihah Ahmad&nbsp; &nbsp;</p><p>Approximation theory is a branch of analysis and applied mathematics requiring that the approximation process preserve certain <img src=image/13417154_01.gif>-shaped properties defined on a finite interval <img src=image/13417154_02.gif>, such as convexity on all or part of the interval. The (Co)convex and Unconstrained Polynomial (COCUNP) approximation is one of the key estimates of approximation theory, raised by Kopotun over the past ten years. Numerous studies have applied modern methods of weighted approximation to construct the best degree of approximation. In developing COCUNP, a novel technique, the Lebesgue–Stieltjes integral technique, is used to resolve certain disadvantages, such as Riemann-integrable functions that do not have a degree of best approximation in the norm space. To achieve the main goal, a Derivation of New Degrees (DOND) of the best COCUNP approximation was constructed. The theoretical results revealed that, in general, the new degrees of best approximation yield smaller errors compared to the existing literature for the same estimates. In conclusion, this study has successfully developed DOND for the best (Co)convex Polynomial (COCP) weighted approximation.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Seidel Laplacian and Seidel Signless Laplacian Spectrum of the Zero-divisor Graph on the Ring of Integers Modulo <img src=image/13424080_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11451]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Magi P M&nbsp; &nbsp;Sr.Magie Jose&nbsp; &nbsp;and Anjaly Kishore&nbsp; &nbsp;</p><p>Let <img src=image/13424080_02.gif> be a simple graph of order <img src=image/13424080_03.gif> and let <img src=image/13424080_04.gif> be the Seidel matrix of <img src=image/13424080_02.gif>, defined as <img src=image/13424080_05.gif>, where <img src=image/13424080_06.gif> if the vertices <img src=image/13424080_07.gif> and <img src=image/13424080_08.gif> are adjacent, <img src=image/13424080_09.gif> if the vertices <img src=image/13424080_07.gif> and <img src=image/13424080_08.gif> are not adjacent, and <img src=image/13424080_10.gif> if <img src=image/13424080_11.gif>. Let <img src=image/13424080_12.gif> be the diagonal matrix where <img src=image/13424080_13.gif> denotes the degree of the <img src=image/13424080_14.gif> vertex of <img src=image/13424080_02.gif>. The Seidel Laplacian matrix of a graph <img src=image/13424080_02.gif> is defined as <img src=image/13424080_15.gif> and the Seidel signless Laplacian matrix of a graph <img src=image/13424080_02.gif> is defined as <img src=image/13424080_16.gif>. The zero-divisor graph of a commutative ring <img src=image/13424080_17.gif>, denoted by <img src=image/13424080_18.gif>, is a simple undirected graph whose vertices are all the non-zero zero-divisors, with two distinct vertices <img src=image/13424080_19.gif> adjacent if and only if <img src=image/13424080_20.gif>. In this paper, we find the Seidel polynomial and the Seidel Laplacian polynomial of the join of two regular graphs using the concepts of the Schur complement and the coronal of a square matrix. 
We also describe the computation of the Seidel Laplacian and Seidel signless Laplacian eigenvalues of the join of more than two regular graphs using the well-known Fiedler's lemma, and apply these results to describe these eigenvalues for the zero-divisor graph on <img src=image/13424080_21.gif>. Further, we find the Seidel Laplacian and Seidel signless Laplacian spectra of the zero-divisor graph of <img src=image/13424080_21.gif> for some values of <img src=image/13424080_03.gif>, say <img src=image/13424080_22.gif>, where <img src=image/13424080_23.gif> are distinct primes. We also prove that 0 is a simple Seidel Laplacian eigenvalue of <img src=image/13424080_24.gif>, for any <img src=image/13424080_03.gif>.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Some New Results on Equivalent Cauchy Sequences and Their Applications to Meir-Keeler Contraction in Partial Rectangular Metric Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11450]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Sidite Duraj&nbsp; &nbsp;Eriola Sila&nbsp; &nbsp;and Elida Hoxha&nbsp; &nbsp;</p><p>The study of fixed points in metric spaces plays a crucial role in the development of functional analysis. It evolves by generalizing the metric space or improving the contractive conditions. Recently, the partial rectangular metric space and its topology have been the focus of study for many researchers. They have defined open and closed balls, equivalent Cauchy sequences, Cauchy sequences, and convergent sequences, which are used as tools in many of the results achieved. In this paper, two facts about equivalent Cauchy sequences in a partial rectangular metric space are provided by using an ultra-altering distance function. Furthermore, some results on Cauchy sequences in a partial rectangular metric space are highlighted. It is proved that, under some conditions, equivalent Cauchy sequences are Cauchy sequences in a partial rectangular metric space. Some fixed point results are given as applications of our new conditions on Cauchy sequences and equivalent Cauchy sequences in a partial rectangular metric space <img src=image/13424354_01.gif> for orbitally continuous functions <img src=image/13424354_02.gif>. Some examples are given to illustrate the obtained results.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Trigonometric Ratios Using Algebraic Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11449]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Sameen Ahmed Khan&nbsp; &nbsp;</p><p>The main aim of this article is to begin with an expository introduction to the trigonometric ratios and then proceed to the latest results in the field. Historically, the exact ratios were obtained using geometric constructions. The geometric methods have their own limitations arising from certain theorems. In view of these limitations, we shall focus on the powerful techniques of the theory of equations for deriving the exact trigonometric ratios using surds. Cubic and higher-order equations naturally arise while deriving the exact trigonometric ratios. These equations are best expressed using the expansions of the cosine and sine of multiple angles via the Chebyshev polynomials of the first and second kind, respectively. So, we briefly present the essential properties of the Chebyshev polynomials. The equations lead to the question of reduced polynomials, which is addressed using Euler's totient function. So, we describe the techniques from the theory of equations and reduced polynomials. The trigonometric ratios of certain rational angles (when measured in degrees) give rise to rational trigonometric ratios. We shall discuss these along with the related theorems. This is a frontline area of research connecting trigonometry and number theory. Results from number theory and the theory of equations are presented wherever required.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[The Combinatorial Expressions and Probability of Random Generation of Binary Palindromic Digit Combinations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11448]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Vladislav V. Lyubimov&nbsp; &nbsp;</p><p>The aim of this paper is to obtain three types of expressions for calculating the probability of realizing palindromic digit combinations in a finite, equally likely combination of zeros and ones. The classical definition of probability is applied when calculating the probability of realizing palindromic digit combinations. The main results of the paper are formulated in the form of three theorems. Moreover, the corollaries of these theorems and typical examples of calculating the probability of realizing palindromic digit combinations in a binary-code data string are considered. All formulated theorems and their corollaries are accompanied by proofs. The numerical results of the paper can be used in the analysis of numerical computer data written as a binary code string in BIN-format files. It should also be noted that the combinatorial expressions described in the article for counting palindromic digit combinations in the binary number system can be used in number theory and in various branches of computer science. Developing these results toward an expression counting the palindromic digit combinations in the binary number system contained in two-dimensional data arrays is also of immediate theoretical and practical interest. However, those results are not presented in this work; they may be considered in subsequent publications.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[A Modified Perry's Conjugate Gradient Method Based on Powell's Equation for Solving Large-Scale Unconstrained Optimization]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11447]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Mardeen Sh. Taher&nbsp; &nbsp;and Salah G. Shareef&nbsp; &nbsp;</p><p>The conjugate gradient method remains popular among researchers focused on solving large-scale unconstrained optimization problems and nonlinear equations, because it avoids the computation and storage of certain matrices, so its memory requirements are very small. In this work, a modified Perry conjugate gradient method that achieves global convergence under standard assumptions is presented and analyzed. The idea of the new method is based on the Perry method, using the equation introduced by Powell in 1978. The weak Wolfe–Powell conditions are used to choose the optimal line search, and under this line search and suitable conditions, we prove both the descent and sufficient descent conditions. In particular, numerical results show that the new conjugate gradient method is more effective and competitive when compared to standard conjugate gradient methods, including the CG Hestenes and Stiefel (H/S) method, the CG Perry method, and the CG Dai and Yuan (D/Y) method. The comparison is carried out on a group of standard test problems with various dimensions from the CUTEst test library, and the comparative performance of the methods is evaluated by the total number of iterations and the total number of function evaluations.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[An Analytical Study for Caputo Fractional Derivative on Unsteady Casson Fluid with Thermal Radiation Effect]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11446]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Ridhwan Reyaz&nbsp; &nbsp;Ahmad Qushairi Mohamad&nbsp; &nbsp;Yeaou Jiann Lim&nbsp; &nbsp;Muhammad Saqib&nbsp; &nbsp;Zaiton Mat Isa&nbsp; &nbsp;and Sharidan Shafie&nbsp; &nbsp;</p><p>Studies on Casson fluid are essential to the development of the manufacturing and engineering fields, where it is widely used. Meanwhile, the fractional derivative is known to be a constructive concept that can be beneficial in the future. In this study, the application of fractional derivatives to Casson fluid flow is investigated. A fractional Casson fluid model with the effect of thermal radiation is derived together with the momentum and energy equations. The Caputo definition of the fractional derivative is used in the mathematical formulation. Casson fluid with constant wall temperature over an oscillating plate in the presence of thermal radiation is considered. Solutions were obtained using the Laplace transform and are presented in the form of the Wright function. Graphical analysis of the velocity and temperature profiles was conducted with variations in parameter values such as the fractional parameter, Grashof number, Prandtl number, and radiation parameter. Numerical computations were carried out to investigate the behaviours of the skin friction and Nusselt number. It is found that when the fractional parameter is increased, the velocity and temperature profiles also increase. The presence of the fractional parameter in both the velocity and temperature profiles captures the transition of both profiles from an unsteady state to a steady state, providing a new perspective on Casson fluid flow. An increment in both profiles is also observed when the thermal radiation parameter is increased. The present results are validated against published results and found to be in agreement with them.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Relative Coprime Probability and Graph for Some Nonabelian Groups of Small Order and Their Associated Graph Properties]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11445]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Nurfarah Zulkifli&nbsp; &nbsp;and Nor Muhainiah Mohd Ali&nbsp; &nbsp;</p><p>Let <img src=image/13491971_01.gif> be a finite group. The probability that two elements <img src=image/13491971_02.gif> from <img src=image/13491971_03.gif> and <img src=image/13491971_04.gif> from <img src=image/13491971_01.gif>, selected at random, are such that the greatest common divisor (gcd) of the orders of <img src=image/13491971_02.gif> and <img src=image/13491971_04.gif> is equal to one, is called the relative coprime probability. Meanwhile, the relative coprime graph is defined as the graph whose vertices, or nodes, are the elements of a group, with two distinct vertices adjacent if and only if their orders are coprime and at least one of them is in the subgroup of the group. This research focuses on determining the relative coprime probability and graph for cyclic subgroups of some nonabelian groups of small order, and their associated graph properties, by referring to the definitions and theorems given by previous researchers. Various results on the relative coprime probability for nonabelian groups of small order are obtained. As for the relative coprime graph, the results show that the domination number for each group is one, whereas the number of edges and the independence number vary for each group. The types of graphs that can be formed are star graphs, planar graphs, or complete <img src=image/13491971_05.gif> subgraphs, depending on the order of the subgroup of a group.</p>]]></description>
<pubDate>Nov 2021</pubDate>
</item>
<item>
<title><![CDATA[Unbounded Toeplitz Operators with Rational Symbols]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11420]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Domenico P.L. Castrigiano&nbsp; &nbsp;</p><p>Unbounded (and bounded) Toeplitz operators (TO) with rational symbols are analysed in detail showing that they are densely defined closed and have finite dimensional kernels and deficiency spaces. The latter spaces as well as the domains, ranges, spectral and Fredholm points are determined. In particular, in the symmetric case, i.e., for a real rational symbol the deficiency spaces and indices are explicitly available. — The concluding section gives a brief overview on the research on unbounded TO in order to locate the present contribution. Regarding properties of unbounded TO in general, it furnishes some new results recalling the close relationship to Wiener-Hopf operators and, in case of semiboundedness, to singular operators of Hilbert transformation type. Specific symbols considered in the literature admit further analysis. Some conclusions are drawn for semibounded integrable and real square-integrable symbols. There is an approach to semibounded TO, which starts from closable semibounded forms related to a Toeplitz matrix. The Friedrichs extension of the TO associated with such a form is studied. Finally, analytic TO and Toeplitz-like operators are briefly examined, which in general differ from the TO treated here.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Unique Common Tripled Fixed Point for Three Mappings in <img src=image/13424616_01.gif> Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11419]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>K. Kumara Swamy&nbsp; &nbsp;Swatmaram&nbsp; &nbsp;Bipan Hazarika&nbsp; &nbsp;and P. Sumati Kumari&nbsp; &nbsp;</p><p>It has been a century since the Banach fixed point theorem was established, and because of this the result is, in some ways, a progenitor. It therefore seems essential to revisit fixed point theorems, which are numerous and prevalent in mathematics, as we will demonstrate. Fixed point theorems can be found in advanced mathematics, economics, micro-structures, geometry, dynamics, computational mathematics, and differential equations. A <img src=image/13424616_03.gif> space broadens and extrapolates the paradigm of the concept of a metric space. The characteristic of a <img src=image/13424616_03.gif> space, in essence, is to capture the topological features of three points rather than two via the perimeter of a triangle, where the metric indicates the distance between two points. The class of <img src=image/13424616_02.gif> spaces is significantly larger than the class of <img src=image/13424616_03.gif> spaces. Hence we utilised this generalized space in order to obtain a common tripled fixed point for three mappings using rational-type contractions in the setting of <img src=image/13424616_02.gif> spaces. Recently, Khomadram et al. have developed coupled fixed point theorems in <img src=image/13424616_02.gif> spaces via rational-type contractions. The main aim of our paper is to broaden and extrapolate Khomadram's results into tripled fixed point theorems. Examples are offered to support our findings.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Numerical Solution of Ostrovsky Equation over Variable Topography Passes through Critical Point Using Pseudospectral Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11418]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Nik Nur Amiza Nik Ismail&nbsp; &nbsp;Azwani Alias&nbsp; &nbsp;and Fatimah Noor Harun&nbsp; &nbsp;</p><p>Internal solitary waves have been documented in several parts of the world. This paper examines the effects of variable topography and rotation on the evolution of internal waves of depression. Here, the wave is considered to be propagating in a two-layer fluid system, with the background topography assumed to be rapidly or slowly varying. The appropriate mathematical model to describe this situation is therefore the variable-coefficient Ostrovsky equation. In particular, the study is interested in the transition of the internal solitary wave of depression when there is a polarity change under the influence of background rotation. The numerical results using the pseudospectral method show that, over time, the internal solitary wave of elevation transforms into the internal solitary wave of depression as it propagates down a decreasing slope and changes its polarity. However, if the background rotation is considered, the internal solitary waves decompose and form a wave packet, and its envelope amplitude decreases slowly due to the decreasing bottom surface. The numerical solutions show that the combined effect of variable topography and rotation when passing through the critical point affects the features and speed of the travelling solitary waves.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[A Note on Some Integrals by Malmsten and Bierens de Haan]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11417]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Robert Reynolds&nbsp; &nbsp;and Allan Stauffer&nbsp; &nbsp;</p><p>Carl Johan Malmsten (1846) and David Bierens de Haan (1847) published work containing some interesting integrals. While no formal derivations of the integrals in the book Nouvelles Tables d'Intégrales Définies are available in the current literature, deriving and evaluating such formulae is useful in all aspects of science and engineering wherever they are used. Formulae in the book of Bierens de Haan are used in connection with certain potential problems, such as determining the vector potential of two parallel, infinitely long, tubular rectangular conductors carrying currents in opposite directions. In the current work, we supply formal derivations for some of these integrals, along with deriving some special cases as new integrals, in order to expand upon the book of Bierens de Haan and to aid potential research where these formulae are applicable. Updating a book of integrals is always a useful exercise, as it keeps the volume accurate and more useful for readers and researchers. Formal derivations are also useful because they help verify the correctness of the integrals in such volumes. The definite integral derived in this work is given by <img src=image/13424330_01.gif> (1) in terms of the Lerch function, where the parameters a, k, m, and p are general complex numbers subject to their restrictions. This formal derivation is then used to derive the correct version of a definite integral transform along with new formulae. Some of the results in this work are new.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Structural Properties of the Essential Ideal Graph of <img src=image/13424240_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11416]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>P Jamsheena&nbsp; &nbsp;and A V Chithra&nbsp; &nbsp;</p><p>Let <img src=image/13424240_02.gif> be a commutative ring with unity. The essential ideal graph of <img src=image/13424240_02.gif>, denoted by <img src=image/13424240_03.gif>, is the graph whose vertex set consists of all nonzero proper ideals of A, with two vertices <img src=image/13424240_04.gif> and <img src=image/13424240_05.gif> adjacent whenever <img src=image/13424240_06.gif> is an essential ideal. An essential ideal <img src=image/13424240_04.gif> of a ring <img src=image/13424240_02.gif> is an ideal <img src=image/13424240_04.gif> of <img src=image/13424240_02.gif> (<img src=image/13424240_07.gif>) having nonzero intersection with every other ideal of <img src=image/13424240_02.gif>. The set <img src=image/13424240_08.gif> contains all the maximal ideals of <img src=image/13424240_02.gif>. The Jacobson radical of <img src=image/13424240_02.gif>, <img src=image/13424240_09.gif>, is the intersection of all maximal ideals of <img src=image/13424240_02.gif>. The comaximal ideal graph of <img src=image/13424240_02.gif>, denoted by <img src=image/13424240_11.gif>, is the simple graph whose vertices are the proper ideals of A not contained in <img src=image/13424240_09.gif>, with vertices <img src=image/13424240_04.gif> and <img src=image/13424240_05.gif> joined by an edge whenever <img src=image/13424240_10.gif>. In this paper, we study the structural properties of the graph <img src=image/13424240_03.gif> using ring-theoretic concepts. We obtain a characterization for <img src=image/13424240_03.gif> to be isomorphic to the comaximal ideal graph <img src=image/13424240_11.gif>. 
Moreover, we derive the structure theorem of <img src=image/13424240_12.gif> and determine graph parameters such as the clique number, chromatic number and independence number. We also characterize the perfectness of <img src=image/13424240_12.gif> and determine the values of <img src=image/13424240_13.gif> for which <img src=image/13424240_12.gif> is split, claw-free, Eulerian and Hamiltonian. In addition, we show that the finite essential ideal graph of any non-local ring is isomorphic to <img src=image/13424240_12.gif> for some <img src=image/13424240_13.gif>.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[An Approximate Solution to Predator-prey Models Using the Differential Transform Method and Multi-step Differential Transform Method, in Comparison with Results of the Classical Runge-Kutta Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11415]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Adeniji A A&nbsp; &nbsp;Noufe H. A&nbsp; &nbsp;Mkolesia A C&nbsp; &nbsp;and Shatalov M Y&nbsp; &nbsp;</p><p>Predator-prey models are building blocks of ecosystems, as biomasses are grown out of their resource masses. Different relationships exist within these models as interacting species compete, undergo metamorphosis, and migrate strategically in search of resources to sustain their struggle to exist. To investigate these assumptions numerically, ordinary differential equations are formulated, and a variety of methods are used to obtain approximate solutions and compare them against exact solutions, although most numerical methods often require heavy, time-consuming computations. In this paper, the traditional differential transform method (DTM) is implemented to obtain a numerical approximate solution to predator-prey models. The solution obtained with DTM converges only locally, within a small domain. The multi-step differential transform method (MSDTM) is a technique that improves on DTM by increasing the interval of convergence of the series expansion. One-predator-one-prey and two-predator-one-prey models are considered, with a quadratic term signifying other food sources for feeding. The numerical and graphical results show the point at which DTM diverges. The advantage of the new algorithm is that the obtained series solution converges over wide time regions, and the solutions obtained from DTM and MSDTM are compared with those obtained using the classical Runge-Kutta method of order four. The results demonstrate that MSDTM computes quickly, is reliable, and gives good results compared with the solutions obtained using the classical Runge-Kutta method.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[The Fractional Residual Power Series Method for Solving a System of Linear Fractional Fredholm Integro-differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11414]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Prapart Pue-on&nbsp; &nbsp;</p><p>In this manuscript, the fractional residual power series (FRPS) method is employed to solve a system of linear fractional Fredholm integro-differential equations. The significant role of this system in various fields has attracted the attention of researchers for a decade. The fractional derivative here is defined in the Caputo sense. The proposed method relies on the generalized Taylor series expansion as well as the fact that the fractional derivative of a constant is zero. The process starts by constructing a residual function, supposing a finite-order approximate power series solution that satisfies the initial conditions. Then, using certain conditions, the residual functions are converted into a linear system for the power series coefficients. Solving this linear system reveals the coefficients of the fractional power series solution. Finally, substituting these coefficients into the supposed form of the solution yields the approximate fractional power series solutions. This technique has the advantage of being applicable directly to the problem while requiring less computation time. It is not only easy to implement, but also provides productive results after a few iterations. Some problems with known solutions emphasize the procedure's simplicity and reliability. Moreover, the obtained exact solutions demonstrate the efficiency and accuracy of the presented method.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Estimating the Entropy and Residual Entropy of a Lomax Distribution under Generalized Type-II Hybrid Censoring]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11413]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mahmoud Riad Mahmoud&nbsp; &nbsp;Moshera. A. M. Ahmad&nbsp; &nbsp;and Badiaa. S. Kh. Mohamed&nbsp; &nbsp;</p><p>The Lomax (or Pareto II) distribution was first introduced by K. S. Lomax in 1954. It can be readily applied to a wide range of situations, including the analysis of business failure lifetime data, economics and actuarial science, income and wealth inequality, city sizes, engineering, and lifetime and reliability modeling. In his pioneering paper, Shannon (1948) defined the notion of entropy as a mathematical measure of information, sometimes called Shannon entropy in his honor. He laid the groundwork for a new branch of mathematics in which the notion of entropy plays a fundamental role across areas of application such as statistics, information theory, financial analysis, and data compression. Ebrahimi and Pellerey [14] introduced the residual entropy function because entropy should not be applied to a system that has already survived for some units of time; residual entropy is therefore used to measure ageing and to characterize, classify and order lifetime distributions. In this paper, the estimation of the entropy and residual entropy of a two-parameter Lomax distribution under a generalized Type-II hybrid censoring scheme is introduced. The maximum likelihood estimate of the entropy is provided and the Bayes estimate of the residual entropy is obtained. Simulation studies assessing the performance of the estimates for different sample sizes are described, and conclusions are discussed.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[A Moment Based Approximation for Expected Number of Renewals for Non-Negligible Repair]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11412]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Dilcu Barnes&nbsp; &nbsp;and Saeed Maghsoodloo&nbsp; &nbsp;</p><p>This paper focuses on the renewal function, which is simply the mathematical expectation of the number of renewals in a stochastic process. Renewal functions are important and have applications in many fields. However, obtaining an analytical expression for the renewal function may be very complicated, or even impossible, so researchers have focused on developing approximation methods. The purpose of this paper is to explore renewal functions under non-negligible repair for the most common underlying reliability distributions, using the first four raw moments of the failure and repair distributions. This article gives the approximate number of cycles, number of failures and the resulting availability for particular distributions, assuming that the Mean Time to Repair is not negligible and that the Time to Restore, or repair, has a probability density function denoted r(t). Under these assumptions, the expected number of failures, number of cycles and the resulting availability were obtained by taking the Laplace transforms of the corresponding renewal functions. An approximation method for obtaining the expected number of cycles, number of failures and availability using the raw moments of the failure and repair distributions is provided. Results show that the method produces very accurate results, especially for large values of time t.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Prediction Variance Capabilities of Third-Order Response Surface Designs for Cuboidal Regions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11411]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Brenda Mbouamba Yankam&nbsp; &nbsp;and Abimibola Victoria Oladugba&nbsp; &nbsp;</p><p>Experimenters often evaluate the steadiness and consistency of designs over the region of interest by means of their prediction variance capabilities, using the variance dispersion graph and the fraction of design space graph. These graphs effectively describe the prediction variance capabilities of a design in the region of interest. However, the prediction variance capabilities of third-order response surface designs have not been studied in the literature. In this paper, the prediction variance capabilities of two third-order response surface designs, termed augmented orthogonal uniform composite designs and orthogonal array composite designs, are examined in the cuboidal region for 3≤k≤7 with <img src=image/13424429_01.gif> center points. The prediction variance capabilities are evaluated using the variance dispersion graph and the fraction of design space graph. The D-, E-, G- and T-optimality criteria are also used to evaluate these designs in terms of single-value criteria. The results obtained show that the augmented orthogonal uniform composite designs have better prediction variance capabilities in the cuboidal region in terms of the variance dispersion graphs for factors 3 and 4. The augmented orthogonal uniform composite designs also have better prediction variance capabilities for 3≤k≤7 compared to the orthogonal array composite designs in terms of the fraction of design space graph. The augmented orthogonal uniform composite designs are shown to be superior to the orthogonal array composite designs in terms of the D-, E-, G- and T-optimality criteria for single-value criteria. 
This shows that the prediction variance capabilities of third-order response surface designs can be clearly visualized by means of the variance dispersion graph and the fraction of design space graph, which should be considered over single-value criteria even though single-value criteria show some degree of design performance. The augmented orthogonal uniform composite design should often be preferred in experimentation over the orthogonal array composite design, since it performs better.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Triangle Conics, Cubics and Possible Applications in Cryptography]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11245]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Veronika Starodub&nbsp; &nbsp;Ruslan V. Skuratovskii&nbsp; &nbsp;and Sergii S. Podpriatov&nbsp; &nbsp;</p><p>We research triangle cubics and conics in classical geometry with elements of projective geometry. In recent years, N.J. Wildberger has actively dealt with this topic from an algebraic perspective. Triangle conics were also studied in detail by H.M. Cundy and C.F. Parry. The main task of the article is the development of a method for creating curves that pass through triangle centers. During the research, it was noticed that some different triangle centers in distinct triangles coincide. The simplest example: the incenter of a base triangle is the orthocenter of its excentral triangle. This is the key to creating an algorithm. Indeed, we can match points belonging to one curve (the base curve) with points of another triangle, and thereby obtain a new fascinating geometrical object. During the research, a number of new triangle conics and cubics were derived and their properties in Euclidean space were considered. In addition, corollaries of the obtained theorems in projective geometry are discussed, which proves that all of the discovered results can be transferred to the projective plane. It is well known that many modern cryptosystems are naturally formulated in terms of elliptic curves. We investigate the class of curves applicable in cryptography.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Category of Submodules of a Uniserial Module]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11244]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Fitriani&nbsp; &nbsp;Indah Emilia Wijayanti&nbsp; &nbsp;Budi Surodjo&nbsp; &nbsp;Sri Wahyuni&nbsp; &nbsp;and Ahmad Faisol&nbsp; &nbsp;</p><p>Let R be a ring, K and M be R-modules, L a uniserial R-module, and X a submodule of L. The triple (K,L,M) is said to be X-sub-exact at L if the sequence K→X→M is exact. Let σ(K,L,M) be the set of all submodules Y of L such that (K,L,M) is Y-sub-exact. The sub-exact sequence is a generalization of an exact sequence. We collect all triples (K,L,M) such that (K,L,M) is an X-sub-exact sequence, where X is a maximal element of σ(K,L,M). In a uniserial module, all submodules are comparable under inclusion, so we can find the maximal element of σ(K,L,M). In this paper, we prove that the set σ(K,L,M) forms a category, which we denote by C<sub>L</sub>. Furthermore, we prove that C<sub>Y</sub> is a full subcategory of C<sub>L</sub> for every submodule Y of L. Next, we show that if L is a uniserial module, then C<sub>L</sub> is a pre-additive category. Every morphism in C<sub>L</sub> has a kernel under some conditions. Since a factor module of L is not a submodule of L, a morphism in the category C<sub>L</sub> does not have a cokernel, so C<sub>L</sub> is not an abelian category. Moreover, we investigate monic and epic X-sub-exact sequences. We prove that the triple (K,L,M) is monic X-sub-exact if and only if the triple of Z-modules (<img src=image/13424409_01.gif>, <img src=image/13424409_02.gif>, <img src=image/13424409_03.gif>) is a monic <img src=image/13424409_04.gif>-sub-exact sequence for all R-modules N. 
Furthermore, the triple (K,L,M) is epic X-sub-exact if and only if the triple of Z-modules (<img src=image/13424409_05.gif>, <img src=image/13424409_06.gif>, <img src=image/13424409_07.gif>) is a monic <img src=image/13424409_08.gif>-sub-exact sequence for all R-modules N.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[B-spline Estimation for Force of Mortality]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11243]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Tserenbat Oirov&nbsp; &nbsp;Gereltuya Terbish&nbsp; &nbsp;and Nyamsuren Dorj&nbsp; &nbsp;</p><p>The paper focuses on the estimation of the force of mortality of the lifetime distribution. We use a third-order B-spline function to model the logarithm of the force of mortality. The number of knots, their locations and the B-spline coefficients are estimated from a sample of observations by the maximum likelihood method. The B-spline parameters estimated by maximum likelihood are evaluated using a modified chi-squared goodness-of-fit statistic. An algorithm was developed to carry out a sequential procedure for modified chi-squared goodness-of-fit testing, and Matlab code was written implementing the algorithm. Within this evaluation, the number of knots in the model was significantly reduced. The developed method was used to describe the mortality rate of women aged 0 to 69 among the Mongolian population in 2019 and to estimate the life expectancy of Mongolians. The results of this experiment provided an excellent estimate of the force of mortality. Constructing a mortality rate estimate makes it possible to determine mortality trends and the force of mortality. Here, the force of mortality is further used to construct a survival function, a lifetime distribution function, and a lifetime probability density function. The method can also be used in financial market models and in models that estimate the useful life of equipment.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Mellin Transform of an Exponential Fourier Transform Expressed in Terms of the Lerch Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11242]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Robert Reynolds&nbsp; &nbsp;and Allan Stauffer&nbsp; &nbsp;</p><p>The aim of this paper is to provide a table of definite integrals which includes both known and new integrals. This work is important because we provide formal derivations for integrals in [7] not currently present in the literature, along with new integrals. By deriving new integrals we hope to expand the current list of integral formulae, which could assist in research where applicable. The authors apply their contour integral method [9] to an integral in [8] to obtain this new integral formula in terms of the Lerch function. In the present work, the authors provide a formal derivation of an interesting exponential Fourier transform and express it in terms of the Lerch function. The exponential Fourier transform has many real-world applications, notably in electrical engineering, in the work on electrical transients by [10], and in civil engineering, in the work on stress analysis of boundary loads on soil by [11]. The definite integral derived in this work is given by <img src=image/13424121_01.gif> (1) where the variables satisfy <img src=image/13424121_02.gif>. This formal derivation is then used to derive the correct version of a definite integral transform along with new formulae. Some of the results in this work are new.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[The Relative Rank of Transformation Semigroups with Restricted Range on a Finite Chain]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11241]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Kittisak Tinpun&nbsp; &nbsp;</p><p>Let S be a semigroup and let G be a subset of S. A set G is a generating set of S, which is denoted by <img src=image/13423907_08.gif>. The rank of S is the minimal size, or minimal cardinality, of a generating set of S, i.e. rank <img src=image/13423907_01.gif>. In the last twenty years, the rank of semigroups has been studied worldwide by many researchers. This led to a new notion of rank, the relative rank of S modulo U: the minimal size of a subset <img src=image/13423907_02.gif> such that <img src=image/13423907_03.gif> generates S, i.e. rank <img src=image/13423907_04.gif>. A set <img src=image/13423907_02.gif> with <img src=image/13423907_09.gif> is called a generating set of S modulo U. The idea of the relative rank generalizes the concept of the rank of a semigroup and was first introduced by Howie, Ruskuc and Higgins in 1998. Let X be a finite chain and let Y be a subchain of X. We denote by <img src=image/13423907_10.gif> the semigroup of full transformations on X under composition of functions. Let <img src=image/13423907_11.gif> be the set of all transformations from X to Y, the so-called transformation semigroup with restricted range Y. It was first introduced and studied by Symons in 1975, and many results in <img src=image/13423907_10.gif> have been extended to results in <img src=image/13423907_11.gif>. In this paper, we focus on the relative rank of the semigroup <img src=image/13423907_11.gif> and the semigroup <img src=image/13423907_05.gif> of all orientation-preserving transformations in <img src=image/13423907_11.gif>. 
In Section 2.1, we determine the relative rank of <img src=image/13423907_11.gif> modulo the semigroup <img src=image/13423907_06.gif> of all order-preserving or order-reversing transformations. In Section 2.2, we describe the results on the relative rank of <img src=image/13423907_11.gif> modulo the semigroup <img src=image/13423907_05.gif>. In Section 2.3, we determine the relative rank of <img src=image/13423907_11.gif> modulo the semigroup <img src=image/13423907_07.gif> of all orientation-preserving or orientation-reversing transformations. Moreover, we show that the relative ranks of <img src=image/13423907_11.gif> modulo <img src=image/13423907_05.gif> and modulo <img src=image/13423907_07.gif> are equal.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[On New Generalized Fuzzy Directed Divergence Measure and Its Application in Decision Making Problem]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11240]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Bhagwan Dass&nbsp; &nbsp;Vijay Prakash Tomar&nbsp; &nbsp;Krishan Kumar&nbsp; &nbsp;and Vikas Ranga&nbsp; &nbsp;</p><p>The concept of fuzzy sets presented by Zadeh has achieved enormous success in numerous fields. Uncertainty is ubiquitous in the real world, and entropy is an important tool for dealing with uncertainty and fuzziness. In this article, we propose a new measure of directed divergence on fuzzy sets. Extensions of fuzzy sets, and versions integrated with other theories, have been applied by various researchers. To prove the validity of the measure, some axioms are proved. Using the proposed measure, we develop a suitable method based on decision-making criteria. In this article, we describe a directed divergence measure for fuzzy sets and discuss the properties of the proposed measure. In the real world, multicriteria decision making is a very practical method with a wide range of uses: using it, we can find the best choice among the given criteria. In recent years, many researchers have applied fuzzy directed divergence extensively to multicriteria decision making, and some have described the application of parameterized hesitant fuzzy soft set theory in decision making. In this article, we investigate the multiple criteria decision making problem in a fuzzy environment. An application of the introduced measure to a decision making problem is given, along with a numerical example. In a fuzzy multicriteria problem, the analysis is illustrated by an example of the newly defined approach regarding the admission preference of a student for a postgraduate course in the science stream.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Choice of Strata Boundaries for Allocation Proportional to Stratum Cluster Totals in Stratified Cluster Sampling]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11239]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Bhuwaneshwar Kumar Gupt&nbsp; &nbsp;F. Lalthlamuanpuii&nbsp; &nbsp;and Md. Irphan Ahamed&nbsp; &nbsp;</p><p>In survey planning, situations sometimes arise that call for cluster sampling because of the nature of the spatial relationship between elements of the population, the physical features of the land over which elements are dispersed, or the unavailability of a reliable list of elements. At the same time, techniques and strategies are required for ensuring the precision of the sample in representing the parent population. Although several theoretical and practical works exist on cluster sampling, stratified sampling and stratified cluster sampling, the problem of stratified cluster sampling for a study variable based on an auxiliary variable, which is required in practice, has so far never been approached. For the first time, this paper deals with the problem of optimum stratification of a population of clusters, in cluster sampling with clusters of equal size, for a characteristic y under study based on a highly correlated concomitant variable x, for allocation proportional to stratum cluster totals under a super population model. Equations giving the optimum strata boundaries (OSB) for dividing the population, in which the sampling unit is a cluster, are obtained by minimising the sampling variance of the estimator of the population mean. As the equations are implicit in nature, a few methods of finding approximately optimum strata boundaries (AOSB) are deduced from the equations giving the OSB. In deriving the equations, mathematical tools of calculus and algebra are used in addition to the statistical method of finding the conditional expectation of variance. 
All the proposed methods of stratification are empirically examined using live data, the population of villages in the Lunglei and Serchhip districts of Mizoram State, India, and are found to perform efficiently in stratifying the population. The proposed methods may provide a practically feasible solution in planning socio-economic surveys.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Analytical Solutions of ARL for SAR(p)<sub>L</sub> Model on a Modified EWMA Chart]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11238]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Piyatida Phanthuna&nbsp; &nbsp;and Yupaporn Areepong&nbsp; &nbsp;</p><p>A modified exponentially weighted moving average (EWMA) scheme, expanded from the EWMA chart, is an instrument for the immediate detection of small shift sizes. The objective of this research is to derive an explicit formula for the average run length (ARL) on a modified EWMA control chart for observations from a seasonal autoregressive model of order p (SAR(p)<sub>L</sub>) with exponential residuals. A numerical integral equation method is used to approximate the ARL in order to check the accuracy of the explicit formulas. The results of the two methods show that their ARL solutions are close, with a percentage of absolute relative change (ARC) of less than 0.002. Furthermore, the modified EWMA chart with the SAR(p)<sub>L</sub> model is tested for shift detection when the parameters c and <img src=image/13424135_01.gif> are changed; the ARL and relative mean index (RMI) results improve as c and <img src=image/13424135_01.gif> increase. In addition, the performance of the modified EWMA control chart is compared with the EWMA scheme, and the results favor the modified EWMA chart for small shifts. Finally, the explicit formula can be applied to various real-world data; for example, two data sets on information and communication technology are used to validate the capability of our techniques.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Hesitant Fuzzy Network Approach for Alternatives Selection with Incomplete Weight Information]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11237]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Shahira Shafie&nbsp; &nbsp;and Abdul Malek Yaakob&nbsp; &nbsp;</p><p>Networked rule bases in fuzzy systems, known as fuzzy networks, carry multiple stages of development in decision-making processes involving uncertainty in the data used across various fields. A fuzzy network promotes transparency in multicriteria decision making (MCDM), whereby the criteria are divided into cost and benefit subsystems to ensure good assessment performance. By considering hesitant fuzzy sets (HFS), which permit a set of possible values to represent the membership degree of an element, we develop a novel approach that applies a fuzzy network and the maximizing deviation method to solve MCDM problems. The fuzzy network addresses transparency in the formulation, and the maximizing deviation method can recover weight information in MCDM problems whether it is partially known or fully unknown. The proposed method is applied to a case study of stock evaluation, with opinions given by several decision makers, and compared in terms of performance using Spearman's rho correlation.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Starter Set Generation Based on Factorial Numbers for Half Wing of Butterfly Representation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11236]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Sharmila Karim&nbsp; &nbsp;and Haslinda Ibrahim&nbsp; &nbsp;</p><p>Permutation remains an interesting subject to explore today and is widely applied in many areas. This paper presents the use of factorial numbers for generating starter sets, which are used for listing permutations. Previously, starter sets were generated from their permutations using exchange-based and cycling-based methods. In the new algorithm, this process is replaced by factorial numbers. The underlying theory is that there are <img src=image/13491677_01.gif> distinct starter sets. For lexicographic-order permutations only, every permutation has a decimal number from zero to <img src=image/13491677_02.gif>. Each decimal number is converted to a factorial number, and the factorial number is then mapped to its corresponding starter set. After that, the Half Wing of Butterfly is presented. The advantage of using factorial numbers is the avoidance of a recursive call function for starter set generation; in other words, any starter set can be generated by calling any decimal number. This new algorithm is still at an early stage and under development for generating the half wing of butterfly representation. The case n=5 is demonstrated for the new algorithm for lexicographic-order permutation. In conclusion, this new development is applicable only to generating starter sets in lexicographic order, because factorial numbers apply to lexicographic-order permutations.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Robust Multivariate Location Estimation in the Existence of Casewise and Cellwise Outliers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11235]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Yik-Siong Pang&nbsp; &nbsp;Nor Aishah Ahad&nbsp; &nbsp;and Sharipah Soaad Syed Yahaya&nbsp; &nbsp;</p><p>Multivariate outliers can exist in two forms: casewise and cellwise. Collected data typically contain unknown proportions and types of outliers, which can jeopardize location estimation and affect research findings. When the two coexist in the same data set, the traditional distance-based trimmed mean and coordinate-wise trimmed mean are unable to estimate location well. The distance-based trimmed mean suffers from leftover cellwise outliers after trimming, whereas the coordinate-wise trimmed mean is affected by extra casewise outliers. Thus, this paper proposes a new robust multivariate location estimator, the α-distance-based trimmed median (<img src=image/13491675_01.gif>), to deal with both types of outliers simultaneously in a data set. Simulated data were used to illustrate the feasibility of the new procedure by comparison with the classical mean, the classical median, and the α-distance-based trimmed mean. Undeniably, the classical mean performed best on clean data, but poorly on contaminated data. Meanwhile, the classical median outperformed the distance-based trimmed mean when dealing with both casewise and cellwise outliers, but it was still affected by the combined outliers' effect. Based on the simulation results, the proposed <img src=image/13491675_01.gif> yields better location estimates on contaminated data than the other three estimators considered in this paper. Thus, the proposed <img src=image/13491675_01.gif> can mitigate the issue of outliers and provide better location estimation.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[On the Representation of the Weight Enumerator of <img src=image/13424212_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11234]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mans L Mananohas&nbsp; &nbsp;Charles E Mongi&nbsp; &nbsp;Dolfie Pandara&nbsp; &nbsp;Chriestie E J C Montolalu&nbsp; &nbsp;and Muhammad P M Mo'o&nbsp; &nbsp;</p><p>The weight enumerator of a code is a homogeneous polynomial that provides a great deal of information about the code; hence, for the development of a code, research on the weight enumerator is very important. In this study, we focus on the code <img src=image/13424212_13.gif>. Let <img src=image/13424212_02.gif> be the weight enumerator of the code <img src=image/13424212_13.gif>. Fujii and Oura showed that <img src=image/13424212_02.gif> is generated by <img src=image/13424212_03.gif> and <img src=image/13424212_04.gif>; indeed, we show that <img src=image/13424212_02.gif> is an element of the polynomial ring <img src=image/13424212_05.gif>. We know that the weight enumerator of every self-dual doubly-even (Type II) code is generated by <img src=image/13424212_03.gif> and <img src=image/13424212_06.gif>. Recall that <img src=image/13424212_13.gif> is a Type II code. Thus, <img src=image/13424212_02.gif> is an element of the polynomial rings <img src=image/13424212_07.gif> and <img src=image/13424212_08.gif>. One motivation of this research is to investigate the connection between these two polynomial rings in representing <img src=image/13424212_02.gif>. Let <img src=image/13424212_09.gif> and <img src=image/13424212_10.gif> be the coefficients of the polynomials that represent <img src=image/13424212_02.gif> as an element of <img src=image/13424212_07.gif> and <img src=image/13424212_08.gif>, respectively. We find that <img src=image/13424212_10.gif> is an element of the polynomial <img src=image/13424212_11.gif>. 
In addition, we show that there are no weight enumerators of Type II codes generated by <img src=image/13424212_03.gif> and <img src=image/13424212_12.gif> that can be written uniquely as isobaric polynomials in five homogeneous polynomial elements of degrees 8, 24, 24, 24, 24.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[The Theory of Pure Algebraic (Co)Homology]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11233]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Alaa Hassan Noreldeen&nbsp; &nbsp;Wageeda M. M.&nbsp; &nbsp;and O. H. Fathy&nbsp; &nbsp;</p><p>Polynomial algebra is essential in commutative algebra, since it can serve as a fundamental model for differentiation. For module differentials and Loday's differential commutative graded algebra, the simplicial homology of polynomial algebra was defined. In this article, definitions of the simplicial, cyclic, and dihedral homology of pure algebra are presented. The simplicial and cyclic homology are defined for the algebra of polynomials and of Laurent polynomials. The long exact sequences of both cyclic homology and simplicial homology are presented, and the Morita invariance property of cyclic homology is established. The relationship <img src=image/13424161_01.gif> is introduced, representing the relationship between dihedral and cyclic (co)homology in polynomial algebra. Besides, the relationships <img src=image/13424161_02.gif> and <img src=image/13424161_03.gif> are examined, defining the relationship between dihedral and cyclic (co)homology of the Laurent polynomial algebra. Furthermore, the Morita invariance property of dihedral homology in polynomial algebra is investigated, and the Morita property of dihedral homology in Laurent polynomials is studied. For the dihedral homology, the long exact sequence <img src=image/13424161_04.gif> is obtained from the short sequence <img src=image/13424161_05.gif>. The long exact sequence of the short sequence <img src=image/13424161_05.gif> is also obtained for the reflexive (co)homology of polynomial algebra. Studying polynomial algebra may also aid calculations related to COVID-19 vaccines.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[A Monte Carlo Study for Dealing with Multicollinearity and Autocorrelation Problems in Linear Regression Using Two Stage Ridge Regression Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11232]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Hussein Eledum&nbsp; &nbsp;and Hytham Hussein Awadallah&nbsp; &nbsp;</p><p>In the multiple linear regression model, the problem of multicollinearity may occur together with autocorrelation; several estimation methods have therefore been developed to deal with this case, of which Two-Stage Ridge Regression (TR) is one. This article's main objective is to run a Monte Carlo simulation to investigate the impact of both problems, multicollinearity and autocorrelation, on the performance of TR in the multiple linear regression model. The simulation is carried out under different levels of multicollinearity and different autocorrelation coefficients, taking into account different sample sizes. Some new properties of the TR method, including its expectation, variance, and mean square error, are derived. The study also develops some techniques to estimate the biasing parameter for TR by modifying popular techniques used in ridge regression (RR). Moreover, the mean square error is used as the basis for evaluation and comparison. The empirical findings from the simulations reveal that the TR estimator performs better than RR, and the values of the biasing parameter under TR are always less than those under RR. This paper contributes to the existing literature on developing new estimation methods to overcome the presence of mixed problems in a linear regression model and on studying their properties.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[Methods of Stratification for a Generalised Auxiliary Variable Optimum Allocation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11231]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Md. Irphan Ahamed&nbsp; &nbsp;Bhuwaneshwar Kumar Gupt&nbsp; &nbsp;and Manoshi Phukon&nbsp; &nbsp;</p><p>In stratified sampling, ever since Dalenius [1] undertook the problem of optimum stratification, research in the area has been progressing in various perspectives and dimensions. Among the multifaceted developments, a few noteworthy aspects are different sample selection methods and allocations, study-variable-based stratification, auxiliary-variable-based stratification, superpopulation models, extension to two study variables for a single auxiliary variable, and extension to two stratification variables for a single study variable. However, with regard to optimum stratification of heteroscedastic populations, live populations being generally heteroscedastic, it was Gupt and Ahamed [2,3] who considered the problem for a few allocations under a heteroscedastic regression superpopulation (HRS) model. As a sequel to that work, this paper considers the problem of optimum stratification for an objective variable y based on a concomitant variable x under the HRS model, for an allocation proposed by Gupt [4,5] and termed the Generalised Auxiliary Variable Optimum Allocation (GAVOA). Methods of stratification, in the form of equations and approximate solutions to those equations, are obtained that stratify populations at optimum strata boundaries (OSB) and approximately optimum strata boundaries (AOSB), respectively. Mathematical analysis is used to minimize the sampling variance of the estimator of the population mean and to derive all the proposed methods of stratification. 
The proposed equations divide heteroscedastic populations, whether symmetrical, moderately skewed, or highly skewed, at OSB; however, the equations are implicit in nature and not easy to solve. Therefore, a few methods of finding AOSB are deduced from the equations through analytically justified steps of approximation. The methods may provide practically feasible solutions in survey planning for stratifying heteroscedastic populations of any level of heteroscedasticity, and the work may contribute, to some extent, to theory in the research area. The methods are empirically examined on a few generated heteroscedastic data sets of varied shapes with assumed levels of heteroscedasticity and are found to perform with high efficiency. The proposed methods of stratification are restricted to the particular allocation used.</p>]]></description>
<pubDate>Sep 2021</pubDate>
</item>
<item>
<title><![CDATA[A Novel Concept of Uncertainty Optimization Based Multi-Granular Rough Set and Its Application]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11201]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Pradeep Shende&nbsp; &nbsp;and Arvind Kumar Sinha&nbsp; &nbsp;</p><p>Data are being generated at an exponential pace with the advancement of information technology, and such data often contain uncertain and vague information. Rough set approximation is a way to find information in a data set under uncertainty and to classify the objects of the data set. This work presents a mathematical approach to evaluating data-set uncertainties and its application to data reduction. We extend the multi-granulation variable precision rough set in the context of uncertainty optimization, developing an uncertainty optimization-based multi-granular rough set (UOMGRS) to minimize the uncertainties in the data set more effectively. Using UOMGRS, we find the most informative attributes in the feature space. It is desirable to minimize the rough set boundary region using the attributes with the highest approximation quality; thus, we group the attributes whose relative quality of approximation is maximal, so as to maximize the positive region and minimize the uncertain region. We compare UOMGRS with the single-granulation rough set (SGRS) and the multi-granular rough set (MGRS). Our proposed method requires only an average of 62% of the attributes for approximation, whereas SGRS and MGRS need an average of at least 72% of the attributes in the data set to approximate the concepts in the data set. The proposed method thus requires less data for classifying the objects in the data set and helps minimize the uncertainties in the data set more efficiently.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[The Class of Noetherian Rings With Finite Valuation Dimension]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11200]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Samsul Arifin&nbsp; &nbsp;Hanni Garminia&nbsp; &nbsp;and Pudji Astuti&nbsp; &nbsp;</p><p>Not long ago, Ghorbani and Nazemian [2015] introduced the concept of valuation dimension, which measures how far a ring differs from being a valuation ring. They showed that every Artinian ring has finite valuation dimension. Further, any commutative ring with finite valuation dimension is semiperfect; however, there is a semiperfect ring with infinite valuation dimension. Given these facts, it is of interest to further investigate the properties of rings that have finite valuation dimension. In this article, we establish conditions that are necessary and sufficient for a Noetherian ring to have finite valuation dimension. In particular, we prove that a Noetherian ring has finite valuation dimension if and only if it is Artinian or valuation. Since any ring with finite valuation dimension is semiperfect, our investigation is confined to semiperfect Noetherian rings. Furthermore, as a semiperfect ring is a finite product of local rings, the inquiry is divided into two cases: the case where the examined ring is local, and the case where it is a product of at least two local rings. First, we show that a local Noetherian ring has finite valuation dimension if and only if it is Artinian or valuation. Second, any Noetherian ring that is a product of two or more local rings is shown to have finite valuation dimension if and only if it is Artinian.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Derivation of Some Entries in the Tables of David Bierens De Haan and Anatolii Prudnikov: An Exercise in Integration Theory]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11199]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Robert Reynolds&nbsp; &nbsp;and Allan Stauffer&nbsp; &nbsp;</p><p>It is always useful to improve the catalogue of definite integrals available in tables. In this paper we use our previous work on Lobachevsky integrals to derive entries in the tables of Bierens De Haan and Anatolii Prudnikov, featuring errata and new integral formulas for interested readers. In this work we derive the definite integral given by <img src=image/13423833_01.gif> (1) in terms of the Lerch function. The importance of this work lies in the derivation of both known results and new results not presently found in the literature. We applied our contour integral method to an integral in Prudnikov and derived a closed-form solution in terms of a special function. The advantage of using a special function is the added benefit of analytic continuation, which widens the range of computation of the parameters. Special functions have significance in mathematical analysis, functional analysis, geometry, physics, and other applications. They appear in the solutions of differential equations and in integrals of elementary functions, and they are linked to the theory of Lie groups and Lie algebras, as well as to certain topics in mathematical physics.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[A Convergence Algorithm of Boundary Elements for the Laplace Operator's Dirichlet Eigenvalue Problem]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11198]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ali Naji Shaker&nbsp; &nbsp;</p><p>Various boundary element techniques have been used to obtain solutions of eigenvalue problems for partial differential equations. A number of mathematical concepts are discussed in this paper in relation to the eigenvalue problem. Initially, we study basic notions such as the Dirichlet distribution, the Dirichlet process, and the mixed Dirichlet model. Four different eigenvalue problems are summarized: the Dirichlet eigenvalue problem, the Neumann eigenvalue problem, the mixed Dirichlet-Neumann eigenvalue problem, and the periodic eigenvalue problem. The Dirichlet eigenvalue problem is analyzed briefly for three different cases of the value of λ. We present the result for the multinomial, whose prior is the Dirichlet distribution, and extrapolate the eigenvalue results for the ordinary differential equation. The basic mathematics for the λ calculations, which follow an iterative method, is also presented.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Quasi-Chebyshevity in <img src=image/13422060_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11197]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Jamila Jawdat&nbsp; &nbsp;and Ayat Kamal&nbsp; &nbsp;</p><p>This paper deals with Quasi-Chebyshevity in the Bochner function spaces <img src=image/13422060_01.gif>, where X is a Banach space. For W a nonempty closed subset of X and x ∊ X, an element w0 in W is called a &quot;best approximation&quot; to x from W if <img src=image/13422060_02.gif> for all w in W. All best approximation points of x from W form a set, usually denoted by P<sub>W</sub>(x). The set W is called &quot;proximinal&quot; in X if P<sub>W</sub>(x) is nonempty for each x in X. W is said to be &quot;Quasi-Chebyshev&quot; in X whenever, for each x in X, the set P<sub>W</sub>(x) is nonempty and compact in X. This subject has been studied in general Banach spaces by several authors, and some results have been obtained. In this work, we study Quasi-Chebyshevity in the Bochner L<sup>p</sup>-spaces. The main result of this paper is that, for W a Quasi-Chebyshev subspace of X, L<sup>p</sup>(μ, W) is Quasi-Chebyshev in <img src=image/13422060_01.gif> if and only if L<sup>1</sup>(μ, W) is Quasi-Chebyshev in L<sup>1</sup>(μ, X). As a consequence, one gets that if W is reflexive in X and X satisfies the sequential KK-property, then L<sup>p</sup>(μ, W) is Quasi-Chebyshev in <img src=image/13422060_01.gif>.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Robust Estimation for Proportional Odds Model through Monte Carlo Simulation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11196]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Faiz Zulkifli&nbsp; &nbsp;Zulkifley Mohamed&nbsp; &nbsp;Nor Afzalina Azmee&nbsp; &nbsp;and Rozaimah Zainal Abidin&nbsp; &nbsp;</p><p>Ordinal regression is used to model an ordinal response variable as a function of several explanatory variables. The most commonly used model for ordinal regression is the proportional odds model (POM). The classical technique for estimating the unknown parameters of this model is the maximum likelihood (ML) estimator; however, this method is not suitable for problems with extreme observations. A robust regression method is needed to handle extreme points in the data. This study proposes the Huber M-estimator as a robust method to estimate the parameters of the POM with a logistic link function and polytomous explanatory variables. This study assesses the performance of the ML estimator and the proposed robust method through an extensive Monte Carlo simulation study conducted using the statistical software R. Measures for comparison are bias, RMSE, and Lipsitz's goodness-of-fit test. Various sample sizes, percentages of contamination, and residual standard deviations are considered in the simulation study. Preliminary results show that the Huber estimates provide the best results for parameter estimation and overall model fitting. Huber's estimator reached a 50% breakdown point for data containing extreme points that are quite far from most points. In addition, the presence of extreme points that are only about twice as far from most points has no major impact on the ML estimates. This means that the ML and Huber estimates may yield the same results if the model's residual values are between -2 and 2. This situation may also occur for data with a percentage of contamination below 5%.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Unsteady Couette Flow Past between Two Horizontal Riga Plates with Hall and Ion Slip Effects]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11195]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S. Nasrin&nbsp; &nbsp;R. N. Mondal&nbsp; &nbsp;and M. M. Alam&nbsp; &nbsp;</p><p>A Riga plate is a spanwise array of electrodes and permanent magnets that forms a plane surface and produces electromagnetohydrodynamic fluid behavior; it is mostly used in industrial processes involving fluid flow. In cases where the external application of a magnetic or electric field is required, better flow is obtained with a Riga plate. The Riga plate acts as an agent to reduce skin friction and enhance heat transfer. It also diminishes turbulent effects, so that efficient flow control is possible and machine performance increases. Hence, the unsteady Couette flow with Hall and ion-slip current effects between two Riga plates has been investigated numerically. The solutions are obtained using an explicit finite difference method, and results are computed for several values of the dimensionless parameters, such as the pressure gradient parameter, the Hall and ion-slip parameters, the modified Hartmann number, the Prandtl number, and the Eckert number. In this article, the influence of the modified Hartmann number on the flow profiles is substantial owing to the Riga plate. Expressions for the skin friction and the Nusselt number are computed, and the effects of the relevant parameters on the various distributions are sketched and presented graphically.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[On the Gaussian Approximation to Bayesian Posterior Distributions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11005]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Christoph Fuhrmann&nbsp; &nbsp;Hanns-Ludwig Harney&nbsp; &nbsp;Klaus Harney&nbsp; &nbsp;and Andreas Müller&nbsp; &nbsp;</p><p>The present article derives the minimal number N of observations needed to approximate a Bayesian posterior distribution by a Gaussian. The derivation is based on an invariance requirement for the likelihood <img src=image/13422860_01.gif>. This requirement is defined by a Lie group that leaves <img src=image/13422860_01.gif> unchanged when applied both to the observation(s) <img src=image/13422860_05.gif> and to the parameter <img src=image/13422860_02.gif> to be estimated. It leads, in turn, to a class of specific priors. In general, the criterion for the Gaussian approximation is found to depend on (i) the Fisher information related to the likelihood <img src=image/13422860_01.gif>, and (ii) the lowest non-vanishing order in the Taylor expansion of the Kullback-Leibler distance between <img src=image/13422860_01.gif> and <img src=image/13422860_03.gif>, where <img src=image/13422860_04.gif> is the maximum-likelihood estimator of <img src=image/13422860_02.gif> given the observations <img src=image/13422860_05.gif>. Two examples, widespread in various statistical analyses, are presented. In the first one, a chi-squared distribution, both the observations <img src=image/13422860_05.gif> and the parameter <img src=image/13422860_02.gif> are defined over the whole real axis. In the other one, the binomial distribution, the observation is a binary number, while the parameter is defined on a finite interval of the real axis. Analytic expressions for the required minimal N are given in both cases. 
The necessary N is an order of magnitude larger for the chi-squared model (continuous <img src=image/13422860_05.gif>) than for the binomial model (binary <img src=image/13422860_05.gif>). The difference is traced back to the symmetry properties of the likelihood function <img src=image/13422860_01.gif>. We see considerable practical interest in our results, since the normal distribution is the basis of the parametric methods of applied statistics widely used in diverse areas of research (education, medicine, physics, astronomy, etc.). An analytical criterion for whether the normal distribution is applicable appears relevant for practitioners in these fields.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Inference on P[Y < X] for Geometric Extreme Exponential Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11004]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Reza Pakyari&nbsp; &nbsp;</p><p>The Geometric Extreme Exponential (GEE) distribution is one of the statistical models that can be useful for fitting and describing lifetime data. In this paper, the problem of estimating the reliability R = P(Y < X), when X and Y are independent GEE random variables with a common scale parameter but different shape parameters, is considered. The probability R = P(Y < X) is also known as the stress-strength reliability parameter and describes the case where a component with strength X is subjected to stress Y. The reliability R = P(Y < X) has applications in engineering, finance, and the biomedical sciences. We present the maximum likelihood estimator of R and study its asymptotic behavior. We first study the asymptotic distribution of the maximum likelihood estimators of the GEE parameters, and we prove that the maximum likelihood estimators, and hence the reliability R, are asymptotically normally distributed. A bootstrap confidence interval for R is also presented. Monte Carlo simulations are performed to assess the performance of the proposed estimation method and the validity of the confidence interval. We find that the performance of the maximum likelihood estimator and the bootstrap confidence interval is satisfactory even for small sample sizes. The analysis of a dataset is given for illustrative purposes.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Finitely Generated Modules' Uniserial Dimensions Over a Discrete Valuation Domain]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11003]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Samsul Arifin&nbsp; &nbsp;Hanni Garminia&nbsp; &nbsp;and Pudji Astuti&nbsp; &nbsp;</p><p>In this article, we present some methods for calculating the uniserial dimension of a finitely generated module over a discrete valuation domain (DVD). The notion of the uniserial dimension of a module over a commutative ring, which measures how far the module deviates from being uniserial, was recently proposed by Nazemian et al. They showed that if R is a Noetherian commutative ring, then every finitely generated module over R has uniserial dimension. Ghorbani and Nazemian have shown that R is a Noetherian (resp. Artinian) ring if and only if the ring R × R has (resp. finite) valuation dimension. From here, finitely generated modules over a valuation domain are further examined. However, since this area remains too broad, further research into the uniserial dimensions of modules that are finitely generated over a DVD is needed. In the case of a DVD R, a finitely generated module over R can, as is well known, be decomposed into a direct sum of a torsion module and a free module. Therefore, we first present methods for determining the uniserial dimension of a primary module, followed by methods for a general finitely generated module. As can be observed, the major finding of this work is that the uniserial dimension of the module is a function of the elementary divisors and the rank of the non-torsion part of the module.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Time Series Forecasting with Trend and Seasonal Patterns using NARX Network Ensembles]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11002]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Hermansah&nbsp; &nbsp;Dedi Rosadi&nbsp; &nbsp;Abdurakhman&nbsp; &nbsp;and Herni Utami&nbsp; &nbsp;</p><p>In this research, we propose a Nonlinear Auto-Regressive network with exogenous inputs (NARX) model with a different approach, namely determining the main input variables using stepwise regression and the exogenous inputs using deterministic seasonal dummies. There are two approaches to constructing a deterministic seasonal dummy, namely binary and sine-cosine dummy variables. The number of neurons in the hidden layer is set to approximately half the number of input variables plus one. Furthermore, the resilient backpropagation learning algorithm and the hyperbolic tangent activation function were used to train each network. Three ensemble operators are used, namely the mean, median, and mode, to overcome the overfitting problem and the weaknesses of a single NARX model. Furthermore, we provide an empirical study using actual data, where forecasting accuracy is measured by the Mean Absolute Percent Error (MAPE). The empirical results show that the NARX model with binary dummy exogenous inputs is the most accurate for trend and seasonal data patterns with multiplicative properties. For trend and seasonal data patterns with additive properties, the NARX model with sine-cosine dummy exogenous inputs is more accurate, except when the NARX model uses the mean ensemble operator. For trend and non-seasonal data patterns, the most accurate NARX model is obtained using the mean ensemble operator. This research also shows that the median and mode ensemble operators, which are rarely used, are more accurate than the mean ensemble operator for data with trend and seasonal patterns. The median ensemble operator requires the least average computation time, followed by the mode ensemble operator. Moreover, the accuracy of all our proposed NARX models consistently outperforms the exponential smoothing and ARIMA methods.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[An Analysis about Fourier Series Estimator in Nonparametric Regression for Longitudinal Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11001]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>M. Fariz Fadillah Mardianto&nbsp; &nbsp;Gunardi&nbsp; &nbsp;and Herni Utami&nbsp; &nbsp;</p><p>The Fourier series is a function that is often used in mathematics and statistics, especially for modeling. Here, the Fourier series is constructed as an estimator in nonparametric regression. Nonparametric regression applies not only to cross-sectional data but also to longitudinal data. Several nonparametric regression estimators have been developed for the longitudinal data case, such as the kernel and spline estimators. In this study, we concentrate on developing an inference analysis related to the Fourier series estimator in nonparametric regression for longitudinal data. Nonparametric regression based on a Fourier series can model data relationships with fluctuation or oscillation patterns represented by sine and cosine functions. For point estimation, Penalized Weighted Least Squares (PWLS) is used to determine an estimator for the parameter vector in nonparametric regression. In contrast to previous studies, PWLS is used here to obtain a smooth estimator. The result is an estimator of the nonparametric regression curve for longitudinal data based on the Fourier series approach. In addition, this study investigates the asymptotic properties of these curve estimators, especially linearity and consistency. Several case studies based on previous research, together with a new case study, are given to confirm that the Fourier series estimator in nonparametric regression performs well in longitudinal data modeling. This study is important for developing further statistical inference, such as interval estimation and hypothesis testing, for nonparametric regression with the Fourier series estimator for longitudinal data.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Time Sensitive Analysis of Antagonistic Stochastic Processes and Applications to Finance and Queueing]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=11000]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Jewgeni H. Dshalalow&nbsp; &nbsp;Kizza Nandyose&nbsp; &nbsp;and Ryan T. White&nbsp; &nbsp;</p><p>This paper deals with a class of antagonistic stochastic games of three players A, B, and C, of whom the first two are active players and the third is a passive player. The active players exchange hostile attacks at random times <img src=image/13423210_01.gif> of random magnitudes with each other and also with player C. Player C does not respond to any attacks (which are regarded as collateral damage). Two sustainability thresholds M and T are set so that when the total damages to players A and B cross M and T, respectively, the underlying player is ruined. At some point <img src=image/13423210_02.gif> (the ruin time), one of the two active players is ruined. Player C's damages are sustainable and partly rebuilt. Of interest are the ruin time <img src=image/13423210_02.gif> and the status of all three players upon <img src=image/13423210_02.gif>, as well as at any time t prior to <img src=image/13423210_02.gif>. We obtain an analytic formula for the joint distribution of the named processes and demonstrate its closed form in various analytic and computational examples. In some situations pertaining to stock option trading, stock prices (player C) can fluctuate. In this case, it is of interest to predict the first time an underlying stock price drops, or drops significantly, so that the trader can exercise the call option prior to the drop and before maturity T. Player A monitors the prices at times <img src=image/13423210_03.gif>, assigning zero damage to itself if the stock price appreciates or does not change, and a positive integer value if the price drops. The times <img src=image/13423210_03.gif> are themselves damages to player B with threshold T. The "ruin" time is when threshold M is crossed (i.e., there is a big price drop or a series of drops) or when the maturity T expires, whichever comes first. Thus a prior action is needed and its time is predicted. We illustrate the applicability of the game on a number of other practical models, including queueing systems with vacations and (N,T)-policy.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Three Dimensional Fractional Fourier-Mellin Transform and Its Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10999]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Arvind Kumar Sinha&nbsp; &nbsp;and Srikumar Panda&nbsp; &nbsp;</p><p>The main objective of this paper is to study the three-dimensional fractional Fourier-Mellin transform (3DFRFMT), its basic properties, and its applicability, mainly in radar systems, reconstruction of grayscale images, detection of the human face, etc. The fractional Fourier transform is based on the time-frequency distribution, whereas the fractional Mellin transform is based on scale-covariant transformation. Each transform can detect behaviour within a definite range. The fractional Fourier transform is applicable for controlling the range of shift, whereas the fractional Mellin transform is used to manage the range of rotation and scaling of the function. By combining both transformations, we obtain an elegant expression for the 3DFRFMT, which can be used in several fields. The paper introduces the concept of the three-dimensional fractional Fourier-Mellin transform and its applications. The modulation property is among the most useful concepts for integral transforms in signal systems, radar technology, pattern recognition, and many other areas. Parseval's identity corresponds to the conservation of energy. We therefore establish the modulation theorem, Parseval's theorem, the scaling theorem, and the analytic theorem for the three-dimensional fractional Fourier-Mellin transform. We also give some examples of the three-dimensional fractional Fourier-Mellin transform applied to particular functions. Finally, we provide applications of the three-dimensional fractional Fourier-Mellin transform to solving homogeneous and non-homogeneous Mboctara partial differential equations, which can be applied with advantage to different types of problems in signal processing systems. The transform is also beneficial in maritime strategy as a correlator to control movements in any specific three-dimensional space. The concept is a powerful tool for dealing with information-system problems. After obtaining the generalization, many more ideas can be explored in applying three-dimensional fractional Fourier-Mellin transformations to real-world problems.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Modified Variational Iteration Method for Solving Nonlinear Partial Differential Equation Using Adomian Polynomials]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10998]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S. A. Ojobor&nbsp; &nbsp;and A. Obihia&nbsp; &nbsp;</p><p>The aim of this paper is to solve numerically the Cauchy problems of a nonlinear partial differential equation (PDE) via a modified variational iteration approach. The standard variational iteration method (VIM) is first studied before modifying it using standard Adomian polynomials to decompose the nonlinear terms of the PDE, yielding the new iterative scheme, the modified variational iteration method (MVIM). The VIM was used to iteratively solve the nonlinear parabolic partial differential equation and obtain some results. The modified VIM was likewise used to solve the nonlinear PDEs with the aid of Maple 18 software. The results show that the new MVIM scheme encourages rapid convergence for the problem under consideration. It is observed that the MVIM converges to the exact result faster than the VIM, although both attained a maximum error of order 10<sup>-9</sup>. The resulting numerical evidence is competitive with the standard VIM in convergence, accuracy and effectiveness. The results obtained show that the modified VIM is a better approximant of the above nonlinear equation than the traditional VIM. On the basis of this analysis and computation, we strongly advocate the modified VIM, with finite Adomian polynomials decomposing the nonlinear terms, as a numerical method for partial differential equations and other mathematical equations.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Z-Score Functions of Hesitant Fuzzy Sets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10997]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Zahari Md Rodzi&nbsp; &nbsp;Abd Ghafur Ahmad&nbsp; &nbsp;Norul Fadhilah Ismail&nbsp; &nbsp;and Nur Lina Abdullah&nbsp; &nbsp;</p><p>The hesitant fuzzy set (HFS) is an extension of the fuzzy set (FS) in which the membership degree of a given element, called the hesitant fuzzy element (HFE), is defined as a set of possible values. A large number of studies concentrate on HFE and HFS measures, not only because of their crucial importance in theoretical studies, but also because they are required in almost every application field. The score function of an HFE is a useful method for converting data into a single value. Moreover, a score function provides a much easier way to determine the ranking order of alternatives in multi-criteria decision-making (MCDM). This study introduces a new hesitant degree of an HFE and the z-score function of an HFE, which consists of the z-arithmetic mean, z-geometric mean, and z-harmonic mean. The z-score function is developed on four main bases: the hesitant degree of the HFE, the deviation value of the HFE, the importance of the hesitant degree of the HFE, α, and the importance of the deviation value of the HFE, β. The three proposed scores are compared with existing score functions to demonstrate the proposed z-score function's flexibility. An algorithm based on the z-score function was developed to solve MCDM problems. An example using secondary data on supplier selection for automated companies is used to demonstrate the algorithm's capability in ranking alternatives for MCDM.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Two-Sided Group Chain Sampling Plans Based on Truncated Life Test for Generalized Exponential Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10996]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nazrina Aziz&nbsp; &nbsp;Zahirah Hasim&nbsp; &nbsp;and Zakiyah Zain&nbsp; &nbsp;</p><p>Acceptance sampling is an important technique in quality assurance; its main goal is to reach the most accurate decision on accepting a lot using minimum resources. In practice, this often translates into minimizing the required sample sizes for inspection while satisfying the maximum allowable consumer and producer risks. Numerous sampling plans have been developed over the past decades, the most recent being the incorporation of grouping to enable simultaneous inspection in two-sided chain sampling, which considers information from preceding and succeeding samples. This combination offers improved decision accuracy with reduced inspection resources. To date, the two-sided group chain sampling plan (TSGCh) for lifetime characteristics based on a truncated life test has only been explored for the Pareto distribution of the 2<sup>nd</sup> kind. This article introduces a TSGCh sampling plan for products whose lifetime follows the generalized exponential distribution. It focuses on minimizing the consumer's risk and operates with three acceptance criteria. The equations derived from the set conditions, involving the generalized exponential and binomial distributions, are solved mathematically to develop this sampling plan. Its performance is measured in terms of the probability of lot acceptance and the minimum number of groups. A comparison with the established new two-sided group chain (NTSGCh) plan indicates that the proposed TSGCh sampling plan performs better in terms of sample size requirement and consumer protection. Thus, this new acceptance sampling plan can reduce inspection time, resources, and costs via a smaller sample size (number of groups), while providing the desired consumer protection.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Approximate Solution of Higher Order Fuzzy Initial Value Problems of Ordinary Differential Equations Using Bezier Curve Representation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10995]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Sardar G Amen&nbsp; &nbsp;Ali F Jameel&nbsp; &nbsp;and Abdul Malek Yaakob&nbsp; &nbsp;</p><p>The Bezier curve is a parametric curve used in computer graphics and related areas. The curve, connected to the Bernstein polynomials, is named after Pierre Bézier, who used it in the 1960s to design curves for Renault cars. There has recently been considerable focus on finding reliable and more effective approximate methods for solving various mathematical problems with differential equations. Fuzzy differential equations (FDEs) are used extensively in scientific analysis and engineering applications. They appear because of incomplete information in the underlying mathematical models and their parameters under uncertainty. This article discusses the use of Bezier curves for solving higher order fuzzy initial value problems (FIVPs) in the form of ordinary differential equations. A Bezier curve approach is analyzed and updated with concepts and properties of fuzzy set theory for solving fuzzy linear problems. The control points of the Bezier curve are obtained by minimizing the residual function based on the least squares method. Numerical examples involving second and third order linear FIVPs are presented and compared with the exact solutions, in the form of tables and two-dimensional plots, to show the capability of the method. The findings show that the proposed method is highly viable and straightforward to apply.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[The Effect of Independent Parameter on Accuracy of Direct Block Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10994]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Iskandar Shah Mohd Zawawi&nbsp; &nbsp;Zarina Bibi Ibrahim&nbsp; &nbsp;and Khairil Iskandar Othman&nbsp; &nbsp;</p><p>Block methods that approximate the solution at several points in block form are commonly used to solve higher order differential equations. Inspired by the literature and ongoing research in this field, this paper explores a new derivation of the block backward differentiation formula that employs an independent parameter to provide sufficient accuracy when solving second order ordinary differential equations directly. The use of three backward steps and five independent parameters is considered in generating the variable coefficients of the formulas. To ensure that only one parameter remains in the derived formula, the order of the method is determined. The independent parameter retains the favorable convergence properties, although its value affects the zero stability and the truncation error. The method is able to compute the approximate solutions at two points concurrently. Another advantage of the method is that it solves second order problems directly, without recourse to the technique of reducing them to a system of first order equations. The essence of the error analysis is to observe the effect of the independent parameter on the accuracy, in the sense that with certain appropriate parameter values the accuracy is improved. The performance of the method is tested on some initial value problems, and the numerical results confirm that the maximum error and average error obtained by the proposed method are smaller at certain step sizes than those of other conventional direct methods.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Two-Stage Spline-Approximation with an Unknown Number of Elements in Applied Optimization Problem of a Special Kind]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10993]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>V. I. Struchenkov&nbsp; &nbsp;and D. A. Karpov&nbsp; &nbsp;</p><p>Being a continuation of the paper published in Mathematics and Statistics, vol. 7, No. 5, 2019, this article describes the algorithm for the first stage of spline-approximation with an unknown number of spline elements and constraints on its parameters. Such problems arise in the computer-aided design of road routes and other linear structures. In this article we consider the problem of approximating a discrete sequence of points on a plane by a spline consisting of line segments conjugated by circular arcs. This problem occurs when designing the longitudinal profile of new and reconstructed railways and highways. At the first stage, using a special dynamic programming algorithm, the number of elements of the spline and approximate values of its parameters that satisfy all the constraints are determined. At the second stage, this result is used as an initial approximation for optimizing the spline parameters using a special nonlinear programming algorithm. The dynamic programming algorithm is practically the same as in the earlier article, with significant simplifications due to the absence of clothoids when connecting straight lines and curves. The need for the second stage is due to the fact that when designing new roads, dynamic programming cannot be implemented because of the need to take into account the relationship of spline elements in fills and in cuts, if fills are to be constructed from cut soils. The nonlinear programming algorithm is based on constructing a basis in the null spaces of the matrices of active constraints and adjusting this basis when the set of active constraints changes during the iterative process. This allows finding the direction of descent and excluding constraints from the active set without solving systems of linear equations in general, or by solving linear systems of low dimension. As the objective function, instead of the traditionally used sum of squared deviations of the approximated points from the spline, the article proposes other functions that take into account the specifics of a particular design task.</p>]]></description>
<pubDate>Jul 2021</pubDate>
</item>
<item>
<title><![CDATA[Tensor Multivariate Trace Inequalities and Their Applications <font color=red>(THIS PAPER HAS BEEN WITHDRAWN)</font>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10973]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Shih Yu Chang&nbsp; &nbsp;and Hsiao-Chun Wu&nbsp; &nbsp;</p><p>In linear algebra, the trace of a square matrix is defined as the sum of the elements on the main diagonal. The trace of a matrix is the sum of its eigenvalues (counted with multiplicities), and it is invariant under a change of basis. This characterization can be used to define the trace of a tensor in general. Trace inequalities are mathematical relations between different multivariate trace functionals involving linear operators. These relations are straightforward equalities if the involved linear operators commute; however, they can be difficult to prove when non-commuting linear operators are involved. Given two Hermitian tensors H<sub>1</sub> and H<sub>2</sub> that do not commute, does there exist a method to transform one of the two tensors such that they commute, without completely destroying the structure of the original tensor? The spectral pinching method is a tool to resolve this problem. In this work, we apply the spectral pinching method to prove several trace inequalities that extend the Araki–Lieb–Thirring (ALT) inequality, the Golden–Thompson (GT) inequality and the logarithmic trace inequality to arbitrarily many tensors. Our approach relies on complex interpolation theory as well as asymptotic spectral pinching, providing a transparent mechanism for treating generic tensor multivariate trace inequalities. As an example application of our tensor extension of the Golden–Thompson inequality, we give a tail bound for the sum of independent tensors. Such a bound plays a fundamental role in high-dimensional probability and statistical data analysis.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[No Finite Time Blowup for 3D Incompressible Navier Stokes Equations via Scaling Invariance]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10972]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Terry E. Moschandreou&nbsp; &nbsp;</p><p>The Clay Mathematics Institute problem on the Navier-Stokes equations and the breakdown of smooth solutions is examined here on an arbitrary cube subset of three-dimensional space with periodic boundary conditions. The incompressible Navier-Stokes equations are presented in a new and conventionally different way, by naturally reducing them to an operator form which is then further analyzed. It is shown that a reduction to a general 2D Navier-Stokes system decoupled from a 1D nonlinear partial differential equation can be obtained. This is executed using integration over n-dimensional compact intervals, which allows the decoupling. The operator form is considered in a physical geometric vorticity case and in a more general case. In the general case, the equations are revealed to have smooth solutions which exhibit finite-time blowup on a fine measure zero set, and using the Prékopa-Leindler and Gagliardo-Nirenberg inequalities it is shown that for any set of non-zero measure in the form of a cube subset of 3D space there is no finite time blowup for the starred velocity, for large cube dimension and small d. In particular, vortices are shown to exist, and it is shown that zero is in the attractor of the 3D Navier-Stokes equations.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Comparing the Performance of AdaBoost, XGBoost, and Logistic Regression for Imbalanced Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10971]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Sharmeen Binti Syazwan Lai&nbsp; &nbsp;Nur Huda Nabihan Binti Md Shahri&nbsp; &nbsp;Mazni Binti Mohamad&nbsp; &nbsp;Hezlin Aryani Binti Abdul Rahman&nbsp; &nbsp;and Adzhar Bin Rambli&nbsp; &nbsp;</p><p>An imbalanced data problem occurs in the absence of a good class distribution between classes. Imbalanced data will cause the classifier to be biased toward the majority class, as standard classification algorithms assume that the training set is balanced. Therefore, it is crucial to find a classifier that can deal with imbalanced data for any given classification task. The aim of this research is to find the best method among AdaBoost, XGBoost, and logistic regression for dealing with imbalanced simulated and real datasets. The performance of these three methods on both simulated and real imbalanced datasets is compared using five performance measures, namely sensitivity, specificity, precision, F1-score, and g-mean. The results on the simulated datasets show that logistic regression performs better than AdaBoost and XGBoost on highly imbalanced datasets, whereas on the real imbalanced datasets, AdaBoost and logistic regression demonstrate similarly good performance. All methods seem to perform well on datasets that are not severely imbalanced. Compared to AdaBoost and XGBoost, logistic regression is found to predict better for datasets with severe imbalance ratios. However, all three methods perform poorly for data with a 5% minority class and a sample size of n = 100. In this study, it is found that different methods perform best for data with different minority percentages.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Block Method for the Solution of First Order Nonlinear ODEs and Its Application to HIV Infection of CD4+T Cells Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10970]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Adeyeye Oluwaseun&nbsp; &nbsp;and Omar Zurni&nbsp; &nbsp;</p><p>Some of the issues relating to the human immunodeficiency virus (HIV) epidemic can be expressed as a system of nonlinear first order ordinary differential equations. This includes modelling the spread of the HIV virus in infecting the CD4+T cells that help the human immune system fight diseases. However, real life differential equation models usually fail to have an exact solution, which is also the case with the nonlinear model considered in this article. Thus, an approximate method, known as the block method, is developed to solve the system of first order nonlinear differential equations. To develop the block method, a linear block approach was adopted, and the basic properties required to classify the method as convergent were investigated. The block method was found to be convergent, which ascertains its usability for the solution of the model. The solution obtained from the newly developed method was compared to those of previous methods that have been adopted to solve the same model. In order to have a justifiable basis for comparison, two step-length values were substituted to obtain a one-step and a two-step block method. The results show the newly developed block method obtaining accurate results in comparison to previous studies. Hence, this article introduces a new method suitable for the direct solution of first order differential equation models without the need to simplify them to a system of linear algebraic equations. Likewise, its convergence properties and accuracy give the block method an edge over existing methods.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Stationary and Non-Stationary Models of Extreme Ground-Level Ozone in Peninsular Malaysia]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10969]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Siti Aisyah Zakaria&nbsp; &nbsp;Nor Azrita Mohd Amin&nbsp; &nbsp;Noor Fadhilah Ahmad Radi&nbsp; &nbsp;and Nasrul Hamidin&nbsp; &nbsp;</p><p>High ground-level ozone (GLO) concentrations adversely affect human health, vegetation, and the ecosystem. Therefore, continuous monitoring of GLO trends is good practice for addressing air quality issues related to high GLO concentrations. The purpose of this study is to introduce stationary and non-stationary models of extreme GLO. The method is applied to 25 selected stations in Peninsular Malaysia, using the daily maximum 8-hour GLO concentration data from 2000 to 2016. The parameters of the distribution are estimated using maximum likelihood estimation. A comparison between the stationary (constant) model and the non-stationary (linear and cyclic) models is performed using the likelihood ratio test (LRT). The LRT is based on the deviance statistic compared to a chi-square distribution, a larger value providing significant evidence for a non-stationary model with either a linear or a cyclic trend. The best-fitting model among the selected models is chosen by Akaike's Information Criterion. The results show that all 25 stations conform to a non-stationary model, either linear or cyclic: 14 stations show significant improvement with a linear trend in the location parameter, while 11 stations follow the cyclic model. This study is important for identifying trends in the ozone phenomenon for better air quality risk management.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
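The LRT decision rule described above can be sketched directly. The maximized log-likelihood values below are hypothetical; for one extra parameter (df = 1) the chi-square survival function reduces to a complementary error function, so only the standard library is needed.

```python
import math

def lrt_pvalue_df1(loglik_h0, loglik_h1):
    """Likelihood ratio test for one extra parameter (e.g. a linear trend in
    the location parameter): the deviance D = 2*(l1 - l0) follows a
    chi-square(1) distribution under H0, whose survival function at D
    is erfc(sqrt(D/2))."""
    D = 2.0 * (loglik_h1 - loglik_h0)
    return D, math.erfc(math.sqrt(max(D, 0.0) / 2.0))

# Hypothetical maximized log-likelihoods for one monitoring station
D, p = lrt_pvalue_df1(-412.7, -408.1)   # illustrative numbers only
nonstationary_preferred = p < 0.05       # evidence for the trend model
```

At the familiar 5% critical value D = 3.8415 this formula reproduces p = 0.05, which is a quick sanity check on the df = 1 reduction.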
<item>
<title><![CDATA[Numerical Treatment for Solving Fuzzy Volterra Integral Equation by Sixth Order Runge-Kutta Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10968]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Rawaa Ibrahim Esa&nbsp; &nbsp;Rasha H Ibraheem&nbsp; &nbsp;and Ali F. Jameel&nbsp; &nbsp;</p><p>There has recently been considerable focus on finding reliable and more effective numerical methods for solving different mathematical problems with integral equations. The Runge-Kutta methods in numerical analysis are a family of iterative methods, both implicit and explicit, with different orders of accuracy, used in temporal discretization and adapted for the numerical solution of integral equations. Fuzzy integral equations (FIEs) are used extensively in many scientific analysis and engineering applications. They arise because of incomplete information in the underlying mathematical models and their parameters in the fuzzy domain. In this paper, the sixth-order Runge-Kutta method is used to solve second-kind fuzzy Volterra integral equations numerically. The proposed method is reformulated and updated for solving fuzzy second-kind Volterra integral equations in general form by using the properties and descriptions of fuzzy set theory. Furthermore, based on the parametric form of a fuzzy number, a fuzzy Volterra integral equation is transformed into two crisp integral equations of the second kind under fuzzy properties. We apply the modified method to a specific example with a linear fuzzy Volterra integral equation to illustrate the strength and accuracy of the process. A comparison of the evaluated numerical results with the exact solution for each fuzzy level set is displayed in the form of tables and figures. Such results indicate that the proposed approach is remarkably feasible and easy to use.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
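The sixth-order Runge-Kutta scheme itself is not specified in the abstract; the sketch below illustrates the parametric-form reduction with a simpler composite trapezoidal scheme for a second-kind Volterra equation. The kernel K = 1, the level set r, and the exact solutions r*e^t (lower bound) and (2 - r)*e^t (upper bound) are chosen for illustration; with a positive kernel the two crisp equations decouple.

```python
import math

def volterra2_trapezoid(f, K, T, n):
    """Second-kind Volterra equation u(t) = f(t) + integral_0^t K(t,s) u(s) ds
    on [0, T], discretized with the composite trapezoidal rule; each step is
    implicit only in the single unknown u_i, which is solved for directly."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    u = [f(t[0])]
    for i in range(1, n + 1):
        s = 0.5 * K(t[i], t[0]) * u[0]
        s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        u.append((f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i])))
    return u

# Level set r: the fuzzy problem splits into two crisp equations whose
# exact solutions here are r*e^t (lower) and (2 - r)*e^t (upper).
r = 0.5
lower = volterra2_trapezoid(lambda t: r,       lambda t, s: 1.0, 1.0, 200)
upper = volterra2_trapezoid(lambda t: 2.0 - r, lambda t, s: 1.0, 1.0, 200)
```

The same split-and-solve structure carries over when the trapezoidal rule is replaced by a higher-order Runge-Kutta scheme as in the paper.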
<item>
<title><![CDATA[Approximation Treatment for Linear Fuzzy HIV Infection Model by Variational Iteration Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10967]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Hafed H Saleh&nbsp; &nbsp;Azmi A.&nbsp; &nbsp;and Ali F. Jameel&nbsp; &nbsp;</p><p>There has recently been considerable focus on finding reliable and more effective approximate methods for solving biological mathematical models in the form of differential equations. One of the well-known approximate or semi-analytical methods for solving linear and nonlinear ordinary differential equations, as well as partial differential equations, within various fields of mathematics is the Variational Iteration Method (VIM). This paper looks at the use of fuzzy differential equations in human immunodeficiency virus (HIV) infection modeling. The main advantage of the method lies in its flexibility and ability to solve nonlinear equations easily. VIM is introduced to provide approximate solutions for a system of linear ordinary differential equations, including the fuzzy HIV infection model. The model describes the uncertain levels of immune cells and the intrinsic viral load intensity of the immune system that trigger fuzziness in patients infected by HIV. CD4+T-cells and cytotoxic T-lymphocytes (CTLs) are the immune cells concerned. The dynamics of the immune cell level and viral burden are analyzed and compared across three classes of patients with low, moderate and high immune systems. A modification and formulation of the VIM in the fuzzy domain, based on the use of the properties of fuzzy set theory, are presented. A model was established in this regard, accompanied by plots that demonstrate the reliability and simplicity of the method. The numerical results of the model indicate that this approach is effective and easily used in the fuzzy domain.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[On the Up-to-date Course of Mathematical Logic for the Future Math Teachers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10966]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>E. N. Sinyukova&nbsp; &nbsp;S. V. Drahanyuk&nbsp; &nbsp;and O. O. Chepok&nbsp; &nbsp;</p><p>All-round development of the everyday logic of students should be considered one of the most important tasks of general secondary education as a whole, and of general secondary mathematics education in particular. We discuss the problem of organizing, at teacher-training institutions of higher education, the expedient preparation of future math teachers for institutions of general secondary education. The main goal is to ensure their ability to carry out all their future professional activities, including the necessary participation in forming the everyday logic of their pupils. The authors believe that the vocational educational program for training future secondary school math teachers must contain a separate course of mathematical logic comprising at least 90 training hours (3 ECTS credits). Although the content of the course cannot be independent of the general level of organization of mathematics education in the corresponding country, it ought to be a subject of discussion for the international mathematics community and for managers in the sphere of higher mathematics education. Simultaneously, the role, the place, and the expedient structure of such a course in the corresponding training programs should be under discussion. The article represents the authors' point of view on the problems indicated above. The research has a qualitative character as a whole. Only some of its conclusions have statistical corroboration.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[A Goal Programming Approach for Multivariate Calibration Weights Estimation in Stratified Random Sampling]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10965]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Siham Rabee&nbsp; &nbsp;Ramadan Hamed&nbsp; &nbsp;Ragaa Kassem&nbsp; &nbsp;and Mahmoud Rashwaan&nbsp; &nbsp;</p><p>Calibration estimation is one of the most important ways to improve the precision of survey estimates. It is a method in which the design weights are modified as little as possible, by minimizing a given distance measure between them and the calibrated weights, subject to a set of constraints related to suitable auxiliary information. This paper proposes a new approach for Multivariate Calibration Estimation (MCE) of the population mean of a study variable under a stratified random sampling scheme using two auxiliary variables. Almost all the literature on calibration estimation has used the Lagrange multiplier technique to estimate the calibrated weights. However, the Lagrange multiplier technique requires all functions included in the model to be differentiable, and non-differentiable functions may be encountered in some cases. Hence, it is essential to look for another technique that provides more flexibility in dealing with the problem. Accordingly, in this paper, a goal programming (GP) approach is newly suggested for MCE. The theory of the proposed calibration estimation is presented and the calibrated weights are estimated. A comparison study is conducted using actual and generated data to evaluate the performance of the proposed approach for the multivariate calibration estimator against other existing calibration estimators. The results of this study show that the proposed GP approach for MCE is more flexible and efficient compared to other calibration estimation methods for the population mean.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
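For contrast with the paper's goal-programming approach, the classical chi-square-distance calibration with two auxiliary variables has a closed form obtained via Lagrange multipliers; the sketch below verifies that the calibrated weights reproduce the auxiliary totals exactly. All design weights, auxiliary values, and totals here are hypothetical.

```python
# Chi-square-distance calibration (the classical baseline that the GP
# approach replaces): minimize sum_i (w_i - d_i)^2 / d_i subject to
# sum_i w_i x_i = t.  Closed form: w_i = d_i * (1 + lam . x_i), where
# lam solves M lam = t - sum_i d_i x_i with M = sum_i d_i x_i x_i^T.
d = [10.0, 12.0, 8.0, 15.0, 9.0]               # design weights (hypothetical)
x = [(1.0, 2.0), (1.0, 3.0), (1.0, 1.5),       # two auxiliary variables
     (1.0, 4.0), (1.0, 2.5)]
t = (60.0, 150.0)                              # known auxiliary totals

M = [[sum(di * xi[a] * xi[b] for di, xi in zip(d, x)) for b in range(2)]
     for a in range(2)]
rhs = [t[a] - sum(di * xi[a] for di, xi in zip(d, x)) for a in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lam = ((rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,   # 2x2 Cramer solve
       (rhs[1] * M[0][0] - rhs[0] * M[1][0]) / det)
w = [di * (1.0 + lam[0] * xi[0] + lam[1] * xi[1]) for di, xi in zip(d, x)]
```

The GP formulation replaces this differentiable minimization with goal constraints, which is what removes the differentiability requirement noted in the abstract.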
<item>
<title><![CDATA[Per Capita Expenditure Modeling Using Spatial EBLUP Approach – SAE]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10964]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Luthfatul Amaliana&nbsp; &nbsp;Ani Budi Astuti&nbsp; &nbsp;and Nur Silviyah Rahmi&nbsp; &nbsp;</p><p>The per capita expenditure of an area is a welfare indicator of the community and a reflection of its economic capacity to meet basic needs. Bali is the second richest province in Indonesia. This study aims to model the per capita expenditure of Bali at the sub-district level using the Spatial EBLUP (SEBLUP) approach in small area estimation (SAE). SAE modeling is an indirect estimation approach capable of increasing the effective sample size and minimizing variance. The heterogeneity of an area is influenced by the surrounding areas: everything is related to everything else, but near things are more related than distant things. Therefore, a spatial effect can be included in the random effect of a small area model, which is then called the SEBLUP model. The selection of a spatial weights matrix is very important in spatial data modeling, as it represents the neighborhood relationship of each spatial observation unit. A SEBLUP model needs a spatial weights matrix, which can be based on distance (radial distance and power distance), contiguity (queen), or a combination of the two (radial distance and queen contiguity). The implementation of the SEBLUP approach for per capita expenditure in Bali shows that the SEBLUP model with the radial distance spatial weights matrix is the best model, having the smallest ARMSE. South Denpasar Sub-district is the most prosperous sub-district, with the highest per capita expenditure in Bali, while Abang Sub-district has the lowest per capita expenditure.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
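A distance-based spatial weights matrix of the kind mentioned above can be sketched as follows. The particular radial kernel (1 - d/d_max inside a cutoff) and the coordinates are assumptions for illustration, not the paper's exact specification; rows are standardized as is common in spatial modeling.

```python
import math

def radial_distance_weights(coords, d_max):
    """W_ij = 1 - d_ij/d_max when d_ij <= d_max (i != j), else 0; this is
    one common 'radial distance' weighting.  Rows are then standardized so
    that each non-isolated unit's weights sum to one."""
    n = len(coords)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dist = math.dist(coords[i], coords[j])
            if dist <= d_max:
                W[i][j] = 1.0 - dist / d_max
    for row in W:                      # row standardization
        s = sum(row)
        if s > 0:
            for j in range(len(row)):
                row[j] /= s
    return W

# Hypothetical sub-district centroids (arbitrary planar units); the last
# point lies beyond the cutoff and so has no neighbors.
coords = [(0, 0), (1, 0), (0, 1), (3, 3)]
W = radial_distance_weights(coords, d_max=2.0)
```

A queen-contiguity matrix would instead set W_ij = 1 for units sharing a border or vertex, before the same row standardization.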
<item>
<title><![CDATA[An Approximation to Zeros of the Riemann Zeta Function Using Fractional Calculus]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10822]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>A. Torres-Hernandez&nbsp; &nbsp;and F. Brambila-Paz&nbsp; &nbsp;</p><p>In this paper, an approximation to the zeros of the Riemann zeta function has been obtained for the first time using a fractional iterative method which originates from a unique feature of fractional calculus. This iterative method, valid for one and several variables, uses the property that the fractional derivative of a constant is not always zero. This allows us to construct a fractional iterative method for finding the zeros of functions in which it is possible to avoid expressions that involve hypergeometric functions, Mittag-Leffler functions or infinite series. Furthermore, we can find multiple zeros of a function using a single initial condition. This partially solves the intrinsic problem of iterative methods, for which it is generally necessary to provide N initial conditions to find N solutions. Consequently, the method is suitable for approximating nontrivial zeros of the Riemann zeta function when the absolute value of its imaginary part tends to infinity. Some examples of its implementation are presented, and finally 53 different values near the zeros of the Riemann zeta function are shown.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
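The key mechanism, a Newton-type iteration whose derivative is replaced by a Riemann-Liouville fractional derivative (under which a constant has a nonzero derivative), can be sketched for a polynomial; applying it to the zeta function itself, as the paper does, requires considerably more machinery. The polynomial and starting point below are illustrative only.

```python
import math

def frac_newton(coeffs, x0, alpha, iters=60):
    """Newton-type iteration x <- x - f(x)/(D^alpha f)(x) for the polynomial
    f(x) = sum_n coeffs[n] * x**n, using the Riemann-Liouville rule
    D^alpha x^n = Gamma(n+1)/Gamma(n+1-alpha) * x**(n-alpha).  For
    alpha != 1 the derivative of the constant term is nonzero."""
    x = complex(x0)
    for _ in range(iters):
        f = sum(c * x ** n for n, c in enumerate(coeffs))
        df = 0.0
        for n, c in enumerate(coeffs):
            a = n + 1 - alpha
            if a <= 0 and a == int(a):
                continue                 # Gamma pole: the term vanishes
            df += c * math.gamma(n + 1) / math.gamma(a) * x ** (n - alpha)
        if abs(df) < 1e-300:
            break
        x -= f / df
    return x

# f(x) = x^2 - 3x + 2 has zeros 1 and 2; alpha = 1 recovers classical Newton
root = frac_newton([2.0, -3.0, 1.0], x0=3.0, alpha=1.0)
```

Varying alpha away from 1 changes which zero a single starting point converges to, which is the mechanism the paper exploits to reach many zeros from one initial condition.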
<item>
<title><![CDATA[Almost Interior Gamma-ideals and Fuzzy Almost Interior Gamma-ideals in Gamma-semigroups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10821]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Wichayaporn Jantanan&nbsp; &nbsp;Anusorn Simuen&nbsp; &nbsp;Winita Yonthanthum&nbsp; &nbsp;and Ronnason Chinram&nbsp; &nbsp;</p><p>Ideal theory plays an important role in the study of many algebraic structures, for example, rings, semigroups, and semirings. The algebraic structure of a Г-semigroup is a generalization of the classical semigroup. Many results on semigroups have been extended to Г-semigroups, and many results in the ideal theory of Г-semigroups have been widely investigated. In this paper, we first focus on studying some novel ideals of Г-semigroups. In Section 2, we define almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups by using the concepts of interior Г-ideals and almost Г-ideals of Г-semigroups. Every almost interior Г-ideal of a Г-semigroup S is clearly a weakly almost interior Г-ideal of S, but the converse is not true in general. The notions of both almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups are generalizations of the notion of an interior Г-ideal of a Г-semigroup S. We investigate basic properties of both almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups. The notion of fuzzy sets was introduced by Zadeh in 1965. A fuzzy set is an extension of the classical notion of a set. Fuzzy sets are sets whose elements have degrees of membership. In the remainder of this paper, we focus on studying some novel fuzzy ideals of Г-semigroups. In Section 3, we introduce fuzzy almost interior Г-ideals and fuzzy weakly almost interior Г-ideals of Г-semigroups. We investigate their properties. 
Finally, we give some relationships between almost interior Г-ideals [weakly almost interior Г-ideals] and fuzzy almost interior Г-ideals [fuzzy weakly almost interior Г-ideals] of Г-semigroups.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[On the Stochastic Processes on 7-Dimensional Spheres]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10820]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nurfa Risha&nbsp; &nbsp;and Muhammad Farchani Rosyid&nbsp; &nbsp;</p><p>We studied isometric stochastic flows of a Stratonovich stochastic differential equation on spheres, i.e., on the standard sphere <img src=image/13422677_01.gif> and the Gromoll-Meyer exotic sphere <img src=image/13422677_02.gif>. In this case, <img src=image/13422677_01.gif> and <img src=image/13422677_02.gif> are homeomorphic but not diffeomorphic. The standard sphere <img src=image/13422677_01.gif> can be constructed as the quotient manifold <img src=image/13422677_03.gif> with respect to the so-called <img src=image/13422677_08.gif>-action of S<sup>3</sup>, whereas the Gromoll-Meyer exotic sphere <img src=image/13422677_02.gif> is constructed as the quotient manifold <img src=image/13422677_03.gif> with respect to the so-called <img src=image/13422677_09.gif>-action of S<sup>3</sup>. The corresponding continuous-time stochastic process and its properties on the Gromoll-Meyer exotic sphere can be obtained by constructing a homeomorphism <img src=image/13422677_04.gif>. The stochastic flow <img src=image/13422677_05.gif> can be regarded as the same stochastic flow <img src=image/13422677_06.gif> on S<sup>7</sup>, but viewed in the Gromoll-Meyer differential structure. The flow <img src=image/13422677_06.gif> on <img src=image/13422677_01.gif> and the corresponding flow <img src=image/13422677_05.gif> on <img src=image/13422677_02.gif> constructed in this paper have the same regularities. There is no difference between the appearance of the stochastic flow on S<sup>7</sup> viewed in the standard differential structure and the appearance of the same stochastic flow viewed in the Gromoll-Meyer differential structure. 
Furthermore, since the inverse mapping h<sup>-1</sup> is differentiable on <img src=image/13422677_02.gif>, the Riemannian metric tensor <img src=image/13422677_07.gif> on <img src=image/13422677_02.gif>, i.e., the pull-back of the Riemannian metric tensor G on the standard sphere <img src=image/13422677_01.gif>, is also differentiable. This implies, for instance, that the Fokker-Planck equation associated with the stochastic flow <img src=image/13422677_05.gif> and the Fokker-Planck equation associated with the stochastic differential equation have the same regularities, provided that the function β is C<sup>1</sup>-differentiable. Therefore, both differential structures on S<sup>7</sup> give the same description of the dynamics of the distribution function of the stochastic process under study on the seven-sphere.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Reducing Approximation Error with Rapid Convergence Rate for Non-Negative Matrix Factorization (NMF)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10819]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Jayanta Biswas&nbsp; &nbsp;Pritam Kayal&nbsp; &nbsp;and Debabrata Samanta&nbsp; &nbsp;</p><p>Non-Negative Matrix Factorization (NMF) is utilized in many important applications. This paper presents the development of an efficient low rank approximate NMF algorithm for feature extraction related to text mining and spectral data analysis. NMF can be used for clustering. NMF factorizes a non-negative matrix A into two non-negative matrices W and H, where A&asymp;WH. The proposal uses the k-means clustering algorithm to determine the centroid of each cluster and assigns the centroid coordinates of each cluster to one column of the W matrix. The initial choice of the W matrix is therefore non-negative. The H matrix is determined with a gradient descent algorithm based on thin QR optimization. The performance comparison of the proposed NMF algorithm is illustrated with results. The accurate choice of the initial non-negative W matrix reduces the approximation error, and the use of the thin QR algorithm in combination with the gradient descent approach provides a rapid convergence rate for NMF. The proposed algorithm is implemented with randomly generated matrices in the MATLAB environment. The number of significant singular values of the generated matrix is selected as the number of clusters. The error and convergence rate comparisons of the proposed algorithm with current algorithms are demonstrated in this research. The accurate measurement of execution time for an individual program run is not possible in MATLAB. The average execution time over 200 runs is therefore calculated with an increasing iteration count of the proposed algorithm, and the comparative results are presented.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
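A pure-Python sketch of the initialization pipeline described above: k-means centroids of the columns of A seed W, and H is then obtained by thin-QR least squares with projection onto the non-negative orthant. The gradient-descent refinement and MATLAB specifics are omitted, and the small matrix A is a toy example.

```python
def kmeans_columns(A, k, iters=20):
    """Cluster the columns of A (row-major list of lists) with plain k-means
    and return the centroids as the columns of the initial W (m x k)."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    cents = [c[:] for c in cols[:k]]          # deterministic seeding
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in cols:
            d = [sum((c[i] - ct[i]) ** 2 for i in range(m)) for ct in cents]
            groups[d.index(min(d))].append(c)
        cents = [[sum(c[i] for c in g) / len(g) if g else cents[t][i]
                  for i in range(m)] for t, g in enumerate(groups)]
    return [[cents[t][i] for t in range(k)] for i in range(m)]

def thin_qr(W):
    """Thin QR of the m x k matrix W by classical Gram-Schmidt; Q is returned
    as a list of k orthonormal columns, R as a k x k upper triangle."""
    m, k = len(W), len(W[0])
    Q, R = [], [[0.0] * k for _ in range(k)]
    for j in range(k):
        v = [W[i][j] for i in range(m)]
        for i, q in enumerate(Q):
            R[i][j] = sum(q[t] * v[t] for t in range(m))
            v = [v[t] - R[i][j] * q[t] for t in range(m)]
        R[j][j] = sum(t * t for t in v) ** 0.5
        Q.append([t / R[j][j] for t in v])
    return Q, R

def solve_H(Q, R, A):
    """Least-squares H from R H = Q^T A, clipped to the non-negative orthant."""
    m, n, k = len(A), len(A[0]), len(R)
    H = [[0.0] * n for _ in range(k)]
    for j in range(n):
        a = [A[i][j] for i in range(m)]
        y = [sum(q[t] * a[t] for t in range(m)) for q in Q]
        for i in reversed(range(k)):          # back substitution
            y[i] = (y[i] - sum(R[i][l] * y[l]
                               for l in range(i + 1, k))) / R[i][i]
        for i in range(k):
            H[i][j] = max(0.0, y[i])
    return H

A = [[1.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 2.0]]                    # toy non-negative data, k = 2
W = kmeans_columns(A, 2)
Q, R = thin_qr(W)
H = solve_H(Q, R, A)
err = sum((A[i][j] - sum(W[i][t] * H[t][j] for t in range(2))) ** 2
          for i in range(2) for j in range(4)) ** 0.5
```

On this toy data the centroid columns span the column space of A, so the QR least-squares step reconstructs A essentially exactly before any gradient refinement.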
<item>
<title><![CDATA[Application of Supersaturated Design to Study the Spread of Electronic Games]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10818]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Alanazi Talal Abdulrahman&nbsp; &nbsp;Randa Alharbi&nbsp; &nbsp;Osama Alamri&nbsp; &nbsp;Dalia Alnagar&nbsp; &nbsp;and Bader Alruwaili&nbsp; &nbsp;</p><p>A supersaturated design (SSD) is an important method that relies on factorial designs in which the number of factors exceeds the number of experimental runs. The analysis of supersaturated designs is challenging because the design matrix has a complicated structure. Identification of the variables containing the active factors plays an essential role when a supersaturated design is used to analyse the data. A variable selection technique to screen active effects in SSDs, together with regression analysis, is applied to our case study. This study set out to examine statistically the actual reasons for the spread of electronic games in Saudi society. An online survey provided quantitative data from 200 participants. Respondents were randomly divided into two conditions (Yes+, No-) and asked to respond to one of two sets of possible causes of the spread of electronic games. The responses were analysed using the contrast method with supersaturated designs and regression methods in the SPSS software to determine the actual causes that led to the spread of electronic games. The findings indicated that parents resorting to such games, because of their constant preoccupation, in order to keep their children occupied, insufficient awareness among parents of the dangers of these games, and excessive pampering are the factors that led to the spread of electronic games in Saudi society. On this basis, it is recommended that Saudi government professionals develop an operational plan to study these causes and take action. 
No recent studies address the external environmental aspects that could influence gaming among individuals; hence, further research is required in this field.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Volume Minimization of a Closed Coil Helical Spring Using ALO, GWO, DA, FA, FPA, WOA, CSO, BA, PSO and GSA]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10817]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Rejula Mercy. J&nbsp; &nbsp;and S. Elizabeth Amudhini Stephen&nbsp; &nbsp;</p><p>Springs are important machine members often used to exert force, absorb energy and provide flexibility. In mechanical systems, wherever flexibility or a relatively large load under the given circumstances is required, some form of spring is used. In this paper, non-traditional optimization algorithms, namely the Ant Lion Optimizer, Grey Wolf Optimizer, Dragonfly Algorithm, Firefly Algorithm, Flower Pollination Algorithm, Whale Optimization Algorithm, Cat Swarm Optimization, Bat Algorithm, Particle Swarm Optimization, and Gravitational Search Algorithm, are applied to obtain the global optimal solution of the closed coil helical spring design problem. The problem has three design variables, eight inequality constraints, and three bounds. The objective function U is formulated to minimize the volume of the closed coil helical spring subject to the constraints. The design variables considered are the wire diameter d, the mean coil diameter D, and the number of active coils N of the spring. The proposed methods are tested and their performance is evaluated. Ten non-traditional optimization methods are used to find the minimum volume. The problem is computed in the MATLAB environment. The experimental results show that Particle Swarm Optimization outperforms the other methods: PSO gives more consistent results and a smaller minimum, in terms of both time and the volume of the closed coil helical spring, compared to the other methods. Compared to the other optimization methods, PSO also has advantages such as simplicity and efficiency. In the future, PSO could be extended to solve other mechanical element problems.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
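As an illustration of the PSO approach, the sketch below minimizes the widely used tension/compression spring benchmark (objective proportional to coil volume, four inequality constraints), which may differ from the authors' exact eight-constraint formulation. Constraints are handled by a penalty, and the swarm is seeded with one known feasible design so the final best is guaranteed feasible.

```python
import random

def spring_volume(v):                 # objective of the classic benchmark
    d, D, N = v
    return (N + 2) * D * d ** 2

def violations(v):                    # sum of positive parts of the g_i <= 0
    d, D, N = v
    g = [1 - D ** 3 * N / (71785 * d ** 4),
         (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
         + 1 / (5108 * d ** 2) - 1,
         1 - 140.45 * d / (D ** 2 * N),
         (D + d) / 1.5 - 1]
    return sum(max(0.0, gi) for gi in g)

def pso(bounds, n_particles=40, iters=300, seed=1):
    rng = random.Random(seed)
    def pen(v):                       # penalized objective
        return spring_volume(v) + 1e6 * violations(v)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds]
         for _ in range(n_particles)]
    X[0] = [0.1, 1.0, 8.0]            # seed one known feasible design
    V = [[0.0] * len(bounds) for _ in range(n_particles)]
    P = [x[:] for x in X]             # personal bests
    g = min(P, key=pen)[:]            # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j, (lo, hi) in enumerate(bounds):
                V[i][j] = (0.7 * V[i][j]
                           + 1.5 * rng.random() * (P[i][j] - X[i][j])
                           + 1.5 * rng.random() * (g[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], lo), hi)
            if pen(X[i]) < pen(P[i]):
                P[i] = X[i][:]
                if pen(P[i]) < pen(g):
                    g = P[i][:]
    return g

# bounds for wire diameter d, mean coil diameter D, number of active coils N
best = pso([(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)])
```

Because the global best only ever improves on the feasible seed, the returned design is feasible to within the penalty tolerance, typically with a volume far below the seed's 0.1.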
<item>
<title><![CDATA[A New Three-Parameter Weibull Inverse Rayleigh Distribution: Theoretical Development and Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10816]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Adeyinka Solomon Ogunsanya&nbsp; &nbsp;Waheed Babatunde Yahya&nbsp; &nbsp;Taiwo Mobolaji Adegoke&nbsp; &nbsp;Christiana Iluno&nbsp; &nbsp;Oluwaseun R. Aderele&nbsp; &nbsp;and Matthew Iwada Ekum&nbsp; &nbsp;</p><p>In this work, a three-parameter Weibull Inverse Rayleigh (WIR) distribution is proposed. The new WIR distribution is an extension of the one-parameter Inverse Rayleigh distribution that incorporates a transformation of the Weibull distribution and the log-logistic quantile function. Statistical properties such as the quantile function, order statistics, the monotone likelihood ratio property, the hazard and reverse hazard functions, moments, skewness, kurtosis, and a linear representation of the newly proposed distribution were studied theoretically. The maximum likelihood estimators cannot be derived in explicit form, so we employed the iterative Newton-Raphson procedure to obtain them. The Bayes estimators of the scale and shape parameters of the WIR distribution under the squared error, Linex, and entropy loss functions are provided. The Bayes estimators cannot be obtained explicitly either; hence, we adopted a numerical approximation method known as Lindley's approximation to obtain them. Simulation procedures were adopted to assess the effectiveness of the different estimators. The applications of the new WIR distribution were demonstrated on three real-life data sets. Further results showed that the new WIR distribution performed credibly well when compared with five related existing skewed distributions. It was observed that the Bayesian estimates derived perform better than the classical estimates.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
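The WIR density is not reproduced in the abstract, so as a minimal illustration of the Newton-Raphson MLE workflow the sketch below fits the plain one-parameter inverse Rayleigh distribution instead, whose closed-form MLE provides an exact check on the iteration.

```python
import math, random

# Inverse Rayleigh density f(x; t) = (2t / x^3) * exp(-t / x^2).  The score
# is l'(t) = n/t - S with S = sum(1/x_i^2), and l''(t) = -n/t^2, so the
# closed form t_hat = n/S is available to verify the Newton-Raphson iterates.
rng = random.Random(0)
theta_true = 2.0
# inverse-transform sampling: F(x) = exp(-t/x^2)  =>  x = sqrt(-t / ln U)
xs = [math.sqrt(-theta_true / math.log(rng.random())) for _ in range(500)]

n = len(xs)
S = sum(1.0 / x ** 2 for x in xs)
t = 1.0                               # starting value for Newton-Raphson
for _ in range(100):
    score = n / t - S                 # first derivative of the log-likelihood
    hess = -n / t ** 2                # second derivative
    t_new = t - score / hess          # Newton-Raphson update
    if abs(t_new - t) < 1e-12:
        t = t_new
        break
    t = t_new
```

For the three-parameter WIR model the same loop runs on a vector of parameters with the observed information matrix in place of the scalar second derivative.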
<item>
<title><![CDATA[Statistical Analyses on Factors Affecting Retirement Savings Decision in Malaysia]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10815]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nurul Sima Mohamad Shariff&nbsp; &nbsp;and Waznatul Widad Mohamad Ishak&nbsp; &nbsp;</p><p>The retirement savings decision is related to individual judgment on savings planning and preparation for retirement. Several factors may affect this decision, among them demographic factors and other determinants such as financial knowledge and management, future expectations, social influences, and risk tolerance. Given this interest, this study aims to examine the impact of such factors on the retirement savings decision. Furthermore, this study also discusses the retirement savings decision among Malaysians in different age groups. The data were collected through a survey strategy using a set of questionnaires. The questions were divided into several sections covering the demographic profile, Likert-scale questions on the factors, and the retirement savings decisions. The sampling technique used in this study is random sampling, with 385 respondents. Several statistical procedures were utilized, such as the reliability test, the Kruskal-Wallis H test, and the ordered probit model. The results of this study found that age, financial knowledge and management, future expectations, and social influences were the significant determinants of the retirement savings decision in Malaysia.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Relative Complexity Index for Decision-Making Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10814]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Harliza Mohd Hanif&nbsp; &nbsp;Daud Mohamad&nbsp; &nbsp;and Rosma Mohd Dom&nbsp; &nbsp;</p><p>The complexity of a method has been discussed in the decision-making area, since complexity may impose disadvantages such as loss of information and a high degree of uncertainty. However, there is no empirical justification for determining the complexity level of a method. This paper focuses on introducing a method of measuring the complexity of a decision-making method. In computing, there is an established method of measuring complexity, namely Big-O notation. This paper adopts that method for determining the complexity level of decision-making methods. However, Big-O has rarely been applied to decision-making methods, and it may not be able to differentiate the complexity levels of two different decision-making methods. Hence, this paper introduces a Relative Complexity Index (RCI) to address this problem. The basic properties of the Relative Complexity Index are also discussed. After the introduction of the Relative Complexity Index, the method is applied to the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS).</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Z-Score Functions of Dual Hesitant Fuzzy Set and Its Applications in Multi-Criteria Decision Making]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10813]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Zahari Md Rodzi&nbsp; &nbsp;Abd Ghafur Ahmad&nbsp; &nbsp;Nur Sa’aidah Ismail&nbsp; &nbsp;Wan Normila Mohamad&nbsp; &nbsp;and Sarahiza Mohmad&nbsp; &nbsp;</p><p>A dual hesitant fuzzy set (DHFS) consists of two parts: a hesitant membership function and a hesitant non-membership function. This set supports a more comprehensive and flexible assignment of degrees for each element in the domain and can address two types of hesitancy in this situation. It can be considered a powerful tool for expressing uncertain information in the decision-making process. Three z-score functions, namely the z-arithmetic mean, the z-geometric mean, and the z-harmonic mean, are proposed with five important bases: the hesitant degree of a dual hesitant fuzzy element (DHFE), the DHFE deviation degree, the parameter α (the importance of the hesitant degree), the parameter β (the importance of the deviation degree), and the parameter ϑ (the importance of membership (positive view) or non-membership (negative view)). A comparison of the z-score functions with existing score functions was made to show some of the latter's drawbacks. The z-score functions are then applied to solve multi-criteria decision-making (MCDM) problems. To illustrate the proposed method's effectiveness, an MCDM example, specifically in pattern recognition, is shown.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[Two Observations in the Application of Logarithm Theory and their Implications for Economic Modeling and Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10812]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Oluremi Davies Ogun&nbsp; &nbsp;</p><p>The contents of this paper apply to research in the fields of economics, statistics, the physical and life sciences, the other social sciences, accounting and finance, business management, and mathematics, both core and applied. First, I discuss the misconception, and the implications thereof, inherent in the conventional practice of entering interest rates as natural or untransformed series in data analysis, most especially in regression models. The trends and variabilities of both transformed and untransformed interest rate series were shown to be similar, thereby enhancing the likelihood of similar performances in regressions. By extension, therefore, the indicated conventional practice unnecessarily and unjustifiably precludes elasticity inference on the coefficients of interest rates, amounting to procedural inefficiency, as an independent computation of elasticity becomes the only available option. Percentages are not equivalent to percentage changes, and thus only series in growth terms, that is, percentage changes, should be spared log transformation. Secondly, the paper stresses the imperative of avoiding unwieldy and theory-incongruent expressions in post-preliminary data analysis, flagging the idea that regression models, in particular those of the growth variety, should, as much as practicable, be in sync with the dictates of modern time series econometrics in the specification of final equations.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[On Some Properties of Leibniz's Triangle]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10811]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>R. Sivaraman&nbsp; &nbsp;</p><p>Gottfried Leibniz, one of the greatest mathematicians of all time, introduced an amusing triangular array of numbers called Leibniz's harmonic triangle, similar to Pascal's triangle but with different properties. I had earlier introduced the entries of Leibniz's triangle through Beta integrals. In this paper, I prove that the Beta integral formulation is exactly the same as the entries obtained through Pascal's triangle. The Beta integral formulation leads us to establish several significant properties of Leibniz's triangle in a quite elegant way. I show that the sum of alternating terms in any row of Leibniz's triangle is either zero or a harmonic number. A separate section of this paper is devoted to proving interesting results regarding centralized Leibniz's triangle numbers, including a closed expression, the asymptotic behavior of successive centralized Leibniz's triangle numbers, the connection between centralized Leibniz's triangle numbers and Catalan numbers as well as central binomial coefficients, and the convergence of series whose terms are centralized Leibniz's triangle numbers. All the results discussed in this section are new and proved for the first time. Finally, I prove two exceedingly important theorems, namely the Infinite Hockey Stick theorem and the Infinite Triangle Sum theorem. Though these two theorems are known in the literature, proving them through the Beta integral formulation is quite new and makes the proofs short and elegant. Thus, by a simple re-formulation of the entries of Leibniz's triangle through Beta integrals, I prove existing as well as new theorems in a much more compact way. These ideas will throw new light on the fabulous Leibniz number triangle.</p>]]></description>
<pubDate>May 2021</pubDate>
</item>
<item>
<title><![CDATA[The Seasonal Reproduction Number of p.vivax Malaria Dynamics in Korea]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10715]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Anne M. Fernando&nbsp; &nbsp;Ana Vivas Barber&nbsp; &nbsp;and Sunmi Lee&nbsp; &nbsp;</p><p>Understanding the dynamics of malaria can help in reducing the impact of the disease. Previous research proved that including animals in the human transmission model, or 'zooprophylaxis', is effective in reducing the transmission of malaria in the human population. This model studies Plasmodium vivax malaria and has variables for the animal population and for mosquito attraction to animals. The existing time-independent malaria population ODE model is extended to a time-dependent model, and the differences are explored. We introduce a seasonal mosquito population, a Gaussian profile based on data, as a variant of the previous models. The seasonal reproduction number is found using the next generation matrix, and endemic and stability analyses are carried out using dynamical systems theory. The model includes short- and long-term human incubation periods; sensitivity analysis is performed on the parameters, and all simulations run over a three-year period. Simulations show larger peaks in the infected populations and in the seasonal reproduction number during the summer months of each year, and we analyze which parameters have more influence on the model and on the seasonal reproduction number. The analysis provides conditions for the disease free equilibrium (DFE), and the system is found to be locally asymptotically stable around the DFE when R<sub>0</sub><1; furthermore, we establish the uniqueness of the endemic equilibrium point. The sensitivity analysis shows that the model is not as sensitive to the exact values of the long- or short-term incubation periods as it is to the average number of contacts between host and mosquito or the rate of disease progression for mosquitoes.
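As a minimal illustration of a Gaussian seasonal mosquito profile of the kind described above (the function name and all parameter values here are hypothetical placeholders, not the authors' fitted values), a sketch in Python:

```python
import math

def mosquito_population(t, base=1000.0, amp=4000.0, peak_day=200.0, width=30.0):
    # Gaussian seasonal profile: the population peaks in summer (around
    # peak_day) and decays toward a winter baseline; t is the day of year.
    return base + amp * math.exp(-((t - peak_day) ** 2) / (2 * width ** 2))

# Evaluate the summer peak of each year over a three-year horizon,
# mirroring the paper's three-year simulation period.
for year in range(3):
    t = (year * 365 + 200) % 365  # day 200 of each year
    print(round(mosquito_population(t)))  # prints 5000 each year
```

Feeding such a time-dependent profile into the mosquito compartment is what turns the time-independent ODE model into the seasonal one analyzed in the paper.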
This model shows that the inclusion of a variable mosquito population informs how domestic animals in the human population can be used more effectively as a method of reducing the transmission of malaria. The most relevant contribution of this work is the inclusion of the time evolution of the mosquito population, and simulations show how this feature affects human infection dynamics. An analytical expression for the endemic equilibrium point is provided for future work to establish conditions under which an epidemic may be prevented.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[A New Solution for The Enzymatic Glucose Fuel Cell Model with Morrison Equation via Haar Wavelet Collocation Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10714]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Kuntida Kawinwit&nbsp; &nbsp;Akapak Charoenloedmongkhon&nbsp; &nbsp;and Sanoe Koonprasert&nbsp; &nbsp;</p><p>Integral equations are essential tools in various areas of applied mathematics, and computational approaches to solving them are important in scientific research. The Haar wavelet collocation method (HWCM) with operational matrices of integration is a well-known method that has been applied to solve systems of linear integral equations. In this paper, an approximate analytical method based on the Haar wavelet collocation method is applied to a system of diffusion-convection partial differential equations with initial and boundary conditions. This system describes an enzymatic glucose fuel cell whose chemical reaction rate follows the Morrison equation. The enzymatic glucose fuel cell model describes the concentrations of glucose and hydrogen ions that can be converted into energy. During the process, the model reduces to a linear integral equation system involving computational Haar matrices, which can be computed by HWCM code written in the Maple program. Illustrative examples are provided to demonstrate the precision and effectiveness of the proposed method. The results are presented as numerical solutions for the glucose and hydrogen ion concentrations.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[A Dirac Delta Operator]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10713]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Juan Carlos Ferrando&nbsp; &nbsp;</p><p>If T is a (densely defined) self-adjoint operator acting on a complex Hilbert space H and I stands for the identity operator, we introduce the delta function operator <img src=image/13422917_01.gif> at T. When T is a bounded operator, then <img src=image/13422917_02.gif> is an operator-valued distribution. If T is unbounded, <img src=image/13422917_02.gif> is a more general object that still retains some properties of distributions. We provide an explicit representation of <img src=image/13422917_02.gif> in some particular cases, derive various operative formulas involving <img src=image/13422917_02.gif> and give several applications of its usage in Spectral Theory as well as in Quantum Mechanics.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[On Non-Associative Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10712]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ida Kurnia Waliyanti&nbsp; &nbsp;Indah Emilia Wijayanti&nbsp; &nbsp;and M. Farchani Rosyid&nbsp; &nbsp;</p><p>A Jordan ring is an example of a non-associative ring. We can construct a Jordan ring from an associative ring by defining the Jordan product. In this paper, we discuss the properties of non-associative rings by studying the properties of Jordan rings. All ideals of a non-associative ring R are non-associative, except the ideal generated by the associators in R. Hence, a quotient ring <img src=image/13422591_01.gif> can be constructed, where <img src=image/13422591_02.gif> is the ideal generated by the associators in R. The fundamental homomorphism theorem for rings can be applied to non-associative rings. With a small modification, we find that <img src=image/13422591_01.gif> is isomorphic to <img src=image/13422591_03.gif>. Furthermore, we define a module over a non-associative ring, investigate its properties, and give some examples of such modules. We show that if M is a module over a non-associative ring R, then M is also a module over <img src=image/13422591_01.gif> provided that <img src=image/13422591_02.gif> is contained in the annihilator of M. Moreover, we define the tensor product of modules over a non-associative ring. The tensor product of modules over a non-associative ring is commutative and associative up to isomorphism, but not element by element.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Solving One-Dimensional Porous Medium Equation Using Unconditionally Stable Half-Sweep Finite Difference and SOR Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10711]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Jackel Vui Lung Chew&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;and Andang Sunarto&nbsp; &nbsp;</p><p>A porous medium equation is a nonlinear parabolic partial differential equation that models many physical phenomena. Solutions of the porous medium equation are important for investigating nonlinear processes involving fluid flow, heat transfer, diffusion of gas particles, and population dynamics. As part of the development of a family of efficient iterative methods to solve the porous medium equation, the Half-Sweep technique has been adopted. Prior works in the existing literature on the successful application of Half-Sweep to approximate the solutions of several types of mathematical problems are the underlying motivation of this research. This work aims to solve the one-dimensional porous medium equation efficiently by incorporating the Half-Sweep technique in the formulation of an unconditionally stable implicit finite difference scheme. The notable property of Half-Sweep is its ability to secure low computational complexity in computing numerical solutions. This work applies the Half-Sweep finite difference scheme to the general porous medium equation, up to the formulation of a nonlinear approximation function. The Newton method is used to linearize the formulated Half-Sweep finite difference approximation, so that the linear system can be constructed in matrix form. The Successive Over-Relaxation method with a single parameter is then applied to solve the generated linear system efficiently at each time step.
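As a generic sketch of the Successive Over-Relaxation step described above (a plain SOR solver on a small diagonally dominant system, not the authors' HSNSOR implementation; the function name, relaxation parameter, and test system are illustrative assumptions), in Python:

```python
def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    # Successive Over-Relaxation for Ax = b; A needs a nonzero diagonal,
    # and convergence is guaranteed for, e.g., diagonally dominant A
    # with 0 < omega < 2.
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            diff = max(diff, abs(x_new - x[i]))
            x[i] = x_new
        if diff < tol:  # stop once successive iterates agree
            break
    return x

# A tridiagonal system of the kind produced by a 1D implicit
# finite difference discretization.
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
print(sor_solve(A, b))  # approaches [1.0, 2.0, 3.0]
```

In the paper's scheme, a Newton linearization produces one such linear system per time step, and an SOR sweep of this shape (restricted to the Half-Sweep grid points) solves it.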
To evaluate the efficiency of the developed method, termed the Half-Sweep Newton Successive Over-Relaxation (HSNSOR) method, criteria such as the number of iterations, the program execution time, and the magnitude of the absolute errors were investigated. According to the numerical results, the solutions obtained by the HSNSOR method are as accurate as those of the Half-Sweep Newton Gauss-Seidel (HSNGS) method, which belongs to the same family of Half-Sweep iterations, and those of the benchmark Newton Gauss-Seidel (NGS) method. The improvement produced by the HSNSOR method is significant: it requires fewer iterations and a shorter program execution time than the HSNGS and NGS methods.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Some Remarks and Propositions on Riemann Hypothesis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10710]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Jamal Salah&nbsp; &nbsp;</p><p>In 1859, the German mathematician Bernhard Riemann presented a paper to the Berlin Academy that would change mathematics forever. The mystery of prime numbers was its focus. At the core of the presentation was a conjecture that Riemann did not prove, one that to this day baffles mathematicians. If the Riemann hypothesis holds true, it could change the way we do business, because prime numbers are the key element of banking and e-commerce security. It would also have a significant influence on the cutting edge of science, impacting quantum mechanics, chaos theory, and the future of computation. In this article, we look at some well-known results on the Riemann zeta function in a different light. We explore the proofs of the integral representation of the zeta function, its analytic continuation, and the first functional equation. Initially, we observe the omission of a logically undefined term in the integral representation of the zeta function by means of the Gamma function. We therefore propound some modifications in order to reasonably justify the location of the non-trivial zeros on the critical line Re(s) = 1/2, by assuming that ζ(s) and ζ(1-s) simultaneously equal zero. Consequently, we conditionally prove the Riemann Hypothesis.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[On Three-Dimensional Mixing Geometric Quadratic Stochastic Operators]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10709]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ftameh Khaled&nbsp; &nbsp;and Pah Chin Hee&nbsp; &nbsp;</p><p>It is widely recognized that the theory of quadratic stochastic operators frequently arises due to its enormous contribution as a source of analysis for the investigation of dynamical properties and for modeling in diverse domains. In this paper, we construct a class of quadratic stochastic operators called mixing quadratic stochastic operators, generated by a geometric distribution on the infinite state space <img src=image/13491747_01.gif>. We also study the regularity of such operators by investigating the limit behavior for each case of the parameter. Some non-regular cases are proved for a new definition of mixing operators by using the shifting definition, where the new parameters satisfy the shifted conditions. A mixing quadratic stochastic operator was established on 3-partitions of the state space <img src=image/13491747_01.gif> and considered for a special case of the parameter Ɛ. We found that the mixing quadratic stochastic operator is a regular transformation for <img src=image/13491747_02.gif> and non-regular for <img src=image/13491747_03.gif>. Also, the trajectories converge to one of the fixed points. Stability and instability of the fixed points were investigated by finding the eigenvalues of the Jacobian matrix at these fixed points. We approximate the parameter Ɛ by the parameter <img src=image/13491747_04.gif>, and establish the regularity of the quadratic stochastic operators for some inequalities satisfied by <img src=image/13491747_04.gif>. We conclude this paper by comparing with previous studies, in which some such quadratic stochastic operators are found to be non-regular.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Formulation of a New Implicit Method for Group Implicit BBDF in Solving Related Stiff Ordinary Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10708]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Norshakila Abd Rasid&nbsp; &nbsp;Zarina Bibi Ibrahim&nbsp; &nbsp;Zanariah Abdul Majid&nbsp; &nbsp;and Fudziah Ismail&nbsp; &nbsp;</p><p>This paper proposes a new alternative approach to the implicit diagonal block backward differentiation formula (BBDF) for solving linear and nonlinear first-order stiff ordinary differential equations (ODEs). We generate the solver by manipulating the number of back values to achieve the highest order possible using the interpolation procedure. The algorithm is developed and implemented in C++. The numerical integrator approximates a few solution points concurrently, together with off-step points, in a block scheme over a non-overlapping solution interval at a single iteration. The lower triangular matrix form of the implicit diagonal yields fewer differentiation coefficients and ultimately reduces the execution time of the code. We choose two intermediate points as off-step points appropriately, which are proven to guarantee the method's zero stability. The off-step points help to increase the accuracy by optimizing the local truncation error. The proposed solver satisfies the theoretical consistency and zero-stability requirements, leading to a convergent multistep method of third algebraic order. For validation, we used well-known, standard linear and nonlinear stiff IVP problems from the literature to measure the algorithm's accuracy and processor time efficiency. The performance metrics are validated by comparison with a proven solver, and the output shows that the alternative method is better than the existing one.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[The Varying Threshold Values of Logistic Regression and Linear Discriminant for Classifying Fraudulent Firm]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10707]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Samingun Handoyo&nbsp; &nbsp;Ying-Ping Chen&nbsp; &nbsp;Gugus Irianto&nbsp; &nbsp;and Agus Widodo&nbsp; &nbsp;</p><p>The aim of this research is to find the best performance of both the logistic regression and linear discriminant classifiers as their threshold takes various values. The tools used for evaluating classifier performance are the confusion matrix, precision-recall, the F1 score, and the receiver operating characteristic (ROC) curve. The audit-risk data set is used for the implementation of the proposed method. Data screening and dimension reduction using principal component analysis (PCA) are the first steps, conducted before the data are divided into training and testing sets. After the training process for obtaining the classifier model parameters is completed, the performance measures are calculated only on the testing set, where various constants are added to the threshold value of both classifier models. The logistic regression classifier achieves its best performance, a precision-recall of 94%, an F1-score of 91.7%, and an area under the curve (AUC) of 0.906, when the threshold values lie in the interval between 0.002 and 0.018. On the other hand, the linear discriminant classifier performs best when the threshold value is 0.035, with a precision-recall of 94%, an F1-score of 91.7%, and an AUC of 0.846.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Polya's Problem Solving Strategy in Trigonometry: An Analysis of Students' Difficulties in Problem Solving]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10706]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Dwi Sulistyaningsih&nbsp; &nbsp;Eko Andy Purnomo&nbsp; &nbsp;and Purnomo&nbsp; &nbsp;</p><p>This study investigates the errors made by students, and their various causal factors, when working on trigonometry problems that apply the sine and cosine rules. Samples were taken randomly from high school students. Data were collected in two ways, namely a written test based on Polya's strategy and interviews with students who made mistakes. Students' errors were analyzed with the Newman concept. The results show that all types of errors occurred, with a distribution of 3.83, 19.15, 24.74, 24.89, and 27.39% for reading errors (RE), comprehension errors (CE), transformation errors (TE), process skill errors (PSE), and encoding errors (EE), respectively. RE are marked by errors in reading symbols or important information; CE by misunderstanding information and not understanding what is known and what is asked; TE by the inability to turn problems into mathematical models and the incorrect use of signs in arithmetic operations; PSE by students' inaccuracies in the process of answering and their lack of understanding of fraction operations; and EE by the inability to deduce answers. An anomaly occurs in that students with medium trigonometry achievement turn out to make more mistakes than students with low achievement.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Analysis and Evaluation of Factors Affecting the Use of Google Classroom in Albania: A Partial Least Squares Structural Equation Modelling Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10705]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Evgjeni Xhafaj&nbsp; &nbsp;Daniela Halidini Qendraj&nbsp; &nbsp;Alban Xhafaj&nbsp; &nbsp;and Etleva Halidini&nbsp; &nbsp;</p><p>The study explores the factors that affect the use of Google Classroom in Albanian universities, using the methodological developments of the partial least squares structural equation modelling (PLS-SEM) technique. This technique has been used because it allows flexibility in modelling the relationships between constructs (or factors) and in exploring theoretical concepts. An alternative model is introduced by extending the Unified Theory of Acceptance and Use of Technology (UTAUT2) and by integrating new relations between constructs. Our data come from a study of 528 students from 4 Albanian universities during the year 2020. Using Importance-Performance Matrix Analysis (IPMA), our analysis suggests that Habit is the construct with the greatest importance in determining the Behavioral Intention towards Google Classroom, whereas Behavioral Intention has the greatest importance for the Use Behavior of Google Classroom. The results of the study show that Habit and Hedonic Motivation have the greater impact on the Behavioral Intention to use Google Classroom. Additionally, we find that all constructs of the alternative model have an important influence on the Behavioral Intention towards Google Classroom and explain 65.3 per cent of its variance. This study will help Higher Education Institutions assess the factors that influence the use of Google Classroom, so that this platform can be used as a support tool in the future.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Instrument Test Development of Mathematics Skill on Elementary School]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10704]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Viktor Pandra&nbsp; &nbsp;Badrun Kartowagiran&nbsp; &nbsp;and Sugiman&nbsp; &nbsp;</p><p>The aims of this research are: 1) producing a test instrument of mathematics skill for elementary school which is valid and reliable, and 2) finding out the characteristics of that test instrument. The instrument development in this research uses a modified version of the development model of Wilson, Oriondo and Antonio. The testing sample in this research comprises 160 students in each grade. This research finds: 1) the Aiken's V validity index is 0.979 in grade IV and 0.988 in grade V, and the reliability coefficients of the instrument in grades IV and V are 0.883 and 0.954; 2) the data are suitable for the 1PL model with parameter b (difficulty level). The parameter analysis of the test items in grades IV and V shows that all items are in the good category, lying between -2 and 2. This indicates that all items are accepted and reliable for measuring the development of the mathematics skill of elementary school students.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Numerical Solution for Fuzzy Diffusion Problem via Two Parameter Alternating Group Explicit Technique]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10703]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>A. A. Dahalan&nbsp; &nbsp;and J. Sulaiman&nbsp; &nbsp;</p><p>Computational techniques have become a significant area of study in physics and engineering. The first method used to evaluate such problems numerically was finite differences. In 2002, a computational approach based on an explicit finite difference technique was used to solve the fuzzy partial differential equation (FPDE) based on the Seikkala derivative. This article investigates the application of an iterative technique, in particular the Two Parameter Alternating Group Explicit (TAGE) method, to solve the finite difference approximation resulting from the fuzzy heat equation. The article broadens the use of the TAGE iterative technique to fuzzy problems, owing to the reliability of the approach. The development and execution of the TAGE technique in both full-sweep (FS) and half-sweep (HS) variants are also presented. The idea of the HS scheme is to reduce the computational complexity of the iterative methods by nearly half or more. Additionally, numerical outcomes from the solution of two experimental problems are included and compared with the Alternating Group Explicit (AGE) approach to clarify their feasibility. In conclusion, the families of the TAGE technique have been used to solve the linear system arising from a one-dimensional fuzzy diffusion (1D-FD) discretization using a finite difference scheme. The findings suggest that the HSTAGE approach surpasses the FSTAGE and AGE approaches in terms of iteration counts, time taken, and Hausdorff distance.
The number of iterations for the HSTAGE approach decreases by approximately 71.60-72.95%, whereas in execution time the HSTAGE method is between 74.05 and 86.42% faster. Since TAGE is ideal for concurrent processing, a key benefit is that it consists of sets of independent tasks that can be performed at the same time. The suggested technique is expected to be useful for further exploration in solving any multi-dimensional FPDEs.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[Prospective Filipino Teachers' Disposition to Mathematics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10702]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Restituto M. Llagas Jr.&nbsp; &nbsp;</p><p>Studying mathematics comprises acquiring a positive disposition toward mathematics and seeing mathematics as an effective way of looking at real-life situations. This study aimed to correlate the disposition to mathematics of prospective Filipino teachers with some teacher-related variables. The participants were the prospective Filipino teachers at the University of Northern Philippines (UNP) and at the Divine Word College of Vigan (DWCV). Two sets of instruments were utilized in the study: a self-report questionnaire and the Mathematics Dispositional Functioning Inventory developed by Beyers [1]. Frequency and percentage, weighted mean, and chi-square were utilized for data analysis. Results show that the overall disposition to mathematics of the participants is "Positive". The cognitive, affective, and conative aspects each received a positive disposition; however, some items show an uncertain disposition to mathematics. The participants' profile variables have no significant relationship with their cognitive and conative disposition to mathematics. A training plan was conceptualized to provide information on the results of the study, to enhance awareness and understanding of dispositions, to equip participants with appropriate methods of solving mathematical problems, and to provide enrichment activities that will foster a positive disposition to mathematics and consequently improve prospective teachers' and students' performance. Teachers are influential in developing students' effective ways of learning, doing, and thinking about mathematics, and understanding how attitudes are learned helps establish the association between the teacher's disposition and students' attitude and performance. Thus, fostering dispositions to mathematics through training improves prospective Filipino teachers' and students' performance.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
<item>
<title><![CDATA[On Application of Max-Plus Algebra to Synchronized Discrete Event System]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10701]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>A. A. Aminu&nbsp; &nbsp;S. E. Olowo&nbsp; &nbsp;I. M. Sulaiman&nbsp; &nbsp;N. Abu Bakar&nbsp; &nbsp;and M. Mamat&nbsp; &nbsp;</p><p>Max-plus algebra is a discrete algebraic system built on the operations max (<img src=image/13420822_01.gif>) and plus (<img src=image/13420822_03.gif>), where max plays the role of addition and plus the role of multiplication from conventional algebra. This algebraic structure is a semi-ring whose elements are the real numbers together with ε=-∞ and e=0. The synchronized discrete event problem, on the other hand, is a problem in which events are scheduled to meet a deadline. It has two aspects: the events run simultaneously, and the lengthiest event completes at the deadline. A recent survey on max-plus linear algebra shows that the operations max (<img src=image/13420822_01.gif>) and plus (<img src=image/13420822_03.gif>) play a significant role in the modeling of human activities. However, numerous studies have shown that there is very limited literature on the application of max-plus algebra to real-life problems. This idea motivates the basic algebraic results and techniques of this research. This paper proposes the discrepancy method of max-plus for solving m×n systems of linear equations with m≤n, and further shows that an n×n linear system of equations has either a unique solution, infinitely many solutions, or no solution, while an m×n (m<n) linear system of equations has either infinitely many solutions or no solution in (<img src=image/13420822_02.gif>). The proposed concept is also extended to the job-shop problem in a synchronized event.
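The max-plus operations just defined can be illustrated with a short Python sketch (the function names are illustrative, not the paper's code): ⊕ is the maximum, ⊗ is ordinary addition, and the matrix product replaces sum-of-products with max-of-sums.

```python
NEG_INF = float("-inf")  # epsilon, the max-plus "zero" element

def oplus(a, b):
    # Max-plus addition: a ⊕ b = max(a, b)
    return max(a, b)

def otimes(a, b):
    # Max-plus multiplication: a ⊗ b = a + b
    return a + b

def mp_matmul(A, B):
    # Max-plus matrix product: (A ⊗ B)[i][j] = max over k of A[i][k] + B[k][j]
    n, m, p = len(A), len(B), len(B[0])
    return [[max(otimes(A[i][k], B[k][j]) for k in range(m))
             for j in range(p)] for i in range(n)]

# Completion times of synchronized events: each entry of A ⊗ x is the
# latest (max) finishing time among the tasks feeding that event.
A = [[2, NEG_INF], [1, 0]]
x = [[3], [5]]
print(mp_matmul(A, x))  # [[5], [5]]
```

In a scheduling reading, A holds task durations, x holds start times, and each output entry is the completion time of the slowest incoming task, which is exactly the synchronization described above.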
The results obtained have shown that the method is very efficient for solving n×n system of linear equations and is also applicable to job-shop problems.</p>]]></description>
<pubDate>Mar 2021</pubDate>
</item>
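The max-plus operations described in this abstract can be illustrated with a short sketch. The matrix entries and start times below are hypothetical, chosen only to show how a synchronized completion time is governed by the lengthiest path:

```python
import math

EPS = -math.inf  # the max-plus "zero" element, epsilon

def mp_matmul(A, B):
    # Max-plus matrix product: (A ⊗ B)_ij = max_k (A_ik + B_kj),
    # i.e. conventional addition plays the role of multiplication.
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hypothetical synchronized event: two machines, A[i][j] = processing
# time on machine i of input j; x = start times of the two inputs.
A = [[3, 7],
     [2, 4]]
x = [[1], [0]]
finish = mp_matmul(A, x)  # completion time is set by the lengthiest event
print(finish)  # [[7], [4]]
```

The max-plus identity matrix has e=0 on the diagonal and ε elsewhere, which the sketch respects.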
<item>
<title><![CDATA[Applications of the Differential Transformation Method and Multi-Step Differential Transformation Method to Solve a Rotavirus Epidemic Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10653]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Pakwan Riyapan&nbsp; &nbsp;Sherif Eneye Shuaib&nbsp; &nbsp;Arthit Intarasit&nbsp; &nbsp;and Khanchit Chuarkham&nbsp; &nbsp;</p><p>Epidemic models are essential in understanding the transmission dynamics of diseases. These models are often formulated using differential equations. A variety of methods, including approximate, exact and purely numerical ones, are used to find the solutions of these differential equations. However, most of these methods are computationally intensive or require symbolic computations. This article presents the Differential Transformation Method (DTM) and the Multi-Step Differential Transformation Method (MSDTM) for finding approximate series solutions of an SVIR rotavirus epidemic model. The SVIR model is formulated using nonlinear first-order ordinary differential equations, where S, V, I and R are the susceptible, vaccinated, infected and recovered compartments. We begin by discussing the theoretical background and the mathematical operations of the DTM and MSDTM. Next, the DTM and MSDTM are applied to compute the solutions of the SVIR rotavirus epidemic model. Lastly, to investigate the efficiency and reliability of both methods, solutions obtained from the DTM and MSDTM are compared with the solutions from the fourth-order Runge-Kutta (RK4) method. The solutions from the DTM and MSDTM are in good agreement with the solutions from the RK4 method. However, the comparison shows that the MSDTM is more efficient and agrees more closely with the RK4 method than the DTM. The advantage of the DTM and MSDTM over other methods is that they do not require a perturbation parameter and do not generate secular terms. Both methods are therefore well suited to the solution of epidemic models of this kind.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
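The differential transform idea can be sketched on a scalar test equation rather than the full SVIR system; this is an illustration of the DTM/MSDTM mechanics, not the paper's model. For y' = -y the transform gives the recurrence (k+1)Y(k+1) = -Y(k), and the multi-step variant re-expands the truncated series on each subinterval:

```python
import math

def dtm_coeffs(y0, N):
    # Differential transform of y' = -y: (k+1) * Y[k+1] = -Y[k], Y[0] = y0,
    # so the series coefficients are generated one at a time.
    Y = [y0]
    for k in range(N):
        Y.append(-Y[k] / (k + 1))
    return Y

def eval_series(Y, t):
    return sum(c * t**k for k, c in enumerate(Y))

def msdtm(y0, t_end, steps, N):
    # Multi-step DTM: restart the series at the end of each subinterval,
    # which keeps the truncated series accurate over longer ranges.
    h, y = t_end / steps, y0
    for _ in range(steps):
        y = eval_series(dtm_coeffs(y, N), h)
    return y

# Exact solution is exp(-t); the MSDTM value tracks it closely
print(abs(msdtm(1.0, 1.0, 4, 8) - math.exp(-1.0)))
```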
<item>
<title><![CDATA[On One Mathematical Model of Cooling Living Biological Tissue]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10652]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>B. K. Buzdov&nbsp; &nbsp;</p><p>When cooling living biological tissue (an active, non-inert medium), cryomedicine uses cryoinstruments with various forms of cooling surface. Cryoinstruments are located on the surface of biological tissue or penetrate completely into it. With a decrease in the temperature of the cooling surface, an unsteady temperature field appears in the tissue, which in the general case depends on three spatial coordinates and time. To date, a large number of scientific publications consider mathematical models of cryodestruction of biological tissue. However, in the overwhelming majority of them, the Pennes equation (or one of its modifications) is taken as the basis of the mathematical model, in which the heat sources of biological tissue depend linearly on the desired temperature field. This character of the dependence does not allow one to describe the actually observed spatial localization of heat. In addition, Pennes' model does not take into account the fact that the freezing of the intercellular fluid occurs much earlier than the freezing of the intracellular fluid, and that the heat corresponding to these two processes is released at different times. In the proposed work, a new mathematical model of cooling and freezing of living biological tissue is built, with a flat rectangular applicator located on the tissue surface. The model takes into account the above features, is a three-dimensional boundary-value problem of the Stefan type with nonlinear heat sources of a special type, and has applications in cryosurgery. 
A method is proposed for the numerical study of the problem posed, based on the use of locally one-dimensional difference schemes without explicitly separating the boundary of the influence of cold and the boundaries of the phase transition. The method was previously successfully tested by the author in solving other two-dimensional problems arising in cryomedicine.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
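As a much-reduced sketch of the kind of difference scheme involved, the following explicit step solves plain one-dimensional heat conduction with a fixed cold boundary. It is not the author's Stefan-type model (no phase transition, no nonlinear sources), and the temperatures are illustrative only:

```python
def ftcs_step(u, r):
    # One explicit finite-difference step of u_t = u_xx with r = dt/dx^2
    # (stable for r <= 0.5); boundary values are held fixed (Dirichlet).
    return ([u[0]] +
            [u[i] + r * (u[i+1] - 2*u[i] + u[i-1]) for i in range(1, len(u) - 1)] +
            [u[-1]])

# Tissue initially at 37 degrees, cold applicator at -50 on the left boundary
u = [-50.0] + [37.0] * 9
for _ in range(200):
    u = ftcs_step(u, 0.4)
print(u[1])  # the node next to the applicator has cooled far below 37
```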
<item>
<title><![CDATA[Fixed Point Theorems in Complex Valued Quasi b-Metric Spaces for Satisfying Rational Type Contraction]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10651]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>J. Uma Maheswari&nbsp; &nbsp;A. Anbarasan&nbsp; &nbsp;and M. Ravichandran&nbsp; &nbsp;</p><p>In the setting of complex valued metric spaces, common fixed point theorems satisfying rational contraction mappings have been proved. Within contraction mapping theory, several researchers have demonstrated fixed-point theorems, common fixed-point theorems and coupled fixed-point theorems using complex valued metric spaces. In b-metric spaces, the fixed point theorem was proved by the principle of contraction mapping. Complex valued b-metric spaces were later introduced as a generalization of complex valued metric spaces, and fixed point theorems using rational contractions were explained in that setting. A metric space in which the symmetry condition d(x, y) = d(y, x) is dropped is called a quasi-metric space; thus every metric space is a special kind of quasi-metric space. Quasi-metric spaces have been discussed by many researchers. Banach introduced the theory of contraction mappings and proved the fixed point theorem in metric spaces. We now introduce the new notion of complex valued quasi b-metric spaces involving rational type contractions and prove unique fixed point theorems for continuous as well as non-continuous functions, illustrated with an example.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
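The classical Banach contraction principle cited in this abstract can be demonstrated by plain Picard iteration; this sketch works in ordinary real arithmetic, not in a complex valued quasi b-metric space:

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=1000):
    # Picard iteration x_{n+1} = f(x_n); for a contraction this converges
    # to the unique fixed point guaranteed by Banach's theorem.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = cos(x) is a contraction near its fixed point (the Dottie number)
p = banach_iterate(math.cos, 1.0)
print(p, abs(math.cos(p) - p))
```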
<item>
<title><![CDATA[Generalized Relation between the Roots of Polynomial and Term of Recurrence Relation Sequence]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10650]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Vipin Verma&nbsp; &nbsp;and Mannu Arya&nbsp; &nbsp;</p><p>Many researchers have been working on recurrence relations, an important topic not only in mathematics but also in physics, economics and various applications in computer science. There are many useful results on recurrence relation sequences, but they share a main difficulty: to find any term of a recurrence relation sequence, all previous terms must be computed first. Many important theorems on recurrence relations have been obtained. In this paper we give a special identity for generalized kth-order recurrence relations. This identity is very useful for finding any term, of any order, of a recurrence relation sequence directly, without computing all previous terms. A well-known property relates the coefficients of a second-order recurrence relation to the roots of its characteristic polynomial; in this paper we extend the same property to recurrence relations of all higher orders, under the sole condition that the roots are distinct. This paper is therefore a generalization of the relation between the coefficients of a recurrence relation and the roots of a polynomial. Theorem: Let C<sub>1</sub> and C<sub>2</sub> be arbitrary real numbers and suppose the equation <img src=image/13422456_01.gif> (1) has distinct roots X<sub>1</sub> and X<sub>2</sub>. 
Then the sequence <img src=image/13422456_25.gif> is a solution of the recurrence relation <img src=image/13422456_02.gif> (2) <img src=image/13422456_03.gif>, for n = 0, 1, 2, …, where β<sub>1</sub> and β<sub>2</sub> are arbitrary constants. Proof: First suppose that <img src=image/13422456_25.gif> is of the form <img src=image/13422456_04.gif>; we shall prove that <img src=image/13422456_25.gif> is a solution of recurrence relation (2). Since X<sub>1</sub>, X<sub>2</sub> and X<sub>3</sub> are roots of equation (1), they all satisfy equation (1), so we have <img src=image/13422456_05.gif>, <img src=image/13422456_06.gif>. Consider <img src=image/13422456_07.gif><img src=image/13422456_08.gif>. This implies <img src=image/13422456_09.gif>. So the sequence <img src=image/13422456_25.gif> is a solution of the recurrence relation. Now we prove the second part of the theorem. Let <img src=image/13422456_10.gif> be a sequence with three <img src=image/13422456_11.gif>. Let <img src=image/13422456_12.gif>. So <img src=image/13422456_13.gif> (3), <img src=image/13422456_14.gif> (4). Multiplying (3) by X<sub>1</sub> and subtracting from (4), we have <img src=image/13422456_15.gif>; similarly we can find <img src=image/13422456_16.gif>. Since the roots are distinct, the values of β<sub>1</sub> and β<sub>2</sub> are well defined, so non-trivial values of β<sub>1</sub> and β<sub>2</sub> can be found and the result holds. Example: Let <img src=image/13422456_25.gif> be any sequence such that <img src=image/13422456_17.gif>, n≥3, with a<sub>0</sub>=0, a<sub>1</sub>=1, a<sub>2</sub>=2. Find a<sub>10</sub> for this sequence. Solution: The characteristic polynomial of the sequence is <img src=image/13422456_18.gif>. Solving this equation, the roots are 1, 2 and 3; using the above theorem we have <img src=image/13422456_19.gif> (7). Using a<sub>0</sub>=0, a<sub>1</sub>=1, a<sub>2</sub>=2 in (7) we have β<sub>1</sub>+β<sub>2</sub>+β<sub>3</sub>=0 (8), 
β<sub>1</sub>+2β<sub>2</sub>+3β<sub>3</sub>=1 (9), β<sub>1</sub>+4β<sub>2</sub>+9β<sub>3</sub>=2 (10). Solving (8), (9) and (10) we have <img src=image/13422456_20.gif>, <img src=image/13422456_21.gif>, <img src=image/13422456_22.gif>. This implies <img src=image/13422456_23.gif>. Putting n=10, we obtain a<sub>10</sub>=-27478. Recurrence relations are a very useful topic of mathematics, and many real-life problems can be solved by them, but there is a major difficulty: to find the 100th term of a sequence, all previous 99 terms must normally be computed first. The above theorem removes this difficulty: whenever the coefficients of the recurrence relation satisfy its condition, any term of the sequence can be found directly, without computing all the previous terms.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
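The worked example in this abstract can be checked numerically. The recurrence coefficients (6, -11, 6) below are inferred from the stated roots 1, 2, 3 of the characteristic polynomial, and the betas from the stated initial values:

```python
def a_iter(n):
    # a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3}: the characteristic polynomial
    # x^3 - 6x^2 + 11x - 6 has the roots 1, 2, 3 quoted in the abstract
    a = [0, 1, 2]
    for _ in range(3, n + 1):
        a.append(6*a[-1] - 11*a[-2] + 6*a[-3])
    return a[n]

def a_closed(n):
    # Direct formula from the theorem: a_n = -3/2 + 2*2^n - (1/2)*3^n,
    # with beta_1 = -3/2, beta_2 = 2, beta_3 = -1/2 solved from a_0, a_1, a_2
    return round(-1.5 + 2 * 2**n - 0.5 * 3**n)

print(a_iter(10), a_closed(10))  # both give -27478, matching the abstract
```

The closed form reaches a₁₀ without touching terms a₃ through a₉, which is the practical point of the theorem.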
<item>
<title><![CDATA[Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Sets via Delegation of Hesitancy Degree to the Major Grade De-i-fuzzification Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10649]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Nik Muhammad Farhan Hakim Nik Badrul Alam&nbsp; &nbsp;Nazirah Ramli&nbsp; &nbsp;and Norhuda Mohammed&nbsp; &nbsp;</p><p>Fuzzy time series is a powerful tool for forecasting time series data under uncertainty. Fuzzy time series was first initiated with fuzzy sets and later generalized by intuitionistic fuzzy sets. Intuitionistic fuzzy sets consider the degree of hesitation, in which the degree of non-membership is incorporated. In this paper, a fuzzy time series forecasting model based on intuitionistic fuzzy sets, via delegation of the hesitancy degree to the major grade de-i-fuzzification approach, was developed. The proposed model was implemented on data of student enrollments at the University of Alabama. The forecasted output was obtained using the fuzzy logical relationships, and its performance was compared with that of the fuzzy time series forecasting model based on fuzzy sets using the mean square error, root mean square error, mean absolute error, and mean absolute percentage error. The results showed that the forecasting model based on fuzzy sets induced from intuitionistic fuzzy sets performs better than the fuzzy time series forecasting model based on fuzzy sets.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
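The four error measures used to compare the forecasting models are standard and easy to state in code. The enrollment-style numbers below are illustrative, not the Alabama data:

```python
import math

def forecast_errors(actual, forecast):
    # MSE, RMSE, MAE and MAPE: the four measures named in the abstract
    n = len(actual)
    errs = [a - f for a, f in zip(actual, forecast)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / n
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae, "MAPE": mape}

actual = [13055, 13563, 13867, 14696]     # illustrative enrollment-style values
forecast = [13000, 13600, 13800, 14700]
print(forecast_errors(actual, forecast))
```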
<item>
<title><![CDATA[A Note on Lienard-Chipart Criteria and its Application to Epidemic Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10556]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Auni Aslah Mat Daud&nbsp; &nbsp;</p><p>An important part of the study of epidemic models is the local stability analysis of the equilibrium points. The linear algebra method commonly employed is the well-known Routh-Hurwitz criteria. The criteria give necessary and sufficient conditions for all of the roots of the characteristic polynomial to be negative or have negative real parts. To date, there are no epidemic models in the literature which employ the Lienard-Chipart criteria. This note recommends an alternative linear algebra method, namely the Lienard-Chipart criteria, to significantly simplify the local stability analysis of epidemic models. Although the Routh-Hurwitz criteria are a correct method for local stability analysis, the Lienard-Chipart criteria have advantages over them. Using the Lienard-Chipart criteria, only about half of the Hurwitz determinant inequalities are required, with the remaining conditions concerning only the signs of alternate coefficients of the characteristic polynomial. The Lienard-Chipart criteria are especially useful for polynomials with symbolic coefficients, since the determinants usually become significantly more complicated than the original coefficients as the degree of the polynomial increases. The Lienard-Chipart and Routh-Hurwitz criteria have similar performance for systems of dimension five or less. Theoretically, for systems of dimension higher than five, verifying the Lienard-Chipart criteria should be much easier than verifying the Routh-Hurwitz criteria, and the advantage of the Lienard-Chipart criteria may become clear. 
Examples of local stability analysis using Lienard-Chipart criteria for two recently proposed models are demonstrated to show the advantages of simplified Lienard-Chipart criteria over Routh-Hurwitz criteria.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
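For a cubic characteristic polynomial the two criteria can be written side by side; at this degree the saving (one determinant plus alternate coefficient signs, with positivity of the remaining coefficient following automatically) is small but visible. This is a textbook degree-3 statement, not the note's epidemic-model application:

```python
def routh_hurwitz_cubic(a0, a1, a2, a3):
    # Routh-Hurwitz for a0*s^3 + a1*s^2 + a2*s + a3: all coefficients
    # positive and the Hurwitz determinant a1*a2 - a0*a3 > 0
    return all(c > 0 for c in (a0, a1, a2, a3)) and a1*a2 - a0*a3 > 0

def lienard_chipart_cubic(a0, a1, a2, a3):
    # Lienard-Chipart: only alternate coefficient signs plus one Hurwitz
    # determinant are checked; a2 > 0 then follows automatically
    return a0 > 0 and a1 > 0 and a3 > 0 and a1*a2 - a0*a3 > 0

stable = (1, 6, 11, 6)    # (s+1)(s+2)(s+3): all roots in the left half-plane
unstable = (1, 1, 2, 8)   # (s+2)(s^2 - s + 4): two roots with positive real part
print(routh_hurwitz_cubic(*stable), lienard_chipart_cubic(*stable))
print(routh_hurwitz_cubic(*unstable), lienard_chipart_cubic(*unstable))
```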
<item>
<title><![CDATA[Application of Fuzzy Linear Regression with Symmetric Parameter for Predicting Tumor Size of Colorectal Cancer]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10555]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Muhammad Ammar Shafi&nbsp; &nbsp;Mohd Saifullah Rusiman&nbsp; &nbsp;and Siti Nabilah Syuhada Abdullah&nbsp; &nbsp;</p><p>The colon and rectum form the final portion of the digestive tube in the human body. Colorectal cancer (CRC) occurs due to bacteria produced from undigested food in the body. However, the factors and symptoms needed to predict the tumor size of colorectal cancer are still ambiguous. The problem with using linear regression arises from the use of uncertain and imprecise data. Since fuzzy set theory can deal with data that are not precise point values (uncertain data), this study applied the latest fuzzy linear regression to predict the tumor size of CRC. The parameters, errors and interpretation of both models are also included. Furthermore, secondary data of 180 colorectal cancer patients who received treatment in a general hospital, with twenty-five independent variables of different combinations of variable types, were considered to find the best model to predict the tumor size of CRC. Two models, fuzzy linear regression (FLR) and fuzzy linear regression with symmetric parameter (FLRWSP), were compared using two statistical error measures to find the best model for predicting the tumor size of colorectal cancer. Following the stated methodology, FLRWSP was found to be the best model, with the least values of mean square error (MSE) and root mean square error (RMSE).</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
<item>
<title><![CDATA[Impact of Sleep on Usage of the Smart Phone at the Bedtime– A Case Study]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10554]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Navya Pratyusha M&nbsp; &nbsp;Rajyalakshmi K&nbsp; &nbsp;Apparao B V&nbsp; &nbsp;and Charankumar G&nbsp; &nbsp;</p><p>Pittsburgh Sleep Quality Index (PSQI) scoring (Buysse et al. 1989) is a powerful method to measure the sleep quality index based on the scores of various factors, namely duration of sleep, sleep disturbance, sleep latency, day dysfunction due to sleepiness, sleep efficiency, need of medication to sleep, and overall sleep quality. We focused mainly on smart phone usage and its impact on the quality of sleep at bedtime. Many studies have shown that the usage of smart phones at bedtime affects sleep quality, health and productivity. In the present study, we collected data randomly from middle-aged adults and observed the relation between gender and quality of sleep using the phi coefficient. It is observed that, as we move from males to females, the association moves negatively from good sleep quality to poor sleep quality, indicating that males have poorer sleep quality than females. We also performed an analysis of variance to test whether there is an association between smart phone usage at bedtime and its impact on quality of sleep.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
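The phi coefficient for a 2×2 gender-by-sleep-quality table is a one-line formula; the counts below are hypothetical, not the study's data:

```python
import math

def phi_coefficient(a, b, c, d):
    # 2x2 table [[a, b], [c, d]]:
    # phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))
    return (a*d - b*c) / math.sqrt((a+b) * (c+d) * (a+c) * (b+d))

# Hypothetical counts: rows = male/female, columns = good/poor sleep quality
table = (12, 18,   # males:   12 good, 18 poor
         20, 10)   # females: 20 good, 10 poor
print(phi_coefficient(*table))  # negative: maleness associates with poor quality
```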
<item>
<title><![CDATA[Fourier Method in Initial Boundary Value Problems for Regions with Curvilinear Boundaries]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10553]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Leontiev V. L.&nbsp; &nbsp;</p><p>The algorithm of the generalized Fourier method associated with the use of orthogonal splines is presented through the example of an initial boundary value problem for a region with a curvilinear boundary. It is shown that the sequence of finite Fourier series formed by the method converges at each moment to the exact solution of the problem, an infinite Fourier series. The structure of these finite Fourier series is similar to that of the partial sums of an infinite Fourier series. As the number of grid nodes increases in the region under consideration with a curvilinear boundary, the approximate eigenvalues and eigenfunctions of the boundary value problem converge to the exact eigenvalues and eigenfunctions, and the finite Fourier series approach the exact solution of the initial boundary value problem. The method provides arbitrarily accurate approximate analytical solutions of the problem, similar in structure to the exact solution, and therefore belongs to the group of analytical methods for constructing solutions in the form of orthogonal series. The theoretical results are confirmed by solving a test problem for which both the exact solution and the analytical solutions of the discrete problems for any number of grid nodes are known. The solution of the test problem confirms the findings of the theoretical convergence study: the proposed method of separation of variables associated with orthogonal splines yields approximate analytical solutions of the initial boundary value problem in the form of a finite Fourier series with any desired accuracy. 
For any number of grid nodes, the method leads to a generalized finite Fourier series which corresponds with high accuracy to the partial sum of the Fourier series of the exact solution of the problem.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
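The convergence of finite Fourier series to the exact solution can be illustrated on a simple model problem. The sketch below uses ordinary sine basis functions on a straight interval, not the orthogonal splines or curvilinear regions of the paper:

```python
import math

def sine_coeff(f, n, m=2000):
    # b_n = (2/pi) * integral_0^pi f(x) sin(nx) dx, via the trapezoid rule
    # (the endpoint terms vanish because sin(0) = sin(n*pi) = 0)
    h = math.pi / m
    s = sum(f(i * h) * math.sin(n * i * h) for i in range(1, m))
    return (2.0 / math.pi) * h * s

def partial_sum(f, N, x):
    # Finite Fourier (sine) series with N terms, evaluated at x
    return sum(sine_coeff(f, n) * math.sin(n * x) for n in range(1, N + 1))

f = lambda x: x * (math.pi - x)   # smooth test profile, zero at both ends
x = math.pi / 2
err = abs(partial_sum(f, 9, x) - f(x))
print(err)  # already close to the exact value with nine terms
```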
<item>
<title><![CDATA[The Performance Analysis of a New Modification of Conjugate Gradient Parameter for Unconstrained Optimization Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10552]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>I M Sulaiman&nbsp; &nbsp;M Mamat&nbsp; &nbsp;M Y Waziri&nbsp; &nbsp;U A Yakubu&nbsp; &nbsp;and M Malik&nbsp; &nbsp;</p><p>The Conjugate Gradient (CG) method is a prominent iterative mathematical technique for the optimization of both linear and non-linear systems, owing to its simplicity, low memory requirement, low computational cost, and global convergence properties. However, some of the classical CG methods have drawbacks, including weak global convergence and poor numerical performance in terms of both the number of iterations and the CPU time. To overcome these drawbacks, researchers have proposed new variants of the CG parameters with efficient numerical results and nice convergence properties. Variants of the CG method include the scaled CG method, the hybrid CG method, the spectral CG method, the three-term CG method, and many more. The hybrid conjugate gradient algorithm is among the most efficient variants in this class. Interesting features of the hybrid modifications include inheriting the nice convergence properties and efficient numerical performance of existing CG methods. In this paper, we propose a new hybrid CG algorithm that inherits the features of the Rivaie et al. (RMIL*) and Dai (RMIL+) conjugate gradient methods. The proposed algorithm generates a descent direction under the strong Wolfe line search conditions. Preliminary results on some benchmark problems show that the proposed method is efficient and promising.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
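The CG iteration itself is compact enough to sketch. This is classical linear CG on a small quadratic (with a Fletcher-Reeves-type update parameter), shown to illustrate the family of methods generally; it is not the proposed RMIL-type hybrid:

```python
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    # Solve Ax = b for symmetric positive definite A: each new search
    # direction is the residual made conjugate to the previous direction.
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    x = x0[:]
    r = [bi - ai for bi, ai in zip(b, mv(A, x))]   # initial residual b - Ax
    d = r[:]
    for _ in range(max_iter):
        rr = dot(r, r)
        if rr < tol * tol:
            break
        Ad = mv(A, d)
        alpha = rr / dot(d, Ad)                    # exact line search on a quadratic
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = dot(r, r) / rr                      # Fletcher-Reeves-type parameter
        d = [ri + beta * di for ri, di in zip(r, d)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(conjugate_gradient(A, b, [0.0, 0.0]))  # exact solution is (1/11, 7/11)
```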
<item>
<title><![CDATA[Some Properties on Fréchet-Weibull Distribution with Application to Real Life Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10488]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Deepshikha Deka&nbsp; &nbsp;Bhanita Das&nbsp; &nbsp;Bhupen K Baruah&nbsp; &nbsp;and Bhupen Baruah&nbsp; &nbsp;</p><p>The research, development and extensive use of generalized forms of distributions for analyzing and modeling applied-sciences research data have been growing tremendously. The Weibull and Fréchet distributions are widely discussed for reliability and survival analysis using experimental data from the physical, chemical, environmental and engineering sciences. Both distributions are applicable to extreme value theory as well as to small and large data sets. Recently, researchers have developed several probability distributions to model experimental data, as these parent models are not adequate to fit some experiments. Modified forms of the Weibull and Fréchet distributions are more flexible distributions for modeling experimental data. This article introduces a generalized form of the Weibull distribution, known as the Fréchet-Weibull Distribution (FWD), obtained using the T-X family, which provides a more flexible distribution for modeling experimental data. The pdf and cdf, together with the survival function [S(t)], the hazard rate function [h(t)], the asymptotic behaviour of the pdf and survival function, and the possible shapes of the pdf, cdf, S(t) and h(t) of the FWD, are studied, and the parameters are estimated using the maximum likelihood method (MLM). Some statistical properties of the FWD, such as the mode, moments, skewness, kurtosis, variation, quantile function, moment generating function, characteristic function and entropies, are investigated. Finally, the FWD is applied to two sets of observations from mechanical engineering and shows its superiority over other related distributions. 
This study provides a useful tool for analyzing and modeling datasets in the mechanical engineering sciences and related fields.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
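Maximum likelihood fitting of the plain two-parameter Weibull, a building block of distributions like the FWD, reduces to a one-dimensional root-finding problem for the shape. This is a generic sketch with illustrative data, not the FWD likelihood itself:

```python
import math

def weibull_mle(x, lo=0.01, hi=50.0, iters=200):
    # Profile MLE: the shape k solves
    #   sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0,
    # found here by bisection; the scale then follows in closed form.
    mlog = sum(math.log(v) for v in x) / len(x)
    def g(k):
        s1 = sum(v**k * math.log(v) for v in x)
        s0 = sum(v**k for v in x)
        return s1 / s0 - 1.0 / k - mlog
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = (sum(v**k for v in x) / len(x)) ** (1.0 / k)
    return k, scale

data = [1.2, 0.8, 1.5, 2.0, 0.9, 1.1, 1.7, 1.3]   # illustrative lifetimes
print(weibull_mle(data))
```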
<item>
<title><![CDATA[Corporate Domination Number of the Cartesian Product of Cycle and Path]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10487]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2021<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;9&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>S. Padmashini&nbsp; &nbsp;and S. Pethanachi Selvam&nbsp; &nbsp;</p><p>Domination in graphs means dominating a graph G by a set of vertices <img src=image/13421349_01.gif> (a subset of the vertex set of G) such that each vertex in G is either in D or adjacent to a vertex in D. D is called a perfect dominating set if each vertex v not in D is adjacent to exactly one vertex of D. We consider a subset C consisting of both vertices and edges. Let <img src=image/13421349_02.gif> denote the set of all vertices V and edges E of the graph G. Then <img src=image/13421349_03.gif> is said to be a corporate dominating set if every vertex v not in <img src=image/13421349_04.gif> is adjacent to exactly one vertex of <img src=image/13421349_04.gif>, where the set P consists of all vertices in the vertex set of an edge-induced subgraph <img src=image/13421349_05.gif> (E<sub>1</sub> a subset of E) such that at most one vertex is common to any two open neighborhoods of different vertices in V(G[E<sub>1</sub>]), and the set Q consists of all vertices in a vertex set V<sub>1</sub>, a subset of V, such that no vertex is common to any two open neighborhoods of different vertices in V<sub>1</sub>. The corporate domination number of G, denoted by <img src=image/13421349_06.gif>, is the minimum cardinality of C. In this paper, we determine the exact value of the corporate domination number for the Cartesian product of the cycle <img src=image/13421349_07.gif> and the path <img src=image/13421349_08.gif>.</p>]]></description>
<pubDate>Jan 2021</pubDate>
</item>
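The baseline notion, the classical domination number, can be computed by brute force for a small Cartesian product such as C₃□P₂ (the triangular prism). This sketches ordinary domination only; the paper's corporate domination is a refinement not implemented here:

```python
from itertools import combinations

def domination_number(n, edges):
    # Smallest set D of vertices such that every vertex is in D or
    # adjacent to a member of D, found by exhaustive search.
    nbrs = {v: {v} for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    for k in range(1, n + 1):
        for D in combinations(range(n), k):
            covered = set().union(*(nbrs[v] for v in D))
            if len(covered) == n:
                return k
    return n

# Cartesian product C3 x P2: two triangles 0-1-2 and 3-4-5
# joined by the rungs (i, i+3)
prism = [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (0,3), (1,4), (2,5)]
print(domination_number(6, prism))  # 2
```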
<item>
<title><![CDATA[Generalised Modified Taylor Series Approach of Developing k-step Block Methods for Solving Second Order Ordinary Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10429]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Oluwaseun Adeyeye&nbsp; &nbsp;and Zurni Omar&nbsp; &nbsp;</p><p>Various algorithms have been proposed for developing block methods, with numerical integration and collocation being the most widely adopted approaches. However, there is another conventional approach, the Taylor series approach, although at inception it was utilised only for the development of linear multistep methods for first order differential equations. This article therefore explores that approach through a modification of the conventional Taylor series approach. A new methodology for developing block methods, coined the Modified Taylor Series (MTS) Approach, is presented as a more accurate method for solving second order ordinary differential equations. A further step is taken by presenting a generalised form of the MTS Approach that produces any k-step block method for solving second order ordinary differential equations. The computational complexity of the generalised approach is calculated, and the result shows that the generalised algorithm involves less computational burden, and hence is suitable for adoption when developing block methods for solving second order ordinary differential equations. In summary, an alternative and easy-to-adopt approach to developing k-step block methods for solving second order ODEs with fewer computations is introduced in this article, with the developed block methods being suitable for solving second order differential equations directly.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
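The underlying Taylor series idea for second order ODEs can be sketched on the linear test problem y'' = -y (solution cos x), where higher derivatives are available by repeatedly substituting the equation. This is a direct single-step Taylor update, hedged as an illustration of the ingredient, not the k-step block formulation itself:

```python
import math

def taylor_step(y, yp, h, terms=8):
    # For y'' = -y the derivatives cycle: y, y', -y, -y', y, y', ...
    # so a truncated Taylor expansion of y and y' can be summed directly.
    d = [y, yp]
    for k in range(2, terms):
        d.append(-d[k - 2])
    ynew = sum(d[k] * h**k / math.factorial(k) for k in range(terms))
    ypnew = sum(d[k] * h**(k - 1) / math.factorial(k - 1) for k in range(1, terms))
    return ynew, ypnew

# Integrate y'' = -y with y(0) = 1, y'(0) = 0 up to x = 1
y, yp = 1.0, 0.0
h = 0.1
for _ in range(10):
    y, yp = taylor_step(y, yp, h)
print(abs(y - math.cos(1.0)))  # truncation error is tiny
```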
<item>
<title><![CDATA[Rainfall Modelling using Generalized Extreme Value Distribution with Cyclic Covariate]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10428]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Jasmine Lee Jia Min&nbsp; &nbsp;and Syafrina Abdul Halim&nbsp; &nbsp;</p><p>Increased flood risk is recognized as one of the most significant threats in most parts of the world, resulting in severe flooding events which have caused significant property and human life losses. As an increasing number of extreme flash flood events has recently been observed in Klang Valley, Malaysia, this paper focuses on modelling extreme daily rainfall over the 30 years from 1975 to 2005 in Klang Valley using the generalized extreme value (GEV) distribution. A cyclic covariate is introduced in the distribution because of the seasonal rainfall variation in the series. One stationary model (GEV) and three nonstationary models (NSGEV1, NSGEV2, and NSGEV3) are constructed to assess the impact of cyclic covariates on the extreme daily rainfall events. The best GEV model is selected using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the likelihood ratio test (LRT). The return level is then computed using the selected fitted GEV model. Results indicate that the NSGEV3 model, with the cyclic covariate trend in the location and scale parameters, provides a better fit to the extreme rainfall data. The results show the capability of the nonstationary GEV with cyclic covariates in capturing extreme rainfall events. The findings would be useful for engineering design and flood risk management purposes.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
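Once a GEV model is fitted, the return level is the quantile of the fitted distribution at probability 1 - 1/T. The parameter values below are hypothetical, not the paper's estimates:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    # T-year return level z_T from the GEV quantile function:
    # z_T = mu + (sigma/xi) * (y^(-xi) - 1) with y = -ln(1 - 1/T),
    # falling back to the Gumbel limit when xi is (numerically) zero.
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-12:
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y**(-xi) - 1.0)

# Hypothetical stationary daily-rainfall parameters (mm)
print(gev_return_level(60.0, 15.0, 0.1, 100))  # 100-year return level
```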
<item>
<title><![CDATA[Fuzzy Estimations for Detecting Abrupt Changes: Cases on Tourism Series]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10427]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Nurhaida&nbsp; &nbsp;Subanar&nbsp; &nbsp;Abdurakhman&nbsp; &nbsp;and Agus Maman Abadi&nbsp; &nbsp;</p><p>This article deals with the problem of detecting abrupt changes in time series based on the Change Point Model (CPM) framework. We propose a fuzzification in a Fuzzy Time Series (FTS) model to eliminate a trend in a contaminated dependent series. The independent residuals are then input to the CPM method. In simulating an abrupt change, an ARIMA(1,1,1) model and its variance are considered. The abrupt change is modelled as an AO (Additive Outlier) type of outlier. The minimum weight, or breaksize, of the abrupt change is defined based on the ARIMA variance formulated in this article. The percentage of uncorrelated residuals obtained by the FTS model and the percentage of correct detections of the proposed procedure are shown by simulation. The proposed detection algorithm is applied to detect abrupt changes in monthly tourism series from the literature, namely for Taiwan and for Bali. The first series shows a slowly increasing trend with one abrupt change, while the second series exhibits not only a slowly increasing trend but also a strong seasonal pattern with two abrupt changes. For comparison, we detect the changes in the empirical examples with an existing automatic detection procedure using the tso package in R. For the first example, the results show that both detection procedures give exactly the same location of one change point, which the package recognises as an AO type of outlier. The abrupt change is related to the period of the SARS outbreak in Taiwan. In the second example, the proposed procedure locates 4 change points forming two locations of change, i.e., the first two change points are within 2 time points of each other, as are the last two. 
The locations are close to the times of the Bali bombing events. Meanwhile, the automatic procedure recognizes only one AO outlier in the series.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Fitting a Curve, Cutting Surface, and Adjusting the Shapes of Developable Hermite Patches]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10426]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Kusno&nbsp; &nbsp;</p><p>The formulation of developable patches is useful for modeling plate-metal sheets in sheet-metal-industry objects. However, installing developable patches on a frame of such items and making holes in their surfaces still require practical techniques. For these reasons, this research introduces methods for fitting a curve segment, cutting developable patches, and adjusting their formulas. These methods can be used to design various profile shapes of rubber filler installed on a frame of the objects and to create a fissure or hole in the patches' surface. The steps are as follows. First, we define the planes containing the patches' generatrices and orthogonal to the boundary curves. We then fit Hermite and Bézier curves, by arranging control-point data on these planes, to model the rubber filler shapes. Second, we numerically evaluate a method for cutting the patches with a plane and adjusting their form by modifying their formula from a linear interpolation form into a combination of curve and vector forms. As a result, we can present equations and procedures for plotting the required curves, cutting surfaces, and modifying the extensible or narrowable shape of Hermite patches. These methods contribute to the design of sheet-metal object surfaces, especially the modeling of various forms of rubber filler profiles installed on a frame of the objects and the making of hole shapes in the plate-metal sheets.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Algorithmic Verification of Constraint Satisfaction Method on Timetable Problem]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10425]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Viliam Ďuriš&nbsp; &nbsp;</p><p>Various problems in the real world can be viewed as a Constraint Satisfaction Problem (CSP) based on several mathematical principles. This paper is a guideline for the complete automation of the Timetable Problem (TTP) formulated as a CSP, which can be solved algorithmically, so that the problem can be handled on a computer. The theory presents the fundamental concepts and characteristics of CSPs along with an overview of the basic algorithms used to solve them, formulates the TTP as a CSP, and delineates the basic properties and requirements to be met by the timetable. The theory in our paper is mostly based on the work of Jeavons, Cohen, Gyssens, Cooper, and Koubarakis, on the basis of which we have constructed a computer program that verifies the validity and functionality of the constraint satisfaction method for solving the Timetable Problem. The solution of the TTP, characterized by its basic properties and requirements, was implemented in a program by means of a tree-based search algorithm, and our main contribution is an algorithmic verification of the capabilities and reliability of constraints when solving a TTP. The created program was also used to verify the time complexity of the algorithmic solution.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Determining the Order of a Moving Average Model of Time Series Using Reversible Jump MCMC: A Comparison between Laplacian and Gaussian Noises]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10424]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Suparman&nbsp; &nbsp;Abdellah Salhi&nbsp; &nbsp;and Mohd Saifullah Rusiman&nbsp; &nbsp;</p><p>The moving average (MA) is a time series model often used for pattern forecasting and recognition. It contains a noise term that is often assumed to have a Gaussian distribution. However, in various applications, the noise often does not have this distribution. This paper suggests using Laplacian noise in the MA model instead. Gaussian and Laplacian noises were also compared to ascertain the right noise for the model. Moreover, the Bayesian method was used to estimate the parameters, such as the order and coefficients of the model, as well as the noise variance. The posterior distribution has a complex form because the parameters lie in a combination of spaces of different dimensions. Therefore, to overcome this problem, the reversible jump Markov Chain Monte Carlo (MCMC) algorithm is adopted. A simulation study was conducted to evaluate its performance. Once shown to work properly, the algorithm was applied to model human heart rate data. The results showed that the MCMC algorithm can estimate the parameters of the MA model developed with Laplace-distributed noise. Moreover, when compared with the Gaussian, the Laplacian noise resulted in a higher-order model and produced a smaller variance.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Comparison of Parameter Estimators for Generalized Pareto Distribution under Peak over Threshold]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10423]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Wilhemina Adoma Pels&nbsp; &nbsp;Atinuke Olusola Adebanji&nbsp; &nbsp;and Sampson Twumasi-Ankrah&nbsp; &nbsp;</p><p>The study focused on the Generalized Pareto Distribution (GPD) under the Peak Over Threshold (POT) approach. Twenty-one estimation methods were considered for extreme value modeling and their performances were compared. Our goal is to identify the best method under various conditions by means of a systematic simulation study. Some other estimators that were not originally created under the POT framework (non-POT) were also compared concurrently with those under the POT framework. The simulation results under varying shape parameters showed the Zhang estimator as the "best" performer in the non-POT setting for estimating both the shape and the scale parameter in heavy-tailed cases. In the POT framework, the Zhang estimator again performed "best" in estimating very heavy tails for the shape and very short tails for the scale, regardless of the value of the scale parameter. When varying the sample size, under the non-POT framework the Zhang estimator performed "best" for heavy tails, while for the POT framework the Pickands estimator performed "best" at estimating the shape parameter for large sample sizes and the Zhang estimator for small sample sizes.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Convergence Almost Everywhere of Non-convolutional Integral Operators in Lebesgue Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10237]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Yakhshiboev M. U.&nbsp; &nbsp;</p><p>The case of one-dimensional and multidimensional non-convolutional integral operators in Lebesgue spaces is considered in this paper. The convergence in norm and almost everywhere of non-convolutional integral operators in Lebesgue spaces has been insufficiently studied. The kernels <img src=image/13420650_01.gif> of non-convolutional integral operators need not have a monotone majorant, so the well-known results on the almost-everywhere convergence of convolutional averages are not applicable here. The kernels <img src=image/13420650_01.gif> of non-convolutional integral operators take into account different behaviors at <img src=image/13420650_02.gif> and <img src=image/13420650_03.gif> depending on <img src=image/13420650_04.gif> (which is important in applications) and cover, as particular cases, both convolutional and non-convolutional integral operators. We are interested in the behavior of the function <img src=image/13420650_05.gif> as <img src=image/13420650_06.gif>. Theorems on almost-everywhere convergence for one-dimensional and multidimensional non-convolutional integral operators in Lebesgue spaces are proved. The theorems proved are quite general (covering convolutional integral operators as well) and apply to a wide class of kernels.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Generalization of the Reachability Problem on Directed Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10236]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Vladimir A. Skorokhodov&nbsp; &nbsp;</p><p>The problem of reachability on graphs with restrictions is studied. Such restrictions mean that only those paths that satisfy certain conditions are valid paths on the graph. Because of this, for classical optimization problems one has to consider only a subset of the feasible paths on the graph, which significantly complicates their solution. Reachability constraints arise naturally in various applied problems, for example, in the problem of navigation in telecommunication networks with areas of strong signal attenuation, or when modeling technological processes in which there is a condition on the order of actions or the compatibility of operations. General concepts of a graph with non-standard reachability and of a valid path on it are introduced. It is shown that classical graphs, as well as graphs with restrictions on passing through selected subsets of arcs, are special cases of graphs with non-standard reachability. A general approach to solving the shortest path problem on a graph with non-standard reachability is developed. This approach consists of constructing an auxiliary graph and reducing the shortest path problem on a graph with non-standard reachability to a similar problem on the auxiliary graph. A theorem on the correspondence between the paths of the original and auxiliary graphs is proved.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[On Some Global Solution of the Basic Equations in the Geodesic Mappings' Theory of Riemannian Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10235]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>E. N. Sinyukova&nbsp; &nbsp;and O. L. Chepok&nbsp; &nbsp;</p><p>It is well known that the concepts of a geodesic line and a geodesic mapping are among the most fundamental concepts of the classical theory of Riemannian spaces. In geometry, the concept of a Riemannian space was formed as a generalization of the concept of a smooth surface in three-dimensional Euclidean space. It has turned out to be possible to extend to Riemannian spaces the concept of a geodesic point of a curve and to represent a geodesic line of a Riemannian space as a curve that consists exclusively of geodesic points. This fact has allowed an understanding of not only the local but also the global character of the basic equations of the theory of geodesic mappings of Riemannian spaces, which were originally obtained as a result of local investigations. An example of a global solution of the so-called new form of the basic equations in the theory of geodesic mappings of Riemannian spaces is built in the article. The sphere <img src=image/13417732_01.gif>, considered as a subset of the Euclidean space <img src=image/13417732_02.gif>, forms its topological background. The investigations are based on the concept of an equidistant Riemannian space. They are carried out in the atlas that consists of two charts obtained with the help of a stereographic projection.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Comparison of Rank Transformation Test Statistics with Its Nonparametric Counterpart Using Real-Life Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10234]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Adejumo T. Joel.&nbsp; &nbsp;Omonijo D. Ojo&nbsp; &nbsp;Owolabi A. Timothy&nbsp; &nbsp;Okegbade A. Ibukun&nbsp; &nbsp;Odukoya A. Jonathan&nbsp; &nbsp;and Ayedun C. Ayedun&nbsp; &nbsp;</p><p>Over the years, non-parametric test statistics have been the standard solution for data that do not follow a normal distribution. However, giving a statistical interpretation used to be a great challenge to some researchers. Hence, to overcome these hurdles, another test statistic, called the rank transformation test statistic, was proposed to close the gap between parametric and non-parametric test statistics. The purpose of this study is to compare the conclusions of the rank transformation test statistic with those of its equivalent non-parametric test statistics in both one- and two-sample problems using real-life data. In this study, the (2018/2019) Post Unified Tertiary Matriculation Examination (UTME) results of prospective students of Ladoke Akintola University of Technology (LAUTECH) Ogbomoso across all faculties of the institution were used for the analysis. The data were subjected to non-parametric test statistics, including the asymptotic Wilcoxon sign test and the Wilcoxon rank sum test (both asymptotic and distribution-based), using the Statistical Package for the Social Sciences (SPSS). In the same vein, R programming code was written for the rank transformation test statistics. Their p-values were extracted and compared with each other with respect to the pre-selected alpha level (α) = 0.05. Results in both cases revealed a significant difference in the median of the scores across all faculties, since the p-values are less than the preselected alpha level of 0.05. 
Therefore, the rank transformation test statistic is recommended as an alternative to non-parametric tests in both one-sample and two-sample problems.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[The Implementation of First Order and Second Order with Mixed Measurement to Identify Farmers Satisfaction]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10233]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Retno Ayu Cahyoningtyas&nbsp; &nbsp;Solimun&nbsp; &nbsp;and Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;</p><p>The purpose of this research is to develop structural modeling with metric and non-metric measurement scales. This study also compares the level of efficiency between the first-order and second-order models. The application of structural modeling in agriculture considered here is the satisfaction of farmers in East Java. The data used in this study are perception data obtained by distributing questionnaires to farmers in East Java Province in 2020. The respondents in this study were drawn from 155 districts in East Java Province. The sampling technique chosen is probability sampling, specifically proportional area random sampling. The results show that the first-order model is better than the second-order model because it has the lowest MSE value and the highest R<sup>2</sup>. The path analyses for the first-order and second-order models produce the same result: there is a significant positive effect of the gratitude variable on the farmer satisfaction variable. That is, the more gratitude felt by farmers, the higher the satisfaction of East Java farmers. On the other hand, the test results showed that demographic variables did not significantly influence the gratitude variable.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Measuring Given Partial Information about Intuitionistic Fuzzy Sets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10232]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Priya Arora&nbsp; &nbsp;and V. P Tomar&nbsp; &nbsp;</p><p>Background: Measuring information and removing uncertainty are essential to human thinking and to many real-world objectives. Information is useful and beneficial if it is free from uncertainty and fuzziness. Shannon was the first to coin the term entropy as a measure of uncertainty, and he gave an expression for entropy based on a probability distribution. Zadeh used Shannon's idea to develop the concept of fuzzy sets. Later on, Atanassov generalized the concept of a fuzzy set and developed intuitionistic fuzzy sets. Purpose: Sometimes we do not have complete information about a fuzzy set or an intuitionistic fuzzy set. Only partial information is known about them, i.e., either only a few values of the membership function <img src=image/13417006_01.gif> or the non-membership function <img src=image/13417006_02.gif> are known, or a relationship between them is known, or some inequalities governing these parameters are known. Kapur measured the partial information given by a fuzzy set. In this paper, we attempt to quantify the partial information given by intuitionistic fuzzy sets by considering all these cases. Methodology: We analyze some well-known definitions and axioms used in the field of fuzzy theory. Principal results: We have devised methods to measure the incomplete information given about intuitionistic fuzzy sets. Major conclusions: By devising methods of measuring partial information about an IFS, we can use this information to get an idea about the given set and use it wisely to make good decisions.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Evaluating the Performance of Unit Root Tests in Single Time Series Processes]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10231]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Jonathan Kwaku Afriyie&nbsp; &nbsp;Sampson Twumasi-Ankrah&nbsp; &nbsp;Kwasi Baah Gyamfi&nbsp; &nbsp;Doris Arthur&nbsp; &nbsp;and Wilhemina Adoma Pels&nbsp; &nbsp;</p><p>Unit root tests for stationarity are relevant in almost every practical time series analysis. Deciding which unit root test to use is a topic of active interest. In this study, we compare the performance of three commonly used unit root tests (i.e., the Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests) in time series. According to the literature, these unit root tests sometimes disagree in selecting the appropriate order of integration for a given series. Therefore, the decision to use a unit root test relies essentially on the judgment of the researcher. If we wish to remove this subjective decision, we have to find an objective basis that clearly characterizes which test is the most appropriate for a particular type of time series. Thus, this study seeks to unravel this problem by providing a guide on which unit root test to utilize when they disagree. A simulation study of eight (8) univariate time series models with eight (8) different sample sizes, three (3) differencing orders, and nine different parameter values was performed. It was observed from the results that the performance of the three tests improved as the sample size increased. Based on a comparison of overall performance, the KPSS was the "best" unit root test to use when there is disagreement.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[A Mathematical Model of Horizontal Averaged Groundwater Pollution Measurement with Several Substances due to Chemical Reaction]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9880]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Jirapud Limthanakul&nbsp; &nbsp;and Nopparat Pochai&nbsp; &nbsp;</p><p>Chloride is a well-known chemical compound that is very useful in industry and agriculture. Chloride can be transformed into hypochlorite, chlorite, chlorate, and perchlorate; chloride and these substances are not dangerous if used at optimal levels. Groundwater contaminated with chloride and its related substances affects human health; for example, drinking water in which the chloride content exceeds 250 mg/L can cause heart problems and contribute to high blood pressure. To avoid this problem, we use mathematical models to describe groundwater contamination with chloride and its related substances. A transient groundwater flow model provides the hydraulic head, giving the level of the groundwater. Next, we find the groundwater velocity and direction by feeding the result of the first model into the second model. The groundwater velocity model provides the x- and z-direction velocity components; after computation, we plug the result into the last model to approximate the chloride concentration in the groundwater. The groundwater contamination dispersion model provides the chloride, hypochlorite, chlorite, chlorate, and perchlorate concentrations. The proposed explicit finite difference techniques are used to approximate the model solutions. An explicit method was used to solve the hydraulic head model, a forward-space scheme was used for the groundwater velocity model, and a forward-time central-space scheme was used to predict the transient groundwater contamination models. The simulations can be used to indicate when each simulated zone becomes a hazardous zone or a protection zone.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Construction of Lorenz Curves Based on Empirical Distribution Laws of Economic Indicators]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9879]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Aleksandr Bochkov&nbsp; &nbsp;Dmitrii Pervukhin&nbsp; &nbsp;Aleksandr Grafov&nbsp; &nbsp;and Veronika Nikitina&nbsp; &nbsp;</p><p>The quality of the construction of Lorenz curves depends on the features of the information used. As a rule, the information is represented by a sample of values of the indicator under study, which is checked for unevenness. The economic indicators of income and cost, and the features of their samples, are considered. A feature of the cost indicator is highlighted: the presence in its sample of a clot (a concentration of values on a small segment of the entire range of the sample). It is shown that the established procedure for constructing empirical laws from such samples does not give the desired effect when constructing Lorenz curves, due to the loss of informativeness of the sample in the region of the clot. The purpose of this article is to improve the quality of the Lorenz curve by increasing the informativeness of a sample with a clot, by applying a clustering procedure when constructing the empirical law. A step-by-step clustering procedure is proposed for dividing the entire range of the sample into intervals to construct the empirical distribution law; this is an element of the novelty of this study. A specific example shows how to improve the quality of constructing a Lorenz curve using this procedure. In addition, it is shown that Lorenz curves for economic indicators can be constructed directly on the basis of the empirical distribution law while taking its features into account.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Solution of Newell – Whitehead – Segal Equation of Fractional Order by Using Sumudu Decomposition Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9878]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Shams A. Ahmed&nbsp; &nbsp;and Mohamed Elbadri&nbsp; &nbsp;</p><p>The Newell-Whitehead-Segal (NWS) equation has been used to describe many natural phenomena arising in fluid mechanics and has hence attracted much attention. Past studies gave importance to obtaining numerical or analytical solutions of this kind of equation by employing methods such as the Modified Homotopy Analysis Transform Method (MHATM), the Adomian Decomposition Method (ADM), the Homotopy Analysis Sumudu Transform Method (HASTM), the Fractional Complex Transform coupled with He's polynomials (FCT-HPM), and the Fractional Residual Power Series Method (FRPSM). This research demonstrates an efficient analytical method, the Sumudu Decomposition Method (SDM), for the study of analytical and numerical solutions of the NWS equation of fractional order. The coupling of the Adomian Decomposition Method with the Sumudu transform simplifies the calculation. The numerical results obtained show that the SDM is easy to execute and offers more accurate results for the NWS equation than other methods such as FCT-HPM and FRPSM. Therefore, the coupling of the Adomian Decomposition technique with the Sumudu transform is easy to apply and, when applied to nonlinear differential equations of fractional order, yields accurate results.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[On the Effect of Vaccination, Screening and Treatment in Controlling Typhoid Fever Spread Dynamics: Deterministic and Stochastic Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9877]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Temitope Olu Ogunlade&nbsp; &nbsp;Oluwatayo Michael Ogunmiloro&nbsp; &nbsp;Segun Nathaniel Ogunyebi&nbsp; &nbsp;Grace Ebunoluwa Fatoyinbo&nbsp; &nbsp;Joshua Otonritse Okoro&nbsp; &nbsp;Opeyemi Roselyn Akindutire&nbsp; &nbsp;Omobolaji Yusuf Halid&nbsp; &nbsp;and Adenike Oluwafunmilola Olubiyi&nbsp; &nbsp;</p><p>This work concerns a deterministic and stochastic model describing the transmission of typhoid fever infection in a human host community, where the vaccination of susceptible births and immigrants, as well as the screening and treatment of carriers and infected individuals, are considered in the model build-up. The well-posedness of the deterministic model and the computation of its basic reproduction number R<sub>typ</sub> are obtained and analysed. The deterministic model is further transformed into a stochastic model, where the drift and diffusion parts of the model are obtained, and the existence and uniqueness of the stochastic model are discussed. Numerical simulations involving the model parameters of R<sub>typ</sub> showed that vaccination of susceptible births and the influx of immigrants, as well as screening and treatment of carriers and infected humans, are effective in bringing the threshold R<sub>typ</sub> (R<sub>typ</sub> ≈ 0.7944) below 1, and the results of other simulations suggest that more health policies should be implemented, as a low R<sub>typ</sub> may not be guaranteed because vaccination wanes over time. In addition, the numerical simulations of the stochastic model equations describing the sub-populations of human individuals in the total human host community are carried out using the computational software MATLAB.</p>]]></description>
<pubDate>Nov 2020</pubDate>
</item>
<item>
<title><![CDATA[Finite Difference Method for Pricing of Indonesian Option under a Mixed Fractional Brownian Motion]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9862]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Chatarina Enny Murwaningtyas&nbsp; &nbsp;Sri Haryatmi Kartiko&nbsp; &nbsp;Gunardi&nbsp; &nbsp;and Herry Pribawanto Suryawan&nbsp; &nbsp;</p><p>This paper deals with Indonesian option pricing using mixed fractional Brownian motion to model the underlying stock price. There has been research on Indonesian option pricing using Brownian motion, and other research states that the logarithmic returns of the Jakarta composite index have long-range dependence. Motivated by the fact that there is long-range dependence in the logarithmic returns of Indonesian stock prices, we use mixed fractional Brownian motion to model the logarithmic returns of stock prices. The Indonesian option differs from other options in its exercise time. The option can be exercised at maturity or at any time before maturity with a profit of less than ten percent of the strike price. Also, the option is exercised automatically if the stock price hits a barrier price. Therefore, the mathematical model is unique, and we apply the partial differential equation method to study it. An implicit finite difference scheme has been developed to solve the partial differential equation used to obtain Indonesian option prices. We study the stability and convergence of the implicit finite difference scheme. We also present several examples of numerical solutions. Based on the theoretical analysis and the numerical solutions, the scheme proposed in this paper is efficient and reliable.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Probabilistic Inventory Model under Flexible Trade Credit Plan Depending upon Ordering Amount]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9861]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Piyali Mallick&nbsp; &nbsp;and Lakshmi Narayan De&nbsp; &nbsp;</p><p>In this work, we propose a stochastic inventory model for situations in which delay in payment is acceptable. Most inventory models on this topic assume that the supplier offers the retailer a fixed delay period and that the retailer can sell the goods, accumulate revenue, and earn interest within the credit period. They also assume that the trade credit period is independent of the order quantity. A few investigators have developed EOQ models under permissible delay in payments where the trade credit is linked to the order quantity: when the order quantity is less than the quantity at which delay in payment is permitted, payment for the items must be made immediately; otherwise, the fixed credit period is allowed. However, all these models were completely deterministic in nature. In reality, the trade credit period cannot be fixed; if it were fixed, the retailer would have no interest in buying a quantity higher than the fixed quantity at which delay in payment is permitted. To reflect this situation, we assume that the trade credit period is not static but varies with the order quantity. The demand during any arrangement period follows a probability distribution. We calculate the total variable cost per unit of time. The optimum ordering policy of the scheme can be found with the aid of three theorems (proofs are provided). An algorithm to determine the best ordering rule with the assistance of these propositions is established, and numerical instances are provided for clarification. A sensitivity investigation of all the parameters of the model is presented and discussed. 
Some previously published results are special cases of the results obtained in this paper.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Determining Day of Given Date Mathematically]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9860]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>R. Sivaraman&nbsp; &nbsp;</p><p>Computing the day of the week for a given date in any century has long been a quest among astronomers and mathematicians. In recent centuries, thanks to the efforts of some great mathematicians, we now know methods of accomplishing this task. Various methods have been developed, some of which are very concise and compact but come with little accessible explanation. The chief purpose of this paper is to address this issue. Moreover, almost all known calculations involve either tables or pre-determined codes assigned to months, years or centuries. In this paper, I establish a mathematical proof for determining the day of any given date, applicable to any year, even dates in BCE. I provide a detailed mathematical derivation of the month codes, which are key factors in determining the day of any given date. Though the procedures for determining the day of a given date are quite well known, the way they were arrived at is not; this paper treats that aspect in detail. To be precise, I explain the formula obtained by the German mathematician Zeller in detail and simplify it further, reducing its complexity while remaining as effective as the original formula. Explanations of leap years and other astronomical facts are presented clearly to aid the derivation of the compact form of Zeller's formula. Special cases and illustrations are provided wherever necessary to clarify the computations for a better understanding of the concepts.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
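As context for the abstract above, Zeller's congruence, the formula the paper simplifies, can be sketched as follows. This is a minimal illustration of the classical Gregorian form, not the paper's simplified variant; the function name is arbitrary.

```python
def zeller_day(year, month, day):
    """Day of the week via Zeller's congruence (Gregorian calendar)."""
    # January and February are counted as months 13 and 14 of the previous year
    if month in (1, 2):
        month += 12
        year -= 1
    k = year % 100   # year within the century
    j = year // 100  # zero-based century
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    # h = 0 corresponds to Saturday in Zeller's original convention
    return ("Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday")[h]
```

For example, `zeller_day(2000, 1, 1)` returns "Saturday", and `zeller_day(2017, 5, 4)` returns "Thursday".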
<item>
<title><![CDATA[Stochastic Latent Residual Approach for Consistency Model Assessment]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9859]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Hani Syahida Zulkafli&nbsp; &nbsp;George Streftaris&nbsp; &nbsp;and Gavin J. Gibson&nbsp; &nbsp;</p><p>Hypoglycaemia is a condition in which blood sugar levels in the body are too low. It is usually a side effect of insulin treatment in diabetic patients. Symptoms of hypoglycaemia vary not only between individuals but also within individuals, making it difficult for patients to recognize their hypoglycaemia episodes. Because the symptoms are not exclusive to hypoglycaemia, it is very important for patients to be able to identify that they are having a hypoglycaemia episode. Consistency models are statistical models that quantify the consistency of individual symptoms reported during hypoglycaemia. Because there are several variants of the consistency model, it is important to identify which model best fits the data. The aim of this paper is to assess and verify the models. We developed an assessment method based on stochastic latent residuals and performed posterior predictive checking as the model verification. It was found that a grouped-symptom consistency model with a multiplicative form of symptom propensity and episode intensity threshold fits the data better and has more reliable predictive ability than the other models. This model can be used to assist patients and medical practitioners in quantifying patients' symptom-reporting capability, hence promoting awareness of hypoglycaemia episodes so that corrective actions can be taken quickly.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Construction a Diagnostic Test in the Form of Two-tier Multiple Choice on Calculus Material]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9858]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Edy Nurfalah&nbsp; &nbsp;Irvana Arofah&nbsp; &nbsp;Ika Yuniwati&nbsp; &nbsp;Andi Haslinah&nbsp; &nbsp;and Dwi Retno Lestari&nbsp; &nbsp;</p><p>This work is development research on two-tier multiple-choice diagnostic test instruments for calculus material. The purposes of this study are: 1) to obtain the construction of a two-tier multiple-choice diagnostic test based on content and construct validity; 2) to establish the quality of the two-tier multiple-choice diagnostic tests based on the reliability value. The method used focuses on the construction of diagnostic tests, with the development research adapted from the Retnawati development model. The research produced the following results. 1) Based on content and construct validity, the two-tier multiple-choice diagnostic test is proven valid. Content validity is evidenced by the average validity index (V): the two-tier multiple-choice diagnostic test instrument obtained an average validity index of 0.9333 and the interview guideline instrument a validity index of 0.7556, both of which approach the value 1. For construct validity, three dominant factors were obtained from the scree plot, corresponding to the number of factors in the calculus material examined in this study. 2) The quality of the compiled two-tier diagnostic test instruments is established from the reliability value obtained.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Fuzzy Sumudu Decomposition Method for Fuzzy Delay Differential Equations with Strongly Generalized Differentiability]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9639]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>N. A. Abdul Rahman&nbsp; &nbsp;</p><p>Fuzzy delay differential equations have long been a powerful way to model real-life problems, and they have been developed considerably over the last decade. Many types of fuzzy derivatives have been considered, including the recently introduced concept of strongly generalized differentiability. However, under this interpretation very few methods have been introduced, limiting the further development of fuzzy delay differential equations. This paper aims to provide solutions for fuzzy nonlinear delay differential equations, with the derivatives interpreted using the concept of strongly generalized differentiability. Under this interpretation, the calculations lead to two cases, i.e. two solutions, one of which is decreasing in diameter. To this end, a method resulting from the combination of the fuzzy Sumudu transform and the Adomian decomposition method is used; it is termed the fuzzy Sumudu decomposition method. A detailed procedure for solving fuzzy nonlinear delay differential equations with the mentioned type of derivatives is constructed. A numerical example is provided to demonstrate the applicability of the method. It is shown that the solution is not unique, in accord with the concept of strongly generalized differentiability. The two solutions can later be chosen by researchers with regard to the characteristics of the problem. Finally, conclusions are drawn.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Hankel Determinant H<sub>2</sub>(3) for Certain Subclasses of Univalent Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9638]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Andy Liew Pik Hern&nbsp; &nbsp;Aini Janteng&nbsp; &nbsp;and Rashidah Omar&nbsp; &nbsp;</p><p>Let S be the class of functions which are analytic, normalized and univalent in the unit disk <img src=image/13416779_01.gif>. The main subclasses of S are starlike functions, convex functions, close-to-convex functions, quasiconvex functions, starlike functions with respect to (w.r.t.) symmetric points and convex functions w.r.t. symmetric points, denoted by <img src=image/13416779_02.gif>, and K<sub>S</sub> respectively. In the recent past, many mathematicians have studied the Hankel determinant for numerous classes of functions contained in S. The qth Hankel determinant for <img src=image/13416779_03.gif> and <img src=image/13416779_04.gif> is defined by <img src=image/13416779_05.gif>. <img src=image/13416779_06.gif> is the well-known Fekete-Szegő functional, which has been discussed since the 1930s. Mathematicians still take great interest in it, especially in altered versions of <img src=image/13416779_07.gif>. Indeed, many papers explore the determinants H<sub>2</sub>(2) and H<sub>3</sub>(1). From the explicit form of the functional H<sub>3</sub>(1), it contains H<sub>2</sub>(k) for k from 1 to 3. However, one of these determinants, <img src=image/13416779_08.gif>, has not yet received much attention. In this article, we deal with the Hankel determinant <img src=image/13416779_08.gif>. This determinant consists of coefficients of a function f belonging to the classes <img src=image/13416779_09.gif> and K<sub>S</sub>, so we may find the bounds of <img src=image/13416779_10.gif> for these classes. Moreover, sharp results for <img src=image/13416779_09.gif> and K<sub>S</sub> for which a<sub>2</sub> = 0 are obtained.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means for the Construction of Trapezoidal Membership Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9637]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Siti Hajar Khairuddin&nbsp; &nbsp;Mohd Hilmi Hasan&nbsp; &nbsp;and Manzoor Ahmed Hashmani&nbsp; &nbsp;</p><p>Fuzzy C-Means (FCM) is one of the most widely used techniques for fuzzy clustering and has proven robust and efficient in various applications; image segmentation, stock market analysis and web analytics are examples of popular applications which use FCM. One limitation of FCM is that it only produces a Gaussian membership function (MF). The literature shows that, depending on the data used, some types of membership function may perform better than others. Having only the Gaussian membership function as an option therefore limits the capability of fuzzy systems to produce accurate outcomes. Hence, this paper presents a method to generate another popular shape of MF, the trapezoidal shape (trapMF), from FCM, allowing FCM more flexibility in producing outputs. The construction of the trapMF uses the mathematical theory of Gaussian distributions, confidence intervals and inflection points. The cluster centers or means (μ) and standard deviations (σ) from the Gaussian output are used to determine the four trapezoidal parameters: lower limit a, upper limit d, lower support limit b, and upper support limit c, with the assistance of the function trapmf() in the Matlab fuzzy toolbox. The result shows that the mathematical theory of Gaussian distributions can be applied to generate a trapMF from FCM.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
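The Gaussian-to-trapezoid construction described in the abstract above can be sketched roughly as follows. The paper's exact parameter choices are not reproduced here: placing the shoulders b, c at the Gaussian inflection points μ ± σ and the outer limits a, d at a 95% confidence interval (z = 1.96) is an assumption for illustration only.

```python
def gaussian_to_trap(mu, sigma, z=1.96):
    """Map Gaussian cluster parameters to trapezoidal parameters (a, b, c, d).
    ASSUMED construction: shoulders at the inflection points mu +/- sigma,
    outer limits from a z-value confidence interval."""
    a, d = mu - z * sigma, mu + z * sigma
    b, c = mu - sigma, mu + sigma
    return a, b, c, d

def trapmf(x, a, b, c, d):
    """Trapezoidal membership degree, in the style of Matlab's trapmf()."""
    return max(0.0, min((x - a) / (b - a), 1.0, (d - x) / (d - c)))
```

With mu = 0 and sigma = 1, membership is 1 on the core [-1, 1] and falls linearly to 0 at ±1.96.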
<item>
<title><![CDATA[Homotopy Perturbation Method for Solving Linear Fuzzy Delay Differential Equations Using Double Parametric Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9636]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Ali F Jameel&nbsp; &nbsp;Sardar G Amen&nbsp; &nbsp;Azizan Saaban&nbsp; &nbsp;Noraziah H Man&nbsp; &nbsp;and Fathilah M Alipiah&nbsp; &nbsp;</p><p>Delay differential equations (DDEs) are in broad use across scientific research and engineering applications. They arise because the rate of change in the mathematical models depends not only on the present state but also on certain past states. In this work, we propose an algorithm for approximately solving linear fuzzy delay differential equations using the Homotopy Perturbation Method with the double parametric form of fuzzy numbers. A detailed algorithm for the fuzzification and defuzzification analysis is provided. The initial conditions of the proposed problem carry uncertainties represented by triangular fuzzy numbers. A double parametric form of fuzzy numbers is defined and applied for the first time in this topic for the present analysis. The method is simple and able to handle delay differential equations without complicated Adomian polynomials or restrictive nonlinearity assumptions. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To show the features of the proposed method, a numerical example involving a first-order fuzzy delay differential equation is presented. The findings indicate that the suggested approach is very successful and simple to implement.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Modified Average Sample Number for Improved Double Sampling Plan Based on Truncated Life Test Using Exponentiated Distributions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9635]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>O. S. Deepa&nbsp; &nbsp;</p><p>Product reliability has become a critical issue in the worldwide business market. Generally, acceptance sampling guarantees the quality of the product. In an acceptance sampling plan, increasing the sample size may reduce the consumer's risk of accepting bad lots and the producer's risk of rejecting good lots to a certain level, but it increases the cost of inspection. Hence, truncation of the life test time may be introduced to reduce the inspection cost. A Modified Average Sample Number (MASN) for the Improved Double Sampling Plan (IDSP) based on a truncated life test is considered for popular exponentiated families, namely the exponentiated gamma, exponentiated Lomax and exponentiated Weibull distributions. The modified ASN creates a bandwidth for the average sample number, which is very useful for the consumer and the producer: the interval offers the consumer a choice between a maximum and a minimum sample size, which is of much benefit without any loss for the producer. The probability of acceptance and the average sample number based on the modified double sampling plan are computed at the lower and upper limits for the exponentiated family. Optimal parameters of the IDSP under various exponentiated families with different shape parameters are computed. The proposed plan is compared with traditional double sampling and modified double sampling using the gamma, Weibull and Birnbaum-Saunders distributions, and the results show that the proposed plan based on the exponentiated family performs better than all the other plans. Tables are provided for all distributions, and a comparative study of the tables based on the proposed exponentiated family and earlier existing plans is also carried out.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[<img src=image/13491345_01.gif>-action Induced by Shift Map on 1-Step Shift of Finite Type over Two Symbols and k-type Transitive]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9634]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Nor Syahmina Kamarudin&nbsp; &nbsp;and Syahida Che Dzul-Kifli&nbsp; &nbsp;</p><p>The dynamics of a multidimensional dynamical system may sometimes be inherited from the dynamics of its classical dynamical system. In the multidimensional case, we introduce a new map called a <img src=image/13491345_01.gif>-action on a space X induced by a continuous map <img src=image/13491345_02.gif> as <img src=image/13491345_03.gif> such that <img src=image/13491345_04.gif>, where <img src=image/13491345_05.gif>, <img src=image/13491345_06.gif> and <img src=image/13491345_07.gif> is a map of the form <img src=image/13491345_08.gif>. We then look at how the topological transitivity of f affects the k-type transitivity of the <img src=image/13491345_01.gif>-action, <img src=image/13491345_09.gif>. To verify this, we look specifically at spaces called 1-step shifts of finite type over two symbols, equipped with the shift map <img src=image/13491345_12.gif>. We apply some topological theory to prove that the <img src=image/13491345_01.gif>-action on 1-step shifts of finite type over two symbols induced by the shift map, <img src=image/13491345_10.gif>, is k-type transitive for all <img src=image/13491345_11.gif> whenever <img src=image/13491345_12.gif> is topologically transitive. We found a counterexample showing that not all maps <img src=image/13491345_10.gif> are k-type transitive for all <img src=image/13491345_11.gif>. However, we have also found some sufficient conditions for k-type transitivity for all <img src=image/13491345_11.gif>. In conclusion, the map <img src=image/13491345_10.gif> on 1-step shifts of finite type over two symbols induced by the shift map is k-type transitive for all <img src=image/13491345_11.gif> whenever the shift map either is topologically transitive or satisfies the sufficient conditions. This study helps to develop the study of k-chaotic behaviours of <img src=image/13491345_01.gif>-actions on multidimensional dynamical systems and their applications to symbolic dynamics.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Comparison for the Approximate Solution of the Second-Order Fuzzy Nonlinear Differential Equation with Fuzzy Initial Conditions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9633]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Ali F Jameel&nbsp; &nbsp;Akram H. Shather&nbsp; &nbsp;N.R. Anakira&nbsp; &nbsp;A. K. Alomari&nbsp; &nbsp;and Azizan Saaban&nbsp; &nbsp;</p><p>This research focuses on approximate solutions of second-order fuzzy nonlinear differential equations with fuzzy initial conditions, using two different methods based on the properties of fuzzy set theory. The methods, based on the optimal homotopy asymptotic method (OHAM) and the homotopy analysis method (HAM), are implemented and analyzed to obtain the approximate solution of a second-order nonlinear fuzzy differential equation. The concept of homotopy from topology is used in both methods to produce a convergent series solution for the proposed problem. In contrast to perturbation approaches, these methods do not rely on small or large parameters, so we can easily monitor the convergence of the approximation series. Furthermore, unlike numerical methods, these techniques do not require any discretization or linearization, which reduces the calculations, and they can solve high-order problems without reducing them to a first-order system of equations. The obtained results for the proposed problem are presented, followed by a comparative study of the two implemented methods. The use of the methods and their validity and applicability in the fuzzy domain are illustrated by a numerical example. Finally, the convergence and accuracy of the proposed methods for the provided example are presented through error estimates against the exact solutions, displayed in tables and figures.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Construction of Bivariate Copulas on a Multivariate Exponentially Weighted Moving Average Control Chart]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9632]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Sirasak Sasiwannapong&nbsp; &nbsp;Saowanit Sukparungsee&nbsp; &nbsp;Piyapatr Busababodhin&nbsp; &nbsp;and Yupaporn Areepong&nbsp; &nbsp;</p><p>The control chart is an important tool in multivariate statistical process control (MSPC), used for monitoring, controlling, and improving the process. In this paper, we propose six types of copula combinations for use on a Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. Observations from an exponential distribution, with dependence measured by Kendall's tau for moderate and strong positive and negative dependence (where <img src=image/13491340_01.gif>) among the observations, were generated by Monte Carlo simulation to measure the Average Run Length (ARL) as the performance metric, which should be sufficiently large when the process is in control on a MEWMA control chart. In this study, we evaluate the performance of the MEWMA control chart based on copula combinations by using Monte Carlo simulations. The results show that the out-of-control (ARL<sub>1</sub>) values for <img src=image/13491340_02.gif> were less than those for <img src=image/13491340_03.gif> in almost all cases. The performance of the Farlie-Gumbel-Morgenstern×Ali-Mikhail-Haq copula combination was superior to the others for all shifts with strong positive dependence among the observations and <img src=image/13491340_02.gif>. Moreover, when the magnitudes of the shift were very large, the performance metric values for observations with moderate and strong positive and negative dependence followed the same pattern.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Test Efficiency Analysis of Parametric, Nonparametric, Semiparametric Regression in Spatial Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9631]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Diah Ayu Widyastuti&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;Henny Pramoedyo&nbsp; &nbsp;Nurjannah&nbsp; &nbsp;and Solimun&nbsp; &nbsp;</p><p>Regression analysis has three approaches to estimating the regression curve, namely the parametric, nonparametric, and semiparametric approaches. Several studies have discussed modeling with the three approaches on cross-section data, where observations are assumed to be independent of each other. In this study, we propose a new method for estimating parametric, nonparametric, and semiparametric regression curves on spatial data. In spatial data, each observation point has coordinates indicating its position, so observations are assumed to have different variations. The model developed in this research accommodates the influence of the predictor variables on the response variable globally for all observations, while adding the coordinates of each observation point locally. Based on the Mean Square Error (MSE) as the criterion for selecting the best model, modeling with the nonparametric approach produces the smallest MSE value, so these application data are modeled more precisely by the nonparametric truncated spline approach. Eight possible models are formed in this research, and the nonparametric model is better than the parametric model because its MSE value is smaller. In the semiparametric regression model that is formed, variable X<sub>2</sub> is a parametric component while X<sub>1</sub> and X<sub>3</sub> are nonparametric components (Model 2). The regression curve estimated with the nonparametric approach tends to be more efficient than Model 2 because the linearity assumption tests show that all the predictor variables have a non-linear relationship with the response variable. Thus, in this study, spatial data with non-linear relationships between the predictor variables and the response tend to be better modeled with a nonparametric approach.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[A Modified Robust Support Vector Regression Approach for Data Containing High Leverage Points and Outliers in the Y-direction]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9630]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Habshah Midi&nbsp; &nbsp;and Jama Mohamed&nbsp; &nbsp;</p><p>The support vector regression (SVR) model is currently a very popular non-parametric method for estimating linear and non-linear relationships between response and predictor variables. However, vertical outliers may be selected as support vectors and unduly affect the regression estimates; outliers arising from abnormal data points may result in poor predictions. Moreover, when both vertical outliers and high leverage points are present in the data, the problem is further complicated. In this paper, we introduce a modified robust SVR technique for the simultaneous presence of these two problems. Three types of SVR models, i.e. eps-regression (ε-SVR), nu-regression (v-SVR) and bound-constraint eps-regression (ε-BSVR), with eight different kernel functions, are integrated into the newly proposed algorithm. Based on 10-fold cross-validation and several model performance measures, the best model with a suitable kernel function is selected. To make the selected model robust, we develop a new double SVR (DSVR) technique based on fixed parameters, which can be used to detect and down-weight influential observations or anomalous points in the data set. The effectiveness of the proposed technique is verified using a simulation study and some well-known contaminated data sets.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Efficiency of Parameter Estimator of Various Resampling Methods on WarpPLS Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9629]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Luthfatul Amaliana&nbsp; &nbsp;Solimun&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;and Nurjannah&nbsp; &nbsp;</p><p>WarpPLS analysis has three algorithms: the outer-model parameter estimation algorithm, the inner-model parameter estimation algorithm, and the hypothesis-testing algorithm, which offers several resampling methods, namely Stable1, Stable2, Stable3, Bootstrap, Jackknife, and Blindfolding. The purpose of this study is to apply WarpPLS analysis and compare the six resampling methods based on the relative efficiency of their parameter estimates. This study uses secondary questionnaire data with 1 formative variable and 2 reflective variables. Secondary data for the Infrastructure Service Satisfaction Index (IKLI) were obtained from the Study Report on the Regional Development Planning for Economic Growth and the Malang City Gini Index in 2018, while secondary data for the Social Capital Index (IMS) and the Community Development Index (IPMas) were obtained from the Research Report on Performance Indicators of the Regional Human Development Index and Poverty Rate of Malang City in 2018. Based on the two criteria used, namely relative efficiency and model goodness-of-fit measures, the results indicate that the Jackknife resampling method is the most efficient, followed by the Stable1, Bootstrap, Stable3, Stable2, and Blindfolding methods.</p>]]></description>
<pubDate>Sep 2020</pubDate>
</item>
<item>
<title><![CDATA[Penalized Maximum Likelihood Estimation of Semiparametric Generalized Linear Models with Application to Climate Temperature Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9607]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Azumah Karim&nbsp; &nbsp;Ananda Omutokoh Kube&nbsp; &nbsp;and Bashiru Imoro Ibn Saeed&nbsp; &nbsp;</p><p>Global temperature change is an important indicator of climate change. Climate time series data are characterized by trend, seasonal/cyclical and irregular components, and the importance of adequately modeling these components cannot be overemphasized. In this paper, we propose an approach to modeling temperature data using a semiparametric additive generalized linear model. We derive a penalized maximum likelihood estimator for the additive components of the semiparametric generalized linear model, that is, for the regression coefficients and smooth functions. Statistical modeling was conducted on a real temperature time series data set. The study provides indications of the gain from semiparametric modeling in situations where a signal can be additively decomposed into trend, cyclical and irregular components. Thus, we recommend semiparametric additive penalized models as an option for fitting time series data sets, modeling the different components with different functions to adequately explain the relations inherent in the data.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
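The penalized-likelihood idea in the abstract above can be illustrated in its simplest Gaussian special case by a Whittaker-style smoother, which minimizes a squared-error fit plus a roughness penalty on second differences. This is a generic sketch of the penalty mechanism, not the paper's semiparametric GLM estimator; the smoothing parameter lam is an arbitrary illustrative value.

```python
import numpy as np

def whittaker_smooth(y, lam=10.0):
    """Penalized least squares: minimize ||y - z||^2 + lam * ||D2 z||^2."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)  # second-difference operator, shape (n-2, n)
    # Normal equations of the penalized criterion: (I + lam * D2'D2) z = y
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, np.asarray(y, float))
```

Linear trends pass through unchanged (their second differences vanish, so the penalty is zero), while noisy wiggles are shrunk toward a smooth trend as lam grows.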
<item>
<title><![CDATA[A Modification of Differential Transform Method for Solving Systems of Second Order Ordinary Differential Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9606]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>S. Al-Ahmad&nbsp; &nbsp;I. M. Sulaiman&nbsp; &nbsp;M. Mamat&nbsp; &nbsp;and L. G. Puspa&nbsp; &nbsp;</p><p>The differential transform method (DTM) is among the well-known mathematical approaches for obtaining solutions of differential equations, owing to its simplicity and efficient numerical performance. However, its major drawback is that it yields a truncated series solution, which is often a good approximation to the true solution only in a specified region. In this study, a modification of the DTM scheme, known as the MDTM, is proposed for obtaining accurate approximations to second-order ordinary differential equations. The scheme, whose procedure combines the DTM, the Laplace transform and finally Padé approximation, gives a good approximation to the true solution over a large region. The proposed approach overcomes the difficulty encountered with the classical DTM and can thus serve as an alternative approach for solving these problems. Preliminary results are presented for some examples which illustrate the strength and application of the scheme. All the obtained results corresponded to the exact solutions.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
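The classical DTM step that the MDTM modifies can be shown in a few lines. For y″ = −y with y(0) = 0, y′(0) = 1, the differential transform recurrence (k+1)(k+2)Y(k+2) = −Y(k) produces the Taylor coefficients of sin x; the truncated series is accurate only near the expansion point, which is precisely the drawback the Laplace–Padé post-processing addresses. This sketch is illustrative and not taken from the paper:

```python
import math

def dtm_sin_coeffs(n_terms):
    """Differential transform of y'' = -y, y(0) = 0, y'(0) = 1.
    The DTM recurrence (k+1)(k+2) Y(k+2) = -Y(k) reproduces the
    Taylor coefficients of sin(x) about x = 0."""
    Y = [0.0] * n_terms
    Y[1] = 1.0
    for k in range(n_terms - 2):
        Y[k + 2] = -Y[k] / ((k + 1) * (k + 2))
    return Y

def dtm_eval(Y, x):
    """Evaluate the truncated DTM series sum_k Y(k) * x**k."""
    return sum(c * x**k for k, c in enumerate(Y))

Y = dtm_sin_coeffs(12)
approx = dtm_eval(Y, 1.0)   # close to math.sin(1.0) near the expansion point
```

Far from x = 0 the truncated series degrades quickly, which is the behavior the MDTM's Laplace–Padé step is designed to repair.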
<item>
<title><![CDATA[Mathematics Day: Elements and Applications of Mathematics in the Final Year Project]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9604]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Noraishikin Zulkarnain&nbsp; &nbsp;Noorhelyna Razali&nbsp; &nbsp;Nuryazmin Ahmat Zainuri&nbsp; &nbsp;Haliza Othman&nbsp; &nbsp;and Alias Jedi&nbsp; &nbsp;</p><p>Mathematics is one of the major subjects that every engineering student needs to learn. However, students may have different views of and interest in Mathematics because of their different levels of thinking. To foster engineering students' appreciation of the applications of Mathematics in engineering courses and to help them apply and enhance their mathematical knowledge, the Fundamental Engineering Unit at the Faculty of Engineering and Built Environments, Universiti Kebangsaan Malaysia (UKM), organised the first 'Mathematics Day' on Thursday, May 4, 2017. Twelve final-year students participated in a competition in which they created posters using mathematical or statistical applications from their final year projects. The competition was judged by academic assessors, industry representatives, and UKM alumni. This study examines the mathematical elements and applications in the students' posters and reviews the relevance of the elements and topics of the Engineering Mathematics course appearing in them. Reports from students who attended the competition are also analysed to determine the effectiveness of the activity. The student reports are interpreted using descriptive statistics, and the results indicate that the students had a positive reaction to the activity.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
<item>
<title><![CDATA[Comparative Study on Fuzzy Models for Crop Production Forecasting]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9495]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Amit Kumar Rana&nbsp; &nbsp;</p><p>Fuzzy set theory is a very useful technique for increasing the effectiveness and efficiency of forecasting. Conventional time series methods are not applicable when the time series variables are linguistic, i.e., variables whose values are words. Since India and most other Asian countries have agriculture-based economies with much smaller farm land holdings than their American, Australian, and European counterparts, it is especially important for these countries to have an approximate idea of future crop production. This not only helps in planning future policies but is also a great help to farmers and agro-based companies in their future management. For small-area production, soft computing is an important and effective tool for predicting production, as agricultural production involves a high degree of uncertainty in many parameters. In the present study, 21 years of agricultural crop yield data are used and a comparative analysis of forecasts is performed with three fuzzy models. The robustness of the models is tested on real wheat production data from the farm of G.B. Pant University of Agriculture and Technology, Pantnagar, India. As soft computing techniques involve the uncertainty of the system under study, it is increasingly important for forecasting models to be accurate. The efficiency of the three models is examined on the basis of statistical errors: the models are judged by mean square error and average percentage error. The results concern small-area production prediction and should encourage work on predicting large-scale production.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
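The feed does not name the three fuzzy models compared. As an illustration of the general approach, here is a minimal sketch of one standard representative, Chen's first-order fuzzy time series method (a common baseline in this literature, assumed here, not confirmed by the abstract); the yield values and interval count are hypothetical:

```python
def chen_fuzzy_forecast(series, n_intervals=5):
    """First-order fuzzy time series forecasting in the style of
    Chen (1996): partition the universe of discourse into intervals,
    fuzzify the data, build fuzzy logical relationship groups, and
    forecast with interval midpoints. Returns one-step-ahead
    forecasts for t = 1..len(series)-1."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0
    mids = [lo + (i + 0.5) * width for i in range(n_intervals)]

    def fuzzify(x):
        # index of the interval containing x (clamped to the last one)
        return min(int((x - lo) / width), n_intervals - 1)

    states = [fuzzify(x) for x in series]
    # fuzzy logical relationship groups: state -> set of successor states
    flrg = {}
    for a, b in zip(states, states[1:]):
        flrg.setdefault(a, set()).add(b)

    forecasts = []
    for s in states[:-1]:
        nxt = sorted(flrg.get(s, {s}))
        forecasts.append(sum(mids[j] for j in nxt) / len(nxt))
    return forecasts

yields = [10, 12, 13, 12, 15, 16, 15, 17]   # hypothetical yearly yields
preds = chen_fuzzy_forecast(yields, n_intervals=5)
```

Comparing such forecasts against held-out observations by mean square error and average percentage error mirrors the evaluation described in the abstract.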
<item>
<title><![CDATA[Triple Laplace Transform in Bicomplex Space with Application]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9397]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Mahesh Puri Goswami&nbsp; &nbsp;and Naveen Jha&nbsp; &nbsp;</p><p>In this article, we investigate the bicomplex triple Laplace transform in the framework of the bicomplexified frequency domain with its region of convergence (ROC); it is a generalization of the complex triple Laplace transform. Bicomplex numbers are pairs of complex numbers forming a commutative ring with unity that contains zero-divisors; they admit a physical interpretation in four-dimensional space and provide a large class of frequency domains. We also derive some basic properties and an inversion theorem for the triple Laplace transform in bicomplex space. In this technique, we use the idempotent representation of bicomplex numbers, which plays a vital role in proving our results. Consequently, the obtained results are highly applicable in the fields of quantum mechanics, signal processing, electric circuit theory, control engineering, and the solution of differential equations. An application of the bicomplex triple Laplace transform is discussed in finding the solution of a third-order partial differential equation of a bicomplex-valued function.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
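The idempotent representation the authors rely on can be illustrated numerically. The sketch below (an assumption-level illustration, not the paper's construction) represents a bicomplex number z1 + j·z2 as a pair of Python complex numbers and verifies that ring multiplication becomes componentwise in the idempotent coordinates (z1 − i·z2, z1 + i·z2):

```python
# Bicomplex arithmetic via the idempotent representation.
# A bicomplex number is stored as a pair (z1, z2) of complex numbers,
# read as z1 + j*z2 with j**2 = -1 and i, j commuting.

def bc_mul(a, b):
    """Direct ring product: (a1 + j a2)(b1 + j b2)."""
    a1, a2 = a
    b1, b2 = b
    return (a1 * b1 - a2 * b2, a1 * b2 + a2 * b1)

def to_idempotent(z):
    """Map z1 + j z2 to its idempotent components (z1 - i z2, z1 + i z2),
    in which multiplication becomes componentwise."""
    z1, z2 = z
    return (z1 - 1j * z2, z1 + 1j * z2)

def from_idempotent(w):
    """Inverse of to_idempotent."""
    w1, w2 = w
    return ((w1 + w2) / 2, (w2 - w1) / 2j)

a = (1 + 2j, 3 - 1j)
b = (0.5 + 1j, -2 + 0.5j)
direct = bc_mul(a, b)
u, v = to_idempotent(a), to_idempotent(b)
componentwise = from_idempotent((u[0] * v[0], u[1] * v[1]))
# direct and componentwise agree: the idempotent map is a ring homomorphism
```

This componentwise splitting is what lets bicomplex transform results reduce to two ordinary complex computations.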
<item>
<title><![CDATA[The Implementation of Nonlinear Principal Component Analysis to Acquire the Demography of Latent Variable Data (A Study Case on Brawijaya University Students)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9396]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Solimun&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;and Retno Ayu Cahyoningtyas&nbsp; &nbsp;</p><p>Nonlinear principal component analysis is used for data measured on mixed scales. This study uses a formative measurement model combining metric and nonmetric data scales. The variable considered is the demographic variable. The study aims to obtain the principal components of the latent demographic variable and to identify the strongest indicators forming it, using a sample of students of Brawijaya University and predetermined indicators. The data are primary data collected with questionnaires distributed to the research respondents, active students of Brawijaya University, Malang. The method used is nonlinear principal component analysis. Nine indicators are specified: gender, regional origin, father's occupation, mother's occupation, type of residence, father's last education, mother's last education, parents' monthly income, and students' monthly allowance. The results show that the latent demographic variable for the sample of Brawijaya University students can be obtained by calculating its component scores. The nine indicators forming PC1 (X<sub>1</sub>) retain 19.49% of the variability, while the other 80.51% is not captured by this PC. Among the indicators, the strongest in forming the latent demographic variable are regional origin (I<sub>2</sub>) and type of residence (I<sub>5</sub>).</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
<item>
<title><![CDATA[Variance Homogeneity Test Based on Cumulative Wavelet Coefficients]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9395]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Abdeslam Serroukh&nbsp; &nbsp;and Khudhayr A. Rashedi&nbsp; &nbsp;</p><p>The aim of this paper is to address the problem of variance break detection in time series in the wavelet domain. The maximal overlap discrete wavelet transform (MODWT) decomposes the series variance across scales into components known as wavelet variances. We introduce a test statistic, based on wavelet coefficients at all scales, that detects a break in the homogeneity of the variance of a series through changes in the mean of the wavelet variances. The statistic makes use of the traditional CUSUM (cumulative sum) test designed to detect a break in the mean, constructed here from cumulative sums of squared wavelet coefficients. Under moment and mixing conditions, the test statistic satisfies the functional central limit theorem (FCLT) for a broad class of time series models. The overall performance of our statistic is compared to the traditional Inclan [8] statistic; its effectiveness is supported by good performance in simulations, where it proves as reliable as the traditional statistic. Our method provides a nonparametric test procedure that can be applied to a large class of linear and nonlinear models. We illustrate its practical use with the quarterly percentage changes in the American personal savings data set over the period 1970-2016. Both statistics detect a break in the variance in the second quarter of 2001.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
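A minimal sketch of the general idea (not the paper's exact statistic): unit-scale Haar MODWT coefficients feed an Inclan–Tiao-style CUSUM of squares whose maximizer locates a variance break. The simulated series, break location, and variance levels below are made up for illustration:

```python
import random

def haar_modwt_level1(x):
    """Unit-scale Haar MODWT wavelet coefficients (circular filtering,
    so the t = 0 coefficient wraps around to the last observation)."""
    return [(x[t] - x[t - 1]) / 2.0 for t in range(len(x))]

def cusum_variance_break(w):
    """CUSUM of squared wavelet coefficients: returns the maximum
    deviation |S_k/S_N - k/N| and the index where it occurs."""
    s = [w[0] ** 2]
    for v in w[1:]:
        s.append(s[-1] + v ** 2)
    total, n = s[-1], len(w)
    devs = [abs(s[k] / total - (k + 1) / n) for k in range(n)]
    d = max(devs)
    return d, devs.index(d)

random.seed(0)
# simulated series whose standard deviation jumps from 1 to 3 halfway
x = ([random.gauss(0, 1) for _ in range(200)] +
     [random.gauss(0, 3) for _ in range(200)])
w = haar_modwt_level1(x)
d, k = cusum_variance_break(w)   # k lands near the true break at t = 200
```

In practice the maximum deviation d would be compared against the FCLT-based critical value before declaring a break.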
<item>
<title><![CDATA[Logistic Map on the Ring of Multisets and Its Application in Economic Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9394]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Iryna Halushchak&nbsp; &nbsp;Zoriana Novosad&nbsp; &nbsp;Yurii Tsizhma&nbsp; &nbsp;and Andriy Zagorodnyuk&nbsp; &nbsp;</p><p>In this paper, we extend complex polynomial dynamics to a set of multisets endowed with ring operations (the metric ring of multisets associated with supersymmetric polynomials of infinitely many variables). Some new properties of the ring of multisets are established and a homomorphism to a function ring is constructed. Using complex homomorphisms on the ring of multisets, we propose a method for investigating polynomial dynamics over this ring by reducing them to a finite number of scalar-valued polynomial dynamics, and we estimate the number of such scalar-valued dynamics. As an important example, we consider an analogue of the logistic map, defined on a subring of multisets consisting of positive numbers in the interval [0, 1]. A possible application to the study of the natural market development process in a competitive environment is proposed. In particular, it is shown that the multiset approach yields a model that takes into account credit debt and reinvestment. Some numerical examples of logistic maps for different growth-rate multisets [r] are considered. Note that the growth rate [r] may contain both "positive" and "negative" components, and the examples demonstrate the influence of these components on the dynamics.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
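The reduction of multiset dynamics to finitely many scalar dynamics can be mimicked in a few lines. The sketch below is a simplification (an assumption: it applies the scalar logistic map to each component of a growth-rate multiset separately, ignoring the ring structure the paper actually uses) and shows a stable and a periodic component coexisting in one multiset orbit:

```python
def logistic_orbit(r, x0, n):
    """Scalar logistic map x -> r*x*(1 - x), iterated n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

def multiset_logistic(rates, x0, n):
    """Reduce the multiset-valued logistic dynamics to a finite family
    of scalar dynamics: one orbit per element of the growth-rate
    multiset [r]. This mirrors, in spirit, the paper's reduction via
    complex homomorphisms."""
    return sorted(logistic_orbit(r, x0, n) for r in rates)

# a growth-rate multiset with a stable (r = 2.5) and a
# period-2 (r = 3.2) component, started from x0 = 0.3
out = multiset_logistic([2.5, 3.2], 0.3, 500)
```

The r = 2.5 component settles at the fixed point 1 − 1/r = 0.6, while the r = 3.2 component ends up on its period-2 cycle, so one multiset orbit carries qualitatively different scalar behaviors at once.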
<item>
<title><![CDATA[3-Vertex Friendly Index Set of Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9393]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Girija K. P.&nbsp; &nbsp;Devadas Nayak C&nbsp; &nbsp;Sabitha D’Souza&nbsp; &nbsp;and Pradeep G. Bhat&nbsp; &nbsp;</p><p>Graph labeling is an assignment of integers to the vertices or the edges, or both, subject to certain conditions. In the literature we find several labelings, such as graceful, harmonious, binary, friendly, cordial, ternary and many more. A friendly labeling is a binary mapping such that <img src=image/13415080_01.gif> where <img src=image/13415080_02.gif> and <img src=image/13415080_03.gif> represent the numbers of vertices labeled 1 and 0 respectively. For each edge <img src=image/13415080_21.gif> assign the label <img src=image/13415080_04.gif>; then the function f is a cordial labeling of G if <img src=image/13415080_05.gif> and <img src=image/13415080_06.gif>, where <img src=image/13415080_07.gif> and <img src=image/13415080_08.gif> are the numbers of edges labeled 1 and 0 respectively. The friendly index set of a graph, denoted FI(G), is {<img src=image/13415080_09.gif> : <img src=image/13415080_10.gif> runs over all friendly labelings f of G}. A mapping <img src=image/13415080_11.gif> is called a ternary vertex labeling and <img src=image/13415080_12.gif> represents the vertex label for <img src=image/13415080_13.gif>. In this article, we extend the concept of ternary vertex labeling to 3-vertex friendly labeling and define the 3-vertex friendly index set of graphs. The set <img src=image/13415080_14.gif>, where f runs over all 3-vertex friendly labelings for all <img src=image/13415080_15.gif>, is referred to as the 3-vertex friendly index set. In order to achieve <img src=image/13415080_16.gif>, the vertex set is partitioned into <img src=image/13415080_17.gif> such that <img src=image/13415080_18.gif> for all <img src=image/13415080_19.gif> with <img src=image/13415080_20.gif>, and the edge <img src=image/13415080_21.gif> is labeled <img src=image/13415080_04.gif> where <img src=image/13415080_22.gif>. In this paper, we study the 3-vertex friendly index sets of some standard graphs such as the complete graph K<sub>n</sub>, path P<sub>n</sub>, wheel graph W<sub>n</sub>, complete bipartite graph K<sub>m,n</sub> and cycle with parallel chords PC<sub>n</sub>.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
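The binary friendly index set defined in the abstract can be computed by brute force on small graphs. The exact labeling formulas are images in the original, so the sketch below assumes the mod-2 sum edge-labeling convention common in the friendly-index literature and computes FI(P<sub>4</sub>) under that assumption:

```python
from itertools import product

def friendly_index_set(n_vertices, edges):
    """Brute-force FI(G) for the binary (2-label) case of the abstract:
    f is friendly when |v_f(1) - v_f(0)| <= 1, each edge uv is labeled
    f(u) + f(v) mod 2 (an assumed convention), and FI(G) collects the
    values |e_f(1) - e_f(0)| over all friendly labelings f."""
    fi = set()
    for f in product((0, 1), repeat=n_vertices):
        if abs(sum(f) - (n_vertices - sum(f))) > 1:
            continue  # not friendly: label counts differ by more than 1
        e1 = sum((f[u] + f[v]) % 2 for u, v in edges)
        fi.add(abs(e1 - (len(edges) - e1)))
    return fi

# path P4 with vertices 0-1-2-3
fi_p4 = friendly_index_set(4, [(0, 1), (1, 2), (2, 3)])
```

The same exhaustive scheme extends directly to three labels once the 3-vertex friendly balance condition and edge rule are written out.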
<item>
<title><![CDATA[The Parabolic Transform and Some Singular Integral Evolution Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9392]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Mahmoud M. El-Borai&nbsp; &nbsp;and Khairia El-Said El-Nadi&nbsp; &nbsp;</p><p>Some singular integral evolution equations with a wide class of closed operators are studied in Banach spaces. The considered integral equations are investigated without assuming the existence of the resolvent of the closed operators. Some nonlinear singular evolution equations are also studied. An abstract parabolic transform is constructed to study the solutions of the considered ill-posed problems. Applications to fractional evolution equations and Hilfer fractional evolution equations are given. All the results can be applied to general singular integro-differential equations. The Fourier transform plays an important role in constructing solutions of the Cauchy problems for parabolic and hyperbolic partial differential equations, but it is suitable only under conditions on the characteristic forms of the partial differential operators. Likewise, the Laplace transform plays an important role in studying the Cauchy problem for abstract differential equations in Banach spaces, but in that case the existence of the resolvent of the considered abstract operators is needed. This note is devoted to exploring the Cauchy problem for general singular integro-partial differential equations without conditions on the characteristic forms, and to studying general singular integral evolution equations. Our approach is based on applying the new parabolic transform, which generalizes methods developed within the regularization theory of ill-posed problems.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
<item>
<title><![CDATA[Global Existence and Nonexistence of Solutions to a Cross Diffusion System with Nonlocal Boundary Conditions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9391]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Z. R. Rakhmonov&nbsp; &nbsp;A. Khaydarov&nbsp; &nbsp;and J. E. Urunbaev&nbsp; &nbsp;</p><p>Mathematical models of nonlinear cross diffusion are described by a system of nonlinear parabolic partial differential equations coupled through nonlinear boundary conditions. Explicit analytical solutions of such nonlinearly coupled systems rarely exist, and thus several numerical methods have been applied to obtain approximate solutions. In this paper, based on self-similar analysis and the method of standard equations, the qualitative properties of a nonlinear cross-diffusion system with nonlocal boundary conditions are studied. We construct various self-similar solutions of the cross-diffusion problem for the case of slow diffusion. It is proved that, for certain values of the numerical parameters of the nonlinear cross-diffusion system of parabolic equations coupled via nonlinear boundary conditions, global-in-time solutions may fail to exist. Based on self-similar analysis and the comparison principle, the critical exponent of Fujita type and the critical exponent of global solvability are established. Using the comparison theorem, upper bounds for global solutions and lower bounds for blow-up solutions are obtained.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
<item>
<title><![CDATA[Construction of Triangles with the Algebraic Geometry Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9390]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Viliam Ďuriš&nbsp; &nbsp;and Timotej Šumný&nbsp; &nbsp;</p><p>The accuracy of geometric construction is one of the important characteristics of mathematics and mathematical skill. In geometric constructions, however, there is often a problem of accuracy: so-called 'optical accuracy' appears, meaning that the construction is accurate only with respect to the drawing pad used. Such 'optically accurate' constructions are called approximative constructions because they do not achieve exact accuracy but give the best possible approximation. Geometric problems correspond to algebraic equations in two ways. The first method is based on the construction of algebraic expressions, which are transformed into an equation. The second is based on the methods of analytic geometry, where geometric objects and points are expressed directly by equations that describe their properties in a coordinate system. In either case, we obtain an equation whose algebraic solution corresponds to the geometric solution. The paper provides a methodology for solving some specific geometric tasks by means of algebraic geometry related to cubic and biquadratic equations. It thus focuses on approximate geometric constructions, which have had a significant historical impact on the development of mathematics precisely because these tasks are not solvable with compass and straightedge. This type of geometric problem has a strong position and practical justification in the area of technology. The contribution of our work lies in approaching solutions of geometric problems that lead to algebraic equations of higher degree, whose importance for the development of mathematics is undeniable. Since approximate constructions, and the solution methods resulting from them, are not common, the content of the paper is significant.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
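A standard example of the paper's theme that some constructions reduce to cubic equations unsolvable by compass and straightedge: trisecting a 60° angle leads, via cos 3θ = 4cos³θ − 3cos θ, to the cubic 8x³ − 6x − 1 = 0 with root cos 20°. The numerical approximation below is a generic sketch, not taken from the paper:

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Simple bisection root-finder; assumes f(a) and f(b) differ in sign."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                 # root lies in [a, m]
        else:
            a, fa = m, f(m)       # root lies in [m, b]
    return (a + b) / 2

# Trisection of 60 degrees: the relevant root of 8x^3 - 6x - 1 = 0
# is cos(20 deg), not constructible by compass and straightedge but
# easy to approximate numerically.
root = bisect(lambda x: 8 * x**3 - 6 * x - 1, 0.9, 1.0)
```

An 'optically accurate' construction plays the same role on paper that this numerical root plays in computation: an approximation within the tolerance of the instrument.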
<item>
<title><![CDATA[Exploring Metallic Ratios]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9389]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>R. Sivaraman&nbsp; &nbsp;</p><p>A huge amount of literature has been written and published about the Golden Ratio, but not many have heard of its generalized version, the Metallic Ratios, which are introduced in this paper; the methods of deriving them are discussed in detail. This will help further exploration of the universe of real numbers. In mathematics, sequences play a vital role in understanding the complexities of any problem that exhibits a pattern. For example, population growth, the radioactive decay of a substance, and the lifetime of an object all follow a sequence called a geometric progression; indeed, the rate at which the recent novel coronavirus (COVID-19) spreads is said to follow a geometric progression with common ratio approximately between 2 and 3. Almost all branches of science use sequences: genetic engineers use DNA sequences, electrical engineers use the Morse-Thue sequence, and the list goes on. Among the vast number of sequences used for scientific investigation, one of the most famous and familiar is the Fibonacci sequence, named after the Italian mathematician Leonardo Fibonacci through his book "Liber Abaci", published in 1202. In this paper, I introduce sequences resembling the Fibonacci sequence and generalize them to identify a general class of numbers called "Metallic Ratios".</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
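The metallic ratios generalize the golden ratio as the positive roots of x² = nx + 1, each also arising as the limiting ratio of consecutive terms of a generalized Fibonacci recurrence. A small sketch (the formulas are standard, not quoted from the paper):

```python
import math

def metallic_ratio(n):
    """Positive root of x**2 = n*x + 1: n = 1 gives the golden ratio,
    n = 2 the silver ratio, n = 3 the bronze ratio, and so on."""
    return (n + math.sqrt(n * n + 4)) / 2

def metallic_ratio_by_sequence(n, steps=60):
    """The same constant as the limit of consecutive-term ratios of the
    generalized Fibonacci recurrence a(k+1) = n*a(k) + a(k-1)."""
    a, b = 1, 1
    for _ in range(steps):
        a, b = b, n * b + a
    return b / a
```

For n = 1 the sequence route recovers the familiar Fibonacci-ratio limit (1 + √5)/2; larger n converge even faster because the recurrence grows more steeply.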
<item>
<title><![CDATA[Common Coupled Fixed Point Theorems for Weakly F-contractive Mappings in Topological Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9388]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Savita Rathee&nbsp; &nbsp;and Priyanka Gupta&nbsp; &nbsp;</p><p>In the late sixties, Furi and Vignoli proved fixed point results for α-condensing mappings on bounded complete metric spaces. Bugajewski generalized these results to weakly F-contractive mappings on topological spaces. Bugajewski and Kasprzak proved several fixed point results for weakly F-contractive mappings using the approach of lower (upper) semi-continuous functions. Later, by modifying the concept of weakly F-contractive mappings, coupled fixed point results were proved by Cho, Shah and Hussain on topological spaces. On various spaces, common coupled fixed point results were discussed by Liu, Zhou and Damjanovic, by Nashine and Shatanawi, and by many other authors. In this work, we prove common coupled fixed point theorems by adopting the modified definition of a weakly F-contractive mapping r : T→T, where T is a topological space. We then extend the result of Cho, Shah and Hussain for Banach spaces to common coupled quasi-solutions enriched with a relevant transitive binary relation. We also give an example in support of the proved result. Our results extend and generalize several existing results in the literature.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
<item>
<title><![CDATA[Robust Regression Analysis Study for Data with Outliers at Some Significance Levels]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9387]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Waego Hadi Nugroho&nbsp; &nbsp;Ni Wayan Surya Wardhani&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;and Solimun&nbsp; &nbsp;</p><p>Robust regression analysis is used when a regression model contains outliers. Outliers make the data non-normal. The most commonly used parameter estimation method is ordinary least squares (OLS); however, outliers bias the least-squares estimator, so they require special handling. One regression technique designed for outliers is robust regression, in particular M-estimation. Using Tukey's bisquare weight function, the robust M-estimation method can estimate the parameters of a model, here applied to malnutrition data in East Java Province, 2017 to 2012. This study compares the robust M-estimation method and the OLS method on data at several significance levels: 1%, 5%, and 10%. The predictor variables were the percentage of poor society, population density, and the number of health facilities. R<sup>2</sup> is used to compare the OLS method and the robust M-estimation method. The results show that robust regression is the best method for handling a model when the data contain outliers. This was supported by almost all values of R<sup>2</sup>: on each data set, M-estimation achieved a higher value than the OLS method.</p>]]></description>
<pubDate>Jul 2020</pubDate>
</item>
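Robust M-estimation with Tukey's bisquare weights is typically computed by iteratively reweighted least squares (IRLS). The sketch below illustrates this on simple linear regression with a fabricated data set containing one gross outlier; the scale estimate MAD/0.6745 and tuning constant c = 4.685 are conventional choices, not taken from the paper:

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = b0 + b1*x; returns (b0, b1)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    b1 = sxy / sxx
    return my - b1 * mx, b1

def tukey_bisquare_fit(x, y, c=4.685, iters=30):
    """IRLS M-estimation with Tukey's bisquare weights
    w(u) = (1 - (u/(c*s))**2)**2 for |u| < c*s, else 0,
    where s = MAD/0.6745 is a robust scale estimate."""
    w = [1.0] * len(x)
    for _ in range(iters):
        b0, b1 = wls_line(x, y, w)
        r = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
        s = max(sorted(abs(ri) for ri in r)[len(r) // 2], 1e-8) / 0.6745
        w = [(1 - (ri / (c * s)) ** 2) ** 2 if abs(ri) < c * s else 0.0
             for ri in r]
    return b0, b1

# fabricated data: y = 2x + 1 with one gross outlier at x = 9
x = list(range(10))
y = [2 * xi + 1.0 for xi in x]
y[9] = 60.0
b0_ols, b1_ols = wls_line(x, y, [1.0] * len(x))
b0_rob, b1_rob = tukey_bisquare_fit(x, y)
```

The OLS slope is pulled far from the true value 2 by the single outlier, while the bisquare fit drives the outlier's weight toward zero and recovers the underlying line, mirroring the R<sup>2</sup> comparison reported in the abstract.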
<item>
<title><![CDATA[Superstability and Solution of The Pexiderized Trigonometric Functional Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9360]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Gwang Hui Kim&nbsp; &nbsp;</p><p>The present work continues the study of the superstability and solution of the Pexider-type functional equation <img src=image/13415077_01.gif>, the mixed functional equation represented by sums of the sine, cosine, tangent, hyperbolic trigonometric, and exponential functions. The stability of the cosine (d'Alembert) functional equation and the Wilson equation was researched by many authors: Baker [7], Badora [5], Kannappan [14], Kim ([16, 19]), and Fassi et al. [11]. The stability of sine-type equations was researched by Cholewa [10] and Kim ([18], [20]). The stability of the difference-type equation <img src=image/13415077_02.gif> for the above equation was studied by Kim ([21], [22]). In this paper, we investigate the superstability of the sine functional equation and the Wilson equation from the Pexider-type difference functional equation <img src=image/13415077_03.gif>, the mixed equation represented by the sine, cosine, tangent, hyperbolic trigonometric, and exponential functions. We additionally obtain that the Wilson equation and the cosine functional equation in the obtained results can be represented by the composition of a homomorphism. Here, the domain (G, +) of the functions <img src=image/13415077_04.gif> is a noncommutative semigroup (or a 2-divisible Abelian group), and A is a unital commutative normed algebra with unit 1<sub>A</sub>. The obtained results can be applied and extended to the stability of difference-type functional equations involving the (hyperbolic) secant, cosecant, and logarithmic functions.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[On (2; 2)-regular Non-associative Ordered Semigroups via Its Semilattices and Generated (Generalized Fuzzy) Ideals]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9359]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Yousef Al-Qudah&nbsp; &nbsp;Faisal Yousafzai&nbsp; &nbsp;Mohammed M. Khalaf&nbsp; &nbsp;and Mohammad Almousa&nbsp; &nbsp;</p><p>The main motivation behind this paper is to study some structural properties of a non-associative structure, as such structures have not attracted much attention compared to associative ones. In this paper, we introduce the concept of an ordered A<sup>*</sup>G<sup>**</sup>-groupoid and prove that this class is more general than that of ordered AG-groupoids with left identity. We also define the generated left (right) ideals in an ordered A<sup>*</sup>G<sup>**</sup>-groupoid and characterize a (2; 2)-regular ordered A<sup>*</sup>G<sup>**</sup>-groupoid in terms of these ideals. We then study the structural properties of an ordered A<sup>*</sup>G<sup>**</sup>-groupoid in terms of its semilattices, its (2; 2)-regular class, and generated commutative monoids. Subsequently, we compare <img src=image/13415293_01.gif>-fuzzy left/right ideals of an ordered AG-groupoid, and respective examples are provided. Relations between <img src=image/13415293_01.gif>-fuzzy idempotent subsets of an ordered A<sup>*</sup>G<sup>**</sup>-groupoid and its <img src=image/13415293_01.gif>-fuzzy bi-ideals are discussed. As an application of our results, we obtain characterizations of (2; 2)-regular ordered A<sup>*</sup>G<sup>**</sup>-groupoids in terms of semilattices and <img src=image/13415293_01.gif>-fuzzy left (right) ideals. These concepts will help in verifying existing characterizations and in achieving new and more general results in future work.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Differential Invariants of One Parametrical Group of Transformations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9358]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Abdishukurova Guzal&nbsp; &nbsp;Narmanov Abdigappar&nbsp; &nbsp;and Sharipov Xurshid&nbsp; &nbsp;</p><p>The concept of a differential invariant, along with the concept of invariant differentiation, is key in modern geometry [1]-[10]. In the Erlangen program [3], Felix Klein proposed a unified approach to the description of various geometries. According to this program, one of the main problems of geometry is to construct invariants of geometric objects with respect to the action of the group defining the geometry. This approach is largely based on the ideas of Sophus Lie, who introduced continuous groups of geometric transformations, now known as Lie groups, into geometry. In particular, when considering classification and equivalence problems in differential geometry, differential invariants with respect to the action of Lie groups should be considered. In this case, the equivalence problem for geometric objects reduces to finding a complete system of scalar differential invariants. The interpretation of a k-th order differential invariant as a function on the space of k-jets of sections of the corresponding bundle makes it possible to operate with them efficiently, and by invariant differentiation new differential invariants can be obtained. Differential invariants with respect to a given Lie group generate differential equations for which this group is a symmetry group. This allows one to apply well-known integration methods to such equations, in particular the Lie-Bianchi theorem [4]. Depending on the type of geometry, the orders of the first nontrivial differential invariants can differ. For example, in the space R<sup>3</sup> equipped with the Euclidean metric, the complete system of differential invariants of a curve consists of its curvature and torsion, which are second- and third-order invariants, respectively. Note that scalar differential invariants are the only type of invariants whose components do not change under changes of coordinates; for this reason, they are effectively used in solving equivalence problems. In this paper, differential invariants of a one-parameter Lie group of transformations of the space of two independent and three dependent variables are studied. A method for constructing an invariant differential operator is shown, and the obtained results are applied to finding differential invariants of surfaces.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[High-speed Dynamic Programming Algorithms in Applied Problems of a Special Kind]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9357]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>V. I. Struchenkov&nbsp; &nbsp;and D. A. Karpov&nbsp; &nbsp;</p><p>The article discusses the solution of applied problems for which the dynamic programming method, developed by R. Bellman in the middle of the last century, was previously proposed. Dynamic programming algorithms are successfully used to solve applied problems, but as the dimension of the problem increases, reducing the computation time remains relevant. This is especially important when designing systems in which dynamic programming is embedded in a computational cycle that is repeated many times. Therefore, the article analyzes various possibilities for increasing the speed of the dynamic programming algorithm. For some problems, using the Bellman optimality principle, recurrence formulas were obtained for calculating the optimal trajectory without any step-by-step analysis of the set of options for its construction. It is shown that many applied problems, when using dynamic programming, allow not only the rejection of unpromising paths leading to a specific state but also the rejection of hopeless states themselves. The article proposes a new algorithm implementing the R. Bellman principle for solving such problems and establishes the conditions for its applicability. The results of solving two-parameter problems of various dimensions presented in the article show that the exclusion of hopeless states can reduce the computation time by a factor of 10 or more.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
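The state-rejection idea summarized in the abstract above can be illustrated with a minimal sketch (hypothetical data and tolerance, not the authors' algorithm): a staged dynamic program that, at each stage, discards states whose accumulated cost lies far above the current stage optimum.

```python
# Illustrative sketch of staged dynamic programming with early
# rejection of "hopeless" states (hypothetical cost data).

def staged_dp(costs, tolerance=5.0):
    """costs[k][(i, j)] = cost of moving from state i at stage k
    to state j at stage k+1. Returns the minimal total cost."""
    # best[i] = cheapest known cost of reaching state i so far
    best = {0: 0.0}
    for stage in costs:
        nxt = {}
        for (i, j), c in stage.items():
            if i in best:
                cand = best[i] + c
                if nxt.get(j, float("inf")) > cand:
                    nxt[j] = cand
        # reject hopeless states: those far above the stage optimum
        bound = min(nxt.values()) + tolerance
        best = {j: v for j, v in nxt.items() if bound >= v}
    return min(best.values())
```

Dropping states above the bound shrinks the set carried to the next stage, which is the source of the speedup the article reports; the tolerance controls the trade-off between speed and the risk of discarding the true optimum.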
<item>
<title><![CDATA[Hermite-Hadamard Type Inequalities for Composite Log-Convex Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9356]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nik Muhammad Farhan Hakim Nik Badrul Alam&nbsp; &nbsp;Ajab Bai Akbarally&nbsp; &nbsp;and Silvestru Sever Dragomir&nbsp; &nbsp;</p><p>Hermite-Hadamard type inequalities related to convex functions are widely studied in functional analysis. Researchers have refined convex functions into quasi-convex, h-convex, log-convex, m-convex, (a,m)-convex functions and many more. Subsequently, Hermite-Hadamard type inequalities have been obtained for these refined convex functions. In this paper, we first review the Hermite-Hadamard type inequality for both convex functions and log-convex functions. Then, the definition of composite convex functions and the Hermite-Hadamard type inequalities for composite convex functions are also reviewed. Motivated by these works, we then refine these notions to obtain the definition of composite log-convex functions, namely the composite-<img src=image/13415425_01.gif><sup>-1</sup> log-convex function. Some examples related to this definition, such as GG-convexity and HG-convexity, are given. We also define k-composite log-convexity and k-composite-<img src=image/13415425_01.gif><sup>-1</sup> log-convexity. We then prove a lemma and obtain some Hermite-Hadamard type inequalities for composite log-convex functions. Two corollaries are also proved using the theorem obtained; the first by applying the exponential function and the second by applying the properties of k-composite log-convexity. An application for GG-convex functions is also given. In this application, we compare the inequalities obtained in this paper with the inequalities obtained in previous studies. The inequalities can be applied in calculating geometric means in statistics and other fields.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
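For reference, the classical Hermite-Hadamard inequality that the paper refines states that, for a convex function f on [a, b], the mean value of f is squeezed between the midpoint and endpoint values; for log-convex f, a known refinement bounds the mean value by the logarithmic mean of the endpoint values.

```latex
% Classical Hermite-Hadamard inequality for a convex f on [a, b]
f\!\left(\frac{a+b}{2}\right) \;\leq\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\leq\; \frac{f(a)+f(b)}{2}.
% Known refinement for a log-convex f (assuming f(a) \neq f(b)):
\frac{1}{b-a}\int_a^b f(x)\,dx \;\leq\; \frac{f(b)-f(a)}{\ln f(b)-\ln f(a)}.
```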
<item>
<title><![CDATA[New Possibilities of Application of Artificial Intelligence Methods for High-Precision Solution of Boundary Value Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9355]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Leonid N. Yasnitsky&nbsp; &nbsp;and Sergey L. Gladkiy&nbsp; &nbsp;</p><p>One of the main problems in modern mathematical modeling is obtaining high-precision solutions of boundary value problems. This study proposes a new approach that combines the methods of artificial intelligence with a classical analytical method. The analytical method of fictitious canonic regions is used as the basis for obtaining reliable solutions of boundary value problems. The novelty of the approach lies in the application of artificial intelligence methods, namely genetic algorithms, to select the optimal location of fictitious canonic regions, ensuring maximum accuracy. A general genetic algorithm has been developed to find the global minimum for the choice and location of fictitious canonic regions. For this genetic algorithm, several variants of the crossover and mutation functions are proposed. The approach is applied to solve two test boundary value problems: a stationary heat conduction problem and an elasticity theory problem. The results showed the effectiveness of the proposed approach. The genetic algorithm required no more than a hundred generations to achieve high-precision solutions. Moreover, the error in solving the stationary heat conduction problem was so insignificant that this solution can be considered precise. Thus, the study showed that the proposed approach, combining the analytical method of fictitious canonic regions with genetic optimization algorithms, allows complex boundary value problems to be solved with high accuracy. 
This approach can be used in the mathematical modeling of structures for critical purposes, where the accuracy and reliability of the results are the main criteria for evaluating the solution. Further development of this approach will make it possible to solve more complicated 3D problems with high accuracy, as well as problems of other types, for example thermal elasticity, which are of great importance in the design of engineering structures.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Structural Equation Modeling (SEM) Analysis with WarpPLS Approach Based on Theory of Planned Behavior (TPB)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9354]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Ni Wayan Surya Wardhani&nbsp; &nbsp;Waego Hadi Nugroho&nbsp; &nbsp;Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;and Solimun&nbsp; &nbsp;</p><p>WANT-E is a tool created to purify methane gas from organic waste, intended as a substitute for renewable gas fuel. Because the WANT-E product is new, research on public interest in WANT-E products is necessary. This study uses primary data obtained from questionnaires with variables based on the Theory of Planned Behavior (TPB), namely behavioral attitudes, subjective norms, perceived behavioral control, and behavioral interest, distributed to members of the community of Cibeber Village, Cikalong Subdistrict, Tasikmalaya Regency who use LPG gas cylinders or stoves, selected with the judgment sampling method. The analysis used is SEM with the WarpPLS approach, which determines the effects of the relationships between variables. The analysis found positive relationships between behavioral attitudes and subjective norms, behavioral attitudes and perceived behavioral control, subjective norms and behavioral interest, and perceived behavioral control and behavioral interest. Indirect effects were also obtained, with subjective norms and perceived behavioral control acting as mediators between behavioral attitudes and behavioral interest.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[The Indicatrix of the Surface in Four-Dimensional Galilean Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9235]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Artykbaev Abdullaaziz&nbsp; &nbsp;and Nurbayev Abdurashid Ravshanovich&nbsp; &nbsp;</p><p>This article discusses geometric quantities associated with the concept of a surface and the indicatrix of a surface in four-dimensional Galilean space. Here, a second-order curve in the plane is taken as the surface indicatrix. It is shown that, with the help of the motions of Galilean space, the second-order curve can be brought to canonical form. Motion in Galilean space differs radically from motion in Euclidean space. Galilean motions include parallel translation, rotation about an axis, and sliding. Sliding produces a deformation in the Euclidean sense. The surface indicatrix is deformed by a Galilean motion, and when the indicatrix is deformed, the surface is deformed. In the classification of points of a three-dimensional surface in four-dimensional Galilean space, the classification of the indicatrix of the surface at each point was used. This reveals a cyclic behavior of surface points that differs from Euclidean geometry. The geometric characteristics of surface curves were determined using the indicatrix. It is determined what geometrical meaning the identified properties have in the Euclidean sense. It is shown that a Galilean motion produces a surface deformation in the Euclidean sense; under this deformation the Gaussian curvature remains unchanged.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Characterizations of Some Special Curves in Lorentz-Minkowski Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9123]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>M. Khalifa Saad&nbsp; &nbsp;R. A. Abdel-Baky&nbsp; &nbsp;F. Alharbi&nbsp; &nbsp;and A. Aloufi&nbsp; &nbsp;</p><p>In the theory of space curves, the helix is one of the most elementary and interesting topics. Moreover, the helix attracts the attention of natural scientists as well as mathematicians because of its various applications, for example, DNA, carbon nanotubes, screws, springs and so on. There are also many applications of helix curves and helical structures in science, such as fractal geometry, and in the fields of computer-aided design and computer graphics. Helices can be used for tool path description, the simulation of kinematic motion or the design of highways, etc. The problem of determining a parametric representation of the position vector of an arbitrary space curve from its intrinsic equations is still open in the Euclidean space E<sup>3</sup> and in the Minkowski space <img src=image/13414896_01.gif>. In this paper, we introduce some characterizations of a non-null slant helix which has a spacelike or timelike axis in <img src=image/13414896_01.gif>. We use vector differential equations established by means of the Frenet equations in the Minkowski space <img src=image/13414896_01.gif>. We also investigate some differential geometric properties of these curves according to these vector differential equations. Besides, we illustrate some examples to confirm our findings.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[On the Geometry of Hamiltonian Symmetries]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9122]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Narmanov Abdigappar&nbsp; &nbsp;and Parmonov Hamid&nbsp; &nbsp;</p><p>The problem of integrating the equations of mechanics is among the most important tasks of mathematics and mechanics. Before Poincare's book "Curves Defined by Differential Equations", integration tasks were considered analytical problems of finding formulas for solutions of the equation of motion. After the appearance of this book, it became clear that integration problems are related to the behavior of the trajectories as a whole. This, of course, stimulated methods of the qualitative theory of differential equations. At present, the main method for this problem has become the symmetry method. Newton used the ideas of symmetry for the problem of central motion. Further, Lagrange revealed that the classical integrals of the problem of gravitating bodies are associated with the invariance of the equations of motion with respect to the Galileo group. Emmy Noether showed that each integral of the equation of motion corresponds to a group of transformations preserving the action. The phase flow of the Hamiltonian system of equations, in which the first integral serves as the Hamiltonian, translates solutions of the original equations into solutions. The Liouville theorem on the integrability of Hamilton equations is based on this idea. The Liouville theorem states that phase flows of involutive integrals generate an Abelian group of symmetries. Hamiltonian methods have become increasingly important in the study of the equations of continuum mechanics, including fluids, plasmas and elastic media. In this paper, we consider the Hamiltonian system which describes the motion of a particle attracted to a fixed point with a force varying as the inverse cube of the distance from the point. 
We are concerned with just one aspect of this problem, namely questions of symmetry groups and Hamiltonian symmetries. The Hamiltonian symmetries of this system are found, and it is proven that the Hamiltonian symmetry group of the considered problem contains a two-dimensional Abelian Lie group. We also construct the singular foliation generated by the infinitesimal symmetries, which is invariant under the phase flow of the system. In the present paper, smoothness is understood as smoothness of class C<sup>∞</sup>.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Lightlike Hypersurfaces of an Indefinite Kaehler Manifold with an (<img src=image/13415329_01.gif>)-type Connection]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9121]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Jae Won Lee&nbsp; &nbsp;Dae Ho Jin&nbsp; &nbsp;and Chul Woo Lee&nbsp; &nbsp;</p><p>Jin [1] defined an (<img src=image/13415329_01.gif>)-type connection on semi-Riemannian manifolds. The semi-symmetric non-metric connection and the non-metric ∅-symmetric connection are two important examples of this connection, with (<img src=image/13415329_01.gif>) = (1, 0) and (<img src=image/13415329_01.gif>) = (0, 1), respectively. In semi-Riemannian geometry, there is little literature on lightlike geometry, so we develop new theories beyond those for non-degenerate submanifolds in semi-Riemannian geometry. The goal of this paper is to study a characterization of a (Lie) recurrent lightlike hypersurface M of an indefinite Kaehler manifold with an (<img src=image/13415329_01.gif>)-type connection when the characteristic vector field is tangent to M. In the special case that the indefinite Kaehler manifold of constant holomorphic sectional curvature is an indefinite complex space form, we investigate a lightlike hypersurface of an indefinite complex space form with an (<img src=image/13415329_01.gif>)-type connection when the characteristic vector field is tangent to M. Moreover, we show that the total space, the complex space form, is characterized by the screen conformal lightlike hypersurface with an (<img src=image/13415329_01.gif>)-type connection. With a semi-symmetric non-metric connection, we show that an indefinite complex space form is flat.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Adomian Decomposition Method with Modified Bernstein Polynomials for Solving Nonlinear Fredholm and Volterra Integral Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9120]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Mohammad Almousa&nbsp; &nbsp;</p><p>Many different problems in mathematics, physics, and engineering can be expressed in the form of integral equations. Among these are diffraction problems, population growth, heat transfer, particle transport problems, electrical engineering, elasticity, control, elastic waves, diffusion problems, quantum mechanics, heat radiation, electrostatics and contact problems. Therefore, the solutions obtained by mathematical methods play an important role in these fields. The two most basic types of integral equations are the Fredholm (FIEs) and Volterra (VIEs) equations. In many instances, ordinary and partial differential equations can be converted into Fredholm and Volterra integral equations that are solved more effectively. We aim through this research to present an improved Adomian decomposition method based on modified Bernstein polynomials (ADM-MBP) to solve nonlinear integral equations of the second kind. The formulation is developed to solve nonlinear Fredholm and Volterra integral equations of the second kind. The method is tested on several examples of nonlinear integral equations, and Maple software was used to obtain the solutions. The results demonstrate the reliability of the proposed method. Generally, the proposed method is convenient for finding solutions of Fredholm and Volterra integral equations of the second kind.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[MTSD-TCC: A Robust Alternative to Tukey's Control Chart (TCC) Based on the Modified Trimmed Standard Deviation (MTSD)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9119]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Moustafa Omar Ahmed Abu-Shawiesh&nbsp; &nbsp;Muhammad Riaz&nbsp; &nbsp;and Qurat-Ul-Ain Khaliq&nbsp; &nbsp;</p><p>In this study, a robust control chart based on the modified trimmed standard deviation (MTSD), namely MTSD-TCC, is proposed as an alternative to Tukey's control chart (TCC). The performance of the proposed chart and the competing Tukey's control chart (TCC) is measured using run length properties such as the average run length (ARL), standard deviation of run length (SDRL), and median run length (MDRL). The study covers both normal and contaminated cases. We observe that the proposed robust control chart (MTSD-TCC) is quite efficient at detecting process shifts. It is also evident from the simulation results that the proposed chart offers superior detection ability at different trimming levels compared to Tukey's control chart (TCC) under contaminated process setups. As a result, the proposed robust control chart (MTSD-TCC) is recommended for process monitoring. A numerical example using real-life data is provided to illustrate the implementation of the proposed chart, which also supports the results of the simulation study to some extent.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
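As a reference point for the comparison above, the classical TCC limits can be sketched as follows (a minimal illustration with k = 1.5 as a common choice; the paper's MTSD-based spread estimator is not given in the abstract and is not reproduced here):

```python
# Minimal sketch of Tukey's control chart (TCC) limits built from
# the quartiles and the interquartile range (IQR).
import statistics

def tcc_limits(data, k=1.5):
    """Lower/upper control limits: Q1 - k*IQR and Q3 + k*IQR."""
    q = statistics.quantiles(data, n=4)   # [Q1, Q2, Q3]
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def out_of_control(data, k=1.5):
    """Observations signalling a potential process shift."""
    lcl, ucl = tcc_limits(data, k)
    return [x for x in data if x > ucl or lcl > x]
```

The MTSD-TCC of the paper keeps this charting scheme but replaces the IQR-based spread with a modified trimmed standard deviation, which is what makes it robust under contamination.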
<item>
<title><![CDATA[ϵ-Compatible Map and New Approach for Common Fixed Point Theorems in Partial Metric Space Endowed with Graph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9118]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Anuradha&nbsp; &nbsp;and Seema Mehra&nbsp; &nbsp;</p><p>In 2016, Muralisankar and Jeyabal introduced the concept of ε-compatible maps and studied their sets of common fixed points. They generalized the Banach, Kannan, Reich and Bianchini type contractions to obtain some common fixed point theorems for ε-compatible mappings which do not involve the suitable containment of the ranges of the given mappings, in the setting of metric spaces. Motivated by this new concept of mappings, we establish a new approach to some common fixed point theorems via ϵ-compatible maps in the context of a complete partial metric space endowed with a directed graph G=(V,E). Building on the remarkable work of Jachymski in 2008, we extend the results obtained by Muralisankar and Jeyabal in 2016. In 2008, Jachymski obtained some important fixed point results, introduced by Ran and Reurings (2004), using the language of graph theory instead of partial orders, and gave an interesting approach in this direction. Since then, his work has been considered a reference in this domain. Sometimes there are mappings which do not satisfy the contractive condition on the whole set M (say), but which can be made contractive on some subset of M, and this can be done by including a graph, as shown in our Example 2.6, which is provided to substantiate the validity of our results.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Application of Parameterized Hesitant Fuzzy Soft Set Theory in Decision Making]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9117]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Zahari Md Rodzi&nbsp; &nbsp;and Abd Ghafur Ahmad&nbsp; &nbsp;</p><p>In this paper, by combining hesitant fuzzy soft sets (HFSSs) with fuzzy parameterization, we introduce a new hybrid model, fuzzy parameterized hesitant fuzzy soft sets (FPHFSSs). The benefit of this theory is that the degrees of importance of parameters are provided to HFSSs directly by decision makers. In addition, all the information is represented in a single set in the decision making process. We then study its basic operations, such as AND, OR, complement, union and intersection. The basic properties of FPHFSSs, such as associativity, distributivity and De Morgan's laws, are proven. Next, in order to resolve multi-criteria decision making (MCDM) problems, we present an arithmetic mean score and a geometric mean score incorporating the hesitant degree of FPHFSSs in TOPSIS. This algorithm avoids a drawback of some existing approaches, which either add elements to a shorter hesitant fuzzy element to render it equivalent to another hesitant fuzzy element, or duplicate its elements to obtain two sequences of the same length; such approaches break the original data structure and modify the data. Finally, to demonstrate the efficacy and viability of our process, we compare our algorithm with existing methods.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[The Consistency of Blindfolding in the Path Analysis Model with Various Number of Resampling]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9116]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Solimun&nbsp; &nbsp;and Adji Achmad Rinaldo Fernandes&nbsp; &nbsp;</p><p>Regression analysis cannot deal with complex relationships involving several response variables and the presence of intervening endogenous variables. An analysis that can handle these problems is path analysis. Path analysis involves several assumptions, one of which is the assumption of residual normality. If the residual normality assumption is not met, parameter estimation can produce biased, high-variance, and inconsistent estimators. The problem of unmet residual normality can be overcome by using resampling. Therefore, in this study, a simulation study was conducted applying resampling with the blindfolding method, under conditions where the normality assumption is not met, with various resampling sizes in path analysis. Based on the simulation results, different levels of closeness of relationship become consistent at different resampling sizes: at a low level of closeness, consistency is reached at a resampling size of 1000; at a moderate level, at a resampling size of 500; and at a high level of closeness, at a resampling size of 1400.</p>]]></description>
<pubDate>May 2020</pubDate>
</item>
<item>
<title><![CDATA[Hybrid Flow-Shop Scheduling (HFS) Problem Solving with Migrating Birds Optimization (MBO) Algorithm]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8943]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Yona Eka Pratiwi&nbsp; &nbsp;Kusbudiono&nbsp; &nbsp;Abduh Riski&nbsp; &nbsp;and Alfian Futuhul Hadi&nbsp; &nbsp;</p><p>Increasingly rapid industrial development has resulted in increasingly intense competition between industries. Companies are required to maximize performance in various fields, especially by meeting customer demand with the agreed timeliness. Scheduling is the allocation of resources over time to produce a collection of jobs. PT. Bella Agung Citra Mandiri is a manufacturing company engaged in making spring beds. The work stations in the company consist of five stages: spring-frame assembly with three machines, spring clamping with one machine, mattress firing with two machines, mattress sewing with three machines, and packing with one machine. The problem solved in this study is Hybrid Flowshop Scheduling, and the optimization method used is the metaheuristic Migrating Birds Optimization. To address the problems faced by the company, scheduling is needed that minimizes the makespan while taking the number of parallel machines into account. The results of this study are schedules for 16 jobs and for 46 jobs. The reduction in makespan for 16 jobs saves 26 minutes 39 seconds, while for 46 jobs it saves 3 hours 31 minutes 39 seconds.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Fourth-order Compact Iterative Scheme for the Two-dimensional Time Fractional Sub-diffusion Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8942]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Muhammad Asim Khan&nbsp; &nbsp;and Norhashidah Hj. Mohd Ali&nbsp; &nbsp;</p><p>The fractional diffusion equation is an important mathematical model for describing phenomena of anomalous diffusion in transport processes. A high-order compact iterative scheme is formulated for solving the two-dimensional time fractional sub-diffusion equation. The spatial derivative is approximated using the Crank-Nicolson scheme with a fourth-order compact approximation, and the Caputo derivative is used for the time fractional derivative to obtain a discrete implicit scheme. The order of convergence of the proposed method is shown to be <img src=image/13491066_01.gif>. Numerical examples are provided to verify the high-order accuracy of the solutions of the proposed scheme.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Parameter Estimations of the Generalized Extreme Value Distributions for Small Sample Size]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8941]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Razira Aniza Roslan&nbsp; &nbsp;Chin Su Na&nbsp; &nbsp;and Darmesah Gabda&nbsp; &nbsp;</p><p>The standard maximum likelihood method performs poorly in GEV parameter estimation for small samples. This study aims to explore Generalized Extreme Value (GEV) parameter estimation using several methods, focusing on small sample sizes of an extreme event. We conducted a simulation study to illustrate the performance of different methods, namely maximum likelihood estimation (MLE), the probability weighted moment method (PWM) and the penalized maximum likelihood method (PMLE), in estimating the GEV parameters. Based on the simulation results, we then applied the superior method in modelling the annual maximum stream flow in Sabah. The results of the simulation study show that the PMLE gives better estimates than MLE and PWM, as it has smaller bias and root mean square error (RMSE). As an application, we can then compute the estimate of the return level of river flow in Sabah.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
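A plain maximum likelihood GEV fit and a return-level estimate of the kind mentioned above can be sketched with SciPy (an illustrative sketch with simulated data; the paper's preferred PWM and penalized likelihood estimators are not provided by SciPy and are not reproduced here):

```python
# Hedged sketch: MLE fit of a GEV distribution and a 100-year
# return level, using simulated annual maxima.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# simulated annual maxima (SciPy's shape c is minus the usual xi)
annual_maxima = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0,
                               size=50, random_state=rng)

# maximum likelihood estimates of (shape, location, scale)
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# T-year return level = quantile with exceedance probability 1/T
rl_100 = genextreme.isf(1.0 / 100.0, c_hat, loc=loc_hat, scale=scale_hat)
```

With small samples such an MLE fit can be unstable, which is exactly the situation where the paper finds penalized likelihood preferable.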
<item>
<title><![CDATA[An Alternative Approach for Finding Newton's Direction in Solving Large-Scale Unconstrained Optimization for Problems with an Arrowhead Hessian Matrix]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8940]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Khadizah Ghazali&nbsp; &nbsp;Jumat Sulaiman&nbsp; &nbsp;Yosza Dasril&nbsp; &nbsp;and Darmesah Gabda&nbsp; &nbsp;</p><p>In this paper, we propose an alternative way to find the Newton direction in solving large-scale unconstrained optimization problems in which the Hessian is an arrowhead matrix. The alternative approach is a two-point Explicit Group Gauss-Seidel (2EGGS) block iterative method. To check the validity of our proposed Newton direction, we combined the Newton method with 2EGGS iteration for solving unconstrained optimization problems and compared it with combinations of the Newton method with Gauss-Seidel (GS) point iteration and with Jacobi point iteration. The numerical experiments are carried out using three different artificial test problems whose Hessians have the form of an arrowhead matrix. In conclusion, the numerical results show that our proposed method is superior to the reference methods in terms of the number of inner iterations and the execution time.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Robust Method in Multiple Linear Regression Model on Diabetes Patients]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8939]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Mohd Saifullah Rusiman&nbsp; &nbsp;Siti Nasuha Md Nor&nbsp; &nbsp;Suparman&nbsp; &nbsp;and Siti Noor Asyikin Mohd Razali&nbsp; &nbsp;</p><p>This paper focuses on the application of robust methods in a multiple linear regression (MLR) model of diabetes data. The objectives of this study are to identify the significant variables that affect diabetes using the MLR model with and without robust methods, and to measure the performance of the MLR model with and without robust methods. Robust methods are used to overcome the outlier problem in the data. Three robust estimators are used in this study: the least quartile difference (LQD), median absolute deviation (MAD) and least-trimmed squares (LTS) estimators. The results show that multiple linear regression with the LTS estimator is the best model, since it has the lowest mean square error (MSE) and mean absolute error (MAE). In conclusion, plasma glucose concentration in an oral glucose tolerance test is positively affected by body mass index, diastolic blood pressure, triceps skin fold thickness, diabetes pedigree function, age and diabetes status according to WHO criteria, while it is negatively affected by the number of pregnancies. This finding can be used as a guideline for medical doctors in the early prevention of type 2 diabetes.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Markov Chain: First Step towards Heat Wave Analysis in Malaysia]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8938]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Nur Hanim Mohd Salleh&nbsp; &nbsp;Husna Hasan&nbsp; &nbsp;and Fariza Yunus&nbsp; &nbsp;</p><p>Studies of extreme temperature have been carried out around the world to raise awareness and give societies the opportunity to prepare necessary arrangements. In this paper, a first-order Markov chain model is applied to estimate the probability of extreme temperature based on the heat wave scales provided by the Malaysian Meteorological Department. Daily maximum temperature data over the 24-year period 1994-2017 for 17 meteorological stations in Malaysia were assigned to the four heat wave scales: monitoring, alert level, heat wave and emergency. The analysis indicated that most of the stations had three categories of heat wave scales. Only the Chuping station had four categories, while the Bayan Lepas, Kuala Terengganu, Kota Bharu and Kota Kinabalu stations had two. The limiting probabilities obtained at each station showed a similar trend, in which the highest proportion of daily maximum temperatures occurred in the monitoring scale, followed by the alert level. This trend is apparent in the daily maximum temperature data, which reveal that Malaysia experiences two consecutive days of temperatures below 35&#8451;.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Weakly Special Classes of Modules]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8937]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Puguh Wahyu Prasetyo&nbsp; &nbsp;Indah Emilia Wijayanti&nbsp; &nbsp;Halina France-Jackson&nbsp; &nbsp;and Joe Repka&nbsp; &nbsp;</p><p>In the development of the radical theory of rings, there are two kinds of radical constructions: the lower radical construction and the upper radical construction. In fact, the class π of all prime rings forms a special class, and the upper radical class <img src=image/13491060_01.gif> of <img src=image/13491060_02.gif> forms a radical class called the prime radical. An upper radical class generated by a special class of rings is called a special radical class. On the other hand, we also have the class <img src=image/13491060_03.gif> of all semiprime rings, which is a weakly special class of rings. Moreover, a special class of modules can be constructed from a given special class of rings. This motivates the question of how to construct a weakly special class of modules from a given weakly special class of rings. This research is qualitative. The results are derived from fundamental axioms and properties of radical classes of rings, especially special and weakly special radical classes. In this paper, we introduce the notion of a weakly special class of modules, a generalization of the notion of a special class of modules based on the definition of semiprime modules. Furthermore, some properties and examples of weakly special classes of modules are given. The main results of this work are the definition of a weakly special class of modules and its properties.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Bayesian Estimation in Piecewise Constant Model with Gamma Noise by Using Reversible Jump MCMC]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8936]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Suparman&nbsp; &nbsp;</p><p>A piecewise constant model is often applied to model data in many fields. Several types of noise can be added to the piecewise constant model. This paper proposes a piecewise constant model with gamma multiplicative noise and a method to estimate the parameters of the model. The estimation is done in a Bayesian framework. A prior distribution for the model parameters is chosen and multiplied by the likelihood function of the data to build a posterior distribution for the parameters. Because the number of model pieces is itself a parameter, the form of the posterior distribution is too complex, and the Bayes estimator cannot be calculated easily. A reversible jump Markov chain Monte Carlo (MCMC) method is used to find the Bayes estimator of the model parameters. The result of this paper is the development of the piecewise constant model and the method to estimate its parameters. An advantage of this method is that it can estimate the piecewise constant model parameters simultaneously.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Approximate Analytical Solutions of Nonlinear Korteweg-de Vries Equations Using Multistep Modified Reduced Differential Transform Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8935]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Che Haziqah Che Hussin&nbsp; &nbsp;Ahmad Izani Md Ismail&nbsp; &nbsp;Adem Kilicman&nbsp; &nbsp;and Amirah Azmi&nbsp; &nbsp;</p><p>This paper proposes and investigates the application of the Multistep Modified Reduced Differential Transform Method (MMRDTM) for solving the nonlinear Korteweg-de Vries (KdV) equation. The proposed technique has the advantage of producing an analytical approximation as a fast-converging sequence with a reduced number of calculated terms. The MMRDTM is a modification of the reduced differential transform method (RDTM) in which the nonlinear term is replaced by related Adomian polynomials and a multistep approach is then adopted. Consequently, the obtained approximations not only involve a smaller number of calculated terms for the nonlinear KdV equation, but also converge rapidly over a broad time frame. We provide three examples to illustrate the advantages of the proposed method in obtaining approximate solutions of the KdV equation. To depict the solutions and show the validity and precision of the MMRDTM, graphical results are included.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[The Performance of Different Correlation Coefficient under Contaminated Bivariate Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8934]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2A&nbsp;&nbsp;<p>Bahtiar Jamili Zaini&nbsp; &nbsp;and Shamshuritawati Sharif&nbsp; &nbsp;</p><p>Bivariate data consist of 2 random variables obtained from the same population. The relationship between the 2 variables can be measured by a correlation coefficient. A correlation coefficient computed from sample data measures the strength and direction of a linear relationship between 2 variables. However, classical correlation coefficients are inadequate in the presence of outliers. Therefore, this study focuses on the performance of different correlation coefficients under contaminated bivariate data to determine the strength of their relationships. We compared the performance of 5 types of correlation: the classical Pearson, Spearman and Kendall’s Tau correlations, and the robust median correlation and median absolute deviation correlation. Results show that when there is no contamination in the data, all 5 correlation methods show a strong relationship between the 2 random variables. However, under data contamination, the median absolute deviation correlation denotes a strong relationship compared to the other methods.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Stochastic Decomposition Result of an Unreliable Queue with Two Types of Services]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8930]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Gautam Choudhury&nbsp; &nbsp;Akhil Goswami&nbsp; &nbsp;Anjana Begum&nbsp; &nbsp;and Hemanta Kumar Sarmah&nbsp; &nbsp;</p><p>The single server queue with two types of heterogeneous services and generalized vacation for an unreliable server has been extended to include several types of generalizations, to which attention has been paid by several researchers. One of the most important results for such models is the “Stochastic Decomposition Result”, which allows the system behaviour to be analyzed by considering separately the distribution of the system (queue) size with no vacation and the additional system (queue) size due to vacation. Our intention is to present a unified approach to establishing the stochastic decomposition result for two types of general heterogeneous service queues with generalized vacations for an unreliable server with delayed repair, covering several types of generalizations. Our results are based on the embedded Markov chain technique, a powerful and popular method widely used in applied probability, especially in queueing theory. The fundamental idea behind this method is to simplify the description of the state from a two-dimensional to a one-dimensional state space. Finally, the results derived are shown to include several generalizations of existing well-known results for vacation models, which may lead to remarkable simplification when solving similar types of complex models.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Approximations for Theories of Abelian Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8929]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Inessa I. Pavlyuk&nbsp; &nbsp;and Sergey V. Sudoplatov&nbsp; &nbsp;</p><p>Approximations of syntactic and semantic objects play an important role in various fields of mathematics. They can create theories and structures in one given class by means of others, usually simpler ones. For instance, in certain situations, infinite objects can be approximated by finite or strongly minimal ones. Thus, complicated objects can be assembled from simplified ones. Among these objects, Abelian groups, their first order theories, connections and dynamics are of interest. Theories of Abelian groups are characterized by Szmielew invariants, leading to the study and description of approximations in terms of these invariants. In the paper we apply a general approach for approximating theories to the class of theories of Abelian groups, which characterizes the approximability of a theory of Abelian groups by a given family of theories of Abelian groups in terms of Szmielew invariants and their limits. We describe some forms of approximations for theories of Abelian groups. In particular, approximations of theories of Abelian groups by theories of finite ones are characterized. In addition, we describe approximations by quasi-cyclic and torsion-free Abelian groups and their combinations with respect to given families of prime numbers. Approximations and closures of families of theories with respect to standard Abelian groups for various sets of prime numbers are also described.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Groundwater-quality Assessment Models with Total Nitrogen Transformation Effects]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8928]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Supawan Yena&nbsp; &nbsp;and Nopparat Pochai&nbsp; &nbsp;</p><p>Nitrogen is emitted extensively by industrial companies, increasing nitrogen compounds such as ammonia, nitrate and nitrite in soil and water as a result of nitrogen cycle reactions. Groundwater contamination with nitrates and nitrites impacts human health. Mathematical models can describe groundwater contamination with nitrates and nitrites. A hydraulic head model provides the hydraulic head of the groundwater. A groundwater velocity model provides the x- and y-direction velocity components. A groundwater contamination distribution model provides the nitrogen, nitrate and nitrite concentrations. Finite difference techniques are used to approximate the model solutions. An alternating direction explicit method was used to solve the hydraulic head model, a centered-space scheme was used for the groundwater velocity model, and a forward-time central-space scheme was used to predict the groundwater contaminant transport models. We simulate different circumstances to explain the pollution of leachate water underground, paying attention to toxic nitrogen, ammonia, nitrate and nitrite blended in the water.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Improved Frequency Table with Application to Environmental Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8927]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mohammed M. B.&nbsp; &nbsp;Adam M. B.&nbsp; &nbsp;Zulkafli H. S.&nbsp; &nbsp;and Ali N.&nbsp; &nbsp;</p><p>This paper proposes three different statistics to represent the magnitude of the observations in each class when estimating statistical measures from a frequency table for continuous data. Existing frequency tables use the midpoint as the magnitude of the observations in each class, which results in an error called grouping error. Using the midpoint rests on the assumption that the observations in each class are uniformly distributed and concentrated around the midpoint, which is not always valid. In this research, constructions of the frequency table using the three proposed statistics, the arithmetic mean, the median and the midrange, and the existing midpoint are respectively named Method 1, Method 2, Method 3 and the Existing method. The four methods are compared using the root-mean-squared error (RMSE) in simulation studies with three distributions: normal, uniform and exponential. The simulation results are validated using real data, the Glasgow weather data. The findings indicate that using the arithmetic mean to represent the magnitude of the observations in each class of the frequency table leads to minimal error relative to the other statistics. It is followed by the median for data simulated from the normal and exponential distributions, and by the midrange for data simulated from the uniform distribution. Meanwhile, in choosing the appropriate number of classes for constructing a frequency table, among the seven different rules considered, the Freedman and Diaconis rule is recommended.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Solvability, Completeness and Computational Analysis of A Perturbed Control Problem with Delays]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8926]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ludwik Byszewski&nbsp; &nbsp;Denis Blackmore&nbsp; &nbsp;Alexander A. Balinsky&nbsp; &nbsp;Anatolij K. Prykarpatski&nbsp; &nbsp;and Mirosław Luśtyk&nbsp; &nbsp;</p><p>As a first step, we provide a precise mathematical framework for the class of control problems with delays (which we refer to as the control problem) under investigation in a Banach space setting, followed by careful definitions of the key properties to be analyzed, such as solvability and complete controllability. Then, we recast the control problem in a reduced form that is especially amenable to the innovative analytical approach that we employ. We then study in depth the solvability and completeness of the (reduced) nonlinearly perturbed linear control problem with delay parameters. The main tool in our approach is the use of a Borsuk–Ulam type fixed point theorem to analyze the topological structure of a suitably reduced control problem solution, with a focus on estimating the dimension of the corresponding solution set and proving its completeness. Next, we investigate its analytical solvability under some special, mildly restrictive conditions imposed on the linear control and nonlinear functional perturbation. Then, we describe a novel computational projection-based discretization scheme of our own devising for obtaining accurate approximate solutions of the control problem, along with useful error estimates. The scheme effectively reduces the infinite-dimensional problem to a sequence of solvable finite-dimensional matrix-valued tasks. Finally, we include an application of the scheme to a special degenerate case of the problem wherein the Banach–Steinhaus theorem is brought to bear in the estimation process.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[The Way of Pooling p-values]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8925]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Fausto Galetto&nbsp; &nbsp;</p><p>Pooling p-values arises in both practical (science and engineering applications) and theoretical (statistical) settings. The p-value (sometimes p value) is a probability used as a statistical decision quantity: in practical applications, it is used to decide whether an experimenter should believe that the collected data confirm or disconfirm a hypothesis about the “reality” of a phenomenon. It is a real number, a realization of a uniformly distributed random variable, related to the data provided by the measurement of a phenomenon. Almost all statistical software provides p-values when statistical hypotheses are considered, e.g. in analysis of variance and regression methods. Combining the p-values from various samples is crucial, because the number of degrees of freedom (df) of the samples we want to combine influences our decision: forgetting this can have dangerous consequences. One way of pooling p-values is provided by a formula of Fisher; unfortunately, this method does not consider the number of degrees of freedom. We show other ways of doing so and prove that theory is more important than any formula that does not consider the phenomenon on which we have to decide: the distribution of the random variables is fundamental in order to pool data from various samples. Managers, professors and scholars should remember Deming’s profound knowledge and Juran’s ideas; profound knowledge means “understanding variation (the type of variation)” in any process, production or managerial; not understanding variation causes costs of poor quality (more than 80% of sales value) and does not permit real improvement.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Analysis of the Element's Arrangement Structures in Discrete Numerical Sequences]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8924]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Anton Epifanov&nbsp; &nbsp;</p><p>This paper contains the results of an analysis of the laws of functioning of discrete dynamical systems, whose mathematical models, via the apparatus of geometric images of automata, are numerical sequences interpreted as sequences of the second coordinates of points of geometric images of automata. The geometric images of the laws of functioning of an automaton are reduced to numerical sequences and numerical graphs. The problem of constructing an estimate of the complexity of the structures of such sequences is considered. To analyze the structure of sequences, recurrence forms are used that characterize the relative positions of elements in the sequence. The parameters of recurrence forms are considered, which characterize the lengths of the initial segments of sequences determined by recurrence forms of fixed orders, the number of changes of recurrence forms required to determine the entire sequence, the places of change of recurrence forms, etc. All these parameters are systematized into a special spectrum of dynamic parameters used for the recurrent determination of sequences, which serves as a means of constructing estimates of the complexity of sequences. The paper also analyzes recurrent sequences (for example, the Fibonacci numbers), using characteristic sequences to analyze their properties. The properties of sequences defining approximations of fundamental mathematical constants (the number e, the number π, the golden ratio, the Euler constant, the Catalan constant, values of the Riemann zeta function, etc.) are studied. Complexity estimates are constructed for characteristic sequences that distinguish numbers with specific properties in the natural series, as well as for characteristic sequences that reflect combinations of the properties of numbers.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Orthogonal Splines in Approximation of Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8923]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Leontiev V. L.&nbsp; &nbsp;</p><p>The problem of approximating a surface given by the values of a function of two arguments at a finite number of points of a certain region, in the classical formulation, is reduced to solving a system of algebraic equations with dense or banded matrices. In the case of complex surfaces, such a problem requires a significant number of arithmetic operations and significant computer time. A curvilinear boundary of a domain of general type does not allow using classical orthogonal polynomials or trigonometric functions to solve this problem. This paper is devoted to the application of orthogonal splines to the construction of approximations of functions in the form of finite Fourier series. Orthogonal functions with compact supports make it possible to construct such approximations in regions with arbitrary boundary geometry in multidimensional cases. A comparison of the fields of application of classical orthogonal polynomials, trigonometric functions and orthogonal splines in approximation problems is carried out. The advantages of orthogonal splines in multidimensional problems are shown. The function approximation problem is formulated in variational form, a system of equations for the coefficients of the linear approximation with a diagonal matrix is formed, and expressions for the Fourier coefficients and approximations in the form of a finite Fourier series are written. Examples of approximations are considered. The efficiency of orthogonal splines is shown. The development of this direction associated with the use of other orthogonal splines is discussed.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Numerical Simulation of a Two-Dimensional Vertically Averaged Groundwater Quality Assessment in Homogeneous Aquifer Using Explicit Finite Difference Techniques]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8817]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Supawan Yena&nbsp; &nbsp;and Nopparat Pochai&nbsp; &nbsp;</p><p>Leachate contamination in a landfill causes pollution that flows down to the groundwater. There are many methods to measure groundwater quality, and mathematical models are often used to describe groundwater flow. This research focuses on the effect of landfill construction on groundwater quality around a rural area. Three mathematical models are combined. The first is a two-dimensional groundwater flow model, which provides the hydraulic head of the groundwater. The second is a velocity potential model, which provides the groundwater flow velocity. The third is a two-dimensional vertically averaged groundwater pollution dispersion model, which provides the groundwater pollutant concentration. A forward-time technique with centered-in-space, forward-in-space and backward-in-space differences at the boundaries is used to approximate the hydraulic head and the flow velocities in the x- and y-directions. The approximated groundwater flow velocity is used as input to the two-dimensional vertically averaged groundwater pollution dispersion model. The same forward-time centered-space technique, with forward-in-space and backward-in-space differences at the boundaries, is used to approximate the groundwater pollutant concentration. The proposed explicit forward-time centered-space finite difference techniques for the groundwater flow model, the velocity potential model and the groundwater pollution dispersion model give approximate solutions in good agreement.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Probability Aspects of Entrance Exams at University]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8816]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Jindrich Klufa&nbsp; &nbsp;</p><p>The entrance examination tests at the Faculty of International Relations of the University of Economics in Prague were shortened from 50 questions to 40 questions for time reasons. These are multiple choice question tests, which are suitable for entrance examinations at the University of Economics in Prague: the tests are objective and the results can be evaluated quite easily and quickly for a large number of students. On the other hand, a student can obtain a certain number of points in the test purely by guessing the right answers. The shortening of the tests from 50 questions to 40 questions has a negative influence on the probability distribution of the number of points in the tests (under the assumption of random choice of answers). Therefore, this paper suggests a solution to this problem. A comparison of three ways of accepting applicants to study at the Faculty of International Relations at the University of Economics from the probability point of view is performed in the present paper. The results show that there is a significant improvement in the probability distribution of the number of points in the tests. The obtained conclusions can be used for the admission process at the Faculty of International Relations in coming years.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Sufficient conditions for univalence obtained by using Briot-Bouquet differential subordination]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8761]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Georgia Irina Oros&nbsp; &nbsp;and Alina Alb Lupas&nbsp; &nbsp;</p><p>In this paper, we define the differential-integral operator I<sup>m</sup> : <img src=image/13414425_01.gif>, where S<sup>m</sup> is the Sălăgean differential operator and L<sup>m</sup> is the Libera integral operator. Using the operator I<sup>m</sup>, the class of univalent functions denoted by <img src=image/13414425_02.gif> is defined and several differential subordinations are studied. Even if the use of linear operators and the introduction of new classes of functions in which subordinations are studied is a well-known process, the results are new and could be of interest to young researchers because of the new approach derived from mixing a differential operator and an integral one. Using this differential-integral operator, we obtain new sufficient conditions for the functions from certain classes to be univalent. For the newly introduced class of functions, we show that it is a class of convex functions and we prove some inclusion relations depending on the parameters of the class. We also show that this class has as a subclass the class of functions with bounded rotation, studied earlier by many authors cited in the paper. Using the method of subordination chains, some differential subordinations in their special Briot-Bouquet form are obtained for the differential-integral operator introduced in the paper. The best dominant of the Briot-Bouquet differential subordination is also given. As a consequence, sufficient conditions for univalence are stated in two criteria. An example is also illustrated, showing how the operator is used in obtaining Briot-Bouquet differential subordinations and the best dominant.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Geometric Topics on Elementary Amenable Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8760]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mostafa Ftouhi&nbsp; &nbsp;Mohammed Barmaki&nbsp; &nbsp;and Driss Gretete&nbsp; &nbsp;</p><p>The class of amenable groups plays an important role in many areas of mathematics such as ergodic theory, harmonic analysis, representation theory, dynamical systems, geometric group theory, probability theory and statistics. The class of amenable groups contains in particular all finite groups, all abelian groups and, more generally, all solvable groups. It is closed under the operations of taking subgroups, taking quotients, taking extensions, and taking inductive limits. In 1959, Harry Kesten proved that there is a relation between amenability and estimates of symmetric random walks on finitely generated groups. In this article we study the classification of compactly generated locally compact groups according to the return probability to the origin. We introduce several classes of groups in order to characterize the geometry of compactly generated locally compact groups, and our aim is to compare these classes in order to better understand the geometry of such groups by referring to the behavior of random walks on them. The central tool in this comparison is the return probability on locally compact groups. As results, we have found inclusion relationships between the defined classes and we have given counterexamples for the reciprocal inclusions.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Semi Bounded Solution of Hypersingular Integral Equations of the First Kind on the Rectangle]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8759]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Zainidin Eshkuvatov&nbsp; &nbsp;Massamdi Kommuji&nbsp; &nbsp;Rakhmatullo Aloev&nbsp; &nbsp;Nik Mohd Asri Nik Long&nbsp; &nbsp;and Mirzoali Khudoyberganov&nbsp; &nbsp;</p><p>A hypersingular integral equation (HSIE) of the first kind on the interval [-1, 1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain, is considered. Truncated series of Chebyshev polynomials of the third and fourth kinds are used to find semi-bounded (unbounded on the left and bounded on the right, and vice versa) solutions of HSIEs of the first kind. Exact calculations of singular and hypersingular integrals with respect to Chebyshev polynomials of the third and fourth kinds with corresponding weights allow us to obtain a highly accurate approximate solution. The Gauss-Chebyshev quadrature formula is extended to regular kernel integrals. Three examples are provided to verify the validity and accuracy of the proposed method. Numerical examples reveal that the approximate solutions are exact if the solution of the HSIE is of polynomial form with corresponding weights.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[Comparison Analysis: Large Data Classification Using PLS-DA and Decision Trees]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8758]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Nurazlina Abdul Rashid&nbsp; &nbsp;Norashikin Nasaruddin&nbsp; &nbsp;Kartini Kassim&nbsp; &nbsp;and Amirah Hazwani Abdul Rahim&nbsp; &nbsp;</p><p>Classification studies are widely applied in many areas of research. In our study, we use classification analysis to explore approaches for tackling the classification problem for a large number of measures using partial least squares discriminant analysis (PLS-DA) and decision trees (DT). The performance of both methods was compared using a sample of breast tissue data from the University of Wisconsin Hospital. PLS-DA and DT predict the diagnosis of breast tissues (M = malignant, B = benign). A total of 699 patient diagnoses (458 benign and 241 malignant) are used in this study. The performance of PLS-DA and DT was evaluated based on the misclassification error and accuracy rate. The results show that PLS-DA can be considered a good and reliable technique when dealing with a large dataset for the classification task, and it has good prediction accuracy.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[On Degenerations and Invariants of Low-Dimensional Complex Nilpotent Leibniz Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8757]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Nurul Shazwani Mohamed&nbsp; &nbsp;Sharifah Kartini Said Husain&nbsp; &nbsp;and Faridah Yunos&nbsp; &nbsp;</p><p>Given two algebras <img src=image/13490829_01.gif> and <img src=image/13490829_02.gif>, if <img src=image/13490829_02.gif> lies in the Zariski closure of the orbit of <img src=image/13490829_01.gif>, we say that <img src=image/13490829_02.gif> is a degeneration of <img src=image/13490829_01.gif>. We denote this by <img src=image/13490829_03.gif>. Degenerations (or contractions) have been widely applied from a range of physical and mathematical points of view. The best-known example of an application of degenerations is the limiting process from quantum mechanics to classical mechanics under <img src=image/13490829_04.gif>, which corresponds to the contraction of the Heisenberg algebras to the abelian ones of the same dimension. Research on degenerations of Lie, Leibniz and other classes of algebras is very active. Throughout the paper we deal with the mathematical background of abstract algebraic structures. The present paper is devoted to the degenerations of low-dimensional nilpotent Leibniz algebras over the field of complex numbers. In particular, we focus on the classification of three-dimensional nilpotent Leibniz algebras. A list of invariance arguments is provided and their dimensions are calculated in order to find the possible degenerations between each pair of algebras. We show that for each possible degeneration there exists a construction of a parametrized basis depending on the parameter <img src=image/13490829_05.gif>. We prove the non-degeneration cases for the mentioned classes of algebras by providing reasons to reject the degenerations. As a result, we give a complete list of degenerations and non-degenerations of low-dimensional complex nilpotent Leibniz algebras. In future research, this result can be used to find rigidity and irreducible components.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[The Semi Analytics Iterative Method for Solving Newell-Whitehead-Segel Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8756]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Busyra Latif &nbsp; &nbsp;Mat Salim Selamat&nbsp; &nbsp;Ainnur Nasreen Rosli&nbsp; &nbsp;Alifah Ilyana Yusoff&nbsp; &nbsp;and Nur Munirah Hasan&nbsp; &nbsp;</p><p>The Newell-Whitehead-Segel (NWS) equation is a nonlinear partial differential equation used in modeling various phenomena arising in fluid mechanics. In recent years, various methods have been used to solve the NWS equation, such as the Adomian Decomposition method (ADM), the Homotopy Perturbation method (HPM), the New Iterative method (NIM), the Laplace Adomian Decomposition method (LADM) and the Reduced Differential Transform method (RDTM). In this study, the NWS equation is solved approximately using the Semi Analytical Iterative method (SAIM) to determine the accuracy and effectiveness of this method. Comparisons of the results obtained by SAIM with the exact solution and existing results obtained by other methods such as ADM, LADM, NIM and RDTM reveal the accuracy and effectiveness of the method. The solution obtained by SAIM is close to the exact solution, and its error function is closer to zero than those of the other methods mentioned above. The results were computed using Maple 17. For future use, SAIM is accurate, reliable, and easier to apply to nonlinear problems, since the method is simple, straightforward, and derivative-free; it does not require calculating multiple integrals and demands less computational work.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[From Exploratory Data Analysis to Exploratory Spatial Data Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8755]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Patricia Abelairas-Etxebarria&nbsp; &nbsp;and Inma Astorkiza&nbsp; &nbsp;</p><p>Exploratory Data Analysis, introduced by Tukey [19], has been used in numerous studies in many areas, especially in the social sciences. This technique searches for behavioral patterns of the variables of the study, establishing a hypothesis with the least possible structure. However, in recent times, the inclusion of the spatial perspective in this type of analysis has proven essential because, in many analyses, the observations are spatially autocorrelated and/or present spatial heterogeneity. The presence of these spatial effects makes it necessary to include spatial statistics and spatial tools in Exploratory Data Analysis. Exploratory Spatial Data Analysis includes a set of techniques that describe and visualize those spatial effects: spatial dependence and spatial heterogeneity. It describes and visualizes spatial distributions, identifies outliers, finds distribution patterns, clusters and hot spots, and suggests spatial regimes or other forms of spatial heterogeneity, and it is being increasingly used. With the objective of reviewing the latest applications of this technique, this paper firstly shows the tools used in Exploratory Spatial Data Analysis and secondly reviews the latest Exploratory Spatial Data Analysis applications, focused particularly on different areas of the social sciences. In conclusion, the growing interest in the use of this spatial technique to analyze different aspects of the social sciences, including the spatial dimension, should be noted.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[A New Method to Estimate Parameters in the Simple Regression Linear Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8754]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Agung Prabowo&nbsp; &nbsp;Agus Sugandha&nbsp; &nbsp;Agustini Tripena&nbsp; &nbsp;Mustafa Mamat&nbsp; &nbsp;Sukono&nbsp; &nbsp;and Ruly Budiono&nbsp; &nbsp;</p><p>Linear regression is widely used in various fields. Research on linear regression uses the OLS and ML methods to estimate its parameters. The OLS and ML methods require many assumptions to hold, and it is frequently found that when these assumptions are not met, both methods cannot be used successfully. This paper proposes a new method which does not require any such assumptions, called SAM (Simple Averaging Method), to estimate the parameters of the simple linear regression model. The method may be used without fulfilling the assumptions of the regression model. Three new theorems are formulated to simplify the estimation of parameters in the simple linear regression model with SAM. Using the same data, parameter estimation for the simple linear regression model is conducted with SAM. The result shows that the obtained regression parameters do not differ greatly. However, to measure the accuracy of both methods, a comparison of the errors made by each method is conducted using the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). By comparing the values of RMSE and MAE for both methods, the SAM method may be used to estimate parameters in the regression equation. The advantage of SAM is that it is free from all assumptions required by regression, such as the assumption of error normality, which requires the data to come from a normal distribution.</p>]]></description>
<pubDate>Mar 2020</pubDate>
</item>
<item>
<title><![CDATA[A Two-dimensional Mathematical Model for Long-term Contaminated Groundwater Pollution Measurement around a Land Fill]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8656]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Jirapud Limthanakul&nbsp; &nbsp;and Nopparat Pochai&nbsp; &nbsp;</p><p>A source of contaminated groundwater is the disposal of waste material in a landfill. There are many people in rural areas whose primary source of drinking water is well water. This well water may be contaminated by groundwater from landfills. In this research, a two-dimensional mathematical model for long-term contaminated groundwater pollution measurement around a landfill is proposed. The model is governed by a combination of two models. The first is a transient two-dimensional groundwater flow model that provides the hydraulic head of the groundwater. The second is a transient two-dimensional advection-diffusion equation that provides the groundwater pollutant concentration. Explicit finite difference techniques are proposed to approximate the hydraulic head and the groundwater pollutant concentration. The simulations can be used to indicate when each simulated zone becomes a hazardous zone or a protection zone.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[Multiplicity of Approach and Method in Augmentation of Simplex Method: A Review]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8655]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Nor Asmaa Alyaa Nor Azlan&nbsp; &nbsp;Effendi Mohamad&nbsp; &nbsp;Mohd Rizal Salleh&nbsp; &nbsp;Oyong Novareza&nbsp; &nbsp;Dani Yuniawan&nbsp; &nbsp;Muhamad Arfauz A Rahman&nbsp; &nbsp;Adi Saptari&nbsp; &nbsp;and Mohd Amri Sulaiman&nbsp; &nbsp;</p><p>The purpose of this review paper is to set out an augmentation approach and exemplify the distribution of augmentation works on the Simplex method. The augmentation approach is classified into three forms: addition, substitution and integration. The diversity study shows that the substitution approach has the highest usage frequency, at about 45.2% of the total. It is followed by the addition approach, which makes up 32.3% of the usage frequency, and the integration approach, at about 22.6%, the smallest share of the overall usage frequency. Since integration has the lowest usage percentage, the paper then looks ahead to a future study of the integration approach that can be performed from the executed distribution of the augmentation works according to the Simplex method's computation stages. A theme screening is then conducted with a set of criteria and themes to propose a new integration approach to the augmentation of the Simplex method.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[Gaussian Distribution on Validity Testing to Analyze the Acceptance Tolerance and Significance Level]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8654]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Arif Rahman&nbsp; &nbsp;Oke Oktavianty&nbsp; &nbsp;Ratih Ardia Sari&nbsp; &nbsp;Wifqi Azlia&nbsp; &nbsp;and Lavestya Dina Anggreni&nbsp; &nbsp;</p><p>Some research requires data homogeneity. The dispersion of data can lead research in an absurd direction. Outliers make homogeneity unrealistic. The research can reject extreme data as outliers to estimate a trimmed arithmetic mean. When the data dispersion is wide, identification of the outliers will fail. This study evaluates the confidence interval and compares it with the acceptance tolerance. There are three types of invalidity in data gathering: outliers, too wide a dispersion, and a distorted central tendency.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[Fuzzy Parameterized Dual Hesitant Fuzzy Soft Sets and Its Application in TOPSIS]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8653]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Zahari Md Rodzi&nbsp; &nbsp;and Abd Ghafur Ahmad&nbsp; &nbsp;</p><p>The purpose of this work is to present a new theory, namely fuzzy parameterized dual hesitant fuzzy soft sets (FPDHFSSs). This theory is an extension of the existing dual hesitant fuzzy soft set in which each parameter in the set of parameters is assigned a respective weight. We also introduce the basic operations of FPDHFSSs, for instance the intersection, union, addition and product operations. Then, we propose the concept of score functions of FPDHFSSs, determined from the average mean, geometric mean and fractional score. These score functions are then divided into membership and non-membership elements, from which the distance of FPDHFSSs is introduced. The proposed distance of FPDHFSSs has been applied in TOPSIS, which makes it possible to solve problems in a dual hesitant fuzzy soft set environment.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[Usefulness of Mathematics Subjects in the Accounting Courses in Baccalaureate Education]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8652]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Alec John Villamar&nbsp; &nbsp;Marionne Gayagoy&nbsp; &nbsp;Flerida Matalang&nbsp; &nbsp;and Karen Joy Catacutan&nbsp; &nbsp;</p><p>This study aimed to determine the usefulness of Mathematics subjects in the accounting courses of the Bachelor of Science in Accountancy. The Mathematics subjects, which include College Algebra, Mathematics of Investment, Business Calculus and Quantitative Techniques, were evaluated through their Course Learning Objectives, while their usefulness for the accounting courses, which include Financial Accounting, Advanced Accounting, Cost Accounting, Management Advisory Services, Auditing and Taxation, was evaluated by the students. Descriptive research was employed among all 5<sup>th</sup>-year BS-Accountancy students who had completed all the accounting subjects in the Accountancy Program and had passed the different Mathematics subjects prerequisite to their courses. A survey questionnaire was used to gather data. Using descriptive statistics, the results showed that Mathematics of Investment is the most useful subject in the different accounting courses, particularly in Financial Accounting, Advanced Accounting and Auditing. Further, using the mean, the results showed that several skills that can be acquired in the Mathematics subjects are useful in accounting courses, and the use of the fundamental operations is the most useful skill in all accounting subjects.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[On A 3-Points Inflated Power Series Distributions Characterizations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8651]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Rafid S. A. Alshkaki&nbsp; &nbsp;</p><p>Differential equations are used in modelling in many disciplines, such as engineering, chemistry, physics, biology, economics, and other fields of science, and hence can be used to understand and determine the underlying probabilistic behavior of phenomena through their probability distributions. This paper uses a simple form of differential equation, namely the linear form, to determine the probability distributions of some of the most important and popular subclasses of discrete distributions used in real life: the Poisson, the binomial, the negative binomial, and the logarithmic series distributions. A class of power series distributions with a finite number of inflated points, which contains the Poisson, the binomial, the negative binomial, and the logarithmic series distributions among its members, was defined, and some of its characteristic properties were given, along with characterizations of the 3-point inflated versions of these four distributions through a linear differential equation for their probability generating functions. Further, some previously known results were shown to be special cases of our results.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[A Comparative Study of Space and Time Fractional KdV Equation through Analytical Approach with Nonlinear Auxiliary Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8650]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2020<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;8&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Hasibun Naher&nbsp; &nbsp;Humayra Shafia&nbsp; &nbsp;Md. Emran Ali&nbsp; &nbsp;and Gour Chandra Paul&nbsp; &nbsp;</p><p>In this article, a nonlinear partial fractional differential equation, namely the KdV equation, is revisited with the help of the modified Riemann-Liouville fractional derivative. The equation is transformed into a nonlinear ordinary differential equation by using the fractional complex transformation. The goal of this paper is to construct new analytical solutions of the space and time fractional nonlinear KdV equation through the extended <img src=image/13414072_01.gif>-expansion method. The work produces abundant exact solutions in terms of hyperbolic, trigonometric, rational, exponential, and complex forms, which are new and more general than existing results in the literature. The newly generated solutions show that the executed method is a well-organized and competent mathematical tool for investigating a class of nonlinear evolution equations of fractional order.</p>]]></description>
<pubDate>Jan 2020</pubDate>
</item>
<item>
<title><![CDATA[Potential Growth Analysis of FDI in Albania]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8552]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Llesh Lleshaj&nbsp; &nbsp;and Alban Korbi&nbsp; &nbsp;</p><p>This study analyzes 20 different countries of origin of foreign investors that have invested in Albania (this sample represents 95% of FDI (Foreign Direct Investment) stocks, 2007-2014). The analysis technique used is a gravity model of FDI stocks in Albania. The main independent variables in this analysis are GDP, the level of business taxes, the difference in GDP per capita, the similarity of economies, etc. The result of this study is that the level of FDI stocks in Albania is lower than its potential compared with the average FDI stock in the states of the Balkan region.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Non-Archimedean Fuzzy M-Metric Space and Fixed Point Theorems Endowed with a Reflexive Digraph]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8551]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Anuradha&nbsp; &nbsp;Seema Mehra&nbsp; &nbsp;and Said Broumi&nbsp; &nbsp;</p><p>Motivated by the concepts of fuzzy metric and m-metric spaces, we introduce the notion of non-Archimedean fuzzy m-metric space, which is an extension of partial fuzzy metric space. We present some examples in support of this new notion. Its topological structure and some of its properties are specified as well. At the end, some fixed point results are also provided.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Ellipsoidal Approximation of Distributions and Its Applications]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8550]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Igor Sinitsyn&nbsp; &nbsp;and Vladimir Sinitsyn&nbsp; &nbsp;</p><p>Analytical methods of the mathematical statistics of random vectors and matrices based on the parametrization of distributions are widely used. These methods make it possible to design practically simple software when definite information about the analytical properties of the distributions under research is available. The main difficulty in practical applications of methods based on the parametrization of distributions is the rapid increase in the number of equations for the moments, the semi-invariants or the coefficients of the truncated orthogonal expansions with the dimension of the (generally extended) state vector and the maximal order of the moments involved. The number of equations for the parameters becomes exceedingly large in such cases. For structural parametrization and/or approximation of the probability densities of random vectors we apply ellipsoidal densities, i.e. densities whose surfaces of equal probability are similar concentric ellipsoids (ellipses for two-dimensional vectors, ellipsoids for three-dimensional vectors, hyperellipsoids for vectors of dimension more than three). In particular, a normal distribution in any finite-dimensional space has an ellipsoidal structure. The distinctive characteristic of such distributions is that their probability densities are functions of a positively determined quadratic form <img src=image/13414026_1.gif> involving the expectation of the random vector and a positively determined matrix <img src=image/13414026_2.gif>. 
The ellipsoidal approximation method (EAM) drastically reduces the number of parameters to <img src=image/13414026_3.gif> (<img src=image/13414026_4.gif>), where <img src=image/13414026_5.gif> is the number of probabilistic moments. Using the ellipsoidal linearization method (ELM), we get <img src=image/13414026_6.gif>. Basic EAM and ELM foundations and applications to problems of mathematical statistics and to ellipsoidal distributions with invariant measure in populational Volterra differential stochastic nonlinear systems are considered.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[To Asymptotic of the Solution of the Heat Conduction Problem with Double Nonlinearity with Absorption at a Critical Parameter]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8549]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Aripov M.&nbsp; &nbsp;Mukimov A.&nbsp; &nbsp;and Mirzayev B.&nbsp; &nbsp;</p><p>We study the asymptotic behavior (for <img src=image/13413939_1.gif>) of solutions of the Cauchy problem for a nonlinear parabolic equation with double nonlinearity, describing the diffusion of heat with nonlinear heat absorption at the critical value of the parameter β. For numerical computations, the long-time asymptotics of the solution found here was used as an initial approximation. Numerical experiments and visualization were carried out for the one- and two-dimensional cases.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Long-ranged Interaction Forces and Real Spaces Related to Them Including Anisotropic Cases]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8548]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Emil V. Veitsman&nbsp; &nbsp;</p><p>This paper aims to find a connection between i-dimensional spaces (i = 0, …, n) and the long-range j-dimensional attractive forces (j = 0, …, m) creating these spaces. The connection is fundamental and unrelated to any processes going on in the spaces being studied. A theorem is formulated and strictly proved showing in which cases the long-ranged attractive forces can form real spaces of different dimensions (i = 0, …, n). The existence of the attraction between masses is defined by the divergence of the vector of interaction between masses. Weakly anisotropic real spaces are studied by rotating an ellipsoid for (3<img src=image/13413751_1.gif>ζ)D-cases when its eccentricity ε<<1. Such spaces cannot be in equilibrium, and the time of their existence is substantially limited. The greater the anisotropy, the shorter the lifetime of such a substance.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Backward Simulation of Correlated Negative Binomial Lévy Process]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8408]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Taehan Bae&nbsp; &nbsp;and Maral Mazjini&nbsp; &nbsp;</p><p>Recent studies on correlated Poisson processes show that backward simulation methods are computationally efficient and incorporate flexible and extremal correlation structures in a multivariate risk system. These methods rely on the fact that the past arrival times of a Poisson process, given the number of events over a time interval [0, T], are the order statistics of uniform random variables on [0, T]. In this paper, we discuss an extension of the backward methods to a correlated negative binomial Lévy process, which is an appealing model for over-dispersed count data such as operational losses. To obtain the conditional uniformity for the negative binomial Lévy process, we consider a particular setting in which the time interval is partitioned into equally spaced sub-intervals of unit length and the terminal time T is set to be the number of sub-intervals. Under this setting, the resulting joint probability of the increment series, conditional on the number of events over [0, T], say l, is uniform for any point in the support of a [T, l]-simplex lattice. Based on this result, we establish a backward simulation method similar to that of the Poisson process. Both the conditional independence and conditional dependence cases are discussed, with illustrations of the corresponding time correlation patterns.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Evolutionary Variational Inequalities with Volterra Type Operators]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8407]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Mykola Bokalo&nbsp; &nbsp;and Olha Sus&nbsp; &nbsp;</p><p>In this paper, we consider the initial-value problem for parabolic variational inequalities (subdifferential inclusions) with Volterra type operators. We prove the existence and uniqueness of the solution, and estimates of the solution are obtained. The results are achieved using Banach's fixed point theorem (the contraction mapping principle). The motivation for this work comes from the evolutionary variational inequalities arising in the study of frictionless contact problems for linear viscoelastic materials with long-term memory. Such problems also have applications in constructing various models of injection molding processes.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Estimation of Small Tail Probabilities by Repeated Fusion]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8406]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Benjamin Kedem&nbsp; &nbsp;Lemeng Pan&nbsp; &nbsp;Paul J. Smith&nbsp; &nbsp;and Chen Wang&nbsp; &nbsp;</p><p>It is shown how to estimate the probability of exceeding any threshold from data below, or even far below, that threshold through repeated fusion of the data with externally generated random samples. This is referred to as repeated out-of-sample fusion (ROSF). A comparison of the approach with the peaks-over-threshold (POT) method across different tail types shows that ROSF provides more precise point and interval estimates based on moderately large samples.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Green's Tensor and Two-Dimensional BIE Method in Statics of a Massif with Arbitrary Anisotropy]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8405]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Sh. A. Dildabayev&nbsp; &nbsp;and G. K. Zakir'yanova&nbsp; &nbsp;</p><p>The question of constructing fundamental solutions for the two-dimensional statics of an elastic body with arbitrary anisotropy has remained open until now. Within the scope of the BEM, the question of calculating stresses at boundary points and at points located close to the boundary of the region also remains topical. In this work, fundamental solutions of the static problem for an elastic plane with arbitrary anisotropic properties are obtained as sums of residues of a function of a complex variable. Estimates of the fundamental solutions and their derivatives are presented in closed form. In the space of distributions, regular representations of the Somigliana formulas and of the stress-calculation formulas are obtained. The numerical implementation of the BIE method in the direct formulation is realized in the standard way. Test results for a circular hole in an anisotropic plane of the rhombic system show close agreement with the boundary values of displacements and stresses, including at nodes placed close to the boundary. The results of an analysis of the stress-strain state in the vicinity of rectangular mining chambers located deep below the ground surface are presented in tables and isoline plots.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Special Spline Approximation in CAD Systems of Linear Structure Routing]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8404]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>D. A. Karpov&nbsp; &nbsp;and V. I. Struchenkov&nbsp; &nbsp;</p><p>This article deals with the problem of approximating plane curves defined by a sequence of points with a spline of a given type. This task arises when developing methods for the computer-aided design of linear structures: railways and roads, trenches for laying pipelines, canals, etc. Its fundamental differences from the problems considered in the theory of splines and its applications are as follows: the spline elements are of various types (straight line segments and circular arcs joined by clothoids), and the boundaries of the elements and even their number are unknown; there are also constraints in the form of inequalities on the parameters of the elements. Continuity of the curve, the tangent, and the curvature is ensured. Clothoids are omitted if curvature continuity is not required, for example, when designing pipelines. These features of the task do not allow direct use of results from the theory of splines and nonlinear programming: the individual elements of the desired spline cannot be recognized from a given sequence of points, so it is not possible to select them separately, and the spline must be sought as a whole. The article presents a mathematical model and a new algorithm for solving the problem using dynamic programming.</p>]]></description>
<pubDate>Nov 2019</pubDate>
</item>
<item>
<title><![CDATA[Tree-based Threshold Model for Non-stationary Extremes with Application to the Air Pollution Index Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8373]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Afif Shihabuddin&nbsp; &nbsp;Norhaslinda Ali&nbsp; &nbsp;and Mohd Bakri Adam&nbsp; &nbsp;</p><p>The air pollution index (API) is a common tool used to describe air quality in the environment. A high API level indicates a greater level of air pollution, which adversely affects human health. A statistical model for high API levels is important for forecasting so that the public can be warned. In this study, extremes of the API are modelled using the Generalized Pareto Distribution (GPD). Since API values are determined by the values of five pollutants, namely sulphur dioxide, nitrogen dioxide, carbon monoxide, ozone and suspended particulate matter, API data exhibit non-stationarity. The standard method for modelling non-stationary extremes using the GPD fixes a high constant threshold and incorporates covariate models in the GPD parameters for data above the threshold to account for the non-stationarity. However, a constant threshold might be high enough for the GPD approximation to be valid under certain covariate conditions but not others, which violates the asymptotic basis of the GPD model. A new method for threshold selection in non-stationary extremes modelling using regression trees is proposed and applied to the API data. A regression tree is used to partition the API data into stationary groups with similar covariate conditions; a high threshold can then be applied within each group. The study shows that the model for API extremes using the tree-based threshold gives a good fit and provides an alternative to the model based on the standard method.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Investigation on the Clusterability of Heterogeneous Dataset by Retaining the Scale of Variables]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8372]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Norin Rahayu Shamsuddin&nbsp; &nbsp;and Nor Idayu Mahat&nbsp; &nbsp;</p><p>Clustering with heterogeneous variables in a dataset is no doubt a challenging process owing to the different scales in the data. This paper uses the SimMultiCorrData package in R to generate artificial datasets for clustering. Constructing artificial datasets with various distributions helps to mimic the nature of real datasets. Our experiments show that the clusterability of a dataset is influenced by various factors, such as overlapping clusters, noise, sub-clusters, and unbalanced objects within the clusters.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Outlier Detection in Local Level Model: Impulse Indicator Saturation Approach]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8371]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>F. Z. Che Rose&nbsp; &nbsp;M. T. Ismail&nbsp; &nbsp;and N. A. K. Rosili&nbsp; &nbsp;</p><p>The existence of outliers in financial time series may affect the estimation of economic indicators. The detection of outliers in the structural time series framework using the indicator saturation approach is the main interest of this study. The reference model used is the local level model. We apply Monte Carlo simulations to assess the performance of impulse indicator saturation in detecting additive outliers in the reference model. It is found that the significance level α = 0.001 (tiny) outperformed the other target sizes in detecting additive outliers of various sizes. Further, we apply impulse indicator saturation to detect outliers in the FTSE Bursa Malaysia Emas (FBMEMAS) index, and identify 14 outliers corresponding to several economic and financial events.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Application of ARIMAX Model to Forecast Weekly Cocoa Black Pod Disease Incidence]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8370]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Ling, A. S. C.&nbsp; &nbsp;Darmesah, G.&nbsp; &nbsp;Chong, K. P.&nbsp; &nbsp;and Ho, C. M.&nbsp; &nbsp;</p><p>Losses caused by cocoa black pod disease around the world have exceeded $400 million, partly due to inaccurate forecasting of disease incidence, which leads to inappropriate spraying timing. The weekly incidence of cocoa black pod disease is affected by external factors such as climatic variables. To improve the timing of spraying, the forecasting of disease incidence should take into account influencing external factors such as temperature, rainfall and relative humidity. The objective of this study is to develop an Autoregressive Integrated Moving Average with external variables (ARIMAX) model that accounts for the effects of the climatic influencing factors in order to forecast the weekly incidence of cocoa black pod disease. With respect to performance measures, the proposed ARIMAX model is found to improve on the traditional Autoregressive Integrated Moving Average (ARIMA) model. The forecasting results can benefit, in particular, the development of a decision support system for determining the right timing of actions to control cocoa black pod disease.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Performance of Classification Analysis: A Comparative Study between PLS-DA and Integrating PCA+LDA]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8369]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Nurazlina Abdul Rashid&nbsp; &nbsp;Wan Siti Esah Che Hussain&nbsp; &nbsp;Abd Razak Ahmad&nbsp; &nbsp;and Fatihah Norazami Abdullah&nbsp; &nbsp;</p><p>Classification methods are fundamental techniques designed to find mathematical models that can recognize the membership of each object in its proper class on the basis of a set of measurements. Classifying objects into groups when the number of variables in an experiment is large can cause misclassification problems. This study explores approaches for tackling the classification problem with a large number of independent variables using parametric methods, namely PLS-DA and PCA+LDA. Data are generated using a data simulator, Azure Machine Learning (AML) Studio, through a custom R module. The performance of PLS-DA was analysed and compared with the PCA+LDA model using different numbers of variables (p) and different sample sizes (n). The performance of PLS-DA and PCA+LDA was evaluated based on the minimum misclassification rate. The results demonstrate that PLS-DA performed better than PCA+LDA for large sample sizes. PLS-DA can be considered a good and reliable technique for classification tasks on large datasets.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Comparison of Queuing Performance Using Queuing Theory Model and Fuzzy Queuing Model at Check-in Counter in Airport]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8368]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Noor Hidayah Mohd Zaki&nbsp; &nbsp;Aqilah Nadirah Saliman&nbsp; &nbsp;Nur Atikah Abdullah&nbsp; &nbsp;Nur Su Ain Abu Hussain&nbsp; &nbsp;and Norani Amit&nbsp; &nbsp;</p><p>A queuing system is analysed to measure the efficiency of a model using the underlying concepts of queue models: arrival and service time distributions, queue disciplines and queue behaviour. The main aim of this study is to compare the behaviour of a queuing system at check-in counters using the Queuing Theory Model and the Fuzzy Queuing Model. The Queuing Theory Model gives performance measures as single values, while the Fuzzy Queuing Model gives ranges of values. The Dong, Shah and Wong (DSW) algorithm is used to define the membership functions of the performance measures in the Fuzzy Queuing Model. Based on observation, problems often occur when customers are required to wait in the queue for a long time, indicating that the service systems are inefficient. Data on variables such as arrival time in the queue (server) and service time were collected. Results show that the performance measures of the Queuing Theory Model lie in the range of the computed performance measures of the Fuzzy Queuing Model. Hence, the results obtained from the Fuzzy Queuing Model are consistent for measuring the queuing performance of an airline company, which can help solve waiting-line problems and improve the quality of the services the airline provides.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[A 2-Component Laplace Mixture Model: Properties and Parametric Estimations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8367]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Zakiah I. Kalantan&nbsp; &nbsp;and Faten Alrewely&nbsp; &nbsp;</p><p>Mixture distributions have received considerable attention in life applications. This paper presents a finite Laplace mixture model with two components. We discuss the model's properties and derive parameter estimators using the method of moments and maximum likelihood estimation. We study the relationship between the parameters and the shape of the proposed distribution. A simulation study assesses the effectiveness of the parameter estimation for the Laplace mixture distribution.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[The Investigation on the Impact of Financial Crisis on Bursa Malaysia Using Minimal Spanning Tree]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8366]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4A&nbsp;&nbsp;<p>Hafizah Bahaludin&nbsp; &nbsp;Mimi Hafizah Abdullah&nbsp; &nbsp;Lam Weng Siew&nbsp; &nbsp;and Lam Weng Hoe&nbsp; &nbsp;</p><p>In recent years, there has been growing interest in financial networks. A financial network helps to visualize the complex relationships between stocks traded in a market. This paper investigates the stock market network of Bursa Malaysia during the 2008 global financial crisis. The financial network is based on the top hundred companies listed on Bursa Malaysia. A minimal spanning tree (MST), with cross-correlations as input, is employed to construct the financial network. The impact of the global financial crisis on the companies is evaluated using centrality measures such as degree, betweenness, closeness and eigenvector centrality. The results indicate that there were some changes in the linkages between securities after the financial crisis, which can have a significant effect on investment decision making.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[A Note on Locally Metric Connections]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8355]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Mihail Cocos&nbsp; &nbsp;</p><p>The Fundamental Theorem of Riemannian geometry states that on a Riemannian manifold there exists a unique symmetric connection compatible with the metric tensor. There are numerous examples of connections that, even locally, do not admit any compatible metric. A very important class of symmetric connections in the tangent bundles of certain manifolds (affinely flat ones) are those for which the curvature tensor vanishes. Such connections are locally metric. S.S. Chern conjectured that the Euler characteristic of an affinely flat manifold is zero. A possible proof of this long outstanding conjecture would be to verify that the space of locally metric connections is path connected. To do so, one needs practical criteria for the metrizability of a connection. In this paper, we give necessary and sufficient conditions for a connection in a plane bundle over a surface to be locally metric. These conditions are easy to verify using any local frame. As a global result, we also give a necessary condition for two connections to be metrically equivalent in terms of their Euler classes.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[On the Exactness of Distribution Density Estimates Constructed by Some Classes of Dependent Observations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8354]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Zurab Kvatadze&nbsp; &nbsp;and Beqnu Pharjiani&nbsp; &nbsp;</p><p>On a probability space (Ω, F, P) we consider a given two-component stationary (in the narrow sense) sequence <img src=image/13413263_01.gif>, where <img src=image/13413263_02.gif> is the controlling sequence and the members <img src=image/13413263_03.gif> of the sequence <img src=image/13413263_04.gif> are observations of some random variable <img src=image/13413263_05.gif>, which are used in the construction of kernel estimates of Rosenblatt–Parzen type for an unknown density <img src=image/13413263_06.gif> of the variable <img src=image/13413263_05.gif>. The cases of conditional independence and chain dependence of these observations are considered. Upper bounds are established for the expected squared deviation of the obtained estimates from <img src=image/13413263_06.gif>.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Ehrlich-type Methods with King's Correction for the Simultaneous Approximation of Polynomial Complex Zeros]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8353]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Roselaine Neves Machado&nbsp; &nbsp;and Luiz Guerreiro Lopes&nbsp; &nbsp;</p><p>There are many simultaneous iterative methods for approximating complex polynomial zeros, from more traditional numerical algorithms, such as the well-known third-order Ehrlich–Aberth method, to more recent ones. In this paper, we present a new family of combined iterative methods for the simultaneous determination of simple complex zeros of a polynomial, which uses the Ehrlich iteration together with a correction based on King's family of iterative methods for nonlinear equations. The use of King's correction increases the convergence order of the basic method from three to six. Numerical examples are given to illustrate the convergence behaviour and effectiveness of the proposed sixth-order Ehrlich-like family of combined iterative methods for the simultaneous approximation of simple complex polynomial zeros.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Mathematical Modelling of Corrosion for Polymers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8227]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Norazaliza Mohd Jamil&nbsp; &nbsp;</p><p>Water pipelines are usually made of polymers. Chlorine, an oxidizing agent, is added to the water system to prevent the spread of disease. However, exposure to a chlorinated environment can lead to polymer pipe degradation and crack formation, which ultimately results in complete pipe failure. To save labor, time and operating cost in predicting the failure time of a polymer pipe, we focus on its modeling and simulation. A current kinetic model for the corrosion of polymers due to the action of chlorine is analyzed extensively from a mathematical point of view. Using nondimensionalization, the number of parameters in the original governing equations of the kinetic model is reduced. The dimensionless set of differential equations is then solved numerically by the Runge–Kutta method. Two sets of simulations are carried out, for low and high chlorine concentrations, and essential characteristics of both regimes are captured. This approach provides better predictive capabilities and thus increases our understanding of the corrosion process.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Implicit Two-point Block Method with Third and Fourth Derivatives for Solving General Second Order ODEs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8226]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Reem Allogmany&nbsp; &nbsp;Fudziah Ismail&nbsp; &nbsp;and Zarina Bibi Ibrahim&nbsp; &nbsp;</p><p>In this paper, we present an implicit two-point block method for directly solving general second-order ordinary differential equations (ODEs). The method incorporates the first and second derivatives of f(x, y, y'), which are the third and fourth derivatives of the solution. The method is derived using the Hermite interpolating polynomial as the basis function. The two-point block method is compared in terms of accuracy to several existing methods whose order is almost equal to or higher than that of the new method. Numerical results demonstrate the accuracy and efficacy of the new method, and an application of the method is discussed.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Invariants of Surface Indicatrix in a Special Linear Transformation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8225]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>A. Artykbaev&nbsp; &nbsp;and B. M. Sultanov&nbsp; &nbsp;</p><p>A linear transformation of the plane whose matrix belongs to the Heisenberg group is considered. The transformation matrix is neither symmetric nor orthogonal, but its determinant is one. The class of second-order curves that are obtained from each other by the transformation under consideration is studied, and invariants of the curves in this class are proved. In particular, the conservation of the product of the semi-axes of the curves in this class is proved, as well as the equality of the areas of the ellipses in the class under consideration. The obtained invariants are then applied to second-order curves that arise as the indicatrix of a surface. In conclusion, a theorem is obtained proving the invariance of the total curvature of a surface in Euclidean space under the transformation considered, which is a deformation.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Some Topological Indices of Subgroup Graph of Symmetric Group]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8224]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Abdussakir&nbsp; &nbsp;</p><p>The concept of the topological index of a graph is increasingly diverse because researchers continue to introduce new topological indices. Research on topological indices, which initially examined only graphs related to chemical structures, has begun to examine graphs in general. On the other hand, more and more concepts of graphs obtained from algebraic structures are being introduced. Thus, studying the topological indices of a graph obtained from an algebraic structure such as a group is very interesting. One such concept is the subgroup graph, introduced by Anderson et al. in 2012, and until now there has been no research on the topological indices of the subgroup graph of the symmetric group. This article examines several topological indices of the subgroup graphs of the symmetric group for trivial normal subgroups. It focuses on determining formulae for various Zagreb indices, such as the first and second Zagreb indices and co-indices, the reduced second Zagreb index, and the first and second multiplicative Zagreb indices, as well as several eccentricity-based topological indices, such as the first and second Zagreb eccentricity indices and the eccentric connectivity, connective eccentricity, eccentric distance sum and adjacent eccentric distance sum indices of these graphs.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[Assessing the Effect of Complex Survey Design in the Analysis of Child Labour Prevalence Rate in Ghana]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8223]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Lucy Twumwaah Afriyie&nbsp; &nbsp;Bashiru I. I. Saeed&nbsp; &nbsp;and Abukari Alhassan&nbsp; &nbsp;</p><p>Statistical surveys are conducted to estimate population parameters when there are reasons restricting use of the total population. In practice, there are two survey strategies (simple and complex survey designs), and the choice between them depends on several factors, including the characteristics of the population and the nature of the research questions. When a complex survey design is used, standard statistical methods that do not take the complex nature of the design into account may lead to inaccurate estimates. In Ghana, living standard surveys are conducted using complex survey designs involving stratification, clustering and the estimation of survey weights. In this study, bootstrap resampling methods are used to explore the effect of complex survey design on the analysis of the child labour prevalence rate. The relative efficiency of the complex survey design approach was determined using the design effect (deff). Data from the Ghana Living Standard Survey Round 6 (GLSS 6), conducted by the Ghana Statistical Service in 2012, were used for the analysis, and the target population was children aged 5–17 years. The results of the simulation study show that relatively efficient estimates are obtained when the characteristics of the complex survey design are considered in the analysis. Thus, ignoring these characteristics could lead to unrealistic estimates.</p>]]></description>
<pubDate>Sep 2019</pubDate>
</item>
<item>
<title><![CDATA[The Difference Splitting Scheme for Hyperbolic Systems with Variable Coefficients]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8196]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Aloev R. D.&nbsp; &nbsp;Eshkuvatov Z. K.&nbsp; &nbsp;Khudoyberganov M. U.&nbsp; &nbsp;and Nematova D. E.&nbsp; &nbsp;</p><p>In this paper, we propose a systematic approach to designing and investigating the adequacy of computational models for a mixed dissipative boundary value problem posed for symmetric t-hyperbolic systems. We consider a two-dimensional linear hyperbolic system with variable coefficients and a lower-order term in dissipative boundary conditions. We construct a difference splitting scheme for the numerical calculation of stable solutions of this system. A discrete analogue of the Lyapunov function is constructed for the numerical verification of the stability of solutions of the considered problem, and an a priori estimate is obtained for this discrete analogue. The estimate allows us to assert the exponential stability of the numerical solution. A theorem on the exponential stability of the solution of the boundary value problem for the linear hyperbolic system, and on the stability of the difference splitting scheme in Sobolev spaces, is proved. These stability theorems enable us to prove the convergence of the numerical solution.</p>]]></description>
<pubDate>Jul 2019</pubDate>
</item>
<item>
<title><![CDATA[Non-existence of Solutions of Diophantine Equations of the Form <img src=image/13490640_01.gif>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8195]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Renz Jimwel S. Mina&nbsp; &nbsp;and Jerico B. Bacani&nbsp; &nbsp;</p><p>Numerous studies have been devoted to finding the solutions <img src=image/13490640_02.gif>, in the set of non-negative integers, of Diophantine equations of type <img src=image/13490640_03.gif> (1), where the values p and q are fixed. In this paper, we also deal with a more generalized form, namely equations of type <img src=image/13490640_04.gif> (2), where n is a positive integer. We present results that guarantee the non-existence of solutions of such Diophantine equations in the set of positive integers. We use the concepts of the Legendre symbol and the Jacobi symbol, which have also been used in the study of other types of Diophantine equations. Here, we assume that one of the exponents is odd. With these results, the problem of solving Diophantine equations of this type becomes easier compared with the previous works of several authors. Moreover, we can extend the results by considering the Diophantine equations <img src=image/13490640_05.gif> (3) in the set of positive integers.</p>]]></description>
<pubDate>Jul 2019</pubDate>
</item>
<item>
<title><![CDATA[Power Law Behavior and Tail Modeling on Low Income Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8194]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Nurulkamal Masseran&nbsp; &nbsp;Lai Hoi Yee&nbsp; &nbsp;Muhammad Aslam Mohd Safari&nbsp; &nbsp;and Kamarulzaman Ibrahim&nbsp; &nbsp;</p><p>Poverty is an important issue that needs to be addressed by all countries. Poverty concerns the group of people earning a low income (the lower tail of the income distribution). In Malaysia, low-income earners are classified as the B40 group. This study aims to describe the behavior of the low-income distribution using the power law model. For this purpose, an inverse Pareto model was applied to describe the lower-tail data of Malaysian household income. A robust and efficient estimator, called the probability integral transform statistic estimator, was utilized for estimating the shape parameter of the inverse Pareto distribution. Based on the fitted inverse Pareto model, not all households in the B40 group follow power law behavior. However, the power law provides a good description of the subgroup of B40 households below the poverty line. Based on the inverse Pareto model, the parametric Lorenz curve and the Gini index were derived to provide a robust measure of the income inequality of poor households in Malaysia.</p>]]></description>
<pubDate>Jul 2019</pubDate>
</item>
<item>
<title><![CDATA[Sea Level Rise Impact on Atoll Islands: Implication to SDG 6]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8097]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Xin Yi Kh'ng&nbsp; &nbsp;Su Yean Teh&nbsp; &nbsp;and Hock Lye Koh&nbsp; &nbsp;</p><p>Low-lying atoll islands that depend heavily on fresh groundwater for survival are particularly vulnerable to sea level rise (SLR), which calls for appropriate climate action (SDG 13). As the sea level rises, the associated increase in surface seawater inundation and subsurface saltwater intrusion will reduce the availability of fresh groundwater due to permanent salinization of groundwater and corresponding thinning of the freshwater lens. This paper provides scientific insights into how freshwater lenses in atoll islands respond to SLR. Simulations of saturated-unsaturated variable-density groundwater flow with salt transport are performed with the groundwater flow and solute transport model SUTRA (Saturated-Unsaturated Transport) developed by the U.S. Geological Survey. Model simulations and statistical analyses suggest that freshwater lens thickness depends mainly on the groundwater recharge rate, island size and aquifer hydraulic conductivity. The impact of various geo-hydrologic parameters on fresh groundwater sustainability is then analyzed to explore the feasibility of increasing groundwater recharge through rainwater harvesting as a mitigation measure. The implication for the achievement of sustainable clean water and sanitation for all (SDG 6) is also discussed.</p>]]></description>
<pubDate>Jul 2019</pubDate>
</item>
<item>
<title><![CDATA[Modified Weighted Sum Method for Decisions with Altered Sources of Information]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8096]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Shahryar Sorooshian&nbsp; &nbsp;and Yasaman Parsia&nbsp; &nbsp;</p><p>Multi-Attribute Decision Making (MADM) is an asset in providing solutions for today's complex issues and problems. The main source of information in many MADM techniques is a panel of experts. In some cases, however, the panel may lack the knowledge to rank or weight one or a few particular criteria for the decision making, so the decision maker needs an alternative source of information to complete the decision-making process. Hence, the Weighted Sum Method (WSM), one of the most popular MADM techniques, is selected; as the primary aim of this article, a modified version of the WSM is proposed for multiple-criteria decision makers as a solution for cases where another source of information is needed to rank or weight particular criteria. The modified WSM is presented in five stages. Its validity, through feasibility, is tested and verified in a numerical example. Additionally, future research could use the same approach to modify other MADM techniques to deal with two or more sources of information.</p>]]></description>
<pubDate>Jul 2019</pubDate>
</item>
<item>
<title><![CDATA[Analyticity and Systematicity Students of Mathematics Education on Solving Non-routine Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8067]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Swasti Maharani&nbsp; &nbsp;Toto Nusantara&nbsp; &nbsp;Abdur Rahman As'ari&nbsp; &nbsp;and Abdul Qohar&nbsp; &nbsp;</p><p>Critical thinking is a skill needed in education. It has two main components: critical thinking ability and critical thinking disposition. The purpose of this research is to describe the critical thinking disposition of mathematics education students, especially the analyticity and systematicity components, when solving non-routine problems (problems that are not logical or are incomplete). This research is a qualitative descriptive study. Its stages are as follows. First, students are given three non-routine questions. Second, the researchers directly observe and record the subjects while they work on the problems. Third, the subjects are interviewed about their resolution of the non-routine problems. Fourth, conclusions are drawn by describing the critical thinking disposition of prospective mathematics teacher students, especially the analyticity and systematicity components. The results show that the critical thinking disposition of first-year students majoring in mathematics education is still low. They did not analyze the problems and answers well, did not write their answers in order, and lacked focus when solving non-routine problems. They do not yet have a keen sense of the irregularities in a problem. Further development work to improve students' critical thinking disposition is highly recommended.</p>]]></description>
<pubDate>May 2019</pubDate>
</item>
<item>
<title><![CDATA[Some Properties of a Connected Topological Group]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8066]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Beshimov R. B.&nbsp; &nbsp;and Zhuraev R. M.&nbsp; &nbsp;</p><p>In this paper, we study some topological properties of connected topological groups. From a logical point of view, the concept of a topological group arises as a simple combination of the concepts of a group and a topological space: on the same set G, a group multiplication and a topological structure are specified simultaneously.</p>]]></description>
<pubDate>May 2019</pubDate>
</item>
<item>
<title><![CDATA[An Efficient Copula under Data Perturbations Across Stock Markets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7927]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ivy Barley&nbsp; &nbsp;Gabriel Asare Okyere&nbsp; &nbsp;Henry Man’tieebe Kpamma&nbsp; &nbsp;James Baah Achamfour&nbsp; &nbsp;David Kweku&nbsp; &nbsp;and Godfred Zaachi&nbsp; &nbsp;</p><p>Economic trade amongst the various West African economies can lead to either mutual gains or losses. It is therefore important to assess the effect that dependence amongst these countries can have on their economies. The linear correlation coefficient is normally used as a measure of dependence between random variables. However, it has limitations when used for economic variables such as stock market returns, as they do not follow an elliptical distribution. Copulas, however, are scale-free methods of constructing dependence structures amongst stock markets, even in cases of data perturbations. The aim of this study is to assess the impact of data perturbations on copula models. The maximum likelihood estimation method was used to estimate the parameters of the Archimedean copulas. The Clayton, Joe, Frank and Gumbel copulas were estimated. The Gumbel copula was the most robust copula in all the cases of data perturbations.</p>]]></description>
<pubDate>May 2019</pubDate>
</item>
<item>
<title><![CDATA[An Analogy to Help Understanding Discovery, Insight and Invention in Mathematics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7926]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;May&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ruggero Ferro&nbsp; &nbsp;</p><p>An analogy with how life evolves in a town one is moving into may help us understand what could be meant by discovery, insight and invention in mathematics. The relevant key features common to these two environments (life in another town and mathematics) are: 1) the mental abilities involved in dealing with the situation; 2) the realization that anything observed is contingent; 3) the discovery, via insight, of the motivations of what has been done and of their influence up to the present; 4) the need to understand the motivations and manners of realization of what was done in order to continue the development; 5) the continuous evolution of needs and requirements, which opens new problems that demand insight and invention for their solutions; 6) the fact that not every solution meets the goals and requirements with the same short-range and long-range convenience, so that a preliminary evaluation according to criteria to be established is convenient, though a conclusive evaluation can be done only afterward. These observations justify supporting a dynamic attitude toward mathematics and rejecting the claim that everything must be the way it is according to a priori mental evidence that is unduly assumed.</p>]]></description>
<pubDate>May 2019</pubDate>
</item>
<item>
<title><![CDATA[Hyperstability and Stability of a Logarithm-type Functional Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7849]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Young Whan Lee&nbsp; &nbsp;and Gwang Hui Kim&nbsp; &nbsp;</p><p>In 2001, Maksa and Páles [12] introduced a new type of stability, hyperstability, for a class of linear functional equations <img src=image/13412851_01.gif>. Riedel and Sahoo [14] generalized a functional equation associated with the distance between probability distributions, namely <img src=image/13412851_02.gif>. Elfen et al. [7] obtained the solution of the functional equation <img src=image/13412851_03.gif> on a semigroup G. The aim of this paper is to investigate the hyperstability and the Hyers-Ulam stability of the above logarithm-type functional equation considered by Elfen et al. Namely, if f approximately satisfies the above equation, then there exists a solution of this equation within the <img src=image/13412851_04.gif> bound of the given approximative function f.</p>]]></description>
<pubDate>Mar 2019</pubDate>
</item>
<item>
<title><![CDATA[Integro-Differential Age-Structured System for Influenza Transmission Dynamics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7848]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Ana Vivas-Barber&nbsp; &nbsp;and Sunmi Lee&nbsp; &nbsp;</p><p>Influenza infection shows a wide range of severity, and it is well known that a significant proportion of individuals are asymptomatic or experience mild infections. It is also widely accepted that influenza transmission dynamics depend on age distributions. An integro-partial differential system is considered for influenza transmission dynamics, which includes the standard Susceptible-Infected-Recovered (SIR) classes together with a quarantine class (Q) and an asymptomatic class (A). In this work, we extend the previous model to an integro-partial differential model by including age structure. We establish the existence of an endemic steady-state distribution and give its explicit expression. Then, an analytic expression for the basic reproduction number is obtained. Furthermore, we prove the local and global stability of the disease-free equilibrium. Some numerical simulations of the basic reproduction number have been carried out using age-dependent influenza parameter values. This study can inform effective interventions and the implementation of age-dependent countermeasures.</p>]]></description>
<pubDate>Mar 2019</pubDate>
</item>
<item>
<title><![CDATA[Exact Traveling Wave Solutions of Nonlinear Evolution Equations: Indeterminant Homogeneous Balance and Linearizability]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7847]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Barbara Abraham-Shrauner&nbsp; &nbsp;</p><p>Exact traveling (solitary) wave solutions of nonlinear partial differential equations (NLPDEs) are analyzed for third-order nonlinear evolution equations. These equations have an indeterminant homogeneous balance and therefore cannot be solved by the Power Index Method (PIM). Some evolution equations are linearizable, and their solutions are transferred from those of a linear PDE. For other evolution equations, transforming to an NLPDE that has a homogeneous balance gives rise to possible solutions by the PIM. The solutions for evolution equations that are not linearizable are developed here.</p>]]></description>
<pubDate>Mar 2019</pubDate>
</item>
<item>
<title><![CDATA[A Basic ANN System for Prediction of Excess Air Coefficient on Coal Burners Equipped with a CCD Camera]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7846]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2019<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;7&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Cem Onat&nbsp; &nbsp;and Mahmut Daskin&nbsp; &nbsp;</p><p>The excess air coefficient (λ) is the most important parameter characterizing combustion efficiency. Conventionally, λ is measured with a flue gas analyzer, which is expensive. Estimating λ from flame images is valuable for combustion control because it reduces the structural dead time of the combustion process. Besides, estimation systems, unlike conventional analyzers, can be used continuously in a closed-loop control system. This paper presents a basic λ prediction system based on a neural network for a small-scale nut coal burner equipped with a CCD camera. The proposed estimation system has two inputs. The first input is the stack gas temperature, simply measured from the flue. To choose the second input, eleven different matrix parameters were evaluated together with flue gas temperature values using matrix-based multiple linear regression analysis. These analyses showed that the trace of the image matrix obtained from the flame image provides higher accuracy than the other matrix parameters. The instantaneous trace value of the image source matrix is then filtered of high-frequency dynamics by means of a low-pass filter. Experimental data for the inputs and λ are synchronously matched by a neural network. The trained network reached R=0.984 in terms of accuracy. This result shows that the proposed estimation system, using the flame image with the assistance of the stack gas temperature, can be preferred in combustion control systems.</p>]]></description>
<pubDate>Mar 2019</pubDate>
</item>
<item>
<title><![CDATA[Probabilities Obtained by Means of Hyperhomographies into a Quadruple Random Quantity]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7627]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Pierpaolo Angelini&nbsp; &nbsp;</p><p>I realized that it is possible to construct an original and well-organized theory of multiple random quantities by accepting the principles of the theory of concordance into the domain of subjective probability. A very important point relevant to such a construction is consequently treated in this paper by showing that a coherent prevision of a bivariate random quantity coincides with the notion of <img src=image/13412591_01.gif>-product of two vectors while a coherent prevision of a quadruple random quantity coincides with the notion of <img src=image/13412591_01.gif>-product of two affine tensors. Metric properties of the notion of <img src=image/13412591_01.gif>-product mathematically characterize both the notion of coherent prevision of a generic bivariate random quantity and the notion of coherent prevision of a generic quadruple random quantity. Coherent previsions of bivariate and quadruple random quantities can be used in order to obtain fundamental metric expressions of bivariate and quadruple random quantities.</p>]]></description>
<pubDate>Dec 2018</pubDate>
</item>
<item>
<title><![CDATA[A Simple Approximation for Normal Distribution Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7626]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Medhat Edous&nbsp; &nbsp;and Omar Eidous&nbsp; &nbsp;</p><p>This paper proposes an approximation to the standard normal distribution function. The introduced approximation formula is very simple and has very acceptable accuracy. Comparing the proposed approximation with other existing approximations shows that it has a simple, easily computable formula and gives good accuracy, with a maximum absolute error of 0.000444.</p>]]></description>
<pubDate>Dec 2018</pubDate>
</item>
<item>
<title><![CDATA[Discrete Logarithm in Galois Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7455]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Samuel Bertrand Liyimbeme Mouchili&nbsp; &nbsp;</p><p>Since Galois rings are the generalization of Galois fields, the question we try to answer is: how can we move from the discrete logarithm in Galois fields to the one in Galois rings? The concept of the discrete logarithm in Galois rings differs slightly from the one in Galois fields. Here, the discrete logarithm of an element is a tuple, which is not the case in Galois fields. Thanks to the multiplicative representation of elements in Galois rings, each element <img src=image/13411505_01.gif> can be uniquely represented in the form <img src=image/13411505_02.gif>, where k is a nonnegative integer and <img src=image/13411505_03.gif> is a generator of the Galois ring (the definition of a generator in a Galois ring is given later on). The tuple <img src=image/13411505_04.gif> is then called the discrete logarithm of <img src=image/13411505_05.gif>. The notion of generators in Galois rings comes from the one in group theory. Knowledge of the generators in the multiplicative groups <img src=image/13411505_06.gif> also allows the determination of the generators in the Galois rings <img src=image/13411505_07.gif>, where p is a prime number and m is a nonnegative integer greater than or equal to two. These new concepts of discrete logarithms and generators in Galois rings help to securely share common information and to perform ElGamal encryption in Galois rings.</p>]]></description>
<pubDate>Sep 2018</pubDate>
</item>
<item>
<title><![CDATA[Hausdorff Measures and Hausdorff Dimensions of the Invariant Sets for Iterated Function Systems of Geometric Fractals]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7454]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Md. Jahurul Islam&nbsp; &nbsp;Md. Shahidul Islam&nbsp; &nbsp;and Md. Shafiqul Islam&nbsp; &nbsp;</p><p>In this paper, we discuss Hausdorff measure and Hausdorff dimension. We also discuss iterated function systems (IFS) of the generalized Cantor sets and higher dimensional fractals such as the square fractal, the Menger sponge and the Sierpinski tetrahedron and show the Hausdorff measures and Hausdorff dimensions of the invariant sets for IFS of these fractals.</p>]]></description>
<pubDate>Sep 2018</pubDate>
</item>
<item>
<title><![CDATA[Generating Minimal Boundary Maps]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6989]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Marian Anton&nbsp; &nbsp;and Landon Renzullo&nbsp; &nbsp;</p><p>The field of computational topology is evolving rapidly, and new algorithms are updated and released at a rapid pace. A good reference for currently available open-source libraries with peer-reviewed publications can be found in [7]. In this paper we examine the descriptive potential of a combinatorial data structure known as a Generating Set in constructing the boundary maps of a simplicial complex. By refining the approach of [1] in generating these maps, we provide algorithms that allow relations among simplices to be easily accounted for. In this way we explicitly generate each face of a complex only once, even if the face is shared among multiple simplices. The result is a useful interface for constructing complexes with many relations and for extending our algorithms to ∆-complexes. Once we efficiently retrieve the representatives of "living" simplices, i.e., those that have not been related away, the construction of the boundary maps scales well with the number of relations and provides a simpler alternative to JavaPlex [8]. We note that the generating data of a complex is equivalent in information to its incidence matrix, and we provide efficient algorithms for converting from an incidence matrix to a Generating Set.</p>]]></description>
<pubDate>Apr 2018</pubDate>
</item>
<item>
<title><![CDATA[On Cyclic Codes of Odd Lengths from the Stable Variety of Regular Cayley Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6988]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Chun P.B&nbsp; &nbsp;Ibrahim A.A&nbsp; &nbsp;and Kamoh N.M&nbsp; &nbsp;</p><p>The use of the adjacency matrix of a graph as a generator matrix for some classes of binary codes has been reported and studied. This paper concerns the utilization, in the area of code generation and analysis, of the stable variety of Cayley regular graphs of odd order studied for efficient interconnection networks. The use of a succession scheme in the construction of a stable variety of the Cayley regular graph has been considered. We enumerate the adjacency matrices of the regular Cayley graphs so constructed, which are of odd order (2m+1) for m≥3, as in [1]. Next, we show that the matrices are cyclic and can be used in the generation of cyclic codes of odd lengths.</p>]]></description>
<pubDate>Apr 2018</pubDate>
</item>
<item>
<title><![CDATA[Exponential Dichotomy and Bifurcation Conditions of Solutions of the Hamiltonian Operators Boundary Value Problems in the Hilbert Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6718]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Pokutnyi Oleksandr&nbsp; &nbsp;</p><p>Sufficient conditions for the existence of solutions of a weakly perturbed linear boundary value problem are obtained in the so-called resonance (critical) case. An iterative process for finding solutions is presented. Necessary and sufficient conditions for the existence of solutions, bounded solutions, generalized solutions and quasi-solutions are obtained.</p>]]></description>
<pubDate>Jan 2018</pubDate>
</item>
<item>
<title><![CDATA[New Gradient Methods for Bandwidth Selection in Bivariate Kernel Density Estimation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6717]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2018<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;6&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Siloko, I. U.&nbsp; &nbsp;Ishiekwene, C. C.&nbsp; &nbsp;and Oyegue, F. O.&nbsp; &nbsp;</p><p>The bivariate kernel density estimator is fundamental in data smoothing methods, especially for data exploration and visualization purposes, due to the ease of graphical interpretation of its results. The crucial factor that determines its performance is the bandwidth. We present new methods for bandwidth selection in bivariate kernel density estimation based on the principle of the gradient method and compare the results with the biased cross-validation method. The results show that the new methods are reliable and provide an improved choice of the smoothing parameter. The asymptotic mean integrated squared error is used as the measure of performance of the new methods.</p>]]></description>
<pubDate>Jan 2018</pubDate>
</item>
<item>
<title><![CDATA[Nonparametric Estimation of Replacement Rates]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6493]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nora Dörmann&nbsp; &nbsp;</p><p>Let X<sub>i</sub>, i ≥ 1, describe the lifetimes of items with finite mean μ = E (X<sub>i</sub>) which are successively placed in service. In order to estimate the replacement rate <sup>1</sup>/<sub>μ</sub> or related quantities, the random variables X<sub>i</sub> are usually assumed to be independent and identically distributed. It is shown that a nonparametric estimation of the replacement rate and other reciprocal functions of renewal theory is possible using a delta method with weakened requirements on the global growth of f, which also allows dependent observations and respects the unboundedness of the analyzed reciprocal functions. Moreover, results on the moments, as well as corresponding simulations, are included.</p>]]></description>
<pubDate>Sep 2017</pubDate>
</item>
<item>
<title><![CDATA[Qualitative Properties of the Solution to Brinkman-Stokes System Modelling a Filtration Process]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6374]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Yulia Koroleva&nbsp; &nbsp;</p><p>The paper studies a Stokes-Brinkman system with varying viscosity that describes fluid flow along an ensemble of partially porous cylindrical particles using the cell approach. Existence and uniqueness of the solution to the system are proved for an arbitrary varying viscosity. Some uniform estimates on the flow velocity are derived. Moreover, an auxiliary weighted Friedrichs inequality is proved for the solution of the considered system. A numerical illustration of the obtained results is given.</p>]]></description>
<pubDate>Sep 2017</pubDate>
</item>
<item>
<title><![CDATA[Phase Space Ray Tracing for a Two-dimensional Parabolic Reflector]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6373]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Sep&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>C. Filosa&nbsp; &nbsp;J. H. M. Thije Boonkkamp&nbsp; &nbsp;and W. L. IJzerman&nbsp; &nbsp;</p><p>Ray tracing is a technique used in geometric optics for calculating the light distribution at the target of an optical system. Monte Carlo (MC) ray tracing is very common in non-imaging optics. We propose a new ray tracing method that employs the phase space of the source and the target of the system. The new method gives a more accurate target distribution than classical MC ray tracing and requires less computational time. It is tested for two-dimensional optical systems. The results for the paraboloid reflector are provided.</p>]]></description>
<pubDate>Sep 2017</pubDate>
</item>
<item>
<title><![CDATA[A Presentation of the Free Lie Algebra M<sub>2,m</sub>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6262]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Gülistan Kaya Gök&nbsp; &nbsp;</p><p>Let M<sub>2,m</sub> be a free metabelian nilpotent Lie algebra of rank 2 and nilpotency class m-1. It is shown that M<sub>2,m</sub> admits a minimal presentation whose set of defining relators consists of certain types of basic commutators of length at most m.</p>]]></description>
<pubDate>Jul 2017</pubDate>
</item>
<item>
<title><![CDATA[Combination of Several Control Charts Based on Dynamic Ensemble Methods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6189]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Dhouha Mejri&nbsp; &nbsp;Mohamed Limam&nbsp; &nbsp;and Claus Weihs&nbsp; &nbsp;</p><p>Combining methods from Statistical Process Control (SPC), in order to benefit from the efficiency of more than one method, has recently attracted attention. One reason is that real-life problems change over time, and a small improvement can lead to a very big profit. Ensemble methods from the data mining domain have recently shown their effectiveness when used with SPC. The first combined control chart based on a dynamic ensemble method, called the dynamic weighted control chart, was designed especially for monitoring concept drift in online processes. This article presents a new model for combining more than two control charts based on ensemble methods, as well as classification error rates, to optimize shift identification and control. The method can be applied to offline and online processes. It is based on a three-step learning model: first, a preprocessing step prepares the data for classification; second, an ensemble method based on Dynamic Weighted Majority (DWM) is applied to aggregate the decisions of the different charts at the end of each batch; finally, shifts are identified based on the misclassification error rates of DWM. The Dynamic Ensemble Control chart model combines knowledge from classification and control to give more precise information about the process. Experiments have shown that it outperforms the use of individual charts and identifies the variable responsible for the out-of-control signal.</p>]]></description>
<pubDate>Jul 2017</pubDate>
</item>
<item>
<title><![CDATA[The Stochastic Model for «Shnoll Effect»]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=6188]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jul&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>V. A. Meshkoff&nbsp; &nbsp;</p><p>The «Shnoll effect» has been observed in histogram studies of a wide variety of processes. This paper examines the effect mainly through the examples of radioactive decay and chemical reactions. S. E. Shnoll supposed that the observed processes are caused by unknown cosmophysical effects. In this article, we suggest not only a qualitative explanation of the effect, but also a mathematical model of it. The model allows one to obtain quantitative estimates and to optimize the process of observation and data handling. To this end, we developed a quantitative method for estimating the «similarity of histograms» that allows the use of standard computer programs. Since the «Shnoll effect» is not currently recognized by the scientific community, we suppose that the use of a mathematical model and adequate methods of data handling allows this problem to be resolved unambiguously.</p>]]></description>
<pubDate>Jul 2017</pubDate>
</item>
<item>
<title><![CDATA[Some New Integral Inequalities for n-Times Differentiable s-Convex and s-Concave Functions in the Second Sense]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5911]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Huriye Kadakal&nbsp; &nbsp;Mahir Kadakal&nbsp; &nbsp;and İmdat İşcan&nbsp; &nbsp;</p><p>In this article, by using an integral identity together with the Hölder and power-mean integral inequalities and Hermite-Hadamard's inequality, we establish several new inequalities for n-times differentiable s-convex and s-concave functions in the second sense.</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[Symptom Proximity in Diagnostic Problem]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5845]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Otakar Kříž&nbsp; &nbsp;</p><p>An algorithm SP (= Symptom Proximity) is suggested for solving the discrete diagnostic problem. It is based on a probabilistic approach to decision-making under uncertainty; however, it does not use knowledge integration from marginal distributions.</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[On Minimal and Maximal Regular Open Sets]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5658]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Hisham Mahdi&nbsp; &nbsp;and Fadwa Nasser&nbsp; &nbsp;</p><p>The purpose of this paper is to investigate the concepts of minimal and maximal regular open sets and their relations with minimal and maximal open sets. We study several properties of these concepts in a semi-regular space. It is mainly shown that if X is a semi-regular space, then m<sub>i</sub>O(X) = m<sub>i</sub>RO(X). We introduce and study a new type of set, called minimal regular generalized closed sets. A special type of topological space, called an rT<sub>min</sub> space, is studied and some of its basic properties are obtained.</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[On a Cumulative Distribution Function Related to the Bernoulli Process]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5657]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Peter Kopanov&nbsp; &nbsp;and Miroslav Marinov&nbsp; &nbsp;</p><p>We examine the properties of a cumulative distribution function related to the Bernoulli process. Results appearing in the paper <sup>[1]</sup> are reviewed and new ones are included. Most of them concern the behaviour of the probability density function (the derivative) of the given distribution.</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[Strong and Δ-Convergence Results for Generalized Multi-valued Non-expansive Maps in CAT (0) Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5656]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Fiaz Hussain&nbsp; &nbsp;and Saima Zainab&nbsp; &nbsp;</p><p>In this paper, we establish strong convergence and Δ-convergence theorems for the class of generalized non-expansive multi-valued maps in a CAT(0) space. Our work extends and improves some recent results announced in the current literature.</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[Use of Doehlert Designs for Second-order Polynomial Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5655]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>L. Rob Verdooren&nbsp; &nbsp;</p><p>The most popular designs for fitting the second-order polynomial model are the central composite designs of Box and Wilson [2] and the designs of Box and Behnken [1]. For k = 2, 4, 6 and 8 factors, the uniform shell designs of Doehlert [4] require fewer experimental runs than the central composite or Box-Behnken designs. In analytical chemistry the Doehlert designs are widely used. The uniform shell designs are based on a regular simplex, i.e., the geometric figure formed by k + 1 equally spaced points in a k-dimensional space; an equilateral triangle is a two-dimensional regular simplex. The shell designs are used for fitting a response surface to k independent factors over a spherical region. Doehlert (1930 - 1999) proposed in 1970 the design for k = 2 factors: starting from an equilateral triangle with sides of length 1, he constructed a regular hexagon with a centre point at (0, 0). The n = 7 experimental points are (1, 0), (0.5, 0.866), (0, 0), (-0.5, 0.866), (-1, 0), (-0.5, -0.866) and (0.5, -0.866). The 6 outer points lie on a circle with radius 1 and centre (0, 0). This Doehlert design has an equally spaced distribution of points over the experimental region, a so-called uniform space filler, where the distances between neighbouring experiments are equal. Response surface designs are usually applied by scaling the coded factor ranges to the ranges of the experimental factors. The first factor covers the interval [-1, +1], the second factor the interval [-0.866, +0.866]. The Doehlert design for four factors needs only 21 runs. Doehlert and Klee [5] show how to rotate the uniform shell designs to minimize the number of levels of the factors. Most of the rotated uniform shell designs have no more than five levels of any factor; the central composite design has five levels of every factor. The D-optimality determinant criterion of the variance matrix of Doehlert designs is compared with that of central composite designs and Box-Behnken designs; see Rasch et al. [6].</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[Apprehensions in the Graphic Register of Two-variable Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5654]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Katia Vigo Ingar&nbsp; &nbsp;and Maria José Ferreira da Silva&nbsp; &nbsp;</p><p>The objective of this article is to present part of a doctoral thesis that extends Duval's study of apprehensions in the graphic register of a two-variable function. It is highly relevant to the teaching and learning of differential calculus of two variables, since the information that the graph of this type of function may provide is important for building knowledge of two-variable functions and for its applications. For graphic representation and knowledge building we rely on the CAS Mathematica, given that its dynamism allows performing operations in the graphic register. Because of this, we ask ourselves: how do apprehensions take place in the CAS graphic register of two-variable functions? Our research is qualitative and exploratory, since the proposed object of study has been little studied. We believe the interaction of apprehensions in the CAS graphic register allows students to conjecture properties of two-variable functions when, for instance, a student applies those notions to optimization problems.</p>]]></description>
<pubDate>Mar 2017</pubDate>
</item>
<item>
<title><![CDATA[SETAR (Self-exciting Threshold Autoregressive) Non-linear Currency Modelling in EUR/USD, EUR/TRY and USD/TRY Parities]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5524]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Emrah Hanifi Firat&nbsp; &nbsp;</p><p>In economies that are open to foreign markets, the numerical value of the currency as a macroeconomic variable is of great importance, especially where mutual dependency among economies is concerned. In terms of political economy, the targeted level of the currency has vital importance, especially in economies characterized by export-driven growth and in economies that struggle not to disrupt their macroeconomic design. Considering that each time series has a structure that is sensitive to its own internal dynamics (sometimes expressed as the time series components), these dynamics provide coordinates for estimation and may largely eliminate the compulsory dependency on external variables. This is exactly what has been done in this study. First, non-linear time series analysis is examined in terms of linearity tests, and the linearity tests are applied to all parities and for different time periods. Then SETAR modelling, which gives the study its title, is applied in order to explain the non-linear pattern in detail. The SETAR modelling process and the associated statistical analyses have been applied to the relevant parities for separate time periods. The SETAR model, which belongs to the TAR family of models, shows better performance than many other linear and non-linear models. A secondary purpose of this study is to show that the SETAR model's performance is superior to that of the other models on the observed values of the parities.</p>]]></description>
<pubDate>Jan 2017</pubDate>
</item>
<item>
<title><![CDATA[Social Development of Iraqi Governorates in Comparison]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5523]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Iftikhar I. M. Naqash&nbsp; &nbsp;</p><p>Inequitable distribution of investment funds among governorates and the autonomy of the Kurdistan Region, with its own investment policy, are often mentioned as the main causes of the huge regional differences in social development in Iraq. In this paper, the differences in social development among 18 Iraqi governorates are analyzed using two different methods: first, 12 indicators of education, health and economic level are given equal weights, and the distance between every governorate and the governorate with the maximum standardized score for each individual indicator is combined into a Composite Regional Social Development Index (CRSDI<sub>equal</sub>); second, unequal weights are given to each indicator depending on the indicator's loading on the first principal component, to identify the weight of that indicator in a Composite Regional Social Development Index (CRSDI<sub>unequal</sub>). Both methods result in the same ranking of the 18 governorates with respect to their social development level.</p>]]></description>
<pubDate>Jan 2017</pubDate>
</item>
<item>
<title><![CDATA[A Remark on Exponential Dynamical Localization in a Long-range Potential]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5329]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Victor Chulaevsky&nbsp; &nbsp;</p><p>Exponential decay of eigenfunctions and of their correlators is shown to occur in two Anderson models on the lattice of arbitrary dimension, with summable decay of infinite-range correlations of the random potential. For the proof, we check the applicability of the Fractional Moment Method.</p>]]></description>
<pubDate>Jan 2017</pubDate>
</item>
<item>
<title><![CDATA[Bilinear Multipliers of Weighted Lorentz Spaces and Variable Exponent Lorentz Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5328]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Öznur Kulak&nbsp; &nbsp;and A. Turan Gürkanlı&nbsp; &nbsp;</p><p>Let ω<sub>1</sub>, ω<sub>2</sub> be slowly increasing functions and let ω<sub>3</sub> be a weight function on ℝ<sup>n</sup>. In section 2 we define a bilinear multiplier from L(p<sub>1</sub>, q<sub>1</sub>, ω<sub>1</sub>dμ) (ℝ<sup>n</sup>) × L(p<sub>2</sub>, q<sub>2</sub>, ω<sub>2</sub>dμ) (ℝ<sup>n</sup>) to L(p<sub>3</sub>, q<sub>3</sub>, ω<sub>3</sub>dμ) (ℝ<sup>n</sup>) by a bounded operator B<sub>m</sub>, <img src=image/13408018_01.gif> where 1≤ p<sub>1</sub>, p<sub>2</sub>, p<sub>3</sub>, q<sub>1</sub>, q<sub>2</sub>, q<sub>3</sub> < ∞ and m (ξ,η) is a bounded, measurable function on ℝ<sup>n</sup> × ℝ<sup>n</sup>. We denote the space of bilinear multipliers of this type by BM (L(p<sub>1</sub>, q<sub>1</sub>, ω<sub>1</sub>dμ) × L(p<sub>2</sub>, q<sub>2</sub>, ω<sub>2</sub>dμ), L(p<sub>3</sub>, q<sub>3</sub>, ω<sub>3</sub>dμ)), and study the basic properties of this space. We give methods for constructing examples of bilinear multipliers. Similarly, in section 3, using variable exponent Lorentz spaces, we define bilinear multipliers from L( p<sub>1</sub> (x), q<sub>1</sub> (x)) × L( p<sub>2</sub> (x), q<sub>2</sub> (x)) to L( p<sub>3</sub> (x), q<sub>3</sub> (x)) and discuss the basic properties of the space of bilinear multipliers BM (L( p<sub>1</sub> (x), q<sub>1</sub> (x)) × L( p<sub>2</sub> (x), q<sub>2</sub> (x)), L( p<sub>3</sub> (x), q<sub>3</sub> (x))).</p>]]></description>
<pubDate>Jan 2017</pubDate>
</item>
<item>
<title><![CDATA[A Weighted Exponential Model for Grouped Line Transect Data]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5327]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2017<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;5&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Fahid Al Eibood&nbsp; &nbsp;and Omar Eidous&nbsp; &nbsp;</p><p>This paper considers a parametric model for grouped data collected via the line transect technique. The weighted exponential model is studied and investigated when the data are assumed to be grouped into intervals. The maximum likelihood method is adopted for the purpose of estimation. The resulting estimator of the population abundance is compared with the corresponding estimator developed for ungrouped data, using the Laake stakes real data.</p>]]></description>
<pubDate>Jan 2017</pubDate>
</item>
<item>
<title><![CDATA[A Predator-prey Model with Predator Population Saturation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5229]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Quay van der Hoff&nbsp; &nbsp;and Temple H. Fay&nbsp; &nbsp;</p><p>In this article, a new predator-prey model having predator saturation is proposed. The model resembles a classical Rosenzweig-MacArthur type model, but comes with an added function, the population saturation function of the predator. This function of the predator population is a factor in the predator fertility term in the model. Consequently the model behaves better than the Rosenzweig-MacArthur model, since all solutions are bounded within the population quadrant. An invariant region arises where the Poincaré-Bendixson theorem can be applied. In most cases there is but a single critical point, either an attracting spiral point, suggesting a stable population pair, or an unstable node, resulting in a unique limit cycle. This model is fully described and an analysis of the stability of the critical points is provided. The robustness of the model is demonstrated based on the classification of Gunawardena [8].</p>]]></description>
<pubDate>Nov 2016</pubDate>
</item>
<item>
<title><![CDATA[Statistics Model for Meteorological Forecasting Using Fuzzy Logic Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5228]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Nov&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Nitaya Jantakoon&nbsp; &nbsp;</p><p>The key atmospheric variables that impact crops are weather and rainfall. Extreme rainfall or drought at critical periods of a crop's development can have dramatic influences on productivity and yields. Analysis of the effect of rainfall is needed to evaluate crop production in Northeastern Thailand. Two operations were performed in the fuzzy logic model: the fuzzification operation and the defuzzification operation. The model's predicted outputs were compared with the actual rainfall data. Simulation results reveal that the predicted results are in good agreement with the measured data. The prediction error and root mean square error (RMSE) were calculated, and on the basis of the results obtained it can be suggested that the fuzzy methodology is efficiently capable of handling scattered data.</p>]]></description>
<pubDate>Nov 2016</pubDate>
</item>
<item>
<title><![CDATA[Network Analysis Methods to Measure Sociometric Status]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=5083]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Simona Gozzo&nbsp; &nbsp;and Venera Tomaselli&nbsp; &nbsp;</p><p>This paper proposes an innovative methodological approach to measuring sociometric status in small groups of pupils. Although sociometric status is usually measured with indirect data collected by interview, in this study it is analysed by direct observation. This method is specifically suitable when the target population is pre-school children, whose cognitive competence is not as well developed as their relational abilities. Hence, the indicators constructed are more reliable than measures derived from the subjective perception of interviewed pupils. Network Analysis methods allow for the definition of sociometric status by means of regular equivalence. Employing lambda sets and cliques, we then identify further roles within distinct small groups. The results show that sociometric status can be revealed by regular equivalence. Moreover, the Network Analysis approach allows for the observation of further relational skills, not strictly associated with traditional social roles, detectable only through lambda sets and cliques.</p>]]></description>
<pubDate>Aug 2016</pubDate>
</item>
<item>
<title><![CDATA[Uncoupling Multidimensional Contingency Tables]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=4026]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Helmut Vorkauf&nbsp; &nbsp;</p><p>A parsimonious and robust new method, based on information theory, to analyze multidimensional contingency tables is presented. It swiftly reveals the important relations between dependent and independent variables and casually detects confounding effects in a straightforward manner. The method in its simplicity could replace logistic regression and log-linear analysis that, in dealing with their limitations and defects, have grown complicated and convoluted.</p>]]></description>
<pubDate>Aug 2016</pubDate>
</item>
<item>
<title><![CDATA[A Presentation of the Free Lie Algebra M<sub>2,m,3</sub>]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=4025]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Gülistan Kaya Gök&nbsp; &nbsp;</p><p>Let M<sub>2,m,3</sub> be a free solvable nilpotent Lie algebra of rank 2 and nilpotency class m - 1. We show that M<sub>2,m,3</sub> admits a minimal presentation whose set of defining relators consists of certain types of basic commutators, using techniques from Gröbner-Shirshov basis theory.</p>]]></description>
<pubDate>Aug 2016</pubDate>
</item>
<item>
<title><![CDATA[On the Development of an Exponentiated F Test for One-way ANOVA in the Presence of Outlier(s)]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3858]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Adepoju K.A&nbsp; &nbsp;Shittu O.I&nbsp; &nbsp;and Chukwu A.U&nbsp; &nbsp;</p><p>The classical Fisher-Snedecor test, which compares several population means, depends on underlying assumptions that include independence of populations, constant variance and absence of outliers, among others. Arguably, a common source of violation of these assumptions is an outlier, which leads to unequal variances. Outliers lead to inequality in the variances of the populations, which consequently causes the classical F test to reach incorrect decisions about the null hypothesis. A series of robust tests have been proposed to ameliorate these lapses, with some degree of inaccuracy and limitations in terms of inflating the type I error and the power for different combinations of parameters at various sample sizes, while still using the conventional F-table. This study focuses on developing a robust F test, called the exponentiated F test, obtained by introducing one shape parameter into the conventional F-distribution, capable of making ANOVA decisions that are robust to the existence of outliers. The performance of the robust F test was compared with the existing F tests in the literature using the power of the test. Real-life and simulated data were used to illustrate the applicability and efficiency of the proposed distribution over the existing ones. Experimental data with balanced and unbalanced designs and population sizes k = 3 and k = 5 were simulated with 10000 replications, and varying degrees of outliers were injected randomly. The results obtained indicate that the proposed exponentiated F test is uniformly more powerful than the conventional F tests for analysis of variance in the presence of outliers, and it is therefore recommended for use by researchers.</p>]]></description>
<pubDate>Apr 2016</pubDate>
</item>
<item>
<title><![CDATA[An Inventory Model for Perishable Items Having Constant Demand with Time Dependent Holding Cost]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3556]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Sarbjit Singh&nbsp; &nbsp;</p><p>This paper presents an inventory model for perishable items with constant demand, for which the holding cost increases with time; the items considered are deteriorating items with a constant rate of deterioration θ. In the majority of earlier studies the holding cost has been considered constant, which is not true in most practical situations, as insurance costs, record-keeping costs and even the cost of keeping items in cold storage increase with time. In this paper a time-dependent linear holding cost is considered, so that the holding cost of the items increases with time. An approximate optimal solution is obtained, and the results are illustrated with numerical examples.</p>]]></description>
<pubDate>Apr 2016</pubDate>
</item>
<item>
<title><![CDATA[On Stable Plane Vortex Flows of an Ideal Fluid]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3555]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>O.V. Troshkin&nbsp; &nbsp;</p><p>2D flows of an ideal incompressible fluid are treated in a rectangle. If analytical (i.e., expandable in power series of the coordinates), the stationary flows are uniquely determined by the inflow vorticity. When vortices of spectral origin are excluded, such flows prove to be stable.</p>]]></description>
<pubDate>Apr 2016</pubDate>
</item>
<item>
<title><![CDATA[The Constructive Implicit Function Theorem and Proof in Logistic Mixtures]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3436]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Xiao Liu&nbsp; &nbsp;</p><p>Bridges et al. (1999) identified the key features of a constructive proof of the implicit function theorem, including some applications to physics and mechanics. For mixtures of logistic distributions such information is lacking, although a special instance of the implicit function theorem prevails there. The theorem is needed to see that the ridgeline function, which carries information about the topography and critical points of a general logistic mixture problem, is well-defined [2]. In this paper, we express the implicit function theorem and related constructive techniques in their multivariate extension and propose analogs of Bridges and colleagues' results for the multivariate logistic mixture setting. In particular, techniques such as the inverse of Lagrange's mean value theorem [4] allow one to prove that the key concept of a logistic ridgeline function is well-defined in proper vicinities of its arguments.</p>]]></description>
<pubDate>Feb 2016</pubDate>
</item>
<item>
<title><![CDATA[Asymptotic Solving Essentially Nonlinear Problems]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3435]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Alexander D. Bruno&nbsp; &nbsp;</p><p>We present a method for computing asymptotic expansions of solutions to algebraic and differential equations, together with a survey of some of its applications. The method is based on the ideas and algorithms of Power Geometry. Power Geometry has applications in Algebraic Geometry, Differential Algebra, Nonstandard Analysis, Microlocal Analysis, Group Analysis, Tropical/Idempotent Mathematics and so on. We also discuss the connection of Power Geometry with Idempotent Mathematics.</p>]]></description>
<pubDate>Feb 2016</pubDate>
</item>
<item>
<title><![CDATA[Estimating Change Point in Single Server Queues]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3434]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Sibanee Sahu&nbsp; &nbsp;and Sarat Kumar Acharya&nbsp; &nbsp;</p><p>The paper is concerned with the change point problem in the inter-arrival times and service times of single server queues. Maximum likelihood estimators of the parameters are derived. A test statistic has been developed and its properties studied.</p>]]></description>
<pubDate>Feb 2016</pubDate>
</item>
<item>
<title><![CDATA[One-term Approximation for Normal Distribution Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3433]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Omar Eidous&nbsp; &nbsp;and Samar Al-Salman&nbsp; &nbsp;</p><p>This paper presents a one-term approximation to the cumulative normal distribution function. The absolute maximum error of the proposed approximation is 0.0018, which is smaller than the 0.003 error of Polya's approximation. Comparisons between the proposed approximation and the other one-term approximations stated in the literature are given.</p>]]></description>
<pubDate>Feb 2016</pubDate>
</item>
<item>
<title><![CDATA[On the Kumaraswamy Fisher Snedecor Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3432]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2016<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;4&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Adepoju, K.A&nbsp; &nbsp;Chukwu, A.U&nbsp; &nbsp;and Shittu, O.I&nbsp; &nbsp;</p><p>We propose the Kumaraswamy-F (KUMAF) distribution, a generalization of the conventional Fisher Snedecor (F) distribution. The new distribution can be used even when one or more of the regular assumptions are violated. It is obtained by adding two shape parameters to the continuous F-distribution, which is commonly used to test the null hypothesis in the Analysis of Variance (ANOVA test). The statistical properties of the proposed distribution, such as moments, the moment generating function, and the asymptotic behavior, among others, were investigated. The method of maximum likelihood is used to estimate the model parameters and the observed information matrix is derived. The distribution is found to be more flexible and robust to the regular assumptions of the conventional F-distribution. In future research, the flexibility of this distribution as well as its robustness will be examined using a real data set. The new distribution is recommended for use in most applications where the assumptions underlying the use of the conventional F-distribution for one-way analysis of variance are violated, such as homogeneity of variance or the normality assumption, possibly as a result of the presence of outlier(s). It is instructive to note that the new distribution preserves the originality of the data without transformation.</p>]]></description>
<pubDate>Feb 2016</pubDate>
</item>
<item>
<title><![CDATA[Hopf Cyclic Cohomology in Non-symmetric Monoidal Categories]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3220]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Arash Pourkia&nbsp; &nbsp;</p><p>First, referring to our previous work, 'Hopf cyclic cohomology in braided monoidal categories', we relax the restriction that the ambient category C be symmetric. We let C be non-symmetric but assume only the restriction ψ<sup>2</sup> = id on the braid map corresponding to the Hopf algebra H, which is the main player in the theory. We define a family of examples of such desired braided Hopf algebras, H, living in the category of anyonic vector spaces. Next, on one hand, we prove that these anyonic Hopf algebras are the enveloping (Hopf) algebras of particular quantum Lie algebras, which we construct. On the other hand, we show that, analogous to the non-super and the super case, the well-known relationship between the periodic Hopf cyclic cohomology of an enveloping (super) algebra and the (super) Lie algebra homology also holds for these particular quantum Lie algebras.</p>]]></description>
<pubDate>Dec 2015</pubDate>
</item>
<item>
<title><![CDATA[Minkowski Sum of a Voronoi Parallelotope and a Segment]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3219]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Robert Erdahl&nbsp; &nbsp;and Viacheslav Grishukhin&nbsp; &nbsp;</p><p>By a Voronoi parallelotope P(a) we mean a parallelotope determined by inequalities linear in the normal vectors p, with a non-negative quadratic form a(p) as the right hand side. For a positive form a, it was studied by Voronoi in his famous memoir. For a set of vectors P, we call its dual the set of vectors P<sup>*</sup> such that <img src=image/13405157_001.gif> ∈ {0;±1} for all p ∈ P and q ∈ P<sup>*</sup>. We prove that the Minkowski sum of an irreducible Voronoi parallelotope P(a) and a segment z(u) is a Voronoi parallelotope if and only if u = we, where w > 0 and e is a vector of the dual of the set of normal vectors of all facets of P(a). Then the segment z(u) is described by the same set of inequalities with wa<sub>e</sub>(p)=w<img src=image/13405157_002.gif> as the right hand side, and P(a) + z(u) = P(a + wa<sub>e</sub>). A similar assertion is true for the Minkowski sum of a reducible Voronoi parallelotope with a segment.</p>]]></description>
<pubDate>Dec 2015</pubDate>
</item>
<item>
<title><![CDATA[Characterization of Power Function Distribution through Expectation of Function of Order Statistics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3218]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Bhatt Milind B.&nbsp; &nbsp;</p><p>Various approaches are available in the literature for the characterization of the power function distribution: independence of suitable functions of order statistics, linear relations of conditional expectation, recurrence relations between expectations of functions of order statistics, distributional properties of the exponential distribution, record values, lower record statistics, products of order statistics, the Lorenz curve, etc. In this research note a different, path-breaking approach to the characterization of the power function distribution through the expectation of a function of order statistics is given; it provides a method to characterize the power function distribution which requires only an arbitrary non-constant function.</p>]]></description>
<pubDate>Dec 2015</pubDate>
</item>
<item>
<title><![CDATA[Why Mathematics Is Universal Multidisciplinary Science]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3217]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Sergey Krylov&nbsp; &nbsp;</p><p>The paper presents meta-mathematical prerequisites for basic concepts of the rigorous science called mathematics. These concepts explore a very simple idea: the hypothesis that all surrounding physical processes are basically algorithmic processes, understandable ones as well as partially or fully incomprehensible ones. Mathematics is very successful in studying, formally describing, and utilizing such processes, because mathematics is based on similar algorithmic ideas, methods, and structures. These facts allow us to formulate more precisely useful mathematical (meta-scientific) concepts concerning some well-known scientific problems in various rigorous theories, including the theory of "object calculus", the theory of automatic cognition, the theory of biological evolution, the theory of heterogeneous electronic systems, the theory of logics in various chemical transformations, the basic architecture of completely programmable universal (multi-purpose) synthesizers-analyzers for various objects, and so on.</p>]]></description>
<pubDate>Dec 2015</pubDate>
</item>
<item>
<title><![CDATA[New Integral Inequalities via Harmonically Convex Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3089]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Imdat Iscan&nbsp; &nbsp;Mustafa Aydin&nbsp; &nbsp;and Sema Dikmenoglu&nbsp; &nbsp;</p><p>In this paper, we establish some estimates, involving the Euler Beta function and the Hypergeometric function, of the integral <img src=image/13404990_01.gif> for the class of functions whose certain powers of the absolute value are harmonically convex.</p>]]></description>
<pubDate>Oct 2015</pubDate>
</item>
<item>
<title><![CDATA[On Non-increasing of the Density and the Weak Density under Weakly Normal Functors of Finite Support]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3088]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Beshimov R.B.&nbsp; &nbsp;and Mamadaliev N.K.&nbsp; &nbsp;</p><p>In the paper it is proved that if a covariant functor F : Comp → Comp is weakly normal, then for any infinite Tychonoff space X the following inequalities hold: d(<img src=image/13404875_01.gif> (X)) ≤ d(X), d(<img src=image/13404875_02.gif> (X)) ≤ d(X), wd(<img src=image/13404875_03.gif> (X)) ≤ wd(X), wd(<img src=image/13404875_04.gif> (X)) ≤ wd(X).</p>]]></description>
<pubDate>Oct 2015</pubDate>
</item>
<item>
<title><![CDATA[Modelling Summer Daily Peak Loads in South Africa Using Discrete Time Markov Chain]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3087]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Molete Mokhele&nbsp; &nbsp;and Caston Sigauke&nbsp; &nbsp;</p><p>Electricity demand exhibits a large degree of randomness in South Africa, particularly in summer. Its description requires a detailed analysis using statistical methodologies, in particular stochastic processes. The paper presents a Markov chain analysis of peak electricity demand. The data used are from South Africa's power utility company Eskom, for the period 2000 to 2011. This modelling approach is important to decision makers in the electricity sector, particularly in scheduling maintenance and refurbishments of power plants. The randomness effect is attributable to meteorological factors and major electricity appliance usage. Aggregated data on daily electricity peak demand are used to develop the transition probability matrices, steady-state probabilities, mean return times and first passage times. Such analysis is important to Eskom and other energy companies in planning load-shifting, load analysis and scheduling of electricity, particularly during peak periods in summer.</p>]]></description>
<pubDate>Oct 2015</pubDate>
</item>
<item>
<title><![CDATA[The Problem of Integral Geometry of Volterra Type with a Weight Function of a Special Type]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=3086]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Akram.H. Begmatov&nbsp; &nbsp;M.E. Muminov&nbsp; &nbsp;and Z.H. Ochilov&nbsp; &nbsp;</p><p>We study a new problem of reconstructing a function in a strip from its integrals with a known weight function along polygonal lines. We obtain two simple inversion formulas for the solution to the problem. We prove uniqueness and existence theorems for solutions and obtain stability estimates of a solution to the problem in Sobolev spaces, thus showing its weak ill-posedness. Then we consider integral geometry problems with perturbation. The uniqueness theorems are proved and stability estimates of solutions in Sobolev spaces are obtained.</p>]]></description>
<pubDate>Oct 2015</pubDate>
</item>
<item>
<title><![CDATA[On Conditional Distribution of the Sample Mean for Densities with Singular Logarithmic Derivative]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2970]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Victor Chulaevsky&nbsp; &nbsp;</p><p>We study the regularity of the conditional distribution of the empiric mean of a finite sample of IID random variables with a bounded common probability density, conditional on the sample "fluctuations", and extend a prior result, proved for strictly positive smooth densities, to a larger class of smooth densities vanishing at one or more points of their support.</p>]]></description>
<pubDate>Aug 2015</pubDate>
</item>
<item>
<title><![CDATA[Some Properties of Topological Spaces Related to the Local Density and the Local Weak Density]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2969]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Beshimov R. B.&nbsp; &nbsp;Mamadaliev N. K.&nbsp; &nbsp;and Mukhamadiev F. G.&nbsp; &nbsp;</p><p>In the paper the local density and the local weak density of topological spaces are investigated. It is proved that for stratifiable spaces the local density and the local weak density coincide, that these cardinal numbers are preserved under open mappings, and that they are inverse invariants of the class of closed irreducible mappings. Moreover, it is shown that the functor of probability measures of finite support preserves the local density of compacts.</p>]]></description>
<pubDate>Aug 2015</pubDate>
</item>
<item>
<title><![CDATA[Grey-correlation Multi-attribute Decision-making Method Based on Intuitionistic Trapezoidal Fuzzy Numbers]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2897]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Ye-zhi Xiao&nbsp; &nbsp;and Sha Fu&nbsp; &nbsp;</p><p>This study proposes a grey-correlation multi-attribute decision-making method based on intuitionistic trapezoidal fuzzy numbers to solve problems in which the attribute weights depend on the various statuses and the attribute values are given in the form of intuitionistic trapezoidal fuzzy numbers. Firstly, this paper gives the definitions of intuitionistic trapezoidal fuzzy numbers and the distance formula. Then, the grey-correlation coefficient for intuitionistic trapezoidal fuzzy numbers is obtained through grey-correlation analysis. The correlation degree between different options is calculated based on the correlation coefficient. With that, the options are ranked by these values to identify the optimal option. Finally, the analysis of examples demonstrates the feasibility and effectiveness of the proposed method.</p>]]></description>
<pubDate>Aug 2015</pubDate>
</item>
<item>
<title><![CDATA[Method for Multi-attribute Decision Making with Triangular Fuzzy Number Based on Multi-period State]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2834]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Sha Fu&nbsp; &nbsp;</p><p>This paper takes the time weights and attribute weights in different periods into consideration and proposes a dynamic triangular fuzzy number multi-attribute decision-making method to solve the problem of multi-attribute decision making with triangular fuzzy numbers as the attribute values. This method utilizes the characteristics of the triangular fuzzy number to establish a correlation model between the evaluation scheme and the positive and negative ideal schemes, and obtains a comprehensive ranking of the evaluation schemes, thus acquiring the decision-making result. Finally, this paper demonstrates the feasibility and validity of the proposed method through instance analysis.</p>]]></description>
<pubDate>Aug 2015</pubDate>
</item>
<item>
<title><![CDATA[The Geometry of the Universe]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2833]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Wun-Yi Shu(許文郁)&nbsp; &nbsp;</p><p>In the late 1990s, observations of type Ia supernovae led to the astounding discovery that the universe is expanding at an accelerating rate. The explanation of this anomalous acceleration has been one of the great problems in physics since that discovery. We propose cosmological models that can simply and elegantly explain the cosmic acceleration via the geometric structure of the spacetime continuum, without introducing a cosmological constant into the standard Einstein field equation, negating the necessity for the existence of dark energy. In this geometry, the three fundamental physical dimensions (length, time, and mass) are related in a new kind of relativity. There are four conspicuous features of these models: 1) the speed of light and the gravitational "constant" are not constant, but vary with the evolution of the universe, 2) time has no beginning and no end; i.e., there is neither a big bang nor a big crunch singularity, 3) the spatial section of the universe is a 3-sphere, and 4) in the process of evolution, the universe experiences phases of both acceleration and deceleration. One of these models is selected and tested against current cosmological observations, and is found to fit the redshift-luminosity distance data quite well.</p>]]></description>
<pubDate>Aug 2015</pubDate>
</item>
<item>
<title><![CDATA[Accurate Asymptotic Formulas for Eigenvalues of a Boundary-value Problem of Fourth Order]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2734]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Karwan H. F. Jwamer&nbsp; &nbsp;and Hawsar Ali HR&nbsp; &nbsp;</p><p>This paper deals with the behavior of the solution and the asymptotic behavior of eigenvalues of a fourth order boundary value problem, defined as follows: <img src=image/13404190_01.gif> (1) with boundary conditions: <img src=image/13404190_02.gif> where <img src=image/13404190_03.gif> and <img src=image/13404190_04.gif> are real valued functions, ρ(χ)=1, and λ is a spectral parameter with <img src=image/13404190_05.gif> . Here it has been assumed that <img src=image/13404190_06.gif> and <img src=image/13404190_07.gif>.</p>]]></description>
<pubDate>Jun 2015</pubDate>
</item>
<item>
<title><![CDATA[Bayesian Multiperiod Forecasting for Arma Model under Jeffrey's Prior]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2733]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Zul Amry&nbsp; &nbsp;and Adam Baharum&nbsp; &nbsp;</p><p>The main purpose of this study is to find the Bayesian forecast of the ARMA model under Jeffrey's prior assumption with a quadratic loss function. The point forecast model is obtained in mathematical expression based on the mean of the marginal conditional posterior predictive. Furthermore, the point forecast model of Bayesian forecasting is compared to traditional forecasting. The simulation shows that the forecast accuracy of Bayesian forecasting is better than that of traditional forecasting and the descriptive statistics of Bayesian forecasting are closer to the true values than those of traditional forecasting.</p>]]></description>
<pubDate>Jun 2015</pubDate>
</item>
<item>
<title><![CDATA[Ossa's Theorem via the Kunneth Formula]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2709]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Robert Bruner&nbsp; &nbsp;Khairia Mira&nbsp; &nbsp;Laura Stanley&nbsp; &nbsp;and Victor Snaith&nbsp; &nbsp;</p><p>Let p be a prime. We calculate the connective unitary K-theory of the smash product of two copies of the classifying space for the cyclic group of order p, using a Kunneth formula short exact sequence. As a corollary, using the Bott exact sequence and the mod 2 Hurewicz homomorphism we calculate the connective orthogonal K-theory of the smash product of two copies of the classifying space for the cyclic group of order two.</p>]]></description>
<pubDate>Jun 2015</pubDate>
</item>
<item>
<title><![CDATA[Vague Implicative LI – Ideals of Lattice Implication Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2708]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>V. Amarendra Babu&nbsp; &nbsp;and T. Anitha&nbsp; &nbsp;</p><p>We introduce the concept of vague implicative LI – ideals of lattice implication algebras and discuss some of their properties. We study the relationship between v-implicative filters, vague ILI - ideals and ILI – ideals. An extension property of vague implicative LI – ideals is established.</p>]]></description>
<pubDate>Jun 2015</pubDate>
</item>
<item>
<title><![CDATA[Optimized Regularity Estimates of Conditional Distribution of the Sample Mean]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2603]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Victor Chulaevsky&nbsp; &nbsp;</p><p>We prove an optimized estimate for the regularity of the conditional distribution of the empiric mean of a finite sample of IID random variables, conditional on the sample "fluctuations". Prior results, based on bounds in probability, provided a Hölder-type regularity of the conditional distribution. We establish a Lipschitz regularity, using bounds in expectation. The new estimate, extending a well-known property of Gaussian IID samples, is a crucial ingredient of the Multi-Scale Analysis of multi-particle Anderson-type random Hamiltonians in a Euclidean space. In particular, the Hölder regularity of the multi-particle eigenvalue distribution, sufficient for the localization analysis of N-particle lattice Hamiltonians, with N ≥ 3, needs to be replaced by Lipschitz regularity for similar Hamiltonians in the Euclidean space.</p>]]></description>
<pubDate>Apr 2015</pubDate>
</item>
<item>
<title><![CDATA[Kalman Filter: A Simple Derivation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2479]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Ali M. Mosammam&nbsp;&nbsp;</p><p>The Kalman filter is a recursive estimator and plays a fundamental role in statistics for filtering, prediction and smoothing. The key element in any recursive estimator is the estimate of the current state, x<sub>k</sub>, at time k, based on observations up to and including observation k, and the Kalman filter enables the estimate of the state to be updated as new observations become available. In this paper we have tried to derive the Kalman filter as simply as possible.</p>]]></description>
<pubDate>Apr 2015</pubDate>
</item>
<item>
<title><![CDATA[Detection of Structural Changes in Correctly Specified and Misspecified Conditional Quantile Polynomial Distributed Lag (QPDL) Model Using Change-point Analysis]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2478]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Kwadwo Agyei Nyantakyi&nbsp; &nbsp;B. L. Peiris&nbsp; &nbsp;and L. H. P. Gunaratne&nbsp; &nbsp;</p><p>Change-point analysis is a powerful tool for determining whether a change has taken place or not. In this paper we study the structural changes in the Conditional Quantile Polynomial Distributed Lag (QPDL) model using change-point analysis. We employ both the Binary Segmentation (BinSeg) and Cumulative Sum (Cusum) methods for this analysis. We studied the structural changes in both correctly specified and misspecified QPDL models. As an economic application we considered the production of rubber and its price returns. We observe that both the Cusum and BinSeg methods correctly detected the structural changes for both the correctly specified and the misspecified QPDL models. The Cusum method gave the exact positions where the structural changes occurred and BinSeg gave the approximate positions where the changes occurred. Both methods were able to detect the shift in time for both the mean and variance of the misspecified QPDL model; hence both methods were suitable for predicting structural stability in QPDL models. The impact of this is that changes made to data, knowingly or unknowingly, can be detected, as well as when these changes were effected. We further observed that both methods are powerful tools that better characterize the changes, control the overall error rate, are robust to outliers, and are flexible and simple to use.</p>]]></description>
<pubDate>Apr 2015</pubDate>
</item>
<item>
<title><![CDATA[Numerical Solution of Two Dimensional Stagnation Flows of Newtonian Fluid towards a Shrinking Sheet]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2477]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Mohammad Shafique&nbsp; &nbsp;Fatima Abbas&nbsp; &nbsp;and Atif Nazir&nbsp; &nbsp;</p><p>The two dimensional stagnation flows of Newtonian fluids towards a shrinking sheet have been solved numerically by using the SOR iterative procedure. Similarity transformations have been used to reduce the highly nonlinear partial differential equations to ordinary differential equations. The results have been calculated on three different grid sizes to check their accuracy. The problem of flow towards a shrinking sheet is related to that of flow towards a stretching sheet. The numerical results for Newtonian fluids are found in good agreement with those obtained previously.</p>]]></description>
<pubDate>Apr 2015</pubDate>
</item>
<item>
<title><![CDATA[On the Weak Grothendieck Group of a Morphic Ring and its Representations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2437]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Sorokin O.S.&nbsp; &nbsp;</p><p>The K-theoretical aspect of commutative morphic rings is established using the arithmetical properties of morphic rings in order to obtain a ring of all Smith normal forms of matrices over the morphic ring. The internal structure and basic properties of such rings are discussed, as well as their presentations by the Witt vectors. In the case of commutative von Neumann regular rings, the famous Grothendieck group K<sub>0</sub>(R) obtains an alternative description.</p>]]></description>
<pubDate>Feb 2015</pubDate>
</item>
<item>
<title><![CDATA[Derivations Acting as Homomorphisms and as Anti-homomorphisms in σ-Lie Ideals of σ-Prime Gamma Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2268]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>A. C. Paul&nbsp; &nbsp;and S. Chakraborty&nbsp; &nbsp;</p><p>Let U be a non-zero σ-square closed Lie ideal of a 2-torsion free σ-prime Γ-ring M satisfying the condition aαbβc = aβbαc for all a, b, c ∈ M and α, β ∈ Γ, and let d be a derivation of M such that dσ = σd. We prove here that (i) if d acts as a homomorphism on U, then d = 0 or U ⊆ Z(M), where Z(M) is the centre of M; and (ii) if d acts as an anti-homomorphism on U, then d = 0 or U ⊆ Z(M).</p>]]></description>
<pubDate>Feb 2015</pubDate>
</item>
<item>
<title><![CDATA[A Note on Generalized Jordan Derivations in Semiprime Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2267]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Mehsin Jabel Atteya&nbsp; &nbsp;</p><p>The main purpose of this paper is to study and investigate some results concerning generalized Jordan derivations and generalized derivations <img src=image/13403309_01.gif> on a semiprime ring R, where D is an additive mapping on R such that <img src=image/13403309_02.gif> for all <img src=image/13403309_03.gif> and D acts as a left centralizer.</p>]]></description>
<pubDate>Feb 2015</pubDate>
</item>
<item>
<title><![CDATA[Criteria for the Existence of Common Points of Spectra of Several Operator Pencils]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2266]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2015<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;3&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>R. M. Dzhabarzadeh&nbsp; &nbsp;</p><p>In this paper we present two criteria for the existence of common eigenvalues of several operator pencils having discrete spectrum. One of the given criteria is proved by using analogs of the resultant for several operator pencils; the proof of the other criterion requires the use of results from multiparameter spectral theory. In both cases the number of operator pencils is finite, and the operator pencils act, generally speaking, in different Hilbert spaces.</p>]]></description>
<pubDate>Feb 2015</pubDate>
</item>
<item>
<title><![CDATA[Skew - Commuting Derivations of Noncommutative Prime Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2223]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;8&nbsp;&nbsp;<p>Mehsin Jabel Atteya&nbsp; &nbsp;and Dalal Ibraheem Rasen&nbsp; &nbsp;</p><p>The main purpose of this paper is to study and investigate skew-commuting and skew-centralizing derivations d and g on a noncommutative prime ring and semiprime ring R; we obtain the derivation d(R)=0 (resp. g(R)=0).</p>]]></description>
<pubDate>Dec 2014</pubDate>
</item>
<item>
<title><![CDATA[Ultimate Population Size: Some Investigations under Stable Population Theory]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=2081]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;8&nbsp;&nbsp;<p>K.N.S. Yadava&nbsp; &nbsp;Shruti&nbsp; &nbsp;and J.Pandey&nbsp; &nbsp;</p><p>Some models have been proposed for the projection of the future size of a population over short and long terms under stability conditions with a changed regime of fertility schedule. The main aim of this paper is to examine the size of the population if fertility is curtailed to the replacement level, especially in developing countries. The models have been illustrated using a set of real and hypothetical data consistent with the current demographic scenario of India. It was found that the proposed models are extended forms of models developed by previous researchers, and the projected population is more or less consistent with them.</p>]]></description>
<pubDate>Dec 2014</pubDate>
</item>
<item>
<title><![CDATA[Torsion Theory and its Applications in M − D Modules]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1924]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;7&nbsp;&nbsp;<p>Behnam Talaee&nbsp; &nbsp;</p><p>Let R be a ring and M an R−module. A module N ∈ [M] is called M-small if N ≪ K for some K ∈ [M]. The torsion theory cogenerated by M−small modules is introduced and investigated in [9]. Also, as a generalization of M−small modules, −M−small modules are studied in [6]. In this paper we introduce M−delta (briefly M − D) modules and investigate the torsion theory cogenerated by such modules. We obtain some equivalent conditions for when N is equal to its torsion submodule cogenerated by M − D modules. In particular, we show that D(N;A) = 0 for all A ∈ [M] iff N = ReD[M](N). Some other important properties of this kind of modules are also obtained.</p>]]></description>
<pubDate>Oct 2014</pubDate>
</item>
<item>
<title><![CDATA[Non-nilpotent Subgroups in Locally Graded Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1923]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;7&nbsp;&nbsp;<p>N. Azimi&nbsp; &nbsp;and M. Amirabadi&nbsp; &nbsp;</p><p>A non-nilpotent finite group whose proper subgroups are all nilpotent (or a finite group without non-nilpotent proper subgroups) is well known, and is called a Schmidt group. O.Yu. Schmidt (1924) studied such groups and proved that they are solvable. More recently, Zarrin generalized Schmidt's theorem and proved that every finite group with fewer than 22 non-nilpotent subgroups is solvable. In this paper, we show that every locally graded group with fewer than 22 non-nilpotent subgroups is solvable.</p>]]></description>
<pubDate>Oct 2014</pubDate>
</item>
<item>
<title><![CDATA[An Analytic Exact Form of the Unit Step Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1922]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;7&nbsp;&nbsp;<p>J. Venetis&nbsp; &nbsp;</p><p>In this paper, the author obtains an analytic exact form of the unit step function, which is also known as the Heaviside function and constitutes a fundamental concept of Operational Calculus. In particular, this function is equivalently expressed in closed form as the summation of two inverse trigonometric functions. The novelty of this work is that the proposed exact representation is not performed in terms of non-elementary special functions, e.g. the Dirac delta function or the Error function, and is neither the limit of a function nor the limit of a sequence of functions with pointwise or uniform convergence. Therefore it may be much more appropriate for the computational procedures used in Operational Calculus techniques.</p>]]></description>
<pubDate>Oct 2014</pubDate>
</item>
<item>
<title><![CDATA[On Wrapped Binomial Model Characteristics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1921]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;7&nbsp;&nbsp;<p>S. V. S. Girija&nbsp; &nbsp;A. V. Dattatreya Rao&nbsp; &nbsp;and G. V. L. N. Srihari&nbsp; &nbsp;</p><p>In this paper, a new discrete circular model, the Wrapped Binomial model is constructed by applying the method of wrapping a discrete linear model. The characteristic function of the Wrapped Binomial Distribution is also derived and the population characteristics are studied. </p>]]></description>
<pubDate>Oct 2014</pubDate>
</item>
<item>
<title><![CDATA[Blow up of Solutions for a System of Nonlinear Higher-order Kirchhoff-type Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1889]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Erhan Piskin&nbsp; &nbsp;</p><p>In this work, we consider the initial boundary value problem for the Kirchhoff-type equations with damping and source terms <img src=image/13402608_001.png> in a bounded domain. We prove the blow up of the solution with positive initial energy by using the technique of [26] with a modification in the energy functional due to the different nature of problems. This improves earlier results in the literature [3, 9, 13, 21].</p>]]></description>
<pubDate>Aug 2014</pubDate>
</item>
<item>
<title><![CDATA[On the p-Maps of Groups]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1888]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Swapnil Srivastava&nbsp; &nbsp;and Punish Kumar&nbsp; &nbsp;</p><p>In this paper, we have defined the concept of a p-map and studied some of its properties. By using this map, we have shown that p(G) is a subgroup of G and that S = {x : p(x) = e} is a right transversal (with identity) of p(G) in G, which becomes a group by using the p-map and some additional conditions. Finally, we have shown that G is an extension of p(G).</p>]]></description>
<pubDate>Aug 2014</pubDate>
</item>
<item>
<title><![CDATA[Characterization of Generalized Uniform Distribution through Expectation of Function of Order Statistics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1778]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Bhatt Milind B.&nbsp; &nbsp;</p><p>Normally the mass of a root has a uniform distribution, but some have a different uniform distribution, named the generalized uniform distribution (GUD). A characterization result based on the expectation of a function of order statistics has been obtained for the generalized uniform distribution. Applications are given for illustrative purposes.</p>]]></description>
<pubDate>Aug 2014</pubDate>
</item>
<item>
<title><![CDATA[Prime Number Conjecture]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1777]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;6&nbsp;&nbsp;<p>Chris Gilbert Waltzek&nbsp; &nbsp;</p><p>This paper builds on Goldbach’s weak conjecture, showing that all primes to infinity are composed of 3 smaller primes, suggesting that the modern definition of a prime number may be incomplete, requiring revision. The results indicate that prime numbers should include 1 as a prime number and 2 as a composite number, adding a new dimension to the most fundamental of all integers.</p>]]></description>
<pubDate>Aug 2014</pubDate>
</item>
<item>
<title><![CDATA[Generating Smooth Surfaces by Refinement of Curves]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1746]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Uri Itai&nbsp; &nbsp;</p><p>Generalization of subdivision schemes refining points to schemes refining more complex geometric objects has become popular in recent years. In this paper we generalize corner-cutting schemes in order to refine curves while taking into account the geometry of the curves. We provide conditions guaranteeing that these schemes are well defined and converge to surfaces with continuous tangents.</p>]]></description>
<pubDate>Jun 2014</pubDate>
</item>
<item>
<title><![CDATA[Forcing of Infinity and Algebras of Distributions of Binary Semi-isolating Formulas for Strongly Minimal Theories]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1745]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;5&nbsp;&nbsp;<p>Sergey V. Sudoplatov&nbsp; &nbsp;</p><p>We apply a general approach for distributions of binary isolating and semi-isolating formulas to the class of strongly minimal theories. For this aim we introduce and use the notion of forcing of infinity. Structures associated with binary formulas, in strongly minimal theories, and containing compositions and Boolean combinations are characterized: a list of basic structural properties for these structures, including the forcing of infinity, is presented, and it is shown that structures satisfying this list of properties are realized in strongly minimal theories.</p>]]></description>
<pubDate>Jun 2014</pubDate>
</item>
<item>
<title><![CDATA[On the Evaluation of the Martinelli-Bochner Integral in the Half-Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1569]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Rustamova S.Mastura&nbsp; &nbsp;</p><p>In this paper, a formula for calculating the Martinelli-Bochner integral of functions from L<sup>p</sup> in the half-space is obtained.</p>]]></description>
<pubDate>Apr 2014</pubDate>
</item>
<item>
<title><![CDATA[Some Generating Functions of Modified Gegenbauer Polynomials by Lie Algebraic Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1568]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>K.P. Samanta&nbsp; &nbsp;B. C. Chandra&nbsp; &nbsp;and C.S.Bera&nbsp; &nbsp;</p><p>In this paper we have obtained some novel generating functions of <img src=image/13402107_001.png>-a modification of Gegenbauer polynomials, <img src=image/13402107_002.png>-by utilizing L. Weisner's group-theoretic method. By giving suitable interpretations to both the index (n) and the parameter (λ) of the polynomial under consideration, we obtain, in Section 2, a set of infinitesimal operators, known as raising and lowering operators, which generates a four-dimensional Lie algebra. Finally, in Section 3, we derive a novel generating function of the modified Gegenbauer polynomials, which in turn yields a number of new and known results on generating functions.</p>]]></description>
<pubDate>Apr 2014</pubDate>
</item>
<item>
<title><![CDATA[Filtering Problems for Periodically Correlated Isotropic Random Fields]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1567]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Iryna Dubovets’ka&nbsp; &nbsp;Oleksandr Masyutka&nbsp; &nbsp;and Mikhail Moklyachuk&nbsp; &nbsp;</p><p>The spectral theory of isotropic random fields in Euclidean space developed by M. I. Yadrenko is exploited to find a solution to the problem of optimal linear estimation of the functional <img src=image/13402050_001.png> which depends on unknown values of a random field ζ(j, x), j ∈ Z, x ∈ S<sub>n</sub>, that is periodically correlated (cyclostationary with period T) with respect to time and isotropic on the sphere S<sub>n</sub> in the Euclidean space E<sup>n</sup>. Estimates are based on observations of the field ζ(j, x) + θ(j, x) at points (j, x), j = 0,−1,−2, . . . , x ∈ S<sub>n</sub>, where θ(j, x) is a random field uncorrelated with ζ(j, x), periodically correlated with respect to time, and isotropic on the sphere S<sub>n</sub>. Formulas for computing the value of the mean-square error and the spectral characteristic of the optimal linear estimate of the functional Aζ are obtained. The least favorable spectral densities and the minimax (robust) spectral characteristics of the optimal estimates of the functional Aζ are determined for some special classes of spectral densities.</p>]]></description>
<pubDate>Apr 2014</pubDate>
</item>
<item>
<title><![CDATA[Evaluating Multiple Integrals Using Maple]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1566]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Apr&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Chii-Huei Yu&nbsp; &nbsp;and Bing-Huei Chen&nbsp; &nbsp;</p><p>This paper uses the mathematical software Maple as an auxiliary tool to study two types of multiple integrals. We can obtain the infinite series forms of these two types of multiple integrals by using binomial series and the integration term by term theorem. On the other hand, we propose some examples to carry out the calculations in practice. The research methods adopted in this study involved finding solutions through manual calculations and verifying these solutions by using Maple.</p>]]></description>
<pubDate>Apr 2014</pubDate>
</item>
<item>
<title><![CDATA[A Robust Estimator of R = P(X > Y ) of Heavy-tailed Distributions and its Sampling Distributions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1471]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Dais George&nbsp; &nbsp;</p><p>Heavy-tailed distributions have wide applications in life-time contexts, especially in reliability and risk modeling. We therefore consider the estimation problem of reliability, R = P(X > Y ), for small samples, when X and Y are two independent but not identically distributed random variables belonging to the family of heavy-tailed distributions, using a robust estimator, namely the harmonic moment estimator. Extensive simulation studies are carried out to study the performance of this estimator. The efficiency of the estimator relative to the well-known Hill estimator is studied. We obtain the sampling distribution of the parameters of the distribution as well as that of the estimator of R, which helps us study the properties of the estimators. We also find the asymptotic confidence intervals of R, and their performance is studied with respect to average width and coverage probability through simulations.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[A Simple Approximation to the Area under Standard Normal Curve]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1470]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Amit Choudhury&nbsp; &nbsp;</p><p>Of all statistical distributions, the standard normal is perhaps the most popular and widely used. Its use often involves computing the area under its probability curve. Unlike many other statistical distributions, there is no closed form theoretical expression for this area in the case of the normal distribution. Consequently it has to be approximated. While there are a number of highly complex but accurate algorithms, some simple ones have also been proposed in the literature. Even though the simple ones may not be very accurate, they are nevertheless useful, as accuracy has to be gauged vis-à-vis simplicity. In this short paper, we present another simple approximation formula for the cumulative distribution function of the standard normal distribution. This new formula is fairly good when judged vis-à-vis its simplicity.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[On the Optimization Problem of Stochastic Observations of Random Walks]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1469]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Alexander A. Butov&nbsp; &nbsp;</p><p>The optimal control problem for the intensity of observation events of a random walk process is considered for the case of a counting Poisson process, in semimartingale terms. A linear function of the intensity, as a cost of observations, and the expected value of the quadratic form of estimation errors, as a cost of error, are included in the loss function. An analogous result for the problem of the optimal intensity of stochastic approximation is presented.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[Hybrid Group Acceptance Sampling Plan Based on Size Biased Lomax Model]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1468]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>R. Subba Rao&nbsp; &nbsp;A. Naga Durgamamba&nbsp; &nbsp;and R.R.L. Kantam&nbsp; &nbsp;</p><p>In this paper, a hybrid group acceptance sampling plan is introduced for a truncated life test when the lifetimes of the items follow the size biased Lomax model. The minimum number of testers and the acceptance number are obtained when the consumer’s risk, the test termination time and the group size are pre-specified. The operating characteristic values and the minimum ratios of the true mean life to the specified mean life for the given producer’s risk are also derived. The results are discussed through an example, and a comparative study of the proposed sampling plan with an existing sampling plan is elaborated.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[On Offset l-Arc Models]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1467]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>S.V.S. Girija&nbsp; &nbsp;A.J.V. Radhika&nbsp; &nbsp;and A.V. Dattatreya Rao&nbsp; &nbsp;</p><p>Offsetting, one of the available techniques for constructing circular models, has not received much attention, in particular for the construction of arc models. Here, making use of the method of offsetting on bivariate distributions, l-arc models are constructed. The method of transforming a bivariate linear random variable to its directional component is called OFFSETTING, and the respective distribution of the directional component is called the offset distribution, which is a univariate circular model. By employing the concept of arc models, we obtain the Offset Semicircular Cauchy model. Here we obtain arc models directly by applying offsetting to linear bivariate models such as the Bivariate Beta and Bivariate Exponential models. These arc models occur in natural phenomena. Some of the newly proposed semicircular/arc models are bimodal, and the population characteristics of the offset semicircular and arc models are studied.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[Intuitionistic Neutrosophic Soft Set over Rings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1466]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Said Broumi&nbsp; &nbsp;Florentin Smarandache&nbsp; &nbsp;and Pabitra Kumar Maji&nbsp; &nbsp;</p><p>S.Broumi and F.Smarandache introduced the concept of the intuitionistic neutrosophic soft set as an extension of soft set theory. In this paper we have applied the concept of the intuitionistic neutrosophic soft set to ring theory. The notion of the intuitionistic neutrosophic soft set over a ring (INSSOR for short) is introduced and its basic properties have been investigated. The definitions of the intersection, union, AND, and OR operations over a ring (INSSOR) have also been given. Finally, we have defined the product of two intuitionistic neutrosophic soft sets over a ring.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[Partial Bi-Semimodules over Partial Semirings]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1465]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>V. Amarendra Babu&nbsp; &nbsp;M. Srinivasa Reddy&nbsp; &nbsp;and P.V.Srinivasa Rao&nbsp; &nbsp;</p><p>A partial semiring is a structure possessing an infinitary partial addition and a binary multiplication, subject to a set of axioms. The set of partial functions under disjoint-domain sums and functional composition forms a partial semiring. In this paper we introduce the notions of an ( R, S ) - partial bi-semimodule and an ( R, S ) - homomorphism of ( R, S ) - partial bi-semimodules, and extend the results on partial semimodules over partial semirings by P. V. Srinivasa Rao [8] to ( R, S ) - partial bi-semimodules.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[On N-Fold Positive Implicative Artinian and Positive Implicative Noetherian BCK-Algebras]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1464]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Mar&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>B. Satyanarayana&nbsp; &nbsp;R. Durga Prasad&nbsp; &nbsp;and L. Krishna&nbsp; &nbsp;</p><p>The notion of BCK-algebras was initiated by Imai and Iseki in 1966 as a generalization of both classical and non-classical propositional calculus. In 1999, Huang and Chen introduced the notion of n-fold positive implicative ideals in BCK-algebras. In 2011, Satyanarayana and Durga Prasad introduced foldness of intuitionistic fuzzy positive implicative ideals in BCK-algebras. In this paper, we introduce the notions of n-fold positive implicative ideals, n-fold positive implicative Artinian (shortly, PI<sup>n</sup>-Artinian) and n-fold positive implicative Noetherian (shortly, PI<sup>n</sup>-Noetherian) BCK-algebras and study some of their properties.</p>]]></description>
<pubDate>Mar 2014</pubDate>
</item>
<item>
<title><![CDATA[Some Remarks on the Spectrum of the Magnetic Stark Hamiltonians]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1167]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Rachid Assel&nbsp; &nbsp;Mouez Dimassi&nbsp; &nbsp;and Claudio Fernandez&nbsp; &nbsp;</p><p>The main purpose of this note is to study spectral properties of the magnetic Stark Hamiltonian <img src=image/13401691_001.png> on the Hilbert space L<sup>2</sup>(R<sup>2</sup>). We show that if the potential V satisfies some mild regularity conditions and is sufficiently decaying at infinity, then the operator H(μ, ϵ) has at most a finite number of eigenvalues.</p>]]></description>
<pubDate>Feb 2014</pubDate>
</item>
<item>
<title><![CDATA[On Spectral Representation of Varma Models with Change in Regime]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1166]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Maddalena Cavicchioli&nbsp; &nbsp;</p><p>We present various formulae in closed form for the spectral density of multivariate or univariate ARMA models subject to Markov switching, and describe some new properties of them. Many examples and numerical applications are proposed to illustrate the behaviour of the spectral density. This turns out to be useful in order to investigate various concepts of stationarity via spectral analysis.</p>]]></description>
<pubDate>Feb 2014</pubDate>
</item>
<item>
<title><![CDATA[Robust Extrapolation Problem for Stochastic Processes with Stationary Increments]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1165]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Maksym Luz&nbsp; &nbsp;and Mikhail Moklyachuk&nbsp; &nbsp;</p><p>The problem of optimal estimation of the linear functionals <img src=image/13401563_001.png> and <img src=image/13401563_002.png> depending on the unknown values of stochastic process ξ(t), t ∈ R, with stationary nth increments from observations of the process at points t < 0 is considered. Formulas for calculating the mean square error and the spectral characteristic of optimal linear estimates of the functionals are derived in the case where the spectral density of the process is exactly known. Formulas that determine the least favorable spectral densities and the minimax (robust) spectral characteristic of the optimal linear estimates of the functionals are proposed in the case where the spectral density of the process is not exactly known, but a set of admissible spectral densities is given.</p>]]></description>
<pubDate>Feb 2014</pubDate>
</item>
<item>
<title><![CDATA[Goodness-of-Fit Test Based on Arbitrarily Selected Order Statistics]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1164]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Zhenmin Chen&nbsp; &nbsp;</p><p>Checking whether or not the population distribution, from which a random sample is drawn, is a specified distribution is a popular topic in statistical analysis. Such a problem is usually referred to as a goodness-of-fit test. Numerous research papers have been published in this area. The purpose of this short paper is to provide a goodness-of-fit test statistic which works for many kinds of censored data formed by order statistics. This is an extension of the work presented in Chen and Ye (2009). The method can be used for censored samples that are commonly used in reliability analysis, including left censored data, right censored data and doubly censored data.</p>]]></description>
<pubDate>Feb 2014</pubDate>
</item>
<item>
<title><![CDATA[New Operations over Interval Valued Intuitionistic Hesitant Fuzzy Set]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1163]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Said Broumi&nbsp; &nbsp;and Florentin Smarandache&nbsp; &nbsp;</p><p>Hesitancy is the most common problem in decision making, for which the hesitant fuzzy set can be considered a useful tool allowing several possible degrees of membership of an element to a set. Recently, another suitable tool was defined by Zhiming Zhang [1], called interval valued intuitionistic hesitant fuzzy sets, for dealing with uncertainty and vagueness, which is more powerful than hesitant fuzzy sets. In this paper, four new operations are introduced on interval-valued intuitionistic hesitant fuzzy sets and several important properties are also studied.</p>]]></description>
<pubDate>Feb 2014</pubDate>
</item>
<item>
<title><![CDATA[Intuitionistic Fuzzy Soft Matrix Theory and Multi Criteria in Decision Making Based on T-Norm Operators]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1162]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Feb&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Md.Jalilul Islam Mondal&nbsp; &nbsp;and Tapan Kumar Roy&nbsp; &nbsp;</p><p>The aim of this paper is to solve multi-criteria decision making problems for a selected project using the intuitionistic fuzzy soft matrix based on generalized operators of the t-norm and t-conorm. We use the concept of level operators of intuitionistic fuzzy sets [K.T. Atanassov, On Intuitionistic Fuzzy Sets Theory, Springer-Verlag 2012] to define the intuitionistic fuzzy soft level matrix. Finally, we give an application to a decision making problem by using the operators of the t-norm and t-conorm.</p>]]></description>
<pubDate>Feb 2014</pubDate>
</item>
<item>
<title><![CDATA[Quenching Behavior of Parabolic Problems with Localized Reaction Term]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1104]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Chien-Wei Chang&nbsp;  &nbsp;Yen-Huang Hsu&nbsp;  &nbsp;and H. T. Liu&nbsp;  &nbsp;</p><p>Let p, q, T be positive real numbers, B = {x ∈ R<sup>n</sup> : <img src=image/13401114_001.png>}, ∂B = {x ∈ R<sup>n</sup> : <img src=image/13401114_002.png>}, x<sup>∗</sup> ∈ B, and let △ be the Laplace operator in R<sup>n</sup>. In this paper, the following initial boundary value problem with localized reaction term is studied: <img src=image/13401114_003.png>, where u<sub>0</sub> ≥ 0. The existence of a unique classical solution is established. When x<sup>∗</sup> = 0, quenching criteria are given. Moreover, the rate of change of the solution at the quenching point near the quenching time is studied.</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[Fixed and Periodic Point Theorems on Symmetric Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1103]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>J Madhusudan Rao&nbsp;  &nbsp;and P Sumati Kumari&nbsp;  &nbsp;</p><p>This paper proves the existence of periodic and fixed points for self maps satisfying some contractive conditions in symmetric spaces; we also prove coincidence and fixed point results, without any continuity requirement, for maps satisfying slightly more general Sehgal-type contractive conditions, with a suitable example.</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[Quick Switching Conditional RGS Plan-3 Indexed through Outgoing Quality Levels]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=1102]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>V. Kaviyarasu&nbsp;  &nbsp;</p><p>This paper studies the design of a new attribute sampling plan, the Quick Switching Conditional Repetitive Group Sampling System (QSCRGSS)-3, indexed through the Average Outgoing Quality (AOQ), the Average Outgoing Quality Limit (AOQL) and its Operating Ratio (OR). Tables with numerical illustrations are provided for the newly developed plan and its various plan parameters.</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[The Minimal System of Defining Relations of the Free Modular Lattice of Rank 3 and Lattices Close to Modular One]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=922]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Alexander G. Gein&nbsp; &nbsp;and Mikhail P. Shushpanov&nbsp; &nbsp;</p><p>We construct a system of 11 defining relations for the 3-generated free modular lattice and prove that this system is minimal. Systems of defining relations for lattices close to modular ones are also studied.</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[Solution of One –dimensional Fractional Diffusion Equations Involving Caputo Fractional Derivatives in terms of the Mittag - Leffler Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=921]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>R.K. Saxena&nbsp; &nbsp;</p><p>The object of this article is to investigate the solutions of the one-dimensional linear fractional diffusion equations defined by (2.1) and (4.1). The solutions are obtained in closed and elegant forms in terms of the H-function and generalized Mittag-Leffler functions, which are suitable for numerical computation. The derived results include the results for the one-dimensional linear fractional telegraph equation due to Orsingher and Beghin [1], and results recently derived by Saxena, Mathai and Haubold [2].</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[On Some New Inequalities for Differentiable (h<sub>1</sub>,h<sub>2</sub>)– Preinvex Functions on the Co-Ordinates]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=920]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Marian Matłoka&nbsp; &nbsp;</p><p>We consider and study a new class of convex functions called (h<sub>1</sub>,h<sub>2</sub>)–preinvex functions on the co-ordinates. Some Hermite-Hadamard inequalities for (h<sub>1</sub>,h<sub>2</sub>)–preinvex functions on the co-ordinates and their variant forms are derived. Some of our theorems are new, while others generalize results of Dragomir and Latif.</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[A Study of Some Integral Problems Using Maple]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=919]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jan&nbsp;2014<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;2&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Chii-Huei Yu&nbsp; &nbsp;</p><p>This paper takes the mathematical software Maple as an auxiliary tool to study four types of integrals. We obtain the Fourier series expansions of these four types of integrals by using the integration term-by-term theorem. In addition, we provide two examples of practical calculation. The research method adopted in this study involved finding solutions through manual calculations and verifying these solutions with Maple.</p>]]></description>
<pubDate>Jan 2014</pubDate>
</item>
<item>
<title><![CDATA[On the Surgery Theory for Filtered Manifolds]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=701]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Alberto Cavicchioli&nbsp; &nbsp;Friedrich Hegenbarth&nbsp; &nbsp;Yurij V. Muranov&nbsp; &nbsp;and Fulvia Spaggiari&nbsp; &nbsp;</p><p>In this paper we describe some relations between various structure sets which arise naturally for a Browder-Livesay filtration of a closed topological manifold. We use the algebraic surgery theory of Ranicki to realize the surgery groups and natural maps on the spectrum level. We also obtain new relations between Browder-Quinn surgery obstruction groups and structure sets. Finally, we illustrate several examples and applications.</p>]]></description>
<pubDate>Dec 2013</pubDate>
</item>
<item>
<title><![CDATA[The Continuous Wavelet Transform for A Bessel Type Operator on the Half Line]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=700]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>R.F. Al Subaie&nbsp; &nbsp;and M.A. Mourou&nbsp; &nbsp;</p><p>We consider a singular differential operator Δ on the half line which generalizes the Bessel operator. Using harmonic analysis tools corresponding to Δ, we construct and investigate a new continuous wavelet transform on [0,∞[ tied to Δ. We apply this wavelet transform to invert an intertwining operator between Δ and the second derivative operator d<sup>2</sup>/dx<sup>2</sup>.</p>]]></description>
<pubDate>Dec 2013</pubDate>
</item>
<item>
<title><![CDATA[SOME CURVATURE CONDITIONS ON α-COSYMPLECTIC MANIFOLDS]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=699]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>HAKAN OZTURK&nbsp; &nbsp;</p><p>The main interest of the present paper is to study α-cosymplectic manifolds that satisfy certain tensor conditions. In particular, we consider α-cosymplectic manifolds with flatness conditions. We prove that there cannot exist ϕ-projectively flat α-cosymplectic manifolds with zero scalar curvature in dimensions greater than three. Furthermore, we work with special weakly Ricci-symmetric α-cosymplectic manifolds. We conclude the paper with an example of an α-cosymplectic manifold.</p>]]></description>
<pubDate>Dec 2013</pubDate>
</item>
<item>
<title><![CDATA[Several Methods of Determining the Continuous or Discrete Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=698]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Sergey Akimov&nbsp; &nbsp;and Olga Kvan&nbsp; &nbsp;</p><p>This article addresses the problem of determining whether a sample drawn from a general population follows a continuous or a discrete distribution. A definition of discreteness is given, and the main research task is to decide the continuity or discreteness of unknown data. We first consider the existing methodology, a method of finding the frequency of repetition of individual values among the variants of the population under test. This procedure is described mathematically; its basic disadvantage is that its results are difficult to interpret. Hence, the task of creating an algorithm that determines whether data are continuous or discrete becomes very important. The new algorithm is likewise based on searching for matches in the data array, but it uses not only the matches themselves but also the changes between successive values. This requires sorting the array from its minimum to its maximum value. In addition, we introduce the concept of a "step" as the minimum change between two values in a discrete series. An iterative method for detecting matches in the array and identifying identical changes between neighboring values is proposed in the article. We thus obtain three key values that characterize continuity or discreteness. It has been found empirically that the sensitivity of each of these values changes with the number of observations in the array. We also identified factors whose use, as a dependence on the number of values in the data, helps attribute a data array to a continuous or discrete distribution.</p>]]></description>
<pubDate>Dec 2013</pubDate>
</item>
<item>
<title><![CDATA[Homotopy Perturbation Method for Brachistochrone Problem]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=697]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Dec&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;4&nbsp;&nbsp;<p>Parivash Shams Derakhsh&nbsp; &nbsp;and Parisa Shams Derakhsh&nbsp; &nbsp;</p><p>The most famous classical variational principle is the so-called Brachistochrone problem. In this work, the homotopy perturbation method (HPM) is applied to the Brachistochrone problem, which arises in variational problems. The results reveal the efficiency and accuracy of the proposed method. The homotopy perturbation method yields solutions in convergent series forms with easy computation.</p>]]></description>
<pubDate>Dec 2013</pubDate>
</item>
<item>
<title><![CDATA[Common Fixed Point Results for Four Mappings Satisfying Almost generalized (S,T)−contractive Condition in Partially Ordered Fuzzy Metric Spaces]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=495]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>M.&nbsp;R. Rismanchian&nbsp;S.&nbsp;Sedghi&nbsp;N.&nbsp;Shobkolaei&nbsp;and K.P.R.&nbsp;Rao&nbsp;</p><p>In this paper, we define the concept of almost generalized (S; T)-contractive condition, and prove some common fixed point results for four mappings satisfying almost generalized (S; T)− contractive condition in partially ordered fuzzy metric spaces. </p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[On Convergence Properties of Szasz-Mirakyan-Bernstein Operators of two Variables]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=494]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Tuncay&nbsp;Tunc&nbsp;</p><p>In this study, we have constructed a sequence of new positive linear operators in two variables by using the Szasz-Mirakyan and Bernstein operators, and investigated its approximation properties.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[A New Extended (G'/G)-Expansion Method to Find Exact Traveling Wave Solutions of Nonlinear Evolution Equations]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=493]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Harun-Or-Roshid &nbsp;&nbsp;Md.&nbsp;Nur Alam&nbsp;M.&nbsp;F. Hoque&nbsp;and M.&nbsp;Ali Akbar&nbsp;</p><p>In this paper, we propose a new extended (G'/G)-expansion method to construct exact traveling wave solutions for nonlinear evolution equations. To check the validity and effectiveness of our method, we implement it to the (2+1)-dimensional typical breaking soliton equation. The results that we get are more general and successfully recover most of the previously known solutions which have been found by other sophisticated methods. Many of these solutions are found for the first time. Moreover, our method is direct, concise, elementary, effective and can be used for many other nonlinear evolution equations.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[On Cyclicity and Regularity of Commuting Matrices]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=492]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Boris&nbsp;Shekhtman&nbsp;</p><p>It is well-known that the following properties of a matrix are equivalent: a matrix is non-derogatory if and only if it is cyclic, if and only if it is simple, and if and only if it is 1-regular. In this article we attempt to extend these properties to a sequence of commuting matrices and examine the relations between them.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[Criteria of Two-weighted Inequalities for Multidimensional Hardy Type Operator in Weighted Musielak-Orlicz Spaces and Some Application]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=491]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Rovshan&nbsp;A. Bandaliev&nbsp;</p><p>In this paper a two-weight criterion for the multidimensional Hardy type operator and its dual operator acting from weighted Lebesgue spaces into weighted Musielak-Orlicz spaces is proved. As an application, we prove the boundedness of the multidimensional geometric mean operator in weighted Musielak-Orlicz spaces. In particular, the obtained results imply the boundedness of the multidimensional Hardy operator and its dual operator acting from usual weighted Lebesgue spaces into weighted variable Lebesgue spaces. We establish an integral-type necessary and sufficient condition on the weights which provides the boundedness of the multidimensional Hardy type operator from weighted Lebesgue spaces into weighted Musielak-Orlicz spaces.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[On r-Edge-Connected r-Regular Bricks and Braces and Inscribability]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=490]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Kevin&nbsp;K. H. Cheung&nbsp;</p><p>A classical result due to Steinitz states that a graph is isomorphic to the graph of some 3-dimensional polytope P if and only if it is planar and 3-connected. If a graph G is isomorphic to the graph of a 3-dimensional polytope inscribed in a sphere, it is said to be of inscribable type. The problem of determining which graphs are of inscribable type dates back to 1832 and was open until Rivin proved a characterization in terms of the existence of a strictly feasible solution to a system of linear equations and inequalities which we call sys(G), which, surprisingly, also appears in the context of the Traveling Salesman Problem. Using such a characterization, various classes of graphs of inscribable type can be described. Dillencourt and Smith gave a characterization of 3-connected 3-regular planar graphs that are of inscribable type and a linear-time algorithm for recognizing such graphs. In this paper, their results are generalized to r-edge-connected r-regular graphs for odd r ≥ 3 in the context of the existence of strictly feasible solutions to sys(G). An answer to an open question raised by D. Eppstein concerning the inscribability of 4-regular graphs is also given.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[Lyapunov Exponents and Large Deviations Analysis of Eigenfunctions in Anderson Models on Graphs]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=489]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>VICTOR&nbsp;CHULAEVSKY&nbsp;</p><p>We propose a new probabilistic approach to the analysis of decay of the Green’s functions and the eigenfunctions of the Anderson Hamiltonians on countable graphs. Our method is close in spirit to the Fractional Moment Method, but we show how the use of the fractional moments can be avoided, so that exponential decay of the Green’s functions can be established in some models where the fractional moments diverge, due to low regularity of the random potential. We elucidate the exceptional role of the Hölder continuity condition, usual in the FMM, in terms of Cramér’s condition in the large deviations problem for a suitably constructed rigorous path expansion.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[The Existence of Noise Terms for Systems of Partial Differential and Integral Equations with (HPM) Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=488]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Parivash&nbsp;Shams Derakhsh&nbsp;and Jafar Biazar&nbsp; &nbsp;</p><p>In this paper we develop a framework giving a necessary condition for the existence of noise terms for systems of partial differential and integral equations with the homotopy perturbation method (HPM). We show that the noise terms are conditional and are generated for inhomogeneous equations if specific criteria are satisfied. To illustrate the capability and reliability of this method, we numerically test our approach on a variety of systems of inhomogeneous problems.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[Study of Nonlinear Evolution Equations to Construct Traveling Wave Solutions via the New Approach of the Generalized (G'/G) -Expansion Method]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=487]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>Md.&nbsp;Nur Alam&nbsp;M.&nbsp;Ali Akbar&nbsp;and Harun-Or-Roshid &nbsp;&nbsp;</p><p>Exact solutions of nonlinear evolution equations (NLEEs) play a very important role in revealing the inner mechanisms of complex physical phenomena. In this paper, the new generalized (G'/G)-expansion method is used to construct new exact traveling wave solutions for some nonlinear evolution equations arising in mathematical physics, namely the (3+1)-dimensional Zakharov-Kuznetsov equation and the Burgers equation. As a result, the traveling wave solutions are expressed in terms of hyperbolic, trigonometric and rational functions. This method is very easy, direct, concise and simple to implement compared with other existing methods, and it offers wider applicability for handling nonlinear wave equations. Moreover, this procedure reduces the large volume of calculations.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[Solution of Mixed Problem Including a First Order Three Dimensional P.D.E with Nonlocal and Global Boundary Conditions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=486]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Oct&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;3&nbsp;&nbsp;<p>N.A.&nbsp;Aliev&nbsp;O.H.&nbsp;Asadova&nbsp;and A.M.&nbsp;Aliev&nbsp;</p><p>In this paper, the solution of a mixed complex boundary value problem of first order is considered. The principal term of the problem with respect to the space variables involves the Cauchy-Riemann operator. We first use the Laplace transformation to introduce a spectral problem, and then investigate the corresponding problem of Fredholm type. The spectral problem here differs from classical boundary value problems: the boundary conditions are nonlocal, global and, in general, linear. Finally, we find an asymptotic expansion for the solution of the spectral problem, which depends on an unknown complex parameter. With the help of this asymptotic expansion we prove the existence and uniqueness of the solution of the mixed problem.</p>]]></description>
<pubDate>Oct 2013</pubDate>
</item>
<item>
<title><![CDATA[Some New Hermite-Hadamard Type Inequalities for Geometrically Convex Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=152]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>imdat&nbsp;İşcan &nbsp;</p><p>In this paper, some new integral inequalities of Hermite-Hadamard type related to the geometrically convex functions are established and some applications to special means of positive real numbers are also given.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[Estimation in Misclassified Size-Biased Generalized Negative Binomial Distribution]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=151]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>B.&nbsp;S. Trivedi&nbsp;and M.&nbsp;N. Patel&nbsp;</p><p>In this paper, we are concerned with situations where the value two is sometimes reported erroneously as one, with probability α, in relation to the size-biased generalized negative binomial distribution (SBGNBD). We obtain the maximum likelihood estimator and the Bayes estimator under a general entropy loss function. A simulation study is carried out to assess the performance of the maximum likelihood and Bayes estimators, and a comparison is made between them.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[Representing Data Distributions with a Nonparametric Kernel Density: The Way to Estimate the Optimal Oil Contents of Palm Mesocarp at Various Periods]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=150]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Divo&nbsp;Dharma Silalahi&nbsp;Putri&nbsp;Aulia Wahyuningsih&nbsp;and Fahri&nbsp;Arief Siregar&nbsp;</p><p>The most popular nonparametric density estimate is the kernel density estimate. This estimate depends on the choice of bandwidth, which is optimized in the kernel optimality process. We propose the Epanechnikov kernel, which is optimal with respect to the AMISE. Resampled data, serving as replicate samples, were obtained by the bootstrap mechanism to provide information about the sampling distribution. The resampled data were then used in an Epanechnikov kernel simulation to estimate the optimal solution. The study was carried out using oil content (%) data at various periods after pollination, with the oil contents obtained by extraction of oil palm mesocarp. The results show that the Epanechnikov kernel with bootstrap resamples can be used for nonparametric optimization cases such as the oil content (%) of oil palm mesocarp.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[On a Fractional Master Equation and a Fractional Diffusion Equation]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=149]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>R.&nbsp;K. Saxena&nbsp;</p><p>In this paper, we derive the solutions of the fractional master equation defined by (2.1) and the fractional diffusion equation defined by (3.3). The method followed in deriving the solutions is that of Laplace and Fourier transforms. The solutions are obtained in neat and compact forms in terms of the generalized Mittag-Leffler function and Fox's H-function. The results established are of a general character and include some known results as special cases.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[0-event: Point and Interval Estimates]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=148]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Sergey&nbsp;Gurov&nbsp;</p><p>Nonzero point and interval probability estimates are proposed and validated for an event that has never been observed in a series of Bernoulli trials (a 0-event), a case in which classical statistical methods yield a zero point estimate, which is often unacceptable in practice. A classification of samples by size for the case of a 0-event is also proposed.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[The Leibniz Rule for Fractional Derivatives Holds with Non-Differentiable Functions]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=147]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Guy&nbsp;Jumarie&nbsp;</p><p>In order to convince the sceptical reader, we herein give another proof of the fact that the Leibniz rule <img src=image/13400340_001.png> for fractional derivatives applies whenever we are dealing with non-differentiable functions, as they occur, for instance, when one considers problems involving fractal space-time.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[Intuitionistic Fuzzy Soft Matrix Theory]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=146]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Md.Jalilul&nbsp;Islam Mondal&nbsp;and Tapan&nbsp;Kumar Roy&nbsp;</p><p>The purpose of this paper is to put forward the notion of intuitionistic fuzzy soft matrix theory and some basic results. In this paper, we define intuitionistic fuzzy soft matrices and have introduced some new operators with weights, some properties and their proofs and examples which make theoretical studies in intuitionistic fuzzy soft matrix theory more functional. Moreover, we have given one example on weighted arithmetic mean for decision making problem.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[Inequalities for the Polar Derivative of A Polynomial]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=145]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>M.S.&nbsp;Pukhta&nbsp;</p><p>If <img src=image/17100031_001.png> has all its zeros on <img src=image/17100031_002.png> K≤1, then it was recently proved by Dewan and Ahuja [3] that for every real or complex number <img src=image/17100031_003.png> <img src=image/17100031_004.png> where <img src=image/17100031_005.png> In this paper, we improve the above result and obtain a new inequality for the polar derivative of a polynomial.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[Strong Insertion of A Baire-one Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=144]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Majid&nbsp;Mirmiran&nbsp;</p><p>Necessary and sufficient conditions in terms of lower cut sets are given for the strong insertion of a Baire-one function between two comparable real-valued functions on topological spaces whose <img src=image/13400486_001.png> sets are <img src=image/13400486_002.png>.</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[On Decay of Solutions to Systems of Integro-differential Equations with Strongly Damped]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=143]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>Erhan&nbsp;Pişkin&nbsp;</p><p>We study a system of nonlinear integro-differential equations with strong and weak damping terms in a bounded domain, with initial and Dirichlet boundary conditions. We prove the existence of global solutions by using the potential well method and obtain an energy decay estimate by applying a lemma of Komornik [3].</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[A Unique Common Fixed Point Theorem of Meir-Keeler Type in a Partial Metric Space]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=142]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Aug &nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;2&nbsp;&nbsp;<p>K.P.R.&nbsp;Rao&nbsp;G.N.V.&nbsp;Kishore&nbsp;and P.R.Sobhana&nbsp;Babu&nbsp;</p><p>In this paper, we obtain a unique common fixed point theorem for four self mappings satisfying Meir-Keeler type contractive condition in partial metric spaces, which is slightly different from the result of Aydi and Karapinar [5].</p>]]></description>
<pubDate>Aug  2013</pubDate>
</item>
<item>
<title><![CDATA[Inequalities for the sth Derivative of A Polynomial with Prescribed Zeros]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=10]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>M.S.&nbsp;Pukhta&nbsp;</p><p>Let p(z) be a polynomial of degree n which does not vanish in <img src=image/17100030_001.bmp>, k≧1; then for 1≦R≦k, Bidkham and Dewan [J. Math. Anal. Appl. 166 (1992), 191-193] proved <img src=image/17100030_002.bmp>. In this paper we present several interesting generalizations and a refinement of this result, which include some results due to Malik, Govil and others. We also present a refinement of some other results.</p>]]></description>
<pubDate>Jun 2013</pubDate>
</item>
<item>
<title><![CDATA[Solving a Class of Self-adjoint Differential Equations of the Fourth Order and its Algorithms in MATLAB]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=9]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Mehdi&nbsp;Delkhosh&nbsp;</p><p>In many applied sciences, various self-adjoint differential equations arise whose methods of solution are very complex; usually, numerical methods are used to solve them. Leighton et al. investigated oscillation properties of solutions of self-adjoint differential equations of the fourth order under specific conditions. In this paper, we present a new method for solving a class of self-adjoint differential equations of the fourth order. We apply a change of variable in the equation and then obtain an analytical solution under a specific condition. Because this method yields an analytical solution, it is not necessary to use numerical methods to solve the problem.</p>]]></description>
<pubDate>Jun 2013</pubDate>
</item>
<item>
<title><![CDATA[Insertion of A Baire-one Function]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=8]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Majid&nbsp;Mirmiran&nbsp;</p><p>A necessary and sufficient condition in terms of lower cut sets is given for the insertion of a Baire-one function between two comparable real-valued functions on topological spaces <img src=image/17100013_001.bmp> that are <img src=image/17100013_002.bmp>.</p>]]></description>
<pubDate>Jun 2013</pubDate>
</item>
<item>
<title><![CDATA[Robust Within Groups Anova: Dealing With Missing Values]]></title>
<link><![CDATA[https://www.hrpub.org/journals/article_info.php?aid=7]]></link>
<description><![CDATA[Publication date:&nbsp;&nbsp;Jun&nbsp;2013<br /><b>Source:</b>Mathematics and Statistics&nbsp;&nbsp;Volume&nbsp;&nbsp;1&nbsp;&nbsp;Number&nbsp;&nbsp;1&nbsp;&nbsp;<p>Jinxia&nbsp;Ma&nbsp;and Rand&nbsp;R. Wilcox&nbsp;</p><p>The paper considers the problem of testing the hypothesis that J≧2 dependent groups have equal population measures of location when using a robust estimator and there are missing values. For J = 2, methods have been studied based on trimmed means. But the methods are not readily extended to the case J > 2. Here, two alternative test statistics were considered, one of which performed poorly in some situations. The one method that performed well in simulations is based on a very simple test statistic with the null distribution approximated via a basic bootstrap technique. The method uses all of the available data to estimate each of the marginal (population) trimmed means. Other robust measures of location were considered, for which imputation methods have been derived, but in simulations the actual Type I error probability was estimated to be substantially less than the nominal level, even when there are no missing values.</p>]]></description>
<pubDate>Jun 2013</pubDate>
</item>
</channel>
</rss>