A Case Study of Innovation in the Implementation of a DSS System for Intelligent Insurance Hub Services

This paper presents a case study of the 'DSS INSURANCE HUB' project, carried out in the context of the digital transformation of the insurance service sector. In the first part of the paper, a core set of Key Performance Indicators (KPIs) of insurance service performance is identified, mainly tracking agents' activities, starting from the Plan, Do, Check, Act (PDCA) mapping of the claims-related insurance processes. The study then focuses on the implementation of a Long Short-Term Memory (LSTM) artificial neural network predicting the value of agent-related KPIs. In particular, the neural network is tested on the prediction of the KPI called SP, defined as the ratio between the cost of claims and the insurance premium collected. In order to validate the LSTM model, further Artificial Records (AR) are added to the training dataset, generating 2,800 records of variables. The LSTM-AR approach increases the LSTM accuracy by 25%. The adopted approach is typical of real case studies where little data is often available. The LSTM model, created for the SP prediction, is also suitable for calculating the value of other KPIs. The formulated KPI dashboards are implemented in a Decision Support System (DSS) platform providing agent activity and company information, as well as opportunities to improve the business processes.


Introduction
A Decision Support System (DSS) is an essential tool in various sectors, as it is powerful in distilling actionable information from the large amounts of data available to corporations. DSS systems use Artificial Intelligence (AI) algorithms that increase knowledge of the mechanisms enabling the optimization of company processes. For these reasons, DSS systems have in recent years been adopted by a growing number of companies active in the insurance, healthcare, banking and retail sectors [1][2][3].
This work shows that innovative tools, such as AI [4], [5] and Data Mining (DM) algorithms, in combination with KPIs, can bring clear benefits to the insurance service sector by facilitating the improvement of processes. AI can play a crucial role in automating and improving insurers' daily operations involving data processing. Downstream of the process re-engineering, a study was carried out to design and implement a DSS supporting service sales activities in the insurance sector. The paper reports the results of a project involving the development of advanced Business Intelligence (BI) algorithms and dashboards for the control and monitoring of work activities, thus constructing the DSS for the insurance sector. Specifically, through the use of new approaches and methodologies, efficient KPIs and LSTM algorithms suitable for process mapping and engineering are created. Starting from the basic knowledge of company know-how, an innovative knowledge system is created that increases the stock of basic knowledge according to the principles of the "Frascati Manual" of research [6]. The DSS is potentially useful for the:
- optimization of services according to the availability of resources;
- intelligent scheduling of customers to whom services are to be offered;
- segmentation of customers and of the associated services;
- formulation of KPIs;
- forecasting of KPIs.
The basic functional scheme of the research project is shown in Figure 1, in which different modules can be distinguished: (1) Process Engineering module: all the 'AS IS' business processes and company actors' actions are analyzed; the individual activities are mapped by means of the Plan, Do, Check, Act (PDCA) logic defined by the ISO 9001:2015 standard; the PDCA is an ordered method consisting of sequential phases, useful to both managers and operators in dealing with business problems, and allows for continuous improvement of services. The PDCA makes it possible to identify the process stage at which the DSS analytics can be allocated. (2) KPI module: KPIs make it possible to detect the trend of crucial activities that can affect company productivity. (3) DSS module: it optimizes the production processes by providing operational suggestions based on data trend analyses and KPI forecasting. KPI estimation is a very important aspect of BI. In fact, KPIs are applied in different application fields: monitoring services in the food industry [7], monitoring production in working sites [8], and advertising services linked to urban squares [9]. KPIs and DSSs can be further improved by data mining and big data technologies [10], and can be structured in different levels of data analysis, as in the multilevel analytics model applied to car services [11]. The DSS can be constituted by different data mining algorithms integrated into a single information system based on big data connections [12], [13]. AI is adopted for the improvement of visual merchandising in Global Distribution [14], for predictive maintenance in industrial production [15], for sales prediction [16], [17], and for driver behaviour estimation [18]. All the listed application fields highlight the importance of a DSS embedding AI, which must be constructed by analysing the production processes and accurately defining the variables, which differ for each case study.
The proposed work focuses on how to set up a DSS following the specifications of insurance services and the design approaches typical of the analysed case study.

Objectives and Design Methods
The first step in engineering new processes is the mapping of the AS IS business processes, performed according to the PDCA, also called the Deming Cycle, which consists of four iterative phases cyclically repeated. During the 'Plan' phase, problems, i.e., criticalities, are identified and an improvement plan with corrective actions is developed. In the next step, the 'Do' phase, the plan is executed on a test basis by taking small steps. The 'Check' phase examines feedback and makes corrections to the improvement plan. Finally, the 'Act' phase refines and standardizes the action plan. This management method is applied to the following areas of interest of the insurance services:
- claims area;
- commercial area;
- administrative area;
- compliance area.
Figure 2 shows the PDCA diagram created for the Claims area, reporting the critical issues, the corrective actions, and the purpose of each phase of the cycle. The main critical issues found are the difficulty of communication between claims handlers and agents, which often occurs via email, slowing down communication and lengthening waiting times; furthermore, the data available to agents is often not up-to-date or incorrect. These problems are solved by introducing an innovative DSS platform that allows the automatic management of data about negotiation trends, favoring the constant monitoring of specific parameters such as retention, agent efficiency, portfolio growth per single employee, sales growth rate, and others.
Among the KPIs of the insurance domain, those of greatest interest are:
- Portfolio Turnover: it quantifies the economic loss of the company's portfolio;
- Acquisition: the ratio between customers acquired and initial customers in a period, indicating the agency's ability to acquire new customers;
- Churn: the ratio between customers lost and initial customers in a period, measuring the churn rate in a given period;
- Cross Selling: it indicates to what extent a customer buys more than one product and in which segments.
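As a minimal illustration of how the ratio-based KPIs above could be computed, the sketch below defines the Acquisition and Churn indices; the function names and the example figures are hypothetical, not taken from the project's codebase:

```python
def acquisition_rate(customers_acquired: int, initial_customers: int) -> float:
    """Acquisition KPI: customers acquired relative to the initial customer base."""
    return customers_acquired / initial_customers

def churn_rate(customers_lost: int, initial_customers: int) -> float:
    """Churn KPI: customers lost relative to the initial customer base."""
    return customers_lost / initial_customers

# Illustrative figures for one agency over one period
initial, acquired, lost = 1000, 150, 40
print(acquisition_rate(acquired, initial))  # 0.15
print(churn_rate(lost, initial))            # 0.04
```

In practice these inputs would be read from the monthly datasets that the DSS already ingests, so each index can be recomputed per period and compared across periods.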
Starting from the calculation of these indices, and from their evaluation against the thresholds set by the administrator, the DSS suggests activities to be carried out by human resources, automating and engineering the service processes. The support provided by the DSS is thus based on the evaluation of the KPIs and on the output of the LSTM [19], [20] algorithm predicting KPIs. All the tools made up of KPIs and algorithms constitute the DSS system designed by the Use Case Diagram (UCD) of Figure 3, where it is possible to distinguish the actors of the DSS system (Administrator and System) and all the involved functions, including KPI estimation and LSTM prediction. In this scheme, the administrator can access all the KPIs indifferently and can monitor any dashboard; agents belonging to the "Commercial", "Claims" and "Administrative" areas can upload datasets useful for the categorization of files, customers and collaborators on the platform; furthermore, agents can check the KPI trends within the dashboard based on the data processed by the DSS. The prediction also makes it possible to optimize resource management. The choice to monitor and predict KPIs is due to the importance of these parameters for the efficiency of business activities. In particular, an LSTM neural network was developed and tested on the prediction of SP, the ratio between the cost of claims and the insurance premium, which is a KPI of the Claims area relating to collaborators. The architecture of the adopted LSTM neural network is shown in Figure 4. Figure 5 shows the UML Sequence Diagram (SD) specifying the timing and control aspects of the KPI display. In detail, this diagram schematizes the sequence of operations that leads to a correct display of the KPI graphs. In this way, the user is able to constantly check the trends of the graphs and to view any changes useful for discerning decision states.
The agents can access the value of the main KPIs, which the DSS provides together with a traffic-light indication. A threshold is defined for each KPI: when the value of an indicator exceeds its threshold, the value is marked in red, indicating an alert (anomalous value). The developed DSS system accesses the dataset and calculates the KPIs from these data. Each dataset contains information relating to a specific month. The system allows employees to view the graphs obtained from data processing and to compare, for example, the values of the graphs of a certain KPI over different periods. The LSTM neural network is tested to predict the SP value (Claims area); the corresponding output is shown in Figure 7. Concerning the graphical dashboard of the DSS, Fig. 8 shows pie charts summarizing the analysis of some parameters relating to the selected employee. In this figure, in which the data for different months are compared, the details on the various business units are shown; for example, the fire unit is highlighted with an indication of the number of insurance policies taken out.
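The traffic-light logic described above can be sketched as follows; the threshold value and the higher-is-worse convention used here are illustrative assumptions, not the project's actual settings:

```python
def kpi_colour(value: float, threshold: float) -> str:
    """Return 'red' (alert, anomalous value) when a KPI exceeds its
    administrator-defined threshold, 'green' otherwise.
    Assumes higher values are anomalous, as for the SP ratio
    (cost of claims / insurance premium collected)."""
    return "red" if value > threshold else "green"

# Hypothetical SP check: claims cost over collected premium for one agent
sp = 125_000 / 100_000                 # SP = 1.25
print(kpi_colour(sp, threshold=1.0))   # red
```

KPIs for which lower values are problematic (e.g. retention) would simply invert the comparison; each KPI keeps its own threshold set by the administrator.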

Results and Discussion
In this way, it is possible to consult the data of each agent easily and efficiently. The LSTM neural network, which provides KPI predictions, is implemented in the DSS by means of the Python language. This network is tested on the main SP for each employee. Fig. 9 illustrates the architecture of the LSTM network.
The LSTM model is trained by randomly extracting 640 records from the dataset, i.e., 80% of the data, as the training dataset. The remaining 20%, corresponding to 160 records, is used for model validation. This choice ensures a good balance between the training and testing datasets and an optimal model convergence. Training adapts the parameters of the model, while validation provides an impartial evaluation of it. The LSTM layer (see Fig. 9), which receives the inputs from the reshape layer, has 300 neurons. It is followed by 3 dense layers with ReLU (rectified linear) activation functions. After each of these dense layers there is a dropout layer with a dropout fraction of 0.5, in order to reduce overfitting of the neural network. Finally, a last dense layer computes the model output. This choice of hyperparameters yields the best convergence of the neural network.
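The random 80/20 split described above can be reproduced with a sketch like the following; the dataset here is a synthetic placeholder (800 records with an assumed 12 features), since the project's real records are not available:

```python
import numpy as np

rng = np.random.default_rng(42)
dataset = rng.normal(size=(800, 12))   # placeholder: 800 records, 12 features

# Randomly extract 80% of the records (640) for training;
# the remaining 20% (160) are held out for validation.
indices = rng.permutation(len(dataset))
split = int(0.8 * len(dataset))
train = dataset[indices[:split]]
validation = dataset[indices[split:]]

print(train.shape, validation.shape)  # (640, 12) (160, 12)
```

Shuffling before splitting avoids any ordering bias in the monthly records, which matters when the split is repeated across experiments.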
Appendix A lists the LSTM model implementing the above-mentioned hyperparameters. Figure 10 shows the LSTM performance graphs for the SP prediction, plotting the Loss (Fig. 10 (a)) and the Accuracy (Fig. 10 (b)). At first, a dataset consisting of 800 records is used, obtaining the cyan curves of Fig. 10. As can be seen, the loss function measured on the validation set becomes small already in the first few epochs, due to the correct choice of the training dataset; after about 150 epochs, it stabilizes around the very low value of 0.003. The accuracy reaches a value close to 0.5, which indicates that the dataset is probably complex to process. In order to improve this last result, the LSTM with Artificial Records (LSTM-AR) approach, discussed in [21], is adopted. As the available data are not sufficient, the dataset is fed with artificial records generated from the available data. The generation of new data is performed randomly, to minimize the risk of overlooking correlated data that could decrease the model performance. The new dataset containing artificial data is made up of 2,800 records and is used to train and test the model again. The orange plots in Figure 10 show the newly obtained results. The accuracy of the validation phase has increased by 25%, reaching a value of 0.75. The trend of the loss function for both the LSTM and LSTM-AR approaches remains of the same good order of 10^-3. After the validation of the LSTM neural network, the prediction of the SP KPI of the agents is estimated. Figure 11 shows the predictions of SP for some collaborators for a selected month, adopting the optimized LSTM network. The LSTM algorithm is developed in the Python language in order to facilitate its integration into the prototype platform, which behaves as an Enterprise Resource Planning (ERP) system properly designed for insurance services.
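One simple way to grow a small dataset with artificial records is to perturb randomly sampled real records with small Gaussian noise. The sketch below is only an illustrative assumption of such a scheme (the actual generation procedure is the one discussed in [21]); it takes the 800 real records to the 2,800 used for the LSTM-AR training:

```python
import numpy as np

def augment(records: np.ndarray, n_artificial: int,
            noise_scale: float = 0.05, seed: int = 0) -> np.ndarray:
    """Create artificial records by adding Gaussian noise to randomly
    sampled real records, then append them to the original dataset."""
    rng = np.random.default_rng(seed)
    base = records[rng.integers(0, len(records), size=n_artificial)]
    artificial = base + rng.normal(scale=noise_scale, size=base.shape)
    return np.vstack([records, artificial])

real = np.random.default_rng(1).normal(size=(800, 12))  # placeholder data
augmented = augment(real, n_artificial=2_000)           # 800 real + 2,000 artificial
print(augmented.shape)  # (2800, 12)
```

Keeping the noise scale small preserves the correlations present in the real records, which is the stated concern when generating data randomly.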
Other Graphical User Interfaces (GUIs) implementing neural networks and dashboards [21] can furthermore be considered for other prediction calculations, improving the analysis for Business Intelligence (BI).

Conclusions
A DSS for the estimation and prediction of KPIs relating to agent and company performance has been developed. By monitoring KPIs, a company working in insurance services is able to evaluate its performance over time, focusing attention on the processes having the greatest impact on company productivity. Based on the alerting thresholds implemented for each KPI, the DSS provides operational suggestions. The DSS, together with the LSTM neural network, solves the critical issues highlighted in the PDCA regarding the efficiency of the activities. Therefore, through the KPI dashboards, it is possible to monitor the status of activities involving the company agents. These dashboards make it possible to reduce the time and functional gap detected between the various areas and to optimize their production. The model based on the LSTM neural network has been tested for the prediction of the important SP parameter, a main KPI defined for the Claims area. The implemented LSTM-AR neural network predicts the value of the SP parameter for the Claims area with a validation accuracy of 75%. The LSTM performance can be further increased by considering and selecting new real data on the activities of the agents, in order to improve the training dataset. The LSTM network can also be applied to the prediction of other KPIs. The work has been developed within the framework of an industrial research project and can be considered a pilot model for constructing DSSs for companies providing services and managing a large number of human resources.