Software Quality Methodology to Train Engineers as Evaluators of Information Systems Development Tools

Having software tools that facilitate rapid application development and generate information systems is a requirement in a globalized world. In the field of engineering training worldwide, it is essential to measure and evaluate the rapid design tools used to obtain applications faster and to generate quality information systems, in order to determine which are best for this purpose. This paper revises MECRAD, a methodology for the technical evaluation of visual environment tools for generating information systems, in which this type of commercial product is evaluated using basic elements of international standards as a reference, and proposes its introduction into engineering training programs. With this methodology one can evaluate and select, in an effective and easy way, those tools and development platforms best suited to create applications in visual environments, in order to generate information systems with quality and sustainability. It is useful for higher education institutions, organizations, companies and systems' end users, among others.


Introduction
To improve any software, it is necessary to measure its attributes through a significant set of metrics that provide indicators leading to a technical quality assessment of the product. By carrying out this process with a software qualimetric model, it is more likely that one will determine the requirements to comply with. The way quality characteristics are defined in most standard models does not allow their direct measurement, so metrics must be established to correlate these features in a software product.
The first step in designing a software qualimetric model is determining the relevant quality properties. Usually these are described through a hierarchical tree structure in which the characteristics appear at the highest level, the sub-characteristics at the intermediate level and the attributes at the lowest. Its goal is to facilitate the qualitative and quantitative evaluation of these components [1], [2], [3], [4].
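Such a three-level quality tree can be represented with a simple data structure. The sketch below is only an illustration of the hierarchy described above; the characteristic, sub-characteristic and attribute names are hypothetical placeholders, not the actual contents of any standard model.

```python
# Sketch of a hierarchical quality model: characteristics at the top level,
# sub-characteristics at the intermediate level, measurable attributes at
# the leaves. All names are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class Attribute:
    name: str


@dataclass
class SubCharacteristic:
    name: str
    attributes: list = field(default_factory=list)


@dataclass
class Characteristic:
    name: str
    subcharacteristics: list = field(default_factory=list)


# Example branch of the tree (placeholder names):
usability = Characteristic(
    "Usability",
    [SubCharacteristic("Learnability", [Attribute("Help availability")])],
)

print(usability.name, "->", usability.subcharacteristics[0].name)
```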
At the international level, tools that facilitate the creation of new information systems for the most diverse applications emerge periodically. It is imperative to identify and evaluate these tools as quickly as possible, to determine whether they meet the quality requirements established globally or by the software houses that produce them [5], [6], [7], [8].

Objectives
When training world-class software and systems engineers, it is necessary to carry out evaluations of various products. Therefore, it is essential to evaluate the tools used to design and develop software applications, in order to determine which are the best for programmers to work with. Indeed, some recent proposals even focus on the evaluation of software development processes.
As an example of such particular instruments we take RAD (Rapid Application Development) tools, which are commonly part of an IDE (Integrated Development Environment), a very popular framework among application programmers for creating information systems. Thus, training software or informatics engineers with a broad knowledge and command of qualimetric models and tools, such as those that evaluate RAD tools for the generation of quality information systems in order to select the most suitable one, has become an up-to-date technical necessity. That is the reason why this article proposes introducing the training in and manipulation of this kind of quality evaluation methodology into high-level engineering programs.

State of the Art and Related Works
During the last three decades, a variety of software quality models has been proposed. They are very useful, but in turn very generic, so they must be adapted or reconfigured in order to derive more particular, concrete, applicable models [21].

Methodology
A purpose-built methodology was generated (with models, processes, techniques and tools) that allows making comparisons and carrying out technical evaluations of RAD tools; it is briefly described below. The operation method is quite simple: the evaluators only have to fill in the data requested by the program. The rest of the process is fully automated. The complete evaluation process culminates in a technical opinion on the quality reached by the product, together with the recommendations and criteria to be followed. This result can be analyzed by those interested according to the purposes of the evaluation, for example the acquisition of a product.

Evaluation Process
When evaluating software quality, one first establishes the quality requirements model under which the evaluation is specified, designed and executed. The evaluation activities are then laid out as a process. In this proposal, the process comprises five activities, as shown in Figure 1.

Compacting MECRAD Pattern
MECRAD's complete model is shown in Figure 2. Since one of the purposes of this technical model is to provide a basis of comparison for any kind of user (expert or beginner), the metrics suggested for beginner users are defined as a subset of the complete model, shown in Figure 3.

Metrics and Evaluation Scale Definition
The quantifiable attributes must be measured quantitatively through metrics. The result, the measured value, can then be mapped onto a scale. This value does not by itself show the level of satisfaction of the requirements; for this purpose the scale is divided into ranges corresponding to different degrees of satisfaction. There are several ways to do this. For example, one can simply divide the scale into two categories, unsatisfactory and satisfactory, or create a scale with five levels (mandatory categories) for an evaluated product: levels A, B, C, D and E, as shown in Figure 4. Level A is the best case, the ideal level to achieve. Level B is considered achievable within reasonable use of the available resources. Level C indicates the control point, the one that should be maintained so that the system does not deteriorate further. Level D is the user's acceptance limit value. Finally, the worst case is level E, where the product does not meet minimum quality requirements.

Since a metric is defined as "a quantitative measure of the degree to which a system, component or process possesses a given attribute" [5], in order to properly measure the performance of the different tools one must follow these guidelines:
• Observation of the software performance in order to evaluate the difference between the current execution results and the requirements specification (a view on test and quality validation).
• Unexpected occurrences in execution time or resource utilization during the software operation.

By evaluating all the attributes belonging to a given subcharacteristic, one obtains an average value that evaluates that subcharacteristic in particular. Then, by evaluating all the subcharacteristics of a given characteristic, another average value is calculated that evaluates that characteristic in particular. Finally, by evaluating all the characteristics, a new average value is calculated that corresponds to the software product as a whole. The mathematical method is the following.

Quality indicator of the product:

    ICP = (1/n) * sum_{j=1..n} ICC_j

where ICC_j is the quality indicator of characteristic j and n is the number of characteristics in the model.

Quality indicator of characteristic j:

    ICC_j = (1/m) * sum_{k=1..m} ICSC_k

where ICSC_k is the quality indicator of subcharacteristic k and m is the number of subcharacteristics within characteristic j.

Quality indicator of subcharacteristic k:

    ICSC_k = (1/K) * sum_{x=1..K} VAA_x

where VAA_x is the value assigned to attribute x and K is the number of attributes within subcharacteristic k.

Thus, when applying the evaluation format, three types of metrics are used:
• Direct instructions to the user for carrying out a specific task, taking note of certain indicators (for example: time, number of occurrences of a certain event, etc.). The result is a quantity within the proposed range.
• Direct questions to the user to determine the existence of an essential attribute within the evaluated tool. The result is either affirmative (1) or negative (0).
• Metrics that depend on the value of a certain indicator derived from the realization of a certain task. They serve to calculate a set of parameters with values within the proposed interval.

In order to support the model, forty-four metrics were documented and developed, as they appear in the formats in Figures 5 and 6. Another eleven metrics were adapted from SUMI [0], making a grand total of fifty-five metrics involved.

Recording the partial and total results of the software quality evaluation is not an easy task. Simple and understandable formats must be chosen to obtain a quick and reliable assessment of the measurement values. Therefore checklists, simple relationship tables and control matrices are implemented. Checklists are questionnaires in which assertions must be confirmed by selecting one of the values given on a scale. These questions are in principle made in such a way that they generate ideas (valuations). They are used to control each separate phase or all the work to be done. A control matrix is a complementary tool related to all aspects of a process that serves to summarize the content and development of a whole system. It usually includes a control variable (what is measured), the measurement form, place and time, the base standard, who does the analysis, who acts and how to act. The control matrices are important for the design, implementation and maintenance of the control system of the obtained results.
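The averaging scheme above is straightforward to express in code. The following is a minimal sketch, not part of MECRAD itself: the model contents and attribute values are hypothetical, and the A–E cut-off values are assumed for illustration, since the paper defines the levels but not their numeric boundaries.

```python
# Sketch of MECRAD-style quality aggregation (hypothetical data).
# Attribute values VAA_x are averaged into subcharacteristic indicators
# (ICSC), those into characteristic indicators (ICC), and those into a
# single product quality indicator (ICP).

def mean(values):
    return sum(values) / len(values)

# model: characteristic -> subcharacteristic -> attribute values in [0, 1]
model = {
    "Functionality": {
        "Suitability": [1, 1, 0.8],
        "Accuracy": [0.9, 1],
    },
    "Usability": {
        "Learnability": [0.7, 0.8, 0.9],
    },
}

def icsc(attribute_values):
    # ICSC_k = (1/K) * sum of VAA_x
    return mean(attribute_values)

def icc(subcharacteristics):
    # ICC_j = (1/m) * sum of ICSC_k
    return mean([icsc(v) for v in subcharacteristics.values()])

def product_quality(model):
    # ICP = (1/n) * sum of ICC_j
    return mean([icc(c) for c in model.values()])

def level(score):
    """Map a score in [0, 1] to the A-E scale (cut-offs assumed here)."""
    for cutoff, lvl in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if score >= cutoff:
            return lvl
    return "E"

score = product_quality(model)
print(round(score, 3), level(score))
```

Note that each level of the hierarchy is a plain unweighted average, matching the formulas above; a weighted variant would only require replacing `mean` at the relevant level.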

Results Discussion
The case studies chosen for the tests of the RAD tools are the commercial visual platforms Visual Studio .NET, NetBeans and Eclipse. The results obtained through the application of the MECRAD tool are the following: Visual Studio .NET obtained a general average evaluation of 0.89 (89%) among beginners and a score of 0.88 (88%) among experts (see Figure 7). Its weakness lies in portability, which is understandable given its dependence on Microsoft's Windows platform. Its quality classification level is Satisfactory, without recommendations, since it does not require modifications in its design (only updating) and is accepted thoroughly.
The evaluation results obtained for the other two products differ by only 2%. The quality classification level obtained by these development platforms was Excellent, both for NetBeans (see Figure 8) and for Eclipse (see Figure 9).
To provide a more realistic assessment, the final result is the combination of the evaluations produced by different users of the same type (expert or beginner). This allows a more realistic final technical report, which is presented in Figure 10.
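The simplest way to combine several evaluators' overall scores of the same type into one final figure is to average them per user type. The sketch below is illustrative only; the evaluator names and scores are hypothetical, not data from the study.

```python
# Illustrative combination of several evaluators' overall product scores,
# grouped by user type (expert vs. beginner). All values are hypothetical.
scores_by_type = {
    "expert": [0.88, 0.90, 0.86],
    "beginner": [0.89, 0.91],
}

combined = {
    user_type: sum(values) / len(values)
    for user_type, values in scores_by_type.items()
}

print(combined)
```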

Conclusions and Recommendations
Any of the three visual environment systems mentioned above is considered technically advisable for application development. Therefore, if a decision about the acquisition of one of these environments is required, one must consider other important parameters, such as cost, platform, the systems interacting within the environment, etc.
The model does not contemplate these parameters, since it is limited to the technical quality evaluation of the visual tools themselves. MECHDAV and MECRAD are already in commercial operation, so the information concerning their development and their source code is not available. As future work, it would be advisable to make periodic revisions of the model for its improvement, for example by introducing the evaluation of tools for visual Web site environments.
As a conclusion, we state once more that including the training in and manipulation of this kind of quality tool will enrich the professional repertoire of future systems and software engineers.

Figure 7. Final technical evaluation report of the Visual Studio .NET environment.