Why software requires quality assurance


Definitions of software quality are often limited to general statements: the product can be used with little effort by its intended group of users, it is portable, and it performs the required functions correctly, reliably and efficiently. In addition, in the interests of the software developers, the effort required for testing, maintenance and modification should be as low as possible. However, this by no means defines the quality of the product sufficiently.

The properties listed are qualitative in nature and therefore cannot be verified consistently across a large number of participants, because these quality terms are not specific enough for a review. Discussions within the development team or with customers are currently often characterized by the fact that, although everyone involved demands criteria such as user-friendliness or reliability, each person understands something different by these properties. For one customer, user-friendliness means a menu-driven interface; for another, a command-driven one. User A measures efficiency in the execution time of a dialog function, user B in the system resources required on the computer system.

But even within the development group itself, the requirements are not uniform. One developer intends to document each programming instruction inline in prose; another, more common in practice, sees only the need to record the author's name in the program. For some systems analysts, modularization means breaking a complex task down into atomic pieces, while others define modularity similarly but consider fifty modules sufficient where the first would demand a hundred. The list of differing views on software quality properties can be extended at will.

If the requirements listed at the beginning are not to remain a dead letter, quantitative requirements must be defined for the product. At this point one currently experiences the first disappointment: apart from a large number of qualitative properties, interested parties will find suggestions for quantitative parameters in only a few publications.

Characterization varies according to product type

This may be because many parameters, viewed individually, appear trivial compared to a qualitative statement of a requirement. Ultimately, however, only the seemingly banal quantitative parameter can actually be verified, although its concrete form can vary depending on the product type and application environment.

The general procedure for developing quality measures begins with the definition of software quality criteria. These criteria are then made more concrete in a top-down procedure using additional sub-criteria. This refinement is continued step by step until a level is reached at which the qualitative properties can be converted into quantitative metrics. With these quantitative parameters, formal properties of a program such as size or complexity can be measured. The measurement results obtained at the lowest level are then condensed into statements at the criteria levels in a bottom-up approach. Only through this aggregation does one obtain quantified statements about the overall quality or about individual qualitative properties of software products.
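The top-down refinement and bottom-up aggregation described above can be sketched as a small tree walk. The criterion names, metric scores and the unweighted-mean aggregation below are purely illustrative assumptions; the article does not prescribe a concrete model.

```python
# Hypothetical sketch of the top-down/bottom-up quality model described above.
# Criterion names, scores and the aggregation rule are illustrative assumptions.

def aggregate(node):
    """Condense leaf metric scores (0..1) bottom-up into criterion scores."""
    if "score" in node:                       # leaf: a measured quantitative metric
        return node["score"]
    children = node["children"]
    # simple unweighted mean; a real model would weight the sub-criteria
    return sum(aggregate(c) for c in children) / len(children)

quality_model = {
    "name": "maintainability",                # top-level qualitative criterion
    "children": [
        {"name": "readability", "children": [
            {"name": "comment_ratio", "score": 0.8},          # measured metric
            {"name": "avg_identifier_length", "score": 0.6},  # measured metric
        ]},
        {"name": "modularity", "children": [
            {"name": "avg_module_size", "score": 0.7},        # measured metric
        ]},
    ],
}

print(round(aggregate(quality_model), 2))
```

The leaves hold the measurable parameters; each pass up the tree condenses them into a statement about the next-higher criterion, exactly the bottom-up direction the text describes.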

Various institutions are currently working on the definition of such quality criteria and metrics. The aim is a largely uniform use of the terms in order to arrive at a quantitative, and thus objective, assessment of the quality of software products.

To answer the question "Who will ensure quality?", the tasks of a quality assurance body and the alternatives for its procedural and organizational integration must be examined more closely. The tasks of the quality assurance body include analytical measures to determine quality, for example checking defined intermediate products in order to make an acceptance decision. In order to achieve the required quality of the intermediate products, however, constructive measures on the part of the quality assurance body are also necessary. These include the use of suitable tools and methods for system development.

It is impossible to imagine a quality assurance body without some key tasks:

- Preparation of a general quality assurance plan and modification of this framework plan for individual projects.

- Selection and support of the necessary methods and tools for system development.

- Advice and training for programmers and systems analysts in the use of these tools.

- Development of quantitative parameters for the acceptance of defined intermediate products between the individual development phases.

- Reporting to higher-level bodies regarding product development, identified defects and suggestions for improvement.

Competence ensures quality

The organization of the individual work processes, the creation of positions and the assignment of individual tasks will differ from company to company. Defining the competencies and responsibilities of the quality assurance body, as well as creating the necessary information channels to other departments, is a fundamental prerequisite for assuring a defined level of quality.

The organizational theory of business administration deals with different organizational structures such as line, staff, matrix or project organizations. Empirical studies already provide information about the advantages and disadvantages of the individual structures, but their findings are still controversial.

Alternatives to the organizational structure can be the institutionalized quality assurance department or the formation of project-related quality assurance groups.

The advantages of an institutionalized department are the continuity of the employees and the "obligation" through institutionalization. Disadvantages arise, for example, from "departmental thinking" or the rejection of this "supervisory authority" by the software developer.

These disadvantages in particular speak in favor of the formation of project-related quality assurance groups. The participants are delegates from the development team and the future users of the product.

It must be determined in detail in which phases participation by the various groups makes sense. By including and involving all groups connected with the product to be created, frequently encountered complaints such as "we were never consulted" are prevented.

Experience is stashed away

How individual companies have organized the software quality assurance function and its work processes is currently relatively unknown. This may be due on the one hand to the fact that many companies are still in an experimental phase, and on the other hand to a misguided sense of competition that keeps companies from sharing their own experiences, even bad ones, with others.

"Software quality is achieved automatically if only the appropriate tools are used." The claim contained in this advertising statement, however, has not yet been fulfilled, because a large part of the software tools available on the market are merely more or less comfortable editors or instruments tied closely to the programming level.

In the meantime, it has been recognized that software quality is not just a matter of coding. The first phases of the development process such as problem description, definition of user requirements and rough draft already have a great influence on the software quality.

Several different types of tools are required to support the entire software development process. It is not enough here to use any suitable set of instruments for every phase of program development. Rather, the compatibility of the individual tools with one another is of great importance.

The results of using a tool in one phase are the input for the tool in the following phase. A uniform methodology is a prerequisite for this compatibility. Consequently, the basic procedure is first determined independently of specific tools; the tools themselves must then be flexible enough to be applicable to different problems.

If one examines the software development tools available on the market, the discrepancy between these generally recognized requirements and reality becomes clear.

Tools are not good for analysis

Quality assurance comprises both constructive and analytical measures. With the appropriate use of tools, a product is constructed with quality built in. In addition, tools are required to analyze the extent to which the quality requirements have been met during construction. The majority of the tools offered on the market, however, are primarily constructive in nature.

As already mentioned, test criteria must be defined in order to measure quality, and these are to be determined depending on the development phase. So far, such quality criteria have been developed most concretely for the end product "program". Checking quantitative parameters manually, however, is very tedious; for program systems with several thousand lines of code the effort is considerable. This results in the need for suitable tools to accomplish the task.

A large number of test criteria relate to the formal properties of a program and can therefore also be measured automatically. In addition, however, there are also properties that previously could only be assessed by humans. Tool support is also possible here, for example, to carry out a preselection of the program parts to be assessed and to manage and further process the result of the assessment.
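Formal properties of the kind the text mentions lend themselves to automatic measurement. As a minimal sketch, the tool below counts two assumed example metrics, non-blank lines and comment ratio; the article does not specify which parameters such a tool measures.

```python
# Illustrative sketch of measuring formal program properties automatically.
# The metrics chosen here (line count, comment ratio) are example assumptions,
# not the parameters any particular tool actually uses.

def measure(source: str) -> dict:
    """Compute simple formal metrics for a piece of source code."""
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    comments = [line for line in lines if line.startswith("#")]
    return {
        "loc": len(lines),                                        # non-blank lines
        "comment_ratio": len(comments) / len(lines) if lines else 0.0,
    }

sample = """\
# compute the sum of a list
def total(xs):
    # accumulate
    return sum(xs)
"""
print(measure(sample))
```

For properties that only a human can judge, such a tool could still preselect the relevant program parts and record the reviewer's verdicts alongside the automatic measurements, as the text suggests.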

These functional requirements for a tool to measure program quality were the starting point for the Institute for Information Systems at the University of Saarbrücken to develop the program system "Epsos-D", which enables the assessment of formal properties and also supports the testing of functional properties. The program system is a dialog-oriented testing environment in which various tools for quality assurance are combined.

The system is used in several steps. First, the formal quality properties of programs are determined:

- Lexical analysis

All elements contained in the program, such as instructions, data or comments, are recorded and stored in the database.

- Structural analysis

The sequence of instructions as executed at run time is represented in a program graph.

- Measurement of quality characteristics

Based on the previous analysis results, metrics for the quality features are determined. A distinction is made between quality features that can be measured automatically and those that can only be assessed in dialog with a human.

- Assessment of the quality criteria

Assessments of the quality criteria are derived from the measured variables.

The results condensed on different levels then offer a good overview of the quality of programs and program parts. The measured variables reveal weak points in great detail.
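The four steps above can be sketched as a small pipeline. All function names, the token and graph representations, and the final verdict are illustrative assumptions; they are not Epsos-D's actual interfaces.

```python
# Illustrative pipeline for the four analysis steps; every name and data
# shape here is an assumption, not Epsos-D's actual design.

def lexical_analysis(source):
    """Step 1: record all program elements (here: whitespace tokens)."""
    return source.split()

def structural_analysis(tokens):
    """Step 2: build a trivial successor graph over the instruction stream."""
    return {i: [i + 1] for i in range(len(tokens) - 1)}

def measure(tokens, graph):
    """Step 3: derive automatically measurable metrics from the analyses."""
    return {"elements": len(tokens),
            "edges": sum(len(succ) for succ in graph.values())}

def assess(metrics):
    """Step 4: condense the metrics into a criterion-level verdict."""
    return "small" if metrics["elements"] < 100 else "large"

tokens = lexical_analysis("READ A ; B := A + 1 ; WRITE B")
graph = structural_analysis(tokens)
metrics = measure(tokens, graph)
print(metrics, assess(metrics))
```

The point of the structure is the one the article makes: each step consumes the stored results of the previous one, so the condensed verdicts at the top can always be traced back to detailed measurements below.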

On the basis of the measurement results, those program elements are then determined which, because of their construction, contain software-technical risks. This preselection can be used to subject such risky parts of the program to a particularly intensive functional test.
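The preselection step can be sketched as a simple threshold filter over the measurement results. The metric names and threshold values below are illustrative assumptions.

```python
# Sketch of the preselection described above: flag program elements whose
# measured values suggest software-technical risk. Metric names and
# thresholds are illustrative assumptions.

RISK_THRESHOLDS = {"cyclomatic_complexity": 10, "loc": 200}

def preselect(measurements):
    """Return the names of elements exceeding any risk threshold."""
    risky = []
    for name, metrics in measurements.items():
        if any(metrics.get(m, 0) > limit for m, limit in RISK_THRESHOLDS.items()):
            risky.append(name)
    return risky

measurements = {
    "parse_input":   {"cyclomatic_complexity": 14, "loc": 90},
    "format_report": {"cyclomatic_complexity": 4,  "loc": 60},
    "main_loop":     {"cyclomatic_complexity": 8,  "loc": 250},
}
print(preselect(measurements))
```

The flagged elements are then the natural candidates for the particularly intensive functional test the text calls for.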

The program system is implemented on an IBM-PC-XT. It is designed for use in software development and quality assurance on microcomputers, but also as a workstation for testing programs on mainframes.

Jörg Ahlers and Wilfried Emmerich are research assistants at the Institute for Computer Science at Saarland University, Saarbrücken.

Information on the Epsos-D program system is available from the Institute for Information Systems at Saarland University, Director Prof. A.-W. Scheer, Building 14, 6600 Saarbrücken 11.