Software testing – overview


Bibliographic description:

Allamov, O. T. Software testing – overview / O. T. Allamov // Молодой ученый. — 2016. — No. 9.5 (113.5). — P. 12–17. — URL: https://moluch.ru/archive/113/29764/ (accessed: 15.11.2024).



I. INTRODUCTION

The common user of modern software most likely expects the software to perform effectively, be usable and function reliably. These non-functional requirements have to be assessed by software developers throughout the development process. This paper focuses on a subtopic of the third requirement, software reliability. Reliability can be assured by software quality engineering, one of whose main topics is software testing.

By using the principles and activities of software testing in the software development process, developers aim to make their software more reliable by increasing the functional correctness and stability of the system.

This work aims to provide a compact survey of software testing with a scope that lets the reader encounter every relevant topic in this context. The topics covered here might, for example, be used as a structure for a lecture on software testing.

Topics are described only at surface level here. Certain topics will be investigated more thoroughly in separate works appended to this overview.

Section II introduces terms that are commonly used in the software testing context, providing a terminology foundation for the rest of this work. Section III gives a brief survey of one of the most current software testing standards, which is used to structure the remaining topics. Section IV covers the fundamentals, and section V covers the main objectives of testing. Section VI embeds software testing into the software (development) lifecycle, section VII surveys common software development models, and section VIII covers the management of the testing process. A classification of tests is given in section IX and subsequently applied to levels of testing in section X. Techniques for test case derivation are given in section XI, and a survey of tools that support the software testing process is given in section XII.

II. TERMINOLOGY

This section clarifies some of the fundamental terms used in software testing according to the published standards and definitions that have found consensus in the international testing community. The terms presented here form the essence of the software testing lexicon and will be used consistently throughout this work.

A. Error, Defect, Fault and Failure... or Bug?

The existing definitions are not precise, and there are differing points of view concerning a unified general definition of these terms. Until now there has been no definitive software testing standard, except the ISO/IEC/IEEE 29119 standard that is still in development. Below we have tried to "filter" the most exhaustive definitions from the existing standards and software testing books. According to [1], the term bug lacks precision, as it can mean an incorrect program code, an incorrect program state or an incorrect program execution. Because of this ambiguity the term will not be used in this work.

"An Error is a human action that produces an incorrect result" [2].

A Fault is a flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A Fault, if encountered during execution, may cause a failure of the component or system.

A Failure is a deviation of the component or system from its expected delivery, service or result. The term therefore refers to the inability of a system or component to perform its required functions within specified performance requirements. The definitions of fault and failure allow us to distinguish testing from debugging [3].

The software testing community frequently uses synonyms of the previously defined terms, for example defect for a fault and infection for an error. These alternatives are, according to E. W. Dijkstra, pejorative terms meant to increase the programmer's sense of responsibility. Our infection chain (the cause-effect chain from a fault to a failure) is thus as follows:

Fault → Error → Failure
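To make this chain concrete, consider the following minimal Java sketch (our own illustration; the Discount class is hypothetical). A fault in the code infects the program state with an error during execution, which then surfaces as an observable failure:

```java
public class Discount {
    // Fault: the condition uses ">" instead of ">=",
    // so a purchase of exactly 100 gets no discount.
    public static double discountedPrice(double price) {
        if (price > 100) {          // fault (incorrect statement)
            return price * 0.9;
        }
        return price;
    }

    public static void main(String[] args) {
        // Executing the fault infects the program state (error):
        // the intermediate result 100.0 is wrong, 90.0 was expected.
        double result = discountedPrice(100.0);
        // The error propagates to the output and becomes an
        // observable failure: the system deviates from its
        // expected delivery.
        System.out.println("Price to pay: " + result); // prints 100.0
    }
}
```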

B. Testing and Tests

Testing is the process consisting of all life-cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect errors [4].

"A Test is a set of one or more test cases" [5].

A Test case is a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement [2].
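As an illustration of this definition, the following JUnit sketch (the Account class and its methods are hypothetical) maps one test case onto its parts: execution precondition, input value, expected result and post-condition:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AccountWithdrawTest {

    // Hypothetical unit under test.
    static class Account {
        private long balance;
        Account(long initial) { balance = initial; }
        void withdraw(long amount) { balance -= amount; }
        long getBalance() { return balance; }
    }

    @Test
    public void withdrawReducesBalance() {
        // Execution precondition: an account with a balance of 100.
        Account account = new Account(100);

        // Input value: withdraw 40.
        account.withdraw(40);

        // Expected result / execution post-condition:
        // the remaining balance is 60.
        assertEquals(60, account.getBalance());
    }
}
```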

C. Debugging

"The process of finding, analyzing and removing a fault that caused the failure in a system or component" [6].

D. Software Quality

"The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs" [7].

E. Verification and Validation

Verification: "Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled" [8].

Validation: "Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled" [8].

III. STANDARDS FOR SOFTWARE TESTING

Almost all goods and services must satisfy certain requirements, and such requirements are typically defined in standards. The aim of a standard is to ensure that the end product complies with market requirements and achieves end user satisfaction. In practice, many software applications, mobile apps, and even full enterprise systems that may not have been developed against any standard are sold to customers every day, and people buy them anyway. Ignoring standards does not automatically translate into poor software quality or lower demand for the end product, as long as the software is not life-critical. Life-critical software, or software whose failure can cause huge losses, should however comply with one of the standards. The real problem is not failing to follow a particular standard; what matters is ignoring or diminishing the importance of software quality.

A standard is a document that provides requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose [ISO].

Nowadays international standards exist for almost all kinds of products and services, and software testing is no exception.

The first standard relevant to software testing was published in 1983 by the IEEE [5]; its latest version was updated in 2008. Previously, standards touching on testing considered only some aspects of it. ISO/IEC JTC1/SC7 Working Group 26 has developed a new international standard on software testing, ISO/IEC/IEEE 29119. This standard consolidates the existing ones: it covers all aspects of software testing and replaces a number of earlier standards.

ISO/IEC/IEEE 29119 consists of the following parts:

- Part 1: Concepts and Definitions. This part is informative; it provides definitions, a description of the concepts of software testing and ways to apply the processes, documents and techniques defined in the standard.

- Part 2: Test Processes. It defines a generic process model for software testing that can be used within any software development life cycle. The model specifies test processes that can be used to govern, manage and implement software testing in any organization, project or testing activity.

- Part 3: Test Documentation. It defines templates for test documentation that cover the entire software testing life cycle.

- Part 4: Test Techniques. It defines one international standard covering software test design techniques that can be used during the test design and implementation process within any organization or software development life cycle model.

- Part 5: Keyword Driven Testing. It defines an international standard for supporting keyword-driven testing.

The first three parts of the standard were published in 2013; the fourth part was expected by the end of 2014 and the fifth was likely to be rolled out in 2015 [ISO].

IV. FUNDAMENTALS

With the rapid development of software systems, system quality receives more and more attention in software engineering. To assure better software quality, more sophisticated software testing methods and techniques are needed to achieve the quality described by the customer requirements [9]. Software testing is a process, or a series of processes, designed to make sure computer code does what it was designed to do and, conversely, that it does not do anything unintended. Software should be predictable and consistent, presenting no surprises to users. During the test process we distinguish different actors: the programmers, the test engineers, the project managers, and the customers. They influence a system's behavior or are impacted by that system [10, p. 10].

V. OBJECTIVES OF SOFTWARE TESTING

Tests can have very different objectives. Test cases written at implementation level aim to ensure that the implementation of a system fulfills a certain level of software quality and stability. But there are further aspects and objectives of software testing that aim to have a system tested and covered from top to bottom, including the part that is actually used by the customer. Such test objectives are described in this section.

A. Correctness

One of the main purposes of software testing is checking whether a system behaves correctly, in the sense of not producing wrong results and not behaving unexpectedly. For this objective, software systems can be tested on different levels, as described later in section X. Correctness tests on any of these levels, however, always operate at implementation level, i.e. one cannot simply check for correctness in terms of the following objectives [11].

B. User Interface

The process of testing user interfaces (UIs) of software systems is a rather nontechnical step in the testing process, and automated or tool-supported testing of a UI is often not appropriate. Hence, a tester is supposed to look at the following aspects:

- Standardization and guidelines: Depending on the use case of the system or the operating system it will run on, there may be standards for how end users expect the UI to be laid out and designed.

- Intuitiveness: UIs should be intuitive for the end user; this aspect may also be covered by guidelines and standards.

- Consistency: UIs should be consistent throughout the system, so that the UI never behaves differently than the user would expect from a previous action.

Accessibility testing is a special case of UI testing in which user interfaces are tested for people with disabilities. This can include specific tests for visual, audio, motion, cognitive or language impairments [12, p. 177f].

C. Usability

Another approach to testing the interface between the user and the software is usability testing. In contrast to the UI testing process, where testers from the developer domain check the UI for conformance to certain aspects, usability testing uses studies of the behavior, expectations and feedback of real users of the software system. This process can be aligned with the general testing process: test cases have to be derived, e.g. a specific use case of the system is executed by surveyed users and the results are reported. Users have to be selected for the survey and their results have to be evaluated. The results can be used to improve the usability of the software, and the next iteration of usability testing can begin [9, p. 143f].

D. Performance

Non-functional requirements for software often include performance requirements such as response times. Performance testing aims at checking whether such requirements are fulfilled [4, p. 5-5]. Performance tests, however, are not always accurate enough to guarantee that a requirement is met in all environments, because they are by nature executed in a test environment [13, p. 803f].
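A minimal sketch of such a test is shown below, assuming JUnit and a hypothetical handleRequest() operation with a 200 ms response-time requirement. A single wall-clock measurement like this is a simplification; real performance testing typically relies on dedicated load-testing tools and repeated measurements:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class ResponseTimeTest {

    // Hypothetical operation whose response time is constrained
    // by a non-functional requirement ("must answer within 200 ms").
    private void handleRequest() {
        // ... implementation under test ...
    }

    @Test
    public void respondsWithinRequiredTime() {
        long start = System.nanoTime();
        handleRequest();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // The threshold and the single measurement are simplifications;
        // results depend heavily on the test environment, as noted above.
        assertTrue("Response took " + elapsedMillis + " ms",
                   elapsedMillis < 200);
    }
}
```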

VI. SOFTWARE TESTING IN THE SOFTWARE LIFECYCLE

The software testing process and the software development process are closely related. Software development models, or development processes, help to organize and control the structure and workflow of software system development. Software testing takes its particular place in the software development lifecycle, and that place depends on the chosen development model [14]. According to [15, p. 30], the software development lifecycle model is the process used to create a software product from its initial conception to its public release.

There are six common phases in the software development lifecycle. In the first phase, initiation, a problem is recognized and a need is identified. During the second phase, definition, the functional requirements are defined and detailed planning for the development begins. The third phase is system design; during this phase the solution to the problem is specified. Phase four comprises programming and testing. In the next phase, evaluation and acceptance, integration and system testing occur. The final lifecycle phase, installation and operation, exists to implement the approved operational plan, control all changes, etc. [16, p. 588-590].

VII. SOFTWARE DEVELOPMENT MODELS

Software development models provide guidelines on how to build software. Such models are mostly standardized, so the development team can concentrate on system development. This helps to achieve consistency and improves the development process itself [16, p. 583].

1) Waterfall: The waterfall model follows a logical progression through the software development lifecycle. It assumes that at the end of one phase the prerequisites for the next phase are known [16, p. 587]. On the one hand, this model brings a number of advantages for the testing team: everything is carefully and thoroughly planned, every detail has been decided on, written down, and turned into software, so the testing team can plan accurately. On the other hand, because testing occurs only at the end, fundamental problems can remain undiscovered until the end of the project [15, p. 32].

2) V-Model: This model defines two processes: one for building the system and one for testing it. Each stage of the development process has a corresponding test type on the testing side; e.g. during the requirements stage of development, acceptance testing is prepared. In this model one half of the development effort is spent on testing. It integrates testing so that it is more effective and helps to discover defects in earlier stages of development [16, p. 588].

3) Iterative Model: The iterative model does not start with a full specification of requirements. Development begins with specifying and implementing a part of the software; this part is then reviewed in order to validate it and to specify new requirements. This process is repeated for each cycle, so testing has to take place in each cycle [17].

4) Agile Model: The agile development model is a type of incremental model. Software is developed in incremental, rapid cycles, resulting in small incremental releases, each of which builds on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. The model is used for time-critical applications [18].

VIII. SOFTWARE TESTING MANAGEMENT

Test management comprises not only classical methods of project and risk management but also knowledge about suitable test methods. It helps to select and implement appropriate measures to ensure that a defined basic product quality will be achieved [14]. Test management therefore includes a test process that gives testing a better structure.

A. Test Process

A detailed test procedure is necessary to integrate testing into the development process [14]. According to [16, p. 157], the software testing process consists of seven steps, which are described below.

1) Organising for Testing: In this step the test scope is defined, i.e. the type of testing to be performed is determined. The organisation of the test team also takes place in this stage. It is further important to assess the development plan and status, as this helps to build the test plan [16, p. 157].

2) Developing the Test Plan: Developing the test plan involves identifying the test risks and writing the plan itself. It is important to follow the same pattern as any software planning process: the structure is the same, but this plan focuses on the degree of risk perceived by the testers [16, p. 157–158].

3) Verification Testing: During this step the testers have to determine that the requirements are accurate, complete, and not in conflict with each other. The second important point is to ensure that the software design will achieve the objectives of the project and that it will be effective and efficient. The software construction has to be verified here, too [16, p. 158].

4) Validation Testing: In this step validation testing is performed. It involves testing the code in a dynamic state. The results of these tests should be recorded [16, p. 158].

5) Analysing and Reporting Test Results: Analysing and reporting test results form the fifth step of the test process [16, p. 158]. The objective of this step is to determine what the team has learned from testing and to inform the appropriate individuals [16, p. 160].

6) Acceptance and Operational Testing: In this step acceptance testing, software installation testing and testing of software changes are performed [16, p. 159].

7) Post-Implementation Analysis: The objective of this step is to determine whether testing was performed effectively and, if not, what changes could be made [16, p. 160].

IX. CLASSIFICATION OF TESTS

Test levels are groups of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.

Test techniques are procedures to derive and/or select test cases, e.g. black box tests and white box tests.

Test types are groups of test activities aimed at testing a component or system with a focus on a specific test objective, e.g. functional test, usability test, regression test. A test type may take place on one or more test levels or test phases.

X. TESTING LEVELS

Throughout the life cycle of a software product, different kinds of testing are performed at different levels.

A. Unit Testing

Unit testing is the level of the software testing process where individual units (components, functions or methods) are tested to see whether they perform as designed. In unit testing, programmers test various program units, such as classes, procedures or functions, until they satisfy a set of precise requirements.
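A minimal unit test sketch follows, assuming JUnit; the isLeapYear function is a hypothetical unit tested in isolation against a set of precise requirements:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class LeapYearTest {

    // Hypothetical unit under test: a single, isolated function.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    @Test
    public void followsGregorianRules() {
        assertTrue(isLeapYear(2016));   // divisible by 4
        assertFalse(isLeapYear(1900));  // century year not divisible by 400
        assertTrue(isLeapYear(2000));   // divisible by 400
        assertFalse(isLeapYear(2015));  // not divisible by 4
    }
}
```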

B. Integration Testing

Integration testing is performed to expose defects in the interfaces and interactions between integrated components or systems. It is performed directly after unit testing and is done in collaboration between software developers and integration test engineers.
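The following sketch, with hypothetical UserRepository and GreetingService components, illustrates the idea: in contrast to a unit test, two real components are wired together, so defects in their interface would surface here:

```java
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class GreetingIntegrationTest {

    // Hypothetical component A: a simple in-memory repository.
    static class UserRepository {
        private final Map<Integer, String> users = new HashMap<>();
        void save(int id, String name) { users.put(id, name); }
        String findName(int id) { return users.get(id); }
    }

    // Hypothetical component B: a service that depends on A.
    static class GreetingService {
        private final UserRepository repository;
        GreetingService(UserRepository repository) { this.repository = repository; }
        String greet(int userId) { return "Hello, " + repository.findName(userId) + "!"; }
    }

    @Test
    public void serviceAndRepositoryWorkTogether() {
        // Both real components are integrated; a mismatch in their
        // interface or interaction would be exposed by this test.
        UserRepository repository = new UserRepository();
        repository.save(1, "Ada");
        GreetingService service = new GreetingService(repository);

        assertEquals("Hello, Ada!", service.greet(1));
    }
}
```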

C. System Testing

The process of testing an integrated system to verify that it meets specified requirements [19].

D. Acceptance Testing

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system [2].

XI. TECHNIQUES FOR TEST CASE DERIVATION

This section focuses on various techniques for writing test cases for a software system. All techniques mentioned here follow a dynamic testing approach, meaning that the tests, and thus the system, have to be executed, in contrast to static testing, which analyzes and inspects the system without executing it [11, p. 22].

The embedding of test case writing into the software development or implementation process can be classified into two distinct approaches: test-last, where test cases are written after the implementation is already done, and test-first, where the test cases are written before the implementation [20].

The goal of test case derivation is always to achieve reasonably good test coverage. The test coverage of a program can be defined as the number of possible input values, program flows or statements that are tested. In theory, developers would like their software to be fully tested for all possible variations of input. The number of possible inputs, however, is effectively infinite for most software systems, so complete input coverage is impossible to achieve. To achieve reasonable input coverage it is possible to test equivalence classes of input values instead; this approach is known as partitioning. Other approaches to increasing test coverage are covering all program flows or statements at implementation level [11].
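The following sketch illustrates partitioning, assuming JUnit; the categorize function and its three equivalence classes are hypothetical. One representative value per class stands in for the practically infinite input domain, and boundary values are commonly added because faults cluster at partition borders:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AgeCategoryPartitionTest {

    // Hypothetical function: the input domain of "age" splits into
    // three equivalence classes: minor (<18), adult (18..64), senior (>=65).
    static String categorize(int age) {
        if (age < 18) return "minor";
        if (age < 65) return "adult";
        return "senior";
    }

    // One representative value per equivalence class.
    @Test
    public void oneRepresentativePerPartition() {
        assertEquals("minor",  categorize(10));
        assertEquals("adult",  categorize(30));
        assertEquals("senior", categorize(80));
    }

    // Boundary values at the partition borders.
    @Test
    public void boundariesBetweenPartitions() {
        assertEquals("minor",  categorize(17));
        assertEquals("adult",  categorize(18));
        assertEquals("adult",  categorize(64));
        assertEquals("senior", categorize(65));
    }
}
```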

A. Top-down and Bottom-Up Testing

Top-down and bottom-up testing are techniques used to derive test cases for a given software system. Using a test-last approach, test coverage for a complete software system can be achieved by two different strategies:

- Top-Down Testing: Test the entry point of the system, e.g. the main method, first and proceed to underlying methods that are being called.

- Bottom-Up Testing: Test leaf methods, methods that do not invoke other methods, first and proceed to methods that depend on these.

This approach, however, seems to be obsolete, as object-oriented programming languages often result in cyclic dependencies between components (coupling). Testers would therefore have to decide which component of such a cyclic dependency graph has to be tested first [11, p. 21f].
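A small sketch of the bottom-up strategy follows (the pricing methods are hypothetical): the leaf method net() is tested first, then gross(), which depends on it:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PricingBottomUpTest {

    // Leaf method: invokes no other method.
    static double net(double listPrice, double discount) {
        return listPrice * (1.0 - discount);
    }

    // Depends on the leaf method net().
    static double gross(double listPrice, double discount, double taxRate) {
        return net(listPrice, discount) * (1.0 + taxRate);
    }

    @Test
    public void leafMethodFirst() {
        // Bottom-up: establish confidence in the leaf first ...
        assertEquals(90.0, net(100.0, 0.10), 1e-9);
    }

    @Test
    public void dependentMethodSecond() {
        // ... then test the method that builds on it.
        assertEquals(107.1, gross(100.0, 0.10, 0.19), 1e-9);
    }
}
```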

B. Black Box and White Box Testing

The two most well-known methods for test case derivation are black box and white box testing. The two are essentially the opposite of each other.

Black box tests are written against the specification of a software system. Given, for instance, the specification or the API documentation of the system, black box tests are written against publicly exposed components, or rather their methods; the internals of the tested components are unknown to the tester [11, p. 22]. Test case values, for instance expected assertion values, have to be chosen by the tester, as edge or special cases might not be documented in the specification [15, p. 64f].

White box tests, on the other hand, are written with the implementation of the tested component known to the tester. This allows the tester to probe aspects for which the software is likely to fail due to implementation decisions [15, p. 63f].
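The contrast can be sketched on a single hypothetical unit, abs(), specified as "returns the absolute value of x". The first test is derived purely from that specification (black box); the second exploits knowledge of the implementation to target its branch and an overflow edge case (white box):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AbsTests {

    // Hypothetical unit under test.
    // Specification: "returns the absolute value of x".
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    // Black box: derived purely from the specification,
    // without looking at the implementation.
    @Test
    public void specifiedBehaviour() {
        assertEquals(5, abs(-5));
        assertEquals(5, abs(5));
        assertEquals(0, abs(0));
    }

    // White box: the tester knows the code branches on "x < 0"
    // and deliberately probes the edge case where negation
    // overflows: -Integer.MIN_VALUE wraps back to Integer.MIN_VALUE.
    @Test
    public void implementationEdgeCase() {
        assertEquals(Integer.MIN_VALUE, abs(Integer.MIN_VALUE));
    }
}
```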

C. Test Driven Development

Test Driven Development (TDD) is not actually a technique for test case derivation but rather an implementation approach. Still, the result of TDD is good test coverage of the developed software system.

When doing TDD a developer works in a test-first manner, writing the test before the implementation. Additionally, in TDD the problem is divided into smaller problems, which are then conquered by very small test cases proving that each smaller problem has been solved. The TDD process is iterative and consists of the following four steps:

1) Write a failing test

2) Run the test and see it fail

3) Write an implementation that makes the test succeed

4) Refactor the implementation and the test

Iterating these steps leads to a very generic architecture of the (problem-solving) implementation which, when TDD is executed perfectly, is fully test-covered [20].
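One TDD iteration might look like the following sketch (the fizz function and its requirement are hypothetical): the test is written first and initially fails, then the simplest implementation makes it pass, and a refactoring step would follow while keeping the test green:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// One TDD iteration for a hypothetical requirement:
// "fizz(n) returns 'Fizz' for multiples of 3, otherwise n as text".
public class FizzTddTest {

    // Steps 1 and 2: this test is written first and initially fails,
    // because fizz() does not exist yet (or returns the wrong value).
    @Test
    public void multiplesOfThreeBecomeFizz() {
        assertEquals("Fizz", fizz(3));
        assertEquals("Fizz", fizz(9));
        assertEquals("1", fizz(1));
    }

    // Step 3: the simplest implementation that makes the test pass.
    // Step 4 (refactor) would clean up test and implementation
    // while the test stays green.
    static String fizz(int n) {
        return n % 3 == 0 ? "Fizz" : Integer.toString(n);
    }
}
```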

D. Exploratory Testing

Exploratory testing is a more agile approach to test case derivation. In contrast to the previously described approaches, where test cases were first derived and then run, this technique is the simultaneous execution of learning about the tested system, test design and test execution. Tests are thus not designed prior to execution and left unchanged, but rather evolve in proportion to the tester's knowledge and are therefore run repeatedly with different expectations [4, p. 5-5].

XII. TOOLS

The previous sections discussed techniques and objectives in the software testing context. Tools can be used to assist with these techniques or to achieve testing objectives.

The selection and usage of specific testing tools always depends on the actual testing process. Hence, analogous to the implementation process, where different technologies might have to be evaluated, testing tools must be investigated and evaluated for their usefulness. This depends on the testing environment, such as the programming language of the tested system, the knowledge of the testers, financial factors and so on. Additionally, the tools used must be managed along the testing process [13, p. 115f].

In the context of testing, the term tools covers a large domain. Testing tools are not limited to automated software; they also include analog tools such as pen and paper used to write a checklist [13, p. 103f]. Hence, the list of available testing tools is very large and can hardly be covered completely in such a short overview.

One important class of tools for software testing, especially in the domain of unit, integration and acceptance tests, consists of frameworks used to test a software system on different levels of implementation. JUnit [21] is one of the available unit testing frameworks for the Java programming language. It allows the repeated execution of Java-written test cases against a Java-written software system or parts of it, and it can also be used for integration tests. Other frameworks are specifically designed for acceptance testing, e.g. Concordion [22] or Selenium [23]. With such frameworks at hand, developers and testers are able to test software systems repeatedly at short intervals.
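As a hedged illustration of such an acceptance-testing framework, the following sketch drives a hypothetical login page with Selenium's Java WebDriver API. The URL and element names are assumptions, and a configured browser driver (here Firefox) is required:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginAcceptanceTest {
    public static void main(String[] args) {
        // Hypothetical page; the WebDriver calls are Selenium's API.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/login");
            driver.findElement(By.name("username")).sendKeys("alice");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.name("submit")).click();
            // A real acceptance test would now assert on the page state,
            // e.g. the title or a welcome message.
            System.out.println("Page title after login: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```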

Literature:

  1. A. Zeller, Why Programs Fail: A Guide to Systematic Debugging. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2005.
  2. IEEE Computer Society, 610.12-1990 - IEEE Standard Glossary of Software Engineering Terminology. IEEE Computer Society, 1990.
  3. N. Fenton, Software Metrics: a Rigorous Approach. Chapman & Hall, 1991.
  4. IEEE Computer Society, SWEBOK. Angela Burgess, 2004.
  5. IEEE Computer Society, IEEE 829-2008 IEEE Standard for Software and System Test Documentation. IEEE Computer Society, 2008.
  6. International Software Testing Qualifications Board, Standard Glossary of Terms Used in Software Testing. ISTQB, 2014.
  7. ISO/IEC, ISO/IEC 9126. Software engineering – Product quality. ISO/IEC, 2001.
  8. ISO, ISO 9000 - Quality management. ISO, 2008.
  9. G. J. Myers, T. Badgett, and C. Sandler, The Art of Software Testing. John Wiley & Sons, Inc., 2012.
  10. K. Naik and P. Tripathy, SOFTWARE TESTING AND QUALITY ASSURANCE Theory and Practice. John Wiley and Sons, 2008.
  11. P. Ammann and J. Offutt, Introduction to Software Testing. Cambridge University Press, 2008.
  12. R. Patton, Software Testing. Sams Publishing, 2001.
  13. W. E. Perry, Effective Methods for Software Testing. Wiley Publishing, Inc., 2006.
  14. A. Spillner, T. Linz, T. Rossner, and M. Winter, Software Testing Practice: Test Management: A Study Guide for the Certified Tester Exam ISTQB Advanced Level. Rocky Nook, 2007. [Online]. Available: http://books.google.de/books?id=Hjm4BAAAQBAJ
  15. R. Patton, Software Testing. Indianapolis, IN: Sams, 2001.
  16. W. E. Perry, Effective Methods for Software Testing, 3rd ed. Indianapolis, IN: Wiley, 2006.
  17. [Online]. Available: http://istqbexamcertification.com/what-is-incremental-model-advantages-disadvantages-and-when-to-use-it/
  18. [Online]. Available: http://istqbexamcertification.com/what-is-incremental-model-advantages-disadvantages-and-when-to-use-it/
  19. B. Hetzel, The Complete Guide to Software Testing, 2nd ed. Wellesley, MA, USA: QED Information Sciences, Inc., 1988.
  20. K. Beck, Test Driven Development: By Example. Addison-Wesley, 2002.
  21. V. Massol, JUnit in Action. Manning, 2003.
  22. D. Peterson. (2015) Concordion. [Online]. Available: http://concordion.org/
  23. Selenium. (2015) Selenium browser automation. [Online]. Available: http://www.seleniumhq.org/