Testing Object-Oriented Software / Edition 1

ISBN-10:
0818685204
ISBN-13:
9780818685200
Pub. Date:
11/10/1998
Publisher:
Wiley
Paperback

$110.95

Overview

Object-oriented programming increases software reusability, extensibility, interoperability, and reliability. Software testing is necessary to realize these benefits by uncovering as many programming errors as possible at a minimum cost. A major challenge to the software engineering community remains how to reduce the cost while improving the quality of software testing. The requirements for testing object-oriented programs differ from those for testing conventional programs.

Testing Object-Oriented Software illustrates these differences and discusses object-oriented software testing problems, focusing on the difficulties and challenges testers face. The text contains nineteen reprinted papers providing a general framework for class- and system-level testing and examines object-oriented design criteria and metrics for high testability. It offers object-oriented testing techniques, ideas and methods for unit testing, and an object-oriented program integration-testing strategy.

Readers are shown how to drastically reduce regression test costs, presented with steps for object-oriented testing, and introduced to object-oriented test tools and systems. The book's intended audience includes object-oriented program testers, program developers, software project managers, and researchers working with object-oriented testing.

Product Details

ISBN-13: 9780818685200
Publisher: Wiley
Publication date: 11/10/1998
Series: Practitioners, #32
Pages: 284
Product dimensions: 11.00(w) x 8.50(h) x 0.60(d)

About the Author

David C. Kung and Pei Hsia are the authors of Testing Object-Oriented Software, published by Wiley.

Read an Excerpt

Testing Object-Oriented Software


By David C. Kung, Pei Hsia, and Jerry Gao

John Wiley & Sons

ISBN: 0-8186-8520-4


Chapter One

OO Testing Problems

The OO paradigm enjoys increasing acceptance in the software industry, due to its visible benefits in analysis, design, and coding. Numerous software development organizations have adopted OO as the development paradigm. However, conventional software testing and maintenance methods are not adequate for OO programs [harr91a] [kung93b] [perr90a] [smit90a], because they do not address the testing and maintenance problems associated with OO features. It can be anticipated that, as more OO software is developed, testing and maintaining OO programs will become a real challenge.

The papers in this chapter present to the reader the OO testing problems from different angles. The first paper [perr90a], by D.E. Perry and G.E. Kaiser, discusses the problems from a theoretical point of view. It revisits some of the test adequacy axioms originally proposed by Weyuker [weyu86a] [weyu88a]:

Antiextensionality. If two programs compute the same function (that is, they are semantically close), a test set adequate for one is not necessarily adequate for the other.

General Multiple Change. When two programs are syntactically similar (that is, one can be obtained from the other by changing constants and/or relational/arithmetic operators), they usually require different test sets.

Antidecomposition. Testing a program in the context of an enclosing program may be adequate with respect to that enclosing program, but not necessarily adequate for other uses of the component.

Anticomposition. Adequately testing each individual program component in isolation does not necessarily suffice to adequately test the entire program. Composing two program components results in interactions that cannot arise in isolation.

The paper then derives a surprising conclusion that runs counter to common belief. We often assume that an inherited method can be used without testing if it has already been tested in the context of the base class. However, Perry and Kaiser's paper points out that this is not always the case. They point out that (1) when a subclass or superclass is added to a class, the inherited methods must be retested in this newly formed context; (2) even if the overriding and overridden methods are semantically similar, there is a need to retest the classes in the context of the overriding and overridden methods; and (3) if the order of specification of the superclasses of a subclass is changed, the subclass must be retested even though only syntactic changes are made.
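Point (1) can be illustrated with a small sketch (the classes here are hypothetical, not from the book): an inherited method whose body is unchanged can still behave differently in a subclass, because it calls a method the subclass overrides, so it must be retested in the new context.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h

    def describe(self):
        # Inherited unchanged by subclasses, but its result depends
        # on the dynamically bound area() call below.
        return f"area={self.area()}"

class ClampedRectangle(Rectangle):
    def area(self):
        # The override changes what the *inherited* describe() reports,
        # so describe() needs retesting in this new context.
        return min(Rectangle.area(self), 100)

assert Rectangle(20, 10).describe() == "area=200"
# Same inherited describe() code, different behavior in the subclass:
assert ClampedRectangle(20, 10).describe() == "area=100"
```

A test set that exercised describe() only on Rectangle says nothing about its behavior on ClampedRectangle.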

The second paper [smit90a], by Smith and Robson, considers problems involved in testing classes, abstract classes, message passing, concurrency, inheritance, polymorphism, and template classes. For example, the authors discuss how to test a class, an abstract class, or a template class. Another problem is that the concept of control flow through a conventional program does not map readily to an OO program. Flow of control in OO programs may be thought of as message passing from one object to another, causing the receiving object to perform some operation, be it an examination or an alteration of its state. To test such programs, when there is no conceptual input/process/output, it is probably more appropriate to specify how the object's state will change under certain conditions. If one object sends two messages to two other objects, the two can respond concurrently. The complexity of testing such systems, considering the possible time-dependent interactions between objects, is potentially greater than that for normal sequential OO programs.
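The state-based alternative to input/process/output testing can be sketched as follows (a minimal hypothetical class, not an example from the paper): the test asserts how each message changes the object's state, rather than checking a single output for a single input.

```python
class Stack:
    """A tiny stateful class; there is no one input/process/output run to test."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

# State-based test: assert on the object's state after each message.
s = Stack()
assert s.is_empty()          # initial state
s.push(1)
assert not s.is_empty()      # state changed by push
assert s.pop() == 1          # pop returns the pushed value...
assert s.is_empty()          # ...and restores the empty state
```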

Smith and Robson [smit90a] classify inheritance into strict, nonstrict, and repeated inheritance, and into simple and multiple inheritance. Simple inheritance means inheriting from only one parent class, while multiple inheritance occurs when a child class inherits from two or more parent classes. Strict inheritance occurs when the child class takes all the features of the parent class. Nonstrict inheritance occurs when some of the features of the parent class are not present or are renamed in the child class. This can occur in the simple case by omission, or through multiple or repeated inheritance. Repeated inheritance occurs when a child class derives from the same parent class more than once, as when a class inherits from two other classes that inherit from a common parent. Table 1 summarizes the implications of these kinds of inheritance for testing.
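Repeated inheritance is the diamond case; a minimal sketch (hypothetical classes) shows why it complicates testing: the tester must know through which path each inherited feature is resolved.

```python
class Persistent:                       # common parent
    def save(self):
        return f"saving {type(self).__name__}"

class Serializable(Persistent):
    pass

class Loggable(Persistent):
    pass

class Record(Serializable, Loggable):
    # Repeated inheritance: Persistent is reached through both
    # Serializable and Loggable.
    pass

# Python linearizes the diamond into a single method resolution order;
# the tester must understand this order to know which class supplies
# each inherited feature.
assert Record.__mro__ == (Record, Serializable, Loggable, Persistent, object)
assert Record().save() == "saving Record"
```

Languages resolve such diamonds differently (C++, for instance, requires virtual inheritance to share one Persistent subobject), which is part of why repeated inheritance has distinct testing implications.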

The third paper [wild92a], written by Wilde and Huitt, addresses problems in maintaining an OO system and proposes potential solutions. These include the problems of dynamic binding, object dependencies, dispersed program structure, control of polymorphism, high-level understanding, and detailed code understanding. In fact, all of these are problems in the testing and regression testing processes, which is why we include this paper. For example, dynamic binding implies that the code that implements a given function is unknown until run time. Therefore, static analysis cannot be used to precisely identify the dependencies in the program. Hence, it is difficult for a tester to prepare test stubs and to identify the change impact in regression testing.
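The dynamic-binding problem can be made concrete with a short sketch (hypothetical classes): at the call site below, no static inspection reveals which draw() body will run, because the binding is resolved per object at run time.

```python
class Shape:
    def draw(self):
        raise NotImplementedError

class Circle(Shape):
    def draw(self):
        return "circle"

class Square(Shape):
    def draw(self):
        return "square"

def render(shapes):
    # Static analysis of s.draw() cannot tell which implementation
    # executes; each binding depends on the run-time type of s.
    return [s.draw() for s in shapes]

assert render([Circle(), Square()]) == ["circle", "square"]
```

A change to any one draw() override can therefore affect render() without render()'s source changing, which is exactly the impact-identification difficulty the paper describes.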

The dependencies occurring in conventional systems are:

data dependencies between variables;

calling dependencies between modules;

functional dependencies between a module and the variables it computes;

definitional dependencies between a variable and its type.

OO systems have additional dependencies:

class-to-class dependencies;

class-to-method dependencies;

class-to-message dependencies;

class-to-variable dependencies;

method-to-variable dependencies;

method-to-message dependencies; and

method-to-method dependencies.

Environments for maintaining object-oriented programs need to provide ways of browsing these different kinds of relationships. The multidimensional nature of interconnections will make it very difficult to use listing or text-screen-based systems for program understanding.

Although all three papers address OO testing problems, their emphases are different. Perry and Kaiser's paper [perr90a] emphasizes retesting inherited methods in the context of the derived class, based on Weyuker's test adequacy axioms. Smith and Robson's paper [smit90a] emphasizes the practical difficulties in testing OO programs. Wilde and Huitt's paper [wild92a] emphasizes the complexity, dependency, and understanding problems relating to testing and maintaining OO programs. We believe that these papers together provide a more or less complete picture (except for object state testing, which is addressed in Chapter 5) of the problems and difficulties involved in OO software testing.

Adequate Testing and Object-Oriented Programming

By Dewayne E. Perry and Gail E. Kaiser

Introduction

Brooks, in his paper "No Silver Bullet: Essence and Accidents of Software Engineering", states:

Many students of the art hold out more hope for object-oriented programming than for any of the other technical fads of the day. I am among them.

We are among them as well. However, we have uncovered a flaw in the general wisdom about object-oriented languages: that "proven" (that is, well-understood, well-tested, and well-used) classes can be reused as superclasses without retesting the inherited code. On the contrary, inherited methods must be retested in most contexts of reuse in order to meet the standards of adequate testing. In this article, we prove this result by applying test adequacy axioms to certain major features of object-oriented languages, in particular, encapsulation in classes, overriding of inherited methods, and multiple inheritance, which pose various difficulties for adequately testing a program. Note that our results do not indicate that there is a flaw in the general wisdom that classes promote reuse (which they in fact do), but that some of the attendant assumptions about reuse are mistaken (that is, those concerning testing).

Our past work in object-oriented languages has been concerned with multiple inheritance and issues of granularity as they support reuse. Independently, we have developed several technologies for change management in large systems and recently have been investigating the problems of testing as a component of the change process, especially the issues of integration and regression testing. When we began to apply our testing approach to object-oriented programs, we expected that retesting object-oriented programs after changes would be easier than retesting equivalent programs written in conventional languages. Our results, however, have brought this thesis into doubt. Testing object-oriented programs may still turn out to be easier than testing conventional-language programs, but there are certain pitfalls that must be avoided.

First we explain the concepts of specification-and program-based testing, and describe criteria for adequate testing. Next, we list a set of axioms for test data adequacy developed in the testing community for program-based testing. We then apply the adequacy axioms to three features common to many object-oriented programming languages, and show why the axioms may require inherited code to be retested.

Testing

By definition, a program is deemed to be adequately tested if it has been covered according to the selected criteria. The principal choice is between two divergent forms of test case coverage reported by Howden: specification-based and program-based testing.

Specification-based (or "black-box") testing is what most programmers have in mind when they set out to test their programs. The goal is to determine whether the program meets its functional and nonfunctional (i.e., performance) specifications. The current state of the practice is informal specification, and thus informal determination of coverage of the specification is the norm. For example, tests can be cross-referenced with portions of the design document, and a test management tool can make sure that all parts of the design document are covered. Test adequacy determination has been formalized for only a few special cases of specification-based testing, most notably mathematical subroutines.

In contrast to specification-based testing, program-based (or "white-box") testing implies inspection of the source code of the program and selection of test cases that together cover the program, as opposed to its specification. Various criteria have been proposed for determining whether the program has been covered: for example, whether all statements, branches, control flow paths, or data flow paths have been executed. In practice, some intermediate measure such as essential branch coverage or feasible data flow path coverage is most likely to be used, since the number of possibilities might otherwise be infinite or at least infeasibly large. The rationale here is that we should not be confident about the correctness of a program if (reachable) parts of it have never been executed.

The two approaches are orthogonal and complementary. Specification-based testing is weak with respect to formal adequacy criteria, while program-based testing has been extensively studied. On the one hand, specification-based testing tells us how well the program meets its specification, but nothing about what part of the program is executed to meet each part of the specification. On the other hand, program-based testing tells us nothing about whether the program meets its intended functionality. Thus, if both approaches are used, program-based testing provides a level of confidence, derived from the adequacy criteria, that the program has been well tested, whereas specification-based testing determines whether in fact the program does what it is supposed to do.

Axioms of Test Data Adequacy

Weyuker in "Axiomatizing Software Test Data Adequacy" develops a general axiomatic theory of test data adequacy and considers various adequacy criteria in the light of these axioms. Recently, in "The Evaluation of Program-Based Software Test Data Adequacy Criteria", Weyuker revises and expands the original set of 8 axioms to 11. The goal of the first paper was to demonstrate that the original axioms are useful in exposing weaknesses in several well-known program-based adequacy criteria. The point of the second paper is to demonstrate the insufficiency of the current set of axioms, that is, there are adequacy criteria that meet all eleven axioms but clearly are irrelevant to detecting errors in programs. The contribution of our article is that, by applying these axioms to object-oriented programming, we expose weaknesses in the common intuition that programs using inherited code require less testing than those written using other paradigms.

The first four axioms state:

Applicability. For every program, there exists an adequate test set.

Non-Exhaustive Applicability. There is a program P and test set T such that P is adequately tested by T, and T is not an exhaustive test set.

Monotonicity. If T is adequate for P, and T is a subset of T', then T' is adequate for P.

Inadequate Empty Set. The empty set is not an adequate test set for any program.

These (intuitively obvious) axioms apply to all programs independent of which programming language or paradigm is used for implementation, and apply equally to program-based and specification-based testing.

Weyuker's three new axioms are also intuitively obvious.

Renaming. Let P be a renaming of Q; then T is adequate for P if and only if T is adequate for Q.

Complexity. For every n, there is a program P such that P is adequately tested by a size-n test set, but not by any size n-1 test set.

Statement Coverage. If T is adequate for P, then T causes every executable statement of P to be executed.

A program P is a renaming of Q if P is identical to Q except that all instances of an identifier x of Q have been replaced in P by an identifier y, where y does not appear in Q, or if there is a set of such renamed identifiers. The first two axioms are applicable to both forms of testing; the third applies only to program-based testing. The concepts of renaming, size of test set, and statement depend on the language paradigm, but this is outside the scope of this article.

Antiextensionality, General Multiple Change, Antidecomposition, and Anticomposition Axioms

We are interested in the four remaining (not so obvious) axioms: the antiextensionality, general multiple change, antidecomposition, and anticomposition axioms. These axioms are concerned with testing various parts of a program in relationship to the whole and vice versa, and certain of them apply only to program-based and not to specification-based adequacy criteria. They are, in some sense, negative axioms in that they expose inadequacy rather than guarantee adequacy.

Antiextensionality

If two programs compute the same function (that is, they are semantically close), a test set adequate for one is not necessarily adequate for the other.

There are programs P and Q such that P=Q, [test set] T is adequate for P, but T is not adequate for Q.

This is probably the most surprising of the axioms, partly because our intuition of what it means to adequately test a program is rooted in specification-based testing. In specification-based testing, adequate testing is a function of covering the specification. Since equivalent programs have, by definition, the same specification, any test set that is adequate for one must be adequate for the other. However, in program-based testing, adequate testing is a function of covering the source code. Since equivalent programs may have radically different implementations, there is no reason to expect a test set that, for example, executes all the statements of one implementation will execute all the statements of another implementation.
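A small sketch (hypothetical functions, not from the article) makes this concrete: two equivalent implementations of the same function, where a test set achieving full statement coverage of one leaves statements of the other unexecuted.

```python
def max3_a(x, y, z):
    # Straight-line implementation: a single statement, so any one
    # test case achieves full statement coverage.
    return max(x, max(y, z))

def max3_b(x, y, z):
    # Branching implementation of the same function.
    if x >= y and x >= z:
        return x
    if y >= z:
        return y
    return z

# T = {(3, 2, 1)} executes every statement of max3_a, but only the
# first return of max3_b, so T is statement-coverage adequate for
# max3_a and not for max3_b, even though the programs are equivalent.
T = [(3, 2, 1)]
for t in T:
    assert max3_a(*t) == max3_b(*t) == 3
```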

General Multiple Change

When two programs are syntactically similar (that is, they have the same shape), they usually require different test sets.

There are programs P and Q which are the same shape, and a test set T such that T is adequate for P, but T is not adequate for Q.
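Informally (a hypothetical sketch, not the article's example): two programs of the same shape, differing only in a relational operator, are distinguished only by a boundary input. A test set chosen for one program can easily miss the input that matters for the other, which is why syntactically similar programs usually require different test sets.

```python
def classify_a(n):
    if n > 0:
        return "pos"
    return "other"

def classify_b(n):
    # Same shape as classify_a: only the relational operator changed.
    if n >= 0:
        return "pos"
    return "other"

# The inputs 1 and -1 exercise both branches of both programs, yet
# they cannot distinguish the two: only the boundary input 0 does,
# so an adequate test set for one need not be adequate for the other.
assert classify_a(1) == classify_b(1) == "pos"
assert classify_a(-1) == classify_b(-1) == "other"
assert classify_a(0) != classify_b(0)
```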

(Continues...)



Excerpted from Testing Object-Oriented Software by David C. Kung, Pei Hsia, and Jerry Gao. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Preface.

Chapter 1. OO Testing Problems.

Adequate Testing and Object-Oriented Programming (Dewayne E. Perry and Gail E. Kaiser).

Object-Oriented Programming—The Problems of Validation (M.D. Smith and D.J. Robson).

Maintenance Support for Object-Oriented Programs (Norman Wilde and Ross Huitt).

Chapter 2. Specification and Verification.

Design for Testability in Object-Oriented Systems (Robert V. Binder).

Method Sequence Specification and Verification of Classes (Shekhar Kirani and W.T. Tsai).

Chapter 3. Unit Testing and Integration Testing.

A Class Testing Technique Based on Data Bindings (Heechern Kim and Chisu Wu).

Automated Flow Graph-Based Testing of Object-Oriented Software Modules (Allen S. Parrish, et al.).

Object-Oriented Integration Testing (Paul C. Jorgensen and Carl Erickson).

Chapter 4. Regression Testing.

Change Impact Identification in Object-Oriented Software Maintenance (D. Kung, et al.).

Selecting Regression Tests for Object-Oriented Software (Gregg Rothermel and Mary Jean Harrold).

A Technique for the Selective Revalidation of OO Software (Pei Hsia, et al.).

Chapter 5. Object State Testing.

Object State Testing and Fault Analysis for Reliable Software Systems (D. Kung, et al.).

The State-Based Testing of Object-Oriented Programs (C.D. Turner and D.J. Robson).

ClassBench: A Framework for Automated Class Testing (Daniel Hoffman and Paul Strooper).

Chapter 6. Test Methodology.

Incremental Testing of Object-Oriented Class Structures (Mary Jean Harrold, et al.).

Integrated Object-Oriented Testing and Development Processes (John D. McGregor and Timothy D. Korson).

Chapter 7. Test Tools.

Developing an Object-Oriented Software Testing and Maintenance Environment (David Kung, et al.).

The ASTOOT Approach to Testing Object-Oriented Programs (Roong-Ko Doong and Phyllis G. Frankl).

Automated Testing from Object Models (Robert M. Poston).