The
road to more reliable software is paved with customer intentions. A
profile built from use cases can guide you in testing system
functions and eliminating the defects users encounter most often.
by John D. McGregor and Melissa L. Major
Your product's release date looms. Though you're scrambling to
cover every testing contingency, a worry still gnaws at you: will
the user base curse your name in three months' time as a poorly
coded module repeatedly causes problems? How do you know you've done
every reasonable, appropriate test?
If your system-test strategy is implementation-based, you will
attempt to test every line of code, or even every path through the
code. You'll certainly find defects, but the process is
expensive—and impossible for those portions of the system for
which you only have access to the executable, or where there are
infinite paths. Specification-based techniques, on the other hand,
cover the assumptions and constraints imposed by the software's
developers.
However, neither approach addresses a crucial point of view: your
users' priorities. If you are in the shrink-wrap software business,
you may have made some vague assumptions about your users; by
contrast, if you are building a product after a formal request for
proposals, you may be following precisely defined user profiles.
Regardless of the rigor of your design process, one thing
holds true: the frequency with which each type of user uses the
system will reflect the relative importance of the specific system
features.
In system testing, this frequency of use has traditionally been
represented by an operational profile, which guides the selection of
test cases so that the most popular system operations are tested
most frequently. This is an effective technique for discovering the
defects that users would encounter most often. While operational
profiles are easy enough to construct after accumulated experience
with the system, they are harder to build prior to release—which
is when, of course, they are most useful.
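To make the idea concrete, here is a minimal sketch, in Python, of profile-driven test selection. The operations and their frequencies are hypothetical; in practice, they would come from your own profile data.

import random

# A hypothetical operational profile for a simple banking system:
# each operation is mapped to its relative frequency of use.
operational_profile = {
    "make_deposit":    0.50,
    "make_withdrawal": 0.35,
    "make_adjustment": 0.15,
}

def next_operation(profile):
    # Pick the next operation to test, weighted by frequency of use,
    # so that the most popular operations are exercised most often.
    operations = list(profile)
    weights = [profile[op] for op in operations]
    return random.choices(operations, weights=weights, k=1)[0]

# Over a run of 100 tests, roughly half will exercise make_deposit.
selected = [next_operation(operational_profile) for _ in range(100)]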
The familiar use-case model of system requirements can play a
part in computing the relative frequency of use, thus guiding the
selection of test cases for the system. We have developed a set of
extensions—including an expanded description of the system
actors—to existing use-case templates that capture information
relevant to testing.
Increasing System Reliability
The reliability of a software program, as defined by John Musa in
Software Reliability Engineering (McGraw-Hill, 1999), is the
probability of failure-free operation in a specified context and
period of time. During system testing, it's possible to estimate the
reliability of the system as it will be experienced in normal
operation. Accurate estimates require that you specify the context,
which consists in part of the system functions that will be
exercised. The context should also include a description of the
operating environment: the operating-system version, the run-time
system version (if applicable) and the versions of all DLLs used.
One technique for
specifying the system functions' portion of the context is to use
the same operational profile that drives system testing.
Reliability requirements are stated in terms of a specified
period of failure-free operation (for example, "no failures in
24 hours"). The frequencies of operation shown in the
operational profile should be based on the user's actions within the
same time period as expressed in the reliability requirement. This
relationship between the two time periods provides a clear direction
for system testing. Directing tests with an operational profile
built for the appropriate time interval, and then repairing the
failures encountered, produces the fastest possible improvement in
system reliability.
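As a rough illustration of how test results gathered under an operational profile translate into a reliability estimate, the following sketch assumes a constant failure rate (a simple exponential model; Musa describes several more refined ones). The failure count and test hours are invented.

import math

failures = 4         # failures observed during profile-driven testing
test_hours = 400.0   # hours of testing under the operational profile

failure_rate = failures / test_hours   # 0.01 failures per hour
mission_time = 24.0                    # from "no failures in 24 hours"

# R(t) = exp(-failure_rate * t): the probability of operating
# failure-free for the entire mission time.
reliability = math.exp(-failure_rate * mission_time)
print(f"Estimated 24-hour reliability: {reliability:.2f}")   # about 0.79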
Actors and Use Cases
The use-case technique, incorporated into the Rational Unified
Process, provides an easy-to-understand representation of the
functional requirements for a system. The technique identifies all
external forces (or actors) that trigger system functionality. Each
use case provides a description of a use of the system by one or
more of the actors.
An actor can represent a human user of the system or a stimulus
from another system. Note, however, that each actor represents a
type rather than a specific user, and each type of actor interacts
differently with the system.
Use cases describe the details of system functionality from the
user perspective, with scenario sections detailing the system's
response to specific external stimuli. The scenario section also
outlines what triggers the use and provides the information needed
to establish the criteria that will determine whether the system
passed the test. Additionally, the use case describes the
preconditions that must be established prior to the execution of the
test case.
For our purposes, let's focus on the frequency and criticality
fields in the use-case template. The criticality attribute defines
how necessary a use is to the successful operation of the system;
the frequency attribute defines how often a specific use is
triggered. By combining these two attributes, you can prioritize
uses and tests and thus test the most important, most frequently
invoked uses. In our simple banking system, making deposits and
making adjustments might have about the same frequency, but making
deposits would have a higher criticality and should be tested more
rigorously.
Criticality is easy for an expert to judge, so this field can be
completed as the use case is constructed. Frequency is more
difficult to quantify, however, because different actors may trigger
the same use at very different rates.
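A sketch of how these two fields might be recorded follows; the Python representation and field names are our own illustration, not a prescribed template format.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    criticality: str = ""   # "high", "medium" or "low"; expert judgment
    frequency: str = ""     # filled in later from the actor profiles
    preconditions: list = field(default_factory=list)

# From the banking example: similar frequency, different criticality.
deposit = UseCase(name="Make deposit", criticality="high")
adjustment = UseCase(name="Make adjustment", criticality="medium")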
Actor Profiles
Each actor is described by a brief profile. The major attribute
is the actor's use profile, which ranks the frequency with which
this actor triggers each individual use. It is usually easy to
determine these relative frequencies, either by analyzing the
actor's responsibilities or by simply reasoning about the domain.
You can note the frequency attribute with relative rankings (for
example, first, second or third), implied rankings (high, medium or
low) or the percentage of invocations that are applied to this use
(0 to 100 percent). However, actors seldom trigger each use at
exactly the same percentage from one program execution to the next,
making this last approach less accurate.
Though the two ranking scales are equivalent, it's often easier to
attach meaning to high, medium and low than to 1, 2 and 3. On the
other hand, combining numeric rankings is more intuitive than
combining subjective values.
Use-case Profiles
Now, you can combine the individual actor's use profiles to rank
each use case. Record the ranking in the frequency attribute of the
use case (we also summarize it in a table for ease of reference).
Combine the actor profile values with a weighted average (the weight
represents the relative importance of the actor).
For simplicity, in our example we treat all the actors equally,
each with a weight of 1. The values in the abstract actor use
profile aren't included in the computation, but they do help
determine the value for specialized actors.
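Here is a minimal sketch of that computation; the actors, uses and rankings are hypothetical, with high, medium and low encoded as 3, 2 and 1.

# Each actor's use profile, with rankings encoded numerically.
actor_profiles = {
    "Teller":     {"deposit": 3, "withdrawal": 2, "adjustment": 1},
    "Supervisor": {"deposit": 1, "withdrawal": 1, "adjustment": 3},
}

# For simplicity, all actors are weighted equally.
actor_weights = {"Teller": 1, "Supervisor": 1}

def use_frequency(use):
    # Weighted average of each actor's ranking for this use.
    total = sum(actor_weights.values())
    weighted = sum(actor_weights[actor] * actor_profiles[actor][use]
                   for actor in actor_profiles)
    return weighted / total

for use in ("deposit", "withdrawal", "adjustment"):
    print(use, use_frequency(use))  # deposit 2.0, withdrawal 1.5, adjustment 2.0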
The test priority column is determined by combining the frequency
and criticality columns, typically with either a conservative or an
averaging strategy. While averaging is self-explanatory, the
conservative strategy—choosing the highest rating by
default—often comes into play with life- or mission-critical
systems.
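In code, the two strategies might look like this sketch, with rankings again encoded as 3, 2 and 1:

def test_priority(frequency, criticality, strategy="average"):
    # Combine the two rankings into a single test priority.
    if strategy == "conservative":
        # Take the higher rating: appropriate for life- or
        # mission-critical systems, where a critical use deserves
        # heavy testing even if it is rarely triggered.
        return max(frequency, criticality)
    return (frequency + criticality) / 2   # averaging strategy

# Deposits and adjustments have similar frequency, but deposits are
# more critical and therefore earn the higher test priority.
print(test_priority(2, 3))                           # 2.5
print(test_priority(2, 3, strategy="conservative"))  # 3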
Allocating Tests
The technique presented here doesn't change the basic process of
selecting types of tests and specific input data, nor does it change
the calculation of how many test cases can be constructed given the
available resources. What does change is that now you can
systematically distribute test cases to specific use cases with a
calculated test priority.
Once the priorities have been determined, you can compute the
number of tests to be associated with each use case. One easy method
is to value the use cases' test priorities numerically. In our
example, the ranks of high, medium and low are replaced with 3, 2
and 1, which sum to 6. Assume that there is time for approximately
100 tests. Then, assigning 3 to high, use case number 1 would rate
100 * 3/6, or 50 tests. Use case number 2 would rate 100 * 2/6, or
33 tests, and use case number 3 would rate 100 * 1/6, or 17 tests.
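The same arithmetic, written out as a short sketch (the priorities are those of the example above):

# Numeric test priorities for the three use cases in the example.
priorities = {"use_case_1": 3, "use_case_2": 2, "use_case_3": 1}
budget = 100                       # total tests there is time for

total = sum(priorities.values())   # 3 + 2 + 1 = 6
allocation = {name: round(budget * p / total)
              for name, p in priorities.items()}
print(allocation)  # {'use_case_1': 50, 'use_case_2': 33, 'use_case_3': 17}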
When you calculate the number of tests in this fashion, adopt a
standard for determining which test cases to construct. In general,
allocate your tests to the sections of the use-case description in
the following order of priority: first, the basic scenario; second,
exceptions; third, alternative courses of action; and last,
extensions.
Although the basic scenario and exceptions should receive the
majority of the tests, be sure to exercise every section to at
least some degree.
Maximizing Testing ROI
Prioritizing the uses of the system produces a weighted
operational profile, or use profile. Many of the steps might seem
straightforward enough in the context of the simple example
presented here, but in a larger project, determining which use cases
to stress and which to cover only briefly can be tricky.
Indeed, building use profiles maximizes the return on the resources
invested in system testing, directing the effort so that the
software's reliability increases as quickly as possible. That is,
test cases are selected so that the operations users depend on most
are triggered most often.
The technique also links the system's actors, uses and tests;
now, when uses change, you can easily identify which tests to
change, and when tests fail, you can easily find the affected uses.