sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======            November 1996            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is E-mailed
monthly to support the Software Research, Inc. (SR) user community and
provide information of general use to the worldwide software testing
community.

(c) Copyright 1996 by Software Research, Inc.  Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition
provided that the entire document/file is kept intact and this copyright
notice appears with it.

========================================================================

INSIDE THIS ISSUE:

   o  Emphasizing Software Test Process Improvement, by Gregory T. Daich

   o  CALL FOR PARTICIPATION -- 10th International Software Quality Week
      (QW'97)

   o  Identifying Code Coverage Rules, by Kenneth A. Foster

   o  SR/Institute's Software Quality HotList Available on WWW

   o  Letter to the Editor of ComputerWorld: On the IRS Mess, by Larry
      Bernstein

   o  Incomplete Tests are Worse than None at All

   o  On the Improvement of Standards, by Tom O'Hare (forwarded by John
      Favaro)

   o  TTN SUBSCRIPTION INFORMATION


========================================================================

      Emphasizing Software Test Process Improvement (Part 1 of 2)

                           Gregory T. Daich

                   Software Technology Support Center

On average, programmers create six to eight defects in every 100 lines
of code (LOC) [1]. They create 12 to 20 defects per 100 LOC if the code
is not structured and is poorly documented. These defect rates improve
to two to three defects per 100 LOC if the code is structured and
documented. However, the rates improve only to 1 to 1.5 defects per
100 LOC for subsystems, programs, or modules after typical unit testing.
We certainly need techniques to improve defect detection during coding
and unit testing.

Although the above metrics are alarming, we also know that 70 percent of
the defects in most systems are introduced during the analysis and design
phases and 30 percent are introduced during coding [3]. Furthermore, as
an industry in the United States, we deliver, on average, between four
and six defects per 1,000 LOC [4]. Using one of the most widely
referenced metrics in the industry, we can expect the cost of fixing
requirements defects found late in system testing or after delivery to
be at least two orders of magnitude greater than the cost of fixing
them during requirements analysis [5]. In other words, we especially
need techniques to improve our defect detection capabilities during the
early software lifecycle phases.

This article presents an approach to improve software testing that
employs the practices of many leading software testing organizations.
This approach shows key steps and activities that increase the
probability of success in adopting software testing techniques and tools
to ultimately improve software quality and overall development
productivity. Note that enlightened software development techniques such
as software inspections and early software test planning and design have
made it possible for several organizations to deliver software with
defects down by 50 percent and delivery time down by 12 to 15 percent
[1]. One testing consultant states it as follows:

"The act of designing tests is one of the most effective error
prevention mechanisms known. The thought processes that must take place
to create useful tests can discover and eliminate problems at every
stage of development." [6]

Organizations need road maps that help identify applicable opportunities
and provide guidance for inserting technologies that improve quality and
increase productivity. These road maps must remind us about our
strategic, long-term organizational goals while helping us meet our
short-term, tactical objectives. They must provide mechanisms that feed
back progress information and that allow us to intelligently modify our
course to meet changing demands and problems. They must also show where
and how complementing technologies, such as software inspections,
support testing. Finally, a software test process improvement road map
must outline major activities to improve key testing disciplines and
provide pointers to obtain detailed test technology information to solve
specific problems.

TEST TECHNOLOGY IMPROVEMENT ROAD MAPS

Figure 1 presents a road map for improving the principal software
testing disciplines.  We use the word discipline here to
emphasize that a separate body of knowledge and set of practices are
required to support each major grouping of test activities. This road
map advocates adoption of effective software test management practices
prior to improving techniques to perform requirements-based (functional)
testing and code-based (structural) testing, both of which must be in
place before effective regression testing can be accomplished. Other
specialized testing practices can then be effectively adopted once these
principal disciplines are in place. This road map provides a high-level
view of an approach to improve testing practices that can be likened to
the road map for a country that only shows major boundaries, terrain,
and highways.

Figure 2 presents our test technology adoption guide or a type of travel
guide to address each of the testing disciplines identified in Figure 1.
This guide identifies major improvement stages that apply to all testing
disciplines. Details for each stage are provided in Tables 1 and 2. The
process to improve each of the software testing disciplines follows a
general pattern that includes similar activities to establish needs;
examine practices; evaluate alternative techniques, methods, and tools;
enable selected improvements; and evolve those technologies over time.
This guide follows the counsel from the Software Engineering Institute's
(SEI) IDEAL (Initiating, Diagnosing, Establishing, Acting, and
Leveraging) consulting model to help organizations improve their
practices [16]. However, our travel guide is written for organizations
that want to improve themselves and enable change with or without
consulting support.

ENABLING CHANGE

Recently, a small corporation sent two people to a public seminar on
improving testing practices. One person rated the seminar favorably, and
the other rated it as unsatisfactory. One naively assumed that the new
techniques could be readily applied back at the office, and the other
recognized that the process of matching their problems with appropriate
techniques was not trivial and required considerable guidance that he
did not feel they received. In the end, one came away with answers to
specific testing questions, while the other may still not know what
questions to ask in the first place. Many seminars do not address the
process of improving testing practices. Still, you can come away with
good ideas and then choose the elements that apply to evaluate, select,
and adopt. Here are some key questions to ask when attending a test
improvement seminar:

* How do you plan enhancements to software testing practices?

* How do you overcome organizational inertia to help people embrace new
techniques and tools?

* How do you explore and set improvement priorities?

* What practices must first be in place before you endorse specific
testing practices?

* How do you enable change that becomes institutionalized and further
improved?

It is a common practice and often advisable to engage outside
consultants to help you adopt improved technologies. Many consultants
are naturally biased toward the technologies they recommend. Some may
focus on their specialties and interests and may recommend "point
solutions" that impact other development efforts. Obtaining consulting
support from organizations that can provide both breadth and depth of
experience is recommended, especially during early software process
improvement initiatives. Look for organizations with specialists who
collaborate with other specialists to ensure priorities for planned
improvement efforts are consistent with and complement other needs.

Obviously, to effectively adopt any new technologies, certain
prerequisite practices need to be in place. The next section discusses
organizational readiness to enable change to improve testing practices.

READINESS FOR IMPROVING

With the current interest in process improvement throughout government
and industry, many organizations have positioned themselves to begin to
improve their testing practices. These organizations have fundamental
software management practices in place and now want to do something
about their less-than-effective testing practices and their huge testing
budgets (often 50 percent of the total system development or system
maintenance budgets [2]). This is evident in many of these
organizations' budgets for continuing process improvements, acquiring
testing tools and training, and attending testing seminars and
workshops.

In summary, some signs of readiness to improve software testing
practices include the following:

* Organization has adopted several fundamental software management
practices.

* Management wants better visibility into the status of software test
development, execution, and evaluation.

* A budget exists to continue process improvements.

* Organization desires an adequate and cost-effective approach to
testing.

By design, the fundamental software management practices include the
following SEI Capability Maturity Model (CMM) Level 2 activities:

* Requirements are documented and managed.

* A software development plan exists and is used to monitor progress.

* Basic software metrics are collected about software size, effort,
schedule, quality, and rework.

* Software configuration is managed.

* Mechanisms exist to assure software practices are followed.

The road map to improve testing practices in Figure 1 follows the CMM's
guidance by advocating changes to software management practices prior to
inserting improved software engineering practices. The next section
discusses the recommended order, consistent with the CMM's guidance,
for improving the testing disciplines identified in Figure 1.

SOFTWARE TEST MANAGEMENT

All of the practices listed in the previous section enable testing to be
accomplished more effectively. Although not listed as a fundamental
software management practice, effective software test management
practices also are a prerequisite for effective testing. This is why a
road map to improve testing must first address the management aspects of
the testing effort. Effective software test management practices include
the following [14]:

* Initial test planning activities (including planning for verification
activities at each lifecycle phase).

* Defining and managing test objectives (requirements).

* Test progress monitoring activities.

* Product and test quality tracking activities.

* Test configuration control activities.

Effective software test management practices form the basis for the
enlightened approach mentioned above that has enabled many organizations
to achieve dramatic quality and productivity improvements. These
software test management practices complement the fundamental CMM
software management practices by enabling organizations to determine
when a development or maintenance phase has been completed. For early
lifecycle phases such as requirements definition and design,
verification activities such as software inspections and test
specification and design evaluate early software work products to
determine readiness to proceed to the next phase.

Many organizations do not take full advantage of early test planning and
design, which can be likened to trying to build a house without all of
the plans and tools normally employed in this day and age. Why approach
our development projects poorly equipped? We consider lack of awareness
and resistance to investing additional effort in the early lifecycle
phases to be among the main obstacles to adopting these enlightened
practices. Restating this,

"Although programmers, testers, and programming managers know that code
must be designed and tested, many appear to be unaware that tests
themselves must be designed and tested -- designed by a process no less
rigorous and no less controlled than used for code." [6]

The following paragraphs discuss some present-day concerns about the
above listed software test management practices.

Test Planning - Many software developers and testers have never received
training in effective software testing practices and believe that test
planning is largely a waste of time. "If the testers don't have their
fingers on the keyboards (i.e., actually executing tests), they
are not really productive and they are not getting their jobs done," was
heard recently as one person's perspective on test planning [1]. More
emphasis is needed on appropriate levels of test planning that support
each required level of testing.

Test Requirements - Many people have asked me, "What are test
requirements? How are they different from software requirements?" We
often establish software requirements without documenting what we plan
to do to test those requirements. Testing is a risk game that tries to
search for the most likely defects since we know that we cannot
exhaustively test any system with all possible inputs. Test requirements
include the functions and logic to be tested, program constraints,
software states, input and output data conditions, and usage scenarios
[7]. Some testing consultants have cataloged lists of testing
requirements that can be selected based on concerns, risks, and business
logic [8]. These catalogs serve as reminders of where the most plausible
defects lurk in our software and are expected to be adapted for a
project's specific needs. Note that some requirements definition
languages include constructs for specifying testing requirements [9].
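
To make the idea concrete, a test requirement can be captured as a
simple record.  The C sketch below is ours, not from the article, and
the field names are hypothetical; they simply mirror the categories
listed above:

      /* A hypothetical test-requirement record.  The fields mirror the
         categories above: functions/logic, constraints, states, data
         conditions, and usage scenarios.  Illustration only. */
      struct test_requirement {
          char id[16];           /* e.g., "TR-042"                   */
          char function[80];     /* function or logic to be tested   */
          char constraint[80];   /* program constraint exercised     */
          char state[80];        /* required software state          */
          char input_cond[80];   /* input data condition             */
          char output_cond[80];  /* expected output data condition   */
          char scenario[80];     /* usage scenario covered           */
          int  risk;             /* relative risk/priority, 1..5     */
      };

Such records can be selected from a catalog and adapted to a project's
specific needs, as the consultants' catalogs suggest.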

Tracking Test Progress - Many organizations include rework effort in
their testing budgets and basically know when testing starts and ends
but have little clue about its status along the way. This tends to give
testing a bad reputation and accounts for much of the 50 percent budget
attributed to testing. We see leading organizations establish software
test lifecycles that distinguish between test development (setting test
requirements, designing tests, and implementing or building test cases
and procedures or scripts), test execution, and test evaluation [4].
One major problem associated with tracking testing's progress is
the lack of accurate time accounting information. We have seen software
engineers in Level 3 organizations who freely admit that their time
accounting information is bogus (inaccurate). I am convinced we can all
do a better job of tracking effort. Sometimes, we need more refined
information than our present charge codes provide to adequately track
progress.

Configuration Control of Test Work Products - All too often important
testing resources and work products are discarded following use. Saving
test work products often involves storing large files of information.
Identifying and managing key test work products require careful planning
and configuration control practices that deal with test artifacts.

If you have any of these concerns with respect to your software test
management practices, perhaps our road map and associated travel guide
can help your organization make lasting improvements that can
significantly improve software quality and development and maintenance
productivity.

                           (TO BE CONTINUED)

========================================================================

         TENTH INTERNATIONAL SOFTWARE QUALITY WEEK 1997 (QW'97)

              Conference Theme: Quality in the Marketplace

            San Francisco, California USA -- 27-30 May 1997

QW'97 is the tenth in a continuing series of International Software
Quality Week Conferences focusing on advances in software test
technology, quality control, risk management, software safety, and test
automation.  Software analysis methodologies, supported by advanced
automated software test methods, promise major advances in system
quality and reliability, assuring continued competitiveness.

The mission of the QW'97 Conference is to increase awareness of the
importance of software quality and methods used to achieve it.  It seeks
to promote software quality by providing technological and educational
opportunities for information exchange within the software development
and testing community.

The QW'97 program consists of four days of mini-tutorials, panels,
technical papers, and workshops that focus on software test automation
and new technology.  QW'97 provides the Software Testing and QA/QC
community with:

   o  Analysis of method and process effectiveness through case studies.
   o  Two-Day Vendor Show
   o  Quick-Start, Mini-Tutorial Sessions
   o  Vendor Technical Presentations
   o  Quality Assurance and Test involvement in the development process
   o  Exchange of critical information among technologists
   o  State-of-the-art information on software test methods

QW'97 is soliciting 45- and 90-minute presentations, half-day standard
seminar/tutorial proposals, 90-minute mini-tutorial proposals, and
proposals for participation in panel and "hot topic" discussions on any
area of testing and automation, including:

      Cost / Schedule Estimation
      ISO-9000 Application and Methods
      Test Automation
      CASE/CAST Technology
      Test Data Generation
      Test Documentation Standards
      Data Flow Testing
      Load Generation and Analysis
      SEI CMM Process Assessment
      Risk Management
      Test Management Automation
      Test Planning Methods
      Test Policies and Standards
      Real-Time Software
      Real-World Experience
      Software Metrics in Test Planning
      Automated Inspection
      Reliability Studies
      Productivity and Quality Issues
      GUI Test Technology
      Function Point Testing
      New and Novel Test Methods
      Testing Multi-Threaded Code
      Integrated Environments
      Software Re-Use
      Process Assessment/Improvement
      Object Oriented Testing
      Defect Tracking / Monitoring
      Client-Server Computing

IMPORTANT DATES:

      Abstracts and Proposals Due:            15 December 1996
      Notification of Participation:          1 March 1997
      Camera Ready Materials Due:             15 April 1997

FINAL PAPER LENGTH:

      Papers should be limited to 10 - 20 pages, including Text, Slides
      and/or ViewGraphs.

SUBMISSION INFORMATION:

      Abstracts should be 2-4 pages long, with enough detail to give
      reviewers an understanding of the final paper, including a rough
      outline of its contents. Indicate if the most likely audience is
      technical, managerial or application-oriented.

      In addition, please include:
         o  A cover page with the paper title, complete mailing and
            Email address(es), and telephone and FAX number(s) of each
            author.
         o  A list of keywords describing the paper.
         o  A brief biographical sketch of each author.

      Send abstracts and proposals including complete contact
      information to:

      Ms. Rita Bral
      Quality Week '97 Director
      Software Research Institute
      901 Minnesota Street
      San Francisco, CA  94107 USA

      For complete information on the QW'97 Conference, send Email to
      qw@soft.com, phone SR Institute at +1 (415) 550-3020, or, send a
      FAX to SR/Institute at +1 (415) 550-3030.

========================================================================

                    Identifying Code Coverage Rules

                           Kenneth A. Foster

GENERAL

Most code test strategies use an arbitrary number of random values to
exercise source code.  Identifying code coverage rules specify both the
number and the values of data for each component in the code under test.
The hypothesis is that when source code is completely covered and
intermediate results propagate to affect terminal results, then terminal
results will be incorrect if the source code is incorrect.  As in all
code- or program-based testing, code omissions are not necessarily
detectable.  However, if there are no omissions, code coverage is
complete and terminal results are correct with respect to the
specification, then the code is presumably correct.

The test data developed using these rules will be a minimum set, and
adequate in the sense that it will distinguish the source code from all
non-equivalent alternative programs. Since the rules have not been
automated, they should be applied manually at the routine or unit level.

CONDITIONS AND RELATIONS

When a variable can only have two values (usually true and false) both
these values are required to test a condition.  Despite many test tools
that claim 100% coverage when a condition or relation has both pass/fail
results, this is the only instance where pass/fail branch tests are
sufficient and 100% coverage is a valid statistic.

When the condition involves a variable and a constant, as in X?j with
variable X, relation ? (where ? is one of < <= = <> > >=), and constant
j, then values for X are any three values i<j<k (j- and j+ below denote
values just below and just above j):

      Code         Test Values                        Results
      X= : <>j     X =  i  j  k                       f t f  :  t f t
      X< : >=j     X =  j- j  k                       t f f  :  f t t
      X> : <=j     X =  i  j  j+                      f f t  :  t t f

For example, the condition x<9 could be exercised with x=7,9 for
true/false results and so-called 100% coverage.  However, the conditions
x<=7, x=7, x<8, x<=8, and x<>9 also give identical results.  When the
condition is instead tested with 8, 9, and any number >9, with results
true/false/false, only the equivalents x<=8 and x<9 are identified and
the ambiguities are eliminated.
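
The ambiguity is easy to demonstrate mechanically.  The C driver below
is our sketch, not code from the article: it applies the three test
values 8, 9, and 10 to the intended condition x<9 and to three
plausible alternatives, printing each condition's true/false vector.

      #include <stdio.h>

      /* Candidate conditions for the intended test "x < 9". */
      static int lt9(int x) { return x < 9;  }   /* intended        */
      static int le8(int x) { return x <= 8; }   /* true equivalent */
      static int le9(int x) { return x <= 9; }   /* near miss       */
      static int ne9(int x) { return x != 9; }   /* near miss       */

      int main(void)
      {
          /* Three-value rule for X< : values j-, j, k with j = 9. */
          int values[3] = { 8, 9, 10 };
          int (*cond[4])(int) = { lt9, le8, le9, ne9 };
          const char *name[4] = { "x<9", "x<=8", "x<=9", "x<>9" };
          int c, v;

          for (c = 0; c < 4; c++) {
              printf("%-5s:", name[c]);
              for (v = 0; v < 3; v++)
                  printf(" %c", cond[c](values[v]) ? 't' : 'f');
              printf("\n");
          }
          return 0;  /* x<9 and x<=8 print "t f f"; the others differ */
      }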

Similar rules apply to conditions in two variables X and Y, again using
only two or three values in three cases:

      Code         Test Values                        Results
      X= : <>Y     XY= ij ii ji  or ij  jj  ji        f t f  :  t f t
      X< : >=Y     XY= ij ii kj  or jk kk ji          t f f  :  f t t
      X> : <=Y     XY= jk ii ji  or ij  kk kj         f f t  :  t t f

There is also a universal test set for X?Y with four cases of two values
that gives mutually exclusive results for all relations:

      XY = ik ii kk ki

Note that three cases are required to test each single two-variable
relation or condition, and the test values must be strictly relational
exhaustive (< = and >).  Simply using values that agree with the
relations but do not match the i,j,k pattern does not eliminate some
ambiguities.  For example, exercising the condition X=Y with X=3,4,8 and
Y=7,4,6 yields false/true/false results but leaves the alternatives
x<=4, 4>=y, 4>y, 5>=y, 5>y, 6>y, and x=y.  If the alternatives are
listed with the source code, as in a trace report, the incorrect
alternatives could be inspected away in this example, since the source
code shows variable names rather than constants.
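
The universal set is easy to verify.  In this small C check of ours
(with i=2 and k=5 chosen arbitrarily, i<k), each of the six relations
yields a distinct true/false vector over the four cases:

      #include <stdio.h>

      int main(void)
      {
          /* Universal test set XY = ik ii kk ki, here with i=2, k=5. */
          int xs[4] = { 2, 2, 5, 5 };
          int ys[4] = { 5, 2, 5, 2 };
          const char *rel[6] = { "X<Y", "X<=Y", "X=Y",
                                 "X<>Y", "X>=Y", "X>Y" };
          int r, c;

          for (r = 0; r < 6; r++) {
              printf("%-5s:", rel[r]);
              for (c = 0; c < 4; c++) {
                  int x = xs[c], y = ys[c], v = 0;
                  switch (r) {
                  case 0: v = (x <  y); break;
                  case 1: v = (x <= y); break;
                  case 2: v = (x == y); break;
                  case 3: v = (x != y); break;
                  case 4: v = (x >= y); break;
                  case 5: v = (x >  y); break;
                  }
                  printf(" %c", v ? 't' : 'f');
              }
              printf("\n");  /* each relation prints a distinct vector */
          }
          return 0;
      }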

A heuristic suggests that the failure or bypass condition(s) of a WHILE
loop should be executed initially, before entering the WHILE body.  Thus
if the WHILE body changes an uninitialized variable the bypass will
force fault detection.  Otherwise, the assignment in the WHILE body will
mask that variable and the fault will not be detected.
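
A minimal C illustration of the masking effect (the routine and its
seeded fault are our invention, not the article's):

      /* Returns the last element of a[0..n-1].  Seeded fault: "result"
         is never initialized, so the n == 0 (bypass) case returns
         garbage. */
      int last_value(const int a[], int n)
      {
          int result;            /* fault: missing initialization   */
          int i = 0;
          while (i < n) {        /* bypass taken when n == 0        */
              result = a[i];     /* this assignment masks the fault */
              i++;
          }
          return result;
      }

Following the heuristic, exercise the bypass first: calling
last_value(a, 0) returns an uninitialized value and exposes the fault,
while any test with n > 0 executes the assignment and hides it.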

MULTIPLE DECISIONS, BOOLEAN EXPRESSIONS

When multiple variables have only true/false values then:  AND (OR)
Select a case with all values true (false), then cases with each
variable individually false (true) while all other relations are true
(false).  This requires a total of n+1 cases, where n = number of
variables.

The pattern of values for the AND of three Boolean variables is:

      X & Y & Z               RESULT
      T   T   T               T
      T   T   f               F
      T   f   T               F
      f   T   T               F
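
These cases can be generated mechanically for any n.  The C sketch
below (ours) prints the n+1 cases for a three-variable AND in the same
order as the table above; for OR, complement the values and results.

      #include <stdio.h>

      #define N 3   /* number of Boolean variables */

      int main(void)
      {
          int kase, i;

          /* Case 0: all true.  Cases 1..N: exactly one variable false. */
          for (kase = 0; kase <= N; kase++) {
              int v[N], and_result = 1;
              for (i = 0; i < N; i++) {
                  v[i] = (kase != N - i);   /* one false position */
                  and_result = and_result && v[i];
              }
              for (i = 0; i < N; i++)
                  printf("%c   ", v[i] ? 'T' : 'f');
              printf("            %c\n", and_result ? 'T' : 'F');
          }
          return 0;   /* reproduces the four rows of the table above */
      }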

When multiple relational conditions occur, the cases should reflect
all the logically possible combinations of strict true relations, as in:
      W<=X & Y=Z    RESULTS
       <      =     T T = T
       =      =     T T = T

followed by individually true with false

      <       <     T F = F
      =       <     T F = F
      <       >     T F = F
      =       >     T F = F

and individually false with true

      >       =     F T = F


The >,> and >,< cases (both false) are unnecessary unless XOR is a legal
boolean operator in the source language.  The values for each relation
should be selected from the previous single X?Y conditions unless
inspection of the source code and case values together is used.
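
Concrete test values realizing those seven strict-relation combinations
might look like the following C sketch of ours; the w, x, y, z values
are arbitrary picks that produce each required combination:

      #include <stdio.h>

      /* One case per strict-relation combination for W<=X & Y=Z. */
      struct kase { int w, x, y, z; const char *label; };

      int main(void)
      {
          struct kase k[7] = {
              { 1, 2, 5, 5, "<,=" },  /* both true            => T */
              { 2, 2, 5, 5, "=,=" },  /* both true            => T */
              { 1, 2, 4, 5, "<,<" },  /* W<=X true, Y=Z false => F */
              { 2, 2, 4, 5, "=,<" },
              { 1, 2, 6, 5, "<,>" },
              { 2, 2, 6, 5, "=,>" },
              { 3, 2, 5, 5, ">,=" },  /* W<=X false, Y=Z true => F */
          };
          int i;

          for (i = 0; i < 7; i++)
              printf("%s  =>  %c\n", k[i].label,
                     (k[i].w <= k[i].x && k[i].y == k[i].z) ? 'T' : 'F');
          return 0;
      }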

ARITHMETICS, COMPUTATION

Assign n+1 unequal (non-zero, non-one) values, where n equals the number
of references to the most frequently referenced variable in an
arithmetic statement or formula.  After those n+1 values are assigned,
assign m+1 values in the same way, where m equals the number of
references to the next most frequently referenced variable, and so on.
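
As an illustration of the rule (our example, not from the article): in
y = x*x the variable x is referenced twice, so n = 2 and three unequal,
non-zero, non-one values are required.  Two values are not enough,
because the faulty alternative x+x agrees with x*x at x = 2:

      #include <stdio.h>

      int main(void)
      {
          /* n = 2 references to x in y = x*x, so n+1 = 3 unequal,
             non-zero, non-one values are assigned to x. */
          int values[3] = { 2, 3, 5 };
          int i;

          for (i = 0; i < 3; i++) {
              int x = values[i];
              printf("x=%d: x*x=%2d  x+x=%2d\n", x, x * x, x + x);
          }
          return 0;   /* the formulas agree at x=2 but differ at 3 and 5 */
      }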

========================================================================
----------------->>>          TTN SUBMITTAL POLICY          <<<---------
========================================================================

The TTN On-Line Edition is Emailed the 15th of each month to subscribers
worldwide.  To have your event listed in an upcoming issue, Email a
complete description of your Call for Papers or Call for Participation
to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o  Submission deadlines indicated in "Calls for Papers" should provide
   at least a 1-month lead time from the TTN On-Line issue date.  For
   example, submission deadlines for "Calls for Papers" in the January
   issue of TTN On-Line would be for February and beyond.
o  Length of submitted non-calendar items should not exceed 350 lines
   (about four pages).  Longer articles are OK and may be serialized.
o  Length of submitted calendar items should not exceed 60 lines (one
   page).
o  Publication of submitted items is determined by Software Research,
   Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters and TTN-Online disclaims any responsibility for their
content.

TRADEMARKS:  STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF,
CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage,
STW/Advisor and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To request a FREE subscription, to CANCEL a current subscription, or to
submit or propose an article, send Email to "ttn@soft.com".

TO SUBSCRIBE: Include in the body of your letter the phrase "subscribe
".

TO UNSUBSCRIBE: Include in the body of your letter the phrase
"unsubscribe ".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                   San Francisco, CA  94107 USA

                   Phone:          +1 (415) 550-3020
                   Toll Free:      +1 (800) 942-SOFT (USA Only)
                   FAX:            +1 (415) 550-3030
                   E-mail:         ttn@soft.com
                   WWW URL:        http://www.soft.com


                               ## End ##