sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======               May 1998              =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1998 by Software Research,
Inc.

========================================================================

INSIDE THIS ISSUE:

   o  11th International Software Quality Week, 26-29 May 1998, San
      Francisco, California.

   o  Real-Time Feedback Management: Maximizing the Value of Customer
      Feedback

   o  Measuring Software Quality to Support Testing, by Shari Lawrence
      Pfleeger (Part 2 of 2)

   o  Using AONIX' UML Use Case Editor, Test Generator to Drive Capture
      Playback with CAPBAK/X and Do Coverage Assessment with TCAT --
      Free Seminars by AONIX (June 1998).

   o  Software Testing: Challenges for the Future (4 June 1998 Meeting,
      Brussels, Belgium)

   o  Frequently Begged Questions and How To Answer Them, by Nicholas
      Zvegintzov

   o  Call for Papers, Birds-of-a-Feather Sessions, 2nd International
      Quality Week/Europe (9-13 November 1998, Brussels, Belgium)

   o  MCDC Standard Effort -- IEEE P1506, by John Joseph Chilenski

   o  You Know You're In Trouble When...

   o  International Conference on Reliable Software Technologies (8-12
      June 1998, Uppsala, Sweden)

   o  "Software Quality Professional" Premiering November 1998, by Taz
      Daughtrey

   o  TTN Submittal Policy

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

            11th International Software Quality Week (QW'98)

At Quality Week '98 you get state-of-the-art information and the latest
advances in software QA technologies, from leading people and
organizations in Industry, Government and the Academic world.  90
Presentations in 4 Days!  A two-day vendor show with the latest tools
and services that you need.  Plus, you can attend 14 current-topic
Birds-Of-A-Feather sessions.

                11th International Software Quality Week
                       The Sheraton Palace Hotel
                       San Francisco, California
                             26-29 May 1998

Only a few days left to register by FAXing in your form with your credit
card number and signature to [+1] (415) 550-3030.  Or, stop by our
WebSite at:

                  <http://www.soft.com/QualWeek/QW98>

or just send Email to qw@soft.com and let us know you want to register.

========================================================================

                     Real-Time Feedback Management:
               Maximizing the Value of Customer Feedback

                              3 June 1998
                    Center for Software Development
                          San Jose, California

Sign up now to learn how successful Developers, Product Managers, QA
Managers, and entrepreneurs are using real-time customer feedback
management to:

  - accelerate product and market development
  - accelerate customer acceptance
  - simplify the process for defining new product and
    marketing requirements
  - improve product quality in real-time
  - create reference-able super-users, champions, and evangelists

Speakers include:

 * Rob Fuggetta, Partner, Market Relations Group,
   a Regis McKenna Company
 * William Ryan, Chairman of the Board, Niehaus Ryan Wong Inc.
 * Tom Drake, Management and Technology Transformation
   Specialist, Booz Allen & Hamilton
 * Intel Corporation
 * Wallop Software, Inc.


The seminar is sponsored by BetaSphere, Inc., a leader in
providing real-time customer feedback management solutions
to companies developing, deploying, and supporting business
to business, consumer, and developer products and services,
in partnership with the Center for Software Development,
the Silicon Valley Association of Software Entrepreneurs,
and The Red Herring.

The event will be held from 9am - 2pm on Wednesday,
June 3, 1998, at The Center for Software Development at
111 West Saint John, Suite 200 in San Jose (lunch provided).

Space is limited so register today by visiting:

              <http://www.betasphere.com/conference.html>

or sending e-mail to: rsvp@betasphere.com.
You can call for details at 1.888.238.2243 or 650.930.0200

========================================================================

      Measuring Software Quality to Support Testing (Part 2 of 2)

                        Shari Lawrence Pfleeger
                         Systems/Software, Inc.
                         4519 Davenport St. NW
                       Washington, DC 20016-4415
                       Email: s.pfleeger@ieee.org

   Note:  This material is adapted from Software Engineering: Theory
   and Practice, by Shari Lawrence Pfleeger (Prentice Hall, 1998).  A
   version of this paper was published in Dr. Dobb's Journal, March
   1998.

Confidence in the Software

We can use fault estimates to tell us how much confidence we can place
in the software we are testing.  Confidence, usually expressed as a
percentage, tells us the likelihood that the software is fault-free.
Thus, if we say a program is fault-free with a 95 percent level of
confidence, then we mean that the probability that the software has no
faults is 0.95.

Suppose we have seeded a program with S faults, and we claim that the
code has only N actual faults.  We test the program until we have found
all S of the seeded faults.  If, as before, n is the number of actual
faults discovered during testing, then the confidence level can be
calculated as

     C =  1              if n > N

          S/(S-N+1)      if n <= N

For example, suppose we claim that a component is fault-free, meaning
that N is zero.  If we seed the code with 10 faults and find all 10
without uncovering an indigenous fault, then we can calculate the
confidence level with S = 10 and N = 0.  Thus, C is 10/11, for a
confidence level of 91 percent.  If the requirements or contract mandate
a confidence level of 98 percent, we would need to seed S faults, where
S/(S-0+1) = 98/100.  Solving this equation, we see that we must use 49
seeded faults and continue testing until all 49 seeded faults are found
(with no indigenous faults discovered).
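
The arithmetic above is easy to script.  The short Python sketch below
(the function names are ours, for illustration only) applies the
confidence formula and searches for the smallest number of seeded
faults needed to reach a required confidence level:

    def confidence(S, N, n):
        # Confidence from the seeding formula above: S seeded faults
        # (all found), N claimed indigenous faults, and n indigenous
        # faults actually discovered during testing.
        if n > N:
            return 1.0
        return S / (S - N + 1)

    def seeds_needed(required, N=0):
        # Smallest S for which S/(S - N + 1) reaches the required
        # confidence when we claim N indigenous faults.
        S = 1
        while confidence(S, N, N) < required:
            S += 1
        return S

    print(confidence(10, 0, 0))   # 0.909..., the 91 percent example
    print(seeds_needed(0.98))     # 49, as computed above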

This approach presents a major problem:  we cannot predict the level of
confidence until all seeded faults are detected.  Richards (1974)
suggests a modification, where the confidence level can be estimated
using the number of detected seeded faults, whether or not all have been
located.  In this case, C is

     C =  1     if n > N

          ?     if n <= N

These estimates assume that all faults have an equal probability of
being detected, which is not likely to be true.  However, many other
estimates take these factors into account.  Such estimation techniques
not only give us some idea of the confidence we may have in our programs
but also provide a side benefit.  Many programmers are tempted to
conclude that each fault discovered is the last one.  If we estimate the
number of faults remaining, or if we know how many faults we must find
to satisfy a confidence requirement, we have incentive to keep testing
for one more fault.

These techniques are also useful in assessing confidence in components
that are about to be reused.  We can look at the fault history of a
component, especially if fault seeding has taken place, and use
techniques such as these to decide how much confidence to place in
reusing the component without testing it again.  Or, we can seed the
component and use these techniques to establish a baseline level of
confidence.

Identifying Fault-Prone Code

There are many techniques used to help identify fault-prone code, based
on past history of faults in similar applications.  For example, some
researchers track the number of faults found in each component during
development and maintenance.  They also collect measurements about each
component, such as size, number of decisions, number of operators and
operands, or number of modifications.  Then, they generate equations to
suggest the attributes of the most fault-prone modules.  These equations
can be used to suggest which of your components should be tested first,
or which should be given extra scrutiny during reviews or testing.

Porter and Selby (1990) suggest the use of classification trees to
identify fault-prone components.  Classification tree analysis is a
statistical technique that sorts through large arrays of measurement
information, creating a decision tree to show which measurements are the
best predictors of a particular attribute.  For instance, suppose we
collect measurement data about each component built in our organization.
We include size (in lines of code), number of distinct paths through the
code, number of operators, depth of nesting, degree of coupling and
cohesion (rated on a scale from 1 as lowest to 5 as highest), time to
code the component, number of faults found in the component, and more.
We use a classification tree analysis tool (such as C4, CART, or the
Trellis graphics capabilities of S-Plus) to analyze the attributes of
the components that had five or more faults, compared with those that
had fewer than five faults.

The tree is used to help us decide which components in our current
system are likely to have a large number of faults.  According to the
tree, if a component has between 100 and 300 lines of code and has at
least 15 decisions, then it may be fault-prone.  Or, if the component
has over 300 lines of code, has not had a design review, and has been
changed at least five times, then it too may be fault-prone.  We can use
this type of analysis to help us target our testing when testing
resources are limited.  Or, we can schedule inspections for such
components, to help catch problems before testing begins.
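
As a rough sketch of how such an analysis might be automated, the
fragment below fits a CART-style tree with scikit-learn's
DecisionTreeClassifier standing in for the C4, CART, or S-Plus tools
mentioned above; the metric names and data are invented purely for
illustration:

    from sklearn.tree import DecisionTreeClassifier, export_text

    FEATURES = ["lines_of_code", "decisions", "nesting_depth",
                "times_changed", "had_design_review"]

    # Historical components: metrics plus whether each component ended
    # up with five or more faults (1) or fewer than five (0).
    X = [[120, 18, 4, 2, 1],
         [450, 40, 6, 7, 0],
         [ 80,  5, 2, 1, 1],
         [310, 22, 5, 6, 0],
         [ 60,  3, 1, 0, 1],
         [220, 16, 3, 4, 0]]
    y = [1, 1, 0, 1, 0, 0]

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=FEATURES))

    # Flag components of the current system for extra review or testing.
    print(tree.predict([[150, 20, 4, 3, 0], [90, 6, 2, 1, 1]]))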

                     Measuring test effectiveness.
      One aspect of test planning and reporting is measuring test
      effectiveness.  Graham (1996) suggests that test
      effectiveness can be measured by dividing the number of
      faults found in a given test phase by the total number of
      faults found (including those found after that phase).
      For example, suppose integration testing finds 56 faults,
      and the total testing process finds 70 faults.  Then
      Graham's measure of test effectiveness says that
      integration testing was 80% effective.  However, suppose
      the system is delivered after
      the 70 faults were found, and 70 additional faults are
      discovered during the first six months of operation.  Then
      integration testing is responsible for finding 56 of 140
      faults, or only 40% test effectiveness.

      This approach to evaluating the impact of a particular
      testing phase or technique can be adjusted in several ways.
      For example, failures can be assigned a severity level, and
      test effectiveness can be calculated by level.  In this way,
      integration testing might be 50% effective at finding
      critical faults, but 80% effective at finding minor faults.
      Alternatively, test effectiveness may be combined with root
      cause analysis, so that we can describe effectiveness in
      finding faults as early as possible in development.  For
      example, integration testing may find 80% of faults, but
      half of those faults might have been discovered earlier,
      such as during design review, because they are design
      problems.

      Test efficiency is computed by dividing the number of faults
      found in testing by the cost of testing, to yield a value in
      faults per staff-hour.  Efficiency measures help us to
      understand the cost of finding faults, as well as the
      relative costs of finding them in different phases of the
      testing process.

      Both effectiveness and efficiency measures can be useful in
      test planning;  we want to maximize our effectiveness and
      efficiency based on past testing history.  Thus, the
      documentation of current tests should include measures that
      allow us to compute effectiveness and efficiency.
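
      As a simple illustration (the numbers are those from Graham's
      example above, and the function names are ours), both measures
      reduce to a few lines of Python:

          def effectiveness(found_in_phase, total_found):
              # Fraction of all known faults found by one test phase.
              return found_in_phase / total_found

          def efficiency(found_in_phase, staff_hours):
              # Faults found per staff-hour spent in the phase.
              return found_in_phase / staff_hours

          print(effectiveness(56, 70))    # 0.8, before delivery
          print(effectiveness(56, 140))   # 0.4, six months later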

Measure for Measure

The metrics described in this article represent only some of the ways
that measurement can assist you in testing your software.  Measurement
is useful in at least three ways:

  *    Understanding your code and its design
  *    Understanding and controlling the testing process
  *    Predicting likely faults and failures

You can measure processes, products, and resources to make yourself a
better-educated tester.  For example, measures of the effectiveness of
your testing process suggest areas where there is room for improvement.
Measures of code or design structure highlight areas of complexity where
more understanding and testing may be needed.  And measuring the time it
takes to prepare for tests and run them helps you understand how to
maximize your effectiveness by investing resources where they are most
needed.  You can use the goals of testing and development to suggest
what to measure, and even to define new measurements tailored to your
particular situation.  The experience reported in the literature is
clear:  measurement is essential to doing good testing.  It is up to you
to find the best ways and best measures to meet your project's
objectives.

References

Chillarege, Ram, Inderpal S. Bhandari, Jarir K. Chaar, Michael J.
Halliday, Diane S. Moebus, Bonnie K. Ray and Man-Yuen Wong, "Orthogonal
defect classification:  A concept for in-process measurements," IEEE
Transactions on Software Engineering, 18(11), pp. 943-956, November
1992.

Grady, Robert B., Successful Software Process Improvement, Prentice
Hall, Englewood Cliffs, New Jersey, 1997.

Graham, Dorothy R., "Measuring the effectiveness and efficiency of
testing," Proceedings of Software Testing `96, Espace Champerret, Paris,
France, 14 June 1996.

Mays, R., C. Jones, G. Holloway and D. Studinski, "Experiences with
defect prevention," IBM Systems Journal, 29, 1990.

Mills, Harlan D., "On the statistical validation of computer programs,"
Technical report FSC-72-6015, IBM Federal Systems Division,
Gaithersburg, Maryland, 1972.

Myers, Glenford J., The Art of Software Testing, John Wiley, New York,
1979.

Porter, Adam and Richard Selby, "Empirically-guided software development
using metric-based classification trees," IEEE Software 7(2), pp. 46-54,
March 1990.

Richards, F. R., "Computer software:  Testing, reliability models and
quality assurance," Technical report NPS-55RH74071A, Naval Postgraduate
School, Monterey, California, 1974.

Shooman, M. L. and M. Bolsky, "Types, distribution and test and
correction times for programming errors," Proceedings of the 1975
International Conference on Reliable Software, IEEE Computer Society
Press, New York, 1975.

========================================================================

                           AONIX 10X Seminars

            Using AONIX' UML Use Case Editor, Test Generator
                to Drive Capture Playback with CAPBAK/X
                  and Do Coverage Assessment with TCAT

AONIX has a set of free half-day seminars featuring TestWorks in concert
with AONIX' award-winning UML and test generation capability.  This
end-to-end solution promises ten times (hence the name "10X")
productivity increases when the product suite (AONIX' Stp plus
TestWorks) is combined with training in proper automated tool use.

Dates and locations for these free half-day seminars (8:00 AM through
12:00, continental breakfast included) are...

        DATE                    CITY

        9 June 1998             Montreal, Quebec CANADA
                                Phoenix, AZ

        10 June 1998            Dallas, TX
                                Pittsburgh, PA

        11 June 1998            Houston, TX
                                Reston, VA

        12 June 1998            Raleigh, NC
                                Seattle, WA

For information about how to register send Email to seminars@aonix.com
or visit the Seminar Website:

                    <http://www.aonix.com/seminars>

Or, call the registrar at +1 (619) 824-0238.

========================================================================

              Software Testing - Challenges for the Future
                              June 4, 1998
                 Hotel President WTC, Brussels, Belgium

The Belgian Software Testing User Group has organized a one day seminar
on Software Testing on June 4, 1998.  An overview of the programme can
be found below.  The brochure for this event (PDF format) can be found
on the WWW at <http://www.ti.kviv.be/pdf/8ASDSTE.pdf>.

This event is organized with the support of British Computer Society -
Special Interest Group in Software Testing (BCS-SIGIST - UK) and TestNet
(the Netherlands).

If you would like additional information on this event, please do not
hesitate to contact Luc Baele (luc.baele@ti.kviv.be).

========================================================================

Program

09.30:  Welcome and Introduction, Bruno Peeters, Gemeentekrediet van
        België, Brussels, Belgium; Chairman of the Section on Software
        Metrics & Software Testing

Morning Session

09.45:  Using Testing for Process Conformance Management, Adrian Burr,
        tMSC, Haverhill, England

10.30:  Coffee - Exhibition

11.00:  Millennium and Euro testing, Niek Hooijdonk, CTG, Diegem,
        Belgium

11.45:  Testing WWW Applications, Gualterio Bazzana, Onion, Brescia,
        Italy

12.30:  Lunch - Exhibition

Session A : Tools

14.00:  Introduction of a capture/playback tool, Geert Lefevere and Gino
        Verbeke, LGTsoft, Roeselare, Belgium

14.40:  Usability Testing, Christine Sleutel, ING, Amsterdam,
        Netherlands

15.20:  Test Factory (TSite), Ingrid Ottevanger, IQUIP, Diemen,
        Netherlands

Session B : Management

14.00:  Test Assessment, Jens Pas, ps_testware, Leuven, Belgium

14.40:  Organization of acceptance test team at Banksys, Hugo Gabriels,
        Banksys, Brussels, Belgium

15.20:  Lessons learned on test management of large, complex integration
        projects, Michel Luypaert, CSC, Brussels, Belgium

Closing Session

16.00:  Coffee - Exhibition

16.30:  Test Process Improvement
        Martin Pol, GiTek Software, Antwerp, Belgium

17.15:  Closing

General Information

Date:   June 4, 1998

Venue:  Hotel President World Trade Center, Bd. Em. Jacqmain 180, 1000
BRUSSELS.  Tel. +32.(0)2.203.20.20

The Hotel is situated next to the World Trade Center and in front of
Brussels' North Station.

Secretariat:

        Technologisch Instituut vzw
        Ingenieurshuis - K VIV
        Mr L. Baele, project co-ordinator
        Desguinlei  214
        2018  Antwerpen 1
        Tel. : 03/216.09.96
        Fax : 03/216.06.89
        e-mail : luc.baele@ti.kviv.be

========================================================================

           Frequently Begged Questions and How To Answer Them

                         by Nicholas Zvegintzov
    Copyright 1998 by Nicholas Zvegintzov; Reprinted with Permission
       First published in IEEE Software, 15, 2, March/April 1998

From time to time I receive calls or e-mail asking empirical questions
about software and the software business.  Typical questions include
"How productive are programming teams? What are the industry norms? What
are the best practices? How should I measure the productivity of a
programming team?"  These, and others like them, are frequently asked
questions.  I always answer these questions with a question.  "What are
you trying to decide?"

People almost always ask empirical questions because they need to make
decisions.  These decisions are important, involving risk to human life
and health, affecting economic and societal well-being, and determining
the equitable and productive use of scarce resources.

The questioners seem to feel -- because the questions are so frequently
asked and the answers so often provided -- that the answers must be
widely known.  But in looking at answers I observe the same pattern
repeatedly.  Answers that seemed well-known and widely supported turn
out to be only widely proliferated.  They are based on one or two
sources, which often prove thin and unreliable. They are "transparency
facts", copied from presentation to presentation.

When confronted with such questions, the questioners have no time to
research the answers, so they grab whatever answers are at hand. Thus,
many frequently asked questions about software are more truly frequently
begged questions.

ANSWER SOURCES

Before analyzing some sample questions, let me classify likely answer
sources.  I have graded the following sources from A through F, in order
of reliability.

A. Systematic gathering of real data

Real data gathered systematically from a representative sample of real
organizations or systems offers the best answer to empirical questions.
Such data might, for example, show the amount of software at a
representative sample of organizations.

B. Questionnaire studies

A questionnaire study substitutes secondhand reports for firsthand
observations.  A typical questionnaire might ask software managers at a
representative sample of organizations for the amount of software
installed there.

C. Isolated data points from observation

An isolated data point is a single observation, for example, the amount
of software at a single organization or in a single application.

D. Isolated data points from advocacy

Advocacy is the action of basing an argument on, or drawing a conclusion
from, a comparison of different situations.  Much published research
consists of experimental or observational comparisons of one method
against another, and many of these accounts contain data points.  An
example might be the amount of software in an application programmed in
one language versus the amount of software in the same or similar
application programmed in another language.  A data point is not itself
tainted by being embedded in such advocacy, but it has less
persuasiveness as a sample of reality because of the motivation for
choosing it.

E. Statement by information provider

Some organizations exist to collect and sell information.  These include
the two main purveyors of software-related statistics, Howard Rubin
Associates and Capers Jones' Software Productivity Group, large
"industry analyst" organizations like the Gartner Group, and many
smaller organizations.  Even a small organization can become well known
by supplying an answer to a much-asked question.  These organizations
may collect their data from client surveys, industry sources,
questionnaires, or direct interviews, but they keep hidden the actual
data and data sources, and in many cases their information appears to be
no more than plausible extrapolations from facts or fancies.

F. Folklore

Finally we descend to folklore -- fancy, imagination, prejudice, or
unsupported opinion.  Some of the sheerest folklore has the widest
currency, having been laundered through innumerable retellings. Some of
my favorite folklore examples include "Two thirds of software life-cycle
cost is in the operation and maintenance phase" (how would you
corroborate that?), "A one-line change has a 50 percent chance of being
wrong", and "Most projects fail."

NINE QUESTIONS

Let us now consider nine frequently asked questions and, for each, ask
the following.

Why is it important?

What are the sources and the reliability of its frequently offered
answers?

What would be a reliable basis for an answer to it?

1. How much software is there?

Importance.  This information affects innumerable decisions about the
cost of software development and maintenance, the penetration and
extensiveness of software, and society's dependence on software -- for
example, about the dangers from and the costs of renovating the Year
2000 software problem.

Sources and reliability.  Answer sources include a few isolated data
points on the amount of software at organizations or in devices, and
statements by information providers.  Grade: D.

Reliable basis.  Systematic sampling of the amount of software at
organizations or in devices would be a reliable basis. Organizational
inventories relating to Year 2000 projects may help, and isolated data
points suggest that organizations are discovering in these projects that
they have more software than they thought.


2. Which languages are used to write software?

Importance.  This information affects many decisions about the
desirability of new languages and new software paradigms.

Sources and reliability.  Answer sources include a very few isolated
data points on language mix at organizations, a few press surveys, and
statements by information providers.  Grade: C-.

Reliable basis.  Systematic sampling of the software language mix at
organizations and in devices would be a reliable basis. Organizational
inventories relating to Year 2000 projects may help here also.

3. How old is software?

Importance.  This information affects many decisions about whether,
when, and how to replace or upgrade software.

Sources and reliability.  Answer sources include a few isolated data
points, statements by information providers, and folklore. Grade: E+.

Reliable basis.  Systematic sampling of the age of software
installed at organizations and embedded in devices would be a reliable
basis.

4. How many software professionals are there?

Importance.  This information affects many decisions about the role of
software in the economy, about the possibility of improving software
both in individual organizations and in general, and about recruitment,
training, and education.

Sources and reliability.  Answer sources include systematic gathering of
real data -- for example, the Census Bureau and the Bureau of Labor
Statistics.  It is doubtful, however, that these surveys identify all
the people who actually work with software. Additional answer sources
are statements by information providers. Grade: C.

Reliable basis.  Systematic sampling of organizations' software staffs
would be a reliable basis.

5. What do software professionals do?

Importance.  This information affects many decisions about method,
management, and quality, and about changing methods and processes.

Sources and reliability.  Answer sources include a few questionnaire
studies (about software practices, for example), a few isolated data
points from observation, and many more data points gathered from
experiments or from projects that advocate changing methods or
processes.  Much folklore also addresses this question, but I will not
count that against the grade.  Grade: C.

Reliable basis.  Systematic observations and samples would be a reliable
basis.

6. What do software professionals know?

Importance.  This information affects many decisions about quality,
quality improvement, organization, training, and so on.

Sources and reliability.  Answer sources include a few questionnaire
studies (about methods used, for example), a few isolated data points
derived from observation, and many more data points gathered from
advocacy, including projects that advocate a change of capability
maturity.  Folklore exerts an influence here as well, but I will not
count that against this grade, either. Grade: C.

Reliable basis.  Systematic surveys would be a reliable basis.

7. How good is software?

Importance.  This information affects many decisions about human safety
and society's well-being, and about software improvement and methods for
achieving it.

Sources and reliability.  Answer sources include a few isolated data
points -- derived from either observation or advocacy -- about software
failures per line of delivered code.  Other sources include a systematic
though "sanitized" study of software failures per executed line of code
(Edward N. Adams, "Optimizing Preventive Service of Software Products",
IBM J. Research and Development, Jan. 1984, pp. 2-14) and statements by
information providers. Grade: C-.

Reliable basis.  Systematic measurement and observation would be a
reliable basis.

8. How good is development?

Importance.  This information affects many decisions about software
methods and advocacy of changed methods.

Sources and reliability.  To assess how good development is, you must
know more than which projects succeeded and which failed, either at a
given organization or generally.  To obtain reliable data, you must
observe a representative sample of development projects and assess their
level of success in cost, schedule, delivery, acceptance, and fitness
for purpose.

Very few questionnaire studies have claimed to do this.  For a long
time, software experts cited figures from a 1979 General Accounting
Office (GAO) report ("Contracting for Computer Software Development:
Serious Problems Require Management Attention to Avoid Wasting
Additional Millions", FGMSD-80-4, Washington, D.C.) that 29 percent of
software contracted for was never delivered, 45 percent could not be
used, 24 percent was usable only after modification, and only the tiny
remainder actually succeeded.  These numbers fell out of favor partly
because of their age and partly because Bruce Blum (Software
Engineering: A Holistic View, Oxford University Press, 1992) pointed out
that the GAO's numbers were based -- as the report itself clearly said
-- on a sample of eight contracts selected because they were already in
some form of trouble, such as litigation.

In 1995, new data superseded the GAO numbers.  Supplied by The Standish
Group, a consulting company with fewer than nine members, these numbers
-- reputedly based on a questionnaire study -- indicated that 31.1
percent of software projects were canceled, 52.7 percent were
"challenged", and only 16.2 percent succeeded (CHAOS, The Standish Group
International, Dennis, Mass., http://www.standishgroup.com, 1995).

These results, amounting to one misinterpretation and one instance of a
statement by an information provider, add up to a dismal grade barely
better than folklore.  Grade: F+.

Why would the software community be so ready to use numbers this
unreliable? Perhaps because they confirm deep-seated fears that the
situation is bad, and because they make absolutely any proposal look
good by comparison.

Reliable basis.  In this case, a representative sample of development
projects followed by an assessment of their level of success would be a
reliable basis.

9. How good is maintenance?

Importance.  This information affects many decisions about software
cost, software replacement, and software correctness and safety.

Sources and reliability.  Answer sources include questionnaire studies,
replicated in several contexts, of the effort and productivity of
maintenance, isolated data points on the reliability of maintenance
changes, both from observation and advocacy, plus a conclusion from
Adams' study.  Folklore, laundered through many retellings, also
contributes answers to this question. Grade: C-.

Reliable basis.  Systematic examinations of maintenance actions would be
a reliable basis.

THE TENTH QUESTION

These nine questions are typical of common factual questions and their
answers.  Averaging the evidence for all the questions, the total grade
for our knowledge of them is a lowly D+.  Yet these are the most basic
questions -- nothing fancy like "Does object-oriented design or
Cleanroom create more reliable systems when applied by a Level 5
organization?"

This leads to a tenth question.

10. Does it matter whether we know the answers?

If you accept the reason for knowing these facts, this question answers
itself.  We need the facts because we need to make decisions based upon
them.  We should not be basing decisions that involve risks to human
life and health and that impact economic and societal well-being on
wishes, fears, prejudices, hearsay, or invention.  Our need to know
should drive the inquiry, not substitute for it.

What then is to be done about this? I suggest a three-step solution.

First, everyone must be more skeptical.  If you think the answer is hard
to find, it is.  Do not just grab the first fact that presents itself.
The research community especially must adhere to this standard, for they
have been almost as bad as decision-makers in seizing a convenient
answer to introduce a research paper.  Stop laundering facts!  A
community understandably concerned with the correctness of software
should also be concerned about the correctness of facts about software.

Second, practitioners who must make their own decisions should also look
for their own facts.  Information providers often make this challenge:
"If you don't believe our figures, find your own!" Given these
providers' reliability, or lack thereof, it is a challenge we should
accept.  Enough real facts might come together to form that glittering
goal: an industry norm.

Third and finally, the research community must pay attention to
heavyweight empirical issues as well as to elegant speculations and
intricate experiments.

If we do these things, then the questions I have listed here, and many
others, will be firmly answered instead of frankly begged, computer
science and software engineering will rest on a firmer basis, and vital
decisions will be based on real knowledge.

                    --------------------------------

Nicholas Zvegintzov is president of Software Management Network, a
publishing, consulting, and teaching group that specializes in
techniques and technology to manage installed software systems. His
professional interests are testing, documentation, adaptation,
functional modification, and reengineering.  He founded and edited
"Software Management News" for 12 years.  Mr. Zvegintzov received a BA
and an MA in experimental psychology and philosophy from Oxford
University.

Contact him at Software Management Network, 141 Saint Marks Place, Suite
5F, Staten Island NY 10301 USA, telephone +1-718-816-5522, fax
+1-718-816-9038, email zvegint@ibm.net, http://www.softwaremanagement.com

========================================================================

         CALL FOR PAPERS, BIRDS-OF-A-FEATHER SESSIONS -- QWE'98

Quality Week Europe 1998 (QWE'98) has been set for 9-13 November 1998 in
Brussels, Belgium.  The QWE'98 conference theme is "EURO & Y2K: The
Industrial Impact" and papers that deal directly or indirectly with
either issue are wanted.  Complete details are found in the Call for
Participation:

          <http://www.soft.com/QualWeek/QWE98/qwe98.call.html>

For a printed copy send Email to qw@soft.com and specify that you want
the call for QWE'98.

========================================================================

                   MCDC STANDARD EFFORT -- IEEE P1506

                         John Joseph Chilenski
                      Associate Technical Fellow
           Embedded Software Specialist - Systems Engineering
                    Boeing Commercial Airplane Group
                  Email: John.Chilenski@pss.boeing.com

      Editor's Note: John Chilenski of Boeing has begun an effort
      to standardize the MCDC test coverage criterion.  This
      article is from a recent Email to the informal working group
      he is assembling.  Contact John at the Email above for
      further details.

For those of you who are interested, here is some background behind this
standard effort.  Boeing has been using some form of condition
independence for nearly 18 years.  We started in the late 1970's and
early 1980's on the 757 and 767 airplanes.  Back then we called it
Modified Decision/Condition Coverage (MDCC).  We came up with this
measure because we realized that we needed something between Myers'
Decision/Condition Coverage (DCC) and Multiple-Condition Coverage (MCC).
DCC can be satisfied with 2 tests in expressions with no XOR operator,
and those tests can't tell the difference between ANDs and ORs.  Those
tests can also fail to distinguish between conditions.  MCC can tell the
difference between ANDs and ORs and conditions, but requires 2**N tests
for N independent conditions.  This number of tests can become
impractical when N grows larger.  At the time, the boxes were in
assembly language so N was 2, therefore MCC would have required only 4
tests to be satisfied.  However, it was realized that only 3 tests were
necessary to distinguish between ANDs and ORs and conditions, hence MDCC
was born.
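
As a small illustration of the independence idea (this is our own
sketch, not the formal DO-178B or draft P1506 definitions), the Python
fragment below finds, for each condition of a two-condition decision
such as "A and B", the pairs of tests that differ only in that
condition and flip the outcome; three distinct tests cover both
conditions, versus the four that MCC demands:

    from itertools import combinations, product

    def independence_pairs(decision, n):
        # For each condition i, the pairs of test vectors that differ
        # only in condition i and change the decision's outcome.
        tests = list(product([False, True], repeat=n))
        pairs = {i: [] for i in range(n)}
        for a, b in combinations(tests, 2):
            diff = [i for i in range(n) if a[i] != b[i]]
            if len(diff) == 1 and decision(*a) != decision(*b):
                pairs[diff[0]].append((a, b))
        return pairs

    # For "A and B": condition A is shown independent by (F,T)/(T,T),
    # condition B by (T,F)/(T,T) -- three distinct tests in all.
    for cond, ps in independence_pairs(lambda a, b: a and b, 2).items():
        print("condition", cond, ":", ps)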

Along about the mid 1980's the 747-400 program was started.  On this
program, MDCC was used on multiple boxes, many in different HOL
programming languages as well as assembler.  Because of this experience,
Boeing published MDCC in its internal standard for airborne software in
1990.

It was at this time that RTCA formed SC-167 to revise DO-178A, the
guidelines used by the FAA to support the software aspects of
certification of commercial airplanes (containing systems with
software).  This committee adopted a variant of condition independence
which they named Modified Condition/Decision Coverage (MCDC) which was
published in DO-178B in December 1992.  For those of you (like me) for
whom details are important, the definitions of MDCC and MCDC are
slightly different.

The use of MCDC has now proliferated beyond the bounds of commercial air
transport.  Along with this proliferation has come a proliferation of
interpretations.  This standard is intended to provide a common
definition which everyone can build off of (i.e., allow for a family of
measures).  As such, it purposely defines the weakest form of condition
independence.  This is what we in Boeing call Masking MCDC (see:
<http://www.boeing.com/nosearch/mcdc/>).  The other forms of which we
are aware can all be built off of this version, and will comply with
this standard.

In addition, there are other measures which are related to condition
independence.  I hope that we can make as many of those other measures
as possible compliant with this standard.

========================================================================

                   YOU KNOW YOU'RE IN TROUBLE WHEN...

If lawyers are disbarred and clergymen are defrocked...  doesn't it
follow that...

 ...electricians could be delighted?
 ...musicians be denoted?
 ...cowboys be deranged?
 ...models be deposed?
 ...and dry cleaners become depressed?

Wouldn't you expect laundry workers to decrease, eventually becoming
depressed and depleted?

Likewise, would...
 ...bedmakers be debunked?
 ...baseball players be debased?
 ...bulldozer operators be degraded?
 ...organ donors be delivered?
 ...composers one day decompose?
 ...underwear manufacturers be debriefed, and,
 ...software engineers become detested?

On a final, more positive note, can we perhaps hope that politicians
will someday be devoted?

========================================================================

       International Conference on Reliable Software Technologies
        Ada-Europe'98 Conference, June 8 to 12, Uppsala, Sweden

The full conference will comprise a three-day technical programme and
exhibition from Tuesday to Thursday, and parallel workshops and
tutorials on Monday and Friday.

You will find more information on the conference WebSite:

                      <http://www.ada-europe.org>

Details by Email from Dirk Craeynest (Dirk.Craeynest@cs.kuleuven.ac.be).

========================================================================

                    "Software Quality Professional"
                      Premiering November 1998...

The Software Quality Professional, a peer-reviewed quarterly journal, is
to be published by the American Society for Quality (ASQ).  Focusing on
the practical needs of professionals including engineers and managers,
the Software Quality Professional will provide readers with significant
information that will contribute to their personal development and
success in the field of software quality.

Under the direction of the founding Editor, Taz Daughtrey, former Chair
of the ASQ Software Division, articles from known experts in the field
of software quality provide an intersection between quality engineering
and software engineering. The scope of the journal is defined by the
Body of Knowledge for ASQ's Certified Software Quality Engineer, but the
content will also prove useful to a wide range of technical and
managerial people.

You won't want to miss articles such as:

"The Software Quality Profile" by Watts Humphrey, Software Engineering
Institute

"Software Is Different" by  Boris Beizer, ANALYSIS

"More Reliable, Faster, Cheaper Testing with Software Reliability
Engineering" by John Musa, Software Reliability Engineering and Testing
Courses

"Validating the Benefit of New Software Technology" by Marvin V.
Zelkowitz, University of Maryland and Dolores R. Wallace, National
Institute of Standards and Technology

"Making Untestable Software More Testable" by Jeffrey Voas, Reliable
Software Technologies and Lora Kassab, College of William and Mary

"International Trends in Software Engineering and Quality System
Standards" by John Harauz, Ontario Hydro

"You Can't Measure Client/Server, We're Different-- and Other
Developer's Myths" by Carol Dekkers, Quality Plus Technologies

"Conflict Analysis and Negotiation Aids for Cost-Quality Requirements"
by Barry Boehm and Hoh In, University of Southern California

Contact:

        Software Quality Professional
        American Society for Quality
        PO Box 3005,
        Milwaukee, WI 53201.

Or, call ASQ at 1-800-248-1946 for complete details on how to subscribe.

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines (one
  page).
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, use the convenient Subscribe/Unsubscribe facility at
<http://www.soft.com/News/TTN-Online>.  Or, send E-mail to
"ttn@soft.com" as follows:

   TO SUBSCRIBE: Include in the body the phrase "subscribe {your-E-
   mail-address}".

   TO UNSUBSCRIBE: Include in the body the phrase "unsubscribe {your-E-
   mail-address}".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                   San Francisco, CA  94107 USA

              Phone:          +1 (415) 550-3020
              Toll Free:      +1 (800) 942-SOFT (USA Only)
              FAX:            +1 (415) 550-3030
              E-mail:         ttn@soft.com
              WWW URL:        http://www.soft.com

                               ## End ##