                       sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======           September 1998            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1998 by Software Research,
Inc.


========================================================================

INSIDE THIS ISSUE:

   o  2nd International Quality Week Europe, QWE'98 (9-13 November 1998,
      Brussels, Belgium) -- A Complete Program Tour

   o  Public TestWorks Training Available

   o  Two Digits for a Date (Songs, Humor)

   o  Software Certification Laboratories by J. Voas

   o  Availability of Musa Book: A Note From the Author

   o  TestWorks Corner: Changes and Additions

   o  9th International Symposium on Software Reliability Engineering,
      ISSRE'98 (4-7 November 1998, Paderborn, Germany)

   o  The Official 1998 Edition of Bumper Stickers

   o  Automated Testing is Costly

   o  TTN Submittal Policy

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

        2nd International Software QUALITY WEEK EUROPE (QWE'98)

                           9-13 November 1998
                           Brussels, Belgium

          Conference Theme: EURO & Y2K: The Industrial Impact

                  <http://www.soft.com/QualWeek/QWE98>

                          GENERAL DESCRIPTION

Advances in software quality technology have for years improved quality
and assured faster delivery of software systems.  Applied to new
developments as well as to upgrades, software quality technologies have
demonstrated their effectiveness.

But there never has been anything quite like the twin EURO and Y2K
problems now commanding so much attention in the community.  Many
experts view the "fix" required as simple enough -- EURO conversions and
Y2K remediations are very well understood.  The most imposing challenge
is to ensure that known technical means are well and effectively applied
where and when they need to be.

Every company and government agency has a real concern in providing
solutions that "work" and can be delivered on time.  Neither the EURO
nor the Y2K effort provides for "project slip".  In turn, these needs
impose a big burden on software quality technology and engineering.
Now is the time that the excellent results of prior decades' work have
to be applied -- and the results must be good!

The 2nd INTERNATIONAL SOFTWARE QUALITY WEEK EUROPE brings together over
60 presentations by industry experts and university researchers to
describe and discuss the latest technologies, and to assess how they
can apply to the EURO and Y2K efforts.

                           PROGRAM HIGHLIGHTS

The eleven Pre-Conference Tutorials include full-day and half-day
presentations by world-renowned speakers spread over two days.

Five Keynote Presentations give assessments of current technology, a
view of government and industry efforts for Y2K and EURO, and a unique
perspective on the coming few years.

There is a Special Award Ceremony for Mr. James Clarke, winner of the
QW'98 Best Paper Award.

And, we have put together a Special Panel Session: "The Euro Conversion
-- Myth versus Reality!"  moderated by Mr. Thomas Drake (Coastal
Research & Technology, Inc.).

The General Technical Conference offers four parallel tracks of
presentations:  Technology, Tools & Solutions, Process & Management, and
selected Vendor Technical Presentations.

                               TUTORIALS

For starters, we have the well-known author and lecturer Dr. Boris
Beizer (Analysis, Inc.) "An Overview of Testing -- Unit, Integration,
System", sure to be an excellent introduction to the software quality
field.

From the "before code" phase of the life cycle we have two speakers:
Ms. Suzanne Robertson (The Atlantic Systems Guild) "Making Requirements
Testable"; and, Mrs. Dorothy G. Graham (Grove Consultants) "Software
Inspection".

Metrics -- the underpinning of any engineering process -- are addressed
by two speakers:  Mr. Thomas Drake (Coastal Research & Technology, Inc.)
"Measuring Quality in Object-Oriented Software"; and, Dr. Linda
Rosenberg & Mr. Ted Hammer (GSFC NASA / Unisys) "Metrics for Quality
Assurance and Risk Assessment".

Modern approaches, employing object-oriented and reliability methods, among
others, are treated in:  Mr. Robert V. Binder (RBSC Corporation) "Modal
Testing Strategies for Object-Oriented Systems"; Mr. Martin Pol (GITEK
Software N.V.) "Test Process Improvement"; Dr. Gualtiero Bazzana & Dr.
E. Fagnoni (ONION s.r.l.) "Testing Internet/Intranet Applications"; and,
Dr. John D. Musa (Consultant) "More Reliable, Faster, Cheaper Testing
with Software Reliability Engineering".

Tools are the key to productivity, and you can learn from the experts
with:  Mr. Bart Broekman & Mr. Christiaan Hoos (IQUIP Informatica B.V.)
"Test Automation, Eternal Struggle or Lasting Benefits?"; and, Mr. Ed
Kit (Software Development Technologies) "Automating Software Testing and
Reviews".

Finally, addressing the Conference Theme Issue, there is a sterling
presentation by Dr. Boris Beizer (Analysis, Inc.) "Testing and Y2K".

                               KEYNOTERS

Just how serious the EURO and Y2K problem really is can be learned from
these two keynoters' talks:  Mr. Malcolm Levitt (Barclays Bank) "EMU:
The Impact on Firms' Global Operations"; and, Mr. David Talbot (ESPRIT)
"EC Commission Actions for Y2K and EURO".

How technology can be brought to bear on EURO and Y2K conversion efforts
is the subject of the keynotes by:  Mrs. Dorothy G. Graham (Grove
Consultants) "Inspection: Myths and Misconceptions"; and, Dr. John D.
Musa (Consultant) "Applying Operational Profiles to Testing + ISSRE
Results Summary".

Lastly, in a futuristically oriented presentation, we hear from Dr.
Boris Beizer (Analysis, Inc.) "Nostradamus Redux".

                            TECHNOLOGY TRACK

Front-end design and development of tests are the subject of four
technology papers:  Mr. James Clarke (Lucent Technologies) "Automated
Test Generation From a Behavioral Model"; Dr. Matthias Grochtmann & Mr.
Joachim Wegener (Daimler-Benz AG) "Evolutionary Testing of Temporal
Correctness"; Mr. Stacy J. Prowell (Q-Labs, Inc.) "Impact of Sequence-
Based Specification on Statistical Software Testing"; and, Dr. Linda
Rosenberg, Mr. Ted Hammer & Ms. L. Hoffman (GSFC NASA / Unisys) "Testing
Metrics for Requirement Quality".

The application of modern object-oriented and data-flow-based ideas is
seen in:  Ms. Brigid Haworth (Bournemouth University) "Adequacy Criteria
for Object Testing"; Mr. Bill Bently & Mr. Robert V. Binder (RBSC
Corporation) "The
Dynamic Information Flow Testing of Objects: When Path Testing Meets
Object-Oriented Testing"; Ms. Martina Marre, Ms. Monica Bobrowski & Mr.
Daniel Yankelevich (Universidad de Buenos Aires) "A Software Engineering
View of Data Quality"; and, Mr. Rene Weichselbaum (Frequentis
Nachrichtentechnik GesmbH) "Software Test Automation".

Long term issues of reliability are addressed by:  Dr. Denise Woit &
Prof. David Mason (Ryerson Polytechnic University) "Component
Independence for Software System Reliability".

And, closing the feedback loop -- by analyzing reported defects and/or
by trying to predict the number that will be detected -- is the subject
of:  Mr. Jon Huber (Hewlett Packard) "Software Defect Analysis: Real
World Testing Implications & A Simple Model for Test Process Defect
Analysis"; and, Prof. Antonia Bertolino & Ms. E. Marchetti (CNR-IEI) "A
Simple Model to Predict How Many More Failures Will Appear in Testing".

                            SOLUTIONS TRACK

EURO and Y2K issues may become most -- and possibly first -- evident
through the WWW, and this paper provides a basis for thinking about
validating Web-based applications:  Mr. Manuel Gonzalez (Hewlett
Packard) "System Test Server Through the Web".

There's no question about it: an ounce of prevention is worth a pound of
cure!  (To be more precise, a milligram of prevention is worth a
kilogram of cure!)  And early starts make a difference, as seen in
these papers:  Mr. Otto Vinter (Consultant) "Improved Requirements
Engineering Based On Defect Analysis"; Mr. Robert J. Poston (AONIX)
"Making Test Cases from Use Cases Automatically"; Mr. Avi Ziv & Dr.
Shmuel Ur (IBM Research Lab in Haifa) "Off-The-Shelf vs. Custom Made
Coverage Models, Which Is The One For You?"; and, Mr. Howard Chorney
(Process Software Corp.) "A Practical Approach to Using Software
Metrics".

What about results from the field?  Take a close look at these papers:
Mr. Felix Silva (Hewlett Packard) "Product Quality Profiling: A
Practical Model to Capture the Experiences of Software Users"; Dr. Peter
Liggesmeyer, Mr. Michael Rettelbach & Mr. Michael Greiner (Siemens AG)
"Prediction of Project Quality by applying Stochastical Techniques to
Metrics based on Accounting Data: An Industrial Case Study"; Mr. Lionel
Briand, Mr. Bernd G. Freimut, Mr. Oliver Laitenberger, Dr. Gunther Ruhe
& Ms. Brigitte Klein (Fraunhofer IESE) "Quality Assurance Technologies
for the EURO Conversion -- Industrial Experience at Allianz Life
Assurance"; and, Mr. Jakob-Lyng Petersen (ScanRail Consult) "An
Experience In Automatic Verification for Railway Interlocking Systems".

And, for a refreshingly new approach to tackling the problem of
managing risk rationally, be sure not to miss:  Mr. Tom Gilb (Result
Planning Limited) "Risk Management Technology: A rich practical toolkit
for identifying, documenting, analyzing and coping with project risks".

Finally, from an organization that knows from experience, we hear:  Mr.
John Corden (CYRANO) "Year 2000 -- Hidden Dangers".

                            MANAGEMENT TRACK

Well-managed projects surely turn out better, and taking an enlightened
approach to the management side of the issue has never been more
important than with the EURO and Y2K question.  You
shouldn't miss:  Mr. Staale Amland (Avenir (UK) Ltd.) "Risk Based
Testing"; Mr. Joseph Tullington (Intermetrics) "Testing Without
Requirements"; and, Mr. Leslie A. Little (Aztek Engineering)
"Requirements Management -- Simple Tools...Simple Processes".

Even more specific to the EURO and Y2K question are:  Mr. Juan Jaliff,
Mr. Wolfgang Eixelsberger, Mr. Arne Iversen & Mr. Roland Revesjf (ABB)
"Making Industrial Plants Y2K-ready: Concept and Experience at ABB"; Mr.
Graham Titterington (Ovum, Ltd.) "A Comparison of the IT Implications of
the Y2K and the EURO Issues"; and, Mr. L. Daniel Crowley (DENAB
Systems) "Cost of Quality -- The Bottom Line of Quality".

New and novel methods have their place too, as seen in this pair of
papers:  Dr. Erik P. VanVeenendaal (Improve Quality Services)
"Questionnaire Based Usability Testing"; and, Mr. Gorka Benguria, Ms.
Luisa Escalante, Ms. Elisa Gallo, Ms. Elixabete Ostolaza & Mr. Mikel
Vergasa (European Software Institute) "Staged Model for SPICE: How to
Reduce Time to Market -- TTM".

Enlightened technology transfer is a key to many successful infusions of
new methodology, and these papers discuss three important areas:  Mr.
Mark Buenen (GITek Software n.v.) "Introducing Structured Testing in a
Dynamic, Low-Mature Organisation"; Ms. Elisa Gallo, Mr. Pablo Ferrer,
Mr. Mikel Vergasa & Chema Saris (European Software Institute) "SW CMM
Level2: The Hidden Structure"; and, Mr. Antonio Cicu, Mr. Domenico
Tappero Merlo, Mr. Francesco Bonelli, Mr. Fabrizio Conicella & Mr. Fabio
Valle (QualityLab Consortium/MetriQs) "Managing Customer's Requirements
in an SME: A Process Improvement Initiative Using an IT-Based
Methodology and Tool".

========================================================================

                  Public TestWorks Training Available

TestWorks public training courses are held at SR's headquarters and at
other locations in the USA and Europe.  Normally our publicly available
TestWorks courses are offered approximately every other month.  All
courses include high-level technology orientation,
completely worked examples, "hands on time" with the test suites, and
active demonstration of system capabilities.

There are nominal registration fees and course materials costs on a
per-seat, per-day basis.  Class space is strictly limited; early
reservation is recommended.  Complete information is available from
training@soft.com.

                        Training Course Calendar

Currently available training course weeks for training at SR's
Headquarters in San Francisco are:

    TW#98-43  Week of 19-23 October 1998

    TW#98-50  Week of 7-11 December 1998

    TW#99-07  Week of 15-19 February 1999

                 Training Course Design and Objectives

The complete 1-week curriculum on TestWorks for Windows and UNIX is
designed so that individual days can be taken in any combination.  All
1-day courses are completely self-contained and provide complete
coverage of their respective areas:

   Monday       Windows Coverage Expertise in test coverage analysis
                techniques for Windows with TCAT/C-C++ and TCAT for
                Java.

   Tuesday      Windows Regression Expertise in Windows testsuite
                development with CAPBAK and SMARTS.

   Wednesday    WebSite Testing Expertise in testsuite development with
                CAPBAK and SMARTS.

   Thursday     UNIX Regression Expertise in UNIX testsuite development
                with CAPBAK, SMARTS, EXDIFF and Xvirtual.

   Friday       UNIX Coverage Expertise in test coverage analysis
                techniques for UNIX with TCAT/C-C++ and TCAT for Java.

              2-Day and 3-Day Course Combination Training

Most organizations achieve the best results by combining two or three
days of training in the major TestWorks curriculum topics.  All of the
1-day, 2-day and 3-day sequences from the above schedule work extremely
well to provide in-depth training on different parts of the TestWorks
product suite when applied to different types of applications.  For
example, the Tuesday-Wednesday sequence provides strong coverage of
WebSite Testing.

========================================================================

                         Two Digits for a Date

        (Sung to the tune of "Gilligan's Island," more or less)
Author Unknown

        Just sit right back and you'll hear a tale
        Of the doom that is our fate.
        That started when programmers used
        Two digits for a date.
        Two digits for a date.

        Main memory was smaller then;
        Hard disks were smaller, too.
        "Four digits are extravagant,
        So let's get by with two.
        So let's get by with two."

        "This works through 1999,"
        The programmers did say.
        "Unless we rewrite before that
        It all will go away.
        It all will go away."

        But Management had not a clue:
        "It works fine now, you bet!
        A rewrite is a straight expense;
        We won't do it just yet.
        We won't do it just yet."

        Now when 2000 rolls around
        It all goes straight to @#%&,
        For zero's less than ninety-nine,
        As anyone can tell.
        As anyone can tell.

        The mail won't bring your pension check
        It won't be sent to you
        When you're no longer sixty-eight,
        But minus thirty-two.
        But minus thirty-two.

        The problems we're about to face
        Are frightening, for sure.
        And reading every line of code's
        The only certain cure.
        The only certain cure.

        (key change, big finish)

        There's not much time,
        There's too much code.
        And Cobol-coders, few
        When the century is finished with,
        We'll be finished, too.
        We'll be finished, too.

        Eight thousand years from now I hope
        That things aren't left too late,
        And people aren't lamenting then
        Four digits for a date.
        Four digits for a date.
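
For readers who want to see the arithmetic the song laments, here is a
minimal C sketch of the two-digit date bug.  The pensioner born in 1932
is an invented figure chosen to match the verse: with two-digit years,
the age computed in 2000 comes out as minus thirty-two.

    /* y2k_demo.c -- a minimal sketch of the two-digit date bug the
       song describes; a hypothetical example, not code from any real
       payroll system. */
    #include <stdio.h>

    int main(void)
    {
        int birth_yy = 32;           /* born 1932, stored as "32"     */
        int years[] = { 98, 99, 0 }; /* 1998, 1999, then 2000 as "00" */
        int i;

        for (i = 0; i < 3; i++) {
            int age = years[i] - birth_yy; /* wrong once century wraps */
            printf("year %02d: computed age = %d\n", years[i], age);
        }
        return 0;                    /* prints 66, 67, then -32       */
    }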

========================================================================

                  Software Certification Laboratories?

                             Jeffrey Voas
                     Reliable Software Technologies
                       Email: jmvoas@rstcorp.com

   ABSTRACT:  Software Certification Laboratories (SCLs) will
   potentially change the manner in which software is graded and
   sold.  The main issue, however, is who is to blame when a
   certified piece of software acts, during operation, in a manner
   that the SCL certified was not possible.  Given software's inherently
   unpredictable behaviors, can SCLs ever provide precise enough
   predictions about software quality to reduce their liability from
   misclassification to a reasonable level?

If you visit a doctor's office, you will often hear terms such as
"independent laboratory", "second opinion", "additional tests", or
"colleague consultation."  What these amount to is a doctor getting
another party or process involved in a diagnosis or treatment decision.
Doctors use outside authorities, in part, to reduce the risk of
malpractice.  The more consensus that gets built with respect to a
particular course of action, the more due diligence has been shown, and
the more parties are culpable if something goes wrong.  For
instance, if a medical lab falsely returns a diagnosis that a tissue
sample is cancerous and the doctor begins treatments that were not
necessary, the doctor can ascribe some or all of the liability for this
mistake onto the laboratory.  The added costs from spreading liability
around in this manner are one reason for the cost increases in health
care.  Each extra opinion and extra test increases patient costs, because
each care provider is a malpractice target.

In the software world, a similar phenomenon is being observed.  Demands
for independent agencies to certify that programs meet certain criteria
are becoming more frequent.  These demands are coming from software
producers and consumers.  Vendors prefer not to be responsible for
guaranteeing their own software, and software consumers want unbiased
assessments that are not based on sales pitch hype.  Incredible as it
may seem, vendors, who typically "cut all corners" in costs, are willing
to pay the costs associated with placing this responsibility on someone
else.

The beauty of having Software Certification Laboratories (SCLs) is that
they provide a "quasi"-fair "playing field" for all software vendors---
each product is supposed to be given equal treatment.  The issue is:
when software fails in the field after an independent party has provided
an assessment suggesting that the software was good, does the
independent party bear any responsibility for the failure?

Because of the demands for SCL services, business opportunities exist
for organizations that wish to act in this capacity.  By paying SCLs to
grant software certificates, Independent Software Vendors (ISVs)
partially shift responsibility onto the SCL (like when a doctor orders a
second opinion or another test) for whether or not the software is
"good."  The question is whether this method of liability transfer will
be as successful in software as it has been in health care.  As we will
discuss, if SCLs set themselves up right, they can build more protection
around themselves than you might think, leaving the ISV holding a "hot
potato."

There are several relatively obscure SCLs in existence today (e.g.,
KeyLabs, which handles applications for 100% Pure Java certification).
Other than these small, specialized labs, the next closest organization
to what you would think of as an SCL (conceptually speaking) is
Underwriters Laboratories (UL).  UL certifies electrical product designs
to ensure that
safety concerns are mitigated.  Rumors are that UL is interested in
performing SCL services, but UL has not yet become an SCL to our
knowledge.

Commercial software vendors are not the only organizations that see the
benefit of SCLs.  NASA felt the need for standardized, independent
software certification, both for the software it writes and for the
software it purchases.  NASA now has its own SCL---the Independent
Verification & Validation facility in Fairmont, WV.  Intermetrics is the
prime contractor at the facility, and its job is to oversee the
certification process and provide the necessary independence.  This SCL
provides NASA with a common software assessment process over all
software projects (as opposed to each NASA center performing assessment
in different ways).  The NASA facility certifies software developed both
by NASA personnel and by NASA's contractors.

Our interest in SCLs is in figuring out who is liable when certified
software fails: the ISV, the SCL, both, or neither?  More specifically,
we are interested in how liability is divided between these groups.  We
will first address the question of "how much liability, if any, can be
placed onto the SCL?"  By figuring out the liability incurred by an SCL
for its professional opinions, we can determine how much liability is
offloaded from the ISV.

SCLs stand as experts, rendering unbiased professional opinions.  This
opens up the SCL to possible malpractice suits.  Schemes for reducing an
SCL's liability include insurance, disclaimers on validity of the test
results, and SCLs employing accurate certification technologies based on
objective criteria.  Of these, the best approach is to only certify
objective criteria, and avoid trying to certify subjective criteria.

Different software criteria can be tested for by SCLs, spanning the
spectrum from guaranteeing correctness to counting lines of code.
Subjective criteria are imprecise and prone to error.  Objective
criteria are precise and less prone to error.  For example, deciding
whether software is correct is subjective because of the dependence on
what "correctness" really means for a piece of software.  SCLs should
avoid rendering professional opinions for criteria that are as
contentious as this.  But SCLs should be able to assess characteristics
such as whether a program has exception handling calls in it and how
many lines of code a program has.  Testing for these criteria is not
"rocket-science."  Troubles will begin, however, when an SCL tries to
get into the tricky business of estimating a criterion such as software
reliability.  Further, by only certifying objective criteria, the
chances of inadvertent favoritism being shown to one product over
another are reduced.
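
By way of illustration, here is a minimal C sketch of such an objective
check: it counts the lines in a source file and flags lines containing
C++ exception-handling keywords.  The file handling and crude keyword
matching are our own assumptions for illustration, not any actual SCL's
procedure.

    /* objective_check.c -- a sketch of an objective certification
       criterion: lines of code and presence of exception handling.
       Both counts are repeatable, mechanical measurements. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        FILE *fp;
        char line[1024];
        long lines = 0, handlers = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s source-file\n", argv[0]);
            return 1;
        }
        if ((fp = fopen(argv[1], "r")) == NULL) {
            perror(argv[1]);
            return 1;
        }
        while (fgets(line, sizeof line, fp) != NULL) {
            lines++;
            if (strstr(line, "catch") || strstr(line, "throw"))
                handlers++;     /* crude token match, no real parsing */
        }
        fclose(fp);
        printf("%ld lines, %ld exception-handling lines\n",
               lines, handlers);
        return 0;
    }

Note that such a check reports the presence of a construct, not its
adequacy; that distinction is exactly what keeps the criterion
objective.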

The National Computer Security Association (NCSA) is a for-profit SCL
that has taken an interesting approach to this liability issue.  They
use industry consensus building.  NCSA only certifies that specific
known problems are not present in an applicant's system.  This is an
objective criterion.  Their firewall certification program is based
the opinions of industry representatives who meet periodically to decide
what known problems should be checked for.  Over time, additional
criteria are introduced into the certification process.  This adaptive
certification process serves two purposes: it adds rigor to the firewall
certification process, and it produces a steady stream of business for
the NCSA.  To further reduce liability, NCSA adds the standard
disclaimer package that their firewall certificate does not guarantee
firewall security.

ISVs have a different liability concern, particularly when their
software fails in the field.  For example, even if an SCL tells an ISV
that their software is "certified to not cause Problem X", when the
software fails causing Problem X and the ISV faces legal problems, can
the ISV use their SCL certificate as evidence of due diligence?
Further, can the ISV assign blame to the SCL?  The answer to the first
question is "probably", and the answer to the second question depends on
what "certified to not cause Problem X" meant. If this certification was
based on objective criteria and the process was performed properly, the
ISV probably cannot blame the SCL.  If the process was improperly
applied, then the SCL will probably be culpable.  If subjective criteria
were applied, the answer is unclear.

If the SCL used consensus building as their means for developing their
certification process, then the question that may someday be tested in
the courts is whether or not abiding by an industry consensus on what
are reasonable criteria protects the SCLs from punitive damages.
Generally speaking, as long as a professional adheres to defined
standards, then punitive damages are not administered. Professions such
as medical, engineering, aviation, and accounting have defined standards
for professional conduct.

Software engineering has never had such standards, although several
unsuccessful attempts to establish them have been made.  State-of-the-practice
rules that differentiate code meeting professional standards from code
not meeting professional standards are non-existent.  Consider the fact
that the phrase "software engineer" is illegal in 48 of 50 states
because the term "engineer" is reserved for persons who have passed
state-sanctioned certification examinations to become professional
engineers [1].  Because we do not have professional standards, it could
also be argued that what organizations such as the NCSA have done is
laudable.

Since software engineering has no professional organization to accredit
its developers, the approach taken by the NCSA could also be argued in a
court of law as state-of-the-practice.  If argued successfully, software
developers whose software passed the certification process could expect
to avoid punitive damages.  But if these state-of-the-practice standards
are deliberately weak, even though consensual, satisfaction of the
standards may fail to satisfy a jury.

The reason for this is that the public widely holds that industry
policing itself is a failed policy.  When those being forced to
comply are those making the rules, are the rules trustworthy?
Challenges in the courts claiming a conflict of interest could be
foreseen; such challenges would invalidate claims that consensus-based
standards sufficiently protect customers.  One example of where
industry-guided
standards have worked quite well, however, is the commercial aviation
industry.  Here, rigorous software guidelines in the DO-178B standard
were approved through an industry/government consensus. Those guidelines
for software safety are still the most stringent software certification
standards in the world.  No doubt the FAA's influence during the
formation of these standards played a role here.  And it cannot be
ignored that an industry such as air travel, if it were to fail to
police itself, would lose so much favor with its customer base that the
entire industry could fail.  So there are self-correcting mechanisms
that do work to some degree in self-policing industries.

Possibly the best defense for any ISV is the use of disclaimers, not
reliance on an SCL.  There is a perverse advantage to disclaiming one's
own product.  The less competent an ISV portrays themselves to be, the
lower the standard of professionalism to which they will be held.
Taking this principle to an extreme, we might suggest that a disclaimer
be included in a comment at the top of each program, stating: this
software was developed by incompetent persons trying to learn how to
program and it probably does not work.  The degree to which this
tongue-in-cheek disclaimer actually reflects reality is a sad commentary
on the state of our industry.  But until more cases are tested in the
courts, who really knows how much protection software disclaimers really
afford?

There is one more interesting development that will occur in the near
future, and that is Article 2B of the Uniform Commercial Code (which
pertains to computers and computer services).  Article 2B was slated for
release in the Fall of 1997, and it will play an important role in
defining software warranties. Note that Article 2B will only serve as a
model template, and each state in the US will be responsible for
modifying it to their tastes before adopting it as law.  Further,
Article 2B has the potential to relax the liability concerns that might
force an ISV to use a certification lab.  This could turn out to be a
disaster for those parties most concerned with software quality.

In summary, we are going to have to wait for more cases to be tested in
the courts to see what standard of professionalism ISVs are held to
before we will know what role SCLs play in software liability.  We can
say that if the criteria that SCLs test for are not meaningful, SCLs
will find that neither developers nor consumers of software care about
the certification process.  For an SCL to succeed, it is also imperative
that the SCL employ accurate assessment technologies for objective
criteria.  If SCLs do this, malpractice suits against them will be very
difficult to win, unless the SCL simply fouls up on a particular case or
makes false statements.

This piece is titled "Software Certification Laboratories?", because
until these hard issues are resolved, the degree of liability protection
afforded an ISV by hiring the services of an SCL is hard to measure.
Nonetheless, if SCLs can measure valuable criteria (and by that I do not
mean "lines of code") in a quick and inexpensive manner, SCLs have the
ability to foster greater software commerce between vendors and
consumers.  And this could move SCL certificates from being viewed as
taxes to trophies.

REFERENCES

[1] C. Jones, "Legal Status of Software Engineering," IEEE Computer,
May 1995.

========================================================================

           Availability of Musa Book: A Note From The Author

A number of you asked me to let you know when my new book "Software
Reliability Engineering: More Reliable Software, Faster Development and
Testing" was published, and to pass along ordering information.  It has
been; McGraw-Hill has done a superb job.  The title has changed from the
initial one, but the book is exactly the same.  The book is available
direct from the publisher (800 722 4726), from amazon.com, or through
bookstores.  The ISBN is 0-07-913271-5 and the list price is $65.  There
is a detailed description of the book on my website.

                         Contact: John D. Musa
          Software Reliability Engineering and Testing Courses
                        E-mail: j.musa@ieee.org

========================================================================

                TestWorks Corner: Changes and Additions

As regular TTN-Online readers know, TestWorks is SR's family of software
testing and validation tools aimed at Windows and UNIX workstations.
Complete information about TestWorks can be found at:
<http://www.soft.com/Products>.

            New CAPBAK and TCAT/C-C++ for Windows Versions

   Aimed at support for more-complex Windows 95/NT applications,
   CAPBAK/MSW Ver. 3.2 adds enhanced ObjectMode operation, simplified
   licensing, and many other features.

   You can complete your coverage analyses from within the MS Visual
   C++ environment.  Full access to TCAT's features is just a mouse-
   click away in the newest TCAT/C-C++ Ver. 2.1 for Windows 95/NT.

   You can download the new product builds from our WebSite at:

       <http://www.soft.com/Products/Downloads>

   If you already have a key it should work with the latest builds.
   But if you are a new evaluator and/or if your key has expired,
   simply go to:

       <http://www.soft.com/Products/Downloads/send.license.html>

   Answer a few questions and a license key will be E-mailed to you in
   less than a day.

                       UNIX Products Downloadable

   If you are a TestWorks for UNIX user, and you are on maintenance,
   you can download the latest builds of TestWorks for SPARC/Solaris,
   x86/Solaris and DEC-Alpha/OSF from our website.  The remainder of
   the UNIX platforms' downloadables should be available before the
   end of the month.

Complete information on any TestWorks product or product bundle is
available from sales@soft.com.

========================================================================

 The Ninth International Symposium on Software Reliability Engineering
                                ISSRE'98
                 Paderborn, Germany, November 4-7, 1998

                             Sponsored by:

                         IEEE Computer Society,
     Computer Society Technical Council on Software Engineering,
                       IEEE Reliability Society.

The preliminary program and conference registration information are
available on the ISSRE'98 web site:

       <http://adt.uni-paderborn.de/issre98/>

This year's conference contains 39 regular papers, 5 keynote speakers, 5
panels, and 9 tutorials. In addition, approximately 25 industrial
experience reports will be presented and a tool fair will be
organized.

The ISSRE'98 organizing committee looks forward to meeting you in
Paderborn!

Contact: Lionel Briand, Publicity chair ISSRE'98, briand@iese.fhg.de

========================================================================

              The Official 1998 Edition of Bumper Stickers

             Rap is to music what Etch-a-Sketch is to art.
                        Dyslexics have more fnu.
                         Clones are people two.
                   Entropy isn't what it used to be.
             Jesus saves, passes to Moses; shoots, SCORES!
                     Microbiology Lab: Staph Only!
         Santa's elves are just a bunch of subordinate Clauses.
                          Eschew obfuscation.
                    Ground Beef: A Cow With No Legs.
         186,000 miles/sec: Not just a good idea, it's the LAW.
                  A waist is a terrible thing to mind.
                  Air Pollution is a "mist-demeaner."
              Anything free is worth what you pay for it.
                 Atheism is a non-prophet organization.
       Chemistry professors never die, they just smell that way.
                   COLE'S LAW: Thinly sliced cabbage.
                   Does the name Pavlov ring a bell?
                    Editing is a rewording activity.
                  Everyone is entitled to my opinion.
          Help stamp out and eradicate superfluous redundancy.
               I used to be indecisive; now I'm not sure.
                     My reality check just bounced.
             What if there were no hypothetical questions?
            Energizer Bunny arrested, charged with battery!
          No sense being pessimistic. It wouldn't work anyway.

========================================================================

                      Automated Testing Is Costly

Editor's Note: This fragment of an exchange appeared on a public news
group, but it is good enough advice to be passed on.  -efm

From: Stefan Steurs 
Newsgroups: comp.software.testing
Subject: Re: Implementing Automated Test Tools?
Date: Mon, 24 Aug 1998 16:17:07 +0200

"Steven M"  wrote:
> Hello,  I am currently looking at implementing some automated test
> tools in my company.   The company currently works with a variety
> of OS's including Win95, Win3.11, NT 4 and Unix,  it is important
> to note that we do both client/server and web based application and
> our development is using VB(3,4,5), Powerbuilder, Java, C/C++,
> VC++, Access & Oracle.  What I would like is a variety of opinions
> from people who have recently implemented some of these tools
> (e.g. SQA suite, QA Partner ext....) for their testing departments.
> If it is available I would also like to get some accurate
> statistics on the cost benefits of implementing these tools.

Where have I heard that before?  The cost benefits of testing.  Now on a
particular area, which is "tools".  I don't think it makes sense to talk
about cost benefits.  Look at it differently.  What is the risk that you
run when you don't perform the test?  What is the cost of that risk, and
what is the likelihood?  Now, from that point of view you can start to
determine not only what you should test but also what you should
automate.  Some tests cannot be automated and some tests cannot be done
manually, but the first question should still be "Is this test really
necessary?"  Many people start from the point of view "When I automate
my test, I can do this and that and such."  They are more interested in
the automation than in the results.

"Test Automation" in my experience is costly.  The initial investment
can only be recovered if you can/will rerun the tests (most likely
candidates for automation are regression tests).  You definitely need to
rerun the tests 3 to perhaps 10 or 20 times before you will have saved
as much time as you have invested in creating/writing/maintaining the
automated tests.
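
As a rough sketch of that break-even arithmetic, consider the small C
program below.  All of the hour figures in it are invented for
illustration; substitute your own project's numbers.

    /* breakeven.c -- a sketch of the rerun count at which automation
       pays for itself.  Automation wins once
       invest + n * rerun <= n * manual. */
    #include <stdio.h>

    int main(void)
    {
        double invest = 40.0; /* hours to script and debug the suite */
        double manual =  6.0; /* hours per fully manual test cycle   */
        double rerun  =  1.5; /* hours per automated rerun, incl.    */
                              /* maintenance; must be below 'manual' */

        double breakeven = invest / (manual - rerun);
        printf("break-even after %.1f reruns\n", breakeven);
        return 0;             /* with these numbers: ~8.9 reruns     */
    }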

Test automation is also costly because automated test tools require
training.  Most tools that I know of have advanced testing capabilities,
and the more advanced they are, the more programming-like they become.
So, instead of people who are programming illiterate (ok, bad choice of
words here), you need testers who can program as well.  And that is not
enough.  Test automation requires configuration management.  Test
automation also requires a lot of test design.

My advice is to be careful about test automation.  There are areas where
it makes perfect sense and there are areas where it becomes a real
throw-away-money (or throw-it-to-the-vendor) affair.  You can automate
everything, even tests; you have to ask yourself if that is a reasonable
thing to do.

Automating tests is a software project just like any other computer
implementation.

There is a danger that you will double the cost of your software project
if you want to test everything.  For every line that is coded you will
have to write test code.  Sometimes this is justified but certainly not
always.

And one final remark.  Test tools don't replace the need for brains.
You will still need to think about test design, and finally, when all
the tests are run, you will still need your brains to investigate the
anomalies and to do the debugging.  Remember that test execution is less
than half of all the effort spent on testing.  Also remember that
executing the tests is the fun part.

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines (one
  page).
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, use the convenient Subscribe/Unsubscribe facility at
<http://www.soft.com/News/TTN-Online>.  Or, send E-mail to
"ttn@soft.com" as follows:

   TO SUBSCRIBE: Include in the body the phrase "subscribe {your-E-
   mail-address}".

   TO UNSUBSCRIBE: Include in the body the phrase "unsubscribe {your-E-
   mail-address}".

               TESTING TECHNIQUES NEWSLETTER
               Software Research, Inc.
               901 Minnesota Street
               San Francisco, CA  94107      USA

               Phone:          +1 (415) 550-3020
               Toll Free:      +1 (800) 942-SOFT (USA Only)
               FAX:            +1 (415) 550-3030
               E-mail:         ttn@soft.com
               WWW:            <http://www.soft.com/News/TTN-Online>

                               ## End ##