                       sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======            October 1996             =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is Emailed monthly
to support the Software Research, Inc. (SR) user community and provide
information of general use to the worldwide software testing community.

(c) Copyright 1996 by Software Research, Inc.  Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition
provided that the entire document/file is kept intact and this copyright
notice appears with it.

========================================================================

INSIDE THIS ISSUE:

   o  CALL FOR PARTICIPATION -- 10th International Software Quality Week
      (QW'97)

   o  Inspection Failure Causes, by Tom Gilb

   o  14th Annual Pacific Northwest Software Quality Conference:
      Conference Announcement

   o  Useful Features of a Test Automation System (Part 3 of 3), by
      James Bach

   o  Software Testing: Poor Consideration, by Franco Martinig

   o  Seminar on Software Testing

   o  Technical Article: Evaluation of Expert System Testing Methods, S.
      H. Kirani, et al., C.ACM, November 1994

   o  Standard for Software Component Testing, by Brian A. Wichmann

   o  CALL FOR PAPERS: ESEC/FSE'97

   o  TTN SUBSCRIPTION INFORMATION


========================================================================

         TENTH INTERNATIONAL SOFTWARE QUALITY WEEK 1997 (QW'97)

              Conference Theme: Quality in the Marketplace

            San Francisco, California USA -- 27-30 May 1997

QW'97 is the tenth in a continuing series of International Software
Quality Week Conferences focusing on advances in software test
technology, quality control, risk management, software safety, and test
automation.  Software analysis methodologies, supported by advanced
automated software test methods, promise major advances in system
quality and reliability, assuring continued competitiveness.

The mission of the QW'97 Conference is to increase awareness of the
importance of software quality and methods used to achieve it.  It seeks
to promote software quality by providing technological education and
opportunities for information exchange within the software development
and testing community.

The QW'97 program consists of four days of mini-tutorials, panels,
technical papers and workshops that focus on software test automation
and new technology.  QW'97 provides the Software Testing and QA/QC
community with the following:

   o  Analysis of method and process effectiveness through case studies
   o  Two-Day Vendor Show
   o  Quick-Start, Mini-Tutorial Sessions
   o  Vendor Technical Presentations
   o  Quality Assurance and Test involvement in the development process
   o  Exchange of critical information among technologists
   o  State-of-the-art information on software test methods

QW'97 is soliciting 45- and 90-minute presentations, half-day standard
seminar/tutorial proposals, 90-minute mini-tutorial proposals, and
proposals for participation in panel and "hot topic" discussions on any
area of testing and automation, including:

      Cost / Schedule Estimation
      ISO-9000 Application and Methods
      Test Automation
      CASE/CAST Technology
      Test Data Generation
      Test Documentation Standards
      Data Flow Testing
      Load Generation and Analysis
      SEI CMM Process Assessment
      Risk Management
      Test Management Automation
      Test Planning Methods
      Test Policies and Standards
      Real-Time Software
      Real-World Experience
      Software Metrics in Test Planning
      Automated Inspection
      Reliability Studies
      Productivity and Quality Issues
      GUI Test Technology
      Function Point Testing
      New and Novel Test Methods
      Testing Multi-Threaded Code
      Integrated Environments
      Software Re-Use
      Process Assessment/Improvement
      Object Oriented Testing
      Defect Tracking / Monitoring
      Client-Server Computing

IMPORTANT DATES:

      Abstracts and Proposals Due:            15 December 1996
      Notification of Participation:          1 March 1997
      Camera Ready Materials Due:             15 April 1997

FINAL PAPER LENGTH:

      Papers should be limited to 10-20 pages, including text, slides,
      and/or view graphs.

SUBMISSION INFORMATION:

      Abstracts should be 2-4 pages long, with enough detail to give reviewers
      an understanding of the final paper, including a rough outline of its
      contents. Indicate if the most likely audience is technical, managerial
      or application-oriented.

      In addition, please include:
         o  A cover page with the paper title, complete mailing and Email
            address(es), and telephone and FAX number(s) of each author.
         o  A list of keywords describing the paper.
         o  A brief biographical sketch of each author.

      Send abstracts and proposals including complete contact information to:

      Ms. Rita Bral
      Quality Week '97 Director
      Software Research Institute
      901 Minnesota Street
      San Francisco, CA  94107 USA

      For complete information on the QW'97 Conference, send Email to
      qw@soft.com, phone SR Institute at +1 (415) 550-3020, or, send a
      FAX to SR/Institute at +1 (415) 550-3030.

========================================================================

                       Inspection Failure Causes.

                             By Tom Gilb,
       Senior Partner, Result Planning Ltd.  Email: Gilb@ACM.org

                    Co-Author "Software Inspection"
               Teacher of Inspection Team Leader Courses.

Summary: This is a short four-page summary of problems in implementing
software inspections. It is based on the findings of an audit we
conducted at a multinational high-tech electronics client in April
1995.  We had trained over 200 Team Leaders.  The audit findings
resulted in much better management attention and improvement by October
1996.  More detail relating to the terms and comments used in the paper
can be found in our book "Software Inspection" (Addison-Wesley 1993)
and in two sets of our Inspection slides located at:

        http://www.stsc.hill.af.mil/www/stc/gilb.html/.

The complete set of slides for our one-week Inspection Team Leader
Training Course is there, along with a set of lecture slides on Advanced
Inspection Insights.

           o       o       o       o       o       o       o

Failure to Collect Controlling Data

Almost no groups collect data about defects found in testing, or data
from the field, that would allow us to understand the downstream costs
of quality.  This lack of downstream defect data prevents management
from understanding how much Inspection can and does save them now, and
so from being motivated to fully and professionally support the
Inspection process.

Failure to Exploit Inspection Data to Control Successful Implementation
of Inspection

Extensive inspection cost and benefit data is being collected and put
into computer databases, and some reports and diagrams are being made.
However, the data is not really being used to manage the process
systematically.  No staff are devoted to the task, nor trained or
guided in doing it.  As a result, situations such as ridiculously high
checking rates are allowed to persist unchallenged.

Lack of Resources for Support Functions

Management has supported proper Team Leader training for a large
fraction of the professional staff, out of faith that Inspection will
help them improve quality and save time.  But they have neither
budgeted nor planned for any form of support staff to make sure these
trained Team Leaders were effectively deploying, and were allowed to
deploy, what they had learned.  This is like assuming aviation will be
safer if we just train pilots better.

Lack of Focused Expertise to Exploit Inspections

No one was assigned the task of making Inspections work measurably up
to their known potential.  Some quality "bureaucrats" wrote some
handbooks, made some nice color Inspection posters, and moved on to new
assignments.  Perhaps only one person on site had the expertise to help
people fully exploit Inspections, but people in other divisions had
almost no contact with that person, and none of them had real expertise
themselves.  The result was that a large number of poor practices
flourished, and people were either not aware of them or felt powerless
to do anything about improving them.

Lack of A Clear Powerful Management Policy

Top management has failed to decide on, to publish, and to support even
a simple one-page policy for the most fundamental use of Inspection,
covering, for example, objectives, minimal practices, and scope.  The
result is that there is no clear, well-thought-out vision driving the
next level of management to accomplish specific goals (such as
achieving document exit (release) at fewer than 0.3 Majors per page).

Lack of Project Management Motivation to Exploit Inspection Potential

Project Managers are given no specific motivation to fully exploit the
Inspection potential for cleaning up the upstream requirements and
design processes. The result is still far too much business as usual
with high downstream test defect removal costs.

Trained Inspection Team Leaders, Despite High Conviction of the Value
of Inspections, Are Losing Motivation.

Trained Leaders are Prevented From Applying Inspections Properly.

Project Managers are told they are expected to do Inspections, but as
there is no guidance about how to do them intelligently and properly,
they have been observed to coerce trained Leaders (who know better!)
into going too fast and skipping vital planning and monitoring work.
The result is a farce.  Managers say they do Inspections, and probably
believe it; they have no training whatsoever and are generally under
very high pressure.  But something like 90% of the Major defects which
could be captured are not.

Process Improvement Efforts Have Not Gotten Off The Ground

The component of Inspections known as the Defect Prevention Process
(DPP, CMM Level 5) has been flirted with so lightly that it is not
worth reporting.  No project has made any serious effort to make it
work.  It is capable of avoiding 50% of all defects in the first year
through process change, but we are doing nothing.  (Note: as of
September 1996 the first serious project use of DPP was underway.)

Measurable Results for Process Improvement Efforts Do Not Exist.

Although 'continuous improvement' is officially one of the
Corporation's major public policies, and a few official numeric goals
about product quality improvement are in circulation, there are no
known numeric results in our Division for any form of process
improvement, neither as an objective nor as a fact.  We should be
targeting results like the 40% annual productivity improvement
recommended by Juran and achieved by Raytheon (Dion).

Minimum EXIT Levels of Document Quality are Not Being Applied

The basic control that Inspection provides, an exit criterion of no
more than 3 Majors per 300 non-commentary words (initially, and a
maximum of 0.3 after about two years), is not being applied at all in
3 of the 4 major Groups.  The result is a farce.  Inspections are being
"done", but documents are exiting with the usual 10 to 50 or more Major
defects per page, uncontrolled.  This is like having armed guards at
the gates but letting saboteurs inside with bombs anyway.
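
As a rough illustration only, here is a minimal sketch in Python of the
arithmetic behind such an exit criterion; the page size of 300
non-commentary words and the 3.0/0.3 thresholds come from the text
above, while the function names and example figures are invented.

      # Sketch: Major-defect density per logical page (300 non-commentary
      # words), compared against a chosen exit threshold.

      def majors_per_page(majors, non_commentary_words, words_per_page=300):
          pages = non_commentary_words / float(words_per_page)
          return majors / pages

      def may_exit(majors, non_commentary_words, threshold=3.0):
          # threshold 3.0 initially; tightened toward 0.3 over ~2 years
          return majors_per_page(majors, non_commentary_words) <= threshold

      # Example: 12 Majors estimated in a 1500 non-commentary-word document.
      print(majors_per_page(12, 1500))          # 2.4 Majors per page
      print(may_exit(12, 1500))                 # True at the initial threshold
      print(may_exit(12, 1500, threshold=0.3))  # False at the mature threshold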

The Optimum Checking Rate is Not Being Respected, thus Dramatically
Reducing the Effectiveness of Inspection

With the exception of one Group, the Groups and projects are not
attempting to determine the optimum checking rates, although they have
the data and statistical reports to do so.  They are, of course, not
attempting to use even the known minimal default checking rates.  Their
average checking rates are in fact at least an order of magnitude too
fast.  This results in about 5% or less defect-finding effectiveness,
and the dangerous illusion that their documents are clean and can be
released.  That such inspections have actually been attempted is
insane.

The use of systematic sampling, which allows optimum rates even under
time pressure, is not being practiced by most.
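
For illustration, here is a sketch in Python of how logged checking
rates could be compared against an optimum; the optimum of roughly one
page (300 non-commentary words) per checker-hour is an assumption for
the example, since the point above is that each organization should
determine its own optimum from its own data.

      # Sketch: flag inspections whose checking rate is far above an
      # assumed optimum of ~1 page per checker-hour.

      OPTIMUM_PAGES_PER_HOUR = 1.0   # assumed; derive yours from your data

      def report(inspections):
          """inspections: list of (doc_id, pages_checked, checking_hours)."""
          for doc_id, pages, hours in inspections:
              rate = pages / hours
              factor = rate / OPTIMUM_PAGES_PER_HOUR
              verdict = ("near optimum" if factor <= 1.5
                         else "too fast (x%.0f)" % factor)
              print("%-8s %6.1f pages/hour   %s" % (doc_id, rate, verdict))

      report([("REQ-7",   2.0, 2.0),   #  1.0 pages/hour
              ("DES-12", 30.0, 1.5)])  # 20.0 pages/hour: order of magnitude off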

No Clear Rationale For Where To Apply Inspection

There is no clear policy regarding what should be inspected.  We are not
applying it in the high profit areas.

No Clear Goal For What Inspection is Trying to Achieve

Inspection is being done because some managers have said it will be,
but they have not indicated what benefits they expect as a result.
Inspections are therefore being done, but it is not at all clear what
the benefits are.  The Software Director claimed he was happy with the
use of Inspections, and they are done on a large scale in his area, but
specific measurable benefits are not available, as they normally should
be.  Note: by September 1996 this Software Director was no longer in
the same position; he was reported to be attending some "Quality"
consulting course.

POSTSCRIPT...

In about April 1996 this organization got a technical director who has
displayed the power and the will to rectify the problems above.  He has
made it his clear personal responsibility to make Inspection work
properly, and he clearly expects measurable results.  So the situation
has turned around, but it is not perfect!  Our original strategy was to
train the grass roots in leading teams and let management wake up
afterwards.  We were told that this is how the organization did things.

With hindsight, the time lag (about 3-4 years) needed to wake up
management was far too long, and it was wasteful.  We trained far too
many people who were thrown back into an unreceptive, high-pressure
working environment without appropriate management planning, training
and support.  It would have been a better strategy to focus on fewer
areas and absolutely master the process before expanding it.  I believe
it would have woken management up faster, and the Corporation would be
further along than it is today.  They would probably reply: "Maybe.
But this is how we do things in our culture, and we are happy we did
it."

========================================================================

       14th Annual Pacific Northwest Software Quality Conference

                          October 29-30, 1996
               Oregon Convention Center, Portland, Oregon

KEYNOTE SPEAKERS

Watts Humphrey    "What If Your Life Depended On Software".  He is
                  currently with the Software Engineering Institute of
                  Carnegie Mellon University and has 27 years of
                  experience with IBM.

Susan Dart        "Technology Adoption".  She is currently with Dart
                  Technology Strategies and has 20 years of experience,
                  including time at the Software Engineering Institute
                  of Carnegie Mellon University.

PANEL DISCUSSION   Capers Jones, Bill Hetzel and Judy Bamberger
                  "Software Engineering in the Year 2000"

Also featured are refereed paper presentations, invited speakers, vendor
exhibits and Birds of a Feather discussions.

Workshops, October 28, 1996, at the Oregon Convention Center:

Susan Dart        "Successful Ways for Improving the Quality of Your
                  Configuration Management Solution"
Neal Whitten      "Twelve `Big Ticket' Best Practices to Speed Your
                  Product Development"
Norm Kerth        "Leadership from the Technical Side of the Ladder"
Robert Binder     "Object-Oriented System Testing: the FREE Approach"
Edward Kit        "Software Testing Tools"
Jarrett Rosenberg "A Quick Introduction to Reliability Models for
                  Software Engineers"

For more information, send Email to info@pnsqc.org, or visit the PNSQC
Home Page at:

        http://www.pnsqc.org/~pnsqc

or call Pacific Agenda at +1 (503) 223-8633.

========================================================================

       Useful Features of a Test Automation System (Part 3 of 3)

                                  by

                               James Bach

   Editor's Note: This article, by noted testing guru Jim Bach, is his
   personal set of hints and recommendations about how to get the
   best out of your effort at building a regression test system.  You
   can reach Jim at STL, South King St. Suite 414, Seattle, WA 98104.

o  Tests can be selectively activated or deactivated.

   There should be a mechanism (other than commenting out test code) to
   deactivate a test, such that it does not execute along with all the
   other tests. This is useful when a test reveals a crash bug and
   there is no reason to run it again until the bug is fixed.
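
   One possible shape for such a mechanism is sketched below (a minimal
   Python sketch; the registry, test names, and output format are
   invented and not any particular tool's API):

      # Sketch: deactivate a test without commenting out its code.  The
      # runner consults a small registry of deactivated tests, each with
      # the reason it is switched off.

      DEACTIVATED = {
          "test_print_preview": "crashes until bug #1234 is fixed",
      }

      def run_suite(tests):
          for name, func in tests:
              if name in DEACTIVATED:
                  print("SKIP  %s  (%s)" % (name, DEACTIVATED[name]))
                  continue
              try:
                  func()
                  print("PASS  %s" % name)
              except AssertionError as exc:
                  print("FAIL  %s  (%s)" % (name, exc))

      def test_open_file():
          assert True

      def test_print_preview():
          raise AssertionError("would crash the product")

      run_suite([("test_open_file", test_open_file),
                 ("test_print_preview", test_print_preview)])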

o  Tests are easily reconfigured, replicated, or modified.

   An example of this is a functional test of a program with a graphical
   user interface that can be configured to simulate either mouse input
   or keyboard input. Rather than create different sets of tests to
   operate in different modes and contexts, design a single test with
   selectable behavior. Avoid hard-coding basic operations in a given
   test. Instead, engineer each test in layers and allow its behavior to
   be controlled from a central configuration file. Move
   sharable/reusable code into separate include files.
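
   The layering might look roughly like this (a Python sketch with
   made-up driver functions standing in for a real GUI-automation
   layer):

      # Sketch: a single test whose basic operations are layered and
      # selected by a central configuration rather than hard-coded.

      CONFIG = {"input_mode": "keyboard"}   # would normally come from a file

      def click_menu_with_mouse(item):
          print("mouse: clicking", item)

      def type_menu_shortcut(item):
          print("keyboard: shortcut for", item)

      def choose_menu_item(item):
          # Operation layer: the test never knows how the menu is driven.
          if CONFIG["input_mode"] == "mouse":
              click_menu_with_mouse(item)
          else:
              type_menu_shortcut(item)

      def test_save_document():
          choose_menu_item("File/Save")
          # ... verification of the saved document would go here ...

      test_save_document()   # runs in whichever mode CONFIG selects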

o  Tests are important and unique.

   You might think that the best test suite is one with the best chance
   of finding a problem. Not so.  Remember, you want to find a *lot* of
   *important* problems, and be productive in getting them reported and
   fixed. So, in a well-designed test suite, for every significant bug
   in the product, one and only one test will fail. This is an ideal, of
   course, but we can come close to it.  In other words, if an
   enterprising developer changes the background color of the screen, or
   the spelling of a menu item, you don't want 500 tests to fail (I call
   it the "500 failures scenario" and it gives me chills). You want one
   test to fail, at most. Whenever a test fails, you know that at least
   one thing went wrong, but you can't know if more than one thing went
   wrong until you investigate. If five hundred tests fail, it would
   help you to budget your time if you were confident that 500 different
   and important problems had been detected. That way, even before
   your investigation began, you would have some idea of the quality
   of the product.

   Likewise, you don't want tests to fail on bugs so trivial that they
   won't be fixed, while great big whopper bugs go unnoticed.
   Therefore, I suggest automating interesting and important tests
   before doing the trivial ones. Avoid full-screen snapshots, and use
   partial screen shots or pattern recognition instead. Maybe have one
   single test that takes a series of global snapshots, just to catch
   some of those annoying little UI bugs.  Also, consider code
   inspection, instead of test automation, to test data intensive
   functionality, like online help. It's a lot easier to read files than
   it is to manipulate screen shots.

o  Dependencies between tests can be specified.

   Tests that depend on other tests can be useful. Perhaps you know that
   if one particular test fails, there's no reason to run any other
   tests in a particular group. However, this should be explicitly
   specified to the test suite, so that it will skip dependent tests.
   Otherwise, many child tests will appear to fail due to one failure in
   a parent test.
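
   A runner might honor declared dependencies along these lines (a
   Python sketch; the dependency table and test names are invented):

      # Sketch: skip child tests when the parent they depend on has
      # failed, instead of reporting many misleading child failures.

      DEPENDS_ON = {
          "test_save_as": "test_open_file",
          "test_print":   "test_open_file",
      }

      def run(tests):
          results = {}
          for name, func in tests:
              parent = DEPENDS_ON.get(name)
              if parent and results.get(parent) == "FAIL":
                  results[name] = "SKIP (parent %s failed)" % parent
              else:
                  try:
                      func()
                      results[name] = "PASS"
                  except AssertionError:
                      results[name] = "FAIL"
              print("%-15s %s" % (name, results[name]))

      def test_open_file():
          assert False, "simulated parent failure"

      def test_save_as():
          pass

      def test_print():
          pass

      run([("test_open_file", test_open_file),
           ("test_save_as", test_save_as),
           ("test_print", test_print)])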

o  Tests cover specific functionality without covering more than
   necessary.

   Narrowly defined tests help to focus on specific failures and avoid
   the 500 failure scenario. The downside is that overly narrow tests
   generally miss failures that occur on a system level. For that
   reason, specify a combination of narrow and broader tests. One way to
   do that is to create narrow tests that do individual functions; then
   create a few broad tests, dependent on the narrow ones, that perform
   the same functions in various combinations.

o  Tests can be executed on a similar product or a new version of the
   product without major modification.

   Consider that the test suite will need to evolve as the software that
   it tests evolves. It's common for software to be extended, ported, or
   unbundled into smaller applications. Consider how your suite will
   accommodate that. One example is localization. If you are called upon
   to test a French version of the software, will that require a
   complete rewrite of each and every test?  This desirable attribute of
   test automation can be achieved, but may lead to very complex test
   suites. Be careful not to trade a simple suite that can be quickly
   thrown out and rewritten for a super-complex suite that is
   theoretically flexible but also full of bugs.

o  Test programs are reviewable.

   Tests must be maintained, and that means they must be revisited.
   Reviewability is how easy it is to come back to a test and understand
   it.

o  Test programs are easily added to suites.

   In some suites I've seen, it's major surgery just to add a new test.
   Make it easy and the suite will grow more quickly.

o  Tests are rapidly accessible.

   I once saw a test management system where it took more than 30
   seconds to navigate to a single test from the top level of the
   system. That's awful. It discouraged test review and test
   development. Design the system such that you can access a given test
   in no more than a few seconds.  Also, make sure the tests are
   accessible by anyone on the team.

o  Tests are traceable to a test outline.

   To some it is obvious; for me it was a lesson that came the hard way:
   Do not automate tests that cannot already be executed by hand. That
   way, when the automation breaks, you will still be able to get your
   testing done. Furthermore, if your automated tests are connected to a
   test outline, you can theoretically assess functional test coverage
   in real-time. The challenge is to keep the outline in sync with both
   the product and the tests.
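
   One simple way to make that connection (a Python sketch; the outline
   items and the test-to-item tags are invented) is to tag each
   automated test with the outline item it covers and report the gaps:

      # Sketch: rough functional-coverage report driven by a test outline.

      OUTLINE = ["FILE.OPEN", "FILE.SAVE", "EDIT.UNDO", "PRINT.BASIC"]

      AUTOMATED = {                    # test name -> outline item it covers
          "test_open_file": "FILE.OPEN",
          "test_save_file": "FILE.SAVE",
      }

      covered = set(AUTOMATED.values())
      for item in OUTLINE:
          status = "automated" if item in covered else "manual only"
          print("%-12s %s" % (item, status))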

o  Tests are reviewed, and their review status is documented in-line.

   This is especially important if several people are involved in
   writing the tests. Believe it or not, I have seen many examples of
   very poorly written tests that were overlooked for years. It's an
   unfortunate weakness (and most of us have it) that we will quickly
   assume that our test suite is full of useful tests, whether or not we
   personally wrote those tests. Many bad experiences have convinced me
   that it's dangerous to assume this! Test suites most often contain
   nonsense. Review them!  In order to help manage the review process, I
   suggest recording the date of review within each test. That will
   enable you to periodically re-review tests that haven't been touched
   in a while.  Also, review tests that never fail, and ones that fail
   often.  Review, especially, those tests that fail falsely.

o  Test hacks and temporary patches are documented in-line.

   Adopt a system for recording the assumptions behind the test design,
   and any hacks or work-arounds that are built into the test code. This
   is important, as there are always hacks and temporary changes in the
   tests that are easy to forget about. I recommend creating a code to
   indicate hacks and notes, perhaps three asterisks, as in:  "***
   Disabled the 80387 switch until co-processor bug is fixed."
   Periodically, you should search for triple-asterisk codes in order
   not to forget about intentionally crippled tests.
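
   The periodic sweep can be a few lines of script (a Python sketch;
   the marker is the triple-asterisk convention suggested above, and
   the "tests" directory name is an assumption):

      # Sketch: find "***" hack/patch markers left in test sources so
      # intentionally crippled tests are not forgotten.

      import os

      MARKER = "***"

      def find_markers(root="tests"):
          for dirpath, _dirs, files in os.walk(root):
              for fname in files:
                  path = os.path.join(dirpath, fname)
                  try:
                      with open(path, errors="replace") as fh:
                          for lineno, line in enumerate(fh, 1):
                              if MARKER in line:
                                  print("%s:%d: %s" % (path, lineno, line.strip()))
                  except OSError:
                      pass

      find_markers()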

o  Suite is well documented.

   You should be able to explain to someone else how to run the test
   suite. At Borland I used the "Dave" test: When one of my testers
   claimed to have finished documenting how to run a suite, I'd send
   Dave, another one of the testers, to try to run it.  That invariably
   flushed out problems.

========================================================================

                  Software Testing: Poor Consideration

                            Franco Martinig

Each year the Methods & Tools newsletter presents the aggregated results
of software development process assessments. These evaluations are
performed using the first version of the Capability Maturity Model (CMM)
questionnaire developed by the Software Engineering Institute (SEI). The
data has been supplied by 118 organizations located in Europe, Canada
and the USA.

The average results show that quality is still poor and that only 2% of
the participants are assessed at Maturity Level 2.  (Level 2 is the
first achievable goal on a five-level scale.)  More effort is put into
the earlier part of the software development life cycle and into
project management activities.  Control-oriented activities, like
testing, and the related quantitative analysis activities are seldom
performed.

Looking at the detailed items concerning testing activities and tools,
we can see that testing is one of the poor relations of software
development.  Only around 25% of the organizations have formal
procedures or standards for the testing phase, and automated testing is
used by only 10% of the companies.

Here are the rates of positive answers to questions concerning testing:

Process quality questions

* Are standards applied to the preparation of unit test cases?    26%
* Are statistics on software code and test errors gathered?       41%
* Is test coverage measured and recorded for each phase of
  functional testing?                                             23%
* Are software trouble reports resulting from testing tracked
  to closure?                                                     74%
* Is there a mechanism for assuring that regression testing
  is routinely performed?                                         25%
* Is there a mechanism for assuring the adequacy of
  regression testing?                                             12%
* Are formal test case reviews conducted?                         25%

Tools questions

* Are automated test input data generators used for testing?      10%
* Are computer tools used to measure test coverage?                8%
* Are computer tools used to track every required function
  and assure that it is tested/verified?                           7%

Organizations interested in having the quality of their software
development process and tools assessed can contact Martinig &
Associates or visit the Methods & Tools web site:

      Martinig & Associates
      Avenue Nestle 28
      CH-1800 Vevey / Switzerland

      Tel: +41-21-922-1300
      Fax: +41-21-921-2353
      Mail: martinig@iprolink.ch
      URL: http://www.astarte.ch/mt

========================================================================

                      Seminar on Software Testing

What:  Design, Develop and Execute Functional and System Test Cases
When:  October 19 and 26 1996
Where: UCSC Extension, Oxford Business Park (San Jose, CA)
How:   Call 1 (800) 660-8639 (inside California) or +1 (408) 427-6600
       (outside California) to enroll.  EDP# 962D99, fee $365.

Designed for developers and quality assurance professionals who require
experience with state-of-the-practice software development and test
methods.

Specific topics covered in this two-day course include:

o  Functional testing with structured reviews, inspections, code walk-
   throughs, requirements traceability, checklists, orthogonal arrays,
   domains, equivalence classes, loops, syntax, error guessing, test
   assertions, and state machines. A test methodology for integration,
   system, and acceptance testing.

o  Structural testing using static code complexity analysis, custom and
   generic simulation libraries, error seeding, dynamic memory analysis,
   dynamic test coverage analysis, static and dynamic standards
   conformance checkers, and more. Some of the tools used/discussed
   include Purify, Software Research, MetaC, and many others.

o  Reliability analysis based on testability, reliability, and
   fault/failure models. Hardware vs. software reliability models.

o  Regression testing with automated software test design, development
   and execution. Important automation tips and tricks for unit through
   acceptance test phases are provided.

Recommended text: "UNIX Test Tools and Benchmarks", Rodney C. Wilson,
Prentice Hall, ISBN 0-13-125634-3.

========================================================================

     TECHNICAL ARTICLE: Evaluation of Expert System Testing Methods

        Shekhar H. Kirani, Imran A. Zualkernan, and Wei-Tek Tsai

  COMMUNICATIONS OF THE ACM, November 1994, Vol. 37, No. 11, Page 71.

Expert systems are being developed commercially to solve nontraditional
problems in such areas as auditing, fault diagnosis, and computer
configuration. As expert systems move out from research laboratories to
commercial production environments, establishing reliability and
robustness has taken on increasing importance. In this article, we
would like to assess the comparative effectiveness of testing methods
including black-box, white-box, consistency, and completeness testing
methods in detecting faults.

We take the approach that an expert system life cycle consists of the
problem-specification phase, solution-specification phase, high-level
design phase, implementation phase, and testing phase. This approach is
consistent with the modern expert system life cycle as suggested
elsewhere.  Expert system testing (generally known as verification and
validation) establishes a binary relationship between two by-products of
the software-development process. For this article, we consider testing
as the comparison between by-products by each life-cycle phase and
implementation. We use a technique called "life-cycle mutation testing"
(LCMT) for a comparative evaluation of testing methods on an expert
system.


========================================================================

                Standard for Software Component Testing

                           Brian A. Wichmann

Although ISO 9001 does not require it, testing is an important aspect
of the development of computer systems.  There are many forms of
testing, but those used to test software components are among the best
understood.  The British Computer Society Special Interest Group in
Software Testing (SIGIST) has been developing a formal standard for
component testing over the last few years.  This brief article
describes the current draft of this unnamed standard, which is
relatively mature and has been submitted to the British Standards
Institution for formal consideration.

The Standard

The conventional approach to software testing standards is to provide
guidance, such as what appears in ISO 9000-3 and in several standards by
the American National Standards Institute (ANSI) and Institute of
Electrical and Electronics Engineers (IEEE).  These documents provide
useful advice but lack quantitative or objective measures of compliance.
Hence, it is difficult for testers to determine how much testing has
been performed or whether the tests adequately removed errors in the
software.  The SIGIST group that developed the component testing
standard believes that the degree of testing should be quantified when
possible.  Fortunately, metrics are available for component testing,
e.g., the fraction of statements executed in a component during test.
The component testing standard defines 11 test case design
techniques and their associated metrics:

- Equivalence partitioning
- Boundary value analysis
- State transition testing
- Cause-effect graphing
- Syntax testing
- Statement testing
- Branch decision testing
- Data flow testing
- Branch condition testing
- Branch condition combination testing
- Modified condition decision testing

To comply with this standard, a user must select which of these 11
techniques to apply and decide the minimum degree of testing in each
case, using the metrics defined in the standard.  A simple process to
apply the standard is contained within the standard.
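
As a simple illustration of the kind of metric involved (a sketch only;
the 95% figure is an invented example of a minimum a user might choose,
not a value taken from the standard), statement coverage for the
Statement Testing technique reduces to a ratio:

      # Sketch: statement coverage as the "degree of testing" metric for
      # Statement Testing: executed statements / executable statements.

      def statement_coverage(executed, executable):
          return 100.0 * executed / executable

      achieved = statement_coverage(180, 200)  # 180 of 200 statements run: 90.0%
      required = 95.0                          # example minimum chosen by the user
      print("achieved %.1f%%, required %.1f%%: %s"
            % (achieved, required,
               "compliant" if achieved >= required else "not yet compliant"))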

An informative annex provides an illustration of each test case design
technique.  Hence, anyone who carries out software component testing
should have no difficulty in following the standard.
(If you do have difficulty, please let us know; it may be our mistake!)
As yet, there are no textbooks based upon this standard, but once the
formal standardization process has been completed, we anticipate that
textbooks will become available.

As part of the work involved in writing this standard, we rapidly
realized that key terms were used slightly differently by many
textbooks.  In consequence, we have produced a glossary of testing terms
to accompany the standard.  When possible, the glossary uses term
definitions from existing standards.

The British Computer Society SIGIST hopes that all software developers
will use this standard, and that TickIT assessors will be aware of its
relevance in assuring compliance with the testing requirements in ISO
9001.

Availability

The standard and glossary are available in printed form, for a nominal
charge of 20 pounds to cover copying, postage and packing, from:

BCS SIGIST c/o QCC Training Ltd.
73 West Road
Shoeburyness
Essex SS3 9DT, UK
Fax: 44-01702-298228

The standard and glossary, with suitable hypertext links between them,
are viewable via the WWW at:

   http://www.rmcs.cranfield.ac.uk/~cised/sreid/BCS_SIG/index.htm

Brian A. Wichmann
National Physical Laboratory
Teddington, Middlesex
TW11 0LW, UK
Voice: 44-181-943-6976
Fax: 44-181-977-7091
Email: baw@cise.npl.co.uk

Editor's Note: This article appeared in CrossTalk, The Journal of Defense
Software Engineering, August 1996.

========================================================================

                      CALL FOR PAPERS: ESEC/FSE'97

              6th European Software Engineering Conference
                                  and
  5th ACM SIGSOFT Symposium on the Foundations of Software Engineering

                          Zurich, Switzerland

                          22-25 September 1997

The Sixth European Software Engineering Conference will be held jointly
with the Fifth ACM SIGSOFT Symposium on the Foundations of Software
Engineering. The combined conference will bring together professionals
from academia, business and industry to share information, evaluate
results and explore new ideas on software engineering models, languages,
methods, tools, processes, and practices. The conference will be
preceded by a wide range of tutorials on all aspects of software
engineering.


Organized by: The ESEC Steering Committee

Sponsored by:

- ACM SIGSOFT
- CEPIS (Council of European Professional Informatics Societies)
- Swiss Informatics Society

Papers

Original papers are solicited from all areas of software engineering.
Papers may report results of theoretical, empirical, or experimental
work, as well as experience with technology transfer. Topics of
particular interest are:

*  Requirements engineering     *  Software architectures
*  Testing and verification     *  Software components
*  Configuration management     *  Design patterns
*  Software process models      *  Metrics and evaluation
*  SE environments              *  Safety-critical systems

Submission of papers

Submitted papers should be no longer than 6000 words and should include
a title page with a short abstract, list of keywords, and the authors'
addresses. Papers (6 copies) must be received by the Program Chair by
January 19, 1997. Submitted papers must not be under consideration by
other conferences or journals.

Submissions will be reviewed by the program committee for originality,
significance, timeliness, soundness and quality of presentation. You
should make clear what is novel about your work and should compare it
with related work. Theoretical papers should show how their results
relate to software engineering practice; practical papers should discuss
lessons learned and the causes of success or failure.

Tutorials

Proposals for half-day or full-day tutorials related to the topics of
the conference are invited. The proposal should detail the purpose and
contents of the tutorial, the required audience background and the
qualifications of the presenter. For further information consult the
conference web pages.

Conference Location

The conference will be held in Zurich, Switzerland, easily reachable by
air, by rail and by highway. Zurich is an international city with plenty
of cultural and sightseeing opportunities. The climate in Zurich is
usually mild and pleasant in September.

Important Dates

     Paper submissions due: January 19, 1997
     Tutorial submissions due: March 30, 1997
     Notification of acceptance: April 30, 1997
     Final versions of papers due: June 20, 1997

Chairs

Executive Chair: Helmut Schauer
                 Department of Computer Science
                 University of Zurich
                 Winterthurerstrasse 190
                 CH-8057 Zurich, Switzerland
                 Phone +41-1-257 4340, Fax +41-1-363 00 35
                 E-mail: schauer@ifi.unizh.ch

Program Chair:   Mehdi Jazayeri
                 Distributed Systems Department
                 Technische Universitaet Wien
                 A-1040 Vienna, Austria
                 Phone +43-1-58801-4467, Fax +43-1-505 84 53
                 E-mail: jazayeri@tuwien.ac.at

Tutorials Chair: Dino Mandrioli
                 Politecnico di Milano
                 Piazza Leonardo da Vinci 32
                 I-20133 Milano, Italy
                 Phone +39-2-2399 3522, Fax +39-2-2399 3411
                 E-mail: mandriol@elet.polimi.it

Program Committee Members

V. Ambriola (Italy)                 J. Kramer (United Kingdom)
A. Bertolino (Italy)                P. Kroha (Germany)
W. Bischofberger (Switzerland)      J. Kuusela (Finland)
P. Botella (Spain)                  A. van Lamsweerde (Belgium)
R. Conradi (Norway)                 A. Legait (France)
J.-C. Derniame (France)             G. Leon (Spain)
F. De Paoli (Italy)                 B. Magnusson (Sweden)
A. Di Maio (Belgium)                H.-P. Moessenboeck (Austria)
A. Finkelstein (United Kingdom)     H. Mueller (Canada)
A. Fuggetta (Italy)                 O. Nierstrasz (Switzerland)
D. Garlan (USA)                     H. Obbink (Netherlands)
C. Ghezzi (Italy)                   J. Palsberg (USA)
M. Glinz (Switzerland)              W. Schaefer (Germany)
V. Gruhn (Germany)                  I. Sommerville (United Kingdom)
K. Inoue (Japan)                    S. D. Swierstra (Netherlands)
G. Kappel (Austria)                 F. van der Linden (Netherlands)
R. Kemmerer (USA)                   S. Vignes (France)
R. Kloesch (Austria)                J. Welsh (Australia)

To subscribe to the ESEC/FSE'97 mailing list, please send mail to
esec97@ifi.unizh.ch with the keyword SUBSCRIBE in the subject field.
To unsubscribe, please send a mail with the keyword UNSUBSCRIBE in the
subject field.  For more information:

   http://www.ifi.unizh.ch/congress/esec97.html

Any questions? Please send mail to esec97@ifi.unizh.ch

========================================================================

           TTN Printed Edition Eliminated (Continued Notice)

Because of the more efficient TTN-Online version and the widespread
access to SR's WWW site, we have discontinued distributing the Printed
Edition (Hardcopy Edition) of Testing Techniques Newsletter.

The same information that had been contained in the Printed Edition will
be available monthly in TTN-Online, issues of which will be made
available ~2 weeks after electronic publication at the WWW site:

        URL: http://www.soft.com/News/

Issues of TTN-Online from January 1995 are archived there.


========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN On-Line Edition is forwarded on approximately the 15th of each
month to Email subscribers worldwide.  To have your event listed in an
upcoming issue, Email a complete description of your Call for Papers or
Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o  Submission deadlines indicated in "Calls for Papers" should provide
   at least a 1-month lead time from the TTN On-Line issue date.  For
   example, submission deadlines for "Calls for Papers" in the January
   issue of TTN On-Line would be for February and beyond.
o  Length of submitted non-calendar items should not exceed 350 lines
   (about four pages).  Longer articles are OK and may be serialized.
o  Length of submitted calendar items should not exceed 60 lines (one
   page).
o  Publication of submitted items is determined by Software Research,
   Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters and TTN-Online disclaims any responsibility for their
content.

TRADEMARKS:  STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF,
CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage,
STW/Advisor and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To request your FREE subscription, to CANCEL your current subscription,
or to submit or propose any type of article send Email to
"ttn@soft.com".

TO SUBSCRIBE: Send Email to "ttn@soft.com" and include in the body of
your letter the phrase "subscribe ".

TO UNSUBSCRIBE: Send Email to "ttn@soft.com" and include in the body of
your letter the phrase "unsubscribe ".


                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                    San Francisco, CA  94107 USA

                        Phone: +1 (415) 550-3020
                Toll Free: +1 (800) 942-SOFT (USA Only)
                          FAX: +1 (415) 550-3030
                          Email: ttn@soft.com
                      WWW URL: http://www.soft.com

                               ## End ##