                     sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======           September 1996            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is Emailed monthly
to support the Software Research, Inc. (SR) user community and provide
information of general use to the worldwide software testing community.

(c) Copyright 1996 by Software Research, Inc.  Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition
provided that the entire document/file is kept intact and this copyright
notice appears with it.

INSIDE THIS ISSUE:

   o  Productivity Claims for ISO 9000 Ruled Untrue, by John Seddon

   o  C/C++ Software Quality Tools -- Book Review

   o  Useful Features of a Test Automation System (Part 2 of 3), by
      James Bach

   o  The 15th International Conference on Computer Safety, Reliability
      and Security (SAFECOMP 96)

   o  TestWorks Regression + Coverage for Windows 3.1x Available

   o  Fourth Symposium on the Foundations of Software Engineering
      (FSE4), ACM SIGSOFT '96

   o  NSC Value Add Study, by Don O'Neill

   o  ADVANCE PROGRAM: ISSRE'96 -- The Seventh International Symposium
      on Software Reliability Engineering

   o  CALL FOR POSITION PAPERS: International Workshop on Empirical
      Studies of Software Maintenance, Monterey, California, USA,
      November 8, 1996

   o  Reaching SR for information about TestWorks

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

             Productivity Claims for ISO 9000 Ruled Untrue

                              John Seddon

   Note:  This item was forwarded to us by John Favaro, Intecs
   Sistemi S.p.A., Via Gereschi 32-34, 56127 Pisa Italy (Email:
   favaro@pisa.intecs.it).  You can contact the author, who says he
   has a follow-up document coming later this year, as follows:  John
   Seddon, Occupational Psychologist, Vanguard Consulting, 1 Nelson
   Street, Buckingham ENGLAND MK18 1BU (WWW URL
   http://www.vanguardconsult.co.uk; Email
   john@vanguardconsult.co.uk).

The Advertising Standards Authority (ASA) has upheld complaints against
the British Standards Institute (BSI) for claiming that ISO 9000
improves productivity.

The ASA is the UK body which protects consumers from illegal, dishonest
and indecent advertising. Under a voluntary code of conduct, British
newspapers will not carry advertisements which have been found to breach
the ASA's rules.

The complainant was John Seddon, an occupational psychologist and long-
term researcher into ISO 9000. Speaking at his Buckingham offices,
Seddon said "I have no doubt that BSI will move their position to say
that ISO 9000 could improve productivity.  They will also accept, as
they have before, that it is less true for smaller companies".

According to Seddon, the first argument is flawed and the second shows
how, in fact, ISO 9000 has nothing to do with quality.

Seddon again: "To argue that ISO 9000 could improve productivity is to
rely on flimsy opinion research - research which was conducted amongst
those who have registered.  In every case we have studied we have seen
ISO 9000 damaging productivity and competitive position. The executives
of each and every one of these organizations believed that ISO 9000 had
been beneficial: they were all misguided.

"To argue that it may be less true for smaller companies is to accept
that ISO 9000 registration is a cost. There has been a lot of work done
to make it cheaper for small firms.  It is a nonsense. This view exposes
the fallacy of ISO 9000. ISO 9000 is a burden to British industry, not a
marque of quality. The costs being borne by the fifty thousand-plus
organizations are just the visible burden. Much worse is the damage
caused to operations. We have found companies providing worse service to
customers and behaving more inefficiently - and these things were direct
consequences of registration to ISO 9000."

========================================================================

              C/C++ Software Quality Tools -- Book Review

Mark L. Murphy, C/C++ Software Quality Tools.  Prentice Hall 1996
(paperback), ISBN 0-13-445123-6, 327 pages (Includes Disk), $36.00.

   Editor's Note:  This review was written by Brian O'Laughlin
   (briano@tezcat.com).  It appeared in expanded form in the ACM's
   Software Engineering Notes (July 1996).

This is an introductory book centering on the half dozen tools that it
supplies as source code.  The tools are written in C and C++, and if you
have one of the popular PC compilers, the makefiles are included.  The
book also attempts an overview of QA, testing, and how these tools will
help that process.  It is, in short, a book that attempts to cover quite
a large subject area, and has a few shortcomings as a result.

One general problem is the flow of examples for using the tools.  An
example of how to write a unit test to detect problematic code begins
with a sample of the bad code, but the reader never sees how the tool
would help detect it.  The tool would in fact catch the problem, but
the example, like many others, doesn't directly show the tool doing the
work.

Another general problem is the absence of suggested readings.  Many of
the book's topics are only briefly introduced.  These thin introductions
rarely offer the depth their topics require: for example, a mere three
pages on implementing code reviews is unlikely to provide enough
knowledge to successfully begin performing code reviews at your
organization.  Suggested readings, listed at the end of each chapter,
could have offered guidance on getting beyond the introductory level.

========================================================================

       Useful Features of a Test Automation System (Part 2 of 3)

                                  by

                               James Bach

   Editor's Note: This article, by noted testing guru Jim Bach, is his
   personal set of hints and recommendations about how to get the
   best out of your effort at building a regression test system.  You
   can reach Jim at STL, South King St. Suite 414, Seattle, WA 98104.

o  Suite can be paused, single-stepped, and resumed.

   In debugging the suite, or monitoring it for any reason, it's often
   important to be able to stop it or slow it down, perhaps so as to
   diagnose a problem or adjust a test, and then set it going again from
   where it left off.
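
   As a concrete illustration (the article prescribes no particular
   mechanism), here is a minimal Python sketch of a runner that polls a
   control file between tests and checkpoints its position, so an
   interrupted suite resumes where it left off.  The file names and
   test list are invented for illustration:

      import os
      import time

      CONTROL_FILE = "suite.ctl"     # write PAUSE, RESUME, or STOP here
      CHECKPOINT_FILE = "suite.ckp"  # index of the next test to run
      TESTS = ["test_open", "test_edit", "test_print"]  # placeholders

      def read_command():
          """Return the latest command in the control file, or None."""
          try:
              with open(CONTROL_FILE) as f:
                  return f.read().strip().upper() or None
          except FileNotFoundError:
              return None

      def run_suite():
          start = 0                  # resume from checkpoint if present
          if os.path.exists(CHECKPOINT_FILE):
              with open(CHECKPOINT_FILE) as f:
                  start = int(f.read().strip() or 0)
          for i in range(start, len(TESTS)):
              while read_command() == "PAUSE":
                  time.sleep(1)      # write RESUME (or STOP) to go on
              if read_command() == "STOP":
                  break              # checkpoint survives for a resume
              print("running", TESTS[i])   # invoke the real test here
              with open(CHECKPOINT_FILE, "w") as f:
                  f.write(str(i + 1))      # checkpoint after each test
          else:
              if os.path.exists(CHECKPOINT_FILE):
                  os.remove(CHECKPOINT_FILE)  # clean finish: start fresh

      if __name__ == "__main__":
          run_suite()

   Alternating PAUSE and RESUME in the control file approximates
   single-stepping, one test per RESUME.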

o  Suite can be executed remotely.

   Unless you live with your test machines, as some do, it's nice to be
   able to send a command from your workstation and get the test
   machines to wake up and start testing.  Otherwise, you do a lot of
   walking to the lab. It's especially nice if you can query the status
   of a suite, start it, stop it, or adjust it, over the phone. Even
   from home, then, you'd be able to get those machines cracking.
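
   A sketch of one way to do this (my illustration; the article does
   not specify a protocol): each test machine runs a tiny TCP listener,
   so that from your workstation "telnet testmachine 9000" lets you
   type START, STOP, or STATUS.  The port number and command set are
   assumptions:

      import socket

      HOST, PORT = "", 9000          # listen on all interfaces
      state = {"running": False}

      def handle(cmd):
          cmd = cmd.strip().upper()
          if cmd == "START":
              state["running"] = True    # really: launch the suite here
              return "suite started"
          if cmd == "STOP":
              state["running"] = False   # really: signal suite to stop
              return "suite stopped"
          if cmd == "STATUS":
              return "running" if state["running"] else "idle"
          return "unknown command"

      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind((HOST, PORT))
          srv.listen(1)
          while True:                    # one command per connection
              conn, _ = srv.accept()
              with conn:
                  data = conn.recv(1024).decode("ascii", "replace")
                  conn.sendall((handle(data) + "\n").encode("ascii"))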

o  Suite is executable on a variety of system configurations to support
   compatibility testing.

   Automated suites should be designed with a minimum of assumptions
   about the configuration of the test machine. Since compatibility
   testing is so important, try to parameterize and centralize all
   configuration details, so that you can make the suite run on a
   variety of test machines.

o  Suite architecture is modular for maximum flexibility.

   This is as true for testware as it is for software. You will reuse
   your testware. You will maintain and enhance it. So, build it such
   that you can replace or improve one part of it, say the test
   reporting mechanism, without having to rewrite every test. Remember:
   TESTWARE IS SOFTWARE.  Just as software requires careful thought and
   design, so does testware.
   Another aspect of good structure is to centralize all suite
   configuration parameters in one place. Here are some factors that
   might be controlled by a configuration file (a sample sketch follows
   the list):

   o  Whether or not to log every navigational step taken during the
      test
   o  Whether or not to write out memory info before/after each test
   o  Where output is directed: common i/o, log file, debug monitor,
      nowhere
   o  Whether to perform screen comparisons or to rebuild capture files
      instead
   o  The directory in which to place log files
   o  The directories in which to read/write binary (capture) files
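
   For instance, the whole suite might read one configuration file at
   start-up.  This is only a sketch: the file format and key names are
   my own invention, chosen to mirror the factors above.

      import configparser
      import textwrap

      DEFAULT_CONFIG = textwrap.dedent("""\
          [suite]
          log_navigation = yes      ; log every navigational step
          log_memory     = no       ; memory info before/after each test
          output         = logfile  ; console | logfile | debug | none
          compare_mode   = compare  ; compare | rebuild capture files
          log_dir        = ./logs
          capture_dir    = ./captures
          """)

      def load_config(path="suite.cfg"):
          cfg = configparser.ConfigParser(inline_comment_prefixes=(";",))
          cfg.read_string(DEFAULT_CONFIG)  # built-in defaults
          cfg.read(path)                   # file, if present, overrides
          return cfg["suite"]

      cfg = load_config()
      if cfg.getboolean("log_navigation"):
          print("navigation logging on; logs go to", cfg["log_dir"])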

o  Suite can reset test machine(s) to known state prior to each test.

   There are good reasons to reset the test machine to a clean, known
   state, and there are also good reasons not to do that. Resetting
   helps in the process of investigating a problem, but not resetting is
   a more realistic test, since presumably your users will not be
   rebooting their computers between opening a file and printing it!  A
   good idea is to make it a selectable option, so the tests can run
   either way.
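
   A sketch of this selectable option follows; the reset action shown
   (clearing a scratch work directory) merely stands in for whatever
   "known state" means on your test machines.

      import os
      import shutil
      import tempfile

      RESET_BETWEEN_TESTS = True   # the option; could come from suite.cfg
      WORK_DIR = os.path.join(tempfile.gettempdir(), "suite_work")

      def reset_machine_state():
          """Restore the known state: here, an empty work directory."""
          shutil.rmtree(WORK_DIR, ignore_errors=True)
          os.makedirs(WORK_DIR)

      def run_test(test):
          if RESET_BETWEEN_TESTS:
              reset_machine_state()  # investigate from a clean slate
          test()                     # else tests see each other's residue

      run_test(lambda: print("test ran in", WORK_DIR))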

o  Suite execution and analysis take less time and trouble than hand-
   testing.

   I know, it sounds pretty obvious. Alas, it needs to be said. Too many
   testers and managers approach test automation for its own sake.
   Instead, look critically at how much time and effort you are heaping
   on your automation. In the beginning automation costs more, yes, but
   too often even after a couple of years it still takes more effort to
   manage the automation than it would take to do the same thing by
   hand.  The most common problem, in my experience, is false fails.
   Every time the test suite reports a fail that turns out to be a
   problem with the suite itself, or a trivial misalignment between the
   test and the latest conception of the product, all the time needed to
   solve that problem is pure automation cost. Generally speaking, keep
   the suite architecture as simple as possible, to keep maintenance
   costs down.

o  Suite creates summary, coverage, result, debug, and performance logs.

   A summary log is an overview of the results of the test suite: how
   many tests were executed, how many passed, failed, or are unknown,
   etc.  A coverage log shows what features were tested. You can achieve
   this by maintaining an electronic test outline that is associated
   with the test suite. If you have the appropriate tool, a coverage log
   should also report on code-level coverage.  A result log records the
   outcome of each test.  A debug log contains messages that track the
   progress of the test suite. Each entry should be time-stamped.  This
   helps in locating problems with the suite when it mysteriously stops
   working.  A performance log tracks how long each test took to
   execute. This helps in spotting absolute performance problems as well
   as unexpected changes in relative performance.
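
   As a small illustration, here is one way the time-stamped debug log
   and the performance log might be written; file names and the sample
   test are mine, not prescribed by the article.

      import time

      def debug(msg, logfile="debug.log"):
          stamp = time.strftime("%Y-%m-%d %H:%M:%S")
          with open(logfile, "a") as f:
              f.write(f"{stamp}  {msg}\n")  # stamps locate mystery halts

      def timed_run(name, test, perf_log="perf.log"):
          debug(f"starting {name}")
          start = time.time()
          test()
          elapsed = time.time() - start
          debug(f"finished {name}")
          with open(perf_log, "a") as f:
              f.write(f"{name}\t{elapsed:.3f}s\n")  # spot perf drift

      timed_run("test_open", lambda: time.sleep(0.1))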

o  Suite creates global (suite-wide) and local (test-specific) logs.

   Except for the summary log, all the logs should have global and local
   versions. The local versions of the logs should be cumulative from
   test cycle to test cycle. Each local log pertains to a single test
   and is stored next to that test. They form a history of the execution
   of that test.  Global logs should be reinitialized at every test
   cycle, and include information about all of the tests.
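
   A sketch of the split, with an assumed directory layout: the local
   result log lives next to its test and accumulates history, while the
   global log is recreated at the start of every cycle.

      import os
      import time

      def start_cycle(global_log="results/global_result.log"):
          os.makedirs(os.path.dirname(global_log), exist_ok=True)
          open(global_log, "w").close()   # reinitialize once per cycle
          return global_log

      def record(test_dir, name, outcome, global_log):
          stamp = time.strftime("%Y-%m-%d %H:%M:%S")
          line = f"{stamp}  {name}  {outcome}\n"
          os.makedirs(test_dir, exist_ok=True)
          with open(os.path.join(test_dir, "result.log"), "a") as f:
              f.write(line)               # local log: cumulative history
          with open(global_log, "a") as f:
              f.write(line)               # global log: this cycle only

      g = start_cycle()
      record("tests/test_open", "test_open", "PASS", g)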

o  Suite logs are accessible and readable.

   Logs ought to be both machine readable and human readable. I also
   recommend that they be tied into an icon or some other convenient
   front end so that they are easy to get to.

                           (TO BE CONTINUED)

========================================================================

         The 15th International Conference on Computer Safety,
                 Reliability and Security (SAFECOMP 96)

                Arcotel Hotel Wimberger, Vienna, Austria
                          October 23-25, 1996

        WWW URL: http://zditr1.arcs.ac.at/~espiti/safecomp.html

SAFECOMP is an annual event reviewing the state of the art, experiences
and new trends in the areas of computer safety, reliability and
security.  SAFECOMP was initiated by EWICS TC7 (European Workshop on
Industrial Computer Systems, Technical Committee 7) in 1979, and focuses
on critical computer applications. It is intended to form a platform for
technology transfer between academia, industry and research institutions
and between providers, customers and assessors. SAFECOMP is a one-stream
conference which, typically, attracts up to 150 participants. The
conference proceedings containing all papers will be available at the
conference (published by Springer London), and the conference language
is English.

Highlights of the technical program:

More than 60 papers were received and reviewed by the International
Program Committee; 33 have been selected.

Invited Speakers are:

A. Avizienis (USA): Systematic Design of Fault-Tolerant Computers
J. C. Laprie (F): Software-Based Critical Systems
W. Artner (A), S. Visram, Ph. Marsden (UK): Safety Case for the NERC Air
Traffic Control System

Papers will be presented on the following topics:

-Reliability and Safety Assessment
-Formal Methods and Models
-Testing, Validation and Verification
-The Safety Case
-Security
-Railway Applications and Experience
-Industrial Applications and Experience
-Management and Development
-Legal Aspects
-Human Factors

Additionally, four half-day tutorials will be offered:

-Software Quality through Component-based Design and Implementation (M.
Jazayeri, A)

-The Maintenance of Programmable Safety Systems (Ian C. Smith, UK)

-Safety Cases for Computer Based Systems (R. Bloomfield, P. Bishop, UK)

-Safety Related Standards and their Application (M. van der Meulen, NL,
F. Dafelmair, D, W. Ehrenberger, D)

All correspondence regarding registration and requests for information
should be addressed to:

      G. Sonneck
      FZ Seibersdorf
      A-2444 Seibersdorf AUSTRIA

      Tel     +43 2254 780 3115
      FAX     +43 2254 72133

      sonneck@arcs.ac.at

========================================================================

       TestWorks Regression + Coverage for Windows 3.1x Available

               STW/Regression (CAPBAK/MSW and SMARTS/MSW)

A number of technical revisions to SR's earlier-released Windows 3.1x
TestWorks/Regression bundle, including CAPBAK/MSW Version 2.6 and
SMARTS/MSW Version 2.6, have been completed and copies are now
available.

This new build of TestWorks/Regression Version 2.6 for Windows 3.1x
features many minor changes and modifications.

FEATURES of TestWorks Regression for Windows:

o  New standardized and easy-to-use automated product installation and
   de-installation.

o  Simplified user-based licensing that also supports user, group,
   department and/or site licenses.

o  Easy interfaces to handle multiple large, complex projects without
   capacity limitations.

o  VCR control model for capture and playback for ease of use.

o  Reliable TrueTime capture and playback with automatic output
   synchronization, OCR-based ASCII synchronization, and ASCII
   extraction.

o  Full ObjectMode recording capability to enhance script portability
   and flexibility.

o  Function-key switchability between TrueTime and ObjectMode.

o  Full "C" programmed test script to define the unlimited-size test
   trees.

o  Relational test tree, sub-tree, and test selection, plus multiple
   "go" modes for test execution.

o  Standard regression, PASS/FAIL reports built in.

o  Partial, full-window and full-screen capture modes.

o  Scripts can include DOS commands with a built-in DOS command line
   interface.

o  Improved and fully-indexed user documentation available both in hard
   copy and on-line versions.

BENEFITS of TestWorks Regression include:

o  Highly reliable, low overhead test execution and test suite
   management.

o  Early detection of errors for reduced error content in released
   products.

o  Easy interface to full coverage + regression quality process
   architecture.

APPLICATIONS and intended uses of TestWorks Regression include:

o  "Industrial strength" test suite applications which are very large
   (1000's of tests).

o  Test suites that need to extract ASCII information from screens
   (using the OCR capability).

o  Smooth integration with the companion TestWorks/Coverage product for
   C/C++ applications.

Complete information about TestWorks for Windows 3.1x is available from
our Web site at http://www.soft.com, on request from SR, or via Email
to info@soft.com.

========================================================================

                            ACM SIGSOFT '96
                        Fourth Symposium on the
               Foundations of Software Engineering (FSE4)

                     San Francisco, California USA
                           14-18 October 1996

The Fourth Symposium on the Foundations of Software Engineering will
provide a forum for discussion of innovative research results
contributing to an engineering discipline for software systems.  SIGSOFT
'96 includes a strong educational thread of tutorials, workshops,
panels, and invited speakers, supporting this year's theme of software
architecture and design.

SIGSOFT '96 is being held at the Holiday Inn Union Square, located in
the heart of downtown San Francisco, close to shopping, galleries,
theaters, and many of the best restaurants.  The Powell Street Cable Car
Line runs directly outside the hotel door, providing convenient access
to Chinatown, Fisherman's Wharf, and other parts of the city.

Highlights of the conference include:

+ Presentation of significant new research ideas
      Sessions on software modeling, analysis, testing, architecture,
      reuse, evolution, and evaluation.

+ Keynotes
      Michael Jackson   - Problems, Methods, and Structures
      Henry Petroski    - Engineering Bridges: From Concept to Reality
      Eberhardt Rechtin - Software Systems Architecting

+ Panels
      What Can Programming Languages Contribute to Software Engineering,
      and Vice Versa?
      Industrial Priorities for Software Engineering Research

+ Tutorials
      Design Patterns - John Vlissides
      Integration Mechanisms : OLE and CORBA - Michael Stal
      Views & Frames for Problem Analysis - Daniel Jackson & Michael
      Jackson
      Theorem Proving and Model Checking for Software - John Rushby
      Software Design and Implementation with Generic Components -
      Mehdi Jazayeri & Georg Trausmuth

Details about the conference program and registration can be found on-
line through the conference web site:

        http://www.csl.sri.com/sigsoft96/

========================================================================

                          NSC Value Add Study

                              Don O'Neill
                          (ONeillDon@aol.com)

In 1995 the NSC held a Software Summit in Washington DC.  As a result of
this event and the discussions that took place, several studies were
identified to be undertaken by the NSC.  One of these is to study the
Value Add of software to the software economy.

The Software Value Add Study is composed of two phases:

1. Perform a study on the national scope of software usage.

2. Prototype a mechanism for software competitiveness assessment.

The national scope of software usage includes:

1. Government - DOD Industry, Services, Agencies

2. Commercial - Telecommunications, Financial, Manufacturing,
Transportation

3. Academia - Education, Research

The mechanism for software competitiveness assessment features carefully
selected, well constructed assessment instruments.  Quick Look
Instruments are used to identify overall strengths and weaknesses in
Enterprise and Product Line software competitiveness indicators.  Probe
Instruments are applied selectively to pinpoint weaknesses in specific
Software Value Points.

The Enterprise Quick Look Instrument delves into software
competitiveness goals, software commitment, business environment,
competitiveness practice, improvement agenda, improvement goals,
improvement commitments,  and leading software indicators.

For example, the business environment may impact global software
competitiveness of the enterprise.  Important indicators establishing
the business environment are wage structure,  productivity, personnel
turnover, customer satisfaction, and the delivery  of essential value.

Among the questions the instrument asks in this area are, for example:

5. To what degree is software personnel turnover impacting business
success?

6. How important is software personnel turnover?

The findings on personnel turnover to date are summarized as follows:

1. Organizations with high software personnel turnover are more highly
competitive than those with stable, unchanging work forces.

2. Programming has become a commodity whose costs  and conditions of
employment are market driven.

3. The result of the market driven, commodity environment is that the
right people land in the right jobs.

4. Some software organizations seek as much as  20% turnover in
personnel intentionally as the means to revitalize skills and to manage
peaks and valleys in capacity.

5. Enterprises that take on the costs of downsizing and strategic
redirection often retain their highly trained software technical staff
in the face of dislocation.

I hope this information is of interest.  Please feel free to refer any
interest in the Global Software Competitiveness Assessment Program to
me.


========================================================================

                      ISSRE'96 -- ADVANCE PROGRAM

                 The Seventh International Symposium on
                    Software Reliability Engineering
               Crowne Plaza Hotel, White Plains, New York
                  October 30, 1996 - November 2, 1996

                  http://www.research.ibm.com/issre96

As software increasingly permeates our lives, software reliability
becomes critically important.  This research area, which began as a
study of the nature of software defects, failure rates, and testing, has
grown into a more holistic discipline addressing a range of software
development, service, and customer issues.  This year's symposium
features over sixty technical papers, a dedicated industrial experience
track, panels on topics such as Web reliability and the Year 2000, and
perspectives from executives in leading software companies. Tutorials on
key topics provide a jump start into their subjects.

TUTORIALS

A: Software-Reliability-Engineered Testing, by John Musa

B: Delivering Engineering Feedback in Software Development, by Ram
Chillarege

C: Software Reliability Engineering for Client-Server Systems, by Norman
Schneidewind

D: Fault Tolerant Software, by Laura L. Pullum

E: Object-Oriented System Testing: The FREE Approach, by Robert Binder

F: Human Aspects of Software Reliability Engineering, by San Murugesan

CONFERENCE PROGRAM

THURSDAY, OCTOBER 31, 1996

Invited Speaker: Ravi Sethi, Research VP, Bell Labs., Lucent
Technologies:  "Specifying, Building, and Operating Reliable Systems"

Invited Speaker: Daniel Siewiorek, Professor, CMU: "Reliability
Perspective: Past, Present, and Future"

A1 - Technical Executive Panel, "Promising SRE Technologies:  What Works
and What Doesn't?"

Panel chair: Ram Chillarege (IBM, U.S.A.).  Panelists: Rich DeMillo
(Bellcore, U.S.A.), Roger Sherman (Microsoft, U.S.A.), Jean-Claude
Laprie (LAAS, France)

B1 - Fault/Failure Detection: Session Chair: Claes Wohlin (Lund Univ.,
      Sweden)

   o  Automatic Failure Detection with Conditional-Belief Supervisors, J.
      Li (Bellcore, U.S.A.) and R. Seviora (Univ. of Waterloo, Canada)

   o  Analyze-NOW - An Environment for Collection and Analysis of
      Failures in a Network of Workstations, A. Thakur and R. Iyer
      (Univ. of Illinois, U.S.A.)

   o  Towards Automation of Checklist-Based Code-Reviews, F. Belli and
      R. Crisan (Univ. of Paderborn, Germany)

C1 - Industry Track I: Session Chair: Robert Swarz (MITRE, U.S.A.)

   o  The Link Between Complexity and Defects in Software--An Analysis,
      David I. Heimann (Fidelity Investments Systems, U.S.A.)

   o  Software Development Capacity Model, David Rentschler (Tandem
      Computers, U.S.A.)

   o  Software Manufacturing Methodologies to Assure Quality, Richard
      Schulman (Information Builders, U.S.A.)

   o  Process Pairs in Practice, Gary Tom (Tandem Computers, U.S.A.)

A2 - Operational Profile/Failure: Session Chair: Jean-Claude Laprie
      (LAAS, France)

   o  Sensitivity of Reliability Growth Models to Operational Profile
      Errors, A. Crespo (UNICAMP, Brazil), P. Matrella and A. Pasquini
      (ENEA, Italy)

   o  On Reducing the Sensitivity of Software Reliability to Variations
      in the Operational Profile, B. Cukic, F. Bastani (Univ. of
      Houston, U.S.A.)

   o  Avionics Software Problem Occurrence Rates, M. Shooman
      (Polytechnic Univ., U.S.A.)

B2 - Test Generation: Session Chair: Jacob Slonim (IBM Toronto, Canada)

   o  Automated Test Generation for Cause-Effect Graphs, A. Paradkar, K.
      Tai and M. Vouk (North Carolina State Univ., U.S.A.)

   o  Experience with the Combinatorial Design Approach to Automatic
      Test Generation, D. Cohen, S. Dalal, G. Patton and J. Parelius
      (Bell Communications Research, U.S.A.)

   o  Object State Testing and Fault Analysis for Reliable Software
      Systems, D. Kung, Y. Lu, N. Venugopalan and P. Hsia (Univ. of
      Texas at Arlington, U.S.A.)

C2 - Reliable Systems I:  Session Chair: Haim Levendel (Lucent
      Technologies, U.S.A.)

   o  Residual Fault Density Prediction Using Regression Methods, J.
      Morgan and G. Knafl (DePaul Univ., U.S.A.)

   o  Integrating Metrics and Models for Software Risk Assessment, J.
      Hudepohl, S. Aud (Nortel, U.S.A.), T. Khoshgoftaar, E. Allen
      (Florida Atlantic Univ., U.S.A.) and J. Mayrand (Bell Canada)

   o  Assessing the Reliability Impacts of Software Fault-Tolerance
      Mechanisms, V. Mendiratta (Lucent Technologies, U.S.A.)

   o  Design of Reliable Software via General Combination of N-Version
      Programming & Acceptance Testing, B. Parhami (Univ. of California,
      Santa Barbara, U.S.A.)

A3 - Panel: "Java/Web Reliability and Security Issues", Panel chair: Dan
      Siewiorek (CMU, U.S.A.)

B3 - Testing: Session Chair: William Howden (UCSD, U.S.A.)

   o  Integration Testing Using Interface Mutations, M. Delamaro, J.
      Maldonado (Univ. of Sao Paulo, Brazil) and A. Mathur (Purdue
      Univ., U.S.A.)

   o  An Approach to Testing Rule-Based Systems, A. Avritzer, J. Ros and
      E. Weyuker (AT&T, U.S.A.)

   o  A Task Decomposition Testing and Metrics for Concurrent Programs,
      C. Chung, T. Shih, Y. Wang, W. Lin and Y. Kou (TamKang Univ.,
      Taiwan, R.O.C.)

C3 - Reliable Systems II: Session Chair: Joanne Dugan (Univ. of
      Virginia, U.S.A.)

   o  Software Reliability Models: An Approach to Early Reliability
      Prediction, C. Smidts (Univ. of Maryland, U.S.A.), R. Stoddard
      (Texas Instruments, U.S.A.) and M. Strutzke (SAIC, U.S.A.)

   o  Composing Reliable Systems from Reliable Components Using
      Context-Dependent Constraints, P. Molin (University College of
      Karlskrona / Ronneby, Sweden)

   o  Using the Genetic Algorithm to Build Optimal Neural Networks for
      Faultprone Module Detection, R. Hochman, T. Khoshgoftaar, E. Allen
      (Florida Atlantic Univ., U.S.A.), and J. Hudepohl (Nortel, U.S.A.)

FRIDAY, NOVEMBER 1, 1996

Invited Speaker: Dorene Palermo, Reengineering VP, IBM: "Reengineering
      IBM Software Development"

Invited Speaker: Timothy Chou, VP, Server Products Group, Oracle:
      "Beyond Fault Tolerance: Challenges to Delivering 7x24 Computing"

A4 - Panel: "Year 2000: Software Disaster?", Panel chair: J. Robert
      Horgan (Bellcore)

B4 - Fault Injection: Session Chair: Karama Kanoun (LAAS, France)

   o  Using Fault Injection to Increase Software Test Coverage, J.
      Bieman, D. Dreilinger and L. Lin (Colorado State Univ., U.S.A.)

   o  Fault Injection Guided by Field Data, J. Christmansson (Chalmers
      University of Technology, Sweden) and P. Santhanam (IBM T.J.
      Watson
      Research Center)

   o  Comparing Disk and Memory's Resistance to Operating System
      Crashes, W. Ng, C. Aycock and P. Chen (Univ. of Michigan, U.S.A.)

C4 - Industry Track II: Session Chair: Allen Nikora (JPL, U.S.A.)

   o  Software Reliability:  Getting Started, John Teresinski (Motorola,
      U.S.A.)

   o  SoothSayer:  A Tool for Measuring the Reliability of Windows NT
      Services, Ted Weinberg (Microsoft, U.S.A.)

   o  Experience Report for the Software Reliability Program on a
      Military System Acquisition and Development, Richard W. Bentz
      (MITRE, U.S.A.) and Curtis D. Smith (E-Systems, U.S.A.)

   o  Safety and Reliability Analysis of Integrated Hard- and Software
      Systems at Saab Using a Formal Method, Ove Akerlund (Saab,
      Sweden), Gunnar Stalmarck, and Mary Helander (Linkoping Univ.,
      Sweden)

A5 - SRE Experience: Session Chair: William Farr (NSWC, U.S.A.)

   o  Validating Software Architectures for High Reliability, W.
      Ehrlich, R. Chan, W. Donnelly, H. Park, M. Saltzman, and P. Verma
      (AT&T, U.S.A.)

   o  Reliability of a Commercial Telecommunications System, M. Kaaniche
      and K. Kanoun (LAAS-CNRS, France)

   o  Reliability and Availability of Novanet - A Wide-Area Educational
      System, P. Dikshit, M. Vouk, D. Bitzer (North Carolina State
      Univ., U.S.A.) and C. Alix (University Communications, Inc.,
      U.S.A.)

   o  The DQS Experience with SRE, W. Everett (SPRE, U.S.A.)

B5 - Distributed Computing: Session Chair: Hareton Leung (Hong Kong
      Polytechnic University)

   o  Software Reliability Engineering for Client-Server Systems, N.
      Schneidewind (Naval Postgraduate School, U.S.A.)

   o  Policy-Driven Fault Management in Distributed Systems, M.
      Katchabaw, H. Lutfiyya, A. Marshall and M. Bauer (Univ. of Western
      Ontario, Canada)

   o  Practical Strategy for Testing Pair-wise Coverage of Network
      Interfaces, A. Williams and R. Probert (Univ. of Ottawa, Canada)

C5 - Fault Tolerance I: Session Chair: Chandra Kintala (Lucent
      Technologies, U.S.A.)

   o  Method for Designing and Placing Check Sets Based on Control Flow
      Analysis of Programs, S. Geoghegan and D. Avresky (Texas A&M
      Univ., U.S.A.)

   o  A Replication Technique Based on a Functional and Attribute
      Grammar Computation Model, A. Cherif, M. Suzuki and T. Katayama
      (Japan Advanced Institute of Science and Technology, Japan)

   o  An On-Line Algorithm for Checkpoint Placement, A. Ziv (QuickSoft
      Development, U.S.A.) and J. Bruck (California Institute of
      Technology, U.S.A.)

A6 - Panel: "How Can SRE Help System Engineers and System Architects?"

Panel chair: Bill Everett (SPRE)

Panelists:
   Willa K. Ehrlich (AT&T)
   John Musa (independent consultant)
   Patricia Mangan (Aerospace Corp.)
   Bob Yacobellis (Motorola)

B6 - Reliability Growth Models: Session Chair: Yoshihiro Tohma (Tokyo
      Denki University, Japan)

   o  Efficient Allocation of Testing Resources for Software Module
      Testing Based on the Hyper-Geometric Distribution Software
      Reliability Growth Model, R. Hou, S. Kuo and Y. Chang (National
      Taiwan Univ., Taiwan, R.O.C.)

   o  Validation and Comparison of Non-Homogeneous Poisson Process
      Software Reliability Models, S. Gokhale, T. Philip, P. Marinos and
      K. Trivedi (Duke Univ., U.S.A.)

   o  A Conservative Theory for Long-Term Reliability Growth Prediction,
      P.G. Bishop and R.E. Bloomfield (Adelard, U.K.)

C6 - Fault Tolerance II:  Session Chair: Kishor Trivedi (Duke Univ.,
      U.S.A.)

   o  Primary-Shadow Consistency Issues in the DRB Scheme and the
      Recovery Time Bound, K. Kim, L. Bacellar and C. Subbaraman (Univ.
      of California, Irvine, U.S.A.)

   o  Empirical Evaluation of Maximum-Likelihood Voting in High Inter-
      Version Failure Correlation Conditions, K. Kim, M. Vouk and D.
      McAllister (North Carolina State Univ., U.S.A.)

   o  Supervision of Real-Time Systems Using Optimistic Path Prediction
      and Rollbacks, D. Simser and R. Seviora (University of Waterloo,
      Canada)

SATURDAY, NOVEMBER 2, 1996

Management Executive Panel, "Business Priority:  Is It Cost, Time to
      Market, or Reliability?", Panel chair: Walter Baziuk (Nortel)

A7 -  Panel: "Deploying SRE in your Organization or Company"

Panel chair: John Teresinski (Motorola)

Panelists:

   David Rentschler (Tandem)
   Adrian Dolinsky (Bellcore)
   Sue Steele (AT&T)
   Jim Widmaier (NSA)

B7 - Modeling/Measurement: Session Chair: George Knafl (DePaul Univ.,
      U.S.A.)

   o  Data Partition-Based Reliability Modeling, J. Tian (Southern
      Methodist Univ., U.S.A.) and J. Palma (IBM Toronto Lab., Canada)

   o  Detection of Software Modules with High Debug Code Churn in a Very
      Large Legacy System, T. Khoshgoftaar, E. Allen (Florida Atlantic
      Univ., U.S.A.), Nishith Goel, A. Nandi and J. McMullan (Nortel
      Technology, Canada)

   o  Fault Exposure Ratio:  Estimation and Applications, N. Li
      (Microsoft Corp., U.S.A.) and Y. Malaiya (Colorado State Univ.,
      U.S.A.)

C7 - Industry Track III:  Session Chair: Mitsuru Ohba (Hiroshima City
      Univ., Japan)

   o  The Software Development Process in the IBM Boeblingen Programming
      Lab, Harald Buczilowski (IBM, Germany)

   o  Putting Aspects of Software Reliability Engineering to Use, James
      Tierney (Microsoft, U.S.A.)

   o  A Primer on Legal Liability for Defective Software, Dana G. Fisher
      (Lucent Technologies, U.S.A.)

   o  Quality Assurance Methods for GUI Application Software
      Development, Takamasa Nara and Shinji Hasegawa (Hitachi, Japan)

PLENARY SESSION: "Lessons Learned -- Feedback and Inputs"


   Program Committee Chair

   Michael R. Lyu
   Bell Laboratories
   Lucent Technologies
   Room 2A-413
   600 Mountain Avenue
   Murray Hill, NJ 07974

   lyu@research.att.com

   Tel:  (908)582-5366
   Fax:  (908)582-3063

   Sponsored by IEEE, IEEE Computer Society.  Organized by the Committee
   on Software Reliability Engineering of the IEEE Computer Society
   Technical Council on Software Engineering. In cooperation with IEEE
   Reliability Society and IFIP 10.4 Working Group on Dependability.

   Conference Hotel:  The Crowne Plaza at White Plains

   Address: 66 Hale Avenue, White Plains, NY 10601

   Phone: 1-800-PLAINS2 or (914)682-0050

   Fax:   (914)682-0405

   ========================================================================

                         CALL FOR POSITION PAPERS

    International Workshop on Empirical Studies of Software Maintenance
                         Monterey, California, USA
                             November 8, 1996

         Web site: http://www.iese.fhg.de/Announcements/wess.html

   This workshop is to take place following ICSM-96, the International
   Conference on Software Maintenance taking place in Monterey,
   California, U.S.A. It is in part sponsored by the Fraunhofer-
   Institute for Experimental Software Engineering (IESE),
   Kaiserslautern, Germany.

   Objectives

   The focus of the workshop is on experimental quantitative and
   qualitative studies of software maintenance processes. Of particular
   interest will be the design of empirical studies, their underlying
   methodologies and techniques, and the lessons learned from them.
   Controlled experiments, field studies, pilot projects, measurement
   programs, surveys or analyses based on questionnaires, maintenance
   process models, etc., are examples of empirical studies of interest.
   Examples of applications are:

   - maintenance cost models (changes, releases)
   - reliability of maintained systems
   - assessment of system maintainability based on measurement
   - models for impact analysis and their evaluation
   - maintenance process assessment, modeling, and improvement

   The objectives of the workshop are three-fold:

   - promote discussion and exchange of ideas among researchers and
     practitioners in this area
   - better identify practical problems and research issues
   - identify and share existing and potential solutions

   Format

   Participants will be asked to submit a position paper (2000 words
   maximum) briefly describing their position on one of the topics
   mentioned above, or on any other topic the author deems relevant.
   The workshop committee will select the largest possible number of
   position papers (around 40) and cluster them into a limited number
   of topics (matching, for example, the topics listed above).
   According to the topic of their position paper, participants will be
   grouped into working groups, and for each of those groups a
   moderator will be selected.

   Members of each group will meet and perform a structured synthesis of
   practical problems, research issues and solutions related to the
   topic of interest. The results of this effort will be presented to
   all workshop participants and published subsequently in the workshop
   proceedings. Each session will have a moderator in charge of
   coordinating the effort and presentations of his/her group,
   introducing the speakers, and writing down a final position paper
   resulting from his/her group discussions.

   The originality of this workshop lies in the fact that, instead of
   focusing on panels and presentations, it will rely chiefly on
   discussions and working groups.  Members of each working group will
   meet in order to make a structured synthesis of the issues,
   solutions, and research directions related to their selected topic,
   which will result in a summary report to be included in the
   proceedings.  This is intended to provide the workshop participants
   with more structured, concrete, and thorough outputs.

   Proceedings

   Proceedings will be printed and sent to participants after the
   workshop. They will include the position papers and the structured
   synthesis resulting from group discussions.

   Submission

   Please send your position paper as an Email attachment in PostScript
   format, or Email it directly, to the address below.

   Lionel Briand
   Fraunhofer-Institute for Experimental Software Engineering (IESE)
   Technologiepark II
   Sauerwiesen 6
   D-67661 Kaiserslautern GERMANY

   Email:  briand@iese.fhg.de
   Voice:  +49 (6301) 707-250
   Fax:    +49 (6301) 707-202

   ========================================================================

             TTN Printed Edition Eliminated (Continued Notice)

   Because of the much-more-efficient TTN-Online version and because of
   the widespread access to SR's WWW site, we have discontinued
   distribution of the Printed Edition (Hardcopy Edition) of Testing
   Techniques Newsletter.

   The same information that had been contained in the Printed Edition
   will be available monthly in TTN-Online, issues of which will be made
   available ~2 weeks after electronic publication at the WWW site:

           URL: http://www.soft.com/News/

   Issues of TTN-Online from January 1995 are archived there.

   ========================================================================
   ------------>>>          TTN SUBMITTAL POLICY            <<<------------
   ========================================================================

   The TTN On-Line Edition is forwarded on approximately the 15th of
   each month to Email subscribers worldwide.  To have your event listed
   in an upcoming issue please Email a complete description of your
   event or a copy of your Call for Papers or Call for Participation to
   "ttn@soft.com".  TTN On-Line's submittal policy is as follows:

   o  Submission deadlines indicated in "Calls for Papers" should
      provide at least a 1-month lead time from the TTN On-Line issue
      date.  For example, submission deadlines for "Calls for Papers" in
      the January issue of TTN On-Line would be for February and beyond.
   o  Length of submitted non-calendar items should not exceed 350 lines
      (about four pages).
   o  Length of submitted calendar items should not exceed 68 lines (one
      page).
   o  Publication of submitted items is determined by Software Research,
      Inc., and may be edited for style and content as necessary.

   TRADEMARKS:  STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF,
   CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage,
   STW/Advisor and the SR logo are trademarks or registered trademarks
   of Software Research, Inc. All other systems are either trademarks or
   registered trademarks of their respective companies.

   ========================================================================
   ----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
   ========================================================================

   To request your FREE subscription, to CANCEL your current
   subscription, or to submit or propose any type of article, send
   Email to "ttn@soft.com".

   TO SUBSCRIBE: Send Email to "ttn@soft.com" and include in the body of
   your letter the phrase "subscribe ".

   TO UNSUBSCRIBE: Send Email to "ttn@soft.com" and include in the body
   of your letter the phrase "unsubscribe ".


                       TESTING TECHNIQUES NEWSLETTER
                          Software Research, Inc.
                             901 Minnesota Street
                         San Francisco, CA  94107 USA

                         Phone: +1 (415) 550-3020
                  Toll Free: +1 (800) 942-SOFT (USA Only)
                          FAX: +1 (415) 550-3030
                            Email: ttn@soft.com
                       WWW URL: http://www.soft.com

                                 ## End ##