sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======             January 1998            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1998 by Software Research,
Inc.

========================================================================

INSIDE THIS ISSUE:

   o  Quality Week '98 (QW'98): Update

   o  Conference Announcement: AQuIS'98 (Venice, 30 March to 2 April
      1998)

   o  The TestWorks Corner:  The Latest System Additions

   o  European Conference on Verification and Validation: EuroVAV97

   o  A Guided Tour of Software Testing Tools for the 10X Program, by
      Robert Poston

   o  Advance Program: ISSTA98 and FMSP98, 2-5 March 1998

   o  WEB Quality Assurance/Testing Survey: Three Responses

   o  ISSRE'98 (Paderborn, Germany, 4-7 November 1998): Preliminary Call
      for Papers

   o  Mailing List Policy Statement

   o  TTN Submittal Policy

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

       QUALITY WEEK '98 (26-29 May 1998, San Francisco):  UPDATE

A record number of paper and tutorial proposals submitted during the
just-closed submission period will ensure another high-quality
conference in May.

The complete technical program will be announced in a few weeks.  Like
prior events, QW'98 can be expected to include timely, focused talks and
presentations from the industry's leaders.

Information about QW'98 is available at the Conference Website
<http://www.soft.com/QualWeek/QW98>; you are invited to visit the site
for the latest details.  Registration to attend QW'98 is now open.

Questions about QW'98 can be E-mailed to qw@soft.com.

========================================================================

                        CONFERENCE ANNOUNCEMENT
                               AQuIS '98
                 The Fourth International Conference on
                     Achieving Quality in Software
                         March 30-April 2, 1998
                          Palazzo Giovannelli
                             VENICE, ITALY

          Conference website: http://www.iei.pi.cnr.it/AQUIS98

                         PROVISIONAL PROGRAMME

1/2-DAY TUTORIALS, Monday, 30 March 1998:


Tutorial A: A.Dorling, "SPICE and ISO/IEC 15504: overview and
perspectives"

Tutorial B: P.Paolini/M.Matera (Politecnico of Milan), "Structured
Design and Evaluation of Multimedia CD-ROMs and Websites."

Tutorial C: M.Maiocchi (Etnoteam), "Process Improvement and Management
by Metrics as a way for Business Excellence: The case of Information
Technology Companies"

Tutorial D: L.Jazayeri (University of Vienna), "C++ for Software
Engineering: Standard Solutions for Standard Problems"


TECHNICAL CONFERENCE: Tuesday, Wednesday and Thursday, 31 March - 2
April 1998:

Invited Speakers:

Prof. Zhang Shiyong (Fudan University, Shanghai)
Prof. Lori Clarke (University of Massachusetts, Amherst)
Prof. Richard A. Kemmerer (University of California, Los Angeles)


CONTACT INFORMATION:

Registration material for AQuIS can be obtained from:

Consorzio Universitario in Ingegneria della Qualita'
P.zza del Pozzetto, 9
56127 Pisa, Italy
Tel. +39.50.541751   Fax. +39.50.541753

Complete details on: http://www.iei.pi.cnr.it/AQUIS98

========================================================================

             TestWorks Corner: The Latest System Additions

As TTN-Online readers may know, TestWorks is an expansive family of
software testing and validation tools aimed at UNIX and Windows
workstations.  Complete information about TestWorks can be found at:
<http://www.soft.com/Products>.

Some recent advances and/or additions to the TestWorks product line
include:

o  TCAT for Java for Windows is available for download.

o  The complete TestWorks product line for UNIX is now available on the
   DEC-Alpha under DEC-Unix.

o  There is a new application note on applying TestWorks to Y2K upgrade
   efforts.

o  We are also offering CAPBAK/MSW and SMARTS/MSW on the DEC-Alpha/NT
   platform.

o  Coming attraction: The Remote Testing Option (RTO) for TCAT for Java
   for UNIX will be expanded to include processing of C/C++ programs.

o  Coming attraction: TCAT for COBOL will soon be available on Windows
   and UNIX platforms.


Get complete information on any TestWorks product or product bundle from
sales@soft.com.

========================================================================

     European Conference on Verification and Validation, EuroVAV97

We heard of the EuroVAV97 conference on V&V by accident, but felt that
it would be of interest to TTN-Online readers:

http://www.econ.kuleuven.ac.be/congres/eurovav/EuroVaV97.htm

Contact: Jan Vanthienen (Email: Jan.Vanthienen@econ.kuleuven.ac.be)

http://www.econ.kuleuven.ac.be/tew/academic/infosys/members/vthienen/default.htm

========================================================================

      A GUIDED TOUR OF SOFTWARE TESTING TOOLS FOR THE 10X PROGRAM
                            by Robert Poston

   Editor's Note: This article is based on material that is also found
   on the AONIX website: http://www.aonix.com.  It is part of the
   description of the AONIX/SR 10X program that can be found there.
   It is included here because it is a very good introduction to the
   full spectrum of software testing tool requirements.  Details
   about products can be obtained from sales@soft.com or from
   sales@aonix.com.

People often approach me at conferences and say that they have a
capture-replay tool that is not producing the testing results they want.
Or they might say that they use a capture-replay tool, so their testing
is already automated. Therefore, they don't see how the 10X Program can
produce significant improvements in their testing.

Capture-replay tools are among the most widely known software testing
tools, so understandably, many people ask for information about them.
Capture-replay tools must be employed to achieve 10X gains. However,
these tools take care of only one small part of software testing: the
running or executing of tests (more accurately called test cases). In
brief, test cases are sets of input values that demonstrate that the
software does or does not perform as specified. Test cases must be
created before they can be run, and they should be evaluated after they
are run. In order to automate specification-based testing fully, we need
tools that create, execute, and evaluate test cases. A capture-replay
tool by itself cannot automate testing completely or achieve 10X Testing
results.

In the 10X Program we first concentrate on the three primary tools that
enable automated specification-based software testing: a requirements
recorder and test case generator which can be packaged as one tool; an
execution tool; and an evaluation tool. These are the so-called "big
three" testing tools. Many other helpful testing tools are available.
The others are nice-to-have tools, but they are not necessary to perform
10X testing. The big three tools should be obtained as soon as possible;
the nice-to-haves can be added to a tool set as budgets allow.

A useful way I have found to introduce people to software testing tools
is to conduct a tour of the software development life cycle.  During
this tour we stop at each phase of the life cycle and look at testing
tools that can be used during that phase. As we progress, people on the
tour become acquainted with the many types of testing tools that are
available and with the job that each tool is intended to perform.

A new tour is starting now. You are welcome to join. As we move through
the life cycle, look for tools that may be in your shop already. See if
your tools are among the big three. Perhaps you are using one or more of
the nice-to-have tools. If your company has not purchased any software
testing tools yet, this tour should help you prepare a shopping list.

Sometimes overly enthusiastic vendors will say that their individual
tool performs multiple testing jobs. That is not what I have found upon
close examination of many testing tools. For example, I cannot find a
capture-replay tool that creates test cases, or conversely, a test case
generator that runs test cases. In my experience most testing tools are
designed to do just one job. However, throughout the tour, I'll point
out tool packages that combine two or more testing tools and do perform
multiple jobs. And I'll also introduce you to a big-three tool set that
is already assembled.

                    Tools for the Requirements Phase

Cost-effective software testing starts when development teams record
requirements for the product they are going to develop.  All testing
depends on having a reference to test against. Software should be tested
against an understanding of what it is supposed to do: the requirements.
Requirements can make testing (and of course, development) painless or
painful. If requirements contain all the information a tester needs in a
usable form, the requirements are said to be test-ready. Test-ready
requirements minimize the effort and cost of testing. If requirements
are not test-ready, testers must search for missing information - often
a long tedious process.

                         Requirements Recorder

To capture requirements quickly and efficiently, software practitioners
may use tools that fall into a general classification called
requirements recorders. Every software development team probably has
some kind of computerized requirements recorder already at hand. Some
teams write their requirements in a natural language such as English and
record them with a text editor. Other teams write requirements in a
formal language such as LOTOS or Z and record them with a syntax-
directed editor. Still others use requirements modeling tools to record
information graphically.

In the past, requirements modeling tools were used mostly by software
analysts or developers. These tools seldom were used by testers.
Recently, however, requirements modeling tools have been evolving in
ways that help testers as well as analysts and developers.  First, a
method for modeling requirements called use cases was incorporated into
some of these tools. Then the use cases were expanded to include test-
ready information.

The use case will be test-ready when data definitions are added.  With
such definitions, a tool will have enough information from which to
create test cases.
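
To make "test-ready" concrete, here is a minimal sketch (in Python,
purely illustrative and not tied to Validator or any other product) of a
use case recorded together with data definitions.  The field names,
ranges, and record format are invented for the example.

    # A hypothetical "withdraw cash" use case recorded with data
    # definitions: each input field names its type and its legal values.
    # With this much information a generator can derive test inputs.
    use_case = {
        "name": "Withdraw cash",
        "inputs": {
            "amount":  {"type": int, "min": 20, "max": 500},  # whole dollars
            "account": {"type": str, "pattern": r"\d{10}"},   # 10-digit id
        },
        "action": "debit the account and dispense the amount",
        "outputs": {
            "new_balance": {"type": int, "min": 0},
        },
    }

    # A recorder would normally persist this; printing stands in for that.
    if __name__ == "__main__":
        for field, definition in use_case["inputs"].items():
            print(field, definition)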

Aonix offers a system called Validator that combines a use-case recorder
and a test case generator. A user models each functional requirement as
a use case and directs Validator to generate test cases from the use
cases. The recorder-test case generator is the first big-three tool
required for 10X Testing. I've briefly described the recorder part of
this big-three tool; I'll discuss test case generators shortly.

                         Requirements Verifiers

Requirements recorders are well-established tools that continue to be
updated with new features and methods like use cases. But requirements
verifiers are relatively new tools. Before requirements verifiers came
on the scene, recorded requirements information could be checked in two
ways. It could be checked by using another function available in some
requirements analysis tools to verify that information conformed to
certain methodology rules or by performing manual reviews on the
information. However, neither of these verifications could assure that
requirements information represented a testable product.

To be testable, requirements information must be unambiguous,
consistent, and complete. A term or word in a software requirements
specification is unambiguous if it has one, and only one, definition.  A
requirements specification is said to be consistent if each of its terms
is used in one, and only one, way. Consider, for example, the word
"report". In a requirements specification "report" must be used as
either a noun or a verb. To use "report" as both a name and an action
would make the specification inconsistent.
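
A toy version of such a consistency check tags each term with the way it
is used and flags any term used in more than one way.  The sketch below
(Python, with an invented statement format) only illustrates the idea;
it is not the checking logic of any commercial verifier.

    from collections import defaultdict

    # Each requirement statement lists its terms and how each is used.
    # The statements and usages here are invented for illustration.
    statements = [
        {"id": "R1", "terms": {"report": "noun", "print": "verb"}},
        {"id": "R2", "terms": {"report": "verb", "operator": "noun"}},
    ]

    usages = defaultdict(set)
    for statement in statements:
        for term, usage in statement["terms"].items():
            usages[term].add(usage)

    # A term used in more than one way makes the specification inconsistent.
    for term, ways in usages.items():
        if len(ways) > 1:
            print(f"INCONSISTENT: '{term}' used as {sorted(ways)}")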

Completeness from a tester's point of view means that requirements
contain necessary and sufficient information for testing. Every action
statement must have a defined input, function, and output.  Also, the
tester needs to know that all statements are present.  If any statement
is incomplete or if the collection of statements known as the
requirements specification is incomplete, testing will be difficult.

Requirements verifiers quickly and reliably check for ambiguity,
inconsistency, and statement completeness. However, an automated
verifier has no way to determine that the collection of requirements
statements is complete. The tool can check only what is entered into it,
not what should be entered. Checking completeness of the requirements
specification is still a task that people must perform.

Most testing and development tools used later in the software life cycle
depend on having reliable requirements information, so the requirements
verifier is very nice to have. Unlike most testing tools which are
packaged separately, requirements verifiers are usually embedded in
other tools. For example, the Software through Pictures modeling tool
from Aonix includes three checking functions: Check Syntax; Check
Semantics; and Check Testability.  Requirements that pass these
functions are unambiguous, consistent, and complete enough for testing.

                Specification-Based Test Case Generators

The requirements recorder discussed earlier may be coupled with a
specification-based test case generator. The recorder captures
requirements information which is then processed by the generator to
produce test cases. The test case generator must be included in the
big-three tool set in order to achieve 10X Testing results.

A test case generator creates test cases by statistical, algorithmic, or
heuristic means. In statistical test case generation, the tool chooses
input structures and values to form a statistically random distribution
or a distribution that matches the usage profile of the software under
test.

In the case of algorithmic test case generation, the tool follows a set
of rules or procedures, commonly called test design strategies or
techniques. Most often test case generators employ action-, data-,
logic-, event-, and state-driven strategies. Each of these strategies
probes for a different kind of software defect.

When generating test cases by heuristic or failure-directed means, the
tool uses information from the tester. Failures that the tester
discovered frequently in the past are entered into the tool. The tool
then becomes knowledge-based, because it uses the knowledge of
historical failures to generate test cases.
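
As a small illustration of the algorithmic kind described above, the
sketch below applies a basic boundary-value strategy to one numeric
input whose legal range would come from the requirements.  The range and
field name are invented, and real generators such as Validator use far
richer strategies; treat this only as a picture of the principle.

    # Boundary-value generation for a single numeric input whose legal
    # range comes from the requirements.  The range here is invented.
    def boundary_values(minimum, maximum):
        """Return the classic boundary test inputs for a closed range."""
        return [minimum - 1,           # just below the range (must be rejected)
                minimum, minimum + 1,  # lower boundary and its neighbor
                maximum - 1, maximum,  # upper boundary and its neighbor
                maximum + 1]           # just above the range (must be rejected)

    test_cases = [{"amount": value} for value in boundary_values(20, 500)]
    for case in test_cases:
        print(case)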

In the old days testers were concerned about the work of creating and
modifying test cases. Coming up with test cases was a slow, expensive,
and labor-intensive process. Then, if one requirement changed, the
tester had to redo many existing test cases and create new ones. With
modern test case generators, test case creation and revision time is
reduced to a matter of CPU seconds.

Validator from Aonix is an example of a test case generator packaged
with a requirements recorder. Validator applies heuristic and
algorithmic means to requirements information to create test cases.

                      Requirements to Test Tracers

Requirements to test tracers are tools that were introduced to the
commercial marketplace in the 1980s. They are of particular value on
military contracts where tracing still is a required activity. Through
most of the 1980s huge amounts of government money were allocated to
contractors who traced every requirement statement in a specification to
each of its associated test cases.  Software managers in military
organizations wanted to know how every specified function was going to
be demonstrated. They also wanted to know which test cases needed to be
changed when a requirement was modified.

As tracing gobbled up staff time on large software projects, managers
searched for a more efficient way to trace. Today automated requirements
to test tracers can take over most of the boring tracing work that once
consumed so much human time. People who used to do tracing are now free
to do creative and intellectually challenging testing work. Tracers are
available as individual tools, but they also may be included in more
comprehensive testing tools such as specification-based test case
generators.

Aonix's Validator tool includes a tracer as well as a requirements
recorder, verifier, and test case generator.
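
The bookkeeping a tracer automates can be pictured with a short sketch:
a table maps each requirement to its test cases, so the tests affected
by a changed requirement fall out of a simple lookup.  The identifiers
below are invented, and the sketch does not describe Validator's
internals.

    # Requirement-to-test trace table; the identifiers are invented.
    trace = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-102", "TC-203"],
        "REQ-003": ["TC-301"],
    }

    def tests_affected_by(changed_requirements):
        """Return every test case traced to a changed requirement."""
        affected = set()
        for requirement in changed_requirements:
            affected.update(trace.get(requirement, []))
        return sorted(affected)

    # If REQ-002 is modified, these test cases must be revisited.
    print(tests_affected_by(["REQ-002"]))   # ['TC-102', 'TC-203']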

                         Expected Output Tools

No tool that will produce expected output values is available
commercially yet. Several tools do produce the names of expected outputs
but not the values. This is an important distinction.  Names of outputs
are not sufficient to determine whether software passes or fails a test
case. A tester must have an expected output value to judge pass/fail.

Some advances regarding expected outputs have been achieved, however.
These advances are called oracles, reversers, and references.  We invite
questions about the latest developments in this area of testing at 10X
Seminars.

                       Tools for the Design Phase

A requirements specification defines what a software system should do.
From the requirements, designers produce descriptions of subsystems and
interfaces that will make up the software product. Each subsystem with
its associated interfaces can be treated as a small, individual system.
In the requirements phase, the recorder, verifier, test case generator,
and tracer were used at the large-system level.  In the design phase,
the same tools may be used again to test small systems. These tools are
cost-effective, because they are double-duty performers that eliminate
potential problems in the earliest life cycle phases when testing costs
are low. (These tools can be used later, too, making them multiduty
performers.)

Designers who use Aonix's Software through Pictures modeling tools can
record their requirements graphically in use cases (previously
described). In addition, designers may record their designs as either
object or structured models, depending on which methodology they follow.
Then they can use the Validator tool to generate test cases from
requirements and a tool called StP/T to generate test cases from
designs.

Finding software errors early has long been considered a good idea in
the software development community. Specification-based test case
generators have taken that idea one step further into error prevention.
If people know how a software product will be tested before they start
to build the product, they usually will create a product that will pass
its test cases. Of course, when test cases are so critical to a
product's success, they must be sound. In the past, manually derived
test cases could not be trusted (and they probably were not available
early in the life cycle anyway). Now automated test case generators
quickly produce highly reliable test cases that demonstrate the absence
of most probable errors and show that the software product works as
specified.

By using these reliable test cases to guide them, designers can avoid or
prevent the most probable errors in design. Designers can then pass
along high-quality designs to programmers who can practice error
prevention during coding.

                    Tools for the Programming Phase

If requirements, designs, and test cases have been prepared properly,
programming will be the easiest part of software development.  In
efficient code development, programmers must write preambles or comments
in front of their code to describe to other people or programs what
their code will do. They must also create algorithms that their code
will execute. Finally they will write code.

The preambles, algorithms, and code will be inputs to testing tools used
during the programming phase. Preambles may be thought of as
requirements descriptions for tiny systems or units of code that the
programmer will develop. The tools from the requirements phase may be
used once again to test these smallest of systems.

The metrics reporter, code checker, and instrumentor also can be used
for testing during the programming phase. Sometimes these tools are
classified as static analysis tools, because they check code while it is
static (that is, before the code is exercised).

                            Metrics Reporter

The metrics reporter tool has been around for years and is still useful.
It reads source code and displays metrics information, often in
graphical formats. This tool reports complexity metrics in terms of data
flow, data structure, and control flow. It also reports metrics about
code size in terms of modules, operands, operators, and lines of code.
This tool helps the programmer correct and groom code and helps the
tester decide which parts of the code need the most testing.
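
To give a flavor of what such a tool computes, here is a minimal sketch
that counts lines and approximates control-flow complexity by counting
decision keywords in a source file.  It is only an illustration; METRIC
itself reports industry-standard metrics derived from real parsing, not
from keyword counts, and the command-line usage shown is an assumption
of the example.

    import re
    import sys

    DECISION_KEYWORDS = re.compile(r"\b(if|elif|for|while|and|or)\b")

    def simple_metrics(path):
        """Report line counts and a crude decision-count complexity."""
        with open(path) as source:
            lines = source.readlines()
        code = [line for line in lines
                if line.strip() and not line.strip().startswith("#")]
        decisions = sum(len(DECISION_KEYWORDS.findall(line)) for line in code)
        return {"total_lines": len(lines),
                "code_lines": len(code),
                "decision_count": decisions}

    if __name__ == "__main__":
        print(simple_metrics(sys.argv[1]))    # e.g. python metrics.py foo.py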

Ask your SR or AONIX sales representative for a tool called METRIC, if
you are interested in a metrics reporter.  METRIC reports industry-
standard metrics and is a nice addition to a big-three tool set.

                              Code Checker

The earliest code checker that most people can remember was called LINT
and was offered as part of old Unix systems. LINT is still available in
today's Unix systems, and many other code checkers are now sold for
different operating systems. The name LINT was aptly chosen, because the
code checker goes through code and picks out all the fuzz that makes
programs messy and error-prone. The code checker looks for misplaced
pointers, uninitialized variables, deviations from standards, and so on.
Development teams that use software inspections as part of static
testing can save many staff hours by letting a code checker identify
nitpicky problems before inspection time.
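
In the spirit of LINT, the toy checker below scans C source text for one
classic piece of fuzz: an assignment written inside an "if" condition
where a comparison was probably intended.  The pattern is deliberately
crude and will produce false positives; it illustrates the idea of a
code checker and is not a substitute for a real one.

    import re
    import sys

    # Flag "if (x = y)" style assignments inside conditions -- a classic
    # C mistake that a real checker such as LINT also reports.
    SUSPICIOUS_ASSIGN = re.compile(r"\bif\s*\([^=!<>]*[^=!<>]=[^=]")

    def check(path):
        with open(path) as source:
            for number, line in enumerate(source, start=1):
                if SUSPICIOUS_ASSIGN.search(line):
                    print(f"{path}:{number}: possible '=' instead of '==': "
                          f"{line.strip()}")

    if __name__ == "__main__":
        check(sys.argv[1])                   # e.g. python checker.py foo.c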

                   Product-Based Test Case Generator

The product-based test case generator has been wasting testers' time
since the 1970s. Since many people have heard of this tool, it is
included here for completeness. However, the product-based test case
generator does not support good software testing practice and should not
be included in the tester's toolkit.

The product-based test case generator reads and analyzes source code and
then derives test cases from its analysis of the source code. This tool
tries to create test cases that cause every statement, branch, and path
to be exercised (structural coverage). Attaining comprehensive
structural coverage is a worthwhile goal. The problem with this tool is
that it tries to achieve structural coverage by working from the code
rather than the specification. Boris Beizer, widely published testing
author, has called testing from the code "kiddie testing at its worst."
Such comments spring from the fact that code represents only what a
software product does, not what it should do. Code can have missing or
wrong functions, but the tool will not know that. Since the tool cannot
distinguish good code from bad, it will try to generate test cases to
exercise every part of any code and will not warn the tester that some
of the code may be faulty.

Determining that code is good or bad is left up to testers. As discussed
earlier, testers make that judgment by comparing actual behavior of the
code to its specified or expected behavior. When written specifications
are not available and testers must work from their recollection of
specifications, they are inclined to trust the product-based test case
generator. Because they have no other reference, these testers
mistakenly place their faith in reports showing high structural
coverage. Testers with written or modeled specifications have the
definitive reference they need to complete software testing; they have
no need to test software against itself and no need for a product-based
test case generator.

                           Code Instrumentor

Although producing test cases from code is not a good idea, measuring
structural coverage is a very good idea, because people see from
measurements how well the code has been exercised. The code instrumentor
helps programmers and testers measure structural coverage and is an
essential big-three evaluation tool.

A code instrumentor reads source code and determines where code coverage
measurements should be made. For example, the instrumentor might choose
to make a measurement after a variable is defined or a branch is taken.
At those measurement points, the tool inserts a new line of code that
will record information such as number and duration of test executions.
That line of code is called a test probe. Once probes are inserted, the
instrumentor has done its job and can be put aside until more source
code needs to be instrumented.
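
A toy version of the instrumentation step, working on Python source as
plain text, is sketched below: after each branch header it inserts a
call to a probe that records the line number, and a short demonstration
runs the instrumented code.  Real instrumentors, including the one
underlying STW/Coverage, work from the language's syntax rather than
from text, so treat this only as a rough picture.

    import re

    BRANCH_HEADER = re.compile(r"^(\s*)(if|elif|else|for|while)\b.*:\s*$")

    def instrument(source_lines):
        """Insert a probe call after every branch header (text-based toy)."""
        output = []
        for number, line in enumerate(source_lines, start=1):
            output.append(line)
            match = BRANCH_HEADER.match(line)
            if match:
                indent = match.group(1) + "    "     # step into the block
                output.append(f"{indent}_probe({number})\n")
        return output

    hits = []
    def _probe(line_number):
        """The inserted test probe: record that this point was reached."""
        hits.append(line_number)

    # Demonstration: instrument a tiny function, exercise it, inspect hits.
    original = [
        "def sign(x):\n",
        "    if x < 0:\n",
        "        return -1\n",
        "    return 1\n",
    ]
    namespace = {"_probe": _probe}
    exec("".join(instrument(original)), namespace)
    namespace["sign"](-5)                        # takes the instrumented branch
    print("probe hits at source lines:", hits)   # -> [2]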

Is this tool already in your shop? If not, it should be on your shopping
list. To arrange for a demonstration of a code instrumentor, ask your
Aonix or TestWorks representative to see STW/Coverage.

                      Tools for the Testing Phase

This is an important section for capture-playback tool users to read,
especially if their results with the tool have been unsatisfactory.

Software testing is a front-end process. All the tools discussed so far
are used in the early life cycle phases before developers get to the
testing phase. Many people have not had the chance to study systematic
software testing or testing tools. These people often think of testing
as a back-end process. That is unfortunate, because they miss the most
cost-effective opportunities to prevent and eliminate software bugs and
to produce high-quality software products.

Traditionally, people in software development defined, designed, and
coded products with little regard to testing. Once the product was
coded, people started to think about testing it. Often there was not
much time or money left in the development budget by testing time, so
testing amounted to a quick, once-over attempt to verify that major
parts of the product would work.

As professionals became more concerned with software product quality and
software development productivity, they turned to methodologies.  Some
testing methodologies available in the 1970s and 1980s were well-
conceived, but they required so much work, documentation, and constant
championing that they were hard to keep going from project to project.
People soon became discouraged with many methodologies.  To prop up
testing methodologies, companies often tried tools.  Unfortunately, most
of the testing tools of the 1970s and early 1980s were test execution
tools. They did none of the test creation and management jobs discussed
in this article; they simply ran test cases. Now newer tools are
making methodologies workable, and people are taking another look at
methodologies that can be implemented with the help of modern testing
tools.

The tools discussed next are test execution tools that have been in the
software industry in one form or another for many years.  Unlike the
static analysis tools used during programming, these tools are dynamic
analyzers. They work on or with the code as it is being exercised. They
are necessary because test cases must be run. However, test execution
tools must have test cases to run, and in traditional, back-end testing,
test cases were not available at the beginning of the testing phase.
Test cases had to be hurriedly created to get testing underway. Of
course, this resulted in many oversights and omissions in test cases.
Today, people who have done front-end testing have high-quality test
cases ready for the execution tools, and work in the testing phase goes
quickly.

                          Capture-Replay Tool

The capture-replay tool, sometimes called a capture-playback tool, works
like a VCR or a tape recorder. To visualize how this tool operates,
picture information flowing through a software system.  Now imagine a
point in that information flow where a tool is inserted.  When the tool
is in the capture mode, it records all information that flows past that
point. The recording is called a script.  A script is a procedure that
contains instructions to execute one or more test cases.

When the tool is in the replay mode, it stops incoming information and
plays a recorded script. A tester will insert a capture-replay tool into
a system and will then run test cases with the tool in the record mode.
Later the tester will rerun recorded scripts (that is, regression
testing).
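
The two modes can be pictured with the little sketch below: in capture
mode each input sent to the software under test is also saved to a
script file, and in replay mode the script is read back and re-applied
so that the two runs can be compared.  The script format, file name, and
stand-in system under test are invented for the example; this is not a
description of CAPBAK's mechanism.

    import json

    def system_under_test(command):
        """Stand-in for the real application; simply echoes its input."""
        return f"handled {command}"

    def capture(commands, script_path):
        """Record mode: run the inputs and save them as a script."""
        with open(script_path, "w") as script:
            json.dump(commands, script)
        return [system_under_test(command) for command in commands]

    def replay(script_path):
        """Replay mode: re-run the recorded script (regression testing)."""
        with open(script_path) as script:
            commands = json.load(script)
        return [system_under_test(command) for command in commands]

    first_run = capture(["login", "open file", "logout"], "session.script")
    second_run = replay("session.script")
    print("regression passed" if first_run == second_run
          else "regression FAILED")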

Two new features have been incorporated into several commercial
capture-replay tools. The first is an object-level, record-playback
feature that enables capture-replay tools to record information at the
object, control, or widget level. Older capture-replay tools only worked
at the bit-map level. Of course, capture-replay tools that work at both
the bit-map and widget level are preferred.  The second new feature is a
load simulator, which is a facility that lets the tester simulate
hundreds or even thousands of users simultaneously working on software
under test.

Companies seeking capture-replay tools are confronted with a "build or
buy" decision. Most commercially available capture-replay tools
interrupt information flow at only one point, right behind the keyboard
and CRT screen. These tools are tied strongly to system hardware and
software, which may not be useful for multihosted products. In addition,
they are helpful only to people who are testing GUI-driven systems. Most
unit testers, integration testers, and embedded system testers do not
deal with large amounts of software that interact with graphical user
interfaces (GUIs).  Therefore, most commercial capture-replay tools will
not satisfy these testers' needs. Such testers will probably need a
custom-built test execution facility.

Capture-replay tools may be sold separately. Aonix markets one called
CAPBAK, which includes object-level testing and load simulation features.

Capture-replay tools also may be packaged with other tools such as test
managers. When testers apply automated test case generators like Aonix's
StP/T tool, they may produce hundreds or even thousands of test scripts.
A tool called a test manager helps testers control large numbers of
scripts and report on the progress of testing.  SMARTS is such a test
manager; CAPBAK and SMARTS are included in the SR package called
STW/Regression.
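
The test manager's job can be pictured as the simple loop below: run
every script, collect the verdicts, and summarize progress.  The script
names and the runner are invented stand-ins, not the SMARTS interface.

    # Invented stand-in for running one recorded script; a real manager
    # would invoke the execution tool and read back its verdict.
    def run_script(name):
        return not name.endswith("_fails")   # toy pass/fail rule

    scripts = ["login_ok", "transfer_ok", "overdraft_fails"]

    results = {name: run_script(name) for name in scripts}
    passed = sum(results.values())
    print(f"{passed}/{len(scripts)} scripts passed")
    for name, ok in results.items():
        print(f"  {'PASS' if ok else 'FAIL'}  {name}")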

No matter how a capture-replay tool is packaged, regard it as a big-
three tool. It is necessary for 10X Testing.

                              Test Harness

A capture-replay tool connects with software under test through one
interface, usually located at the screen or terminal. But the software
under test will probably also have interfaces with an operating system,
a database system, and other application systems. Each of these other
interfaces needs to be tested, too.  Often testers will build an
application-specific test execution tool to test each interface. A set
of application-specific test execution tools is called a test harness.
Sometimes a test harness is referred to as a test bed or a test
execution environment.

If some parts of the software being developed are not available when
testing must take place, testers can build software packages to simulate
the missing parts. These simulators are called stubs and drivers. They
may or may not be necessary in a test harness.
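
As a small illustration, the sketch below tests an invented unit whose
database dependency is not yet available: a stub stands in for the
missing interface with a canned answer, and a driver exercises the unit
and judges pass/fail.  All names are hypothetical.

    # Unit under test: computes a customer's discount.  Its database
    # dependency is passed in, so a stub can replace the real system.
    def discount_for(customer_id, database):
        orders = database.order_count(customer_id)
        return 0.10 if orders >= 10 else 0.0

    class DatabaseStub:
        """Stub for the not-yet-available database interface."""
        def order_count(self, customer_id):
            return 12        # canned answer chosen for this test case

    def driver():
        """Driver: runs the unit against the stub and judges pass/fail."""
        actual = discount_for("C-42", DatabaseStub())
        print("PASS" if actual == 0.10 else f"FAIL: got {actual}")

    driver()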

Test harnesses have been custom-built per application for years.
Because harnesses are application-specific and interface standards were
not available in years past, most harnesses did not become off-the-shelf
products. Recently, though, interface standards and standard ways of
describing application interfaces through modern software development
tools have enabled commercialization of test harness generators. Cantata
from Information Processing Ltd. is an example of a test harness
generator now offered in the commercial marketplace.

                               Comparator

The comparator tool is nice to have. It compares actual outputs to
expected outputs and flags significant differences. Software passes a
test case when actual and expected output values are the same, within
allowed tolerances. Software fails to pass when output values are
different.
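
The pass/fail judgment described here can be sketched in a few lines:
compare each actual output value to its expected value, allowing a
numeric tolerance.  Products such as EXDIFF do far more (masking, image
comparison, and so on); this shows only the core idea, with invented
output names and values.

    def compare(expected, actual, tolerance=0.0):
        """Return the differences that fall outside the allowed tolerance."""
        differences = []
        for key, want in expected.items():
            got = actual.get(key)
            if isinstance(want, (int, float)) and isinstance(got, (int, float)):
                if abs(want - got) > tolerance:
                    differences.append((key, want, got))
            elif want != got:
                differences.append((key, want, got))
        return differences

    expected = {"balance": 100.00, "status": "OK"}
    actual   = {"balance": 100.004, "status": "OK"}
    diffs = compare(expected, actual, tolerance=0.01)
    print("PASS" if not diffs else f"FAIL: {diffs}")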

When complexity and volume of outputs are low, the "compare" function in
most operating systems will provide all the comparison information
testers need. The "diff" function in the Unix operating system, for
example, regards every character in a file as significant and will
report what is different in two files. Sometimes, though, the operating
system compare function cannot run fast enough or compare data precisely
enough to satisfy testers. Then testers may opt for a separate
comparator package such as EXDIFF, available from Aonix or SR. Most of
today's capture-replay tools include a comparator.

                      Structure Coverage Analyzer

The structure coverage analyzer is a big-three tool. It tells the tester
which statements, branches, and paths in the code have been exercised.
Structure coverage analyzers fall into two categories:  intrusive and
nonintrusive.

Intrusive analyzers depend on a code instrumentor (discussed earlier) to
insert test probes into the code under test. The code with the probes
inserted is compiled and exercised. As the code is exercised, the test
probes collect information about which statements, branches, and paths
are being exercised. The intrusive analyzer then makes that collected
information available for analysis, display, or printing. STW/Coverage
is an example of an intrusive structure coverage analyzer.
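
Continuing the toy probe idea sketched in the programming-phase section,
the analysis half can be pictured as follows: given the probe points the
instrumentor inserted and the points the test run actually hit, report
the coverage achieved and list what was never exercised.  The numbers
are invented.

    # Probe points the instrumentor inserted, and the ones the test
    # probes actually reported during the test run (invented data).
    probes_inserted = {2, 5, 9, 14, 21}
    probes_hit      = {2, 9, 21}

    coverage = 100.0 * len(probes_hit) / len(probes_inserted)
    missed = sorted(probes_inserted - probes_hit)

    print(f"branch coverage: {coverage:.0f}%")
    print(f"branches never exercised (by probe line): {missed}")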

Nonintrusive analyzers do not work with test probes. Instead, they
gather information in a separate hardware processor that runs in
parallel with the processor being used for the software under test. If
sold commercially, nonintrusive analyzers usually come with the parallel
processor(s) included as part of the tool package. Nonintrusive
analyzers are excellent for testing time- or memory-constrained software
products. Unfortunately, nonintrusive analyzers are costly. The larger
the processor for the software under test, the more expensive the
analyzer will be. Most nonintrusive analyzers are custom-built now.

In 1992, a special category of coverage analyzers called memory leak
detectors emerged. These tools find reads of uninitialized memory as
well as reads and writes beyond the legal boundary of a program. These
are nice-to-have coverage tools. An example is Insight from Parasoft
Corporation. Since these tools isolate defects, they may also be
classified as debuggers.

                    Tools for the Maintenance Phase

In maintenance, three kinds of work go on: bugs are repaired, new
features are added, and the software product is adapted to changing
environments. These are developmental jobs in the small, so if testing
tools were used during development, they can easily be reused during
maintenance testing.

If testing tools were not applied during development, they must be
introduced during maintenance, change by change. Suppose, for example,
that a product is already in maintenance when a new recorder/test case
generator is purchased. Maintenance engineers should not go back and
rewrite or remodel specifications for the old product.  Rather they
should apply the new tool to those parts of the product that are
affected by the changes in the upcoming release. Over time the tool will
be used on most, if not all, of the product as it comes back to the
maintenance team for upgrades.

                         The Big-Three Tool Set

I have just described a wide range of testing tools. Three of the tools
are required for 10X Testing: the requirements recorder/test case
generator; test execution tool; and test evaluation tool.  In fact, all
software testers, in or out of the 10X Program, who are going to
practice automated, front-end testing need these three primary tools.

The secondary tools I mentioned are nice to have. Besides the ones I
described, you may know of additional secondary testing tools that you
would like to add to your toolkit once you have obtained the big three.

========================================================================

          ADVANCE PROGRAM:  ISSTA98 and FMSP98, March 2-5 1998

ACM SIGSOFT Symposium on Software Testing and Analysis (ISSTA 98) March
2-4, 1998

ACM SIGSOFT Workshop on Formal Methods in Software Practice (FMSP 98)
March 4-5, 1998

Radisson Suite Resort on Sand Key, Clearwater Beach, Florida

All information and documents concerning  ISSTA and FMSP can be obtained
from the conference web sites:

    ISSTA 98 home page: http://www.cs.pitt.edu/issta98
    FMSP 98 home page:  http://www.bell-labs.com/user/maa/fmsp98/
    Registration forms: http://www.cs.pitt.edu/issta98/registration.html

Monday, March 2 - ISSTA 98

Keynote address:  Michael P. DeWalt, National Resource Specialist,
      Software, Federal Aviation Administration

11:00-12:30 Static Analysis

      Constructing Compact Models of Concurrent Java Programs, by James
      C. Corbett, University of Hawaii.

      Computation of Interprocedural Control Dependencies, by Mary Jean
      Harrold, Gregg Rothermel, Saurabh Sinha, Ohio State University and
      Oregon State University.

      Comparing Flow and Context Sensitivity on the Modification-side-
      effects Problem, by Philip A. Stocks, Barbara G. Ryder, William A.
      Landi, and Sean Zhang, Rutgers University.

2:00-3:00 Evaluation of Testing

      An Experiment in Estimating Reliability Growth Under Both
      Representative and Directed Testing, by Brian Mitchell and Steven
      J. Zeil, Old Dominion University.

      On Random and Partition Testing, by Simeon Ntafos, University of
      Texas at Dallas.

3:30-5:00  Panel: Most Influential Papers in Testing Research

Tuesday, March 3 - ISSTA 98

9:00-10:15 Invited speaker:  Roger Sherman, Former Director of Testing
      for Product Development, Microsoft Corporation:  Shipping the
      Right Software at the Right Time.

11:00-12:30 Test Data Generation

      Automatic Test Data Generation Using Constraint Solving
      Techniques, by Arnaud Gotlieb, Bernard Botella, and Michel Rueher,
      Dassault Electronique.

      An Applicable Test Data Generation Algorithm for Domain Errors, by
      Istvan Forgacs and Akos Hajnal, Hungarian Academy of Sciences.

      Automated Program Flaw Finding Using Simulated Annealing, by Nigel
      J. Tracey, John Clark, and Keith Mander, University of York.

2:00-3:00  Test Automation

      A Visual Test Development Environment for GUI Systems, by Thomas
      Ostrand, Aaron Anodide, Herbert Foster, and Tarak Goradia, Siemens
      Corporate Research.

      Automatic Generation of Tests for Source-to-Source Translators, by
      Mark Molloy, Kristy Andrews, James Herren, David Cutler, and Paul
      Del Vigna, Tandem Computers.

3:30-5:00 Model Checking

      Improving Efficiency of Symbolic Model Checking for State-Based
      System Requirements, by William Chan, Richard J. Anderson, Paul
      Beame, and David Notkin, University of Washington.

      Verifying Systems With Integer Constraints and Boolean Predicates:
      A Composite Approach, by Tevfik Bultan, Richard Gerber, and
      Christopher League, University of Maryland.

      Model Checking Without a Model: An Analysis of the Heart-Beat
      Monitor of a Telephone Switch Using VeriSoft, by Patrice
      Godefroid, Robert S. Hanmer, and Lalita Jategaonkar Jagadeesan,
      Lucent Technologies.

6:00 New Results Session

Wednesday, March 4 - ISSTA / FMSP 98

9:00-10:30 ISSTA Paper Session: Structural Testing

      On the Limit of Control Flow Analysis for Regression Test
      Selection, by Thomas Ball, Bell Laboratories.

      Automated Regression Test Generation, by Bogdan Korel and Ali M.
      Al-Yami, Illinois Institute of Technology.

      All-du-path Coverage for Parallel Programs, by Cheer-Sun Yang, Amie
      Souter, and Lori Pollock, University of Delaware.

11:00-12:00 Joint ISSTA/FMSP Panel Discussion:  If you could spend
      $1,000,000 extra this year on improving software quality, how
      would you allocate it?

1:30 - 3:00     FMSP Paper Session: Verification Techniques

      Automatic Verification of Railway Interlocking Systems:  A Case
      Study, by Jakob Lyng Petersen.

      Property Specification Patterns for Finite-State Verification, by
      Matthew B. Dwyer, George S. Avrunin, and James C. Corbett.

      Experiences in Verifying Parallel Simulation Algorithms, by John
      Penix, Dale Martin, Peter Frey, Ramanan Radharkrishna, Perry
      Alexander, and Philip A. Wilsey.

3:30 - 5:00     FMSP Paper Session: Control Systems

      Controller Synthesis for the Production Cell Case Study, by Helmut
      Melcher and Klaus Winkelmann.

      Checking Properties of Safety Critical Specifications Using
      Efficient Decision Procedures, by David Y.W. Park, Jens U.
      Skakkebaek, Mats P.E. Heimdahl, Barbara J. Czerny, and David L.
      Dill.

      Specifying the Mode Logic of a Flight Guidance System in CoRE and
      SCR, by Steven P. Miller.

Thursday, March 5 - FMSP 98

9:00 - 10:00 FMSP Keynote

10:30 - 12:00 Communication Services and Protocols

      The eXperimental Estelle Compiler -- Automatic Generation of
      Implementations from Formal Specifications, by J. Thees and R.
      Gotzhein.

      Service Specifications to B, or not to B, by Bruno Mermet and
      Dominique Mery.

      Verification of an Audio Control Protocol within Real Time Process
      Algebra, by Liang Chen.

1:30 - 3:00 Technology Transfer

      Formal Specification and Validation at Work: A Case Study Using
      VDM-SL, by Sten Agerholm, Pierre-Jean Lecoeur, and Etienne
      Reichert.

      Low-Cost Pathways Towards Formal Methods Use, by Martin S.
      Feather.

      Applying Formal Methods to Practical Systems: An Experience
      Report, by C. Heitmeyer, J. Kirby, and B. Labaw.

========================================================================

                  WEB Quality Assurance/Testing Survey

Here are three responses from our survey about Software Quality on the
WWW.  We'll keep publishing these based on content and available space.

                           *** Response 1 ***

> o Have you experienced website content failures that have affected
>   you?  (For example, broken links, garbled pages, grossly incorrect
>   or offensive information, etc.)

Broken links are a problem.  I haven't seen garbled pages very often.

>o Have you heard about content-related failures at websites?  What
>   were the consequences of these failures?  Who or what was injured
>   and what were the consequences?

There was no injury.  I searched for the information in other places.

> o What do you think is the hardest problem you would face if you
>   were a website manager?

The range of capabilities in web browsers.  How do you make the page
interesting and not have people who can't read it?

> o Relative to Quality, what do you think is the weakest part of a
>   website's content?

Excluding the pages that have no real information, broken links.

> o What do you think makes a "good" website?  What makes a "bad" one?
>   How would you tell the difference?

It must have good balance between interesting & easy to follow layout,
and download time.  On a fair site, there is a lot of content but a lot
of data that I have to wade through.  On a bad site, I have to wait 5
minutes to download a lot of flash with no information.

> o Can you think of a software quality problem in websites that
>   everyone in the community ought to be concerned about?

"Blessed is the man who, having nothing to say, abstains from giving
wordy evidence of the fact."  - George Eliot  Many web pages suffer from
too much flash and not enough to say.  Relating it to software, B.
W. Boehm said, "most of the errors have already been made before the
coding begins."

Submitted by Danny Faught, faught@rsn.hp.com.

                           *** Response 2 ***

> o Have you experienced website content failures that have affected
>  you?  (For example, broken links, garbled pages, grossly incorrect
>  or offensive information, etc.)   YES!

>o Have you heard about content-related failures at websites?  What
>  were the consequences of these failures?  Who or what was injured
>  and what were the consequences?  NO, BUT THE RISK OF LITIGATION
   IS GROWING EVERY DAY

>o What do you think is the hardest problem you would face if you
>  were a website manager?  MAKING SURE THE PAGES ARE UPDATED
   TO REFLECT ANY CHANGES.  THE MAINTENANCE ON THIS TYPE OF WORK
   IS PROBABLY VERY GREAT, ESPECIALLY FOR A CORPORATE 'INTRANET'
   WHERE PAGES MAY BE DONE BY DIFFERENT DEPARTMENTS, ETC.

>o Relative to Quality, what do you think is the weakest part of a
>  website's content?  GRAPHICS THAT ARE GRATUITOUS - DON'T SERVE
   A REAL PURPOSE OTHER THAN 'IT'S COOL'.  NOT EVERYONE HAS A T1
   LINE AND DOWNLOADING A PAGE WITH GRAPHICS TAKES A VERY LONG TIME
   AT 14.4

>o What do you think makes a "good" website?  What makes a "bad" one?
>  How would you tell the difference? A GOOD WEBSITE IS ONE THAT SERVES
   A PURPOSE.  TEXT IS SPACED FOR EASY ON-LINE READING, THERE IS A WAY
   TO NAVIGATE WITHOUT GRAPHICS (TEXT ONLY) FOR THOSE PEOPLE ON SLOWER
   ACCESS LINES.  I'M USING ALL THINGS WEB'S CHECKLIST FOR WHAT MAKES
   A GOOD AND BAD WEBSITE.  SEE http://www.pantos.org/atw/testing.html

>o Can you think of a software quality problem in websites that
>  everyone in the community ought to be concerned about?  OTHER THAN
   INANE CONTENT?  HOW ABOUT FINDING OUT WHY WE IN THE COMMUNITY ARE
   SO QUICK TO SAY IT'S A BROWSER PROBLEM WHEN A PAGE WON'T LOAD, OR
   MINIMIZING THE WINDOW CAUSES A FRAME TO GO BLANK.  THE END USER
   DOESN'T CARE WHETHER IT'S THE BROWSER OR THE SOFTWARE ON THE SITE.
   ALL HE CARES ABOUT IS THAT HE IS INCONVENIENCED.  IT'S GOING TO
   CAUSE PEOPLE TO USE ANOTHER, MORE WORKABLE SITE RATHER THAN YOURS
   IF YOU IGNORE THOSE TYPES OF PROBLEMS.

>o Do you have another concern about website quality?  Explain?  SEE
   ABOVE QUESTION.  THERE ARE MANY TOOLS OUT NOW THAT CHECK LINKS FOR
   ACTIVE VERSUS DEAD, INTERNAL VERSUS EXTERNAL.  DEAD LINKS OUGHT TO
   BE A NO-BRAINER BY NOW.  I THINK INTEGRATION (BROWSER VERSUS SOFTWARE
   SITE) IS THE BIGGEST ISSUE.

Submitted by: Sheilagh Park-Hatley, Certified Software Test Engineer,
sparkha@uswest.com.

                           *** Response 3 ***

>o Have you experienced website content failures that have affected
>  you?  (For example, broken links, garbled pages, grossly incorrect
>  or offensive information, etc.)

I experience broken links very frequently.  Occasionally they are due to
the inclusion of a link that can be reached only by a certain path.
Sometimes I find websites with loops in them -- page x takes you to page
y which takes you to page x -- and there's no way out, except back.

>o What do you think is the hardest problem you would face if you
>  were a website manager?

Satisfying marketing's desire to make constant changes.  Keeping the
site free of broken links.  Making the site structure clear to users.

>o Relative to Quality, what do you think is the weakest part of a
>  website's content?

About 15% of a page is given to information.  85% is given to ads and
useless animated graphics.  Sites make you navigate many pages to find
some very diluted information.

>o What do you think makes a "good" website?  What makes a "bad" one?
>  How would you tell the difference?

Good:  high information content, navigation frames separate from
information frames.
Bad:  requiring many page changes to get to the information.

>o Can you think of a software quality problem in websites that
>  everyone in the community ought to be concerned about?

Search engines need vast improvement.  All claim to provide phrase
searching, but few do and those that do bend the definition.

Submitted by: Gary L. Smith, glsmith@mitre.org.

========================================================================

                                ISSRE'98

                  The Ninth International Symposium on
                    Software Reliability Engineering

                Paderborn, Germany,  November 4-7, 1998

                  http://adt.uni-paderborn.de/issre98/

                      Preliminary Call for Papers

Sponsored by the IEEE Computer Society.  Organized by the Committee on
Software Reliability Engineering of the IEEE Computer Society Technical
Council on Software Engineering.

The role of software is expanding rapidly in many aspects of modern
life, ranging from critical infrastructures, such as transportation,
defense, and telecommunication systems, to work-place automation,
productivity enhancement, education, health-care, publishing, on-line
services, entertainment, etc.  Given the potentially costly impact of
software failures for many of these applications, it is important to
have sound methods of engineering reliable software as well as accurate
methods of quantitatively certifying software reliability.

ISSRE'98 seeks to bring together practitioners and researchers from
around the world to share the latest information and know-how related to
all areas of software reliability engineering for a broad range of
applications. The theme of the symposium is "Globalization: Breaking
Barriers - Theory meets Practice - East meets West".

Submissions should report promising new research breakthroughs, but
especially welcome are those that help gauge the state of SRE practice.
What are the challenges facing your industry in developing reliable
software and what SRE methods appear to work in practice? What data
collection and analysis methods have you used and what were the
consequent benefits in terms of improved reliability? Contributions are
expected to advance the state of the art or to shed light on current
best practices and to stimulate interaction between (and among)
researchers and practitioners. Topics of interest include, but are not
limited to:

o Methods of developing reliable software, including management methods
   and rigorous specification, design, and implementation techniques
o Data collection and analysis for software reliability assessment
o Software reliability models
o Testing and verification for software reliability measurement
o Software safety
o Fault-tolerant and robust software
o SRE tools, education, and technology transfer methods
o Software reliability standards and legal issues

Please submit an abstract (250 words maximum in plain ASCII text) and a
list of keywords to either of the Program Chairs:

    F. Bastani:  f.bastani@computer.org
    A. Endres:   Albert.Endres@informatik.uni-stuttgart.de

before March 1, 1998 to enable proper referee assignment.

Symposium Chair: Fevzi Belli, Universitaet Paderborn, FB 14, GERMANY
                 Email: belli@adt.uni-paderborn.de

========================================================================

              TTN-Online -- Mailing List Policy Statement

Some subscribers have asked us to prepare a short statement outlining
our policy on use of E-mail addresses of TTN-Online subscribers.  This
issue, and several other related issues about TTN-Online, are available
in our "Mailing List Policy" statement.  For a copy, send E-mail to
ttn@soft.com and include the word "policy" in the body of the E-mail.

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue,
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines (one
  page).
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, send E-mail to "ttn@soft.com".

TO SUBSCRIBE: Include in the body of your letter the phrase "subscribe
 ".

TO UNSUBSCRIBE: Include in the body of your letter the phrase
"unsubscribe  ".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                   San Francisco, CA  94107 USA

              Phone:          +1 (415) 550-3020
              Toll Free:      +1 (800) 942-SOFT (USA Only)
              FAX:            +1 (415) 550-3030
              E-mail:         ttn@soft.com
              WWW URL:        http://www.soft.com

                               ## End ##