sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======              April 1998             =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1998 by Software Research,
Inc.

========================================================================

INSIDE THIS ISSUE:

   o  Quality Week '98 (QW'98) Update:  Only a Few Days Left

   o  Measuring Software Quality to Support Testing, by Shari Lawrence
      Pfleeger (Part 1 of 2)

   o  Monthly Humor Section: The Daily Bread

   o  Software Testing: Challenges for the Future (Seminar Series in
      Belgium)

   o  Return On Investment From Test Automation, by Lou Adornato

   o  TTN Submittal Policy

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

           11th International Software Quality Week (QW'98)
                Early Registration Deadline Approaching

The early registration rate for Quality Week '98 is only available until
April 24th.  The special rate of $1150 is $100 off the standard rate.

Compared to other conferences, this is a very reasonable price for
state-of-the-art information and the latest advances in software QA
technologies from leading organizations in Industry, Government and the
Academic world.  80 Presentations in 4 Days, plus a two-day vendor show
with the latest tools and services that you need.

                11th International Software Quality Week
                          The Sheraton Palace
                       San Francisco, California
                              26-29 May 1998

Register TODAY by FAXing in your form with your credit card number and
signature to [+1] (415) 550-3030.  Or, stop by our WebSite at:

                     <http://www.soft.com/QualWeek>

or just send Email to qw@soft.com and let us know you want to register.

The deadline is only 9 days away. So, if you are planning to attend --
register NOW!

Questions? Check out our WebSite or call 415/550-3020.

========================================================================

      Measuring Software Quality to Support Testing (Part 1 of 2)

                        Shari Lawrence Pfleeger
                         Systems/Software, Inc.
                         4519 Davenport St. NW
                       Washington, DC 20016-4415
                       Email: s.pfleeger@ieee.org

Note:  This material is adapted from Software Engineering: Theory and
Practice, by Shari Lawrence Pfleeger (Prentice Hall, 1998).  A version of
this paper was published in Dr. Dobb's Journal, March 1998.

Once you have coded your program components, it is time to test them.
Testing is not the first place where fault-finding occurs; requirements
and design reviews help us to ferret out problems early in development.
But testing is focused on finding faults, and there are many ways we can
use measurement to make our testing efforts more efficient and
effective.

Defective software

In an ideal situation, we as programmers become so good at our craft
that every program we produce works properly every time it is run.
Unfortunately, this ideal is not reality.  The difference between the
two is the result of several things.  First, many software systems deal
with large numbers of states and with complex formulae, activities and
algorithms.  In addition to that, we use the tools at our disposal to
implement a customer's conception of a system when the customer is
sometimes uncertain of exactly what is needed.  Finally, the size of a
project and the number of people involved can add complexity.  Thus, the
presence of faults is a function not just of the software but also of
user and customer expectations.

Software faults and failures

What do we mean when we say that our software has failed?  Usually, we
mean that the software does not do what the requirements describe.  For
example, the specification may state that the system must respond to a
particular query only when the user is authorized to see the data.  If
the program responds to an unauthorized user, we say that the system has
failed.  The failure may be the result of any of several reasons:

  *   The specification may be wrong.  The specification may
      not state exactly what the customer wants or needs.  In our
      example, the customer may actually want to have several
      categories of authorization, with each category having a
      different kind of access, but has never stated that need
      explicitly.

  *   The specification may contain a requirement that is
      impossible to implement, given the prescribed hardware and
      software.

  *   The system design may contain a fault.  Perhaps the
      database and query language designs make it impossible to
      authorize users.

  *   The program design may contain a fault.  The component
      descriptions may contain an access control algorithm that
      does not handle this case correctly.

  *   The program code may be wrong.  It may implement the
      algorithm improperly or incompletely.

Thus, the failure is the result of one or more faults in some aspect of
the system.

No matter how capably we write programs, it is clear from the variety of
possible faults that we should check to ensure that our components are
coded correctly.  Many programmers view testing as a demonstration that
their programs perform properly.  However, the idea of demonstrating
correctness is really the reverse of what testing is all about.  We test
a program to demonstrate the existence of a fault.  Because our goal is
to discover faults, we consider a test successful only when a fault is
discovered or a failure occurs as a result of our testing procedures.
Fault identification is the process of determining what fault(s) caused
the failure, and fault correction or removal is the process of making
changes to the system so that the fault is removed.

By the time we have coded and are testing program components, we hope
that the specifications are correct.  Moreover, having used the software
engineering techniques described in previous chapters, we have tried to
ensure that the design of both the system and its components reflects the
requirements and forms a basis for a sound implementation.  However, the
stages of the software development cycle involve not only our computing
skills but also our communication and interpersonal skills.  It is
entirely possible that a fault in the software can result from a
misunderstanding during an earlier development activity.

It is important to remember that software faults are different from
hardware faults.  Bridges, buildings and other engineered constructions
may fail because of shoddy materials, poor design, or because their
components wear out.  But loops do not wear out after several hundred
iterations, and arguments are not dropped as they pass from one
component to another.  If a particular piece of code is not working
properly, and if a spurious hardware failure is not the root of the
problem, then we can be certain that there is a fault in the code.  For
this reason, many software engineers refuse to use the term "bug" to
describe a software fault;  calling a fault a bug implies that the fault
wandered into the code from some external source over which the
developers have no control.

Orthogonal defect classification

It is useful to categorize and track the types of faults we find, not
just in code but anywhere in a software system.  Historical information
can help us predict what types of faults our code is likely to have
(which helps direct our testing efforts), and clusters of certain types
of faults can warn us that it may be time to re-think our designs or
even our requirements.  Many organizations perform statistical fault
modeling and causal analysis, both of which depend on understanding the
number and distribution of types of faults.  For example, IBM's Defect
Prevention Process (Mays et al. 1990) seeks and documents the root cause
of every problem that occurs;  the information is used to help suggest
what types of faults testers should look for, and it has reduced the
number of faults injected in the software.

Chillarege et al. (1992) at IBM have developed an approach to fault
tracking called orthogonal defect classification, where faults are
placed in categories that collectively paint a picture of which parts of
the development process need attention because they are responsible for
spawning many faults.  Thus, the classification scheme must be product-
and organization-independent, and be applicable to all stages of
development.  Table 1 lists the types of faults that comprise IBM's
classification.  When using the classification, the developers identify
not only the type of fault but whether it involves something that is
missing or incorrect.


       - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

            Table 1.  IBM orthogonal defect classification.

Fault Type     Meaning

Function       Fault that affects capability, end-user
               interfaces, product interfaces, interface
               with hardware architecture, or global data
               structure

Interface      Fault in interacting with other components
               or drivers via calls, macros, control
               blocks or parameter lists

Checking       Fault in program logic that fails to
               validate data and values properly before
               they are used

Assignment     Fault in data structure or code block
               initialization.

Timing/        Fault that involves timing of shared and
Serialization  real-time resources

Build/package/ Fault that occurs because of problems in
merge          repositories, management changes, or
               version control

Documentation  Fault that affects publications and
               maintenance notes

Algorithm      Fault involving efficiency or correctness
               of algorithm or data structure but not
               design

       - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

A classification scheme is orthogonal if any item being classified
belongs to exactly one category.  In other words, we want to track the
faults in our system in an unambiguous way, so that the summary
information about number of faults in each class is meaningful.  We lose
the meaning of the measurements if a fault might belong to more than one
class.  In the same way, the fault classification must be clear, so that
any two developers are likely to classify a particular fault in the same
way.

Fault classifications such as IBM's and Hewlett-Packard's help to improve
the entire development process by telling us which types of faults are
found in which development activities.  For example, for each fault
identification or testing technique used while building the system, we
can build a profile of the types of faults located.  It is likely that
different methods will yield different profiles.  Then we can build our
fault prevention and detection strategy based on the kinds of faults we
expect in our system, and the activities that will root them out.
Chillarege et al. (1992) illustrate IBM's use of this concept by
showing us that the fault profile for design review is very different
from that for code inspection.
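
To make the profile idea concrete, here is a minimal Python sketch (my
own illustration, not IBM's tooling; the sample fault records are
invented) that tallies faults by ODC category for each detection
activity so the resulting per-activity profiles can be compared:

    # Illustrative only: tally faults by ODC category (Table 1) for each
    # detection activity, so per-activity fault profiles can be compared.
    from collections import Counter, defaultdict

    ODC_CATEGORIES = {
        "Function", "Interface", "Checking", "Assignment",
        "Timing/Serialization", "Build/package/merge",
        "Documentation", "Algorithm",
    }

    def build_profiles(fault_records):
        # fault_records: iterable of (activity, category, qualifier)
        # tuples, where qualifier is "missing" or "incorrect".
        profiles = defaultdict(Counter)
        for activity, category, qualifier in fault_records:
            if category not in ODC_CATEGORIES:
                raise ValueError("not an ODC category: " + category)
            profiles[activity][(category, qualifier)] += 1
        return profiles

    # Invented sample data: design review vs. code inspection findings.
    records = [
        ("design review",   "Function",   "missing"),
        ("design review",   "Interface",  "incorrect"),
        ("code inspection", "Assignment", "incorrect"),
        ("code inspection", "Checking",   "missing"),
    ]

    for activity, counts in build_profiles(records).items():
        print(activity, dict(counts))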

      ------------------------------------------------------------
            Sidebar Hewlett-Packard's fault classification.
      ------------------------------------------------------------

      Grady (1997) describes Hewlett-Packard's approach to fault
      classification.  In 1986, Hewlett-Packard's Software Metrics
      Council identified several categories in which to track
      faults.  The scheme grew to be the one depicted as follows,
      where for each fault you identify an action: Missing,
      Unclear, Wrong, Changed, Better Way:

         ------- ------- ----------------------------------------
         Origin of Fault
                 Type of Fault
                         Mode
         ------- ------- ----------------------------------------
                 Specifications/Requirements
                         Functionality
                         HW Interface
                         SW Interface
                         User Interface
                         Functional Description
                 Design
                         [Inter-] Process Communications
                         Data Definition
                         Module Design
                         Logic Description
                         Error Checking
                         Standards
                 Code
                         Logic
                         Computation
                         Data Handling
                         Module Interface
                         Implementation
                         Standards

                 Environment/Support
                         Test HW
                         Test SW
                         Integration SW
                         Development Tools

                 Documentation

                 Other

      The developers used this model by selecting three
      descriptors for each fault found:  the origin of the fault
      (that is, where the fault was injected in a product), the
      type of fault, and the mode (that is, whether information
      was missing, unclear, wrong, changed, or could be done a
      better way).
      ------------------------------------------------------------

Each Hewlett-Packard division tracks its faults separately, and summary
statistics are reported on pie charts.  Different divisions often have
very different fault profiles, and the nature of the profile helps the
developers devise requirements, design, code and test activities that
address the particular kinds of faults the division usually sees.  The
overall effect has been to reduce the number of faults over time.
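
As a small sketch of how such three-descriptor records might be
summarized (again my own illustration; the division names and fault
data below are hypothetical, and HP's actual tooling is not described
here):

    # Summarize HP-style fault records (origin, type, mode) per division.
    from collections import Counter

    MODES = {"missing", "unclear", "wrong", "changed", "better way"}

    faults = [
        {"division": "A", "origin": "Design",
         "type": "Logic Description", "mode": "unclear"},
        {"division": "A", "origin": "Code",
         "type": "Data Handling", "mode": "wrong"},
        {"division": "B", "origin": "Specifications/Requirements",
         "type": "Functionality", "mode": "missing"},
    ]

    for f in faults:
        assert f["mode"] in MODES, "unknown mode: " + f["mode"]

    # Per-division origin counts: the kind of summary a pie chart shows.
    by_division = Counter((f["division"], f["origin"]) for f in faults)
    for (division, origin), count in sorted(by_division.items()):
        print("Division %s: %s = %d" % (division, origin, count))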

Software Quality

Software quality can be measured in many ways.  One way to assess the
"goodness" of a component is by the number of faults it contains.  It
seems natural to assume that software faults that are the most difficult
to find are also the most difficult to correct.  It also seems
reasonable to believe that the most easily fixed faults are detected
when the code is first examined, and the more difficult faults are
located later in the testing process.  However, Shooman and Bolsky
(1975) found that this is not the case.  Sometimes it takes a great deal
of time to find trivial faults, and many such problems are overlooked or
do not appear until well into the testing process.  Moreover, Myers
(1979) reports that as the number of detected faults increases, the
probability of the existence of more undetected faults increases.  If
there are many faults in a component, we want to find them as early as
possible in the testing process.  However, if we find a large number of
faults at the beginning, then we are likely still to have a large number
undetected.

In addition to being contrary to our intuition, these results also make
it difficult to know when to stop looking for faults during testing.  We
must estimate the number of remaining faults, not only to know when to
stop our search for more faults, but also to give us some degree of
confidence in the code we are producing.  The number of faults also
indicates the likely maintenance effort needed if faults are left to be
detected after the system is delivered.

Fault Seeding

Mills (1972) developed a technique known as fault seeding or error
seeding to estimate the number of faults in a program.  The basic
premise is that one member of the test team intentionally inserts (or
"seeds") a known number of faults in a program.  Then, the other team
members locate as many faults as possible.  The number of undiscovered
seeded faults acts as an indicator of the number of total faults
(including indigenous, non-seeded ones) remaining in the program.  That
is, the ratio of seeded faults detected to total seeded faults should be
the same as the ratio of non-seeded faults detected to total non-seeded
faults:

        (detected seeded faults)
        ------------------------
        (total seeded faults)

                =

        (detected non-seeded faults)
        ----------------------------
        (total non-seeded faults)

Thus, if a program is seeded with 100 faults and the test team finds
only 70, it is likely that 30 percent of the indigenous faults remain in
the code.

We can express this ratio more formally.  Let S be the number of seeded
faults placed in a program, and let N be the number of indigenous (non-
seeded) faults.  If n is the number of indigenous (non-seeded) faults
detected during testing, and s is the number of seeded faults detected
during testing, then an estimate of the total number of indigenous
faults is

     N = Sn/s
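
As a quick check of the formula, here is a minimal Python sketch; the
100 seeded and 70 detected seeded faults come from the example above,
while the 21 detected indigenous faults are invented for illustration:

    # Mills' fault-seeding estimate: N = S * n / s
    def estimate_indigenous_faults(S, s, n):
        # S = seeded faults placed, s = seeded faults detected,
        # n = indigenous (non-seeded) faults detected so far.
        if s == 0:
            raise ValueError("no seeded faults found; estimate undefined")
        return S * n / s

    S, s, n = 100, 70, 21            # n = 21 is a hypothetical figure
    N = estimate_indigenous_faults(S, s, n)
    print("estimated indigenous faults:", N)       # 30.0
    print("estimated still undetected:", N - n)    # 9.0
    print("fraction still remaining: %d%%" % round(100 * (1 - s / S)))  # 30%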

Although simple and useful, this approach assumes that the seeded faults
are of the same kind and complexity as the actual faults in the program.
But we do not know what the typical faults are before we have found
them, so it is difficult to make the seeded faults representative of the
actual ones.  One way to increase the likelihood of representativeness
is to base the seeded faults on historical records for code from similar
past projects.  However, this approach is useful only when we have built
like systems before.  And things that seem similar may in fact be quite
different in ways of which we are not always aware.

To overcome this obstacle, we can use two independent test groups to
test the same program.  Call them Test Group 1 and Test Group 2.  Let x
be the number of faults detected by Test Group 1 and y the number
detected by Test Group 2.  Some faults will be detected by both groups;
call this number of faults q, so that q < x and q < y.  Finally, let n
be the total number of all faults in the program;  we want to estimate
n.

The effectiveness of each group's testing can be measured by calculating
the fraction of faults found by each group.  Thus, the effectiveness E1
of Group 1 can be expressed as

     E1 = x/n

and the effectiveness E2 of Group 2 as

     E2 = y/n

The group effectiveness measures the group's ability to detect faults
from among a set of existing faults.  Thus, if a group can find half of
all faults in a program, its effectiveness is 0.5.  Consider faults
detected by both Group 1 and Group 2.  If we assume that Group 1 is just
as effective at finding faults in any part of the program as in any
other part, we can look at the ratio of faults found by Group 1 from the
set of faults found by Group 2.  That is, Group 1 found q of the y
faults that Group 2 found, so Group 1's effectiveness is q/y.  In other
words,

     E1 = x/n = q/y

However, we know that E2 is y/n, so we can derive the following formula
for n:

     n = q/(E1*E2)

We have a known value for q, and we can use estimates of q/y for E1 and
q/x for E2, so we have enough information to estimate n.

To see how this method works, suppose two groups test a program.  Group
1 finds 25 faults.  Group 2 finds 30 faults, and 15 of those are
duplicates of the faults found by Group 1.  Thus, we have

     x = 25
     y = 30
     q = 15

The estimate, E1, of Group 1's effectiveness is q/y, or 0.5, since Group
1 found 15 of the 30 faults found by Group 2.  Similarly, the estimate,
E2, of Group 2's effectiveness is q/x, or 0.6.  Thus, our estimate of n,
the total number of faults in the program, is 15/(.5*.6), or 50 faults.
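
The same arithmetic as a minimal Python sketch (for illustration only),
using the numbers from the example:

    # Two-group estimate of total faults: n = q / (E1 * E2),
    # with E1 estimated as q/y and E2 as q/x (equivalently n = x*y/q).
    def estimate_total_faults(x, y, q):
        # x, y = faults found by Groups 1 and 2; q = found by both.
        if q == 0:
            raise ValueError("no common faults; estimate undefined")
        E1 = q / y      # estimated effectiveness of Group 1
        E2 = q / x      # estimated effectiveness of Group 2
        return q / (E1 * E2)

    print(estimate_total_faults(x=25, y=30, q=15))   # 50.0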

The test strategy defined in the test plan directs the test team in
deciding when to stop testing.  The strategy can use this estimating
technique to decide when testing is complete.

                           (To Be Continued)


========================================================================

                 Monthly Humor Section: The Daily Bread

(Editor's Note: This is a takeoff formula that keeps reappearing, year
after year, but this updated version seems to be particularly "well
done"!)

If IBM made toasters ... They would want one big toaster where people
bring bread to be submitted for overnight toasting.  IBM would claim a
worldwide market for five, maybe six toasters.

If Xerox made toasters ... You could toast one-sided or double-sided.
Successive slices would get lighter and lighter.  The toaster would jam
your bread for you.

If Radio Shack made toasters ... The staff would sell you a toaster, but
not know anything about it.  Or you could buy all the parts to build
your own toaster.

And, of course: If Microsoft made toasters ... Every time you bought a
loaf of bread, you would have to buy a toaster. You wouldn't have to
take the toaster, but you'd still have to pay for it anyway.  Toaster'97
would weigh 15,000 pounds (hence requiring a reinforced steel
countertop), draw enough electricity to power a small city, take up 95%
of the space in your kitchen, would claim to be the first toaster that
lets you control how light or dark you want your toast to be, and would
secretly interrogate your other appliances to find out who made them.
Everyone would hate Microsoft toasters, but nonetheless would buy them
since most of the good bread only works with their toasters.

If Apple made toasters ... It would do everything the Microsoft toaster
does, but 5 years earlier.

If Oracle made toasters ... They'd claim their toaster was compatible
with all brands and styles of bread, but when you got it home you'd
discover the Bagel Engine was still in development, the Croissant
Extension was three years away, and that indeed the whole appliance was
just blowing smoke.

If Sun made toasters ... The toast would burn often, but you could get a
really good cuppa Java.

Does DEC still make toasters?... They made good toasters in the '80s,
didn't they?

If Hewlett-Packard made toasters ... They would market the Reverse
Toaster, which takes in toast and gives you regular bread.

If Sony made toasters ... The ToastMan, which would be barely larger
than the single piece of bread it is meant to toast, and can be
conveniently attached to your belt.

If Timex made toasters ... They would be cheap and small quartz-crystal
wrist toasters that take a licking and keep on toasting.

If Fisher Price made toasters ... "Baby's First Toaster" would have a
hand-crank that you turn to toast the bread that pops up like a Jack-
in-the-box.

If AOL made toasters...  You would put toast in for breakfast but have
to wait until lunch as there are too many pieces of toast pending with
the Host Toaster.

========================================================================

     Software Testing - Challenges for the Future (Seminar Series)
                              June 4, 1998
                 Hotel President WTC, Brussels, Belgium

Editor's Note: From time to time we announce seminars of particular
interest, and for our European readers this one appears noteworthy.

Program

09.30:  Welcome and Introduction, Bruno Peeters, Gemeentekrediet van
        Belgia, Brussels, Belgium; Chairman of the Section on Software
        Metrics & Software Testing

09.45:  Using Testing for Process Conformance Management, Adrian Burr,
        tMSC, Haverhill, England

10.30:  Coffee - Exhibition

11.00:  Millennium and Euro testing, Niek Hooijdonk, CTG, Diegem,
        Belgium

11.45:  Testing WWW Applications, Gualterio Bazzana, Onion, Brescia,
        Italy

12.30:  Lunch - Exhibition

Session A : Tools Track

14.00:  Introduction of a capture/playback tool, Geert Lefevere and Gino
        Verbeke, LGTsoft, Roeselare, Belgium

14.40:  Usability Testing, Christine Sleutel, ING, Amsterdam,
        Netherlands

15.20:  Test Factory (TSite), Ingrid Ottevanger, IQUIP, Diemen,
        Netherlands


Session B : Management Track

14.00:  Test Assessment, Jens Pas, ps_testware, Leuven, Belgium

14.40:  Organization of acceptance test team at Banksys, Hugo Gabriels,
        Banksys, Brussels, Belgium

15.20:  Lessons learned on test management of large, complex integration
        projects, Michel Luypaert, CSC, Brussels, Belgium

General Information

Date:   June 4, 1998

Venue:  Hotel President World Trade Center, Bd. Em. Jacqmain 180, 1000
BRUSSELS.  Tel. +32.(0)2.203.20.20

The Hotel is situated next to the World Trade Center and in front of
Brussels' North Station.

Secretariat:

        Technologisch Instituut vzw
        Ingenieurshuis - K VIV
        Mr L. Baele, project co-ordinator
        Desguinlei  214
        2018  Antwerpen 1
        Tel. : 03/216.09.96
        Fax : 03/216.06.89
        e-mail : luc.baele@ti.kviv.be

========================================================================

               Return on Investment From Test Automation
                                   by
                              Lou Adornato

Editor's Note: This item was posted on C.S.T. and Mr. Adornato was kind
enough to permit us to include it in TTN-Online.

> Question:
>
> I work for a financial services company which is investigating whether
> automated testing tools (primarily for regression testing) can deliver
> significant benefits and efficiencies in our environment.
>
> 1. In what circumstances have they:
>
>  Significantly benefited from automated testing tools?
>
>  Received marginal or no benefit at all?
>

I'm supposed to be presenting a paper on this topic in the fall, so I've
been looking into this.

I've been able to identify four factors that affect the rate of return.
First, there's the amount of time needed to develop the test scripts.
Actually, the more important measure is the ratio of script development
time to the time needed to execute the same test manually.  Say a
particular test would take six hours to run manually, but developing the
test script takes 60.  The development ratio is 10:1.  This means that
you won't get any benefit at all (you won't reach the breakeven point)
until you've run the test 10 times.

The next factor is the number of times you expect to run the test during
each release cycle (I haven't distinguished between major and minor
releases yet, but the model I'm working on should get there eventually).
If this test is only going to be run once per release, then you're not
going to see a payback until you've done 10 releases.  On the other
hand, if you're expecting to run this test three times during each
release cycle, then you'll see payback somewhere during your fourth
release.
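
As an aside, that breakeven arithmetic can be sketched in a few lines
of Python (an illustration of the reasoning, not Mr. Adornato's actual
model):

    # Breakeven point for an automated test script, ignoring churn and
    # brittleness (those factors are discussed separately below).
    import math

    def breakeven_releases(script_dev_hours, manual_hours_per_run,
                           runs_per_release):
        dev_ratio = script_dev_hours / manual_hours_per_run  # e.g. 60/6 = 10
        return math.ceil(dev_ratio / runs_per_release)

    print(breakeven_releases(60, 6, runs_per_release=1))   # 10th release
    print(breakeven_releases(60, 6, runs_per_release=3))   # 4th release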

Of course, reaching the breakeven point requires that the feature you're
testing isn't changed or removed before you get there, and that's our
next factor.  You can call it "stability" or "feature churn".  The idea
is that if you expect 10% of the product to change between releases,
then at least 10% of your test scripts will become obsolete on each
release.

The final factor is one that I call "brittleness".  This is caused by
test script design that doesn't isolate the "navigation" function
(moving through the product to get to the actual testing functions of
stimulus and response capture).  One of the best examples is a test of a
dialog box.  In order to open the dialog, your test script moves the
mouse pointer to an absolute screen position (over a toolbar button) and
simulates a click.  In the next release the toolbar gets an additional
button to the right of the button your script uses, and so your screen
coordinates are wrong.  Your script is now "broken" even though the
dialog it tests hasn't been changed.  I define script changes needed to
keep up with changing features as "primary impact", and all other script
changes as "secondary impact", and "brittleness" is the ratio of
secondary impact to primary impact.

(as an aside, I've found that carelessly used "capture/playback" offers
a really good development ratio but really BAD brittleness.  I'm still
working on the economic model and the sensitivity analysis, but right
now I'd guess that brittleness is a bigger cost factor than initial
development time)

>
> 2. What were the critical factors in the successful
>    implementation of automated testing tools?
>

The "churn rate" and "brittleness" I mentioned above is a big factor.
I've set up some Excel spreadsheets that show that you can get into a
mode where maintenance of the test suite remains greater than the amount
of staff time the automation is supposed to replace.  In addition to
those factors, there are a number of process-related factors that come
into play.  The biggest is probably the point at which the user
interface becomes stable - the earlier the UI stabilizes, the earlier
the testing department can begin scripting tests.  Another factor seems
to be budget.  The tool doesn't do you any good without someone
to use it, and in a lot of cases the test department has been expected
to automate while doing manual testing, without any additional staff.
This is definitely a case of draining the swamp while you're up to your
butt in alligators, and rarely works out.  Unless the automation
coincides with a lull in the testing responsibilities, there's going to
be a need for additional (albeit temporary) staff.

>
> 3. What were the reasons for abandoning the use or
>    implementation of automated testing tools?
>

As I mentioned above, you can get into a mode where you're spending more
time maintaining the automated tests than you're saving.  However, from
what I've seen it's usually been a case of the testing department being
assigned the automation task without getting (or in some cases, without
asking for) additional (temporary) staff to implement it.

>
> 4. What are the demonstrated tangible benefits received
>    from the successful implementation of automated testing tools?
>

A lot depends on what kinds of benefits you're talking about.  Some of
the possible benefits of test automation include:

  o Reducing the cost of the testing

  o Reducing time-to-market (shortening the testing cycle)

  o Improved (more consistent) testing and test records

  o Improved product quality (due to earlier and more frequent testing)

  o Better use of the testing staff (letting the automation system do
    the "grunt work" while the humans do the deep, hands-on testing
    that really scares out the bugs).

Note that I said "possible".  In a lot of cases I've seen that some or
all of these factors actually suffered with test automation.

>
> 5. What are the infrastructure considerations in
>    supporting an automated testing environment?
>

First, you need to consider the plans for the product.  How many
follow-on releases are you expecting to have?  How much of the product
will change in each release ("feature churn")?

The next issue is your process.  You need to be able to write the test
plan independently of code development, and that means you have to have
a solid functional definition of the product early on.  You also need to
have the UI stabilize fairly early, and you should have a number of
points in the process where test items are sent over to the testing
department.

Finally, you have to consider your testing department's skills.  Good
testers are not necessarily good programmers, and test scripting is
programming work, especially if you hope to keep the brittleness of your
test suite to a minimum.  And even if you have the necessary skill sets
in the programming group, does the group have enough bandwidth to do the
automation?

Three last things to consider:

First, the automation is NOT going to take the place of all manual
testing.  There's always going to be 20% or so that it just doesn't make
sense to automate.

Second, automation is NOT going to replace the testing staff.  You're
still going to need someone around who understands testing, software QA,
and your product to maintain the test suite, interpret the results, and
do the remaining manual testing.

Third, automation is NOT going to find the majority of your bugs.
That's not really its purpose.  The automation frees up your testing
staff from having to do the "grunt work", and allows them the time to do
hands-on testing that shakes out the really subtle and unexpected bugs.

Lou Adornato (ladorna@btree.com)
Sr. Systems Engineer
B-Tree Systems, Inc. (http://www.btree.com)
Minnetonka, MN, USA

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines (one
  page).
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, use the convenient Subscribe/Unsubscribe facility at
<http://www.soft.com/News/TTN-Online>.  Or, send E-mail to
"ttn@soft.com" as follows:

   TO SUBSCRIBE: Include in the body the phrase "subscribe {your-E-
   mail-address}".

   TO UNSUBSCRIBE: Include in the body the phrase "unsubscribe {your-E-
   mail-address}".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                          901 Minnesota Street
                      San Francisco, CA  94107 USA

              Phone:          +1 (415) 550-3020
              Toll Free:      +1 (800) 942-SOFT (USA Only)
              FAX:            +1 (415) 550-3030
              E-mail:         ttn@soft.com
              WWW URL:        http://www.soft.com

                               ## End ##