sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======             July 2000               =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) (Previously Testing Techniques
Newsletter) is E-mailed monthly to subscribers worldwide to support the
Software Research, Inc. (SR), TestWorks, QualityLabs, and eValid WebTest
Services user community and to provide information of general use to the
worldwide software and internet quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  (c) Copyright 2003 by Software Research, Inc.


========================================================================

   o  Quality Week 2000 - "Ask the Quality Experts!" Panel Summary (Part
      2 of 3)

   o  eValid: Changing the Way You Think About WebSite Testing

   o  A Cost-Benefit Model for Software Testing

   o  New TCAT/Java Download Available

   o  Comments on Dekkers' Article (QTN, June 2000) by Linda Miller,
      with Response by Dekkers

   o  Testing, Brains, Process, and Tools, by David L. Moore

   o  Word Play (Contributed by Ann Schadt)

   o  Advance Program and Call For Attendance, International Symposium
      on Software Testing and Analysis (ISSTA'00), Portland, Oregon,
      22-24 August 2000

   o  QTN Article Submittal, Subscription Information

========================================================================

     Quality Week 2000 - "Ask the Quality Experts!" Panel Summary
                             (Part 2 of 3)

      Note: This discussion is an approximate transcript of the
      "Ask the Quality Experts!" panel session at QW2000.  The
      questions were posed by guests to a web page sponsored by
      Microsoft (courtesy of Panel Chair Nick Borelli), and guests
      voted on each topic.  The top-voted questions were answered
      first.

                Ask the Quality Experts! Panel Members:

                    Mr. Nick Borelli, Microsoft, USA
             Dr. John D. Musa, Independent Consultant, USA
                    Prof. Lee Osterweil, UMASS, USA
                      Mr. Thomas Drake, ICCI, USA
                Mr. Robert Binder, RBSC Corporation, USA

*** What is the most important lesson about achieving software quality?

After 6 years it's become apparent to me that in order to test well, you
have to do a lot of other things well: requirements analysis,
requirements gathering, and so on.  The most important lesson is to look
at the entire process behind the system and pay attention to the lessons
that have been learned.  Understand what you want to build, and why.
Have some way of deferring technical decisions until the last possible
moment.  There are basic strategies that are known, and known to work
well; master them and deploy them in all your high-quality systems.  As
a society, we know how to develop very high quality systems.

Be oriented to the user.  Consistently meet customer expectations.

*** I've got 1 week before the announced release date, and they just
handed me a new build!  What do I do?

If you feel confident in the testing effort that you've done up until
this build, then you're probably okay executing your automation suite
and focusing on the specific parts that have changed.  If not, you're
not okay, and you're not going to make the release date.  If you do go
ahead and ship, you'll regret it.

Rerun your highly automated regression test suite.

One week is better than one day. :)

I would do a very quick operational profile.  Take a couple hours to
find out what the customer is going to do most and focus on those.

Do the minimum set of tests that run the core functionality.

Suggestion from the audience - assess whether you can ship within a week
and make that well known.

*** Do you think that the internet is secure enough?

No.  90% or more of all users are completely defenseless against hackers
on the internet.  We are a long way from security outpacing what hackers
are able to do.  Example:  My dad installed a virus checker and didn't
enable it, so it was doing him no good.  From an engineering and product
standpoint we are at a crossroads of starting to support an
infrastructure that is both programmable and secure.

There is a tremendous amount of e-commerce that goes on without a hitch.
The hacker break-ins make the headlines, but there aren't enough of them
to flood the news.  Do you think that downtown San Francisco is secure
enough?  Just like the internet, on a cloudy day the answer is clearly
no; on a sunny day the answer is clearly yes.  The internet is a
serviceable thing that, for a lot of purposes, works quite well.

> Comment from audience - Some folks are not using the software and
techniques that are out there today to make their systems more secure.
I have heard comments like, "that won't happen to us."

I had an up-to-date virus checker, and scanned the Love Bug attachment
before opening it.  It was from someone I knew.  Maybe I'll need to wait
a week and see what hits the news before opening attachments.

Large attachments are scary now.

> Question from audience - What about other types of data transmitted on
the internet, such as credit card numbers?  I have had private
information stolen by hackers.  Many people will remain unwilling to use
the internet for such transactions.

Some hackers are using the same technology we use to test our systems.

> Comment from audience - It's the same at gas stations or restaurants.

Yes, but more measures can be taken to ensure that the card user is the
card owner when it's in person.

*** Does coverage analysis really work? Is it worth the fuss and bother?

Depends.  A lot of coverage analysis probably isn't worth it.  Static,
dynamic, etc... a lot of the classical coverage analysis basically takes
the source code, triggers test cases, and then looks back to find out
what was covered.  When you're doing that type of analysis you really
have to be careful what you are looking for.  On the dynamic side, where
you work directly at the executable level, you're looking for risk-type
results (dynamic run-time errors).  Sometimes it's good to find out that
you're only covering a small percentage of the code -- coverage analysis
can be a check to find out how good your test cases are!

In my opinion, it works reasonably well for units, but for operational
systems it's much more cost effective.

A coverage model is not much more than a guess about where we think the
bugs might be.  It tells us that we've turned over the rocks that we
think are good rocks to turn over.  Two well-known systems (Oc and Tec)
were submitted; their source code was available, but no coverage
analysis had been run.  Large, expert development teams with lots of
time had looked at these rather large and sophisticated test suites, yet
these very knowledgeable users never reached 25% of the code.  Coverage
analysis guarantees nothing, but it is a necessary minimum criterion!

> Question from audience - Is there a specific goal that's good to shoot
for percentage-wise?

85% is a reasonable expectation.

In some cases you should have much higher expectations than 85%.

> Question from audience -  In looking back to the original question, I
suspect the person asking the question isn't doing any code coverage at
all.  What should he be doing first? When my team did some coverage
analysis and saw that it was low, we were able to talk to the developers
and show them what we weren't covering, and they were able to guide us
to what were critical things that we needed to be testing.

Good point.  By talking to developers we can figure out what to do with
the 25%-50% of code that isn't covered.

Yes, it's probably useful to know what code isn't being covered.

> Comment from audience - Three things: 1. do a lot of unit testing;
2. it can help to prioritize testing; 3. as we started to use the static
analyzer, the developers' code quality went up.

Static analysis can help you target areas for inspection/review too.

                           (To Be Continued)

========================================================================

        eValid: Changing the Way You Think About WebSite Testing

Our new eValid(tm) family of WebSite test products and WebSite test
services aims to change the way you think about testing a WebSite.

This new consolidated offering -- which integrates our unique testing,
tuning, loading, and monitoring technology under a single brand name --
is based on eValid's unique Test Enabled Web Browser(tm).  The eValid
engine runs on Windows 98/NT/2000.

eValid is a user-oriented test engine, our eValid monitoring service
platform, and the internal technology basis for our WebSite Quality
consulting work:  producing load experiments, building complete WebSite
Test Suites, and doing WebSite page tuning.

eValid as a test engine performs essentially all functions needed for
detailed WebSite static and dynamic testing, QA/Validation, and load
generation.  eValid has native capabilities that handle WebSite features
that are difficult, awkward, or even impossible with other methods such
as those based on viewing a WebSite from the Windows OS level.

eValid has a very rich feature set:

  * Intuitive on-browser GUI and on-web documentation.
  * Recording and playback of sessions in combined true-time and
    object mode.
  * Fully editable recordings/scripts.
  * Pause/SingleStep/Resume control for script checkout.
  * Performance timings to 1 msec resolution.
  * Content validation of HTML document features, URLs, selected text
    fragments, selected images, and all images and applets.
  * JavaScript and VBScript fully supported.
  * Advanced Recording feature for Java applets and ActiveX
    controls.
  * Event, timing, performance, tuning, and history charts that
    display current performance data.
  * Wizards to create scripts that exercise links on a page, push
    all buttons on a FORM, and manipulate a FORM's complete
    contents, etc.
  * The LoadTest feature to chain scripts into realistic load
    testing scenarios.
  * Log files are all spread-sheet ready.
  * Cache management (play back tests with no cache or an initially
    empty cache).

Try out a DEMO Version of eValid (it has limited functionality but no
key is required!) by downloading from:
 <http://www.soft.com/eValid/Products/Download/down.evalid.html>

Or, download the FULL version and request an EVAL key from:
 <http://www.soft.com/eValid/Products/Download/send.license.html>

The powerful eValid LoadTest feature is described at:
 <http://www.soft.com/eValid/Products/Documentation/eV.load.html>

Take a quick look at the eValid GUI and other material about the product
at:
 <http://www.soft.com/eValid/Products/Documentation/eV.GUI.unlinked.html>

A detailed feature/benefit analysis of eValid can be found at:
 <http://www.soft.com/eValid/Products/features.benefits.html>

========================================================================

               A Cost-Benefit Model for Software Testing

                           by Rudolf Goeldner
                    

        IT Centre of North-Rhine Westphalia's Finance Department
                          Duesseldorf, Germany

Summary:  This article attempts to illustrate the economic viability of
quality assurance measures such as software testing by comparing the
costs of such measures with the costs that the user incurs when errors
occur at the workplace. Since there are hardly any empirical studies on
these costs, with the exception of Compaq's "Rage against the machine"
study, this article seeks to estimate these error-induced costs.
Obviously, the more users there are, the greater the benefit provided by
quality assurance. In other words, quality assurance is worthwhile,
irrespective of all other beneficial aspects, as soon as the number of
users exceeds a critical figure. Finally, the formula for estimating the
error-induced costs incurred by the user is applied to the data from two
projects.

1       The status of quality assurance in the software development
process

Quality assurance measures, such as testing during the software
development process, are recognized software engineering methods and are
included in every software process model. Software development is based
on four conflicting dimensions: functionality, quality, deadlines and
costs. The scope for solutions on any given project is restricted with
regard to these dimensions. Thus, if the costs and deadline are
specified in advance, the functionality and/or quality assurance has to
be limited. So, quality assurance is restricted by the dimension of
costs on the one hand and time-to-market requirements on the other.  For
software producers, quality assurance, especially in the form of
software testing, entails effort which, in turn, affects the
release date for the software product. Moreover, the effort required to
detect and correct errors does not really depend on whether these
activities have to be performed before the product is used or when it is
already being used - with the exception of analysis errors.

This is why it is often doubted that the effort invested in quality
assurance is necessary.  However, the economic viability of quality
assurance has to be measured using other criteria, e.g. at the user
level, where the software is in productive use. There is no doubt that
system crashes and malfunctions cost the user considerable additional
effort. Unfortunately, there are hardly any representative studies on this
issue. One of the few that do exist is Compaq's "Rage against the
machine" study.  An extract from this study was available on Compaq's
website <http://www.compaq.uk> from 27.05.1999 but has since been
removed.

2       Compaq's "Rage against the machine" study from the point of view
of quality assurance.

2.1     Technology-Related Anger

In March and April 1999 a representative survey of 1,255 employees who
use computers at work was carried out in Great Britain.

The amazing and worrying result was the "technology-related anger" (TRA)
detected by the survey. This type of anger is much more common than the
much better known phenomenon of traffic-related anger and results in
outbursts of rage that can go as far as actual physical attacks against
the computer.

The survey questions referred to IT as a whole, i.e. hardware, software
and networks. The responses showed:

  * there is a high level of stress, anger and frustration caused by
    computer problems;
  * this leads to substantial costs for the company;
  * 23 per cent of those questioned said that their work was interrupted
    at least once a day; and
  * 20 per cent of those questioned said that they spent up to three
    hours per day solving IT problems.

The damage for the company is not only caused by the fact that staff
spend time on computer problems themselves, but also by the
psychological stress exerted on the individual employees. Although they
may react differently according to their disposition, their reactions as
a whole have a negative influence on the atmosphere and productivity
within the company.

2.2     CBI Statistics

Statistics compiled by the Confederation of British Industry (CBI) show
that, on average, it takes one hour per user per day to solve IT
problems.  (These figures are quoted in the Compaq study, too.)

This figure is more or less the same as the result of the Compaq study
with regard to the 20 per cent who said that they spent up to three
hours per day solving IT problems. Thus, if this figure is spread across
all of the users (i.e. 100 per cent), an average of 0.6 hours per user
per day (20 per cent of three hours) is used for this purpose.

The CBI concludes that these breakdowns cost £25,000 per person per
year. Since 9 million people in Great Britain have a computer at their
workplace, the macroeconomic damage caused by the breakdowns comes to
almost £225 billion per year.

The Compaq study focuses on the psychological effects rather than the
costs of IT breakdowns. It also makes no distinction between hardware,
software and network failures. But our experience shows that the
computer hardware (even the desktop printer) and the network hardware
rarely break down in a stable system and certainly not on a daily basis.
This means that the main cause is the software.

3       The Cost-Benefit Model

3.1     Error-Induced Costs Incurred by the User

As software testing is one of the most important measures in quality
assurance, I will restrict my reflections to software testing from now
on.  The costs incurred during use are defined as downtimes in person
days (PD), with one PD equaling eight hours. Only serious errors, i.e.
those errors that cost the user effort, are taken into account. There is
a link between the costs (C) during use, the number of errors (ER) in
the software and the number of users (U). If the average effort during
use per error and user is defined as B(m), the following is true:

        C = B(m) * U * ER      (PD)

3.2     Optimistic estimate of the error-induced costs incurred by the
user

The next step is to attempt to produce an optimistic estimate of the
average effort B(m) per error and user. This process does not cover
follow-up effort of the type necessary in the case of data being lost
due to software bugs. Though this follow-up effort can take on huge
dimensions, it can hardly be calculated without empirical studies. The
average effort per error and user is determined as follows:

        B(m) = e * p      (PD per error and user)

where e is the average effort in PD per error and p is the proportion of
users that is actually affected by the error. For e, i.e. the effort
invested by the user per error, empirical values can be quoted. Users
hardly ever spend more than half a day (0.5 PD) dealing with an error.
However, the minimum time required is normally not less than half an
hour (0.5 / 8 = 0.0625 PD). In this case, the following is true:

        0.0625 < e < 0.5      (PD per error and user)

The proportion (p) of users affected by an error should really be 100
per cent but this is not the case because:

      30 per cent of users may be absent or may perform another task
      10 per cent may hear about the error from other users and do not
      use the erroneous function
      10 per cent may get an error message from the hotline in time and
      do not use the erroneous function

Thus, p could be said to equal 50 per cent or 0.5.

The result would be

        0.03125 < B(m) < 0.25      (PD per error and user)

and the minimum costs would be

        C(min) = 0.03125 * U * ER      (PD)

and the maximum costs

        C(max) = 0.25 * U * ER      (PD)

The average value (approximately the midpoint of the two bounds) is

        EX(B(m)) = 0.14      (PD per error and user)

and the average costs

        C(m) = 0.14 * U * ER      (PD)

If a software system that has 20 serious errors is delivered and then
used by 400 people, the minimum costs will be 250 PD, average costs
1,120 PD and maximum costs 2,000 PD. If there were 2,000 users, the
costs would be 5 times as high. The average costs would then come to
5,600 PD or 28 person years and, as noted, this model calculation doesn't
even cover the considerably larger effort required when data is lost due
to software bugs.
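
The following short Java sketch simply restates the arithmetic above so
it can be checked mechanically.  It is purely illustrative and not part
of the original article or its tooling; the class name CostModel and the
output format are invented for this example, and the constants are the
B(m) bounds derived above.

    // CostModel.java -- illustrative sketch of the error-cost model.
    // Bounds come from 0.0625 < e < 0.5 PD and p = 0.5 (see above).
    public class CostModel {
        static final double B_MIN = 0.03125; // min effort per error and user (PD)
        static final double B_AVG = 0.14;    // avg effort per error and user (PD)
        static final double B_MAX = 0.25;    // max effort per error and user (PD)

        // C = B(m) * U * ER, in person days (PD)
        static double cost(double bm, int users, int errors) {
            return bm * users * errors;
        }

        public static void main(String[] args) {
            int errors = 20, users = 400;
            System.out.println("C(min) = " + cost(B_MIN, users, errors)); //   250 PD
            System.out.println("C(m)   = " + cost(B_AVG, users, errors)); // 1,120 PD
            System.out.println("C(max) = " + cost(B_MAX, users, errors)); // 2,000 PD
        }
    }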

3.3     Financial Benefits of Software Testing

It is now possible to compare the costs incurred by the user with the
costs of software testing in order to assess the economic viability.
Obviously, the software testing costs C(ST) must be lower than the costs
the user would incur if no software testing were done. ER(ST) stands for
the errors detected through software testing.

        C - C(ST) = B(m) * U * ER(ST) - C(ST) > 0

In other words, software testing is worthwhile whenever the number of
users exceeds a certain critical level U(crit), as follows:

        U > C(ST) / (B(m) * ER(ST)) = U(crit)

We shall now use data regarding two projects in North-Rhine Westphalia's
finance department to conduct a model calculation. The developers' test
(the software developers' white-box test) was followed by user
acceptance testing, during which the serious errors ER(ST) were
discovered. The associated effort was recorded as C(ST).

On Project A, 386 errors were detected and the effort involved came to
1,820 PD. The product is used by 200 people. The average costs incurred
by the users due to these errors would have been 10,808 PD. Thus, the
benefit from user acceptance testing totaled 8,988 PD. Or, to put it
another way, user acceptance testing would have been worthwhile even if
only 34 people had been using the product.

On Project B, the results were even more drastic. 604 errors were
discovered and the effort required came to 1,362 PD. This product is
also used by 200 people. The average costs incurred by the users due to
these errors would have totaled 16,912 PD. This means that the benefit
from user acceptance testing was 15,550 PD. Or, in other words, user
acceptance testing would have been worthwhile even if only 17 people had
been using the product.

On both projects the costs of user acceptance testing were less than 20
per cent of the total costs.  The method used to perform the user
acceptance testing was "SQS-TEST" by Software Quality Systems AG,
Cologne.
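
The same arithmetic can be replayed for both projects with the minimal
Java sketch below.  Again, this is only an illustration of the formulas
above: the class and method names are invented for this example, and the
project figures are taken directly from the text.

    // CriticalUsers.java -- recomputes benefit and U(crit) for A and B.
    public class CriticalUsers {
        static final double B_AVG = 0.14; // avg effort per error and user (PD)

        static void evaluate(String name, int errors, double testEffortPD,
                             int users) {
            double avoided = B_AVG * users * errors;       // C = B(m) * U * ER(ST)
            double benefit = avoided - testEffortPD;       // C - C(ST)
            double uCrit = testEffortPD / (B_AVG * errors); // critical user count
            System.out.printf("%s: avoided = %.0f PD, benefit = %.0f PD%n",
                    name, avoided, benefit);
            System.out.printf("  U(crit) = %.0f users%n", Math.ceil(uCrit));
        }

        public static void main(String[] args) {
            evaluate("Project A", 386, 1820, 200); // ~10,808 PD avoided, U(crit) ~ 34
            evaluate("Project B", 604, 1362, 200); // ~16,912 PD avoided, U(crit) ~ 17
        }
    }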

4       Outlook

It is clear that the U (user) value is the significant factor. The
higher U is, the greater the benefit and the justifiable effort that can
be invested in software testing.

If the average value is used for B(m), the justifiable effort for
software testing can be calculated as follows:

        C(ST) = 0.14 * ER * U      (PD)

It is not as easy to estimate the effort incurred by users when software
bugs lead to data loss or corruption. Such estimates necessitate
company-specific, empirical studies, which, in view of the
rationalization potential that could be tapped using error-free
software, are really crucial.

If, for whatever reason, empirical studies cannot be carried out, the
only real alternative is the "minimum marginal costing" method presented
here.

========================================================================

                    New TCAT/Java Download Available

We are pleased to announce that we have completed a new build of
TCAT/Java, Ver. 1.3, for Windows 98/NT/2000, and have made it available
on our website.  All current TCAT/Java users should download this new
version; all existing license keys will work with the new release.

This new TCAT/Java version includes:

   o  Runtime.  There is an improved Java runtime that includes more
      buffer size options and more efficient operation.  Typically the
      slowdown for fully-instrumented Java applications that use the
      "infinite buffering" option is less than 5% additional execution
      time compared with that of the non-instrumented Java application.

      The new build provides buffer sizes of 1, 10, 100, 1,000, 10,000,
      and 100,000 hits, in addition to the "jruninf" (infinite) option.

      Use non-infinite buffering in case you have an application that
      terminates so abnormally that the last buffer load would otherwise
      be lost.

      With all of the non-infinite options, the internal buffers are
      dumped automatically after the indicated number of "hits", and are
      also flushed automatically at the end of execution.  (A conceptual
      sketch of this buffering idea appears after this list.)

   o  Examples.  The new release includes updated example programs that
      show how TCAT/Java operates.  The examples include:

         o  "Fib.java": A small, 2 module, 6-segment Java application
            that calculates Fibonacci numbers.  Though very small, this
            Java application can generate a lot of segment hits; for
            example, Fib(20) generates 40,588 hits.

         o  "Metalworks.java": A medium sized, 115 segment, 117 module
            Java application.  A typical use of this application
            generated 7,684 hits.

         o  "SwingSet.java": A large sized, 709 segment, 232 module Java
            application that provides a wide range of GUI elements.  A
            minimal exercise of this collection of GUI elements yielded
            169,605 hits.

   o  Other Improvements.  A range of other minor changes and repairs to
      improve operation and simplify processing have been made
      throughout the product.
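
The buffering behavior described in the Runtime item above can be
pictured with the following conceptual Java sketch.  It is NOT the
TCAT/Java runtime and does not reflect its actual implementation; the
class name HitBuffer and the trace-file format are invented here purely
to illustrate the general idea of a fixed-size hit buffer that is dumped
after a set number of hits and flushed again when the application exits,
so that an abnormal termination loses at most one buffer load.

    // HitBuffer.java -- conceptual sketch only, not the TCAT/Java runtime.
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class HitBuffer {
        private final int capacity;   // e.g. 1, 10, 100, ... hits per dump
        private final int[] buffer;
        private int count = 0;
        private final PrintWriter log;

        public HitBuffer(int capacity, String traceFile) throws IOException {
            this.capacity = capacity;
            this.buffer = new int[capacity];
            this.log = new PrintWriter(new FileWriter(traceFile, true));
            // Flush whatever is left in the buffer when the program ends.
            Runtime.getRuntime().addShutdownHook(new Thread(this::flush));
        }

        public synchronized void hit(int segmentId) {
            buffer[count++] = segmentId;
            if (count == capacity) {
                flush();              // automatic dump after 'capacity' hits
            }
        }

        public synchronized void flush() {
            for (int i = 0; i < count; i++) {
                log.println(buffer[i]);
            }
            log.flush();
            count = 0;
        }

        // Tiny demonstration: record 25 hits with a buffer of 10.
        public static void main(String[] args) throws IOException {
            HitBuffer trace = new HitBuffer(10, "trace.log");
            for (int i = 0; i < 25; i++) {
                trace.hit(i % 6);     // pretend there are 6 segments
            }
            // 20 hits were dumped automatically; the last 5 go out at exit.
        }
    }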

========================================================================

              Comments on Dekkers' Article (QTN June 2000)
                            by Linda Miller
                       with Response by Dekkers.

From: "Linda Miller" 
To: QTN
Subject: Re: Quality Techniques Newsletter (QTN) -- June 2000 Issue
Date: Tue, 27 Jun 2000

I totally disagree with the article "Real World Strategies for Improving
the Test Process" presented by Marco Dekkers.

I suppose Mr. Dekkers thinks the CMM is also a waste of time since it
has maturity levels and questionnaires?  The TMM(sm) (Testing Maturity
Model) was developed at the Illinois Institute of Technology by highly
recognized technology experts with more credentials and degrees than
anyone I have encountered in testing.

Everyone has their opinion and has the right to voice it; however, they
should stick to the facts. My company uses the TMM and it has made a
significant impact on how we perform testing. It establishes repeatable,
measurable standards and procedures that can be integrated throughout
the development life-cycle.

We work with IIT and have developed all the necessary documents and an
assessment that has proven to be very successful.  There are many
questions in the assessment, but there is a purpose and theory for each
question. Also, IIT and MST Lab recognize and insist on
organization-wide support for any process, as does the CMM.

Thank you,
Linda Miller
MST Lab, Inc.
(314) 487-1178

               o       o       o       o       o       o

From: MDekkers@kza.nl
To: QTN
Date: Wed Jun 28 2000
Subject: Response to Ms. Linda Miller's Comments

Thank you for your frank reaction to my article. The purpose of this
publication was to stimulate discussion on the subject of test process
improvement. It is only natural that different people have different
perspectives. Nevertheless, I think this means that there is an
opportunity to learn from each other.

I would like to react in more detail to some points you make.  First of
all, I have never stated that the use of models for test process
improvement is "a waste of time". On the contrary, my article states
that organizations can benefit from using one or more models, if they
are used with care. I have, however, seen countless instances where a
model was followed "blindly", without regard for the company's business
and IT goals and specific circumstances. As I wrote in my article, I
view models as useful tools.

Also, my criticism of questionnaires is not aimed at the principle of
using them, but at the length of some of them. This criticism was not
specifically targeted at the TMM. However, some other models, including
the TPI, have lengthy questionnaires for which no theoretical or
practical purpose is given. In fact, we ourselves use a compact
questionnaire when assessing the test process (which I mentioned in the
article). Assessments would be very hard to perform without any form of
questionnaire.

Maturity levels can be useful to assess where an organization stands and
to help in establishing goals. However, reaching a certain maturity
level can never be a goal in itself. There always has to be a
relationship with the business goals. Therefore I encourage
organizations to establish goals for test process improvement which are
linked to their business goals. If these goals can be satisfied by
reaching a certain maturity level, there is no reason not to do so.
Again, however, the question has to be asked what is accomplished when a
higher level is reached. Management would not be doing its job if such
questions were not asked.

I am glad to hear that you have had success using the TMM. If it works,
I encourage you to use it.

You wrote that you totally disagree with my article. Could you please
let me know what other points, besides my comment on the length of
questionnaires, you disagree with? Also, are there any aspects of the
article that you find to be correct?

In closing, I would like to thank you again for taking the time to
react.  I would be very grateful if you could further elaborate on your
thoughts regarding the article and the subject of test process
improvement. I am also very interested in how you made test process
improvement work for your company.

With kind regards,
Marco Dekkers

========================================================================

                  Testing, Brains, Process, and Tools
                                   by
                             David L. Moore
                     Independent Testing Consultant

This white paper will discuss the inextricable link between the need for
testers to think about what they are doing, the process on which to base
this thought, and the use of tools to assist them in these tasks.

Why do testers need brains?

Today's tester has a job that fully parallels that of any developer. To
be effective, testers should be on the project from start to finish,
i.e., end-to-end testing. It is widely understood that testing can no
longer be dismissed as something that happens only at the end of a
project; certainly not without significant risk of failure.

The variety and complexity of tasks that a modern tester must perform
certainly means that good testers must be smart people.

And, obviously, testers require a high degree of cunning and a
willingness to "try something" different just to be effective at their
jobs. As they say, "If you do what you've always done, you'll get what
you've always got".

Testers need to decide what to do and what not to do. Tests need to be
designed!

Why do testers need process?

Complex tasks are often broken into numerous smaller, less complex
tasks.  As a result, they need to be part of a process if they are to be
performed effectively, analyzed, and improved upon. Process provides a
focus for communicating and for tying the component tasks together into
an effective activity. It is simply not possible for the average human
to keep it all in their head.

This applies to testers and indeed any professional performing a complex
job.

Because testers are often involved in tasks and outcomes that have an
impact on customers, they are required to be professional and beyond
reproach more visibly than anyone else in their organization. Processes
on which to base testing are an essential and an easy starting point for
achieving this credibility.

Why do testers need tools?

Testers need tools for all the same reasons that developers need tools;
to work faster, more efficiently, and to be freed from mundane tasks.
Also, there are some things that cannot be done without a tool.

If the testing effort does not remain proportional to the development
effort, i.e., testing is increasingly left behind, then, more often than
not, some tests don't get done. As this gulf widens, the likelihood of
important functionality going untested and the project's risk of failure
both increase.

Bearing in mind that today's testers are smart, they aren't going to be
interested in doing the same old thing over and over again. So they need
tools to apply to the mundane and time-consuming tasks.

Once again, the sophistication of the tester's role demands that some of
the complex issues be captured in places other than in their heads.
Tools, for example.

What happens when you don't have process and brains before getting a
tool?

The effect is somewhat similar to throwing confetti into the wind. The
first thing that happens is that the task spreads out, and then it
starts to go backwards. The initial direction is rapidly lost.  The
reason for this is simple. Tools are inherently flexible. They always
will be. They have to be. The market dictates that they be customizable;
it is an economic reality that these tools need to be sold to more than
just one customer, and no two customers are alike.

Without strong direction a tool will be purchased, installed, and all of
its features will be investigated and implemented, regardless of whether
these features really help to achieve the particular task at hand.

Why is it important to have process and brains before getting a tool?

 * A tool must do the right job for you. If you don't know what that job
   is, then no tool, of any nature, will provide you with that solution.

 * Testing is far from just a mechanical task. It is not possible to
   remove the tester's expertise from testing.

 * A tool must not waste your time. You must have the time available to
   implement a tool. Don't expect a tool to make time for you out of
   thin air when a deadline is tight.

 * A tool must be cost effective. Ignoring that time costs money, the
   fact of the matter is that many tools are very expensive and their
   cost effectiveness must be clearly understood before adoption.

 * A tool must actually be used to be effective. If it is not clear how
   to use the tool, through a well defined process, then most people
   don't have the spare time to figure it out.

 * A tool consumes resources. From people, to shelf space, to CPU
   cycles, to hard disk space etc. If the tool cannot live happily with
   your resources, then it is going to detract from the goal at hand.

How do you know if you are ready to buy a tool?

You are ready if you can answer "yes" to the following questions:

 * Do you have a documented process that works?

 * Do you have the metrics to prove the need to stakeholders?

 * Are your testers complaining about the proportion of mundane to
   interesting tasks?

 * Do you have the money to waste?

 * Are your testers mature enough to focus on applying your process to
   the tool rather than exploring the tool's capabilities?

 * Do your staff think critically rather than perform mechanically?

 * Are you prepared to try again and refine the use of the tool when it
   doesn't work the first time?

 * Do you have understanding management?

 * Have you identified a pilot project, that has not yet started, and
   have you included tools in the project plan?

 * Do you have the resources for tool evaluations?

How should you implement a tool?

 * It should be done in addition to an already satisfactory level of
   testing. Don't do anything critical with your tool on the pilot
   project/s. Be prepared to exclude its results from your deliverables.

 * It should be done in parallel with your existing process. Don't
   discard existing manual routines until you have proof that the
   automatic ones are at least as good, hopefully better/faster.

 * You should be prepared to run the tool implementation as a separate
   project with the usual project planning principles applied to it.

 * It should be implemented with close monitoring and carefully thought
   out metrics. You need visibility throughout the project to tweak it
   along the way and reach concrete conclusions at the end for
   postmortems.

 * It should be used where appropriate. Don't expect to do everything
   with the tool and be disappointed. Expect to do a little with the
   tool and be pleasantly surprised.

 * It should be done on a project that has plenty of time, money and is
   low risk.

 * The tool should be implemented with appropriate training for the
   users. Don't go to the trouble of doing the right thing up to now,
   and then throw a shrink-wrapped box to your testers.

Conclusions

Modern testing demands that testers are professionals with a significant
degree of intelligence. The ever increasing size and complexity of the
task means that intelligence alone is not enough to keep pace with what
is required of the job.

A process is required as a starting point and as a mode of communicating
direction throughout an organization. Tools are a useful mechanism for
handling complex processes and voluminous tasks.

The testing solution for today, and the future, is a carefully thought
out balance of brains, process and tools.

========================================================================

                 Word Play (Contributed by Ann Schadt)

*** Part I:  There are definitely some creative people out there... The
Washington Post recently published a contest for readers in which they
were asked to supply alternate meanings for various words. The following
were some of the winning entries:

 - Abdicate (v.), to give up all hope of ever having a flat stomach.

 - Carcinoma (n.), a valley in California, notable for its heavy smog.

 - Esplanade (v.), to attempt an explanation while drunk.

 - Flabbergasted (adj.), appalled over how much weight you have gained.

 - Negligent (adj.), describes a condition in which you absentmindedly
   answer the door in your nightie.

 - Lymph (v.), to walk with a lisp.

 - Gargoyle (n.), an olive-flavored mouthwash.

 - Coffee (n.), a person who is coughed upon.

 - Flatulence (n.) the emergency vehicle that picks you up after you are
   run over by a steamroller.

 - Balderdash (n.), a rapidly receding hairline.

 - Semantics (n.), pranks conducted by young men studying for the
   priesthood, including such things as gluing the pages of the priest's
   prayer book together just before vespers.

 - Rectitude (n.), the formal, dignified demeanor assumed by a
   proctologist immediately before he examines you.

 - Marionettes (n.), residents of Washington who have been jerked around
   by the mayor.

 - Circumvent (n.), the opening in the front of boxer shorts.

 - Frisbatarianism (n.), The belief that, when you die, your soul goes
   up on the roof and gets stuck there.

*** Part II:  The Washington Post's Style Invitational also asked
readers to take any word from the dictionary, alter it by adding,
subtracting or changing one letter, and supply a new definition.  Here
are some recent winners:

 - Sarchasm: The gulf between the author of sarcastic wit and the reader
   who doesn't get it.

 - Reintarnation: Coming back to life as a hillbilly.

 - Giraffiti: Vandalism spray-painted very high.

 - Inoculatte: To take coffee intravenously.

 - Osteopornosis: A degenerate disease.

 - Karmageddon: It's like, when everybody is sending off all these
   really bad vibes, right? And then, like, the Earth explodes and it's
   like a serious bummer.

 - Glibido: All talk and no action.

 - Dopeler effect: The tendency of stupid ideas to seem smarter when
   they come at you rapidly.

 - Intaxication: Euphoria at getting a refund from the IRS, which lasts
   until you realize it was your money to start with.

========================================================================

                Advance Program and Call For Attendance

  International Symposium on Software Testing and Analysis (ISSTA'00)
                  Portland, Oregon, 22-24 August 2000

       Workshop on Formal Methods in Software Practice (FMSP'00)
                  Portland, Oregon, 24-25 August 2000

                        Sponsored by ACM SIGSOFT

ISSTA is the leading research conference in software testing and
analysis, bringing together academics, industrial researchers, and
practitioners to exchange new ideas, problems, and experience.

FMSP aims to bring together researchers and practitioners from around
the world to work on the problems of the practical use of formal
techniques.  FMSP is a co-located conference to ISSTA.

Organizing Committee:
General Chair: Debra J. Richardson, University of California, Irvine, USA.
ISSTA Program Chair: Mary Jean Harrold, Georgia Institute of Technology, USA
FMSP Program Chair: Mats Heimdahl, University of Minnesota, USA

Conference Home Page: http://www.ics.uci.edu/issta-fmsp
ISSTA Home Page: http://www.cc.gatech.edu/~harrold/issta00
FMSP Home Page: http://www.cs.umn.edu/crisys/fmsp/

On Line Registration: http://regmaster.com/issta2000/

Early registration is available before July 28, 2000.  A reduced rate
applies for ACM SIGSOFT members and for participants who attend both
ISSTA and FMSP.

Hotel Information: Portland Marriott Downtown Reservations should be
made before July 28 by calling:  Marriott reservations: 1-800-228-9290
or Portland Marriott Downtown: 1-503-226-7600 and identifying "ACM" as
the organization and "ISSTA" or "FMSP" as the conference. Group rates
are available three days prior/post to conference dates, subject to
availability.

                         ISSTA ADVANCE PROGRAM

Session I: ISSTA Opening and Keynote Address Welcome: Debra J.
Richardson, General Chair (University of California, Irvine) Opening
Remarks: Mary Jean Harrold, ISSTA Program Chair (Georgia Institute of
Technology)

Analysis is necessary, but far from sufficient: Experiences building and
deploying successful tools for developers and testers Invited Speaker:
Jon Pincus (Software Design Engineer, PPRC, Microsoft Research)

Session II: Static Analysis Chair: Frank Tip (IBM TJ Watson Research
Center)

Formal Analysis of Network Simulations, by: Karthikeyan Bhargavan, Carl
A. Gunter, Moonjoo Kim, Insup Lee, Davor Obradovic, Oleg Sokolsky, and
Mahesh Viswanathan (University of Pennsylvania)

A Relational Method for Finding Bugs in Code, by: Daniel Jackson and
Mandana Vazir (Massachusetts Institute of Technology)

Putting Static Analysis to Work for Verification: A Case Study, by: Tal
Lev-Ami (Tel-Aviv University, Israel), Thomas Reps (University of
Wisconsin), Mooly Sagiv (Tel-Aviv University, Israel), Reinhard Wilhelm
(University of the Saarland)

Session III: Testing Object-Oriented Software and Components Chair:
Thomas Ball (Microsoft Research)

Automated Testing of Classes, by:  Ugo Buy (University of Illinois,
Chicago), Alessandro Orso and Mauro Pezze' (Politecnico di Milano,
Italia)

OMEN: A Strategy for Testing Object-Oriented Software. Amie L. Souter
and Lori L. Pollock (University of Delaware)

UML-Based Integration Testing. Jean Hartmann and Claudio Imoberdorf
(Siemens Corporate Research), Michael Meisinger (Technical University,
Munich, Germany)

On Subdomains: Testing, Profiles, and Components. Dick Hamlet (Portland
State University)

Session IV: Real Time and Process Chair: Pascale Thevenod-Fosse (LAAS-
CNRS, France)

Requirements-based Monitors for Real-Time Systems. Dennis K. Peters
(Memorial University of Newfoundland) David L. Parnas (McMaster
University)

Classification Schemes to Aid in the Analysis of Real-Time Systems. Paul
Z. Kolano (Lockheed Martin), Richard A. Kemmerer (University of
California, Santa Barbara)

Verifying Properties of Process Definitions. Jamieson M. Cobleigh; Lori
A. Clarke; and Leon J. Osterweil (University of Massachusetts, Amherst)

Session V: Empirical Studies Chair: Antonia Bertolino (CNR Italia)

Prioritizing Test Cases for Regression Testing, by:  Sebastian Elbaum
(University of Nebraska, Lincoln) Alexey G. Malishevsky & Gregg
Rothermel (Oregon State University)

Which Pointer Analysis Should I Use? Michael Hind (IBM TJ Watson
Research Center), Anthony Pioli (Register.com)

Comparison of Delivered Reliability of Branch, Data Flow and Operational
Testing: A case study. Phyllis G. Frankl and Deng Yuetang (Polytechnic
University)

Minimizing Failure-Inducing Input. Ralf Hildebrandt and Andreas Zeller
(University of Passau, Germany)

Session VI: State of the Art and Future Directions Report Chair: Mary
Lou Soffa (University of Pittsburgh)

Finite State Verification: A New Approach for Validating Software
Systems. Invited Speaker: Lori Clarke (Professor, Computer Science
Department, University of Massachusetts, Amherst)

Session VII: Testing Chair: Istvan Forgacs (Balthazar Ltd., Hungary)

A Framework for Testing Database Applications. David Chays, Saikat Dan,
and Phyllis G. Frankl (Polytechnic University), Filippos Vokolos (Lucent
Technologies), Elaine J. Weyuker (AT&T Labs Research)

jRapture: A Capture/Replay Tool for Observation-Based Testing. John
Steven, Pravir Chandra, and Andy Podgurski (Case Western Reserve
University)

Testability, Fault Size and the Domain-to-Range Ratio: An Eternal
Triangle. Martin R. Woodward (University of Liverpool), Zuhoor A. Al-
Khanjari (Sultan Qaboos University, Sultanate of Oman)

Black-Box Test Reduction Using Input-Output Analysis. Patrick J.
Schroeder and Bogdan Korel (Illinois Institute of Technology)

Session VIII: State of the Art and Future Directions Report Chair:
Michal Young (University of Oregon)

Testing Component-Based Software Invited Speaker: Craig H. Wittenberg
(Development Manager, Component Applications Group, Microsoft Research)

Session IX / FMSP Session I: FMSP Opening and Keynote Address Welcome:
Debra J. Richardson, General Chair Opening Remarks: Mats Heimdahl: FMSP
Program Chair (University of Minnesota)

Model Checking Java Programs Invited Speaker: David Dill (Associate
Professor, Computer Science Department, Stanford University)

Session X: Concurrency Analysis (FMSP Attendees welcome) Chair: Joanne
Atlee (University of Waterloo, Canada)

Slicing Concurrent Programs. Mangala Gowri Nanda (IBM Research
Laboratory, India), S. Ramesh (Indian Institute of Technology, Bombay,
India)

Improving the Precision of INCA by Preventing Spurious Cycles. Stephen
F. Siegel and George S. Avrunin (University of Massachusetts, Amherst)

A Thread-Aware Debugger with an Open Interface. Daniel Schulz and Frank
Mueller (Humboldt University, Berlin, Germany)

ISSTA Closing Symposium Wrap-up:  Program Chair: Mary Jean Harrold
(Georgia Institute of Technology); General Chair: Debra J. Richardson
(University of California, Irvine)

========================================================================
      ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 9000 subscribers
worldwide.  To have your event listed in an upcoming issue E-mail a
complete description and full details of your Call for Papers or Call
for Participation to "ttn@sr-corp.com".

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the January issue of
  QTN On-Line should be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; QTN disclaims any responsibility for their content.

TRADEMARKS:  eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All other
systems are either trademarks or registered trademarks of their
respective companies.

========================================================================
          -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to CANCEL a current subscription, to CHANGE an
address (a CANCEL and a SUBSCRIBE combined) or to submit or propose an
article, use the convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/QTN-Online/subscribe.html>.

Or, send Email to "qtn@sr-corp.com" as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

         subscribe Email-address

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

         unsubscribe Email-address

   NOTE: Please, when subscribing or unsubscribing, type YOUR email
   address exactly as you want it to be used.  Unsubscribes that don't
   match an email address on the subscriber list are ignored.

		QUALITY TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     qtn@sr-corp.com
		Web:       <http://www.soft.com/News/QTN-Online>

                               ## End ##