                     sss ssss         rrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr


         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======              June 1995              =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is E-Mailed
monthly to support the Software Research, Inc. (SR) user community and
provide information of general use to the world software testing commun-
ity.

(c) Copyright 1995 by Software Research, Inc.  Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition pro-
vided that the entire document/file is kept intact and this copyright
notice appears with it.

TRADEMARKS:  Software TestWorks, STW, STW/Regression, STW/Coverage,
STW/Advisor, X11 Virtual Display System, X11virtual and the SR logo are
trademarks of Software Research, Inc.  All other systems are either
trademarks or registered trademarks of their respective companies.

========================================================================

INSIDE THIS ISSUE:

   o  EIGHTH INTERNATIONAL SOFTWARE QUALITY WEEK (QW'95)
      A SUCCESS!

   o  UPDATED FAQ OF STW PRODUCTS

   o  THOUGHTFUL THREAD APPEARS ON INTERNET
      Part 1 of 2

   o  KRAZY KONTEST

   o  APRIL KRAZY KONTEST RESULTS

   o  CALENDAR OF EVENTS

   o  TTN SUBMITTAL POLICY

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

           EIGHTH INTERNATIONAL SOFTWARE QUALITY WEEK (QW'95)
                               A SUCCESS!

With the closing of this year's Quality Week in San Francisco,
Software Research is delighted to report that the conference was even
more successful than in previous years!  Attendance was up again,
with the total number of attendees reaching over 650.

The four-day conference drew over 70 speakers, academics as well as
practitioners, along with vendors showing state-of-the-art testing
technologies.

Attendees came from all over the world, including Australia, Austria,
Belgium, Canada, Denmark, England, France, Germany, Ireland, Israel,
Korea, Latvia, Norway and Sweden, as well as, of course, the U.S.

At the conference, response from attendees was overwhelmingly favorable.
Robert Poston of IDE, who delivered one of the tutorial sessions, had
this to say about the conference:

   "Out of the three testing conferences I attended this year, Quality
   Week had the most technically qualified speakers, and the most even
   distribution between practicing and academic orientation.

   "It had to be good, because nobody threw fruit at my presentation,
   and even laughed at some of my jokes."

Extra copies of the Conference Proceedings are available for $150.  The
Tutorial Notes are also available, for $50.  If you would like to order,
or would like further information, email us at "qw@soft.com".

If you attended this year's conference and would like to send us com-
ments about it, please address your comments to "qw@soft.com" or
"ttn@soft.com" with the phrase "QW comments" in the "Subject" line.

Look for the QW'96 Call for Participation soon in comp.software.testing,
comp.software-eng and news.announce.conferences on InterNet. QW'96's
Conference Theme is "Quality Process Convergence", and the conference
dates are 21 - 24 May 1996.  We hope to see you there!

========================================================================

                      UPDATED FAQ OF STW PRODUCTS

For those of you who may not have seen the most recent FAQ about
testing tools, here is a partial list of entries from the current
version:

-----------------------------------------------------------------------------
Name of Tool  :  Software TestWorks (STW(tm))
Kind of Tool  :  Automated Testing Tools Suite
Company Name  :  Software Research, Inc.
Address       :  901 Minnesota Street
                 San Francisco, CA 94107 USA
Internet Addr :  info@soft.com
Phone and Fax :  (415) 550-3020; USA Only: (800) 942-SOFT
                 FAX: (415) 550-3030
Description   :  Automate and streamline your testing process with
                 Software Research's Software TestWorks (STW(tm)).
                 STW/Regression automates test execution and
                 verification for GUI and Client/Server applications.
                 STW/Coverage measures how well test cases exercise
                 a program at unit, system and integration levels.
                 STW/Advisor analyzes source code, providing insight
                 into resource management, quality and predictability.
Platforms     :  DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
                 SGI; Sun SPARC; x86 SCO, Solaris; x86 MS-DOS/MS-Windows
-----------------------------------------------------------------------------
Name of Tool  :  STW/Regression
Kind of Tool  :  Test management, execution and verification toolset
Company Name  :  Software Research, Inc.
Address       :  901 Minnesota Street
                 San Francisco, CA 94107 USA
Internet Addr :  info@soft.com
Phone and Fax :  (415) 550-3020; USA Only: (800) 942-SOFT
                 FAX: (415) 550-3030
Description   :  STW/Regression automates and manages tests on both text
                 and GUI-based applications. It increases test speed and
                 accuracy, improving cycle time and quality.
                 STW/Regression works for host and client-server
                 applications with automated load generation for
                 multi-user client-server applications; it also employs
                 a test management component for automated test
                 execution and management.
Platforms     :  DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
                 SGI; Sun SPARC; x86 SCO, Solaris; x86 MS-DOS/MS-Windows
-----------------------------------------------------------------------------
Name of Tool  :  STW/Coverage
Kind of Tool  :  Code coverage analysis toolset
Company Name  :  Software Research, Inc.
Address       :  901 Minnesota Street
                 San Francisco, CA 94107 USA
Internet Addr :  info@soft.com
Phone and Fax :  (415) 550-3020; USA Only: (800) 942-SOFT
                 FAX: (415) 550-3030
Description   :  The STW/Coverage multi-platform suite of testing tools
                 measures how well test cases exercise a program and
                 identifies what code has not been exercised at unit,
                 system and integration levels.  STW/Coverage can be
                 used with GUI and Client/Server development tools, and
                 is available for C, C++, Ada, COBOL, and FORTRAN.
Platforms     :  DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
                 SGI; Sun SPARC; x86 SCO, Solaris
-----------------------------------------------------------------------------
Name of Tool  :  STW/Advisor
Kind of Tool  :  Static analysis and metrics toolset
Company Name  :  Software Research, Inc.
Address       :  901 Minnesota Street
                 San Francisco, CA 94107 USA
Internet Addr :  info@soft.com
Phone and Fax :  (415) 550-3020; USA Only: (800) 942-SOFT
                 FAX: (415) 550-3030
Description   :  STW/Advisor provides static source code analysis and
                 measurement, using seventeen metrics to measure a
                 program's data, logic and size complexity. Test
                 data/file generation more fully tests applications by
                 creating additional tests from existing tests. Static
                 Analysis is available for C; metrics are available for
                 C, C++, Ada and FORTRAN.
Platforms     :  DEC Alpha; HP 9000/700, 800; IBM RS/6000; NCR 3000;
                 SGI; Sun SPARC; x86 SCO, Solaris
-----------------------------------------------------------------------------

========================================================================

                 THOUGHTFUL THREAD APPEARS ON INTERNET
                              Part 1 of 2

Editor's note:  The following posting, written by Cem Kaner, appeared
in the comp.software.testing newsgroup on InterNet.  We thought it
was worth passing along to you.

This is the first of two parts; the second half will appear in next
month's TTN/Online.

                         *      *      *      *

I posted a long message to a spinoff thread from this one, but almost no
one has responded to it. Might be, of course, that you read it and
decided it wasn't worth responding to, but I thought it might be that no
one noticed it because of where it was posted.

I spent a few hours drafting this note, and put in a few thoughts and
suggestions near the end that I haven't seen here before, so I decided
to re-post. If you've seen this message before, sorry for wasting your
time and storage.

The original post was in response to a thoughtful comment by Kent
Archie.

-- cem


Some of the comments in this series probably reflect differences in
applications. Some applications are more amenable to automation than
others.

I certainly also acknowledge that some tools are more effective than
others.

But I rarely see it acknowledged that automated techniques have often
trashed development projects (even though I hear stories like this in
face-to-face discussions).

My observation is that it takes non-trivially more time to create a
test that you will re-use.  You type or mouse more slowly and more
carefully.  You recheck your result, re-running the test and seeing
what it does.  If you have a tool that lets you edit the capture's
underlying sequences, you probably scan the code and edit out
unnecessary keystrokes (e.g., maybe you mistyped and edited a word
during the test), then you re-run the test to recheck it.  Some
people also tend to run simpler tests (easier to check) when they use
capture/replay.

Granted, this only doubles or triples the time it takes to run a
test, but it adds up.

: The hard part of automated testing is the bookkeeping you
: mentioned.  Cataloging the tests, recording the results, choosing
: which ones to run.  I can point you at a commercial test management
: system that handles most of this.

References to good test management systems are always welcome. (No sar-
casm here -- please send me references at cemkaner@netcom.com).

But these systems still take time. I tend to define a group of tests in
a carefully thought-through test matrix. Then I execute some or all of
the tests, checking off the cells as I run each test. This matrix
defines the tests and tracks my progress.  My style of checkmark (a
check for a pass, or an X with a bug report number) lets me track my
results.  Paperwork beyond this level is an additional expense, even
if the paperwork-support tool is well designed.
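
[Editor's note: to make the matrix idea concrete, here is a minimal C
sketch of the bookkeeping Cem describes -- a grid of cells, each
holding a check for a pass or an X with a bug report number.  The
feature and configuration names are invented for illustration only.]

    /* Minimal sketch of a test matrix: rows are features, columns
     * are configurations, each cell holds a checkmark. */
    #include <stdio.h>

    #define ROWS 3
    #define COLS 2

    enum mark { NOT_RUN, PASS, FAIL };

    struct cell {
        enum mark mark;
        int bug;             /* bug report number when mark == FAIL */
    };

    static const char *features[ROWS] = { "open", "save", "print" };
    static const char *configs[COLS]  = { "host", "client" };

    int main(void)
    {
        struct cell m[ROWS][COLS] = { { { NOT_RUN, 0 } } };
        int r, c;

        /* Check off cells as tests are run. */
        m[0][0].mark = PASS;
        m[1][0].mark = FAIL;  m[1][0].bug = 1042;

        for (r = 0; r < ROWS; r++)
            for (c = 0; c < COLS; c++) {
                printf("%-6s/%-6s: ", features[r], configs[c]);
                if (m[r][c].mark == PASS)
                    printf("check\n");
                else if (m[r][c].mark == FAIL)
                    printf("X (bug #%d)\n", m[r][c].bug);
                else
                    printf("not yet run\n");
            }
        return 0;
    }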

: The second point is that the automated tests are designed to pass,
: or rather that the code is written to pass the test.  You state
: that tests like these are unlikely to detect bugs after the release
: they were written for.  I agree; however, after the release, they
: are now testing a different aspect of the code.  Instead of finding
: bugs, they are testing for compatibility with the previous release
: and looking for side effects due to changes made in the code.  In a
: sense, the tests become the requirements.

The key point that I am making is that the process of automation focuses
your attention on a relatively smaller pool of tests -- at least during
the first release of the program that you test.

Time is finite. Testing staff is finite. Time you spend automating is
not spent developing and running other tests. Maybe -- maybe, maybe,
maybe -- you will save time later when you use these tests, and maybe,
maybe, maybe these regression tests will be effective tests in the
future. But stick with me for the first version of the program, for the
moment.

During this first version, if you spend a large amount of time on auto-
mation:

a) you will run fewer tests

b) you will run MANY fewer tests during the early wave of testing. If
   you believe that the cost of finding and fixing bugs rises exponen-
   tially with time since the code was written, then delaying finding
   bugs is expensive.  And if you also believe with me that the proba-
   bility that the project manager will agree to fix an average-severity
   bug declines with the time since the code was written, then delaying
   finding bugs imposes quality risks.

c) because you find fewer bugs during your initial waves of testing (and
   you find fewer bugs because you're running fewer tests because you're
   spending your time developing and documenting your automatons), the
   bug statistics you produce are misleadingly reassuring.  Some people
   (maybe many people, especially and including executives) will think
   that the schedule is in good shape because the software is in good
   shape. And they know the software is in good shape because you aren't
   finding many bugs in it.

d) because you run fewer tests, your test series is less diverse. One
   reader didn't understand why this is so.  When I say that the test
   series is less diverse, I mean that you are covering fewer conditions
   or fewer types of bugs. You cover fewer because you are running fewer
   distinct tests. You run fewer distinct tests because you spend more
   time per distinct test capturing and papering it.

Now as to the value of the series for regression, well that depends. As
the program changes, some of your test cases are just plumb wrong. The
more the program has changed in ways that perturb your comparison algo-
rithm, the more maintenance time you spend finding and fixing bugs in
your automated tests.

If you tend to send out lots of maintenance releases, all those regres-
sion tests will come in pretty handy. If you tend to make big changes,
all those regression tests might be pretty expensive.

By the way, one of the really annoying side-effects of big, automated
regression suites is that they can discourage improvements in the user
interface.  A significant UI change invalidates a significant
proportion of the regression suite, and so the change becomes too
expensive to make.  Seems to
me that in this case, the suite has become part of the problem, not part
of the solution.

<<>>

: Automating the tests doesn't add terribly much to the cost of
: testing but can aid greatly in the long-term quality of the test.

It really depends on the specific situation. The actual results can be
pretty dismaying. I've heard of dismaying results at big companies as
well as smaller companies. If you don't plan to make a BIG investment in
automation budget, but you decide to do LOTS of automation, you are in
for big trouble, no matter how big your company is, no matter how slick
your wonderful tool's sales literature is, etc.

Now, this isn't to say that all automation is bad or stupid. It's to say
that we need to make informed business decisions, or we risk making
things worse, not better.

Additionally, regression testing isn't the only way to use automation
tools. Instead of getting locked into the mindset of automating every-
thing, why not automate selectively, looking to increase near-term effi-
ciency and effectiveness, rather than betting that you are increasing
distant future efficiency?

For example, you can use automation tools to drive a program through a
long series of state transitions, logging status variables as you go, in
order to track down memory leaks. It can be really, really hard to track
these leaks "by hand". The tool helps you do things you couldn't do
before.
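
[Editor's note: a sketch of what such a leak-hunting loop might look
like in C.  The functions open_document(), close_document() and
memory_in_use() are hypothetical stand-ins for whatever hooks your
application and tool actually provide; the stubs below exist only so
the sketch compiles and produces a steadily climbing log.]

    /* Drive the program through repeated state transitions, logging
     * a status variable (memory in use) after each cycle.  A figure
     * that climbs without bound points to a leak. */
    #include <stdio.h>

    /* Placeholder stubs standing in for the application under test;
     * this pair deliberately leaks 4 "bytes" per cycle. */
    static long heap_used = 0;
    static void open_document(void)  { heap_used += 16; }
    static void close_document(void) { heap_used -= 12; }
    static long memory_in_use(void)  { return heap_used; }

    int main(void)
    {
        FILE *log = fopen("leak.log", "w");
        int i;

        if (log == NULL)
            return 1;
        for (i = 0; i < 1000; i++) {
            open_document();            /* transition in...       */
            close_document();           /* ...and back out again  */
            fprintf(log, "cycle %4d: %ld bytes in use\n",
                    i, memory_in_use());
        }
        fclose(log);
        return 0;
    }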

                             *     *     *

End Part 1

Part 2 will appear in the July 1995 edition of TTN/Online.

========================================================================

                             KRAZY KONTEST

Krazy Kontest is a technical challenge for testers! Each Krazy Kontest
focuses on a specific technical question or situation, and invites
responses.  While serious answers are expected, they are not required.
What's Krazy is that we don't necessarily know if there *IS* a correct
answer to each Krazy Kontest situation.

Each Krazy Kontest is scheduled to run for a couple of months.  We
promise to read and analyze everyone's response, to summarize all
responses, and to include the summary (including the best, most
quotable quotes!) in a future issue.

Be sure to identify your answer with the correct Krazy Kontest Serial
Number.  E-mail your responses to ``ttn@soft.com'', making sure that
the Subject: line says ``Krazy Kontest''.

                     Krazy Kontest Serial Number 2

Suppose the C function ``foo_2()'' has to be tested by choosing a
range of different input files.  You don't get to see the source code
of ``foo_2'', but you do know that ``foo_2'' will somehow look at
every character in the file you feed it.

What files containing what data would YOU feed to ``foo_2'' to test it
thoroughly?  Why?
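
[Editor's note: not an official answer, but to make the question
concrete, here is a C sketch that writes three of the obvious
boundary files one might start with -- an empty file, a file holding
every possible byte value, and a single very long line.  Readers can
doubtless do better.]

    /* Generate a few boundary-value input files for foo_2(). */
    #include <stdio.h>
    #include <stdlib.h>

    static FILE *must_open(const char *name)
    {
        FILE *f = fopen(name, "wb");
        if (f == NULL) { perror(name); exit(1); }
        return f;
    }

    int main(void)
    {
        FILE *f;
        int i;

        f = must_open("empty.dat");      /* zero characters at all */
        fclose(f);

        f = must_open("allbytes.dat");   /* every byte value 0-255 */
        for (i = 0; i < 256; i++)
            fputc(i, f);
        fclose(f);

        f = must_open("longline.dat");   /* one huge unbroken line */
        for (i = 0; i < 100000; i++)
            fputc('x', f);
        fclose(f);
        return 0;
    }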

========================================================================

                      APRIL KRAZY KONTEST RESULTS

Here are some of the results of the "Krazy Kontest" we ran in April:

*   *   *   *

Robert.L.Vanderwall@att.com wrote:

>>Suppose that your C program "foo_1" contains the passage:
>>
>>        foo_1(a,b,c,d,e)
>>        int a, b;
>>        float c, d;
>>        long e;
>>        {
>>        /* This is the expression to test... */
>>        e = (long) ((a + 1) / (exp(b, c) - d));
>>        }
>>
>>where exp(..) is a function that returns a float, and the rest of the
>>program takes care of setting a, b, c and d to values you select.
>>
>>What sets of initial values of a, b, c and d are the "best" test values
>>to use to make sure this expression is thoroughly tested?  Why?

The answer depends on what the requirements are!!!

ASSUME:
   requirements:  Write a routine to implement the following function:
                  e = (long) ((a+1) / (exp(b,c) - d))
                  where a and b are integers between MIN_INT and MAX_INT
                  and c and d are floating point numbers

SOLUTION:
   By inspection, the solution is correct.

COMMENTS:
Unless the requirements specify that a particular value should
be returned or that a specific behavior should be exhibited for
the case that exp(b,c) = d, then a 'divide by zero error' is
perfectly acceptable.



ALTERNATIVE:
ASSUME:
   requirement:  Write a routine to return the square root of a given
                 integer, a.
SOLUTION:       a=1, b=1, c=1, d=1
                a=2, b=1, c=1, d=1
                At least one of the above vectors will fail.  The
                program is incorrect.



Bob Vanderwall
---------------------------------------------------------------
The solutions above are my own and not the property or responsibility
of AT&T. Nor would they want them, being only half baked.
---------------------------------------------------------------
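
[Editor's note: a small C sketch of the boundary Bob's comment
singles out.  The function fake_exp() is only a stand-in -- it
returns c, so that the denominator can be steered to exactly zero --
and this foo_1 returns its result so the effect is visible.  On IEEE
machines the float division yields infinity, and converting infinity
to long is undefined: exactly the case a tester must probe.]

    #include <stdio.h>

    /* Stand-in for the kontest's two-argument exp(b,c). */
    static float fake_exp(int b, float c) { (void)b; return c; }

    static long foo_1(int a, int b, float c, float d)
    {
        return (long) ((a + 1) / (fake_exp(b, c) - d));
    }

    int main(void)
    {
        /* Denominator well away from zero: (9+1)/(2-1) = 10. */
        printf("%ld\n", foo_1(9, 0, 2.0f, 1.0f));

        /* Denominator exactly zero: exp(b,c) == d.  Undefined
         * behavior -- may print garbage, trap, or crash. */
        printf("%ld\n", foo_1(9, 0, 1.0f, 1.0f));
        return 0;
    }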

*   *   *   *

stewart crawford  wrote:

the program `foo_1' calculates a value for `e'
but returns no value to its calling routine;
since it has a null effect, it isn't worth
spending any time testing it!

stew crawford-hines

*   *   *   *

Thomas Newell  wrote:

Answer:
-------
Values of c and d in the range 0.999 <= x <= 0.99999999, to try to
catch the math processor returning 1 instead of 0 (i.e., e is too
large by 1).


Tom
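
[Editor's note: one concrete way to see the hazard Tom is aiming at.
The numbers below are our own, chosen so that the single-precision
error reaches the integer part: on IEEE-754 hardware 0.99999999f
rounds to exactly 1.0f, so the two truncated longs disagree.]

    #include <stdio.h>

    int main(void)
    {
        long   numer   = 200000000;   /* large enough that 1e-8
                                         matters after truncation */
        float  denom_f = 0.99999999f; /* rounds to exactly 1.0f   */
        double denom_d = 0.99999999;  /* still distinct from 1.0  */

        printf("float  arithmetic: %ld\n", (long)(numer / denom_f));
        printf("double arithmetic: %ld\n", (long)(numer / denom_d));
        /* Prints 200000000 vs 200000002 on IEEE-754 machines. */
        return 0;
    }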

*   *   *   *

We'll post the results of this month's "Krazy Kontest" in the August
issue of the TTN/Online.

========================================================================
---------------------->>>  CALENDAR OF EVENTS  <<<----------------------
========================================================================

The following is a partial list of upcoming events of interest.  ("o"
indicates Software Research will participate in these events.)

   o  June 12 - 15: USPDI Software Testing Conference
      Crystal Gateway Marriott
      Washington, D.C.
      Contact: Genevieve (Ginger) Houston-Ludlam
      tel: 301-445-4400
      fax: 301-445-5722

   +  June 20 - 22: PC Expo
      Jacob K. Javits Convention Center
      New York, NY
      Contact: Bruno Blenheim, Inc.
      tel: 800-829-3976
      fax: 201-346-1602

   o  June 20 - 23: ASQC First World Congress on Software Quality
      Fairmont Hotel
      San Francisco, CA
      Contact: Karen Snow
      tel: 415-388-1963
           800-248-1946

   +  July 9 - 14: CASE '95
      7th International Workshop on Computer-Aided Software Engineering
      Toronto, Ontario, Canada
      Contact: Francois Coallier
      tel: 514-448-5133
      fax: 514-647-3163
      E-mail: fcoallie@qc.bell.ca

   +  July 10 - 13: UNIX Open '95
      World Trade Center
      Mexico City, Mexico
      Contact: Wendy Hesketh
      tel: 525-604-4627
      fax: 525-543-2931
      E-mail: confrgrp@indirect.com

   o  August 9 - 11: COMPSAC 95
      Contact: Mr. David Kung
      tel: 817-273-3627
      fax: 817-273-3784
      E-mail: kung@cse.uta.edu

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN On-Line Edition is forwarded on the 15th of each month to sub-
scribers via InterNet.  To have your event listed in an upcoming issue,
please e-mail a description of your event or Call for Papers or Partici-
pation to "ttn@soft.com".  The TTN On-Line submittal policy is as fol-
lows:

o  Submission deadlines indicated in "Calls for Papers" should provide
   at least a 1-month lead time from the TTN On-Line issue date.  For
   example, submission deadlines for "Calls for Papers" in the January
   issue of TTN On-Line would be for February and beyond.
o  Length of submitted items should not exceed 68 lines (one page).
o  Publication of submitted items is determined by Software Research,
   Inc., and items may be edited as necessary.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To request a FREE subscription or submit articles, please send E-mail to
"ttn@soft.com".  For subscriptions, please use the keywords "Request-
TTN" or "subscribe" in the Subject line of your E-mail header.  To have
your name added to the subscription list for the biannual hard-copy ver-
sion of the TTN -- which contains additional information beyond the
monthly electronic version -- include your name, company, and postal
address.

To cancel your subscription, include the phrase "unsubscribe" or
"UNrequest-TTN" in the Subject line.

Note:  To order back copies of the TTN On-Line (August 1993 onward),
please specify the month and year when E-mailing requests to
"ttn@soft.com".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                      San Francisco, CA 94107 USA

                         Phone: (415) 550-3020
                       Toll Free: (800) 942-SOFT
                          FAX: (415) 550-3030
                          E-mail: ttn@soft.com

                               ## End ##