                     sss ssss       rrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr


         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======            January  1996             =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), On-Line Edition, is E-Mailed
monthly to support the Software Research, Inc. (SR) user community and
provide information of general use to the world software testing commun-
ity.

(c) Copyright 1995 by Software Research, Inc.  Permission to copy and/or
re-distribute is granted to recipients of the TTN On-Line Edition pro-
vided that the entire document/file is kept intact and this copyright
notice appears with it.

TRADEMARKS:  STW, Software TestWorks, CAPBAK/X, SMARTS, EXDIFF,
CAPBAK/UNIX, Xdemo, Xvirtual, Xflight, STW/Regression, STW/Coverage,
STW/Advisor and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================

INSIDE THIS ISSUE:

   o  SR's WWW Page Active

   o  Win95 Bug in Daylight Savings Feature: A Testing Tale

   o  Special Request for QW'96 Quick-Start Track Topics

   o  Book Review:  CAST Report (Reviewed by Hans Schaefer)

   o  Software Negligence and Testing Coverage (Part 1), by Cem Kaner

   o  When is Software "Correct Enough"? by Branislav Meandzija and Len
      Bernstein

   o  MS Software Essentially Bug Free (From Klaus Brunnstein)

   o  TTN SUBMITTAL POLICY

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

                S R ' s   W E B   P A G E   A C T I V E

SR is pleased to announce availability of a completely revised Web Page
located at:

        http://www.soft.com

In coming months we will be adding archival copies of every issue of
TTN-online for 1994 and 1995.  For now you will find the prior three
months' issues there.

In addition, you can check into other parts of the SR Web pages for
information on Quality Week, on SR's software test products, etc.

Also, look there for the new "Ask Dr. Test" announcement, a free SR
service that answers the most commonly asked testing questions.

Comments, suggestions and observations are welcome!

========================================================================

                 Win95 Bug in Daylight Savings Feature:
                             A Testing Tale

                                   by

                           Christopher Thorpe
                         t-cthorp@microsoft.com

When I set up my Win95 machine here in Redmond, Washington, a few weeks
ago, I set up my time zone as "(GMT-08:00) Pacific Time (US & Canada);
Tijuana" and checked the "Automatically adjust clock for daylight saving
changes" check box.

Tonight, having given up a battle with insomnia, I was working late in
the office.  When 2 am (Pacific Daylight Time) rolled around, Windows
gave me a friendly message that it had updated my clock to reflect the
Daylight Savings Time change.  I clicked "OK," checked the time
(correctly changed) in the clock window that had automatically appeared
on my screen, and clicked "OK" there as well.

I thought, "What a nifty feature."

Until 2 am (Pacific Standard Time).  I again got the friendly message
from Windows that it had updated my clock to reflect the Daylight Sav-
ings Time change.  I thought I had just done this, and sure enough, Win-
dows had re-reset my clock to 1 am.

I am confident that if I had not changed my clock settings to past 2 am
at that point, I could have sat here all night, clicking OK and watching
Windows set my clock back an hour every hour on the hour.  I even reset
my clock to 1:59 several times to see what would happen when it hit 2,
and sure enough, it went back an hour every time.

Since this "window" of opportunity happens only once a year for one hour
when most normal people are fast asleep, it doesn't seem like a serious
bug.  If the computer is off, or if the user doesn't click OK for an
hour, and the dialog box stays open until after 3 am Standard time, the
clock is reset correctly.

However, given that Windows so intelligently changes the "Current time
zone" in the "Date & Time" tab of the same Date/Time Properties dialog
from Pacific Daylight Time to Pacific Standard Time once it hits 2 am,
you might think that when it decrements the hour, it could set the time
zone to "Pacific Standard Time."  Then, Windows could only take back the
hour if the time zone were currently "Pacific Daylight Time."
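
To make the suggested fix concrete, here is a minimal sketch, in
Python, of the difference between the observed behavior and the
guarded version.  All names are hypothetical illustrations, not
Windows 95 internals:

    PDT, PST = "Pacific Daylight Time", "Pacific Standard Time"

    def buggy_fall_back(clock):
        # Fires every time the clock reaches 2 am -- including the
        # 2 am it just created by setting the clock back an hour.
        if clock["hour"] == 2:
            clock["hour"] -= 1

    def fixed_fall_back(clock):
        # Only take back the hour while still on daylight time, and
        # record the zone change so the rule cannot fire twice.
        if clock["hour"] == 2 and clock["zone"] == PDT:
            clock["hour"] -= 1
            clock["zone"] = PST

    clock = {"hour": 2, "zone": PDT}
    fixed_fall_back(clock)   # sets the clock back to 1 am, PST
    clock["hour"] = 2        # an hour later it is 2 am again...
    fixed_fall_back(clock)   # ...but the guard keeps the hour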

Oh well, you can't win 'em all.  Happy End of Daylight Savings!

========================================================================

            ************************************************
            INTERNATIONAL SOFTWARE QUALITY WEEK 1996 (QW'96)
            ************************************************

           San Francisco, California  USA --  21-24 May 1996

                 S P E C I A L   R E Q U E S T   F O R

            Q U I C K - S T A R T   T R A C K   T O P I C S

QW'96, the ninth in a continuing series of International Software Qual-
ity Week Conferences, provides state-of-the-art technology information
to the software engineering, software quality and software testing audi-
ence.

The QW'96 conference is organized as a triple-track (Applications, Tech-
nology, Management) event with kickoff and wrapup plenary sessions and
some in-conference mini-tutorials and/or debates on selected topics.
QW'96 will be preceded by a Tutorial Day in which leading experts give
in-depth presentations on various topics.

This year, based on attendee requests from the 1994 and 1995 events,
QW'96 will be adding a "quick-start" track intended to provide
attendees who are new to the software quality area with basic
information to help them better appreciate the technical details in the
other sessions of the conference.  There is time in the QW'96
conference schedule for approximately seven such 90-minute
mini-tutorial presentations.

The program committee has come up with some ideas (e.g. What does test
coverage mean?  What does ISO-9000 really do?  What does regression
testing actually involve?)  but would like to have your suggestions on:

o  What basic software quality questions should Quick-Start track mini-
   tutorials address?
o  Who should give the presentations?
o  What should the main points of the session be?
o  How important is the topic to overall software quality?

Please send all suggestions and Quick-Start Track ideas to Edward
Miller, QW'96 Program Chairman, by either E-mail (miller@soft.com) or
FAX to SR/Institute (+1 (415) 550-3030).

NOTE: Check out details on QW'96 at the URL:

                http://www.soft.com/QualWeek/index.html

========================================================================

                        BOOK REVIEW: CAST REPORT

         Reviewed by: Hans Schaefer, Software Test Consultant,
                    N-5240 Valestrandsfossen, NORWAY
                             Hascha@bbb.no

CAST Report, Computer Aided Software Testing, Third Edition, by Dorothy
Graham, Paul Herzlich and Cindy Morelli, 330 pages, published in May
1995 by Cambridge Market Intelligence, London House, Parkgate Road,
London SW11 4NQ.  Price: 495 GBP on paper, 595 GBP on CD-ROM, 1450 GBP
on CD-ROM with a network license.

This much money for a list of software testing tools?  Is it worth it?
There are several test tool catalogues, some of them even free.  The
CAST Report is expensive: you can hire a consultant for a day for the
same money.  But what do you need?

First you need an idea of what technology can do for you: which tasks
in the testing area can be automated, the pitfalls involved, and the
costs and benefits.  The CAST Report starts with a 75-page introduction
on just this.  Your next need is to find the right tool.  The CAST
Report gives you details about every tool commercially available in
Europe, and it gives you cross-reference lists to help you make a short
list even before you contact tool suppliers.

But let us read from the beginning.  In the first chapter, the Report
presents the test process, shows you what the test phases and work
steps are, and what can and cannot be automated.  It then proceeds to
describe the process of tool selection and implementation.  There are
many tools, but not all of them are right for you; here you find a
process for selecting the right one.  But the report does not stop
there.  Surveys show that a lot of nice test tools end up as shelfware.
The reasons are bad implementation, lack of follow-up, and a
disorganized test methodology.  The chapter on implementation gives you
many ideas for what to improve before you even think of automated
testing tools.  This initial work is important.  A good idea is to run
a pilot.  Another good idea is to have a champion for the tool.
Implementation must be planned and organized, and this will cost a lot,
often more than the tool.  Without proper implementation, the tool may
well fail.  Having read this far, you may stop and decide whether the
remainder of the report is interesting, because if your organization is
not ready...

Two other short overview chapters also give interesting information.
One is on directions and trends: what can CAST do for you now, and
where is it going?  There is a list of books, conferences, special
interest groups and training courses.  This chapter, however, should be
improved.  Some of the articles are annotated to show their
applicability for the reader.  The list of books, however, is neither
annotated nor ordered, leaving the reader wondering which book might be
good.  The section about conferences and groups is also incomplete.
There are conferences other than EuroSTAR dealing with testing (the
EOQC conferences and the Durham workshops, for example).  There are
also special interest groups, at least in Germany, Norway, Finland and
Sweden.  The section on testing seminars likewise lists only British
references.

Then we are introduced to the world of the tools themselves.  A
classification is given and explained (12 categories, from test running
to defect tracking), and then we are off into the tool selection
process.

The value, however, lies in assembling all of this in one place, as is
done here.  Even for me, after ten years of consulting in testing,
there were valuable new items.

What about the other pages in the report?  The next 155 pages are a
list of testing tools, one page per tool.  The tool information is
based on data given by the suppliers.  Readers should be cautious: not
all the functions listed for a tool may work in all supported
environments.  Some tools are mature, some are not, and you cannot tell
from the report, but at least you get an impression.  For every tool
you find a short description, the environments it can be applied in, a
supplier reference (even with the country given), the age of the tool
and its history, the number of installations, the price and the tool
features.  Sometimes a price or a number of installations is not given,
but no conclusions should be drawn from that.  The tools listed are
commercial tools available in Europe.  Freeware tools, operating system
or compiler utilities, and tools priced below five hundred pounds are
not included.  The next 45 pages give information about the tool
suppliers.  The list is quite complete; even support organizations in
different European countries are mentioned.  Most addresses seem to be
complete and correct, judging by the Scandinavian addresses given.  You
find the postal address, telephone and fax numbers, sometimes an e-mail
address, a contact person and some information about the history and
size of the company.

There is a table listing tool feature summaries.  This table can be
used to find tools that cover more than one category, or to check which
tools have a broad or narrow focus.  For every tool listed, you get a
weighted listing of which functionality categories the tool supports.
However, the list cannot be used to compare tools against each other:
the reader does not know how the entries were compiled, so
interpretation is difficult.

The last table is the list of changes since the previous CAST Report.
Some tools have changed names, some are no longer supplied, and so on.
Nice to know for an update of your private lists.

So will you get your money's worth?  With cheap or free tool
catalogues, you normally get some tool names and supplier addresses,
not much more.  With the CAST Report, you arrive at a quite short list
of interesting tools.  You can meet the suppliers with good knowledge
about their tools, and from studying the introduction you know what to
expect and where the pitfalls are.  Suppliers will react differently to
your enquiries if they know that you know a lot.  With cheaper tool
catalogues you will have to spend a lot of time contacting vendors and
getting very basic information.  This soon amounts to more than a day's
work, which is probably worth the price of the report.

A word of caution: the tool market is growing rapidly, and new tools
appear at a tremendous rate.  You should buy the Report only if you
really plan to buy test tools; but if you do have such plans, it is
worth it.

========================================================================

           SOFTWARE NEGLIGENCE AND TESTING COVERAGE (Part 1)

                                   by

                  CEM KANER, J.D., PH.D., ASQC-C.Q.E.

Editor's Note:  This article was first published in the Software QA
Quarterly, Volume 2, No. 2, 1995, pp. 18-26.  Copyright (c) Cem Kaner,
1995.  All rights reserved.  It is reprinted in TTN-Online, in four
parts, by permission of the author.

                             (Part 1 of 4)

Several months ago, a respected member of the software quality community
posed the following argument to me:

A program fails in the field, and someone dies. This leads to a trial.
When the QA manager takes the stand, the plaintiff's lawyer brings out
three facts:

1. The failure occurred when the program reached a specific line of
   code.

2. That line had never been tested, and that's why the bug had not been
   found before the product was released.

3. A coverage monitor was available. This is a tool that allows the tes-
   ter to determine which lines of code have not yet been tested. Had
   the QA manager used the tool, and tested to a level of complete cov-
   erage (all lines tested), this bug would have been found, and the
   victim would be alive today.

Therefore, the lawyer has proved that the company was negligent and the
victim's family will win the lawsuit.

The question is, what's wrong with this argument? Anything? After all,
the company had a well-understood tool readily available and if they had
only used it, someone would not have died. How could the company not be
liable for negligence?

This article explores the legal concept of negligence, and the technical
concept of coverage. The article advances three propositions:

1.      The phrase, "complete coverage", is misleading. This "complete-
ness" is measured only relative to a specific population of possible
test cases, such as lines of code, branches, n-length sub-paths, predi-
cates, etc. Even if you achieve complete coverage for a given population
(such as, all lines of code tested), you have not done complete, or even
adequate, testing.

2.      We can and should expand the list of populations of possible
test cases. We can measure coverage against each of these populations.
The decision as to whether to try for 1%, 10%, 50% or 100% coverage
against any given population is non-obvious. It involves tradeoffs based
on thoughtful judgment.

3.      Therefore, it is not obvious whether an incomplete level of cov-
erage of any particular population of possible test cases is negligent
or not. In particular, failure to achieve 100% line coverage might or
might not be negligent. The plaintiff will have to prove that the trade-
off made by the software company was unreasonable.

WHAT IS NEGLIGENCE?

Under negligence law, software development companies must not release
products that pose an _unreasonable_ risk of personal injury or property
damage. (Footnote 1) An injured customer can sue your company for negli-
gence if your company did not take _reasonable measures_ to ensure that
the product was safe.

------------
Footnote 1 Note the restriction on negligence suits.  Most lawsuits
over defective software are for breach of contract or fraud, partially
because most software defects don't involve personal injury or property
damage.
------------


"Reasonable" measures are those measures that a reasonable, cautious
company would take to protect the safety of its customers. How do we
determine whether a company has taken reasonable measures? One tradi-
tional approach in law involves a simple cost-benefit analysis. This was
expressed as a formula by Judge Learned Hand in the classic case of
United States v. Carroll Towing Co. (Footnote 2):

------------
Footnote 2 Federal Reporter, Second Series, volume 159, page 169 (United
States Court of Appeals, 2nd Circuit, 1947); for a more recent discus-
sion see W. Landes and R. Posner, The Economic Structure of Tort Law,
Harvard University Press, 1987.
------------

Let B be the burden (expense) of preventing a potential accident.  Let L
be the severity of the loss if the accident occurs.  Let P be the proba-
bility of the accident.  Then failure to attempt to prevent a potential
accident is unreasonable if

                               B < P x L.

For example, suppose that a software error will cause a total of
$1,000,000 in damage to your customers. If you could prevent this by
spending less than $1,000,000, but don't, you are negligent. If preven-
tion would cost more than $1,000,000, and you don't spend the money, you
are not negligent.
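
As a quick worked illustration, the Hand formula reduces to a one-line
comparison.  This Python sketch simply restates the example above (the
numbers are made up):

    # Learned Hand formula: failure to prevent a potential accident
    # is unreasonable (negligent) if B < P x L.
    def is_negligent(burden, probability, loss):
        return burden < probability * loss

    # $1,000,000 of damage, certain to occur if nothing is done:
    print(is_negligent(500_000, 1.0, 1_000_000))    # True: negligent
    print(is_negligent(2_000_000, 1.0, 1_000_000))  # False: not negligent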

In retrospect, after an accident has occurred, now that we know that
there is an error and what it is, it will almost always look cheaper to
have fixed the bug and prevented the accident. But if the company didn't
know about this bug when it released the program, our calculations
should include the cost of finding the bug:  What would it have cost to
make the testing process thorough enough that you would have found this
bug during testing?

For example, if a bug in line 7000 crashes the program, B would not be
the cost of adding one test case that miraculously checks this line,
then fixing the line. B would be:

*  the cost of strengthening the testing so that line 7000's bug is
   found in the normal course of testing, or

*  the cost of changing the design and programming practices in a way
   that would have prevented this bug (and others like it) in the first
   place.

Coming back to the coverage question, it seems clear that you can
prevent the crash-on-line-7000 bug by making sure that you at least exe-
cute every line in the program. This is "line coverage."

Line coverage measures the number / percentage of lines of code that
have been executed. But some lines contain branches -- the line tests a
variable and does different things depending on the variable's value. To
achieve complete "branch coverage", you check each line, and each branch
on multi-branch lines. To achieve complete "path coverage", you must
test every path through the program, an impossible task. (Footnote 3)

------------
Footnote 3 G. Myers, The Art of Software Testing, Wiley, 1979.
------------
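
To see concretely why complete line coverage is not the same as
adequate testing, consider this contrived toy example in Python (not
from the article):

    # One test executes every line of this function -- 100% line
    # coverage -- yet the crashing input is never exercised.
    def reciprocal_difference(a, b):
        diff = a - b        # covered by any call
        return 1.0 / diff   # covered too, but fails when a == b

    reciprocal_difference(3, 1)    # full line coverage; no failure seen
    # reciprocal_difference(2, 2)  # same lines -- ZeroDivisionError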

The argument made at the start of this article would have us estimate B
as the cost of achieving complete line coverage:  Is that the right
estimate of what it would cost a reasonable software company to find
this bug? I don't think so.

Line coverage is just one narrow type of coverage of the program.  Yes,
complete line coverage would catch a syntax error on line 7000 that
crashes the program, but what about all the other bugs that wouldn't
show up under this simple testing? Suppose that it would cost an extra
$50,000 to achieve complete line coverage. If you had an extra $50,000
to spend on testing, is line coverage what you would spend it on? Prob-
ably not.

Most traditional coverage measures look at the simplest building blocks
of the program (lines of code) and the flow of control from one line to
the next. These are easy and obvious measures to create, but they can
miss important bugs.

A great risk of a measurement strategy is that it is too tempting to
pick a few convenient measures and then ignore anything else that is
more subtle or harder to measure.  When people talk of "complete
coverage" or "100% coverage", they are using terribly misleading
language.  Many bugs will not be detected even with complete line
coverage, complete branch coverage, or even complete path coverage.

If you spend all of your extra money trying to achieve complete line
coverage, you are spending none of your extra money looking for the many
bugs that won't show up in the simple tests that can let you achieve
line coverage quickly. Here are some examples:

*    A key characteristic of object-oriented programming is that each
object can handle any type of data (integer, real, string, etc.) that
you pass to it.  Suppose that you pass data to an object that it wasn't
designed to accept.  Note that you won't run into this problem by
checking every line of code, because the failure is that the program
doesn't expect this situation; therefore it supplies no relevant lines
for you to test.  Also, since this is an unexpected situation, the
program might well crash or corrupt memory when it tries to deal with
it.

There is an identifiable population of tests that can reveal this type
of problem. If you pass every type of data to every object in your pro-
duct, you will find every error that involves an object that doesn't
properly handle a type of data that is passed to it. You can count the
number of possible tests involved here, and you can track the number
you've actually run. Clearly, there is a coverage measure here.
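
A minimal sketch of such a coverage measure, in Python (the object and
type names here are hypothetical, purely for illustration):

    from itertools import product

    # The population: every type of data passed to every object.
    objects = ["Cell", "Formatter", "Printer"]            # hypothetical
    type_names = ["int", "float", "str", "none", "list"]  # one per type

    population = set(product(objects, type_names))        # 15 test cases
    executed = {("Cell", "int"), ("Cell", "str")}         # tests run so far

    print(f"type/object coverage: {len(executed & population)}"
          f"/{len(population)}")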

                   ( T O   B E   C O N T I N U E D )

========================================================================

                   When is Software "Correct Enough"?

               by Branislav Meandzija  and Len Bernstein

   Author Bernstein notes: In the premier issue of JNSM I wrote with
   C.M. Yuhas about software testing signatures so that system
   developers will know when to stop testing and return to design.
   But I am still uncomfortable with the notion that a software
   system can ever be really tested, given today's practices of
   running systems for longer than their certified life-cycle
   testing intervals.

   System managers must constrain their systems to run 365 times,
   one day at a time, and not try to run them for a year.  Each time
   we start anew we need to run some software diagnostics and
   reinstall the executable code.  Only in this way can the system
   manager have confidence that the software is reasonably tested in
   the domain of use.  But we cannot "test in" reliability if it's
   not "designed in."

Software predictability is the biggest challenge to systems managers.
Small changes in the offered load, the environment or the code often
cause catastrophic changes in system performance.

With software fault tolerance emerging, and with the meta tools
described by Branislav Meandzija in this column, there is no excuse for
systems today to hang or crash.

Branislav's opinion:

"MetaWindow is an object-oriented drag-and-drop distributed systems and
protocol design and implementation environment. During its development
the development team found itself in an unusual predicament. The soft-
ware generated by the MetaWindow network compiler was crashing the SunOS
operating system. How would system managers deal with this, well they
would be irate. So the team had two choices, report the SunOS bug and
consider the MetaWindow generated software as tested or, redesign and
reimplement the MetaWindow code generation to hopefully circumvent the
SunOS bug.

The problem was quite deeply rooted.  It was at the heart of the
MetaWindow-generated programs.  SunOS couldn't harmonize with the
program's event handler, which is responsible for managing all
asynchronous interrupts, manipulating all program-internal queues and
stacks, and initiating all message processing upon the arrival of data.
To protect critical regions of the program's code from reentry, it is
necessary to block certain types of operating system interrupts.
Otherwise it is highly likely that one interrupt won't be finished
before another interrupt occurs, which will almost certainly lead to an
undefined program state and make the program crash.

The generated program's event handler in essence manages all
interactions with the world outside the program.  In that role it
multiplexes multiple data paths, such as OSI Service Access Points or
Unix or TCP/IP sockets, into a single path.  Although the arrival of
data on those paths happens asynchronously, the data path merging
operation must be accessible by only one data path at a time.  That
means that the operating system hosting the generated program has to
buffer interrupts until the generated program is ready to receive them.
The only way to prevent the arriving signals from interrupting the
generated program is to explicitly block them, in a manner similar to
the classical semaphore protection of critical regions.  That was the
basic idea behind the design of the generated program's interrupt
handler: it blocked interrupts to ensure that the pointer manipulation
in the data path merge process was always in a well-defined state.
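
For readers unfamiliar with the technique, here is a minimal sketch of
that kind of critical-region protection using POSIX signal masking,
shown in Python.  It illustrates the general idea only, not
MetaWindow's generated code:

    import signal

    # Block asynchronous signals while manipulating shared queue
    # pointers, then unblock so the OS can deliver anything it
    # buffered in the meantime.
    def merge_queues(queues, merged):
        critical = {signal.SIGUSR1, signal.SIGUSR2}
        signal.pthread_sigmask(signal.SIG_BLOCK, critical)
        try:
            for q in queues:
                while q:
                    merged.append(q.pop(0))   # protected pointer updates
        finally:
            signal.pthread_sigmask(signal.SIG_UNBLOCK, critical)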

Well, that is the theory.  In practice, however, this particular
version of SunOS was desperately trying to get rid of the interrupts.
So desperately, indeed, that it forgot about doing something else and
retrying later.  It just had to make this particular program take its
signals.  The generated program, however, was saying: not yet, wait a
second until I finish manipulating the merge queue pointers.  That made
the operating system so desperate that it dumped its core and started
rebooting itself.  System managers often must deal with such
ill-behaved software.

The development team's dilemma was clear.  The only other way to ensure
mutual exclusion of interrupts without blocking them is to include part
of the operating system within the generated program.  That solution,
however, would not only make the generated program quite large but
would also have a deteriorating effect on its performance.  The only
remaining possibility was to try to make this particular SunOS live in
harmony with the generated programs by minimizing and refining the
interrupt blocking process itself.

The principal question the above anecdote poses is: when is software
"correct enough"?  Any program runs within an operating environment.
Any operating environment, including the most thoroughly designed and
tested ones, has a number of lurking bugs which are just waiting to
show their ugly heads in the most unexpected situations.  The only way
to generate correct, never-failing programs is

(a)  to prove them correct, and

(b)  to test them in all parameter variations of the operating
     environment.

Even if (a) is theoretically and practically feasible, (b) is not.  So,
how much correctness is "enough correctness"?  There are many different
answers to that question.  In practice the answer is a trade-off
between a variety of factors, mostly dependent on the economics or the
project budget.  It will never be possible to guarantee error-free
software, or to guarantee that critical failures won't happen in the
most critical or life-threatening moments.  We will always strive to
make software correct, but just as we wonder whether our airplane is
the next one to crash, we will continue wondering whether the software
we depend on is 'correct enough'."

========================================================================

                    MS software essentially bug-free

 From: Klaus Brunnstein 

In an interview in the German weekly magazine FOCUS (no. 43, October
23, 1995, pages 206-212), Microsoft's Mr. Bill Gates made some
statements about the software quality of MS products.  After lengthy
inquiries about how PCs should and could be used (including some angry
comments on some questions which Mr. Gates evidently did not like), the
interviewer comes to the storage requirements of MS products; the
interview ends with the following dispute (translated by the submitter;
at some interesting points, I added the German phrase):

Focus: But it is a fact: if you buy a new version of a program to
       overcome faults of an old one, you unavoidably get more features
       and need more storage.

Gates: We have strong competitors and produce only products which we
       believe we can sell.  New versions are not offered to cure
       faults.  I have never heard of a less relevant reason to bring
       a new version to market.

Focus: There are always bugs in programs.

Gates: No. There are no essential bugs ("keine bedeutenden Fehler") in
       our software which a significant number of users might wish to
       be removed.

Focus: Hey?  I always go crazy when my Macintosh Word 5.1 hides page
       numbers under my text.

Gates: Maybe you make errors; have you ever thought about that?  It
       often appears that machine addicts ("Maschinenstuermer") cannot
       use software properly.  We install new features because we were
       asked to.  Nobody would buy new software because of bugs in an
       old version.

Focus: If I call a hotline or a dealer and complain about a problem,
       I am told: "Get the update to version 6."  Everybody has such
       experiences.  This is how the system works.

Gates: We pay 500 million dollars a year for telephone advice.  Less
       than 1% of the calls we get have to do with software bugs.
       Most callers want advice.  You are kindly invited to listen to
       the millions of calls.  You must wait for weeks until someone
       complains about a bug.

Focus: But where does this feeling of frustration come from, the one
       that unites PC users?  Everybody is confronted every day with
       programs that do not work as they should.

Gates: That is just talk, following the motto: "Yes, I also know
       about this bug."  I understand this as a sociological
       phenomenon, not a technical one.

The RISK?  While there is NO risk that experienced users believe Mr.
Gates, there are two serious ones: first, that politicians (who rarely
experience the lowlands of PCs but develop their "political visions"
from their inexperience) may believe him.  Second and worse: that Mr.
Gates and his enterprise believe what he is saying, and act accordingly
:-)

Maybe someone can inform Mr. Gates that it was HIS enterprise which
recently distributed the first macro virus, WordMacro.Concept, on a
CD-ROM (to OEM customers in July, and to participants of a Windows 95
seminar in Germany in September); but indeed, this is NOT a BUG but an
ATTACK on unaware users :-)  According to a German saying, those whose
reputation is ruined may live free and easy ("Ist der Ruf erst
ruiniert, lebt sich's doppelt ungeniert!").

Editor's Note:  Reprinted with permission.

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN On-Line Edition is forwarded on the 15th of each month to sub-
scribers via InterNet.  To have your event listed in an upcoming issue,
please e-mail a description of your event or Call for Papers or Partici-
pation to "ttn@soft.com".  The TTN On-Line submittal policy is as fol-
lows:

o  Submission deadlines indicated in "Calls for Papers" should provide
   at least a 1-month lead time from the TTN On-Line issue date.  For
   example, submission deadlines for "Calls for Papers" in the January
   issue of TTN On-Line would be for February and beyond.
o  Length of submitted items should not exceed 68 lines (one page).
o  Publication of submitted items is determined by Software Research,
   Inc., and may be edited as necessary.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To request a FREE subscription or submit articles, please send E-mail
to "ttn@soft.com".

TO SUBSCRIBE: please use the keywords "Request-TTN" or "subscribe" **AND
INCLUDE YOUR EMAIL ADDRESS** in the Subject line of your E-mail header.

To have your name added to the subscription list for the biannual hard-
copy version of the TTN -- which contains additional information beyond
the monthly electronic version -- include your name, company, and postal
address in the body of the mail message.

TO CANCEL: include the phrase "unsubscribe" or "UNrequest-TTN" **AND
YOUR EMAIL ADDRESS** in the Subject line.

Note:  To order back copies of the TTN On-Line (August 1993 onward),
please use the keywords "Back issue request" in the Subject line, and
please specify the month(s) and year(s) in the body of your message when
E-mailing requests to "ttn@soft.com".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                          901 Minnesota Street
                      San Francisco, CA 94107 USA

                         Phone: (415) 550-3020
                       Toll Free: (800) 942-SOFT
                          FAX: (415) 550-3030
                          Email: ttn@soft.com
                        WWW: http://www.soft.com