sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======            November 1997            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1997 by Software Research,
Inc.

========================================================================

INSIDE THIS ISSUE:

   o  Quality Week 1998, San Francisco, California, May 1998, Call for
      Participation

   o  28th Fault Tolerant Computing Conference, Munich, June 1998

   o  Formal Methods: A Practitioner's Companion

   o  Property Based Testing: A New Approach to Testing for Assurance,
      by George Fink and Matt Bishop (Part 1 of 2)

   o  TCAT for Java for Windows '95/NT Now Available

   o  Third International Baltic Workshop on Data Bases and Information
      Systems (Call for Participation), April 1998

   o  AQuIS'98 Deadline Extended

   o  7th Annual Ada-Belgium Seminar (Friday, 28 November 1997),
      Brussels, Belgium

   o  Global Software Competitiveness, by Don O'Neill

   o  Mailing List Policy Statement

   o  TTN Submittal Policy

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

       ELEVENTH INTERNATIONAL SOFTWARE QUALITY WEEK 1998 (QW'98)

                  Conference Theme: Countdown to 2000

              San Francisco, California -- 26-29 May 1998

QW'98 is the eleventh in a continuing series of International Software
Quality Week Conferences focusing on advances in software test
technology, quality control, risk management, software safety, and test
automation.  Software analysis methodologies, supported by advanced
automated software test methods, promise major advances in system
quality and reliability, and help assure continued competitiveness.

The mission of the QW'98 Conference is to increase awareness of the
importance of software quality and methods used to achieve it.  It seeks
to promote software quality by providing technological education and
opportunities for information exchange within the software development
and testing community.

The QW'98 theme "Countdown to 2000" draws attention to the Year-2000
question, but also emphasizes how important it is to arrange to
accomplish system changes and testing under pressure, against a
deadline, and within real-world economic constraints.  Many believe that
the Year-2000 problem will bring much-needed focus to all aspects of
software quality, and may foster very-strong interest in software
quality questions of all kinds in the coming years.

IMPORTANT DATES:

        Abstracts and Proposals Due:            16 January 1998
        Notification of Participation:          27 February 1998
        Camera Ready Materials Due:             31 March 1998

INFORMATION

For complete information on the QW'98 Conference, send Email to
qw@soft.com, phone SR Institute at +1 (415) 550-3020, or, send a FAX to
SR/Institute at +1 (415) 550-3030.

Candidate product/service vendors should contact the QW'98 team early as
exhibit space is strictly limited.

The complete Call For Papers can be found at the QW'98 Conference
Website:

        http://www.soft.com/QualWeek/QW98/call.html

Or for a copy of the CFP in ASCII form send Email to qw@soft.com

========================================================================

                      28th FTC, Munich, June 1998

The 28th International Symposium on Fault-Tolerant Computing, to be held
in Munich in June 1998, is the premier conference in dependable
computing.
The paper submission time is NOW! -- abstracts this week, and the full
paper due in early December.

This year there are several new initiatives and opportunities to
participate in and contribute to the conference: an Industrial Council
with CTO-level participation from major IT companies, developing a
vision for research problems facing the industry; tutorials on important
areas in dependability; a work-in-progress session for late-breaking
research; and, in addition, a workshop on embedded computers in the
automobile industry held in conjunction with the conference.

To learn more, look at the web site:  http://chillarege.com/ftcs which
also carries up-to-date information on the current list of abstracts
submitted, customer satisfaction surveys from the last conference,
details on awards, and, as a new addition, the list of papers from
recent conferences.

Ram Chillarege & Jean Arlat
Program Co-Chairs
ftcs@chillarege.com

========================================================================

               Formal Methods: A Practitioner's Companion

Quality control via formal methods is a powerful and effective means to
assure system safety, and a new report by NASA/JPL writers is worth
noting.  The report, the result of a group effort headed by Dr. John
C. Kelly of JPL (Email: John.C.Kelly@jpl.nasa.gov), describes
experimental use of formal methods of verification on a real-life NASA
example.  Email to Dr. Kelly may get one of the remaining printed
copies, or point your browser to:

        http://eis.jpl.nasa.gov/quality/Formal_Methods/

========================================================================

                 PROPERTY-BASED TESTING: A New Approach
                       to Testing for Assurance
                             (Part 1 of 2)

                              George Fink
                            Sun Microsystems
                       (george.fink@eng.sun.com)

                                  and

                              Matt Bishop
                    University of California, Davis
                        (bishop@cs.ucdavis.edu)

   NOTE: This paper appeared in ACM SIGSOFT's Software Engineering
   Notes, Volume 22, No. 4, July 1997 and is reprinted here with
   permission of the ACM.

ABSTRACT:  The goal of software testing analysis is to validate that an
implementation satisfies its specifications. Many errors in software are
caused by generalizable flaws in the source code. Property-based testing
assures that a given program is free of specified generic flaws. It uses
property specifications and a data-flow analysis of the program to guide
evaluation of test executions for correctness and completeness.

                              Introduction

Analysts test computer programs to determine if they meet reliability
and assurance goals. In other words, testing validates semantic
properties of a program's behavior. In order to do this, the actual
program must be tested at the source code level, not at some higher-
level description of the program. However, to validate high-level
properties, the properties must be formalized, and the results of the
testing related formally to the properties.

Property based testing [FL94, FKAL94, FHBL95, Fin95] is a testing
methodology that addresses this need. The specification of one or more
properties drives the testing process, which assures that the given
program meets the stated property. For example, if an analyst wants to
validate that a specific program correctly authenticates a user, a
property-based testing procedure tests the implementation of the
authentication mechanisms in the source code to determine if the code
meets the specification of "correctly authenticating the user."

This paper introduces an approach to property based testing and an
implementation of that approach. First, the analyst specifies the target
property in a low-level specification language called TASPEC (Tester's
Assistant SPECification language). The program is sliced [Wei84] and
code irrelevant to the property disregarded. The tool automatically
translates the TASPEC specification into a test oracle that will check
the correctness of program executions with respect to the desired
property. A new path-based code coverage metric called "iterative
contexts" [Fin95, Fin96] efficiently captures the slice-based
computations in the program.
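The oracle idea above can be sketched in a few lines. This is a hedged
illustration only, not the actual TASPEC tooling: the names `make_oracle`
and `opens_are_closed` are invented here, and a real property would be
compiled from a TASPEC specification rather than hand-written.

```python
# Illustrative sketch: a property over recorded execution traces,
# wrapped as a pass/fail test oracle. A trace is an ordered list of
# (event_name, payload) pairs observed while the program runs.

def make_oracle(property_check):
    """Wrap a trace predicate as an oracle that judges one execution."""
    def oracle(trace):
        return "PASS" if property_check(trace) else "FAIL"
    return oracle

# Example property: every "open" event is matched by a "close" event.
def opens_are_closed(trace):
    open_count = 0
    for event, _payload in trace:
        if event == "open":
            open_count += 1
        elif event == "close":
            open_count -= 1
    return open_count == 0

oracle = make_oracle(opens_are_closed)
print(oracle([("open", "f1"), ("close", "f1")]))                 # PASS
print(oracle([("open", "f1"), ("open", "f2"), ("close", "f1")])) # FAIL
```

The point of the design is that the oracle is derived from the property
alone; it needs no knowledge of the program's internals, only the events
its instrumented locations emit.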

Property based testing speaks to the following questions:

   o What is to be accomplished or established via testing?

   o What test data should be used?

   o When has enough testing been carried out?

   o How is it determined if a test is a success or a failure?

This paper presents an overview of property based testing, its goals,
and techniques used to accomplish these goals. The next section defines
the problem, and discusses previous work. The next section describes
property based testing in general and its components in particular,
illustrating property based testing through an example.  The final
section concludes with future directions for work on this methodology.

                           Problem Statement

Trust that software programs work correctly and precisely is based upon
the belief that authors of the programs have detected and fixed flaws in
the design and implementation. Many potential flaws can be detected and
avoided; however, systematic and formal analysis (both static and
dynamic) of the finished program increases the assurance that the
software is without critical flaws.

Most errors in programs result from programming and design mistakes.
Many well-known mistakes are still common. For example, errors in bounds
checking, race conditions, and authentication continue to be the bane
of privileged Unix programs.

Specifying well-known mistakes formally presents a clear picture of
testing goals. Then, techniques are needed to map these formal
descriptions to tests of actual code. The tests need to provide
formalizable results that relate to the flaw descriptions. The whole
process should be as automatic as possible, with reusable generic
specifications.

                              Related Work

Property based testing is complementary to software engineering life
cycle methodologies. Analysis and inspection of design, requirements,
and code help to prevent flaws from being introduced into source code.
Property based testing validates that the final product is free of
specific flaws. Because property based testing concentrates on generic
flaws, it is ideal for focusing analysis late in the development cycle
after program functionality has been established.

Specifications state what a system should or should not do. Many
specification languages support precise expression of requirements, such
as Z [Dil90] and VDM [AI91].  Treating specifications as bounds of
program behavior suggests that test oracles can be derived from
specifications; some specification languages like Larch [GH93] and TAOS
[Ric94] allow this to be done automatically.  Further, specifications
can guide the generation of test data; ADL [CRS96], TAOS [Ric94], and
VDM [DF93] allow this as does the TASPEC language presented here. The
advantage of using specifications is the formalism they establish for
verifying proper (or improper) program behavior.

Specifications are the basis of formal analytical techniques.
Determining which assumptions (axioms) are correct is a substantial
task, and failing to do so would invalidate the analysis.  For example,
if an operation has an unanticipated side-effect during execution in
some situations, formal analysis cannot determine the impact of the
side-effect upon correctness.  While testing has similar problems, it
does test the actual execution of the program, and can determine the
precise output corresponding to a given input. For example, thorough
testing can determine unanticipated side effects.

Coverage metrics measure testing completeness: how much of the program
has been tested?  For property-based testing, a coverage metric must be
strong enough to provide formal assurance, but also be feasible to
implement and utilize.  Property based testing uses a new metric called
Iterative Contexts, which strikes a balance between simple definition-
use (def-use) pair metrics [Las90, Nta84, CPRZ89]  and stronger but
impractical path coverage metrics [RW85].

                      Testing to Validate Programs

A test consists of a set of executions of a given program using
different input data for each execution; its purpose is to determine if
the program functions correctly. A test has a negative result if an
error is detected during the test (i.e., the program crashes or a
property is violated). A test has a positive result if a series of tests
produces no error, and the series of tests is "complete" under some
coverage metric. A test has an "incomplete" result if a series of tests
produces no errors but the series is not complete under the coverage
metric.
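The three-way classification just defined can be made concrete. The
helper below is my own naming, not from the paper; a coverage metric is
modeled abstractly as a set of required elements.

```python
# Classify a test series as the paper defines it: "negative" if any
# execution detects an error, "positive" if error-free and complete
# under the coverage metric, "incomplete" otherwise.

def classify(errors, covered, required):
    """errors: list of booleans, True = that execution detected an error;
    covered/required: sets of coverage-metric elements."""
    if any(errors):
        return "negative"      # an error was detected during the test
    if required <= covered:
        return "positive"      # no errors, and the metric is satisfied
    return "incomplete"        # no errors, but coverage falls short

print(classify([False, False], {"c1", "c2"}, {"c1", "c2"}))  # positive
print(classify([False, True],  {"c1"},       {"c1", "c2"}))  # negative
print(classify([False],        {"c1"},       {"c1", "c2"}))  # incomplete
```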

It is impossible to execute a program on all possible data. So, testing
must approximate this, which may lead to an incorrect validation.
However, for a testing process to be valuable, it must validate a
program with respect to a property with a high degree of certainty.
Property based testing addresses this conflict with iterative contexts,
a new data-flow coverage metric.

It is important to understand the relationship between testing and
formal verification so that the two can be compared. The purpose of
property based testing is to establish formal validation results through
testing. To validate that a program satisfies a property, the property
must hold whenever the program is executed. Property-based testing
assumes that the specified property captures everything of interest in
the program, because the testing only validates that property.
Additionally, property-based testing assumes that the completeness of
testing can be measured structurally in terms of source code.

The property specification guides dynamic testing of the program.
Information derived from the specification determines what points in the
program need to be tested and if a test execution is correct. The
iterative contexts coverage metric, based upon these points, determines
when testing is complete.

Therefore, in property-based testing, checking the correctness of each
execution together with a description of all the relevant executions of
the program validates a program with respect to a given property.

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

                    +---------------+    +------------------+
                    |    Source     |    | Program-specific |
                    |     Code      |    |  Specifications  |
                    +---------------+    +------------------+
                            |                      |
                            V                      V
                   +---------------------------------------+
                   |+---------+  +----------+ +-----------+|
+-----------+      || Program |  | Coverage | | Test Data ||
| +-----------+    || Slicer  |  | Analyst  | | Generator ||
| | +-----------+  |+---------+  +----------+ +-----------+|  +---------+
| | |   Known   |  |                                       |  | UNIX    |
+-| |   Flaw    |=>|    +-----------+  +------------+      |<=| Security|
  +-| Libraries |  |    | Execution |  |    Well-   |      |  | Specif- |
    +-----------+  |    |  Monitor  |  | behavedness|      |  | ication |
                   |    |           |  |   Checker  |      |  +---------+
 +--------------+  |    +-----------+  +------------+      |
 | System Call  |  |                                       |
 |Specifications|=>|+--------------------+                 |
 +--------------+  || Tester's Assistant |                 |
                   |+--------------------+                 |
                   +---------------------------------------+
                         |           |               |
           ______________|           |               |_______
           |                         |                      |
           V                         V                      V
     +--------------+           +--------+         +-----------------+
     |    Measure   |           |  Test  |         |    Potential    |
     |      of      |           |  Data  |         |   and Actual    |
     | Completeness |           |        |         |      Flaws      |
     +--------------+           +--------+         +-----------------+


      Figure 1: Property-based testing and the Tester's Assistant.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Tester's Assistant

Figure 1 shows an overview of the implementation of property based
testing by the Tester's Assistant. To test the source code of a program,
TASPEC specifications from a variety of sources are used. Program-
independent specifications include system call, security, and generic
flaw specifications. If necessary, program-specific specifications can
also be used. The Tester's Assistant analyzes and tests the code with
respect to the specifications. The results of the property based
testing process are the test suite, the coverage results, and any flaws
discovered during the test.

Many properties are defined independently of specific programs (for
example, array bounds, race conditions, and authentication) and can be
grouped together in libraries of properties. These libraries form models
of system behavior, which are significant analytical objects in their
own right. They can be reused and also analyzed by independent means to
assess their completeness (through a previous iteration of
property-based testing, perhaps).

Iterative Contexts

The iterative contexts coverage metric is an ideal metric for satisfying
property validation requirements.  Iterative contexts are more powerful
than other data-flow metrics [Las90, Nta84, CPRZ89], but are small
enough so they can be satisfied by a reasonable test suite. Given a set
of variables at a point in the program that are of interest, the optimal
metric requires all possible results for that set of variables; for most
sets this requires an infinite number of data values. Metrics based upon
sequences of assignments within the slice approximate this optimum for
given programs.

An iterative context is a sequence of assignments defining a sub-path of
a possible program execution. The assignments are taken from the program
slice and represent a possible computation of a value important to the
target property. Taken together, all of the contexts represent many of
the possible computations of values relevant to the property. It is not
possible to represent with a finite set of input data the infinite
number of possible computations for some loops, so in those cases
iterative contexts will not completely cover all behavior relevant to a
property. In a complete test suite, every context must be represented by
at least one test execution in the suite.
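A much-simplified sketch of the coverage check follows. The real
metric's enumeration of contexts from the slice is more involved; here a
"context" is just a required sequence of assignment sites, and it counts
as covered when some test execution traverses exactly that sub-path.
All names are illustrative.

```python
# Check which required assignment sub-paths ("contexts") the test
# suite's recorded execution paths have actually exercised.

def contains_subpath(path, context):
    """True if `context` occurs as a contiguous sub-path of `path`."""
    n = len(context)
    return any(tuple(path[i:i + n]) == tuple(context)
               for i in range(len(path) - n + 1))

def uncovered_contexts(contexts, executions):
    """Return the contexts that no recorded execution traversed."""
    return [c for c in contexts
            if not any(contains_subpath(p, c) for p in executions)]

contexts = [("a1", "a2"), ("a1", "a3")]   # required assignment sub-paths
executions = [["a1", "a2", "a4"]]         # paths the test suite took
print(uncovered_contexts(contexts, executions))  # [('a1', 'a3')]
```

A non-empty result tells the analyst which computations still need test
data, which is exactly how the metric drives additional test selection.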

                      Static Analysis and Slicing

Program slicing [Wei84], the extraction of all code affecting
conformance to a property, reduces the amount of code that a human
tester must inspect manually. Applying automatic analysis tools to the
slice rather than to the whole program also aids the analyst.
Calculating a slice requires detailed global dependencies; this
information is also used to generate iterative contexts.
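The slicing step can be sketched as a reachability computation over the
dependency information just mentioned. This is a toy backward slice
under an assumed statement-level dependency map, not the Tester's
Assistant's actual algorithm.

```python
# Toy backward slice: starting from the statements named in the slicing
# criterion (those relevant to the property), follow data dependencies
# backward and keep everything reachable; the rest is disregarded.

def backward_slice(deps, criterion):
    """deps: statement -> set of statements it depends on;
    criterion: the statements relevant to the property."""
    kept, frontier = set(), list(criterion)
    while frontier:
        stmt = frontier.pop()
        if stmt in kept:
            continue
        kept.add(stmt)
        frontier.extend(deps.get(stmt, ()))
    return kept

# s4 depends on s2, which depends on s1; s3 is unrelated to s4.
deps = {"s4": {"s2"}, "s3": {"s1"}, "s2": {"s1"}}
print(sorted(backward_slice(deps, {"s4"})))  # ['s1', 's2', 's4']
```

Statement s3 is sliced away: it never influences the criterion, so
neither a human inspector nor the coverage analysis needs to consider it.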

                                 TASPEC

TASPEC, the specification language used in the Tester's Assistant and
developed specifically for property-based testing, has primitive
constructs which enable it to be translated automatically into slicing
criteria and test oracles.  TASPEC includes basic logical and temporal
operators as well as location specifiers, which associate events with
code features. These events provide the primitive data for analyzing
higher-level semantic features of the program. TASPEC is a flexible
low-level specification language well suited for specifying a wide range
of properties and deriving tests from the property specifications.

Using location specifiers, generic program-independent properties in
TASPEC map automatically to source code. Therefore, test oracles can be
generated independently of descriptions of specific modules or
functions. With the emphasis on properties and not on full
specifications, test oracles can handle a wider class of behavior than
that rigidly defined by functional specifications.  Translations between
other specification languages and TASPEC can provide additional
flexibility to the specification and testing phases of development.
Helmke shows how translations from Z to TASPEC can assist in
requirements traceability [Hel95].

                           Execution Monitors

High-level execution monitors derived automatically from property
specifications in TASPEC become test oracles that assess the
correctness of executions. Location specifiers produce primitive events
for the specification state, and the execution monitor processes these
elements to raise higher-level events. The execution monitor checks for
consistency between events and the property specification.  Therefore,
checking the adherence of a program execution to a complex property
specification is automatic.
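The monitor idea can be illustrated with a small stateful checker. This
is a hedged sketch, not the TASPEC-generated machinery: the event names
and the `Monitor` class are invented here to show how primitive events
fold into specification state and are checked as they arrive.

```python
# Illustrative execution monitor: primitive events from instrumented
# locations update the specification state; each event is checked
# against the property ("no file access before authentication").

class Monitor:
    def __init__(self):
        self.authenticated = False
        self.violations = []

    def event(self, name):
        if name == "auth_ok":
            self.authenticated = True
        elif name == "new_user":
            self.authenticated = False   # fresh login attempt resets state
        elif name == "file_access" and not self.authenticated:
            self.violations.append("file access without authentication")

m = Monitor()
for e in ["new_user", "file_access", "auth_ok", "file_access"]:
    m.event(e)
print(m.violations)  # ['file access without authentication']
```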

                 Example Use of Property based testing

This section describes testing a version of the Unix ftpd (File
Transfer Protocol (FTP) daemon) program [CER] with property-based
testing.  Property based testing has eight steps:

   1. Selecting a property; the property is specified in TASPEC
   (currently implemented)

   2. Static analysis and slicing (currently implemented)

   3. Program instrumentation (currently implemented)

   4. Initial test case selection and execution

   5. Coverage evaluation (partially implemented)

   6. Additional test case selection and execution

   7. Correctness evaluation (partially implemented)

   8. Repeat the last three steps as necessary

Testing ftpd with respect to an authentication property reveals a flaw
in ftpd's authentication code.

                    Description of ftpd and Its Flaw

Ftp is a Unix program implementing the FTP protocol for transmitting
files across a network.  Ftpd, the program described here, is a server
program that accepts file requests and processes authentication and
other utility commands from remote client programs.

In the version of ftpd released with SunOS 3.2, a security flaw allows
any user to gain permissions to read or write files owned by any user on
the system (including root) [CER]. To do so, the user logs on with his
or her normal user name and password. As a part of the correct
authentication, a flag in the program is set. The flag records whether
the user name has been authenticated. When a second user name is
entered, the flag is never reset, so even if an incorrect password is
entered for the second user name, the program thinks that the second
user name has been authenticated; therefore, the user has the access
privilege of the second user name.  Figure 2 is a simplified flow-chart
that illustrates the flaw.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

   [The original ASCII flowchart was garbled in transmission. It shows
    the ftpd login loop: input username, input password, check the
    password; on success, set the authenticated flag; if the flag is
    set, file access is granted. The flag is never cleared when a new
    username is entered, which is the flaw.]

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

                     Figure 2: Ftpd Flaw Flowchart.
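The flaw described above can be modeled in a few lines. The class and
passwords below are invented for illustration; the real daemon is C
code, but the essential state machine is the same.

```python
# Minimal model of the SunOS 3.2 ftpd authentication flaw: the
# logged_in flag is set on a correct login but is never cleared when a
# second USER command arrives.

PASSWORDS = {"alice": "a-secret", "root": "r-secret"}

class BuggyFtpd:
    def __init__(self):
        self.logged_in = False
        self.user = None

    def cmd_user(self, name):
        self.user = name           # BUG: logged_in is not reset here

    def cmd_pass(self, password):
        if PASSWORDS.get(self.user) == password:
            self.logged_in = True

    def can_access_files(self):
        return self.logged_in

ftpd = BuggyFtpd()
ftpd.cmd_user("alice"); ftpd.cmd_pass("a-secret")  # legitimate login
ftpd.cmd_user("root");  ftpd.cmd_pass("wrong")     # bad password...
print(ftpd.user, ftpd.can_access_files())  # root True  <- access as root
```

Resetting `logged_in` to False in `cmd_user` removes the flaw, which is
precisely the violation an authentication property would flag.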

                           (To Be Continued)


========================================================================

               TCAT for Java for Windows '95/NT Available

The TestWorks product line is pleased to announce the availability of
TCAT for Java for Windows (Windows '95 and NT).  This is the companion
product to TCAT for Java for UNIX which has been available for several
months.

Like its UNIX sibling, TCAT for Java for Windows supports branch coverage
and call-pair/method-pair coverage for Java applications and applets,
includes advanced structure charting and method-internal digraphs, plus
interactive coverage analysis.

For complete information about TCAT for Java for Windows please call SR
at +1 (415) 550-3020 or send Email to sales@soft.com

========================================================================

                Third International Baltic Workshop on
                   Data Bases and Information Systems

                           April 15-17, 1998
                              Riga, Latvia

STRUCTURE AND TOPICS OF INTEREST

The workshop language is English (talks as well as publications).  The
workshop is planned to include four main tracks:  a traditional research
track on data base issues, regional IT infrastructure (government
regulations, conventions, education, networks, registers, etc.),
advanced information systems and software technology, and a track to
revive the once-famous Riga conferences on computer program synthesis,
testing, verification and debugging.  To activate and motivate PhD
students, the workshop program will be preceded by a one-day Doctoral
Consortium chaired by invited professors.

Developers are invited to submit extended abstracts describing their
latest product innovations.  Topics of interest include but are not
limited to:

      - Database development tools
      - IS and AI technologies
      - National information infrastructure development
      - Regional and national infrastructure IS and database applications
      - Integrated and interoperable database technologies
      - Telecommunication databases
      - Multimedia database applications
      - Scientific applications

DOCTORAL CONSORTIUM

PhD students are invited to submit abstracts describing their current
work.  Topics will be in line with the main workshop topics, and the
deadline for abstracts is December 1, 1997.

SUBMISSION AND REVIEWS OF THE PAPERS

Authors are invited to submit original research contributions in the
form of extended abstracts of at least 6 pages or full papers that
should not be longer than 12 pages. WE ENCOURAGE ELECTRONIC SUBMISSIONS
IN THE FORM OF WORD, POSTSCRIPT, LATEX, ETC.

If hardcopies are submitted, four copies will be required.  Each
submitted paper will be reviewed by at least three program committee
members.

For further information, please send  email to: Juris.Borzovs@dati.lv

For electronic submission, please send email to: jbarzdin@cclu.lv

For hardcopy submission, please send 4 copies to:

      Prof. Janis Barzdins
      PC Chair Baltic DB&IS'98
      Institute of Mathematics and Computer Science
      University of Latvia
      Raina bulvaris 29
      LV-1459 Riga
      Latvia

ONLINE INFORMATION

This call for papers is available on the web at the URL:

        http://www.dati.lv/riti

Up-to-date information  will appear here as it becomes available.

IMPORTANT DATES:

      Deadline for papers :             December 1, 1997
      Notification of acceptance :      January 15, 1998
      Camera-ready papers:              February 16, 1998
      Workshop dates:                   April 15-17, 1998
      Doctoral Consortium:              April 14, 1998

========================================================================

                       AQuIS'98 DEADLINES UPDATED

SPECIAL NOTE: The deadline for submitting papers to AQuIS '98 has been
extended to the end of the year.

IEI-CNR and QUALITAL present AQuIS '98, The Fourth International
Conference on "Achieving Quality In Software: Software Quality in the
Communication Society", Venice, Italy, 30 March - 2 April 1998.

For updated information, we invite you to check at the conference
website:  http://www.iei.pi.cnr.it/AQUIS98

                            IMPORTANT DATES
             Papers due (four copies)     December 30, 1997
             Notification to authors      February 3, 1998
             Camera ready copy            February 18, 1998


Contact:    Fabrizio Fabbrini
            IEI-CNR
            Via S. Maria, 46
            56127 Pisa (Italy)
            Phone: +39 - 50 - 593 505
            Fax:   +39 - 50 - 554 342
            Email: f.fabbrini@iei.pi.cnr.it

========================================================================

            7th Annual Ada-Belgium Seminar (Ada-Belgium'97)

                  Developing Distributed Applications

                       Friday, November 28, 1997
                       Trasys, Brussels, Belgium

    http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/events/local.html

We are pleased to announce that Ada-Belgium will hold its 7th Annual
Seminar at the premises of Trasys in Brussels, on Friday, November 28,
1997.

Highlights:

  * Theme of Ada-Belgium'97 is "Developing Distributed Applications".
    The seminar features tutorial, technical, and project presentations,
    plus short product presentations.

  * Several approaches to develop distributed applications will be
    presented (CORBA, DCE, Ada Distributed Systems Annex, etc.), as
    well as practical experiences and available products, with special
    emphasis on the role of Ada 95.

  * Free Ada-related material will be distributed, e.g. free copies of
    the new Walnut Creek double CD-ROM: the November 1997 release is a
    special edition for Tri-Ada'97 and Ada-Belgium'97.


Contact:

Dirk Craeynest
Ada-Belgium Board
ada@belgium.eu.net

========================================================================

                    Global Software Competitiveness
                                  by
                        Don O'Neill (Consultant)

Please visit "The Competitor" to view the November issue (Vol. 1, No. 2)
which has been posted.  This is a bi-monthly newsletter whose purpose is
to focus on global software competitiveness issues.

The index issue of "The Competitor" is found at
http://members.aol.com/ONeillDon/competitor_index.html

For a description of the Global Software Competitiveness Assessment
Program, please visit
http://members.aol.com/ONeillDon/global_competitiveness.html.  Here the
fifty leading indicators of Global Software Competitiveness are assessed
within the context of a competitiveness maturity model.

========================================================================

              TTN-Online -- Mailing List Policy Statement

Some subscribers have asked us to prepare a short statement outlining
our policy on use of E-mail addresses of TTN-Online subscribers.  This
issue, and several other related issues about TTN-Online, are available
in our "Mailing List Policy" statement.  For a copy, send E-mail to
ttn@soft.com and include the word "policy" in the body of the E-mail.

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines (one
  page).
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, send E-mail to "ttn@soft.com".

TO SUBSCRIBE: Include in the body of your letter the phrase "subscribe
 ".

TO UNSUBSCRIBE: Include in the body of your letter the phrase
"unsubscribe  ".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                   San Francisco, CA  94107 USA

              Phone:          +1 (415) 550-3020
              Toll Free:      +1 (800) 942-SOFT (USA Only)
              FAX:            +1 (415) 550-3030
              E-mail:         ttn@soft.com
              WWW URL:        http://www.soft.com

                               ## End ##