                     sss ssss        rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======              June 1999              =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1999 by Software Research,
Inc.


========================================================================

INSIDE THIS ISSUE:

   o  Quality Week Europe (QWE'99): Call for Papers

   o  Plug In, Turn On, and Screw Up, by Arthur Hoppe (San Francisco
      Chronicle, June 1999; Reprinted by Permission of the Author)

   o  Test Automation Snake Oil, by James Bach (Part 1 of 2)

   o  What Are You Prepared to Risk For Quality Software?

   o  Quality Week '99 -- Another Big Success!!

   o  Automated Software Testing: Introduction, Management, and
      Performance, a new book by Elfriede Dustin, Jeff Rashka and John
      Paul.

   o  How To Identify A Manager From A Distance

   o  Call for Contributions: Ada-Belgium'99

   o  TTN SUBMITTAL, SUBSCRIPTION INFORMATION

========================================================================

                         QWE'99 Call For Papers

                 Brussels, Belgium -- 1-5 November 1999

                         Theme: LESSONS LEARNED

QWE'99 is the third in the continuing series of International Software
Quality Week Europe Conferences that focus on advances in software test
technology, quality control, risk management, software safety, and test
automation.

The QWE'99 Conference Theme, "Lessons Learned", reflects the tremendous
accomplishments of the past few years.  QWE'99 aims to see what can be
learned -- and applied in the future -- from such recent software
quality efforts as Y2K, Euro Conversion, Client/Server testing, the push
for E-Commerce, and the widespread use and application of mature
software quality processes.

We are soliciting Full Day and 1/2 Day Tutorials, 45- and 90-minute
presentations and panel discussions on any area of QA, Testing and
Automation, and related software quality issues.  Real-life experiences
or "How To" stories are particularly encouraged.  Special consideration
will be given to new and emerging software quality technologies.

Mark your calendars and make your preparations now!

    Abstracts and Proposals Due:       18 June 1999
    Notification of Participation:     28 July 1999
    Camera Ready Materials Due:        17 September 1999

Fill out a Speaker Data Sheet at:

     .

Exhibition space and sponsorship opportunities for QWE'99 are also
available.

Questions?  Check out the QWE'99 WebSite, call [+1] (415) 861-2800, or
send E-mail to .

========================================================================

                     Plug In, Turn On, And Screw Up

                            by Arthur Hoppe
            The San Francisco Chronicle, Friday, June 4, 1999
                 Reprinted by Permission of the Author.

NEWSWEEK Magazine says we're entering a new "digital galaxy," and it
enumerates 23 "smart" electronic devices now being designed that will
make our lives easier -- if they work.

The thought of dealing with 23 more electronic devices every day is
enough to give you the shivers.  Think what they might do to any typical
American couple, like Fred and Felicia Frisbee.

The smart alarm clock goes off at 7:30 rather than 6:15.  "I was
troubled by your insomnia," the clock explains.  "So you'll be late to
your psychiatrist.  You want to fall asleep at the wheel?"

Fred rushes to the bathroom and steps on his smart scale.  "Who counts?"
it says.  "But don't worry.  I ordered six pints of non-fat yogurt and a
dozen bags of raw carrots.  And I told the smart refrigerator to melt
down all that Haagen-Dazs mocha fudge."

The smart shower delivers only icy water.  "Gotta get you going, Fred,"
it says cheerfully.  And his smart toothbrush finds an incipient cavity
in his upper left bicuspid.  "You have a dentist appointment at 2 p.m.,"
it tells him.

That gets the smart toothbrush in a fight with the smart scale, which
had scheduled an appointment with the diet doctor for the same hour.

"Brush off, mushmouth" snaps the scale.

"Go weigh, flatfoot!" cries the toothbrush until Fred locks it in the
smart medicine cabinet, where he searches for an aspirin.

"Aspirin, shmaspirin," says the cabinet.  "What you need is a month on
the wagon."

As he shaves, his smart mirror asks him if he's tried Rogaine.  Once in
the kitchen the smart dishwasher announces its rotator cup has failed,
but it's e-mailed the repairman who will be coming between noon and 4 on
Tuesday.  "But not to worry," it says, "I'll be here."

The nachos he expected the smart microwave to have ready for his
breakfast are stone cold.  "I don't do nachos before 6 p.m.," it says
with a sniff.  "But your goldfish are ready."

Sure enough, the smart thermostat on the tank in the living room had
boiled them to a turn.  That's when the smart car phone calls.  It's his
dumb son, Fred Jr.  "This smart-A car's locked me in because it says I'm
speeding," complains Junior.

"Good for it," says Fred Sr.

"But I'm still in the garage," says Junior.

Of course, we can be sure that some of these miracle devices will work
just as designed by our technological geniuses.  I'm sure the
microcomputer sewn into Fred's tweed jacket will.

"I sent your tweed jacket to the cleaners yesterday," Felicia will tell
him.

"That's nice," says Fred.  "Thank you, dear."

"Naturally, I downloaded it first," she says.  "And who's this Miss
Beiburn you had three appointments with last week?"

"Oh, we were synchronizing the company's 6 percent debenture offerings,"
says Fred casually.

"At 2 a.m.?" says Felicia.

And that's when the smart toaster demanded an uncontested divorce.  Fred
didn't fight it.  In fact, he also divorced the 22 other smart
appliances.

He says life may not be any easier, but he prefers machines that are
dumber than he is.

========================================================================

                Test Automation Snake Oil (Part 1 of 2)

                  by James Bach 

Case #1: A product is passed from one maintenance developer to the next.
Each new developer discovers that the product's design documentation is
out of date and that the build process is broken. After a month of
analysis, each pronounces it to be poorly engineered and insists on
rewriting large portions of the code. After several more months, each
quits or is reassigned and the cycle repeats.

Case #2: A product is rushed through development without sufficient
understanding of the problems that it's supposed to solve. Many months
after it is delivered, a review discovers that it costs more to operate
and maintain the system than it would have cost to perform the process
it automates by hand.

Case #3: $100,000 is spent on a set of modern integrated development
tools.  It is soon determined that the tools are not powerful, portable,
or reliable enough to serve a large scale development effort. After
nearly two years of effort to make them work, they are abandoned.

Case #4: Software is written to automate a set of business tasks. But
the tasks change so much that the project gets far behind schedule and
the output of the system is unreliable. Periodically, the development
staff is pulled off the project in order to help perform the tasks by
hand, which makes them fall even further behind on the software.

Case #5: A program consisting of many hundreds of nearly independent
functions is put into service with only rudimentary testing. Just prior
to delivery, a large proportion of the functions are deactivated as part
of debugging. Almost a year passes before anyone notices that those
functions are missing.

These are vignettes from my own experience, but I bet they sound
familiar.  They say that most software projects fail, and that should
not surprise us -- from the outside, software seems so simple.  But the
devil is in the details, isn't it?  And seasoned software engineers
approach each new project with a wary eye and a skeptical mind.

Test automation is hard, too. Look again at the five examples, above.
They aren't from product development projects. Rather, each of them was
an effort to automate testing. In the nine years I spent managing test
teams and working with test automation (at some of the hippest and
richest companies in the software business, mind you), the most
important insight I gained was that test software projects are as
susceptible to failure as any other software project. In fact, in my
experience, they fail more often, mainly because most organizations
don't apply the same care and professionalism to their testware as they
do to their shipping products.

Strange, then, that almost all testing pundits, practicing testers, test
managers, and of course, companies that sell test tools recommend test
automation with such overwhelming enthusiasm. Well, perhaps "strange" is
not the right word. After all, CASE tools were a big fad for a while,
and test tools are just another species of CASE. From object-orientation
to "programmerless" programming, starry-eyed advocacy is nothing new to
our industry. So maybe the poor quality of public information and
analysis about test automation is not so much strange as it is simply a
sign of the immaturity of the field. As a community, perhaps we're still
in the phase of admiring the cool idea of test automation, and not yet
to the point of recognizing its pitfalls and gotchas.

Now, let me hasten to agree that test automation is a very cool idea. I
enjoy doing automation more than any other testing task. Most full-time
testers and probably all developers dream of pressing a big green button
and letting a lab full of loyal robots do the hard work of testing,
freeing themselves for more enlightened pursuits, such as playing games
over the network. However, if we are to achieve this Shangri-La, we must
proceed with caution.

This article is a critical analysis of the "script and playback" style
of automation for regression testing of GUI applications.


             Debunking the Classic Argument for Automation

"Automated tests execute a sequence of actions without human
intervention.  This approach helps eliminate human error, and provides
faster results.  Since most products require tests to be run many times,
automated testing generally leads to significant labor cost savings over
time. Typically a company will pass the break-even point for labor costs
after just two or three runs of an automated test."

This quote is from a white paper on test automation published by a
leading vendor of test tools. Similar statements can be found in
advertisements and documentation for most commercial regression test
tools. Sometimes they are accompanied by impressive graphs, too. The
idea boils down to just this:  computers are faster, cheaper, and more
reliable than humans; therefore, automate.
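
The arithmetic behind that claim is easy to reproduce.  Here is a
minimal sketch, in Python, of the break-even model the quote implies.
The cost figures are hypothetical, and note what the model leaves out:
script maintenance is booked at zero, and an automated "run" is assumed
to detect everything a manual run would.

    # Break-even model implicit in the vendor's claim (hypothetical
    # figures).  Automation wins once cumulative manual labor cost
    # exceeds the one-time scripting cost plus per-run machine cost.

    def break_even_runs(manual_cost_per_run, automation_dev_cost,
                        automated_cost_per_run=0.0):
        """Smallest number of runs at which automation is cheaper.

        Assumes a manual run costs more than an automated run."""
        runs = 0
        manual_total = 0.0
        automated_total = automation_dev_cost
        while manual_total <= automated_total:
            runs += 1
            manual_total += manual_cost_per_run
            automated_total += automated_cost_per_run
        return runs

    # With these made-up numbers, "two or three runs" holds:
    print(break_even_runs(manual_cost_per_run=400.0,
                          automation_dev_cost=1000.0))   # -> 3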

This line of reasoning rests on many reckless assumptions. Let's examine
eight of them:

Reckless Assumption #1: Testing is a "sequence of actions."

A more useful way to think about testing is as a sequence of
interactions interspersed with evaluations. Some of those interactions
are predictable, and some of them can be specified in purely objective
terms. However, many others are complex, ambiguous, and volatile.
Although it is often useful to conceptualize a general sequence of
actions that comprise a given test, if we try to reduce testing to a
rote series of actions, the result will be a narrow and shallow set of
tests.

Manual testing, on the other hand, is a process that adapts easily to
change and can cope with complexity. Humans are able to detect hundreds
of problem patterns at a glance, and instantly distinguish them from
harmless anomalies. Humans may not even be aware of all the evaluation
that they are doing, but in a mere "sequence of actions" every
evaluation must be explicitly planned. Testing may *seem* like just a
set of actions, but good testing is an interactive cognitive process.
That's why automation is best applied only to a narrow spectrum of
testing, not to the majority of the test process.

If you set out to automate all the necessary test execution, you'll
probably spend a lot of money and time creating relatively weak tests
that ignore many interesting bugs, and find many "problems" that turn
out to be merely unanticipated correct behavior.
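
To make the contrast concrete, here is roughly what a "sequence of
actions" regression script boils down to, sketched in Python against a
hypothetical GUI-driver API (the article names no particular tool, so
the gui object and its methods are invented for illustration):

    # A rote scripted check: every evaluation must be spelled out in
    # advance.  'gui' stands for a hypothetical script-and-playback
    # driver, not a real library.

    def test_save_as(gui):
        gui.click("File")
        gui.click("Save As...")
        gui.type_text("filename", "report.doc")
        gui.click("OK")
        # The one evaluation the script was told to make:
        assert gui.window_exists("report.doc - Editor")
        # Everything a human would notice in the same glance goes
        # unchecked: a garbled font, a truncated title bar, a two-second
        # hang, the file saved to the wrong directory, an error message
        # that flashes by...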

Reckless Assumption #2: Testing means repeating the same actions over
and over.

Once a specific test case is executed a single time, and no bug is
found, there is little chance that the test case will ever find a bug,
unless a new bug is introduced into the system. If there is variation in
the test cases, though, as there usually is when tests are executed by
hand, there is a greater likelihood of revealing problems both new and
old. Variability is one of the great advantages of hand testing over
script and playback testing. When I was at Borland, the spreadsheet
group used to track whether bugs were found through automation or manual
testing -- consistently, over 80% of bugs were found manually, despite
several years of investment in automation. Their theory was that hand
tests were more variable and more directed at new features and specific
areas of change where bugs were more likely to be found.

Highly repeatable testing can actually minimize the chance of
discovering all the important problems, for the same reason stepping in
someone else's footprints minimizes the chance of being blown up by
a land mine.
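
Automation can recover a little of that variability by randomizing
inputs instead of replaying one fixed case.  A minimal sketch in
Python; parse_quantity is a stand-in for whatever is actually under
test:

    import random

    def parse_quantity(text):
        """Stand-in for the code under test."""
        return int(text.strip())

    # Replaying one fixed case retreads the same footprints every run;
    # randomized inputs let each run probe slightly different ground.
    def test_parse_quantity_randomized(runs=100, seed=None):
        rng = random.Random(seed)
        for _ in range(runs):
            n = rng.randint(-10**6, 10**6)
            text = " " * rng.randint(0, 3) + str(n) + \
                   " " * rng.randint(0, 3)
            assert parse_quantity(text) == n, "failed on %r" % text

    test_parse_quantity_randomized(seed=1999)  # fix seed to reproduce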

Reckless Assumption #3: We can automate testing actions.

Some tasks that are easy for people are hard for computers. Probably the
hardest part of automation is interpreting test results. For GUI
software, it is very hard to *automatically* notice all categories of
significant problems while ignoring the insignificant problems.
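
To see why, consider screen comparison, the usual playback-tool
approach to result checking.  A naive pure-Python sketch, standing in
for whatever a real tool does internally:

    # Mask known-volatile regions (clock, cursor), then demand the rest
    # match.  Everything hinges on the masks and the threshold: too
    # strict and the suite reports noise, too loose and it misses bugs.

    def screens_differ(expected, actual, masked, threshold=0):
        """expected/actual: dicts of (x, y) -> pixel value;
        masked: set of (x, y) positions to ignore."""
        diffs = sum(1 for xy in expected
                    if xy not in masked
                    and expected[xy] != actual.get(xy))
        return diffs > threshold

A human tester needs no such tuning; the script's verdict is only ever
as good as the masks and threshold chosen when it was written.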

The problem of automatability is compounded by the high degree of
uncertainty and change in a typical innovative software project. In
market-driven software projects it's common to use an incremental
development approach, which pretty much guarantees that the product will
change, in fundamental ways, until quite late in the project. This fact,
coupled with the typical absence of complete and accurate product
specifications, makes automation development something like driving
through a trackless forest in the family sedan: you can do it, but
you'll have to go slow, you'll do a lot of backtracking, and you might
get stuck.

Even if we have a particular sequence of operations that can in
principle be automated, we can only do so if we have an appropriate tool
for the job.  Information about tools is hard to come by, though, and
the most critical aspects of a regression test tool are impossible to
evaluate unless we create or review an industrial-size test suite using
the tool. Here are some of the factors to consider when selecting a test
tool (a sketch of one way to weigh them follows the list). Notice how
many of them could never be evaluated just by perusing the user's manual
or watching a trade show demo:

*  Capability: Does the tool have all the critical features we need,
   especially in the area of test result validation and test suite
   management?

*  Reliability: Does the tool work for long periods without failure, or
   is it full of bugs? Many test tools are developed by small companies
   that do a poor job of testing them.

*  Capacity: Beyond the toy examples and demos, does the tool work
   without failure in an industrial environment? Can it handle large
   scale test suites that run for hours or days and involve thousands of
   scripts?

*  Learnability: Can the tool be mastered in a short time? Are there
   training classes or books available to aid that process?

*  Operability: Are the features of the tool cumbersome to use, or prone
   to user error?

*  Performance: Is the tool quick enough to allow substantial savings
   in test development and execution time versus hand testing?

*  Compatibility: Does the tool work with the particular technology that
   we need to test?

*  Non-Intrusiveness: How well does the tool simulate an actual user? Is
   the behavior of the software under test the same with automation as
   without?
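
One way to weigh these factors, once hands-on trials have produced
honest ratings, is a simple weighted-scoring matrix.  The weights and
ratings below are illustrative only, not recommendations:

    # Weight each factor by how much it matters to your test effort,
    # rate each candidate tool 1-5 from a hands-on trial, and compare.
    WEIGHTS = {"capability": 5, "reliability": 5, "capacity": 4,
               "learnability": 2, "operability": 3, "performance": 3,
               "compatibility": 5, "non-intrusiveness": 4}

    def weighted_score(ratings):
        """ratings: dict mapping factor name -> 1..5 trial rating."""
        return sum(WEIGHTS[f] * r for f, r in ratings.items())

    candidate = {"capability": 4, "reliability": 2, "capacity": 3,
                 "learnability": 5, "operability": 4, "performance": 3,
                 "compatibility": 5, "non-intrusiveness": 3}
    print(weighted_score(candidate))   # -> 110

Note that the heavily weighted factors are exactly the ones that cannot
be scored honestly without an industrial-size trial.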

(To Be Continued)

========================================================================

          What Are You Prepared to Risk for Quality Software?

                          Reginald B. Charney
                       Charney & Day Incorporated
                           1330 Trinity Drive
                          Menlo Park, CA 94025
                          Tel: +1-650-233-9082
                     Email: charney@CharneyDay.com

Software Quality Week (QW'99), held May 24-28 in San Jose, CA, helps
you evaluate what correct code is worth to you and your company. You
know that achieving perfection in software is an impossible goal, but
what is reasonable? What are the risks? QW'99 tutorials and courses
range from writing high quality requirement specifications to testing
strategies and the automated tools needed to make this activity
successful. At your company, one of the leading lights in the software
industry, the risk management team has decided to evaluate the usual
cost tradeoffs on your current project: features, schedule, and
correctness. They know that they can have two out of three. They wisely
ask you for your input, since between your training and experience and
that of your peers, whom you have met at Software Quality Week, you are
able to answer many of the following questions:

* Should they jettison features to meet the schedule with a solid
  product, even at the risk of delivering a product without competitive
  features, or "buzz"?

* Should they risk missing a marketing window of opportunity so that
  they can deliver a reliable, feature-laden product?

* Lastly, should they deliver a feature-rich product on time, but one so
  buggy that they will need to do damage control while the problems are
  fixed?

The answers to these very real questions will determine both your
destiny and that of your company. There needs to be a way of estimating
how reliable the product will be -- even before you complete it. What
impact will adding or removing a set of features have on the design,
specification and implementation of your product as it makes its way to
completion? And realistically, how long will it take to do what needs
to be done? Again, the work and metrics presented by the speakers and
exhibitors at QW'99 can help answer these questions.

Testing and quality assurance may seem like a tail-end job, but the
lessons learned and metrics computed can be used to improve the software
development process. In point of fact, the earlier in a product's life
cycle you move the lessons learned, the more reliable, appropriate and
economical your product will be.

In meeting with the speakers and attendees at QW'99, I have been
impressed by their breadth of experience (analyzing 125 million lines
of code, for example), their range of languages and platforms, and
their diversity of fields, from biomedical devices to world-class
Internet e-commerce. Don't be fooled -- while checking software for
quality may seem dull, all these participants are actually using
bleeding-edge technology to solve critical problems. I know that I take
pride in improving my product development skills. Attending QW'99 has
given me a real chance to do just that.

For more information on Software Research and SR/Institute, visit
http://www.soft.com/ for a large list of resources.

========================================================================

               Quality Week '99 -- Another Big Success!!

We would like to extend our sincerest gratitude to everyone involved --
Attendees, Advisory Board Members, Speakers, Exhibitors and Sponsors --
for helping to make QW'99 a huge success!  We've surpassed all previous
records, and judging from our collection of "Superstar" speakers and the
highest calibre of attendees, QW truly continues as *The Premier Event*
of the software quality community.

  * Pictures:  Visit our web site to check out 350+ pictures in the
    QW'99 Photo Album.  The pictures are categorized by day, then by
    AM, PM, and Reception.

  * Statistics:  QW'99 was clearly an international forum.  Of the 865+
    attendees, 88% were from the US.  Distribution within the US covered
    every major area, including 152 from the East and South, 61 from the
    MidWest, 66 from the West (outside California) and 534 from
    California.

    The remaining 12% of QW'99 attendees (numbering 103 overall) were
    from:  Argentina, Australia, Austria, Belgium, Canada, Denmark,
    England, Finland, France, Germany, Israel, Italy, Japan, Korea,
    Luxembourg, Netherlands, New Zealand, Norway, Singapore, Sweden,
    Switzerland, Taiwan, and the United Arab Emirates.

    We are very proud to organize an event where software quality
    professionals from throughout the world are represented.

  * Best Paper Award:  The QW'99 Best Paper Award, as voted by the
    attendees, went to Mr. Nick Borelli (Microsoft Corporation) for his
    paper entitled "Seizing Control of the Development Lifecycle" (7M2).

    Nick will be presenting his paper at QWE'99 to be held in Brussels,
    Belgium on 1-5 November 1999.

  * CD-ROM & Proceedings:  If you weren't able to join us or would like
    to buy another copy, the QW'99 CD-ROM & Proceedings are available
    for sale.  They are only $50 each and contain *all* conference
    papers and presentation materials, as well as Exhibitor and Sponsor
    information.  Each CD-ROM offers a wealth of information which every
    software quality professional should have.

    (The QWE'98 CD-ROM is also available for $50 each. Prices do not
    include Shipping & Handling.)  For your convenience, order on-line
    at  or phone, FAX, or send E-mail.

  * Special Book Offer:  In our continuing effort to provide the best
    software quality information, we have a special offer of book kits
    by the top experts of the industry.  We have selected books we
    believe to be the very best of technology in every aspect of
    software quality and testing.  These books are "must-haves" for
    completing your collection. To order, please visit our WebSite at
    .

SR/Institute is dedicated to providing the best forum for you to share,
learn, discuss and discover the latest software quality methods, tools
and techniques, and we hope to see you again at the Quality Week
Conferences.

      Warm Regards,

      Rita Bral
      Conference Director

      SR/Institute
      1663 Mission Street, Suite 400
      San Francisco, CA  94103  USA

========================================================================

                      Automated Software Testing:
               Introduction, Management, and Performance

                                   by

              Elfriede Dustin, Jeff Rashka, and John Paul

With the urgent demand for rapid turn-around on new software releases
-- without compromising quality -- the testing element of software
development must keep pace, requiring a major shift from slow,
labor-intensive testing methods to a faster and more thorough automated
testing approach.

Automated Software Testing is a comprehensive, step-by-step guide to the
most effective tools, techniques, and methods for automated testing.
Using numerous case studies of successful industry implementations, this
book presents everything you need to know to successfully incorporate
automated testing into the development process.

In particular, this book focuses on the Automated Test Lifecycle
Methodology (ATLM), a structured process for designing and executing
testing that parallels the Rapid Application Development methodology
commonly in use today. Automated Software Testing is designed to lead
you through each step of this structured program, from the initial
decision to implement automated software testing through test planning,
execution and reporting. Included is test automation and test
management guidance for:

   Acquiring management support
   Test tool evaluation and selection
   The automated testing introduction process
   Test effort & test team sizing
   Test team composition, recruiting and management
   Test planning and preparation
   Test procedure development guidelines
   Automation reuse analysis & reuse library
   Best practices for test automation

Elfriede Dustin has worked as a computer systems analyst/programmer
developing software applications and utilities. Her work has included
process and data modeling using CASE tools and system design simulation
models. She has been responsible for implementing the entire development
life cycle to include requirement analysis, design, development and
automated software testing.

Jeff Rashka has managed a multitude of projects pertaining to
information systems and systems integration. His system applications
activities have encompassed the management of worldwide transportation
assets, enterprise information, finance, bar-coded inventory, and
shipboard information systems. Jeff has also implemented the Software
Engineering Institute's Capability Maturity Model on several projects.

John Paul has served as a senior programmer/analyst on financial and
budgeting systems as well as a host of other information systems. His
software development leadership responsibilities have included system
analysis and design, application prototyping, and application
development. His role in testing has included the use of many automated
test tools.

CONTENTS

PART 1 -- WHAT IS AUTOMATED TESTING

1  Birth and Evolution of Automated Testing:  Automated Testing;
   Background on Software Testing;  The Automated Test Lifecycle
   Methodology;  Software Test Career

2  Decision to Automate Test:  Overcoming False Expectations for
   Automated Testing; Benefits of Automated Testing;  Acquiring
   Management Support

3  Automated Test Tool Selection and Evaluation:  Organization's Systems
   Engineering Environment;  Tools That Support The Testing Life Cycle;
   Test Tool Research:  Evaluation Domain Definition;  Hands On Tool
   Evaluation

PART 2 -- INTRODUCTION OF AUTOMATED TESTING TO A PROJECT

4  Automated Testing Tool Introduction Process:  Test Process Analysis;
   Test Tool Consideration

5  Test Team Management:  Organizational Structure of a Test Team;  Test
   Program Tasks; Test Effort Sizing;  Test Engineer Recruiting;  Roles
   and Responsibilities

PART 3 -- TEST PLANNING and PREPARATION

6  Test Planning: Smart Application of Testing; Test Planning
   Activities;  Test Program Scope; Test Requirements Management;  Test
   Program Events, Activities and Documentation;  The Test Environment;
   The Test Plan

7  Test Analysis & Design:  Test Requirements Analysis;  Test Program
   Design;  Test Procedure Design

8  Test Development:  Test Development Architecture;  Test Development
   Guidelines; Automated Infrastructure

PART 4 -- TEST EXECUTION & REVIEW

9  Test Execution:  Executing/Evaluating Test Phases;  Defect Tracking
   and New Build Process;  Test Program Status Tracking

10 Test Program Review & Assessment:  Test Program Lessons Learned --
   Corrective Actions and Improvement Activity;  Test Program Return on
   Investment

APPENDICES:
A  How To Test Requirements
B  Tools That Support The Automated Testing Life Cycle
C  Test Engineer Development
D  Sample Test Plan
E  Best Practices

========================================================================

               How to Identify A Manager From a Distance

A man is flying in a hot air balloon and realizes he is lost.  He
reduces height and spots a man down below.  He lowers the balloon
further and shouts, "Excuse me, can you tell me where I am?"

The man below says, "Yes, you're in a hot air balloon, hovering 30 feet
above this field."

"You must be an engineer", says the balloonist.

"I am", replies the man. "How did you know?"

"Well", says the balloonist, "everything you have told me is technically
correct, but it's of no use to anyone."

The man below says, "You must be in management."

"I am", replies the balloonist, "but how did you know?"

"Well", says the man, "you don't know where you are, or where you're
going, but you expect me to be able to help. You're in the same position
you were before we met, but now it's my fault."

========================================================================

                         Call for Contributions

 A d a - B e l g i u m ' 9 9   -   9 t h   A n n u a l   S e m i n a r
                       A d a   9 5   W o r k s !

                       Friday, November 19, 1999
                            Leuven, Belgium

http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/events/local.html

Ada-Belgium is a non-profit volunteer organization whose purpose is to
promote the use in Belgium of the Ada programming language, the first
ISO standardized object-oriented language and a great language for
engineering reliable systems.

Ada-Belgium is soliciting contributions for presentation during its next
Annual Seminar, to be held in Leuven, close to Brussels, on Friday,
November 19, 1999. Attendees will include industry, government and
university representatives who are active and interested in Ada
software development and management. The language of the Seminar is
English.

This ninth Annual Ada-Belgium Seminar will feature tutorial, paper and
project presentations. Once more, we are preparing a program with
first-class invited speakers, as in previous years: John Barnes ('94),
Robert Dewar ('95), Tucker Taft ('96), Bill Beckwith ('97) and Brian
Dobbing ('98). There will also be lots of free Ada-related material,
e.g. free Ada CD-ROMs (given out at all Seminars from '94 on), copies
of the Ada 95 Reference Manual and Rationale ('95), of the Ada 95
Quality and Style Guide ('96), of the CORBA IDL to Ada 95 mapping
document ('97), etc.

The theme of the Seminar will be "Ada 95 Works!".

Presentations will show that Ada 95 is a viable alternative to be
considered for your next project, as well as share practical
experiences, describe available products, etc.

Contributions consistent with the general theme of the Seminar, outlined
below, are hereby invited:
  * Presentations supporting the theme of the Seminar.
  * Experience reports of projects using or trying out Ada technology.
  * Short technical presentations of available products.

More general contributions are also welcome, such as on:
  * Management of Ada software development projects, including the
    transition to Ada 95.
  * Experiences with Ada development, lessons learned.
  * Ada technology.
  * Ada research projects in universities and industry.

Those interested in presenting at the Seminar should submit a short
abstract (10-15 lines) in English by July 26, 1999, via e-mail to
ada-belgium-board@cs.kuleuven.ac.be for consideration by the board.

Short presentations will get a time-slot of 20-30 minutes. For longer
presentations, the organizers will work out a schedule with the authors.
Proceedings with full papers of the presentations will be available at
the Seminar.

Dates:
  * July 26, 1999: deadline for submission of abstracts.
  * end of August, 1999: notification of acceptance.
  * October 21, 1999: deadline for submission of final papers.
  * November 19, 1999: Ada-Belgium'99 Seminar.

For additional information on the Ada-Belgium'99 Seminar please contact
the Ada-Belgium Board at the e-mail address listed.

Dirk Craeynest
Ada-Belgium Board
ada-belgium-board@cs.kuleuven.ac.be

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH,
T-SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, use the convenient Subscribe/Unsubscribe facility at:

         .

Or, send E-mail to "ttn@soft.com" as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
   "subscribe {your-E-mail-address}".

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
   "unsubscribe {your-E-mail-address}".

		TESTING TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     ttn@soft.com
		Web:       <http://www.soft.com/News/TTN-Online>

                               ## End ##