sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======              July 1999              =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1999 by Software Research,
Inc.


========================================================================

INSIDE THIS ISSUE:

   o  Test Automation Snake Oil, by James Bach (Part 2 of 2)

   o  TestWorks Corner:  Hot Items for TestWorks Users

   o  Re: Ten Commandments of Software Testing

   o  QWE'99 Update: Advisory Board, Best Paper and Best Presentation
      Awards, Sponsors

   o  Apologies: Emailing Failure in June 1999 TTN Issue Distribution
      Explained

   o  Testing Futures in the Telephone Industry

   o  Notice: John Gannon, University of Maryland Computer Scientist

   o  ACM Workshop on Program Analysis for Software Tools and
      Engineering (PASTE'99)

   o  Software Quality Professional -- A New Journal Launched

   o  Designer Viruses

   o  Ada-Belgium'99: Call for Participation

   o  CRL Report: Sequential Scenarios Verification and Integration
      using Tabular Expressions, by Dr. Ridha Khedri

   o  24th Annual SEW (NASA Goddard, Greenbelt Maryland): Call for
      Papers

   o  TTN SUBMITTAL, SUBSCRIPTION INFORMATION

========================================================================

                Test Automation Snake Oil (Part 2 of 2)

                  by James Bach 

Reckless Assumption #4: An automated test is faster, because it needs no
human intervention.

All automated test suites require human intervention, if only to
diagnose the results and fix broken tests. It can also be surprisingly
hard to make a complex test suite run without a hitch. Common culprits
are changes to the software being tested, memory problems, file system
problems, network glitches, and bugs in the test tool itself.

Reckless Assumption #5: Automation reduces human error.

Yes, some errors are reduced. Namely, the ones that humans make when
they are asked to carry out a long list of mundane mental and tactile
activities.  But other errors are amplified. Any bug that goes unnoticed
when the master compare files are generated will go systematically
unnoticed every time the suite is executed. Or an oversight during
debugging could accidentally deactivate hundreds of tests. The dBase
team at Borland once discovered that about 3,000 tests in their suite
were hard-coded to report success, no matter what problems were actually
in the product. To mitigate these problems, the automation should be
tested or reviewed on a regular basis.  Corresponding lapses in a hand
testing strategy, on the other hand, are much easier to spot using basic
test management documents, reports, and practices.
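
One inexpensive way to act on that advice is a periodic sanity check
that the suite can still detect a failure at all.  The sketch below is a
hypothetical Python fragment, not something from Bach's article or any
particular tool; the run_suite interface and the "known bad build" label
are assumptions made purely for illustration.

    # Hypothetical "test the tests" check: run the suite against a build that
    # is known to contain a bug, and refuse to trust results if nothing fails.
    # The run_suite callable and its return type are assumed, not real APIs.
    def automation_sanity_check(run_suite):
        """run_suite(build) is assumed to return the set of failing test names."""
        failures = run_suite(build="known_bad_build")
        if not failures:
            raise RuntimeError(
                "Suite reported success against a known-bad build; its results "
                "cannot be trusted (compare the hard-coded dBase tests above).")
        return failures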

Reckless Assumption #6: We can quantify the costs and benefits of manual
vs. automated testing.

The truth is, hand testing and automated testing are really two
different processes, rather than two different ways to execute the same
process. Their dynamics are different, and the bugs they tend to reveal
are different.  Therefore, direct comparison of them in terms of dollar
cost or number of bugs found is meaningless. Besides, there are so many
particulars and hidden factors involved in a genuine comparison that the
best way to evaluate the issue is in the context of a series of real
software projects. That's why I recommend treating test automation as
one part of a multifaceted pursuit of an excellent test strategy, rather
than an activity that dominates the process, or stands on its own.

Reckless Assumption #7: Automation will lead to "significant labor cost
savings."

"Typically a company will pass the break-even point for labor costs
after just two or three runs of an automated test." This loosey-goosey
estimate may have come from field data or from the fertile mind of a
marketing wonk.  In any case, it's a crock.

The cost of automated testing comprises several parts:

*  The cost of developing the automation.
*  The cost of operating the automated tests.
*  The cost of maintaining the automation as the product changes.
*  The cost of any other new tasks necessitated by the automation.

This must be weighed against the cost of any remaining manual testing,
which will probably be quite a lot. In fact, I've never experienced
automation that reduced the need for manual testing to such an extent
that the manual testers ended up with less work to do.
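
To make the arithmetic behind that claim concrete, here is a minimal
sketch with entirely invented numbers; the figures and the helper
function are illustrative assumptions, not data from any real project,
but they show how a heavy development cost and a non-trivial per-run
maintenance cost push the break-even point far beyond "two or three
runs."

    # Break-even sketch with hypothetical hour figures; nothing here comes from
    # a real project.  It compares the cumulative cost of manual execution with
    # automation development plus per-run maintenance and operation.
    def runs_to_break_even(develop, maintain_per_run, operate_per_run,
                           manual_per_run):
        """Number of runs after which automation becomes cheaper, or None if
        the per-run automation cost is not actually lower than manual cost."""
        per_run_saving = manual_per_run - (maintain_per_run + operate_per_run)
        if per_run_saving <= 0:
            return None
        runs, automated, manual = 0, float(develop), 0.0
        while automated > manual:
            runs += 1
            automated += maintain_per_run + operate_per_run
            manual += manual_per_run
        return runs

    # 400 hours to develop, 6 + 2 hours per automated run, 10 hours per manual
    # pass: break-even only after 200 runs, not two or three.
    print(runs_to_break_even(develop=400, maintain_per_run=6,
                             operate_per_run=2, manual_per_run=10))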

How these costs work out depends on a lot of factors, including the
technology being tested, the test tools used, the skill of the test
developers, and the quality of the test suite. Writing a single test
script is not necessarily a lot of effort, but constructing a suitable
test harness can take weeks or months, as can the process of deciding
which tool to buy, which tests to automate, how to trace the automation
to the rest of the test process, and of course, learning how to use the
tool and then actually writing the test programs. A careful approach to
this process (i.e. one that results in a useful product, rather than
gobbledygook) often takes months of full-time effort, and longer if the
automation developer is inexperienced with either the problem of test
automation or the particulars of the tools and technology.

How about the ongoing maintenance cost? Most analyses of the cost of
test automation completely ignore the special new tasks that must be
done just because of the automation:

   *  Test cases must be documented carefully.

   *  The automation itself must be tested and documented.

   *  Each time the suite is executed someone must carefully pore over
      the results to tell the false negatives from real bugs.

   *  Radical changes in the product to be tested must be reviewed to
      evaluate their impact on the test suite, and new test code may
      have to be written to cope with them.

   *  If the test suite is shared, meetings must be held to coordinate
      the development, maintenance, and operation of the suite.

   *  The headache of porting the tests must be endured, if the product
      being tested is subsequently ported to a new platform, or even to
      a new version of the same platform. I know of many test suites
      that were blown away by hurricane Win95, and I'm sure many will
      also be wiped out by its sister storm, Windows 2000.

These new tasks make a significant dent in a tester's day. Most groups
I've worked in that tested GUI software tried at one point or another to
make all testers do part-time automation, and every group eventually
abandoned that idea in favor of a dedicated automation engineer or team.
Writing test code and performing interactive hand testing are such
different activities that a person assigned to both duties will tend to
focus on one to the exclusion of the other. Also, since automation
development is software development, it requires a certain amount of
development talent. Some testers aren't up to it. One way or another,
companies with a serious attitude about automation usually end up with
full-time staff to do it, and that must be figured into the cost of the
overall strategy.

Reckless Assumption #8: Automation will not harm the test project.

I've left for last the most thorny of all the problems that we face in
pursuing an automation strategy: it's dangerous to automate something
that we don't understand. If we don't get the test strategy clear before
introducing automation, the result of test automation will be a large
mass of test code that no one fully understands. As the original
developers of the suite drift away to other assignments, and others take
over maintenance, the suite gains a kind of citizenship in the test
team. The maintainers are afraid to throw any old tests out, even if
they look meaningless, because they might later turn out to be
important. So, the suite continues to accrete new tests, becoming an
increasingly mysterious oracle, like some old Himalayan guru or talking
oak tree from a Disney movie. No one knows what the suite actually
tests, or what it means for the product to "pass the test suite," and
the bigger it gets, the less likely it is that anyone will go to the
trouble to find out.

This situation has happened to me personally (more than once, before I
learned my lesson), and I have seen and heard of it happening to many
other test managers. Most don't even realize that it's a problem, until
one day a development manager asks what the test suite covers and what
it doesn't, and no one is able to give an answer. Or one day, when it's
needed most, the whole test system breaks down and there's no manual
process to back it up.  The irony of the situation is that an honest
attempt to do testing more professionally can end up assuring that it's
done blindly and ignorantly.

A manual testing strategy can suffer from confusion too, but when tests
are created dynamically from a relatively small set of principles or
documents, it's much easier to review and adjust the strategy. Manual
testing is slower, yes, but much more flexible, and it can cope with the
chaos of incomplete and changing products and specs.

                   A Sensible Approach to Automation

Despite the concerns raised in this article, I do believe in test
automation. I am a test automation consultant, after all. Just as there
can be quality software, there can be quality test automation. To create
good test automation, though, we have to be careful. The path is strewn
with pitfalls. Here are some key principles to keep in mind:

   *  Maintain a careful distinction between the automation and the
      process that it automates. The test process should be in a form
      that is convenient to review and that maps to the automation.

   *  Think of your automation as a baseline test suite to be used in
      conjunction with manual testing, rather than as a replacement for
      it.

   *  Carefully select your test tools. Gather experiences from other
      testers and organizations. Try evaluation versions of candidate
      tools before you buy.

   *  Put careful thought into buying or building a test management
      harness. A good test management system can really help make the
      suite more reviewable and maintainable.

   *  Assure that each execution of the test suite results in a status
      report that includes what tests passed and failed versus the
      actual bugs found. The report should also detail any work done to
      maintain or enhance the suite.  I've found these reports to be
      indispensable source material for analyzing just how cost-effective
      the automation is (a minimal sketch of such a report record follows
      this list).

   *  Assure that the product is mature enough so that maintenance costs
      from constantly changing tests don't overwhelm any benefits
      provided.
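
As promised above, here is a minimal sketch of such a per-run report
record.  It is ordinary Python with invented field names, offered only
to illustrate the kind of information worth capturing; it is not drawn
from any particular test management tool.

    # Illustrative per-run status report record; field names are assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SuiteRunReport:
        run_date: str
        tests_passed: int
        tests_failed: int
        bugs_found: List[str] = field(default_factory=list)   # real product bugs
        false_alarms: int = 0                                  # failures not caused by the product
        maintenance_notes: List[str] = field(default_factory=list)  # work done on the suite itself

        def summary(self) -> str:
            total = self.tests_passed + self.tests_failed
            return (f"{self.run_date}: {self.tests_passed}/{total} passed, "
                    f"{len(self.bugs_found)} real bugs, {self.false_alarms} "
                    f"false alarms, {len(self.maintenance_notes)} maintenance items")

    # Example with made-up numbers:
    print(SuiteRunReport("1999-07-01", 430, 20,
                         bugs_found=["crash on empty input"],
                         false_alarms=12,
                         maintenance_notes=["re-recorded login script"]).summary())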

One day, a few years ago, there was a blackout during a fierce evening
storm, right in the middle of the unattended execution of the wonderful
test suite that my team had created. When we arrived at work the next
morning, we found that our suite had automatically rebooted itself,
reset the network, picked up where it left off, and finished the
testing. It took a lot of work to make our suite that bulletproof, and
we were delighted. The thing is, we later found, during a review of test
scripts in the suite, that out of about 450 tests, only about 18 of them
were truly useful. It's a long story how that came to pass (basically
the wise oak tree scenario) but the upshot of it was that we had a test
suite that could, with high reliability, discover nothing important
about the software we were testing. I've told this story to other test
managers who shrug it off. They don't think this could happen to them.
Well, it *will* happen if the machinery of testing distracts you from
the craft of testing.

Make no mistake. Automation is a great idea. To make it a good
investment, as well, the secret is to think about testing first and
automation second.  If testing is a means to the end of understanding
the quality of the software, automation is just a means to a means. You
wouldn't know it from the advertisements, but it's only one of many
strategies that support effective software testing.

                           o       o       o

James Bach (http://www.jamesbach.com) is an independent testing and
software quality assurance consultant who cut his teeth as a programmer,
tester, and SQA manager in Silicon Valley and the world of market-driven
software development. He has worked at Apple, Borland, a couple of
startups, and a couple of consulting companies. He currently edits and
writes the Software Realities column in Computer magazine. Through his
models of Good Enough quality, testcraft, exploratory testing, and
heuristic test design, he focuses on helping individual software testers
cope with the pressures of life in the trenches and answer the questions
"What am I doing here? What should I do now?"

========================================================================

            TestWorks Corner:  Hot Items for TestWorks Users

Here are several items that will be of interest to current and
prospective TestWorks users:

o  The latest builds of our new CAPBAK/Web capture replay system are now
   available for evaluation.  You can do the download from:

       <http://www.soft.com/Products/Downloads>.

   Complete information about CAPBAK/Web is found at:

       <http://www.soft.com/Products/Web/CAPBAK/capbakweb.html>.

   There is an introductory "Frequently Asked Questions" about
   CAPBAK/Web and WebSite testing at:

       <http://www.soft.com/Products/Web/CAPBAK/faq.html>.

o  We are offering potential CAPBAK/Web customers a free "2-Deep
   TestSuite" for a specified URL to help them get started.  The 2-deep
   suite gives you a SMARTS "ats file" and a set of CAPBAK/Web "keysave
   files" that fully exercise two layers of your WebSite.  Make your
   request at:

       <http://www.soft.com/Products/Web/CAPBAK/2deep.request.html>

o  There is a new White Paper, "WebSite Testing", by Edward Miller, that
   you may want to read.  It outlines basic requirements of WebSite
   testing and gives a brief summary of how CAPBAK/Web and SMARTS can be
   used to meet most of the requirements for deep WebSite testing.  You
   can find the White Paper at:

       <http://www.soft.com/Products/Web/Technology/website.testing.html>

o  If you would like to be added to the regular TestWorks Software
   Installation List (SIL) mailing please make the request to
   .  This monthly mailing has a wealth of current
   pointers and other details about the TestWorks solution.

Complete information about TestWorks can be obtained by Emailing
.

========================================================================

           Subject: Re: Ten Commandments of Software Testing
                     Nanda Kishore Panakanti wrote:

>  Ten Commandments of Software Testing - did anyone heard or read?

Testing is harder than living. There are 16:

1. stay alert
2. run the tests as written
3. keep an activity log
4. document any problems encountered
5. re-run the tests as they should have been written
6. same as 4.
7. understand the system
8. understand the people
9. understand the tests
10. understand the requirements
11. understand the dependencies
12. hope for the best
13. prepare for the worst
14. expect the unexpected
15. don't fake any results
16. agitate for improvement

========================================================================

          QWE'99 Update: Advisory Board, Best Paper and Best
              Presentation Awards, Sponsors, Registration

Here's an update on the Premier Software Quality Conference in Europe:
QWE'99.

* Advisory Board...

We are pleased to announce the composition of the QWE'99 Advisory Board,
a distinguished group of software quality experts who will be reviewing
the paper and presentation proposals received for QWE'99.

Walter Baziuk, Nortel, Canada       Peter Liggesmeyer, Siemens, Germany
Boris Beizer, Analysis, USA         Edward Miller, SR/Institute, USA
Bill Bently, mu_Research, USA       Martin Pol, IQUIP, Netherlands
Gilles Bernot, Univ. d'Evry, France Suzanne Robertson, Atlantic, England
Antonia Bertolino, IEI/CNR, Italy   Henk Sanders, CMG, Netherlands
Juris Borzovs, Riga, Latvia         Antonia Serra, QualityLab, Italy
Rita Bral, SR/Institute, USA        Torbjorn Skramstad, NUST, Norway
Bart Broekman, IQUIP, Netherlands   Harry Sneed, SES, Germany
Gunther Chrobok-Diening, Siemens, Germany   Gerald Sonneck, ARCS, Austria
Adrian Cowderoy, MMHQ, England      Bernhard Steffen, Univ. Dortmund, Germany
Dirk Craeynest, KUL, Belgium        Ruud Teunissen, GiTek, Belgium
Sylvia Daiqui, DLR, Germany         Gordon Tredgold, Testing Consultancy, England
Taz Daughtrey, ASQC/SQP, USA        Erik VanVeenendaal, Improve Quality, Netherlands
Tom Drake, CRTI, USA                Otto Vinter, Bruel & Kjaer, Denmark

The QWE'99 Advisory Board is ~2/3 European and ~1/3 North American.
You can review the QWE'99 Advisory Board members in detail at:

        <http://www.soft.com/QualWeek/QWE99/qwe99.board.html>

* Conference Theme...

The QWE'99 conference theme, "Lessons Learned," aims to capture and
focus results from recent Y2K and EURO efforts and to set goals and
priorities for the coming decades. You can expect to hear speakers
address this subject from a range of perspectives.

* Best Paper, Best Presentation Awards...

QWE'99 will have *TWO* awards for quality among the speakers.  The Best
Paper award will be selected by voting among the QWE'99 International
Advisory Board with the top three vote-getters presented to the Best
Paper award sponsor for final selection.

The QWE'99 Best Presentation award will be selected by votes taken at
the QWE'99 event and will be announced at the end of the last Plenary
session.

* Sponsors...

Non-commercial sponsors for QWE'99 include the ACM, ACM SigSoft, ASQC
Software Division, ESI, ESSI, and TestNet.  Commercial sponsors include
SR/Testworks, CMG Information Technology, IQUIP, Mercury Interactive,
and GiTek.  Complete sponsor information is found at:

        <http://www.soft.com/QualWeek/QWE99/qwe99.sponsors.html>

* Further Information...

Send Email inquiries about QWE'99 to  or go to the
Conference WebSite at:

        <http://www.soft.com/QualWeek/QWE99>

========================================================================

   Our Apologies:  Emailing Failure in June 1999 TTN Issue Explained
                   Is This a Typical Software Error?

We apologize to our readers for the failure in the TTN-Online Emailing
process last month.  Some subscribers received multiple copies while
others may not have received any copies.  We found the typo in the
Emailing scripts and corrected it.  This should not happen again.

About The Error...

The error -- which may fall into the class of interesting bugs and hence
be of interest to TTN-Online Readers -- was due to the fact that one
subscriber's Email address contained an apostrophe "'", e.g.
"O'Something@domain.com".  (Please!  All you O'Somethings out there,
please note it is NOT you!)  The "'" is, of course, a punctuation-type
delimiter to the Bourne shell, and because of the extra "'" the sense of
what was text and what was command got mixed up, with very bad results.
The "'" should have been escaped with the usual backslash.  That's been
fixed.
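
For readers who want to see the failure mode in miniature, here is a
small, hypothetical Python illustration of the same class of bug and its
standard remedy, quoting untrusted values before they ever reach the
shell.  It assumes a POSIX shell and is not SR's actual mailer script.

    # Illustration only (not the real mailer): an unescaped apostrophe
    # unbalances the shell's quoting, while shlex.quote() makes the value safe.
    import shlex
    import subprocess

    address = "O'Something@domain.com"          # hypothetical problem address

    naive = "echo 'Mailing to %s'" % address    # apostrophe breaks the quoting
    safe = "echo %s" % shlex.quote("Mailing to " + address)

    subprocess.run(naive, shell=True)   # shell reports an unterminated quote
    subprocess.run(safe, shell=True)    # prints: Mailing to O'Something@domain.com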

Classifying this error is more difficult: clearly our mailer scripts
were built without allowing for the fact that "'" is a reserved character
to the shell, so we never checked for it.  But good testing practice is
to try even inputs that aren't legal.  Sadly, that is something we didn't
do.  Again, we apologize for any inconvenience we caused.

If you didn't receive a copy and believe you should have, you can read
the June 1999 issue at:

    <http://www.soft.com/News/TTN-Online/ttnjun99.html>.

========================================================================

               Testing Futures in the Telephone Industry

                            Larry Bernstein
                       Have Laptop - Will Travel
                     (lbernstein@worldnet.att.net)

The telephone gadget is an enormously successful invention; each new
level of system that surrounds it has spawned radical innovations and
new services. Each change adds complexity to managing the telephone
network. Object-Oriented Technology (OOT) is the best answer to
controlling the multiplying configurations that suddenly appear with new
services.  But OOT experiences the same birth pangs as every other new
idea. In addition, outsourcing is changing the telecommunications
industry.  In the last decade, service providers have moved from
developing their own services and equipment internally to buying them
from third parties.   The split of Lucent Technologies from AT&T in 1996
was the ultimate expression of this policy.  With outsourcing comes the
challenge of evaluating just how well vendor systems work before making
a purchase decision.

                              Test Models

GlobalOne met this challenge with an innovative use of Teradyne's
TestMaster tool. TestMaster is an automated test design and coding tool,
which was used to build an object-oriented model of the outsourced
system. The model was based on the specifications contained in their
Request for Proposal and from system descriptions provided by the
supplier.  GlobalOne engineers were able to use this model to first map
the functions they wanted against the system description and then
against the system itself.  This assured them that the contracted
telephony functions were present and the GlobalOne system engineers
understood how the new service would fit into their business
environment.  This approach showed how giving modeling tools to the
customer system engineers can head off unintended consequences well
before the system is even developed.

The TestMaster model of the service node gave the GlobalOne system
engineers insight into the dynamics of this complex system of systems
that made up the service offering.  With the model, the systems
engineers were able to study the unique call flow for every variation of
the service.  For example, GlobalOne customers can use one of 12
languages to interact with their network. Manual evaluation of the
interaction of language selection based on the object libraries with the
many service variations would have been a huge task without TestMaster
and supporting evaluation tools.  In traditional manual methods the
system engineers would study the system specifications and then develop
test cases to verify that the system worked as they expected.  Finding
the error paths is always a challenge. Typically many review meetings
are needed between the system engineers themselves and then with the
vendor's technical people to ferret out the potential incompatibilities.
With that manual approach, serious problems are often overlooked; at
best they show up in service testing and at worst they are found by the
paying customers.  Problems found and fixed during the service test
effort cost three times as much as those found with the model, and those
found by the user add another factor of ten in cost escalation.  The
TestMaster model-based test creation method permits the early
involvement of the test organization in the development process, and is
a powerful tool for facilitating communication between customer and
supplier engineers.
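
As a generic illustration of the combinatorial pressure described above
(plain Python, not TestMaster, with invented category counts),
enumerating even a modest set of languages, service variations, and call
outcomes yields hundreds of cases, exactly the space a model-based
generator walks through mechanically:

    # Generic enumeration sketch; the categories and counts are assumptions.
    from itertools import product

    languages = [f"lang_{i}" for i in range(1, 13)]         # 12 interaction languages
    variations = [f"variation_{i}" for i in range(1, 21)]   # 20 hypothetical service variations
    outcomes = ["completed", "abandoned", "error_path"]     # call-flow outcomes of interest

    cases = [{"language": lang, "variation": var, "outcome": out}
             for lang, var, out in product(languages, variations, outcomes)]

    print(len(cases))    # 12 * 20 * 3 = 720 combinations to cover by hand
    print(cases[0])      # one generated test-case skeleton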

For example, the service offering systems use several different database
technologies. To install the new service a database of customers was
needed which contained administrative data and their service requests.
The database initialization process was modeled with TestMaster, such
that the database records were automatically generated from the model.
Once the testers saw the strength of the model they adopted it as their
test case database repository. Consequently, the TestMaster model of the
databases was used both for populating the component databases in the
target system and as input data for the test creation process.  Expected
results from the model were kept and later
compared to the results from running the test cases against the system.
When there were differences, analysts would compare the flow in the
model with the flow in the service offering and find the problem.  This
moved debugging from detective work to analysis.  Problems were found in
the object libraries, component systems, in the model and even in the
system design. The model assures all features are present, not just the
headliners.  Once the service offering is installed in the evaluation
labs the model produces test suites for automatic test drivers.  These
tests verify that the system performs as expected.

The test scripts from the model resulted in high coverage rates for
feature testing.  Quite often testers are pressed for time and do not
have the resources for exhaustive load testing and reliability testing.
While testers focus on full-load testing, they often do not have the
time to run the system under load and then let it idle, waiting for new
work.  With the
TestMaster model, setting up such a script was easy to do and pointed to
reliability problems in the service offering system.  With the data in
hand it was clear that the offered load was triggering reliability
problems and there was no argument that it was an unrealistic test load.
A long-term benefit is that, once the system is installed, the model may
be used for regression testing.

========================================================================

                          Notice: John Gannon,
               University of Maryland Computer Scientist

John D. Gannon, 51, chairman of the computer science department at the
University of Maryland and a well-known researcher in software
engineering who helped his department achieve high national rankings,
died of a heart attack 12 June 1999 at his Silver Spring home. He had a
heart ailment.

Dr. Gannon joined the computer science staff in 1975 and specialized in
the specification, analysis and testing of software systems.  After he
took over as chairman of the department four years ago, undergraduate
enrollment nearly doubled, to about 2,000 students, associates said.

Dr. Gannon was among those administrators who helped the computer
science department achieve consistently high rankings by professional
associations and national organizations such as the National Science
Foundation.

Some placed it among the top 10 university departments in the country,
his colleagues said. He also helped plan his department's new
instructional building, which is to be built next year.

Dr. Gannon was born in Providence, R.I. He was a graduate of Brown
University, where he also received a master's degree in computer
science.  He received a doctorate in that subject from the University of
Toronto.

At the University of Maryland, his honors included the Distinguished
Scholar-Teacher Award.  He was a Member of the Board of Governors of the
Computing Research Association and Chairman of the Graduate Record
Examination Board's computer science committee. He was also a Program
Officer with the National Science Foundation and a Fellow of the
Association for Computing Machinery.

Survivors include his wife, Nancy Garrison of Silver Spring, and a
brother.

           This notice forwarded to TTN by Jeffery E. Payne.

========================================================================

                    ACM SIGPLAN-SIGSOFT Workshop on
          Program Analysis for Software Tools and Engineering
                               (PASTE'99)

              Monday, September 6, 1999, Toulouse, France
                      Co-located with ESEC/FSE'99
     Featured Speaker: John Field, IBM T.J. Watson Research Center

            <http://www-cse.ucsd.edu/users/wgg/paste99.html>

Workshop Co-Chairs:
  William Griswold (University of California, San Diego)
  Susan Horwitz    (University of Wisconsin)

Program Committee:
  Lori Clarke            (University of Massachusetts at Amherst)
  John Field             (IBM T.J. Watson Research Center)
  Mary Jean Harrold      (The Ohio State University)
  Yanhong (Annie) Liu    (University of Indiana)
  Gail Murphy            (University of British Columbia)
  Gregor Snelting        (University of Passau)
  Daniel Weise           (Microsoft Research)

Workshop Program:

8:50 - 10:30 Session I

      Efficient Coverage Testing Using Global Dominator Graphs, Hira
      Agrawal (Telcordia Technologies)

      Efficient and Precise Modeling of Exceptions for the Analysis of
      Java Programs, Jong-Deok Choi, David Grove, Michael Hind, and
      Vivek Sarkar (IBM T.J.  Watson Research Center)

      Safety Analysis of Hawk In Flight Monitor, Liz Whiting and Mike
      Hill (UK Defence Evaluation and Research Agency)

11:00 - 12:30 Session II

      Equivalence Analysis: A General Technique to Improve the
      Efficiency of Data-flow Analyses in the Presence of Pointers,
      Donglin Liang and Mary Jean Harrold (The Ohio State University)

      Inter-class Def-Use Analysis with Partial Class Representations,
      Amie Souter, Lori Pollock (University of Delaware), and Dixie
      Hisley (U.S.  Army Research Lab)

      Using Partial Order Techniques to Improve Performance of Data Flow
      Analysis Based Verification, Gleb Naumovich, Lori Clarke, and
      Jamieson Cobleigh (University of Massachusetts at Amherst)

2:00 - 3:00 Invited Talk

      Research Goes to Market: Challenges Building an Industrial Program
      Understanding Tool, John Field (IBM T.J. Watson Research Center)

3:00 -  4:00 Session III

      Physical Type Checking for C, Satish Chandra (Bell Laboratories,
      Lucent Technologies) and Thomas Reps (University of Wisconsin)

      New Type Signatures for Legacy Fortran Subroutines, Nicky
      Williams-Preston (Ecole Normale Superieure de Cachan)

4:30 -  5:30 Session IV (Demo Presentations)

      Query by Outline: A New Paradigm to Help Manage Programs, Francoise
      Balmas (Universite Paris 8)

      GIDTS - A Graphical Programming Environment for PROLOG, Gabriella
      Kokai, Jorg Nilson, and Christian Niss (Friedrich-Alexander
      University of Erlangen-Nuernberg)

      Benefits of a Data Flow-Aware Programming Environment, Christoph
      Steindl (Johannes Kepler University)

                       Registration Information

PASTE registration will be limited to 80, so be sure to register early!
See the web page:

        <http://www.cert.fr/anglais/dprs/cadres148.htm>

for full registration information (click on "Registration form" in the
box on the left).

========================================================================

                     Software Quality Professional
                         A New Journal Launched

The American Society for Quality (ASQ) has begun publishing a new peer-
reviewed quarterly journal entitled SOFTWARE QUALITY PROFESSIONAL.
Focusing on the practical needs of professionals including engineers and
managers, SOFTWARE QUALITY PROFESSIONAL provides readers with
significant information that will contribute to their personal
development and success in the field of software quality.

Under the direction of the founding Editor, Taz Daughtrey, a former
Chair of the ASQ Software Division, the journal publishes articles from
known experts in software quality at the intersection of quality
engineering and software engineering. The scope of the journal is
defined by the Body of Knowledge for ASQ's Certified Software Quality
Engineer (see http://www.asq.org/sd/csqe/topcert.htm), but the content
will also prove useful to a wide range of technical and managerial
personnel.

The premier issue, Volume 1, Issue 1 (December 1998), contained:

  * "The Software Quality Profile" by Watts Humphrey, Software
    Engineering Institute
  * "Validating the Benefit of New Software Technology" by Marvin V.
    Zelkowitz, University of Maryland, and Dolores R. Wallace, National
    Institute of Standards and Technology
  * "More Reliable, Faster, Cheaper Testing with Software Reliability
    Engineering" by John Musa, Software Reliability Engineering and
    Testing Courses
  * "Simple Ways to Succeed at Software Process Improvement" by Rita
    Hadden, Project Performance Corporation
  * "Software Is Different" by Boris Beizer, ANALYSIS

Volume 1, Issue 2 (March 1999) featured:
  * "Cost of Software Quality: Justifying Software Process Improvement
    to Managers" by Dan Houston, Honeywell and Arizona State University
  * "Overcoming the Myths of Software Development" by Carol Dekkers,
    Quality Plus Technologies
  * "Quality Evaluation of Software Products" by J=F8rgen B=F8egh, DELTA
    Software Engineering
  * "Conflict Analysis and Negotiation Aids for Cost-Quality
    Requirements" by Barry Boehm and Hoh In, University of Southern
    California "International Trends in Software Engineering and Quality
    System Standards:  Ontario Hydro's Perspective - Part 1" by John
    Harauz, Ontario Hydro

Volume 1, Number 3 (June 1999) has just been published with:

  * "Practical Quality Assurance for Embedded Software" by Erik van
    Veenendaal, University of Technology, Eindhoven
  * "Using the Software CMM with Good Judgment" by Mark Paulk, Software
    Engineering Institute
  * "International Trends in Software Engineering and Quality System
    Standards:  Ontario Hydro's Perspective - Part 2" by John Harauz,
    Ontario Hydro
  * "Achieving Quality in Software Requirements" by Alan Davis, Omni-
    Vista, Inc.  and University of Colorado at Colorado Springs
  * "Investing in Quality Does Indeed Pay: A Software Process
    Improvement Case Study" by Giora Ben-Yaacov, Synopsis, and Arun
    Joshi, Cadence India Engineering Center
  * "Balancing Time to Market and Quality" by Steven Rakitin, Software
    Quality Consulting

Future issues will feature articles on maturity models, document
inspection, system maintenance, implementation of metrics, testing
strategies, and other topics.

SOFTWARE QUALITY PROFESSIONAL welcomes submissions on world-class
software quality practices in the form of:

* experience-based reports on techniques, tools, or other issues;
* illustrative case studies;
* topical surveys;
* practical applications of research results; or
* short communications on relevant concerns.

SOFTWARE QUALITY PROFESSIONAL would like to feature articles on the
following topics in upcoming issues:

* System integration, including use of COTS products
* Object-oriented and component-based software development
* Network issues, including security, reliability, and performance
* Ethical considerations in professional conduct
* Software/system verification and validation

Any other topics within the Body of Knowledge for the ASQ Certified
Software Quality Engineer are always welcome. Submissions are accepted
at any time, but the deadlines for specific issues are:

* September 15 for the March issue
* December 15 for the June issue
* March 15 for the September issue
* June 15 for the December issue

Guidelines for authors are available at:

    <http://www.asq.org/sd/sqp/submiss.htm>

Inquiries to the Editor-in-Chief, Taz Daughtrey, should be directed to
SQP_Editor@asqnet.org, phone/fax 1-804-237-2723.

========================================================================

Designer Viruses: Look Out For These Viruses!

Ronald Reagan virus: Saves your data, but forgets where it is stored.

Mike Tyson virus: Quits after one byte.

Oprah Winfrey virus: Your 300MB hard drive suddenly shrinks to 100MB,
then slowly expands to 200MB.

Dr. Jack Kevorkian virus: Deletes all old files.

Titanic virus (a strain of Lewinsky virus): Your whole computer goes
down.

Disney virus: Everything in your computer goes Goofy.

Joey Buttafuoco virus: Only attacks minor files.

Arnold Schwarzenegger virus: Terminates some files, then leaves, but
will be back.

Lorena Bobbitt virus: Reformats your hard drive into a 3.5-inch floppy,
then discards it through Windows.

     Thanks (Blame?) To: Curtis Browning 

========================================================================

                       2nd Call for Contributions

 A d a - B e l g i u m ' 9 9   -   9 t h   A n n u a l   S e m i n a r

                       A d a   9 5   W o r k s !

                       Friday, November 19, 1999
                            Leuven, Belgium

    http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/events/local.html

                  DEADLINE for submission of abstracts
                         Monday, July 26, 1999

------------------------------------------------------------------------

Ada-Belgium is a non-profit volunteer organization whose purpose is to
promote the use in Belgium of the Ada programming language, the first
ISO standardized object-oriented language and a great language for
engineering reliable systems.

Ada-Belgium is soliciting contributions for presentation during its next
Annual Seminar, to be held in Leuven on Friday, November 19, 1999.
Attendees will include industry, government and university
representatives who are active and interested in Ada software
development and management. The language of the Seminar is English.

Overview

This ninth Annual Ada-Belgium Seminar will feature tutorial, paper and
project presentations. Once more, we are preparing a program with
first-class invited speakers, as in previous years (John Barnes '94,
Robert Dewar '95, Tucker Taft '96, Bill Beckwith '97 and Brian Dobbing
'98), and lots of free Ada-related material, e.g. free Ada CD-ROMs in
all Seminars from '94 on, copies of the Ada 95 Reference Manual and
Rationale ('95), of the Ada 95 Quality and Style Guide ('96), of the
CORBA IDL to Ada 95 mapping document ('97), etc.

The theme of the Seminar will be "Ada 95 Works!"

Presentations will show that Ada 95 is a viable alternative to be
considered for your next project, as well as share practical
experiences, describe available products, etc.

Invited Speaker

We are happy to announce that John Barnes will give presentations on the
following topics:

(1) An Overview of SPARK

The SPARK language consists of a subset of Ada with embedded annotations
in the form of Ada comments. One of the keys to developing correct
software is using appropriate abstractions. This presentation will show
how the SPARK language and its associated tools improve the completeness
and correctness of abstractions and thus lead to Ada programs which are
more likely to be correct.

(2) Advanced Ada

Ada 95 contains many advanced features which give the programmer greater
control over various aspects of programs.  Examples are the ability to
define storage pools, to manipulate exception occurrences, to handle
streams, and to control visibility. This presentation will discuss a
number of such topics illustrated by examples.

Call for Contributions

Contributions consistent with the general theme of the Seminar, outlined
below, are hereby invited:
  * Presentations supporting the theme of the Seminar.
  * Experience reports of projects using or trying out Ada technology.
  * Short technical presentations of available products.
  * Ada 95 project presentations (even short ones).

More general contributions are also welcome, such as on:
  * Management of Ada software development projects, including the
    transition to Ada 95.
  * Experiences with Ada development, lessons learned.
  * Ada technology.
  * Ada research projects in universities and industry.

Those interested in presenting at the Seminar should submit a short
abstract (10-15 lines) in English by July 26, 1999, via e-mail to ada-
belgium-board@cs.kuleuven.ac.be for consideration by the board.

Short presentations will get a time-slot of 20-30 minutes. For longer
presentations, the organizers will work out a schedule with the authors.
Proceedings with full papers of the presentations will be available at
the Seminar.

Dates:

  * July 26, 1999: deadline for submission of abstracts.
  * end of August, 1999: notification of acceptance.
  * October 21, 1999: deadline for submission of final papers.
  * November 19, 1999: Ada-Belgium'99 Seminar.

For additional information on the Ada-Belgium'99 Seminar please contact
the Ada-Belgium Board at the e-mail address listed.

Dirk Craeynest
Ada-Belgium Board
ada-belgium-board@cs.kuleuven.ac.be

========================================================================

             CRL Report: Sequential Scenarios Verification
               and Integration using Tabular Expressions

                            Dr. Ridha Khedri

ABSTRACT:  Determining requirements is acknowledged to be one of the
most crucial activities in system development.  Documenting the
requirements of a software system requires translating observations
about the real world to precise mathematical specifications. Related to
this topic, we present a method to accomplish the following tasks:  (i)
Expressing system behavior directly from the user's point of view; (ii)
Detecting the incompleteness of behavioral descriptions; (iii)
Uncovering inconsistency in scenarios provided by users; (iv) Integrating
many partial views (scenarios) to obtain a more complete view of the
user-system interactions.

Many methods have been proposed for formulating and analyzing
requirements. The method presented in this report is based on a
relational formalism, uses scenarios to help express system behavior
directly from the user's point of view, and uses tabular expressions to
represent relations of the formal scenarios and to increase our means to
communicate well with users.

You can download this report and other recent reports from:

        <http://www.crl.mcmaster.ca/SERG/serg.publications.html>

========================================================================

               24th Annual Software Engineering Workshop
                   <http://sel.gsfc.nasa.gov/sew.htm>
                          December 1-2, 1999.
                     Goddard Space Flight Center,
                        Building 8 Auditorium,
                          Greenbelt, Maryland

                    Announcement and Call for Papers

The Last Software Engineering Conference of the Millennium Looks at the
Past, Present, and Future of Software

This year's NASA Software Engineering Workshop includes three invited
sessions:
* The International Influence of the SEL (3 or 4 papers)
* Space Mission Software: The Effects of Your Technology on Your Space
   Center, What You Have Learned, and Where It Will Take You in the
   Future (3 or 4 papers)
* Software Past, Present, and Future: Views From Government, Industry,
   and Academia (a panel)

The NASA Software Engineering Workshop is sponsored by the NASA/Goddard
Space Flight Center (GSFC) Software Engineering Laboratory (SEL), in
conjunction with the University of Maryland (UM) and Computer Sciences
Corporation (CSC). Software practitioners from around the world attend
this forum to share experiences and exchange ideas on the measurement,
use, and evaluation of software methods, models, and tools.

The SEL invites you to take an active part in this meeting. We are
soliciting papers for the workshop.

Topics of Major Interest
* COTS Experiences (architectures, integration, etc.)
* Return on Investment (on time & under budget)
* Evaluated Software Experiences (disasters & successes)
* Software Integration
* Software Process Improvement
* Software Standards & Benchmarks (e.g., UML, ISO, CMM, IEEE)
* Novel Technologies for the 21st Century
* Technology Transfer

1999 Schedule
* September 13 -- Deadline for receipt of abstracts. Abstracts must be
submitted in electronic form (email or diskette), MS Word preferred
(version 97 or earlier).
* October 8    -- Notification to authors
* October 22   -- Issue final program
* November 15  -- Deadline for receipt of final slides and papers. Papers
must be submitted in electronic form (email or diskette), MS Word preferred
(version 97 or earlier). Slides should be submitted electronically if
possible.
* November 19  -- Deadline for registration -- Foreign nationals should
register by November 12.
* December 1-2 -- Workshop
* December 3   -- Placement of papers and slides on the SEL Web site.

Contact:

          Don Jamison/NASA
          donald.jamison@gsfc.nasa.gov
          telephone: (301) 286-2598
      or
          Jackie Boger/CSC
          jboger@csc.com
          telephone: (301) 805-3722

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, use the convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/TTN-Online/subscribe.html>.

Or, send E-mail to "ttn@soft.com" as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

   subscribe your-E-mail-address

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

   unsubscribe your-E-mail-address

   NOTE: Please, when subscribing or unsubscribing via email, type YOUR
   email address, NOT the phrase "your-E-mail-address".

		TESTING TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     ttn@soft.com
		Web:       <http://www.soft.com/News/TTN-Online>

                               ## End ##