                       sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======            November 1999            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software quality
and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1999 by Software Research,
Inc.


========================================================================

INSIDE THIS ISSUE:

   o  Quality Week 2000 - Call for Participation

   o  "Testing Best & Worst Practices -- A Baker's Dozen" by Boris
      Beizer

   o  TestWorks' Corner: CAPBAK/Web Update

   o  "NASA Says Human Error Caused Loss of Mars Craft" by Michael
      Miller

   o  19th International Conference on Computer Safety, Reliability and
      Security -- Call for Papers

   o  Forum on Risks to the Public in Computers and Related Systems

   o  Fourth International Baltic Workshop on DB and IS -- Call for
      Papers

   o  Ada-Belgium'99 - 9th Annual Seminar - Call for Participation

   o  Best T-Shirts of the Summer

   o  TTN SUBMITTAL, SUBSCRIPTION INFORMATION

========================================================================

        13th International Software & Web Quality Week (QW 2000)

           San Jose, California, USA -- 29 May - 2 June, 2000

                  Theme: New Century, New Beginnings.

QW 2000 is the thirteenth in the continuing series of International
Software Quality Week Conferences that focus on advances in software
test technology, quality control, risk management, software safety, test
automation and web quality. This event will address the most pressing
questions of the new century and millennium, such as:  What are the
software quality issues for 2000 and beyond? What about quality on the
Internet? What about embedded system quality?  What about E-commerce
quality? Where do the biggest quality problems arise?  What new quality
approaches and techniques will be needed most?

     Abstracts and Proposals Due:       21 January 2000
     Notification of Participation:     25 February 2000
     Camera Ready Materials Due:        31 March 2000

For more details, go to:
     <http://www.soft.com/QualWeek/QW2K/qw2k.call.html>.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
  4th International Software & Web Quality Week Europe 2000 (QWE2000)

                Brussels, Belgium -- 6-10 November 2000

                   Theme: Initiatives for the Future

For more details on this event, go to:
    <http://www.soft.com/QualWeek/QWE2K/index.html>

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Mark your calendars for these exciting events!  Exhibition space
and sponsorship opportunities for QW2000 and QWE2000 are available now.

Questions?  Check out the Quality Week WebSite, call [+1] (415) 861-2800, or
send E-mail to qw@soft.com.

========================================================================

          Testing: Best and Worst Practices -- A Baker's Dozen

                 Reprinted by permission of the author.
                   (c) Copyright 1999 by Boris Beizer
                     Email: bbeizer@sprintmail.com

Good testing practices can't be defined by simple rules of thumb or
fixed formulas.  What is best in one circumstance can be worst in
another.  Here for your consideration is a baker's dozen of best and
worst testing practices -- you'll notice that each practice appears
under both headings.

1. Unit Testing to a 100% Coverage Standard.

Testing at the lowest level (i.e., a unit is a piece of source code that
does not include any called subroutines or functions) using a coverage
tool; as a minimum, sufficient testing to assure that every source
statement has been executed at least once under test.  In most practical
cases, it is necessary to test to assure branch cover (each branch
tested both TRUE and FALSE) and predicate cover (each term of a compound
predicate tested both TRUE and FALSE).  Unit testing is done to find
unit bugs, which are the second most frequent kind of bug.
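
As a concrete (and hypothetical) illustration of the difference between
statement and predicate cover, consider this small Python sketch: two
tests execute every statement, yet one term of the compound predicate
is never exercised FALSE while the other is TRUE.

    # Hypothetical example: statement cover alone can miss predicate cases.
    def discount(is_member, total):
        # Compound predicate: two terms, one branch.
        if is_member and total > 100:
            return total * 0.9
        return total

    # These two tests achieve 100% statement (and branch) cover...
    assert discount(True, 200) == 180.0    # branch TRUE
    assert discount(False, 200) == 200     # branch FALSE

    # ...but predicate cover also demands each TERM be both TRUE and
    # FALSE.  Here 'is_member' is TRUE while 'total > 100' is FALSE:
    assert discount(True, 50) == 50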

Best Practice

Testing to a 100% statement (also branch and predicate) coverage
standard is a mandatory, minimum testing requirement because only by so
doing can we assure that all the code has been tested at least once in
its life.  We must assure that, because if we don't, we are guaranteed
not to find the bugs that may be in the untested code.  Not only is this
common sense, but it is a fundamental axiom of testing -- if you don't
test it, you won't find the bugs in it.

Today, with widely available coverage tools for most popular languages,
such testing is as easy as routinely using a spell-checker.

If you don't get the unit bugs out during unit testing you'll be finding
and fixing them in system testing, or worse, in the field, where the
cost will be orders of magnitude higher.

Worst Practice

Applying the unit test coverage standard to stable code that has not
been modified and that had previously been tested to the standard --
especially code that is heavily in maintenance.  If neither the code nor
that code's requirements have changed, then what can possibly be learned
from re-running unit tests?

Attempting to apply the unit test standard to big components that are
aggregates of many units -- e.g., 50,000+ lines of source code.  These
are not "units" -- they are higher level components and the unit test
standard does not apply.  Using the unit test standard as the sole
testing criterion -- that is, stopping unit testing once coverage has
been achieved.  Just because you've covered the code under test doesn't
mean that it does anything useful, or that it even works.

2. Integration Testing.

Testing the interfaces between otherwise correct components to assure
that they are compatible.  Integration bugs are just behind unit bugs in
frequency; and for object-oriented software, are probably more frequent.

Best Practice

When two or more components are combined to create a bigger component, a
new, previously untested, interface comes into play.  Integration bugs
concern incompatibilities across interfaces.  If the interface is not
explicitly tested, then you won't find the bugs.  Integration testing,
therefore, is not a single event, but takes place at every level of the
build process.
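
As a hedged, hypothetical sketch of what an interface bug looks like,
the Python below shows two components that each pass their own unit
tests yet disagree about the units crossing their shared interface --
the kind of incompatibility only an explicit integration test exposes.

    # Hypothetical sketch: two unit-correct components, one interface bug.
    def sensor_reading_mm():
        """Component A reports a distance, in millimeters."""
        return 1500  # i.e., 1.5 meters

    def is_too_close(distance_m):
        """Component B expects meters and flags anything under 2 m."""
        return distance_m < 2.0

    # Each component's own unit tests pass:
    assert sensor_reading_mm() == 1500
    assert is_too_close(1.9) is True

    # An explicit integration test exposes the mismatch: 1500 (mm)
    # interpreted as meters looks "far away", masking a real hazard.
    assert is_too_close(sensor_reading_mm()) is False       # wrongly benign
    assert is_too_close(sensor_reading_mm() / 1000.0) is True  # correct glue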

Worst Practice

Confusing integration testing with testing something that's already been
integrated.  If it's not concerned with testing component interfaces,
then no matter what you call it, it isn't integration testing as the
term is defined.

A project plan with a step or phase labeled "integration testing" means
that it's unlikely that any integration testing whatsoever is being
done.

3. System Testing.

Testing an entire software system, end-to-end to discover common system
bugs such as resource loss, synchronization and timing problems, and
shared file conflicts.
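
One hypothetical sketch of hunting such a bug: repeat a transaction
many times and verify that the process's open-file count returns to its
baseline.  The Python below assumes Linux (it counts entries under
/proc/self/fd); steady growth in that count would signal resource loss.

    # Hypothetical resource-loss probe (Linux-specific fd counting).
    import os

    def transaction():
        f = open("/etc/hostname")   # stand-in for acquiring a resource
        data = f.read()
        f.close()                   # a leaky version would omit this
        return data

    def open_fd_count():
        return len(os.listdir("/proc/self/fd"))

    baseline = open_fd_count()
    for _ in range(100):
        transaction()
    # The count should settle back at the baseline after many runs.
    assert open_fd_count() <= baseline + 1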

Best Practice

Once we've found and fixed the low level bugs it's time to address real
system bugs.  We do this because if we don't, our end-users will find
these bugs for us.  Most system bugs, such as resource loss, timing, and
synchronization bugs can be found by applying specific system testing
methods.  There's no need to be victimized by such bugs if you've made a
concerted effort to find them.  And there's also no excuse for them.

Worst Practice

Doing system testing without prior, meticulous, unit testing, lower
level component testing, and integration testing at every stage of the
build.  If you bypass unit testing, all you'll be doing is discovering
low-level unit and integration bugs in the context of expensive,
end-to-end system testing, leaving no time to do real system tests.

4. Testing to Requirements.

Testing from the users' perspective, typically end-to-end, to verify the
operability of every feature.

Best Practice

We build software for users.  They're not (should not be) concerned with
how the software works, how it's organized, or the cute programming
tricks we used.  They're only concerned with how it behaves and,
therefore, testing that behavior must be central to the testing effort.

This philosophy must be applied at every level because the "user" of an
internal subroutine, say, is the programmer who calls that subroutine.
Therefore, testing to requirements should be an objective at every
testing level.

Worst Practice

Assuming that by "user" we mean only the end user.  In typical software,
only 10%-15% of the code directly concerns things seen by the end user.
The remaining 85%-90% concerns infrastructure items such as resource
management, protocols, databases, etc., that the user doesn't know and
doesn't want to know about.

Testing only to end-user perceived requirements is like inspecting a
building based on the work done by the interior decorator at the expense
of the foundations, girders, and plumbing.

5A. Test Execution Automation.

Use of automated test drivers running from test scripts or test data
files.
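
In miniature, a test driver is just a loop over a test data file; the
hypothetical Python sketch below (real drivers are far richer) reads
input/expected pairs and reports mismatches mechanically -- no monkeys,
no keys.

    # Hypothetical data-driven driver: each line holds "input expected".
    def run_suite(func, script_lines):
        failures = 0
        for lineno, line in enumerate(script_lines, 1):
            arg, expected = line.split()
            actual = str(func(int(arg)))
            if actual != expected:
                failures += 1
                print("FAIL line %d: got %s, expected %s"
                      % (lineno, actual, expected))
        return failures

    script = ["2 4", "3 9", "10 100"]    # stands in for a test data file
    assert run_suite(lambda x: x * x, script) == 0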

Best Practice

Manual testing is the pits -- not only that, it doesn't work very well
because the test execution error rate is about 50% or worse.  Testing is
not about monkeys pounding keys.  Today, with automated test drivers and
capture/playback tools widely available, there's no difficulty in
achieving a high degree of test execution automation -- 95% or better
for most applications.

The biggest advantage to test execution automation can't be reaped
unless you intend to run the tests many times.  However, most
organizations underestimate the number of times tests will be run.
While they think in terms of once or twice, the actual number is
likelier to be closer to forty or fifty times over the life of the
software.

If you accept test execution automation as a capital investment in
testware, no less important than software, then you'll see a truly
remarkable long-term return on investment that beats all other software
development tools.

Worst Practice

Getting that last 5% can be a bear.  There's not much point in test
execution automation if it means that you have to build key-pounding
robots.  Some things are really difficult to automate: for example,
judging print quality, graphics rendering, color fidelity, and sound
quality.

Such inherently difficult areas aside, the biggest barrier to test
execution automation is not planning the automation at the outset.  If
that happens, you may (unfortunately) be better off with manual testing.

The next most important source of test execution automation difficulty
is not properly training your people in the use of these tools. If they
haven't been properly trained and given the time and resources to master
the tools, they'll actually be more productive doing it manually.

Finally, don't inject automation in the middle of a project and expect
to get anything but chaos, lowered productivity, and reduced quality.

5B. Test Design Automation.

Use of models and/or semi-formal requirements to automatically generate
test cases.

Best Practice

This is where the action is on the leading edge of testing.  Instead of
writing individual tests you create formal models such as finite-state
machine models, regular expressions, domain specifications, constraint
sets, or formal specifications.  The tool then automatically generates a
covering set of tests from the model.  The focus of testing then shifts
from designing individual test cases to creating models that faithfully
express the requirements.

But you don't have to be on the leading edge to exploit test design
automation.  The simplest method is to use a capture/playback system to
create a base scenario.  Once you've got that base test case debugged
you create the variants by simple editing of the test script.  It's an
effective method because while the typical test case contains 250
keystrokes, the variations from case to case involve changes in only a
dozen or so.  It isn't a new idea -- we were doing it 35 years ago with
paper tape.
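
A hedged sketch of the model-based idea, using a hypothetical Python
toy: the finite-state model below yields one generated test per modeled
transition, so transition cover falls out of the model automatically.

    # Hypothetical FSM model of a vending machine: (state, event) -> next.
    MODEL = {
        ("idle",    "insert_coin"): "paid",
        ("paid",    "push_button"): "vending",
        ("paid",    "refund"):      "idle",
        ("vending", "done"):        "idle",
    }

    def generate_tests(model):
        """Derive a covering test set: one case per modeled transition."""
        return [(s, e, t) for (s, e), t in model.items()]

    def step(state, event):
        # The implementation under test (normally independent code).
        table = {("idle", "insert_coin"): "paid",
                 ("paid", "push_button"): "vending",
                 ("paid", "refund"): "idle",
                 ("vending", "done"): "idle"}
        return table[(state, event)]

    for start, event, expected in generate_tests(MODEL):
        assert step(start, event) == expected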

Worst Practice

If you're looking for excitement and trouble then introduce test design
automation before you've got your tests under configuration management
and have implemented test execution automation.  A good test design
automation tool generates tests by the thousands and tens of thousands.

You can't run those tests by hand and there's no practical way to figure
out which tests are worth running and which are not.  And if they're not
under strict configuration management, how will you know what you ran
and why?

Education, education, education.

If you're going to use sophisticated tools then your people must be
trained in those tools and given enough time to internalize the
methodologies behind the tools.  If you don't figure that into the
equation, your attempt at test design automation will fare worse than
manual testing.  You can't expect a person who's never flown an airplane
to take off in an F-16 on the first day.

6. Stress Testing.

Subjecting a software system to an unreasonable load while denying it
the resources needed to process that load.

Best Practice

If you're trying to find true system bugs and have never subjected this
software to a real stress test, then it is high time you started.
Dollar for dollar, stress testing has probably the highest payoff of any
test technique for finding system bugs.

Proper stress testing is useful in finding synchronization and timing
bugs, interlock problems, priority problems, resource loss bugs, and
general abuse of the API.

Stress testing is usually easy to set up.  You can create stress loads
by looping transactions back on themselves so that the system stresses
itself: e.g., an incoming message is output on a looped-back line to
generate another incoming message.  Alternatively you can use another
system of comparable size to create the stress load.
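
For instance (a hypothetical Python sketch, with a queue standing in
for the looped-back line): every output message is fed straight back in
as new input, and several workers at once make the system stress
itself.  Lost or duplicated messages would point at synchronization
bugs.

    # Hypothetical self-stressing loopback: outputs re-enter as inputs.
    import queue, threading

    line = queue.Queue()         # the looped-back line
    handled = queue.Queue()      # thread-safe tally of handled messages

    def worker(n_messages):
        for _ in range(n_messages):
            msg = line.get()     # incoming message
            handled.put(msg)     # "process" it
            line.put(msg + 1)    # loop the output back as new input

    for seed in range(10):       # prime the line
        line.put(seed)

    threads = [threading.Thread(target=worker, args=(1000,))
               for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # 8 workers x 1000 loops; a shortfall here would mean lost messages.
    assert handled.qsize() == 8 * 1000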

Worst Practice

Stress tests wear out faster than almost any other test technique.  The
kinds of bugs found by stress testing are well-understood and limited.
Once they've been found and fixed, the stress test becomes marginally
effective.  Most organizations find that the stress test adds little or
no value after the third or fourth run.  The worst practice, then, is to
continue with stress testing long after it has worn out -- everybody
feels good because the test isn't finding any bugs.  In fact, if stress
testing doesn't wear out, something is seriously wrong with your QA
because nobody seems to be learning how to avoid these common system
bugs.

Another terrible stress testing practice is to attempt manual stress
testing -- only the testers get stressed.  It's another worthless,
feel-good testing method.

7. Regression Testing.

More specifically, equivalency testing: rerunning a suite of tests to
assure that the current version behaves identically to the previous
version except in such areas as are known to have been changed.
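
Mechanically, equivalency testing is an automated comparison of current
outputs against the previous version's recorded ("golden") outputs.  A
minimal, hypothetical Python sketch -- a dict stands in for the saved
result files:

    # Hypothetical equivalency check against golden outputs.
    GOLDEN = {"case1": "OK:42", "case2": "OK:7"}  # saved from prior version

    def run_case(name):
        # Stand-in for executing one test against the current build.
        return {"case1": "OK:42", "case2": "OK:7"}[name]

    def regress(golden, known_changes=()):
        """Return differences outside the areas known to have changed."""
        diffs = []
        for name, expected in sorted(golden.items()):
            actual = run_case(name)
            if actual != expected and name not in known_changes:
                diffs.append((name, expected, actual))
        return diffs

    assert regress(GOLDEN) == []   # empty: the versions are equivalent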

Best Practice

To quote Robert Burns: "The best laid schemes of Mice and Men gang oft
awry and leave us nought but grief and pain for promised joy."  It's not
possible to keep a project perfectly aligned so that all events happen
exactly as in the initial schedule.  Therefore, it's inevitable that
some things will go into the build before all interfaces have been
tested and some things will be integrated before lower level testing has
been done.  Continual  regression testing at every stage of the build is
cheap insurance against the bugs that slip through that way.

Worst Practice

Manual testing when surrounded by high-powered computers is patently
silly.  Attempting manual regression testing is sillier still.  Your
people will hate it and their test execution error rate will be so high
that the entire effort is an exercise in feel-good self-deception and
false confidence.  Besides which, almost nobody actually ever does it.

Not using a fully automated regression test (e.g., attempted manual
regression testing) is such a worthless practice that I advise against
any regression testing if the only way you can do it is manually.

8. Reliability Testing.

Testing to determine the expected failure rate of software under a
statistically specified user load (operational profile).
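
The statistical idea in miniature (a hypothetical Python sketch): draw
test transactions with the operational profile's probabilities and
estimate the failure rate from the failures observed under that load.

    # Hypothetical operational-profile run with a weighted random draw.
    import random

    PROFILE = [("lookup", 0.70), ("update", 0.25), ("report", 0.05)]

    def system_under_test(op):
        return True              # stand-in: True = success, False = failure

    random.seed(1)               # reproducible sampling
    ops = [name for name, _ in PROFILE]
    weights = [w for _, w in PROFILE]

    runs, failures = 10000, 0
    for _ in range(runs):
        op = random.choices(ops, weights)[0]  # sample per the user profile
        if not system_under_test(op):
            failures += 1

    print("estimated failure rate: %.4f" % (failures / runs))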

Best Practice

Reliability testing under a statistically valid user profile has been
established for many applications as an effective method to determine
when enough testing has been done to warrant using the product.  Among
the primary areas where it has been validated are telecommunications and
control software.  Where its applicability has been confirmed, it is an
accepted part of the testing tool kit.  Publication of Lyu's Handbook of
Software Reliability Engineering (McGraw-Hill, 1996) attests to that.

Worst Practice

If you don't have a valid user profile, you can't do this kind of
testing (you can do it, but you won't get meaningful results).  Also,
changes in your application can change user behavior and therefore the
profile -- to the point where the entire test is invalidated.

Worst practice?  Being naive and expecting this to be done by
statistically unsophisticated people who can't or won't do the math in
order to discover which caveats do and don't apply in your case.

9. Performance Testing.

Testing to determine the expected processing delay as a function of the
applied load; also to determine resource utilization under load.
Equivalently, testing to determine the maximum number of simultaneous
users and/or transactions that the system can sustain.
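
In miniature (a hypothetical Python sketch timing a stand-in
transaction): performance testing measures how delay grows as the
applied load rises, which is where saturation first shows itself.

    # Hypothetical load/latency probe: mean delay at rising concurrency.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction(_):
        start = time.perf_counter()
        time.sleep(0.01)                   # stand-in for real work
        return time.perf_counter() - start

    for load in (1, 4, 16):                # simultaneous users
        with ThreadPoolExecutor(max_workers=load) as pool:
            delays = list(pool.map(transaction, range(load * 10)))
        print("load %2d users: mean delay %.4f s"
              % (load, sum(delays) / len(delays)))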

Best Practice

If the cost of testing and analysis is far lower than the expected gain
and if you have stable software and valid user profiles, then this is an
important part of the toolkit.  Its use is widespread in
telecommunications, embedded systems, and other system software.

Given the heavy capital investment in a performance testing laboratory
staffed by experts, this kind of testing can forestall the worst kinds
of performance problems met in the field.  If throughput issues are
meaningful for your application, then you can't afford not to do this
testing.

Worst Practice

With today's cheap hardware, performance testing is rarely justified
unless the production run is measured in hundreds of thousands of units.
Alternatively, it can be justified for a big site or network where the
cost issues are measured in millions of dollars.   In most cases,
upgrading the hardware is cheaper than doing the performance test and
the associated analysis.

Performance testing is for experts.  It takes a lot of heavy mathematics
and specialized testing expertise to avoid the risk of getting
impressive but meaningless numbers.  Worst practice?  Giving the job to
amateurs.

10. Independent Test Groups.

A test group that reports along a different hierarchy than the
development group.  True independence implies the ability to block a
release if it is in the organization's best interest to do so.

Best Practice

The modern independent test group is a value-added group that brings
special expertise to the test floor.  Because of specialized expertise
and dedicated resources they are more effective at doing many kinds of
testing than the typical development group would be.  Among the kinds of
testing effectively done by a value-added independent test group are:
network, configuration compatibility, usability, performance, security,
acceptance, hardware/software integration, distributed processing,
recovery, platform compatibility, third-party software.

Worst Practice

Using an independent test group: to do the testing that developers don't
want to do, or should do but can't; to do independent unit and component
testing; as a safety net to catch any bugs that slip through the
developers' testing; in the hope of getting more objective testing; or
to test in total ignorance of the underlying code.  Staffing the group
with people who can't program, or using the group as a dumping ground
for failed programmers, underqualified workers, and undesirable
employees.

11. Usability Testing.

Testing the human/machine interface with respect to things such as
screen and menu layouts, help features, instruction manuals, icon style
and placement in order to confirm that such things are well thought out
and that the system can be learned and used with a minimum of hassle.

Best Practice

Usability issues are worked out and tested on prototypes long before
code is written.  Operational concepts are tested in a usability test
laboratory staffed by trained observers (through a one-way mirror) who
record actual hand and eye motions and other user actions in order to
judge the effectiveness of the placement of menu items, screen
appearances, etc.  The observers also measure the frequency of
references to help screens and paper manuals.  This saves money and
embarrassment.

Worst Practice

Doing it after the software has been written when it's too late to make
any substantive changes. Expecting amateurs not trained in human factors
to understand the issues and to measure the responses correctly.

Not knowing how to set up experiments involving human psychology with
the result that the subjects tell you what they think you want to hear
rather than what's real.

12. Beta Testing.

Testing done by (usually) unpaid, but representative, users -- usually
the final stage of testing prior to official release.

Best Practice

Using a small, representative sample of your installed user base (e.g.,
under 1%) can be an effective way to wring out latent configuration
sensitivity and performance bugs not previously found.  The sample must
be representative of both high-end and low-end configurations and
high-end and low-end users.

Worst Practice

Using beta testing instead of proper in-house testing.  We got away from
that a long time ago.  If beta testing finds a lot of new bugs, then the
software was released prematurely.   Beta testing "instead of" certainly
isn't free and it isn't very cost-effective.  But watch out for the
popular press and the public that still swears by the beta-testing
myths.

========================================================================

            TestWorks Corner: News Items for TestWorks Users

TestWorks products support client/server, embedded system, and website
testing and quality assurance activities.  eValid Test Services are
based on application of TestWorks products to websites.  Here are some
items that will be of interest to current and prospective TestWorks
users:

                           CAPBAK/Web Update
                           -----------------

There is a new release available of CAPBAK/Web, our fully featured,
test-enabled Web browser (based on IE 4.n or 5.n).

CAPBAK/Web functions include:  recording and playback of user sessions
in combined true-time and object mode; "test wizards" that create tests
for all links, all buttons, and FORM content; a range of validation
options; detailed 1.0 msec resolution timing; and multiple playback
capability.

New features in CAPBAK/Web [IE] Ver. 1.5 (Early Release 27 Oct 99)
include:

    * Added frames support EVERYWHERE -- especially in the wizards
    * Extended Validate Selected Text feature
    * Added Validate Selected Image feature
    * Added Validate Document Size feature
    * Added Validate Document Elements feature
    * Added Validate Document LastModified Date feature
    * Added Validate All Applets features
    * Link Wizard now includes all HREFs (URLs) specified in MAPS
    * VST (Validate Selected Text) now done ACROSS element boundaries
      with a 200 character limit
    * Re-implemented player Interface
    * A lot more error handling and syntax enforcement included
    * More flexible editing of keysave files
    * Now only ftp:, mailto:, and gopher: protocols are commented out in
      keysave files
    * Except for the 3 protocols listed above, all other protocols
      including "javascript:", etc., are recorded normally

You can download the latest CAPBAK/Web at:
    <http://www.soft.com/Products/Downloads/down.capbakweb.html>

To get a new license key send Email to licenses@soft.com or use the form
on our WebSite at:
    <http://www.soft.com/Products/Downloads/send.license.html>

If you already have a current license key it should work with the new
version.  If your current key does not work you will have to renew it;
send Email to licenses@soft.com for details.

========================================================================

            NASA Says Human Error Caused Loss Of Mars Craft

                           By Michael Miller



PASADENA, Calif. (Reuters) - Human error stemming from space engineers
using two sets of measurements -- one utilizing miles and the other
kilometers -- caused the loss of the Mars Climate Orbiter spacecraft
last week, NASA said Thursday.

The teams, located at the National Aeronautics and Space
Administration's Jet Propulsion Laboratory in Pasadena and at Lockheed
Martin Astronautics in Colorado, complicated matters further by failing
to realize the error, the agency said in a statement.

The $125 million orbiter, intended to serve as the first interplanetary
weather satellite, is believed to have broken up when it hit the Martian
atmosphere last week after an approach that was too near the surface.

"People sometimes make errors," said Edward Weiler, NASA's associate
administrator for space science. "The problem here was not the error, it
was the failure of NASA's systems engineering, and the checks and
balances in our processes to detect the error. That's why we lost the
spacecraft."

An investigation was launched immediately after the spacecraft was
lost.  A peer review board Thursday announced its preliminary findings.

The review board said that in making a key change to the spacecraft's
trajectory one team used the English, or avoirdupois, system of
measuring, which utilizes miles, yards, feet and inches as well as
pounds and ounces, while the other was using metric kilometers, meters,
kilograms and grams.

In a statement, the Jet Propulsion Laboratory said, "This information
was critical to the maneuvers required to place the spacecraft in the
proper Mars orbit."

There are 1.6 kilometers to a mile and 1.1 yards in a meter, while there
are 2.2 pounds in a kilogram.

At the time the spacecraft was lost, Mars Climate Orbiter Project
Manager Richard Cook said scientists had expected that the orbiter would
approach Mars at an altitude of between 87 and 93 miles when in fact it
came in at 37 miles above the surface of the planet.  He said the
minimum survival altitude was 53 miles.

Jet Propulsion Laboratory Director Edward Stone said, "Our inability to
recognize and correct this simple error has had major implications. We
have underway a thorough investigation to understand this issue."

In addition to the peer review board composed of Jet Propulsion
Laboratory scientists, a second review board that includes outsiders
also is looking into the cause of the loss, and an independent NASA
review board is to be formed shortly.

The primary mission of the orbiter had been to monitor the Red Planet's
atmosphere, surface and polar caps for one Martian year, or 687 days.

The craft also was intended as a vital link in the Mars Polar Lander
mission. That craft is due to land on Mars on Dec. 3 and the climate
orbiter would have acted as a relay station between the lander and
scientists on Earth.

Cook said the loss of the climate orbiter would complicate the lander
mission, but contingency plans already were in place for the lander to
transmit data directly to Earth through the Deep Space Network and via
the Mars Global Surveyor.

"Our clear short-term goal is to maximize the likelihood of a successful
landing of the Mars Polar Lander on December 3," said Weiler. "The
lessons from these reviews will be applied across the board in the
future."

========================================================================

            1st Conference Announcement and Call for Papers

             The 19th International Conference on Computer
                    Safety, Reliability and Security

                       ROTTERDAM, The Netherlands

                          October 25-27, 2000

Sponsors
        European Workshop on Industrial Computer Systems (EWICS)
        Simtech
        TU Delft
        TU Eindhoven

Co-sponsors
        ENCRESS
        NVRB (NL)
        ESRA
        ScSC (UK)
        Austrian Computer Society (OCG)

About the Conference
Safecomp 2000 will be held 25-27 October 2000 in Rotterdam, the
Netherlands. Safecomp is an annual 2.5-day event reviewing the state of
the art, experiences and new trends in the areas of computer safety,
reliability and security regarding dependable applications of computer
systems. The one-stream programme provides ample opportunity to exchange
insights and experiences on emerging methods and practical applications
across the borders of the disciplines represented by participants.

Safecomp is initiated by EWICS (European Workshop on Industrial Computer
Systems: http://www.ewics.org/). The conference was first held in 1979
(Stuttgart, D), then successively in West Lafayette (Indiana-USA),
Cambridge (UK), Como (I), Manchester (UK), Sarlat (F), Fulda (D), Vienna
(A), Gatwick (UK), Trondheim (N), Zürich (CH), Poznan (PL), Anaheim
(USA), Belgirate (I), Vienna (A), York (UK), Heidelberg (D) and Toulouse
(F).

Scope of the Conference
The conference focuses on critical computer applications. It is intended
to be a platform for technology transfer between academia, industry and
research institutions. Papers are invited on all aspects of computer
systems in which safety, reliability and security are important.
Industrial sectors include, but are not restricted to, medical devices,
avionics, space industry, railway and road transportation, process
industry, automotive industry, power plants and nuclear power plants.
Safecomp 2000 particularly welcomes contributions from the application
areas of medical systems and transport & infrastructure, as well as
methodological work on software process improvement.

Medical systems depend progressively on embedded programmable
electronic systems (PES). Experience shows that the implementation of
methods to realise safe and reliable PES in the medical world is still
lacking. Therefore, for safety-critical PE medical systems,
contributions are appreciated on design implications for assuring
proper functioning and use in clinical or non-clinical contexts, on
user needs, and on issues concerning market approval.

Transport & Infrastructure systems depend progressively more on adequate
functioning of PE subsystems. Safecomp 2000 will give special attention
to experience reports of the realisation and acceptance of safety-
critical systems in this application area. Position papers are also
welcome.

Software Process Improvement (SPI) covers areas such as the application
of metrics to quantify the effectiveness of improvement activities, the
usage of safety standards, and the integration of verification &
validation techniques - such as formal inspections - in the
software/system development process. In relation to safety-critical
systems, contributions are appreciated on SPI in the development of
safety-critical systems addressing process/product dependencies, the
development of measurement programs, the application of standards and
metrics, the implementation and results of Process Improvement
Experiments (PIEs).

Contributions from research, industrial applications and experiences, as
well as on licensing questions are invited. Topics are not limited to
the above, but also include:

  * Methods: safety assessment, risk analysis, design for safety, formal
    methods and models
  * Special Topics: security relevant to safety, human factors, hardware
    solutions, verification & validation, distributed systems; safety-
    critical y2k experiences
  * Application Issues: safety guidelines, standards and certification;
    sociological, legal and organisational aspects; management and
    development; assuring emerging technologies; world wide nets in
    dependable systems; critical computer applications

Send full papers, preferably as a portable document format (pdf) or
postscript (ps) file, by Email to the Programme Committee Chair, Floor
Koornneef, specifying as subject: Submission for Safecomp2000.
Alternatively you can send six hard copies of full papers. Papers must
clearly show the name, postal address, e-mail address, and fax number of
the contact author. Papers should not exceed 10 pages in length.
Templates can be found at http://www.springer.de/comp/lncs/. All
submissions will be reviewed by at least three members of the SAFECOMP
2000 International Programme Committee.

Dates and Deadlines

    February 15, 2000    Submission of papers
    May 3, 2000          Notification of acceptance
    June 9, 2000         Final copy of papers
    October 25-27, 2000  Conference

General Chair
Bas de Mol, TU Delft/AMC, NL

Programme Committee Chair
Floor Koornneef, TU Delft, NL
Meine van der Meulen, SIMTECH, NL

EWICS Chair
Gerd Rabe, TÜV Nord, D

International Programme Committee

     O. Anderson - DK     R. Garnier - F       G. Reijns - NL
     S. Anderson - DK     R. Genser - A        F. Saglietti - D
     A. Bertolino - I     J. Gorski - PL       E. Schoitsch - A
     H. Bezecny - D       C. Goring - UK       I. Smith - UK
     P. Bishop - UK       M. Heisel - D        T. Skramstad - N
     R. Bloomfield - UK   D. Inverso - USA     G. Sonneck - A
     S. Bologna - I       J. Järvi - FIN       J. Trienekens - NL
     F. Dafelmair - D     M. Kaâniche - F      U. Voges - D
     G. Dahll - N         K. Kanoun - F        A. Weinert - D
     P. Daniel - UK       V. Maggioli - USA    M. Wilikens - UK
     A. Eaton - UK        C. Mazet - F         R. Winther - N
     W. Ehrenberger - D   O. Nordland - N      S. Wittmann - D
     H. Frey - CH         A. Pasquini - I      J. Zalewski - USA
                          F. Redmill - UK      Z. Zurakowski - PL

Information about SAFECOMP 2000 is available on the internet at:

    http://www.wtm.tudelft.nl/vk/safecomp2000

Information about Rotterdam is available on the internet at:

    http://www.conventions.rotterdam.nl/

========================================================================

     FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
              ACM Committee on Computers and Public Policy
                        Peter Neumann, Moderator

      Editors Note:  The UseNet news group "comp.risks" is both a
      forum and a newsletter and has been a valuable resource to
      the software quality community for many years.  Superbly
      accumulated, selected, edited, and sometimes [hilariously]
      commented upon by Peter Neumann, each issue has a wealth of
      information of interest to software quality professionals.
      You can subscribe by sending email to
      "risks-request@csl.sri.com" with SUBSCRIBE in the body.  Here is
      the
      content of a sample issue.  This issue is archived at
      <http://catless.ncl.ac.uk/Risks/20.62.html>.  You can get
      full details at <http://www.CSL.sri.com/risksinfo.html>.

RISKS-LIST: Risks-Forum Digest  Tuesday 12 October 1999  Volume 20, Issue 62

Serious security flaw in Microsoft Java (Edward W. Felten)
Latest British train collision (PGN)
TCAS unit flaw (Steve Bellovin)
Glitch switches Nevada 911 calls to San Diego CHP (Carl Maniscalco)
Supercomputer lost to fire, weather predictions reduced (Andrew Klossner)
Calif government computers fail, cars impounded, ... (Declan McCullagh)
Re: Massive fiber cut (Doneel Edelson)
ICD's save ISS: *not*! (Erann Gat)
Floyd/EDS (William Addams Reitwiesner)
Re: Internet Explorer 5.0 flaws (Dan Wallach)
GPS rollover *did* cause DoD Problems (Peter B. Ladkin)
NT Stung Again by Y2K Bug (Paul Walczak)
Iraq decides to wait and see on Y2K oil disruption (Keith A Rhodes)
FBI warns some Y2K fixes may be suspect (Jonathan de Boyne Pollard)
"Self-destructing e-mail" (Brad Arkin)
Re: Linux banned (Mark Brader)
Where do you want to be *mis*directed today? (Mark Brader)
Maybe Microsoft owns stock in Canada? (Mark Brader)
Risks of screen saver messages (Nick Brown)
Abridged info on RISKS (comp.risks)

========================================================================

                            Call for Papers

                           BalticDB&IS'2000
           Fourth International Baltic Workshop on DB and IS
                   May 1-5, 2000, Vilnius, Lithuania
   (Selected papers will be published by Kluwer Academic Publishers)

                 http://www.science.mii.lt/BalticDB&IS

AIMS AND SCOPE
The aim of the Baltic Workshop is to provide a forum for the exchange of
scientific achievements between the research communities of Baltic
countries and the rest of the world in the area of databases and
information systems. The objective of the workshop is to bring together
researchers, practitioners, and PhD students and to give the opportunity
to present their work and to exchange their ideas. The workshop
programme will be preceded by a one-day Doctoral Consortium chaired by
invited professors. The Workshop will consist of regular sessions with
technical contributions reviewed and selected by an international
program committee, as well as of invited talks and tutorials given by
leading scientists. The official language of the Workshop will be
English. The Workshop continues the series of BalticDB&IS workshops held
in Trakai (1994), Tallinn (1996), and Riga (1998).

TOPICS

Submissions are invited on topics including, but not limited to, the following:

 - global information systems,
 - distributed information systems and regional (national) information
   infrastructure,
 - e-business and e-commerce,
 - information systems, mobile computing, and agents,
 - knowledge management and information systems,
 - information systems, data warehousing, OLAP servers, and knowledge
   discovery,
 - activity modelling, advanced transaction, and workflow management,
 - information system security,
 - enterprise and information system architectures,
 - systems, information systems, and software systems engineering
   (specification (especially, formal), analysis, modelling, and design
   methods/methodologies, and tools, etc.),
 - component-based information systems development,
 - domain-oriented information systems (GIS, legal IS, technical IS, etc.),
 - multimedia information systems,
 - foundations of databases,
 - database architectures (client-server architectures, parallel and
   distributed DB,  interoperable DB, mobile DB, Internet DB, etc.),
 - object-oriented, deductive and active databases,
 - query languages,
 - data models and database design,
 - database performance, query processing and optimisation, storage management,
 - data quality, security and integrity,
 - database and knowledge-base management systems and technology,
 - database development tools,
 - special databases (text DB, multimedia databases, statistical DB,
   scientific DB, engineering DB, real-time DB, temporal and spatial DB, etc.)
 - scientific and engineering data-intensive applications.

SUBMISSION

Please send all submissions to:
        Albertas Caplinskas
        Baltic DB&IS'2000
        Institute of Mathematics and Informatics
        Akademijos 4
        LT-2600 Vilnius
        Lithuania

TIMETABLE

Deadline for papers:                January 4, 2000
Notification of acceptance:         February 17, 2000
Camera-ready papers:                March 17, 2000
Workshop dates:                     May 2-5, 2000
Doctoral Consortium:                May 1, 2000


                         WORKSHOP ORGANISATION

                 A d v i s o r y   C o m m i t t e e :

Janis Bubenko, Sweden
Arne Solvberg, Norway

                    P r o g r a m m e   C h a i r :

Albertas Caplinskas, Lithuania

                P r o g r a m m e   C o m m i t t e e :

Janis Barzdins, Latvia              Algirdas Pakstas, UK
Alfs Berztiss, USA                  Bronius Paradauskas, Lithuania
Janis Bicevskis, Latvia             Jaan Penjam, Estonia
Raimondas Ciegis, Lithuania         Jaanus Poial, Estonia
Vytautas Cyras, Lithuania           Henrikas Pranevicius, Lithuania
Klaus R. Dittrich, Switzerland      Ivan Ryant, Czech
Hans-Dieter Ehrich, Germany         Keng Siau, USA
Remigijus Gustas, Sweden            Kazimierz Subieta, Poland
Janis Grundspenkis, Latvia          Eugenijus Telesius, Lithuania
Hele-Mai Haav, Estonia              Jaak Tepandi, Estonia
Jean-Luc Hainaut, Belgium           Janis Tenteris, Latvia
Juhani Iivari, Finland              Bernhard Thalheim, Germany
Leonid Kalinichenko, Russia         Kal Toth, Canada
Audris Kalnins, Latvia              Enn Tyugu, Sweden
Pericles Loucopoulos, UK            Aphrodite Tsalgatidou, Greece
Kalle Lyytinen, Finland             Benkt Wangler, Sweden
Mihail Matskin, Norway              Naoki Yonezaki, Japan
Julie A. McCann, UK                 Edmundas Zavadskas, Lithuania
Jorgen Fisher Nilsson, Denmark

========================================================================

                         Call for Participation

 A d a - B e l g i u m ' 9 9   -   9 t h   A n n u a l   S e m i n a r

                       A d a   9 5   W o r k s !

                       Friday, November 19, 1999
                            Leuven, Belgium

                     Organized with Assistance from
                       ACM SIGAda and Eurocontrol

    http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/events/local.html

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Ada-Belgium is a non-profit volunteer organization whose purpose is to
promote the use in Belgium of the Ada programming language, the first
ISO standardized object-oriented language and a great language for
engineering reliable systems.

For companies throughout the world, Ada is the programming language of
choice for all the right reasons.  These companies know that Ada is
their most effective language for building fast, reliable, and adaptable
systems, on time.

From: "Choose Ada - The Most Trusted Name in Software tm"
http://www.adaic.com/docs/flyers/choose-ada.html

We are pleased to announce that on Friday, November 19, 1999, Ada-
Belgium organizes its 9th Annual Seminar at the Faculty Club, Groot
Begijnhof, in Leuven.

                               Highlights

  * The theme of Ada-Belgium'99 is "Ada 95 Works!". We will have a mix
    of longer tutorial-like presentations and shorter experience reports
    or product presentations, given by speakers from both the Ada
    vendors and the Ada users.
  * The invited speaker is John Barnes. He will give three tutorial
    presentations: one on the SPARK language and toolset for developing
    "more correct" Ada 95 programs, and two on how to use some of the
    advanced new features in Ada 95.
  * Free Ada-related material will be distributed, among others the new
    Walnut Creek Ada and Software Engineering double CD-ROM set: the
    October 1999 release is a special edition for SIGAda'99 and
    Ada-Belgium'99.
  * All presentations are in English. Everyone interested is welcome.

More information is available below and at the Ada-Belgium'99 Home Page
via URL

    http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/events/local.html

Here you will find:
  * the full programme,
  * abstracts of the talks,
  * a short biography of the speakers,
  * the presenters' company/organization,
  * free Ada CD-ROMs,
  * documentation that will be handed out,
  * the location of the seminar,
  * the participation fee,
  * the seminar secretariat,
  * acknowledgements,
  * an Ada-Belgium'99 seminar registration form,
  * an Ada-Belgium membership application form.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

                         Preliminary Programme

                      08:30 - 09:30   Registration

09:30 - 09:35   Welcome (Ada-Belgium)

09:35 - 10:35   Tutorial
    [1h00]      "An Overview of SPARK"
                by John Barnes, JBI, U.K.

10:35 - 11:00   Technical Vendor Presentation
    [0h25]      "Rational's solutions for safety-critical applications"
                by Jean-Luc Adda, Rational Software Corporation, France

11:00 - 11:30   Coffee break

11:30 - 12:30   Tutorial
    [1h00]      "Building Frameworks in Ada 95"
                by Ehud Lamm, The Open University, Israel

12:30 - 12:50   Technical Vendor Presentation
    [0h20]      "ObjectAda Real-Time ETS"
                by Patricia Langle, Aonix France

-------------

12:50 - 13:50   Lunch
13:50 - 14:20   Coffee break

-------------

14:20 - 14:40   Technical Presentation:
    [0h20]      "Ada 95 and Real-Time"
                by Pierre Morhre, Aonix France

14:40 - 15:05   Experience Report:
    [0h25]      "Ada at ADSE - or - Ada for engineering applications"
                by Kees de Lezenne Coulander, Aircraft Development and
                Systems Engineering B.V., the Netherlands

15:05 - 15:50   Tutorial - Part 1:
    [0h45]      "Advanced Ada 95 - Storage Pools"
                by John Barnes, JBI, U.K.

15:50 - 16:20   Coffee break

16:20 - 17:05   Tutorial - Part 2:
    [0h45]      "Advanced Ada 95 - Multiple Inheritance"
                by John Barnes, JBI, U.K.

17:05 - 17:30   Technical Vendor Presentation:
    [0h25]      "GTKAda GUI and GUI builder technology"
                by Arnaud Charlet, ACT Europe, France

17:30 - 17:55   Technical Presentation:
    [0h25]      "The Use of CORBA with Ada 95"
                by Jean-Claude Mahieux, Top-Graph'X, France

17:55 - 18:00   Closing Remarks (Ada-Belgium)

18:00           End of the Seminar

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Dirk (Dirk.Craeynest@cs.kuleuven.ac.be for Ada-Belgium e-mail)

--
Dirk Craeynest | OFFIS nv/sa       | Email Dirk.Craeynest@offis.be
   Ada-Belgium | Weiveldlaan 41/32 | Phone +32(2)725.40.25
   Ada-Europe  | B-1930 Zaventem   |       +32(2)729.97.36 (work)
   Team Ada    | Belgium           | Fax   +32(2)725.40.12

+-------------/ E-mail: ada-belgium-board@cs.kuleuven.ac.be
|Ada-Belgium /     WWW: http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/
|on Internet/      FTP: ftp://ftp.cs.kuleuven.ac.be/pub/Ada-Belgium
+----------/ Mail-list: ada-belgium-info-request@cs.kuleuven.ac.be

*** Nov 19, Ada-Belgium'99, 9th Annual Seminar, "Ada 95 Works!", Leuven
*** http://www.cs.kuleuven.ac.be/~dirk/ada-belgium/events/local.html

========================================================================

                    The Best T-shirts of the Summer

Note:  This item was forwarded anonymously with the claim that it was in
the "Bob Levey's Washington" column in the Washington Post.

(1) (Around a picture of dandelions) I Fought the Lawn and the Lawn Won

(2) So Few Men, So Few Who Can Afford Me

(3) I Suffer Occasional Delusions of Adequacy

(4) God Made Us Sisters, Prozac Made Us Friends

(5) If They Don't Have Chocolate In Heaven, I Ain't Going

(6) At My Age, I've Seen It All, Done It All, Heard It All... I Just
Can't Remember It All

(7) My Mother Is A Travel Agent For Guilt Trips

(8) I Just Do What The Voices Inside My Head Tell Me To Do

(9) (Worn by a pregnant woman) A Man Did This To Me, Oprah

(10) If It's Called Tourist Season, Why Can't We Hunt Them?

(11) Senior Citizen:  Give Me My Damn Discount

(12) Princess, Having Had Sufficient Experience With Princes, Seeks Frog

(13) No, It Doesn't Hurt (On a "well-tattooed gentleman")

(14) (On the back of a passing motorcyclist) If You Can Read This, My
Wife Fell Off

(15) I Used To Be Schizophrenic, But We're OK Now

(16) (Over the outline of the state of Minnesota) My Governor Can Beat
Up Your Governor

(17) Veni, Vidi, Visa: I Came. I Saw. I Did a Little Shopping.

(18) What If The Hokey Pokey Is Really What It's All About

(19) I Didn't Climb to the Top of the Food Chain to Be a Vegetarian

(20) (On the Front) Yale Is Just One Big Party  (On the back) With a
$25,000 Cover Charge

(21) Coffee, Chocolate, Men...Some Things Are Just Better Rich

(22) Liberal Arts Major...Will Think For Money

(23) Growing Old is Inevitable; Growing Up is Optional

(24) IRS-Be Audit You Can Be

(25) Gravity...It's Not Just a Good Idea.  It's the Law.

(26) If You Want Breakfast In Bed, Sleep In the Kitchen

(27) Wanted:  Meaningful Overnight Relationship

(28) The Old Pro...Often Wrong...Never In Doubt

(29) If At First You Don't Succeed, Skydiving Isn't For You

(30) Old Age Comes at a Bad Time

(31) In America, Anyone Can Be President.  That's One of the Risks You
Take.

(32) First Things First, but Not Necessarily in That Order.


Alan W. Brown
Comptroller
The National Center for Genome Resources
(505) 995-4408  Fax: (505) 995-4461
ab@ncgr.org

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH,
T-SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, use the convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/TTN-Online/subscribe.html>.

Or, send E-mail to "ttn@soft.com" as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

   subscribe your-E-mail-address

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

   unsubscribe your-E-mail-address

   NOTE: Please, when subscribing or unsubscribing via email, type YOUR
   email address, NOT the phrase "your-E-mail-address".

		TESTING TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Email:     ttn@soft.com
		Web:       <http://www.soft.com/News/TTN-Online>

                               ## End ##